GoRL - Go Rate Limiter

A Go tool for testing HTTP request rate limits. GoRL sends HTTP requests at a configurable rate so you can exercise an API's rate-limiting behavior and measure its performance.

Features

  • Send HTTP requests at a specified rate (requests/second)
  • Execute requests with multiple concurrent workers
  • Multiple rate limiting algorithms (Token Bucket, Leaky Bucket, Fixed Window, Sliding Window Log, Sliding Window Counter)
  • Real-time statistics reporting
  • Detailed result reports (response times, status code distribution, etc.)
  • Configuration via command-line arguments or JSON file
  • Support for various HTTP methods and headers

Installation

go build -o gorl main.go

Usage

Command Line Arguments

Basic usage:

./gorl -url=https://httpbin.org/get -rate=5 -duration=30s

Advanced configuration:

./gorl -url=https://api.example.com -rate=10 -algorithm=leaky-bucket -duration=60s -concurrency=3 -method=POST -headers="Content-Type:application/json,Authorization:Bearer token123" -body='{"test":"data"}'

Configuration File

Using a configuration file (JSON):

./gorl -config=config.json

Example configuration file (see config.example.json):

{
  "url": "https://httpbin.org/get",
  "requestsPerSecond": 5.0,
  "algorithm": "token-bucket",
  "duration": "30s",
  "concurrency": 2,
  "method": "GET",
  "headers": {
    "User-Agent": "GoRL Rate Limiter",
    "Accept": "application/json"
  },
  "body": ""
}
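
On the Go side, keys like these typically map onto a struct through encoding/json tags. The sketch below is illustrative only; the field names, types, and the TestDuration helper are assumptions rather than GoRL's actual source.

package config

import "time"

// Config is an illustrative mirror of the keys in config.example.json;
// it is not taken from GoRL's source. The duration stays a string
// (e.g. "30s") and is parsed on demand.
type Config struct {
    URL               string            `json:"url"`
    RequestsPerSecond float64           `json:"requestsPerSecond"`
    Algorithm         string            `json:"algorithm"`
    Duration          string            `json:"duration"`
    Concurrency       int               `json:"concurrency"`
    Method            string            `json:"method"`
    Headers           map[string]string `json:"headers"`
    Body              string            `json:"body"`
}

// TestDuration converts the Duration string into a time.Duration.
func (c Config) TestDuration() (time.Duration, error) {
    return time.ParseDuration(c.Duration)
}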

Rate Limiting Algorithms

GoRL supports multiple rate limiting algorithms, each with different characteristics:

Token Bucket (default)

  • Best for: Allowing bursts while maintaining average rate
  • How it works: Tokens are added to a bucket at a constant rate. Each request consumes one token
  • Pros: Allows short bursts, smooth for variable traffic
  • Cons: May allow more requests than expected in short periods
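
A minimal token-bucket sketch in Go makes the refill-and-consume idea concrete. This is a generic illustration of the algorithm; the TokenBucket type and its methods are assumptions, not GoRL's internal code.

package limiter

import (
    "sync"
    "time"
)

// TokenBucket is an illustrative sketch, not GoRL's implementation.
// Tokens accrue at a fixed rate up to a burst capacity; each request
// consumes one token, so short bursts are allowed while the long-run
// rate stays at `rate` requests per second.
type TokenBucket struct {
    mu       sync.Mutex
    tokens   float64
    capacity float64
    rate     float64 // tokens added per second
    last     time.Time
}

func NewTokenBucket(rate, capacity float64) *TokenBucket {
    return &TokenBucket{tokens: capacity, capacity: capacity, rate: rate, last: time.Now()}
}

// Allow reports whether a request may proceed right now.
func (b *TokenBucket) Allow() bool {
    b.mu.Lock()
    defer b.mu.Unlock()
    now := time.Now()
    // Refill based on elapsed time, capped at the bucket capacity.
    b.tokens += now.Sub(b.last).Seconds() * b.rate
    if b.tokens > b.capacity {
        b.tokens = b.capacity
    }
    b.last = now
    if b.tokens >= 1 {
        b.tokens--
        return true
    }
    return false
}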

Leaky Bucket

  • Best for: Enforcing strict rate limits with no bursts
  • How it works: Requests are processed at a fixed rate, excess requests wait
  • Pros: Guarantees exact rate, prevents bursts
  • Cons: May introduce latency for bursty traffic
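
The leaky-bucket behavior can be sketched as a limiter that blocks each caller until its evenly spaced slot; again this is an illustration of the general technique, not GoRL's implementation.

package limiter

import (
    "sync"
    "time"
)

// LeakyBucket is an illustrative sketch, not GoRL's implementation.
// Requests are released at most once per interval, so the output rate
// is strictly even and bursts are smoothed into waiting time.
type LeakyBucket struct {
    mu       sync.Mutex
    interval time.Duration // spacing between consecutive requests
    next     time.Time     // earliest time the next request may run
}

func NewLeakyBucket(ratePerSecond float64) *LeakyBucket {
    return &LeakyBucket{interval: time.Duration(float64(time.Second) / ratePerSecond)}
}

// Wait blocks the caller until its scheduled slot.
func (l *LeakyBucket) Wait() {
    l.mu.Lock()
    now := time.Now()
    if l.next.Before(now) {
        l.next = now
    }
    wakeAt := l.next
    l.next = l.next.Add(l.interval)
    l.mu.Unlock()

    time.Sleep(time.Until(wakeAt))
}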

Fixed Window

  • Best for: Simple rate limiting with predictable windows
  • How it works: Counts requests in fixed time windows (e.g., per second)
  • Pros: Simple, predictable behavior
  • Cons: Can allow double the rate at window boundaries
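
A fixed-window counter fits in a few lines; the abrupt reset at the start of each window is exactly where the boundary effect mentioned above comes from. Illustrative sketch only, not GoRL's code.

package limiter

import (
    "sync"
    "time"
)

// FixedWindow is an illustrative sketch, not GoRL's implementation.
// It admits at most `limit` requests per fixed window, then rejects
// everything else until the next window begins.
type FixedWindow struct {
    mu          sync.Mutex
    limit       int
    window      time.Duration
    windowStart time.Time
    count       int
}

func NewFixedWindow(limit int, window time.Duration) *FixedWindow {
    return &FixedWindow{limit: limit, window: window, windowStart: time.Now()}
}

func (f *FixedWindow) Allow() bool {
    f.mu.Lock()
    defer f.mu.Unlock()
    now := time.Now()
    // Start a fresh window (and reset the counter) once the current one has elapsed.
    if now.Sub(f.windowStart) >= f.window {
        f.windowStart = now
        f.count = 0
    }
    if f.count < f.limit {
        f.count++
        return true
    }
    return false
}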

Sliding Window Log

  • Best for: Precise rate limiting with perfect accuracy
  • How it works: Maintains a log of all request timestamps
  • Pros: Most accurate, no boundary effects
  • Cons: High memory usage for high traffic
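
A sliding-window-log sketch: the slice of timestamps below is what drives the memory cost noted above, since one entry is kept per admitted request. Illustrative only, not GoRL's code.

package limiter

import (
    "sync"
    "time"
)

// SlidingWindowLog is an illustrative sketch, not GoRL's implementation.
// It records the timestamp of every admitted request and allows a new
// one only if fewer than `limit` requests fall inside the last window.
type SlidingWindowLog struct {
    mu     sync.Mutex
    limit  int
    window time.Duration
    log    []time.Time
}

func NewSlidingWindowLog(limit int, window time.Duration) *SlidingWindowLog {
    return &SlidingWindowLog{limit: limit, window: window}
}

func (s *SlidingWindowLog) Allow() bool {
    s.mu.Lock()
    defer s.mu.Unlock()
    now := time.Now()
    cutoff := now.Add(-s.window)
    // Drop timestamps that have slid out of the window.
    i := 0
    for i < len(s.log) && s.log[i].Before(cutoff) {
        i++
    }
    s.log = s.log[i:]
    if len(s.log) < s.limit {
        s.log = append(s.log, now)
        return true
    }
    return false
}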

Sliding Window Counter

  • Best for: Good approximation with moderate memory usage
  • How it works: Uses multiple sub-windows to approximate sliding behavior
  • Pros: Good accuracy, reasonable memory usage
  • Cons: More complex than fixed window
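
A common way to build the approximation is to keep counters for the current and previous windows and weight the previous one by how much of it still overlaps the sliding window; a sketch of that idea (illustrative only, not GoRL's code):

package limiter

import (
    "sync"
    "time"
)

// SlidingWindowCounter is an illustrative sketch, not GoRL's implementation.
// The effective count is the previous window's count weighted by its
// remaining overlap with the sliding window, plus the current count.
type SlidingWindowCounter struct {
    mu          sync.Mutex
    limit       float64
    window      time.Duration
    windowStart time.Time
    prevCount   float64
    currCount   float64
}

func NewSlidingWindowCounter(limit float64, window time.Duration) *SlidingWindowCounter {
    return &SlidingWindowCounter{limit: limit, window: window, windowStart: time.Now()}
}

func (s *SlidingWindowCounter) Allow() bool {
    s.mu.Lock()
    defer s.mu.Unlock()
    now := time.Now()
    // Roll forward if one or more whole windows have passed since the last request.
    for now.Sub(s.windowStart) >= s.window {
        s.windowStart = s.windowStart.Add(s.window)
        s.prevCount, s.currCount = s.currCount, 0
    }
    // Fraction of the current window that has elapsed; the previous window
    // contributes the complementary fraction of its count.
    elapsed := now.Sub(s.windowStart).Seconds() / s.window.Seconds()
    if s.prevCount*(1-elapsed)+s.currCount < s.limit {
        s.currCount++
        return true
    }
    return false
}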

Timeout Configuration

GoRL provides detailed timeout configuration options to handle various network conditions:

Timeout Types

  1. HTTP Timeout (-http-timeout): Overall timeout for the entire HTTP request

    • Default: 30s
    • Controls the maximum time for a complete request/response cycle
  2. Connect Timeout (-connect-timeout): Timeout for establishing TCP connection

    • Default: 10s
    • Controls how long to wait for the initial TCP connection
  3. TLS Handshake Timeout (-tls-handshake-timeout): Timeout for TLS/SSL handshake

    • Default: 10s
    • Controls the maximum time for TLS negotiation
  4. Response Header Timeout (-response-header-timeout): Timeout for receiving response headers

    • Default: 10s
    • Controls how long to wait for the server to send response headers
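
These four knobs correspond to standard settings on Go's net/http client and transport. The sketch below shows that usual mapping; treating it as exactly how GoRL wires its flags is an assumption, and the hard-coded values simply mirror the defaults listed in the Options table below.

package httpclient

import (
    "net"
    "net/http"
    "time"
)

// newClient is an illustrative sketch of how the four timeout flags
// typically map onto net/http; it is not guaranteed to match GoRL's code.
func newClient(httpTimeout, connectTimeout, tlsTimeout, headerTimeout time.Duration) *http.Client {
    transport := &http.Transport{
        DialContext: (&net.Dialer{
            Timeout:   connectTimeout, // -connect-timeout: TCP dial
            KeepAlive: 30 * time.Second,
        }).DialContext,
        TLSHandshakeTimeout:   tlsTimeout,    // -tls-handshake-timeout
        ResponseHeaderTimeout: headerTimeout, // -response-header-timeout
        MaxIdleConns:          100,
        MaxIdleConnsPerHost:   10,
    }
    return &http.Client{
        Timeout:   httpTimeout, // -http-timeout: whole request/response cycle
        Transport: transport,
    }
}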

Usage Examples

# Quick timeout for fast APIs
./gorl -url=https://api.example.com -rate=10 -http-timeout=5s -connect-timeout=2s

# Longer timeouts for slow endpoints
./gorl -url=https://slow-api.example.com -rate=2 -http-timeout=60s -response-header-timeout=30s

# Strict timeouts for testing
./gorl -url=https://api.example.com -rate=5 -connect-timeout=1s -tls-handshake-timeout=2s

Environment Variables

All timeout settings can also be configured via environment variables:

export GORL_HTTP_TIMEOUT=20s
export GORL_CONNECT_TIMEOUT=5s
export GORL_TLS_HANDSHAKE_TIMEOUT=5s
export GORL_RESPONSE_HEADER_TIMEOUT=10s
./gorl -url=https://api.example.com -rate=10
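
Values such as GORL_HTTP_TIMEOUT are ordinary Go duration strings, so reading them amounts to os.Getenv plus time.ParseDuration. A small sketch; the helper name is hypothetical, not taken from GoRL:

package config

import (
    "os"
    "time"
)

// durationFromEnv is a hypothetical helper: it reads a variable such as
// GORL_HTTP_TIMEOUT and falls back to a default when the variable is
// unset or is not a valid duration string like "20s".
func durationFromEnv(key string, fallback time.Duration) time.Duration {
    if v := os.Getenv(key); v != "" {
        if d, err := time.ParseDuration(v); err == nil {
            return d
        }
    }
    return fallback
}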

Options

Option                    Description                                      Default
-url                      Target URL to test (required)                    -
-rate                     Requests per second                              1.0
-algorithm                Rate limiting algorithm                          token-bucket
-duration                 Test execution duration                          10s
-concurrency              Number of concurrent workers                     1
-method                   HTTP method                                      GET
-headers                  HTTP headers (key1:value1,key2:value2 format)    -
-body                     Request body                                     -
-config                   Configuration file path                          -
-http-timeout             HTTP request timeout                             30s
-connect-timeout          TCP connection timeout                           10s
-tls-handshake-timeout    TLS handshake timeout                            10s
-response-header-timeout  Response header timeout                          10s
-tcp-keep-alive           Enable TCP keep-alive                            true
-tcp-keep-alive-period    TCP keep-alive period                            30s
-disable-keep-alives      Disable HTTP keep-alives                         false
-max-idle-conns           Maximum idle connections                         100
-max-idle-conns-per-host  Maximum idle connections per host                10
-live                     Show live statistics                             false
-compact                  Show compact one-line statistics                 false
-report-interval          Statistics report interval                       2s
-help                     Show help message                                -

Available Algorithms

  • token-bucket - Token bucket algorithm (default)
  • leaky-bucket - Leaky bucket algorithm
  • fixed-window - Fixed window algorithm
  • sliding-window-log - Sliding window log algorithm
  • sliding-window-counter - Sliding window counter algorithm

Output Example

Starting rate limit test...
Target URL: https://httpbin.org/get
Rate: 5.00 requests/second
Algorithm: Token Bucket
Duration: 30s
Concurrency: 2
Method: GET
----------------------------------------
Requests: 25 | Success: 25 | Failed: 0 | Rate: 5.00 req/s
Requests: 50 | Success: 50 | Failed: 0 | Rate: 5.00 req/s

========================================
Final Results:
Total Requests: 150
Successful Requests: 150
Failed Requests: 0
Success Rate: 100.00%

Status Code Distribution:
  200: 150 requests

Response Times:
  Min: 89.123ms
  Max: 245.567ms
  Avg: 142.345ms

Actual Rate: 5.00 requests/second
Target Rate: 5.00 requests/second
Algorithm Used: Token Bucket

License

MIT License - See the LICENSE file for details.

Performance Testing

GoRL includes comprehensive performance testing capabilities:

Benchmark Mode

Run GoRL in benchmark mode to measure performance metrics:

# Basic benchmark
./gorl -url=https://api.example.com -rate=100 -duration=30s -benchmark

# Benchmark with warmup
./gorl -url=https://api.example.com -rate=100 -duration=30s -benchmark -warmup=10s

Benchmark mode provides:

  • Request throughput (requests/second)
  • Latency statistics (min/avg/max)
  • Memory usage metrics
  • CPU time measurements
  • GC statistics
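
Memory and GC figures like these are available directly from the Go runtime; the sketch below shows how such a report could be sampled. It is illustrative, not GoRL's reporting code.

package bench

import (
    "fmt"
    "runtime"
    "time"
)

// printRuntimeStats is an illustrative sketch: it samples the counters a
// benchmark report would typically summarize.
func printRuntimeStats() {
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("Heap alloc:   %.2f MiB\n", float64(m.HeapAlloc)/(1<<20))
    fmt.Printf("Total alloc:  %.2f MiB\n", float64(m.TotalAlloc)/(1<<20))
    fmt.Printf("GC cycles:    %d\n", m.NumGC)
    fmt.Printf("GC pause sum: %s\n", time.Duration(m.PauseTotalNs))
    fmt.Printf("Goroutines:   %d\n", runtime.NumGoroutine())
}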

Running Performance Tests

# Run Go benchmark tests
make bench

# Run performance stress tests
make perf-test

# Run full benchmark suite
make benchmark
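
make bench presumably wraps go test -bench. A Go benchmark for a limiter's admission path usually looks like the sketch below, written here against the TokenBucket sketch from earlier in this README rather than GoRL's actual test file.

package limiter

import "testing"

// BenchmarkTokenBucketAllow measures the cost of a single admission check.
// Illustrative sketch; it relies on the TokenBucket example shown above.
func BenchmarkTokenBucketAllow(b *testing.B) {
    tb := NewTokenBucket(1e6, 1e6) // rate high enough that Allow rarely rejects
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        tb.Allow()
    }
}

Run it with go test -bench=. from the package directory.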

Benchmark Script

Use the included benchmark script for comprehensive testing:

# Run with default settings
./scripts/benchmark.sh

# Custom benchmark
BENCHMARK_URL=https://your-api.com BENCHMARK_DURATION=60s ./scripts/benchmark.sh

The benchmark script tests:

  • Different rate limiting algorithms
  • Various request rates (10-1000 req/s)
  • Multiple concurrency levels (1-100)
  • Different timeout configurations

Performance Considerations

  1. Algorithm Selection:

    • Token Bucket: Best overall performance, allows bursts
    • Leaky Bucket: Consistent rate, higher CPU usage
    • Fixed Window: Lowest memory usage, boundary effects
    • Sliding Window Log: Most accurate, highest memory usage
    • Sliding Window Counter: Good balance of accuracy and performance
  2. Concurrency Tuning:

    • Start with concurrency = CPU cores
    • Increase for I/O bound workloads
    • Monitor goroutine count for leaks (see the sketch after this list)
  3. Timeout Configuration:

    • Tight timeouts reduce resource usage
    • Balance between reliability and performance
    • Consider network latency in settings
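
For the goroutine monitoring mentioned under concurrency tuning, a small watcher is usually enough; a sketch (illustrative, not part of GoRL):

package bench

import (
    "fmt"
    "runtime"
    "time"
)

// watchGoroutines logs the goroutine count at a fixed interval; a count
// that keeps climbing while the request rate is steady usually points to a leak.
func watchGoroutines(interval time.Duration, stop <-chan struct{}) {
    ticker := time.NewTicker(interval)
    defer ticker.Stop()
    for {
        select {
        case <-ticker.C:
            fmt.Printf("goroutines: %d\n", runtime.NumGoroutine())
        case <-stop:
            return
        }
    }
}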
