
Nginx rate limiting#135

Open
395ShikharSingh wants to merge 2 commits into ArchiveLabs:main from 395ShikharSingh:feature/nginx-rate-limit

Conversation

@395ShikharSingh
Contributor

NGINX Rate Limiter

Implements rate limiting with Nginx limit_req to block high-volume or burst traffic before it reaches FastAPI, protecting the API from overload and repeated abusive requests.

Key Features

  • Edge Protection: Nginx blocks excessive requests at the edge before they reach FastAPI
  • Endpoint-Specific Limits:
    • OPDS endpoints: No rate limiting
    • /items: No rate limiting
    • /upload & /authenticate: Strict (20/min + 5 burst)
    • Other endpoints: General (100/min + 10 burst)
  • Burst Handling: Allows controlled bursts with nodelay for better user experience

Changes

Nginx Configuration (docker/nginx/nginx.conf):

  • Added 3 limit_req_zone definitions:
    • general_limit: 100 requests/minute
    • lenient_limit: 300 requests/minute (defined but reserved for future use)
    • strict_limit: 20 requests/minute
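The three zone definitions might look roughly like the sketch below (the 10m shared-memory size is an illustrative assumption, not taken from the diff; the actual values are in docker/nginx/nginx.conf):

```nginx
# http {} context: one shared-memory zone per tier, keyed on client IP.
# A 10m zone holds state for roughly 160k distinct addresses.
limit_req_zone $binary_remote_addr zone=general_limit:10m rate=100r/m;
limit_req_zone $binary_remote_addr zone=lenient_limit:10m rate=300r/m;  # reserved for future use
limit_req_zone $binary_remote_addr zone=strict_limit:10m  rate=20r/m;
```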

Nginx Location Blocks (docker/nginx/conf.d/lenny.conf):

  • /v1/api/opds* → No rate limiting (regex location)
  • /v1/api/items → No rate limiting (exact match)
  • /v1/api/(upload|authenticate) → Strict rate limiting (20/min)
  • /v1/api/* (all other endpoints) → General rate limiting (100/min)
  • Fixed regex location blocks to use proxy_pass without URI path (Nginx requirement)
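The location blocks above could be sketched as follows (the upstream name `lenny_api` is a placeholder assumption; the real docker/nginx/conf.d/lenny.conf may differ):

```nginx
# server {} context. The exact (=) and regex (~) locations below take
# priority over the general prefix location, so OPDS and /items bypass
# rate limiting entirely.
location = /v1/api/items {                    # exact match: no limit_req
    proxy_pass http://lenny_api;
}

location ~ ^/v1/api/opds {                    # regex match: public OPDS feeds
    proxy_pass http://lenny_api;              # regex locations need proxy_pass without a URI path
}

location ~ ^/v1/api/(upload|authenticate) {   # resource- and security-sensitive
    limit_req zone=strict_limit burst=5 nodelay;
    proxy_pass http://lenny_api;
}

location /v1/api/ {                           # everything else: general tier
    limit_req zone=general_limit burst=10 nodelay;
    proxy_pass http://lenny_api;
}
```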

Rate Limiting Strategy

  1. No Rate Limiting:

    • /v1/api/opds and /v1/api/opds/{book_id} - Public catalog feeds for e-reader apps
    • /v1/api/items - Public items listing
  2. Strict Rate Limiting (20/min + 5 burst = 25 total):

    • /v1/api/upload - File uploads (resource-intensive)
    • /v1/api/authenticate - Authentication (security-sensitive)
  3. General Rate Limiting (100/min + 10 burst = 110 total):

    • All other API endpoints

Testing Results

OPDS endpoints: 20/20 requests passed (no rate limiting)
/items endpoint: 20/20 requests passed (no rate limiting)
General endpoints: 11 requests passed, then 503 (rate limited)
Strict endpoints: 8 requests passed, then 503 (rate limited)

Verification

  • Nginx configuration valid
  • Rate limit zones configured: 3 zones
  • Location blocks correctly applied
  • Edge protection working

Technical Details

Nginx Location Matching Order:

  1. Exact matches (=)
  2. Regex matches (~) - checked in order of appearance (unless a ^~ prefix match applies)
  3. Prefix matches - longest match, used only when no regex matches

This ensures OPDS and /items endpoints are matched before general rate limiting is applied.

Burst Handling:

  • burst=N: Allows N additional requests beyond the rate
  • nodelay: Processes burst requests immediately instead of queuing
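As a rough illustration of those numbers (a standalone sketch, not part of the PR; `rate_summary` is a hypothetical helper), nginx's leaky bucket refills one slot every 60/rate seconds, and with nodelay it accepts 1 + burst back-to-back requests before returning 503s:

```shell
# Hypothetical helper: for a given zone, print the refill interval implied
# by the rate and how many back-to-back requests are accepted at once.
rate_summary() {
    local rate_per_min=$1 burst=$2
    local interval instant
    interval=$(awk "BEGIN { printf \"%.1f\", 60 / $rate_per_min }")  # seconds per token
    instant=$(( 1 + burst ))                                         # first request + burst slots
    echo "rate=${rate_per_min}/min refill=${interval}s instant=${instant}"
}

rate_summary 20 5     # strict zone:  refill=3.0s instant=6
rate_summary 100 10   # general zone: refill=0.6s instant=11
```

This matches the observed test results below: roughly 6-8 requests pass on strict endpoints and ~11 on general ones before 503s begin.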

Notes

  • Rate limiting is implemented entirely at the Nginx layer (no application-level rate limiting)
  • The lenient_limit zone (300/min) is defined but not currently used; reserved for future endpoint-specific needs
  • All rate limiting uses $binary_remote_addr as the key (client IP address)

PTAL @ronibhakta1

Contributor

Copilot AI left a comment


Pull request overview

This PR implements edge-level rate limiting using Nginx to protect the FastAPI application from high-volume traffic and attacks, with tiered rate limits for different endpoint categories.

Key Changes:

  • Added three rate limit zones in nginx.conf (general: 100/min, lenient: 300/min, strict: 20/min)
  • Configured location-specific rate limiting in lenny.conf with OPDS and items endpoints unprotected, strict limits on upload/authenticate, and general limits on other API endpoints
  • Used regex and exact location matches to apply different rate limiting policies

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File Description
docker/nginx/nginx.conf Defines three rate limit zones (general_limit, lenient_limit, strict_limit) based on client IP address
docker/nginx/conf.d/lenny.conf Adds location blocks with rate limiting for different endpoint categories, keeping OPDS and items endpoints unrestricted


@395ShikharSingh
Contributor Author

395ShikharSingh commented Jan 9, 2026

Converting to draft; I have to check some endpoints.

@395ShikharSingh 395ShikharSingh marked this pull request as draft January 9, 2026 14:49
@395ShikharSingh 395ShikharSingh force-pushed the feature/nginx-rate-limit branch from 4042c33 to 630f192 Compare January 23, 2026 22:13
@395ShikharSingh
Contributor Author

Sharing a .sh test script for exercising the Nginx rate limiter.

#!/bin/bash

# Rate Limiter Test Script
# Tests nginx rate limiting configuration

BASE_URL="http://localhost:8080"
PASS_COUNT=0
FAIL_COUNT=0
TOTAL_COUNT=0

# Colors for output
GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

print_header() {
    echo ""
    echo -e "${BLUE}======================================${NC}"
    echo -e "${BLUE}$1${NC}"
    echo -e "${BLUE}======================================${NC}"
}

test_endpoint() {
    local endpoint=$1
    local request_count=$2
    local expected_pass=$3
    local description=$4
    
    print_header "Testing: $description"
    echo "Endpoint: $endpoint"
    echo "Sending $request_count requests, expecting ~$expected_pass to pass"
    echo ""
    
    local pass=0
    local fail=0
    
    for i in $(seq 1 $request_count); do
        status=$(curl -s -o /dev/null -w "%{http_code}" "${BASE_URL}${endpoint}")
        if [ "$status" = "200" ] || [ "$status" = "307" ] || [ "$status" = "404" ] || [ "$status" = "422" ]; then
            pass=$((pass + 1))
            echo -e "  Request $i: ${GREEN}$status (passed)${NC}"
        elif [ "$status" = "503" ]; then
            fail=$((fail + 1))
            echo -e "  Request $i: ${YELLOW}$status (rate limited)${NC}"
        else
            echo -e "  Request $i: ${RED}$status (unexpected)${NC}"
        fi
    done
    
    echo ""
    echo "Results: $pass passed, $fail rate limited (503)"
    
    if [ $pass -ge $expected_pass ] && [ $fail -gt 0 ]; then
        echo -e "${GREEN}✓ Rate limiting working correctly!${NC}"
    elif [ $fail -eq 0 ] && [ $expected_pass -eq $request_count ]; then
        echo -e "${GREEN}✓ No rate limiting (as expected)${NC}"
    else
        echo -e "${YELLOW}⚠ Check results manually${NC}"
    fi
}

test_burst() {
    local endpoint=$1
    local request_count=$2
    local description=$3
    
    print_header "Burst Test: $description"
    echo "Endpoint: $endpoint"
    echo "Sending $request_count rapid requests..."
    echo ""
    
    local pass=0
    local fail=0
    
    # Fire requests back-to-back as fast as possible to exercise the burst allowance
    for i in $(seq 1 $request_count); do
        status=$(curl -s -o /dev/null -w "%{http_code}" "${BASE_URL}${endpoint}")
        if [ "$status" = "200" ] || [ "$status" = "307" ] || [ "$status" = "404" ] || [ "$status" = "422" ]; then
            pass=$((pass + 1))
        elif [ "$status" = "503" ]; then
            fail=$((fail + 1))
        fi
    done
    
    echo "Results: $pass passed, $fail rate limited (503)"
}

echo ""
echo -e "${BLUE}╔══════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║     NGINX RATE LIMITER TEST SUITE        ║${NC}"
echo -e "${BLUE}╚══════════════════════════════════════════╝${NC}"
echo ""
echo "Base URL: $BASE_URL"
echo "Starting tests..."

# Test 1: OPDS endpoint (no rate limiting)
print_header "Test 1: OPDS Endpoint (NO rate limiting expected)"
echo "Sending 25 rapid requests to /v1/api/opds..."
pass=0
fail=0
for i in $(seq 1 25); do
    status=$(curl -s -o /dev/null -w "%{http_code}" "${BASE_URL}/v1/api/opds")
    if [ "$status" = "200" ] || [ "$status" = "307" ] || [ "$status" = "404" ]; then
        pass=$((pass + 1))
    elif [ "$status" = "503" ]; then
        fail=$((fail + 1))
    fi
    printf "."
done
echo ""
echo "Results: $pass passed, $fail rate limited"
if [ $fail -eq 0 ]; then
    echo -e "${GREEN}✓ OPDS: No rate limiting applied (correct!)${NC}"
else
    echo -e "${RED}✗ OPDS: Unexpected rate limiting detected${NC}"
fi

# Test 2: /items endpoint (no rate limiting)
print_header "Test 2: /items Endpoint (NO rate limiting expected)"
echo "Sending 25 rapid requests to /v1/api/items..."
pass=0
fail=0
for i in $(seq 1 25); do
    status=$(curl -s -o /dev/null -w "%{http_code}" "${BASE_URL}/v1/api/items")
    if [ "$status" = "200" ] || [ "$status" = "307" ] || [ "$status" = "404" ]; then
        pass=$((pass + 1))
    elif [ "$status" = "503" ]; then
        fail=$((fail + 1))
    fi
    printf "."
done
echo ""
echo "Results: $pass passed, $fail rate limited"
if [ $fail -eq 0 ]; then
    echo -e "${GREEN}✓ /items: No rate limiting applied (correct!)${NC}"
else
    echo -e "${RED}✗ /items: Unexpected rate limiting detected${NC}"
fi

# Test 3: /authenticate endpoint (strict: 20/min + burst 5)
print_header "Test 3: /authenticate Endpoint (STRICT rate limiting: 20/min + 5 burst)"
echo "Sending 30 rapid requests to /v1/api/authenticate..."
echo "Expected: ~6-8 pass (1 base + 5 burst), rest rate limited"
pass=0
fail=0
for i in $(seq 1 30); do
    status=$(curl -s -o /dev/null -w "%{http_code}" -X POST "${BASE_URL}/v1/api/authenticate" -H "Content-Type: application/json" -d '{}')
    if [ "$status" = "200" ] || [ "$status" = "307" ] || [ "$status" = "404" ] || [ "$status" = "422" ]; then
        pass=$((pass + 1))
        echo -e "  Request $i: ${GREEN}$status (passed)${NC}"
    elif [ "$status" = "503" ]; then
        fail=$((fail + 1))
        echo -e "  Request $i: ${YELLOW}$status (rate limited)${NC}"
    else
        echo -e "  Request $i: $status"
    fi
done
echo ""
echo "Results: $pass passed, $fail rate limited"
if [ $fail -gt 0 ]; then
    echo -e "${GREEN}✓ /authenticate: Rate limiting working!${NC}"
else
    echo -e "${RED}✗ /authenticate: Rate limiting may not be working${NC}"
fi

# Test 4: General API endpoint (100/min + burst 10)
print_header "Test 4: General API Endpoint (GENERAL rate limiting: 100/min + 10 burst)"
echo "Sending 20 rapid requests to /v1/api/status..."
echo "Expected: ~11-15 pass (1 base + 10 burst), rest rate limited"
pass=0
fail=0
for i in $(seq 1 20); do
    status=$(curl -s -o /dev/null -w "%{http_code}" "${BASE_URL}/v1/api/status")
    if [ "$status" = "200" ] || [ "$status" = "307" ] || [ "$status" = "404" ]; then
        pass=$((pass + 1))
        echo -e "  Request $i: ${GREEN}$status (passed)${NC}"
    elif [ "$status" = "503" ]; then
        fail=$((fail + 1))
        echo -e "  Request $i: ${YELLOW}$status (rate limited)${NC}"
    else
        echo -e "  Request $i: $status"
    fi
done
echo ""
echo "Results: $pass passed, $fail rate limited"
if [ $fail -gt 0 ]; then
    echo -e "${GREEN}✓ General API: Rate limiting working!${NC}"
else
    echo -e "${YELLOW}⚠ General API: May need more requests to trigger rate limit${NC}"
fi

print_header "TEST SUMMARY"
echo ""
echo "Rate Limiting Configuration:"
echo "  - OPDS endpoints:     No rate limiting"
echo "  - /items endpoint:    No rate limiting"
echo "  - /authenticate:      Strict (20/min + 5 burst)"
echo "  - /upload:            Strict (20/min + 5 burst)"
echo "  - Other API:          General (100/min + 10 burst)"
echo ""
echo -e "${BLUE}Tests completed!${NC}"

@395ShikharSingh 395ShikharSingh marked this pull request as ready for review January 23, 2026 22:27