
Fix rate limiter: atomic INCR+EXPIRE with fixed window (was sliding, never expired)#699

Open
smahima27 wants to merge 1 commit into main from fix-rate-limiter-atomic

Conversation

@smahima27
Contributor

Bug

The rate limiter never returned 429s despite the middleware being wired correctly.

Root cause

Two bugs in the original implementation:

1. Non-atomic check-then-increment with sliding window reset

# BEFORE (broken)
def rate_limit_exceeded?(...)
  current_count = @redis.get(key).to_i   # read
  current_count >= limit
end

def increment_request_count(...)
  @redis.pipelined do |p|
    p.incr(key)
    p.expire(key, period)   # ← resets TTL on EVERY request!
  end
end

EXPIRE was called on every request, continuously pushing the 60s window forward. Under rapid fire (105 requests in < 1s), the window never closed, so stale counts accumulated instead of expiring on schedule.

2. Race condition between get and incr — under concurrent load, multiple requests see count 0 before any increments land.
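Bug 1 can be reproduced without a Redis server. The sketch below uses a hypothetical in-memory `FakeRedis` with a manually advanced clock (all names and values here are illustrative, not from this PR): because EXPIRE runs on every request, each request pushes the TTL out another 60s, and even a very slow request rate keeps the key alive forever.

```ruby
# Hypothetical in-memory stand-in for Redis with a manual clock,
# used only to illustrate the TTL-reset bug (none of this is the PR's code).
class FakeRedis
  attr_reader :now

  def initialize
    @data   = {}  # key => count
    @expiry = {}  # key => absolute expiry time
    @now    = 0
  end

  # Move the clock forward and drop any keys whose TTL has elapsed.
  def advance(seconds)
    @now += seconds
    @expiry.select { |_, at| @now >= at }.each_key do |k|
      @data.delete(k)
      @expiry.delete(k)
    end
  end

  def incr(key)
    @data[key] = @data.fetch(key, 0) + 1
  end

  def expire(key, seconds)
    @expiry[key] = @now + seconds
  end

  def get(key)
    @data[key]&.to_s
  end
end

redis = FakeRedis.new
key = "rate:client-1"

# Broken pattern: EXPIRE on every request. One request every 50s is a
# trivial rate, yet each EXPIRE pushes the TTL out another 60s, so the
# key never dies and the count climbs without bound.
5.times do
  redis.incr(key)
  redis.expire(key, 60)  # TTL refreshed on every request
  redis.advance(50)      # 50s gap is always inside the refreshed 60s TTL
end

redis.get(key).to_i  # => 5 — four minutes in, nothing has expired
```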

Fix

Increment first with INCR, and set EXPIRE only on the first request of the window (count == 1) to establish a fixed window. Use the return value of INCR, which is the new count after the increment:

# AFTER (fixed)
def call(env)
  current_count = increment_request_count(client_id, endpoint_type)
  return rate_limit_response(...) if current_count.nil? || current_count > limit_for(endpoint_type)
  @app.call(env)
end

def increment_request_count(...)
  count = @redis.incr(key)
  @redis.expire(key, period) if count == 1   # ← fixed window, set once
  count
end

This ensures the check and the increment happen as a single atomic Redis operation (eliminating the get/incr race), and the window TTL is set exactly once per window, so counts expire on schedule and 429s fire as soon as the count exceeds the limit.

@smahima27 smahima27 requested a review from a team as a code owner March 19, 2026 15:31