
🛡️ Sentinel: Fix unbounded retries on client errors #191

Merged

abhimehro merged 1 commit into main from sentinel-retry-logic-fix-7567691608950293777 on Feb 13, 2026

Conversation

@abhimehro (Owner)

  • Modified main.py to prevent retrying on 4xx errors (except 429).
  • Added Sentinel Journal entry about the learning.

PR created automatically by Jules for task 7567691608950293777 started by @abhimehro

🚨 Severity: MEDIUM
💡 Vulnerability: The retry logic blindly retried all `httpx.HTTPError` exceptions, including 400 (Bad Request) and 401/403 (Auth failures). This causes API spamming, potential account lockouts, and delays in error reporting.
🎯 Impact: Inefficient API usage, potential triggering of rate limits or account blocks, and delayed error feedback to users.
🔧 Fix: Modified `_retry_request` to check for `httpx.HTTPStatusError` and re-raise immediately if the status code is a client error (400-499) and not 429 (Too Many Requests).
✅ Verification: Verified with a reproduction script that call count reduced from 3 (retries) to 1 (fail fast) for 400 errors. Ran regression tests.
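
For illustration, here is a minimal sketch of the fail-fast pattern described above. The helper name, signature, and retry limit are assumptions for the example, not the exact code in main.py:

```python
import httpx

MAX_RETRIES = 3  # illustrative; main.py defines its own limit


def retry_request(func, *args, **kwargs):
    """Retry transient HTTP failures, but fail fast on non-429 4xx errors."""
    last_exc = None
    for _attempt in range(MAX_RETRIES):
        try:
            return func(*args, **kwargs)
        except httpx.HTTPStatusError as e:
            code = e.response.status_code
            # Client errors (400-499) other than 429 will not succeed on
            # retry, so re-raise immediately instead of spamming the API.
            if 400 <= code < 500 and code != 429:
                raise
            last_exc = e  # 429 and 5xx responses stay retriable
        except httpx.HTTPError as e:
            # Transport-level errors (timeouts, connection resets) are
            # transient, so they remain retriable.
            last_exc = e
    raise last_exc
```

The ordering of the except clauses is the crux: the more specific `httpx.HTTPStatusError` must be caught before the generic `httpx.HTTPError`, otherwise status errors are swallowed and retried.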

Co-authored-by: abhimehro <84992105+abhimehro@users.noreply.github.com>
Copilot AI review requested due to automatic review settings on February 12, 2026 at 10:45
@google-labs-jules

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@trunk-io

trunk-io bot commented Feb 12, 2026

Merging to main in this repository is managed by Trunk.

  • To merge this pull request, check the box to the left or comment /trunk merge below.

@github-actions

👋 Development Partner is reviewing this PR. Will provide feedback shortly.

@gemini-code-assist

Summary of Changes

Hello @abhimehro, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a potential denial-of-service (DoS) risk by refining the application's retry mechanism. It ensures that client-side errors (4xx HTTP status codes, excluding 429) are no longer subject to unbounded retries, thereby preventing unnecessary API spamming, potential account lockouts, and improving the efficiency of error handling. The change also includes documentation of this security enhancement in the project's journal.

Highlights

  • Improved Retry Logic: Modified the _retry_request function in main.py to prevent automatic retries for HTTP 4xx client errors (e.g., 400 Bad Request, 401 Unauthorized, 403 Forbidden), with the exception of 429 (Too Many Requests). This prevents inefficient retries and potential DoS risks.
  • Sentinel Journal Update: Added a new entry to the Sentinel Journal (.jules/sentinel.md) documenting the vulnerability of unbounded retries on client errors, the learning derived from it, and the prevention strategy implemented.

Changelog

  • .jules/sentinel.md
    • Added a new journal entry for 'Unbounded Retries on Client Errors (DoS Risk)'
    • Detailed the vulnerability, learning, and prevention steps related to client error retries
  • main.py
    • Introduced a check within the _retry_request function to identify httpx.HTTPStatusError exceptions
    • Implemented logic to immediately re-raise 4xx HTTP errors (except 429) to prevent retries
    • Added debug logging for response content when a 4xx error is encountered and re-raised

Activity

  • PR created automatically by Jules for task 7567691608950293777, initiated by @abhimehro.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| ------- | ------- | ----------- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request prevents unbounded retries on 4xx client errors, enhancing security and stability, and adds a Sentinel Journal entry. However, the security review could not be completed as the content of main.py was inaccessible, which prevented a full SAST Reconnaissance Pass. Please ensure main.py is accessible for a complete security assessment. Additionally, consider refactoring the new error handling logic in main.py to improve maintainability by removing code duplication.

Copilot AI left a comment

Pull request overview

This PR hardens the HTTP retry behavior in main.py by failing fast on non-retriable client errors (4xx) while still allowing retries for rate limiting (429), and documents the lesson learned in the Sentinel journal.

Changes:

  • Update _retry_request() to immediately re-raise httpx.HTTPStatusError for non-429 4xx responses.
  • Add a .jules/sentinel.md journal entry documenting the retry pitfall and prevention guidance.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

| File | Description |
| ---- | ----------- |
| main.py | Prevents retries on non-429 4xx HTTP status errors inside _retry_request() to avoid unnecessary repeated requests. |
| .jules/sentinel.md | Adds a Sentinel journal entry capturing the retry/HTTPStatusError learning and prevention guidance. |

Comment on lines +606 to +611

```python
code = e.response.status_code
if 400 <= code < 500 and code != 429:
    if hasattr(e, "response") and e.response is not None:
        log.debug(
            f"Response content: {sanitize_for_log(e.response.text)}"
        )
```
Copilot AI commented on Feb 12, 2026:

In the HTTPStatusError branch, e.response.status_code is accessed before the hasattr(e, "response")/e.response is not None guard. If you want this guard, it needs to wrap the status_code access too; otherwise the guard is misleading. Consider either removing the guard entirely (and rely on HTTPStatusError always having a response) or moving it before reading status_code.

Suggested change:

```diff
-code = e.response.status_code
-if 400 <= code < 500 and code != 429:
-    if hasattr(e, "response") and e.response is not None:
-        log.debug(
-            f"Response content: {sanitize_for_log(e.response.text)}"
-        )
+# Ensure the response exists before accessing status_code or text.
+# While httpx.HTTPStatusError is expected to always have a response,
+# this guard keeps the code robust against library or usage changes.
+if not hasattr(e, "response") or e.response is None:
+    raise
+code = e.response.status_code
+if 400 <= code < 500 and code != 429:
+    log.debug(
+        f"Response content: {sanitize_for_log(e.response.text)}"
+    )
```

Comment on lines +603 to +608

```python
# Security Enhancement: Do not retry client errors (4xx) except 429 (Too Many Requests).
# Retrying 4xx errors is inefficient and can trigger security alerts or rate limits.
if isinstance(e, httpx.HTTPStatusError):
    code = e.response.status_code
    if 400 <= code < 500 and code != 429:
        if hasattr(e, "response") and e.response is not None:
```
Copilot AI commented on Feb 12, 2026:

This change introduces new behavior (fail-fast on non-429 4xx). There are existing tests for _retry_request logging/redaction, but none that assert we don’t retry on 4xx (e.g., mock_func.call_count == 1 with max_retries>1 for a 401/400) and that we still retry on 429. Adding a focused test would prevent regressions in retry semantics.
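
A focused test along those lines might look like this sketch (the `_retry_request(func, max_retries=...)` signature and import path are assumptions for illustration; the real code in main.py may differ):

```python
import httpx
import pytest
from unittest.mock import MagicMock

from main import _retry_request  # assumed import path


def _status_error(code: int) -> httpx.HTTPStatusError:
    """Build an httpx.HTTPStatusError carrying the given status code."""
    request = httpx.Request("GET", "https://api.example.com/")
    response = httpx.Response(code, request=request)
    return httpx.HTTPStatusError("boom", request=request, response=response)


def test_client_error_fails_fast():
    # A 400 should surface immediately: exactly one call, no retries.
    func = MagicMock(side_effect=_status_error(400))
    with pytest.raises(httpx.HTTPStatusError):
        _retry_request(func, max_retries=3)
    assert func.call_count == 1


def test_rate_limit_still_retries():
    # A 429 should remain retriable up to the retry limit.
    func = MagicMock(side_effect=_status_error(429))
    with pytest.raises(httpx.HTTPStatusError):
        _retry_request(func, max_retries=3)
    assert func.call_count == 3
```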

Comment on lines +13 to +19

```markdown
## 2026-10-24 - Unbounded Retries on Client Errors (DoS Risk)

**Vulnerability:** The retry logic blindly retried all `httpx.HTTPError` exceptions, including 400 (Bad Request) and 401/403 (Auth failures). This causes API spamming, potential account lockouts, and delays in error reporting.

**Learning:** `httpx.HTTPStatusError` (raised by `raise_for_status()`) inherits from `httpx.HTTPError`. Generic `except httpx.HTTPError:` blocks will catch it and retry client errors unless explicitly handled.

**Prevention:** Inside retry loops, catch `httpx.HTTPStatusError` first. Check `response.status_code`. If `400 <= code < 500` (and not `429`), re-raise immediately.
```
Copilot AI commented on Feb 12, 2026:

The heading/description calls this "Unbounded Retries", but _retry_request is bounded by MAX_RETRIES (currently 10). To avoid misleading future readers, consider rewording to "Retries on client errors (4xx)" (or similar) and keep the DoS discussion focused on unnecessary repeated requests rather than "unbounded" behavior.

Suggested change:

```diff
-## 2026-10-24 - Unbounded Retries on Client Errors (DoS Risk)
-
-**Vulnerability:** The retry logic blindly retried all `httpx.HTTPError` exceptions, including 400 (Bad Request) and 401/403 (Auth failures). This causes API spamming, potential account lockouts, and delays in error reporting.
-
-**Learning:** `httpx.HTTPStatusError` (raised by `raise_for_status()`) inherits from `httpx.HTTPError`. Generic `except httpx.HTTPError:` blocks will catch it and retry client errors unless explicitly handled.
-
-**Prevention:** Inside retry loops, catch `httpx.HTTPStatusError` first. Check `response.status_code`. If `400 <= code < 500` (and not `429`), re-raise immediately.
+## 2026-10-24 - Retries on Client Errors (4xx) Causing DoS Risk
+
+**Vulnerability:** The retry logic blindly retried all `httpx.HTTPError` exceptions, including 400 (Bad Request) and 401/403 (Auth failures). Even though retries are bounded by `MAX_RETRIES`, this still causes unnecessary repeated requests, API spamming, potential account lockouts, and delays in surfacing real errors.
+
+**Learning:** `httpx.HTTPStatusError` (raised by `raise_for_status()`) inherits from `httpx.HTTPError`. Generic `except httpx.HTTPError:` blocks will catch it and retry client errors unless explicitly handled.
+
+**Prevention:** Inside retry loops, catch `httpx.HTTPStatusError` first. Check `response.status_code`. If `400 <= code < 500` (and not `429`), re-raise immediately so client errors fail fast instead of being retried.
```
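
The Learning note above is easy to verify directly. Here is a tiny self-contained demonstration (illustrative only, not part of the PR) of why except-clause order matters:

```python
import httpx

# HTTPStatusError is a subclass of HTTPError, so a bare
# `except httpx.HTTPError:` clause also catches status errors.
assert issubclass(httpx.HTTPStatusError, httpx.HTTPError)

request = httpx.Request("GET", "https://example.com/")
response = httpx.Response(400, request=request)

try:
    raise httpx.HTTPStatusError("400 Bad Request", request=request, response=response)
except httpx.HTTPStatusError:
    print("caught specifically: safe to fail fast here")
except httpx.HTTPError:
    print("unreached: the more specific clause above wins")
```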

@abhimehro merged commit 0375bc9 into main on Feb 13, 2026. 25 checks passed.
@abhimehro deleted the sentinel-retry-logic-fix-7567691608950293777 branch on February 13, 2026 at 01:57.