
πŸ›‘οΈ Sentinel: MEDIUM Fix Information Leakage#114

Open
daggerstuff wants to merge 1 commit into staging from security/error-leak-f7x9q-8252249993496093953

Conversation


@daggerstuff daggerstuff commented Mar 31, 2026

🚨 Severity: MEDIUM
πŸ’‘ Vulnerability: The exception handling block directly passed exception details `str(e)` to the client via `HTTPException`, which could expose internal paths, stack traces, and system configuration.
πŸ”§ Fix: Sanitized the HTTP response by returning a generic "Internal server error" message. The detailed exception is now safely logged to standard output for server administrators.
βœ… Verification: Start the local inference server and intentionally trigger an internal error (e.g. by passing an incorrectly structured prompt or breaking the model). The client will receive an "Internal server error" while the backend logs the specific Python traceback.


PR created automatically by Jules for task 8252249993496093953 started by @daggerstuff
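The before/after pattern described above can be sketched as follows. This is a minimal illustration, not the PR's actual diff: `HTTPException` here is a tiny stand-in for `fastapi.HTTPException` so the snippet runs without FastAPI installed, and the handler names are assumptions.

```python
# Minimal stand-in for fastapi.HTTPException so this sketch is self-contained.
class HTTPException(Exception):
    def __init__(self, status_code: int, detail: str):
        self.status_code = status_code
        self.detail = detail
        super().__init__(detail)


def chat_completion_unsafe(run_model):
    # BEFORE: forwards str(e) to the client, leaking paths and config.
    try:
        return run_model()
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


def chat_completion_safe(run_model):
    # AFTER: log the specific error server-side, return a generic message.
    try:
        return run_model()
    except Exception as e:
        print(f"Internal Error during chat completion: {e}")
        raise HTTPException(status_code=500, detail="Internal server error")
```

With this change, a client triggering a failure sees only `"Internal server error"`, while the full detail lands in the server's stdout for administrators.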

Summary by Sourcery

Harden the inference API error handling to avoid leaking internal exception details to clients.

Bug Fixes:

  • Prevent information leakage by replacing raw exception messages in HTTP 500 responses with a generic internal server error message while logging details server-side.

Documentation:

  • Add a Sentinel security note documenting the information leakage risk from exposing exception details and the chosen mitigation.

Summary by cubic

Prevents exception detail leakage in the inference API. 500 errors now return a generic "Internal server error" while full tracebacks are logged server-side.

  • Bug Fixes

    • Sanitized /v1/chat/completions error handling in api/inference_server.py (removed str(e) from responses; log errors to stdout).
    • Fixed test import path in api/test_pixel_inference.py to api.pixel_inference_service.
  • Dependencies

    • Updated uv.lock: added starlette; adjusted torch and bitsandbytes markers.

Written for commit 2edac00. Summary will update on new commits.


Co-authored-by: daggerstuff <261005129+daggerstuff@users.noreply.github.com>
@google-labs-jules
Contributor

πŸ‘‹ Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a πŸ‘€ emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@sourcery-ai

sourcery-ai bot commented Mar 31, 2026

Reviewer's Guide

Sanitizes error handling in the local inference API so that internal exception details are no longer returned to clients, while preserving diagnostics via server-side logging, and adds a Sentinel note documenting the information-leakage fix.

Sequence diagram for sanitized error handling in chat_completion endpoint

```mermaid
sequenceDiagram
    actor Client
    participant FastAPI
    participant chat_completion
    participant LlamaModel

    Client->>FastAPI: POST /v1/chat/completions
    FastAPI->>chat_completion: Invoke with ChatCompletionRequest
    chat_completion->>LlamaModel: create_completion(formatted_prompt, max_tokens, temperature, stop)
    LlamaModel-->>chat_completion: Exception raised

    alt Internal_error_inference
        chat_completion->>chat_completion: Catch generic Exception
        chat_completion->>Stdout: Print Internal Error during chat completion: e
        chat_completion->>FastAPI: Raise HTTPException 500 with generic message
        FastAPI-->>Client: 500 response with Internal server error
    end
```

Flow diagram for updated exception handling in chat_completion

```mermaid
flowchart TD
    A[Start chat_completion] --> B[Build formatted_prompt]
    B --> C[Call Llama create_completion]
    C --> D{Exception raised?}
    D -- No --> E[Build OpenAI compatible response]
    E --> F[Return 200 response to client]
    F --> G[End]
    D -- Yes --> H[Print Internal Error during chat completion: e to stdout]
    H --> I[Raise HTTPException 500 with generic Internal server error message]
    I --> G[End]
```

File-Level Changes

api/inference_server.py: Hardened chat completion endpoint error handling to avoid leaking internal exception details to clients.
  • Wrapped the chat completion logic in a try/except that now logs exceptions to standard output instead of exposing the full exception string to clients.
  • Adjusted the HTTP 500 error response to return a generic "Internal server error" message in the exception handler.
  • Lightly reformatted imports, argument lists, and class definitions for consistency without changing behavior.

.Jules/sentinel.md: Documented the Sentinel security learning related to exception detail leakage.
  • Added a Sentinel markdown record describing the information leakage vulnerability, its cause, and the preventative pattern of logging details server-side while returning generic error messages to clients.


@vercel

vercel bot commented Mar 31, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| ai | Error | Error | Mar 31, 2026 7:47pm |


@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • Instead of printing errors to stdout in the chat_completion exception handler, consider using your application's structured logging facility (e.g., logging with appropriate level and context) so these internal errors are easier to aggregate and monitor in production.
  • The broad except Exception as e in chat_completion could be narrowed to expected error types (e.g., model invocation errors or validation issues), with a separate catch-all that logs and returns the generic 500, to avoid masking distinct error conditions.
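Both suggestions can be combined in a single handler shape, sketched below. This is an illustrative sketch, not the repository's code: `ModelError` and the `HTTPException` stand-in are assumptions introduced here so the snippet runs without FastAPI installed.

```python
import logging

logger = logging.getLogger("inference_server")


class ModelError(Exception):
    """Illustrative stand-in for an expected model-invocation failure."""


class HTTPException(Exception):
    # Minimal stand-in for fastapi.HTTPException.
    def __init__(self, status_code: int, detail: str):
        self.status_code = status_code
        self.detail = detail
        super().__init__(detail)


def chat_completion(run_model):
    try:
        return run_model()
    except ModelError:
        # Expected failure mode: log with context and traceback.
        logger.warning("model invocation failed", exc_info=True)
        raise HTTPException(status_code=500, detail="Internal server error")
    except Exception:
        # Unexpected failure: full traceback at error level, same generic reply.
        logger.error("unhandled error in chat_completion", exc_info=True)
        raise HTTPException(status_code=500, detail="Internal server error")
```

Using `logging` with `exc_info=True` keeps the full traceback available to log aggregators, while the narrowed `except ModelError` clause lets expected failures be logged and monitored separately from genuinely unexpected ones. The client response stays generic in both paths.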




@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 4 files

@coderabbitai

coderabbitai bot commented Mar 31, 2026

Warning

Rate limit exceeded

@daggerstuff has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 21 minutes and 26 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 21 minutes and 26 seconds.

βŒ› How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
βš™οΈ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 539cf5a4-359d-46d3-aeca-be23aeeefb98

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 2e5eb05 and 2edac00.

β›” Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
πŸ“’ Files selected for processing (3)
  • .Jules/sentinel.md
  • api/inference_server.py
  • api/test_pixel_inference.py

