
🛡️ Sentinel: [HIGH] Fix Information Exposure through Exceptions#123

Open
daggerstuff wants to merge 1 commit into staging from
security/information-exposure-api-errors-7f2a-6697975960274588231

Conversation

@daggerstuff
Owner

@daggerstuff daggerstuff commented Apr 1, 2026

🚨 Severity: HIGH
💡 Vulnerability: Information Exposure. The /infer, /batch-infer, and /reload-model endpoints in api/pixel_inference_service.py were catching exceptions and passing the raw string representation (str(e)) to the FastAPI HTTPException detail field, potentially leaking stack traces, system paths, and internal error states to end users.
🔧 Fix: Replaced detail=str(e) with a generic detail="Internal server error" and removed the from e exception chaining clause to prevent internal details from bubbling up. The original detailed errors are still logged server-side via logger.error().
✅ Verification: Review the modified exception handlers in api/pixel_inference_service.py and verify python3 -c 'import ast; ast.parse(open("api/pixel_inference_service.py").read())' passes without syntax errors.
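The fix described above follows a standard sanitization pattern. A minimal, self-contained sketch (a stand-in `HTTPException` class is defined here purely so the snippet runs without FastAPI installed; the actual service raises `fastapi.HTTPException`):

```python
import logging

logger = logging.getLogger("pixel_inference_service")


class HTTPException(Exception):
    """Minimal stand-in for fastapi.HTTPException (illustration only)."""

    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail


def infer(run_model):
    """Run the model; on failure, log details server-side, return a generic error."""
    try:
        return run_model()
    except Exception as e:
        # Full exception text stays in the server logs only.
        logger.error(f"Inference error: {e}")
        # No `from e`, and a constant detail string: the client learns
        # nothing about internal paths or stack state.
        raise HTTPException(status_code=500, detail="Internal server error")
```

The client-visible payload is a constant, so nothing exception-specific can leak regardless of what the model code raises.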


PR created automatically by Jules for task 6697975960274588231 started by @daggerstuff

Summary by Sourcery

Sanitize error responses in inference-related API endpoints to avoid leaking internal exception details to clients.

Bug Fixes:

  • Prevent exposure of raw exception messages and internal error details from the /infer, /batch-infer, and /reload-model endpoints by returning generic error messages instead.

Documentation:

  • Add a Sentinel entry documenting the new practice of masking API exception details while logging full errors server-side.

Summary by cubic

Sanitized inference API error responses to prevent leaking stack traces or internal paths. Replaced raw exception details with "Internal server error", removed exception chaining, and logged full errors server-side for /infer, /batch-infer, and /reload-model in api/pixel_inference_service.py.

Written for commit b8d36e3. Summary will update on new commits.

Co-authored-by: daggerstuff <261005129+daggerstuff@users.noreply.github.com>
@vercel

vercel bot commented Apr 1, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Actions | Updated (UTC)
ai | Error | Error | Apr 1, 2026 0:47am

Copilot AI review requested due to automatic review settings April 1, 2026 12:47
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@sourcery-ai
Copy link
Copy Markdown

sourcery-ai bot commented Apr 1, 2026


Reviewer's Guide

Sanitizes error responses for inference-related endpoints to prevent leaking internal exception details while preserving server-side logging, and documents the security pattern in the Sentinel playbook.

Sequence diagram for sanitized inference error handling

sequenceDiagram
    actor User
    participant FastAPIApp
    participant PixelInferenceService
    participant InferenceEngine
    participant Logger

    User->>FastAPIApp: POST /infer
    FastAPIApp->>PixelInferenceService: infer(request)
    PixelInferenceService->>InferenceEngine: generate_response(request)
    InferenceEngine-->>PixelInferenceService: Exception e
    PixelInferenceService->>Logger: error(Inference error: e)
    PixelInferenceService-->>FastAPIApp: raise HTTPException(500, Internal server error)
    FastAPIApp-->>User: 500 Internal server error (generic message)

File-Level Changes

Change Details Files
Hardened error handling for single-request inference endpoint to avoid exposing exception details to clients.
  • Replaced HTTPException detail payload based on str(e) with a constant generic message 'Internal server error'.
  • Removed exception chaining via 'from e' when raising HTTPException, ensuring upstream stack traces are not surfaced in responses while retaining logger.error output.
api/pixel_inference_service.py
Hardened error handling for batch inference endpoint to return sanitized error messages per item.
  • Changed per-request error objects in batch responses from including str(e) to a generic 'Internal server error' string.
  • Kept detailed exception information only in backend logs via logger.error calls.
api/pixel_inference_service.py
Hardened error handling for model reload endpoint to prevent leaking internal exception messages.
  • Replaced HTTPException detail based on str(e) with a generic 'Internal server error'.
  • Removed exception chaining when raising HTTPException to avoid propagating internal stack traces to the client.
api/pixel_inference_service.py
Updated Sentinel security playbook documentation to capture the new exception-handling pattern.
  • Added an entry describing the information exposure risk of passing raw exceptions into FastAPI HTTPException detail fields.
  • Documented the mitigation pattern: log full exceptions server-side and return generic error messages without exception chaining.
.Jules/sentinel.md

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.


@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • Consider centralizing the generic error message (e.g., a constant or helper) so that future changes to the wording or localization only need to be made in one place.
  • For the /batch-infer endpoint, you might want to include a simple machine-readable error code alongside the generic message to help clients distinguish between different failure modes without exposing internal details.
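A minimal sketch of how both suggestions could combine. All names here (`GENERIC_ERROR`, `error_payload`, and the code strings) are hypothetical illustrations, not identifiers from this PR:

```python
# One place to change the wording (or localize) later.
GENERIC_ERROR = "Internal server error"

# Hypothetical machine-readable codes a client can branch on
# without learning anything about internal state.
ERR_INFERENCE_FAILED = "inference_failed"
ERR_MODEL_RELOAD_FAILED = "model_reload_failed"


def error_payload(code: str) -> dict:
    """Build a sanitized per-item error object for batch responses."""
    return {"error": GENERIC_ERROR, "code": code}
```

In the batch loop, `responses.append(error_payload(ERR_INFERENCE_FAILED))` would replace the inline dict, keeping the message constant while still distinguishing failure modes.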
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Consider centralizing the generic error message (e.g., a constant or helper) so that future changes to the wording or localization only need to be made in one place.
- For the `/batch-infer` endpoint, you might want to include a simple machine-readable error code alongside the generic message to help clients distinguish between different failure modes without exposing internal details.





@cubic-dev-ai cubic-dev-ai bot left a comment


No issues found across 2 files


Copilot AI left a comment


Pull request overview

This PR addresses information exposure risks by sanitizing error details returned from several FastAPI endpoints, ensuring internal exception strings aren’t surfaced to API clients.

Changes:

  • Replaced user-facing detail=str(e) / {"error": str(e)} with a generic "Internal server error".
  • Removed explicit exception chaining (from e) in raised HTTPExceptions.
  • Added a Sentinel learning entry documenting the pattern.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

File | Description
api/pixel_inference_service.py | Sanitizes error responses for /infer, /batch-infer, and /reload-model to avoid leaking internal exception details.
.Jules/sentinel.md | Documents the “mask exception details” guidance for future prevention.


Comment on lines 407 to +409

      except Exception as e:
          logger.error(f"Inference error: {e}")
    -     raise HTTPException(status_code=500, detail=str(e)) from e
    +     raise HTTPException(status_code=500, detail="Internal server error")

Copilot AI Apr 1, 2026


logger.error(f"... {e}") logs only the exception message and drops the stack trace, which makes production debugging (and incident response) significantly harder. Prefer logging with the traceback (e.g., logger.exception(...) inside an except block, or logger.error(..., exc_info=True)), while still returning the sanitized HTTPException detail to the client.

Copilot uses AI. Check for mistakes.
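The difference the reviewer is pointing at is observable with the stdlib logging module alone: `logger.error` records only the message, while `logger.exception` appends the active traceback to the same record. A self-contained sketch:

```python
import io
import logging


def captured_log(use_exception: bool) -> str:
    """Log a failure one of two ways and return what lands in the log."""
    buf = io.StringIO()
    logger = logging.getLogger(f"demo_{use_exception}")
    logger.handlers = [logging.StreamHandler(buf)]
    logger.propagate = False
    logger.setLevel(logging.ERROR)
    try:
        raise ValueError("internal detail: /srv/model.bin missing")
    except ValueError as e:
        if use_exception:
            logger.exception(f"Inference error: {e}")  # message + traceback
        else:
            logger.error(f"Inference error: {e}")      # message only
    return buf.getvalue()
```

Either way the client response stays generic; only the server-side log record changes.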
Comment on lines 423 to +425

      except Exception as e:
          logger.error(f"Batch inference error: {e}")
    -     responses.append({"error": str(e)})
    +     responses.append({"error": "Internal server error"})

Copilot AI Apr 1, 2026


Same logging issue here: the server log will not include a traceback, only str(e). Switching to a traceback-inclusive log (logger.exception(...) / exc_info=True) will preserve diagnostic detail server-side without reintroducing information exposure to clients.

@@ -437,7 +437,7 @@ async def reload_model():

              raise HTTPException(status_code=500, detail="Failed to reload model")
      except Exception as e:
          logger.error(f"Reload error: {e}")

Copilot AI Apr 1, 2026


Consider logging the full traceback for reload failures as well. Model reload issues can be operationally critical, and having the stack trace in logs (without exposing it to clients) improves reliability and reduces time-to-mitigate.

Suggested change

    -     logger.error(f"Reload error: {e}")
    +     logger.exception(f"Reload error: {e}")

@@ -1,2 +1,3 @@
## 2026-03-27 - Prevent SQL Error Leakage | Vulnerability: Information Exposure through Error Messages | Learning: Database error traces bubbling up to users can leak schema and path information. | Prevention: Catch database exceptions and return generic error messages (e.g. 500 Internal Server Error) to the client while logging the detailed exception server-side.
## 2026-03-29 - Server-Side Logging of Database Errors | Vulnerability: Insufficient Logging and Monitoring | Learning: Swallowing database errors without logging them hides potential malicious activity like SQL injection attempts. | Prevention: Always log database exception details using `logger.error` on the backend before returning a sanitized generic error to the client.
## 2026-03-31 - Mask API Exception Details | Vulnerability: Information Exposure through Exceptions | Learning: Passing raw exception objects (`str(e)`) directly to FastAPI `HTTPException` detail fields leaks internal stack traces and error states to end-users. | Prevention: Always log exception objects server-side using `logger.error()` and raise HTTPExceptions with generic error details like 'Internal server error', removing the `from e` exception chaining clause.

Copilot AI Apr 1, 2026


This guidance claims removing from e prevents internal details from bubbling up, but raise HTTPException(...) inside an except still implicitly retains exception context (as __context__) unless explicitly suppressed (e.g., from None). If the intent is purely to avoid leaking details to clients, the key control is the sanitized detail=...; if the intent is to suppress chaining in tracebacks, the doc should mention from None (or clarify that chaining affects tracebacks/logs, not the HTTP response body).

Suggested change

    - ## 2026-03-31 - Mask API Exception Details | Vulnerability: Information Exposure through Exceptions | Learning: Passing raw exception objects (`str(e)`) directly to FastAPI `HTTPException` detail fields leaks internal stack traces and error states to end-users. | Prevention: Always log exception objects server-side using `logger.error()` and raise HTTPExceptions with generic error details like 'Internal server error', removing the `from e` exception chaining clause.
    + ## 2026-03-31 - Mask API Exception Details | Vulnerability: Information Exposure through Exceptions | Learning: Passing raw exception objects (`str(e)`) directly to FastAPI `HTTPException` detail fields leaks internal stack traces and error states to end-users. | Prevention: Always log exception objects server-side using `logger.error()` and raise `HTTPException` with a generic, sanitized `detail` (for example, 'Internal server error'). If you also want to suppress exception chaining in tracebacks, raise `HTTPException(...) from None`; otherwise, the original exception is only kept in server-side traceback context, not in the HTTP response body.

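The chaining behavior this comment describes can be checked in plain Python: raising inside an `except` block keeps the original exception as `__context__` whether or not `from e` is written, and only `from None` sets `__suppress_context__` to hide it from rendered tracebacks. A small sketch, with `RuntimeError` standing in for `HTTPException`:

```python
def raise_generic(suppress_context: bool):
    """Re-raise a generic error; optionally suppress traceback chaining."""
    try:
        raise ValueError("internal detail")
    except ValueError:
        if suppress_context:
            # `from None` hides the original exception in rendered tracebacks.
            raise RuntimeError("Internal server error") from None
        # Without any `from ...`, __context__ still points at the ValueError,
        # but that chain lives only in server-side tracebacks; it never
        # reaches the HTTP response body.
        raise RuntimeError("Internal server error")
```

So removing `from e` alone changes `__cause__`, not the implicit context chain; the sanitized `detail` string is what actually protects the client-facing response.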


2 participants