Bitropy patches on v1.83.7-stable.patch.1: CVE-2026-42208 SQLi fix + passthrough patches#3
Open
pkieszcz wants to merge 4 commits into v1.83.7-stable-base from
…ream providers

Prevent x-litellm-api-key (LiteLLM's virtual key) from being leaked to upstream providers when _forward_headers=True is used in passthrough endpoints.
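The header scrub described above can be sketched as a small filter. This is a minimal illustration, not the actual patch in litellm/passthrough/utils.py; the constant and function names are made up:

```python
# Headers used for proxy auth that must never reach the upstream provider.
# (Illustrative name; the real patch may structure this differently.)
LITELLM_PROXY_AUTH_HEADERS = {"x-litellm-api-key"}


def strip_proxy_auth_headers(headers: dict) -> dict:
    """Return a copy of `headers` without LiteLLM proxy auth headers.

    The comparison is case-insensitive, since HTTP header names are
    case-insensitive and clients may send X-Litellm-Api-Key.
    """
    return {
        key: value
        for key, value in headers.items()
        if key.lower() not in LITELLM_PROXY_AUTH_HEADERS
    }
```

With _forward_headers=True, running client headers through such a filter before the upstream call keeps the virtual key inside the proxy while still forwarding everything else.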
Client-provided credentials now take precedence over server credentials in the /anthropic/ passthrough endpoint. This enables a mixed mode where:

1. Client sends x-api-key → forwarded as-is (user pays via own API key)
2. Client sends Authorization → forwarded as-is (user pays via OAuth/Max)
3. No client credentials + server ANTHROPIC_API_KEY → server key used
4. No client credentials + no server key → no credentials forwarded

Previously the server always sent x-api-key (even a literal "None" when unconfigured), overwriting any client-provided credentials and breaking Claude Code Max (OAuth) and BYOK scenarios.

Supersedes the simpler one-liner from d742c76 on v1.81.12-stable-patched. Based on the approach from PR BerriAI#20429 (closed) and the reverted PR BerriAI#14821.
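The four-way precedence above can be sketched as a single resolver function. The helper name and signature are hypothetical; the real change lives in the /anthropic/ passthrough endpoint:

```python
from typing import Optional


def resolve_anthropic_auth(
    client_x_api_key: Optional[str],
    client_authorization: Optional[str],
    server_api_key: Optional[str],
) -> dict:
    """Pick the auth headers to forward to Anthropic, client-first."""
    # 1. Client x-api-key wins (BYOK: the user pays via their own key).
    if client_x_api_key:
        return {"x-api-key": client_x_api_key}
    # 2. Client Authorization wins next (OAuth / Claude Code Max).
    if client_authorization:
        return {"Authorization": client_authorization}
    # 3. Fall back to the server-configured key, if any.
    if server_api_key:
        return {"x-api-key": server_api_key}
    # 4. Nothing configured: forward no credentials at all
    #    (never a literal "x-api-key: None").
    return {}
```

The ordering encodes the fix: the server key is only consulted after both client-supplied credential paths come up empty.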
Defect 2 root cause: update_in_memory_guardrail passed Prisma's raw dict directly to update_in_memory_litellm_params, which calls vars() on it, raising TypeError and silently swallowing every DB update. Hot-reload of any guardrail param (presidio_language, score thresholds, URL bases, pii_entities_config, ...) was therefore broken; a pod restart was the only way to pick up DB changes.

Fix: an isinstance(dict) -> LitellmParams(**data) conversion before the call, matching the existing pattern in initialize_guardrail. After this, the base class's blanket setattr in update_in_memory_litellm_params propagates all Pydantic fields without any per-field copy in subclasses.

Linear: https://linear.app/bitropy/issue/BIT-455
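A toy reproduction of the failure mode, using a plain class as a stand-in for the Pydantic LitellmParams model (the field and the update function here are simplified sketches, not the real code):

```python
class LitellmParams:
    """Stand-in for the real Pydantic model; fields live in __dict__."""

    def __init__(self, **data):
        for key, value in data.items():
            setattr(self, key, value)


def update_in_memory_litellm_params(params):
    # Stand-in for the base-class update path: vars() requires an object
    # with a __dict__, which a plain dict instance does not have.
    return vars(params)


raw_db_row = {"presidio_language": "de"}  # shape of what Prisma hands back

# Before the fix: the raw dict is passed straight through and raises.
try:
    update_in_memory_litellm_params(raw_db_row)
    raise AssertionError("expected TypeError")
except TypeError:
    pass  # this was the silently-swallowed error that broke hot-reload

# After the fix: convert dicts to the model first, mirroring
# the existing pattern in initialize_guardrail.
data = raw_db_row
params = LitellmParams(**data) if isinstance(data, dict) else data
assert update_in_memory_litellm_params(params)["presidio_language"] == "de"
```

The point of the reproduction is that vars() is a hard boundary: it accepts anything with a __dict__ but rejects mappings, so the dict-to-model conversion has to happen on the caller's side.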
Defect 3: anonymize_text iterated redacted_text["items"] and applied new_text = new_text[:start] + replacement + new_text[end:] using item coordinates. Presidio's /anonymize returns item start/end as positions in the OUTPUT text (where each mask token sits after redaction), not in the original. Applying them to the original drifts proportionally to len(replacement) - len(original_span), corrupting masked output on any non-trivial input.

Fix:
- output_parse_pii=False: return redacted_text["text"] verbatim — no re-stitching needed; Presidio already produced the correct output.
- output_parse_pii=True: iterate analyze_results (pre-anonymize, with original-text coordinates) for both stitching and pii_tokens construction. This eliminates the secondary bug where pii_tokens stored text[start:end] using already-mutated coordinates.

Fail-closed behavior on /anonymize backend error is preserved.

Linear: https://linear.app/bitropy/issue/BIT-455
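The drift can be demonstrated without Presidio at all. The text, spans, and mask tokens below are made up for illustration; the two loops contrast original-text coordinates (correct) with output-text coordinates (the bug):

```python
original = "Call Bob at 555-1234 now"

# Spans in the ORIGINAL text, as /analyze would report them:
# "Bob" at 5..8, "555-1234" at 12..20.
analyze_results = [(5, 8, "<PERSON>"), (12, 20, "<PHONE_NUMBER>")]

# Correct stitching: use original-text coordinates, applied right-to-left
# so earlier offsets stay valid as the string grows/shrinks.
masked = original
for start, end, token in sorted(analyze_results, reverse=True):
    masked = masked[:start] + token + masked[end:]
assert masked == "Call <PERSON> at <PHONE_NUMBER> now"

# Buggy stitching: these are the token positions in the OUTPUT text
# ("<PERSON>" at 5..13, "<PHONE_NUMBER>" at 17..31), as /anonymize
# items report them -- but applied to the ORIGINAL text.
output_items = [(5, 13, "<PERSON>"), (17, 31, "<PHONE_NUMBER>")]
buggy = original
for start, end, token in output_items:
    buggy = buggy[:start] + token + buggy[end:]
assert buggy != masked  # output is corrupted
```

Because "<PERSON>" is longer than "Bob", every later output-text offset is shifted relative to the original, so each subsequent splice lands in the wrong place.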
What this is
Bitropy patches rebased onto upstream v1.83.7-stable.patch.1 to pick up the fix for CVE-2026-42208 (critical pre-auth SQL injection in LiteLLM, BleepingComputer writeup). Replaces our previous bitropy/v1.82.3-stable-patched branch (PR #2).

Patches
Same 2 patches as PR #2, cherry-picked onto the new base:
1. Strip x-litellm-api-key from forwarded headers (security)

File: litellm/passthrough/utils.py

Without this, the x-litellm-api-key proxy auth header is forwarded to upstream providers, leaking our virtual keys.

Upstream PR: BerriAI#20432 (still open / unreviewed).
2. Credential priority for Anthropic passthrough (critical)
File: litellm/proxy/pass_through_endpoints/llm_passthrough_endpoints.py

Without this, the proxy always sends its own x-api-key to Anthropic, which:
- overwrites any client-provided credentials
- sends a literal x-api-key: None upstream when no server ANTHROPIC_API_KEY is set

Behavior with this patch:
- client x-api-key is forwarded as-is (BYOK)
- client Authorization is forwarded as-is (OAuth/Max)
- otherwise the server ANTHROPIC_API_KEY is injected (or ANTHROPIC_AUTH_TOKEN env, new in 1.83.x)
- with no credentials on either side, nothing is forwarded

The merge picks up upstream's new AnthropicModelInfo.get_auth_header() (added in f415b72, supporting ANTHROPIC_AUTH_TOKEN) for the server-fallback branch only — client credentials still take priority.

Why upgrade now
Testing
Tested locally on the built image with a header-echo upstream (mendhak/http-https-echo):
- x-litellm-api-key is NOT present in the upstream request
- client OAuth credential (Authorization: Bearer sk-ant-oat-...) reaches upstream as-is
- client API key (x-api-key: sk-ant-...) reaches upstream as-is
- with only a server ANTHROPIC_API_KEY → server key injected as x-api-key
- request without a valid x-litellm-api-key → proxy auth rejects with 401 before forwarding

Deploy
Multi-arch manifest list (linux/amd64 + linux/arm64) — Kubernetes auto-selects the right arch.

Claude Code config (unchanged)
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://ai.demo.internal.bitropy.io/anthropic",
    "ANTHROPIC_CUSTOM_HEADERS": "x-litellm-api-key: sk-<your-virtual-key>"
  }
}