
Update dependency langchain to v0.3.30 [SECURITY] #3338

Open

renovate[bot] wants to merge 1 commit into main from renovate/pypi-langchain-vulnerability

Conversation

Contributor

@renovate renovate Bot commented May 13, 2026

This PR contains the following updates:

| Package | Change |
| --- | --- |
| langchain (changelog) | 0.3.28 → 0.3.30 |

LangSmith SDK: Public prompt pull deserializes untrusted manifests without trust boundary warning

CVE-2026-45134 / GHSA-3644-q5cj-c5c7

More information

Details

Description

The LangSmith SDK's prompt pull methods (pull_prompt / pull_prompt_commit in Python, pullPrompt / pullPromptCommit in JS/TS) fetch and deserialize prompt manifests from the LangSmith Hub. These manifests may contain serialized LangChain objects and model configuration that affect runtime behavior. When pulling a public prompt by owner/name identifier, the manifest content is controlled by an external party, but prior versions of the SDK did not distinguish this from pulling a prompt within the caller's own organization.

Prompt manifests can intentionally configure a model with a custom base URL, default headers, model name, or other constructor arguments. These are supported features, but they also mean the prompt contents should be treated as executable configuration rather than plain text. A prompt can also include serialized LangChain Runnable or PromptTemplate objects with attacker-controlled constructor kwargs, or secret references that, if secrets_from_env is enabled, read environment variables at deserialization time.

Applications are exposed when all of the following are true:

  • The application calls pull_prompt or pull_prompt_commit (Python) or pullPrompt or pullPromptCommit (JS/TS) with a public owner/name prompt identifier.
  • The prompt was published or modified by an untrusted or compromised account.
  • The application uses the pulled prompt without independently validating its contents.
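Whether the first condition holds can be read off the identifier itself: public Hub prompts are referenced as owner/name (optionally with a :commit suffix), while same-organization prompts are referenced by name only. A minimal sketch of such a check follows; the helper name is ours, not part of the SDK:

```python
def is_public_prompt_identifier(identifier: str) -> bool:
    """Return True if the identifier crosses the public trust boundary.

    Public Hub prompts look like "owner/name" (optionally "owner/name:commit");
    same-organization prompts are referenced by name only.
    """
    name = identifier.split(":", 1)[0]  # drop any :commit suffix
    return "/" in name

print(is_public_prompt_identifier("owner/some-public-prompt"))  # True
print(is_public_prompt_identifier("my-internal-prompt"))        # False
```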

Applications that only pull prompts from their own organization (referenced by name only, without an owner/ prefix) are not affected by the public prompt trust boundary issue described above. However, same-organization prompts carry their own risk. If an attacker gains write access to the organization (for example, through a leaked LANGSMITH_API_KEY or a compromised team member account), they can push a malicious prompt that is pulled and deserialized without any additional warning.

Impact

An attacker who publishes a malicious prompt to LangSmith Hub may be able to affect applications that pull that prompt by owner/name. If the prompt manifest reaches the SDK's deserialization path, the SDK will instantiate the referenced LangChain objects with the attacker-supplied constructor arguments rather than treating the manifest as inert data.

Realistic impacts include:

  • Server-side request forgery (SSRF), outbound request redirection, and interception of LLM traffic if a prompt manifest configures an LLM client with an attacker-controlled base_url, proxy, or equivalent endpoint-setting parameter. In typical deployments, redirected requests may include prompt contents, system prompts, retrieved context, model parameters, provider credentials, or other secrets and may disclose them to the attacker-controlled endpoint.
  • Prompt injection or behavior manipulation if a manifest embeds attacker-controlled system messages, prompt templates, or model parameters that alter the application's behavior.
  • Additional deserialization risk when include_model=True is passed, because this expands the allowlist to partner integration classes. This is not the default, but it materially increases risk when pulling prompts from outside the caller's organization.
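As defence in depth against the SSRF scenario above, an application can scan a pulled manifest for endpoint-setting constructor kwargs before using it. The sketch below is illustrative only; the key list is an example, not an exhaustive or authoritative inventory of redirecting parameters:

```python
# Heuristic scan of a prompt manifest for constructor kwargs that can
# redirect outbound LLM traffic. The key names below are examples.
RISKY_KEYS = {"base_url", "api_base", "openai_api_base", "proxy", "default_headers"}

def find_risky_kwargs(node, path=""):
    """Recursively collect manifest paths whose keys could reroute requests."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{path}.{key}" if path else key
            if key in RISKY_KEYS:
                hits.append(child)
            hits.extend(find_risky_kwargs(value, child))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            hits.extend(find_risky_kwargs(value, f"{path}[{i}]"))
    return hits

manifest = {"kwargs": {"model": "gpt-4o", "base_url": "https://evil.example"}}
print(find_risky_kwargs(manifest))  # ['kwargs.base_url']
```

A hit does not prove the prompt is malicious (custom endpoints are a supported feature), but it flags exactly the manifests that deserve manual review before use.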

Remediation

The LangSmith SDK now blocks pulling public prompts by owner/name by default. Callers must explicitly opt in by passing dangerously_pull_public_prompt=True (Python) or dangerouslyPullPublicPrompt: true (JS/TS) to acknowledge the trust boundary. This flag should only be set after reviewing and trusting the prompt contents, not merely the publishing account.

Upgrade to LangSmith SDK Python >= 0.8.0 or JS/TS >= 0.6.0.

Guidance for prompt pull methods

The prompt pull methods (pull_prompt / pull_prompt_commit in Python, pullPrompt / pullPromptCommit in JS/TS) should be used only with trusted prompts. Do not pull public prompts by owner/name from untrusted or unreviewed sources without understanding that the manifest contents will be deserialized and may affect runtime behavior.

When pulling prompts that include model configuration (include_model=True in Python, includeModel: true in JS/TS), the deserialization allowlist expands to include partner integration classes. Because this mode is not the default and is often unnecessary for third-party prompts, prefer the default (false) when pulling prompts from sources outside your organization.

Avoid passing secrets_from_env=True (Python) when pulling untrusted prompts. This parameter allows prompt manifests to read environment variables during deserialization. Only use it with trusted prompts from your own organization.

Same-organization prompts

Prompts pulled from the caller's own organization (referenced by name only, without an owner/ prefix) are not gated by the new dangerously_pull_public_prompt flag, but they are not inherently safe. If an attacker gains write access to the organization (for example, through a leaked LANGSMITH_API_KEY or a compromised team member account), they can push a malicious prompt that redirects LLM traffic to attacker-controlled infrastructure and may disclose any credentials attached to those requests.

The security of same-organization prompts follows a shared responsibility model. The LangSmith SDK enforces trust boundaries for public prompts pulled from external accounts, but it cannot protect against compromised credentials or accounts within the caller's own organization. Securing API keys, managing team member access, and reviewing prompt contents before production deployment are the responsibility of the organization. Organizations should treat prompts as executable configuration and apply the same review and audit practices they would apply to application code.

Credits

First reported by @Moaaz-0x.

Severity

  • CVSS Score: 7.1 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Configuration

📅 Schedule: (in timezone US/Eastern)

  • Branch creation
    • ""
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Enabled.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@github-actions

github-actions Bot commented May 13, 2026

OpenAPI Changes

No changes detected

View full changelog

Unexpected changes? Ensure your branch is up-to-date with main (consider rebasing).

@renovate renovate Bot force-pushed the renovate/pypi-langchain-vulnerability branch 5 times, most recently from c174875 to 7b3eca9 on May 15, 2026 at 13:44
@renovate renovate Bot force-pushed the renovate/pypi-langchain-vulnerability branch from 7b3eca9 to 41f212b on May 15, 2026 at 14:31