Update dependency langchain to v0.3.30 [SECURITY]#3338
This PR contains the following updates:
langchain 0.3.28 → 0.3.30

LangSmith SDK: Public prompt pull deserializes untrusted manifests without trust boundary warning
CVE-2026-45134 / GHSA-3644-q5cj-c5c7
Description
The LangSmith SDK's prompt pull methods (`pull_prompt`/`pull_prompt_commit` in Python, `pullPrompt`/`pullPromptCommit` in JS/TS) fetch and deserialize prompt manifests from the LangSmith Hub. These manifests may contain serialized LangChain objects and model configuration that affect runtime behavior. When pulling a public prompt by `owner/name` identifier, the manifest content is controlled by an external party, but prior versions of the SDK did not distinguish this from pulling a prompt within the caller's own organization.

Prompt manifests can intentionally configure a model with a custom base URL, default headers, model name, or other constructor arguments. These are supported features, but they also mean the prompt contents should be treated as executable configuration rather than plain text. A prompt can also include serialized LangChain `Runnable` or `PromptTemplate` objects with attacker-controlled constructor kwargs, or secret references that, if `secrets_from_env` is enabled, read environment variables at deserialization time.

Applications are exposed when all of the following are true:

- `pull_prompt` or `pull_prompt_commit` (Python) or `pullPrompt` or `pullPromptCommit` (JS/TS) is called with a public `owner/name` prompt identifier.

Applications that only pull prompts from their own organization (referenced by name only, without an `owner/` prefix) are not affected by the public prompt trust boundary issue described above. However, same-organization prompts carry their own risk. If an attacker gains write access to the organization (for example, through a leaked `LANGSMITH_API_KEY` or a compromised team member account), they can push a malicious prompt that is pulled and deserialized without any additional warning.

Impact
An attacker who publishes a malicious prompt to LangSmith Hub may be able to affect applications that pull that prompt by `owner/name`. If the prompt manifest reaches the SDK's deserialization path, the SDK will instantiate the referenced LangChain objects with the attacker-supplied constructor arguments rather than treating the manifest as inert data.

Realistic impacts include:

- Redirection of model traffic through an attacker-controlled `base_url`, `proxy`, or equivalent endpoint-setting parameter. In typical deployments, redirected requests may include prompt contents, system prompts, retrieved context, model parameters, provider credentials, or other secrets, and may disclose them to the attacker-controlled endpoint.
- An expanded deserialization surface when `include_model=True` is passed, because this expands the allowlist to partner integration classes. This is not the default, but it materially increases risk when pulling prompts from outside the caller's organization.

Remediation
The LangSmith SDK now blocks pulling public prompts by `owner/name` by default. Callers must explicitly opt in by passing `dangerously_pull_public_prompt=True` (Python) or `dangerouslyPullPublicPrompt: true` (JS/TS) to acknowledge the trust boundary. This flag should be set only after reviewing and trusting the prompt contents, not merely the publishing account.

Upgrade to LangSmith SDK Python >= 0.8.0 or JS/TS >= 0.6.0.
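The trust-boundary check described above can be sketched in a few lines. This is a toy re-implementation based only on the behavior this advisory describes (the flag spelling is from the advisory; the helper name and internals are illustrative, not the SDK's actual code):

```python
def guard_prompt_pull(prompt_ref: str, dangerously_pull_public_prompt: bool = False) -> str:
    """Toy sketch of the public-prompt gate described in this advisory.

    A reference containing an "owner/" prefix points at another account's
    (potentially public) prompt; a bare name stays within the caller's org.
    """
    name = prompt_ref.split(":", 1)[0]  # drop an optional ":commit" suffix
    if "/" in name and not dangerously_pull_public_prompt:
        raise ValueError(
            f"refusing to pull public prompt {prompt_ref!r}: pass "
            "dangerously_pull_public_prompt=True after reviewing its contents"
        )
    return prompt_ref
```

With a gate like this, pulling a same-org prompt such as `"my-prompt"` proceeds normally, while a public reference such as `"owner/name"` is rejected unless the caller explicitly opts in.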
Guidance for prompt pull methods
The prompt pull methods (`pull_prompt`/`pull_prompt_commit` in Python, `pullPrompt`/`pullPromptCommit` in JS/TS) should be used only with trusted prompts. Do not pull public prompts by `owner/name` from untrusted or unreviewed sources without understanding that the manifest contents will be deserialized and may affect runtime behavior.

When pulling prompts that include model configuration (`include_model=True` in Python, `includeModel: true` in JS/TS), the deserialization allowlist expands to include partner integration classes. Because this mode is not the default and is often unnecessary for third-party prompts, prefer the default (false) when pulling prompts from sources outside your organization.

Avoid passing `secrets_from_env=True` (Python) when pulling untrusted prompts. This parameter allows prompt manifests to read environment variables during deserialization. Only use it with trusted prompts from your own organization.

Same-organization prompts
Prompts pulled from the caller's own organization (referenced by name only, without an `owner/` prefix) are not gated by the new `dangerously_pull_public_prompt` flag, but they are not inherently safe. If an attacker gains write access to the organization (for example, through a leaked `LANGSMITH_API_KEY` or a compromised team member account), they can push a malicious prompt that redirects LLM traffic to attacker-controlled infrastructure and may disclose any credentials attached to those requests.

The security of same-organization prompts follows a shared responsibility model. The LangSmith SDK enforces trust boundaries for public prompts pulled from external accounts, but it cannot protect against compromised credentials or accounts within the caller's own organization. Securing API keys, managing team member access, and reviewing prompt contents before production deployment are the responsibility of the organization. Organizations should treat prompts as executable configuration and apply the same review and audit practices they would apply to application code.
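The `secrets_from_env` guidance above can be illustrated with a toy secret resolver. This is a sketch of the *risk pattern*, not the SDK's actual deserializer; the function name and shape are hypothetical:

```python
import os


def resolve_manifest_secret(secret_id: str, secrets_from_env: bool = False):
    """Hypothetical sketch of why secrets_from_env is dangerous.

    The secret *name* comes from the (possibly attacker-authored) manifest,
    so enabling the flag turns deserialization into an arbitrary
    environment-variable read.
    """
    if not secrets_from_env:
        return None  # safe default: leave secret references unresolved
    return os.environ.get(secret_id)
```

A manifest that references, say, `OPENAI_API_KEY` would read that variable only when the flag is enabled, which is why the flag should stay off when pulling prompts you did not author.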
Credits
First reported by @Moaaz-0x.
Severity
CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N

References
This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).
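The CVSS 3.1 vector above is machine-parseable: it is a `/`-separated list of `metric:value` pairs following the version tag. A minimal parsing sketch (illustrative only, not part of any SDK):

```python
def parse_cvss_vector(vector: str):
    """Split a CVSS vector string into its version tag and metric/value pairs."""
    version, _, metrics = vector.partition("/")
    pairs = dict(m.split(":", 1) for m in metrics.split("/"))
    return version, pairs
```

For the vector in this advisory, this yields network attack vector (`AV:N`), user interaction required (`UI:R`), and high confidentiality impact (`C:H`), consistent with the credential-disclosure impact described above.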
Configuration
📅 Schedule: (in timezone US/Eastern)
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.