Feat: Added Anthropic (Claude) model provider support#204

Open
shkangr wants to merge 5 commits into google:main from shkangr:feat/add-anthropic-model-provider

Conversation


@shkangr shkangr commented Mar 17, 2026


Link to Issue or Description of Change


Problem:
adk-js currently supports only Google Gemini models as its LLM provider. adk-java already supports Anthropic (Claude) models via Claude.java, but adk-js has no equivalent implementation. This limits users who want to build Claude-powered agents with the JavaScript/TypeScript SDK.

Solution:
A clear and concise description of what you want to happen and why you choose
this solution.

Add an AnthropicLlm class that extends BaseLlm, following the same provider pattern used by Gemini and ApigeeLlm. The implementation:

  • Converts between @google/genai types (Content, FunctionDeclaration, etc.) and Anthropic SDK types (MessageParam, Tool, etc.) internally, so no changes to existing agent, tool,
    or event code are required.
  • Supports both non-streaming and streaming generation via @anthropic-ai/sdk.
  • Handles Anthropic-specific constraints: system instruction as a separate system parameter, consecutive same-role message merging, schema type string normalization (uppercase →
    lowercase), and function call ID mapping.
  • Registers claude-* pattern in LLMRegistry so users can simply pass a model string like 'claude-sonnet-4-5-20250514' to LlmAgent.

This approach was chosen because it mirrors the adk-java Anthropic implementation while leveraging the existing adk-js BaseLlm abstraction, keeping the change minimal (1 new file + 3 files modified by ~2 lines each) and fully backward-compatible.
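The role mapping and consecutive same-role merging described above can be sketched roughly as follows. This is an illustrative sketch, not the actual adk-js implementation: the type shapes and function names (`toAnthropicRole`, `mergeConsecutive`) are assumptions, and real content parts are richer than plain strings.

```typescript
// Anthropic's Messages API rejects two adjacent messages with the same role,
// so adjacent same-role messages must be merged before sending.
type AnthropicRole = "user" | "assistant";

interface MessageParamLike {
  role: AnthropicRole;
  content: string;
}

// Map @google/genai roles ("user" / "model") onto Anthropic roles.
function toAnthropicRole(role: string): AnthropicRole {
  return role === "model" ? "assistant" : "user";
}

function mergeConsecutive(messages: MessageParamLike[]): MessageParamLike[] {
  const merged: MessageParamLike[] = [];
  for (const msg of messages) {
    const last = merged[merged.length - 1];
    if (last && last.role === msg.role) {
      last.content += "\n" + msg.content; // fold into the previous turn
    } else {
      merged.push({ ...msg });
    }
  }
  // Anthropic requires the conversation to start with a user message.
  if (merged.length > 0 && merged[0].role === "assistant") {
    merged.unshift({ role: "user", content: "Continue." });
  }
  return merged;
}
```

For example, two consecutive user messages collapse into one, and a history that begins with an assistant turn gets a placeholder user message prepended.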

Testing Plan


23 new unit tests added in core/test/models/anthropic_llm_test.ts covering:

  • Constructor & basics (5 tests): Instance creation with explicit API key, default model name, environment variable API key resolution, error on missing API key, error on
    connect() (live not supported)
  • Model pattern matching (2 tests): claude-* regex matches valid Claude model names, rejects non-Claude models (gemini, gpt, llama)
  • Registry integration (3 tests): LLMRegistry.newLlm('claude-*') resolves to AnthropicLlm, works with LlmAgent via string model, works with LlmAgent via instance model
  • Content conversion (8 tests): Text content role mapping (user/model → user/assistant), functionCall → tool_use block, functionResponse → tool_result block, consecutive same-role
    message merging, user message prepend when first message is assistant, system instruction extraction from string and Content object, thought part filtering
  • Tool conversion (2 tests): FunctionDeclaration → Anthropic Tool with schema type lowercasing, nested schema normalization (array items)
  • Response conversion (3 tests): Text response → LlmResponse, tool_use response → functionCall part, usage metadata mapping (input/output tokens)
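The schema type normalization exercised by the tool conversion tests above can be sketched as follows. The names and type shapes here are illustrative assumptions, not the actual adk-js helpers: @google/genai declares JSON Schema types in uppercase ("STRING", "ARRAY", ...) while Anthropic expects lowercase.

```typescript
// Minimal JSON-Schema-like shape covering the cases the tests mention:
// top-level type, nested array items, and object properties.
interface SchemaLike {
  type?: string;
  items?: SchemaLike;
  properties?: Record<string, SchemaLike>;
}

// Recursively lowercase type names so Anthropic accepts the tool schema.
function normalizeSchema(schema: SchemaLike): SchemaLike {
  const out: SchemaLike = { ...schema };
  if (out.type) out.type = out.type.toLowerCase();
  if (out.items) out.items = normalizeSchema(out.items); // nested array items
  if (out.properties) {
    out.properties = Object.fromEntries(
      Object.entries(out.properties).map(([k, v]) => [k, normalizeSchema(v)]),
    );
  }
  return out;
}
```

For instance, `{ type: "ARRAY", items: { type: "STRING" } }` normalizes to `{ type: "array", items: { type: "string" } }`.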

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

Summary of npm test results:
✓ core/test/models/base_llm_test.ts (6 tests)
✓ core/test/models/registry_test.ts (3 tests)
✓ core/test/models/google_llm_test.ts (12 tests)
✓ core/test/models/anthropic_llm_test.ts (23 tests) ← NEW
✓ core/test/models/apigee_llm_test.ts (43 tests)

Test Files 5 passed (5)
Tests 87 passed (87)

Manual End-to-End (E2E) Tests:


  1. Set up environment:
     Add your Anthropic API key to .env at the project root:

       echo 'ANTHROPIC_API_KEY=sk-ant-...' > .env

     Install dependencies and build:

       npm install
       npm run build

  2. Run the Claude sample agent with the ADK Web UI:

       node dev/dist/cli_entrypoint.mjs web dev/samples/claude_agent.ts

  3. Open http://localhost:8000 in a browser, select claude_agent, and send a message such as:

       Tell me the weather in San Francisco.

  4. Verify that:
    - The agent calls the get_weather tool (tool_use → tool_result round-trip)
    - The agent returns a final text response based on the tool result

See the screenshot below for expected behavior:

tests/e2e/models/screenshot/screen-shot-adk-js-anthropic.png

The screenshot shows the Claude agent successfully:

  • Receiving a user query
  • Invoking the get_weather function tool
  • Receiving the tool result
  • Responding with the weather information to the user

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have performed a self-review of my own code.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have added tests that prove my fix is effective or that my feature works.
  • New and existing unit tests pass locally with my changes.
  • I have manually tested my changes end-to-end.
  • Any dependent changes have been merged and published in downstream modules.


@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the capabilities of the adk-js SDK by introducing support for Anthropic Claude large language models. Previously limited to Google Gemini, the SDK can now power agents with Claude models, providing more flexibility for users. The implementation includes a new AnthropicLlm class that seamlessly translates between internal SDK types and Anthropic's API, while also addressing specific requirements of the Claude API for message formatting and tool definitions. This change is designed to be backward-compatible and integrates smoothly into the existing BaseLlm abstraction.

Highlights

  • Anthropic (Claude) Model Support: Introduced the AnthropicLlm class, enabling the adk-js SDK to integrate with Anthropic Claude models, similar to existing support in adk-java.
  • Type Conversion and API Handling: Implemented internal conversion between @google/genai types and Anthropic SDK types, ensuring compatibility without requiring changes to existing agent, tool, or event code. This includes handling system instructions, message merging, and schema type normalization.
  • Streaming and Non-Streaming Generation: Added support for both non-streaming and streaming content generation using the @anthropic-ai/sdk.
  • LLM Registry Integration: Registered the claude-* model pattern within the LLMRegistry, allowing users to easily specify Claude models by string name.
  • Comprehensive Testing: Included 23 new unit tests covering various aspects of the AnthropicLlm class, along with end-to-end tests for basic text generation, function calling, and streaming functionality.
  • Sample Agent Provided: Added a new sample agent (claude_agent.ts) to demonstrate the usage and capabilities of Claude models within the SDK.
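The LLMRegistry integration highlighted above can be sketched roughly like this. String-returning factories stand in for the real BaseLlm constructors, and the names here are illustrative, not the actual adk-js API.

```typescript
// A regex-keyed registry: each provider registers a pattern, and model
// strings are resolved by scanning the patterns in order.
type LlmFactory = (model: string) => string;

const registry: Array<[RegExp, LlmFactory]> = [
  [/^gemini-/, (m) => `Gemini(${m})`],
  [/^claude-/, (m) => `AnthropicLlm(${m})`], // the new claude-* registration
];

function newLlm(model: string): string {
  for (const [pattern, factory] of registry) {
    if (pattern.test(model)) return factory(model);
  }
  throw new Error(`No provider registered for model: ${model}`);
}
```

With this shape, passing a model string such as 'claude-sonnet-4-5-20250514' resolves to the Anthropic provider while Gemini strings are untouched, which is why no changes to existing agent code are needed.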


Changelog
  • core/package.json
    • Added the @anthropic-ai/sdk dependency.
  • core/src/common.ts
    • Exported the new AnthropicLlm class and AnthropicLlmParams type.
  • core/src/models/anthropic_llm.ts
    • Implemented the AnthropicLlm class, providing integration for Anthropic Claude models.
  • core/src/models/registry.ts
    • Registered the AnthropicLlm class with the LLMRegistry for automatic model resolution.
  • core/test/models/anthropic_llm_test.ts
    • Added 23 new unit tests for the AnthropicLlm class, covering its constructor, model pattern matching, registry integration, and content/tool/response conversion logic.
  • dev/samples/claude_agent.ts
    • Added a new sample LlmAgent configured to use a Claude model and a get_weather FunctionTool.
  • package-lock.json
    • Updated the package lock file to include @anthropic-ai/sdk and its transitive dependencies.
  • tests/e2e/models/anthropic_llm_e2e_test.ts
    • Added end-to-end tests for AnthropicLlm, verifying basic text generation, function calling, and streaming functionality.


@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces support for Anthropic's Claude models as an LLM provider. The implementation is well-structured, following the existing BaseLlm abstraction and includes comprehensive unit and E2E tests. My review focuses on improving robustness and clarity. I've pointed out a potential issue with non-unique ID generation for function calls and some inconsistencies and potentially confusing model name identifiers used as defaults and in samples/tests. Overall, this is a great addition to the SDK's capabilities.

@ScottMansfield
Member

Thank you for taking the time to open this PR. At the moment, we can't merge this.

Specifically, ADK Typescript also has to coordinate with other ADK languages to ensure we have a consistent approach to models other than Google's Gemini. Since we're heading into Cloud Next (late April) and will need to ensure this coordination happens with other teams who are also busy preparing, this will have to be on hold for a while.

Also when we're looking at a 1.0 release, this kind of model support is a significant feature that has to be considered carefully. For the 1.0 release, we are not going to have this kind of alternate implementation. In the future, it will likely be included.

Once we have a strategy that is shareable, I will update here. You can leave this PR open for the time being, but it will be some time before I can look into it.

@cornellgit
Collaborator

This is a desirable feature, would be helpful to check into contrib/ if we haven't set it up yet. Then the conformance and integration questions can be asked later, @ScottMansfield

