feat(google-gemini): add Google Gemini MCP with full LLM binding support#222
feat(google-gemini): add Google Gemini MCP with full LLM binding support#222viktormarinho wants to merge 3 commits intomainfrom
Conversation
Implements a new MCP for direct access to Google's Gemini API, satisfying the same LLM binding interface as the OpenRouter and Deco AI Gateway MCPs. Tools implemented (7 total): - COLLECTION_LLM_LIST: List models with filtering/pagination - COLLECTION_LLM_GET: Get single model by ID - LLM_METADATA: Model capabilities and supported URL patterns - LLM_DO_STREAM: Streaming generation via @ai-sdk/google - LLM_DO_GENERATE: Non-streaming generation via @ai-sdk/google - COMPARE_MODELS: Side-by-side model comparison - RECOMMEND_MODEL: Task-based model recommendations Key design decisions: - Uses @ai-sdk/google provider for doStream/doGenerate - Calls Gemini REST API directly for model listing - Hardcoded pricing table (Gemini API doesn't expose pricing) - Simple API key auth (no OAuth needed) - 5-minute model cache for performance Co-authored-by: Cursor <cursoragent@cursor.com>
🚀 Preview Deployments Ready!Your changes have been deployed to preview environments: 📦
|
MESH_REQUEST_CONTEXT.authorization includes the "Bearer " prefix, but both the Gemini REST API (query param) and @ai-sdk/google (x-goog-api-key header) expect a raw API key. Co-authored-by: Cursor <cursoragent@cursor.com>
There was a problem hiding this comment.
2 issues found across 20 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="google-gemini/README.md">
<violation number="1" location="google-gemini/README.md:10">
P2: The README instructs users to add the MCP to `deploy.json`, but deployment is automatic in this repo. This new step conflicts with the documented workflow and will mislead users.</violation>
</file>
<file name="google-gemini/server/tools/llm-binding.ts">
<violation number="1" location="google-gemini/server/tools/llm-binding.ts:421">
P1: The `flush()` implementation doesn't match its documented intent. `usage.promise.catch(() => {})` only adds an empty catch handler—it doesn't reject the promise. If the stream ends without a "finish" chunk, the promise hangs forever. Should call `usage.reject(new Error("Stream ended without finish chunk"))` instead.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
| flush() { | ||
| // If stream finishes without a "finish" chunk, reject the promise | ||
| // so callers don't hang forever. | ||
| usage.promise.catch(() => {}); |
There was a problem hiding this comment.
P1: The flush() implementation doesn't match its documented intent. usage.promise.catch(() => {}) only adds an empty catch handler—it doesn't reject the promise. If the stream ends without a "finish" chunk, the promise hangs forever. Should call usage.reject(new Error("Stream ended without finish chunk")) instead.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At google-gemini/server/tools/llm-binding.ts, line 421:
<comment>The `flush()` implementation doesn't match its documented intent. `usage.promise.catch(() => {})` only adds an empty catch handler—it doesn't reject the promise. If the stream ends without a "finish" chunk, the promise hangs forever. Should call `usage.reject(new Error("Stream ended without finish chunk"))` instead.</comment>
<file context>
@@ -0,0 +1,664 @@
+ flush() {
+ // If stream finishes without a "finish" chunk, reject the promise
+ // so callers don't hang forever.
+ usage.promise.catch(() => {});
+ },
+ });
</file context>
| 1. Configure your MCP in `server/types/env.ts` | ||
| 2. Implement tools in `server/tools/` | ||
| 3. Rename `app.json.example` to `app.json` and customize | ||
| 4. Add to `deploy.json` for deployment |
There was a problem hiding this comment.
P2: The README instructs users to add the MCP to deploy.json, but deployment is automatic in this repo. This new step conflicts with the documented workflow and will mislead users.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At google-gemini/README.md, line 10:
<comment>The README instructs users to add the MCP to `deploy.json`, but deployment is automatic in this repo. This new step conflicts with the documented workflow and will mislead users.</comment>
<file context>
@@ -0,0 +1,13 @@
+1. Configure your MCP in `server/types/env.ts`
+2. Implement tools in `server/tools/`
+3. Rename `app.json.example` to `app.json` and customize
+4. Add to `deploy.json` for deployment
+5. Test with `bun run dev`
+
</file context>
…API keys This MCP acts as a proxy -- users must pass their own Google AI key via the authorization header. No env var fallback needed. Co-authored-by: Cursor <cursoragent@cursor.com>
Summary
google-geminiMCP that provides direct access to Google's Gemini models via the Google Generative AI API@decocms/bindings/llm), enabling seamless interchangeabilityTools
COLLECTION_LLM_LISTCOLLECTION_LLM_GETLLM_METADATALLM_DO_STREAM@ai-sdk/googleLLM_DO_GENERATE@ai-sdk/googleCOMPARE_MODELSRECOMMEND_MODELImplementation details
generativelanguage.googleapis.com/v1beta/models) directly, with pagination and 5-minute cache@ai-sdk/google(createGoogleGenerativeAI) which provides the samedoStream()/doGenerate()interface as the OpenRouter providermodels/prefix (e.g.,gemini-2.5-proinstead ofmodels/gemini-2.5-pro)Test plan
bun run checkpasses (TypeScript compilation)bun run fmtandbun run lintpassMade with Cursor
Summary by cubic
Adds a new Google Gemini MCP with full LLM binding parity, enabling direct access to Gemini models. Drop-in compatible with existing bindings, with streaming and non-streaming generation.
New Features
Migration
Written for commit c36e575. Summary will update on new commits.