Description
Is your feature request related to a problem? Please describe.
Currently, the repository supports various AI engines, but it lacks support for Google Gemini.
Gemini (specifically 1.5 Pro and Flash) offers massive context windows and strong reasoning capabilities that are highly beneficial for agentic workflows, particularly when analyzing large codebases or performing complex reasoning tasks.
Describe the solution you'd like
I propose adding gemini-cli as a supported engine provider. The integration should leverage the CLI's headless mode, which is specifically designed for programmatic usage and automation.
Implementation Plan:
- Execution: Use the headless mode with JSON output to get structured responses suitable for parsing (a minimal wrapper sketch follows this list):

  ```sh
  gemini --prompt "Your prompt here" --output-format json
  ```

- Authentication: Support standard Gemini authentication methods. The simplest integration path for CI/CD and headless environments is the API Key:
  - Environment Variable: `GEMINI_API_KEY`
  - Alternative: Google Cloud Application Default Credentials (ADC) for Vertex AI users.
- Response Handling: The CLI returns a structured JSON object that the agent can parse. The wrapper needs to handle the following schema:

  ```json
  {
    "response": "The AI's actual text response...",
    "stats": {
      "models": { ... },
      "tools": { ... }
    }
  }
  ```

- Streaming (optional but recommended): The CLI also supports streaming JSON events via `--output-format stream-json`, which could be used to provide real-time feedback in the UI if supported by the architecture (see the streaming sketch below).
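
To make this concrete, here is a minimal Go sketch of how an engine wrapper might shell out to the CLI in headless mode and parse the schema above. The package name, the `Run` function, and the `geminiResult` struct are illustrative placeholders and not part of any existing codebase; only the `--prompt`/`--output-format json` flags, the `GEMINI_API_KEY` variable, and the `response`/`stats` fields come from the proposal above.

```go
package gemini

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// geminiResult mirrors the JSON schema described above: the text answer plus
// a stats object whose exact contents the wrapper does not depend on.
type geminiResult struct {
	Response string                     `json:"response"`
	Stats    map[string]json.RawMessage `json:"stats"`
}

// Run invokes gemini-cli in headless mode and returns the parsed text response.
// GEMINI_API_KEY is expected in the environment, or ADC is configured for
// Vertex AI users.
func Run(prompt string) (string, error) {
	if os.Getenv("GEMINI_API_KEY") == "" {
		// Not fatal: ADC may still be configured, so only warn here.
		fmt.Fprintln(os.Stderr, "warning: GEMINI_API_KEY is not set; relying on ADC")
	}

	cmd := exec.Command("gemini", "--prompt", prompt, "--output-format", "json")
	out, err := cmd.Output()
	if err != nil {
		return "", fmt.Errorf("gemini-cli failed: %w", err)
	}

	var res geminiResult
	if err := json.Unmarshal(out, &res); err != nil {
		return "", fmt.Errorf("unexpected gemini-cli output: %w", err)
	}
	return res.Response, nil
}
```

Shelling out via `os/exec` keeps the engine dependency-free: CLI failures surface through the command's exit status, and the wrapper only commits to the `response` field of the documented schema.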
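For the optional streaming mode, a similar sketch, assuming `--output-format stream-json` emits newline-delimited JSON events; the exact event schema is not specified here, so events are surfaced as generic objects for the caller to interpret:

```go
package gemini

import (
	"bufio"
	"encoding/json"
	"os/exec"
)

// RunStream invokes gemini-cli with --output-format stream-json and forwards
// each decoded event to onEvent. Assumes one JSON event per line.
func RunStream(prompt string, onEvent func(map[string]any)) error {
	cmd := exec.Command("gemini", "--prompt", prompt, "--output-format", "stream-json")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	if err := cmd.Start(); err != nil {
		return err
	}

	scanner := bufio.NewScanner(stdout)
	for scanner.Scan() {
		var event map[string]any
		if err := json.Unmarshal(scanner.Bytes(), &event); err != nil {
			continue // skip lines that are not JSON events
		}
		onEvent(event)
	}
	if err := scanner.Err(); err != nil {
		return err
	}
	return cmd.Wait()
}
```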
Describe alternatives you've considered
- Direct API SDK: We could use the Google AI SDK directly (e.g., in Go or Node). However, using the `gemini-cli` wrapper aligns with the tool-based nature of this repository and provides a unified interface for both standard Gemini and Vertex AI without extra configuration.
Additional context
Documentation References: