Add native Gemini LLM and embedding provider support#62
Open
anaslimem wants to merge 1 commit intoraphaelmansuy:edgequake-mainfrom
Open
Add native Gemini LLM and embedding provider support#62anaslimem wants to merge 1 commit intoraphaelmansuy:edgequake-mainfrom
anaslimem wants to merge 1 commit intoraphaelmansuy:edgequake-mainfrom
Conversation
Amp-Thread-ID: https://ampcode.com/threads/T-019c7703-fded-76a6-8a0b-b428dd7bcc9c Co-authored-by: Amp <amp@ampcode.com>
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Description
This PR adds native Gemini LLM and embedding provider support to EdgeQuake, enabling seamless integration with Google's Generative Language API alongside existing providers (OpenAI, Ollama, Mock).
Key Features:
GeminiProviderimplementing bothLLMProviderandEmbeddingProvidertraitsgemini-2.5-flash(ultra-fast),gemini-2.5-pro(best reasoning)gemini-embedding-001(3072 dimensions, MTEB leader)GEMINI_API_KEYorGOOGLE_API_KEY).env.exampledocumentation with pricing infoFixes #(link to issue if applicable)
Type of change
Implementation Details
Files Changed:
edgequake/crates/edgequake-api/src/providers/gemini.rs(NEW)edgequake/crates/edgequake-api/src/providers/mod.rsgeminimodule and public export.env.exampleedgequake/crates/edgequake-api/Cargo.tomlblockingfeature toreqwestdev-dependencies for integration testsTest Files (Modified for vision provider compatibility)
e2e_document_lineage.rse2e_vector_storage_dimension.rsvision_provider: Noneandvision_model: Nonefields to workspace request initializersConfig Files (Updated)
edgequake/docker/docker-compose.ymledgequake/models.tomlKnown Limitations & Next Steps
Many E2E test files throughout the codebase initialize
CreateWorkspaceRequestandUpdateWorkspaceRequestwithout the newvision_providerandvision_modelfields. These tests will fail to compile until those structs are updated.Affected test files (estimated 20+):
e2e_provider_*.rs- Provider-related testse2e_workspace_*.rs- Workspace configuration testse2e_chat_*.rs- Chat endpoint testse2e_query_*.rs- Query engine testsTo fix in follow-up PR:
Add
vision_provider: Noneandvision_model: Noneto all workspace request initializers in test files.This PR was intentionally kept focused on the Gemini provider implementation to keep the diff manageable. A follow-up PR can address the comprehensive test updates.
Checklist
.env.examplewith Gemini configurationTesting Instructions
To verify Gemini provider works:
To use Gemini in development:
Configuration Examples:
Using Gemini for LLM only:
Using Gemini for both LLM and embeddings:
Additional Context
Recommend merging with comment: A follow-up PR should add the
vision_providerandvision_modelfields to all test workspace initializers to unblock the full test suite.