Is your feature request related to a problem? Please describe.
- Right now, the project relies on Ollama for all LLM functionality. This adds extra setup steps (installing and running Ollama locally), makes deployment harder in environments without a local runtime (e.g., cloud, CI, shared servers), and tightly couples the app to that runtime.
Describe the solution you'd like
- Replace the existing Ollama integration with the Google Gemini API (or add Gemini as the primary/default LLM backend).
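One way to make the swap low-risk is to put a thin backend interface between the app and whichever LLM it talks to, so Gemini (or Ollama) becomes a pluggable implementation rather than a hard dependency. The sketch below is illustrative only: the class and function names (`LLMBackend`, `GeminiBackend`, `answer`) are hypothetical, and the Gemini client usage assumes the `google-generativeai` package.

```python
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Minimal interface so the app is not tied to one LLM runtime."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return the model's text completion for the given prompt."""


class GeminiBackend(LLMBackend):
    """Sketch of a Gemini-backed implementation (assumes google-generativeai)."""

    def __init__(self, api_key: str, model: str = "gemini-1.5-flash"):
        # Imported lazily so the package is only required when Gemini is used.
        import google.generativeai as genai

        genai.configure(api_key=api_key)
        self._model = genai.GenerativeModel(model)

    def generate(self, prompt: str) -> str:
        return self._model.generate_content(prompt).text


def answer(backend: LLMBackend, question: str) -> str:
    """App code depends only on the interface, not on a specific provider."""
    return backend.generate(question)
```

With this shape, Gemini can be the default backend while an Ollama implementation of the same interface stays available for local/offline use.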