FAQ
Everything you need to know about CodeScope.
**What is CodeScope?**
CodeScope is a local-first AI coding assistant that uses Retrieval-Augmented Generation (RAG) to let you chat with your own codebase privately.
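To make the RAG flow concrete, here is a minimal sketch of the retrieve-then-generate loop such a tool runs against Ollama's local REST API. Everything in it is illustrative: the embedding model (`nomic-embed-text`), the chat model (`llama3`), the chunking, and the in-memory index are assumptions, not CodeScope's actual internals.

```python
# Minimal RAG sketch against a local Ollama server (http://localhost:11434).
# Illustrative only: model names, chunking, and the in-memory index are
# assumptions, not CodeScope's real implementation.
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> list[float]:
    """Embed text with a local Ollama embedding model."""
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# 1. Index: embed each code chunk once and keep it in memory.
chunks = ["def add(a, b):\n    return a + b",
          "def sub(a, b):\n    return a - b"]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Retrieve: embed the question and take the most similar chunks.
question = "How does addition work in this codebase?"
q_vec = embed(question)
top = sorted(index, key=lambda item: cosine(q_vec, item[1]), reverse=True)[:2]

# 3. Generate: ask a local chat model, grounded in the retrieved code.
context = "\n\n".join(chunk for chunk, _ in top)
r = requests.post(f"{OLLAMA}/api/generate", json={
    "model": "llama3",
    "prompt": f"Answer using only this code:\n{context}\n\nQuestion: {question}",
    "stream": False,
})
r.raise_for_status()
print(r.json()["response"])
```

A real index would persist the embeddings in a vector store rather than recomputing them on every run, but the retrieve-then-generate shape stays the same.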
**Does my code ever leave my machine?**
No. All processing, including code indexing and AI inference, happens entirely on your local machine using Ollama.
**Which models can I use?**
Any model supported by Ollama will work. We recommend Llama 3, CodeLlama, or Mistral.
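If you are not sure which models you already have pulled, Ollama itself can tell you. This is plain Ollama usage (equivalent to running `ollama list` on the command line), not a CodeScope API:

```python
# List the models currently available to the local Ollama server.
import requests

r = requests.get("http://localhost:11434/api/tags")
r.raise_for_status()
for model in r.json()["models"]:
    print(model["name"])
```

Any name printed here can be used; `ollama pull llama3` (or another model name) downloads one you are missing.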
**Is CodeScope free?**
Yes! CodeScope is open source and free to use; you only need to provide the hardware to run the models.
**Which programming languages are supported?**
CodeScope supports most popular languages, including Python, JavaScript, TypeScript, Go, Rust, C++, Java, and more.
**How much RAM do I need?**
At least 8 GB is recommended for 7B/8B models; for smaller models such as Phi-2, 4 GB may suffice, and 16 GB or more gives the smoothest experience. As a rough rule of thumb, a 4-bit quantized 7B model needs about 4 GB for the weights alone, before the context cache and the rest of your system.
**Can I index multiple repositories?**
Currently, CodeScope indexes one repository at a time; ingesting a new one replaces the previous index.
**Does CodeScope work offline?**
Yes. Once you have downloaded the models via Ollama, CodeScope runs completely without an internet connection.
**Do I need a GPU?**
No, but local AI inference is CPU/GPU intensive, so an NVIDIA GPU or Apple Silicon (M1/M2/M3) significantly speeds up responses.
**How do I report a bug?**
Please open an issue on our GitHub repository.
See also:
- Getting Started
- Troubleshooting Guide
- Architecture