Yiğit ERDOĞAN edited this page Jan 11, 2026 · 1 revision

❓ Frequently Asked Questions (FAQ)

Everything you need to know about CodeScope.

General Questions

What is CodeScope?

CodeScope is a local-first AI coding assistant that uses RAG (Retrieval-Augmented Generation) to let you chat with your own codebase privately.
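In a RAG pipeline, the codebase is split into chunks, the chunks most similar to your question are retrieved, and those chunks are placed into the LLM prompt as context. A minimal sketch of the retrieval step, assuming a toy bag-of-words similarity (real pipelines, including CodeScope's, use a neural embedding model; all names here are illustrative):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a term-frequency vector.
    A real RAG pipeline would call an embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank code chunks by similarity to the query; return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "def parse_config(path): open the YAML config file",
    "class HttpClient: send GET and POST requests",
    "def read_yaml(path): load settings from a config file",
]
context = retrieve("how is the config file loaded", chunks)
# The retrieved chunks are then prepended to the prompt sent to the local LLM.
```

The key point for privacy is that every step here, from chunking to similarity search to generation, runs on your machine.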

Is my code sent to the cloud?

No. All processing, including code indexing and AI inference, happens entirely on your local machine using Ollama.

Which LLMs are supported?

Any model supported by Ollama can be used. We recommend Llama 3, CodeLlama, or Mistral.

Is it free?

Yes! CodeScope is open-source and free to use. You only need to provide the hardware to run the models.

Technical Questions

What programming languages are supported?

CodeScope supports most popular languages including Python, JavaScript, TypeScript, Go, Rust, C++, Java, and more.

How much RAM do I need?

At least 8GB is recommended for 7B/8B models. For smaller models like Phi-2, 4GB might suffice. 16GB+ is recommended for a smooth experience.
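These figures can be sanity-checked with back-of-envelope arithmetic: weight memory is roughly parameters × bytes per weight, plus some overhead for the KV cache and runtime buffers. A rough sketch, assuming 4-bit quantized weights (Ollama's common default) and ~20% overhead; the numbers are approximations, not CodeScope measurements:

```python
def model_memory_gb(params_billions, bits_per_weight=4, overhead=1.2):
    """Rough memory estimate: parameters x bytes per weight,
    plus ~20% overhead for KV cache and runtime buffers."""
    weight_bytes = params_billions * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9

for name, size_b in [("Phi-2 (2.7B)", 2.7), ("Mistral 7B", 7), ("Llama 3 8B", 8)]:
    print(f"{name}: ~{model_memory_gb(size_b):.1f} GB")
```

By this estimate an 8B model needs roughly 5 GB, which fits in 8GB of RAM with room for the OS, and a 2.7B model fits comfortably in 4GB, matching the guidance above.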

Can I index multiple repositories?

Currently, CodeScope indexes one repository at a time. Ingesting a new one will replace the previous index.

Does it work offline?

Yes, once you have downloaded the models via Ollama, CodeScope can run completely without an internet connection.

Troubleshooting

Why is the AI response slow?

Local AI inference is compute- and memory-bandwidth intensive. An NVIDIA GPU or an Apple Silicon Mac (M1/M2/M3) speeds up responses significantly.
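Why the hardware matters: generating each token requires reading (roughly) all the model weights from memory, so throughput is capped near memory bandwidth divided by model size. A hedged back-of-envelope sketch; the bandwidth figures are ballpark numbers for typical hardware, not benchmarks:

```python
def tokens_per_sec_upper_bound(model_gb, bandwidth_gb_s):
    """Each generated token reads ~all weights once, so the
    throughput ceiling is roughly bandwidth / model size."""
    return bandwidth_gb_s / model_gb

model_gb = 4.8  # ~8B model at 4-bit quantization
hardware = {
    "Desktop CPU, DDR4 (~50 GB/s)": 50,
    "Apple M2 (~100 GB/s)": 100,
    "NVIDIA RTX 4090 (~1000 GB/s)": 1000,
}
for name, bw in hardware.items():
    print(f"{name}: up to ~{tokens_per_sec_upper_bound(model_gb, bw):.0f} tok/s")
```

Real throughput is lower than this ceiling, but the ratios explain why the same model feels sluggish on an older CPU and fast on a GPU.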

I found a bug, what should I do?

Please open an issue on our GitHub repository.
