An enterprise-grade Retrieval-Augmented Generation (RAG) application designed to transform static study materials into interactive AI-driven conversations. Built for the modern LLM ecosystem in 2026.
- Context-Aware Retrieval: Uses modern LCEL (LangChain Expression Language) chains for stable, high-performance document interrogation.
- Multimodal Document Processing: High-accuracy PDF parsing and semantic chunking.
- Vector Intelligence: Seamlessly integrates with ChromaDB for local, privacy-centric vector storage.
- Dual-Model Inference: Pairs Google Gemini 1.5 Flash for high-speed, cost-effective reasoning with a dedicated embedding model for retrieval.
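The history-aware retrieval feature above can be illustrated with a toy sketch in plain Python. This is not the LangChain API; `rewrite_query`, `retrieve`, and the two-document corpus are invented for this illustration, which only shows the idea: a follow-up question is first rewritten into a standalone query using the chat history, then matched against the corpus.

```python
# Toy illustration of history-aware retrieval (not the LangChain API):
# rewrite the follow-up question using the last turn of chat history,
# then rank documents by naive keyword overlap with the rewritten query.

CORPUS = {
    "doc1": "LCEL composes retrievers and prompts into runnable chains.",
    "doc2": "ChromaDB stores embeddings locally for private retrieval.",
}

def rewrite_query(history: list[str], question: str) -> str:
    """Naively prepend the last history turn so pronouns like 'it' gain context."""
    if not history:
        return question
    return f"{history[-1]} {question}"

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by the number of lowercase words shared with the query."""
    words = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

history = ["What does LCEL do in LangChain?"]
standalone = rewrite_query(history, "How does it compose chains?")
print(retrieve(standalone))  # the LCEL document outranks the ChromaDB one
```

In the real app, LangChain's history-aware retriever plays the role of `rewrite_query` (using the LLM to reformulate the question) and the ChromaDB vector store plays the role of `retrieve`.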
- Framework: LangChain v0.3 Core (History-aware retrievers).
- Embeddings: Google `GoogleGenerativeAIEmbeddings` (`embedding-001`).
- Database: SQLite-backed ChromaDB (`pysqlite3` monkeypatched for cloud deployment).
- Interface: Real-time Streamlit dashboard with conversational history.
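The `pysqlite3` monkeypatch mentioned above is a common workaround for cloud hosts (such as Streamlit Community Cloud) whose system SQLite is older than the 3.35+ that ChromaDB requires. A minimal sketch, with a try/except fallback added here so it degrades gracefully when the `pysqlite3-binary` wheel is not installed:

```python
import sys

# ChromaDB requires SQLite >= 3.35; some cloud images ship an older build.
# Swapping in pysqlite3-binary BEFORE chromadb is imported works around this.
try:
    __import__("pysqlite3")
    sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")
except ImportError:
    pass  # fall back to the stdlib sqlite3 if the wheel is absent

import sqlite3
print(sqlite3.sqlite_version)  # version actually in use after the (attempted) swap
```

The swap must run at the very top of the entry-point script, before any module that imports `chromadb`.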
- Explore the Live App: AI RAG Navigator Demo
- Local Setup:
  ```shell
  git clone https://github.com/AkkiKrsingh2005/ai-rag-navigator.git
  cd ai-rag-navigator
  pip install -r requirements.txt
  streamlit run app.py
  ```
- Environment: Add your `GOOGLE_API_KEY` to the application sidebar or a `.env` file.
Developed by Ankit Kumar | Portfolio