This repository contains hands-on experiments and examples using LangChain for Retrieval-Augmented Generation (RAG) and Conversational Agents with memory and tool use.
File: RAG/1a_rag_basics.py
- Loads a text file (`odyssey.txt`).
- Splits it into manageable chunks.
- Converts each chunk into embeddings using HuggingFace models.
- Stores embeddings in a Chroma vector database.
- Supports semantic search over the stored vectors.
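The chunk → embed → search flow above can be sketched with a stdlib-only toy. The real script uses HuggingFace embeddings and a Chroma vector store; here a hypothetical bag-of-words `embed` stands in for the embedding model, and `split_into_chunks`, `cosine`, and `search` are illustrative helpers, not LangChain APIs:

```python
# Conceptual sketch of the RAG indexing/search steps (stdlib only).
import math
from collections import Counter

def split_into_chunks(text, chunk_size=200, overlap=40):
    """Fixed-size chunks with overlap so context isn't cut off at boundaries."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

def embed(text):
    """Toy embedding: a word-count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, chunks, top_k=1):
    """Return the top_k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]
```

Swapping the toy `embed` for a real embedding model and the sorted list for a vector database is essentially what the script does via LangChain.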
How to run:

`python RAG/1a_rag_basics.py`

File: agent_conversational.py
- Loads your Google Gemini API key from `.env`.
- Initializes a Gemini LLM with LangChain.
- Adds conversational memory (remembers chat history).
- Loads tools (e.g., math tool, LLM tool).
- Uses a prompt template and LLMChain for flexible queries.
- Demonstrates multi-turn conversation and tool use.
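Conversational memory amounts to keeping the transcript and prepending it to every new prompt, which is what LangChain's memory classes do conceptually. A minimal stdlib sketch, where `ConversationMemory`, `chat`, and the lambda LLM are hypothetical stand-ins (not the actual Gemini or LangChain API):

```python
# Stdlib-only sketch of multi-turn memory: history is replayed into each prompt.
class ConversationMemory:
    def __init__(self):
        self.turns = []  # list of (role, text) pairs

    def add(self, role, text):
        self.turns.append((role, text))

    def as_prompt(self, new_input):
        """Build a prompt containing the full history plus the new input."""
        history = "\n".join(f"{role}: {text}" for role, text in self.turns)
        return f"{history}\nHuman: {new_input}\nAI:"

def chat(llm, memory, user_input):
    """One turn: prompt with history, call the LLM, record both messages."""
    reply = llm(memory.as_prompt(user_input))
    memory.add("Human", user_input)
    memory.add("AI", reply)
    return reply
```

Because earlier turns are replayed into every prompt, the model can answer follow-up questions that refer back to them.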
How to run:

`python agent_conversational.py`

Getting started:

- Clone the repository:
  `git clone https://github.com/ali3dev/LangChain.git`
  `cd LangChain`
- Install dependencies:
  `pip install -r requirements.txt`
- Set up your `.env` file:
  `GOOGLE_API_KEY=your-google-api-key-here`
- Add your data:
  - Place your text files (e.g., `odyssey.txt`) in the `RAG/books/` folder.
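The `.env` step above boils down to reading key=value pairs into the process environment. The project presumably uses a helper such as python-dotenv for this; a minimal stdlib sketch of the idea, with `load_env` as a hypothetical name:

```python
# Stdlib-only sketch of loading a .env file into os.environ.
import os

def load_env(path=".env"):
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments; keep existing env vars untouched.
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```

Once the key is in the environment, the script can read it with `os.getenv("GOOGLE_API_KEY")`.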
See `requirements.txt` for the full list of dependencies.
This project is for educational and experimental purposes.