This project demonstrates a production-ready approach to building agentic workflows with persistent memory and real-time tool integration: a high-performance agentic assistant built with LangGraph and Groq, optimized for sub-second multi-step reasoning loops. It features a robust multi-tool suite (YouTube, Tavily Search, Wikipedia, and Weather) orchestrated with persistent SQLite memory, production-grade observability via LangSmith, and a real-time streaming interface built with Streamlit.
- Phase 1: The Core - Built the basic backend and integrated the Llama-3.1-8B model via Groq.
- Phase 2: The Experience - Added Streamlit for real-time streaming and a polished UI.
- Phase 3: Memory & Logic - Implemented SQLite persistence for resumable chats and complex LangGraph state management.
- Phase 4: Professional Grade - Integrated LangSmith for observability and parallel tool-calling.
- Agentic Workflow: Uses a cyclic graph to handle tool-calling loops (Search, Finance & Math).
- Multi-Threaded Persistence: An integrated `SqliteSaver` allows multiple independent conversation threads, stored in a local SQLite database.
- Real-Time Streaming: A smooth, ChatGPT-like interface with transparent "Thought" blocks using Streamlit's `st.status`.
- Enterprise Observability: Fully integrated with LangSmith for tracing, debugging, and performance monitoring.
- Secure Architecture: Comprehensive use of environment variables (`.env`) to protect sensitive API keys.
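To make the multi-threaded persistence concrete, here is a toy, standard-library-only sketch of the idea behind `SqliteSaver`: conversation state keyed by a `thread_id` in a local SQLite database, so separate chats can be resumed independently. This is a simplified illustration, not LangGraph's actual checkpoint schema; the class and column names are invented for the example.

```python
import json
import sqlite3


class ThreadCheckpointer:
    """Toy illustration of thread-keyed persistence (not SqliteSaver's real schema)."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS checkpoints (thread_id TEXT PRIMARY KEY, state TEXT)"
        )

    def put(self, thread_id, state):
        # Upsert the latest state snapshot for this conversation thread
        self.conn.execute(
            "INSERT OR REPLACE INTO checkpoints VALUES (?, ?)",
            (thread_id, json.dumps(state)),
        )
        self.conn.commit()

    def get(self, thread_id):
        row = self.conn.execute(
            "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
        ).fetchone()
        return json.loads(row[0]) if row else None


saver = ThreadCheckpointer()
saver.put("thread-1", {"messages": ["Hi"]})
saver.put("thread-2", {"messages": ["Hello"]})
print(saver.get("thread-1"))  # each thread's history is stored independently
```

In the real app, LangGraph's checkpointer plays this role: compiling the graph with a checkpointer and passing a `thread_id` in the invocation config selects which conversation to resume.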
- LLM Engine: Groq (Inference)
- Logic: LangGraph & LangChain
- Frontend: Streamlit (Custom CSS & Multi-threading)
- Database: SQLite (SQLAlchemy / LangGraph Checkpointers)
- Observability: LangSmith
- Tools: YouTube Search, Tavily, Wikipedia, OpenWeatherMap
├── langgraph_tool_backend.py        # Core agent logic & state graph
├── streamlit_frontend_threading.py  # UI & multi-threaded execution
├── requirements.txt                 # Project dependencies
├── .env.example                     # API key template
└── .gitignore                       # Files to exclude from Git
git clone https://github.com/YOUR_USERNAME/LangGraph-agentic-chatbot.git
cd LangGraph-agentic-chatbot
pip install -r requirements.txt
Create a .env file in the root directory. Add your credentials (see .env.example for reference):
GROQ_API_KEY=your_groq_key
OPENAI_API_KEY=your_openai_key
TAVILY_API_KEY=your_tavily_key
OPENWEATHERMAP_API_KEY=your_weather_key
ALPHA_VANTAGE_API_KEY=your_alpha_vantage_key
STOCK_PRICE_URL=https://www.alphavantage.co/query
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=your_langsmith_key
LANGSMITH_PROJECT="LangGraph-Chatbot"
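The backend reads these keys from the environment at startup. A minimal sketch of what that loading step does (in practice a library like python-dotenv's `load_dotenv()` handles this; the `DEMO_*` key names below are invented for the example):

```python
import os


def load_env(path=".env"):
    """Minimal .env loader: parse KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault: never overwrite variables already set in the shell
                os.environ.setdefault(key.strip(), value.strip().strip('"'))


# demo with a throwaway file and hypothetical key names
with open("demo.env", "w") as f:
    f.write('DEMO_API_KEY=abc123\nDEMO_PROJECT="LangGraph-Chatbot"\n')
load_env("demo.env")
print(os.environ["DEMO_API_KEY"])  # abc123
```

Because `setdefault` is used, keys exported in your shell take precedence over the `.env` file, which is handy for CI overrides.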
Start the Streamlit server to launch the interface: streamlit run streamlit_frontend_threading.py
After interacting with the bot, visit your LangSmith dashboard to see the execution traces:
https://eu.smith.langchain.com/
By using LangGraph, the agent operates as a state machine. If a tool call fails or returns ambiguous data, the agent can loop back, refine its search, and correct itself. Every "thought" and "action" is logged via LangSmith, providing full transparency into:
- LLM Reasoning: exactly why the agent chose a specific tool.
- Token Usage: monitoring costs and efficiency.
- Latency: identifying bottlenecks in tool response times.
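The self-correcting loop described above can be sketched as a plain-Python state machine: the model either requests a tool or emits a final answer, tool results are appended back into the state, and control cycles until the model is satisfied or a loop cap is hit. This is a conceptual stand-in for LangGraph's conditional-edge cycle, with all function names invented for the example.

```python
def run_agent(llm_step, tools, state, max_loops=5):
    """Conceptual tool-calling loop: call model, run any requested tool, repeat."""
    for _ in range(max_loops):
        action = llm_step(state)  # model decides: request a tool, or answer
        if action["type"] == "final":
            return action["content"]
        # execute the requested tool and feed the result back into the state
        result = tools[action["tool"]](action["input"])
        state["messages"].append({"role": "tool", "content": result})
    return "Stopped: loop limit reached"


# stub "model": use the calculator once, then answer with the tool's result
def fake_llm(state):
    if not any(m["role"] == "tool" for m in state["messages"]):
        return {"type": "tool", "tool": "math", "input": "2+2"}
    return {"type": "final", "content": state["messages"][-1]["content"]}


tools = {"math": lambda expr: str(eval(expr))}
print(run_agent(fake_llm, tools, {"messages": []}))  # prints: 4
```

The `max_loops` guard is the piece that keeps a confused model from cycling forever; LangGraph expresses the same idea with recursion limits on the compiled graph.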
For a deep dive into the technical hurdles, "Overwhelming" moments, and the upcoming MCP (Model Context Protocol) upgrade, check out the:
