This Master's Thesis addresses the design, implementation, and evaluation of a chatbot using a multi-agent architecture to provide specialized technical support for local area networks (LAN). The project stems from the student's interest in applying generative artificial intelligence to solve complex problems in technological environments.
The main objective is to develop an intelligent support system that leverages generative AI models and a multi-agent architecture to deliver accurate and contextualized responses. The proposed solution is composed of four specialized agents: a classification agent to route queries, a knowledge agent that uses the Retrieval-Augmented Generation (RAG) technique to consult internal documentation, a connectivity agent to perform real-time network diagnostics, and an escalation agent to handle requests requiring human intervention.
Among the technical requirements, special emphasis is placed on optimizing a language model using the Low-Rank Adaptation (LoRA) technique to improve the accuracy of request classification. In addition, the project considers the development of a vector database with multilingual embeddings to ensure the system's knowledge remains relevant and up to date. User interaction will be enabled through the Telegram messaging platform, supported by a serverless architecture. Finally, the chatbot's performance will be evaluated in real-world scenarios, measuring both its accuracy and user satisfaction.
The NetworkSupportChatbot class serves as the central orchestrator that initializes all agents and manages the workflow [2]. It creates a LangGraph StateGraph workflow with memory persistence using MemorySaver.
The system maintains conversation state through the AgentState TypedDict structure [3]. This state tracks:
- Message history across different agent types
- User context (question, language)
- Processing metadata (scores, actions)
- Final responses
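The exact field layout is not reproduced here; a minimal sketch of such a state structure, using hypothetical field names and LangGraph's convention of annotating accumulating list fields with a reducer, might look like:

```python
import operator
from typing import Annotated, TypedDict


class AgentState(TypedDict):
    # Message history accumulated across agent types
    # (operator.add appends new messages instead of overwriting)
    messages: Annotated[list[str], operator.add]
    # User context
    question: str
    language: str
    # Processing metadata (hypothetical field names)
    relevance_score: float
    action: str
    # Final response returned to the user
    response: str


# Example of an initialized state for an incoming query
state: AgentState = {
    "messages": ["user: the office Wi-Fi keeps dropping"],
    "question": "the office Wi-Fi keeps dropping",
    "language": "en",
    "relevance_score": 0.0,
    "action": "",
    "response": "",
}
```

Because the structure is a TypedDict, each agent node receives and returns plain dictionaries while still getting static type checking during development.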
The architecture implements four specialized agents, each with distinct responsibilities:
**Triage Agent** — Routes incoming queries to the appropriate specialized agent [4]. It uses pattern matching to classify requests and defaults to the knowledge agent when the request is unclear.
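A pattern-matching classifier of this kind can be sketched as follows; the keyword sets are illustrative, not the project's actual patterns, and in the full system the LoRA-fine-tuned model handles classification:

```python
import re

# Hypothetical keyword patterns per target agent
ROUTES = {
    "connectivity": re.compile(r"\b(ping|dns|port|unreachable|timeout|latency)\b", re.I),
    "escalation": re.compile(r"\b(human|agent|ticket|complaint)\b", re.I),
}


def triage(question: str) -> str:
    """Return the name of the agent that should handle the query."""
    for agent, pattern in ROUTES.items():
        if pattern.search(question):
            return agent
    # Default route when the request matches no known pattern
    return "knowledge"
```

The default-to-knowledge fallback mirrors the behavior described above: ambiguous queries are answered from documentation rather than rejected.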
**Connectivity Agent** — Performs network diagnostics using tools such as ping, DNS queries, and port checks [5]. It implements a ReAct pattern for tool usage with iterative reasoning.
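The diagnostic tools such a ReAct loop would iterate over can be sketched with the standard library; the function names and signatures are illustrative, not the project's actual `tools/network.py` API:

```python
import socket
import subprocess


def ping(host: str, count: int = 1) -> str:
    """Run a single ICMP ping and return the raw output (or the error)."""
    try:
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, timeout=10,
        )
        return out.stdout or out.stderr
    except Exception as exc:
        return f"ping failed: {exc}"


def check_port(host: str, port: int, timeout: float = 3.0) -> str:
    """Report whether a TCP port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return f"{host}:{port} is open"
    except OSError as exc:
        return f"{host}:{port} is closed or unreachable ({exc})"


# A ReAct-style agent would expose these as a tool registry and, on each
# iteration, reason about which tool to call, call it, and feed the
# observation (the returned string) back into the next reasoning step.
TOOLS = {"ping": ping, "check_port": check_port}
```

Returning plain strings keeps the observations easy to splice into the language model's next prompt.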
**Knowledge Agent** — Retrieves information from a vector database using semantic search [6]. It routes to escalation if knowledge relevance scores fall below a threshold.
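The threshold routing described above can be sketched as follows; the threshold value and the match/score field names are assumptions, with the real values presumably held in configuration:

```python
# Hypothetical relevance cutoff for retrieved chunks
RELEVANCE_THRESHOLD = 0.75


def route_after_retrieval(matches: list[dict]) -> str:
    """Route to escalation when no retrieved chunk is relevant enough.

    `matches` mimics a vector-database result: a list of dicts, each
    carrying a similarity `score` for one retrieved document chunk.
    """
    best = max((m["score"] for m in matches), default=0.0)
    return "knowledge" if best >= RELEVANCE_THRESHOLD else "escalation"
```

The `default=0.0` makes an empty result set (nothing retrieved at all) escalate rather than crash.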
**Escalation Agent** — Handles complex issues requiring human intervention by creating support tickets [7]. It integrates with ClickUp for ticket management.
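Assembling a ticket for ClickUp's Create Task endpoint could look like the sketch below; the `name`, `description`, and `priority` fields are standard ClickUp task fields, but the helper itself and its formatting are illustrative, not the project's actual code:

```python
def build_ticket(question: str, transcript: list[str], priority: int = 3) -> dict:
    """Assemble a ClickUp task payload for POST /api/v2/list/{list_id}/task.

    The payload would then be sent with an Authorization header carrying
    the workspace API token.
    """
    return {
        "name": f"[chatbot escalation] {question[:80]}",
        "description": "Conversation transcript:\n" + "\n".join(transcript),
        "priority": priority,  # ClickUp priorities: 1=urgent .. 4=low
    }
```

Carrying the full transcript into the ticket description gives the human technician the context the agents already gathered.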
The directory structure of the en-medina/network-support-chatbot repository is as follows:
en-medina/network-support-chatbot/
├── lambda/
│   └── generator/
│       ├── agents/
│       │   ├── networksupportchatbot.py  # Main orchestrator class
│       │   ├── triageagent.py            # Query routing agent
│       │   ├── connectivityagent.py      # Network diagnostics agent
│       │   ├── knowledgeagent.py         # Knowledge retrieval agent
│       │   ├── escalationagent.py        # Human escalation agent
│       │   └── state.py                  # AgentState management
│       ├── tools/
│       │   ├── network.py                # Network diagnostic tools
│       │   ├── vectordb.py               # Vector database integration
│       │   ├── model.py                  # Model selection utilities
│       │   ├── language.py               # Language detection
│       │   └── escalation.py             # Escalation tools
│       ├── parser/
│       │   ├── knowledge.py              # Knowledge response parsers
│       │   ├── connectivity.py           # Connectivity response parsers
│       │   └── escalation.py             # Escalation response parsers
│       ├── train/
│       │   └── data/
│       │       └── json/
│       │           └── triage_train.json # Training data for triage agent
│       ├── test/
│       │   ├── playground.ipynb          # Interactive development notebook
│       │   ├── with_langgraph.py         # LangGraph testing framework
│       │   └── requirements-full.txt     # Development dependencies
│       └── settings.py                   # Configuration management
├── infrastructure/                       # AWS deployment configs
├── diagram.drawio                        # System architecture diagram
├── LLM_PROMPT.md                         # Development prompts
└── Dockerfile                            # Container configuration
**agents/** — Contains the core multi-agent system implementation [1]. The main orchestrator initializes all four specialized agents and manages the LangGraph workflow.

**tools/** — Houses external service integrations and utility functions. Network tools provide diagnostic capabilities [2], while the vector database tools enable knowledge retrieval.

**test/** — Development and testing infrastructure, including the interactive Jupyter notebook [3] for testing HuggingFace integrations and the standalone LangGraph testing framework [4].

**train/** — Training data and model fine-tuning resources, specifically the triage agent training dataset [5] containing labeled examples for query classification.
The system uses a state machine pattern where agents can route to each other based on processing outcomes [8]:
START → Triage → [Connectivity | Knowledge | Escalation] → END
Each agent implements a route_condition method that determines the next step in the workflow based on the current state.
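This routing pattern can be simulated without LangGraph itself; the node functions, state keys, and canned responses below are illustrative stand-ins for the real agents:

```python
def triage_node(state: dict) -> dict:
    # Stand-in for the triage agent's route_condition logic
    state["route"] = "connectivity" if "ping" in state["question"] else "knowledge"
    return state


def connectivity_node(state: dict) -> dict:
    state["response"] = "ran network diagnostics"
    state["route"] = "END"
    return state


def knowledge_node(state: dict) -> dict:
    state["response"] = "answered from documentation"
    state["route"] = "END"
    return state


NODES = {
    "triage": triage_node,
    "connectivity": connectivity_node,
    "knowledge": knowledge_node,
}


def run(question: str) -> str:
    """Walk the state machine from START (triage) until a node routes to END."""
    state, node = {"question": question, "route": "triage"}, "triage"
    while node != "END":
        state = NODES[node](state)
        node = state["route"]
    return state["response"]
```

In the actual system, LangGraph's conditional edges play the role of the `while` loop here, consulting each agent's `route_condition` to pick the next node.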
The system integrates with multiple external services:
- AWS Bedrock: For LLM inference with different models per agent
- Pinecone: Vector database for knowledge retrieval [9]
- ClickUp: Task management for escalated issues
- Telegram: User interface via webhook integration
The system deploys as containerized AWS Lambda functions with:
- Webhook handler for receiving Telegram messages
- Generator Lambda containing the multi-agent system
- SQS queuing for asynchronous message processing
- Environment-specific model selection supporting local development
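A minimal sketch of how the generator Lambda's entry point could unpack SQS-wrapped Telegram updates; the handler body and return shape are assumptions, though the SQS `Records`/`body` envelope and the Telegram `message.chat.id`/`message.text` fields follow those services' documented event formats:

```python
import json


def handler(event: dict, context) -> dict:
    """Generator Lambda entry point: each SQS record wraps one Telegram update."""
    processed = []
    for record in event.get("Records", []):
        update = json.loads(record["body"])       # SQS delivers the body as a JSON string
        chat_id = update["message"]["chat"]["id"]  # where to send the reply
        text = update["message"]["text"]           # the user's question
        # ... invoke the multi-agent workflow on `text` and reply to `chat_id` ...
        processed.append(chat_id)
    return {"processed": len(processed)}
```

Because SQS batches deliveries, the handler loops over `Records` rather than assuming a single message per invocation.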
The design applies several architectural patterns:

- Event-Driven: SQS decouples message reception from processing
- State Machine: LangGraph manages conversation flow and context
- Multi-Agent: Domain-specific expertise through specialized agents
- Tool Integration: External capabilities via service adapters
- Multilingual: Language detection and response localization
The architecture provides scalability through stateless processing, horizontal Lambda scaling, and optimized model selection based on task complexity.
The system demonstrates a sophisticated approach to conversational AI by combining multiple specialized agents with external tool integration. The LangGraph framework enables complex workflow orchestration while maintaining conversation state across agent transitions. The modular design allows for independent agent development and testing while ensuring cohesive system behavior.