
πŸŽ“ CSR Trainer Bot

An AI-powered customer service training simulation platform that helps Customer Service Representatives (CSRs) practice and improve their skills through realistic conversations with simulated customer personas.

πŸ“‹ Overview

CSR Trainer Bot provides an interactive learning environment where trainees can:

  • Practice conversations with different customer personality types
  • Receive real-time AI responses based on customer personas
  • Get detailed performance analysis and feedback
  • Learn from mistakes and improve customer service skills

✨ Features

🎭 Multiple Customer Personas

  • Angry Customer: Frustrated and demanding, needs empathy and quick resolution
  • Confused Customer: Needs clear, patient explanation and guidance
  • Technical Customer: Tech-savvy, expects detailed and accurate information
  • Polite Customer: Courteous and patient, deserves warm assistance
  • Impatient Customer: In a hurry, needs quick and direct solutions

πŸ’¬ Interactive Chat Simulation

  • Real-time conversation with AI-powered customer responses
  • Natural, context-aware replies based on conversation history
  • Turn-by-turn interaction mimicking real customer service scenarios

πŸ“Š Comprehensive Analysis

  • Overall performance score (0-10 scale)
  • Inline feedback on each CSR message
  • Identification of effective and problematic responses
  • Key learning points and improvement suggestions
  • Detailed performance summary

🎨 Modern UI/UX

  • Beautiful, responsive design
  • Smooth animations and transitions
  • Mobile-friendly interface
  • Intuitive navigation
  • Real-time feedback display

πŸ—οΈ Project Structure

csr-trainer-bot/
β”œβ”€β”€ backend/
β”‚   β”œβ”€β”€ main.py                 # FastAPI application entry point
β”‚   β”œβ”€β”€ routers/
β”‚   β”‚   β”œβ”€β”€ chat.py            # Chat endpoint handlers
β”‚   β”‚   └── analyze.py         # Analysis endpoint handlers
β”‚   β”œβ”€β”€ schemas/
β”‚   β”‚   β”œβ”€β”€ common.py          # Shared Pydantic models
β”‚   β”‚   β”œβ”€β”€ chat.py            # Chat request/response schemas
β”‚   β”‚   └── analyze.py         # Analysis request/response schemas
β”‚   β”œβ”€β”€ services/
β”‚   β”‚   β”œβ”€β”€ model.py           # AI model service (replaceable)
β”‚   β”‚   └── analyzer.py        # Chat analysis service
β”‚   └── utils/
β”‚       β”œβ”€β”€ fileio.py          # JSON file operations
β”‚       β”œβ”€β”€ logger.py          # Logging configuration
β”‚       └── persona_cache.py   # Persona caching
β”œβ”€β”€ storage/
β”‚   β”œβ”€β”€ chat_sessions.json     # Session storage
β”‚   └── personas.json          # Persona configurations
β”œβ”€β”€ frontend/
β”‚   β”œβ”€β”€ index.html             # Main HTML structure
β”‚   β”œβ”€β”€ styles.css             # Styling and responsive design
β”‚   β”œβ”€β”€ script.js              # Main application logic
β”‚   β”œβ”€β”€ api.js                 # API communication layer
β”‚   └── consts.js              # Configuration and constants
β”œβ”€β”€ README.md                  # This file
β”œβ”€β”€ .gitignore                 # Git ignore rules
└── LICENSE                    # MIT License

πŸš€ Getting Started

Prerequisites

  • Python 3.8 or higher
  • pip (Python package manager)
  • Modern web browser (Chrome, Firefox, Safari, Edge)

Installation

  1. Clone the repository

    git clone <repository-url>
    cd csr-trainer-bot
  2. Install Python dependencies

    pip install fastapi uvicorn pydantic
  3. Initialize storage: storage files are created automatically on first run; no manual setup is required.

Running the Application

  1. Start the backend server

    cd backend
    python main.py

    Or using uvicorn directly:

    uvicorn main:app --reload --host 0.0.0.0 --port 8000
  2. Access the application: open your browser and navigate to:

    http://localhost:8000
    
  3. Start training!

    • Select a customer persona
    • Click "Start Chat Session"
    • Practice your customer service skills
    • Click "End Chat" when done
    • Click "Analyze Chat" to see your performance

πŸ”§ Configuration

API Configuration

Edit frontend/consts.js to configure API settings:

const API_CONFIG = {
    BASE_URL: 'http://localhost:8000/api',
    TIMEOUT: 30000
};

Demo Mode

Enable demo mode for offline testing (no backend required):

const DEMO_MODE = {
    ENABLED: true,  // Set to true for demo mode
    MOCK_DELAY: 1500
};

Adding Custom Personas

Edit storage/personas.json to add or modify personas:

{
  "custom": {
    "name": "Custom Persona",
    "description": "Description here",
    "icon": "😎",
    "traits": ["trait1", "trait2"],
    "tips": "Tips for handling this persona"
  }
}
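For reference, persona lookup on the backend can be sketched as below. This is a minimal version; the actual `persona_cache.py` may differ in detail:

```python
import json
from functools import lru_cache
from pathlib import Path

PERSONAS_PATH = Path("storage/personas.json")  # adjust if running from backend/

@lru_cache(maxsize=1)
def load_personas() -> dict:
    """Read and cache persona definitions from disk (one read per process)."""
    with PERSONAS_PATH.open(encoding="utf-8") as f:
        return json.load(f)

def get_persona(key: str) -> dict:
    """Look up one persona by its key, e.g. 'angry' or 'custom'."""
    personas = load_personas()
    if key not in personas:
        raise KeyError(f"Unknown persona: {key!r}")
    return personas[key]
```

Because the cache holds the whole file, call `load_personas.cache_clear()` after editing `personas.json` if the server is still running.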

πŸ€– AI Model Integration

The current implementation uses a rule-based system to generate customer responses. You can replace it with a real language model in any of the following ways:

Option 1: Hugging Face API

  1. Install the Hugging Face client:

    pip install huggingface_hub
  2. Set your API token:

    export HF_TOKEN=your_token_here
  3. Modify backend/services/model.py to use the get_llm_response function
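A sketch of what that replacement might look like, assuming `huggingface_hub` is installed and `HF_TOKEN` is exported. The model ID and the chat-history format are illustrative, not taken from the repo:

```python
from typing import Dict, List

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative; use any chat model

def build_messages(persona_prompt: str, chat_history: List[Dict]) -> List[Dict]:
    """Convert stored history into chat-completion messages.

    Assumes each entry looks like {"sender": "csr" | "customer", "text": "..."}:
    the CSR plays the "user" role and the simulated customer the "assistant".
    """
    messages = [{"role": "system", "content": persona_prompt}]
    for turn in chat_history:
        role = "user" if turn["sender"] == "csr" else "assistant"
        messages.append({"role": role, "content": turn["text"]})
    return messages

def get_llm_response(persona_prompt: str, chat_history: List[Dict]) -> str:
    """Query the Hugging Face Inference API (reads HF_TOKEN from the environment)."""
    from huggingface_hub import InferenceClient  # lazy import: pip install huggingface_hub

    client = InferenceClient(model=MODEL_ID)
    result = client.chat_completion(messages=build_messages(persona_prompt, chat_history))
    return result.choices[0].message.content
```

Keeping `build_messages` pure makes it easy to unit-test without touching the network.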

Option 2: OpenAI API

  1. Install the OpenAI client:

    pip install openai
  2. Set your API key:

    export OPENAI_API_KEY=your_key_here
  3. Implement OpenAI integration in backend/services/model.py

Option 3: Local LLM

Use llama.cpp, Ollama, or similar tools to run models locally:

pip install llama-cpp-python

πŸ“Š API Endpoints

POST /api/chat

Send a chat message and receive an AI-generated customer reply.

Request:

{
  "session_id": "uuid",
  "persona": "angry",
  "chat_history": [...]
}

Response:

{
  "session_id": "uuid",
  "message": {
    "sender": "customer",
    "text": "Response text",
    "timestamp": 1234567890
  },
  "success": true
}
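Calling the endpoint from Python looks roughly like this; a stdlib-only sketch with field names taken from the schemas above:

```python
import json
from urllib import request

API_BASE = "http://localhost:8000/api"  # keep in sync with frontend/consts.js

def make_chat_payload(session_id: str, persona: str, chat_history: list) -> dict:
    """Assemble a request body for POST /api/chat."""
    return {"session_id": session_id, "persona": persona, "chat_history": chat_history}

def send_chat(payload: dict) -> dict:
    """POST the payload to /api/chat and decode the JSON response."""
    req = request.Request(
        f"{API_BASE}/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# Example (server must be running):
# reply = send_chat(make_chat_payload("uuid", "angry", [{"sender": "csr", "text": "Hi!"}]))
# print(reply["message"]["text"])
```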

POST /api/analyze

Analyze a completed chat session.

Request:

{
  "session_id": "uuid",
  "persona": "angry",
  "chat_history": [...],
  "ground_truth": "optional context"
}

Response:

{
  "session_id": "uuid",
  "score": 7.5,
  "inline_feedback": [...],
  "good_responses": [...],
  "bad_responses": [...],
  "learning_points": [...],
  "summary": "Performance summary",
  "ground_truth_revealed": "Persona details",
  "success": true
}
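Once the response arrives, the fields above can be rendered however you like; a minimal plain-text formatter might look like:

```python
def format_analysis(report: dict) -> str:
    """Render the key fields of an /api/analyze response as plain text."""
    lines = [f"Score: {report['score']}/10", report.get("summary", "")]
    lines += [f"- {point}" for point in report.get("learning_points", [])]
    return "\n".join(line for line in lines if line)
```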

🌐 Deployment

Option 1: Local Network

Run the server and access from other devices on your network:

uvicorn main:app --host 0.0.0.0 --port 8000

Access via: http://<your-ip>:8000

Option 2: Cloud Deployment

Backend (Hugging Face Spaces, Railway, Render, etc.):

  1. Push your code to GitHub
  2. Connect your repository to the platform
  3. Set environment variables
  4. Deploy!

Frontend (GitHub Pages, Netlify, Vercel):

  1. Update API_CONFIG.BASE_URL in consts.js
  2. Deploy the frontend folder
  3. Configure CORS on backend

Option 3: Docker

Create a Dockerfile:

FROM python:3.9-slim
WORKDIR /app
COPY backend/ ./backend/
COPY frontend/ ./frontend/
COPY storage/ ./storage/
RUN pip install fastapi uvicorn pydantic
EXPOSE 8000
CMD ["uvicorn", "backend.main:app", "--host", "0.0.0.0", "--port", "8000"]

Build and run:

docker build -t csr-trainer-bot .
docker run -p 8000:8000 csr-trainer-bot

πŸ§ͺ Testing

Manual Testing

  1. Test each persona type
  2. Try different conversation lengths
  3. Test edge cases (empty messages, very long messages)
  4. Verify analysis accuracy

Automated Testing

Add pytest tests in backend/tests/:

pip install pytest
pytest backend/tests/
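A test file can be as small as the sketch below. The helper here is a stand-in; in practice you would import real functions from `backend/services/`:

```python
# backend/tests/test_analyzer.py -- illustrative; adapt imports to your modules.

def clamp_score(score: float) -> float:
    """Stand-in for an analyzer helper: keep scores on the 0-10 scale."""
    return max(0.0, min(10.0, score))

def test_score_is_clamped_to_scale():
    assert clamp_score(12.3) == 10.0
    assert clamp_score(-1.0) == 0.0
    assert clamp_score(7.5) == 7.5
```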

🀝 Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

πŸ“ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Built with FastAPI
  • UI inspired by modern chat interfaces
  • Designed for educational purposes

πŸ“ž Support

For issues, questions, or suggestions:

  • Open an issue on GitHub
  • Check existing documentation
  • Review the code comments

πŸ—ΊοΈ Roadmap

  • Add more persona types
  • Implement voice chat simulation
  • Add multi-language support
  • Create admin dashboard for trainers
  • Add progress tracking over time
  • Implement team leaderboards
  • Export analysis reports as PDF
  • Add video tutorials

πŸ“ˆ Performance Tips

  1. For faster responses: Use local LLM or cached responses
  2. For better analysis: Fine-tune the analyzer prompts
  3. For scalability: Use Redis for session storage
  4. For production: Add rate limiting and authentication
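For the rate-limiting tip, a simple in-process token bucket is often enough before reaching for middleware. A sketch (per-client tracking and persistence left out):

```python
import time

class TokenBucket:
    """Minimal rate limiter: refills `rate` tokens/second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; return False when the caller should back off."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In a FastAPI app you would keep one bucket per client (e.g. keyed by IP) and return HTTP 429 when `allow()` is False.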

πŸ”’ Security Notes

  • This is a training tool, not production-ready
  • Add authentication before deploying publicly
  • Sanitize user inputs
  • Use environment variables for secrets
  • Enable HTTPS in production
  • Implement rate limiting

Built with ❀️ for customer service excellence
