EmpathyBot is an interactive AI chatbot that simulates therapist-like conversations. It leverages Meta’s LLaMA 3.2-1B-Instruct model with integrated sentiment analysis to generate emotionally aware responses. The system features a full-stack implementation using React (frontend) and Flask (backend).
- LLM: LLaMA 3.2-1B-Instruct
- Frontend: React with dynamic chat UI and sentiment feedback
- Backend: Flask API with `transformers`, `torch`, and `datasets`
- Sentiment Analysis: `distilbert-base-uncased-finetuned-sst-2-english`
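As a sketch of how the sentiment step might work (the pipeline usage below is illustrative, not the project's exact code — only the model checkpoint name comes from the list above):

```python
from transformers import pipeline

# Load the DistilBERT checkpoint the project lists for sentiment analysis.
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = sentiment("I had a really rough day at work.")[0]
# result is a dict with a "label" (POSITIVE/NEGATIVE) and a confidence "score"
print(result["label"], round(result["score"], 3))
```

The label and score can then be passed to the LLM prompt so responses are conditioned on the user's emotional state.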
```bash
cd backend
python3 -m venv chatbot_env
source chatbot_env/bin/activate
pip install -r requirements.txt
python app.py
```

The Flask backend will start on http://localhost:5050.
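The chat route in `app.py` presumably looks something like the following minimal sketch (the `/chat` route name, request shape, and placeholder reply are assumptions — the real backend calls the LLaMA model here):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    # Hypothetical request shape: {"message": "<user text>"}
    user_message = request.get_json().get("message", "")
    # Placeholder reply; the real app runs sentiment analysis and the LLM here.
    reply = f"Echo: {user_message}"
    return jsonify({"response": reply})

if __name__ == "__main__":
    app.run(port=5050)
```

The React frontend would POST user messages to this endpoint and render the `response` field in the chat UI.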
```bash
cd frontend
npm install
npm start
```

The frontend runs on http://localhost:3000.
This project includes a full training pipeline using the EmpatheticDialogues dataset:
- `emp_dia.py` – preprocesses and formats the dataset (run this file to pre-process the data)
- `fineTune.py` – trains LLaMA 3.2-1B using the Hugging Face `Trainer`
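As a sketch of the kind of formatting `emp_dia.py` performs (the column names match the EmpatheticDialogues dataset; the prompt template itself is an assumption, not the project's exact code):

```python
def format_example(example: dict) -> dict:
    # EmpatheticDialogues rows carry an emotion label ("context"),
    # a situation description ("prompt"), and the speaker's turn ("utterance").
    prompt = (
        f"Emotion: {example['context']}\n"
        f"Situation: {example['prompt']}\n"
        "Response:"
    )
    return {"prompt": prompt, "response": example["utterance"]}

row = {
    "context": "sad",
    "prompt": "I lost my job last week.",
    "utterance": "That sounds really hard. How are you holding up?",
}
print(format_example(row))
```

A formatter like this can be applied over the whole dataset with `datasets.Dataset.map` before training.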
While the project is fully set up to fine-tune a LLaMA model, training was not completed due to limited GPU availability. The script `fineTune.py` is ready to run on machines with sufficient GPU resources. The chatbot currently runs on the pretrained base model to demonstrate the working system architecture.
Once GPU resources are available, simply run:

```bash
python fineTune.py
```

…and point the bot to the trained model (see below).
You can easily switch between the base model and your own fine-tuned version:
In `llama3.py` (to test the model directly):

```python
bot = EmpathyBot(use_finetuned_model=True)
```

In `app.py` (to use the model in the frontend):

```python
USE_FINE_TUNED_MODEL = True
```

Your trained model should be saved to `./empathetic_model/`.
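One way the switch could be wired internally is a small path resolver (the function name and the base-model Hub ID are assumptions; only the `./empathetic_model/` directory comes from this README):

```python
def resolve_model_path(use_finetuned_model: bool) -> str:
    """Pick which weights EmpathyBot loads."""
    if use_finetuned_model:
        # Assumption: fineTune.py saves its output here, per the README.
        return "./empathetic_model/"
    # Assumption: the pretrained base checkpoint on the Hugging Face Hub.
    return "meta-llama/Llama-3.2-1B-Instruct"

print(resolve_model_path(True))   # → ./empathetic_model/
print(resolve_model_path(False))  # → meta-llama/Llama-3.2-1B-Instruct
```

The resolved path can be passed straight to `AutoModelForCausalLM.from_pretrained`, so both modes share the same loading code.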