coursework-sdsdsdasa

https://github.com/5CCSACCA/coursework-sdsdsdasa

Detects human emotions from facial expressions. Combined with the transcript, an LLM records the exchange and returns feedback for later self-improvement in language use.

(Example: you say something rude, people show an angry face, and the feedback is: don't say xxx, say xxx instead.)

This service could be used to improve self-awareness in an interview. By reviewing what you said and HR's reaction, you learn what to say and what not to say.

Future expansion directions: use video instead of still images, capture uncommon emotions alongside the transcript, and use the service to return feedback.

ServiceA: YOLO (image + transcript) -> (emotion + transcript)
ServiceB: BitNet LLM -> (feedback)

Model used: https://github.com/alihassanml/Yolo11-Face-Emotion-Detection
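The two-service split can be sketched in Python. The function names, signatures, and the dummy return values below are illustrative placeholders, not the repository's actual API:

```python
# Illustrative sketch of the ServiceA -> ServiceB pipeline.
# service_a / service_b are hypothetical stand-ins for serviceA.py / serviceB.py.

def service_a(image_path: str, transcript: str) -> dict:
    """ServiceA: run the YOLO emotion detector on a face image.

    A real implementation would load the YOLO model and infer the
    dominant emotion; here a dummy label is returned for illustration.
    """
    emotion = "angry"  # placeholder for the model's prediction
    return {"emotion": emotion, "transcript": transcript}

def service_b(emotion: str, transcript: str) -> str:
    """ServiceB: ask the LLM for feedback on the utterance + emotion."""
    # A real implementation would call the BitNet LLM; this is a stub.
    return (f"Listener appeared {emotion} after you said {transcript!r}; "
            "consider softer phrasing.")

result = service_a("media/image/angry.jpeg", "That idea is terrible.")
feedback = service_b(result["emotion"], result["transcript"])
print(feedback)
```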

Language Feedback SaaS – Phase 2 (Containerization)

This project is part of the 5CCSACCA Cloud Computing for Artificial Intelligence coursework. The goal of Phase 2 is to containerize the system using Docker and Docker Compose so that the prototype can be executed in a reproducible environment.

This README provides:

Instructions for deploying the Phase-2 version of the SaaS

The project structure

Expected input & output of the current inference script (app/main.py)

📁 Project Structure

COURSEWORK-SDSDSDASA/
│
├── app/
│   └── main.py          # Phase 2 entry point
│
├── service/
│   ├── serviceA.py      # Will become the emotion+transcript service
│   └── serviceB.py      # Will become BitNet feedback service
│
├── media/               # Placeholder input files
│   ├── image/
│   └── video/
│
├── output/              # Placeholder output files
│
├── requirements.txt
└── Dockerfile           # Phase 2 container


Language Feedback SaaS (Emotion-aware interview coach)

This project detects human emotions from face images or short videos and pairs them with spoken/written transcripts. A small LLM-based component (BitNet) analyses short utterances and emotion changes and returns actionable feedback to help improve language use and self-awareness in interviews.

Key points in this workspace:

  • Emotion detection model and inference logic in service/serviceA.py (YOLO-based model)
  • BitNet LLM analysis in app/bitnet_llm.py exposed via the API
  • FastAPI app in app/api.py with endpoints for video analysis and LLM analysis
  • Local persistence using SQLite (app.db) and optional Firestore integration for timelines

Project Layout (high level)

/app

  • api.py : FastAPI application exposing endpoints (/analyze, /analyses, /bitnet/analyze)
  • main.py : quick local runner that loads an image and runs serviceA prediction
  • bitnet_llm.py : wrapper for the BitNet LLM analyzer

/service

  • serviceA.py : emotion detection and video analysis functions
  • serviceB.py : supporting module for LLM integration points

/data

  • db.py : SQLAlchemy + SQLite configuration (sqlite:///./app.db)
  • models.py : DB models for Analysis records

/firebase

  • firebase_client.py : Firestore client helper. Uses serviceAccountKey.json if present.

Other files: Dockerfile, docker-compose.yml, requirements.txt, example model weights (yolo11n.pt), and helper scripts.

Requirements / Prerequisites

  • Python 3.10+ (project was tested with Python 3.11 in container)
  • Docker & Docker Compose (optional) for containerized runs
  • (Optional) Firebase service account JSON at firebase/serviceAccountKey.json to enable Firestore persistence

Quick Start — Run locally (development)

  1. Create a virtual environment and install dependencies:
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
  2. Run the quick local runner (loads media/image/angry.jpeg and prints a prediction):
python app/main.py
  3. Run the API server (FastAPI / Uvicorn):
uvicorn app.api:app --reload --host 0.0.0.0 --port 8000

The API will listen by default on http://127.0.0.1:8000.

Quick Start — Run with Docker Compose

Build and start the service (recommended for coursework reproducibility):

docker compose build
docker compose up

The container runs python app/main.py by default and exposes the ports configured in docker-compose.yml (if the compose file is set up to run uvicorn, it will start the API instead).

API Endpoints

The repository exposes a small FastAPI service in app/api.py with these endpoints:

  • POST /analyze — upload a video file for emotion analysis.

    • Form-key: file (multipart file upload)
    • Example:
      curl -X POST "http://127.0.0.1:8000/analyze" -F "file=@/path/to/video.mp4"
  • GET /analyses — list analysis records stored in the local DB.

  • GET /analyses/{analysis_id} — retrieve a single analysis record and timeline.

  • POST /bitnet/analyze — run BitNet LLM analysis on a short utterance.

    • JSON payload: {"text": "<utterance>", "from_emotion": "neutral", "to_emotion": "upset"} (last two fields optional)
    • Example:
      curl -sS -X POST http://127.0.0.1:8000/bitnet/analyze \
          -H "Content-Type: application/json" \
          -d '{"text":"I told them I was disappointed, but I still care.", "from_emotion":"neutral", "to_emotion":"upset"}'
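The same endpoints can be called from Python instead of curl. This is a minimal stdlib-only sketch: it assumes the server from the Quick Start is running on 127.0.0.1:8000, and the helper function names are illustrative, not part of the repository:

```python
# Sketch of a Python client for POST /bitnet/analyze.
# build_bitnet_payload / post_bitnet_analyze are hypothetical helpers
# mirroring the JSON payload documented above.
import json
import urllib.request

def build_bitnet_payload(text, from_emotion=None, to_emotion=None):
    """Build the /bitnet/analyze request body; the emotion fields are optional."""
    payload = {"text": text}
    if from_emotion is not None:
        payload["from_emotion"] = from_emotion
    if to_emotion is not None:
        payload["to_emotion"] = to_emotion
    return payload

def post_bitnet_analyze(base_url="http://127.0.0.1:8000", **kwargs):
    """POST the payload to /bitnet/analyze and return the parsed JSON response."""
    body = json.dumps(build_bitnet_payload(**kwargs)).encode("utf-8")
    req = urllib.request.Request(
        base_url + "/bitnet/analyze",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires the API server to be running):
#   post_bitnet_analyze(text="I told them I was disappointed, but I still care.",
#                       from_emotion="neutral", to_emotion="upset")
```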

Persistence & Firestore

  • Local DB: the project uses SQLite configured at sqlite:///./app.db (see data/db.py). Analysis records are stored via SQLAlchemy models in data/models.py.
  • Firestore: firebase/firebase_client.py will try to initialize Firestore if firebase/serviceAccountKey.json is present. If the file is missing, Firestore is gracefully disabled (the code logs a warning). To enable Firestore, place a valid service account JSON at that path.
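The project's persistence goes through SQLAlchemy (data/db.py, data/models.py), but the underlying pattern is easy to illustrate with the standard library alone. The schema below is a simplified guess at an Analysis record for illustration, not the repository's actual schema:

```python
# Minimal stdlib illustration of the local SQLite persistence pattern.
# The real project uses SQLAlchemy models against sqlite:///./app.db;
# the table and columns here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")  # use "app.db" for a file-backed DB
conn.execute(
    """CREATE TABLE IF NOT EXISTS analyses (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           filename TEXT NOT NULL,
           dominant_emotion TEXT,
           created_at TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)
conn.execute(
    "INSERT INTO analyses (filename, dominant_emotion) VALUES (?, ?)",
    ("interview.mp4", "upset"),
)
conn.commit()

# Listing records, analogous to GET /analyses:
rows = conn.execute(
    "SELECT filename, dominant_emotion FROM analyses"
).fetchall()
print(rows)
```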

Notes & Tips

  • If you see errors initializing Firestore, check that firebase/serviceAccountKey.json is valid. Without it, timeline saving to Firestore will be skipped but the local DB still works.
  • The video analysis may be CPU-intensive depending on the model files (e.g., yolo11n.pt). For production you may want to run on machines with GPU support and adapt the model loader.
  • app/main.py is a simple utility to run a single-image prediction — the API is the main way to run end-to-end analyses.

Next Steps (Suggested)

  • Expand serviceA to support batch processing of video frames
  • Add Docker Compose service that runs uvicorn for the API
  • Add tests for API endpoints and model inference
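The first suggested step, batching video frames for serviceA, could start from a simple chunking helper like the one below. batch_frames is a hypothetical function, not code from the repository, and the integer "frames" stand in for decoded video frames:

```python
# Sketch of batch processing for video frames: group frames into
# fixed-size batches before handing them to the emotion model.
from typing import Iterable, Iterator, List

def batch_frames(frames: Iterable, batch_size: int = 8) -> Iterator[List]:
    """Yield lists of at most batch_size frames from any frame iterable."""
    batch: List = []
    for frame in frames:
        batch.append(frame)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

# Example with dummy integer "frames":
batches = list(batch_frames(range(10), batch_size=4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Batching like this amortizes per-call model overhead; the batch size would need tuning against available CPU or GPU memory.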

Contact

If you need help debugging or want to collaborate, open an issue in the repository or contact the project developer.


Updated to reflect current code: FastAPI endpoints, SQLite persistence, and optional Firestore integration.
