# AI Chair Occupancy Analytics Platform

Real-time chair occupancy detection and analytics system powered by YOLOv11 and DeepSort tracking. Designed for workspace optimization, facility management, and occupancy monitoring.
## Table of Contents

- Overview
- Key Features
- Screenshots
- Architecture
- Quick Start
- Installation
- Usage
- API Reference
- Configuration
- Project Structure
- Advanced Features
- Testing
- Cleanup & Maintenance
- Deployment
- Troubleshooting
- Contributing
- License
## Overview

The AI Chair Occupancy Analytics Platform is an enterprise-grade solution for monitoring and analyzing chair usage patterns in real time. It uses state-of-the-art computer vision and deep learning techniques to:
- Detect and track chairs and people in video streams
- Analyze occupancy patterns across time and space
- Generate insights on space utilization and peak usage periods
- Support multi-camera setups with intelligent deduplication
- Provide real-time WebSocket streams and RESTful APIs
### Use Cases

- Office Space Management: Optimize desk allocation and workspace planning
- Library Analytics: Monitor study room and seating availability
- Healthcare Facilities: Track waiting room occupancy
- Educational Institutions: Analyze classroom and cafeteria usage
- Retail & Hospitality: Monitor seating areas and customer flow
## Key Features

- YOLOv11 Object Detection: State-of-the-art real-time detection of chairs and people
- DeepSort Tracking: Advanced person re-identification across frames
- Multi-Camera Support: Unified analytics with automatic chair deduplication
- Real-Time Streaming: WebSocket support for live video and statistics
- RESTful API: Complete API for integration with external systems
- Web Dashboard: Modern glassmorphism UI with interactive visualizations
- Batch Processing: Asynchronous video analysis with progress tracking
- Database Persistence: SQLite storage for historical analytics
### Smart Tracking
- Chair position interpolation during occlusion
- Motion blur detection and adaptive enhancement
- Reflection detection and filtering
- Person-chair zone-based occupancy detection
### Comprehensive Analytics
- Frame-by-frame occupancy rates
- Per-person usage metrics (total time, chairs used, session count)
- Per-chair metrics (total usage, unique users)
- Peak and low activity window detection
- Interaction ledger with complete session history
### Multi-Camera Intelligence
- Camera priority zones for overlapping views
- Automatic duplicate chair resolution
- Conflict-free unified occupancy counting
- Configurable overlap regions
### Production Ready
- Docker containerization with GPU support
- Environment-based configuration
- Automated cleanup utilities
- Comprehensive test suite
- Health check endpoints
## Architecture

### System Overview

```mermaid
graph TB
subgraph "Client Layer"
WEB[Web Dashboard<br/>HTML/CSS/JS]
API_CLIENT[External API Clients]
WS_CLIENT[WebSocket Clients]
end
subgraph "API Layer"
FASTAPI[FastAPI Application<br/>app.py]
WS[WebSocket Handler<br/>websocket_handler.py]
MULTI[Multi-Camera API<br/>api/multi_camera.py]
end
subgraph "Service Layer"
PROCESSOR[Frame Processor<br/>services/processor.py]
STREAM[Stream Generator<br/>services/stream.py]
ANALYTICS[Analytics Service<br/>services/analytics.py]
end
subgraph "ML Pipeline"
DETECTOR[YOLO Detector<br/>myutils/detector.py]
TRACKER[DeepSort Tracker<br/>myutils/tracker.py]
PROCESS[Video Processor<br/>process_video.py]
end
subgraph "Data Layer"
DB[(SQLite Database<br/>analysis.db)]
FILES[File Storage<br/>uploads/outputs/]
CONFIG[Configuration<br/>config.py]
end
WEB --> FASTAPI
API_CLIENT --> FASTAPI
WS_CLIENT --> WS
FASTAPI --> PROCESSOR
FASTAPI --> MULTI
FASTAPI --> DB
WS --> PROCESSOR
MULTI --> PROCESS
PROCESSOR --> PROCESS
STREAM --> PROCESS
PROCESS --> DETECTOR
PROCESS --> TRACKER
PROCESS --> ANALYTICS
PROCESS --> FILES
ANALYTICS --> DB
CONFIG -.-> FASTAPI
CONFIG -.-> PROCESS
style WEB fill:#e1f5ff
style FASTAPI fill:#b3e5fc
style PROCESS fill:#81d4fa
style DETECTOR fill:#4fc3f7
style DB fill:#fff59d
```

### Processing Pipeline

```mermaid
flowchart LR
subgraph INPUT[Input Sources]
VIDEO[Video File]
WEBCAM[Webcam]
RTSP[RTSP Stream]
end
subgraph DETECTION[Detection Phase]
YOLO[YOLOv11 Model<br/>Chair & Person Detection]
BLUR[Motion Blur<br/>Detection]
REFLECT[Reflection<br/>Filtering]
end
subgraph TRACKING[Tracking Phase]
DEEPSORT[DeepSort<br/>Person Re-ID]
CHAIR_SIG[Chair Signature<br/>Matching]
INTERPOLATE[Position<br/>Interpolation]
end
subgraph ANALYSIS[Analysis Phase]
PROXIMITY[Proximity<br/>Calculation]
OCCUPANCY[Occupancy<br/>Status Update]
ZONES[Zone-based<br/>Detection]
end
subgraph OUTPUT[Output Generation]
ANNOTATE[Frame<br/>Annotation]
METRICS[Metrics<br/>Calculation]
VIDEO_OUT[Processed Video]
JSON[JSON Results]
end
INPUT --> DETECTION
DETECTION --> TRACKING
TRACKING --> ANALYSIS
ANALYSIS --> OUTPUT
BLUR -.filter.-> TRACKING
REFLECT -.filter.-> TRACKING
style INPUT fill:#e8f5e9
style DETECTION fill:#fff9c4
style TRACKING fill:#b3e5fc
style ANALYSIS fill:#f8bbd0
style OUTPUT fill:#d1c4e9
```

### Multi-Camera Architecture

```mermaid
graph TB
subgraph CAMERAS[Camera Feeds]
CAM1[Camera 1<br/>Priority: High]
CAM2[Camera 2<br/>Priority: Medium]
CAM3[Camera 3<br/>Priority: Low]
end
subgraph PROCESSING[Individual Processing]
PROC1[Processor 1]
PROC2[Processor 2]
PROC3[Processor 3]
end
subgraph MANAGER[Multi-Camera Manager]
ZONES[Zone Authority<br/>Checker]
DEDUP[Chair<br/>Deduplication]
CONFLICT[Conflict<br/>Resolution]
UNIFIED[Unified<br/>Statistics]
end
subgraph STORAGE[Data Storage]
SETUP_DB[(Setup Config<br/>Database)]
RESULTS[Merged<br/>Results]
end
CAM1 --> PROC1
CAM2 --> PROC2
CAM3 --> PROC3
PROC1 --> ZONES
PROC2 --> ZONES
PROC3 --> ZONES
ZONES --> DEDUP
DEDUP --> CONFLICT
CONFLICT --> UNIFIED
SETUP_DB -.config.-> ZONES
UNIFIED --> RESULTS
style CAM1 fill:#c8e6c9
style CAM2 fill:#fff9c4
style CAM3 fill:#ffccbc
style MANAGER fill:#b3e5fc
style RESULTS fill:#d1c4e9
```

### Video Processing Sequence

```mermaid
sequenceDiagram
participant Client
participant FastAPI
participant Processor
participant YOLO
participant DeepSort
participant Database
participant FileSystem
Client->>FastAPI: Upload Video
FastAPI->>FileSystem: Save to uploads/
FastAPI->>Processor: process_video_for_api()
loop For Each Frame
Processor->>YOLO: Detect Objects
YOLO-->>Processor: Chairs & People
Processor->>DeepSort: Track People
DeepSort-->>Processor: Person IDs
Processor->>Processor: Calculate Occupancy
Processor->>Client: WebSocket Progress Update
end
Processor->>FileSystem: Save Processed Video
Processor->>FileSystem: Save JSON Results
Processor-->>FastAPI: Processing Complete
FastAPI->>Database: Save Summary
Database-->>FastAPI: Confirmation
FastAPI-->>Client: Success Response
```

### Component Relationships

```mermaid
graph LR
subgraph Frontend
HTML[index.html<br/>Dashboard UI]
end
subgraph Backend
APP[app.py<br/>FastAPI Server]
WS_H[websocket_handler.py<br/>Real-time Updates]
end
subgraph Services
PROC_S[processor.py<br/>Frame Generator]
STREAM_S[stream.py<br/>MJPEG Stream]
ANAL_S[analytics.py<br/>Statistics]
end
subgraph Core
PV[process_video.py<br/>Main Pipeline]
MCM[MultiCameraManager<br/>Deduplication]
COT[ChairOccupancyTracker<br/>Tracking Logic]
end
subgraph Utils
DET[detector.py<br/>YOLO Wrapper]
TRK[tracker.py<br/>DeepSort Utils]
PANEL[panel.py<br/>Overlay Panel]
end
HTML <-->|REST/WS| APP
HTML <-->|WebSocket| WS_H
APP --> PROC_S
APP --> STREAM_S
WS_H --> PROC_S
PROC_S --> PV
STREAM_S --> PV
ANAL_S --> PV
PV --> MCM
PV --> COT
COT --> DET
COT --> TRK
PV --> PANEL
style HTML fill:#e1f5ff
style APP fill:#b3e5fc
style PV fill:#81d4fa
style COT fill:#4fc3f7
```

## Quick Start

### Prerequisites

- Python 3.10+
- pip package manager
- CUDA (optional, for GPU acceleration)
- Docker (optional, for containerized deployment)
### Run Locally

```bash
# Clone the repository
git clone <repository-url>
cd chair_occupancy_project
# Create virtual environment
python -m venv AI_Chair_Occupancy_Analytics_env
source AI_Chair_Occupancy_Analytics_env/bin/activate # On Windows: AI_Chair_Occupancy_Analytics_env\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Run the application
python app.py
# Open your browser
# Navigate to http://localhost:8000
```

The YOLO model (yolo11m.pt) will be downloaded automatically on first run if not present.

### Run with Docker

```bash
# With GPU support (NVIDIA)
docker-compose up --build
# Without GPU (CPU only)
docker-compose -f docker-compose.cpu.yml up --build
# Access the application
# Navigate to http://localhost:8000
```

## Installation

### Manual Installation

- Clone the repository:

```bash
git clone <repository-url>
cd chair_occupancy_project
```

- Set up a Python virtual environment:

```bash
python -m venv AI_Chair_Occupancy_Analytics_env
# Activate on Linux/Mac
source AI_Chair_Occupancy_Analytics_env/bin/activate
# Activate on Windows
AI_Chair_Occupancy_Analytics_env\Scripts\activate
```

- Install dependencies.

For systems with a CUDA GPU:

```bash
pip install -r requirements.txt
```

For CPU-only systems:

```bash
# Edit requirements.txt to remove the --extra-index-url line
pip install -r requirements.txt
```

- Download the YOLO model (optional; it auto-downloads on first run):

```bash
# The yolo11m.pt model is included in the repo
# Or download manually from Ultralytics
```

### Docker Installation

- Prerequisites: Install Docker and Docker Compose
- For GPU support: Install NVIDIA Container Toolkit
- Build and run:

```bash
# GPU-enabled deployment
docker-compose up --build -d
# CPU-only deployment
docker-compose -f docker-compose.cpu.yml up --build -d
```

## Usage

### Web Dashboard

- Access the dashboard at http://localhost:8000
- Upload a video using the file input
- Configure parameters (optional):
  - Proximity Threshold (10-500 px)
  - Occupancy Frames Threshold (1-60 frames)
  - Motion Blur Threshold (10-500)
- Click "Process Video" and monitor progress
- View results, including:
  - Annotated output video
  - Occupancy rate over time chart
  - Detailed analytics and interaction ledger
### Live Streaming

Access real-time occupancy detection from a webcam or RTSP stream:

```
# View live stream endpoint
GET /api/stream/{camera_id}?source=0
# Example: Webcam stream
http://localhost:8000/api/stream/1?source=0
# Example: RTSP stream
http://localhost:8000/api/stream/1?source=rtsp://user:pass@192.168.1.100:554/stream
```

### Multi-Camera Setup

- Configure cameras via the API:

```
POST /api/multi-camera/setup
Content-Type: application/json
{
  "name": "Office Floor 1",
  "cameras": [
    {
      "camera_id": 1,
      "priority": 10,
      "zones": [
        {"x1": 0, "y1": 0, "x2": 1920, "y2": 1080}
      ],
      "video_path": "/path/to/camera1.mp4"
    },
    {
      "camera_id": 2,
      "priority": 5,
      "zones": [
        {"x1": 0, "y1": 0, "x2": 1920, "y2": 1080}
      ],
      "video_path": "/path/to/camera2.mp4"
    }
  ],
  "overlap_regions": [
    {
      "region": {"x1": 800, "y1": 400, "x2": 1200, "y2": 800},
      "camera_ids": [1, 2]
    }
  ]
}
```

- Process the multi-camera setup:

```
POST /api/multi-camera/process/{setup_id}
```

### Batch Processing (Python)

For batch processing without the web interface:

```python
from process_video import process_video_for_api
settings = {
    'proximity_threshold': 80,
    'occupancy_frames_threshold': 5,
    'motion_blur_threshold': 100
}

results = process_video_for_api(
    input_path="input.mp4",
    output_path="output.mp4",
    settings=settings
)
print(f"Average occupancy: {results['average_occupancy_rate']:.2f}%")
```

## API Reference

### Process Video

```
POST /process-video
Content-Type: multipart/form-data
file: <video_file>
proximity_threshold: 80 (optional)
occupancy_frames_threshold: 5 (optional)
motion_blur_threshold: 100 (optional)
```

Response:

```
{
  "success": true,
  "message": "Video processed successfully",
  "file_id": "uuid-here",
  "output_video_url": "/outputs/uuid_output.mp4",
  "results_api_url": "/api/results/uuid",
  "processing_results": { /* full analytics */ }
}
```
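For illustration, a minimal client sketch using the `requests` library with the multipart fields documented above (error handling elided):

```python
# Hypothetical client sketch for POST /process-video,
# using the fields documented in this README.
import requests

with open("input.mp4", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/process-video",
        files={"file": f},
        data={"proximity_threshold": 80, "occupancy_frames_threshold": 5},
    )
resp.raise_for_status()
body = resp.json()
print(body["output_video_url"], body["results_api_url"])
```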
### Get Results

```
GET /api/results/{file_id}
```

Response:

```
{
  "average_occupancy_rate": 65.4,
  "max_occupied_chairs": 12,
  "total_chairs": 15,
  "interaction_ledger": [
    {
      "person_id": 1,
      "chair_id": 3,
      "start_frame": 120,
      "end_frame": 450,
      "duration_seconds": 11.0
    }
  ],
  "per_person_stats": { /* ... */ },
  "per_chair_stats": { /* ... */ },
  "peak_activity_window": { /* ... */ },
  "low_activity_window": { /* ... */ }
}
```

### History & Downloads

```
GET /api/history?skip=0&limit=100
DELETE /api/history/{file_id}
GET /download/{file_id}
```

### Multi-Camera Endpoints

```
POST /api/multi-camera/setup
Content-Type: application/json
{
  "name": "Setup Name",
  "cameras": [ /* camera configs */ ],
  "overlap_regions": [ /* overlap definitions */ ]
}
```

```
GET /api/multi-camera/setups
GET /api/multi-camera/setup/{setup_id}
DELETE /api/multi-camera/setup/{setup_id}
POST /api/multi-camera/process/{setup_id}
```

### Streaming Endpoints

```
GET /api/stream/{camera_id}?source=<source>
```

source is a camera index (0, 1, ...) or an RTSP URL. Returns an MJPEG stream suitable for an `<img>` tag.
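The MJPEG stream can also be consumed programmatically; here is a minimal sketch using OpenCV, which reads HTTP/MJPEG sources via its FFmpeg backend (an illustration, not part of the project's API):

```python
# Sketch: read the MJPEG HTTP stream with OpenCV's VideoCapture.
import cv2

cap = cv2.VideoCapture("http://localhost:8000/api/stream/1?source=0")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("Live occupancy", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```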
```
GET /api/stream-status
GET /api/recordings
GET /api/recordings/{filename}
```

### WebSocket Endpoints

Processing progress:

```javascript
const ws = new WebSocket(`ws://localhost:8000/ws/progress/${fileId}`);
ws.onmessage = (event) => {
  const progress = JSON.parse(event.data);
  console.log(`Progress: ${progress.percent}%`);
};
```

Live statistics:

```javascript
const ws = new WebSocket(`ws://localhost:8000/ws/live-stats/${cameraId}`);
ws.onmessage = (event) => {
  const stats = JSON.parse(event.data);
  console.log(`Occupancy: ${stats.occupancy_rate}%`);
};
```

### Health Check

```
GET /health
```

Response:

```json
{
  "status": "healthy",
  "version": "2.1.0"
}
```

## Configuration

### Environment Variables (.env)

Create a .env file in the project root:

```bash
# Database
DATABASE_URL=sqlite:///./analysis.db
# Directories
UPLOAD_DIR=uploads
OUTPUT_DIR=outputs
# YOLO Model
MODEL_PATH=yolo11m.pt
# Processing defaults
DEFAULT_PROXIMITY_THRESHOLD=80
DEFAULT_OCCUPANCY_FRAMES=5
DEFAULT_MOTION_BLUR_THRESHOLD=100
# Limits
MAX_FILE_SIZE=104857600 # 100MB in bytes
MAX_WORKERS=2
# API Security
API_KEY=your-secret-api-key-here
```

### Processing Parameters

| Parameter | Range | Default | Description |
|---|---|---|---|
| `proximity_threshold` | 10-500 | 80 | Maximum distance (pixels) between person and chair to consider the chair occupied |
| `occupancy_frames_threshold` | 1-60 | 5 | Number of consecutive frames required to confirm occupancy |
| `motion_blur_threshold` | 10-500 | 100 | Laplacian variance threshold for blur detection (lower = more sensitive) |
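To make these parameters concrete, here is a minimal, hypothetical sketch of how a proximity check and consecutive-frame confirmation can combine; the project's actual logic lives in `ChairOccupancyTracker`:

```python
# Illustrative sketch only: combines proximity_threshold with
# occupancy_frames_threshold as described in the table above.
import math

PROXIMITY_THRESHOLD = 80        # pixels (default)
OCCUPANCY_FRAMES_THRESHOLD = 5  # consecutive frames (default)

consecutive = {}  # chair_id -> count of consecutive "person nearby" frames

def chair_center(bbox):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def update_occupancy(chair_id, chair_bbox, person_centers):
    """Report a chair occupied only after enough consecutive near-frames."""
    near = any(
        math.dist(chair_center(chair_bbox), p) <= PROXIMITY_THRESHOLD
        for p in person_centers
    )
    consecutive[chair_id] = consecutive.get(chair_id, 0) + 1 if near else 0
    return consecutive[chair_id] >= OCCUPANCY_FRAMES_THRESHOLD
```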
### Environment Variable Reference

| Variable | Default | Description |
|---|---|---|
| `DATABASE_URL` | `sqlite:///./analysis.db` | SQLAlchemy database connection string |
| `UPLOAD_DIR` | `uploads` | Directory for temporary uploaded files |
| `OUTPUT_DIR` | `outputs` | Directory for processed videos and results |
| `MODEL_PATH` | `yolo11m.pt` | Path to YOLO model weights |
| `MAX_FILE_SIZE` | `104857600` | Maximum upload size in bytes (100MB) |
| `MAX_WORKERS` | `2` | ThreadPoolExecutor workers for async processing |
| `API_KEY` | `dev-key-change-me` | API key for protected endpoints (change in production!) |
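For illustration, these variables can be read with stdlib defaults; this is a sketch only, since the project's actual loading logic lives in config.py:

```python
# Hypothetical configuration loading; see config.py for the real version.
import os

DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///./analysis.db")
UPLOAD_DIR = os.environ.get("UPLOAD_DIR", "uploads")
OUTPUT_DIR = os.environ.get("OUTPUT_DIR", "outputs")
MODEL_PATH = os.environ.get("MODEL_PATH", "yolo11m.pt")
MAX_FILE_SIZE = int(os.environ.get("MAX_FILE_SIZE", 104857600))  # 100MB
MAX_WORKERS = int(os.environ.get("MAX_WORKERS", 2))
API_KEY = os.environ.get("API_KEY", "dev-key-change-me")
```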
### Docker Configuration

Edit docker-compose.yml to customize:

```yaml
environment:
  - API_KEY=${API_KEY:-your-production-key}
  - MAX_WORKERS=4
  - DEFAULT_PROXIMITY_THRESHOLD=100
volumes:
  - ./uploads:/app/uploads
  - ./outputs:/app/outputs
  - ./analysis.db:/app/analysis.db
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
```

## Project Structure

```
chair_occupancy_project/
│
├── app.py                  # Main FastAPI application and API endpoints
├── config.py               # Configuration management and environment variables
├── database.py             # SQLAlchemy database setup and session management
├── schemas.py              # Pydantic schemas for API validation
├── sql_models.py           # SQLAlchemy ORM models for database tables
├── process_video.py        # Core ML pipeline (YOLO + DeepSort + Analytics)
├── websocket_handler.py    # WebSocket endpoints for real-time updates
├── cleanup.py              # Utility script for managing disk space
├── index.html              # Web dashboard with glassmorphism UI
│
├── api/                    # API route modules
│   ├── __init__.py
│   └── multi_camera.py     # Multi-camera configuration and processing endpoints
│
├── services/               # Business logic and processing services
│   ├── __init__.py
│   ├── processor.py        # Frame processing generator (reusable)
│   ├── stream.py           # Live streaming MJPEG generator
│   └── analytics.py        # Analytics calculation utilities
│
├── myutils/                # Utility modules and helpers
│   ├── detector.py         # YOLO detector wrapper
│   ├── tracker.py          # DeepSort format conversion utilities
│   └── panel.py            # Video overlay panel rendering
│
├── tests/                  # Test suite
│   ├── __init__.py
│   ├── conftest.py         # Pytest fixtures and configuration
│   ├── test_api.py         # API endpoint tests
│   └── test_occupancy.py   # Unit tests for core logic
│
├── data/                   # Data directory
│   └── raw_videos/         # Sample videos for testing
│
├── uploads/                # Temporary upload directory (gitignored)
├── outputs/                # Processed videos and results (gitignored)
│   └── recordings/         # Live stream recordings
│
├── requirements.txt        # Python dependencies (with GPU support)
├── requirements-docker.txt # Minimal dependencies for Docker
│
├── Dockerfile              # Multi-stage Docker image definition
├── docker-compose.yml      # Docker Compose with GPU support
├── docker-compose.cpu.yml  # Docker Compose for CPU-only deployment
│
├── .gitignore              # Git ignore patterns
├── .dockerignore           # Docker build context exclusions
│
├── analysis.db             # SQLite database (auto-created)
├── yolo11m.pt              # YOLO model weights (auto-downloaded)
│
└── README.md               # This file
```
### Key Modules

- `app.py`: FastAPI application with all REST endpoints, middleware setup, and dependency injection
- `process_video.py`: Main video processing pipeline containing:
  - `MultiCameraManager`: Handles multi-camera deduplication and conflict resolution
  - `ChairOccupancyTracker`: Core tracking logic with smart occupancy detection
  - `process_video_for_api()`: Main entry point for video analysis
- `websocket_handler.py`: Real-time communication layer with:
  - `ProgressManager`: Tracks video processing progress
  - `LiveStatsManager`: Broadcasts live stream statistics
  - `LiveVideoStreamer`: Manages WebSocket video streaming with recording
- `services/processor.py`: Reusable frame processing generator used by both batch and streaming
- `services/stream.py`: MJPEG stream generator for live camera feeds
- `services/analytics.py`: Analytics calculation and aggregation logic
- `myutils/detector.py`: Wrapper around YOLO for a consistent detection interface
- `myutils/tracker.py`: DeepSort format conversion and tracking utilities
- `myutils/panel.py`: Renders overlay panels on video frames
## Advanced Features

The system includes sophisticated tracking capabilities:

### Chair Position Interpolation

When a chair is temporarily occluded, its position is interpolated from recent history:

```python
# Automatically interpolates missing chair positions
interpolated_bbox = tracker.interpolate_chair_position(chair_id)
```

### Chair Signature Matching

Uses color histograms and spatial features to re-identify chairs after occlusion:

```python
signature = tracker.extract_chair_signature(frame, bbox)
similarity = tracker.compare_chair_signatures(sig1, sig2)
```

### Reflection Filtering

Detects and filters false positives from reflective surfaces:

```python
reflection_zones = tracker.detect_reflections(frame)
if not tracker.is_in_reflection_zone(detection_bbox):
    # Process detection
```

### Motion Blur Handling

Adapts processing when rapid camera or subject movement is detected:

```python
blur_score = tracker.detect_motion_blur(frame)
if blur_score < threshold:
    # Apply interpolation or skip frame
```

### Zone-Based Occupancy Detection

Instead of simple center-point proximity, the system divides chairs into zones:

```python
chair_zones = tracker.get_chair_zones(chair_bbox) # {'seat': bbox, 'back': bbox}
occupied_zones = tracker.check_zone_occupancy(person_center, chair_zones)
```

This enables detection of (a minimal sketch follows the list):
- Person sitting on chair (seat zone occupied)
- Person leaning on chair (back zone occupied)
- Person standing near chair (proximity but no zone occupation)
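Here is a minimal, hypothetical sketch of this zone logic, assuming (x1, y1, x2, y2) bounding boxes; the real implementation lives in `ChairOccupancyTracker`:

```python
# Hypothetical sketch: split a chair bbox into back/seat zones and
# test which zone (if any) contains a person's center point.

def get_chair_zones(chair_bbox):
    """Split an (x1, y1, x2, y2) chair box into back (top) and seat (bottom)."""
    x1, y1, x2, y2 = chair_bbox
    mid_y = (y1 + y2) / 2
    return {"back": (x1, y1, x2, mid_y), "seat": (x1, mid_y, x2, y2)}

def check_zone_occupancy(person_center, chair_zones):
    """Return the names of zones containing the person's center point."""
    px, py = person_center
    return [name for name, (x1, y1, x2, y2) in chair_zones.items()
            if x1 <= px <= x2 and y1 <= py <= y2]

zones = get_chair_zones((100, 200, 220, 380))
print(check_zone_occupancy((160, 350), zones))  # ['seat']
```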
### Multi-Camera Deduplication

When multiple cameras view the same space:

```python
multi_cam_manager = MultiCameraManager()
# Set camera priorities and zones
multi_cam_manager.set_camera_zones(camera_id=1, zones=[bbox], priority=10)
multi_cam_manager.set_camera_zones(camera_id=2, zones=[bbox], priority=5)
# Define overlapping regions
multi_cam_manager.add_overlap_region(region=bbox, camera_ids=[1, 2])
# Resolve conflicts and get unified count
unified_stats = multi_cam_manager.get_unified_occupancy_count(all_camera_data)
```

Conflict Resolution Strategy (an illustrative sketch follows this list):
- Check if chair is in camera's authoritative zone
- If in overlap region, higher priority camera wins
- Spatial proximity used to detect duplicate chairs across cameras
- Global chair registry maintains consistent IDs
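As an illustration only, here is a sketch of the overlap-priority rule described above; the names and data shapes are assumptions, not the project's actual API:

```python
# Hypothetical sketch of the overlap-priority rule: inside an overlap
# region, only the highest-priority camera's detections are kept.

def point_in_region(point, region):
    """region is (x1, y1, x2, y2) in a shared coordinate frame."""
    x, y = point
    x1, y1, x2, y2 = region
    return x1 <= x <= x2 and y1 <= y <= y2

def resolve_overlap(detections, overlap_region, camera_priority):
    """detections: list of (camera_id, chair_center) pairs."""
    outside = [d for d in detections
               if not point_in_region(d[1], overlap_region)]
    inside = [d for d in detections
              if point_in_region(d[1], overlap_region)]
    if inside:
        best = max(inside, key=lambda d: camera_priority[d[0]])[0]
        inside = [d for d in inside if d[0] == best]
    return outside + inside

priority = {1: 10, 2: 5}
dets = [(1, (900, 500)), (2, (905, 498)), (2, (300, 300))]
print(resolve_overlap(dets, (800, 400, 1200, 800), priority))
# [(2, (300, 300)), (1, (900, 500))]
```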
## Testing

### Running Tests

```bash
# Install test dependencies
pip install pytest pytest-asyncio
# Run all tests
pytest tests/ -v
# Run with coverage
pytest tests/ -v --cov=. --cov-report=html
# Run specific test modules
pytest tests/test_api.py -v
pytest tests/test_occupancy.py -v
# Run specific test class
pytest tests/test_occupancy.py::TestConfigValidation -v
# Run specific test
pytest tests/test_api.py::test_health_check -v
```

### Test Structure

```
tests/
├── conftest.py          # Shared fixtures
├── test_api.py          # API endpoint tests
│   ├── test_health_check()
│   ├── test_process_video_endpoint()
│   ├── test_get_analysis_history()
│   ├── test_multi_camera_setup()
│   └── ...
└── test_occupancy.py    # Core logic tests
    ├── TestConfigValidation
    ├── TestChairOccupancyTracker
    ├── TestMultiCameraManager
    └── ...
```
### Writing Tests

Example test for an API endpoint:

```python
from fastapi.testclient import TestClient
from app import app
client = TestClient(app)
def test_process_video():
    with open("test_video.mp4", "rb") as video:
        response = client.post(
            "/process-video",
            files={"file": video},
            data={"proximity_threshold": 80}
        )
    assert response.status_code == 200
    assert response.json()["success"] is True
```

## Cleanup & Maintenance

### Automated Cleanup

The project includes a Docker service for automatic cleanup:

```yaml
# In docker-compose.yml
cleanup-cron:
  command: |
    while true; do
      python cleanup.py --retention-days 7 --max-size-mb 5000
      sleep 86400
    done
```

### Manual Cleanup

```bash
# Preview what would be deleted (dry run)
python cleanup.py --dry-run
# Delete files older than 7 days
python cleanup.py --retention-days 7
# Keep directory under 5GB total
python cleanup.py --max-size-mb 5000
# Combine both strategies
python cleanup.py --retention-days 7 --max-size-mb 5000
```

The OutputCleaner class implements two strategies:
- Age-Based Cleanup: Removes files older than `retention_days`
- Size-Based Cleanup: Removes the oldest files when total size exceeds `max_size_mb`
Both strategies operate on file pairs (video + JSON results) to maintain consistency.
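As an illustration of the pairing behavior, here is a minimal age-based sketch; the JSON naming convention is an assumption, and the real implementation is the `OutputCleaner` class in cleanup.py:

```python
# Hypothetical sketch of age-based cleanup over (video, JSON) pairs.
import time
from pathlib import Path

def cleanup_by_age(output_dir: str, retention_days: int, dry_run: bool = True):
    cutoff = time.time() - retention_days * 86400
    for video in Path(output_dir).glob("*_output.mp4"):
        if video.stat().st_mtime < cutoff:
            results = video.with_suffix(".json")  # assumed naming convention
            for path in (video, results):
                if path.exists():
                    print(f"{'Would delete' if dry_run else 'Deleting'}: {path}")
                    if not dry_run:
                        path.unlink()

cleanup_by_age("outputs", retention_days=7)
```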
### Database Maintenance

```bash
# Backup database
cp analysis.db analysis.db.backup
# Vacuum database (reclaim space)
sqlite3 analysis.db "VACUUM;"
# View database statistics
sqlite3 analysis.db "SELECT COUNT(*) FROM analysis_results;"- Change
API_KEYfrom default value - Set appropriate
MAX_FILE_SIZEandMAX_WORKERS - Configure persistent volumes for database and outputs
- Set up automated backups for
analysis.db - Configure cleanup cron job with appropriate retention
- Set up monitoring and logging
- Enable HTTPS with reverse proxy (nginx/traefik)
- Configure firewall rules (only expose port 8000 or reverse proxy port)
- Set up container restart policies
- Review and adjust resource limits
### Docker Production Deployment

- Create a production environment file (`.env.prod`):

```bash
API_KEY=your-very-secure-random-key-here
DATABASE_URL=sqlite:///./analysis.db
MAX_WORKERS=4
MAX_FILE_SIZE=524288000 # 500MB
DEFAULT_PROXIMITY_THRESHOLD=80
```

- Update docker-compose.yml:

```yaml
version: '3.8'

services:
  chair-analytics:
    build: .
    container_name: chair_analytics_prod
    ports:
      - "8000:8000"
    volumes:
      - /var/chair_analytics/uploads:/app/uploads
      - /var/chair_analytics/outputs:/app/outputs
      - /var/chair_analytics/analysis.db:/app/analysis.db
    env_file:
      - .env.prod
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 8G
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  cleanup-cron:
    image: chair-occupancy-app
    container_name: chair_cleanup_prod
    command: |
      sh -c "while true; do
        python cleanup.py --retention-days 30 --max-size-mb 100000
        sleep 86400
      done"
    volumes:
      - /var/chair_analytics/outputs:/app/outputs
    restart: unless-stopped
```

- Deploy with Docker Compose:

```bash
docker-compose up -d --build
```

- Set up an nginx reverse proxy (optional but recommended):

```nginx
server {
    listen 80;
    server_name your-domain.com;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /ws {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    client_max_body_size 500M;
}
```

### Cloud Deployment

AWS EC2:

```bash
# Launch EC2 instance with GPU (p3.2xlarge or similar)
# Install NVIDIA drivers and Docker
# Follow Docker deployment steps above
# Set up CloudWatch for monitoring
# Configure S3 for long-term storage of processed videos
```

Google Cloud:

```bash
# Create Compute Engine instance with GPU
# Install NVIDIA drivers
# Deploy using Docker Compose
# Use Cloud Storage for video archives
# Set up Cloud Monitoring
```

Azure:

```bash
# Create Azure VM with GPU
# Install NVIDIA drivers
# Deploy with Docker
# Use Azure Blob Storage for archives
# Configure Azure Monitor
```

### Kubernetes

See the k8s/ directory for Kubernetes manifests (if available), or create:

```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chair-analytics
spec:
  replicas: 2
  selector:
    matchLabels:
      app: chair-analytics
  template:
    metadata:
      labels:
        app: chair-analytics
    spec:
      containers:
        - name: app
          image: chair-occupancy-app:latest
          ports:
            - containerPort: 8000
          env:
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: chair-analytics-secrets
                  key: api-key
          resources:
            limits:
              nvidia.com/gpu: 1
            requests:
              memory: "4Gi"
              cpu: "2"
```

## Troubleshooting

### Model Not Found

Error: `FileNotFoundError: yolo11m.pt not found`
Solution:

```bash
# Model auto-downloads on first run, but if it fails:
# Download manually from Ultralytics
wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolo11m.pt
```

### CUDA Out of Memory

Error: `CUDA out of memory`
Solutions:
- Reduce `MAX_WORKERS` in config
- Process smaller videos
- Use CPU-only mode
- Reduce the YOLO inference image size:

```python
# In myutils/detector.py, adjust:
results = self.model(frame, verbose=False, imgsz=640)  # Reduce from 1280
```

### WebSocket Connection Fails
Solutions:
- Check CORS settings in `app.py`
- Ensure FastAPI is running
- Verify WebSocket URL format
- Check firewall rules
### Video Processing Fails

Possible causes:
- Corrupted video file
- Out of memory
- Disk space full
Solutions:

```bash
# Check disk space
df -h
# Check memory
free -h
# Test with smaller video
# Enable verbose logging in process_video.py
```

### Docker Container Won't Start

Solutions:

```bash
# Check logs
docker logs chair_project
# Common fix: NVIDIA runtime not configured
# Edit /etc/docker/daemon.json:
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
# Restart Docker
sudo systemctl restart docker
```

### Database Locked

Error: `sqlite3.OperationalError: database is locked`
Solutions:
- Ensure only one process accesses the database
- Increase the timeout in `database.py` (see below)
- Consider PostgreSQL for production

```python
# In database.py
engine = create_engine(
    DATABASE_URL,
    connect_args={"check_same_thread": False, "timeout": 30}  # Increase timeout
)
```

### Performance Tuning

Slow processing:

```python
# In process_video.py
# Reduce detection frequency
if frame_number % 2 == 0:  # Detect every 2nd frame
    detections = detector.detect(frame)
```

Too many false positives:

```python
# Increase occupancy confirmation threshold
tracker = ChairOccupancyTracker(
    proximity_threshold=60,         # Stricter proximity
    occupancy_frames_threshold=10   # More frames to confirm
)
```

High memory usage:

```python
# Process the video in chunks
# Release frames immediately
# Use generators instead of loading the entire video
```
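For example, a minimal frame-generator pattern (a sketch, not the project's actual processor):

```python
# Sketch: stream frames with a generator so only one frame
# is held in memory at a time.
import cv2

def frame_generator(video_path):
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame  # caller processes and discards each frame
    finally:
        cap.release()

for i, frame in enumerate(frame_generator("input.mp4")):
    if i % 2 == 0:   # optionally skip frames to save compute
        pass         # run detection/tracking here
```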
## Contributing

Contributions are welcome! Please follow these steps:

- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature-name`
- Commit changes: `git commit -m 'Add new feature'`
- Push to the branch: `git push origin feature/your-feature-name`
- Open a Pull Request
### Development Guidelines

- Follow the PEP 8 style guide
- Add docstrings to all functions and classes
- Write tests for new features
- Update README if adding new functionality
- Ensure all tests pass before submitting PR
### Code Style

```python
# Use type hints
def calculate_distance(point1: tuple, point2: tuple) -> float:
    """Calculate Euclidean distance between two points."""
    pass

# Document complex logic
# Use descriptive variable names
# Keep functions focused and small
```

## License

This project is licensed under the MIT License.
MIT License
Copyright (c) 2024
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
## Support

For issues, questions, or contributions:
- GitHub: Open an issue on GitHub.
- Documentation: This README and inline code documentation
- Email: its.manas.sharma@gmail.com
## Acknowledgments

- Ultralytics YOLO: State-of-the-art object detection
- DeepSort: Multiple object tracking algorithm
- FastAPI: Modern, fast web framework
- OpenCV: Computer vision library
Built for workspace optimization and smart facility management