Tooling for training and inference of a 2-stage traffic-light recognition pipeline built on AIHub traffic data.
This project separates detection and color recognition into two stages instead of predicting color directly in one step. The goal is better accuracy and easier control in production.
- Stage1 (Detection)
  - Detects `traffic_light` object locations in input images using YOLO.
  - Output: bounding boxes + detection confidence.
- Stage2 (Color Classification)
  - Crops each detected box region (or GT box) and classifies the traffic-light color.
  - Output classes: `red`, `yellow`, `green`, `off`.
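As a sketch of how the two stages hand off at inference time (the real pipeline is `stage2/stage2_infer.py`; the class order, checkpoint format, and preprocessing below are assumptions, while the weight paths are the canonical ones from this README):

```python
# Minimal 2-stage inference sketch: YOLO boxes -> crops -> color classifier.
# Assumptions: class ordering, state_dict checkpoint, plain-resize preprocessing.
import torch
from PIL import Image
from torchvision import models, transforms
from ultralytics import YOLO

CLASSES = ["red", "yellow", "green", "off"]  # assumed ordering

detector = YOLO("weights/stage1_scratch.pt")  # Stage1: traffic_light boxes

classifier = models.mobilenet_v3_large(num_classes=len(CLASSES))  # Stage2
classifier.load_state_dict(torch.load("weights/stage2_best.pth", map_location="cpu"))
classifier.eval()

to_input = transforms.Compose([
    transforms.Resize((224, 224)),  # Stage2 input size
    transforms.ToTensor(),
])

image = Image.open("example.jpg").convert("RGB")
for box in detector(image)[0].boxes:           # boxes + detection confidence
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    crop = image.crop((int(x1), int(y1), int(x2), int(y2)))
    with torch.no_grad():
        logits = classifier(to_input(crop).unsqueeze(0))
    print((x1, y1, x2, y2), float(box.conf), CLASSES[int(logits.argmax())])
```

Splitting the stages this way keeps the detector's thresholds (confidence, NMS IoU) tunable without retraining the color classifier, which is the production-control benefit noted above.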
Dataset:

- Source: AIHub "Traffic Light / Road Sign Recognition Video (Capital Area)"
- Raw format: images + JSON annotations
- Stage1 target: `traffic_light` object detection
- Stage2 target: `traffic_light` (type=car) color classification

Stage2 labels are generated from the traffic-light state attributes in the JSON annotations. Composite states (for example, `red+yellow`) are dropped, and no-active-light cases are mapped to `off` (see the sketch below).
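A minimal sketch of that mapping rule, assuming the state attributes arrive as a set of lit-lamp names (the actual parsing of the AIHub JSON schema lives in the Stage2 dataset-build script):

```python
# Sketch of the Stage2 label-mapping rule described above. The input format
# (a set of lit-lamp names) is an assumption; only the rule itself -- drop
# composite states, map "nothing lit" to "off" -- comes from this README.
from typing import Optional, Set

COLOR_STATES = {"red", "yellow", "green"}

def map_state_to_label(active_states: Set[str]) -> Optional[str]:
    """Return a Stage2 class label, or None if the sample should be dropped."""
    lit = active_states & COLOR_STATES
    if len(lit) > 1:   # composite state, e.g. {"red", "yellow"} -> drop
        return None
    if not lit:        # no active light -> "off"
        return "off"
    return lit.pop()   # exactly one active color

assert map_state_to_label({"red"}) == "red"
assert map_state_to_label({"red", "yellow"}) is None  # composite -> dropped
assert map_state_to_label(set()) == "off"             # nothing lit -> off
```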
Stage1 (detector):

- Framework: Ultralytics YOLO
- Base model: `yolo11s`
- Default output path: `runs/traffic_stage1`
- Canonical weight path: `weights/stage1_scratch.pt`
- Main artifacts: `weights/best.pt`, `results.csv`, curve/matrix images
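For orientation, a training run with these settings would look roughly like the following; the dataset YAML path and hyperparameters are placeholders, and `./commands stage1-train` is the actual entry point:

```python
# Hedged sketch of a Stage1 training run with the Ultralytics API.
# The dataset YAML and epoch count are assumptions, not project settings.
from ultralytics import YOLO

model = YOLO("yolo11s.pt")  # base model listed above
model.train(
    data="data/yolo/traffic_light.yaml",  # hypothetical dataset config
    epochs=100,
    imgsz=640,
    project="runs",
    name="traffic_stage1",  # -> runs/traffic_stage1, the default output path
)
```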
Stage2 (classifier):

- Framework: PyTorch + torchvision
- Default backbone: MobileNetV3-Large
- Optional backbone: MobileNetV3-Small
- Input size: 224x224
- Default output path: `runs/traffic_stage2`
- Canonical weight path: `weights/stage2_best.pth`
- Main artifacts: `weights/best.pth`, `results.csv`, `confusion_matrix*.csv`, plots
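The backbone switch can be pictured as below; this is an assumed construction (see `stage2/stage2_train_mobilenet.py` for the real one), with only the two torchvision backbones and the 4-class output taken from the list above:

```python
# Assumed sketch of the Stage2 backbone choice; only the backbone names and
# the 4 output classes (red/yellow/green/off) come from this README.
import torch.nn as nn
from torchvision import models

def build_classifier(backbone: str = "large", num_classes: int = 4) -> nn.Module:
    if backbone == "large":
        model = models.mobilenet_v3_large(weights="IMAGENET1K_V2")
    elif backbone == "small":
        model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
    else:
        raise ValueError(f"unknown backbone: {backbone}")
    # Swap the ImageNet head for a traffic-light color head.
    model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)
    return model
```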
Project layout:

```
stage1/
  conver_to_yolo.py
stage2/
  stage2_generate_preds.py
  stage2_build_dataset.py
  stage2_train_mobilenet.py
  stage2_infer.py
  stage2_utils.py
runs/
  traffic_stage1/
  traffic_stage2/
weights/
  stage1_scratch.pt
  stage1_pre_trained.pt
  stage2_best.pth
commands
setup_stage2_venv.sh
requirements.txt
```
Default data locations (relative to the project root):

```
data/
  raw/                    # raw JSON annotations
  yolo/images/{train,val}
  stage2/
    preds/
    crops/
    meta/
```
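As one way to picture how the `crops/` and `meta/` outputs fit together, here is a hypothetical loader; the CSV schema (a `path` and a `label` column) is an assumption about the files under `data/stage2/meta/`:

```python
# Hypothetical loader for the Stage2 crop/CSV dataset. The CSV column names
# are assumptions; only the crops + CSV-metadata split comes from this README.
import csv

from PIL import Image
from torch.utils.data import Dataset

class CropDataset(Dataset):
    CLASSES = ["red", "yellow", "green", "off"]

    def __init__(self, meta_csv: str, transform=None):
        with open(meta_csv, newline="") as f:
            self.rows = list(csv.DictReader(f))  # assumed columns: path, label
        self.transform = transform

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows[i]
        img = Image.open(row["path"]).convert("RGB")  # crop image on disk
        if self.transform:
            img = self.transform(img)
        return img, self.CLASSES.index(row["label"])
```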
Create and activate a virtual environment before running any training/inference command.
```bash
PROJECT_ROOT="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
cd "$PROJECT_ROOT"
bash ./setup_stage2_venv.sh
source "$PROJECT_ROOT/.venv/bin/activate"
```

To create the environment at a custom path instead:

```bash
PROJECT_ROOT="$(git rev-parse --show-toplevel 2>/dev/null || pwd)"
cd "$PROJECT_ROOT"
bash ./setup_stage2_venv.sh "$PROJECT_ROOT/.venv-stage2"
source "$PROJECT_ROOT/.venv-stage2/bin/activate"
```

Sanity-check the installation:

```bash
python -c "import torch, ultralytics; print(torch.__version__, torch.cuda.is_available(), ultralytics.__version__)"
```

`./commands` auto-selects a Python interpreter in this order:

1. `$PROJECT_ROOT/.venv/bin/python`
2. `$PROJECT_ROOT/.venv-stage2/bin/python`
3. `python3`
If you want to force a specific interpreter:

```bash
PYTHON_BIN="$PROJECT_ROOT/.venv-stage2/bin/python" ./commands infer-stage2
```

To leave the environment:

```bash
deactivate
```

`./commands` resolves default paths using the current project structure (`stage1`, `stage2`, `runs/traffic_stage1`, `runs/traffic_stage2`).
```bash
./commands --help
./commands paths
```

Main subcommands:

- `sync-weights`: copy `best.pt`/`best.pth` from the run dirs into `weights/`
- `stage1-train`: train the Stage1 YOLO detector
- `stage1-predict`: run Stage1 prediction
- `build-train`, `build-val`, `build-all`: build the Stage2 crop/CSV dataset
- `train-stage2`: train the Stage2 classifier
- `infer-stage2`: run the final 2-stage inference (JSON + visualization)
Example (override the Stage1 NMS IoU threshold):

```bash
./commands infer-stage2 --iou 0.5
```

All key scripts expose argparse help with descriptions and defaults.
```bash
python stage1/conver_to_yolo.py --help
python stage2/stage2_generate_preds.py --help
python stage2/stage2_build_dataset.py --help
python stage2/stage2_train_mobilenet.py --help
python stage2/stage2_infer.py --help
```

If path arguments are omitted, project-root-based defaults are used.
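Those defaults typically follow a pattern like the following (a hypothetical fragment; each script defines its own argument names and defaults):

```python
# Hypothetical argparse fragment illustrating project-root-based defaults.
# The flag names here are placeholders, not the scripts' real arguments.
import argparse
from pathlib import Path

# For a script under stage2/, parents[1] is the project root.
PROJECT_ROOT = Path(__file__).resolve().parents[1]

parser = argparse.ArgumentParser(description="Stage2 inference (sketch)")
parser.add_argument(
    "--weights", type=Path,
    default=PROJECT_ROOT / "weights" / "stage2_best.pth",  # canonical weight path
)
parser.add_argument(
    "--out-json", type=Path,
    default=PROJECT_ROOT / "infer" / "json",  # default inference JSON dir
)
args = parser.parse_args()
```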
Typical workflow:

1. Train Stage1 (if needed): `./commands stage1-train`
2. Build the Stage2 dataset: `./commands build-all`
3. Train Stage2: `./commands train-stage2`
4. Run the final 2-stage inference: `./commands infer-stage2`

Key outputs:

- Stage1 best weight (canonical): `./weights/stage1_scratch.pt`
- Stage2 best weight (canonical): `./weights/stage2_best.pth`
- Stage1 run checkpoint: `./runs/traffic_stage1/weights/best.pt`
- Stage2 run checkpoint: `./runs/traffic_stage2/weights/best.pth`
- Inference JSON: `./infer/json`
- Inference visualization: `./infer/vis`