Convert chessboard images to FEN notation using lightweight CNN classifiers
🚀 Live Demo • 📖 Documentation • 🐛 Report Bug • ✨ Request Feature
chess2fen is a production-ready computer vision system that converts top-down chessboard images into FEN (Forsyth-Edwards Notation) using efficient per-square CNN classifiers. Designed for accuracy, speed, and ease of deployment, it achieves >98% FEN exact match with inference times of ~7-15ms per board on CPU.
- 🎯 High Accuracy: 100% FEN exact match on clean images, >95% under distortions (blur, JPEG artifacts, noise)
- ⚡ Fast Inference: 7-15ms per board using batched ONNX inference on CPU
- 📦 Lightweight Models: 9 CNN architectures ranging from 5.8K to 63.6K parameters
- 🔧 Production Ready: REST API, Docker containers, Cloud Run deployment, comprehensive monitoring
- 🛡️ Robust: Handles various image distortions, perspective warps, and lighting conditions
- 📖 Open Source: MIT licensed - models, code, and training pipeline all included
```shell
# Clone the repository
git clone https://github.com/YOUR_USERNAME/chess2fen.git
cd chess2fen

# Install with uv (recommended)
uv pip install -e .

# Or with pip
pip install -e .
```

```python
from chess2fen import infer_fen

# Infer FEN from image
fen, confidence = infer_fen(
    'path/to/board.jpg',
    'models/dwsep_se_a075/model_fp32.onnx'
)
print(fen)               # "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
print(confidence.shape)  # (8, 8) - confidence per square
```

```shell
# Basic inference
python -m chess2fen.infer_api path/to/board.jpg

# Specify model
python -m chess2fen.infer_api board.jpg --model models/dwsep_se_a075/model_fp32.onnx

# Get debug output
python -m chess2fen.infer_api board.jpg --debug
```

```shell
# Start API server
cd api && uvicorn app:app --reload

# Or use Docker Compose
docker-compose up
```

Access the interactive web UI at http://localhost:8000
```shell
# Upload image and get FEN
curl -X POST https://chess2fen-api-2qkqblvvma-wl.a.run.app/infer \
  -F "file=@board.jpg"

# List available models
curl https://chess2fen-api-2qkqblvvma-wl.a.run.app/models

# Health check
curl https://chess2fen-api-2qkqblvvma-wl.a.run.app/health
```

Production Model: dwsep_se_a075 - Optimized for accuracy and robustness
| Model | Parameters | FEN Exact | Robustness† | Latency‡ | Size |
|---|---|---|---|---|---|
| dwsep_se_a075 | 14.7K | 100.00% | 99.99% | 7.31ms | 162 KB |
| multitask_dw_a075 | 17.4K | 100.00% | 99.92% | 7.08ms | 146 KB |
| dwsep_se_a050 | 9.8K | 100.00% | 97.56% | 6.69ms | 126 KB |
| cascade_tiny | 22.1K | 100.00% | 99.79% | 7.81ms | 211 KB |
| nanoconv_d02 | 22K | 92.69% | 99.43% | 6.45ms | 96 KB |
† Robustness: mean accuracy across seven distortion types, including blur, JPEG compression, noise, perspective warp, and rotation

‡ Latency: batched 64-crop inference on M4 Mac CPU (FP32). INT8 models available for edge deployment.
All 9 trained models included - Choose based on your accuracy/speed/size requirements.
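The "batched 64-crop" setup measured above starts from an 8×8 tiling of the board image. A minimal numpy sketch of that step; the tile size, crop order, and `tile_board` helper are assumptions of this example, not the exact preprocessing chess2fen ships:

```python
import numpy as np

def tile_board(img: np.ndarray) -> np.ndarray:
    """Split a square board image (H, W, C) into 64 per-square crops.

    Hypothetical helper: the real preprocessing may resize or normalize
    differently; this only illustrates the 8x8 tiling.
    """
    h, w = img.shape[:2]
    sq_h, sq_w = h // 8, w // 8
    crops = [img[r * sq_h:(r + 1) * sq_h, f * sq_w:(f + 1) * sq_w]
             for r in range(8) for f in range(8)]  # rank-major, top row first
    return np.stack(crops)

board = np.random.default_rng(0).integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
print(tile_board(board).shape)  # (64, 32, 32, 3)
```

The resulting (64, ...) batch is what gets handed to the ONNX session in one call, which is where the per-board latency win over 64 single-crop calls comes from.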
```
Input Image (RGB)
        ↓
Preprocessing & Tiling (8×8 grid)
        ↓
ONNX Runtime Inference (batched)
        ↓
Softmax + Argmax (13 classes per square)
        ↓
Sanity Checks & Repair (king count, pawn ranks)
        ↓
FEN String + Confidence Matrix
```
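A minimal numpy sketch of the Softmax + Argmax stage, reducing a hypothetical (64, 13) logit matrix to per-square labels and the (8, 8) confidence matrix; the random logits and rank-major square order are assumptions of this example:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(64, 13))            # one row of logits per square

# numerically stable softmax across the 13 classes
z = logits - logits.max(axis=1, keepdims=True)
probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

labels = probs.argmax(axis=1)                 # winning class per square
confidence = probs.max(axis=1).reshape(8, 8)  # per-square confidence matrix

print(labels.shape, confidence.shape)  # (64,) (8, 8)
```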
13-Class Classification per Square:
- `empty` (no piece)
- White pieces: `P`, `N`, `B`, `R`, `Q`, `K`
- Black pieces: `p`, `n`, `b`, `r`, `q`, `k`
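From one symbol per square, the FEN piece-placement field is assembled rank by rank, run-length-encoding empty squares. A self-contained sketch; the `None` sentinel for empty squares and the `grid_to_fen` name are this example's conventions, not necessarily the library's:

```python
def grid_to_fen(grid):
    """grid: 8 ranks (rank 8 first), each with 8 entries: piece letter or None."""
    ranks = []
    for rank in grid:
        field, empties = "", 0
        for sq in rank:
            if sq is None:
                empties += 1        # count a run of empty squares
            else:
                if empties:
                    field += str(empties)
                    empties = 0
                field += sq
        if empties:
            field += str(empties)   # flush a trailing run
        ranks.append(field)
    return "/".join(ranks)

start = ([list("rnbqkbnr"), ["p"] * 8]
         + [[None] * 8 for _ in range(4)]
         + [["P"] * 8, list("RNBQKBNR")])
print(grid_to_fen(start))  # rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
```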
Key Design Decisions:
- Batched Inference: Process all 64 squares simultaneously for 10x speedup
- ONNX Runtime: Cross-platform CPU optimization, no PyTorch dependency at inference
- Conservative Sanity Checks: Only repair low-confidence predictions to maintain accuracy
- Multiple Architectures: Trade-off flexibility between speed, size, and accuracy
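The invariants the sanity-check step enforces (exactly one king per color, no pawns on the back ranks) can be sketched as a standalone validator. This is a hypothetical illustration: the shipped repair logic additionally weighs per-square confidence before changing anything, which this example omits:

```python
def board_violations(grid):
    """grid: 8 ranks (rank 8 first), each with 8 entries: piece letter or None."""
    flat = [sq for rank in grid for sq in rank]
    issues = []
    if flat.count("K") != 1:
        issues.append("white must have exactly one king")
    if flat.count("k") != 1:
        issues.append("black must have exactly one king")
    # ranks 8 and 1 are the first and last rows of the grid
    for row in (grid[0], grid[7]):
        if any(sq in ("P", "p") for sq in row):
            issues.append("pawns may not stand on ranks 1 or 8")
            break
    return issues

board = [[None] * 8 for _ in range(8)]
board[0][4], board[7][4] = "k", "K"   # one king each, no pawns
print(board_violations(board))        # []
```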
- 🚀 Live Demo - Try the API immediately
- 📘 API Reference - REST API endpoints, request/response formats
- 🏗️ Architecture - System design, data flow, components
- 📦 Model Registry - Available models, selection criteria
- 💻 Development Guide - Local setup, testing, debugging
- 🎓 Model Training - Train custom models, dataset preparation
- 🚢 Deployment Guide - Production deployment with Google Cloud Run
- ✅ Tasks Checklist - Phase-by-phase deployment roadmap
```shell
# Train a single model
python -m chess2fen.train_one \
  --model-name my_model \
  --kind nanoconv \
  --root data \
  --train splits/train.json \
  --val splits/val.json \
  --outdir runs/my_experiment

# Train all architecture variants (overnight)
./scripts/run_overnight.sh

# Evaluate model performance
python -m chess2fen.eval_suite \
  --model-name my_model \
  --onnx runs/my_experiment/onnx/my_model_fp32.onnx \
  --root data \
  --val splits/val.json
```

Available Architectures:
- `nanoconv`, `microconv`, `picoconv` - Simple conv blocks
- `dwsep_se` - Depthwise separable + Squeeze-Excitation
- `shufflev2` - ShuffleNet V2 variant
- `squeezenet_micro` - Compressed SqueezeNet
- `cascade`, `multitask_dw` - Advanced architectures
```shell
# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=src/chess2fen --cov-report=html

# Run specific test file
pytest tests/test_infer_api.py -v

# Integration tests
python scripts/test_api.py --base-url http://localhost:8000
```

- Click Edit board after a FEN is produced to reveal the overlay that mirrors the inferred orientation and sits on top of the generated board preview.
- Interactively move pieces (select, target square), drop pieces from the color palette, erase squares, or reset back to the original prediction.
- Undo/redo keeps the last 100 edits; the validation toggle checks for exactly one white king, one black king, and no pawns on ranks 1/8; and the displayed FEN updates instantly with every edit.
- From `web/v3`, run `npm run test` to compile the board utilities via `tsc -p ../../tsconfig.ui-tests.json` and execute the board-to-FEN + undo/redo test suite located under `tests/ui/board-tests.ts`.
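The bounded undo history described above (the 100-edit cap comes from the text) can be sketched with a capped stack. The actual board editor is TypeScript; this Python sketch, with the hypothetical `EditHistory` class, only illustrates the mechanism:

```python
from collections import deque

class EditHistory:
    """Undo/redo over board states, keeping at most `cap` undo steps."""

    def __init__(self, initial, cap=100):
        self.undo_stack = deque(maxlen=cap)  # oldest edits silently drop off
        self.redo_stack = []
        self.state = initial

    def apply(self, new_state):
        self.undo_stack.append(self.state)
        self.redo_stack.clear()              # a new edit invalidates redo
        self.state = new_state

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(self.state)
            self.state = self.undo_stack.pop()
        return self.state

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.state)
            self.state = self.redo_stack.pop()
        return self.state

h = EditHistory("start")
h.apply("edit1")
h.apply("edit2")
print(h.undo())  # edit1
print(h.redo())  # edit2
```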
```shell
# Build container (use podman for local, docker for cloud)
podman build -t chess2fen:latest -f api/Dockerfile .

# Run container
podman run -d -p 8080:8080 -e PORT=8080 chess2fen:latest

# Test endpoints
curl http://localhost:8080/health
curl -X POST http://localhost:8080/infer -F "file=@test.jpg"
```

```shell
# Build and push to Artifact Registry
gcloud builds submit --tag us-west2-docker.pkg.dev/PROJECT/chess2fen/api:latest

# Deploy to Cloud Run
gcloud run deploy chess2fen-api \
  --image us-west2-docker.pkg.dev/PROJECT/chess2fen/api:latest \
  --region us-west2 \
  --allow-unauthenticated \
  --memory 2Gi \
  --cpu 2

# Expected deployment time: ~5 minutes
# (Normal for ML projects: installing dependencies + copying models)
```

Note on Deployment Time: The ~5 minute Cloud Run build time is normal for ML projects. It includes:
- Installing Python dependencies (~2 min)
- Copying model files (~1 min)
- Building multi-stage Docker image (~2 min)

This can be optimized by using Cloud Build caching or pre-built base images.
This project is licensed under the MIT License - see the LICENSE file for details.
✅ You can:
- Use this software commercially
- Modify and distribute the code
- Use the pre-trained models in your projects
- Create derivative works
✅ Model Sharing: The pre-trained ONNX models are included in the repository and covered under the same MIT license. This means:
- Anyone can download and use the models
- No attribution required (but appreciated!)
- Models can be used in commercial applications
- You're encouraged to share your improvements
Why models are in git: While typically large binary files aren't tracked in git, our models are small (96KB-211KB) and essential for:
- Reproducibility (exact models used in production)
- Easy deployment (no external model downloads needed)
- Container builds (models baked into Docker image)
If you use chess2fen in your project, consider:
Powered by [chess2fen](https://github.com/YOUR_USERNAME/chess2fen)

Contributions are welcome! Please see DEVELOPMENT.md for guidelines.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Run tests (`pytest tests/`)
- Format code (`black src/ api/ scripts/ tests/`)
- Commit changes (`git commit -m 'feat: add amazing feature'`)
- Push to branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- ✅ All tests must pass
- ✅ Code formatted with `black`
- ✅ Linting passes (`ruff`)
- ✅ Type hints where applicable
- ✅ Documentation updated
- Python 3.11+
- PyTorch (training only)
- ONNX Runtime (inference)
- FastAPI (API server)
See pyproject.toml for complete dependency list.
Current Status: ✅ Production Ready (Phase 10 Complete)
- ✅ Phase 0-6: Core development (models, training, evaluation)
- ✅ Phase 7: Containerization (Docker, multi-stage builds)
- ✅ Phase 8-9: CI/CD & Cloud deployment (GitHub Actions, Cloud Run)
- ✅ Phase 10: Documentation finalization
- 🚧 Phase 11: Testing & validation (in progress)
- 📅 Phase 12: Public launch
Live Production Service: https://chess2fen-api-2qkqblvvma-wl.a.run.app
- FEN notation: Chess Programming Wiki
- ONNX Runtime: Microsoft ONNX Runtime
- Model architectures: See models_zoo.py for references
- 🐛 Bug Reports: GitHub Issues
- 💡 Feature Requests: GitHub Issues
- 💬 Discussions: GitHub Discussions
Made with ❤️ by Keenan Kalra

If you find this project useful, please consider giving it a ⭐!