chess2fen

Production Ready Python 3.11+ License: MIT Code style: black

Convert chessboard images to FEN notation using lightweight CNN classifiers

🚀 Live Demo • 📖 Documentation • 🐛 Report Bug • ✨ Request Feature


🎯 Overview

chess2fen is a production-ready computer vision system that converts top-down chessboard images into FEN (Forsyth-Edwards Notation) using efficient per-square CNN classifiers. Designed for accuracy, speed, and ease of deployment, it achieves >98% FEN exact match with inference times of ~7-15ms per board on CPU.

Key Features

  • 🎯 High Accuracy: 100% FEN exact match on clean images, >95% under distortions (blur, JPEG artifacts, noise)
  • ⚑ Fast Inference: 7-15ms per board using batched ONNX inference on CPU
  • πŸ“¦ Lightweight Models: 9 CNN architectures ranging from 5.8K to 63.6K parameters
  • πŸ”§ Production Ready: REST API, Docker containers, Cloud Run deployment, comprehensive monitoring
  • πŸ›‘οΈ Robust: Handles various image distortions, perspective warps, and lighting conditions
  • πŸ”“ Open Source: MIT licensed - models, code, and training pipeline all included

🚀 Quick Start

Installation

# Clone the repository
git clone https://github.com/YOUR_USERNAME/chess2fen.git
cd chess2fen

# Install with uv (recommended)
uv pip install -e .

# Or with pip
pip install -e .

Python API

from chess2fen import infer_fen

# Infer FEN from image
fen, confidence = infer_fen(
    'path/to/board.jpg', 
    'models/dwsep_se_a075/model_fp32.onnx'
)

print(fen)  # "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"
print(confidence.shape)  # (8, 8) - confidence per square
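The per-square confidence matrix is useful for flagging squares worth a second look. A minimal sketch, assuming the (8, 8) array holds max softmax probabilities with row 0 corresponding to rank 8 (verify the orientation against your model's output); the helper name is illustrative, not part of the library:

```python
import numpy as np

def low_confidence_squares(confidence: np.ndarray, threshold: float = 0.9) -> list[str]:
    """Return algebraic names of squares whose confidence falls below threshold.

    Assumes confidence[0][0] is a8 (rank 8, file a), matching FEN rank order.
    """
    files = "abcdefgh"
    flagged = []
    for row in range(8):
        for col in range(8):
            if confidence[row, col] < threshold:
                flagged.append(f"{files[col]}{8 - row}")
    return flagged

# Synthetic example: one uncertain square
conf = np.full((8, 8), 0.99)
conf[4, 4] = 0.55                       # row 4 from top = rank 4, col 4 = file e
print(low_confidence_squares(conf))     # ['e4']
```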

Command Line Interface

# Basic inference
python -m chess2fen.infer_api path/to/board.jpg

# Specify model
python -m chess2fen.infer_api board.jpg --model models/dwsep_se_a075/model_fp32.onnx

# Get debug output
python -m chess2fen.infer_api board.jpg --debug

REST API

Local Development

# Start API server
cd api && uvicorn app:app --reload

# Or use Docker Compose
docker-compose up

Access the interactive web UI at http://localhost:8000

Production API

# Upload image and get FEN
curl -X POST https://chess2fen-api-2qkqblvvma-wl.a.run.app/infer \
  -F "file=@board.jpg"

# List available models
curl https://chess2fen-api-2qkqblvvma-wl.a.run.app/models

# Health check
curl https://chess2fen-api-2qkqblvvma-wl.a.run.app/health
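The same endpoints can be called from Python. A sketch using the third-party `requests` library; the response schema is not documented here, so the example prints the raw JSON rather than assuming field names:

```python
import requests  # third-party HTTP client: pip install requests

API_URL = "https://chess2fen-api-2qkqblvvma-wl.a.run.app"

def infer_remote(image_path: str, timeout: float = 30.0) -> dict:
    """POST an image to the /infer endpoint and return the parsed JSON body."""
    with open(image_path, "rb") as f:
        resp = requests.post(f"{API_URL}/infer", files={"file": f}, timeout=timeout)
    resp.raise_for_status()  # raise on 4xx/5xx responses
    return resp.json()

if __name__ == "__main__":
    result = infer_remote("board.jpg")
    print(result)  # inspect the JSON for the FEN and confidence fields
```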

📊 Model Performance

Production Model: dwsep_se_a075 - Optimized for accuracy and robustness

| Model | Parameters | FEN Exact | Robustness† | Latency‡ | Size |
|-------|------------|-----------|-------------|----------|------|
| dwsep_se_a075 | 14.7K | 100.00% | 99.99% | 7.31ms | 162 KB |
| multitask_dw_a075 | 17.4K | 100.00% | 99.92% | 7.08ms | 146 KB |
| dwsep_se_a050 | 9.8K | 100.00% | 97.56% | 6.69ms | 126 KB |
| cascade_tiny | 22.1K | 100.00% | 99.79% | 7.81ms | 211 KB |
| nanoconv_d02 | 22K | 92.69% | 99.43% | 6.45ms | 96 KB |

†Robustness: mean accuracy across 7 distortion types, including blur, JPEG artifacts, noise, perspective warp, and rotation
‡Latency: batched 64-crop inference on M4 Mac CPU (FP32). INT8 models are available for edge deployment.

All 9 trained models included - Choose based on your accuracy/speed/size requirements.

πŸ—οΈ Architecture

Input Image (RGB)
    ↓
Preprocessing & Tiling (8×8 grid)
    ↓
ONNX Runtime Inference (batched)
    ↓
Softmax + Argmax (13 classes per square)
    ↓
Sanity Checks & Repair (king count, pawn ranks)
    ↓
FEN String + Confidence Matrix

13-Class Classification per Square:

  • empty (0 pieces)
  • White pieces: P, N, B, R, Q, K
  • Black pieces: p, n, b, r, q, k

Key Design Decisions:

  • Batched Inference: Process all 64 squares simultaneously for 10x speedup
  • ONNX Runtime: Cross-platform CPU optimization, no PyTorch dependency at inference
  • Conservative Sanity Checks: Only repair low-confidence predictions to maintain accuracy
  • Multiple Architectures: Trade-off flexibility between speed, size, and accuracy

📚 Documentation

For Users

For Developers

🎓 Training Your Own Models

# Train a single model
python -m chess2fen.train_one \
  --model-name my_model \
  --kind nanoconv \
  --root data \
  --train splits/train.json \
  --val splits/val.json \
  --outdir runs/my_experiment

# Train all architecture variants (overnight)
./scripts/run_overnight.sh

# Evaluate model performance
python -m chess2fen.eval_suite \
  --model-name my_model \
  --onnx runs/my_experiment/onnx/my_model_fp32.onnx \
  --root data \
  --val splits/val.json

Available Architectures:

  • nanoconv, microconv, picoconv - Simple conv blocks
  • dwsep_se - Depthwise separable + Squeeze-Excitation
  • shufflev2 - ShuffleNet V2 variant
  • squeezenet_micro - Compressed SqueezeNet
  • cascade, multitask_dw - Advanced architectures

🧪 Testing

# Run all tests
pytest tests/ -v

# Run with coverage
pytest tests/ --cov=src/chess2fen --cov-report=html

# Run specific test file
pytest tests/test_infer_api.py -v

# Integration tests
python scripts/test_api.py --base-url http://localhost:8000

🌐 Web UI Board Editor

  • Click Edit board after a FEN is produced to reveal the overlay that mirrors the inferred orientation and sits on top of the generated board preview.
  • Interactively move pieces (select, target square), drop pieces from the color palette, erase squares, or reset back to the original prediction.
  • Undo/redo keeps the last 100 edits, validation toggle checks for one white king, one black king, and no pawns on ranks 1/8, and the displayed FEN updates instantly with every edit.

🧪 Frontend Tests

  • From web/v3, run npm run test to compile the board utilities via tsc -p ../../tsconfig.ui-tests.json and execute the board‑to‑FEN + undo/redo test suite located under tests/ui/board-tests.ts.

🐳 Docker Deployment

Local Testing

# Build container (use podman for local, docker for cloud)
podman build -t chess2fen:latest -f api/Dockerfile .

# Run container
podman run -d -p 8080:8080 -e PORT=8080 chess2fen:latest

# Test endpoints
curl http://localhost:8080/health
curl -X POST http://localhost:8080/infer -F "file=@test.jpg"

Cloud Run Deployment

# Build and push to Artifact Registry
gcloud builds submit --tag us-west2-docker.pkg.dev/PROJECT/chess2fen/api:latest

# Deploy to Cloud Run
gcloud run deploy chess2fen-api \
  --image us-west2-docker.pkg.dev/PROJECT/chess2fen/api:latest \
  --region us-west2 \
  --allow-unauthenticated \
  --memory 2Gi \
  --cpu 2


Note on Deployment Time: The ~5 minute Cloud Run build time is normal for ML projects. It includes:

  • Installing Python dependencies (~2 min)
  • Copying model files (~1 min)
  • Building multi-stage Docker image (~2 min)

This can be optimized by using Cloud Build caching or pre-built base images.

📜 License & Model Sharing

This project is licensed under the MIT License - see the LICENSE file for details.

What This Means

✅ You can:

  • Use this software commercially
  • Modify and distribute the code
  • Use the pre-trained models in your projects
  • Create derivative works

✅ Model Sharing: The pre-trained ONNX models are included in the repository and covered under the same MIT license. This means:

  • Anyone can download and use the models
  • No attribution required (but appreciated!)
  • Models can be used in commercial applications
  • You're encouraged to share your improvements

Why models are in git: While large binary files typically aren't tracked in git, our models are small (96KB-211KB) and essential for:

  • Reproducibility (exact models used in production)
  • Easy deployment (no external model downloads needed)
  • Container builds (models baked into Docker image)

Attribution (Optional but Appreciated)

If you use chess2fen in your project, consider:

Powered by [chess2fen](https://github.com/YOUR_USERNAME/chess2fen)

🤝 Contributing

Contributions are welcome! Please see DEVELOPMENT.md for guidelines.

Development Workflow

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Run tests (pytest tests/)
  5. Format code (black src/ api/ scripts/ tests/)
  6. Commit changes (git commit -m 'feat: add amazing feature')
  7. Push to branch (git push origin feature/amazing-feature)
  8. Open a Pull Request

Code Quality Standards

  • βœ… All tests must pass
  • βœ… Code formatted with black
  • βœ… Linting passes (ruff)
  • βœ… Type hints where applicable
  • βœ… Documentation updated

πŸ› οΈ Requirements

  • Python 3.11+
  • PyTorch (training only)
  • ONNX Runtime (inference)
  • FastAPI (API server)

See pyproject.toml for complete dependency list.

📈 Project Status

Current Status: ✅ Production Ready (Phase 10 Complete)

  • ✅ Phase 0-6: Core development (models, training, evaluation)
  • ✅ Phase 7: Containerization (Docker, multi-stage builds)
  • ✅ Phase 8-9: CI/CD & Cloud deployment (GitHub Actions, Cloud Run)
  • ✅ Phase 10: Documentation finalization
  • 🚧 Phase 11: Testing & validation (in progress)
  • 📋 Phase 12: Public launch

Live Production Service: https://chess2fen-api-2qkqblvvma-wl.a.run.app

πŸ™ Acknowledgments

πŸ“ž Support


Made with ❤️ by Keenan Kalra

If you find this project useful, please consider giving it a ⭐!
