Open-source music analysis tool offering chord recognition, beat tracking, piano visualization, guitar chord diagrams, and lyrics synchronization.
Clean, intuitive interface for YouTube search, URL input, and recent video access.
Chord progression visualization with synchronized beat detection and a grid layout, plus add-on features: Roman numeral analysis, key modulation signals, simplified chord notation, enhanced chord correction, and song segmentation overlays for structural sections such as intro, verse, chorus, bridge, and outro.
Interactive guitar chord diagrams with accurate fingering patterns from the official @tombatossals/chords-db database, featuring multiple chord positions and synchronized beat grid integration.
Real-time piano roll visualization with falling MIDI notes synchronized to chord playback. Features a scrolling chord strip, interactive keyboard highlighting, smoother playback-synced rendering, segmentation-aware dynamics shaping, and MIDI file export for importing chord progressions into any DAW.
Synchronized lyrics transcription with AI chatbot for contextual music analysis and translation support.
- Node.js 18+ and npm
- Python 3.9+ (for backend)
- Git LFS (for SongFormer checkpoints)
- Firebase account (free tier)
- Gemini API key (free tier)
- Clone and install with submodules in one command (for fresh clones)

  ```bash
  git lfs install
  git clone --recursive https://github.com/ptnghia-j/ChordMiniApp.git
  cd ChordMiniApp
  git lfs pull
  npm install
  ```

  To update an existing clone:

  ```bash
  git pull
  git lfs pull
  ```

  `git lfs pull` downloads the large SongFormer model files referenced by this repo, including the checkpoint binaries stored as Git LFS objects. Verify that the model directories are populated:

  ```bash
  ls -la python_backend/models/Beat-Transformer/
  ls -la python_backend/models/Chord-CNN-LSTM/
  ls -la python_backend/models/ChordMini/
  ```

- Install FluidSynth for MIDI synthesis
  ```bash
  # --- Windows ---
  choco install fluidsynth
  # --- macOS ---
  brew install fluidsynth
  # --- Linux (Debian/Ubuntu-based) ---
  sudo apt update
  sudo apt install fluidsynth
  ```

- Environment setup

  ```bash
  cp .env.example .env.local
  ```
  Edit `.env.local`:

  ```env
  NEXT_PUBLIC_PYTHON_API_URL=http://localhost:5001
  NEXT_PUBLIC_FIREBASE_API_KEY=your_firebase_api_key
  NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your_project.firebaseapp.com
  NEXT_PUBLIC_FIREBASE_PROJECT_ID=your_project_id
  NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=your_project.appspot.com
  NEXT_PUBLIC_FIREBASE_MESSAGING_SENDER_ID=your_sender_id
  NEXT_PUBLIC_FIREBASE_APP_ID=your_app_id
  ```
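Before starting the servers, it can help to confirm that `.env.local` actually defines the keys above. The following Python sketch is illustrative and not part of the repo; `REQUIRED` lists only a subset of keys and can be extended:

```python
# Illustrative sketch (not project code): report required keys that are
# missing or empty in a dotenv-style file. Extend REQUIRED as needed.
REQUIRED = [
    "NEXT_PUBLIC_PYTHON_API_URL",
    "NEXT_PUBLIC_FIREBASE_API_KEY",
    "NEXT_PUBLIC_FIREBASE_PROJECT_ID",
]

def missing_keys(env_text: str, required: list[str]) -> list[str]:
    """Return the required keys that are absent or empty in KEY=value lines."""
    defined = {}
    for line in env_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        defined[key.strip()] = value.strip()
    return [k for k in required if not defined.get(k)]
```

For example, `missing_keys(Path(".env.local").read_text(), REQUIRED)` returns an empty list when the file is complete.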
- Start Python backend (Terminal 1)

  ```bash
  cd python_backend
  python -m venv myenv
  source myenv/bin/activate  # On Windows: myenv\Scripts\activate
  pip install -r requirements.txt
  python app.py
  ```
- Start frontend (Terminal 2)

  ```bash
  npm run dev
  ```
- Open the application

  Visit http://localhost:3000
- Docker and Docker Compose installed (Get Docker)
- Firebase account with API keys configured
- Download configuration files

  ```bash
  curl -O https://raw.githubusercontent.com/ptnghia-j/ChordMiniApp/main/docker-compose.prod.yml
  curl -O https://raw.githubusercontent.com/ptnghia-j/ChordMiniApp/main/.env.docker.example
  ```
- Configure environment

  ```bash
  cp .env.docker.example .env.docker
  # Edit .env.docker with your API keys (see the API Keys Setup section below)
  ```

- Start the application

  ```bash
  docker compose -f docker-compose.prod.yml --env-file .env.docker up -d
  ```
- Access the application

  Visit http://localhost:3000
- Stop the application

  ```bash
  docker compose -f docker-compose.prod.yml down
  ```
Note: If you have Docker Compose V1 installed, use `docker-compose` (with a hyphen) instead of `docker compose` (with a space).
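To script a status check after `up -d`, the output of `docker compose ps --format json` can be parsed for the expected services. This is a hedged sketch, not project code; the `Service` and `State` field names follow Docker Compose V2's JSON output, whose exact shape (array vs. one object per line) varies by version:

```python
# Illustrative sketch: extract the names of running services from
# `docker compose ps --format json` output. Handles both output shapes
# (a JSON array, or one JSON object per line).
import json

def running_services(ps_output: str) -> set[str]:
    text = ps_output.strip()
    if not text:
        return set()
    if text.startswith("["):
        entries = json.loads(text)  # older Compose: one JSON array
    else:
        # newer Compose: one JSON object per line
        entries = [json.loads(line) for line in text.splitlines() if line.strip()]
    return {e["Service"] for e in entries if e.get("State") == "running"}
```

You would feed it the stdout of `subprocess.run(["docker", "compose", "-f", "docker-compose.prod.yml", "ps", "--format", "json"], capture_output=True, text=True)` and compare against `{"frontend", "backend"}`.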
If you prefer using Docker Desktop GUI:
- Open Docker Desktop
- Go to the "Images" tab and search for `ptnghia/chordminiapp-frontend` and `ptnghia/chordminiapp-backend`
- Pull both images
- Use the "Containers" tab to manage running containers
Edit .env.docker with these required values:
- `NEXT_PUBLIC_FIREBASE_API_KEY` - Firebase API key
- `NEXT_PUBLIC_FIREBASE_PROJECT_ID` - Firebase project ID
- `NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET` - Firebase storage bucket
- `NEXT_PUBLIC_YOUTUBE_API_KEY` - YouTube Data API v3 key
- `MUSIC_AI_API_KEY` - Music.AI API key
- `GEMINI_API_KEY` - Google Gemini API key
- `GENIUS_API_KEY` - Genius API key
See the API Keys Setup section below for detailed instructions on obtaining these keys.
-
Create Firebase project
- Visit Firebase Console
- Click "Create a project"
- Follow the setup wizard
-
Enable Firestore Database
- Go to "Firestore Database" in the sidebar
- Click "Create database"
- Choose "Start in test mode" for development
-
Get Firebase configuration
- Go to Project Settings (gear icon)
- Scroll down to "Your apps"
- Click "Add app" → Web app
- Copy the configuration values to your `.env.local`
-
Create Firestore collections
The app uses the following Firestore collections. They are created automatically on first write (no manual creation required):
- `transcriptions` - Beat and chord analysis results (docId: `${videoId}_${beatModel}_${chordModel}`)
- `translations` - Lyrics translation cache (docId: cacheKey based on content hash)
- `lyrics` - Music.ai transcription results (docId: `videoId`)
- `keyDetections` - Musical key analysis cache (docId: cacheKey)
- `audioFiles` - Audio file metadata and URLs (docId: `videoId`)
- `segmentationJobs` - Async SongFormer segmentation jobs and persisted results (docId: `seg_<timestamp>_<uuid>`)
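As an illustration of the docId patterns above, hypothetical helpers (not taken from the codebase) might build them like this:

```python
# Hypothetical helpers illustrating the docId patterns described above.
# Function names are not from the ChordMiniApp codebase.
import time
import uuid

def transcription_doc_id(video_id: str, beat_model: str, chord_model: str) -> str:
    """docId pattern for the `transcriptions` collection."""
    return f"{video_id}_{beat_model}_{chord_model}"

def segmentation_job_id() -> str:
    """docId matching the seg_<timestamp>_<uuid> pattern for `segmentationJobs`."""
    return f"seg_{int(time.time())}_{uuid.uuid4()}"
```

Keeping the ID deterministic for `transcriptions` is what makes the collection usable as a cache: the same video and model pair always maps to the same document.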
-
Enable Anonymous Authentication
- In Firebase Console: Authentication → Sign-in method → enable Anonymous
-
Configure Firebase Storage
- Set environment variable:

  ```env
  NEXT_PUBLIC_FIREBASE_STORAGE_BUCKET=your_project_id.appspot.com
  ```

- Folder structure:
  - `audio/` for audio files
  - `video/` for optional video files
- Filename pattern requirement: filenames must include the 11-character YouTube video ID in brackets, e.g. `audio_[VIDEOID]_timestamp.mp3` (enforced by Storage rules)
- File size limits (enforced by Storage rules):
  - Audio: up to 50MB
  - Video: up to 100MB
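A client-side pre-check mirroring these constraints might look like the following sketch. It is illustrative only; the actual enforcement happens in the Firebase Storage rules:

```python
# Illustrative pre-check mirroring the Storage rules described above:
# the filename must embed an 11-character YouTube video ID in brackets,
# and audio/video uploads are capped at 50MB/100MB. Not the rules code.
import re

VIDEO_ID_RE = re.compile(r"\[([A-Za-z0-9_-]{11})\]")
MAX_BYTES = {"audio": 50 * 1024 * 1024, "video": 100 * 1024 * 1024}

def storage_upload_ok(folder: str, filename: str, size_bytes: int) -> bool:
    """Return True if the upload would satisfy the folder/name/size rules."""
    if folder not in MAX_BYTES or size_bytes > MAX_BYTES[folder]:
        return False
    return VIDEO_ID_RE.search(filename) is not None
```

For example, `storage_upload_ok("audio", "audio_[dQw4w9WgXcQ]_1700000000.mp3", 10_000_000)` passes, while a filename without the bracketed ID does not.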
- Set environment variables:

  Music.AI:

  ```bash
  # 1. Sign up at music.ai
  # 2. Get API key from dashboard
  # 3. Add to .env.local
  NEXT_PUBLIC_MUSIC_AI_API_KEY=your_key_here
  ```

  Gemini:

  ```bash
  # 1. Visit Google AI Studio
  # 2. Generate API key
  # 3. Add to .env.local
  NEXT_PUBLIC_GEMINI_API_KEY=your_key_here
  ```

ChordMiniApp uses a hybrid backend architecture:
For local development, you must run the Python backend on localhost:5001:
- URL: http://localhost:5001
- Port note: the backend uses port 5001 to avoid a conflict with the macOS AirPlay/AirTunes service, which occupies port 5000

Production deployments are configured per your VPS; set the backend URL in the `NEXT_PUBLIC_PYTHON_API_URL` environment variable.
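The fallback behavior can be sketched as follows; the function is hypothetical and simply mirrors resolving `NEXT_PUBLIC_PYTHON_API_URL` with a local default:

```python
# Hypothetical sketch: resolve the backend base URL from the environment,
# falling back to the local development default when the variable is unset.
from __future__ import annotations
import os

DEFAULT_BACKEND = "http://localhost:5001"

def backend_base_url(env: dict[str, str] | None = None) -> str:
    """Return NEXT_PUBLIC_PYTHON_API_URL if set and non-empty, else the default."""
    env = env if env is not None else dict(os.environ)
    return env.get("NEXT_PUBLIC_PYTHON_API_URL", "").strip() or DEFAULT_BACKEND
```

With no variable set this yields `http://localhost:5001`; in production it returns whatever URL your deployment exports.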
- Python 3.9+ (Python 3.9-3.11 recommended)
- Virtual environment (venv or conda)
- Git for cloning dependencies
- System dependencies (varies by OS)
- Navigate to the backend directory

  ```bash
  cd python_backend
  ```

- Create virtual environment

  ```bash
  python -m venv myenv
  # Activate virtual environment
  # On macOS/Linux:
  source myenv/bin/activate
  # On Windows:
  myenv\Scripts\activate
  ```
- Install dependencies

  ```bash
  pip install --no-cache-dir "Cython>=0.29.0" numpy==1.22.4
  pip install --no-cache-dir "madmom>=0.16.1"
  pip install --no-cache-dir -r requirements.txt
  ```

  If spleeter's pinned dependencies conflict with httpx, install spleeter with `--no-deps` to skip installing its dependencies.
- Start local backend on port 5001

  ```bash
  python app.py
  ```

  The backend will start on http://localhost:5001 and should display:

  ```
  Starting Flask app on port 5001
  App is ready to serve requests
  ```

  Note: port 5001 avoids a conflict with macOS AirPlay/AirTunes on port 5000.

- Verify backend is running

  Open a new terminal and test the backend:

  ```bash
  curl http://localhost:5001/health
  # Should return: {"status": "healthy"}
  ```
- Start frontend development server

  ```bash
  # In the main project directory (new terminal)
  npm run dev
  ```

  The frontend will automatically connect to http://localhost:5001 based on your `.env.local` configuration.
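If you want to script the startup handshake instead of checking by hand, a small poller like this (illustrative, standard library only; the URL and timing values are examples) can wait until `/health` reports healthy:

```python
# Illustrative sketch: poll the backend /health endpoint until it reports
# {"status": "healthy"}, mirroring the manual curl check.
import json
import time
import urllib.request

def is_healthy(body: str) -> bool:
    """True if the health response body is JSON with status == "healthy"."""
    try:
        return json.loads(body).get("status") == "healthy"
    except (ValueError, AttributeError):
        return False

def wait_for_backend(url: str = "http://localhost:5001/health",
                     attempts: int = 10, delay: float = 1.0) -> bool:
    """Retry the health check a few times while the backend starts up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if is_healthy(resp.read().decode("utf-8")):
                    return True
        except OSError:
            pass  # backend not up yet; retry after a short delay
        time.sleep(delay)
    return False
```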
- Beat Detection: Beat-Transformer and madmom models
- Chord Recognition: Chord-CNN-LSTM, BTC-SL, BTC-PL models
- Audio Processing: Support for MP3, WAV, FLAC formats
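As a sketch of how the format and size constraints might be pre-checked before upload (helper names are hypothetical; the 150MB default mirrors the `FLASK_MAX_CONTENT_LENGTH_MB` setting shown in the backend configuration):

```python
# Hypothetical pre-check: accept only the audio formats the backend
# supports (MP3, WAV, FLAC) and respect the Flask upload size cap.
from pathlib import Path

SUPPORTED_SUFFIXES = {".mp3", ".wav", ".flac"}

def upload_allowed(filename: str, size_bytes: int, max_mb: int = 150) -> bool:
    """True if the file's extension and size pass the backend's limits."""
    suffix = Path(filename).suffix.lower()
    return suffix in SUPPORTED_SUFFIXES and size_bytes <= max_mb * 1024 * 1024
```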
Create a `.env` file in the `python_backend` directory:

```env
# Optional: Redis URL for distributed rate limiting
REDIS_URL=redis://localhost:6379

# Optional: Genius API for lyrics
GENIUS_ACCESS_TOKEN=your_genius_token

# Flask configuration
FLASK_MAX_CONTENT_LENGTH_MB=150
CORS_ORIGINS=http://localhost:3000,http://127.0.0.1:3000
```

Backend connectivity issues:
```bash
# 1. Verify backend is running
curl http://localhost:5001/health
# Expected: {"status": "healthy"}

# 2. Check if port 5001 is in use
lsof -i :5001                  # macOS/Linux
netstat -ano | findstr :5001   # Windows

# 3. Verify environment configuration
cat .env.local | grep PYTHON_API_URL
# Expected: NEXT_PUBLIC_PYTHON_API_URL=http://localhost:5001

# 4. Check for macOS AirTunes conflict (if using port 5000)
curl -I http://localhost:5000/health
# If you see "Server: AirTunes", that's the conflict we're avoiding
```

Frontend connection errors:
```bash
# Check browser console for errors like:
# "Failed to fetch" or "Network Error"
# This usually means the backend is not running on port 5001

# Restart both frontend and backend:
# Terminal 1 (Backend):
cd python_backend && python app.py
# Terminal 2 (Frontend):
npm run dev
```

Import errors:
```bash
# Ensure virtual environment is activated
source myenv/bin/activate   # macOS/Linux
myenv\Scripts\activate      # Windows

# Reinstall dependencies
pip install -r requirements.txt
```

We sincerely thank the following APIs and services for their support and contribution to the project.
- Google Gemini API - AI language model for roman numeral analysis, enharmonic corrections, and lyrics translation
- YouTube Search API - github.com/damonwonghv/youtube-search-api - YouTube search and video information
- yt-dlp - github.com/yt-dlp/yt-dlp - YouTube audio extraction (local)
- yt-mp3-go - github.com/vukan322/yt-mp3-go - Alternative audio extraction (production)
- LRClib - github.com/tranxuanthang/lrclib - Lyrics synchronization
- Music.ai SDK - AI-powered music transcription
We welcome contributions! Please see our Contributing Guidelines for details.






