Presentation control, reimagined. Hands-free slide navigation using offline speech recognition and hybrid similarity matching.
moves is a CLI tool that automates slide advancement during presentations based on your spoken words. By analyzing your presentation and corresponding transcript, it learns what you say during each slide, then uses speech recognition to detect when you move between sections—all offline and hands-free.
- Offline speech recognition – Uses local ONNX models; your voice stays on your machine
- Hybrid similarity engine – Combines semantic and phonetic matching for accurate slide detection
- Automatic slide generation – Extracts slides from PDF presentations and generates templates with LLM assistance (optional manual mode)
- Speaker profiles – Save and reuse multiple presentations with different speakers
- Flexible source handling – Load presentations and transcripts from local files or Google Drive
- Interactive terminal UI – Real-time feedback with Rich-powered dashboard showing current slide, similarity scores, and system state
- Prepare – Extract slides from a PDF, analyze your transcript, generate sections with speech content
- Control – Start live voice-controlled navigation with keyboard backups
- Manage – Add, edit, list, and delete speaker profiles
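In practice the whole loop is three commands, detailed in the quick start below (paths are placeholders):

```
moves speaker add MyPresentation /path/to/presentation.pdf /path/to/transcript.txt
moves speaker prepare MyPresentation
moves present MyPresentation
```

To run it you'll need: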
- Python 3.13+
- `uv` package manager (or pip as fallback)
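If `uv` isn't installed yet, Astral's standalone installer is one option (see the uv documentation for alternatives):

```
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```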
Then install moves and verify it runs:

```
uv tool install moves-cli
# or: pip install moves-cli

# Verify installation
moves --version
```

Add a speaker from local files:

```
moves speaker add MyPresentation \
  /path/to/presentation.pdf \
  /path/to/transcript.txt
```

You can also use Google Drive URLs (the tool handles authentication):
```
moves speaker add MyPresentation \
  "https://drive.google.com/file/d/.../view?usp=sharing" \
  "https://drive.google.com/file/d/.../view?usp=sharing"
```

Next, configure an LLM for automatic section generation:

```
# Set your LLM model (e.g., Gemini 2.5 Flash Lite)
moves settings set model gemini/gemini-2.5-flash-lite

# Set your API key (securely prompted)
moves settings set key
```

Tip: You can skip LLM setup and use `--manual` mode to generate empty templates you edit yourself.
Generate sections (speech content for each slide):
```
# Auto mode (uses LLM)
moves speaker prepare MyPresentation

# Or manual mode (empty template to edit yourself)
moves speaker prepare MyPresentation --manual
```

If you use manual mode, edit `~/.moves/speakers/<speaker-id>/sections.md` to add your spoken words for each slide.
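A sections file pairs each slide with the words you plan to say while it is up. The exact layout comes from the template that `moves speaker prepare` generates; the sketch below is only illustrative, with hypothetical slide markers and text:

```markdown
<!-- Hypothetical layout: follow the template moves generates -->
## Slide 1
Good morning, and thanks for coming. Today I want to show you...

## Slide 2
Let's start with the problem we set out to solve...
```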
Start the live session:

```
moves present MyPresentation
```

Keyboard shortcuts during presentation:

- `←` / `→` – Previous / Next slide (manual navigation)
- `Ins` – Pause/Resume microphone
- `Ctrl+C` – Exit
The tool listens to your speech and automatically advances slides when it detects you've moved to new content.
- Getting Started Guide – Detailed walkthrough with examples
- Architecture – How the system works internally
- CLI Reference – Complete command documentation
- Configuration Guide – Set up LLM, API keys, and more
- Development Guide – For contributors and developers
How it works, end to end:

```
┌─────────────────────────────────────────────────────────┐
│                  1. PREPARATION PHASE                   │
├─────────────────────────────────────────────────────────┤
│ • Extract slides from PDF                               │
│ • Analyze transcript to identify sections               │
│ • Generate speech content for each slide (LLM or manual)│
│ • Create sections.md file with structure                │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                  2. PRESENTATION PHASE                  │
├─────────────────────────────────────────────────────────┤
│ • Start microphone stream (real-time audio input)       │
│ • Voice Activity Detector (VAD) filters silence         │
│ • Speech Recognition converts audio to text (offline)   │
│ • Similarity Engine matches text to chunks              │
│   ├─ Semantic similarity (embeddings)                   │
│   └─ Phonetic similarity (fuzzy matching)               │
│ • Auto-advance when high similarity match detected      │
└─────────────────────────────────────────────────────────┘
```
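To make the matching step concrete, here is a minimal Python sketch of a hybrid score. It is not the actual moves engine: the word-overlap `jaccard()` stands in for the ONNX embedding similarity, `rapidfuzz` stands in for phonetic matching, and the 50/50 blend is an assumption.

```python
# Illustrative sketch only, not the moves implementation.
# Semantic side: cheap word-overlap stand-in for embedding cosine similarity.
# Phonetic side: rapidfuzz fuzzy matching stands in for phonetic comparison.
from rapidfuzz import fuzz


def jaccard(a: str, b: str) -> float:
    """Word-overlap stand-in for embedding-based semantic similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def hybrid_score(spoken: str, chunk: str, weight: float = 0.5) -> float:
    """Blend both signals; the 50/50 weight is assumed, not moves' value."""
    semantic = jaccard(spoken, chunk)
    phonetic = fuzz.token_set_ratio(spoken, chunk) / 100.0  # 0-100 -> 0-1
    return weight * semantic + (1 - weight) * phonetic


# Score recognized speech against each slide's chunk and report the best
# match; a real controller would advance only past a confidence threshold.
chunks = {
    1: "today I want to talk about our roadmap",
    2: "let's look at the results from last quarter",
}
spoken = "so let us look at last quarter's results"
best, score = max(((n, hybrid_score(spoken, c)) for n, c in chunks.items()),
                  key=lambda item: item[1])
print(f"best match: slide {best} (score {score:.2f})")
```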
All speaker data is stored in `~/.moves/`:

```
~/.moves/
├── settings.toml          # LLM model configuration
├── settings.key           # API key (Windows Credential Manager)
└── speakers/
    └── <speaker-id>/
        ├── speaker.yaml   # Speaker metadata
        └── sections.md    # Speech content for each slide
```
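For reference, settings.toml is plain TOML; after the configuration step above it would hold the model id. The sketch below is illustrative, not a guaranteed schema:

```toml
# Illustrative contents; written by `moves settings set model ...`
model = "gemini/gemini-2.5-flash-lite"
```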
No speakers found?

```
moves speaker list
# Check ~/.moves/speakers/ directory exists
```

Sections not being created?

```
# Check LLM configuration
moves settings list

# Try manual mode (no LLM required)
moves speaker prepare MyPresentation --manual
```

Microphone not detected?

```
# Verify your system microphone works:
# Settings → Sound → Volume mixer (Windows)
# Then retry: moves present MyPresentation
```

Speech not being recognized?
- Speak clearly and at a normal pace
- Test microphone in a quiet environment
- Check that sections.md contains expected content
- Offline processing – No cloud calls except for LLM section generation
- Real-time audio – ~32ms analysis windows, responsive slide detection
- Memory efficient – Processed sections cached in `sections.md`
- First run slower – ONNX models (~500MB) downloaded on first use
Active Development – This tool is being actively developed and improved. Feedback and contributions are welcome.
Licensed under the GNU General Public License v3.0. See LICENSE for details.
Contributions are welcome! See Development Guide for setup instructions.
Questions? Check the FAQ in Getting Started or open an issue on GitHub.