An MCP (Model Context Protocol) server for controlling Reachy Mini robots. This allows Claude Desktop and other MCP clients to interact with Reachy Mini robots through natural language.
- Dance: Play choreographed dance moves
- Emotions: Express pre-recorded emotions
- Head Movement: Move head in different directions
- Camera: Capture images from the robot's camera
- Head Tracking: Enable face tracking mode
- 🎤 Real-Time Local TTS: Text-to-speech runs entirely on-device with streaming audio - no cloud APIs, no API costs, low latency
- Motion Control: Stop motions and query robot status
```bash
# Clone the repository
cd reachy-mini-mcp

# Create virtual environment
uv venv --python 3.10
source .venv/bin/activate

# Install dependencies
uv pip install -e .

# Optional: Install camera support
uv pip install -e ".[camera]"

# Optional: Install speech support (text-to-speech)
uv pip install -e ".[speech]"
```

Copy `.env.example` to `.env` and configure:
```bash
cp .env.example .env
```

Available environment variables:
| Variable | Description | Default |
|---|---|---|
| `REACHY_MINI_ROBOT_NAME` | Robot name for Zenoh discovery | `reachy-mini` |
| `REACHY_MINI_ENABLE_CAMERA` | Enable camera capture | `false` |
| `REACHY_MINI_HEAD_TRACKING_ENABLED` | Start with head tracking enabled | `false` |
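For example, a `.env` that enables the camera for a robot named `my-robot` (the values here are illustrative, not defaults):

```bash
REACHY_MINI_ROBOT_NAME=my-robot
REACHY_MINI_ENABLE_CAMERA=true
REACHY_MINI_HEAD_TRACKING_ENABLED=false
```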
Run the server directly:

```bash
reachy-mini-mcp
```

Add the MCP server using the `claude mcp add` command:
```bash
# Build from source (after cloning the repo)
claude mcp add --transport stdio reachy-mini -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"

# With camera support enabled
claude mcp add --transport stdio reachy-mini --env REACHY_MINI_ENABLE_CAMERA=true -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"

# With custom robot name
claude mcp add --transport stdio reachy-mini --env REACHY_MINI_ROBOT_NAME=my-robot -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"
```

Add to your Claude Desktop configuration file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "reachy-mini": {
      "command": "reachy-mini-mcp",
      "env": {
        "REACHY_MINI_ENABLE_CAMERA": "true"
      }
    }
  }
}
```

If using a virtual environment:
```json
{
  "mcpServers": {
    "reachy-mini": {
      "command": "/path/to/reachy-mini-mcp/.venv/bin/reachy-mini-mcp",
      "env": {
        "REACHY_MINI_ENABLE_CAMERA": "true"
      }
    }
  }
}
```

Play a dance move on the robot.
Parameters:
- `move` (string, optional): Dance name or `"random"`. Default: `"random"`
- `repeat` (integer, optional): Number of times to repeat. Default: `1`
Available moves: simple_nod, head_tilt_roll, side_to_side_sway, dizzy_spin, stumble_and_recover, interwoven_spirals, sharp_side_tilt, side_peekaboo, yeah_nod, uh_huh_tilt, neck_recoil, chin_lead, groovy_sway_and_roll, chicken_peck, side_glance_flick, polyrhythm_combo, grid_snap, pendulum_swing, jackson_square
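As a sketch of how an MCP client invokes this over JSON-RPC, using the standard `tools/call` method; the tool name `play_dance` is an assumption, so check the server's `tools/list` response for the actual registered name:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "play_dance",
    "arguments": { "move": "yeah_nod", "repeat": 2 }
  }
}
```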
Play a pre-recorded emotion.
Parameters:
- `emotion` (string, required): Name of the emotion to play
Move the robot's head in a direction.
Parameters:
- `direction` (string, required): One of `"left"`, `"right"`, `"up"`, `"down"`, `"front"`
- `duration` (float, optional): Movement duration in seconds. Default: `1.0`
Capture an image from the robot's camera.
Returns: Base64-encoded JPEG image
Note: Requires `REACHY_MINI_ENABLE_CAMERA=true`
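Since the tool returns a base64-encoded JPEG, a client can decode and save it with the standard library. This is a minimal sketch; the `payload` below is a stand-in for the string the tool would actually return:

```python
import base64

# Stand-in for the base64 string returned by the camera tool.
# A real payload would encode a full JPEG frame.
payload = base64.b64encode(b"\xff\xd8\xff\xe0" + b"\x00" * 16).decode("ascii")

# Decode and write to disk so the frame can be opened in any image viewer.
image_bytes = base64.b64decode(payload)
with open("frame.jpg", "wb") as f:
    f.write(image_bytes)

# JPEG data starts with the SOI marker 0xFFD8.
print(image_bytes[:2] == b"\xff\xd8")  # → True
```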
Toggle head tracking mode.
Parameters:
- `enabled` (boolean, required): True to enable, False to disable
Stop all current and queued motions immediately.
Make the robot speak using real-time local text-to-speech with natural head movement animation.
Parameters:
- `text` (string, required): The text to speak
- `voice` (string, optional): Voice to use. Default: `"alba"`
Available voices: alba, marius, javert, jean, fantine, cosette, eponine, azelma
Note: Requires the `pocket-tts` package. Install with `uv pip install -e ".[speech]"`
Key highlights:
- 100% Local: Runs entirely on your machine - no internet connection required after installation
- Real-Time Streaming: Audio is generated and streamed in real-time for instant response
- Zero API Costs: No cloud TTS services, no per-character fees, unlimited usage
- Low Latency: Direct local processing means minimal delay between text input and speech output
- Privacy: Your text never leaves your device
The robot's head will naturally sway and move while speaking, creating a more lifelike interaction.
Get the current robot status including connection state, queue size, and current pose.
- Python 3.10+
- Reachy Mini SDK (`reachy_mini>=1.2.7`)
- Running `reachy-mini-daemon` or simulation
- Zenoh network connectivity to the robot
```bash
# Install dev dependencies
uv pip install -e ".[dev]"

# Run linter
ruff check .

# Run type checker
mypy src/
```

License: MIT