# Ungula

*Ungula* is a Latin word meaning "hoof, nail, or claw."

Autonomous AI agent platform with multi-model orchestration, extensible skills, and multi-channel messaging.
Ungula is OpenClaw rebuilt from the ground up in Python (FastAPI) and React. It is a self-hosted AI agent system that runs 24/7 — processing tasks, managing conversations across Discord/Telegram/Slack/Signal/iMessage, executing tools in a sandboxed environment, and coordinating companion devices over your LAN.
## Features

- **8 LLM Providers** — OpenRouter, Anthropic, OpenAI, Google, xAI, NVIDIA, Ollama, and custom OpenAI-compatible endpoints, with automatic failover between providers.
- **5 Messaging Channels** — Discord, Telegram, Slack, Signal, and iMessage. Unified inbox with session management and SSE event streaming.
- **Extensible Skills** — Built-in tools (shell, file ops, web search, browser automation, URL fetch) plus a skill marketplace (ClawHub) with security scanning.
- **Agent Orchestration** — Per-agent configuration (model, temperature, provider), subagent spawning, context compaction, and tool-calling loops with streaming.
- **Companion Nodes** — Pair devices over WebSocket for distributed command execution with approval workflows.
- **Vector Memory** — Semantic search over conversation history and workspace files, with an embedding cache and auto-indexing.
- **Webhook System** — Inbound webhooks with signature verification, Jinja2 templates, and event retention.
- **Plugin System** — Discover, install, and manage plugins that extend tools and channels.
- **Cron Scheduling** — Schedule recurring agent tasks with cron expressions.
- **Docker Sandbox** — Isolate tool execution in hardened containers with resource limits.
- **Security Auditing** — Built-in security scanner with auto-remediation.
- **React Dashboard** — Full-featured frontend for chat, inbox, skills, nodes, webhooks, plugins, memory, cron, agents, usage monitoring, and more.
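The webhook signature verification mentioned in the feature list is typically an HMAC over the raw request body. A minimal stdlib-only sketch; the secret handling and function names here are illustrative assumptions, not Ungula's actual wire format:

```python
import hashlib
import hmac

def sign(secret: bytes, body: bytes) -> str:
    """Compute a hex HMAC-SHA256 signature over the raw request body."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(secret: bytes, body: bytes, signature: str) -> bool:
    """Constant-time comparison guards against timing attacks."""
    return hmac.compare_digest(sign(secret, body), signature)

secret = b"webhook-secret"
body = b'{"event": "task.completed"}'
sig = sign(secret, body)

assert verify(secret, body, sig)            # genuine payload passes
assert not verify(secret, body + b"x", sig) # any tampering fails
```

The important details carry over to any real implementation: sign the *raw* bytes (not re-serialized JSON) and compare with `hmac.compare_digest` rather than `==`.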
## Architecture

```
               React Dashboard (:3001)
                         |
                  Vite proxy /api
                         |
               +-------------------+
               |  FastAPI (:8001)  |
               |   Rate-limited    |
               +-------------------+
                /      |      |     \
       +--------+  +-----+ +-----+ +----------+
       | Agent  |  | LLM | | WS  | | Channels |
       | Runner |  | Reg | | Mgr | | Registry |
       +--------+  +-----+ +-----+ +----------+
        /      \       |      |         |
  +-------+ +--------+ |      |    Discord, Telegram,
  | Tools | | Skills | |      |    Slack, Signal,
  +-------+ +--------+ |      |    iMessage
      |                |      |
  shell, file ops,     |      |
  web search,     8 providers |
  browser         (failover)  |
      |                       |
  Docker Sandbox      WebSocket /ws/node
                              |
                         +--------+
                         |  Node  |
                         | Client |
                         +--------+
```
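The Agent Runner in the diagram above is built around a tool-calling loop: the model either answers or requests a tool; the runner executes the tool, feeds the result back as a message, and repeats until a final answer arrives. A toy sketch with a stubbed model (`run_agent`, `fake_model`, and the message shapes are illustrative, not Ungula's real API; the real runner adds streaming, context compaction, and sandboxing):

```python
def run_agent(model, tools, messages, max_steps=8):
    """Drive the model until it returns a final answer or the step budget runs out."""
    for _ in range(max_steps):
        reply = model(messages)
        if "tool" not in reply:          # plain completion: we are done
            return reply["content"]
        result = tools[reply["tool"]](**reply["args"])  # dispatch the tool call
        messages = messages + [
            {"role": "assistant", "tool": reply["tool"], "args": reply["args"]},
            {"role": "tool", "content": result},        # feed the result back
        ]
    raise RuntimeError("step budget exhausted")

# Stubbed model: asks for the shell tool once, then answers with its output.
def fake_model(messages):
    if messages[-1]["role"] == "tool":
        return {"content": f"uptime is {messages[-1]['content']}"}
    return {"tool": "shell", "args": {"cmd": "uptime"}}

tools = {"shell": lambda cmd: "3 days"}  # toy tool registry

print(run_agent(fake_model, tools,
                [{"role": "user", "content": "how long has the box been up?"}]))
# → uptime is 3 days
```

The `max_steps` cap matters in practice: without it, a model that keeps requesting tools would loop forever.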
## Quick Start

### Prerequisites

- Python 3.11+
- Node.js 18+ (for the frontend)
- SQLite (bundled with Python)
- Docker (optional, for sandbox or deployment)

### Backend

```bash
cd backend
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -e ".[dev]"

# Initialize config directory and generate a secure secret key
ungula init
```

Edit `~/.ungula/config.yaml` to add your LLM API keys.

```bash
ungula start      # Run in foreground
ungula start -d   # Run as background daemon
ungula status     # Check if running
ungula logs -f    # Follow log output
ungula stop       # Stop the daemon
```

The backend starts at http://localhost:8001. API docs are available at http://localhost:8001/docs (Swagger UI).
### Docker

```bash
cp .env.example .env                # Edit with your API keys
docker compose up                   # Start Ungula
docker compose --profile redis up   # With Redis queue
```

### Frontend

```bash
cd frontend
npm install
npm run dev
```

The dashboard opens at http://localhost:3001 and proxies API requests to the backend. In Docker, the frontend is served directly at http://localhost:8001/.
### First API Calls

```bash
# Register a user
curl -X POST http://localhost:8001/api/auth/register \
  -H "Content-Type: application/json" \
  -d '{"email": "you@example.com", "password": "your-password"}'

# Login to get a JWT token
curl -X POST http://localhost:8001/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "you@example.com", "password": "your-password"}'

# Create a conversation and chat
curl -X POST http://localhost:8001/api/conversations/ \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"title": "Hello"}'

curl -X POST http://localhost:8001/api/chat/CONVERSATION_ID \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"content": "What can you do?"}'
```

## Configuration

Ungula reads configuration from `~/.ungula/config.yaml` with environment variable overrides.
| Variable | Description |
|---|---|
| `UNGULA_HOME` | Config directory (default: `~/.ungula`) |
| `UNGULA_AUTH_SECRET_KEY` | JWT signing secret |
| `UNGULA_OPENROUTER_API_KEY` | OpenRouter API key |
| `UNGULA_ANTHROPIC_API_KEY` | Anthropic API key |
| `UNGULA_OPENAI_API_KEY` | OpenAI API key |
| `UNGULA_GOOGLE_API_KEY` | Google AI API key |
| `UNGULA_XAI_API_KEY` | xAI (Grok) API key |
| `UNGULA_NVIDIA_API_KEY` | NVIDIA NIM API key |
| `UNGULA_DISCORD_TOKEN` | Discord bot token |
| `UNGULA_SERVER_HOST` | Server bind address |
| `UNGULA_SERVER_PORT` | Server port |
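The override precedence (environment variable over config file over default) can be sketched in a few lines. The `UNGULA_*` prefix follows the table above, but the `resolve` helper itself is hypothetical, not Ungula's actual loader:

```python
import os

def resolve(key: str, file_config: dict, default=None):
    """Environment wins over the config file, which wins over the default."""
    env_key = "UNGULA_" + key.upper()
    if env_key in os.environ:
        return os.environ[env_key]
    return file_config.get(key, default)

file_config = {"server_port": 8001}        # as parsed from ~/.ungula/config.yaml
os.environ["UNGULA_SERVER_PORT"] = "9001"  # environment override

print(resolve("server_port", file_config))
# → 9001
```

Note that values read from the environment are strings, so numeric settings need casting at the point of use.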
## Workspace Files

The workspace directory (`~/.ungula/workspace/`) contains markdown files that shape agent behavior:

| File | Purpose |
|---|---|
| `SOUL.md` | Agent persona and behavioral boundaries |
| `USER.md` | User context and preferences |
| `IDENTITY.md` | Agent identity definition |
| `AGENTS.md` | Master workspace guide |
| `TOOLS.md` | Local tool configuration notes |
| `MEMORY.md` | Long-term memory |
| `HEARTBEAT.md` | Periodic task checklist |
| `BOOT.md` | Startup tasks (run on server start) |
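One way to picture how these files shape behavior: a system prompt assembled from whichever workspace files exist. A hedged sketch only; the file order and the `assemble_prompt` helper are illustrative, not Ungula's actual loader:

```python
import tempfile
from pathlib import Path

def assemble_prompt(workspace: Path,
                    order=("SOUL.md", "IDENTITY.md", "USER.md", "MEMORY.md")) -> str:
    """Concatenate whichever workspace files exist, in a fixed order."""
    parts = []
    for name in order:
        f = workspace / name
        if f.exists():
            parts.append(f"## {name}\n{f.read_text().strip()}")
    return "\n\n".join(parts)

# Demo against a throwaway workspace with two of the files present.
with tempfile.TemporaryDirectory() as tmp:
    ws = Path(tmp)
    (ws / "SOUL.md").write_text("Be concise.")
    (ws / "USER.md").write_text("Prefers Python.")
    print(assemble_prompt(ws))
```

Missing files are simply skipped, which is why a fresh workspace still works before every template has been filled in.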
Initialize workspace from templates:

```bash
curl -X POST http://localhost:8001/api/config/initialize-workspace
```

## Project Structure

```
ungula/
├── backend/
│   ├── ungula/
│   │   ├── agents/          # Agent runner, factory, subagents, context compaction
│   │   ├── api/routes/      # 19 route modules (~100+ endpoints)
│   │   ├── browser/         # Playwright browser automation
│   │   ├── channels/        # (see messaging/)
│   │   ├── cron/            # Cron scheduler
│   │   ├── hooks/           # Boot tasks
│   │   ├── llm/             # 8 LLM provider adapters + failover
│   │   ├── memory/          # Vector memory with embeddings
│   │   ├── messaging/       # Discord, Telegram, Slack, Signal, iMessage
│   │   ├── nodes/           # Companion device management
│   │   ├── pairing/         # Device pairing workflow
│   │   ├── plugins/         # Plugin system (loader, registry, installer)
│   │   ├── sandbox/         # Docker sandbox for tool execution
│   │   ├── security/        # Security auditor
│   │   ├── skills/          # Skills framework + built-in tools
│   │   │   └── builtin/     # shell, file_ops, browser, web_search, url_fetch, ...
│   │   ├── storage/         # SQLAlchemy models + SQLite backend
│   │   ├── tools/           # Tool registry + policy engine
│   │   ├── webhooks/        # Webhook manager + templates
│   │   ├── config.py        # Pydantic configuration models
│   │   └── main.py          # FastAPI application entry point
│   ├── tests/               # pytest test suite (2200+ tests)
│   └── pyproject.toml
├── frontend/
│   ├── src/
│   │   ├── pages/           # 17 page components (Chat, Inbox, Skills, Nodes, ...)
│   │   ├── components/      # Shared UI components
│   │   └── api.js           # API client
│   ├── package.json
│   └── vite.config.js
├── node-client/             # Companion device client SDK
│   ├── ungula_node/
│   │   ├── client.py        # WebSocket client
│   │   ├── cli.py           # CLI (connect, pair, status, approve, reject)
│   │   ├── capabilities.py  # Capability registration
│   │   └── handlers.py      # Built-in command handlers
│   └── pyproject.toml
├── deploy/                  # Service files
│   ├── ungula.service       # systemd unit file
│   ├── com.ungula.agent.plist  # macOS launchd plist
│   └── nginx.conf           # nginx reverse proxy config
├── docs/                    # Documentation
│   ├── api-reference.md     # API endpoint reference
│   ├── deployment.md        # Deployment guide
│   └── templates/           # Workspace file templates
├── skills/                  # User skill directory
├── Dockerfile               # Multi-stage Docker build
├── docker-compose.yml       # Docker Compose config
├── .env.example             # Environment variable template
├── CLAUDE.md                # Development guidelines
└── PLAN.md                  # Development roadmap
```
## CLI Reference

| Command | Description |
|---|---|
| `ungula start` | Start server in foreground |
| `ungula start -d` | Start as background daemon |
| `ungula stop` | Stop the daemon |
| `ungula status` | Show running/stopped status and health |
| `ungula logs [-n 50] [-f]` | View or follow server logs |
| `ungula init [--force]` | Create config directory and generate config |
| `ungula rotate-key [-y]` | Generate new JWT secret key |
## Development

```bash
cd backend
source .venv/bin/activate
pytest                # Run all tests
pytest -x             # Stop on first failure
pytest --cov=ungula   # With coverage
```

```bash
cd backend
ruff check .    # Lint
ruff format .   # Format

cd frontend
npm run lint    # ESLint
```

## Writing Skills

Skills are Python packages in `~/.ungula/skills/` or `backend/ungula/skills/builtin/`. Each skill has a `manifest.yaml` and one or more tool modules. See the existing built-in skills for examples.
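A skill package might look like the following. The manifest fields shown here are illustrative guesses, not Ungula's actual schema; consult a built-in skill under `backend/ungula/skills/builtin/` for the real shape:

```yaml
# ~/.ungula/skills/my_skill/manifest.yaml  (hypothetical field names)
name: my_skill
version: 0.1.0
description: Example skill exposing one tool
tools:
  - module: tools   # refers to a tools.py next to this manifest
```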
Skills can also be installed from ClawHub:

```bash
curl -X POST http://localhost:8001/api/skills/clawhub/install \
  -H "Content-Type: application/json" \
  -d '{"slug": "skill-name"}'
```

## Documentation

- API Reference — Complete endpoint documentation with request/response shapes and curl examples.
- Deployment Guide — Local development, production setup, channel configuration, Docker sandbox, and node client.

