Your AI assistant, running entirely on your machine.
LCARS-inspired interface. Local models. Zero cloud dependency.
LocalClaw is a local-first personal AI assistant CLI built on OpenClaw. It gives you the full power of the OpenClaw agent runtime — gateway, tools, sessions, skills — but defaults to local model providers like Ollama, LM Studio, and vLLM. No cloud keys required to get started.
The gateway dashboard features an LCARS-inspired interface (Library Computer Access/Retrieval System, the iconic Star Trek computer display design) with a matching terminal UI color scheme.
Coexistence: LocalClaw installs as a separate `localclaw` binary with its own state directory (`~/.localclaw/`) and config file (`~/.localclaw/openclaw.local.json`). It runs side-by-side with a standard `openclaw` installation without any interference. Different state directory, different config, different gateway port, same machine.
- Zero cloud dependency — point it at Ollama, LM Studio, or vLLM and go.
- Isolated state — `~/.localclaw/` keeps sessions, locks, and agent data fully separate from any existing OpenClaw installation.
- First-run onboarding — detects local model servers and walks you through picking a default model.
- Full OpenClaw feature set — gateway, TUI, agent, browser control, skills, sessions, tools — all via `localclaw <command>`.
- Separate gateway port — defaults to port `18790` so it doesn't conflict with an OpenClaw gateway on `18789`.
LocalClaw has gained a suite of intelligent features that transform it from a basic local chat interface into a proactive, self-managing AI assistant.
The terminal UI footer now shows all three model tiers so you always know which model is handling your message:
```
agent main | session main | ollama/llama3.1:8b | primary: ollama/glm-4.7-flash-fast:latest | orch: openai-codex/gpt-5.2-codex (auto) | tokens 12k/32k (37%)
```

The `primary:` label only appears when the active model differs from the configured primary (e.g., when the fast model is handling a simple message or the orchestrator is handling a complex one).
The gateway now validates your entire model stack on every boot:
- Confirms your model server (Ollama, LM Studio, vLLM) is reachable
- Verifies your configured model is actually available
- Checks that your model's context window meets minimum requirements
- Logs clear warnings if anything is misconfigured — no more silent failures
LocalClaw uses a heuristic classifier to route every message to the right model tier — no LLM call overhead, just fast keyword and pattern matching:
| Tier | Handles | Example messages | Typical model |
|---|---|---|---|
| Fast (tiny) | Greetings, yes/no, short chat | "hi", "thanks!", "what time is it?" | llama3.2 (3B) |
| Local (primary) | Lookups, tool calls, email, calendar | "check my emails", "what's on my calendar?", "list files" | glm-4.7-flash-fast (30B) |
| API (orchestrator) | Multi-step reasoning, code, external APIs | "fix the auth bug", "search my Jira issues" | gpt-5.2-codex, claude-sonnet-4 |
The classifier categorizes messages into three complexity levels:
- Simple — no action keywords, short conversational messages → routed to the tiny fast model for sub-second responses. Tools are disabled and context is capped (default 4096 tokens) so the fast model stays fast and never hallucinates tool calls.
- Moderate — display/lookup keywords (`show`, `list`, `find`, `open`), or tool-requiring resource keywords (`email`, `calendar`, `meeting`, `inbox`, `weather`, `contacts`, `notes`, `browse`) → stays on the local primary model with full tool access
- Complex — reasoning keywords (`fix`, `debug`, `create`, `build`), external API keywords (`search`, `send`, `read`, `check`, `fetch`), code patterns, file paths, URLs → escalated to the API orchestrator model
Routing is per-message, not sticky — after the orchestrator handles a complex task, the next moderate message automatically returns to the local primary model. This prevents expensive API timeouts on routine follow-up questions.
Users without API keys still get a fully functional agent (fast model + local model), while users with API access get the best quality for demanding tasks.
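The routing heuristic described above can be reduced to a few lines. The following is an illustrative TypeScript sketch, not LocalClaw's actual code: the keyword lists are taken from the examples in this section, and `classify` is a hypothetical name. The precedence (resource keywords before external-API keywords) is chosen so that "check my emails" stays on the local primary model, matching the routing table.

```typescript
// Illustrative sketch of the three-tier heuristic classifier.
// Keyword lists come from the examples in this doc; the real
// implementation's lists and precedence rules may differ.
type Tier = "simple" | "moderate" | "complex";

const REASONING = ["fix", "debug", "create", "build"];
const RESOURCES = ["email", "calendar", "meeting", "inbox", "weather", "contacts", "notes", "browse"];
const EXTERNAL = ["search", "send", "read", "check", "fetch"];
const LOOKUP = ["show", "list", "find", "open"];

function classify(message: string, maxSimpleLength = 250): Tier {
  const text = message.toLowerCase();
  // Prefix word match, so "email" also matches "emails".
  const has = (words: string[]) => words.some((w) => new RegExp("\\b" + w).test(text));

  // Code fences, file paths, and URLs always escalate to the orchestrator.
  if (/https?:\/\/|```|[\w-]+\/[\w.-]+\.\w+/.test(text)) return "complex";
  if (has(REASONING)) return "complex";
  // Resource keywords keep tool-requiring requests on the local primary model.
  if (has(RESOURCES) || has(LOOKUP)) return "moderate";
  if (has(EXTERNAL)) return "complex";
  // No action keywords and short: tiny fast model.
  return text.length <= maxSimpleLength ? "simple" : "moderate";
}
```

Because this is pure keyword and pattern matching, it adds no LLM call overhead, which is the point of the design.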
During onboarding (localclaw configure), you can choose a strategy preset:
| Preset | Fast model | Primary model | Orchestrator | Best for |
|---|---|---|---|---|
| Balanced (recommended) | Local tiny (3B) | Local mid (8B-30B) | API model | Most users — fast chat, capable tools, quality complex tasks |
| Local only | Local tiny (3B) | Local mid (8B-30B) | Disabled | Privacy-first, air-gapped, or no API budget |
| All-API (Enterprise) | API model | API model | API model (always) | Unlimited token spend, maximum quality |
```json5
{
  agents: {
    defaults: {
      model: { primary: "ollama/glm-4.7-flash-fast:latest" },

      // Fast model for simple chat (tier 1)
      routing: {
        enabled: true,
        fastModel: "ollama/llama3.2:latest",
        maxSimpleLength: 250,
      },

      // API model for complex tasks (tier 3)
      orchestrator: {
        enabled: true,
        model: "openai-codex/gpt-5.2-codex",
        strategy: "auto", // "auto" | "always" | "fallback-only"
        maxSimpleLength: 250,
      },
    },
  },
}
```

| Orchestrator strategy | Behavior |
|---|---|
| `auto` (default) | Complex messages → API, simple/moderate → local. Local timeout auto-escalates to API. |
| `always` | Always try API first, local is fallback on failure |
| `fallback-only` | Local handles everything, API only when local fails. Timeout auto-escalates to API. |
When an API orchestrator model is configured, LocalClaw applies tiered timeouts to local model runs:
| Setup | Timeout | On timeout |
|---|---|---|
| Local + API orchestrator | 4 minutes | Automatically escalates to the API model |
| Local only (no API) | 10 minutes | Returns timeout error |
This prevents slow local models from blocking you indefinitely. If your local model gets stuck in a multi-tool-call loop or generates slowly after receiving tool results, LocalClaw automatically hands the task to the faster API model — seamlessly, with no manual intervention.
The timeout is configurable via `agents.defaults.timeoutSeconds` in your config if you want to override the defaults.
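Conceptually, the escalation is a race between the local run and a deadline. A minimal sketch under that assumption (function names here are hypothetical, not LocalClaw's API):

```typescript
// Sketch: run the local model against a deadline; if it expires and an
// API orchestrator is configured, hand the task off automatically.
async function runWithEscalation<T>(
  runLocal: () => Promise<T>,
  runApi: (() => Promise<T>) | undefined,
  timeoutMs: number,
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<"timeout">((resolve) => {
    timer = setTimeout(() => resolve("timeout"), timeoutMs);
  });
  try {
    const outcome = await Promise.race([runLocal().then((value) => ({ value })), deadline]);
    if (outcome === "timeout") {
      if (runApi) return await runApi(); // local + API orchestrator: auto-escalate
      throw new Error(`local model timed out after ${timeoutMs} ms`); // local only
    }
    return outcome.value;
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```

With a 4-minute deadline this gives the "Local + API orchestrator" row of the table above; with no `runApi` it degrades to the "Local only" timeout error.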
Some local models (e.g. GLM-4 via Ollama) report tool calling capabilities but their chat templates lack native structured tool call support. Instead of using the API's tool calling format, they emit raw JSON in their text output like:
```json
{"name": "exec", "parameters": {"command": "ls"}}
```

LocalClaw detects these text-based tool calls, matches them to registered tools (with fuzzy name matching and alias support), executes them, and feeds the results back to the model — all transparently. This works for any local model regardless of template limitations.
Guards prevent false positives:
- Only activates for local providers (Ollama, LM Studio, vLLM)
- Skipped if the model already made structured tool calls in the same run
- Raw JSON tool calls are always stripped from user-facing output
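The detection step can be pictured as scanning the model's text output for balanced JSON objects that have a tool-call shape. A simplified sketch (hypothetical function; the real parser also does fuzzy name and alias matching):

```typescript
// Sketch: find raw JSON tool calls like {"name": "exec", "parameters": {...}}
// embedded in plain-text model output, and strip them from the visible text.
interface TextToolCall { name: string; parameters: Record<string, unknown> }

function extractToolCalls(output: string): { calls: TextToolCall[]; text: string } {
  const calls: TextToolCall[] = [];
  let text = output;
  for (let start = text.indexOf("{"); start !== -1; start = text.indexOf("{", start + 1)) {
    let depth = 0;
    for (let i = start; i < text.length; i++) {
      if (text[i] === "{") depth++;
      else if (text[i] === "}" && --depth === 0) {
        // Balanced {...} span: keep it only if it parses to the expected shape.
        const span = text.slice(start, i + 1);
        try {
          const obj = JSON.parse(span);
          if (typeof obj?.name === "string" && obj.parameters && typeof obj.parameters === "object") {
            calls.push(obj);
            text = text.slice(0, start) + text.slice(i + 1); // strip from user-facing text
            start = start - 1; // re-scan from the same position
          }
        } catch { /* not valid JSON, keep scanning */ }
        break;
      }
    }
  }
  return { calls, text: text.trim() };
}
```

Non-JSON braces (code snippets, prose) fail `JSON.parse` or the shape check and are left untouched, which is how the false-positive guards above can stay simple.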
The email tool includes explicit priority hints that guide models to use the structured email tool instead of shelling out to the underlying CLI via exec. This reduces multi-step tool call chains (e.g., 8 sequential exec calls) down to a single email tool call with the right action and parameters — faster, more reliable, and less likely to timeout.
Every agent turn is automatically logged to memory/sessions/ as a timestamped markdown file. Session logs include user messages, assistant responses, model info, and token counts. No more lost conversations.
Browse and search past session transcripts directly in your browser:
- `/sessions` — full session browser UI with dark LCARS-inspired theme
- `/api/sessions` — REST API for listing, searching, and retrieving session logs
- Full-text search across all sessions
On gateway startup, the proactive-briefing hook reads your recent session logs (last 24h) and writes a context summary to memory/briefing-context.md. This gives the agent awareness of recent conversations for context-aware morning briefings and follow-up reminders based on what you discussed yesterday.
Define event-driven, multi-step pipelines as simple YAML files in `workspace/workflows/`:

```yaml
name: startup-log
trigger:
  event: gateway:startup
steps:
  - action: write-file
    path: memory/startup-log.md
    content: "Gateway started"
    append: true
  - action: notify
    message: "System ready"
```

Supports three step types (`agent-turn`, `notify`, `write-file`) and two trigger modes (`event` for hook-driven, `schedule` for cron-based).
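For the `schedule` trigger mode, a cron-based workflow might look like the following. This is a hypothetical sketch following the same schema; the cron string format and the `message` field for `agent-turn` steps are assumptions, since only an `event`-triggered example is shown above.

```yaml
# Hypothetical schedule-triggered workflow. The cron string and the
# agent-turn "message" field are assumptions, not confirmed schema.
name: morning-note
trigger:
  schedule: "0 8 * * *"   # every day at 08:00
steps:
  - action: agent-turn
    message: "Summarize yesterday's session logs into memory/briefing.md"
  - action: notify
    message: "Morning briefing ready"
```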
- Clipboard — full read/write clipboard access (`pbpaste`/`pbcopy` on macOS, `xclip`/`wl-paste` on Linux)
- Focus Mode — suppress heartbeat delivery during deep work sessions, with auto-expiry and buffered alerts
- Workspace File Watcher — monitors workspace files for changes and fires `workspace:file-changed` hook events with debouncing
The user-learning hook observes your interactions and builds a preference profile over time:
- Active hours — when you typically interact
- Message style — average length, question frequency
- Tool preferences — which tools/actions you request most
- Topic frequency — common themes in your conversations
Stored at `memory/user-preferences.json` and available for other hooks to personalize behavior.
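The profile is a small JSON document. Its exact schema isn't shown here, but based on the four categories above it might look roughly like this (all field names are illustrative assumptions, not the actual format):

```json5
// Illustrative sketch only; field names are assumptions based on the
// four categories above, not the actual schema.
{
  activeHours: { start: 8, end: 23 },                   // when you typically interact
  messageStyle: { avgLength: 64, questionRatio: 0.4 },  // average length, question frequency
  toolPreferences: { email: 31, calendar: 12, git: 7 }, // request counts per tool
  topicFrequency: { deployment: 9, jira: 5 },           // common conversation themes
}
```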
- Document Indexer — auto-indexes text files from `workspace/documents/` for agent context
- Diagram Pipeline — detects Mermaid code blocks in agent output and renders them to SVG (via `mmdc`) or saves `.mmd` source files
- Voice Pipeline — STT via `whisper-cpp`, TTS via macOS `say`, with automatic capability detection
LocalClaw ships with 10 native agent tools that give the AI structured, typed access to local system capabilities — no external skill files or shell command composition needed. Every tool runs entirely on your machine with zero cloud dependency.
| Tool | Description | Key Actions |
|---|---|---|
| `resource_monitor` | Real-time system health | CPU, memory, disk usage, top processes, system load |
| `run_history` | Persistent command history | Query past executions, filter by status/time, stats, JSONL-backed |
| `tmux` | Terminal session manager | Create sessions, send keys, capture output, wait for text patterns |
| `pdf` | PDF read and edit | Extract text, metadata, merge/split PDFs, add text stamps, remove pages |
| `office` | Office document I/O | Read/create DOCX, read XLSX/PPTX, create spreadsheets with auto-formatting |
| `media` | Video/audio processing | Extract frames/clips/audio, probe metadata, thumbnails, transcode (via ffmpeg) |
| `transcribe` | Offline speech-to-text | Local transcription via whisper-cpp, audio format conversion, capability detection |
| `git` | Structured git queries | Status, log, diff, branch, stash, show, blame, remote — all parsed and typed |
| `archive` | Compression and archives | Create/extract/list ZIP and TAR (gz/bz2/xz/zst), gzip/gunzip single files |
| `network` | Network diagnostics | Ping, DNS lookup, port scan, HTTP check, traceroute, interfaces, listening ports |
All tools use JSON schema parameters and return structured results with both human-readable text and machine-parseable details. The agent selects the right tool automatically based on your request.
Optional system dependencies: Some tools wrap system binaries when available:
- `ffmpeg` — required by `media` and `transcribe` (audio conversion). Install: `brew install ffmpeg`
- `whisper-cpp` — required by `transcribe` for local STT. Install: `brew install whisper-cpp`
- `tmux` — required by the `tmux` tool. Install: `brew install tmux`
- `tar` — used by `archive` for tar operations (pre-installed on macOS/Linux)
LocalClaw includes the full OpenClaw multi-channel messaging stack. The agent can receive and respond to messages across 14+ platforms — all running locally on your machine.
| Channel | Protocol | Setup complexity |
|---|---|---|
| Telegram | Bot API (grammY) | Easy — create a bot with @BotFather, paste the token |
| WhatsApp | Baileys (Web) | Easy — scan a QR code to link your number |
| Discord | discord.js (Bot API) | Easy — create a bot app, paste the token |
| Slack | Bolt (Socket Mode) | Moderate — create a Slack app with scopes |
| Signal | signal-cli (linked device) | Moderate — requires signal-cli setup |
| iMessage | imsg (legacy) | macOS only — work in progress |
| BlueBubbles | iMessage via BlueBubbles | macOS only — recommended for iMessage |
| Google Chat | Chat API (HTTP webhook) | Moderate — Google Workspace admin required |
| Microsoft Teams | Bot Framework (extension) | Moderate — Azure bot registration |
| Matrix | matrix-js-sdk (extension) | Moderate — homeserver + access token |
| Mattermost | Extension | Moderate |
| Zalo | Zalo OA (extension) | Moderate |
| Twitch | Extension | Moderate |
| WebChat | Built-in web UI | None — included with the gateway dashboard |
The fastest way to add a channel is during onboarding:
```shell
localclaw onboard
# → Select "Telegram" at the channel step
# → Paste your bot token from @BotFather
```

Or add a channel after initial setup:
```shell
# Interactive channel setup
localclaw channels add --channel telegram

# Or set the token directly
localclaw channels add --channel telegram --token "123456:ABC-DEF..."

# Check channel status
localclaw channels status

# Pair a new sender (security)
localclaw pairing approve telegram <code>
```

Channels are configured in the `channels` block of `~/.localclaw/openclaw.local.json`:
```json5
{
  channels: {
    telegram: {
      botToken: "123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11",
      allowFrom: ["+15555550123"], // Allowed sender phone numbers / usernames
    },
    whatsapp: {
      allowFrom: ["+15555550123"],
    },
    discord: {
      botToken: "MTIzNDU2Nzg5MDEyMzQ1Njc4OQ...",
      allowFrom: ["username#1234"],
    },
  },
}
```

LocalClaw treats all inbound DMs as untrusted input by default:
- DM pairing (default) — unknown senders receive a short pairing code. The bot does not process their message until approved.
- Approve senders with `localclaw pairing approve <channel> <code>`
- Open DMs require explicit opt-in: set `dmPolicy: "open"` and add `"*"` to `allowFrom`
- Run `localclaw doctor` to surface risky or misconfigured DM policies
For full channel documentation, see the OpenClaw channel docs — all commands work with localclaw in place of openclaw.
LocalClaw includes built-in integrations for Jira, Confluence, Slack, and Email (Gmail) that give the agent direct, structured access to your team's project management, documentation, communication, and email tools. All API calls run locally from your machine — no intermediary cloud services.
The fastest way to configure integrations is the interactive wizard:
```shell
localclaw configure --section integrations
```

This walks you through enabling each integration, entering credentials, and setting defaults. You can also select Integrations from the main `localclaw configure` menu.
Alternatively, edit your config file directly at ~/.localclaw/openclaw.local.json (see Manual Configuration below).
Connect your Jira Server/Data Center or Jira Cloud instance to let the agent manage issues, track projects, and automate workflows.
| Action | Description |
|---|---|
| Search issues | Query issues using JQL (Jira Query Language) with pagination |
| Get issue details | Retrieve full issue data — summary, status, assignee, priority, type, timestamps |
| Create issues | Create new issues with summary, description, type, assignee, priority, and labels |
| Add comments | Post comments on existing issues (Atlassian Document Format) |
| Transition issues | Move issues through workflow states (e.g. To Do → In Progress → Done) |
| List transitions | Discover available workflow transitions for any issue |
- A Jira Server/Data Center or Jira Cloud instance
- A Personal Access Token (PAT) — generate one in your Jira profile under Personal Access Tokens → Create token
- For Jira Cloud (alternative): An Atlassian account email and API token — generate at https://id.atlassian.com/manage-profile/security/api-tokens
Jira Server/Data Center (default — uses Personal Access Token with Bearer auth):
```json5
{
  integrations: {
    jira: {
      enabled: true,
      baseUrl: "https://jira.yourcompany.com", // Your Jira Server URL
      apiToken: "MDM2OTk1...",                 // Personal Access Token
      defaultProject: "PROJ",                  // Optional: default project key for new issues
      timeoutSeconds: 30,                      // Optional: API request timeout (default: 30)
      maxResults: 50,                          // Optional: max search results (default: 50)
    },
  },
}
```

Jira Cloud (uses email + API token):

```json5
{
  integrations: {
    jira: {
      enabled: true,
      authType: "basic",  // Use email + API token auth
      apiVersion: "3",    // REST API v3 for Cloud
      baseUrl: "https://yourteam.atlassian.net", // Your Jira Cloud URL
      email: "you@example.com", // Atlassian account email
      apiToken: "ATATT3x...",   // API token (kept secret)
      defaultProject: "PROJ",
    },
  },
}
```

| Config Key | Description |
|---|---|
| `authType` | `"pat"` (default, Personal Access Token, Server/DC) or `"basic"` (email + API token, Cloud) |
| `apiVersion` | `"2"` (default for PAT/Server) or `"3"` (default for Cloud) |
- "Show me all open bugs assigned to me in the PROJ project" — agent runs a JQL search and returns matching issues
- "Create a task in PROJ: Upgrade Node to v22 with high priority" — agent creates the issue and returns the new key
- "Move PROJ-123 to In Progress and add a comment that I'm starting work" — agent transitions the issue and posts the comment
- "What's the status of PROJ-456?" — agent fetches the issue and summarizes its state
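Under the hood, searches like these map onto Jira's REST search endpoint (`/rest/api/<version>/search?jql=...`). A rough sketch of how the config values above might be turned into a request; the endpoint and auth schemes follow Jira's documented API, but the function and its wiring are illustrative, not LocalClaw's actual code:

```typescript
// Sketch: build a Jira JQL search request from the integration config.
import { Buffer } from "node:buffer";

interface JiraConfig {
  baseUrl: string;
  apiToken: string;
  authType?: "pat" | "basic";
  apiVersion?: string;
  email?: string;
  maxResults?: number;
}

function buildJiraSearch(cfg: JiraConfig, jql: string): { url: string; headers: Record<string, string> } {
  // API version defaults mirror the table above: v2 for PAT/Server, v3 for Cloud.
  const ver = cfg.apiVersion ?? (cfg.authType === "basic" ? "3" : "2");
  const url =
    `${cfg.baseUrl}/rest/api/${ver}/search` +
    `?jql=${encodeURIComponent(jql)}&maxResults=${cfg.maxResults ?? 50}`;
  const headers: Record<string, string> = { Accept: "application/json" };
  if (cfg.authType === "basic" && cfg.email) {
    // Jira Cloud: Basic auth with email:apiToken
    headers.Authorization = "Basic " + Buffer.from(`${cfg.email}:${cfg.apiToken}`).toString("base64");
  } else {
    // Server/DC: Personal Access Token as a Bearer token
    headers.Authorization = `Bearer ${cfg.apiToken}`;
  }
  return { url, headers };
}
```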
Connect your Confluence Cloud or Server instance to let the agent search, read, create, and update wiki pages and documentation.
| Action | Description |
|---|---|
| Search content | Query pages using CQL (Confluence Query Language) with pagination |
| Get page details | Retrieve page metadata — title, space, status, version, timestamps, web URL |
| Read page body | Fetch the full storage-format body of any page |
| Create pages | Create new pages in a space with HTML body content, optional parent page |
| Update pages | Update existing pages with new title, body, and automatic version increment |
| List spaces | Enumerate available Confluence spaces with name, key, and type |
- A Confluence Cloud or Confluence Server instance
- An Atlassian account email address
- An API token — the same token used for Jira works if you're on Atlassian Cloud
```json5
{
  integrations: {
    confluence: {
      enabled: true,
      baseUrl: "https://yourteam.atlassian.net", // Your Confluence instance URL
      email: "you@example.com", // Atlassian account email
      apiToken: "ATATT3x...",   // API token (kept secret)
      defaultSpace: "TEAM",     // Optional: default space key for new pages
      timeoutSeconds: 30,       // Optional: API request timeout (default: 30)
      maxResults: 25,           // Optional: max search results (default: 25)
    },
  },
}
```

- "Find all pages in the TEAM space mentioning deployment procedures" — agent searches Confluence and returns matching pages with links
- "Read the content of page 12345678" — agent fetches and displays the full page body
- "Create a new page in TEAM called 'Q3 Retrospective' with a summary of our last sprint" — agent creates the page and returns its URL
- "Update the 'API Reference' page with the new endpoint documentation" — agent fetches the current version, updates the body, and increments the version number
- "What spaces are available in our Confluence?" — agent lists all spaces with their keys and names
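The automatic version increment on updates reflects how Confluence's content API works: an update must carry `version.number` set to one more than the current version. A small sketch of the update payload the agent would need to build (payload shape follows Confluence's documented storage format; the function itself is illustrative):

```typescript
// Sketch: build a Confluence page-update payload with the required
// version increment. The body uses the "storage" (HTML) representation.
interface ConfluencePage { id: string; title: string; version: { number: number } }

function buildPageUpdate(page: ConfluencePage, newTitle: string, newBodyHtml: string) {
  return {
    id: page.id,
    type: "page",
    title: newTitle,
    version: { number: page.version.number + 1 }, // automatic version increment
    body: { storage: { value: newBodyHtml, representation: "storage" } },
  };
}
```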
Connect Slack to let the agent read and send DMs, post to channels, search conversations, and interact with your team's workspace. Messages sent via the user token appear as you — not as a bot.
Note: This is the Slack integration (agent tool for reading/writing Slack). It is separate from the Slack channel (which routes incoming Slack messages to the agent). You can use both simultaneously.
| Action | Description |
|---|---|
| Read DMs | Find any person by username and read your DM conversation with them |
| Send DMs | Send a direct message to any user by username — appears as you |
| Post messages | Send messages to any channel or thread, with link unfurling control |
| Channel history | Read recent messages from any channel by name or ID, with automatic user name resolution (up to 200 messages) |
| List DMs | Browse your 50 most recent DM conversations with user names |
| Thread replies | Fetch all replies in a specific thread, with automatic user name resolution |
| Search messages | Full-text search across the workspace (auto-detects usernames and includes DM history) |
| Find users | Search for users by name, username, or email |
| List channels | Enumerate public and private channels with topic, purpose, and membership |
| User lookup | Resolve user IDs to names, real names, emails, and bot status |
| Add reactions | React to messages with emoji |
| Set channel topic | Update the topic of a channel |
- A Slack workspace where you can install apps
- A Slack App with a Bot User — create one at https://api.slack.com/apps
- A Bot User OAuth Token (`xoxb-...`) — required for channel operations
- A User OAuth Token (`xoxp-...`) — required for DMs, search, and sending messages as yourself
- Go to https://api.slack.com/apps and click Create New App
- Choose From scratch, give it a name (e.g. "LocalClaw"), and select your workspace
- Go to OAuth & Permissions in the left sidebar
- Under Scopes, add the bot token scopes and user token scopes listed below
- Click Install to Workspace at the top of the OAuth page and authorize
- Copy the Bot User OAuth Token (`xoxb-...`) and User OAuth Token (`xoxp-...`)
- Add both tokens to your LocalClaw config (see Configuration below)
Bot Token Scopes (under "Bot Token Scopes" in OAuth & Permissions):
| Scope | Used for |
|---|---|
| `chat:write` | Posting messages to channels |
| `channels:history` | Reading public channel history |
| `channels:read` | Listing public channels |
| `groups:history` | Reading private channel history |
| `groups:read` | Listing private channels |
| `im:history` | Reading DM history |
| `im:read` | Listing DM conversations |
| `im:write` | Opening DM conversations |
| `users:read` | Looking up user info |
| `users:read.email` | Reading user email addresses |
| `reactions:write` | Adding emoji reactions |
| `channels:join` | Auto-joining public channels when the bot needs to read history |
| `channels:manage` | Setting channel topics |
User Token Scopes (under "User Token Scopes" in OAuth & Permissions):
| Scope | Used for |
|---|---|
| `chat:write` | Sending DMs as yourself (not as the bot) |
| `im:history` | Reading your DM conversations |
| `im:read` | Listing your DMs |
| `im:write` | Opening DM conversations |
| `search:read` | Searching messages across the workspace |
| `users:read` | Looking up user info for DM resolution |
Why two tokens? The bot token (`xoxb-`) operates as the app bot and can only see channels it's invited to. The user token (`xoxp-`) operates as you — it can read your DMs, search your messages, and send DMs that appear from you (not the bot). For the best experience, configure both.
```json5
{
  integrations: {
    slack: {
      enabled: true,
      botToken: "xoxb-1234-5678-abcdef",  // Bot User OAuth Token (required)
      userToken: "xoxp-1234-5678-ghijkl", // User OAuth Token (required for DMs + search)
      appToken: "xapp-1-A0B1C2-...",      // Optional: App-Level Token for Socket Mode
      signingSecret: "a1b2c3d4e5f6...",   // Optional: for webhook verification
      defaultChannel: "#general",         // Optional: default channel for posting
      timeoutSeconds: 30,                 // Optional: API request timeout (default: 30)
    },
  },
}
```

| Config Key | Required | Description |
|---|---|---|
| `botToken` | Yes | Bot User OAuth Token (`xoxb-...`) for channel operations |
| `userToken` | Recommended | User OAuth Token (`xoxp-...`) for DMs, search, and posting as yourself |
| `defaultChannel` | No | Default channel for `post_message` when no channel is specified |
| `timeoutSeconds` | No | API request timeout in seconds (default: 30) |
- "Pull my last 10 DMs with ryan.valencia" — agent finds the user, opens the DM, and displays the conversation
- "Send a DM to john.smith saying 'Meeting moved to 3pm'" — agent sends the DM as you
- "Post a message to #engineering: Deployment complete for v2.1.0" — agent posts to the channel
- "What's been discussed in #product today?" — agent fetches recent channel history and summarizes
- "Search Slack for messages about the database migration" — agent searches across channels and returns matching messages
- "Who is user U01234ABCDE?" — agent looks up the user and returns their name and email
- "React to the last message in #general with 👍" — agent adds the emoji reaction
The Slack integration includes several quality-of-life features:
- Channel name resolution — Use `#channel-name` or just `channel-name` in requests. The agent automatically resolves human-friendly names to Slack channel IDs. No need to know or copy channel IDs.
- Auto-join — When the bot encounters a public channel it hasn't joined yet, it automatically calls `conversations.join` and retries. No manual `/invite` needed for public channels.
- User name resolution — Channel history and thread replies display real names (e.g. `Dan Burke`) alongside user IDs, so you can immediately see who said what.
Private channels still require a manual invite — have a channel admin add the bot via the channel's Integrations settings.
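The channel name resolution above amounts to a small lookup over the workspace's channel list. An illustrative sketch (hypothetical function; the real implementation presumably pages through Slack's `conversations.list` results):

```typescript
// Sketch: resolve "#name", "name", or a raw channel ID to a Slack channel ID.
interface SlackChannel { id: string; name: string }

function resolveChannelId(input: string, channels: SlackChannel[]): string | undefined {
  // Slack channel IDs look like C0123ABCD (or G... for some private channels).
  if (/^[CG][A-Z0-9]+$/.test(input)) return input;
  const name = input.replace(/^#/, "").toLowerCase(); // strip leading '#'
  return channels.find((c) => c.name.toLowerCase() === name)?.id;
}
```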
- "channel_not_found" on DM operations — Make sure you have a `userToken` configured. Bot tokens can't access user DMs.
- "missing_scope" errors — Check that your Slack App has all the required scopes listed above, then reinstall the app to your workspace.
- Search returns no results — The `search:read` scope must be on the User Token Scopes, not the Bot Token Scopes. Slack's `search.messages` API only works with user tokens.
- DMs appear from the bot instead of you — Ensure `userToken` is set. When present, DM operations automatically use the user token so messages appear from your account.
- "not_in_channel" on a public channel — The bot should auto-join automatically. If it still fails, ensure the `channels:join` bot scope is granted and the app is reinstalled to the workspace.
- "not_in_channel" on a private channel — Private channels require manual invite. Go to the channel → settings → Integrations → Add apps → select your bot.
Connect your Gmail accounts to let the agent search, read, send, and reply to emails. Supports multi-account — configure personal and work accounts side by side. All operations run locally via the gog CLI with OAuth authentication.
| Action | Description |
|---|---|
| Search emails | Query emails using Gmail search syntax (e.g. `from:boss is:unread`, `subject:invoice newer_than:7d`) |
| Read messages | Fetch full message content by ID — headers, body, thread context |
| Send emails | Compose and send new emails with to, cc, subject, and body |
| Reply to threads | Reply (or reply-all) to existing messages/threads |
| List labels | Enumerate Gmail labels for any account |
| List accounts | Show configured accounts and which is the default |
- The `gog` CLI installed and on your PATH
- Each Gmail account authenticated via `gog auth login --account you@gmail.com`
- For Google Workspace accounts with custom OAuth credentials, also pass `--client <name>` during auth
```json5
{
  integrations: {
    email: {
      enabled: true,
      accounts: [
        { address: "you@gmail.com", label: "home" },
        { address: "you@company.com", label: "work", client: "workspace" },
      ],
      defaultAccount: "you@gmail.com", // Used when no account is specified
      timeoutSeconds: 30,              // Optional: CLI timeout (default: 30)
    },
  },
}
```

| Config Key | Required | Description |
|---|---|---|
| `accounts` | Yes | Array of Gmail accounts, each with `address` and optional `label`/`client` |
| `defaultAccount` | No | Default account address (falls back to first in list) |
| `client` | No | Per-account `gog` OAuth client name (for Workspace custom credentials) |
| `timeoutSeconds` | No | CLI command timeout in seconds (default: 30) |
The agent resolves accounts by address, label, or partial match:
- "Search my work email for messages from the CFO" — resolves the `work` label to the work account
- "Send an email from home to alice@example.com" — resolves the `home` label
- "Read message abc123" — uses the default account
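That resolution order (exact address, then label, then partial match, with a fallback to the default) can be sketched as follows; the function and its exact precedence are illustrative assumptions based on the examples above:

```typescript
// Sketch: resolve an account reference by address, label, or partial match.
interface EmailAccount { address: string; label?: string }

function resolveAccount(
  query: string | undefined,
  accounts: EmailAccount[],
  defaultAccount?: string,
): EmailAccount | undefined {
  // No reference given: use the configured default, else the first account.
  if (!query) return accounts.find((a) => a.address === defaultAccount) ?? accounts[0];
  const q = query.toLowerCase();
  return (
    accounts.find((a) => a.address.toLowerCase() === q) ??    // exact address
    accounts.find((a) => a.label?.toLowerCase() === q) ??     // label ("work", "home")
    accounts.find((a) => a.address.toLowerCase().includes(q)) // partial address match
  );
}
```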
- "Search my email for unread messages from the last 3 days" — agent runs a Gmail search and returns matching messages
- "Read the email with ID 18f3a2b4c5d6e7f8" — agent fetches the full message content
- "Send an email to alice@example.com with subject 'Meeting notes' and body with the summary" — agent composes and sends
- "Reply to the thread about the deployment with 'Looks good, ship it'" — agent replies within the thread
- "What labels are set up on my work email?" — agent lists Gmail labels for the work account
For real-time email notifications (instead of polling), configure the Gmail watcher in your hooks config. This uses Google Cloud Pub/Sub push notifications to trigger hooks when new emails arrive.
Single account:
```json5
{
  hooks: {
    enabled: true,
    token: "your-hook-token",
    gmail: {
      account: "you@gmail.com",
      topic: "projects/your-project/topics/gog-gmail-watch",
      pushToken: "your-push-verification-token",
    },
  },
}
```

Multi-account (each account gets its own watcher process on a separate port):
```json5
{
  hooks: {
    enabled: true,
    token: "your-hook-token",
    gmail: {
      topic: "projects/your-project/topics/gog-gmail-watch",
      pushToken: "shared-push-token",
      accounts: [
        { address: "you@gmail.com", label: "home" },
        { address: "you@company.com", label: "work", port: 8790 },
      ],
    },
  },
}
```

Ports auto-increment from the base (8788) when not specified. The gateway spawns one `gog gmail watch serve` process per account and automatically renews the Gmail API watch registration.
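The port assignment rule can be sketched like this (illustrative only; whether auto-assignment skips explicitly configured ports is an assumption):

```typescript
// Sketch: keep explicit watcher ports, auto-increment the rest from the
// base port (8788), skipping ports that are already taken.
function assignPorts(
  accounts: { address: string; port?: number }[],
  basePort = 8788,
): Map<string, number> {
  const taken = new Set(accounts.map((a) => a.port).filter((p): p is number => p !== undefined));
  const result = new Map<string, number>();
  let next = basePort;
  for (const a of accounts) {
    if (a.port !== undefined) { result.set(a.address, a.port); continue; }
    while (taken.has(next)) next++; // don't collide with explicit ports
    taken.add(next);
    result.set(a.address, next);
  }
  return result;
}
```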
All integrations can be configured in a single integrations block in your config file (~/.localclaw/openclaw.local.json):
```json5
{
  // ... other config ...
  integrations: {
    jira: {
      enabled: true,
      baseUrl: "https://yourteam.atlassian.net",
      email: "you@example.com",
      apiToken: "ATATT3x...",
      defaultProject: "PROJ",
    },
    confluence: {
      enabled: true,
      baseUrl: "https://yourteam.atlassian.net",
      email: "you@example.com",
      apiToken: "ATATT3x...",
      defaultSpace: "TEAM",
    },
    slack: {
      enabled: true,
      botToken: "xoxb-...",
      userToken: "xoxp-...", // Recommended: enables DMs + search
      defaultChannel: "#general",
    },
    email: {
      enabled: true,
      accounts: [
        { address: "you@gmail.com", label: "home" },
        { address: "you@company.com", label: "work", client: "workspace" },
      ],
      defaultAccount: "you@gmail.com",
    },
  },
}
```

Tip: If you use Atlassian Cloud, the `baseUrl` and `apiToken` are the same for both Jira and Confluence. You only need to generate one API token.
- API tokens and bot tokens are stored in your local config file (`~/.localclaw/openclaw.local.json`). The file is local to your machine and never transmitted.
- Token fields are marked as sensitive in the configuration UI and CLI — they are masked in the dashboard and wizard output.
- All API requests are made directly from your machine to the respective service endpoints. No data passes through any intermediary.
- You can disable any integration at any time by setting `enabled: false` or by running `localclaw configure --section integrations`.
Local models typically have much smaller context windows (8K-32K tokens) compared to cloud models (128K-200K+). LocalClaw includes a multi-layered context management system designed to deliver a great agentic experience even within these constraints. All of this is automatic — no configuration needed.
1. Aggressive context pruning (always-on)
Unlike cloud-optimized setups that only prune when cache TTL expires, LocalClaw prunes every turn:
- Tool results are soft-trimmed at just 20% context usage (keeping only head/tail summaries)
- Full tool results are cleared at 40% usage with a placeholder
- Each tool result is capped at 2K characters (vs 8K for cloud models)
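The head/tail soft-trim can be pictured as keeping the start and end of an oversized tool result with an elision marker in between; a simplified sketch (hypothetical function, not LocalClaw's actual implementation):

```typescript
// Sketch: cap a tool result at maxChars by keeping its head and tail,
// so both the command context and the final output survive trimming.
function softTrim(result: string, maxChars = 2000): string {
  if (result.length <= maxChars) return result;
  const keep = Math.floor((maxChars - 40) / 2); // reserve room for the marker
  const omitted = result.length - 2 * keep;
  return result.slice(0, keep) + `\n[... ${omitted} chars trimmed ...]\n` + result.slice(-keep);
}
```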
2. Proactive memory persistence
The agent is instructed to write progress, decisions, and state to memory/ files in your workspace after every meaningful step — not just before compaction. This means context that would be lost during summarization is safely on disk.
- `memory/state.md` — structured snapshot of the user's current situation, written in five sections:
  - Active tasks — what the user is working on right now
  - Recent decisions — key choices or outcomes from this session
  - Pending items — things the user asked about or needs to follow up on
  - User context — name, preferences, time of day awareness, mood cues
  - Environment — relevant tools, services, or accounts in use
- `memory/progress.md` — completed steps and findings
- `memory/plan.md` — task decomposition for multi-step work
- `memory/notes.md` — learned preferences and project conventions
- `memory/YYYY-MM-DD.md` — timestamped daily events and conversation highlights
Why `state.md` matters: The fast model (tiny tier) has no tools — it can't read files. Instead, LocalClaw injects a compact snapshot of `memory/state.md` directly into the fast model's system prompt (capped at 800 chars). This gives even simple "hello" or "thanks" responses awareness of the user's current situation, active tasks, and preferences — without any tool overhead.
3. Tighter compaction with early memory flush
When the context window fills up, LocalClaw summarizes old history more aggressively:
- History is capped at 30% of the context window (vs 50% for cloud)
- Memory flush triggers every compaction cycle (not just near the threshold)
- Reserve tokens floor is set to 2K (vs 20K), appropriate for small windows
4. Compact system prompts
Bootstrap files (AGENTS.md, SOUL.md, etc.) are capped at 8K characters total, leaving more room for actual conversation and tool results.
5. Task decomposition
The agent automatically breaks complex tasks into discrete steps, persisting plans and intermediate results to disk so it can recover from context compaction without losing track of multi-step work.
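For instance, a `memory/plan.md` left behind mid-task might look like this (contents invented for illustration), letting the agent pick up where it left off after a compaction:

```markdown
# Plan: set up nightly backup cron

- [x] 1. Inspect existing cron entries
- [x] 2. Write backup script to scripts/backup.sh
- [ ] 3. Register cron job (resume here after compaction)
- [ ] 4. Verify first run and log output
```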
The defaults work well out of the box, but you can override any setting in `~/.localclaw/openclaw.local.json`:
```jsonc
{
  agents: {
    defaults: {
      // Context pruning
      contextPruning: {
        mode: "always", // "always" | "cache-ttl" | "off"
        softTrimRatio: 0.2, // Start trimming at 20% of context
        hardClearRatio: 0.4, // Clear old results at 40%
        softTrim: { maxChars: 2000 },
      },
      // Compaction
      compaction: {
        maxHistoryShare: 0.3, // Cap history at 30% of window
        reserveTokensFloor: 2000,
        memoryFlush: {
          compactionInterval: 1, // Flush memories every compaction
          softThresholdTokens: 2000,
        },
      },
      // System prompt budget
      bootstrapMaxChars: 8000,
    },
  },
}
```

| Provider | Default endpoint |
|---|---|
| Ollama | `http://127.0.0.1:11434/v1` |
| LM Studio | `http://127.0.0.1:1234/v1` |
| vLLM | `http://127.0.0.1:8000/v1` |
To override the default endpoints, set environment variables before starting the gateway:
```shell
# Custom LM Studio endpoint (default: http://127.0.0.1:1234/v1)
export LMSTUDIO_BASE_URL=http://192.168.1.50:1234/v1

# Custom vLLM endpoint (default: http://127.0.0.1:8000/v1)
export VLLM_BASE_URL=http://192.168.1.50:8000/v1
```

These are picked up by both the model provider discovery and the gateway health checks.
You can also point LocalClaw at any OpenAI-compatible API endpoint via the config or onboarding wizard.
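Before starting the gateway, you can sanity-check that an endpoint is serving the OpenAI-compatible API by hitting its `/models` route (shown here with the vLLM variable from above; the override-else-default resolution mirrors what the doc describes):

```shell
# Resolve the endpoint: env override if set, otherwise the default.
BASE_URL="${VLLM_BASE_URL:-http://127.0.0.1:8000/v1}"
echo "Checking $BASE_URL/models"
# Uncomment once the server is running; a JSON model list means you're good:
# curl -s "$BASE_URL/models"
```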
On a fresh machine, this single script installs everything — git, curl, Homebrew (macOS), Node.js 22, pnpm, Ollama — then builds LocalClaw:
```shell
git clone https://github.com/sunkencity999/localclaw.git
cd localclaw
bash scripts/install-prereqs.sh
```

The script detects your OS (macOS, Ubuntu/Debian, Fedora/RHEL, Arch) and uses the appropriate package manager. It skips anything already installed. Run with `--check` to see what's missing without installing anything.
After it finishes, jump straight to Quick Start.
If you prefer to install prerequisites yourself:
1. Install Node.js 22+
```shell
# macOS
brew install node@22

# Ubuntu/Debian
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt-get install -y nodejs

# Check version
node -v # Must be v22 or higher
```

2. Install pnpm

```shell
npm install -g pnpm
```

3. Install Ollama
```shell
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh
```

Note: The `localclaw onboard` wizard will also offer to install Ollama automatically if it's not detected.
4. Start Ollama and pull a model
```shell
# Start the server (leave running)
ollama serve

# In a new terminal, pull a model
ollama pull llama3.1:8b
```

Good starting points: `llama3.1:8b`, `gemma3:12b`, `qwen3:8b`, `glm-4.7-flash`. Larger models give better results but need more RAM.
5. Clone and build LocalClaw
```shell
git clone https://github.com/sunkencity999/localclaw.git
cd localclaw
pnpm install
pnpm build
```

6. (Optional) Install globally

```shell
npm install -g .
```

This puts `localclaw` on your PATH so you can run it from anywhere instead of using `pnpm localclaw`.
Follow these steps in order. If you installed globally, replace `pnpm localclaw` with `localclaw`.
```shell
pnpm localclaw
```

On first run, LocalClaw detects your running model server, lists available models, and walks you through picking a default. This creates your config at `~/.localclaw/openclaw.local.json`.
For Ollama users, the wizard also offers to enable flash attention (`OLLAMA_FLASH_ATTENTION=1`) in your shell config — this reduces memory usage and improves throughput on supported hardware.
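If you skip the wizard, you can set the same variable yourself in your shell config:

```shell
# Enable flash attention for Ollama.
# Must be set in the environment before `ollama serve` starts.
export OLLAMA_FLASH_ATTENTION=1
```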
Important: Make sure your model server (e.g. Ollama) is running before this step so LocalClaw can discover your models automatically.
The gateway is the background service that manages agent sessions, tools, and events:
```shell
pnpm localclaw gateway
```

Leave this running in its own terminal (or add `--verbose` to see detailed logs).
Open a new terminal and launch the TUI:
```shell
pnpm localclaw tui
```

Type a message and hit Enter. You're talking to a local AI agent with full tool access.
```shell
# One-shot agent query (no TUI)
pnpm localclaw agent --message "Summarize this project"

# Check gateway and model status
pnpm localclaw status

# Diagnose issues
pnpm localclaw doctor
```

LocalClaw stores its config at `~/.localclaw/openclaw.local.json`. Minimal example:
```jsonc
{
  agent: {
    model: "ollama/llama3.1",
  },
  gateway: {
    mode: "local",
    port: 18790,
  },
}
```

Use `localclaw configure` to interactively edit settings, or `localclaw config set <key> <value>` for quick changes.
Full configuration reference (all keys + examples): OpenClaw Configuration
```
       Your local models
  (Ollama / LM Studio / vLLM)
               │
               ▼
┌───────────────────────────────┐
│       LocalClaw Gateway       │
│        (control plane)        │
│     ws://127.0.0.1:18790      │
└──────────────┬────────────────┘
               │
               ├─ Agent runtime (RPC)
               ├─ CLI (localclaw …)
               ├─ TUI
               ├─ Browser control
               └─ Skills + tools
```
LocalClaw is designed to run alongside a standard OpenClaw installation:
| | OpenClaw | LocalClaw |
|---|---|---|
| Binary | `openclaw` | `localclaw` |
| Config file | `~/.openclaw/openclaw.json` | `~/.localclaw/openclaw.local.json` |
| Profile | (default) | `local` |
| Gateway port | `18789` | `18790` |
| State directory | `~/.openclaw/` | `~/.localclaw/` |
Both can be installed globally and run simultaneously. They use completely separate state directories, configs, sessions, and gateway instances — no shared locks or files.
LocalClaw inherits the full OpenClaw platform. Every command and feature works — just use `localclaw` instead of `openclaw`:
- Gateway — WebSocket control plane for sessions, tools, and events
- Multi-channel inbox — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, and more
- Browser control — dedicated Chrome/Chromium with CDP control
- Skills — bundled, managed, and workspace skills
- Agent sessions — multi-session with agent-to-agent coordination
- Tools — bash, browser, canvas, cron, nodes, plus 10 native Core Skills (PDF, Office, media, git, network, and more)
For the full feature reference, see the OpenClaw docs.
LocalClaw is a fork of OpenClaw, the personal AI assistant built by Peter Steinberger and the community.
See CONTRIBUTING.md for guidelines and how to submit PRs.
Thanks to the OpenClaw clawtributors: