Docker-first command entry plane for OpenClaw-compatible local CLI execution. The control plane can run in Docker or on a remote OpenClaw host, while this plugin routes user commands into openclaw-worker on the machine that actually owns Claude Code, Codex, Gemini, files, and local session state.
This repository should not be described as a standalone AI service.
It is the command entry plane in the Docker-first product line:
- `openclaw-cli-bridge` = user-facing command entry
- `openclaw-worker` = execution plane
- host-side Claude Code / Codex / Gemini = actual local capability
It does not execute Claude Code, Codex CLI, or Gemini CLI by itself. The plugin only registers commands inside OpenClaw and forwards tasks to a separate task-api worker. Without OpenClaw, openclaw-worker, and locally installed AI CLIs, this repository is just one piece of the workflow.
Practical deployment story:
- Run OpenClaw locally in Docker and use this plugin as the bridge into your host runner
- Or run OpenClaw on a remote server and still use this plugin to route into a remote or local runner
- Keep real execution on the machine that owns the CLI, files, shell, and browser context
- Registers `/cc`, `/codex`, `/gemini`, and related session commands inside OpenClaw
- Forwards requests to `openclaw-worker` task-api endpoints such as `/claude`, `/codex`, and `/gemini`
- Uses callback delivery so results are pushed back to the bot side without agent rewriting
- Persists per-channel session continuation in a local SQLite bridge store
- Carries one worker protocol for two interaction modes: direct commands and agent delegation
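The routing described above can be sketched roughly as follows. This is an illustrative sketch only; `COMMAND_ENDPOINTS` and `endpointFor` are assumed names for illustration, not the plugin's actual internals.

```typescript
// Hypothetical sketch: map registered slash commands onto worker
// task-api endpoints. Names are illustrative assumptions.
const COMMAND_ENDPOINTS: Record<string, string> = {
  "/cc": "/claude",
  "/codex": "/codex",
  "/gemini": "/gemini",
};

// Resolve the worker task-api endpoint for an incoming slash command,
// or null when the command is not a bridged CLI command.
function endpointFor(command: string): string | null {
  return COMMAND_ENDPOINTS[command] ?? null;
}
```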
- OpenClaw bot running in Docker or on a remote host
- `openclaw-worker` task-api running on the host machine that owns the local CLI tools
- Claude Code / Codex CLI / Gemini CLI installed locally
- Callback flow tested in my own bot and channel setup, including Discord-first deployments
- Tested only in my own OpenClaw deployment
- Default config assumes Docker-to-host networking via `host.docker.internal`
- Other deployment topologies may require changing `apiUrl`, callback behavior, and worker-side paths
- Session continuation depends on worker-side CLI behavior and a consistent working directory
- Gemini continuation uses the CLI's `--resume latest` semantics under the hood, not arbitrary UUID restore
- This is not presented as a product with cross-platform guarantees
- Even if the plugin code itself is portable, the surrounding workflow is still tied to my own deployment style
- OpenClaw loads this repository as a plugin through `openclaw.plugin.json`
- The plugin forwards HTTP requests to a separate task-api instead of spawning CLIs directly
- The default `apiUrl` is `http://host.docker.internal:3456`, which assumes:
  - the OpenClaw bot is running in Docker
  - the `openclaw-worker` task-api is running on the host machine
- Callback delivery requires a valid `callbackChannel`
- Authenticated task submission requires a valid `apiToken`
- Session continuation only works when the worker-side CLI can resume from the same working directory and session storage layout
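A submission under these assumptions might be shaped like the sketch below. The field names (`prompt`, `callbackChannel`) and the bearer-token header are assumptions for illustration, not the worker's documented schema.

```typescript
// Illustrative sketch of building an authenticated task submission.
// Field and header names are assumptions, not the real worker schema.
interface TaskRequest {
  url: string;
  headers: Record<string, string>;
  body: { prompt: string; callbackChannel: string };
}

function buildTaskRequest(
  apiUrl: string,
  apiToken: string,
  callbackChannel: string,
  endpoint: string,
  prompt: string,
): TaskRequest {
  return {
    url: `${apiUrl}${endpoint}`,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiToken}`,
    },
    body: { prompt, callbackChannel },
  };
}
```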
| Mode | What runs where | Notes |
|---|---|---|
| Docker Local | OpenClaw in Docker, openclaw-worker on the same host | Best fit for single-machine self-hosting |
| Docker + Remote Runner | OpenClaw in Docker, worker on another machine | Best fit when Docker is the product shell but execution must stay near the real machine |
| Cloud + Remote Runner | OpenClaw on a remote server, worker on the host with the real CLI/files | Main remote-control pattern |
| Single Host | OpenClaw and worker both outside Docker | Possible, but not the primary product story |
- OpenClaw
- `openclaw-worker`
- Locally installed Claude Code, Codex CLI, and/or Gemini CLI on the worker machine
- Docker-to-host connectivity if using the default `host.docker.internal` topology
- A bot/channel setup that can receive worker callbacks
- Matching worker-side working directory conventions if you want reliable CLI resume behavior
Install this repository as an OpenClaw plugin in your OpenClaw deployment, then make sure the worker task-api is reachable from the bot container.
Minimal plugin config:
```json
{
  "plugins": {
    "entries": {
      "cli-bridge": {
        "apiUrl": "http://host.docker.internal:3456",
        "apiToken": "your-task-api-token",
        "callbackChannel": "your-callback-channel-id",
        "callbackBotToken": "your-bot-token",
        "sessionStorePath": "/tmp/openclaw-cli-bridge-state.db"
      }
    }
  }
}
```

Three main commands:
`/cc`, `/codex`, `/gemini`
Typical usage:
/cc Help me refactor this module and add tests
/codex Fix the failing auth tests
/gemini Help me explain why this error occurs
Session controls:
/cc-new
/cc-recent
/cc-now
/cc-resume <id> <prompt>
/cli-state
/cli-state all
/codex-now
/codex-new
/codex-resume <id> <prompt>
/gemini-now
/gemini-new
/gemini-resume <id> <prompt>
Behavior summary:
- Claude Code: explicit session ID continuation, plus recent/current session helpers
- Codex: bridge-level session continuation mapped to real Codex sessions on the worker
- Gemini: bridge-level session continuation, but the underlying Gemini CLI resumes the latest linked session
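The per-channel continuation behavior above can be sketched as a simple channel-to-session binding. This is a minimal in-memory sketch under assumed names (`bindSession`, `currentSession`); the real bridge persists these bindings in SQLite at `sessionStorePath`.

```typescript
// Minimal sketch of bridge-level session continuation: each channel
// holds at most one active session per provider. Illustrative only;
// the real store is a SQLite file, not an in-memory Map.
type Provider = "claude" | "codex" | "gemini";

const sessionMap = new Map<string, string>(); // "channel:provider" -> sessionId

function bindSession(channel: string, provider: Provider, sessionId: string): void {
  sessionMap.set(`${channel}:${provider}`, sessionId);
}

// Follow-up commands like /cc reuse the bound session for the channel.
function currentSession(channel: string, provider: Provider): string | undefined {
  return sessionMap.get(`${channel}:${provider}`);
}

// A command like /cc-new would drop the binding to start fresh.
function clearSession(channel: string, provider: Provider): void {
  sessionMap.delete(`${channel}:${provider}`);
}
```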
This repository should be treated as one product with two entry modes, not two competing architectures.
Primary mode:
- Direct commands: `/cc`, `/codex`, `/gemini`
- The bot acts as a transport layer only
- The user talks to the CLI runner directly through the bridge
- This is the low-noise, low-token path and should be the default user experience
Secondary mode:
- Agent delegation: `cc_call`, `codex_call`, `gemini_call`
- The agent decides when to delegate work to a local CLI
- The result still comes back by direct callback instead of agent rewriting
- This mode is for planning, approval, or multi-step orchestration, not for ordinary chat when direct commands are enough
Practical rule:
- Use direct commands for normal CLI conversations
- Use agent tools only when you actually need the agent to plan or coordinate
- Treat the old pipeline idea as interaction methodology folded into delegated mode, not as a separate runtime product
The plugin config comes from `openclaw.plugin.json` and OpenClaw plugin settings.
Supported fields:
- `apiUrl`
- `apiToken`
- `callbackChannel`
- `callbackBotToken`
- `sessionStorePath`
Current defaults and behavior:
- `apiUrl` defaults to `http://host.docker.internal:3456`
- `apiToken` is required for successful task-api calls
- `callbackChannel` is required if you want results delivered back to the bot side
- `callbackBotToken` is the callback delivery identity token
- `sessionStorePath` defaults to `/tmp/openclaw-cli-bridge-state.db` so channel-to-session mappings survive plugin restarts
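The defaults-plus-overrides behavior can be sketched as a plain merge. `BridgeConfig` and `withDefaults` are illustrative names; the real plugin reads these values through OpenClaw's plugin settings.

```typescript
// Sketch of merging documented defaults with user-supplied config.
// Names are illustrative; only the two defaulted fields have defaults.
interface BridgeConfig {
  apiUrl: string;
  apiToken?: string;
  callbackChannel?: string;
  callbackBotToken?: string;
  sessionStorePath: string;
}

const DEFAULTS = {
  apiUrl: "http://host.docker.internal:3456",
  sessionStorePath: "/tmp/openclaw-cli-bridge-state.db",
};

// User-provided fields win; missing fields fall back to the defaults.
function withDefaults(user: Partial<BridgeConfig>): BridgeConfig {
  return { ...DEFAULTS, ...user };
}
```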
- `/cc <prompt>`
- `/cc-new`
- `/cc-new <prompt>`
- `/cc-recent`
- `/cc-now`
- `/cc-resume <id> <prompt>`
- `/cli-state`
- `/cli-state all`
- `/codex <prompt>`
- `/codex-now`
- `/codex-new`
- `/codex-resume <id> <prompt>`
- `/gemini <prompt>`
- `/gemini-now`
- `/gemini-new`
- `/gemini-resume <id> <prompt>`
Gemini note:
- Gemini keeps a logical bridge session in OpenClaw, but the underlying Gemini CLI resumes the latest linked session rather than restoring an arbitrary UUID directly.
- `cli_bridge_state`
- `cc_call`
- `codex_call`
- `gemini_call`
These tools also forward to the worker and rely on callback delivery. They do not replace the worker or run the CLIs inside the plugin process.
The tool path and the slash-command path now share the same task protocol on the worker side. The difference is who initiates the task, not which backend executes it.
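The shared-protocol claim can be sketched as follows: both entry modes build the same task shape and differ only in the initiator. The `initiator` field and these type names are assumptions for illustration.

```typescript
// Illustrative sketch: slash commands and agent tools converge on the
// same worker payload; only the initiator differs. Names are assumed.
type Initiator = "slash-command" | "agent-tool";

interface Task {
  endpoint: string;
  prompt: string;
  initiator: Initiator;
}

function makeTask(endpoint: string, prompt: string, initiator: Initiator): Task {
  return { endpoint, prompt, initiator };
}

// The worker sees the same payload regardless of who initiated the task.
function workerPayload(task: Task): { endpoint: string; prompt: string } {
  const { endpoint, prompt } = task;
  return { endpoint, prompt };
}
```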
The `cli_bridge_state` tool and the `/cli-state` command are internal diagnostics for inspecting current session bindings, useful for debugging session continuity issues.
- The bridge persists channel-to-session bindings in `sessionStorePath`
- The worker persists tasks, active sessions, and events in its own state store
- The runner on the host keeps provider-specific cache/state as needed for local resume behavior
These layers are complementary:
- Bridge state = channel-to-session routing view
- Worker state = execution-plane and callback view
- Runner cache = host-local provider recovery view
- Session maps are now persisted in a SQLite file at `sessionStorePath`, but they are still local single-instance state rather than shared storage
- If multiple bridge instances run against the same channels, they will not coordinate session ownership
- Results depend on worker callback delivery succeeding
- This plugin does not replace `openclaw-worker`
- Manual infrastructure wiring is still required
- This repository alone is not useful without the rest of the OpenClaw + worker + CLI stack
- Resume behavior still depends on worker-side CLI implementation details
Built by 小试AI (@AliceLJY)
WeChat public account: 我的AI小木屋
MIT
