This page centralizes command, flag, model, theme, config, and tool reference material.
- `buddy`: start interactive REPL mode.
- `buddy exec <prompt>`: run one prompt and exit.
- `buddy resume <session-id>`: resume a saved session.
- `buddy resume --last`: resume the last session in the current directory.
- `buddy init [--force]`: guided init flow for `~/.config/buddy/buddy.toml` (update existing config, overwrite with backup, or cancel).
- `buddy login [provider] [--check] [--reset]`: log in to, check, or reset provider credentials (provider-first; profile selectors are still accepted with a deprecation warning).
- `buddy logout [provider]`: clear saved provider login credentials.
- `buddy trace summary <file>`: summarize one JSONL runtime trace.
- `buddy trace replay <file> --turn <n>`: inspect one prompt turn from a trace.
- `buddy trace context-evolution <file>`: inspect context/token/cost evolution over time.
- `buddy traceui <file> [--stream]`: interactively browse raw trace events with keyboard navigation, scrollable always-expanded detail, and a live streaming follow mode.
Login soft-fail behavior:
- If the active profile uses `auth = "login"` and no saved credentials exist, startup/model switching stays available.
- Buddy surfaces a warning with exact recovery commands: `/login <provider>` or `buddy login <provider>`.
| Flag | Description |
|---|---|
| `-V, --version` | Print version/commit/build metadata. |
| `-c, --config <path>` | Use an explicit config file. |
| `-m, --model <profile-or-model-id>` | Override the active model profile key or raw model id. |
| `--base-url <url>` | Override the API base URL. |
| `--container <name>` | Execute shell/file tools inside a running container. |
| `--ssh <user@host>` | Execute shell/file tools over SSH. |
| `--tmux [session]` | Optional explicit managed tmux session name. |
| `--trace <path>` | Write runtime events to a JSONL trace file. |
| `-v, --verbose` | Increase diagnostics (`-v` info, `-vv` debug, `-vvv` trace). |
| `--no-color` | Disable ANSI colors. |
| `--dangerously-auto-approve` | In exec mode, bypass shell approvals. |
Execution-target note:
- When shell/files tools are enabled, local and `--container` execution are tmux-managed by default.
- `--tmux [session]` sets an explicit managed session name.
- `--ssh` uses tmux when remote tmux is available, and falls back to direct SSH otherwise.
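The managed-tmux limits come from the `[tmux]` section of the reference config further down this page; shown here in isolation with the shipped defaults (the per-session interpretation of `max_panes` is an assumption):

```toml
[tmux]
max_sessions = 1  # cap on concurrently managed tmux sessions
max_panes = 5     # cap on panes (assumed per managed session)
```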
| Command | Description |
|---|---|
| `/status` | Show current model, base URL, enabled tools, and session counters. |
| `/model [name\|index]` | Switch the configured model profile; for compatible OpenAI `/responses` models, also opens a reasoning-effort picker. |
| `/theme [name\|index]` | Switch terminal theme (`/theme` with no args opens a picker), persist config, and render preview blocks. |
| `/login [provider]` | Check or start the provider login flow. |
| `/logout [provider]` | Clear saved provider login credentials. |
| `/context` | Show estimated context usage and token stats. |
| `/compact` | Summarize and trim older turns to reclaim context budget. |
| `/ps` | Show running background tasks with IDs and elapsed time. |
| `/kill <id>` | Cancel a running background task by ID. |
| `/timeout <duration> [id]` | Set a timeout for a background task. |
| `/approve <ask\|all>` | Set the shell approval mode (`ask` or `all`). |
| `/session` | List saved sessions ordered by last use. |
| `/session resume <session-id\|last>` | Resume a session by ID or the most recent. |
| `/session new` | Create and switch to a new generated session ID. |
| `/help` | Show slash command help (only when no tasks are running). |
| `/quit` `/exit` `/q` | Exit interactive mode (only when no tasks are running). |
Default bundled profiles in `src/templates/buddy.toml`:

- `gpt-codex` => `gpt-5.3-codex` (OpenAI, `api = "responses"`, `auth = "login"`)
- `gpt-spark` => `gpt-5.3-codex-spark` (OpenAI, `api = "responses"`, `auth = "login"`)
- `openrouter-deepseek` => `deepseek/deepseek-v3.2` (OpenRouter, `api = "completions"`, `auth = "api-key"`)
- `openrouter-glm` => `z-ai/glm-5` (OpenRouter, `api = "completions"`, `auth = "api-key"`)
- `kimi` => `kimi-k2.5` (Moonshot, `api = "completions"`, `auth = "api-key"`)
- `claude-sonnet` => `claude-sonnet-4-5` (Anthropic, `api = "anthropic"`, `auth = "api-key"`)
- `claude-haiku` => `claude-haiku-4-5` (Anthropic, `api = "anthropic"`, `auth = "api-key"`)
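Bundled profiles use the same `[models.<name>]` shape as user-defined ones, so adding your own is a matter of declaring another table. A minimal sketch (the profile key `my-openrouter` is a placeholder; the keys mirror the reference config on this page):

```toml
# Hypothetical custom profile; the key name is illustrative.
[models.my-openrouter]
api_base_url = "https://openrouter.ai/api/v1"
provider = "openrouter"
api = "completions"
auth = "api-key"
api_key_env = "OPENROUTER_API_KEY"
model = "deepseek/deepseek-v3.2"

[agent]
model = "my-openrouter"  # activate the custom profile
```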
Context-limit defaults come from `src/templates/models.toml` (compiled into the binary), with a fallback of `8192` for unknown models.
The same catalog can also define per-model auth capability flags (`supports_api_key_auth`, `supports_login_auth`) used by preflight and regression checks.
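The catalog schema itself is not reproduced here; as an illustrative sketch only (the table layout is an assumption — consult `src/templates/models.toml` for the real shape), a per-model record might carry:

```toml
# Illustrative shape only; the actual schema lives in src/templates/models.toml.
["gpt-5.3-codex"]
context_limit = 128000          # unknown models fall back to 8192
supports_api_key_auth = false
supports_login_auth = true
```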
OpenAI reasoning-effort support:
- Buddy sends `reasoning.effort` only for models that advertise this capability.
- Picker visibility is capability-driven (provider + protocol + model).
- Current bundled OpenAI codex profiles expose `low|medium|high|xhigh`.
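Per the reference config on this page, the effort level is pinned per profile with the optional `reasoning_effort` key; a minimal sketch raising it on the bundled codex profile:

```toml
[models.gpt-codex]
api_base_url = "https://api.openai.com/v1"
provider = "openai"
api = "responses"
auth = "login"
model = "gpt-5.3-codex"
reasoning_effort = "xhigh"  # one of low | medium | high | xhigh
```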
Built-in themes:

- `dark` (default)
- `light`

Custom themes are defined under `[themes.<name>]` and can override semantic tokens.
The complete token key list is defined in `src/ui/theme/mod.rs` (`ThemeToken::key()`).
Commonly overridden tokens: `warning`, `error`, `block_tool_bg`, `block_reasoning_bg`, `block_approval_bg`, `block_assistant_bg`, `markdown_heading`, `markdown_quote`, `markdown_code`, `risk_low`, `risk_medium`, `risk_high`
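Putting these together, a custom theme that overrides a handful of the common tokens might look like the following (the theme name and hex values are illustrative placeholders, following the `[themes.<name>]` pattern from the reference config):

```toml
# Hypothetical theme; color values are placeholders.
[themes.my-theme]
warning = "#ffb454"
error = "#ff5555"
block_assistant_bg = "#13302a"
markdown_heading = "#7aa2f7"
risk_high = "#f7768e"

[display]
theme = "my-theme"  # activate it
```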
Highest precedence wins:
- CLI flags (`--config`, `--model`, `--base-url`, `--container`, `--ssh`, `--tmux`, `--trace`, `--verbose`, `--no-color`, `--dangerously-auto-approve`)
- Environment variables (`BUDDY_API_KEY`, `BUDDY_BASE_URL`, `BUDDY_MODEL`, `BUDDY_API_TIMEOUT_SECS`, `BUDDY_FETCH_TIMEOUT_SECS`, `BUDDY_TRACE_FILE`, `BUDDY_LOG`, `RUST_LOG`)
- Local config (`./buddy.toml`)
- Global config (`~/.config/buddy/buddy.toml`)
- Built-in defaults
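Because local config outranks the global one, a project can check in a `./buddy.toml` that overrides only what matters there. A sketch, assuming per-key merging against the global config (merge granularity is not spelled out above):

```toml
# Project-local ./buddy.toml: takes precedence over the global config here.
[agent]
model = "claude-sonnet"

[tools]
shell_confirm = true
```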
First-run bootstrap:
- When no local or global config exists, `buddy` automatically starts the guided init flow.
- In non-interactive terminals, buddy creates the default global config and continues with defaults.
Legacy compatibility:
- `AGENT_*` env vars are still accepted.
- `agent.toml` legacy config filenames are still accepted.
- `.agentx` legacy session/auth paths are still accepted with warnings.
- See docs/developer/deprecations.md for removal timelines.
```toml
[models.gpt-codex]
api_base_url = "https://api.openai.com/v1"
provider = "openai" # auto | openai | openrouter | moonshot | anthropic | other
api = "responses" # responses | completions | anthropic
auth = "login" # login | api-key
reasoning_effort = "medium" # optional, only used for supported reasoning models
# Only one may be set: api_key, api_key_env, api_key_file.
# api_key_env = "OPENAI_API_KEY"
# api_key = "sk-..."
# api_key_file = "/path/to/key.txt"
model = "gpt-5.3-codex"
# context_limit = 128000

[models.gpt-spark]
api_base_url = "https://api.openai.com/v1"
provider = "openai"
api = "responses"
auth = "login"
model = "gpt-5.3-codex-spark"
reasoning_effort = "medium"

[models.openrouter-deepseek]
api_base_url = "https://openrouter.ai/api/v1"
provider = "openrouter"
api = "completions"
auth = "api-key"
api_key_env = "OPENROUTER_API_KEY"
model = "deepseek/deepseek-v3.2"

[models.openrouter-glm]
api_base_url = "https://openrouter.ai/api/v1"
provider = "openrouter"
api = "completions"
auth = "api-key"
api_key_env = "OPENROUTER_API_KEY"
model = "z-ai/glm-5"

[models.kimi]
api_base_url = "https://api.moonshot.ai/v1"
provider = "moonshot"
api = "completions"
auth = "api-key"
api_key_env = "MOONSHOT_API_KEY"
model = "kimi-k2.5"

[models.claude-sonnet]
api_base_url = "https://api.anthropic.com/v1"
provider = "anthropic"
api = "anthropic"
auth = "api-key"
api_key_env = "ANTHROPIC_API_KEY"
model = "claude-sonnet-4-5"

[models.claude-haiku]
api_base_url = "https://api.anthropic.com/v1"
provider = "anthropic"
api = "anthropic"
auth = "api-key"
api_key_env = "ANTHROPIC_API_KEY"
model = "claude-haiku-4-5"

[agent]
name = "agent-mo" # tmux session prefix: buddy-<name>
model = "gpt-spark" # active profile key from [models.<name>]
# system_prompt = "Optional additional operator instructions."
max_iterations = 20
# temperature = 0.7
# top_p = 1.0

[tools]
shell_enabled = true
fetch_enabled = true
fetch_confirm = false
fetch_allowed_domains = []
fetch_blocked_domains = ["localhost"]
files_enabled = true
files_allowed_paths = []
search_enabled = true
shell_confirm = true
shell_denylist = ["rm -rf /", "mkfs"]

[network]
api_timeout_secs = 120
fetch_timeout_secs = 20

[display]
color = true
theme = "dark"
show_tokens = false
show_tool_calls = true
persist_history = true
# Optional custom theme overrides:
# [themes.my-theme]
# warning = "#ffb454"
# block_assistant_bg = "#13302a"

[tmux]
max_sessions = 1
max_panes = 5
```

| Tool | Description |
|---|---|
| `run_shell` | Execute shell commands (4K truncation). Requires `risk`, `mutation`, `privesc`, and `why`. Supports optional tmux session/pane selectors. |
| `fetch_url` | HTTP GET and return text (8K truncation). Uses `[network].fetch_timeout_secs` and the host safety policy. |
| `read_file` | Read files (8K truncation). Respects local/container/ssh execution context. |
| `write_file` | Create/overwrite files with path safety policies and optional allowlist roots. |
| `web_search` | DuckDuckGo search returning top results. |
| `tmux_capture_pane` | Capture tmux pane output (optionally delayed) for terminal-state inspection. |
| `tmux_send_keys` | Send keys/text to tmux panes for interactive control. Requires `risk`, `mutation`, `privesc`, and `why`. |
| `time` | Return harness-recorded wall clock time in multiple formats. |
All tool responses return a JSON envelope with `result` and `harness_timestamp`.
OpenAI (API key):
```shell
export BUDDY_API_KEY="sk-..."
buddy
```

OpenAI (login auth profile):

```shell
buddy login openai
buddy
```

Ollama:

```shell
ollama serve
export BUDDY_BASE_URL="http://localhost:11434/v1"
export BUDDY_MODEL="llama3.2"
buddy
```

OpenRouter:

```shell
export BUDDY_BASE_URL="https://openrouter.ai/api/v1"
export BUDDY_API_KEY="sk-or-..."
export BUDDY_MODEL="anthropic/claude-3.5-sonnet"
buddy
```