Wave CLI commands for pipeline orchestration.
| Command | Purpose |
|---|---|
| wave init | Initialize a new project |
| wave run | Execute a pipeline |
| wave do | Run an ad-hoc task |
| wave meta | Generate and execute a custom pipeline dynamically |
| wave status | Check pipeline status |
| wave logs | View execution logs |
| wave cancel | Cancel a running pipeline |
| wave chat | Interactive analysis of pipeline runs |
| wave artifacts | List and export artifacts |
| wave list | List adapters, runs, pipelines, personas, contracts |
| wave validate | Validate configuration |
| wave clean | Clean up workspaces |
| wave compose | Validate and execute pipeline sequences |
| wave doctor | Diagnose project configuration and health |
| wave suggest | Suggest impactful pipeline runs |
| wave serve | Start the web dashboard server |
| wave webui | Open the web dashboard in a browser |
| wave migrate | Database migrations |
| wave bench | Run and analyze SWE-bench benchmarks |
Initialize a new Wave project.
wave init
Output:
Created wave.yaml
Created .wave/personas/navigator.md
Created .wave/personas/craftsman.md
Created .wave/personas/summarizer.md
Created .wave/pipelines/default.yaml
Project initialized. Run 'wave validate' to check configuration.
wave init --adapter opencode # Use different adapter
wave init --force # Overwrite existing files
wave init --merge # Merge into existing config
wave init --reconfigure # Re-run onboarding wizard with current settings as defaults
wave init --all # Include all pipelines regardless of release status
wave init --workspace ./ws # Custom workspace directory path
wave init --output config.yaml # Custom output path for wave.yaml
wave init -y                  # Answer yes to all confirmation prompts
Execute a pipeline. Arguments can be provided as positional args or flags.
# Positional arguments (recommended for quick usage)
wave run ops-pr-review "Review auth module"
# Flag-based (explicit)
wave run --pipeline ops-pr-review --input "Review auth module"
# Mixed
wave run ops-pr-review --input "Review auth module"
Output:
[run-abc123] Starting pipeline: ops-pr-review
[run-abc123] Step: analyze (navigator) - started
[run-abc123] Step: analyze (navigator) - completed (45s)
[run-abc123] Step: review (auditor) - started
[run-abc123] Step: review (auditor) - completed (1m12s)
[run-abc123] Pipeline completed in 1m57s
wave run impl-impl-hotfix --dry-run # Preview without executing
wave run impl-speckit --from-step implement # Start from step (auto-recovers input)
wave run impl-speckit --from-step implement --force # Skip validation for --from-step
wave run impl-recinq --from-step report --run impl-recinq-20260219-fa19 # Recover input from specific run
wave run migrate --timeout 60 # Custom timeout (minutes)
wave run test --mock # Use mock adapter for testing
wave run build -o json # NDJSON output to stdout (pipe-friendly)
wave run deploy -o text # Plain text progress to stderr
wave run review -o text -v # Plain text with real-time tool activity
wave run check -o quiet # Only final result to stderr
wave run build --model haiku # Override adapter model for this run
wave run ops-debug --preserve-workspace # Preserve workspace from previous run (for debugging)
wave run --detach impl-issue "fix login bug"        # Detach: run in background, survive shell exit
The --detach flag spawns the pipeline as a background process that survives shell exit.
The command prints the run ID and returns immediately. Use wave logs and wave cancel to manage it.
wave run --detach impl-issue -- "https://github.com/org/repo/issues/42"
# → Pipeline 'impl-issue' launched (detached)
# → Run ID: impl-issue-20260317-...
# → Logs: wave logs impl-issue-20260317-...
# → Cancel: wave cancel impl-issue-20260317-...
This is the same mechanism the TUI uses internally — the subprocess runs in its own session group
(setsid), so killing the parent terminal has no effect on the pipeline.
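For scripting around detached runs, a small polling helper can bridge the gap between wave run --detach and whatever step depends on the result. This is a sketch, not part of the CLI: it assumes wave status <run-id> --format json emits a "status" field, which is an assumption about the JSON shape rather than something documented above.

```shell
# wait_for_run: block until a detached run leaves the "running" state,
# then print its final status. Sketch only — the `"status"` field name
# in the JSON output is an assumption.
wait_for_run() {
  run_id=$1
  while :; do
    state=$(wave status "$run_id" --format json \
      | sed -n 's/.*"status"[^"]*"\([a-z]*\)".*/\1/p')
    [ "$state" = "running" ] || { echo "$state"; return 0; }
    sleep 5
  done
}
```

A deploy script can then launch with --detach, capture the printed run ID, and call wait_for_run "$run_id" before proceeding.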
Run an ad-hoc task without a pipeline file.
wave do "fix the typo in README.md"
Output:
[run-xyz789] Generated 2-step pipeline: navigate -> execute
[run-xyz789] Step: navigate (navigator) - started
[run-xyz789] Step: navigate (navigator) - completed (23s)
[run-xyz789] Step: execute (craftsman) - started
[run-xyz789] Step: execute (craftsman) - completed (1m05s)
[run-xyz789] Task completed in 1m28s
wave do "audit auth" --persona auditor # Use specific persona
wave do "test" --dry-run # Preview only
wave do "deploy" --mock # Use mock adapter for testing
wave do "audit" --model opus          # Override adapter model for this run
Generate and execute a custom multi-step pipeline dynamically using the philosopher persona.
wave meta "implement user authentication"
Output:
Invoking philosopher to generate pipeline...
This may take a moment while the AI designs your pipeline.
Generated pipeline: implement-auth
Steps:
1. navigate [navigator]
2. specify [philosopher]
3. implement [craftsman]
4. review [auditor]
Meta pipeline completed (3m28s)
Total steps: 4, Total tokens: 45k
wave meta "build API" --save api-pipeline.yaml # Save generated pipeline
wave meta "refactor code" --dry-run # Preview without executing
wave meta "add tests" --mock # Use mock adapter for testing
wave meta "refactor" --model opus             # Override adapter model for this run
The --save flag is particularly useful for turning dynamically generated pipelines into reusable templates:
# Generate, save, and later re-run the same pipeline
wave meta "implement OAuth2 flow" --save .wave/pipelines/oauth2.yaml
wave run oauth2 "add Google provider"
Check pipeline run status.
wave status
Output:
RUN_ID PIPELINE STATUS STEP ELAPSED TOKENS
run-abc123 ops-pr-review running review 2m15s 12k
run-xyz789 impl-hotfix completed - 5m23s 28k
wave status run-abc123
Output:
Run ID: run-abc123
Pipeline: ops-pr-review
Status: running
Step: review
Started: 2026-02-03 14:30:22
Elapsed: 2m15s
Input: Review auth module
Steps:
analyze completed 45s
review running 1m30s
wave status --all # Show all recent runs
wave status --format json    # JSON output for scripting
View execution logs.
wave logs run-abc123
Output:
[14:30:22] started analyze (navigator) Starting analysis
[14:31:07] completed analyze (navigator) 45.0s 2.1k tokens Found 5 relevant files
[14:31:08] started review (auditor) Beginning review
[14:32:20] completed review (auditor) 72.0s 1.5k tokens Checking security patterns
wave logs --step analyze # Filter by step
wave logs --errors # Show only errors
wave logs --tail 20 # Last 20 entries
wave logs --follow # Stream in real-time
wave logs --since 10m # Last 10 minutes
wave logs --level error # Log level filter (all, info, error)
wave logs --format json      # Output as JSON for scripting
Wave has two distinct observability mechanisms that serve different purposes:
- wave logs reads the event history from the state database after events have been recorded. It works on both running and completed pipelines and is the primary tool for post-hoc debugging.
- --output modes (text, json, quiet) control how real-time progress is rendered to the terminal during execution. They determine what you see while a pipeline runs.
| | wave logs | --output modes |
|---|---|---|
| Mechanism | Reads recorded events from SQLite state DB | Renders progress events to terminal in real-time |
| Data source | .wave/state.db (persisted) | Live event stream (ephemeral) |
| Timing | During or after execution | Only during execution |
| Typical use | Post-hoc debugging, audit trail, scripting | Watching progress, CI output formatting |
1. Debugging a failed step
After a pipeline fails, use wave logs to inspect what happened:
wave logs impl-issue-20260320-abc123 --errors
wave logs impl-issue-20260320-abc123 --step implement --format json
The logs are persisted in the state database, so you can inspect them long after the run finishes.
2. Watching a pipeline run live
To see real-time progress with tool activity while a pipeline executes:
wave run impl-issue -o text -v -- "https://github.com/org/repo/issues/42"
The -o text flag renders plain-text progress to stderr, and -v adds real-time tool activity lines. This output is ephemeral — once the terminal is closed, it is gone.
3. Scripting and CI integration
For machine-readable output, combine both mechanisms:
# Real-time: stream structured JSON events during execution
wave run impl-issue -o json -- "https://github.com/org/repo/issues/42"
# Post-hoc: query the state DB after completion
wave logs impl-issue-20260320-abc123 --format json
Use -o json when you need to process events as they happen (e.g., updating a CI dashboard). Use wave logs --format json when you need to analyze a completed run (e.g., extracting step durations for metrics).
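A concrete CI gate built on the post-hoc side might look like the following sketch. It only uses flags documented above (--errors, --format json), but it assumes JSON mode emits one object per line (NDJSON-style framing); that framing is an assumption, not something the reference guarantees.

```shell
# fail_on_errors: return non-zero if the run recorded any error events.
# Assumes --format json emits one JSON object per line (an assumption).
fail_on_errors() {
  run_id=$1
  count=$(wave logs "$run_id" --errors --format json | grep -c '{')
  if [ "$count" -gt 0 ]; then
    echo "run $run_id recorded $count error events" >&2
    return 1
  fi
}
```

In a CI step: fail_on_errors "$RUN_ID" || exit 1.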
Cancel a running pipeline.
wave cancel run-abc123
Output:
Cancellation requested for run-abc123 (ops-pr-review)
Pipeline will stop after current step completes.
wave cancel run-abc123 --force
wave cancel run-abc123 -f    # Short flag for --force
Output:
Force cancellation sent to run-abc123 (ops-pr-review)
Process terminated.
wave cancel --format json # Output cancellation result as JSON
wave cancel -f --format text # Force cancel with text output (default)
List and export artifacts.
wave artifacts run-abc123
Output:
STEP ARTIFACT TYPE SIZE PATH
analyze analysis.json json 2.1 KB .wave/workspaces/.../analysis.json
review findings.md md 4.5 KB .wave/workspaces/.../findings.md
wave artifacts run-abc123 --export ./output
Output:
Exported 2 artifacts to ./output/
./output/analyze/analysis.json
./output/review/findings.md
wave artifacts --step analyze # Filter by step
wave artifacts --format json # JSON output
List Wave configuration, resources, and execution history.
wave list # Show all categories
wave list adapters # List configured adapters
wave list runs # List recent pipeline runs
wave list pipelines # List available pipelines
wave list personas # List configured personas
wave list contracts  # List contract schemas
wave list adapters
Output:
Adapters
────────────────────────────────────────────────────────────
✓ claude
binary: claude • mode: headless • format: json
wave list runs
Output:
Recent Pipeline Runs
────────────────────────────────────────────────────────────────────────────────
RUN_ID PIPELINE STATUS STARTED DURATION
run-abc123 ops-pr-review completed 2026-02-03 14:30 5m23s
run-xyz789 impl-hotfix failed 2026-02-03 09:30 2m15s
wave list pipelines
Output:
Pipelines
────────────────────────────────────────────────────────────
ops-pr-review [4 steps]
Automated code review workflow
○ analyze → review → report → notify
impl-speckit [5 steps]
Feature development pipeline
○ navigate → specify → plan → implement → validate
wave list personas
Output:
Personas
────────────────────────────────────────────────────────────
navigator
adapter: claude • temp: 0.1 • allow:3
Read-only codebase exploration
craftsman
adapter: claude • temp: 0.7 • allow:5
Implementation and testing
wave list contracts
Output:
Contracts
────────────────────────────────────────────────────────────
navigation [json-schema]
used by:
• impl-speckit → navigate (navigator)
specification [json-schema]
used by:
• impl-speckit → specify (philosopher)
validation-report [json-schema]
(unused)
wave list runs --run-status failed # Filter by status
wave list runs --limit 20 # Show more runs
wave list runs --run-pipeline impl-hotfix # Filter by pipeline
wave list --format json                  # JSON output
Validate configuration.
wave validate
Output (success):
Validating wave.yaml...
Adapters: 1 defined
Personas: 5 defined
Pipelines: 3 discovered
All validation checks passed.
Output (errors):
Validating wave.yaml...
ERROR: Persona 'craftsman' references undefined adapter 'opencode'
ERROR: System prompt file not found: .wave/personas/missing.md
Validation failed with 2 errors.
wave validate -v # Show all checks (global --verbose flag)
wave validate --pipeline impl-hotfix.yaml  # Validate specific pipeline
Clean up workspaces.
wave clean --dry-run
Output:
Would delete:
.wave/workspaces/run-abc123/ (ops-pr-review, 145 MB)
.wave/workspaces/run-xyz789/ (impl-hotfix, 23 MB)
Total: 168 MB across 2 runs
Run without --dry-run to delete.
wave clean --all # Clean all workspaces
wave clean --older-than 7d # Clean runs older than 7 days
wave clean --status completed # Clean only completed runs
wave clean --keep-last 5 # Keep 5 most recent
wave clean --force # Skip confirmation
wave clean --dry-run # Preview what would be deleted
wave clean --quiet           # Suppress output for scripting
Open the embedded web operations dashboard in a browser. This is a convenience alias that starts the dashboard server and opens it in the default browser.
wave webui
This is equivalent to running wave serve and manually opening the URL.
wave webui --port 9090 # Custom port
wave webui --no-open         # Start server without opening browser
Start the web dashboard server. Provides real-time pipeline monitoring, execution control, DAG visualization, and artifact browsing through a web interface.
Note:
wave serve requires the webui build tag. Install with go install -tags webui or use a release binary that includes the web UI.
wave serve
Output:
Starting Wave dashboard on http://127.0.0.1:8080
| Flag | Default | Description |
|---|---|---|
| --port | 8080 | Port to listen on |
| --bind | 127.0.0.1 | Address to bind to |
| --token | "" | Authentication token (required for non-localhost binding) |
| --db | .wave/state.db | Path to state database |
| --manifest | wave.yaml | Path to manifest file |
When binding to a non-localhost address (--bind 0.0.0.0), authentication is required. The token can be provided via:
- --token flag
- WAVE_SERVE_TOKEN environment variable
- Auto-generated (printed to stderr on startup)
# Local development (no auth required)
wave serve
# Custom port
wave serve --port 9090
# Expose on all interfaces with explicit token
wave serve --bind 0.0.0.0 --token mysecret
# Use custom database path
wave serve --db .wave/state.db
Validate artifact compatibility between adjacent pipelines in a sequence and optionally execute them in order.
wave compose impl-speckit wave-evolve wave-review
Output:
Validating pipeline sequence: impl-speckit → wave-evolve → wave-review
impl-speckit → wave-evolve: compatible (3 artifacts)
wave-evolve → wave-review: compatible (2 artifacts)
Executing pipeline sequence...
[run-abc123] impl-speckit completed (2m15s)
[run-def456] wave-evolve completed (3m42s)
[run-ghi789] wave-review completed (1m08s)
Sequence completed in 7m05s
| Flag | Default | Description |
|---|---|---|
| --validate-only | false | Check compatibility without executing |
| --input | "" | Input data passed to every pipeline in the sequence |
| --mock | false | Use mock adapter (for testing) |
| --parallel | false | Enable parallel execution (use -- to separate stages) |
| --fail-fast | true | Stop on first failure |
# Validate without executing
wave compose impl-speckit wave-evolve --validate-only
# Pass input to all pipelines
wave compose pipeline-a pipeline-b --input "build feature X"
# Parallel execution with stage separator
wave compose --parallel A B -- C
# Use mock adapter for testing
wave compose impl-speckit wave-evolve --mock
Run diagnostic checks on Wave project configuration, tools, and environment.
wave doctor
Output:
Wave Doctor
────────────────────────────────────────
✓ Manifest valid
✓ Adapters configured
✓ Personas resolved
⚠ 2 pipelines reference missing contracts
Checks: 12 passed, 1 warning, 0 errors
| Flag | Default | Description |
|---|---|---|
| --fix | false | Auto-install missing dependencies where possible |
| --optimize | false | Scan project and propose wave.yaml improvements |
| --dry-run | false | Show proposed changes without writing (requires --optimize) |
| --skip-ai | false | Skip AI-powered analysis, deterministic scan only (requires --optimize) |
| --skip-codebase | false | Skip forge API codebase analysis |
| --yes, -y | false | Accept all proposed changes without confirmation (requires --optimize) |
| --json | false | Output in JSON format |
# Auto-fix issues
wave doctor --fix
# Propose manifest optimizations
wave doctor --optimize
# Preview optimizations without applying
wave doctor --optimize --dry-run
# Non-interactive optimization
wave doctor --optimize --yes
# JSON output for scripting
wave doctor --json
| Code | Meaning |
|---|---|
| 0 | All checks passed |
| 1 | Warnings detected (non-blocking) |
| 2 | Errors detected (action required) |
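The exit codes above make wave doctor easy to wire into a CI health gate: treat warnings as non-blocking, errors as blocking. The following is a minimal sketch of that pattern (the gate function itself is hypothetical, not part of the CLI):

```shell
# doctor_gate: interpret `wave doctor` exit codes per the table above.
# 0 = healthy, 1 = warnings (non-blocking), 2 = errors (block the merge).
doctor_gate() {
  wave doctor --json
  case $? in
    0) echo "healthy" ;;
    1) echo "warnings (continuing)" ;;
    *) echo "errors detected" >&2; return 2 ;;
  esac
}
```

A pipeline step would then run doctor_gate || exit 1 so that only hard errors fail the job.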
Analyze codebase health and suggest pipeline runs that would be most impactful.
wave suggest
Output:
Suggested pipelines:
1. [P1] wave-test-hardening
Reason: Test coverage below threshold in internal/pipeline/
Input: internal/pipeline/
2. [P2] wave-security-audit
Reason: 3 packages have no input validation
Input: internal/adapter/ internal/workspace/
3. [P3] wave-evolve
Reason: 5 TODOs found in production code
| Flag | Default | Description |
|---|---|---|
| --limit | 5 | Maximum number of suggestions |
| --dry-run | false | Show what would be suggested without executing |
| --json | false | Output in JSON format |
# Limit suggestions
wave suggest --limit 3
# JSON output
wave suggest --json
# Preview mode
wave suggest --dry-run
Interactive analysis and exploration of pipeline runs. Opens a conversational session where you can investigate step outputs, artifacts, and execution details.
wave chat run-abc123
| Flag | Default | Description |
|---|---|---|
| --step | "" | Focus context on a specific step |
| --model | sonnet | Model to use for the chat session |
| --list | false | List recent runs |
| --continue | "" | Continue work in a step's workspace (read-write) |
| --rewrite | "" | Re-execute a step with modified prompt |
| --extend | "" | Add supplementary instructions to a step |
# List recent runs
wave chat --list
# Chat about a specific step
wave chat run-abc123 --step implement
# Continue working in a step's workspace
wave chat run-abc123 --continue implement
# Re-execute a step with a new prompt
wave chat run-abc123 --rewrite implement
# Add supplementary instructions to a step
wave chat run-abc123 --extend implement
Database migration commands.
wave migrate status
Output:
Current version: 3
Available migrations: 5
Pending: 2
v1 core_tables applied 2026-01-15 10:00:00
v2 add_artifacts applied 2026-01-20 14:30:00
v3 add_metrics applied 2026-02-01 09:00:00
v4 add_checkpoints pending
v5 add_relay pending
wave migrate up
Output:
Applying migration v4: add_checkpoints... done
Applying migration v5: add_relay... done
Migrations complete. Current version: 5
Verify that applied migrations match their expected checksums and are in a consistent state.
wave migrate validate
wave migrate down 3
Output:
Rolling back to version 3...
Reverting v5: add_relay... done
Reverting v4: add_checkpoints... done
Rollback complete. Current version: 3
When launched without --no-tui, Wave provides an interactive terminal UI with a guided workflow that progresses through four phases:
| Phase | View | Description |
|---|---|---|
| Health | Health checks | Infrastructure health checks run automatically on startup |
| Proposals | Suggest view | Shows recommended pipeline runs based on codebase analysis |
| Fleet | Pipelines view | Monitors active pipeline runs and their step progress |
| Attached | Live output | Shows real-time output from a running pipeline step |
- Health — Runs automatically on startup. If all checks pass, transitions to Proposals after a brief delay. If errors are detected, the user can choose to continue or address issues first.
- Proposals — Displays pipeline suggestions from wave suggest. Press Tab to switch to Fleet view.
- Fleet — Shows active runs. Press Tab to switch back to Proposals. Select a run to attach.
- Attached — Shows live output from the selected pipeline. Tab is blocked during attachment. Detaching returns to Fleet view.
| Key | Action |
|---|---|
| Tab | Toggle between Proposals and Fleet views |
| Enter | Select/attach to a pipeline run |
| Esc | Detach from live output (returns to Fleet) |
| q | Quit |
To disable the TUI and use plain text output, pass --no-tui or -o text.
Key source: internal/tui/content.go
Run and analyze SWE-bench benchmarks. Compare Wave pipeline performance against standalone Claude Code.
Execute a pipeline against each task in a JSONL benchmark dataset.
wave bench run --dataset swe-bench-lite.jsonl --pipeline bench-solve
wave bench run --dataset tasks.jsonl --pipeline bench-solve --limit 10
wave bench run --dataset tasks.jsonl --mode claude --label baseline-v1
wave bench run --dataset tasks.jsonl --pipeline bench-solve --output results.json
| Flag | Default | Description |
|---|---|---|
| --dataset | | Path to JSONL dataset file (required) |
| --pipeline | | Pipeline name to execute per task (required unless --mode=claude) |
| --mode | wave | Execution mode: wave or claude |
| --label | | Human-readable label for this run |
| --limit | 0 | Maximum number of tasks to run (0 = all) |
| --timeout | 0 | Per-task timeout in seconds (0 = no limit) |
| --output | | Path to write JSON results file |
| --datasets-dir | .wave/bench/datasets | Directory to search for dataset files |
| --keep-workspaces | false | Preserve task worktrees after completion |
Generate a summary from a previous benchmark run's results file.
wave bench report --results results.json
wave bench report --results results.json --json
Compare two benchmark result files to show per-task differences.
wave bench compare --base baseline.json --compare wave-run.json
wave bench compare --base baseline.json --compare wave-run.json --json
| Flag | Default | Description |
|---|---|---|
| --base | | Path to base/baseline results JSON (required) |
| --compare | | Path to comparison results JSON (required) |
List available benchmark datasets in the datasets directory.
wave bench list
wave bench list --datasets-dir ./my-datasets
All commands support:
| Flag | Short | Description |
|---|---|---|
| --help | -h | Show help |
| --version | | Show version |
| --manifest | -m | Path to manifest file (default: wave.yaml) |
| --debug | -d | Enable debug mode |
| --output | -o | Output format: auto, json, text, quiet (default: auto) |
| --verbose | -v | Include real-time tool activity |
| --json | | Output in JSON format (equivalent to --output json) |
| --quiet | -q | Suppress non-essential output (equivalent to --output quiet) |
| --no-color | | Disable colored output |
| --no-tui | | Disable TUI and use text output |
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | General error (includes pipeline failures, timeouts, validation errors) |
| 2 | Usage error (invalid arguments or configuration) |
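Because usage errors (exit 2) are deterministic while pipeline failures (exit 1) may be transient, a retry wrapper should distinguish the two. This is a sketch built only on the exit-code contract above; the wrapper function itself is hypothetical:

```shell
# run_with_retry: retry transient pipeline failures (exit 1) up to N
# times, but never retry usage errors (exit 2) — bad arguments will
# fail identically every time.
run_with_retry() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    wave run "$@" && return 0
    rc=$?
    [ "$rc" -eq 2 ] && return 2   # usage error: retrying cannot help
    i=$((i + 1))
  done
  return 1
}
```

Example: run_with_retry 3 impl-hotfix "patch CVE-2026-1234" in a CI step gives flaky runs two extra chances without masking configuration mistakes.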
- Quick Start - Run your first pipeline
- Pipelines - Define multi-step workflows
- Manifest Reference - Configuration options