Summary
Status Update (March 27, 2026)
Phases 1 through 5 are now merged in main. Phase 5 shipped in PR #166 on March 26, 2026, completing the workspace-provider configuration and setup path that the remote-provider work depends on.
What landed in Phase 5:
- orbitdock init --workspace-provider ...
- orbitdock start --workspace-provider ... startup override resolution
- GET/PUT /api/server/workspace-provider
- orbitdock config get workspace-provider and orbitdock config set workspace-provider <value>
- provider selection wired into orbitdock setup
What is still open in this issue:
- Daytona and other remote workspace providers
- mission/workspace lifecycle policy and provider-specific provisioning config
- Swift/client polish for provisioning and remote workspace UX
Status Update (March 26, 2026)
Phases 3 and 4 are now merged. Phase 3 shipped in PR #152 on March 25, 2026, adding the managed-mode SyncWriter, spool and drain behavior, heartbeat replication, and the CLI/server wiring needed to run a workspace OrbitDock against an upstream control plane. Phase 4 shipped in PR #165 on March 26, 2026, adding the control-plane /api/sync receiver, workspace sync schema, sequence and dedup handling, heartbeat tracking, and provisioning state support in mission orchestration.
What landed across Phases 3 and 4:
- managed-mode
  - SyncWriter batching, spool, retry, heartbeat, and shutdown flow
  - PersistenceWriter fanout into sync replication when --managed is enabled
- control-plane
  - POST /api/sync with workspace-token auth, sequence validation, replay, and ack responses
  - workspaces, sync_log, and mission_issues.workspace_id persistence support
  - provisioning mission state plumbing on the server
What is still open in this issue:
- orbitdock init and configuration flow for selecting and storing workspace-provider settings
- remote workspace providers such as Daytona
- client polish for provisioning and remote workspace UX
Status Update (March 24, 2026)
Phase 1 shipped in PR #150: WorkspaceProvider extraction is now merged. dispatch_issue() is now an orchestration boundary, and the local workspace lifecycle runs through LocalWorkspaceProvider with no intended behavior change.
What landed:
- WorkspaceProvider trait + dispatch boundary
- LocalWorkspaceProvider covering local workspace creation, environment setup, session creation, agent launch, and prompt delivery
- DispatchRequest / DispatchResult handoff types for provider-owned execution
- Extracted pure helpers for branch naming and MCP config generation, plus new unit coverage
What is still open in this issue:
- sync replication to the control plane (SyncCommand, spool, /api/sync)
- orbitdock init and managed-mode config flow
- remote providers like Daytona
Status Update (March 24, 2026)
Phase 2 is now in PR #151: the sync-command serialization boundary is implemented and ready for review. OrbitDock now has a typed SyncCommand mirror for PersistCommand, a SyncEnvelope wrapper, bidirectional conversion between persistence and sync commands, and round-trip coverage across every current persistence variant.
What landed:
- serializable SyncCommand mirror for every current persistence command
- sync-only payload mirrors for the boxed persistence params that were not directly serializable
- PersistCommand → SyncCommand conversion for the full persistence surface
- SyncCommand → PersistCommand conversion with row response channels explicitly dropped on restore
- round-trip serialization tests for all current variants plus envelope coverage
What is still open in this issue:
- SyncWriter batching, spool, and retry behavior
- /api/sync endpoint and sequence handling on the control plane
- orbitdock init and managed-mode config flow
- remote providers like Daytona
OrbitDock's server is already a portable binary with its own SQLite database. To support workflows where mission control dispatches issues to remote development environments (VMs, containers, remote machines), we need:
- Pluggable workspace providers — abstract where code execution happens (local worktrees vs remote VMs vs future providers)
- DB-first sync to control plane — remote workspaces run a full OrbitDock with local SQLite as source of truth, then replicate to the main server
- orbitdock init command — guided onboarding that configures workspace provider, API keys, and tracker credentials
- Remote agent execution — run OrbitDock + agents inside provisioned workspaces, synced back to the main OrbitDock server
The server architecture doesn't change — same REST + WebSocket API, same mission orchestrator, same client compatibility. The only changes are what happens inside dispatch_issue() and a new sync layer that replicates remote workspace state to the control plane.
Key Architecture Finding: DB-First Sync via PersistCommand Replication
The Problem
Each mission ticket maps to a remote workspace. The workspace runs a full OrbitDock server with its own local SQLite. When the workspace is destroyed, that data is gone. We need the control plane (main OrbitDock server) to have all session data before the workspace dies.
The Solution: PersistCommand Is Already the Replication Protocol
PersistCommand is a sealed enum of every possible DB mutation (55 variants). Instead of designing a new sync format, we serialize each PersistCommand after the local write succeeds and ship it to the main server.
┌─────────────────────────────────────────────────┐
│                Remote Workspace                 │
│                                                 │
│  codex-core / claude CLI                        │
│       │ events                                  │
│       ▼                                         │
│ SessionActor → PersistenceWriter → SQLite (SoT) │
│                        │                        │
│                        │ after local write      │
│                        ▼                        │
│          SyncQueue (on-disk spool)              │
│                        │                        │
└────────────────────────│────────────────────────┘
                         │ HTTP POST /api/sync
                         ▼
                ┌──────────────────┐
                │   Main Server    │
                │  (control plane) │
                │                  │
                │    /api/sync     │
                │        │         │
                │        ▼         │
                │ SQLite (replica) │──► WebSocket → Swift Client
                └──────────────────┘
Why this works:
- Local DB is always source of truth — fast writes, resilient to network blips
- Sync is eventual — if the network drops, commands spool to disk and flush when reconnected (same pattern as the hook_forward.rs spool at <data_dir>/spool/)
- All agent providers use the same path — Claude hooks point to the workspace's own localhost OrbitDock, not the main server. Codex is embedded. Both write to local SQLite, both get synced identically.
- No new protocol — PersistCommand already represents every mutation. The main server's execute_command() already knows how to replay every variant.
- Pre-allocated IDs — the main server generates od- UUIDs for session_id and workspace_id before provisioning, so IDs never collide.
PersistCommand Serialization
Current state: PersistCommand only derives Debug. It has 55 variants but only 2 contain non-serializable types:
RowAppend { ..., sequence_tx: Option<oneshot::Sender<u64>> }
RowUpsert { ..., sequence_tx: Option<oneshot::Sender<u64>> }
Approach: Create a SyncCommand mirror type that's Serialize + Deserialize, derived from PersistCommand with the sequence_tx field stripped. After each local write, if the server is in managed mode, convert the PersistCommand → SyncCommand and enqueue.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SyncEnvelope {
    pub sequence: u64,        // monotonic, for ordering + dedup
    pub workspace_id: String, // which workspace sent this
    pub timestamp: String,    // ISO-8601
    pub command: SyncCommand, // serializable mirror of PersistCommand
}
Spool-on-Failure Pattern
Reuse the existing hook transport spool design from hook_forward.rs:
- Location: <data_dir>/sync-spool/
- Filename: {timestamp_ms}-{sequence}.json (enables ordering + uniqueness)
- Retry: On each tick, drain spool in order before sending new commands
- Batching: Group commands into batches (same 50-command / 100ms pattern as PersistenceWriter)
- Auth: Bearer token via existing auth_tokens system (odtk_<id>_<secret>)
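The filename-based ordering above can be sketched like this. The zero-padding is my assumption — it makes lexicographic filename order match numeric drain order — and spool_filename / drain_order are illustrative names, not OrbitDock APIs:

```rust
// Sketch of the {timestamp_ms}-{sequence}.json spool convention.
// Zero-padding (an assumption) makes a plain lexicographic sort of
// directory entries equal the numeric (timestamp, sequence) order.

fn spool_filename(timestamp_ms: u64, sequence: u64) -> String {
    format!("{:013}-{:010}.json", timestamp_ms, sequence)
}

fn drain_order(mut files: Vec<String>) -> Vec<String> {
    // With zero-padded names, a string sort is enough to drain in order.
    files.sort();
    files
}
```

On each tick the writer would list the spool directory, sort, and send oldest first before any new commands.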
Main Server /api/sync Endpoint
POST /api/sync
Authorization: Bearer <workspace_callback_token>
Body: { "commands": [SyncEnvelope, ...] }
Response:
- 200 — { "acked_through": <sequence> }
- 409 — sequence gap detected (client should resend from last acked)
- 401 — invalid token
The endpoint:
- Validates the bearer token → resolves to a workspace_id
- Checks sequence continuity (no gaps)
- Converts each SyncCommand back to PersistCommand
- Feeds them through the main server's existing PersistenceWriter channel
- Broadcasts ServerMessage events to connected Swift clients
This means the Swift app sees remote sessions appear and update in real-time, through the exact same WebSocket path it already uses.
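The sequence-continuity check can be sketched as follows. SyncAck and validate_batch are hypothetical names, not the server's real types; skipping already-acked sequences as duplicates is an assumption consistent with the dedup role of sync_log:

```rust
// Toy model of the /api/sync sequence validation: a batch either extends
// the acked sequence contiguously (200) or exposes a gap (409).

#[derive(Debug, PartialEq)]
enum SyncAck {
    Acked { acked_through: u64 }, // 200 response
    Gap { resend_from: u64 },     // 409: client resends from last ack + 1
}

fn validate_batch(last_acked: u64, sequences: &[u64]) -> SyncAck {
    let mut expected = last_acked + 1;
    for &seq in sequences {
        if seq < expected {
            continue; // duplicate delivery of an already-acked command
        }
        if seq > expected {
            return SyncAck::Gap { resend_from: last_acked + 1 };
        }
        expected += 1;
    }
    SyncAck::Acked { acked_through: expected - 1 }
}
```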
OrbitDock Already Controls Both Agent Providers
OrbitDock is the control plane for both Claude Code and Codex. The orbitdock binary already manages both:
- Claude Code: via HTTP hooks. Hook transport is already remote-capable — orbitdock install-hooks --server-url <remote> stores the URL in ~/.orbitdock/hook_transport_config.json. In a workspace, hooks point to localhost (the workspace's own OrbitDock), not the main server.
- Codex: via the embedded codex-core Rust library. No external binary needed — Codex is built into the orbitdock binary itself.
Running OrbitDock inside a remote workspace gives you both agent providers for free. The sync layer handles replication to the control plane uniformly for both.
Relevant code:
- Hook installation: crates/server/src/admin/install_hooks.rs
- Hook spool (reuse pattern): crates/server/src/admin/hook_forward.rs (line 288+)
- Claude connector: crates/connector-claude/src/lib.rs (subprocess spawn)
- Codex connector: crates/connector-codex/src/lib.rs (embedded codex-core, direct Rust calls)
- Session creation: crates/server/src/runtime/session_creation.rs (provider dispatch)
- PersistCommand enum: crates/server/src/infrastructure/persistence/commands.rs (55 variants, ~490 lines)
- PersistenceWriter batching: crates/server/src/infrastructure/persistence/writer.rs (~146 lines)
- Command dispatcher: crates/server/src/infrastructure/persistence/mod.rs (execute_command at ~line 189)
- Auth tokens: crates/server/src/infrastructure/auth_tokens.rs
Design
Workspace Provider Trait
A thin abstraction over "where code runs":
pub enum WorkspaceProviderKind {
    Local,   // current behavior — git worktrees + local processes
    Daytona, // Daytona VMs
    // future: Docker, SSH, Gitpod, Codespaces, etc.
}

#[async_trait]
pub trait WorkspaceProvider: Send + Sync {
    /// Provision an isolated workspace for a mission issue
    async fn create(&self, req: CreateWorkspaceRequest) -> Result<Workspace>;

    /// Run the coding agent inside the workspace
    async fn start_agent(&self, workspace: &Workspace, prompt: &str, settings: &AgentSettings) -> Result<AgentHandle>;

    /// Check workspace health / status
    async fn status(&self, workspace_id: &str) -> Result<WorkspaceStatus>;

    /// Tear down the workspace (flush sync queue first)
    async fn destroy(&self, workspace_id: &str) -> Result<()>;
}
LocalWorkspaceProvider — wraps the existing worktree creation + local process spawning logic. Zero behavior change for current users. No sync layer (local provider writes directly to the same DB).
DaytonaWorkspaceProvider — calls Daytona REST API to create/manage workspaces. Inside the workspace:
- OrbitDock binary is pre-installed in the workspace image
- OrbitDock starts in managed mode with pre-allocated IDs and sync callback URL
- For Claude: hooks point to localhost workspace OrbitDock → local SQLite → sync to control plane
- For Codex: embedded codex-core → local SQLite → sync to control plane
- Both agent providers follow the exact same data path
Current Dispatch Flow (What Changes)
The current dispatch_issue() in runtime/mission_dispatch.rs does:
1. Update state → "claimed"
2. create_tracked_worktree() ← LOCAL: git worktree add
3. Write .mcp.json for mission tools ← LOCAL: filesystem write
4. Render prompt from template
5. prepare_persist_direct_session() ← LOCAL: session with local cwd
6. launch_prepared_direct_session() ← LOCAL: spawn local process
7. Send initial prompt via action channel
8. Update state → "running"
With the workspace provider trait, steps 2-3 and 5-7 become provider-specific:
1. Update state → "claimed"
2. provider.create(workspace_req) ← PROVIDER: create workspace (instant for local, async for remote)
3. Update state → "provisioning" ← NEW STATE (skipped for local — instant transition)
4. Render prompt from template
5. provider.start_agent(workspace, prompt, settings) ← PROVIDER: start agent in workspace
6. Update state → "running"
What stays generic: Provider selection, prompt rendering, state machine transitions, tracker updates, reconciliation loop.
What becomes provider-specific: Workspace creation, environment setup (.mcp.json, hooks), agent process management.
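The generic-vs-provider split above can be sketched with a toy, synchronous stand-in. The real trait is async; Provider, dispatch, and LocalStub here are illustrative, not the actual types:

```rust
// Toy model of dispatch_issue() after trait extraction: orchestration owns
// the state transitions, the provider owns workspace creation and agent start.

trait Provider {
    fn create(&self, issue: &str) -> String; // returns a workspace id
    fn start_agent(&self, workspace: &str, prompt: &str) -> String;
    fn is_instant(&self) -> bool;            // local worktrees are instant
}

fn dispatch(provider: &dyn Provider, issue: &str, prompt: &str) -> Vec<String> {
    let mut states = vec!["claimed".to_string()];
    let ws = provider.create(issue);
    if !provider.is_instant() {
        states.push("provisioning".to_string()); // skipped for local
    }
    provider.start_agent(&ws, prompt);
    states.push("running".to_string());
    states
}

struct LocalStub;
impl Provider for LocalStub {
    fn create(&self, issue: &str) -> String { format!("wt-{}", issue) }
    fn start_agent(&self, _w: &str, _p: &str) -> String { "pid".to_string() }
    fn is_instant(&self) -> bool { true }
}
```

A remote provider would return false from is_instant, inserting the provisioning state between claimed and running.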
State Machine Addition
Local provider: [claimed] → [running] (instant — no provisioning delay)
Remote provider: [claimed] → [provisioning] → [running]
Full flow:
[queued]
   │ dispatch
[claimed]
   │ create workspace
[provisioning] ← NEW (only for non-instant providers)
   │ workspace ready, agent started, sync connected
[running]
   │
   ├─ completed → [completed] → graceful drain → destroy workspace
   ├─ failed ───→ [failed] → graceful drain → destroy workspace
   └─ stalled ──→ [stalled] → force-destroy workspace
Current orchestration states: queued, claimed, running, retryQueued, completed, failed, blocked.
Client shows "Setting up environment..." during provisioning.
Important: destroy always waits for the sync queue to drain first (graceful drain). Only stall-timeout triggers a force-destroy.
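These transitions and the drain rule can be encoded minimally (hypothetical helper names; the real orchestrator also handles retryQueued and blocked, omitted here):

```rust
// Sketch of the mission state machine with the new provisioning state.
// next_states lists legal successors; graceful_drain encodes the rule that
// only completed/failed teardowns wait for the sync queue to flush.

fn next_states(state: &str) -> &'static [&'static str] {
    match state {
        "queued" => &["claimed"],
        "claimed" => &["provisioning", "running"], // local skips provisioning
        "provisioning" => &["running"],
        "running" => &["completed", "failed", "stalled"],
        _ => &[], // terminal states
    }
}

/// True when destroy must wait for the sync queue to drain first;
/// a stall is the only path that force-destroys without draining.
fn graceful_drain(state: &str) -> bool {
    matches!(state, "completed" | "failed")
}
```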
Workspace Lifecycle & Sync Details
Startup (main server provisions workspace)
- Main server pre-allocates IDs: workspace_id, session_id (both od- + UUIDv4)
- Main server generates a scoped auth token for this workspace (stored in auth_tokens table)
- Main server creates workspace record in workspaces table (status: creating)
- Main server calls provider API → create workspace (repo URL, branch mission/{issue-identifier}, image)
- Provider clones repo, boots environment with pre-baked OrbitDock image
- Main server execs startup command inside workspace:

  orbitdock start --managed \
    --workspace-id $WORKSPACE_ID \
    --session-id $SESSION_ID \
    --sync-url https://dock.example.com:4000/api/sync \
    --sync-token $CALLBACK_TOKEN \
    --prompt-file /tmp/mission-prompt.md \
    --provider codex  # or claude

- Workspace's OrbitDock starts in managed mode:
  - Creates session with the pre-allocated session_id
  - Starts the SyncWriter background task
  - Starts the agent (codex embedded or claude subprocess)
  - Sends initial prompt
Steady State (workspace running)
- Agent writes events → PersistenceWriter → local SQLite (fast, guaranteed)
- SyncWriter intercepts each PersistCommand after local write → converts to SyncCommand → enqueues
- Background task batches and POSTs to main server's /api/sync every 100ms (or on batch full)
- If POST fails → spool to <data_dir>/sync-spool/ → retry on next tick
- Heartbeat piggybacked on sync batches (empty batch = heartbeat-only POST)
- Main server detects missing heartbeats → marks workspace unhealthy after threshold
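The batch-or-timeout behavior can be sketched with a plain channel. collect_batch is an illustrative name; the real SyncWriter is async, but std::sync::mpsc keeps the example self-contained:

```rust
use std::sync::mpsc::Receiver;
use std::time::{Duration, Instant};

// Sketch of the SyncWriter batch loop: flush when the batch hits `max`
// commands (50 in the doc) or when the `window` (100ms) elapses,
// whichever comes first — the same shape as PersistenceWriter batching.

fn collect_batch(rx: &Receiver<String>, max: usize, window: Duration) -> Vec<String> {
    let deadline = Instant::now() + window;
    let mut batch = Vec::new();
    while batch.len() < max {
        let left = deadline.saturating_duration_since(Instant::now());
        if left.is_zero() {
            break; // window elapsed: flush whatever we have
        }
        match rx.recv_timeout(left) {
            Ok(cmd) => batch.push(cmd),
            Err(_) => break, // timeout or sender dropped
        }
    }
    batch
}
```

An empty returned batch corresponds to the heartbeat-only POST described above.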
Shutdown (clean completion)
- Agent completes → pushes branch → session ends
- OrbitDock in workspace flushes remaining sync queue (blocks up to 30s timeout)
- Workspace's OrbitDock sends final sync batch with session end command
- Workspace's OrbitDock exits cleanly
- Main server receives session end via sync → updates tracker → transitions to completed
- Main server calls provider API → delete workspace
Crash Recovery
- If workspace dies unexpectedly, main server detects via missing heartbeats
- Main server marks session as stalled after stall_timeout_secs
- Data is preserved up to the last successful sync batch
- Acceptable data loss: at most one batch window (100ms) of commands
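The heartbeat-threshold check amounts to a staleness comparison (illustrative name and signature; the real server compares last_heartbeat_at timestamps from the workspaces table):

```rust
use std::time::{Duration, Instant};

// Sketch of the stall detector: a workspace whose last heartbeat is older
// than stall_timeout_secs is considered stalled and gets force-destroyed.

fn is_stalled(last_heartbeat: Instant, now: Instant, stall_timeout_secs: u64) -> bool {
    now.saturating_duration_since(last_heartbeat) > Duration::from_secs(stall_timeout_secs)
}
```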
orbitdock init
Guided onboarding command that collects configuration and writes to the encrypted config table. The wizard adapts based on which workspace provider is selected — no separate "mode" concept:
$ orbitdock init

Workspace Provider
  › Local — agents run on this machine (default)
    Daytona — provision Daytona VMs
    Docker — spin up local containers
    SSH — run on remote machines via SSH

[Daytona selected]

Daytona Configuration
  API URL: https://daytona.yourcompany.com
  API Key: dtn_••••••••

Agent Keys
  These get injected into each workspace as env vars.
  Anthropic API Key (for Claude): sk-ant-••••••••
  OpenAI API Key (for Codex): sk-••••••••

Issue Tracker
  Linear API Key: lin_api_••••••••

Server bind address: 0.0.0.0:4000

✓ Configuration saved (encrypted)
Run `orbitdock start` to launch.
Provider Differences
| | Local | Daytona (or other remote) |
| --- | --- | --- |
| Workspace | Local git worktrees | Remote VMs / containers |
| Claude auth | User's existing ~/.claude/ session | ANTHROPIC_API_KEY injected into workspace |
| Codex auth | OPENAI_API_KEY on host | OPENAI_API_KEY injected into workspace |
| Agent runtime | OrbitDock on host manages both agent providers | OrbitDock in workspace manages both agent providers |
| DB sync | N/A (single SQLite) | Workspace SQLite → control plane via /api/sync |
| Hook transport | Hooks point to localhost | Hooks point to workspace's own localhost OrbitDock |
Local provider skips remote provider config and API key prompts — agents piggyback on existing local installs and auth sessions.
Remote providers (Daytona, Docker, SSH, etc.) collect provider-specific connection details and API keys needed to provision workspaces and authenticate agents headlessly.
Config written by init
Stored in the existing config table with AES-256-GCM encryption (same pattern as openai_api_key and linear_api_key):
workspace_provider = "local" | "daytona" | "docker" | "ssh" | ...
daytona_api_url = "enc:..." (only for daytona provider)
daytona_api_key = "enc:..." (only for daytona provider)
anthropic_api_key = "enc:..." (only for remote providers)
openai_api_key = "enc:..." (already exists)
linear_api_key = "enc:..." (already exists)
server_bind = "0.0.0.0:4000"
orbitdock init should be re-runnable, pre-filling existing values. Also support orbitdock config set/get for scripted/CI setups.
Daytona Integration
Infrastructure module
New module at src/infrastructure/daytona/:
pub struct DaytonaClient {
    http: reqwest::Client, // already a workspace dependency
    api_url: String,
    api_key: String,
}

impl DaytonaClient {
    async fn create_workspace(&self, req: CreateWorkspaceRequest) -> Result<DaytonaWorkspace>;
    async fn get_workspace(&self, id: &str) -> Result<DaytonaWorkspace>;
    async fn exec(&self, workspace_id: &str, cmd: &[&str], env: &[(&str, &str)]) -> Result<ExecResult>;
    async fn delete_workspace(&self, id: &str) -> Result<()>;
}
Workspace creation + agent start flow
- Server calls Daytona API → create workspace (repo URL, branch mission/{issue-identifier}, image)
- Daytona clones repo, checks out branch, boots container with pre-baked OrbitDock image
- Server execs orbitdock start --managed ... inside workspace via Daytona exec API
- Workspace's OrbitDock starts with pre-allocated IDs, creates session, starts agent
- Both agent providers write to local SQLite → SyncWriter replicates to main server
- Agent completes → pushes branch → graceful drain → main server detects completion → updates tracker → destroys workspace
Workspace base image
Pre-bake OrbitDock + agent tooling into a workspace image:
FROM daytonai/workspace:latest
RUN npm install -g @anthropic-ai/claude-code
COPY orbitdock /usr/local/bin/ # single binary handles both agent providers
Configurable in MISSION.md or as a server-level default from init.
MISSION.md Changes
New optional workspace: section:
workspace:
  provider: daytona        # override server default
  image: team/custom:v2    # provider-specific
  retention: destroy       # destroy | keep_duration
  retention_ttl: 3600      # seconds, only if retention=keep_duration
  resources:
    cpu: 4
    memory: 8Gi
  setup_commands:          # run after workspace creation, before agent
    - npm install
    - cargo build
If workspace: is omitted, the server's config table value (set during orbitdock init) is the default. Local-only users don't need to add this section.
Schema Additions
-- V035: Track provisioned workspaces
CREATE TABLE workspaces (
    id TEXT PRIMARY KEY,
    mission_issue_id TEXT REFERENCES mission_issues(id),
    session_id TEXT,                         -- pre-allocated, linked OrbitDock session
    provider TEXT NOT NULL DEFAULT 'local',  -- local | daytona | docker | ssh | future providers
    external_id TEXT,                        -- provider-specific workspace ID
    repo_url TEXT,
    branch TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'creating', -- creating | ready | running | destroying | destroyed | failed
    connection_info TEXT,                    -- JSON blob (provider-specific metadata)
    sync_token TEXT,                         -- auth token ID for /api/sync callback
    sync_acked_through INTEGER DEFAULT 0,    -- last acked sync sequence number
    last_heartbeat_at TEXT,                  -- for stall detection
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    ready_at TEXT,
    destroyed_at TEXT
);
-- Link mission issues to workspaces
ALTER TABLE mission_issues ADD COLUMN workspace_id TEXT REFERENCES workspaces(id);
-- Track sync state for remote workspaces
CREATE TABLE sync_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    workspace_id TEXT NOT NULL REFERENCES workspaces(id),
    sequence INTEGER NOT NULL,   -- monotonic per workspace
    command_json TEXT NOT NULL,  -- serialized SyncCommand
    received_at TEXT NOT NULL DEFAULT (datetime('now')),
    UNIQUE(workspace_id, sequence)
);
The sync_log table on the main server serves as an audit trail and dedup mechanism. Commands are replayed through execute_command() after insertion.
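A minimal in-memory stand-in for that dedup behavior (SyncLog here is illustrative; the real mechanism is the UNIQUE(workspace_id, sequence) constraint, which rejects a second insert of the same pair so a redelivered command is never replayed twice):

```rust
use std::collections::BTreeSet;

// Toy model of sync_log dedup: record() returns true only the first time a
// (workspace_id, sequence) pair is seen, mirroring the UNIQUE constraint.

struct SyncLog {
    seen: BTreeSet<(String, u64)>,
}

impl SyncLog {
    fn new() -> Self {
        SyncLog { seen: BTreeSet::new() }
    }

    /// True → first delivery, replay the command; false → duplicate, skip.
    fn record(&mut self, workspace_id: &str, sequence: u64) -> bool {
        self.seen.insert((workspace_id.to_string(), sequence))
    }
}
```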
Server Startup
# Local provider (default, current behavior)
orbitdock start
# With a specific workspace provider (configured via init or flag)
orbitdock start --workspace-provider daytona
# Or just use whatever init configured
orbitdock start # reads workspace_provider from config table
# Managed mode (inside a remote workspace — not user-facing)
orbitdock start --managed \
  --workspace-id $WORKSPACE_ID \
  --session-id $SESSION_ID \
  --sync-url $SYNC_URL \
  --sync-token $SYNC_TOKEN \
  --provider codex \
  --prompt-file /tmp/mission-prompt.md
The --workspace-provider flag overrides the config table value. If neither is set, defaults to local.
The --managed flag is internal — used when the main server starts OrbitDock inside a remote workspace. It enables the SyncWriter and disables the mission orchestrator (the workspace only runs its assigned session).
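The flag-over-config-over-default precedence can be sketched as (resolve_workspace_provider is a hypothetical helper, not the actual config code):

```rust
// Sketch of workspace-provider resolution: CLI flag beats the config-table
// value, and "local" is the fallback when neither is set.

fn resolve_workspace_provider(flag: Option<&str>, config: Option<&str>) -> String {
    flag.or(config).unwrap_or("local").to_string()
}
```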
Client Impact
Minimal changes to the Swift app:
- provisioning orchestration state — add case provisioning to OrchestrationState enum in MissionIssueItem.swift, add icon in MissionIssueRow.swift, add filter in MissionComponents.swift, add section in MissionOverviewTab.swift
- Remote server URL — settings flow for connecting to a remote server (already supported via multi-endpoint)
- Workspace info display — show provider-specific links instead of filesystem paths for remote workspaces
- Sync status indicator — optional badge showing sync lag for remote sessions
- Everything else is unchanged — same API, same WebSocket events, same mission lifecycle UI. The Swift client doesn't know or care whether a session is local or synced from a remote workspace.
Implementation Plan
Phase 1: Trait Extraction (zero behavior change)
Phase 2: SyncCommand + Serialization
Phase 3: SyncWriter + Spool
Phase 4: /api/sync Endpoint + Schema
Phase 5: orbitdock init + Config
Phase 6: Daytona Provider
Phase 7: Client Updates + Polish
Future Work (separate issues)
Phase 6A: Mission Provider CLI
CLI shape we want to grow toward:
orbitdock mission provider get
orbitdock mission provider set <provider>
orbitdock mission provider config get <key>
orbitdock mission provider config set <key> <value>
orbitdock mission provider test
Why this belongs here:
- remote VM providers only matter for Mission Control dispatch
- orbitdock setup should stay focused on baseline OrbitDock installation/bootstrap
- future remote providers should slot into the same mission-owned surface without Daytona-specific top-level commands
Summary
Status Update (March 27, 2026)
Phases 1 through 5 are now merged in
main. Phase 5 shipped in PR #166 on March 26, 2026, completing the workspace-provider configuration and setup path that the remote-provider work depends on.What landed in Phase 5:
orbitdock init --workspace-provider ...orbitdock start --workspace-provider ...startup override resolutionGET/PUT /api/server/workspace-providerorbitdock config get workspace-providerandorbitdock config set workspace-provider <value>orbitdock setupWhat is still open in this issue:
Status Update (March 26, 2026)
Phases 3 and 4 are now merged. Phase 3 shipped in PR #152 on March 25, 2026, adding the managed-mode
SyncWriter, spool and drain behavior, heartbeat replication, and the CLI/server wiring needed to run a workspace OrbitDock against an upstream control plane. Phase 4 shipped in PR #165 on March 26, 2026, adding the control-plane/api/syncreceiver, workspace sync schema, sequence and dedup handling, heartbeat tracking, andprovisioningstate support in mission orchestration.What landed across Phases 3 and 4:
SyncWriterbatching, spool, retry, heartbeat, and shutdown flowPersistenceWriterfanout into sync replication when--managedis enabledPOST /api/syncwith workspace-token auth, sequence validation, replay, and ack responsesworkspaces,sync_log, andmission_issues.workspace_idpersistence supportprovisioningmission state plumbing on the serverWhat is still open in this issue:
orbitdock initand configuration flow for selecting and storing workspace-provider settingsStatus Update (March 24, 2026)
Phase 1 shipped in PR #150: WorkspaceProvider extraction is now merged.
dispatch_issue()is now an orchestration boundary, and the local workspace lifecycle runs throughLocalWorkspaceProviderwith no intended behavior change.What landed:
WorkspaceProvidertrait + dispatch boundaryLocalWorkspaceProvidercovering local workspace creation, environment setup, session creation, agent launch, and prompt deliveryDispatchRequest/DispatchResulthandoff types for provider-owned executionWhat is still open in this issue:
SyncCommand, spool,/api/sync)orbitdock initand managed-mode config flowStatus Update (March 24, 2026)
Phase 2 is now in PR #151: the sync-command serialization boundary is implemented and ready for review. OrbitDock now has a typed
SyncCommandmirror forPersistCommand, aSyncEnvelopewrapper, bidirectional conversion between persistence and sync commands, and round-trip coverage across every current persistence variant.What landed:
SyncCommandmirror for every current persistence commandPersistCommand→SyncCommandconversion for the full persistence surfaceSyncCommand→PersistCommandconversion with row response channels explicitly dropped on restoreWhat is still open in this issue:
SyncWriterbatching, spool, and retry behavior/api/syncendpoint and sequence handling on the control planeorbitdock initand managed-mode config flowOrbitDock's server is already a portable binary with its own SQLite database. To support workflows where mission control dispatches issues to remote development environments (VMs, containers, remote machines), we need:
orbitdock initcommand — guided onboarding that configures workspace provider, API keys, and tracker credentialsThe server architecture doesn't change — same REST + WebSocket API, same mission orchestrator, same client compatibility. The only things that change are: what happens inside
dispatch_issue(), and a new sync layer that replicates remote workspace state to the control plane.Key Architecture Finding: DB-First Sync via PersistCommand Replication
The Problem
Each mission ticket maps to a remote workspace. The workspace runs a full OrbitDock server with its own local SQLite. When the workspace is destroyed, that data is gone. We need the control plane (main OrbitDock server) to have all session data before the workspace dies.
The Solution: PersistCommand Is Already the Replication Protocol
PersistCommandis a sealed enum of every possible DB mutation (55 variants). Instead of designing a new sync format, we serialize eachPersistCommandafter the local write succeeds and ship it to the main server.Why this works:
hook_forward.rsspool at<data_dir>/spool/)PersistCommandalready represents every mutation. The main server'sexecute_command()already knows how to replay every variant.od-UUIDs for session_id, workspace_id before provisioning, so IDs never collide.PersistCommand Serialization
Current state:
PersistCommandonly derivesDebug. It has 55 variants but only 2 contain non-serializable types:RowAppend { ..., sequence_tx: Option<oneshot::Sender<u64>> }RowUpsert { ..., sequence_tx: Option<oneshot::Sender<u64>> }Approach: Create a
SyncCommandmirror type that'sSerialize + Deserialize, derived fromPersistCommandwith thesequence_txfield stripped. After each local write, if the server is in managed mode, convert thePersistCommand→SyncCommandand enqueue.Spool-on-Failure Pattern
Reuse the existing hook transport spool design from
hook_forward.rs:<data_dir>/sync-spool/{timestamp_ms}-{sequence}.json(enables ordering + uniqueness)PersistenceWriter)auth_tokenssystem (odtk_<id>_<secret>)Main Server
/api/syncEndpointThe endpoint:
SyncCommandback toPersistCommandPersistenceWriterchannelServerMessageevents to connected Swift clientsThis means the Swift app sees remote sessions appear and update in real-time, through the exact same WebSocket path it already uses.
OrbitDock Already Controls Both Agent Providers
OrbitDock is the control plane for both Claude Code and Codex. The
orbitdockbinary already:orbitdock install-hooks --server-url <remote>stores the URL in~/.orbitdock/hook_transport_config.json. In a workspace, hooks point to localhost (the workspace's own OrbitDock), not the main server.codex-coreRust library. No external binary needed — Codex is built into theorbitdockbinary itself.Running OrbitDock inside a remote workspace gives you both agent providers for free. The sync layer handles replication to the control plane uniformly for both.
Relevant code:
crates/server/src/admin/install_hooks.rscrates/server/src/admin/hook_forward.rs(line 288+)crates/connector-claude/src/lib.rs(subprocess spawn)crates/connector-codex/src/lib.rs(embeddedcodex-core, direct Rust calls)crates/server/src/runtime/session_creation.rs(provider dispatch)crates/server/src/infrastructure/persistence/commands.rs(55 variants, ~490 lines)crates/server/src/infrastructure/persistence/writer.rs(~146 lines)crates/server/src/infrastructure/persistence/mod.rs(execute_commandat ~line 189)crates/server/src/infrastructure/auth_tokens.rsDesign
Workspace Provider Trait
A thin abstraction over "where code runs":
LocalWorkspaceProvider— wraps the existing worktree creation + local process spawning logic. Zero behavior change for current users. No sync layer (local provider writes directly to the same DB).DaytonaWorkspaceProvider— calls Daytona REST API to create/manage workspaces. Inside the workspace:codex-core→ local SQLite → sync to control planeCurrent Dispatch Flow (What Changes)
The current
dispatch_issue()inruntime/mission_dispatch.rsdoes:With the workspace provider trait, steps 2-3 and 5-7 become provider-specific:
What stays generic: Provider selection, prompt rendering, state machine transitions, tracker updates, reconciliation loop.
What becomes provider-specific: Workspace creation, environment setup (.mcp.json, hooks), agent process management.
State Machine Addition
Current orchestration states:
queued,claimed,running,retryQueued,completed,failed,blocked.Client shows "Setting up environment..." during
provisioning.Important:
destroyalways waits for the sync queue to drain first (graceful drain). Only stall-timeout triggers a force-destroy.Workspace Lifecycle & Sync Details
Startup (main server provisions workspace)
workspace_id,session_id(bothod-+ UUIDv4)auth_tokenstable)workspacestable (status:creating)mission/{issue-identifier}, image)Steady State (workspace running)
/api/syncevery 100ms (or on batch full)<data_dir>/sync-spool/→ retry on next tickunhealthyafter thresholdShutdown (clean completion)
completedCrash Recovery
stalledafterstall_timeout_secsorbitdock initGuided onboarding command that collects configuration and writes to the encrypted config table. The wizard adapts based on which workspace provider is selected — no separate "mode" concept:
### Provider Differences
|  | Local provider | Remote workspace provider |
| --- | --- | --- |
| Claude auth | existing `~/.claude/session` | `ANTHROPIC_API_KEY` injected into workspace |
| Codex auth | `OPENAI_API_KEY` on host | `OPENAI_API_KEY` injected into workspace |
| Persistence | direct writes to the server DB | replicated via `/api/sync` |

Local provider skips remote provider config and API key prompts — agents piggyback on existing local installs and auth sessions.
Remote providers (Daytona, Docker, SSH, etc.) collect provider-specific connection details and API keys needed to provision workspaces and authenticate agents headlessly.
### Config written by init
Stored in the existing config table with AES-256-GCM encryption (same pattern as `openai_api_key` and `linear_api_key`). `orbitdock init` should be re-runnable, pre-filling existing values. Also support `orbitdock config set`/`get` for scripted/CI setups.

## Daytona Integration
### Infrastructure module
New module at `src/infrastructure/daytona/`.

### Workspace creation + agent start flow
- Create the workspace via the Daytona API (branch `mission/{issue-identifier}`, image)
- Start the agent by running `orbitdock start --managed ...` inside the workspace via the Daytona exec API

### Workspace base image
Pre-bake OrbitDock + agent tooling into a workspace image:
Configurable in MISSION.md or as a server-level default from init.
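A sketch of such an image (entirely illustrative: the base image, package set, and binary path are assumptions, not the actual image definition):

```dockerfile
# Illustrative only: base image, packages, and paths are assumptions.
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y git curl ca-certificates
# Pre-bake the OrbitDock binary + agent tooling so workspace startup
# does not pay a per-mission install cost.
COPY orbitdock /usr/local/bin/orbitdock
```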
## MISSION.md Changes
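A sketch of what the new section could look like. Only the `workspace:` key itself is confirmed; `provider` and `image` are assumed keys, inferred from the provider/image settings described elsewhere in this design:

```yaml
workspace:
  provider: daytona                  # assumed key: which workspace provider to use
  image: orbitdock-workspace:latest  # assumed key: base image for the workspace
```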
New optional `workspace:` section. If `workspace:` is omitted, the server's config table value (set during `orbitdock init`) is the default. Local-only users don't need to add this section.

## Schema Additions
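The dedup/ack contract that this schema supports can be made concrete with a toy in-memory sketch. The real receiver inserts into `sync_log` and replays accepted commands through `execute_command()`; the `SyncReceiver` name and the `u64` sequence type here are invented for illustration:

```rust
use std::collections::HashMap;

// Toy stand-in for sync_log dedup: per workspace, a command is applied
// at most once, and the ack reports the highest contiguous sequence.
struct SyncReceiver {
    acked_through: HashMap<String, u64>, // workspace_id -> last applied sequence
    applied: Vec<(String, u64)>,         // stand-in for execute_command() replay
}

impl SyncReceiver {
    fn new() -> Self {
        Self { acked_through: HashMap::new(), applied: Vec::new() }
    }

    /// Apply one envelope; returns the new acked_through for that workspace.
    fn receive(&mut self, workspace_id: &str, sequence: u64) -> u64 {
        let acked = self.acked_through.entry(workspace_id.to_string()).or_insert(0);
        if sequence == *acked + 1 {
            // New, in-order command: "replay" it and advance the ack.
            self.applied.push((workspace_id.to_string(), sequence));
            *acked = sequence;
        }
        // Duplicates (sequence <= acked) and gaps are not applied; the
        // ack tells the workspace where to resume on its next batch.
        *acked
    }
}
```

Returning the ack even when nothing was applied is what lets a workspace that replayed a spooled (already-seen) batch converge instead of looping.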
The `sync_log` table on the main server serves as an audit trail and dedup mechanism. Commands are replayed through `execute_command()` after insertion.

## Server Startup
The `--workspace-provider` flag overrides the config table value. If neither is set, the provider defaults to `local`.

The `--managed` flag is internal — used when the main server starts OrbitDock inside a remote workspace. It enables the SyncWriter and disables the mission orchestrator (the workspace only runs its assigned session).

## Client Impact
Minimal changes to the Swift app:

- New `provisioning` orchestration state — add `case provisioning` to the `OrchestrationState` enum in `MissionIssueItem.swift`, add an icon in `MissionIssueRow.swift`, add a filter in `MissionComponents.swift`, add a section in `MissionOverviewTab.swift`

## Implementation Plan
### Phase 1: Trait Extraction (zero behavior change)

- `WorkspaceProvider` trait in `domain/workspaces/`
- `LocalWorkspaceProvider` wrapping the existing local dispatch lifecycle
- Refactor `dispatch_issue()` to call the trait instead of direct worktree/session code
- `LocalWorkspaceProvider` as the default — shipped in PR "♻️ Extract WorkspaceProvider trait from dispatch_issue" #150 on March 24, 2026

### Phase 2: SyncCommand + Serialization
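A toy sketch of this phase's `From<&PersistCommand> for Option<SyncCommand>` direction. The variants are invented (the real `PersistCommand` has 55), and plain `std::sync::mpsc` stands in for the tokio `oneshot` senders so the sketch compiles without external crates:

```rust
use std::sync::mpsc;

// Illustrative stand-ins for the real enums.
enum PersistCommand {
    RecordEvent { session_id: String, body: String },
    // Query-style variants carry a live responder and must not replicate.
    FetchEvent { session_id: String, reply: mpsc::Sender<String> },
}

#[derive(Debug, PartialEq)]
enum SyncCommand {
    RecordEvent { session_id: String, body: String },
}

// Write-style commands are mirrored; responder-carrying ones map to None,
// which is what "strip oneshot::Sender fields" amounts to.
impl From<&PersistCommand> for Option<SyncCommand> {
    fn from(cmd: &PersistCommand) -> Self {
        match cmd {
            PersistCommand::RecordEvent { session_id, body } => {
                Some(SyncCommand::RecordEvent {
                    session_id: session_id.clone(),
                    body: body.clone(),
                })
            }
            PersistCommand::FetchEvent { .. } => None,
        }
    }
}
```

The `Option` return makes the replication filter a pure function of the command shape, so the SyncWriter never has to special-case variants itself.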
- `SyncCommand` enum — serializable mirror of `PersistCommand` (strip `oneshot::Sender` fields)
- `From<&PersistCommand> for Option<SyncCommand>` and `From<SyncCommand> for PersistCommand`
- `SyncEnvelope` struct with `sequence`, `workspace_id`, `timestamp`, `command`

### Phase 3: SyncWriter + Spool
- `SyncWriter` — background tokio task that:
  - receives `SyncCommand`s via an mpsc channel
  - batches and POSTs to `/api/sync` on the main server
  - spools failed batches to `<data_dir>/sync-spool/` (reuse the `hook_forward.rs` spool pattern)
- `PersistenceWriter` extended to optionally feed a `SyncWriter` channel after each local write (only when the `--managed` flag is set)
- `SyncWriter::shutdown()` flushes the remaining queue with a 30s timeout

### Phase 4: `/api/sync` Endpoint + Schema

- `workspaces` table (`V038__workspace_sync.sql`)
- `sync_log` table (`V038__workspace_sync.sql`)
- `workspace_id` column on `mission_issues`
- `POST /api/sync` handler:
  - insert into `sync_log` for audit/dedup
  - convert `SyncCommand` → `PersistCommand` and feed through shared persistence semantics
  - respond with `{ "acked_through": sequence }`
- Update `workspaces.last_heartbeat_at` on each sync POST
- Add `provisioning` to the mission issue state machine and mission orchestration plumbing — shipped in PR "✨ Complete issue 23 phase 4 sync receiver" #165 on March 26, 2026

### Phase 5: `orbitdock init` + Config

- `init` command in the CLI (`crates/cli/src/cli.rs` — command already exists as a minimal bootstrap)
- Provider selection wired into `orbitdock setup`; provider-specific remote config remains for the Daytona slice
- `PersistCommand::SetConfig` + encrypted config table for storage
- `orbitdock config set`/`get` for scripted access
- `--workspace-provider` flag on `orbitdock start`
- `--managed` flag on `orbitdock start` (enables SyncWriter and managed sync wiring)
- `GET`/`PUT` `/api/server/workspace-provider` (same pattern as `POST /api/server/openai-key`) — implemented on `feat/issue-23-phase-5-provider-config`

### Phase 6: Daytona Provider
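The spool-and-drain behavior from Phase 3 can be sketched as a synchronous toy. The real `SyncWriter` is a tokio task reusing the `hook_forward.rs` spool pattern; the zero-padded file naming and `.json` extension here are assumptions made so that lexicographic order equals sequence order:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Write a failed batch to the spool dir so the next tick can retry it.
/// Zero-padded names make lexicographic sort == sequence order (assumed).
fn spool_batch(spool_dir: &Path, first_sequence: u64, payload: &str) -> std::io::Result<PathBuf> {
    fs::create_dir_all(spool_dir)?;
    let path = spool_dir.join(format!("{first_sequence:020}.json"));
    fs::write(&path, payload)?;
    Ok(path)
}

/// Drain the spool in sequence order, handing each batch to `send`
/// (the real version POSTs to /api/sync). Sent batches are deleted;
/// on the first failure we stop and keep the rest for the next tick.
fn drain_spool(spool_dir: &Path, mut send: impl FnMut(&str) -> bool) -> std::io::Result<usize> {
    let mut batches: Vec<PathBuf> = match fs::read_dir(spool_dir) {
        Ok(entries) => entries.filter_map(|e| e.ok().map(|d| d.path())).collect(),
        Err(_) => return Ok(0), // no spool dir yet: nothing to drain
    };
    batches.sort();
    let mut sent = 0;
    for path in batches {
        let payload = fs::read_to_string(&path)?;
        if !send(&payload) {
            break; // upstream still down; retry on the next tick
        }
        fs::remove_file(&path)?;
        sent += 1;
    }
    Ok(sent)
}
```

Deleting a spool file only after a successful send is what makes the graceful drain before `destroy` safe: anything not yet acked is still on disk.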
- `DaytonaClient` HTTP client in `infrastructure/daytona/`
- `DaytonaWorkspaceProvider` implementing the trait:
  - `create()` → Daytona API: create workspace with repo + branch + image
  - `start_agent()` → Daytona exec: `orbitdock start --managed ...` with pre-allocated IDs
  - `status()` → check `workspaces.last_heartbeat_at` + Daytona API health
  - `destroy()` → call the workspace's `/shutdown` for graceful drain → Daytona API: delete workspace
- `workspace:` section in the MISSION.md config parser (`domain/mission_control/config.rs`)

### Phase 7: Client Updates + Polish
- Add `case provisioning` to `OrchestrationState`; update `MissionIssueRow`, `MissionComponents`, `MissionOverviewTab`
- `retention` support (destroy immediately vs TTL-based cleanup)

## Future Work (separate issues)
- `keep_until_merged` retention (watch for PR merges to trigger cleanup)

### Phase 6A: Mission Provider CLI
- `orbitdock mission provider get|set` for the default remote workspace provider used by mission dispatch
- `orbitdock mission provider config get|set <key>` for provider-agnostic remote workspace config keys
- `orbitdock mission provider test` to run provider validation/smoke-test checks from the CLI

CLI shape we want to grow toward:
- `orbitdock mission provider get`
- `orbitdock mission provider set <provider>`
- `orbitdock mission provider config get <key>`
- `orbitdock mission provider config set <key> <value>`
- `orbitdock mission provider test`

Why this belongs here:
- `orbitdock setup` should stay focused on baseline OrbitDock installation/bootstrap