🔌 Pluggable workspace providers, orbitdock init, and Daytona VM support #23

@Robdel12

Description

Summary

Status Update (March 27, 2026)

Phases 1 through 5 are now merged into main. Phase 5 shipped in PR #166 on March 26, 2026, completing the workspace-provider configuration and setup path that the remote-provider work depends on.

What landed in Phase 5:

  • orbitdock init --workspace-provider ...
  • orbitdock start --workspace-provider ... startup override resolution
  • GET/PUT /api/server/workspace-provider
  • orbitdock config get workspace-provider and orbitdock config set workspace-provider <value>
  • provider selection wired into orbitdock setup

What is still open in this issue:

  • Daytona and other remote workspace providers
  • mission/workspace lifecycle policy and provider-specific provisioning config
  • Swift/client polish for provisioning and remote workspace UX

Status Update (March 26, 2026)

Phases 3 and 4 are now merged. Phase 3 shipped in PR #152 on March 25, 2026, adding the managed-mode SyncWriter, spool and drain behavior, heartbeat replication, and the CLI/server wiring needed to run a workspace OrbitDock against an upstream control plane. Phase 4 shipped in PR #165 on March 26, 2026, adding the control-plane /api/sync receiver, workspace sync schema, sequence and dedup handling, heartbeat tracking, and provisioning state support in mission orchestration.

What landed across Phases 3 and 4:

  • managed-mode SyncWriter batching, spool, retry, heartbeat, and shutdown flow
  • PersistenceWriter fanout into sync replication when --managed is enabled
  • control-plane POST /api/sync with workspace-token auth, sequence validation, replay, and ack responses
  • workspaces, sync_log, and mission_issues.workspace_id persistence support
  • provisioning mission state plumbing on the server

What is still open in this issue:

  • orbitdock init and configuration flow for selecting and storing workspace-provider settings
  • remote workspace providers such as Daytona
  • client polish for provisioning and remote workspace UX

Status Update (March 24, 2026)

Phase 1 shipped in PR #150: WorkspaceProvider extraction is now merged. dispatch_issue() is now an orchestration boundary, and the local workspace lifecycle runs through LocalWorkspaceProvider with no intended behavior change.

What landed:

  • WorkspaceProvider trait + dispatch boundary
  • LocalWorkspaceProvider covering local workspace creation, environment setup, session creation, agent launch, and prompt delivery
  • DispatchRequest / DispatchResult handoff types for provider-owned execution
  • Extracted pure helpers for branch naming and MCP config generation, plus new unit coverage

What is still open in this issue:

  • sync replication to the control plane (SyncCommand, spool, /api/sync)
  • orbitdock init and managed-mode config flow
  • remote providers like Daytona

Status Update (March 24, 2026)

Phase 2 is now in PR #151: the sync-command serialization boundary is implemented and ready for review. OrbitDock now has a typed SyncCommand mirror for PersistCommand, a SyncEnvelope wrapper, bidirectional conversion between persistence and sync commands, and round-trip coverage across every current persistence variant.

What landed:

  • serializable SyncCommand mirror for every current persistence command
  • sync-only payload mirrors for the boxed persistence params that were not directly serializable
  • PersistCommand → SyncCommand conversion for the full persistence surface
  • SyncCommand → PersistCommand conversion with row response channels explicitly dropped on restore
  • round-trip serialization tests for all current variants plus envelope coverage

What is still open in this issue:

  • SyncWriter batching, spool, and retry behavior
  • /api/sync endpoint and sequence handling on the control plane
  • orbitdock init and managed-mode config flow
  • remote providers like Daytona

OrbitDock's server is already a portable binary with its own SQLite database. To support workflows where mission control dispatches issues to remote development environments (VMs, containers, remote machines), we need:

  1. Pluggable workspace providers — abstract where code execution happens (local worktrees vs remote VMs vs future providers)
  2. DB-first sync to control plane — remote workspaces run a full OrbitDock with local SQLite as source of truth, then replicate to the main server
  3. orbitdock init command — guided onboarding that configures workspace provider, API keys, and tracker credentials
  4. Remote agent execution — run OrbitDock + agents inside provisioned workspaces, synced back to the main OrbitDock server

The server architecture doesn't change — same REST + WebSocket API, same mission orchestrator, same client compatibility. Only two things change: what happens inside dispatch_issue(), and a new sync layer that replicates remote workspace state to the control plane.


Key Architecture Finding: DB-First Sync via PersistCommand Replication

The Problem

Each mission ticket maps to a remote workspace. The workspace runs a full OrbitDock server with its own local SQLite. When the workspace is destroyed, that data is gone. We need the control plane (main OrbitDock server) to have all session data before the workspace dies.

The Solution: PersistCommand Is Already the Replication Protocol

PersistCommand is a sealed enum of every possible DB mutation (55 variants). Instead of designing a new sync format, we serialize each PersistCommand after the local write succeeds and ship it to the main server.

┌───────────────────────────────────────────────────┐
│  Remote Workspace                                 │
│                                                   │
│  codex-core / claude CLI                          │
│       │ events                                    │
│       ▼                                           │
│  SessionActor → PersistenceWriter → SQLite (SoT)  │
│                        │                          │
│                        │ after local write        │
│                        ▼                          │
│                   SyncQueue (on-disk spool)       │
│                        │                          │
└────────────────────────│──────────────────────────┘
                         │ HTTP POST /api/sync
                         ▼
              ┌──────────────────┐
              │   Main Server    │
              │  (control plane) │
              │                  │
              │  /api/sync       │
              │     │            │
              │     ▼            │
              │  SQLite (replica)│──► WebSocket → Swift Client
              └──────────────────┘

Why this works:

  • Local DB is always source of truth — fast writes, resilient to network blips
  • Sync is eventual — if the network drops, commands spool to disk and flush when reconnected (same pattern as hook_forward.rs spool at <data_dir>/spool/)
  • All agent providers use the same path — Claude hooks point to the workspace's own localhost OrbitDock, not the main server. Codex is embedded. Both write to local SQLite, both get synced identically.
  • No new protocol — PersistCommand already represents every mutation. The main server's execute_command() already knows how to replay every variant.
  • Pre-allocated IDs — Main server generates od- UUIDs for session_id, workspace_id before provisioning, so IDs never collide.
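
A minimal sketch of that pre-allocation, assuming the uuid crate (the helper name is illustrative, not OrbitDock's actual API):

// Hypothetical helper: illustrates the od- + UUIDv4 scheme described above.
fn allocate_id() -> String {
    format!("od-{}", uuid::Uuid::new_v4())
}

// The main server allocates both before provisioning, so the workspace and
// control plane agree on identity from the first sync batch:
// let workspace_id = allocate_id();
// let session_id = allocate_id();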

PersistCommand Serialization

Current state: PersistCommand only derives Debug. It has 55 variants but only 2 contain non-serializable types:

  • RowAppend { ..., sequence_tx: Option<oneshot::Sender<u64>> }
  • RowUpsert { ..., sequence_tx: Option<oneshot::Sender<u64>> }

Approach: Create a SyncCommand mirror type that's Serialize + Deserialize, derived from PersistCommand with the sequence_tx field stripped. After each local write, if the server is in managed mode, convert PersistCommand → SyncCommand and enqueue.

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SyncEnvelope {
    pub sequence: u64,          // monotonic, for ordering + dedup
    pub workspace_id: String,   // which workspace sent this
    pub timestamp: String,      // ISO-8601
    pub command: SyncCommand,   // serializable mirror of PersistCommand
}
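
To make the boundary concrete, here is a minimal self-contained sketch of the mirror-and-strip conversion. The variant names are stand-ins for the real 55-variant enum, and the real code exposes this as a From impl rather than a free function:

use serde::{Deserialize, Serialize};
use tokio::sync::oneshot;

// Stand-in for the real PersistCommand (illustrative variants only).
enum PersistCommand {
    SessionStart { session_id: String },
    RowAppend { table: String, row_json: String, sequence_tx: Option<oneshot::Sender<u64>> },
}

// Serializable mirror: same data, response channel stripped.
#[derive(Debug, Clone, Serialize, Deserialize)]
enum SyncCommand {
    SessionStart { session_id: String },
    RowAppend { table: String, row_json: String },
}

fn to_sync(cmd: &PersistCommand) -> SyncCommand {
    match cmd {
        PersistCommand::SessionStart { session_id } =>
            SyncCommand::SessionStart { session_id: session_id.clone() },
        // `..` drops sequence_tx; a local oneshot sender cannot cross the wire.
        PersistCommand::RowAppend { table, row_json, .. } =>
            SyncCommand::RowAppend { table: table.clone(), row_json: row_json.clone() },
    }
}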

Spool-on-Failure Pattern

Reuse the existing hook transport spool design from hook_forward.rs:

  • Location: <data_dir>/sync-spool/
  • Filename: {timestamp_ms}-{sequence}.json (enables ordering + uniqueness)
  • Retry: On each tick, drain spool in order before sending new commands
  • Batching: Group commands into batches (same 50-command / 100ms pattern as PersistenceWriter)
  • Auth: Bearer token via existing auth_tokens system (odtk_<id>_<secret>)
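
A minimal sketch of the spool write and drain-order logic under those conventions (the zero-padding is an assumption added here so a plain filename sort preserves order; the real naming may differ):

use std::{fs, path::{Path, PathBuf}, time::{SystemTime, UNIX_EPOCH}};

// Write one failed batch to the spool as {timestamp_ms}-{sequence}.json.
fn spool_write(spool_dir: &Path, sequence: u64, json: &str) -> std::io::Result<()> {
    let ts_ms = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis();
    fs::create_dir_all(spool_dir)?;
    fs::write(spool_dir.join(format!("{ts_ms:013}-{sequence:020}.json")), json)
}

// Drain order: lexicographic filename sort recovers send order across restarts.
fn spool_drain_order(spool_dir: &Path) -> std::io::Result<Vec<PathBuf>> {
    let mut paths: Vec<PathBuf> = fs::read_dir(spool_dir)?
        .filter_map(|entry| entry.ok().map(|e| e.path()))
        .collect();
    paths.sort();
    Ok(paths)
}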

Main Server /api/sync Endpoint

POST /api/sync
Authorization: Bearer <workspace_callback_token>

Body: { "commands": [SyncEnvelope, ...] }

Response:
  200 — { "acked_through": <sequence> }
  409 — sequence gap detected (client should resend from last acked)
  401 — invalid token

The endpoint:

  1. Validates the bearer token → resolves to a workspace_id
  2. Checks sequence continuity (no gaps)
  3. Converts each SyncCommand back to PersistCommand
  4. Feeds them through the main server's existing PersistenceWriter channel
  5. Broadcasts ServerMessage events to connected Swift clients

This means the Swift app sees remote sessions appear and update in real-time, through the exact same WebSocket path it already uses.
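
A framework-agnostic sketch of the receiver's sequence handling (the SyncResponse type is illustrative; the starting acked_through value would come from workspaces.sync_acked_through):

// Core of the /api/sync receiver: dedup replays, reject gaps, ack progress.
enum SyncResponse {
    Acked { acked_through: u64 },       // maps to 200
    SequenceGap { acked_through: u64 }, // maps to 409; client resends from here
}

fn process_batch(mut acked_through: u64, batch: &[SyncEnvelope]) -> SyncResponse {
    for envelope in batch {
        if envelope.sequence <= acked_through {
            continue; // duplicate: already applied, safe to skip
        }
        if envelope.sequence != acked_through + 1 {
            return SyncResponse::SequenceGap { acked_through };
        }
        // Here the real handler inserts into sync_log, converts back to
        // PersistCommand, and replays through execute_command().
        acked_through = envelope.sequence;
    }
    SyncResponse::Acked { acked_through }
}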


OrbitDock Already Controls Both Agent Providers

OrbitDock is the control plane for both Claude Code and Codex. The orbitdock binary already:

  • Claude Code: Manages via HTTP hooks. Hook transport is already remote-capable — orbitdock install-hooks --server-url <remote> stores the URL in ~/.orbitdock/hook_transport_config.json. In a workspace, hooks point to localhost (the workspace's own OrbitDock), not the main server.
  • Codex: Manages via embedded codex-core Rust library. No external binary needed — Codex is built into the orbitdock binary itself.

Running OrbitDock inside a remote workspace gives you both agent providers for free. The sync layer handles replication to the control plane uniformly for both.

Relevant code:

  • Hook installation: crates/server/src/admin/install_hooks.rs
  • Hook spool (reuse pattern): crates/server/src/admin/hook_forward.rs (line 288+)
  • Claude connector: crates/connector-claude/src/lib.rs (subprocess spawn)
  • Codex connector: crates/connector-codex/src/lib.rs (embedded codex-core, direct Rust calls)
  • Session creation: crates/server/src/runtime/session_creation.rs (provider dispatch)
  • PersistCommand enum: crates/server/src/infrastructure/persistence/commands.rs (55 variants, ~490 lines)
  • PersistenceWriter batching: crates/server/src/infrastructure/persistence/writer.rs (~146 lines)
  • Command dispatcher: crates/server/src/infrastructure/persistence/mod.rs (execute_command at ~line 189)
  • Auth tokens: crates/server/src/infrastructure/auth_tokens.rs

Design

Workspace Provider Trait

A thin abstraction over "where code runs":

pub enum WorkspaceProviderKind {
    Local,    // current behavior — git worktrees + local processes
    Daytona,  // Daytona VMs
    // future: Docker, SSH, Gitpod, Codespaces, etc.
}

#[async_trait]
pub trait WorkspaceProvider: Send + Sync {
    /// Provision an isolated workspace for a mission issue
    async fn create(&self, req: CreateWorkspaceRequest) -> Result<Workspace>;

    /// Run the coding agent inside the workspace
    async fn start_agent(&self, workspace: &Workspace, prompt: &str, settings: &AgentSettings) -> Result<AgentHandle>;

    /// Check workspace health / status
    async fn status(&self, workspace_id: &str) -> Result<WorkspaceStatus>;

    /// Tear down the workspace (flush sync queue first)
    async fn destroy(&self, workspace_id: &str) -> Result<()>;
}

LocalWorkspaceProvider — wraps the existing worktree creation + local process spawning logic. Zero behavior change for current users. No sync layer (local provider writes directly to the same DB).

DaytonaWorkspaceProvider — calls Daytona REST API to create/manage workspaces. Inside the workspace:

  1. OrbitDock binary is pre-installed in the workspace image
  2. OrbitDock starts in managed mode with pre-allocated IDs and sync callback URL
  3. For Claude: hooks point to localhost workspace OrbitDock → local SQLite → sync to control plane
  4. For Codex: embedded codex-core → local SQLite → sync to control plane
  5. Both agent providers follow the exact same data path

Current Dispatch Flow (What Changes)

The current dispatch_issue() in runtime/mission_dispatch.rs does:

1. Update state → "claimed"
2. create_tracked_worktree()           ← LOCAL: git worktree add
3. Write .mcp.json for mission tools   ← LOCAL: filesystem write
4. Render prompt from template
5. prepare_persist_direct_session()    ← LOCAL: session with local cwd
6. launch_prepared_direct_session()    ← LOCAL: spawn local process
7. Send initial prompt via action channel
8. Update state → "running"

With the workspace provider trait, steps 2-3 and 5-7 become provider-specific:

1. Update state → "claimed"
2. provider.create(workspace_req)      ← PROVIDER: create workspace (instant for local, async for remote)
3. Update state → "provisioning"       ← NEW STATE (skipped for local — instant transition)
4. Render prompt from template
5. provider.start_agent(workspace, prompt, settings)  ← PROVIDER: start agent in workspace
6. Update state → "running"

What stays generic: Provider selection, prompt rendering, state machine transitions, tracker updates, reconciliation loop.

What becomes provider-specific: Workspace creation, environment setup (.mcp.json, hooks), agent process management.
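
Put together, the reshaped dispatch path could look roughly like this (set_state, render_prompt, and the request constructor are hypothetical names for existing orchestration helpers):

// Condensed sketch — tracker updates and error handling elided.
async fn dispatch_issue(provider: &dyn WorkspaceProvider, issue: &MissionIssue) -> Result<()> {
    set_state(issue, "claimed").await?;
    let workspace = provider.create(CreateWorkspaceRequest::for_issue(issue)).await?;
    set_state(issue, "provisioning").await?; // instant transition for the local provider
    let prompt = render_prompt(issue)?;
    provider.start_agent(&workspace, &prompt, &issue.agent_settings).await?;
    set_state(issue, "running").await
}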

State Machine Addition

Local provider:    [claimed] → [running]              (instant — no provisioning delay)
Remote provider:   [claimed] → [provisioning] → [running]

Full flow:
[queued]
  │ dispatch
[claimed]
  │ create workspace
[provisioning]        ← NEW (only for non-instant providers)
  │ workspace ready, agent started, sync connected
[running]
  │
  ├─ completed → [completed]  → graceful drain → destroy workspace
  ├─ failed ───→ [failed]     → graceful drain → destroy workspace
  └─ stalled ──→ [stalled]    → force-destroy workspace

Current orchestration states: queued, claimed, running, retryQueued, completed, failed, blocked.

Client shows "Setting up environment..." during provisioning.

Important: destroy always waits for the sync queue to drain first (graceful drain). Only stall-timeout triggers a force-destroy.


Workspace Lifecycle & Sync Details

Startup (main server provisions workspace)

  1. Main server pre-allocates IDs: workspace_id, session_id (both od- + UUIDv4)
  2. Main server generates a scoped auth token for this workspace (stored in auth_tokens table)
  3. Main server creates workspace record in workspaces table (status: creating)
  4. Main server calls provider API → create workspace (repo URL, branch mission/{issue-identifier}, image)
  5. Provider clones repo, boots environment with pre-baked OrbitDock image
  6. Main server execs startup command inside workspace:
    orbitdock start --managed \
      --workspace-id $WORKSPACE_ID \
      --session-id $SESSION_ID \
      --sync-url https://dock.example.com:4000/api/sync \
      --sync-token $CALLBACK_TOKEN \
      --provider codex \  # or claude
      --prompt-file /tmp/mission-prompt.md
  7. Workspace's OrbitDock starts in managed mode:
    • Creates session with the pre-allocated session_id
    • Starts the SyncWriter background task
    • Starts the agent (codex embedded or claude subprocess)
    • Sends initial prompt

Steady State (workspace running)

  • Agent writes events → PersistenceWriter → local SQLite (fast, guaranteed)
  • SyncWriter intercepts each PersistCommand after local write → converts to SyncCommand → enqueues
  • Background task batches and POSTs to main server's /api/sync every 100ms (or on batch full)
  • If POST fails → spool to <data_dir>/sync-spool/ → retry on next tick
  • Heartbeat piggybacked on sync batches (empty batch = heartbeat-only POST)
  • Main server detects missing heartbeats → marks workspace unhealthy after threshold
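
A minimal sketch of that loop, assuming tokio (the flush helper and the 10s idle heartbeat timer are elided; names are illustrative):

use std::time::Duration;
use tokio::{sync::mpsc, time};

async fn sync_writer_loop(mut rx: mpsc::Receiver<SyncEnvelope>) {
    let mut batch: Vec<SyncEnvelope> = Vec::new();
    let mut tick = time::interval(Duration::from_millis(100));
    loop {
        tokio::select! {
            received = rx.recv() => match received {
                Some(envelope) => {
                    batch.push(envelope);
                    if batch.len() >= 50 {
                        flush(&mut batch).await; // batch full: send immediately
                    }
                }
                None => {
                    flush(&mut batch).await; // channel closed: graceful drain, then exit
                    break;
                }
            },
            _ = tick.tick(), if !batch.is_empty() => {
                flush(&mut batch).await; // 100ms tick: send whatever has accumulated
            }
        }
    }
}

async fn flush(batch: &mut Vec<SyncEnvelope>) {
    // POST to /api/sync; on failure, spool to <data_dir>/sync-spool/ (elided).
    batch.clear();
}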

Shutdown (clean completion)

  1. Agent completes → pushes branch → session ends
  2. OrbitDock in workspace flushes remaining sync queue (blocks up to 30s timeout)
  3. Workspace's OrbitDock sends final sync batch with session end command
  4. Workspace's OrbitDock exits cleanly
  5. Main server receives session end via sync → updates tracker → transitions to completed
  6. Main server calls provider API → delete workspace

Crash Recovery

  • If workspace dies unexpectedly, main server detects via missing heartbeats
  • Main server marks session as stalled after stall_timeout_secs
  • Data is preserved up to the last successful sync batch
  • Acceptable data loss: at most one batch window (100ms) of commands
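
A small sketch of the threshold check behind that stall detection (the function name is illustrative; the timeout comes from stall_timeout_secs):

use std::time::{Duration, SystemTime};

// True once the last heartbeat is older than the configured stall timeout.
fn is_stalled(last_heartbeat_at: SystemTime, stall_timeout_secs: u64) -> bool {
    SystemTime::now()
        .duration_since(last_heartbeat_at)
        .map(|age| age > Duration::from_secs(stall_timeout_secs))
        .unwrap_or(false) // heartbeat "in the future" (clock skew): treat as healthy
}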

orbitdock init

Guided onboarding command that collects configuration and writes to the encrypted config table. The wizard adapts based on which workspace provider is selected — no separate "mode" concept:

$ orbitdock init

Workspace Provider
  › Local     — agents run on this machine (default)
    Daytona   — provision Daytona VMs
    Docker    — spin up local containers
    SSH       — run on remote machines via SSH

[Daytona selected]

Daytona Configuration
  API URL:  https://daytona.yourcompany.com
  API Key:  dtn_••••••••

Agent Keys
  These get injected into each workspace as env vars.

  Anthropic API Key (for Claude): sk-ant-••••••••
  OpenAI API Key (for Codex):     sk-••••••••

Issue Tracker
  Linear API Key: lin_api_••••••••

Server bind address: 0.0.0.0:4000

✓ Configuration saved (encrypted)
Run `orbitdock start` to launch.

Provider Differences

                 Local                                            Daytona (or other remote)
Workspace        Local git worktrees                              Remote VMs / containers
Claude auth      User's existing ~/.claude/ session               ANTHROPIC_API_KEY injected into workspace
Codex auth       OPENAI_API_KEY on host                           OPENAI_API_KEY injected into workspace
Agent runtime    OrbitDock on host manages both agent providers   OrbitDock in workspace manages both agent providers
DB sync          N/A (single SQLite)                              Workspace SQLite → control plane via /api/sync
Hook transport   Hooks point to localhost                         Hooks point to workspace's own localhost OrbitDock

Local provider skips remote provider config and API key prompts — agents piggyback on existing local installs and auth sessions.

Remote providers (Daytona, Docker, SSH, etc.) collect provider-specific connection details and API keys needed to provision workspaces and authenticate agents headlessly.

Config written by init

Stored in the existing config table with AES-256-GCM encryption (same pattern as openai_api_key and linear_api_key):

workspace_provider = "local" | "daytona" | "docker" | "ssh" | ...
daytona_api_url    = "enc:..."   (only for daytona provider)
daytona_api_key    = "enc:..."   (only for daytona provider)
anthropic_api_key  = "enc:..."   (only for remote providers)
openai_api_key     = "enc:..."   (already exists)
linear_api_key     = "enc:..."   (already exists)
server_bind        = "0.0.0.0:4000"

orbitdock init should be re-runnable, pre-filling existing values. Also support orbitdock config set/get for scripted/CI setups.


Daytona Integration

Infrastructure module

New module at src/infrastructure/daytona/:

pub struct DaytonaClient {
    http: reqwest::Client,  // already a workspace dependency
    api_url: String,
    api_key: String,
}

impl DaytonaClient {
    async fn create_workspace(&self, req: CreateWorkspaceRequest) -> Result<DaytonaWorkspace>;
    async fn get_workspace(&self, id: &str) -> Result<DaytonaWorkspace>;
    async fn exec(&self, workspace_id: &str, cmd: &[&str], env: &[(&str, &str)]) -> Result<ExecResult>;
    async fn delete_workspace(&self, id: &str) -> Result<()>;
}
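
A sketch of how the provider could map the trait onto this client (the workspace_from/status_from conversions and the AgentHandle constructor are hypothetical; exec arguments are abbreviated):

pub struct DaytonaWorkspaceProvider {
    client: DaytonaClient,
}

#[async_trait]
impl WorkspaceProvider for DaytonaWorkspaceProvider {
    async fn create(&self, req: CreateWorkspaceRequest) -> Result<Workspace> {
        let remote = self.client.create_workspace(req).await?;
        Ok(workspace_from(remote)) // hypothetical DaytonaWorkspace → Workspace mapping
    }

    async fn start_agent(&self, ws: &Workspace, prompt: &str, settings: &AgentSettings) -> Result<AgentHandle> {
        // Real args carry pre-allocated IDs, sync URL/token, provider, prompt file.
        self.client.exec(&ws.id, &["orbitdock", "start", "--managed"], &[]).await?;
        Ok(AgentHandle::remote(&ws.id)) // hypothetical constructor
    }

    async fn status(&self, workspace_id: &str) -> Result<WorkspaceStatus> {
        let remote = self.client.get_workspace(workspace_id).await?;
        Ok(status_from(remote)) // heartbeat freshness is checked server-side
    }

    async fn destroy(&self, workspace_id: &str) -> Result<()> {
        // A graceful sync drain is requested first (elided), then delete.
        self.client.delete_workspace(workspace_id).await
    }
}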

Workspace creation + agent start flow

  1. Server calls Daytona API → create workspace (repo URL, branch mission/{issue-identifier}, image)
  2. Daytona clones repo, checks out branch, boots container with pre-baked OrbitDock image
  3. Server execs orbitdock start --managed ... inside workspace via Daytona exec API
  4. Workspace's OrbitDock starts with pre-allocated IDs, creates session, starts agent
  5. Both agent providers write to local SQLite → SyncWriter replicates to main server
  6. Agent completes → pushes branch → graceful drain → main server detects completion → updates tracker → destroys workspace

Workspace base image

Pre-bake OrbitDock + agent tooling into a workspace image:

FROM daytonai/workspace:latest
RUN npm install -g @anthropic-ai/claude-code
COPY orbitdock /usr/local/bin/   # single binary handles both agent providers

Configurable in MISSION.md or as a server-level default from init.


MISSION.md Changes

New optional workspace: section:

workspace:
  provider: daytona           # override server default
  image: team/custom:v2       # provider-specific
  retention: destroy          # destroy | keep_duration
  retention_ttl: 3600         # seconds, only if retention=keep_duration
  resources:
    cpu: 4
    memory: 8Gi
  setup_commands:              # run after workspace creation, before agent
    - npm install
    - cargo build

If workspace: is omitted, the server's config table value (set during orbitdock init) is the default. Local-only users don't need to add this section.
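
A minimal serde sketch of that shape (field names mirror the YAML above; the struct and default-helper names are illustrative):

use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct WorkspaceConfig {
    pub provider: Option<String>,   // falls back to the server-level default
    pub image: Option<String>,      // provider-specific
    #[serde(default = "default_retention")]
    pub retention: String,          // "destroy" | "keep_duration"
    pub retention_ttl: Option<u64>, // seconds, only with keep_duration
    pub resources: Option<Resources>,
    #[serde(default)]
    pub setup_commands: Vec<String>,
}

#[derive(Debug, Deserialize)]
pub struct Resources {
    pub cpu: Option<u32>,
    pub memory: Option<String>,     // e.g. "8Gi"
}

fn default_retention() -> String {
    "destroy".into()
}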


Schema Additions

-- V038: Track provisioned workspaces
CREATE TABLE workspaces (
    id TEXT PRIMARY KEY,
    mission_issue_id TEXT REFERENCES mission_issues(id),
    session_id TEXT,                          -- pre-allocated, linked OrbitDock session
    provider TEXT NOT NULL DEFAULT 'local',   -- local | daytona | docker | ssh | future providers
    external_id TEXT,                         -- provider-specific workspace ID
    repo_url TEXT,
    branch TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'creating',  -- creating | ready | running | destroying | destroyed | failed
    connection_info TEXT,                     -- JSON blob (provider-specific metadata)
    sync_token TEXT,                          -- auth token ID for /api/sync callback
    sync_acked_through INTEGER DEFAULT 0,    -- last acked sync sequence number
    last_heartbeat_at TEXT,                   -- for stall detection
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    ready_at TEXT,
    destroyed_at TEXT
);

-- Link mission issues to workspaces
ALTER TABLE mission_issues ADD COLUMN workspace_id TEXT REFERENCES workspaces(id);

-- Track sync state for remote workspaces
CREATE TABLE sync_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    workspace_id TEXT NOT NULL REFERENCES workspaces(id),
    sequence INTEGER NOT NULL,               -- monotonic per workspace
    command_json TEXT NOT NULL,               -- serialized SyncCommand
    received_at TEXT NOT NULL DEFAULT (datetime('now')),
    UNIQUE(workspace_id, sequence)
);

The sync_log table on the main server serves as an audit trail and dedup mechanism. Commands are replayed through execute_command() after insertion.
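
A sketch of that dedup step, assuming a rusqlite-style connection on the main server (the helper name is illustrative):

// INSERT OR IGNORE + the UNIQUE(workspace_id, sequence) constraint makes
// replayed batches idempotent: a duplicate row inserts nothing.
fn record_sync(conn: &rusqlite::Connection, workspace_id: &str, sequence: i64, command_json: &str) -> rusqlite::Result<bool> {
    let inserted = conn.execute(
        "INSERT OR IGNORE INTO sync_log (workspace_id, sequence, command_json) VALUES (?1, ?2, ?3)",
        rusqlite::params![workspace_id, sequence, command_json],
    )?;
    Ok(inserted == 1) // false → duplicate; skip the execute_command() replay
}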


Server Startup

# Local provider (default, current behavior)
orbitdock start

# With a specific workspace provider (configured via init or flag)
orbitdock start --workspace-provider daytona

# Or just use whatever init configured
orbitdock start  # reads workspace_provider from config table

# Managed mode (inside a remote workspace — not user-facing)
orbitdock start --managed \
  --workspace-id $WORKSPACE_ID \
  --session-id $SESSION_ID \
  --sync-url $SYNC_URL \
  --sync-token $SYNC_TOKEN \
  --provider codex \
  --prompt-file /tmp/mission-prompt.md

The --workspace-provider flag overrides the config table value. If neither is set, defaults to local.

The --managed flag is internal — used when the main server starts OrbitDock inside a remote workspace. It enables the SyncWriter and disables the mission orchestrator (the workspace only runs its assigned session).
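
The override order is small enough to sketch directly (the function name is illustrative):

// CLI flag wins, then the config table value, then the "local" default.
fn resolve_workspace_provider(flag: Option<String>, config: Option<String>) -> String {
    flag.or(config).unwrap_or_else(|| "local".to_string())
}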


Client Impact

Minimal changes to the Swift app:

  1. provisioning orchestration state — add case provisioning to OrchestrationState enum in MissionIssueItem.swift, add icon in MissionIssueRow.swift, add filter in MissionComponents.swift, add section in MissionOverviewTab.swift
  2. Remote server URL — settings flow for connecting to a remote server (already supported via multi-endpoint)
  3. Workspace info display — show provider-specific links instead of filesystem paths for remote workspaces
  4. Sync status indicator — optional badge showing sync lag for remote sessions
  5. Everything else is unchanged — same API, same WebSocket events, same mission lifecycle UI. The Swift client doesn't know or care whether a session is local or synced from a remote workspace.

Implementation Plan

Phase 1: Trait Extraction (zero behavior change)

  • Define WorkspaceProvider trait in domain/workspaces/
  • Implement LocalWorkspaceProvider wrapping the existing local dispatch lifecycle
  • Refactor dispatch_issue() to call the trait instead of direct worktree/session code
  • Wire LocalWorkspaceProvider as the default — shipped in PR #150 (♻️ Extract WorkspaceProvider trait from dispatch_issue) on March 24, 2026
  • Add coverage for extracted helper logic (branch naming and MCP config generation)

Phase 2: SyncCommand + Serialization

  • Create SyncCommand enum — serializable mirror of PersistCommand (strip oneshot::Sender fields)
  • Implement From<&PersistCommand> for Option<SyncCommand> and From<SyncCommand> for PersistCommand
  • Create SyncEnvelope struct with sequence, workspace_id, timestamp, command
  • Add the serialization boundary and sync-only payload mirrors needed for the current persistence surface
  • Add round-trip serialization coverage for every current SyncCommand variant — shipped in PR #151 (✨ Add Phase 2 sync command serialization) on March 24, 2026

Phase 3: SyncWriter + Spool

  • Implement SyncWriter — background tokio task that:
    • Receives SyncCommands via mpsc channel
    • Batches them (50 commands / 100ms, same as PersistenceWriter)
    • POSTs to /api/sync on the main server
    • On failure: spools to <data_dir>/sync-spool/ (reuse hook_forward.rs spool pattern)
    • On reconnect: drains spool in order before sending new commands
    • Sends heartbeat if no commands for 10s
  • Modify PersistenceWriter to optionally feed a SyncWriter channel after each local write (only when --managed flag is set)
  • Implement graceful drain: SyncWriter::shutdown() flushes remaining queue with 30s timeout
  • Integration coverage for managed sync batching, spool, and replay behavior — shipped in PR #152 (✨ Complete phase 3 sync writer) on March 25, 2026

Phase 4: /api/sync Endpoint + Schema

  • Add workspaces table (V038__workspace_sync.sql)
  • Add sync_log table (V038__workspace_sync.sql)
  • Add workspace_id column to mission_issues
  • Implement POST /api/sync handler:
    • Validate bearer token → resolve workspace_id
    • Check sequence continuity
    • Insert into sync_log for audit/dedup
    • Convert SyncCommand → PersistCommand and feed through shared persistence semantics
    • Broadcast refreshes/deltas to connected clients
    • Return { "acked_through": sequence }
  • Implement heartbeat tracking: update workspaces.last_heartbeat_at on each sync POST
  • Add provisioning to mission issue state machine and mission orchestration plumbing — shipped in PR #165 (✨ Complete issue 23 phase 4 sync receiver) on March 26, 2026

Phase 5: orbitdock init + Config

  • Expand init command in CLI (crates/cli/src/cli.rs — command already exists as minimal bootstrap)
  • Guided interactive flow: workspace provider selection is wired into orbitdock setup; provider-specific remote config remains for the Daytona slice
  • Store config via existing PersistCommand::SetConfig + encrypted config table
  • Add orbitdock config set/get for scripted access
  • --workspace-provider flag on orbitdock start
  • --managed flag on orbitdock start (enables SyncWriter and managed sync wiring)
  • REST endpoints for workspace provider config (same pattern as POST /api/server/openai-key) — implemented on feat/issue-23-phase-5-provider-config

Phase 6: Daytona Provider

  • DaytonaClient HTTP client in infrastructure/daytona/
  • DaytonaWorkspaceProvider implementing the trait:
    • create() → Daytona API: create workspace with repo + branch + image
    • start_agent() → Daytona exec: orbitdock start --managed ... with pre-allocated IDs
    • status() → check workspaces.last_heartbeat_at + Daytona API health
    • destroy() → call workspace's /shutdown for graceful drain → Daytona API: delete workspace
  • Pre-allocate session_id + workspace_id + sync auth token before provisioning
  • Workspace lifecycle management (create → monitor heartbeats → graceful destroy on session end)
  • workspace: section in MISSION.md config parser (domain/mission_control/config.rs)
  • Workspace cleanup on stall/failure (graceful drain attempt, then force-destroy)

Phase 7: Client Updates + Polish

  • Swift: add case provisioning to OrchestrationState, update MissionIssueRow, MissionComponents, MissionOverviewTab
  • Provisioning progress display ("Setting up environment...")
  • Workspace info in issue detail (provider links vs local paths)
  • Error UX for provisioning failures (surface provider API errors, not generic "failed")
  • Optional sync status indicator for remote sessions
  • retention support (destroy immediately vs TTL-based cleanup)

Future Work (separate issues)

  • Additional providers (Docker, SSH, Gitpod, Codespaces)
  • Workspace caching/reuse (avoid fresh clone for every issue)
  • keep_until_merged retention (watch for PR merges to trigger cleanup)
  • Bidirectional sync (control plane → workspace for config changes mid-session)
  • Sync compression (gzip batches for bandwidth-constrained environments)

Phase 6A: Mission Provider CLI

  • Add a mission-owned provider CLI surface instead of expanding top-level setup/server commands
  • orbitdock mission provider get|set for the default remote workspace provider used by mission dispatch
  • orbitdock mission provider config get|set <key> for provider-agnostic remote workspace config keys
  • orbitdock mission provider test to run provider validation/smoke-test checks from the CLI
  • Keep provider specifics behind the mission/provider abstraction so Daytona is only one backend implementation

CLI shape we want to grow toward:

  • orbitdock mission provider get
  • orbitdock mission provider set <provider>
  • orbitdock mission provider config get <key>
  • orbitdock mission provider config set <key> <value>
  • orbitdock mission provider test

Why this belongs here:

  • remote VM providers only matter for Mission Control dispatch
  • orbitdock setup should stay focused on baseline OrbitDock installation/bootstrap
  • future remote providers should slot into the same mission-owned surface without Daytona-specific top-level commands
