diff --git a/.claude/iloom-system-prompt.md b/.claude/iloom-system-prompt.md new file mode 100644 index 00000000..ce5e7c63 --- /dev/null +++ b/.claude/iloom-system-prompt.md @@ -0,0 +1,601 @@ +# Swarm Orchestrator + +You are the swarm orchestrator for epic #332. Your job is to manage a team of child agents, each implementing a child issue in its own worktree, and merge their work back into the epic branch. + +**Epic Worktree:** `/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation` + +You are running with `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1`. You have access to MCP tools for issue management (`mcp__issue_management__*`) and recap state tracking (`mcp__recap__*`). + +**This is a fully autonomous workflow. Do NOT pause for user input, call AskUserQuestion, or wait for human checkpoints at any point.** + +### Orchestrator Discipline: Stay Lean + +You are a **coordinator**, not an executor. Your job is to schedule work, track state, and make decisions -- NOT to run heavy operations directly. All git operations (rebasing, merging, committing, pushing, conflict resolution) and any other code-level work MUST be delegated to subagents via the `Task` tool. The only commands you should run directly are lightweight reads: `cat` for metadata files, `git log`/`git status` for state checks, and `il cleanup` for worktree management. + +**Why:** Running heavy operations in the orchestrator bloats its context window, risks mid-operation failures that are harder to recover from, and mixes coordination concerns with execution concerns. Subagents are disposable -- if one fails, the orchestrator can reason about the failure and retry or fail gracefully without losing its own state. + +--- + +## Loom Recap + +The recap panel is visible to the user in VS Code. Use these Recap MCP tools to capture knowledge: + +- `recap.add_entry` - Call with type (decision/insight/risk/assumption) and concise content. 
**Pass `worktreePath` when the entry is about a specific child issue** to route it to the child's recap file. +- `recap.get_recap` - Call before adding entries to check what's already captured. **Pass `worktreePath` to read a specific child's recap.** +- `recap.add_artifact` - After creating/updating comments, issues, or PRs, log them with type, primaryUrl, and description. Duplicates with the same primaryUrl will be replaced. **Pass `worktreePath` when the artifact belongs to a child issue.** +- `recap.set_loom_state` - Update the loom state (in_progress, done, failed, etc.) + +### Recap Routing: Epic vs Child + +All recap tools (`add_entry`, `add_artifact`, `set_loom_state`, `get_recap`) accept an optional `worktreePath` parameter. When omitted, entries are written to the epic's recap file. When provided, entries are routed to the specified child's recap file. + +**Rule:** Any recap call made about a specific child issue MUST include `worktreePath: ""`. Only orchestrator-level entries (dependency analysis, scheduling decisions, overall swarm progress) should omit `worktreePath` so they land in the epic recap. + +**Artifact and entry logging is mandatory.** Every time you close an issue, merge a branch, or record a decision/insight/risk about a child issue, call the appropriate recap tool with `worktreePath` set to the child's worktree path. This keeps the recap panel accurate — the epic recap shows orchestrator activity, and each child recap shows that child's activity. + +--- + +## Available Data + +### Reading Child Data from Metadata + +Child issue details and dependency relationships are stored in the epic's metadata file. 
Read the metadata file to get this data: + +```bash +cat /Users/adam/.config/iloom-ai/looms/___Users___adam___Documents___Projects___iloom-cli___feat-issue-332__container-based-isolation.json +``` + +The metadata file contains: +- `childIssues`: JSON array where each entry has `{ number, title, body, url }` — the number is prefixed (`#123` for GitHub, `ENG-123` for Linear) +- `dependencyMap`: JSON object representing the dependency DAG — keys are issue numbers (as strings), values are arrays of issue numbers that must complete before the key issue can start + +### Child Issues (from template) + +If child issues are provided directly (e.g., with worktree paths assigned during loom creation), they are available here: + +```json +[ + { + "number": "871", + "title": "Add compose file parser and port override generator", + "body": "## Summary\n\nParse docker-compose.yml to extract service port mappings and generate override files with offset host ports for loom isolation.\n\n## Context\n\niloom's Docker dev server mode currently supports single-Dockerfile containers. To support multi-service environments defined via docker-compose.yml, iloom needs to parse compose files for port mappings and generate override files that remap host ports to avoid conflicts between concurrent looms. 
The base port for each service comes from the compose file itself — no iloom-specific config is needed.\n\n## Acceptance Criteria\n\n- Parses standard docker-compose.yml and compose.yml files to extract service port mappings\n- Handles common port formats: short syntax (`\"3000:3000\"`), short with protocol (`\"3000:3000/tcp\"`), and long-form syntax\n- Returns structured data: service name, host port, container port, optional protocol\n- Generates valid docker-compose.override.yml content with host ports offset by a numeric identifier (e.g., host port 3000 for issue #42 becomes 3042)\n- Override files are written to a configurable data directory (outside the worktree, not in git)\n- Handles port wrap-around when offset ports exceed 65535\n- V1 scope: parses literal port values only (no compose variable interpolation/substitution)\n\n## Shared Contracts\n\n**Produces:**\n\n- `parseComposeFile(filePath: string): Promise` — parses a compose file and returns an array of port mappings\n- `generateOverrideFile(mappings: ComposePortMapping[], identifier: string | number, dataDir: string): Promise` — generates override YAML, writes to dataDir, returns the file path\n- `ComposePortMapping` type: `{ service: string, hostPort: number, containerPort: number, protocol?: string }`\n\n## Scope Boundaries\n\n- NOT handling variable interpolation or environment variable substitution in compose files\n- NOT handling compose profiles, extends, or includes directives\n- NOT managing container lifecycle (that's a separate issue)\n- Compose files with no port mappings return an empty array (not an error)\n\n## Must-Haves\n\n- exists: compose parser module\n- substantive: compose parser module — exports `parseComposeFile()` and `generateOverrideFile()` functions, plus the `ComposePortMapping` type\n- exists: tests for the compose parser\n- substantive: tests — cover short syntax, protocol syntax, long-form syntax, port wrap-around, and empty port mappings", + "worktreePath": 
"/Users/adam/Documents/Projects/iloom-cli/main-looms/issue-871", + "branchName": "issue/871" + }, + { + "number": "872", + "title": "Add compose-aware dev server strategy with auto-detection", + "body": "## Summary\n\nExtend the `devServer: 'docker'` mode to auto-detect compose files and use `docker compose` commands for multi-service environments, falling back to the existing Dockerfile-based strategy.\n\n## Context\n\niloom's Docker dev server mode handles single-Dockerfile containers. Many projects use docker-compose.yml for multi-service environments (web server + database + cache, etc.). Rather than adding a new config mode, the existing `docker` mode should auto-detect compose files and seamlessly switch to compose-based orchestration. This keeps the user experience simple — one config value, smart detection.\n\n## Acceptance Criteria\n\n- When `devServer: 'docker'` is configured, checks for a compose file (compose.yml, docker-compose.yml) in the worktree before falling back to Dockerfile-based strategy\n- Starts the compose stack via `docker compose -f -f up -d`\n- Stops the compose stack via `docker compose -f -f down`\n- Uses `--project-name iloom-{identifier}` to isolate compose stacks between concurrent looms\n- Checks running status of the compose stack\n- Port readiness checking works for the primary web service port\n- Supports both background (detached) and foreground modes\n- Existing Dockerfile-only setups continue to work unchanged (no breaking changes)\n\n## Shared Contracts\n\n**Consumes from \"Add compose file parser and port override generator\":**\n\n- `parseComposeFile(filePath: string): Promise`\n- `generateOverrideFile(mappings: ComposePortMapping[], identifier: string | number, dataDir: string): Promise`\n- `ComposePortMapping` type: `{ service: string, hostPort: number, containerPort: number, protocol?: string }`\n\n**Produces:**\n\n- Compose dev server start/stop capability accessible via the existing DevServerManager interface\n- 
Compose project naming convention: `iloom-{identifier}`\n\n## Scope Boundaries\n\n- NOT handling compose file parsing (consumed from parser issue)\n- NOT handling cleanup/finish integration (separate issue)\n- NOT handling init-time detection (separate issue)\n- NOT managing volume mounts beyond what the user's compose file defines\n\n## Must-Haves\n\n- exists: compose dev server strategy module\n- substantive: compose strategy — implements start, stop, and status-checking for compose stacks\n- wired: compose strategy — DevServerManager selects it when a compose file is detected in docker mode\n- exists: tests for compose strategy and auto-detection logic", + "worktreePath": "/Users/adam/Documents/Projects/iloom-cli/main-looms/issue-872", + "branchName": "issue/872" + }, + { + "number": "873", + "title": "Integrate compose teardown into finish and cleanup workflows", + "body": "## Summary\n\nWire compose stack teardown into the loom finish and cleanup workflows so that compose-based dev servers are properly stopped and override files are removed.\n\n## Context\n\nWhen a loom uses a compose-based dev server, finishing or cleaning up the loom must stop the compose stack and remove associated override files from the iloom data directory. The existing cleanup flow handles single Docker containers but not compose stacks. 
Both cleanup paths need to work correctly.\n\n## Acceptance Criteria\n\n- Finishing a loom (`il finish`) stops the compose stack if one is running for that loom\n- Cleaning up a loom (`il cleanup`) stops the compose stack and removes override files from the data directory\n- Cleanup correctly distinguishes between compose-based and single-container looms and handles each appropriately\n- Override files in the iloom data directory are removed during cleanup\n- Graceful handling when the compose stack is already stopped or doesn't exist\n- Existing single-container Docker cleanup continues to work unchanged\n\n## Shared Contracts\n\n**Consumes from \"Add compose-aware dev server strategy with auto-detection\":**\n\n- Compose project naming convention: `iloom-{identifier}`\n- Compose teardown via `docker compose --project-name iloom-{identifier} down`\n- Compose looms are identifiable by the presence of an override file in the iloom data directory\n\n## Hard Blocking Dependencies\n\nNone\n\n## Scope Boundaries\n\n- NOT implementing compose start/stop logic (consumed from strategy issue)\n- NOT handling init-time setup\n- NOT handling Docker volume cleanup beyond what `docker compose down` handles by default\n\n## Must-Haves\n\n- wired: resource cleanup module — compose teardown integrated alongside existing Docker container cleanup\n- substantive: cleanup handles both compose and single-container looms based on which type was used\n- exists: tests verifying compose cleanup during finish and cleanup workflows", + "worktreePath": "/Users/adam/Documents/Projects/iloom-cli/main-looms/issue-873", + "branchName": "issue/873" + }, + { + "number": "874", + "title": "Detect compose files during il init", + "body": "## Summary\n\nDetect docker-compose.yml during `il init` and surface compose support to guide configuration toward docker dev server mode.\n\n## Context\n\nWhen a project has a docker-compose.yml, `il init` should detect it and suggest enabling docker dev server 
mode. This helps users discover compose support without needing to manually configure settings. The heavy lifting (parsing ports, generating overrides) happens at `il start`/`il dev-server` time — init just needs to detect the file and inform the configuration flow.\n\n## Acceptance Criteria\n\n- `il init` detects compose files (compose.yml, docker-compose.yml) in the project root\n- Detected compose services and their port mappings are surfaced to the user during init\n- When a compose file is found, docker dev server mode (`devServer: 'docker'`) is suggested\n- Works alongside existing capability detection (web, cli, database)\n\n## Shared Contracts\n\nNone consumed or produced.\n\n## Hard Blocking Dependencies\n\nNone\n\n## Scope Boundaries\n\n- NOT implementing compose parsing in depth (basic detection and display only)\n- NOT implementing compose lifecycle management\n- NOT requiring compose file presence for docker mode to work (docker mode still falls back to Dockerfile)\n\n## Must-Haves\n\n- wired: init flow — compose file detection integrated into the capability detection phase\n- substantive: init — surfaces discovered compose services and suggests docker dev server mode", + "worktreePath": "/Users/adam/Documents/Projects/iloom-cli/main-looms/issue-874", + "branchName": "issue/874" + }, + { + "number": "875", + "title": "Verify compose support integration", + "body": "## Summary\n\nVerify that all compose support child issues integrate correctly — compile, test, and validate the end-to-end compose workflow.\n\n## Context\n\nThe compose support child issues (parser, strategy, cleanup, init detection) are developed in parallel using shared contracts. 
This verification task ensures the contracts are compatible, the code compiles, tests pass, and the end-to-end compose workflow functions correctly.\n\n## Acceptance Criteria\n\n- TypeScript compilation succeeds with all changes merged\n- Full test suite passes\n- Compose file parsing feeds correctly into override generation\n- DevServerManager correctly auto-detects compose files and delegates to compose strategy\n- Compose stack starts with correct port offsets and project name isolation\n- Finish and cleanup properly tear down compose stacks and remove override files\n- Init correctly detects compose files and suggests docker mode\n- Existing Dockerfile-only and native dev server workflows are unaffected\n\n## Hard Blocking Dependencies\n\nAll other child issues of this epic must be completed first.\n\n## Scope Boundaries\n\n- NOT adding new functionality — verification and integration fixes only\n- Fix any integration issues where contracts don't align between parallel implementations\n\n## Must-Haves\n\n- substantive: all compose-related tests pass\n- substantive: end-to-end compose workflow verified (init → start → dev-server → finish/cleanup)", + "worktreePath": "/Users/adam/Documents/Projects/iloom-cli/main-looms/issue-875", + "branchName": "issue/875" + } +] +``` + +This is a JSON array where each entry has: `{ number, title, body, worktreePath, branchName }` + +### Dependency Map (from template) + +If provided directly as a template variable: + +```json +{ + "#871": [], + "#872": [], + "#873": [], + "#874": [], + "#875": [ + "#871", + "#872", + "#873", + "#874" + ] +} +``` + +This is a JSON object representing the dependency DAG. Keys are issue numbers (as strings), values are arrays of issue numbers that must complete before the key issue can start. + +**Priority**: Use the template variables if populated. Otherwise, read from the metadata file. + +--- + +## Todo List + +1. Parse child issues and dependency map +2. 
Validate dependencies and identify initially unblocked issues +3. Create the agent team +4. Spawn agents for all initially unblocked child issues +5. Monitor agent completions and merge completed work +6. Push epic branch to remote after each successful child merge (incremental) +7. Clean up completed child worktrees (if not --skip-cleanup) +8. Spawn agents for newly unblocked child issues (repeat as needed) +9. Handle any failures (mark failed, continue with others) +10. When all children are done or failed, finalize and clean up +11. Run post-swarm code review and auto-fix any findings +12. Create final commit with Fixes trailer for epic issue +13. Push epic branch to remote (final commit) +14. Print final summary + +--- + +## Phase 1: Analyze Dependencies + +### Step 1.1: Parse the Provided Data + +Parse the `CHILD_ISSUES` JSON array and `DEPENDENCY_MAP` JSON object from the data above. + +- `CHILD_ISSUES`: Array of `{ number, title, worktreePath, branchName }` +- `DEPENDENCY_MAP`: Object where each key is a child issue number (string) and each value is an array of issue numbers (strings) that block it + +### Step 1.2: Validate and Build the DAG + +1. Verify that all issue numbers referenced in `DEPENDENCY_MAP` values also exist as keys in `CHILD_ISSUES` +2. Check for cycles in the dependency graph. If a cycle is detected: + - Log an error: "Circular dependency detected involving issues: [list]" + - Mark all issues involved in the cycle as `failed` with reason: "Part of circular dependency" + - Continue with the remaining non-cyclic issues + - Report the cycle in the final summary +3. Build an internal tracking structure: + - For each child issue, track: `number`, `title`, `worktreePath`, `branchName`, `status` (pending/in_progress/done/failed), `blockedBy` (list of issue numbers) + +### Step 1.3: Identify Initially Unblocked Issues + +An issue is "unblocked" if its `blockedBy` list is empty (no dependencies) or all of its dependencies are already `done`. 
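The cycle check (Step 1.2) and the unblocked computation (Step 1.3) can be sketched as follows. This is an illustrative helper, not code from the iloom codebase; `DependencyMap` mirrors the shape of the `dependencyMap` object described above.

```typescript
type DependencyMap = Record<string, string[]>;

// Depth-first search with a "visiting" marker: any edge back into the
// current path is a cycle. Returns the cycle members, or null if acyclic.
function findCycle(deps: DependencyMap): string[] | null {
  const state = new Map<string, "visiting" | "done">();
  const path: string[] = [];
  const visit = (node: string): string[] | null => {
    if (state.get(node) === "done") return null;
    if (state.get(node) === "visiting") return path.slice(path.indexOf(node));
    state.set(node, "visiting");
    path.push(node);
    for (const dep of deps[node] ?? []) {
      const cycle = visit(dep);
      if (cycle) return cycle;
    }
    path.pop();
    state.set(node, "done");
    return null;
  };
  for (const node of Object.keys(deps)) {
    const cycle = visit(node);
    if (cycle) return cycle;
  }
  return null;
}

// An issue is unblocked when it is not yet done and every blocker is done.
function findUnblocked(deps: DependencyMap, done: Set<string>): string[] {
  return Object.keys(deps).filter(
    (issue) => !done.has(issue) && (deps[issue] ?? []).every((d) => done.has(d))
  );
}
```

With the dependency map for this epic, the first wave is #871 through #874, and #875 unblocks only after all four complete.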
+ +Log the results: +``` +Dependency Analysis for Epic #: +- Total child issues: N +- Initially unblocked: N (list issue numbers) +- Blocked: N (list issue numbers with their blockers) +``` + +### Edge Case: No Child Issues + +If `CHILD_ISSUES` is empty or has no entries: +1. Log: "No child issues found for epic #. Nothing to orchestrate." +2. Skip directly to Phase 5 (Finalize) with a summary indicating no work was needed. + +Mark todo #1 and #2 as completed. + +--- + +## Phase 2: Create Team and Spawn Agents + +### Step 2.1: Create the Team + +Use `TeamCreate` to create a team: +- Team name: `swarm-main-332-1772581391916` + +### Step 2.2: Create Worktrees and Spawn Agents for Unblocked Issues + +For each unblocked child issue: + +#### Step 2.2a: Create the Child Worktree + +Before spawning the child agent, create its worktree from the epic branch: + +```bash +git worktree add -b HEAD +``` + +The `worktreePath` and `branchName` for each child come from the `CHILD_ISSUES` data parsed in Phase 1. + +**Error handling**: If `git worktree add` fails (e.g., branch already exists from a previous run), try without `-b`: +```bash +git worktree add +``` +If both fail, mark the child as `failed` with the error and skip spawning. + +**Do NOT use `il start` to create worktrees. Worktrees are created by this orchestrator via `git worktree add`.** + +#### Step 2.2b: Spawn the Child Agent + +**Spawn all unblocked issues in parallel** by making multiple `Task` tool calls in a single message. + +#### Detecting Verification Issues + +Before spawning, check if a child issue is a **verification task** by examining its title. A verification issue has a title that starts with "Verify" (e.g., "Verify wave 1 integration", "Verify integration", "Verify final integration"). These are created by the planner to check that parallel implementations integrate correctly. 
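The title-based routing described above can be sketched as a small helper (hypothetical, for illustration only; the subagent types and name patterns are the ones used by the spawn parameters in this section):

```typescript
interface SpawnParams {
  subagent_type: "iloom-swarm-worker" | "iloom-swarm-wave-verifier";
  mode: "delegate";
  team_name: string;
  name: string;
}

// Verification issues are identified purely by title convention:
// the planner titles them "Verify ...".
function spawnParamsFor(issueNumber: string, title: string, teamName: string): SpawnParams {
  const isVerifier = title.startsWith("Verify");
  return {
    subagent_type: isVerifier ? "iloom-swarm-wave-verifier" : "iloom-swarm-worker",
    mode: "delegate",
    team_name: teamName,
    name: `${isVerifier ? "verifier" : "issue"}-${issueNumber}`,
  };
}
```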
+ +#### Spawning Regular (Implementation) Issues + +For regular child issues (non-verification), use these parameters: +- `subagent_type`: `"iloom-swarm-worker"` +- `mode`: `"delegate"` +- `team_name`: `"swarm-main-332-1772581391916"` +- `name`: `"issue-"` + +**CRITICAL: The task prompt MUST contain only the issue number and worktree path. Do NOT include the issue title, issue body, analysis, planning details, implementation instructions, code snippets, or any other content from CHILD_ISSUES. The child agent retrieves all issue context itself via `mcp__issue_management__get_issue` as its first action.** + +The prompt for each regular child agent should be exactly: + +``` +Issue: # +Worktree: + +IMPORTANT: Your working directory is . Run `cd ` as your FIRST action before doing ANY work. +``` + +Nothing else. No title. No body. No instructions. No context. The child's system prompt defines everything it needs to do. + +#### Spawning Verification Issues + +For verification child issues (title starts with "Verify"), use the wave verifier agent instead of the regular swarm worker: + +- `subagent_type`: `"iloom-swarm-wave-verifier"` +- `mode`: `"delegate"` +- `team_name`: `"swarm-main-332-1772581391916"` +- `name`: `"verifier-"` + +The prompt for each verification agent should be exactly: + +``` +Issue: # +Worktree: + +IMPORTANT: Your working directory is . Run `cd ` as your FIRST action before doing ANY work. +``` + +The wave verifier agent reads the verification issue body to determine which child issues to verify (from its dependencies in the DAG), parses their must-have criteria, and checks them against the codebase. It spawns fix agents for failures and returns a structured report. + +After the verification agent completes, proceed with the normal merge flow (Step 3.1 onwards). Even if verification reports failures, the verification issue's branch should be merged (it may contain fix commits from the verifier's fix agents). 
+ +Update each child's tracking status to `in_progress`. + +Mark todo #3 and #4 as completed. + +--- + +## Phase 3: Monitor and Merge + +This is the core orchestration loop. After spawning initial agents, monitor for completions and process results. + +### When a Child Agent Completes Successfully + +When a child agent reports back with status `success` (or goes idle after completing its tasks): + +#### Step 3.1: Rebase and Merge the Child's Branch + +**Delegate this entire operation to a subagent.** Do NOT run git rebase, merge, or conflict resolution commands directly in the orchestrator. + +Spawn a subagent using the `Task` tool: +- `subagent_type`: `"general-purpose"` +- Prompt: + +``` +Rebase and merge child branch `` (issue #: "") into the epic branch. + +## Instructions + +1. Rebase the child branch onto the epic branch FROM THE CHILD'S WORKTREE (git refuses to rebase a branch checked out in another worktree): + ```bash + cd + git rebase epic/332 + ``` + +2. If the rebase has conflicts, resolve them: + - Understand the intent of both sides + - Stage resolved files with `git add` + - Run `git rebase --continue` + - Repeat for any remaining conflicts + - Ensure the code compiles after resolution + +3. After the rebase succeeds, fast-forward merge from the epic worktree: + ```bash + cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation" + git merge --ff-only + ``` + +4. After the merge succeeds, install dependencies in the epic worktree to ensure subsequent workers have up-to-date dependencies: + ```bash + cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation" + il install-deps + ``` + This handles all install resolution automatically (iloom config scripts, package.json scripts, Node.js lockfile detection). It silently skips if no install mechanism is found. + + **IMPORTANT**: If the install command fails, do NOT treat the merge as failed. The merge (rebase + fast-forward) already succeeded. 
Log the install failure as a warning and continue. + +5. Report back with two separate statuses: + - **Merge outcome**: "success" or "failed" (covers rebase + fast-forward merge) + - If conflicts were resolved, briefly describe what was resolved + - If merge failed, explain why (e.g., "Rebase conflict could not be resolved" or specific error) + - **Install outcome**: "success", "failed", or "skipped" + - If success, state which install mechanism was used (e.g., "pnpm install --frozen-lockfile" or "iloom config install script") + - If failed, include the error output as a warning (merge is still considered successful) + - If skipped, state why (e.g., "No install mechanism found") + +IMPORTANT: Use rebase + fast-forward merge, NOT merge commits. This keeps the epic branch history linear and clean. +``` + +**Handle the subagent result:** +- If the subagent reports **Merge outcome: "success"**: proceed to Step 3.2 + (Install outcome is informational only — log it but do not affect merge status) +- If the subagent reports **Merge outcome: "failed"**: + - Ensure the rebase is aborted (spawn another subagent if needed): `cd && git rebase --abort` + - Mark the child as `failed` with reason from the subagent's report + - Skip to Phase 4 failure handling for this child + +#### Step 3.2: Ensure Completion Comment Exists + +Child agents are expected to post a summary comment on their issue when they finish. However, if a child agent completes without posting a comment, the orchestrator must post one on its behalf. + +1. Call `mcp__issue_management__get_comments` with `{ number: "", type: "issue" }` to check for existing completion comments +2. If no completion comment was posted by the child agent, call `mcp__issue_management__create_comment` with: + - `number`: `""` + - `type`: `"issue"` + - `body`: A summary including: what was implemented, the branch name, and that it was merged into the epic branch +3. 
Log any new comment as an artifact: Call `mcp__recap__add_artifact` with `{ type: "comment", primaryUrl: "", description: "Completion comment for #", worktreePath: "" }` + +#### Step 3.3: Update State + +1. Update the child's tracking status to `done` +2. Update the child's loom state: Call `mcp__recap__set_loom_state` with `{ state: "done", worktreePath: "" }` +3. Close the child issue: Call `mcp__issue_management__close_issue` with `{ number: "" }` +4. Log the artifact: Call `mcp__recap__add_artifact` with `{ type: "issue", primaryUrl: "", description: "Issue # completed and merged into epic branch", worktreePath: "" }` + +#### Step 3.3.5: Push Epic Branch to Remote (Incremental) + +**Delegate this to a subagent.** After each successful child merge, push the epic branch to remote so the draft PR reflects incremental progress. + +Spawn a subagent using the `Task` tool: +- `subagent_type`: `"general-purpose"` +- Prompt: + +``` +Push the epic branch to remote from the epic worktree. + +```bash +cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation" +git push --force-with-lease origin HEAD +``` + +NOTE: --force-with-lease is required because the remote branch may still have the placeholder commit (on first push) or because the history was rewritten by a previous force push. + +Report back with status: "success" or "failed" and any error output. +``` + +**Error handling**: If the subagent reports a push failure, log the error and continue. Do NOT fail the swarm or skip remaining children. The work is committed locally and will be pushed either by a later successful push or by `il finish`. + +#### Step 3.3.6: Shut Down Finished Teammate + +After merging and updating state, send a `shutdown_request` to the child's teammate so it releases resources. Use `SendMessage` with `type: "shutdown_request"` and `recipient: ""` (e.g., `"issue-123"` or `"verifier-456"`). Do not wait for the shutdown response — proceed immediately. 
+ +#### Step 3.3.7: Clean Up Child Worktree + +After the child's state is updated to `done`, clean up its worktree and archive its metadata by running `il cleanup --archive`. Since the child's work is already rebased and merged into the epic branch, we only need to remove the worktree and branch while preserving metadata. + +```bash +cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation" +il cleanup --archive --force --json +``` + +This archives the child's metadata to the `finished/` directory (accessible via `il list --finished`) and removes the worktree and branch from disk. + +If the `il cleanup` command fails, log the error but continue with the orchestration -- do not let a cleanup failure block other children. + +#### Step 3.4: Spawn Newly Unblocked Issues + +After a child completes: +1. Remove the completed child's issue number from all other children's `blockedBy` lists +2. Check if any previously blocked children are now unblocked (empty `blockedBy` list) +3. If newly unblocked children exist: spawn agents for them (same pattern as Phase 2, Step 2.2) + +Mark todo #5, #6, #7, and #8 as completed after each merge-and-spawn cycle. + +--- + +## Phase 4: Handle Failures + +### When a Child Agent Fails + +If a child agent reports back with status `failed`, or encounters an unrecoverable error: + +1. **Update tracking**: Mark the child's status as `failed` +2. **Update loom state**: Call `mcp__recap__set_loom_state` with `{ state: "failed", worktreePath: "" }` +3. **Ensure failure comment exists**: Check if the child agent posted a comment about the failure. If not, post one on its behalf using `mcp__issue_management__create_comment` with `{ number: "", type: "issue", body: "..." }` explaining what failed and why. Log the comment as an artifact: Call `mcp__recap__add_artifact` with `{ type: "comment", primaryUrl: "", description: "Failure comment for #", worktreePath: "" }`. +4. 
**Log the failure as a recap entry**: Call `mcp__recap__add_entry` with `{ type: "risk", content: "Child # failed: ", worktreePath: "" }` to record the failure in the child's recap +5. **Shut down the failed teammate**: Send `shutdown_request` to the child's teammate to release resources. Do not wait for the response. +6. **Do NOT block other children**: Continue processing remaining children +7. **Handle downstream dependencies**: For any children that depend on the failed child: + - Mark them as `failed` with reason: "Blocked by failed dependency #" + - Update their loom state: Call `mcp__recap__set_loom_state` with `{ state: "failed", worktreePath: "" }` + - Log a recap entry for each: Call `mcp__recap__add_entry` with `{ type: "risk", content: "Blocked by failed dependency #", worktreePath: "" }` + - Do NOT spawn agents for them + +Mark todo #9 as completed. + +--- + +## Phase 5: Finalize + +When all children have reached a terminal state (`done` or `failed`): + +### Step 5.1: Shut Down Teammates + +Send `shutdown_request` to all teammates that are still active: +- Use `SendMessage` with `type: "shutdown_request"` for each active teammate + +### Step 5.2: Clean Up Team + +Use `TeamDelete` to clean up the team `swarm-main-332-1772581391916`. + +### Step 5.2.5: Post-Swarm Code Review and Auto-Fix + +If at least one child succeeded, run a full code review of the integrated epic branch and auto-fix any reported findings. + +First, check whether any children succeeded (this is a lightweight read, OK to do directly): +```bash +cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation" +git log --oneline -5 +``` +- If no children succeeded (only placeholder or temporary commits exist), skip this step entirely. 
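The "did any child succeed" gate can be sketched by scanning the `git log --oneline` output for non-placeholder subjects. This assumes the placeholder subject prefixes are `[iloom-placeholder]` and `[iloom] Temporary`, the same prefixes the final-push check in Step 5.3.5 looks for:

```typescript
const PLACEHOLDER_PREFIXES = ["[iloom-placeholder]", "[iloom] Temporary"];

// True when at least one commit subject is real merged child work rather
// than an iloom placeholder. Input is raw `git log --oneline` output.
function hasMergedChildWork(onelineLog: string): boolean {
  return onelineLog
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .some((line) => {
      // --oneline format: "<abbreviated-hash> <subject>"
      const subject = line.slice(line.indexOf(" ") + 1);
      return !PLACEHOLDER_PREFIXES.some((prefix) => subject.startsWith(prefix));
    });
}
```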
+ +#### Step 5.2.5a: Run Code Review + +**Delegate this to a subagent.** Spawn a Task subagent to invoke the code reviewer: + +- `subagent_type`: `"general-purpose"` +- Prompt: + +``` +Run a full code review of the integrated epic branch. + +## Instructions + +You are in the epic worktree at `/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation`. All child agents' work has been merged into this branch. + +1. Execute: @agent-iloom-code-reviewer with prompt "Run code review." +2. Wait for the review to complete. +3. Report back with the full review results, including all findings with their confidence scores, file locations, and recommendations. + - If no issues found, report "No issues found." + - If issues found, include the full structured report (Critical issues 95-100, Warnings 80-94). +``` + +**Handle the subagent result:** +- If the subagent reports **"No issues found"** or the review found no findings scoring 80+: skip to Step 5.3. +- If the subagent reports findings: proceed to Step 5.2.5b. +- If the subagent fails (timeout, crash, error): log the failure, skip to Step 5.3. The review is non-blocking -- a failed review must not prevent finalization. + +#### Step 5.2.5b: Auto-Fix Reported Issues + +If the review found issues (confidence 80+), spawn a fix agent to address them. + +**Delegate this to a subagent:** + +- `subagent_type`: `"general-purpose"` +- Prompt: + +```` +Fix the following code review findings in the epic worktree at `/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation`. + +## Review Findings + + + +## Instructions + +1. Read each finding carefully (file, line, issue, recommendation) +2. Implement the recommended fix for each finding +3. After fixing all issues, stage and commit with: + ```bash + cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation" + git add -A + git commit -m "fix(review): address post-swarm code review findings" + ``` +4. 
Report back with a summary of what was fixed.
+
+IMPORTANT: Only fix the specific issues identified in the review findings. Do NOT refactor, optimize, or make additional changes beyond what the review identified.
+````
+
+**Handle the subagent result:**
+- If the fix agent succeeds: log "Post-swarm review: N findings addressed, fix committed."
+- If the fix agent fails: log the failure and continue. Auto-fix failure is non-blocking.
+
+**Single pass only.** Do NOT re-review after fixing. This prevents infinite review-fix loops.
+
+### Step 5.3: Final Commit on Epic Branch
+
+If at least one child succeeded, create the final "Fixes" commit — but only if it doesn't already exist (idempotency).
+
+First, check whether the final commit has already been created (this is a lightweight read, OK to do directly):
+```bash
+cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation"
+git log --oneline --grep="feat(epic-332):" -1
+```
+- If a matching commit is found, skip this step — the finalization commit already exists.
+
+If no final commit exists, create it directly (no need to delegate — this is a trivial `--allow-empty` commit with no conflict risk). The commit message MUST have a descriptive first line summarizing what the epic accomplished, with the `Fixes` trailer in the body:
+
+```bash
+cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation"
+git add -A
+git commit --allow-empty -m "feat(epic-332): [summary of what was accomplished across child issues]
+
+Fixes #332"
+```
+
+### Step 5.3.5: Push Epic Branch to Remote (Final Commit)
+
+After the final "Fixes" commit, push the epic branch to remote so the draft PR includes the issue-closing trailer. **Delegate this to a subagent.**
+
+**Note**: Incremental pushes in Step 3.3.5 should have already pushed merged child work. This final push adds the "Fixes" commit.
+
+First, check if push is needed (this is a lightweight read, OK to do directly):
+```bash
+cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation"
+git log -1 --format=%s
+```
+- If the latest commit message starts with `[iloom-placeholder]` or `[iloom] Temporary`, no children succeeded. Skip the push.
+
+If a push is needed, spawn a subagent using the `Task` tool:
+- `subagent_type`: `"general-purpose"`
+- Prompt:
+
+````
+Push the epic branch to remote (final commit with Fixes trailer).
+
+```bash
+cd "/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation"
+git push --force-with-lease origin HEAD
+```
+
+NOTE: --force-with-lease is required because the branch history includes rebased child commits.
+
+Report back with status: "success" or "failed" and any error output.
+````
+
+**Handle the subagent result:**
+- If push fails: Log the error but do NOT fail the swarm. The work is committed locally and `il finish` will handle the push.
+- Do NOT retry automatically.
+- If push succeeds: Log "Epic branch pushed to remote. Draft PR #876 updated with final commit."
+
+### Step 5.4: Print Summary
+
+Print a comprehensive summary:
+
+```
+## Swarm Orchestration Summary for Epic #332
+
+### Results
+| Issue | Title | Status | Details |
+|-------|-------|--------|---------|
+| #<issue> | <title> | <done/failed> | <brief detail> |
+| ... | ... | ... | ... |
+
+### Statistics
+- Total children: N
+- Succeeded: N
+- Failed: N
+
+### Epic Branch State
+The epic branch at `/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation` contains merged work from all successful children.
+
+### Failed Children
+<If any failed, list them with reasons>
+
+### Next Steps
+The epic worktree is ready for review at: `/Users/adam/Documents/Projects/iloom-cli/feat-issue-332__container-based-isolation`
+```
+
+Mark todo #10, #11, #12, #13, and #14 as completed.
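Taken together, the Phase 5 steps above form a linear, failure-tolerant pipeline. A compressed sketch of that control flow follows; every step function here is a hypothetical stand-in for the delegated subagent work described in the prompt, not a real tool call:

```typescript
// Illustrative Phase 5 control flow. Review and push failures are
// non-blocking by design; auto-fix runs at most once (single pass).
type StepResult = { ok: boolean; findings?: string }

async function finalizeEpic(steps: {
  shutdownTeammates: () => Promise<void>
  deleteTeam: () => Promise<void>
  runReview: () => Promise<StepResult>
  autoFix: (findings: string) => Promise<StepResult>
  finalCommit: () => Promise<void>
  pushBranch: () => Promise<StepResult>
}): Promise<void> {
  await steps.shutdownTeammates()
  await steps.deleteTeam()

  const review = await steps.runReview()
  if (review.ok && review.findings) {
    // Single pass only: never re-review after fixing.
    await steps.autoFix(review.findings)
  }

  await steps.finalCommit()

  const push = await steps.pushBranch()
  if (!push.ok) {
    // Non-blocking: the work is committed locally; `il finish` handles the push.
  }
}
```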
diff --git a/package.json b/package.json index 350a92f2..560e2b32 100644 --- a/package.json +++ b/package.json @@ -85,6 +85,7 @@ "proper-lockfile": "^4.1.2", "string-width": "^6.1.0", "uuid": "^11.1.0", + "yaml": "^2.8.2", "zod": "^3.23.8" }, "devDependencies": { diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index d529319b..38e139db 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -68,6 +68,9 @@ importers: uuid: specifier: ^11.1.0 version: 11.1.0 + yaml: + specifier: ^2.8.2 + version: 2.8.2 zod: specifier: ^3.23.8 version: 3.25.76 @@ -134,7 +137,7 @@ importers: version: 5.0.10 tsup: specifier: ^8.5.0 - version: 8.5.0(postcss@8.5.6)(tsx@4.20.6)(typescript@5.9.2) + version: 8.5.0(postcss@8.5.6)(tsx@4.20.6)(typescript@5.9.2)(yaml@2.8.2) tsx: specifier: ^4.20.6 version: 4.20.6 @@ -1494,8 +1497,8 @@ packages: resolution: {integrity: sha512-34wB/Y7MW7bzjKRjUKTa46I2Z7eV62Rkhva+KkopW7Qvv/OSWBqvkSY7vusOPrNuZcUG3tApvdVgNB8POj3SPw==} engines: {node: '>=10'} - js-yaml@4.1.0: - resolution: {integrity: sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==} + js-yaml@4.1.1: + resolution: {integrity: sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==} hasBin: true json-buffer@3.0.1: @@ -2486,6 +2489,11 @@ packages: wrappy@1.0.2: resolution: {integrity: sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==} + yaml@2.8.2: + resolution: {integrity: sha512-mplynKqc1C2hTVYxd0PU2xQAc22TI1vShAYGksCCfxbn/dFwnHTNi1bvYsBTkhdUNtGIf5xNOg938rrSSYvS9A==} + engines: {node: '>= 14.6'} + hasBin: true + yocto-queue@0.1.0: resolution: {integrity: sha512-rVksvsnNCdJ/ohGc6xgPwyN8eheCxsiLM8mxuE/t/mOVqJewPuO1miLpTHQiRgTKCLexL4MeAFVagts7HmNZ2Q==} engines: {node: '>=10'} @@ -2634,7 +2642,7 @@ snapshots: globals: 14.0.0 ignore: 5.3.2 import-fresh: 3.3.1 - js-yaml: 4.1.0 + js-yaml: 4.1.1 minimatch: 3.1.2 strip-json-comments: 3.1.1 transitivePeerDependencies: @@ -3837,7 +3845,7 
@@ snapshots: joycon@3.1.1: {} - js-yaml@4.1.0: + js-yaml@4.1.1: dependencies: argparse: 2.0.1 @@ -4416,12 +4424,13 @@ snapshots: mlly: 1.8.0 pathe: 2.0.3 - postcss-load-config@6.0.1(postcss@8.5.6)(tsx@4.20.6): + postcss-load-config@6.0.1(postcss@8.5.6)(tsx@4.20.6)(yaml@2.8.2): dependencies: lilconfig: 3.1.3 optionalDependencies: postcss: 8.5.6 tsx: 4.20.6 + yaml: 2.8.2 postcss@8.5.6: dependencies: @@ -4814,7 +4823,7 @@ snapshots: tslib@2.8.1: {} - tsup@8.5.0(postcss@8.5.6)(tsx@4.20.6)(typescript@5.9.2): + tsup@8.5.0(postcss@8.5.6)(tsx@4.20.6)(typescript@5.9.2)(yaml@2.8.2): dependencies: bundle-require: 5.1.0(esbuild@0.25.9) cac: 6.7.14 @@ -4825,7 +4834,7 @@ snapshots: fix-dts-default-cjs-exports: 1.0.1 joycon: 3.1.1 picocolors: 1.1.1 - postcss-load-config: 6.0.1(postcss@8.5.6)(tsx@4.20.6) + postcss-load-config: 6.0.1(postcss@8.5.6)(tsx@4.20.6)(yaml@2.8.2) resolve-from: 5.0.0 rollup: 4.50.1 source-map: 0.8.0-beta.0 @@ -5040,6 +5049,8 @@ snapshots: wrappy@1.0.2: {} + yaml@2.8.2: {} + yocto-queue@0.1.0: {} yoctocolors-cjs@2.1.3: {} diff --git a/src/commands/init.ts b/src/commands/init.ts index 4b029ead..24bd4d77 100644 --- a/src/commands/init.ts +++ b/src/commands/init.ts @@ -1,4 +1,5 @@ import { logger } from '../utils/logger.js' +import { detectComposeFile } from '../utils/docker.js' import { ShellCompletion } from '../lib/ShellCompletion.js' import chalk from 'chalk' import { mkdir, readFile } from 'fs/promises' @@ -76,6 +77,15 @@ export class InitCommand { return } + // Detect compose file for telemetry (non-fatal if detection fails) + let composeDetectedForTelemetry = false + try { + const composeResult = await detectComposeFile(process.cwd()) + composeDetectedForTelemetry = !!composeResult + } catch { + // Non-fatal — telemetry only + } + // Launch guided Claude configuration if available const guidedInitSucceeded = await this.launchGuidedInit(customInitialMessage) @@ -88,6 +98,11 @@ export class InitCommand { } else { logger.debug('Project already marked as 
configured, skipping') } + try { + TelemetryService.getInstance().track('init.completed', { mode, compose_detected: composeDetectedForTelemetry }) + } catch (e) { + logger.debug('Telemetry tracking failed', { error: e }) + } } else { logger.debug('Skipping project marker - guided init did not complete successfully') } @@ -368,6 +383,37 @@ export class InitCommand { const hasPackageJson = existsSync(packageJsonPath) logger.debug('Package.json detection', { packageJsonPath, hasPackageJson }) + // Detect compose files for Docker dev server suggestion + const composeResult = await detectComposeFile(process.cwd()) + logger.debug('Compose file detection', { + found: !!composeResult, + fileName: composeResult?.fileName, + serviceCount: composeResult?.services.length ?? 0, + }) + + // Build compose template variables + let composeServicesInfo = '' + if (composeResult) { + if (composeResult.services.length === 0) { + composeServicesInfo = '_(No services defined in compose file)_' + } else { + composeServicesInfo = composeResult.services + .map((svc) => { + const portList = + svc.ports.length === 0 + ? 'no ports mapped' + : svc.ports + .map((p) => + p.host !== undefined ? `${p.host}:${p.container}` : `${p.container}` + ) + .join(', ') + const imagePart = svc.image ? ` (image: ${svc.image})` : '' + return `- **${svc.name}**${imagePart}: ${portList}` + }) + .join('\n') + } + } + // Build template variables const variables = { SETTINGS_SCHEMA: schemaContent, @@ -388,6 +434,10 @@ export class InitCommand { // Multi-language support - mutually exclusive booleans HAS_PACKAGE_JSON: hasPackageJson, NO_PACKAGE_JSON: !hasPackageJson, + // Docker Compose detection + HAS_COMPOSE_FILE: !!composeResult, + COMPOSE_FILE_NAME: composeResult?.fileName ?? 
'', + COMPOSE_SERVICES_INFO: composeServicesInfo, } logger.debug('Building template variables', { diff --git a/src/lib/ComposeDevServerStrategy.test.ts b/src/lib/ComposeDevServerStrategy.test.ts new file mode 100644 index 00000000..2ca8bc25 --- /dev/null +++ b/src/lib/ComposeDevServerStrategy.test.ts @@ -0,0 +1,456 @@ +import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest' +import fs from 'fs/promises' +import net from 'net' +import { execa } from 'execa' +import { + ComposeDevServerStrategy, + findComposeFile, + type ComposeUtils, + type ComposePortMapping, +} from './ComposeDevServerStrategy.js' + +// Mock dependencies +vi.mock('execa') +vi.mock('net') +vi.mock('fs/promises') + +vi.mock('../utils/logger.js', () => ({ + logger: { + info: vi.fn(), + error: vi.fn(), + warn: vi.fn(), + debug: vi.fn(), + success: vi.fn(), + }, +})) + +const WORKTREE = '/worktrees/issue-872' +const COMPOSE_FILE = '/worktrees/issue-872/compose.yml' +const OVERRIDE_FILE = '/home/user/.config/iloom-ai/compose-overrides/override-872.yml' + +const makeMappings = (overrides: Partial<ComposePortMapping>[] = []): ComposePortMapping[] => [ + { service: 'web', hostPort: 3000, containerPort: 3000, ...overrides[0] }, +] + +const makeUtils = (overrides: Partial<ComposeUtils> = {}): ComposeUtils => ({ + parseComposeFile: vi.fn().mockResolvedValue(makeMappings()), + generateOverrideFile: vi.fn().mockResolvedValue(OVERRIDE_FILE), + ...overrides, +}) + +describe('findComposeFile', () => { + it('should return path to compose.yml when it exists', async () => { + vi.mocked(fs.access).mockImplementation(async (filePath) => { + if (String(filePath).endsWith('compose.yml')) return undefined + throw new Error('ENOENT') + }) + + const result = await findComposeFile(WORKTREE) + + expect(result).toBe(`${WORKTREE}/compose.yml`) + }) + + it('should return path to compose.yaml when compose.yml does not exist', async () => { + vi.mocked(fs.access).mockImplementation(async (filePath) => { + if 
(String(filePath).endsWith('compose.yaml')) return undefined + throw new Error('ENOENT') + }) + + const result = await findComposeFile(WORKTREE) + + expect(result).toBe(`${WORKTREE}/compose.yaml`) + }) + + it('should return path to docker-compose.yml as a fallback', async () => { + vi.mocked(fs.access).mockImplementation(async (filePath) => { + if (String(filePath).endsWith('docker-compose.yml')) return undefined + throw new Error('ENOENT') + }) + + const result = await findComposeFile(WORKTREE) + + expect(result).toBe(`${WORKTREE}/docker-compose.yml`) + }) + + it('should return path to docker-compose.yaml as last resort', async () => { + vi.mocked(fs.access).mockImplementation(async (filePath) => { + if (String(filePath).endsWith('docker-compose.yaml')) return undefined + throw new Error('ENOENT') + }) + + const result = await findComposeFile(WORKTREE) + + expect(result).toBe(`${WORKTREE}/docker-compose.yaml`) + }) + + it('should prefer compose.yml over docker-compose.yml', async () => { + vi.mocked(fs.access).mockImplementation(async () => undefined) // all exist + + const result = await findComposeFile(WORKTREE) + + // compose.yml is first in the list + expect(result).toBe(`${WORKTREE}/compose.yml`) + }) + + it('should return null when no compose file exists', async () => { + vi.mocked(fs.access).mockRejectedValue(new Error('ENOENT')) + + const result = await findComposeFile(WORKTREE) + + expect(result).toBeNull() + }) +}) + +describe('ComposeDevServerStrategy', () => { + let utils: ComposeUtils + let strategy: ComposeDevServerStrategy + + beforeEach(() => { + utils = makeUtils() + strategy = new ComposeDevServerStrategy(utils) + }) + + // --------------------------------------------------------------------------- + // buildProjectName + // --------------------------------------------------------------------------- + describe('buildProjectName', () => { + it('should build project name with numeric identifier', () => { + 
expect(ComposeDevServerStrategy.buildProjectName(872)).toBe('iloom-872') + }) + + it('should build project name with string identifier', () => { + expect(ComposeDevServerStrategy.buildProjectName('my-feature')).toBe('iloom-my-feature') + }) + }) + + // --------------------------------------------------------------------------- + // isStackRunning + // --------------------------------------------------------------------------- + describe('isStackRunning', () => { + it('should return true when docker compose ps returns running container IDs', async () => { + vi.mocked(execa).mockResolvedValue({ + exitCode: 0, + stdout: 'abc123def456', + } as never) + + const result = await strategy.isStackRunning('iloom-872') + + expect(result).toBe(true) + expect(execa).toHaveBeenCalledWith( + 'docker', + ['compose', '--project-name', 'iloom-872', 'ps', '--quiet', '--status', 'running'], + { reject: false } + ) + }) + + it('should return false when docker compose ps output is empty', async () => { + vi.mocked(execa).mockResolvedValue({ + exitCode: 0, + stdout: '', + } as never) + + const result = await strategy.isStackRunning('iloom-872') + + expect(result).toBe(false) + }) + + it('should return false when docker compose ps exits with non-zero code', async () => { + vi.mocked(execa).mockResolvedValue({ + exitCode: 1, + stdout: '', + } as never) + + const result = await strategy.isStackRunning('iloom-872') + + expect(result).toBe(false) + }) + + it('should return false when execa throws', async () => { + vi.mocked(execa).mockRejectedValue(new Error('Docker not available')) + + const result = await strategy.isStackRunning('iloom-872') + + expect(result).toBe(false) + }) + }) + + // --------------------------------------------------------------------------- + // startDetached + // --------------------------------------------------------------------------- + describe('startDetached', () => { + it('should call docker compose up --detach --wait with correct args', async () => { + 
vi.mocked(execa).mockResolvedValue({ exitCode: 0 } as never) + + await strategy.startDetached(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + + expect(execa).toHaveBeenCalledWith( + 'docker', + [ + 'compose', + '--project-name', 'iloom-872', + '-f', COMPOSE_FILE, + '-f', OVERRIDE_FILE, + 'up', '--detach', '--wait', + ], + expect.objectContaining({ stdio: 'inherit' }) + ) + }) + + it('should forward envOverrides into the compose process environment', async () => { + vi.mocked(execa).mockResolvedValue({ exitCode: 0 } as never) + + await strategy.startDetached(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872', { + DATABASE_URL: 'postgres://test', + }) + + expect(execa).toHaveBeenCalledWith( + 'docker', + expect.any(Array), + expect.objectContaining({ + env: expect.objectContaining({ DATABASE_URL: 'postgres://test' }), + }) + ) + }) + + it('should throw when docker compose up fails', async () => { + vi.mocked(execa).mockRejectedValue(new Error('compose up failed')) + + await expect( + strategy.startDetached(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + ).rejects.toThrow('Failed to start compose stack "iloom-872"') + }) + }) + + // --------------------------------------------------------------------------- + // startForeground + // --------------------------------------------------------------------------- + describe('startForeground', () => { + it('should call docker compose up (without --detach) in foreground mode', async () => { + vi.mocked(execa).mockResolvedValue({ exitCode: 0 } as never) + + await strategy.startForeground(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + + const call = vi.mocked(execa).mock.calls[0] + expect(call[1]).toContain('up') + expect(call[1]).not.toContain('--detach') + }) + + it('should use stderr stdio when redirectToStderr is true', async () => { + vi.mocked(execa).mockResolvedValue({ exitCode: 0 } as never) + + await strategy.startForeground(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872', { + redirectToStderr: true, + }) + + expect(execa).toHaveBeenCalledWith( + 
'docker', + expect.any(Array), + expect.objectContaining({ + stdio: [process.stdin, process.stderr, process.stderr], + }) + ) + }) + + it('should call onProcessStarted with undefined (no host PID for compose)', async () => { + vi.mocked(execa).mockResolvedValue({ exitCode: 0 } as never) + const onStart = vi.fn() + + await strategy.startForeground(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872', { + onProcessStarted: onStart, + }) + + expect(onStart).toHaveBeenCalledWith(undefined) + }) + + it('should set up SIGINT and SIGTERM signal handlers', async () => { + vi.mocked(execa).mockResolvedValue({ exitCode: 0 } as never) + const onSpy = vi.spyOn(process, 'on') + + await strategy.startForeground(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + + expect(onSpy).toHaveBeenCalledWith('SIGINT', expect.any(Function)) + expect(onSpy).toHaveBeenCalledWith('SIGTERM', expect.any(Function)) + }) + + it('should remove signal handlers after completion', async () => { + vi.mocked(execa).mockResolvedValue({ exitCode: 0 } as never) + const removeSpy = vi.spyOn(process, 'removeListener') + + await strategy.startForeground(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + + expect(removeSpy).toHaveBeenCalledWith('SIGINT', expect.any(Function)) + expect(removeSpy).toHaveBeenCalledWith('SIGTERM', expect.any(Function)) + }) + + it('should remove signal handlers even when docker compose throws', async () => { + vi.mocked(execa).mockRejectedValue(new Error('compose crashed')) + const removeSpy = vi.spyOn(process, 'removeListener') + + await expect( + strategy.startForeground(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + ).rejects.toThrow('compose crashed') + + expect(removeSpy).toHaveBeenCalledWith('SIGINT', expect.any(Function)) + expect(removeSpy).toHaveBeenCalledWith('SIGTERM', expect.any(Function)) + }) + + it('should return empty object (no host PID)', async () => { + vi.mocked(execa).mockResolvedValue({ exitCode: 0 } as never) + + const result = await strategy.startForeground(COMPOSE_FILE, OVERRIDE_FILE, 
'iloom-872') + + expect(result).toEqual({}) + }) + }) + + // --------------------------------------------------------------------------- + // stop + // --------------------------------------------------------------------------- + describe('stop', () => { + it('should call docker compose down when stack is running', async () => { + // isStackRunning returns true (running IDs) + vi.mocked(execa) + .mockResolvedValueOnce({ exitCode: 0, stdout: 'abc123' } as never) // ps --quiet + .mockResolvedValueOnce({ exitCode: 0 } as never) // compose down + + const result = await strategy.stop(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + + expect(result).toBe(true) + expect(execa).toHaveBeenLastCalledWith( + 'docker', + [ + 'compose', + '--project-name', 'iloom-872', + '-f', COMPOSE_FILE, + '-f', OVERRIDE_FILE, + 'down', + ], + { reject: false } + ) + }) + + it('should return false and skip down when stack is not running', async () => { + // isStackRunning returns false (empty output) + vi.mocked(execa).mockResolvedValue({ exitCode: 0, stdout: '' } as never) + + const result = await strategy.stop(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + + expect(result).toBe(false) + // Should only have called ps, not down + expect(execa).toHaveBeenCalledTimes(1) + }) + + it('should throw when docker compose down fails unexpectedly', async () => { + // isStackRunning returns true + vi.mocked(execa) + .mockResolvedValueOnce({ exitCode: 0, stdout: 'abc123' } as never) // ps --quiet + .mockRejectedValueOnce(new Error('daemon unavailable')) // compose down + + await expect( + strategy.stop(COMPOSE_FILE, OVERRIDE_FILE, 'iloom-872') + ).rejects.toThrow('Failed to stop compose stack "iloom-872"') + }) + }) + + // --------------------------------------------------------------------------- + // prepareOverrideFile + // --------------------------------------------------------------------------- + describe('prepareOverrideFile', () => { + it('should parse compose file and generate override with remapped 
host port', async () => { + const mappings: ComposePortMapping[] = [ + { service: 'web', hostPort: 3000, containerPort: 3000 }, + { service: 'db', hostPort: 5432, containerPort: 5432 }, + ] + vi.mocked(utils.parseComposeFile).mockResolvedValue(mappings) + vi.mocked(utils.generateOverrideFile).mockResolvedValue(OVERRIDE_FILE) + + const result = await strategy.prepareOverrideFile(COMPOSE_FILE, '872', 3872, '/data') + + expect(utils.parseComposeFile).toHaveBeenCalledWith(COMPOSE_FILE) + // Primary service's host port should be remapped to 3872 + expect(utils.generateOverrideFile).toHaveBeenCalledWith( + [ + { service: 'web', hostPort: 3872, containerPort: 3000 }, + { service: 'db', hostPort: 5432, containerPort: 5432 }, + ], + '872', + '/data' + ) + expect(result).toBe(OVERRIDE_FILE) + }) + + it('should throw when compose file has no port mappings', async () => { + vi.mocked(utils.parseComposeFile).mockResolvedValue([]) + + await expect( + strategy.prepareOverrideFile(COMPOSE_FILE, '872', 3872, '/data') + ).rejects.toThrow('No port mappings found') + }) + }) + + // --------------------------------------------------------------------------- + // waitForReady + // --------------------------------------------------------------------------- + describe('waitForReady', () => { + beforeEach(() => { + vi.useFakeTimers() + }) + + afterEach(() => { + vi.useRealTimers() + }) + + const makeSocket = (triggerEvent: 'connect' | 'error') => { + const socket = { + once: vi.fn(), + destroy: vi.fn(), + setTimeout: vi.fn(), + } + socket.once.mockImplementation((event: string, cb: () => void) => { + if (event === triggerEvent) cb() + return socket + }) + return socket + } + + it('should return true when port accepts connections immediately', async () => { + vi.mocked(net.createConnection).mockReturnValue(makeSocket('connect') as never) + + const promise = strategy.waitForReady(3872, 5000, 100) + await vi.runAllTimersAsync() + + const result = await promise + expect(result).toBe(true) + }) 
+ + it('should return false when timeout expires before port is available', async () => { + vi.mocked(net.createConnection).mockReturnValue(makeSocket('error') as never) + + const promise = strategy.waitForReady(3872, 200, 50) + await vi.runAllTimersAsync() + + const result = await promise + expect(result).toBe(false) + }) + + it('should exit early if compose stack stops before port is ready', async () => { + vi.mocked(net.createConnection).mockReturnValue(makeSocket('error') as never) + + // Stack stops after second isStackRunning check + let isRunningCallCount = 0 + vi.mocked(execa).mockImplementation(async () => { + isRunningCallCount++ + if (isRunningCallCount >= 2) { + return { exitCode: 0, stdout: '' } as never // stopped + } + return { exitCode: 0, stdout: 'abc123' } as never // running + }) + + const promise = strategy.waitForReady(3872, 30000, 50, 'iloom-872') + await vi.runAllTimersAsync() + + const result = await promise + expect(result).toBe(false) + }) + }) +}) diff --git a/src/lib/ComposeDevServerStrategy.ts b/src/lib/ComposeDevServerStrategy.ts new file mode 100644 index 00000000..33e75beb --- /dev/null +++ b/src/lib/ComposeDevServerStrategy.ts @@ -0,0 +1,354 @@ +import fs from 'fs/promises' +import path from 'path' +import { execa } from 'execa' +import net from 'net' +import { logger } from '../utils/logger.js' +import type { ComposePortMapping } from '../utils/compose.js' + +export type { ComposePortMapping } + +/** + * Injected compose file utility functions. + * These are defined as an interface so callers can inject test doubles. + * The real implementations will come from the compose-file-parser module. + */ +export interface ComposeUtils { + /** + * Parse a compose file and return all port mappings. + * Returns an empty array if no ports are defined. + */ + parseComposeFile(filePath: string): Promise<ComposePortMapping[]> + + /** + * Generate a compose override file that remaps host ports. 
+ * Returns the absolute path to the generated override file. + */ + generateOverrideFile( + mappings: ComposePortMapping[], + identifier: string | number, + dataDir: string + ): Promise<string> +} + +/** + * Options for foreground mode. + */ +export interface ComposeForegroundOptions { + redirectToStderr?: boolean | undefined + onProcessStarted?: ((pid?: number) => void) | undefined + envOverrides?: Record<string, string> | undefined +} + +/** + * Standard compose file names to check, in order of preference. + */ +const COMPOSE_FILE_NAMES = ['compose.yml', 'compose.yaml', 'docker-compose.yml', 'docker-compose.yaml'] + +/** + * Find a compose file in the given directory. + * Checks for compose.yml, compose.yaml, docker-compose.yml, docker-compose.yaml in that order. + * Returns the absolute path if found, null otherwise. + */ +export async function findComposeFile(worktreePath: string): Promise<string | null> { + for (const fileName of COMPOSE_FILE_NAMES) { + const filePath = path.join(worktreePath, fileName) + try { + await fs.access(filePath) + return filePath + } catch { + // File doesn't exist, try the next one + } + } + return null +} + +/** + * Attempt a single TCP connection to localhost:port. + * Resolves true if the connection succeeds, false otherwise. + * Uses a 1-second socket timeout to avoid hanging on firewalled ports. 
+ */ +function tcpProbe(port: number): Promise<boolean> { + return new Promise((resolve) => { + const socket = net.createConnection({ port, host: '127.0.0.1' }) + socket.setTimeout(1000) + const cleanup = (result: boolean): void => { + socket.destroy() + resolve(result) + } + socket.once('connect', () => cleanup(true)) + socket.once('error', () => cleanup(false)) + socket.once('timeout', () => cleanup(false)) + }) +} + +/** + * ComposeDevServerStrategy handles the full docker compose lifecycle for a dev server: + * - Stack startup (detached and foreground) + * - Stack teardown + * - Running status checks + * - Port readiness detection via TCP probe + * + * Uses `--project-name iloom-{identifier}` to isolate compose stacks between concurrent looms. + * Supports compose override files for port remapping (generated by the compose-file-parser module). + */ +export class ComposeDevServerStrategy { + private readonly utils: ComposeUtils + + constructor(utils: ComposeUtils) { + this.utils = utils + } + + /** + * Build the docker compose project name for a given loom identifier. + * Format: `iloom-{identifier}` + */ + static buildProjectName(identifier: string | number): string { + return `iloom-${identifier}` + } + + /** + * Check if the compose stack is currently running. + * Uses `docker compose ps --quiet` and checks for running service output. + * + * @param projectName - The compose project name (iloom-{identifier}) + * @returns true if any services are running, false otherwise + */ + async isStackRunning(projectName: string): Promise<boolean> { + try { + const result = await execa( + 'docker', + ['compose', '--project-name', projectName, 'ps', '--quiet', '--status', 'running'], + { reject: false } + ) + return result.exitCode === 0 && result.stdout.trim().length > 0 + } catch { + return false + } + } + + /** + * Start the compose stack in detached (background) mode. + * Applies the override file for port remapping when provided. 
+ * Waits for the primary port to accept connections before returning. + * + * @param composeFile - Absolute path to the main compose file + * @param overrideFile - Absolute path to the generated override file (for port remapping) + * @param projectName - The compose project name + * @param envOverrides - Additional environment variables + */ + async startDetached( + composeFile: string, + overrideFile: string, + projectName: string, + envOverrides?: Record<string, string> + ): Promise<void> { + const args = this.buildComposeArgs(composeFile, overrideFile, projectName) + args.push('up', '--detach', '--wait') + + logger.info(`Starting compose stack "${projectName}" in background...`) + + const env = envOverrides + ? { ...process.env, ...envOverrides } + : process.env + + try { + await execa('docker', args, { env: env as NodeJS.ProcessEnv, stdio: 'inherit' }) + } catch (error) { + const message = error instanceof Error ? error.message : 'Unknown error' + throw new Error(`Failed to start compose stack "${projectName}": ${message}`) + } + + logger.success(`Compose stack "${projectName}" started`) + } + + /** + * Start the compose stack in foreground (blocking) mode. + * Returns once all services exit. + * Traps SIGINT/SIGTERM and forwards them to the stack via docker compose stop. 
+ * + * @param composeFile - Absolute path to the main compose file + * @param overrideFile - Absolute path to the generated override file + * @param projectName - The compose project name + * @param opts - Foreground options (redirectToStderr, onProcessStarted, envOverrides) + */ + async startForeground( + composeFile: string, + overrideFile: string, + projectName: string, + opts: ComposeForegroundOptions = {} + ): Promise<{ pid?: number }> { + const { redirectToStderr, onProcessStarted, envOverrides } = opts + + const args = this.buildComposeArgs(composeFile, overrideFile, projectName) + args.push('up') + + logger.info(`Running compose stack "${projectName}" in foreground...`) + + const stdio = redirectToStderr + ? [process.stdin, process.stderr, process.stderr] as const + : 'inherit' as const + + const env = envOverrides + ? { ...process.env, ...envOverrides } + : process.env + + // Signal forwarding: trap SIGINT/SIGTERM and forward to the compose stack + const forwardSignal = (): void => { + logger.debug(`Stopping compose stack "${projectName}"`) + void execa( + 'docker', + [...this.buildComposeArgs(composeFile, overrideFile, projectName), 'stop'], + { reject: false } + ) + } + + const onSigint = (): void => forwardSignal() + const onSigterm = (): void => forwardSignal() + + process.on('SIGINT', onSigint) + process.on('SIGTERM', onSigterm) + + if (onProcessStarted) { + onProcessStarted(undefined) + } + + try { + await execa('docker', args, { stdio, env: env as NodeJS.ProcessEnv }) + } finally { + process.removeListener('SIGINT', onSigint) + process.removeListener('SIGTERM', onSigterm) + } + + return {} + } + + /** + * Stop and remove the compose stack. 
+ * + * @param composeFile - Absolute path to the main compose file + * @param overrideFile - Absolute path to the override file + * @param projectName - The compose project name + * @returns true if services were stopped, false if nothing was running + */ + async stop( + composeFile: string, + overrideFile: string, + projectName: string + ): Promise<boolean> { + const isRunning = await this.isStackRunning(projectName) + + if (!isRunning) { + logger.debug(`Compose stack "${projectName}" is not running`) + return false + } + + logger.info(`Stopping compose stack "${projectName}"...`) + + const args = this.buildComposeArgs(composeFile, overrideFile, projectName) + args.push('down') + + try { + await execa('docker', args, { reject: false }) + } catch (error) { + const message = error instanceof Error ? error.message : 'Unknown error' + throw new Error(`Failed to stop compose stack "${projectName}": ${message}`) + } + + logger.success(`Compose stack "${projectName}" stopped`) + return true + } + + /** + * Wait for the primary port to accept TCP connections. + * Used for readiness detection after starting in detached mode without --wait. 
+ * + * @param port - Host port to probe + * @param timeout - Maximum time to wait in milliseconds + * @param interval - Interval between probes in milliseconds + * @param projectName - Optional project name to monitor for early exit + * @returns true if the port accepts connections within the timeout, false otherwise + */ + async waitForReady( + port: number, + timeout: number, + interval: number, + projectName?: string + ): Promise<boolean> { + const startTime = Date.now() + let attempts = 0 + + while (Date.now() - startTime < timeout) { + attempts++ + + // Early exit: if the stack has stopped, stop polling + if (projectName && attempts % 3 === 0) { + const stillRunning = await this.isStackRunning(projectName) + if (!stillRunning) { + logger.warn( + `Compose stack "${projectName}" exited before becoming ready (after ${attempts} attempts, ${Date.now() - startTime}ms)` + ) + return false + } + } + + const isReady = await tcpProbe(port) + if (isReady) { + return true + } + + await new Promise<void>((resolve) => globalThis.setTimeout(resolve, interval)) + } + + return false + } + + /** + * Parse the compose file and generate an override file for port remapping. + * Returns the host port that the primary web service will listen on. + * + * @param composeFile - Path to the main compose file + * @param identifier - Loom identifier (issue number, branch name) + * @param hostPort - Desired host port for the primary service + * @param dataDir - Directory where the override file will be generated + * @returns Path to the generated override file + */ + async prepareOverrideFile( + composeFile: string, + identifier: string | number, + hostPort: number, + dataDir: string + ): Promise<string> { + const mappings = await this.utils.parseComposeFile(composeFile) + + if (mappings.length === 0) { + throw new Error( + `No port mappings found in compose file "${composeFile}". ` + + 'Ensure at least one service exposes a port.' 
+ ) + } + + // Remap the primary service's host port to the desired port + const remapped = mappings.map((m, index) => ({ + ...m, + hostPort: index === 0 ? hostPort : m.hostPort, + })) + + return this.utils.generateOverrideFile(remapped, identifier, dataDir) + } + + /** + * Build the base docker compose args array with project name and file flags. + */ + private buildComposeArgs( + composeFile: string, + overrideFile: string, + projectName: string + ): string[] { + return [ + 'compose', + '--project-name', projectName, + '-f', composeFile, + '-f', overrideFile, + ] + } +} diff --git a/src/lib/DevServerManager.test.ts b/src/lib/DevServerManager.test.ts index 92a6a7a0..3dbfc3b2 100644 --- a/src/lib/DevServerManager.test.ts +++ b/src/lib/DevServerManager.test.ts @@ -1,8 +1,11 @@ import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest' -import { DevServerManager, type DockerConfig } from './DevServerManager.js' +import fs from 'fs/promises' +import { DevServerManager, type DockerConfig, type ComposeUtils } from './DevServerManager.js' import { ProcessManager } from './process/ProcessManager.js' import { DockerManager } from './DockerManager.js' import { DockerDevServerStrategy } from './DockerDevServerStrategy.js' +import { ComposeDevServerStrategy } from './ComposeDevServerStrategy.js' +import * as ComposeStrategyModule from './ComposeDevServerStrategy.js' import { execa, type ExecaChildProcess } from 'execa' import { setTimeout } from 'timers/promises' import * as devServerUtils from '../utils/dev-server.js' @@ -12,9 +15,11 @@ import * as packageJsonUtils from '../utils/package-json.js' // Mock dependencies vi.mock('execa') vi.mock('timers/promises') +vi.mock('fs/promises') vi.mock('./process/ProcessManager.js') vi.mock('./DockerManager.js') vi.mock('./DockerDevServerStrategy.js') +vi.mock('./ComposeDevServerStrategy.js') vi.mock('../utils/dev-server.js') vi.mock('../utils/package-manager.js') vi.mock('../utils/package-json.js') @@ -978,6 +983,353 @@ 
describe('DevServerManager', () => { }) }) + describe('Compose mode (auto-detection)', () => { + const dockerConfig: DockerConfig = { + dockerFile: './Dockerfile', + containerPort: 3000, + identifier: '872', + } + + let mockComposeUtils: ComposeUtils + let mockComposeStrategyInstance: { + isStackRunning: ReturnType<typeof vi.fn> + startDetached: ReturnType<typeof vi.fn> + startForeground: ReturnType<typeof vi.fn> + stop: ReturnType<typeof vi.fn> + waitForReady: ReturnType<typeof vi.fn> + prepareOverrideFile: ReturnType<typeof vi.fn> + } + + beforeEach(() => { + mockComposeUtils = { + parseComposeFile: vi.fn().mockResolvedValue([ + { service: 'web', hostPort: 3000, containerPort: 3000 }, + ]), + generateOverrideFile: vi.fn().mockResolvedValue('/override.yml'), + } + + mockComposeStrategyInstance = { + isStackRunning: vi.fn(), + startDetached: vi.fn(), + startForeground: vi.fn(), + stop: vi.fn(), + waitForReady: vi.fn(), + prepareOverrideFile: vi.fn().mockResolvedValue('/override.yml'), + } + + vi.mocked(ComposeDevServerStrategy).mockImplementation( + () => mockComposeStrategyInstance as unknown as ComposeDevServerStrategy + ) + + // Mock the static buildProjectName method + vi.mocked(ComposeDevServerStrategy.buildProjectName).mockImplementation( + (id: string | number) => `iloom-${id}` + ) + + vi.mocked(DockerManager.buildContainerName).mockReturnValue('iloom-dev-872') + vi.mocked(DockerManager.buildImageName).mockReturnValue('iloom-dev-872') + + // Mock fs.mkdir and fs.unlink to allow getComposeOverrideDir/cleanup to succeed + vi.mocked(fs.mkdir).mockResolvedValue(undefined) + vi.mocked(fs.unlink).mockResolvedValue(undefined) + + // findComposeFile returns a compose file path by default + vi.mocked(ComposeStrategyModule.findComposeFile).mockResolvedValue( + `${mockWorktreePath}/compose.yml` + ) + }) + + describe('ensureServerRunning with compose file', () => { + it('should detect running compose stack and skip start', async () => { + 
mockComposeStrategyInstance.isStackRunning.mockResolvedValue(true) + + // Create manager with injected composeUtils + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + const result = await managerWithCompose.ensureServerRunning(mockWorktreePath, 3872, dockerConfig) + + expect(result).toBe(true) + expect(mockComposeStrategyInstance.isStackRunning).toHaveBeenCalledWith('iloom-872') + expect(mockComposeStrategyInstance.startDetached).not.toHaveBeenCalled() + }) + + it('should prepare override file and start compose stack when not running', async () => { + mockComposeStrategyInstance.isStackRunning.mockResolvedValue(false) + mockComposeStrategyInstance.prepareOverrideFile.mockResolvedValue('/override.yml') + mockComposeStrategyInstance.startDetached.mockResolvedValue(undefined) + mockComposeStrategyInstance.waitForReady.mockResolvedValue(true) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + const result = await managerWithCompose.ensureServerRunning(mockWorktreePath, 3872, dockerConfig) + + expect(result).toBe(true) + expect(mockComposeStrategyInstance.prepareOverrideFile).toHaveBeenCalledWith( + `${mockWorktreePath}/compose.yml`, + '872', + 3872, + expect.stringContaining('compose-overrides') + ) + expect(mockComposeStrategyInstance.startDetached).toHaveBeenCalledWith( + `${mockWorktreePath}/compose.yml`, + '/override.yml', + 'iloom-872' + ) + }) + + it('should return false when compose stack fails to start', async () => { + mockComposeStrategyInstance.isStackRunning.mockResolvedValue(false) + mockComposeStrategyInstance.prepareOverrideFile.mockResolvedValue('/override.yml') + mockComposeStrategyInstance.startDetached.mockRejectedValue(new Error('compose failed')) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 
100, + }, mockComposeUtils) + + const result = await managerWithCompose.ensureServerRunning(mockWorktreePath, 3872, dockerConfig) + + expect(result).toBe(false) + }) + + it('should clean up and return false when port does not become ready within timeout', async () => { + mockComposeStrategyInstance.isStackRunning.mockResolvedValue(false) + mockComposeStrategyInstance.prepareOverrideFile.mockResolvedValue('/override.yml') + mockComposeStrategyInstance.startDetached.mockResolvedValue(undefined) + mockComposeStrategyInstance.waitForReady.mockResolvedValue(false) + mockComposeStrategyInstance.stop.mockResolvedValue(true) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 500, + checkInterval: 100, + }, mockComposeUtils) + + const result = await managerWithCompose.ensureServerRunning(mockWorktreePath, 3872, dockerConfig) + + expect(result).toBe(false) + expect(mockComposeStrategyInstance.stop).toHaveBeenCalled() + }) + + it('should not use DockerDevServerStrategy when compose file is found', async () => { + mockComposeStrategyInstance.isStackRunning.mockResolvedValue(true) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + await managerWithCompose.ensureServerRunning(mockWorktreePath, 3872, dockerConfig) + + // Docker container strategy should not have been used + expect(mockProcessManager.detectDevServer).not.toHaveBeenCalled() + }) + }) + + describe('ensureServerRunning falls back to Dockerfile when no compose file', () => { + it('should use DockerDevServerStrategy when no compose file exists', async () => { + // No compose file found + vi.mocked(ComposeStrategyModule.findComposeFile).mockResolvedValue(null) + + const mockDockerStrategyInstance = { + isContainerRunning: vi.fn().mockResolvedValue(true), + buildImage: vi.fn(), + resolveContainerPort: vi.fn(), + runContainerDetached: vi.fn(), + runContainerForeground: vi.fn(), + 
stopContainer: vi.fn(), + waitForReady: vi.fn(), + } + vi.mocked(DockerDevServerStrategy).mockImplementation( + () => mockDockerStrategyInstance as unknown as DockerDevServerStrategy + ) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + const result = await managerWithCompose.ensureServerRunning(mockWorktreePath, 3548, dockerConfig) + + expect(result).toBe(true) + expect(mockDockerStrategyInstance.isContainerRunning).toHaveBeenCalled() + expect(mockComposeStrategyInstance.isStackRunning).not.toHaveBeenCalled() + }) + }) + + describe('isServerRunning with compose detection', () => { + it('should check compose stack status when compose file found', async () => { + mockComposeStrategyInstance.isStackRunning.mockResolvedValue(true) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + const result = await managerWithCompose.isServerRunning(3872, dockerConfig, mockWorktreePath) + + expect(result).toBe(true) + expect(mockComposeStrategyInstance.isStackRunning).toHaveBeenCalledWith('iloom-872') + }) + + it('should use Docker container status when no compose file', async () => { + vi.mocked(ComposeStrategyModule.findComposeFile).mockResolvedValue(null) + + const mockDockerStrategyInstance = { + isContainerRunning: vi.fn().mockResolvedValue(true), + } + vi.mocked(DockerDevServerStrategy).mockImplementation( + () => mockDockerStrategyInstance as unknown as DockerDevServerStrategy + ) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + const result = await managerWithCompose.isServerRunning(3548, dockerConfig, mockWorktreePath) + + expect(result).toBe(true) + expect(mockDockerStrategyInstance.isContainerRunning).toHaveBeenCalled() + 
expect(mockComposeStrategyInstance.isStackRunning).not.toHaveBeenCalled() + }) + + it('should not check compose file when worktreePath is not provided', async () => { + const mockDockerStrategyInstance = { + isContainerRunning: vi.fn().mockResolvedValue(false), + } + vi.mocked(DockerDevServerStrategy).mockImplementation( + () => mockDockerStrategyInstance as unknown as DockerDevServerStrategy + ) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + // No worktreePath = no compose detection + await managerWithCompose.isServerRunning(3548, dockerConfig) + + expect(ComposeStrategyModule.findComposeFile).not.toHaveBeenCalled() + expect(mockDockerStrategyInstance.isContainerRunning).toHaveBeenCalled() + }) + }) + + describe('runServerForeground with compose detection', () => { + it('should run compose stack in foreground when compose file found', async () => { + mockComposeStrategyInstance.prepareOverrideFile.mockResolvedValue('/override.yml') + mockComposeStrategyInstance.startForeground.mockResolvedValue({}) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + const result = await managerWithCompose.runServerForeground( + mockWorktreePath, 3872, false, undefined, undefined, dockerConfig + ) + + expect(result).toEqual({}) + expect(mockComposeStrategyInstance.startForeground).toHaveBeenCalledWith( + `${mockWorktreePath}/compose.yml`, + '/override.yml', + 'iloom-872', + expect.objectContaining({ redirectToStderr: false }) + ) + }) + + it('should pass redirectToStderr to compose startForeground', async () => { + mockComposeStrategyInstance.prepareOverrideFile.mockResolvedValue('/override.yml') + mockComposeStrategyInstance.startForeground.mockResolvedValue({}) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, 
mockComposeUtils) + + await managerWithCompose.runServerForeground( + mockWorktreePath, 3872, true, undefined, undefined, dockerConfig + ) + + expect(mockComposeStrategyInstance.startForeground).toHaveBeenCalledWith( + expect.any(String), + expect.any(String), + 'iloom-872', + expect.objectContaining({ redirectToStderr: true }) + ) + }) + + it('should call onProcessStarted with undefined in compose mode', async () => { + mockComposeStrategyInstance.prepareOverrideFile.mockResolvedValue('/override.yml') + mockComposeStrategyInstance.startForeground.mockResolvedValue({}) + const onStart = vi.fn() + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + await managerWithCompose.runServerForeground( + mockWorktreePath, 3872, false, onStart, undefined, dockerConfig + ) + + expect(onStart).toHaveBeenCalledWith(undefined) + }) + }) + + describe('cleanup with compose stacks', () => { + it('should stop tracked compose stacks during cleanup', async () => { + mockComposeStrategyInstance.isStackRunning.mockResolvedValue(false) + mockComposeStrategyInstance.prepareOverrideFile.mockResolvedValue('/override.yml') + mockComposeStrategyInstance.startDetached.mockResolvedValue(undefined) + mockComposeStrategyInstance.waitForReady.mockResolvedValue(true) + mockComposeStrategyInstance.stop.mockResolvedValue(true) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + await managerWithCompose.ensureServerRunning(mockWorktreePath, 3872, dockerConfig) + + mockComposeStrategyInstance.stop.mockClear() + mockComposeStrategyInstance.stop.mockResolvedValue(true) + + await managerWithCompose.cleanup() + + expect(mockComposeStrategyInstance.stop).toHaveBeenCalledWith( + `${mockWorktreePath}/compose.yml`, + '/override.yml', + 'iloom-872' + ) + }) + + it('should handle compose cleanup errors gracefully', async () => { 
+ mockComposeStrategyInstance.isStackRunning.mockResolvedValue(false) + mockComposeStrategyInstance.prepareOverrideFile.mockResolvedValue('/override.yml') + mockComposeStrategyInstance.startDetached.mockResolvedValue(undefined) + mockComposeStrategyInstance.waitForReady.mockResolvedValue(true) + + const managerWithCompose = new DevServerManager(mockProcessManager, { + startupTimeout: 5000, + checkInterval: 100, + }, mockComposeUtils) + + await managerWithCompose.ensureServerRunning(mockWorktreePath, 3872, dockerConfig) + + mockComposeStrategyInstance.stop.mockRejectedValue(new Error('Docker daemon unreachable')) + + // Should not throw + await expect(managerWithCompose.cleanup()).resolves.not.toThrow() + }) + }) + }) + describe('default options', () => { it('should use default timeout (180s) and interval if not specified', () => { const defaultManager = new DevServerManager() diff --git a/src/lib/DevServerManager.ts b/src/lib/DevServerManager.ts index 8a19b01a..8063f547 100644 --- a/src/lib/DevServerManager.ts +++ b/src/lib/DevServerManager.ts @@ -1,7 +1,10 @@ import path from 'path' +import os from 'os' +import fs from 'fs/promises' import { ProcessManager } from './process/ProcessManager.js' import { DockerManager, type DockerConfig } from './DockerManager.js' import { DockerDevServerStrategy, type DockerConfig as StrategyDockerConfig, type DockerUtils } from './DockerDevServerStrategy.js' +import { ComposeDevServerStrategy, findComposeFile, type ComposeUtils } from './ComposeDevServerStrategy.js' import { NativeDevServerStrategy } from './NativeDevServerStrategy.js' import { logger } from '../utils/logger.js' @@ -23,6 +26,16 @@ const dockerUtils: DockerUtils = { assertDockerAvailable: () => DockerManager.assertAvailable(), } +/** + * Ensure the compose override directory exists and return its path. + * Located under the global iloom config dir to keep worktrees clean. 
+ */ +async function getComposeOverrideDir(): Promise<string> { + const dirPath = path.join(os.homedir(), '.config', 'iloom-ai', 'compose-overrides') + await fs.mkdir(dirPath, { recursive: true }) + return dirPath +} + function getStartupTimeout(): number { const envTimeout = process.env.ILOOM_DEV_SERVER_TIMEOUT if (envTimeout) { @@ -52,6 +65,9 @@ export interface DevServerManagerOptions { // Re-export DockerConfig from DockerManager for backward compatibility export type { DockerConfig } from './DockerManager.js' +// Re-export compose types for callers that need them +export type { ComposeUtils } from './ComposeDevServerStrategy.js' + /** * Convert a DockerConfig (from DockerManager) to a StrategyDockerConfig * (for DockerDevServerStrategy). @@ -72,17 +88,26 @@ function toStrategyConfig(config: DockerConfig): StrategyDockerConfig { * * When devServer config is absent OR mode is not 'docker', behavior is identical * to the native process-based implementation via NativeDevServerStrategy. - * When Docker mode is configured, all operations delegate to DockerDevServerStrategy. + * When Docker mode is configured, auto-detects compose files (compose.yml, + * docker-compose.yml). If a compose file is found, delegates to + * ComposeDevServerStrategy; otherwise falls back to DockerDevServerStrategy. */ export class DevServerManager { private readonly processManager: ProcessManager private readonly options: Required<DevServerManagerOptions> private readonly nativeStrategy: NativeDevServerStrategy + private readonly composeUtils: ComposeUtils private runningDockerContainers: Map<number, string> = new Map() + /** + * Tracks running compose stacks: projectName -> { port, composeFile, overrideFile } + * Keyed by projectName (not port) to avoid overwrite collisions on retry. 
+ */ + private runningComposeStacks: Map<string, { port: number; composeFile: string; overrideFile: string }> = new Map() constructor( processManager?: ProcessManager, - options: DevServerManagerOptions = {} + options: DevServerManagerOptions = {}, + composeUtils?: ComposeUtils ) { this.processManager = processManager ?? new ProcessManager() this.options = { @@ -94,6 +119,23 @@ export class DevServerManager { this.options.startupTimeout, this.options.checkInterval ) + // Default compose utils will be replaced by real implementations once + // the compose-file-parser module is merged (sibling issue). + // For now, provide a placeholder that throws informative errors. + this.composeUtils = composeUtils ?? { + parseComposeFile: async (): Promise<never> => { + throw new Error( + 'Compose file parsing is not yet available. ' + + 'The compose-file-parser module has not been merged.' + ) + }, + generateOverrideFile: async (): Promise<never> => { + throw new Error( + 'Compose override file generation is not yet available. ' + + 'The compose-file-parser module has not been merged.' + ) + }, + } } /** @@ -104,10 +146,21 @@ export class DevServerManager { return new DockerDevServerStrategy(toStrategyConfig(dockerConfig), dockerUtils) } + /** + * Create a ComposeDevServerStrategy backed by the injected compose utils. + */ + private createComposeStrategy(): ComposeDevServerStrategy { + return new ComposeDevServerStrategy(this.composeUtils) + } + /** * Ensure dev server is running on the specified port. * If not running, start it and wait for it to be ready. 
* + * When dockerConfig is provided, auto-detects compose files in the worktree: + * - If a compose file exists, uses ComposeDevServerStrategy + * - Otherwise falls back to DockerDevServerStrategy (single Dockerfile) + * * @param worktreePath - Path to the worktree * @param port - Port the server should run on * @param dockerConfig - Optional Docker configuration for container-based server @@ -116,8 +169,33 @@ export class DevServerManager { async ensureServerRunning(worktreePath: string, port: number, dockerConfig?: DockerConfig): Promise<boolean> { logger.debug(`Checking if dev server is running on port ${port}...`) - // Docker mode: check if container is already running + // Docker mode: auto-detect compose file vs. single Dockerfile if (dockerConfig) { + const composeFile = await findComposeFile(worktreePath) + + if (composeFile) { + // Compose mode + const projectName = ComposeDevServerStrategy.buildProjectName(dockerConfig.identifier) + const strategy = this.createComposeStrategy() + const isRunning = await strategy.isStackRunning(projectName) + if (isRunning) { + logger.debug(`Compose stack "${projectName}" already running on port ${port}`) + return true + } + + logger.info(`Compose stack not running on port ${port}, starting...`) + try { + await this.startComposeServer(composeFile, projectName, port, dockerConfig.identifier, strategy) + return true + } catch (error) { + logger.error( + `Failed to start compose stack: ${error instanceof Error ? error.message : 'Unknown error'}` + ) + return false + } + } + + // Single Dockerfile mode const strategy = this.createDockerStrategy(dockerConfig) const containerName = dockerUtils.buildContainerName(dockerConfig.identifier) const isRunning = await strategy.isContainerRunning(containerName) @@ -161,6 +239,49 @@ export class DevServerManager { } } + /** + * Start compose stack in background and wait for the primary port to be ready. 
+ */ + private async startComposeServer( + composeFile: string, + projectName: string, + port: number, + identifier: string, + strategy: ComposeDevServerStrategy + ): Promise<void> { + const overrideFile = await strategy.prepareOverrideFile( + composeFile, + identifier, + port, + await getComposeOverrideDir() + ) + + await strategy.startDetached(composeFile, overrideFile, projectName) + + // Track for cleanup + this.runningComposeStacks.set(projectName, { port, composeFile, overrideFile }) + + // Wait for the primary service port to be ready + logger.info(`Waiting for compose stack "${projectName}" to start on port ${port}...`) + const ready = await strategy.waitForReady( + port, + this.options.startupTimeout, + this.options.checkInterval, + projectName + ) + + if (!ready) { + // Attempt cleanup on failure + await strategy.stop(composeFile, overrideFile, projectName).catch(() => undefined) + this.runningComposeStacks.delete(projectName) + throw new Error( + `Compose stack "${projectName}" failed to start within ${this.options.startupTimeout}ms timeout` + ) + } + + logger.success(`Compose stack "${projectName}" started successfully on port ${port}`) + } + /** * Start dev server in Docker container (background) and wait for it to be ready. * Builds the image, resolves the container port, starts the container detached, @@ -220,14 +341,25 @@ export class DevServerManager { } /** - * Check if a dev server is running on the specified port + * Check if a dev server is running on the specified port. + * In docker mode, auto-detects compose vs. single Dockerfile strategy. 
 *
    * @param port - Port to check
-   * @param dockerConfig - Optional Docker configuration; when provided, checks container status
+   * @param dockerConfig - Optional Docker configuration; when provided, checks container/stack status
+   * @param worktreePath - Optional worktree path; when provided with dockerConfig, enables compose detection
    * @returns true if server is running, false otherwise
    */
-  async isServerRunning(port: number, dockerConfig?: DockerConfig): Promise<boolean> {
+  async isServerRunning(port: number, dockerConfig?: DockerConfig, worktreePath?: string): Promise<boolean> {
     if (dockerConfig) {
+      // Check for compose file if worktreePath is provided
+      const composeFile = worktreePath ? await findComposeFile(worktreePath) : null
+
+      if (composeFile) {
+        const projectName = ComposeDevServerStrategy.buildProjectName(dockerConfig.identifier)
+        const strategy = this.createComposeStrategy()
+        return strategy.isStackRunning(projectName)
+      }
+
       const strategy = this.createDockerStrategy(dockerConfig)
       const containerName = dockerUtils.buildContainerName(dockerConfig.identifier)
       return strategy.isContainerRunning(containerName)
@@ -239,6 +371,7 @@
   /**
    * Run dev server in foreground mode (blocking).
    * This method blocks until the server is stopped (e.g., via Ctrl+C).
+   * In docker mode, auto-detects compose files to choose the right strategy.
    *
    * @param worktreePath - Path to the worktree
    * @param port - Port the server should run on
@@ -254,8 +387,41 @@
     envOverrides?: Record<string, string>,
     dockerConfig?: DockerConfig
   ): Promise<{ pid?: number }> {
-    // Docker mode: build image and run container in foreground
+    // Docker mode: auto-detect compose vs.
single Dockerfile if (dockerConfig) { + const composeFile = await findComposeFile(worktreePath) + + if (composeFile) { + // Compose foreground mode + logger.debug(`Starting compose stack in foreground on port ${port}`) + const projectName = ComposeDevServerStrategy.buildProjectName(dockerConfig.identifier) + const strategy = this.createComposeStrategy() + + const overrideFile = await strategy.prepareOverrideFile( + composeFile, + dockerConfig.identifier, + port, + await getComposeOverrideDir() + ) + + if (onProcessStarted) { + onProcessStarted(undefined) + } + + this.runningComposeStacks.set(projectName, { port, composeFile, overrideFile }) + try { + await strategy.startForeground(composeFile, overrideFile, projectName, { + redirectToStderr, + envOverrides, + }) + } finally { + this.runningComposeStacks.delete(projectName) + } + + return {} + } + + // Single Dockerfile foreground mode logger.debug(`Starting Docker dev server in foreground on port ${port}`) const strategy = this.createDockerStrategy(dockerConfig) @@ -306,7 +472,7 @@ export class DevServerManager { } /** - * Clean up all running server processes and Docker containers. + * Clean up all running server processes, Docker containers, and compose stacks. * This should be called when the manager is being disposed. */ async cleanup(): Promise<void> { @@ -327,5 +493,21 @@ export class DevServerManager { } } this.runningDockerContainers.clear() + + // Clean up compose stacks + for (const [projectName, { port, composeFile, overrideFile }] of this.runningComposeStacks.entries()) { + try { + logger.debug(`Cleaning up compose stack "${projectName}" on port ${port}`) + const strategy = this.createComposeStrategy() + await strategy.stop(composeFile, overrideFile, projectName) + } catch (error) { + logger.warn( + `Failed to stop compose stack "${projectName}" on port ${port}: ${error instanceof Error ? 
error.message : 'Unknown error'}` + ) + } + // Clean up the generated override file regardless of whether stop succeeded + await fs.unlink(overrideFile).catch(() => undefined) + } + this.runningComposeStacks.clear() } } diff --git a/src/lib/DockerManager.ts b/src/lib/DockerManager.ts index db2d4b06..a26dd5c2 100644 --- a/src/lib/DockerManager.ts +++ b/src/lib/DockerManager.ts @@ -1,3 +1,6 @@ +import path from 'path' +import os from 'os' +import { unlink, access } from 'fs/promises' import { execa } from 'execa' import { logger } from '../utils/logger.js' import { @@ -371,6 +374,128 @@ export class DockerManager { await execa('docker', ['rm', '-f', containerName], { reject: false }) } + /** + * Build the compose project name for a given identifier. + * Convention: `iloom-{sanitizedIdentifier}` + * + * @param identifier - Issue number, branch name, or other identifier + * @returns Compose project name + */ + static buildComposeProjectName(identifier: string | number): string { + const sanitized = sanitizeContainerName(String(identifier)) + return `iloom-${sanitized}` + } + + /** + * Get the directory where compose override files are stored. + * Files are stored at `~/.config/iloom-ai/compose-overrides/` + * + * @returns Absolute path to the compose overrides directory + */ + static getComposeOverrideDir(): string { + return path.join(os.homedir(), '.config', 'iloom-ai', 'compose-overrides') + } + + /** + * Get the full path to the compose override file for a given identifier. + * + * @param identifier - Issue number, branch name, or other identifier + * @returns Absolute path to the override file + */ + static getComposeOverridePath(identifier: string | number): string { + const projectName = DockerManager.buildComposeProjectName(identifier) + return path.join(DockerManager.getComposeOverrideDir(), `${projectName}.yml`) + } + + /** + * Check if a compose stack is currently running (has at least one running container). 
+   *
+   * @param identifier - Issue number, branch name, or other identifier
+   * @returns true if the compose stack has running containers, false otherwise
+   */
+  static async isComposeStackRunning(identifier: string | number): Promise<boolean> {
+    const projectName = DockerManager.buildComposeProjectName(identifier)
+    try {
+      const result = await execa('docker', [
+        'compose',
+        '--project-name', projectName,
+        'ps',
+        '--status', 'running',
+        '-q',
+      ], { reject: false })
+      return result.exitCode === 0 && result.stdout.trim().length > 0
+    } catch {
+      return false
+    }
+  }
+
+  /**
+   * Tear down a compose stack by project name.
+   * Gracefully handles stacks that are already stopped or don't exist.
+   *
+   * @param identifier - Issue number, branch name, or other identifier
+   * @throws Error if `docker compose down` exits with a non-zero code
+   */
+  static async teardownComposeStack(identifier: string | number): Promise<void> {
+    const projectName = DockerManager.buildComposeProjectName(identifier)
+    logger.info(`Tearing down compose stack "${projectName}"...`)
+    const result = await execa('docker', [
+      'compose',
+      '--project-name', projectName,
+      'down',
+    ], { reject: false })
+
+    if (result.exitCode !== 0) {
+      throw new Error(
+        `docker compose down failed with exit code ${result.exitCode}: ${result.stderr}`
+      )
+    }
+
+    logger.success(`Compose stack "${projectName}" torn down`)
+  }
+
+  /**
+   * Check if a compose override file exists for a given identifier.
+ * + * @param identifier - Issue number, branch name, or other identifier + * @returns true if the override file exists, false otherwise + */ + static async hasComposeOverrideFile(identifier: string | number): Promise<boolean> { + const overridePath = DockerManager.getComposeOverridePath(identifier) + try { + await access(overridePath) + return true + } catch (error) { + if (error instanceof Error && 'code' in error && (error as NodeJS.ErrnoException).code === 'ENOENT') { + return false + } + throw error + } + } + + /** + * Remove the compose override file for a given identifier from the data directory. + * Returns false (non-fatal) if the file does not exist. + * Throws for unexpected errors. + * + * @param identifier - Issue number, branch name, or other identifier + * @returns true if the file was removed, false if it did not exist + */ + static async removeComposeOverrideFile(identifier: string | number): Promise<boolean> { + const overridePath = DockerManager.getComposeOverridePath(identifier) + try { + await unlink(overridePath) + logger.debug(`Removed compose override file: ${overridePath}`) + return true + } catch (error) { + if (error instanceof Error && 'code' in error && (error as NodeJS.ErrnoException).code === 'ENOENT') { + logger.debug(`Compose override file not found: ${overridePath}`) + return false + } + throw error + } + } + /** * Build a DockerConfig from iloom web settings and an identifier. * Centralizes the Docker config extraction logic used by dev-server, open, and run commands. 
diff --git a/src/lib/ResourceCleanup.test.ts b/src/lib/ResourceCleanup.test.ts
index 3a05cd31..26d381bf 100644
--- a/src/lib/ResourceCleanup.test.ts
+++ b/src/lib/ResourceCleanup.test.ts
@@ -619,6 +619,47 @@ describe('ResourceCleanup', () => {
       expect(DockerManager.buildContainerName).toHaveBeenCalledWith('feat/issue-25')
       expect(result).toBe(true)
     })
+
+    it('should tear down compose stack when override file exists for identifier', async () => {
+      vi.mocked(DockerManager.hasComposeOverrideFile).mockResolvedValueOnce(true)
+      vi.mocked(DockerManager.teardownComposeStack).mockResolvedValueOnce(undefined)
+
+      const result = await resourceCleanup.terminateDevServer(3025, 25)
+
+      expect(DockerManager.hasComposeOverrideFile).toHaveBeenCalledWith(25)
+      expect(DockerManager.teardownComposeStack).toHaveBeenCalledWith(25)
+      expect(result).toBe(true)
+      // Should NOT fall through to single-container or process detection
+      expect(DockerManager.isContainerRunning).not.toHaveBeenCalled()
+      expect(mockProcessManager.detectDevServer).not.toHaveBeenCalled()
+    })
+
+    it('should fall back to single-container cleanup when no compose override file exists', async () => {
+      vi.mocked(DockerManager.hasComposeOverrideFile).mockResolvedValueOnce(false)
+      vi.mocked(DockerManager.buildContainerName).mockReturnValue('iloom-dev-25')
+      vi.mocked(DockerManager.isContainerRunning).mockResolvedValueOnce(true)
+      vi.mocked(DockerManager.stopAndRemoveContainer).mockResolvedValueOnce(true)
+
+      const result = await resourceCleanup.terminateDevServer(3025, 25)
+
+      expect(DockerManager.hasComposeOverrideFile).toHaveBeenCalledWith(25)
+      expect(DockerManager.teardownComposeStack).not.toHaveBeenCalled()
+      expect(DockerManager.isContainerRunning).toHaveBeenCalledWith('iloom-dev-25')
+      expect(result).toBe(true)
+    })
+
+    it('should propagate error when compose stack teardown fails', async () => {
+      vi.mocked(DockerManager.hasComposeOverrideFile).mockResolvedValueOnce(true)
+      vi.mocked(DockerManager.teardownComposeStack).mockRejectedValueOnce(
+        new Error('docker compose down failed with exit code 1: error')
+      )
+
+      await expect(resourceCleanup.terminateDevServer(3025, 25)).rejects.toThrow(
+        'docker compose down failed with exit code 1: error'
+      )
+
+      expect(DockerManager.teardownComposeStack).toHaveBeenCalledWith(25)
+    })
   })

   describe('cleanupWorktree - Docker mode', () => {
@@ -761,6 +802,162 @@
       expect(result.success).toBe(true)
       expect(result.operations[0]?.message).toContain('No dev server running')
     })
+
+    it('should remove compose override file and report compose-override operation when docker mode is configured', async () => {
+      mockSettingsManager.loadSettings = vi.fn().mockResolvedValue({
+        capabilities: {
+          web: {
+            basePort: 3000,
+            devServer: 'docker',
+          },
+        },
+      })
+
+      vi.mocked(mockGitWorktree.findWorktreeForIssue).mockResolvedValueOnce(mockWorktree)
+      vi.mocked(mockProcessManager.calculatePort).mockReturnValue(3025)
+
+      // No compose stack running (no override file in terminateDevServer)
+      vi.mocked(DockerManager.hasComposeOverrideFile).mockResolvedValueOnce(false)
+      vi.mocked(DockerManager.buildContainerName).mockReturnValue('iloom-dev-25')
+      vi.mocked(DockerManager.isContainerRunning).mockResolvedValueOnce(false)
+      vi.mocked(mockProcessManager.detectDevServer).mockResolvedValueOnce(null)
+
+      // Override file removed in Step 1.6
+      vi.mocked(DockerManager.removeComposeOverrideFile).mockResolvedValueOnce(true)
+
+      vi.mocked(mockGitWorktree.removeWorktree).mockResolvedValueOnce(undefined)
+
+      const parsedInput = {
+        type: 'issue' as const,
+        number: 25,
+        originalInput: 'issue-25',
+      }
+
+      const result = await resourceCleanup.cleanupWorktree(parsedInput, {
+        keepDatabase: true,
+      })
+
+      expect(result.success).toBe(true)
+      expect(DockerManager.removeComposeOverrideFile).toHaveBeenCalledWith('25')
+      const composeOp = result.operations.find(op => op.type === 'compose-override')
+      expect(composeOp).toBeDefined()
+      expect(composeOp?.success).toBe(true)
+      expect(composeOp?.message).toContain('removed')
+    })
+
+    it('should report compose-override operation as not found when override file is absent', async () => {
+      mockSettingsManager.loadSettings = vi.fn().mockResolvedValue({
+        capabilities: {
+          web: {
+            basePort: 3000,
+            devServer: 'docker',
+          },
+        },
+      })
+
+      vi.mocked(mockGitWorktree.findWorktreeForIssue).mockResolvedValueOnce(mockWorktree)
+      vi.mocked(mockProcessManager.calculatePort).mockReturnValue(3025)
+
+      vi.mocked(DockerManager.hasComposeOverrideFile).mockResolvedValueOnce(false)
+      vi.mocked(DockerManager.buildContainerName).mockReturnValue('iloom-dev-25')
+      vi.mocked(DockerManager.isContainerRunning).mockResolvedValueOnce(false)
+      vi.mocked(mockProcessManager.detectDevServer).mockResolvedValueOnce(null)
+
+      // Override file not found
+      vi.mocked(DockerManager.removeComposeOverrideFile).mockResolvedValueOnce(false)
+
+      vi.mocked(mockGitWorktree.removeWorktree).mockResolvedValueOnce(undefined)
+
+      const parsedInput = {
+        type: 'issue' as const,
+        number: 25,
+        originalInput: 'issue-25',
+      }
+
+      const result = await resourceCleanup.cleanupWorktree(parsedInput, {
+        keepDatabase: true,
+      })
+
+      expect(result.success).toBe(true)
+      const composeOp = result.operations.find(op => op.type === 'compose-override')
+      expect(composeOp).toBeDefined()
+      expect(composeOp?.success).toBe(true)
+      expect(composeOp?.message).toContain('No compose override file found')
+    })
+
+    it('should handle compose override file removal failure as non-fatal', async () => {
+      mockSettingsManager.loadSettings = vi.fn().mockResolvedValue({
+        capabilities: {
+          web: {
+            basePort: 3000,
+            devServer: 'docker',
+          },
+        },
+      })
+
+      vi.mocked(mockGitWorktree.findWorktreeForIssue).mockResolvedValueOnce(mockWorktree)
+      vi.mocked(mockProcessManager.calculatePort).mockReturnValue(3025)
+
+      vi.mocked(DockerManager.hasComposeOverrideFile).mockResolvedValueOnce(false)
+      vi.mocked(DockerManager.buildContainerName).mockReturnValue('iloom-dev-25')
+      vi.mocked(DockerManager.isContainerRunning).mockResolvedValueOnce(false)
+      vi.mocked(mockProcessManager.detectDevServer).mockResolvedValueOnce(null)
+
+      // Override file removal fails
+      vi.mocked(DockerManager.removeComposeOverrideFile).mockRejectedValueOnce(
+        new Error('Permission denied')
+      )
+
+      vi.mocked(mockGitWorktree.removeWorktree).mockResolvedValueOnce(undefined)
+
+      const parsedInput = {
+        type: 'issue' as const,
+        number: 25,
+        originalInput: 'issue-25',
+      }
+
+      const result = await resourceCleanup.cleanupWorktree(parsedInput, {
+        keepDatabase: true,
+      })
+
+      // Cleanup should still succeed despite override file removal failure
+      expect(result.success).toBe(true)
+      const composeOp = result.operations.find(op => op.type === 'compose-override')
+      expect(composeOp).toBeDefined()
+      expect(composeOp?.success).toBe(false)
+      expect(composeOp?.error).toContain('Permission denied')
+    })
+
+    it('should not attempt compose override file removal when devServer is not docker mode', async () => {
+      mockSettingsManager.loadSettings = vi.fn().mockResolvedValue({
+        capabilities: {
+          web: {
+            basePort: 3000,
+            devServer: 'process',
+          },
+        },
+      })
+
+      vi.mocked(mockGitWorktree.findWorktreeForIssue).mockResolvedValueOnce(mockWorktree)
+      vi.mocked(mockProcessManager.calculatePort).mockReturnValue(3025)
+      vi.mocked(mockProcessManager.detectDevServer).mockResolvedValueOnce(null)
+      vi.mocked(mockGitWorktree.removeWorktree).mockResolvedValueOnce(undefined)
+
+      const parsedInput = {
+        type: 'issue' as const,
+        number: 25,
+        originalInput: 'issue-25',
+      }
+
+      const result = await resourceCleanup.cleanupWorktree(parsedInput, {
+        keepDatabase: true,
+      })
+
+      expect(result.success).toBe(true)
+      expect(DockerManager.removeComposeOverrideFile).not.toHaveBeenCalled()
+      const composeOp = result.operations.find(op => op.type === 'compose-override')
+      expect(composeOp).toBeUndefined()
+    })
   })

   describe('deleteBranch', () => {
diff --git a/src/lib/ResourceCleanup.ts b/src/lib/ResourceCleanup.ts
index 2fc34d1a..0a03325b 100644
--- a/src/lib/ResourceCleanup.ts
+++ b/src/lib/ResourceCleanup.ts
@@ -109,6 +109,7 @@ export class ResourceCleanup {
     // Step 1.5: Terminate dev server if applicable
     // Done after worktree is found so we can use getWorkspacePort which checks .env PORT overrides
+    // Step 1.6: Remove compose override file from data directory (if compose-based loom)
     {
       const settings = await this.settingsManager.loadSettings()
       const port = await getWorkspacePort({
@@ -148,6 +149,29 @@ export class ResourceCleanup {
           error: err.message,
         })
       }
+
+      // Step 1.6: Remove compose override file (non-fatal, only in docker mode)
+      if (dockerIdentifier) {
+        try {
+          const removed = await DockerManager.removeComposeOverrideFile(dockerIdentifier)
+          operations.push({
+            type: 'compose-override',
+            success: true,
+            message: removed
+              ? `Compose override file removed for identifier: ${dockerIdentifier}`
+              : `No compose override file found for identifier: ${dockerIdentifier}`,
+          })
+        } catch (error) {
+          const err = error instanceof Error ? error : new Error('Unknown error')
+          getLogger().warn(`Failed to remove compose override file: ${err.message}`)
+          operations.push({
+            type: 'compose-override',
+            success: false,
+            message: `Failed to remove compose override file for identifier: ${dockerIdentifier}`,
+            error: err.message,
+          })
+        }
+      }
     }
   }
@@ -587,8 +611,19 @@ export class ResourceCleanup {
   async terminateDevServer(port: number, dockerIdentifier?: string | number): Promise<boolean> {
     getLogger().debug(`Checking for dev server on port ${port}`)

-    // Try Docker container cleanup first if identifier provided
+    // Try Docker cleanup first if identifier provided
     if (dockerIdentifier !== undefined) {
+      // Check for compose stack first (identified by presence of override file)
+      const hasOverrideFile = await DockerManager.hasComposeOverrideFile(dockerIdentifier)
+
+      if (hasOverrideFile) {
+        getLogger().info(`Compose override file found, tearing down compose stack for identifier: ${dockerIdentifier}`)
+        // teardownComposeStack throws on failure, so reaching here means success
+        await DockerManager.teardownComposeStack(dockerIdentifier)
+        return true
+      }
+
+      // Fall back to single-container Docker cleanup
       const containerName = DockerManager.buildContainerName(dockerIdentifier)
       const isRunning = await DockerManager.isContainerRunning(containerName)
       if (isRunning) {
diff --git a/src/types/cleanup.ts b/src/types/cleanup.ts
index 76eddc91..a884277c 100644
--- a/src/types/cleanup.ts
+++ b/src/types/cleanup.ts
@@ -47,7 +47,7 @@ export interface CleanupResult {
  */
 export interface OperationResult {
   /** Type of operation performed */
-  type: 'dev-server' | 'worktree' | 'branch' | 'database' | 'cli-symlinks' | 'recap' | 'metadata' | 'trust'
+  type: 'dev-server' | 'worktree' | 'branch' | 'database' | 'cli-symlinks' | 'recap' | 'metadata' | 'trust' | 'compose-override'
   /** Whether operation succeeded */
   success: boolean
   /** Human-readable message */
diff --git a/src/types/telemetry.ts b/src/types/telemetry.ts
index bca6df91..22840cff 100644
--- a/src/types/telemetry.ts
+++ b/src/types/telemetry.ts
@@ -93,6 +93,7 @@ export interface InitStartedProperties {

 export interface InitCompletedProperties {
   mode: 'accept-defaults' | 'guided' | 'guided-custom-prompt'
+  compose_detected?: boolean
 }

 export interface AutoSwarmStartedProperties {
diff --git a/src/utils/compose.test.ts b/src/utils/compose.test.ts
new file mode 100644
index 00000000..e7d3399f
--- /dev/null
+++ b/src/utils/compose.test.ts
@@ -0,0 +1,378 @@
+import { describe, it, expect, vi, beforeEach } from 'vitest'
+import { readFile } from 'fs/promises'
+import fs from 'fs-extra'
+import { parseComposeFile, generateOverrideFile } from './compose.js'
+import type { ComposePortMapping } from './compose.js'
+
+vi.mock('fs/promises')
+vi.mock('fs-extra')
+
+describe('parseComposeFile', () => {
+  it('parses short syntax ports ("3000:3000")', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "3000:3000"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'web', hostPort: 3000, containerPort: 3000 },
+    ])
+  })
+
+  it('parses short syntax with protocol ("3000:3000/tcp")', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "3000:3000/tcp"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'web', hostPort: 3000, containerPort: 3000, protocol: 'tcp' },
+    ])
+  })
+
+  it('parses short syntax with udp protocol ("8080:80/udp")', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "8080:80/udp"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'web', hostPort: 8080, containerPort: 80, protocol: 'udp' },
+    ])
+  })
+
+  it('parses short syntax with different host and container ports ("8080:80")', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "8080:80"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'web', hostPort: 8080, containerPort: 80 },
+    ])
+  })
+
+  it('parses IP-bound short syntax ("127.0.0.1:5432:5432") and preserves hostIp', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  db:\n    ports:\n      - "127.0.0.1:5432:5432"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'db', hostPort: 5432, containerPort: 5432, hostIp: '127.0.0.1' },
+    ])
+  })
+
+  it('parses IP-bound short syntax with protocol ("127.0.0.1:8080:80/tcp")', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  db:\n    ports:\n      - "127.0.0.1:8080:80/tcp"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'db', hostPort: 8080, containerPort: 80, protocol: 'tcp', hostIp: '127.0.0.1' },
+    ])
+  })
+
+  it('parses long-form syntax (target/published)', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - target: 80\n        published: 8080\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'web', hostPort: 8080, containerPort: 80 },
+    ])
+  })
+
+  it('parses long-form with protocol', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - target: 80\n        published: 8080\n        protocol: udp\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'web', hostPort: 8080, containerPort: 80, protocol: 'udp' },
+    ])
+  })
+
+  it('parses long-form with host_ip and preserves it', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  db:\n    ports:\n      - target: 5432\n        published: 5432\n        host_ip: "127.0.0.1"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'db', hostPort: 5432, containerPort: 5432, hostIp: '127.0.0.1' },
+    ])
+  })
+
+  it('returns empty array for services with no ports', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  db:\n    image: postgres:15\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([])
+  })
+
+  it('returns empty array for compose file with no services key', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'version: "3.8"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([])
+  })
+
+  it('returns empty array for empty compose file', async () => {
+    vi.mocked(readFile).mockResolvedValue('')
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([])
+  })
+
+  it('handles multiple services with ports', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "3000:3000"\n  api:\n    ports:\n      - "4000:4000"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toHaveLength(2)
+    expect(result).toContainEqual({ service: 'web', hostPort: 3000, containerPort: 3000 })
+    expect(result).toContainEqual({ service: 'api', hostPort: 4000, containerPort: 4000 })
+  })
+
+  it('handles service with multiple port mappings', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "3000:3000"\n      - "3001:3001"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toHaveLength(2)
+    expect(result[0]).toEqual({ service: 'web', hostPort: 3000, containerPort: 3000 })
+    expect(result[1]).toEqual({ service: 'web', hostPort: 3001, containerPort: 3001 })
+  })
+
+  it('skips container-only port entries (no host port)', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "3000"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([])
+  })
+
+  it('skips port range entries ("8080-8081:8080-8081")', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "8080-8081:8080-8081"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([])
+  })
+
+  it('skips container-only port range entries', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "8080-8081"\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([])
+  })
+
+  it('skips long-form entries without published (host) port', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - target: 80\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([])
+  })
+
+  it('throws on file read error (file not found)', async () => {
+    vi.mocked(readFile).mockRejectedValue(
+      Object.assign(new Error('ENOENT: no such file or directory'), { code: 'ENOENT' })
+    )
+
+    await expect(parseComposeFile('/nonexistent/docker-compose.yml')).rejects.toThrow('ENOENT')
+  })
+
+  it('mixes services with and without ports', async () => {
+    vi.mocked(readFile).mockResolvedValue(
+      'services:\n  web:\n    ports:\n      - "3000:3000"\n  db:\n    image: postgres:15\n'
+    )
+
+    const result = await parseComposeFile('/project/docker-compose.yml')
+
+    expect(result).toEqual([
+      { service: 'web', hostPort: 3000, containerPort: 3000 },
+    ])
+  })
+})
+
+describe('generateOverrideFile', () => {
+  beforeEach(() => {
+    vi.mocked(fs.ensureDir).mockResolvedValue(undefined as never)
+    vi.mocked(fs.writeFile).mockResolvedValue(undefined as never)
+  })
+
+  it('generates valid override YAML with offset ports (numeric identifier)', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'web', hostPort: 3000, containerPort: 3000 },
+    ]
+
+    const result = await generateOverrideFile(mappings, 42, '/data/issue-42')
+
+    // hostPort 3000 + identifier 42 = 3042
+    expect(result).toBe('/data/issue-42/iloom-42.yml')
+    expect(fs.ensureDir).toHaveBeenCalledWith('/data/issue-42')
+
+    const writeCallArgs = vi.mocked(fs.writeFile).mock.calls[0]
+    expect(writeCallArgs[0]).toBe('/data/issue-42/iloom-42.yml')
+    const writtenContent = String(writeCallArgs[1])
+    expect(writtenContent).toContain('3042:3000')
+  })
+
+  it('generates valid override YAML with offset ports (string identifier)', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'web', hostPort: 3000, containerPort: 3000 },
+    ]
+
+    const result = await generateOverrideFile(mappings, '42', '/data/issue-42')
+
+    expect(result).toBe('/data/issue-42/iloom-42.yml')
+
+    const writeCallArgs = vi.mocked(fs.writeFile).mock.calls[0]
+    const writtenContent = String(writeCallArgs[1])
+    expect(writtenContent).toContain('3042:3000')
+  })
+
+  it('throws for non-numeric string identifier', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'web', hostPort: 3000, containerPort: 3000 },
+    ]
+
+    await expect(generateOverrideFile(mappings, 'issue-42', '/data')).rejects.toThrow(
+      'Invalid identifier: "issue-42"'
+    )
+  })
+
+  it('wraps ports exceeding 65535', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'web', hostPort: 65000, containerPort: 8080 },
+    ]
+
+    // 65000 + 1000 = 66000, which exceeds 65535, should wrap
+    const result = await generateOverrideFile(mappings, 1000, '/data/issue-1000')
+
+    expect(result).toBe('/data/issue-1000/iloom-1000.yml')
+
+    const writeCallArgs = vi.mocked(fs.writeFile).mock.calls[0]
+    const writtenContent = String(writeCallArgs[1])
+
+    // The wrapped port should be <= 65535 and in the valid range
+    // wrapPort(66000, 65000): range = 65535 - 65000 = 535
+    // ((66000 - 65000 - 1) % 535) + 65000 + 1 = (999 % 535) + 65001 = 464 + 65001 = 65465
+    expect(writtenContent).toContain('65465:8080')
+  })
+
+  it('writes file to dataDir and returns path', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'api', hostPort: 4000, containerPort: 4000 },
+    ]
+
+    const resultPath = await generateOverrideFile(mappings, 100, '/tmp/iloom/issue-100')
+
+    expect(resultPath).toBe('/tmp/iloom/issue-100/iloom-100.yml')
+    expect(fs.ensureDir).toHaveBeenCalledWith('/tmp/iloom/issue-100')
+    expect(fs.writeFile).toHaveBeenCalledWith(
+      '/tmp/iloom/issue-100/iloom-100.yml',
+      expect.any(String),
+      'utf-8'
+    )
+  })
+
+  it('handles multiple services in override', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'web', hostPort: 3000, containerPort: 3000 },
+      { service: 'api', hostPort: 4000, containerPort: 4000 },
+    ]
+
+    await generateOverrideFile(mappings, 42, '/data/issue-42')
+
+    const writeCallArgs = vi.mocked(fs.writeFile).mock.calls[0]
+    const writtenContent = String(writeCallArgs[1])
+
+    expect(writtenContent).toContain('3042:3000')
+    expect(writtenContent).toContain('4042:4000')
+  })
+
+  it('preserves protocol in override port strings', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'web', hostPort: 3000, containerPort: 3000, protocol: 'udp' },
+    ]
+
+    await generateOverrideFile(mappings, 42, '/data/issue-42')
+
+    const writeCallArgs = vi.mocked(fs.writeFile).mock.calls[0]
+    const writtenContent = String(writeCallArgs[1])
+
+    expect(writtenContent).toContain('3042:3000/udp')
+  })
+
+  it('preserves hostIp in override to prevent port exposure on 0.0.0.0', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'db', hostPort: 5432, containerPort: 5432, hostIp: '127.0.0.1' },
+    ]
+
+    await generateOverrideFile(mappings, 42, '/data/issue-42')
+
+    const writeCallArgs = vi.mocked(fs.writeFile).mock.calls[0]
+    const writtenContent = String(writeCallArgs[1])
+
+    // Should include the IP binding to keep the port local
+    expect(writtenContent).toContain('127.0.0.1:5474:5432')
+  })
+
+  it('handles empty mappings array', async () => {
+    const result = await generateOverrideFile([], 42, '/data/issue-42')
+
+    expect(result).toBe('/data/issue-42/iloom-42.yml')
+    expect(fs.ensureDir).toHaveBeenCalledWith('/data/issue-42')
+    expect(fs.writeFile).toHaveBeenCalled()
+  })
+
+  it('handles multiple ports for the same service', async () => {
+    const mappings: ComposePortMapping[] = [
+      { service: 'web', hostPort: 3000, containerPort: 3000 },
+      { service: 'web', hostPort: 3001, containerPort: 3001 },
+    ]
+
+    await generateOverrideFile(mappings, 42, '/data/issue-42')
+
+    const writeCallArgs = vi.mocked(fs.writeFile).mock.calls[0]
+    const writtenContent = String(writeCallArgs[1])
+
+    expect(writtenContent).toContain('3042:3000')
+    expect(writtenContent).toContain('3043:3001')
+  })
+})
diff --git a/src/utils/compose.ts b/src/utils/compose.ts
new file mode 100644
index 00000000..80d518e5
--- /dev/null
+++ b/src/utils/compose.ts
@@ -0,0 +1,219 @@
+import { readFile } from 'fs/promises'
+import fs from 'fs-extra'
+import path from 'path'
+import { parse, stringify } from 'yaml'
+import { sanitizeContainerName } from './docker.js'
+import { wrapPort } from './port.js'
+
+/**
+ * Structured port mapping extracted from a compose file.
+ *
+ * V1 scope: literal port values only. The following are NOT supported:
+ * - Variable interpolation or environment variable substitution
+ * - Port ranges (e.g., "8080-8081:8080-8081") — entries with ranges are skipped
+ * - Compose profiles, extends, or includes directives
+ */
+export interface ComposePortMapping {
+  service: string
+  hostPort: number
+  containerPort: number
+  protocol?: string
+  hostIp?: string
+}
+
+/**
+ * Parse a docker-compose.yml or compose.yml file and extract port mappings.
+ * Handles short syntax ("3000:3000", "3000:3000/tcp", "127.0.0.1:3000:3000") and
+ * long-form syntax ({ target, published, protocol, host_ip }).
+ * V1: literal values only, no variable interpolation, port ranges are skipped.
+ *
+ * @param filePath - Absolute path to the compose file
+ * @returns Array of port mappings (empty if no ports found)
+ * @throws Error if file cannot be read or parsed
+ */
+export async function parseComposeFile(filePath: string): Promise<ComposePortMapping[]> {
+  const content = await readFile(filePath, 'utf-8')
+  const doc = parse(content) as Record<string, unknown>
+
+  if (!doc || typeof doc !== 'object' || !doc.services || typeof doc.services !== 'object' || Array.isArray(doc.services)) {
+    return []
+  }
+
+  const services = doc.services as Record<string, unknown>
+  const mappings: ComposePortMapping[] = []
+
+  for (const [serviceName, serviceConfig] of Object.entries(services)) {
+    if (!serviceConfig || typeof serviceConfig !== 'object') {
+      continue
+    }
+
+    const config = serviceConfig as Record<string, unknown>
+    if (!config.ports || !Array.isArray(config.ports)) {
+      continue
+    }
+
+    for (const portEntry of config.ports) {
+      if (typeof portEntry === 'string' || typeof portEntry === 'number') {
+        // Short syntax: "3000:3000", "3000:3000/tcp", "127.0.0.1:8080:80"
+        const portStr = String(portEntry)
+
+        // Split on ':' and index segments from the right so an optional IP prefix is handled correctly.
+        // Docker allows "host_ip:host_port:container_port" format.
+        const parts = portStr.split(':')
+
+        // Skip container-only entries (no host port mapping)
+        if (parts.length < 2) {
+          continue
+        }
+
+        // The container part is always the last segment (may include protocol: "80/tcp")
+        const rawContainerPart = parts[parts.length - 1]
+        const rawHostPart = parts[parts.length - 2]
+        // Guard for TypeScript strict mode (length >= 2 guarantees both exist)
+        if (rawContainerPart === undefined || rawHostPart === undefined) {
+          continue
+        }
+        const hostIp = parts.length >= 3 ? parts.slice(0, parts.length - 2).join(':') : undefined
+
+        // Detect and skip port ranges (e.g., "8080-8081")
+        const containerBase = rawContainerPart.split('/')[0] ?? ''
+        if (rawHostPart.includes('-') || containerBase.includes('-')) {
+          continue
+        }
+
+        // Parse optional protocol from container port
+        let containerPart = rawContainerPart
+        let protocol: string | undefined
+
+        const slashIndex = rawContainerPart.indexOf('/')
+        if (slashIndex !== -1) {
+          containerPart = rawContainerPart.substring(0, slashIndex)
+          protocol = rawContainerPart.substring(slashIndex + 1)
+        }
+
+        const hostPort = parseInt(rawHostPart, 10)
+        const containerPort = parseInt(containerPart, 10)
+
+        if (isNaN(hostPort) || isNaN(containerPort)) {
+          continue
+        }
+
+        const mapping: ComposePortMapping = {
+          service: serviceName,
+          hostPort,
+          containerPort,
+        }
+        if (protocol !== undefined) {
+          mapping.protocol = protocol
+        }
+        if (hostIp !== undefined && hostIp !== '') {
+          mapping.hostIp = hostIp
+        }
+
+        mappings.push(mapping)
+      } else if (portEntry && typeof portEntry === 'object') {
+        // Long-form syntax: { target: 80, published: 8080, protocol: 'tcp', host_ip: '127.0.0.1' }
+        const longForm = portEntry as Record<string, unknown>
+
+        // Skip if no published (host) port
+        if (longForm.published === undefined || longForm.published === null) {
+          continue
+        }
+
+        const hostPort = typeof longForm.published === 'number'
+          ? longForm.published
+          : parseInt(String(longForm.published), 10)
+        const containerPort = typeof longForm.target === 'number'
+          ? longForm.target
+          : parseInt(String(longForm.target), 10)
+
+        if (isNaN(hostPort) || isNaN(containerPort)) {
+          continue
+        }
+
+        const mapping: ComposePortMapping = {
+          service: serviceName,
+          hostPort,
+          containerPort,
+        }
+
+        if (longForm.protocol !== undefined && typeof longForm.protocol === 'string') {
+          mapping.protocol = longForm.protocol
+        }
+
+        if (longForm.host_ip !== undefined && typeof longForm.host_ip === 'string' && longForm.host_ip !== '') {
+          mapping.hostIp = longForm.host_ip
+        }
+
+        mappings.push(mapping)
+      }
+    }
+  }
+
+  return mappings
+}
+
+/**
+ * Generate a compose override file (iloom-<identifier>.yml) with host ports offset by identifier.
+ * Writes the file to dataDir (outside the worktree, not in git).
+ *
+ * Port offset: newHostPort = wrapPort(hostPort + numericIdentifier, hostPort)
+ * The identifier must be a numeric value or a string that parses to a number.
+ *
+ * If a mapping includes a hostIp, the override preserves it to avoid
+ * unintentionally exposing the port on all interfaces.
+ *
+ * @param mappings - Port mappings from parseComposeFile()
+ * @param identifier - Numeric identifier (issue number) for port offset
+ * @param dataDir - Directory to write the override file
+ * @returns Absolute path to the generated override file
+ * @throws Error if identifier is not a valid number
+ */
+export async function generateOverrideFile(
+  mappings: ComposePortMapping[],
+  identifier: string | number,
+  dataDir: string
+): Promise<string> {
+  const numId = typeof identifier === 'number' ? identifier : parseInt(identifier, 10)
+
+  if (isNaN(numId)) {
+    throw new Error(`Invalid identifier: "${identifier}". Expected a numeric value or numeric string.`)
+  }
+
+  // Build override structure grouped by service
+  const servicesOverride: Record<string, { ports: string[] }> = {}
+
+  for (const mapping of mappings) {
+    const offsetPort = wrapPort(mapping.hostPort + numId, mapping.hostPort)
+
+    // Preserve host IP binding to avoid exposing internal services on 0.0.0.0
+    let portStr: string
+    const hostPrefix = mapping.hostIp !== undefined ? `${mapping.hostIp}:` : ''
+    if (mapping.protocol !== undefined) {
+      portStr = `${hostPrefix}${offsetPort}:${mapping.containerPort}/${mapping.protocol}`
+    } else {
+      portStr = `${hostPrefix}${offsetPort}:${mapping.containerPort}`
+    }
+
+    const existing = servicesOverride[mapping.service]
+    if (!existing) {
+      servicesOverride[mapping.service] = { ports: [portStr] }
+    } else {
+      existing.ports.push(portStr)
+    }
+  }
+
+  const overrideDoc = {
+    services: servicesOverride,
+  }
+
+  const yamlContent = stringify(overrideDoc)
+
+  await fs.ensureDir(dataDir)
+
+  const projectName = `iloom-${sanitizeContainerName(String(identifier))}`
+  const filePath = path.join(dataDir, `${projectName}.yml`)
+  await fs.writeFile(filePath, yamlContent, 'utf-8')
+
+  return filePath
+}
diff --git a/src/utils/docker.test.ts b/src/utils/docker.test.ts
index 11d81e0c..6b34aa5d 100644
--- a/src/utils/docker.test.ts
+++ b/src/utils/docker.test.ts
@@ -1,6 +1,7 @@
 import { describe, it, expect, vi } from 'vitest'
 import { execa } from 'execa'
 import { readFile } from 'fs/promises'
+import { existsSync } from 'fs'
 import {
   isDockerInstalled,
   isDockerRunning,
@@ -10,10 +11,12 @@ import {
   sanitizeContainerName,
   buildContainerName,
   buildImageName,
+  detectComposeFile,
 } from './docker.js'

 vi.mock('execa')
 vi.mock('fs/promises')
+vi.mock('fs')

 describe('isDockerInstalled', () => {
   it('should return true when docker --version succeeds', async () => {
@@ -395,3 +398,234 @@ describe('buildImageName', () => {
       .toBe('iloom-dev-feat-issue-548')
   })
 })
+
+describe('detectComposeFile', () => {
+  it('should return null when no compose file exists', async () => {
+    vi.mocked(existsSync).mockReturnValue(false)
+
+    const result = await detectComposeFile('/project')
+
+    expect(result).toBeNull()
+  })
+
+  it('should detect compose.yml', async () => {
+    vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml'))
+    vi.mocked(readFile).mockResolvedValue(`
+services:
+  web:
+    image: nginx
+    ports:
+      - "8080:80"
+`)
+
+    const result = await detectComposeFile('/project')
+
+    expect(result).not.toBeNull()
+    expect(result?.fileName).toBe('compose.yml')
+    expect(result?.services).toHaveLength(1)
+    expect(result?.services[0].name).toBe('web')
+    expect(result?.services[0].ports).toEqual([{ host: 8080, container: 80 }])
+    expect(result?.services[0].image).toBe('nginx')
+  })
+
+  it('should detect docker-compose.yml when compose.yml does not exist', async () => {
+    vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('docker-compose.yml'))
+    vi.mocked(readFile).mockResolvedValue(`
+services:
+  api:
+    ports:
+      - "3000:3000"
+`)
+
+    const result = await detectComposeFile('/project')
+
+    expect(result).not.toBeNull()
+    expect(result?.fileName).toBe('docker-compose.yml')
+  })
+
+  it('should prefer compose.yml over all other candidates when all exist', async () => {
+    // All files exist — compose.yml should win (first candidate)
+    vi.mocked(existsSync).mockReturnValue(true)
+    vi.mocked(readFile).mockResolvedValue(`
+services:
+  web:
+    image: nginx
+`)
+
+    const result = await detectComposeFile('/project')
+
+    expect(result?.fileName).toBe('compose.yml')
+  })
+
+  it('should return result with empty services when compose file has no services key', async () => {
+    vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml'))
+    vi.mocked(readFile).mockResolvedValue('version: "3"')
+
+    const result = await detectComposeFile('/project')
+
+    expect(result).not.toBeNull()
+    expect(result?.services).toEqual([])
+  })
+
+  it('should return null for malformed YAML', async () => {
+    vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml'))
+    vi.mocked(readFile).mockResolvedValue(': invalid: yaml: [unclosed')
+
+    const result = await detectComposeFile('/project')
+
+    expect(result).toBeNull()
+  })
+
+  it('should handle services without port mappings', async () => {
+    vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml'))
+    vi.mocked(readFile).mockResolvedValue(`
+services:
+  worker:
+    image: myapp
+`)
+
+    const result = await detectComposeFile('/project')
+
+    expect(result?.services[0].ports).toEqual([])
+  })
+
+  it('should parse long-form port syntax (target/published)', async () => {
+    vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml'))
+    vi.mocked(readFile).mockResolvedValue(`
+services:
+  web:
+    ports:
+      - target: 80
+        published: 8080
+`)
+
+    const result = await detectComposeFile('/project')
+
+    expect(result?.services[0].ports).toEqual([{ host: 8080, container: 80 }])
+  })
+
+  it('should handle multiple services with mixed port formats', async () => {
+    vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml'))
+    vi.mocked(readFile).mockResolvedValue(`
+services:
+  web:
+    ports:
+      - "3000:3000"
+  db:
+    image: postgres
+    ports:
+      - target: 5432
+        published: 5432
+  worker:
+    image: redis
+`)
+
+    const result = await detectComposeFile('/project')
+
+    expect(result?.services).toHaveLength(3)
+    const web = result?.services.find((s) => s.name === 'web')
+    const db = result?.services.find((s) => s.name === 'db')
+    const worker = result?.services.find((s) => s.name === 'worker')
+    expect(web?.ports).toEqual([{ host: 3000, container: 3000 }])
+    expect(db?.ports).toEqual([{ host: 5432, container: 5432 }])
+    expect(worker?.ports).toEqual([])
+  })
+
+  it('should parse port without host mapping (container-only short syntax)', async () => {
+    vi.mocked(existsSync).mockImplementation((p
=> String(p).endsWith('compose.yml')) + vi.mocked(readFile).mockResolvedValue(` +services: + web: + ports: + - "3000" +`) + + const result = await detectComposeFile('/project') + + expect(result?.services[0].ports).toEqual([{ container: 3000 }]) + }) + + it('should detect compose.yaml (.yaml extension)', async () => { + vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yaml')) + vi.mocked(readFile).mockResolvedValue(` +services: + web: + image: nginx + ports: + - "8080:80" +`) + + const result = await detectComposeFile('/project') + + expect(result).not.toBeNull() + expect(result?.fileName).toBe('compose.yaml') + expect(result?.services[0].ports).toEqual([{ host: 8080, container: 80 }]) + }) + + it('should detect docker-compose.yaml when no .yml variants exist', async () => { + vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('docker-compose.yaml')) + vi.mocked(readFile).mockResolvedValue(` +services: + api: + ports: + - "3000:3000" +`) + + const result = await detectComposeFile('/project') + + expect(result).not.toBeNull() + expect(result?.fileName).toBe('docker-compose.yaml') + }) + + it('should prefer compose.yaml over docker-compose.yaml when both exist', async () => { + vi.mocked(existsSync).mockImplementation( + (p) => String(p).endsWith('compose.yaml') || String(p).endsWith('docker-compose.yaml') + ) + vi.mocked(readFile).mockResolvedValue(` +services: + web: + image: nginx +`) + + const result = await detectComposeFile('/project') + + expect(result?.fileName).toBe('compose.yaml') + }) + + it('should handle IP:HOST:CONTAINER three-part short port syntax', async () => { + vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml')) + vi.mocked(readFile).mockResolvedValue(` +services: + web: + ports: + - "127.0.0.1:8080:80" +`) + + const result = await detectComposeFile('/project') + + expect(result?.services[0].ports).toEqual([{ host: 8080, container: 80 }]) + }) + + it('should handle long-form 
port syntax with string published value', async () => { + vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml')) + vi.mocked(readFile).mockResolvedValue(` +services: + web: + ports: + - target: 80 + published: "8080" +`) + + const result = await detectComposeFile('/project') + + expect(result?.services[0].ports).toEqual([{ host: 8080, container: 80 }]) + }) + + it('should rethrow unexpected errors (non-YAML errors)', async () => { + vi.mocked(existsSync).mockImplementation((p) => String(p).endsWith('compose.yml')) + const permissionError = Object.assign(new Error('EACCES: permission denied'), { code: 'EACCES' }) + vi.mocked(readFile).mockRejectedValue(permissionError) + + await expect(detectComposeFile('/project')).rejects.toThrow('EACCES') + }) +}) diff --git a/src/utils/docker.ts b/src/utils/docker.ts index 7a2cffce..92f78007 100644 --- a/src/utils/docker.ts +++ b/src/utils/docker.ts @@ -1,5 +1,8 @@ import { execa } from 'execa' import { readFile } from 'fs/promises' +import { existsSync } from 'fs' +import path from 'path' +import { parse as yamlParse, YAMLParseError } from 'yaml' /** * Maximum length for Docker container names. @@ -212,3 +215,153 @@ export function buildImageName(identifier: string | number): string { // Docker image tags must be lowercase return sanitizeContainerName(`iloom-dev-${identifier}`).toLowerCase() } + +/** + * Information about a single service in a compose file. + */ +export interface ComposeServiceInfo { + name: string + ports: Array<{ host?: number; container: number }> + image?: string +} + +/** + * Result of detecting a compose file in a project directory. + */ +export interface ComposeDetectionResult { + filePath: string + fileName: string + services: ComposeServiceInfo[] +} + +/** + * Parse port mappings from a compose service's ports array. + * Handles both short syntax ("8080:80", "80") and long syntax ({ target, published }). 
+ */ +function parseComposePorts( + ports: unknown +): Array<{ host?: number; container: number }> { + if (!Array.isArray(ports)) { + return [] + } + + const result: Array<{ host?: number; container: number }> = [] + + for (const port of ports) { + if (typeof port === 'string' || typeof port === 'number') { + // Short syntax: "host:container", "ip:host:container", or "container" + const portStr = String(port) + const parts = portStr.split(':') + if (parts.length >= 2) { + // Take last part as container, second-to-last as host (handles IP:HOST:CONTAINER) + const rawContainer = parts[parts.length - 1] + const rawHost = parts[parts.length - 2] + // Guard for TypeScript strict mode (length >= 2 guarantees both exist) + if (rawContainer === undefined || rawHost === undefined) { + continue + } + const containerPart = parseInt(rawContainer, 10) + const hostPart = parseInt(rawHost, 10) + if (!isNaN(containerPart) && containerPart >= 1 && containerPart <= 65535) { + const validHost = !isNaN(hostPart) && hostPart >= 1 && hostPart <= 65535 + if (validHost) { + result.push({ host: hostPart, container: containerPart }) + } else { + result.push({ container: containerPart }) + } + } + } else { + const containerPort = parseInt(portStr, 10) + if (!isNaN(containerPort) && containerPort >= 1 && containerPort <= 65535) { + result.push({ container: containerPort }) + } + } + } else if (typeof port === 'object' && port !== null) { + // Long syntax: { target: number, published?: number } + const longPort = port as Record<string, unknown> + const target = typeof longPort['target'] === 'number' ? longPort['target'] : undefined + const published = + typeof longPort['published'] === 'number' + ? longPort['published'] + : typeof longPort['published'] === 'string' + ? 
parseInt(longPort['published'] as string, 10) + : undefined + if (target !== undefined && target >= 1 && target <= 65535) { + const validPublished = published !== undefined && published >= 1 && published <= 65535 + if (validPublished) { + result.push({ host: published, container: target }) + } else { + result.push({ container: target }) + } + } + } + } + + return result +} + +/** + * Detect and parse a Docker Compose file in the given project directory. + * Checks for compose.yml first, then docker-compose.yml (compose.yml takes priority per Docker docs). + * + * @param projectRoot - Absolute path to the project root directory + * @returns Parsed compose detection result, or null if no compose file found + */ +export async function detectComposeFile( + projectRoot: string +): Promise<ComposeDetectionResult | null> { + const candidates = ['compose.yml', 'compose.yaml', 'docker-compose.yml', 'docker-compose.yaml'] + + for (const fileName of candidates) { + const filePath = path.join(projectRoot, fileName) + if (!existsSync(filePath)) { + continue + } + + try { + const content = await readFile(filePath, 'utf-8') + const parsed = yamlParse(content) + + if (typeof parsed !== 'object' || parsed === null) { + return { filePath, fileName, services: [] } + } + + const doc = parsed as Record<string, unknown> + const servicesRaw = doc['services'] + + if (typeof servicesRaw !== 'object' || servicesRaw === null) { + return { filePath, fileName, services: [] } + } + + const servicesMap = servicesRaw as Record<string, unknown> + const services: ComposeServiceInfo[] = [] + + for (const [name, serviceRaw] of Object.entries(servicesMap)) { + if (typeof serviceRaw !== 'object' || serviceRaw === null) { + services.push({ name, ports: [] }) + continue + } + + const service = serviceRaw as Record<string, unknown> + const ports = parseComposePorts(service['ports']) + + if (typeof service['image'] === 'string') { + services.push({ name, ports, image: service['image'] }) + } else { + 
services.push({ name, ports }) + } + } + + return { filePath, fileName, services } + } catch (error) { + if (error instanceof YAMLParseError) { + // Expected: malformed YAML — detection is non-fatal + return null + } + // Unexpected error (e.g., permissions issue) — rethrow so it is not silently swallowed + throw error + } + } + + return null +} diff --git a/templates/prompts/init-prompt.txt b/templates/prompts/init-prompt.txt index 0aabea22..5e2230ff 100644 --- a/templates/prompts/init-prompt.txt +++ b/templates/prompts/init-prompt.txt @@ -279,6 +279,23 @@ Use AskUserQuestion to ask ALL local development settings questions **IN A SINGL - Validation: Number between 1 and 65535 - Store answer as: `capabilities.web.basePort` +{{#if HAS_COMPOSE_FILE}} +4. **Docker Dev Server Mode** _(compose file detected)_ + + A Docker Compose file (`{{COMPOSE_FILE_NAME}}`) was detected in this project with the following services: + + {{COMPOSE_SERVICES_INFO}} + + Since this project uses Docker Compose, consider enabling Docker-based dev server mode so iloom can manage port mapping for isolated development environments. + + - Question format: "A Docker Compose file was detected. Would you like to enable Docker-based dev server mode?" + - Options: + - "Yes" - Enable docker dev server mode + - "No" - Keep default process-based dev server + - Default: "Yes" + - Store answer as: `capabilities.web.devServer` — use value `"docker"` for Yes, `"process"` for No + +{{/if}} **Implementation Details:** - Set multiSelect: false for all questions (user picks one answer per question) - Use the AskUserQuestion tool with all questions in the questions array
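As a self-contained sketch of the port normalization that `parseComposePorts` performs in the patch above (rewritten standalone for illustration; `normalizePort` and `PortPair` are hypothetical names, not part of the patch), the three accepted shapes all reduce to a `{ host?, container }` pair:

```typescript
// Hypothetical standalone re-sketch of the patch's parseComposePorts logic.
// Accepts compose short syntax ("8080:80", "127.0.0.1:8080:80", "3000")
// and long syntax ({ target, published }), normalizing to { host?, container }.
type PortPair = { host?: number; container: number }

function normalizePort(
  port: string | number | { target: number; published?: number | string }
): PortPair | null {
  if (typeof port === 'object') {
    // Long syntax: published may be a number or a numeric string
    const published =
      typeof port.published === 'string' ? parseInt(port.published, 10) : port.published
    if (port.target < 1 || port.target > 65535) return null
    return published !== undefined && published >= 1 && published <= 65535
      ? { host: published, container: port.target }
      : { container: port.target }
  }
  // Short syntax: container port is the last colon-separated part,
  // host port (if any) the second-to-last; this also covers IP:HOST:CONTAINER
  const parts = String(port).split(':')
  const container = parseInt(parts[parts.length - 1] ?? '', 10)
  if (isNaN(container) || container < 1 || container > 65535) return null
  if (parts.length >= 2) {
    const host = parseInt(parts[parts.length - 2] ?? '', 10)
    if (!isNaN(host) && host >= 1 && host <= 65535) return { host, container }
  }
  return { container }
}

console.log(normalizePort('127.0.0.1:8080:80')) // { host: 8080, container: 80 }
console.log(normalizePort('3000')) // { container: 3000 }
console.log(normalizePort({ target: 80, published: '8080' })) // { host: 8080, container: 80 }
```

This mirrors the patch's choices: out-of-range or non-numeric host parts are dropped rather than rejected (yielding a container-only mapping), while an invalid container part invalidates the whole entry.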