Merged
1 change: 1 addition & 0 deletions CLAUDE.md
@@ -195,6 +195,7 @@ All bindings on `127.0.0.1:` only. Script: `scripts/townhouse-dev-infra.sh`. Con
| **Townhouse npm-tarball compose templates** | `packages/townhouse/compose/` (source) → `dist/compose/` (built output) |
| Compose loader + materializer API | `packages/townhouse/src/compose-loader.ts` |
| Image-manifest digest registry (per release) | `packages/townhouse/dist/image-manifest.json` (CI-produced; not committed) |
| DockerOrchestrator HS-profile entry point | `packages/townhouse/src/docker/orchestrator.ts` (`upHs`, `waitForHsHostname`) |

## Browser Verification

1,028 changes: 1,028 additions & 0 deletions _bmad-output/implementation-artifacts/45-3-docker-orchestrator-profile-param.md

Large diffs are not rendered by default.

15 changes: 15 additions & 0 deletions _bmad-output/implementation-artifacts/deferred-work.md
@@ -189,3 +189,18 @@ _Six cross-repo patches (P3, P4, P5, P6, P7, Q1) shipped in lock-step via [conne
- `tarball-contents.test.ts` afterAll cleanup deletes the tarball even on test failure, killing post-mortem inspection. Consider keeping the tarball when an assertion fails (vitest's task context exposes failure state).
- Manifest-alignment test path resolution via `import.meta.url + '../../dist/...'` is fragile under bundler reconfiguration. Same pattern is acknowledged in compose-loader.ts:30.
- `tarball-contents.test.ts` "freshness precondition" only checks `existsSync(DIST_COMPOSE_HS)` — stale dist (e.g., dev rebuilt last week, manifest changed since) passes the gate. Add mtime-vs-source comparison or a digest cross-check against current `image-manifest.json`.

## Deferred from: code review of 45-3-docker-orchestrator-profile-param (2026-05-09)

- README documents the anon-disabled error message verbatim — drift hazard between code and doc. Recommend exporting the message string as a const that both code and doc reference. [`packages/townhouse/README.md` § "DockerOrchestrator Profiles"]
- Magic numbers (timeouts: 120_000 / 2_000 / 5_000 / 180_000 / 60_000; maxBuffer 16 MiB; stderr truncation 500) not named constants. [`packages/townhouse/src/docker/orchestrator.ts`]
- AC #5 ECONNREFUSED retry-within-budget path has no dedicated unit test — branch in `waitForHsHostname` swallows non-anon-disabled errors and continues, but no test asserts the retry behavior. AC #12 didn't enumerate this case. [`packages/townhouse/src/docker/orchestrator-hs.test.ts`]
- AC #12 "constructor stores profile/composePath" assertion is `instanceof`-only — private fields never observably verified. Consider `Object.getOwnPropertyDescriptor`/`@ts-expect-error` access or a behavior-driven check. [`packages/townhouse/src/docker/orchestrator-hs.test.ts:479-491`]
- Integration test container assertion uses substring `name=townhouse-hs-` filter — pollutes when host has leftover containers from prior runs. Use exact-name filter or list-and-include. [`packages/townhouse/src/__integration__/orchestrator-hs.test.ts:161-167`]
- Integration test relies on vitest `it`-order: third `it` calls `orch.down()` and asserts `townhouse-hs-anon` volume survives, while `afterAll` runs `down -v`. Order-dependence not enforced. [`packages/townhouse/src/__integration__/orchestrator-hs.test.ts`]
- `process.env['TOWNHOUSE_WALLET_PASSWORD']` mutated in `beforeAll` without try/finally restore — leaks across worker reuse if `beforeAll` throws between set and the matching `afterAll` delete. [`packages/townhouse/src/__integration__/orchestrator-hs.test.ts:129, :153`]
- No partial-failure rollback when `docker compose up` exits non-zero or times out — Node's `timeout` kills the CLI but dockerd keeps going, leaving a half-started stack. Story 45.4 retry policy will dictate whether to attempt `docker compose down` in the catch path. [`packages/townhouse/src/docker/orchestrator.ts:213-231`]
- User-visible `OrchestratorError` message truncates stderr to 500 chars; full stderr preserved on `error.stderr` field but human-readable diagnostic is gutted for multi-line compose YAML errors. [`packages/townhouse/src/docker/orchestrator.ts:228, :432`]
- `composePath` not validated as absolute or existing on disk at construct time — defense-in-depth gap. Current callers pass paths from `materializeComposeTemplate` so the gap is only relevant to direct API consumers. [`packages/townhouse/src/docker/orchestrator.ts:159`]
- Non-503 / non-200 statuses (404 from a connector pre-v3.5.0 without the endpoint, 500, 502) are silently retried for the full 120s budget. AC #5 specifies 503 fast-fail and ECONNREFUSED retry but is silent on other statuses; could fast-fail 404 with an actionable "connector pre-v3.5.0" diagnostic. [`packages/townhouse/src/docker/orchestrator.ts:284-294`]
- `activeNodes` mutated before `upHs/upDev` could fail — leaves stale state on error. Pre-existing in dev path; flagged for symmetry. Move assignment to after success or implement actual-state tracking in a follow-up. [`packages/townhouse/src/docker/orchestrator.ts:174`]
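The first two bullets (verbatim error string in the README, unnamed timeout magic numbers) could both be addressed by one exported constants module. A minimal sketch, assuming a hypothetical `constants.ts` in `packages/townhouse/src/docker/` — all names here are illustrative, not the actual code:

```typescript
// Hypothetical packages/townhouse/src/docker/constants.ts — names illustrative.
// Exporting the message lets orchestrator.ts throw it and the README (via a
// doc-generation or lint step) reference it, eliminating the drift hazard.
export const ANON_DISABLED_MESSAGE =
  'connector is anon-disabled — set anon.enabled: true in the connector config';

// Named equivalents of the magic numbers flagged above.
export const TIMEOUTS = {
  hsHostnameBudgetMs: 120_000, // total wait for .anyone hostname publication
  hsHostnamePollMs: 2_000,     // poll interval between hs-hostname requests
  adminRequestMs: 5_000,       // per-request admin-client timeout
  composeUpMs: 180_000,        // docker compose up budget
  composeDownMs: 60_000,       // docker compose down budget
} as const;

export const EXEC_MAX_BUFFER = 16 * 1024 * 1024; // 16 MiB compose CLI output cap
export const STDERR_TRUNCATE_CHARS = 500;        // user-visible error truncation
```

A usage site would then read `throw new OrchestratorError(ANON_DISABLED_MESSAGE)` instead of an inline literal.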
4 changes: 2 additions & 2 deletions _bmad-output/implementation-artifacts/sprint-status.yaml
@@ -1,5 +1,5 @@
# generated: 2026-04-27
# last_updated: 2026-05-09 (Story 45.2 done: v0.1.0-rc5 published; verified tarball ships dist/compose/{hs,dev}.yml + image-manifest.json with all 5 images digest-pinned. RC2-RC4 spent on iteratively fixing CI bugs surfaced only by the live publish-npm path: (a) shared digest helper had to live inside packages/townhouse/ for Docker context, (b) build step needed --filter "..." for workspace-dep DTS resolution, (c) pnpm pack does NOT support --filter so chdir was required)
# last_updated: 2026-05-09 (Story 45.3 review: PR #44 opened; dual-path orchestrator (dockerode dev / compose hs), OrchestratorError, getHsHostname admin-client, 14 HS unit tests + 3 integration test stubs; 71 existing orchestrator tests pass verbatim)
# project: toon
# project_key: NOKEY
# tracking_system: file-system
@@ -500,7 +500,7 @@ development_status:
epic-45: in-progress
45-1-multi-arch-townhouse-image-publish-ci: done # done: workflow run https://github.com/toon-protocol/town/actions/runs/25603167091 produced 4 multi-arch + cosign-signed images and image-manifest.json — town#37 town#38 town#39 town#40 town#41
45-2-embed-compose-templates-and-image-manifest-in-npm-tarball: done # done: tag v0.1.0-rc5 published; tarball ships dist/compose/{hs,dev}.yml + image-manifest.json with all 5 images digest-pinned (workflow run 25614777350) — town#43
45-3-docker-orchestrator-profile-param: backlog
45-3-docker-orchestrator-profile-param: review # PR #44 — orchestrator HS profile + getHsHostname + OrchestratorError
45-4-townhouse-hs-up-subcommand-apex-only-boot: backlog # CRITICAL PATH; depends on 44.1
epic-45-retrospective: optional

39 changes: 39 additions & 0 deletions packages/townhouse/README.md
@@ -230,6 +230,45 @@ The package-local `packages/townhouse/compose/townhouse-dev.yml` is the canonica

For backward compatibility, `docker-compose-townhouse-dev.yml` at the repo root is preserved and continues to be used by `scripts/townhouse-dev-infra.sh`. A follow-up story will route the script through the package-local copy.

## DockerOrchestrator Profiles

The `DockerOrchestrator` class drives both the contributor dev stack and
the operator HS-mode apex stack via a single `profile: 'dev' | 'hs'`
parameter:

- **`profile: 'dev'`** (default) — uses `dockerode` for fine-grained
programmatic control. Matches the lifecycle the existing `townhouse up`
CLI has shipped since Epic 21. No `composePath` required.
- **`profile: 'hs'`** — shells out to `docker compose -f <composePath> up -d`
with `--profile <type>` flags for each enabled peer. Waits on the
connector's `GET /admin/hs-hostname` endpoint (connector v3.5.0+) until
the `.anyone` hostname is published. Requires `composePath` (typically
the path returned by `materializeComposeTemplate('hs')`).

Example (HS-mode caller, as Story 45.4's `townhouse hs up` will use):
```typescript
import { materializeComposeTemplate, DockerOrchestrator } from '@toon-protocol/townhouse';
import Docker from 'dockerode';

const { composePath } = materializeComposeTemplate('hs');
const docker = new Docker();
const orch = new DockerOrchestrator(docker, config, walletManager, {
profile: 'hs',
composePath,
});
await orch.up([]); // apex-only (connector + townhouse-api)
```

### Connector Anon Requirement (HS Profile)

The HS profile's readiness gate calls `GET /admin/hs-hostname`. The
connector container MUST be configured with `anon.enabled: true` —
if anon is disabled, the endpoint returns 503 and the orchestrator
throws `OrchestratorError("connector is anon-disabled — set
anon.enabled: true in the connector config")`. Story 45.4's
`townhouse hs up` generates the connector config with `anon.enabled: true`
by default; manual configurations should mirror that setting.
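The readiness gate described above can be sketched as a poll loop that fast-fails on the anon-disabled 503 but retries transient errors (e.g. ECONNREFUSED while the container is still starting). This is an illustrative sketch, not the actual `waitForHsHostname` implementation; the poll function, names, and default timeouts are assumptions:

```typescript
// Illustrative readiness-gate loop. `poll` stands in for an admin-client call
// such as getHsHostname(); real logic lives in src/docker/orchestrator.ts.
interface HsHostnameResult {
  hostname: string | null;
  publishedAt: string | null;
}

async function waitForHostname(
  poll: () => Promise<HsHostnameResult>,
  budgetMs = 120_000,
  intervalMs = 2_000
): Promise<string> {
  const deadline = Date.now() + budgetMs;
  while (Date.now() < deadline) {
    try {
      const { hostname } = await poll();
      if (hostname !== null) return hostname; // .anyone address published
    } catch (err) {
      // 503 anon-disabled is a configuration error: fail immediately.
      // Anything else (connection refused, timeout) is retried within budget.
      if (err instanceof Error && err.message.includes('anon-disabled')) {
        throw err;
      }
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`hostname not published within ${budgetMs} ms`);
}
```

The fast-fail branch is what makes `anon.enabled: true` a hard requirement rather than a 120-second hang.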

## Running the townhouse as a hidden service (laptop)

`docker-compose-townhouse-hs.yml` brings up the full operator stack —
118 changes: 118 additions & 0 deletions packages/townhouse/src/__integration__/orchestrator-hs.test.ts
@@ -0,0 +1,118 @@
/**
* HS-profile orchestrator integration test (Story 45.3, AC #13).
*
* Boots the real apex stack via the published HS compose template and
* asserts hostname publication + volume preservation on down().
*
* Prerequisites (skip gates enforce this):
* RUN_DOCKER_INTEGRATION=1 — opt-in to Docker-required tests
* SKIP_DOCKER unset or falsy — sandbox environments set this to skip
* dist/image-manifest.json — produced by `pnpm build` after the publish CI
* run; download via:
* gh run download <run-id> --name image-manifest
* -D packages/townhouse/dist/
*
* Typical CI invocation:
* RUN_DOCKER_INTEGRATION=1 pnpm --filter @toon-protocol/townhouse test:integration
* -- orchestrator-hs
*
* First run pulls connector + townhouse-api images (~2-3 min cold cache).
*/

import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { execSync } from 'node:child_process';
import { mkdtempSync, rmSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import Docker from 'dockerode';
import { DockerOrchestrator } from '../docker/orchestrator.js';
import { materializeComposeTemplate } from '../compose-loader.js';
import { ConnectorAdminClient } from '../connector/admin-client.js';
import { isTruthyEnv } from './_test-helpers.js';

// ── Skip gates ──────────────────────────────────────────────────────────────
const SKIP_DOCKER = isTruthyEnv(process.env['SKIP_DOCKER']);
const RUN_INTEGRATION = process.env['RUN_DOCKER_INTEGRATION'] === '1';
const shouldRun = RUN_INTEGRATION && !SKIP_DOCKER;

if (!shouldRun) {
console.warn(
'\n⚠️ Skipping HS-profile orchestrator integration test.\n' +
' Set RUN_DOCKER_INTEGRATION=1 and ensure SKIP_DOCKER is unset.\n' +
' Ensure dist/image-manifest.json is present (run `pnpm build` after\n' +
' downloading the manifest from the latest publish CI run).\n'
);
}

describe.skipIf(!shouldRun)(
'HS profile orchestrator boots apex-only stack',
() => {
let tmpDir: string;
let composePath: string;
let orch: DockerOrchestrator;

beforeAll(async () => {
tmpDir = mkdtempSync(join(tmpdir(), 'townhouse-hs-orch-'));
({ composePath } = materializeComposeTemplate('hs', {
townhouseHome: tmpDir,
}));
// The HS template uses ${TOWNHOUSE_WALLET_PASSWORD:?} — must be set or
// docker compose up fails immediately with a substitution error.
process.env['TOWNHOUSE_WALLET_PASSWORD'] = 'integration-test-pwd';
const docker = new Docker();
orch = new DockerOrchestrator(docker, undefined as never, undefined, {
profile: 'hs',
composePath,
});
await orch.up([]); // apex-only: connector + townhouse-api
}, 240_000);

afterAll(async () => {
try {
await orch.down();
} catch {
/* best-effort */
}
// Wipe named volumes so subsequent runs get a fresh .anyone address.
try {
execSync(`docker compose -f "${composePath}" down -v`, {
timeout: 30_000,
});
} catch {
/* best-effort */
}
rmSync(tmpDir, { recursive: true, force: true });
delete process.env['TOWNHOUSE_WALLET_PASSWORD'];
}, 60_000);

it('exactly two containers running: connector + townhouse-api', () => {
const out = execSync(
'docker ps --filter name=townhouse-hs- --format "{{.Names}}"',
{ encoding: 'utf-8' }
);
const names = out.trim().split('\n').filter(Boolean).sort();
expect(names).toEqual(['townhouse-hs-api', 'townhouse-hs-connector']);
}, 10_000);

it('getHsHostname() returns a non-null .anyone address', async () => {
const client = new ConnectorAdminClient('http://127.0.0.1:9401', 5_000);
const result = await client.getHsHostname();
expect(result.hostname).toMatch(/\.anyone$/);
expect(result.publishedAt).toBeTruthy();
}, 10_000);

it('down() stops containers but preserves townhouse-hs-anon volume', async () => {
await orch.down();
const containers = execSync(
'docker ps -a --filter name=townhouse-hs- --format "{{.Names}}"',
{ encoding: 'utf-8' }
);
expect(containers.trim()).toBe('');
const volumes = execSync(
'docker volume ls --filter name=townhouse-hs-anon --format "{{.Name}}"',
{ encoding: 'utf-8' }
);
expect(volumes.trim()).toBe('townhouse-hs-anon');
}, 60_000);
}
);
12 changes: 9 additions & 3 deletions packages/townhouse/src/cli.ts
@@ -333,7 +333,9 @@ async function handleStatus(
docker: Docker,
config: TownhouseConfig
): Promise<void> {
const orchestrator = new DockerOrchestrator(docker, config);
const orchestrator = new DockerOrchestrator(docker, config, undefined, {
profile: 'dev',
});
const statuses = await orchestrator.status();

console.log('Node Status:');
@@ -503,7 +505,9 @@ async function handleUp(
}
}

const orchestrator = new DockerOrchestrator(docker, config, walletManager);
const orchestrator = new DockerOrchestrator(docker, config, walletManager, {
profile: 'dev',
});

// Wire up progress reporting
orchestrator.on(
@@ -669,7 +673,9 @@ async function handleDown(
config: TownhouseConfig,
docker: Docker
): Promise<void> {
const orchestrator = new DockerOrchestrator(docker, config);
const orchestrator = new DockerOrchestrator(docker, config, undefined, {
profile: 'dev',
});

orchestrator.on(
'containerState',
76 changes: 76 additions & 0 deletions packages/townhouse/src/connector/admin-client.test.ts
@@ -205,6 +205,82 @@ describe('ConnectorAdminClient', () => {
});
});

describe('getHsHostname() (Story 45.3 / AC #7)', () => {
it('returns hostname + publishedAt when bootstrap is complete (200 with non-null fields)', async () => {
const body = {
hostname: 'abc123.anyone',
publishedAt: '2026-05-09T00:00:00Z',
};
fetchMock.mockResolvedValue({
ok: true,
status: 200,
json: async () => body,
});

const client = new ConnectorAdminClient('http://localhost:9401');
const result = await client.getHsHostname();

expect(result.hostname).toBe('abc123.anyone');
expect(result.publishedAt).toBe('2026-05-09T00:00:00Z');
});

it('returns nulls when bootstrap is still in progress (200 with null fields)', async () => {
const body = { hostname: null, publishedAt: null };
fetchMock.mockResolvedValue({
ok: true,
status: 200,
json: async () => body,
});

const client = new ConnectorAdminClient('http://localhost:9401');
const result = await client.getHsHostname();

expect(result.hostname).toBeNull();
expect(result.publishedAt).toBeNull();
});

it('throws anon-disabled error on 503 response', async () => {
fetchMock.mockResolvedValue({
ok: false,
status: 503,
statusText: 'Service Unavailable',
json: async () => ({ error: 'anon-disabled' }),
});

const client = new ConnectorAdminClient('http://localhost:9401');

await expect(client.getHsHostname()).rejects.toThrow('anon-disabled');
});

it('throws on shape-violating response (hostname: number)', async () => {
fetchMock.mockResolvedValue({
ok: true,
status: 200,
json: async () => ({ hostname: 42, publishedAt: null }),
});

const client = new ConnectorAdminClient('http://localhost:9401');

await expect(client.getHsHostname()).rejects.toThrow(
/invalid hs-hostname response shape/
);
});

it('throws on shape-violating response (publishedAt: number)', async () => {
fetchMock.mockResolvedValue({
ok: true,
status: 200,
json: async () => ({ hostname: 'x.anyone', publishedAt: 99 }),
});

const client = new ConnectorAdminClient('http://localhost:9401');

await expect(client.getHsHostname()).rejects.toThrow(
/invalid hs-hostname response shape/
);
});
});

describe('constructor', () => {
it('accepts base URL without trailing slash', async () => {
fetchMock.mockResolvedValue({ ok: true, json: async () => HEALTHY_BODY });
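The four response cases the tests above pin down (non-null fields, null fields while bootstrapping, 503 anon-disabled, shape violations) imply a small validation step on the client side. A hedged sketch of that shape guard, assuming a standalone `parseHsHostname` helper (the real validation lives inside `admin-client.ts` and may be structured differently):

```typescript
// Illustrative shape guard matching the contract the tests exercise:
// each field must be either null (bootstrap in progress) or a string.
interface HsHostnameResult {
  hostname: string | null;
  publishedAt: string | null;
}

function parseHsHostname(body: unknown): HsHostnameResult {
  const b = body as { hostname?: unknown; publishedAt?: unknown };
  const hostnameOk = b.hostname === null || typeof b.hostname === 'string';
  const publishedAtOk =
    b.publishedAt === null || typeof b.publishedAt === 'string';
  if (!hostnameOk || !publishedAtOk) {
    // Matches the /invalid hs-hostname response shape/ assertion above.
    throw new Error('invalid hs-hostname response shape');
  }
  return {
    hostname: b.hostname as string | null,
    publishedAt: b.publishedAt as string | null,
  };
}
```

Validating before returning is what lets callers treat `hostname !== null` as the single readiness signal.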