From 59e12319967b84331ed7a0634e9e199c951c4deb Mon Sep 17 00:00:00 2001 From: Claude Date: Sat, 21 Mar 2026 12:40:33 +0000 Subject: [PATCH 1/5] docs: add channels integration plan (Option A vs B) Explores two approaches for one-way Claude Code channel integration: - Option A: Extend existing `jack mcp serve` with channel capability - Option B: Standalone `jack channel serve` command with webhook support Recommends starting with Option A (extend MCP server) for zero-friction deploy notifications and log error alerts, with extraction path to B if webhook ingestion becomes needed. https://claude.ai/code/session_01ScGRSLQkBzUEUkhiPhBbnC --- docs/plans/channels-integration.md | 198 +++++++++++++++++++++++++++++ 1 file changed, 198 insertions(+) create mode 100644 docs/plans/channels-integration.md diff --git a/docs/plans/channels-integration.md b/docs/plans/channels-integration.md new file mode 100644 index 0000000..2428132 --- /dev/null +++ b/docs/plans/channels-integration.md @@ -0,0 +1,198 @@ +# Jack + Claude Code Channels: One-Way Integration Plan + +## Goal + +Push deployment events, production errors, and log alerts into a running Claude Code session so Claude can react using Jack's existing MCP tools — without the user needing to be at the terminal. + +## Background + +Claude Code channels are MCP servers that declare `claude/channel` capability and emit `notifications/claude/channel` events. Claude receives these as `` tags in its context. The channel runs as a subprocess spawned by Claude Code, communicating over stdio. + +Jack already has a local MCP server (`jack mcp serve`) with 27 tools and 3 resources. The question is whether to extend it or build a separate channel server. + +--- + +## Option A: Extend `jack mcp serve` + +Add `claude/channel` capability to the existing MCP server. The same process that handles tool calls also subscribes to events and pushes notifications. + +### How it works + +1. 
Add `experimental: { 'claude/channel': {} }` to the server's capabilities in `apps/cli/src/mcp/server.ts`
2. Add an `instructions` string telling Claude what events to expect
3. After server connects, start background event loops:
   - **Deploy watcher**: Poll control plane for deployment status changes on linked project
   - **Log stream**: Connect to the project's SSE log stream, filter for errors/exceptions, push as channel events
4. User starts Claude Code with: `claude --channels server:jack`

### Event sources

| Event | Source | Mechanism |
|-------|--------|-----------|
| Deploy completed/failed | Control plane `/v1/projects/{id}/overview` | Poll every 5s after deploy starts |
| Production error | Log worker SSE stream | Subscribe on startup, filter `level: "error"` or exceptions |
| Cron execution result | Control plane | Poll or future webhook |

### What changes

```
apps/cli/src/mcp/
├── server.ts             # Add channel capability + instructions
├── channel/              # NEW directory
│   ├── events.ts         # Event emitter, notification dispatch
│   ├── deploy-watcher.ts # Polls control plane for deploy status
│   └── log-watcher.ts    # SSE subscription for error alerts
└── tools/index.ts        # Unchanged
```

### Notification examples

```xml
<!-- deploy_complete -->
Deployment live at https://user-my-api.runjack.xyz

<!-- production error -->
TypeError: Cannot read property 'id' of undefined
  at handler (src/index.ts:42:15)
Request: GET /api/users/123 → 500

<!-- deploy_failed -->
Build failed: Module not found: ./missing-import
```

### Tradeoffs

| | |
|---|---|
| **Pro** | Zero new infrastructure — reuses auth, project link, deploy mode detection |
| **Pro** | Single process — no extra server to manage |
| **Pro** | MCP tools available in same session — Claude can immediately `execute_sql`, `tail_logs`, `test_endpoint` to investigate |
| **Pro** | Already installed for Jack users via `jack mcp install` |
| **Con** | Channel lifecycle tied to MCP server — can't run channel without all 27 tools |
| 
**Con** | Polling from inside the MCP server adds background work to a stdio process | +| **Con** | `--channels server:jack` syntax requires the server name in `.mcp.json` to be "jack" (it already is) | +| **Con** | Harder to test channel behavior in isolation | + +### Key decision: What triggers the deploy watcher? + +The control plane has no push mechanism for deploy events. Options: + +- **A1: Always poll** — On startup, start polling the linked project's overview endpoint every 10s. Simple but wasteful. +- **A2: Tool-triggered** — When Claude calls `deploy_project`, start polling for that deployment ID until it resolves. No wasted polls, but misses deploys from other terminals. +- **A3: Hybrid** — Poll on startup at low frequency (30s), increase to 5s after a deploy tool call. Best coverage, more complex. + +**Recommendation: A2 (tool-triggered).** Most natural — Claude deploys, then gets notified of the result. Deploys from other terminals are edge cases that can be added later. + +--- + +## Option B: Standalone `jack channel serve` + +New CLI command that starts a dedicated channel server. It's a separate MCP server process that only does channel events — no tools. + +### How it works + +1. New command: `jack channel serve` starts an MCP server with only `claude/channel` capability +2. Optionally starts a local HTTP server for webhook ingestion (`--webhook-port 8788`) +3. Connects to control plane SSE for log streaming +4. 
User registers it in `.mcp.json` separately and starts with: `claude --channels server:jack-channel` + +### Event sources + +Same as Option A, plus: + +| Event | Source | Mechanism | +|-------|--------|-----------| +| CI webhook | Local HTTP POST to `localhost:8788` | GitHub Actions `workflow_run` webhook via smee.io or similar | +| Custom hooks | Local HTTP POST | `jack ship --notify` POSTs to channel after deploy | + +### What changes + +``` +apps/cli/src/ +├── commands/ +│ └── channel.ts # NEW: `jack channel serve` command +├── channel/ # NEW directory +│ ├── server.ts # Channel-only MCP server +│ ├── webhook-server.ts # Local HTTP listener for webhooks +│ ├── deploy-watcher.ts # Polls control plane +│ └── log-watcher.ts # SSE subscription +└── mcp/ + └── server.ts # Unchanged +``` + +### Tradeoffs + +| | | +|---|---| +| **Pro** | Clean separation — channel concerns don't touch MCP tool server | +| **Pro** | Can run independently of `jack mcp serve` | +| **Pro** | Webhook endpoint enables CI/CD integration (GitHub Actions → channel → Claude) | +| **Pro** | Easier to test in isolation | +| **Pro** | Can evolve independently (add two-way later without touching tool server) | +| **Con** | Two processes to manage — user runs both `jack mcp serve` and `jack channel serve` | +| **Con** | Duplicates some setup: auth loading, project detection, deploy mode checks | +| **Con** | User must configure `.mcp.json` with a second entry and pass both to `--channels` | +| **Con** | Webhook port requires the external system to know `localhost:8788` — doesn't work for remote CI without tunneling | + +### Key decision: Webhook server or polling only? + +- **B1: Polling only** — Same as Option A, just in a separate process. Simpler but loses the webhook advantage that justifies the separation. +- **B2: Webhook + polling** — HTTP server for external events, polling for deploy status. More useful but more surface area. 
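
The B2 ingestion path is small enough to sketch. Below is a minimal local listener that turns a JSON POST into a channel event, assuming an `emitChannelEvent` callback that wraps the MCP notification call; the callback, payload fields, and default port are illustrative, not a committed design:

```typescript
import { createServer, type Server } from "node:http";

// Minimal webhook listener (sketch): any JSON POST becomes a channel event.
// `emitChannelEvent` is a hypothetical callback wrapping the MCP notification.
export function startWebhookServer(
  emitChannelEvent: (content: string, meta: Record<string, string>) => Promise<void>,
  port = 8788
): Server {
  const server = createServer((req, res) => {
    if (req.method !== "POST") {
      res.writeHead(405).end();
      return;
    }
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", async () => {
      try {
        const payload = JSON.parse(body);
        await emitChannelEvent(String(payload.message ?? body), {
          event: String(payload.event ?? "webhook"),
          source: "webhook",
        });
        res.writeHead(204).end();
      } catch {
        res.writeHead(400).end("invalid JSON");
      }
    });
  });
  server.listen(port);
  return server;
}
```

A production version would also need to decide what to return when the notification itself fails, so the external sender (e.g. a CI job) knows whether to retry.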
+ +**Recommendation: B2 (webhook + polling).** The webhook endpoint is the main reason to choose Option B over A. Without it, the separation adds complexity for no gain. + +--- + +## Comparison Matrix + +| Dimension | Option A (extend MCP) | Option B (standalone) | +|-----------|----------------------|----------------------| +| **Setup effort for user** | None — already have `jack mcp serve` | Must add second server to `.mcp.json` | +| **Implementation effort** | ~200 LOC added to existing server | ~400 LOC new command + server | +| **Event sources** | Deploy status, log errors | Deploy status, log errors, webhooks | +| **CI/CD integration** | Not possible | Yes, via local webhook endpoint | +| **Process management** | Single process | Two processes | +| **Auth/config reuse** | Full — same process | Partial — must reload from disk | +| **Testability** | Harder — mixed with tool server | Easier — isolated | +| **Future two-way support** | Adds reply tools to already-large server | Clean addition to focused server | +| **Deploy mode support** | Both (managed + BYO) | Both (managed + BYO) | + +--- + +## Recommendation + +**Start with Option A**, then extract to B if webhooks become important. + +Rationale: +1. **Zero friction** — Jack users already have `jack mcp serve` configured. Adding channel capability is a server-side change they get for free. +2. **Tool-triggered deploy watching** (A2) is the highest-value event and needs no new infrastructure. +3. **Log error streaming** reuses existing SSE infrastructure from `log-worker`. +4. The webhook use case (Option B's main advantage) requires tunneling for remote CI, which adds setup complexity that undermines Jack's "zero friction" philosophy. +5. If webhook demand emerges, the channel event code (`deploy-watcher.ts`, `log-watcher.ts`) extracts cleanly into a standalone server. + +### Implementation order + +1. 
**Phase 1: Deploy notifications** — Add channel capability, emit deploy_complete/deploy_failed events after `deploy_project` tool calls. ~100 LOC. +2. **Phase 2: Log error alerts** — Subscribe to project's log SSE stream, filter errors/exceptions, push as channel events. ~100 LOC. +3. **Phase 3 (if needed): Extract to standalone** — Move channel code to `jack channel serve` command, add webhook HTTP server. + +--- + +## Open Questions + +1. **Managed-only or both modes?** Log streaming only works for managed (Jack Cloud) projects today. BYO projects would need wrangler tail, which has different output format. Should Phase 1-2 be managed-only? + +2. **Event filtering** — Should the user be able to configure which events they receive? e.g., only errors above a severity, only for specific projects? Or start simple (all events for linked project)? + +3. **Multi-project** — The MCP server currently operates on the working directory's project. Should the channel watch multiple projects, or just the current one? + +4. **Channel instructions** — The `instructions` string goes into Claude's system prompt. What should Claude do when it receives events? Suggestions: + - Deploy success: summarize and note the URL + - Deploy failure: investigate build logs + - Production error: check recent deploys, read relevant code, suggest fix From d14dcfdaf0db8b781e3aa5cde96459edaea41ec5 Mon Sep 17 00:00:00 2001 From: hellno Date: Sat, 21 Mar 2026 13:52:33 +0100 Subject: [PATCH 2/5] docs: narrow channels plan to Option A with deploy notifications MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Decisions made: - Extend `jack mcp serve` (not standalone server) - Local Claude Code users only (channels are stdio/local-only) - Phase 1: deploy status notifications (tool-triggered polling) - Managed mode only for v1 - Session-scoped is acceptable Discarded standalone option — webhook ingestion needs tunneling, too much friction. 
Extraction path documented if needed later. --- docs/plans/channels-integration.md | 296 ++++++++++++++--------------- 1 file changed, 144 insertions(+), 152 deletions(-) diff --git a/docs/plans/channels-integration.md b/docs/plans/channels-integration.md index 2428132..fb364ed 100644 --- a/docs/plans/channels-integration.md +++ b/docs/plans/channels-integration.md @@ -1,198 +1,190 @@ -# Jack + Claude Code Channels: One-Way Integration Plan +# Jack + Claude Code Channels: One-Way Deploy Notifications -## Goal +## Decision -Push deployment events, production errors, and log alerts into a running Claude Code session so Claude can react using Jack's existing MCP tools — without the user needing to be at the terminal. +**Extend `jack mcp serve`** to push deploy status events into Claude Code sessions. -## Background +Scope: Local Claude Code users only. Session-scoped (events flow while Claude Code is running). Phase 1 is deploy notifications only. -Claude Code channels are MCP servers that declare `claude/channel` capability and emit `notifications/claude/channel` events. Claude receives these as `` tags in its context. The channel runs as a subprocess spawned by Claude Code, communicating over stdio. +## Context -Jack already has a local MCP server (`jack mcp serve`) with 27 tools and 3 resources. The question is whether to extend it or build a separate channel server. +### What are Claude Code channels? ---- +An MCP server that declares `claude/channel` capability and emits `notifications/claude/channel` events. Claude receives these as `` tags. The channel is a stdio subprocess — strictly local, no cloud/remote support. + +### Constraints discovered + +- **Channels are local-only**: stdio subprocess on user's machine. Won't work with claude.ai web, `claude --remote`, or GitHub Actions. +- **Control plane has no push mechanism**: Deploy status requires polling `GET /v1/projects/{id}/deployments/latest`. No webhooks, no pub/sub. 
+- **Local and remote MCP don't share code**: Local MCP (27 tools, stdio) and remote MCP at mcp.getjack.org (14 tools, HTTP) are independent. Channels can only attach to local. +- **Research preview**: Custom channels require `--dangerously-load-development-channels` until allowlisted by Anthropic. -## Option A: Extend `jack mcp serve` +### Why extend `jack mcp serve` (not standalone) -Add `claude/channel` capability to the existing MCP server. The same process that handles tool calls also subscribes to events and pushes notifications. +| Factor | Extend MCP server | Standalone `jack channel serve` | +|--------|-------------------|-------------------------------| +| User setup | None — already configured | New `.mcp.json` entry + `--channels` flag | +| Auth/config | Reuses in-process | Must reload from disk | +| Process count | Same single process | Second process to manage | +| Webhook ingestion | No | Yes (local HTTP server) | +| Testability | Harder (mixed with tools) | Easier (isolated) | + +**Decision: Extend.** The standalone option's main advantage is webhook ingestion, which requires tunneling for remote CI — too much friction. If webhooks become needed later, the channel code extracts cleanly into a standalone server. + +--- + +## Design: Deploy Status Channel ### How it works -1. Add `experimental: { 'claude/channel': {} }` to the server's capabilities in `apps/cli/src/mcp/server.ts` -2. Add an `instructions` string telling Claude what events to expect -3. After server connects, start background event loops: - - **Deploy watcher**: Poll control plane for deployment status changes on linked project - - **Log stream**: Connect to the project's SSE log stream, filter for errors/exceptions, push as channel events -4. User starts Claude Code with: `claude --channels server:jack` +1. Add `experimental: { 'claude/channel': {} }` to MCP server capabilities +2. Add `instructions` string to Claude's system prompt describing events +3. 
After `deploy_project` tool call completes, start polling control plane for final deployment status +4. Emit `notifications/claude/channel` when status resolves to `live` or `failed` +5. User enables with: `claude --channels server:jack` -### Event sources +### Trigger: Tool-triggered polling (not always-on) -| Event | Source | Mechanism | -|-------|--------|-----------| -| Deploy completed/failed | Control plane `/v1/projects/{id}/overview` | Poll every 5s after deploy starts | -| Production error | Log worker SSE stream | Subscribe on startup, filter `level: "error"` or exceptions | -| Cron execution result | Control plane | Poll or future webhook | +The control plane has no push mechanism. Three polling strategies were considered: -### What changes +| Strategy | Description | Pros | Cons | +|----------|-------------|------|------| +| Always poll | Poll overview endpoint every 10s on startup | Catches all deploys | Wasteful, noisy | +| **Tool-triggered** | Poll after `deploy_project` call until resolved | No wasted polls, natural UX | Misses deploys from other terminals | +| Hybrid | Low-freq poll + high-freq after deploy | Best coverage | Complex | -``` -apps/cli/src/mcp/ -├── server.ts # Add channel capability + instructions -├── channel/ # NEW directory -│ ├── events.ts # Event emitter, notification dispatch -│ ├── deploy-watcher.ts # Polls control plane for deploy status -│ └── log-watcher.ts # SSE subscription for error alerts -└── tools/index.ts # Unchanged -``` +**Chosen: Tool-triggered.** Claude deploys → polls for result → gets notified. Deploys from other terminals are an edge case for later. 
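
The chosen trigger boils down to one bounded loop. A generic sketch of the poll-until-resolved helper the watcher would use (the name and signature are hypothetical):

```typescript
// Bounded poller (sketch): runs `check` until it returns a non-null terminal
// value or `maxAttempts` is exhausted; a null result means timeout.
export async function pollUntil<T>(
  check: () => Promise<T | null>,
  maxAttempts = 60, // 5 min at 5s intervals
  intervalMs = 5000
): Promise<T | null> {
  for (let i = 0; i < maxAttempts; i++) {
    const result = await check();
    if (result !== null) return result;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return null;
}
```

Returning null on timeout (rather than throwing) lets the caller emit a neutral "still building" notification instead of a spurious failure.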
-### Notification examples +### Notification format ```xml - - + Deployment live at https://user-my-api.runjack.xyz +``` - - -TypeError: Cannot read property 'id' of undefined - at handler (src/index.ts:42:15) -Request: GET /api/users/123 → 500 - - - - +```xml + Build failed: Module not found: ./missing-import ``` -### Tradeoffs - -| | | -|---|---| -| **Pro** | Zero new infrastructure — reuses auth, project link, deploy mode detection | -| **Pro** | Single process — no extra server to manage | -| **Pro** | MCP tools available in same session — Claude can immediately `execute_sql`, `tail_logs`, `test_endpoint` to investigate | -| **Pro** | Already installed for Jack users via `jack mcp install` | -| **Con** | Channel lifecycle tied to MCP server — can't run channel without all 27 tools | -| **Con** | Polling from inside the MCP server adds background work to a stdio process | -| **Con** | `--channels server:jack` syntax requires the server name in `.mcp.json` to be "jack" (it already is) | -| **Con** | Harder to test channel behavior in isolation | - -### Key decision: What triggers the deploy watcher? - -The control plane has no push mechanism for deploy events. Options: - -- **A1: Always poll** — On startup, start polling the linked project's overview endpoint every 10s. Simple but wasteful. -- **A2: Tool-triggered** — When Claude calls `deploy_project`, start polling for that deployment ID until it resolves. No wasted polls, but misses deploys from other terminals. -- **A3: Hybrid** — Poll on startup at low frequency (30s), increase to 5s after a deploy tool call. Best coverage, more complex. - -**Recommendation: A2 (tool-triggered).** Most natural — Claude deploys, then gets notified of the result. Deploys from other terminals are edge cases that can be added later. - ---- +### Channel instructions (added to Claude's system prompt) -## Option B: Standalone `jack channel serve` - -New CLI command that starts a dedicated channel server. 
It's a separate MCP server process that only does channel events — no tools. - -### How it works - -1. New command: `jack channel serve` starts an MCP server with only `claude/channel` capability -2. Optionally starts a local HTTP server for webhook ingestion (`--webhook-port 8788`) -3. Connects to control plane SSE for log streaming -4. User registers it in `.mcp.json` separately and starts with: `claude --channels server:jack-channel` - -### Event sources - -Same as Option A, plus: +``` +Events from the jack channel are deployment status notifications. They arrive as + tags. -| Event | Source | Mechanism | -|-------|--------|-----------| -| CI webhook | Local HTTP POST to `localhost:8788` | GitHub Actions `workflow_run` webhook via smee.io or similar | -| Custom hooks | Local HTTP POST | `jack ship --notify` POSTs to channel after deploy | +- deploy_complete: The deployment is live. Note the URL and confirm to the user. +- deploy_failed: The deployment failed. Read the error message, check the relevant + source code, and suggest a fix. Use tail_logs or test_endpoint to gather more context + if needed. 
+``` ### What changes ``` -apps/cli/src/ -├── commands/ -│ └── channel.ts # NEW: `jack channel serve` command -├── channel/ # NEW directory -│ ├── server.ts # Channel-only MCP server -│ ├── webhook-server.ts # Local HTTP listener for webhooks -│ ├── deploy-watcher.ts # Polls control plane -│ └── log-watcher.ts # SSE subscription -└── mcp/ - └── server.ts # Unchanged +apps/cli/src/mcp/ +├── server.ts # Add channel capability, instructions, wire up deploy watcher +├── channel/ # NEW +│ └── deploy-watcher.ts # Poll control plane after deploy, emit notifications +└── tools/index.ts # After deploy_project succeeds, trigger watcher ``` -### Tradeoffs - -| | | -|---|---| -| **Pro** | Clean separation — channel concerns don't touch MCP tool server | -| **Pro** | Can run independently of `jack mcp serve` | -| **Pro** | Webhook endpoint enables CI/CD integration (GitHub Actions → channel → Claude) | -| **Pro** | Easier to test in isolation | -| **Pro** | Can evolve independently (add two-way later without touching tool server) | -| **Con** | Two processes to manage — user runs both `jack mcp serve` and `jack channel serve` | -| **Con** | Duplicates some setup: auth loading, project detection, deploy mode checks | -| **Con** | User must configure `.mcp.json` with a second entry and pass both to `--channels` | -| **Con** | Webhook port requires the external system to know `localhost:8788` — doesn't work for remote CI without tunneling | - -### Key decision: Webhook server or polling only? - -- **B1: Polling only** — Same as Option A, just in a separate process. Simpler but loses the webhook advantage that justifies the separation. -- **B2: Webhook + polling** — HTTP server for external events, polling for deploy status. More useful but more surface area. - -**Recommendation: B2 (webhook + polling).** The webhook endpoint is the main reason to choose Option B over A. Without it, the separation adds complexity for no gain. 
- ---- - -## Comparison Matrix +### Implementation sketch + +**`apps/cli/src/mcp/server.ts`** — Add channel capability: +```typescript +const server = new McpServer( + { name: "jack", version }, + { + capabilities: { + tools: {}, + resources: {}, + experimental: { "claude/channel": {} }, + }, + instructions: CHANNEL_INSTRUCTIONS, + } +); +``` -| Dimension | Option A (extend MCP) | Option B (standalone) | -|-----------|----------------------|----------------------| -| **Setup effort for user** | None — already have `jack mcp serve` | Must add second server to `.mcp.json` | -| **Implementation effort** | ~200 LOC added to existing server | ~400 LOC new command + server | -| **Event sources** | Deploy status, log errors | Deploy status, log errors, webhooks | -| **CI/CD integration** | Not possible | Yes, via local webhook endpoint | -| **Process management** | Single process | Two processes | -| **Auth/config reuse** | Full — same process | Partial — must reload from disk | -| **Testability** | Harder — mixed with tool server | Easier — isolated | -| **Future two-way support** | Adds reply tools to already-large server | Clean addition to focused server | -| **Deploy mode support** | Both (managed + BYO) | Both (managed + BYO) | +**`apps/cli/src/mcp/channel/deploy-watcher.ts`** — Poll and notify: +```typescript +export async function watchDeployment( + server: Server, + projectId: string, + deploymentId: string, + projectName: string, + deployMode: string +) { + const maxAttempts = 60; // 5 min at 5s intervals + for (let i = 0; i < maxAttempts; i++) { + await sleep(5000); + const status = await fetchDeploymentStatus(projectId, deploymentId); + + if (status === "live" || status === "failed") { + await server.notification({ + method: "notifications/claude/channel", + params: { + content: status === "live" + ? `Deployment live at ${url}` + : `Build failed: ${errorMessage}`, + meta: { + event: status === "live" ? 
"deploy_complete" : "deploy_failed", + project: projectName, + deployment_id: deploymentId, + deploy_mode: deployMode, + }, + }, + }); + return; + } + } +} +``` ---- +**`apps/cli/src/mcp/tools/index.ts`** — Trigger watcher after deploy: +```typescript +case "deploy_project": { + const result = await deployProject(projectPath, options); -## Recommendation + // Fire-and-forget: watch for final status in background + if (result.deploymentId && result.deployMode === "managed") { + watchDeployment(server, projectId, result.deploymentId, result.projectName, result.deployMode) + .catch(err => console.error("[channel] deploy watcher error:", err)); + } -**Start with Option A**, then extract to B if webhooks become important. + return formatSuccessResponse(result, startTime); +} +``` -Rationale: -1. **Zero friction** — Jack users already have `jack mcp serve` configured. Adding channel capability is a server-side change they get for free. -2. **Tool-triggered deploy watching** (A2) is the highest-value event and needs no new infrastructure. -3. **Log error streaming** reuses existing SSE infrastructure from `log-worker`. -4. The webhook use case (Option B's main advantage) requires tunneling for remote CI, which adds setup complexity that undermines Jack's "zero friction" philosophy. -5. If webhook demand emerges, the channel event code (`deploy-watcher.ts`, `log-watcher.ts`) extracts cleanly into a standalone server. +### Scope boundaries -### Implementation order +**In scope (Phase 1):** +- Channel capability declaration in MCP server +- Deploy status notifications (complete/failed) after `deploy_project` tool calls +- Managed mode only (control plane polling) +- Single project (working directory's linked project) -1. **Phase 1: Deploy notifications** — Add channel capability, emit deploy_complete/deploy_failed events after `deploy_project` tool calls. ~100 LOC. -2. 
**Phase 2: Log error alerts** — Subscribe to project's log SSE stream, filter errors/exceptions, push as channel events. ~100 LOC. -3. **Phase 3 (if needed): Extract to standalone** — Move channel code to `jack channel serve` command, add webhook HTTP server. +**Out of scope (future phases):** +- Production error streaming (Phase 2 — requires SSE log subscription) +- BYO mode deploy watching (would need wrangler deployment polling) +- Multi-project watching +- Event filtering / configuration +- Webhook ingestion (would require extracting to standalone) +- Two-way channel (reply tools) +- claude.ai web / remote MCP support (channels are local-only) --- -## Open Questions - -1. **Managed-only or both modes?** Log streaming only works for managed (Jack Cloud) projects today. BYO projects would need wrangler tail, which has different output format. Should Phase 1-2 be managed-only? +## Discarded: Option B (Standalone `jack channel serve`) -2. **Event filtering** — Should the user be able to configure which events they receive? e.g., only errors above a severity, only for specific projects? Or start simple (all events for linked project)? +A separate CLI command starting a channel-only MCP server with optional webhook HTTP endpoint. -3. **Multi-project** — The MCP server currently operates on the working directory's project. Should the channel watch multiple projects, or just the current one? +**Why discarded:** +- Adds a second process for users to manage +- Requires separate `.mcp.json` entry and `--channels` config +- Main advantage (webhook ingestion) needs tunneling for remote CI — too much friction +- Duplicates auth/config loading already available in-process -4. **Channel instructions** — The `instructions` string goes into Claude's system prompt. What should Claude do when it receives events? 
Suggestions: - - Deploy success: summarize and note the URL - - Deploy failure: investigate build logs - - Production error: check recent deploys, read relevant code, suggest fix +**Extraction path:** If webhook demand emerges, `channel/deploy-watcher.ts` moves cleanly into a standalone server. The notification logic is transport-agnostic. From 3b8daa280a1bbdecfbcd03304ce758351c521363 Mon Sep 17 00:00:00 2001 From: hellno Date: Sat, 21 Mar 2026 14:05:01 +0100 Subject: [PATCH 3/5] =?UTF-8?q?docs:=20rewrite=20channels=20plan=20?= =?UTF-8?q?=E2=80=94=20sync=20deploys=20+=20error=20streaming?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Key insight: channels are wrong primitive for deploy notifications. The deploy_project MCP tool should just poll until resolved (~15 LOC). Channels are right for production error streaming — async events Claude can't know about unless something pushes them in. Priority 1: Make deploy_project return final status (no channels) Priority 2: Stream production errors via channel (genuinely unique) --- docs/plans/channels-integration.md | 298 ++++++++++++++++------------- 1 file changed, 166 insertions(+), 132 deletions(-) diff --git a/docs/plans/channels-integration.md b/docs/plans/channels-integration.md index fb364ed..abc2d72 100644 --- a/docs/plans/channels-integration.md +++ b/docs/plans/channels-integration.md @@ -1,190 +1,224 @@ -# Jack + Claude Code Channels: One-Way Deploy Notifications +# Jack + Claude Code Channels Integration -## Decision +## Summary -**Extend `jack mcp serve`** to push deploy status events into Claude Code sessions. +Two independent improvements to Jack's Claude Code experience: -Scope: Local Claude Code users only. Session-scoped (events flow while Claude Code is running). Phase 1 is deploy notifications only. +1. **Fix `deploy_project` to return final status** — Make the MCP tool poll until the deployment resolves instead of returning "building". No channels needed. 
~15 LOC. +2. **Production error streaming via channel** — Push real-time production errors into Claude's session so it can auto-investigate. This is the genuine channels use case. ~150 LOC. ## Context ### What are Claude Code channels? -An MCP server that declares `claude/channel` capability and emits `notifications/claude/channel` events. Claude receives these as `` tags. The channel is a stdio subprocess — strictly local, no cloud/remote support. +An MCP server that declares `claude/channel` capability and emits `notifications/claude/channel` events. Claude receives these as `` tags. Strictly local (stdio subprocess), session-scoped (events flow only while Claude Code is running). -### Constraints discovered +### Why NOT use channels for deploy notifications -- **Channels are local-only**: stdio subprocess on user's machine. Won't work with claude.ai web, `claude --remote`, or GitHub Actions. -- **Control plane has no push mechanism**: Deploy status requires polling `GET /v1/projects/{id}/deployments/latest`. No webhooks, no pub/sub. -- **Local and remote MCP don't share code**: Local MCP (27 tools, stdio) and remote MCP at mcp.getjack.org (14 tools, HTTP) are independent. Channels can only attach to local. -- **Research preview**: Custom channels require `--dangerously-load-development-channels` until allowlisted by Anthropic. +We initially planned channels for deploy status notifications. After analysis: -### Why extend `jack mcp serve` (not standalone) +- **`deploy_project` MCP tool**: Claude awaits the result. If the tool polls until resolved, Claude gets the final status inline. No channel needed. +- **`jack ship` via Bash**: Claude sees CLI output directly. Already knows the outcome. +- **Channels add complexity for no gain here**: Background polling, MCP server lifecycle concerns, `--channels` flag — all to deliver a notification that the tool should just return synchronously. 
-| Factor | Extend MCP server | Standalone `jack channel serve` | -|--------|-------------------|-------------------------------| -| User setup | None — already configured | New `.mcp.json` entry + `--channels` flag | -| Auth/config | Reuses in-process | Must reload from disk | -| Process count | Same single process | Second process to manage | -| Webhook ingestion | No | Yes (local HTTP server) | -| Testability | Harder (mixed with tools) | Easier (isolated) | +Deploy notifications via channels solve a problem that doesn't exist. The tool should just wait for the result. -**Decision: Extend.** The standalone option's main advantage is webhook ingestion, which requires tunneling for remote CI — too much friction. If webhooks become needed later, the channel code extracts cleanly into a standalone server. +### Where channels ARE the right primitive + +Events that happen **outside of any tool call**: production errors hitting your live app, cron failures, unexpected 500s. Things Claude can't know about unless something pushes them in. No other platform does this. + +### Constraints + +- **Channels are local-only**: stdio subprocess. Won't work with claude.ai web, `claude --remote`, or GitHub Actions. +- **Control plane has no push for deploys**: Status requires polling `GET /v1/projects/{id}/deployments/latest`. +- **Control plane HAS push for logs**: SSE stream at `GET /v1/projects/{id}/logs/stream` (1-hour sessions). +- **Research preview**: Custom channels require `--dangerously-load-development-channels` until allowlisted. +- **One MCP server per Claude Code session**, operating on the working directory's project. --- -## Design: Deploy Status Channel +## Priority 1: Synchronous Deploy Status (~15 LOC) -### How it works +### Problem -1. Add `experimental: { 'claude/channel': {} }` to MCP server capabilities -2. Add `instructions` string to Claude's system prompt describing events -3. 
After `deploy_project` tool call completes, start polling control plane for final deployment status -4. Emit `notifications/claude/channel` when status resolves to `live` or `failed` -5. User enables with: `claude --channels server:jack` +`deploy_project` MCP tool calls `deployProject()` which uploads code and returns immediately with `status: "building"` or `"queued"`. Claude doesn't know the final outcome. + +### Fix + +Add a polling loop in the MCP tool handler (not in the shared library — CLI has its own UX for this). Poll `GET /v1/projects/{id}/deployments/latest` every 3s until status resolves to `live` or `failed`, with a 5-minute timeout. + +### What changes + +``` +apps/cli/src/mcp/tools/index.ts # Add polling after deployProject() call +``` + +### Implementation sketch + +```typescript +case "deploy_project": { + const result = await deployProject(projectPath, options); + + // For managed deploys, poll until final status + if (result.deploymentId && result.deployMode === "managed" + && result.deployStatus !== "live" && result.deployStatus !== "failed") { + const final = await pollDeploymentStatus(projectId, result.deploymentId, 100, 3000); + if (final) { + result.deployStatus = final.status; + result.errorMessage = final.error_message; + result.workerUrl = final.url ?? result.workerUrl; + } + } + + return formatSuccessResponse(result, startTime); +} +``` + +### Post-deploy auto-verify + +Update the MCP server `instructions` (or AGENTS.md) to tell Claude: -### Trigger: Tool-triggered polling (not always-on) +> After a successful deployment, call `test_endpoint` on the project URL to verify it's responding correctly. If the endpoint returns an error, use `tail_logs` to investigate. -The control plane has no push mechanism. Three polling strategies were considered: +This creates the **code → ship → verify → fix loop** that makes Jack unique. No other platform auto-verifies deploys through the AI session. 
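The implementation sketch above calls `pollDeploymentStatus` without defining it. A minimal version might look like the following sketch; the injectable `fetchLatest` callback stands in for a GET on `/v1/projects/{id}/deployments/latest` (injected here purely so the loop is testable without a control plane), and all names and shapes are illustrative assumptions, not Jack's actual API:

```typescript
// Minimal sketch of the pollDeploymentStatus helper referenced above.
// `fetchLatest` is an assumed stand-in for the control plane call
// GET /v1/projects/{id}/deployments/latest; in the real tool it would
// be a closure over projectId. Shapes are illustrative, not Jack's API.

interface DeploymentStatus {
  id: string;
  status: string; // "queued" | "building" | "live" | "failed"
  error_message: string | null;
}

async function pollDeploymentStatus(
  deploymentId: string,
  fetchLatest: () => Promise<DeploymentStatus | null>,
  maxAttempts = 100, // 100 attempts at 3s intervals = the 5-minute timeout
  intervalMs = 3000,
): Promise<DeploymentStatus | null> {
  for (let i = 0; i < maxAttempts; i++) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    // Network blips resolve to null and we simply keep polling
    const dep = await fetchLatest().catch(() => null);
    if (dep?.id === deploymentId && (dep.status === "live" || dep.status === "failed")) {
      return dep;
    }
  }
  return null; // timed out; caller keeps the last non-terminal status
}
```

Returning `null` on timeout (rather than throwing) lets the tool handler fall back to reporting the last known status instead of failing the whole call.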
+ +--- + +## Priority 2: Production Error Streaming via Channel (~150 LOC) + +### Problem + +When a production error happens (500, uncaught exception), nobody knows until a user reports it or the developer checks logs manually. Claude is sitting right there with all the tools to investigate, but has no way to learn about it. + +### Solution + +Extend `jack mcp serve` with `claude/channel` capability. On startup (when channel is enabled), subscribe to the project's log SSE stream. Filter for errors/exceptions. Push them into Claude's session as channel events. + +### How it works + +1. Add `experimental: { 'claude/channel': {} }` to MCP server capabilities +2. Add `instructions` telling Claude how to handle error events +3. On server startup, detect the linked project and start a log session +4. Connect to the SSE stream, filter for `level: "error"` or exceptions +5. Emit `notifications/claude/channel` for each error +6. User enables with: `claude --channels server:jack` -| Strategy | Description | Pros | Cons | -|----------|-------------|------|------| -| Always poll | Poll overview endpoint every 10s on startup | Catches all deploys | Wasteful, noisy | -| **Tool-triggered** | Poll after `deploy_project` call until resolved | No wasted polls, natural UX | Misses deploys from other terminals | -| Hybrid | Low-freq poll + high-freq after deploy | Best coverage | Complex | +### Event flow -**Chosen: Tool-triggered.** Claude deploys → polls for result → gets notified. Deploys from other terminals are an edge case for later. 
+```
+User hits deployed app → 500 error
+        ↓
+Tenant Worker logs error
+        ↓
+log-worker (tail consumer) → LogStreamDO
+        ↓
+SSE stream → jack MCP server (channel subscriber)
+        ↓
+notifications/claude/channel → Claude Code session
+        ↓
+Claude reads error, checks code, uses tail_logs/test_endpoint to investigate
+```
 
 ### Notification format
 
 ```xml
-<channel-message event="deploy_complete">
-Deployment live at https://user-my-api.runjack.xyz
-</channel-message>
+<channel-message event="error">
+TypeError: Cannot read property 'id' of undefined
+  at handler (src/index.ts:42:15)
+Request: GET /api/users/123 → 500
+</channel-message>
-```
-```xml
-<channel-message event="deploy_failed">
-Build failed: Module not found: ./missing-import
+<channel-message event="exception">
+Uncaught ReferenceError: config is not defined
+  at scheduled (src/cron.ts:8:3)
+</channel-message>
 ```
 
-### Channel instructions (added to Claude's system prompt)
+### Channel instructions
 
 ```
-Events from the jack channel are deployment status notifications. They arrive as
-<channel-message> tags.
+Events from the jack channel are production alerts from your deployed project.
+They arrive as <channel-message> tags.
+
+- event="error": A request to your deployed app returned an error. Read the
+  stack trace, find the relevant source code, and suggest a fix. Use tail_logs
+  to see if it's recurring. Use test_endpoint to reproduce if possible.
+- event="exception": An uncaught exception in your deployed code. This is
+  urgent — check the source, understand the cause, and suggest a fix.
 
-- deploy_complete: The deployment is live. Note the URL and confirm to the user.
-- deploy_failed: The deployment failed. Read the error message, check the relevant
-  source code, and suggest a fix. Use tail_logs or test_endpoint to gather more context
-  if needed.
+Do NOT redeploy automatically. Present the fix and let the user decide.
``` ### What changes ``` apps/cli/src/mcp/ -├── server.ts # Add channel capability, instructions, wire up deploy watcher +├── server.ts # Add channel capability + instructions, start log subscriber ├── channel/ # NEW -│ └── deploy-watcher.ts # Poll control plane after deploy, emit notifications -└── tools/index.ts # After deploy_project succeeds, trigger watcher +│ └── log-subscriber.ts # SSE log stream → filtered channel notifications +└── tools/index.ts # Unchanged ``` -### Implementation sketch +### Scope boundaries -**`apps/cli/src/mcp/server.ts`** — Add channel capability: -```typescript -const server = new McpServer( - { name: "jack", version }, - { - capabilities: { - tools: {}, - resources: {}, - experimental: { "claude/channel": {} }, - }, - instructions: CHANNEL_INSTRUCTIONS, - } -); -``` +**In scope:** +- Channel capability declaration in MCP server +- Log SSE subscription for the linked project (managed mode only) +- Error/exception filtering and notification +- Graceful handling of SSE disconnects (reconnect with backoff) -**`apps/cli/src/mcp/channel/deploy-watcher.ts`** — Poll and notify: -```typescript -export async function watchDeployment( - server: Server, - projectId: string, - deploymentId: string, - projectName: string, - deployMode: string -) { - const maxAttempts = 60; // 5 min at 5s intervals - for (let i = 0; i < maxAttempts; i++) { - await sleep(5000); - const status = await fetchDeploymentStatus(projectId, deploymentId); - - if (status === "live" || status === "failed") { - await server.notification({ - method: "notifications/claude/channel", - params: { - content: status === "live" - ? `Deployment live at ${url}` - : `Build failed: ${errorMessage}`, - meta: { - event: status === "live" ? 
"deploy_complete" : "deploy_failed", - project: projectName, - deployment_id: deploymentId, - deploy_mode: deployMode, - }, - }, - }); - return; - } - } -} -``` +**Out of scope:** +- BYO mode (would need `wrangler tail` — different format and auth) +- Multi-project watching +- Event filtering configuration +- Two-way channel (reply tools) +- Auto-fix and redeploy (too risky — present fix, let user decide) -**`apps/cli/src/mcp/tools/index.ts`** — Trigger watcher after deploy: -```typescript -case "deploy_project": { - const result = await deployProject(projectPath, options); +--- - // Fire-and-forget: watch for final status in background - if (result.deploymentId && result.deployMode === "managed") { - watchDeployment(server, projectId, result.deploymentId, result.projectName, result.deployMode) - .catch(err => console.error("[channel] deploy watcher error:", err)); - } +## Why This Makes Jack Unique - return formatSuccessResponse(result, startTime); -} +### The self-verifying deploy loop (Priority 1) + +``` +Claude writes code + → deploy_project (waits for "live") + → test_endpoint (auto-verify) + → if error: tail_logs → read source → fix → redeploy + → repeat until healthy ``` -### Scope boundaries +No other platform does this. Vercel gives you a preview URL. GitHub gives you a check. Jack gives you an AI that ships, tests, and iterates until it works. 
-**In scope (Phase 1):** -- Channel capability declaration in MCP server -- Deploy status notifications (complete/failed) after `deploy_project` tool calls -- Managed mode only (control plane polling) -- Single project (working directory's linked project) +### The production-aware coding session (Priority 2) -**Out of scope (future phases):** -- Production error streaming (Phase 2 — requires SSE log subscription) -- BYO mode deploy watching (would need wrangler deployment polling) -- Multi-project watching -- Event filtering / configuration -- Webhook ingestion (would require extracting to standalone) -- Two-way channel (reply tools) -- claude.ai web / remote MCP support (channels are local-only) +``` +User's app running in production + → 500 error hits + → Claude sees it in real-time via channel + → Claude reads stack trace, finds bug in source + → Claude suggests fix (user deploys when ready) +``` + +No other platform streams production errors directly into your AI coding session. This turns Claude from a code-writing tool into a production partner. --- -## Discarded: Option B (Standalone `jack channel serve`) +## Discarded Options + +### Standalone `jack channel serve` + +Separate process with optional webhook HTTP endpoint. Discarded because: +- Adds process management complexity +- Webhook ingestion needs tunneling for remote CI +- Duplicates auth/config loading + +Extraction path exists if webhooks become needed. + +### Deploy notifications via channel -A separate CLI command starting a channel-only MCP server with optional webhook HTTP endpoint. +Initially planned, then discarded. The `deploy_project` tool should return final status synchronously. Channels are for async events Claude can't predict, not for results of tool calls Claude already made. 
-**Why discarded:** -- Adds a second process for users to manage -- Requires separate `.mcp.json` entry and `--channels` config -- Main advantage (webhook ingestion) needs tunneling for remote CI — too much friction -- Duplicates auth/config loading already available in-process +### Background deployment polling -**Extraction path:** If webhook demand emerges, `channel/deploy-watcher.ts` moves cleanly into a standalone server. The notification logic is transport-agnostic. +Polling control plane every 15s to catch deploys from any source (CLI, MCP, other terminals). Discarded as overcomplicated and fickle for the value delivered. From 98dcef1a081531c01298aaada8cad5ff5c5a8039 Mon Sep 17 00:00:00 2001 From: hellno Date: Sat, 21 Mar 2026 14:21:29 +0100 Subject: [PATCH 4/5] docs: add demo script, real error patterns, and testing strategy Concrete additions: - Live demo script: API + D1 schema drift (the #1 vibecoding error) - Real error patterns from all templates ranked by frequency - Log stream filter criteria from actual LogStreamDO format - Testing: unit (InMemoryTransport), integration (live project), E2E (demo) - Demo designed for screen recording with actual user value --- docs/plans/channels-integration.md | 240 +++++++++++++++++++++++++++++ 1 file changed, 240 insertions(+) diff --git a/docs/plans/channels-integration.md b/docs/plans/channels-integration.md index abc2d72..cd04cf9 100644 --- a/docs/plans/channels-integration.md +++ b/docs/plans/channels-integration.md @@ -204,6 +204,246 @@ No other platform streams production errors directly into your AI coding session --- +## Demo: The Production Error Loop (API + Database) + +A reproducible demo script that can be screen-recorded. Shows real value, not vibe-imagined. + +### Scenario + +User creates an API with a database. Adds a feature (filtering by priority). Deploys it. A user hits the new endpoint — it 500s because the DB column doesn't exist. 
Claude sees the error in real-time, investigates, and suggests the fix. + +This is the #1 error pattern across all Jack templates: **code references a column/table that doesn't exist in D1**. It happens constantly when vibecoders add features without thinking about migrations. + +### Prerequisites + +- Claude Code with `jack mcp serve` configured +- Jack CLI authenticated (managed mode) +- Channel enabled: `claude --dangerously-load-development-channels server:jack --channels server:jack` + +### Script + +```bash +# ── Step 1: Create and deploy a working API ────────────────────────── +# (In Claude Code session) +# > "Create a task API with a database. Include CRUD endpoints for tasks +# with title, status, and created_at fields." + +# Claude uses create_project + deploy_project + execute_sql to: +# - Create project from API template +# - Add D1 database with tasks table (id, title, status, created_at) +# - Deploy and verify with test_endpoint + +# Verify it works: +curl https://user-task-api.runjack.xyz/api/tasks +# → {"tasks": []} + +curl -X POST https://user-task-api.runjack.xyz/api/tasks \ + -H "Content-Type: application/json" \ + -d '{"title": "Ship channels feature"}' +# → {"task": {"id": 1, "title": "Ship channels feature", "status": "todo", ...}} + + +# ── Step 2: Add a feature that breaks production ───────────────────── +# > "Add a priority field to tasks. Support filtering by priority: +# GET /api/tasks?priority=high" + +# Claude adds the priority field to the code and redeploys. +# BUT: the D1 table still has the old schema — no "priority" column. +# Claude may or may not remember to run the migration. +# (This is the realistic vibecoding failure mode.) 
+
+
+# ── Step 3: A user hits the broken endpoint ──────────────────────────
+# (From another terminal, simulating a real user)
+curl "https://user-task-api.runjack.xyz/api/tasks?priority=high"
+# → 500 Internal Server Error
+
+
+# ── Step 4: Claude sees the error via channel ────────────────────────
+# In Claude Code's terminal, the channel event arrives:
+#
+# <channel-message event="error" project="task-api">
+# D1_ERROR: no such column: priority
+# Request: GET /api/tasks?priority=high → 500
+# </channel-message>
+#
+# Claude reacts:
+# "I see a production error — the `priority` column doesn't exist in the
+# tasks table. The code filters on it but the migration was never run.
+#
+# Fix: Run this SQL to add the column:
+# ALTER TABLE tasks ADD COLUMN priority TEXT DEFAULT 'medium';
+#
+# Want me to run this migration?"

# User approves → Claude runs execute_sql → endpoint works
+```
+
+### Why this demo is real
+
+1. **The error is the #1 failure mode** — schema drift is the most common bug in Jack apps with D1
+2. **No fake setup** — uses the actual API template, actual D1, actual log streaming
+3. **The fix is actionable** — Claude suggests a specific SQL migration, not vague advice
+4. **End-to-end value** — from error to fix in one interaction, no context switching
+
+### What makes it compelling on video
+
+- The user isn't even at the terminal when the error happens — someone else (or curl) triggers it
+- Claude catches it in real-time and immediately knows what's wrong
+- The fix is a one-liner SQL migration Claude can execute right there
+- After the fix, Claude can call `test_endpoint` to verify it works
+
+---
+
+## Real Error Patterns Across Templates
+
+These are the actual failure modes from Jack template source code, ranked by frequency:
+
+### Tier 1: Happens constantly (demo-worthy)
+
+| Error | Templates | Log signature | Claude can fix?
| +|-------|-----------|---------------|-----------------| +| Missing D1 column/table | api, cron, ai-chat, saas | `D1_ERROR: no such table/column` | Yes — `ALTER TABLE` or create table | +| Missing/invalid secret | saas, telegram-bot | `TypeError: Cannot read property of undefined` (on `env.STRIPE_KEY`) | Yes — identify which secret, tell user to set it | +| Unhandled JSON parse | api | `SyntaxError: Unexpected token` | Yes — add try/catch around `req.json()` | + +### Tier 2: Happens often (future channel value) + +| Error | Templates | Log signature | Claude can fix? | +|-------|-----------|---------------|-----------------| +| External fetch timeout | cron, telegram-bot | `fetch failed` or `AbortError` | Suggest — add timeout, retry logic | +| AI quota exceeded | ai-chat, semantic-search | `429 Too Many Requests` from Workers AI | Suggest — add rate limiting or quota check | +| Schema mismatch after update | saas (Better Auth) | `D1_ERROR: table X has no column named Y` | Yes — generate migration SQL | + +### Tier 3: Edge cases (nice to catch) + +| Error | Templates | Log signature | +|-------|-----------|---------------| +| Cron URL unreachable | cron | `fetch to https://... failed` in scheduled handler | +| WebSocket upgrade failure | chat | `Expected 101 Switching Protocols` | +| Stripe webhook signature invalid | saas | `Webhook signature verification failed` | + +### What the log stream actually contains + +From `LogStreamDO.normalizeTailEvent()`: + +```typescript +{ + type: "event", + ts: 1711234567890, + outcome: "exception", // or "ok", "exceededCpu", "exceededMemory", etc. 
+  request: { method: "GET", url: "https://user-task-api.runjack.xyz/api/tasks?priority=high" },
+  logs: [
+    { ts: 1711234567891, level: "error", message: ["D1_ERROR: no such column: priority"] }
+  ],
+  exceptions: [
+    { ts: 1711234567892, name: "Error", message: "D1_ERROR: no such column: priority" }
+  ]
+}
+```
+
+**Channel filter criteria:**
+- `exceptions.length > 0` — always push (uncaught errors)
+- `logs` with `level === "error"` — always push
+- `outcome === "exception"` or `outcome === "exceededCpu"` — always push
+- `outcome === "ok"` and no error logs — drop (normal traffic)
+
+---
+
+## Testing Strategy
+
+### Unit: Channel notification delivery
+
+Use `InMemoryTransport` from the MCP SDK — no subprocess, no network:
+
+```typescript
+import { expect, test } from "bun:test";
+import { z } from "zod";
+import { Client } from "@modelcontextprotocol/sdk/client/index.js";
+import { Server } from "@modelcontextprotocol/sdk/server/index.js";
+import { InMemoryTransport } from "@modelcontextprotocol/sdk/inMemory.js";
+
+// setNotificationHandler expects a Zod schema with a literal method, not a plain object
+const ChannelNotificationSchema = z.object({
+  method: z.literal("notifications/claude/channel"),
+  params: z.object({}).passthrough().optional(),
+});
+
+test("channel emits error notification", async () => {
+  const server = new Server(
+    { name: "jack", version: "0.1.0" },
+    { capabilities: { experimental: { "claude/channel": {} } } }
+  );
+
+  const client = new Client({ name: "test", version: "1.0" });
+  const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
+
+  const received: any[] = [];
+  client.setNotificationHandler(ChannelNotificationSchema, (notification) => {
+    received.push(notification);
+  });
+
+  await Promise.all([
+    client.connect(clientTransport),
+    server.connect(serverTransport),
+  ]);
+
+  // Simulate an error event
+  await server.notification({
+    method: "notifications/claude/channel",
+    params: {
+      content: "D1_ERROR: no such column: priority",
+      meta: { event: "error", project: "task-api" },
+    },
+  });
+
+  expect(received).toHaveLength(1);
+  expect(received[0].params.meta.event).toBe("error");
+});
+```
+
+### Unit: Log event filtering
+
+Test that the filter correctly separates errors from normal
traffic: + +```typescript +test("filters error events from log stream", () => { + const errorEvent = { + type: "event", ts: Date.now(), outcome: "exception", + request: { method: "GET", url: "/api/tasks" }, + logs: [{ ts: Date.now(), level: "error", message: ["D1_ERROR: no such column"] }], + exceptions: [{ ts: Date.now(), name: "Error", message: "D1_ERROR: no such column" }], + }; + + const okEvent = { + type: "event", ts: Date.now(), outcome: "ok", + request: { method: "GET", url: "/health" }, + logs: [], exceptions: [], + }; + + expect(shouldEmitChannelNotification(errorEvent)).toBe(true); + expect(shouldEmitChannelNotification(okEvent)).toBe(false); +}); +``` + +### Integration: Full channel with live project + +```bash +# 1. Deploy a test project +jack new channel-test --template api +jack ship + +# 2. Start Claude Code with channel +claude --dangerously-load-development-channels server:jack --channels server:jack + +# 3. Trigger an error (from another terminal) +curl -X POST https://user-channel-test.runjack.xyz/api/echo \ + -H "Content-Type: text/plain" -d "not json" +# → 500 (SyntaxError: Unexpected token) + +# 4. Verify channel event appears in Claude Code session +# Look for: +``` + +### E2E: The demo script above + +Run the full demo script. Record the terminal. This IS the test — if the error flows through and Claude reacts correctly, the feature works. + +--- + ## Discarded Options ### Standalone `jack channel serve` From acae87e9e2f0402ef775cc0df484ad2b5a78512c Mon Sep 17 00:00:00 2001 From: hellno Date: Mon, 23 Mar 2026 09:33:31 +0100 Subject: [PATCH 5/5] feat(mcp): add deploy polling and channel error streaming MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Two improvements to the MCP server: 1. deploy_project now polls until deployment resolves to "live" or "failed" instead of returning "building". Polls fetchProjectOverview every 3s, 60 attempts max (3 min timeout). Managed mode only. 2. 
Channel capability (claude/channel) declared on MCP server. Log subscriber connects to project's SSE log stream and pushes production errors/exceptions into Claude's session as channel notifications. Includes dedup (60s window), clean shutdown via AbortController, and exponential backoff reconnect. Channel is blocked by Anthropic's allowlist during research preview — --dangerously-load-development-channels bypass is unreliable. Deploy polling tested and working (returns "live" in ~10s). --- apps/cli/src/mcp/channel/log-subscriber.ts | 211 +++++++++++++++++++++ apps/cli/src/mcp/server.ts | 22 +++ apps/cli/src/mcp/tools/index.ts | 48 ++++- apps/cli/tests/channel-log-filter.test.ts | 158 +++++++++++++++ 4 files changed, 438 insertions(+), 1 deletion(-) create mode 100644 apps/cli/src/mcp/channel/log-subscriber.ts create mode 100644 apps/cli/tests/channel-log-filter.test.ts diff --git a/apps/cli/src/mcp/channel/log-subscriber.ts b/apps/cli/src/mcp/channel/log-subscriber.ts new file mode 100644 index 0000000..51791b3 --- /dev/null +++ b/apps/cli/src/mcp/channel/log-subscriber.ts @@ -0,0 +1,211 @@ +import { basename } from "node:path"; +import type { Server as McpServer } from "@modelcontextprotocol/sdk/server/index.js"; +import { authFetch } from "../../lib/auth/index.ts"; +import { getControlApiUrl, startLogSession } from "../../lib/control-plane.ts"; +import { getDeployMode, getProjectId } from "../../lib/project-link.ts"; +import type { DebugLogger, McpServerOptions } from "../types.ts"; + +export interface LogEvent { + type: string; + ts: number; + outcome: string | null; + request: { method?: string; url?: string } | null; + logs: Array<{ ts: number | null; level: string | null; message: unknown[] }>; + exceptions: Array<{ + ts: number | null; + name: string | null; + message: string | null; + }>; +} + +const ERROR_OUTCOMES = new Set([ + "exception", + "exceededCpu", + "exceededMemory", + "exceededWallTime", + "scriptNotFound", +]); + +/** Determine whether a log 
event should trigger a channel notification. */
+export function shouldEmitChannelNotification(event: LogEvent): boolean {
+  if (event.exceptions.length > 0) return true;
+  if (event.logs.some((l) => l.level === "error")) return true;
+  if (event.outcome && ERROR_OUTCOMES.has(event.outcome)) return true;
+  return false;
+}
+
+/**
+ * Format a log event into channel notification content and metadata.
+ */
+export function formatChannelContent(event: LogEvent): {
+  content: string;
+  meta: Record<string, string>;
+} {
+  const parts: string[] = [];
+
+  for (const exc of event.exceptions) {
+    parts.push(`${exc.name ?? "Error"}: ${exc.message ?? "Unknown error"}`);
+  }
+  for (const log of event.logs) {
+    if (log.level === "error") {
+      parts.push(log.message.map(String).join(" "));
+    }
+  }
+  // For resource-limit outcomes with no exceptions/error logs, describe the outcome
+  if (parts.length === 0 && event.outcome && ERROR_OUTCOMES.has(event.outcome)) {
+    parts.push(`Worker ${event.outcome}`);
+  }
+  if (event.request) {
+    parts.push(`Request: ${event.request.method ?? "?"} ${event.request.url ?? "?"}`);
+  }
+
+  const eventType = event.exceptions.length > 0 ? "exception" : "error";
+
+  return {
+    content: parts.join("\n"),
+    meta: {
+      event: eventType,
+      outcome: event.outcome ?? "unknown",
+    },
+  };
+}
+
+/**
+ * Subscribe to a project's real-time log stream and emit channel notifications
+ * for errors and exceptions. Runs until the server closes, with reconnect-on-failure.
+ *
+ * Only works for managed (Jack Cloud) projects — silently skips BYO projects.
+ * Deduplicates repeated errors within a 60-second window to avoid flooding Claude's context.
+ */
+export async function startChannelLogSubscriber(
+  server: McpServer,
+  options: McpServerOptions,
+  debug: DebugLogger,
+): Promise<void> {
+  const projectPath = options.projectPath ??
process.cwd(); + + const deployMode = await getDeployMode(projectPath).catch(() => null); + if (deployMode !== "managed") { + debug("Channel log subscriber: not a managed project, skipping"); + return; + } + + const projectId = await getProjectId(projectPath); + if (!projectId) { + debug("Channel log subscriber: no project ID found, skipping"); + return; + } + + const projectName = basename(projectPath); + + debug("Channel log subscriber: starting", { projectId, projectName }); + + // Abort when the server closes so the process can exit cleanly + const abortController = new AbortController(); + server.onclose = () => abortController.abort(); + + // Deduplicate: suppress identical error messages within a 60s window + const DEDUP_WINDOW_MS = 60_000; + const recentErrors = new Map(); + + function isDuplicate(content: string): boolean { + const now = Date.now(); + // Prune expired entries + for (const [key, entry] of recentErrors) { + if (now - entry.firstSeen > DEDUP_WINDOW_MS) { + recentErrors.delete(key); + } + } + const existing = recentErrors.get(content); + if (existing) { + existing.count++; + return true; + } + recentErrors.set(content, { count: 1, firstSeen: now }); + return false; + } + + let backoff = 5000; + const maxBackoff = 60000; + + while (!abortController.signal.aborted) { + try { + const session = await startLogSession(projectId, "channel"); + const streamUrl = `${getControlApiUrl()}${session.stream.url}`; + + debug("Channel log subscriber: connected to SSE stream"); + backoff = 5000; + + const response = await authFetch(streamUrl, { + method: "GET", + headers: { Accept: "text/event-stream" }, + signal: abortController.signal, + }); + + if (!response.ok || !response.body) { + throw new Error(`Failed to open log stream: ${response.status}`); + } + + const reader = response.body.getReader(); + const decoder = new TextDecoder(); + let buffer = ""; + + while (!abortController.signal.aborted) { + const { done, value } = await reader.read(); + if (done) 
break; + + buffer += decoder.decode(value, { stream: true }); + const lines = buffer.split("\n"); + buffer = lines.pop() || ""; + + for (const line of lines) { + if (!line.startsWith("data:")) continue; + const data = line.slice(5).trim(); + if (!data) continue; + + let parsed: LogEvent | null = null; + try { + parsed = JSON.parse(data) as LogEvent; + } catch { + continue; + } + + if (parsed?.type !== "event") continue; + if (!shouldEmitChannelNotification(parsed)) continue; + + const { content, meta } = formatChannelContent(parsed); + + if (isDuplicate(content)) { + debug("Channel: suppressed duplicate error", { content: content.slice(0, 80) }); + continue; + } + + await server.notification({ + method: "notifications/claude/channel", + params: { + content, + meta: { ...meta, project: projectName }, + }, + }); + + debug("Channel: emitted error notification", { + event: meta.event, + project: projectName, + }); + } + } + } catch (err) { + if (abortController.signal.aborted) break; + debug("Channel log subscriber: connection error, retrying", { + error: String(err), + backoff, + }); + } + + if (abortController.signal.aborted) break; + await new Promise((r) => setTimeout(r, backoff)); + backoff = Math.min(backoff * 1.5, maxBackoff); + } + + debug("Channel log subscriber: stopped"); +} diff --git a/apps/cli/src/mcp/server.ts b/apps/cli/src/mcp/server.ts index 2d7dc1e..e1337b5 100644 --- a/apps/cli/src/mcp/server.ts +++ b/apps/cli/src/mcp/server.ts @@ -1,6 +1,7 @@ import { Server as McpServer } from "@modelcontextprotocol/sdk/server/index.js"; import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; import pkg from "../../package.json" with { type: "json" }; +import { startChannelLogSubscriber } from "./channel/log-subscriber.ts"; import { registerResources } from "./resources/index.ts"; import { registerTools } from "./tools/index.ts"; import type { McpServerOptions } from "./types.ts"; @@ -19,6 +20,20 @@ export function 
createDebugLogger(enabled: boolean) {
   };
 }
 
+const CHANNEL_INSTRUCTIONS = `Events from the jack channel are production alerts from your deployed project.
+They arrive as <channel-message> tags.
+
+- event="error": A request to your deployed app hit an error. Read the error message,
+  find the relevant source code, and suggest a fix. Use tail_logs to check if it's
+  recurring. Use test_endpoint to reproduce if possible.
+- event="exception": An uncaught exception in your deployed code. Check the source,
+  understand the cause, and suggest a fix.
+- outcome="exceededCpu" / "exceededMemory" / "exceededWallTime": A resource limit was hit.
+  Suggest code optimization rather than looking for bugs.
+
+If you see the same error repeatedly, diagnose it once and note the frequency.
+Do NOT redeploy automatically. Present the fix and let the user decide.`;
+
 export async function createMcpServer(options: McpServerOptions = {}) {
   const debug = createDebugLogger(options.debug ?? false);
@@ -33,7 +48,9 @@ export async function createMcpServer(options: McpServerOptions = {}) {
       capabilities: {
         tools: {},
         resources: {},
+        experimental: { "claude/channel": {} },
       },
+      instructions: CHANNEL_INSTRUCTIONS,
     },
   );
@@ -93,6 +110,11 @@ export async function startMcpServer(options: McpServerOptions = {}) {
 
   await server.connect(transport);
 
+  // Start channel log subscriber for production error streaming (fire-and-forget)
+  startChannelLogSubscriber(server, options, debug).catch((err) =>
+    debug("Channel log subscriber failed to start", { error: String(err) }),
+  );
+
   debug("MCP server connected and ready");
 
   // Keep the server running indefinitely.
diff --git a/apps/cli/src/mcp/tools/index.ts b/apps/cli/src/mcp/tools/index.ts
index 7ad6d76..44d30b7 100644
--- a/apps/cli/src/mcp/tools/index.ts
+++ b/apps/cli/src/mcp/tools/index.ts
@@ -2,7 +2,11 @@ import type { Server as McpServer } from "@modelcontextprotocol/sdk/server/index
 import { CallToolRequestSchema, ListToolsRequestSchema } from "@modelcontextprotocol/sdk/types.js";
 import { z } from "zod";
 import { authFetch } from "../../lib/auth/index.ts";
-import { getControlApiUrl, startLogSession } from "../../lib/control-plane.ts";
+import {
+  fetchProjectOverview,
+  getControlApiUrl,
+  startLogSession,
+} from "../../lib/control-plane.ts";
 import { JackError, JackErrorCode } from "../../lib/errors.ts";
 import { getDeployMode, getProjectId } from "../../lib/project-link.ts";
 import { createProject, deployProject, getProjectStatus } from "../../lib/project-operations.ts";
@@ -42,6 +46,31 @@ import { Events, track, withTelemetry } from "../../lib/telemetry.ts";
 import type { DebugLogger, McpServerOptions } from "../types.ts";
 import { formatErrorResponse, formatSuccessResponse } from "../utils.ts";
 
+/**
+ * Poll control plane until a deployment reaches a terminal status.
+ * Used by the deploy_project tool to return final status instead of "building".
+ */
+async function pollDeploymentStatus(
+  projectId: string,
+  deploymentId: string,
+  maxAttempts = 60,
+  intervalMs = 3000,
+): Promise<{ status: string; error_message: string | null } | null> {
+  for (let i = 0; i < maxAttempts; i++) {
+    await new Promise((r) => setTimeout(r, intervalMs));
+    try {
+      const overview = await fetchProjectOverview(projectId);
+      const dep = overview.latest_deployment;
+      if (dep?.id === deploymentId && (dep.status === "live" || dep.status === "failed")) {
+        return { status: dep.status, error_message: dep.error_message ?? null };
+      }
+    } catch {
+      // Network blip — keep polling
+    }
+  }
+  return null;
+}
+
 // Tool schemas
 const CreateProjectSchema = z.object({
   name: z.string().optional().describe("Project name (auto-generated if not provided)"),
@@ -1038,6 +1067,23 @@ export function registerTools(server: McpServer, _options: McpServerOptions, deb
       const result = await wrappedDeployProject(args.project_path, args.message);
 
+      // For managed deploys, poll until final status
+      if (
+        result.deploymentId &&
+        result.deployMode === "managed" &&
+        result.deployStatus !== "live" &&
+        result.deployStatus !== "failed"
+      ) {
+        const projectId = await getProjectId(args.project_path ?? process.cwd());
+        if (projectId) {
+          const final = await pollDeploymentStatus(projectId, result.deploymentId);
+          if (final) {
+            result.deployStatus = final.status;
+            result.errorMessage = final.error_message;
+          }
+        }
+      }
+
       return {
         content: [
           {
diff --git a/apps/cli/tests/channel-log-filter.test.ts b/apps/cli/tests/channel-log-filter.test.ts
new file mode 100644
index 0000000..7e1eb19
--- /dev/null
+++ b/apps/cli/tests/channel-log-filter.test.ts
@@ -0,0 +1,158 @@
+import { describe, expect, test } from "bun:test";
+import {
+  formatChannelContent,
+  shouldEmitChannelNotification,
+} from "../src/mcp/channel/log-subscriber.ts";
+
+function makeEvent(overrides: {
+  outcome?: string | null;
+  logs?: Array<{ ts: number | null; level: string | null; message: unknown[] }>;
+  exceptions?: Array<{ ts: number | null; name: string | null; message: string | null }>;
+  request?: { method?: string; url?: string } | null;
+}) {
+  return {
+    type: "event" as const,
+    ts: Date.now(),
+    outcome: overrides.outcome ?? "ok",
+    request: overrides.request ?? null,
+    logs: overrides.logs ?? [],
+    exceptions: overrides.exceptions ?? [],
+  };
+}
+
+describe("shouldEmitChannelNotification", () => {
+  test("returns true for events with exceptions", () => {
+    const event = makeEvent({
+      exceptions: [{ ts: Date.now(), name: "TypeError", message: "Cannot read property 'id'" }],
+    });
+    expect(shouldEmitChannelNotification(event)).toBe(true);
+  });
+
+  test("returns true for events with error-level logs", () => {
+    const event = makeEvent({
+      logs: [{ ts: Date.now(), level: "error", message: ["D1_ERROR: no such column: priority"] }],
+    });
+    expect(shouldEmitChannelNotification(event)).toBe(true);
+  });
+
+  test("returns true for exception outcome", () => {
+    const event = makeEvent({ outcome: "exception" });
+    expect(shouldEmitChannelNotification(event)).toBe(true);
+  });
+
+  test("returns true for exceededCpu outcome", () => {
+    const event = makeEvent({ outcome: "exceededCpu" });
+    expect(shouldEmitChannelNotification(event)).toBe(true);
+  });
+
+  test("returns true for exceededMemory outcome", () => {
+    const event = makeEvent({ outcome: "exceededMemory" });
+    expect(shouldEmitChannelNotification(event)).toBe(true);
+  });
+
+  test("returns true for exceededWallTime outcome", () => {
+    const event = makeEvent({ outcome: "exceededWallTime" });
+    expect(shouldEmitChannelNotification(event)).toBe(true);
+  });
+
+  test("returns true for scriptNotFound outcome", () => {
+    const event = makeEvent({ outcome: "scriptNotFound" });
+    expect(shouldEmitChannelNotification(event)).toBe(true);
+  });
+
+  test("returns false for normal ok events", () => {
+    const event = makeEvent({
+      outcome: "ok",
+      logs: [{ ts: Date.now(), level: "log", message: ["Request handled"] }],
+    });
+    expect(shouldEmitChannelNotification(event)).toBe(false);
+  });
+
+  test("returns false for empty events", () => {
+    const event = makeEvent({});
+    expect(shouldEmitChannelNotification(event)).toBe(false);
+  });
+
+  test("returns false for warn-level logs without errors", () => {
+    const event = makeEvent({
+      logs: [{ ts: Date.now(), level: "warn", message: ["Deprecation warning"] }],
+    });
+    expect(shouldEmitChannelNotification(event)).toBe(false);
+  });
+});
+
+describe("formatChannelContent", () => {
+  test("formats exception with name and message", () => {
+    const event = makeEvent({
+      outcome: "exception",
+      exceptions: [{ ts: Date.now(), name: "TypeError", message: "Cannot read property 'id'" }],
+    });
+    const { content, meta } = formatChannelContent(event);
+    expect(content).toContain("TypeError: Cannot read property 'id'");
+    expect(meta.event).toBe("exception");
+  });
+
+  test("formats error logs", () => {
+    const event = makeEvent({
+      logs: [
+        { ts: Date.now(), level: "error", message: ["D1_ERROR:", "no such column: priority"] },
+      ],
+    });
+    const { content, meta } = formatChannelContent(event);
+    expect(content).toContain("D1_ERROR: no such column: priority");
+    expect(meta.event).toBe("error");
+  });
+
+  test("includes request info", () => {
+    const event = makeEvent({
+      request: { method: "GET", url: "https://example.runjack.xyz/api/tasks" },
+      exceptions: [{ ts: Date.now(), name: "Error", message: "fail" }],
+    });
+    const { content } = formatChannelContent(event);
+    expect(content).toContain("Request: GET https://example.runjack.xyz/api/tasks");
+  });
+
+  test("sets event type to exception when exceptions present", () => {
+    const event = makeEvent({
+      exceptions: [{ ts: Date.now(), name: "Error", message: "fail" }],
+      logs: [{ ts: Date.now(), level: "error", message: ["also an error"] }],
+    });
+    const { meta } = formatChannelContent(event);
+    expect(meta.event).toBe("exception");
+  });
+
+  test("sets event type to error when only error logs present", () => {
+    const event = makeEvent({
+      logs: [{ ts: Date.now(), level: "error", message: ["some error"] }],
+    });
+    const { meta } = formatChannelContent(event);
+    expect(meta.event).toBe("error");
+  });
+
+  test("handles null exception fields gracefully", () => {
+    const event = makeEvent({
+      exceptions: [{ ts: null, name: null, message: null }],
+    });
+    const { content } = formatChannelContent(event);
+    expect(content).toContain("Error: Unknown error");
+  });
+
+  test("includes outcome in meta", () => {
+    const event = makeEvent({
+      outcome: "exceededCpu",
+      exceptions: [{ ts: Date.now(), name: "Error", message: "CPU limit" }],
+    });
+    const { meta } = formatChannelContent(event);
+    expect(meta.outcome).toBe("exceededCpu");
+  });
+
+  test("describes resource-limit outcome when no exceptions or error logs", () => {
+    const event = makeEvent({
+      outcome: "exceededCpu",
+      request: { method: "POST", url: "/api/heavy" },
+    });
+    const { content } = formatChannelContent(event);
+    expect(content).toContain("Worker exceededCpu");
+    expect(content).toContain("Request: POST /api/heavy");
+  });
+});