feat: Browsa WebSocket serve + Uint8Array migration (v14.0.0) #65
flyingrobots merged 95 commits into main from
Conversation
…(B157) Replace hard node:crypto and node:stream imports with lazy-loaded fallbacks using top-level await import() in try/catch blocks. This allows the library to be imported in browser environments without crashing at module evaluation time.
Key changes:
- InMemoryGraphAdapter: new `hash` constructor option for injecting a synchronous SHA-1 function; Buffer-free hash helpers using TextEncoder + Uint8Array; node:stream dynamically imported in logNodesStream() only
- defaultCrypto: lazy-loads node:crypto, throws a helpful error when invoked without a fallback in browsers
- sha1sync: minimal sync SHA-1 implementation (~110 LOC) for browser content addressing, exported as @git-stunts/git-warp/sha1sync
- browser.js: curated browser entry point re-exporting only browser-safe code, exported as @git-stunts/git-warp/browser
- package.json: ./browser and ./sha1sync export map entries
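A minimal sketch of the lazy-load pattern this commit describes — the function and variable names here (`sha1Hex`, `nodeCrypto`, `fallbackHash`) are illustrative, not git-warp's actual exports:

```javascript
// Sketch: resolve node:crypto at module load without crashing in browsers.
// Top-level await import() resolves under Node and rejects in browsers,
// so the failure is absorbed at module evaluation time.
let nodeCrypto = null;
try {
  nodeCrypto = await import('node:crypto');
} catch {
  nodeCrypto = null; // browser: caller must inject a synchronous hash
}

function sha1Hex(bytes, fallbackHash) {
  if (nodeCrypto) {
    return nodeCrypto.createHash('sha1').update(bytes).digest('hex');
  }
  if (fallbackHash) {
    return fallbackHash(bytes);
  }
  throw new Error('No crypto available: inject a sync SHA-1 via the hash option');
}
```

The same shape covers node:stream: the dynamic import lives inside the one method that needs it, so the import graph stays browser-safe.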
Vue 3 + Vite demo with 4 WarpGraph viewports running entirely in the browser. Each viewport has its own writer, colored nodes, sync controls, a time-travel ceiling slider, and per-node provenance inspection (Da Cone). All viewports share one InMemoryGraphAdapter with sha1sync hash function and WebCryptoAdapter — zero server, zero Docker, zero setup.
Features:
- 2x2 responsive grid with force-directed canvas graph rendering
- Per-viewport add/remove nodes with color picker
- Online/offline toggle per viewport
- Bilateral sync between any viewport pair, or sync-all
- Time slider (materialize at ceiling) with LIVE mode
- Da Cone: click a node to inspect properties and edges
- Vite stubs for node:crypto/stream/module and roaring (never invoked)
- ESLint ignores demo/ directory (separate app conventions)
Swap the hand-rolled force-directed canvas renderer for ELK's layered (Sugiyama) layout engine with inline SVG rendering.
- Uses existing elkAdapter.js + elkLayout.js pipeline from the visualization module — no new layout code needed
- Orthogonal edge routing with arrowheads via SVG markers
- Colored node badges with glow effect on selection
- ELK bundle (~1.4 MB) lazy-loaded as a separate chunk
- Empty state placeholder text when no nodes exist
Only render SVG elements for nodes and edges that intersect the current camera viewport. ELK still lays out the full graph, but the DOM only contains what's visible.
- Camera: pan (drag), zoom (scroll wheel) with cursor-anchored zoom
- ResizeObserver tracks container dimensions
- Cull margin (60px) prevents pop-in at viewport edges
- Edge visibility: shown if either endpoint node is visible or any path point crosses the viewport
- HUD counter: "visible/total" node count in bottom-right
- Auto-fit on layout: centers and scales graph to fill viewport
- grab/grabbing cursors for pan affordance
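The cull test above reduces to an axis-aligned rectangle intersection with a margin; a sketch under assumed data shapes (`camera` and `node` as `{ x, y, width, height }` in world coordinates):

```javascript
// Hypothetical sketch of the viewport cull: a node is rendered only if its
// bounding box, inflated by the cull margin, intersects the camera's
// world-space viewport.
const CULL_MARGIN = 60; // px, prevents pop-in at viewport edges

function isVisible(node, camera) {
  const left = camera.x - CULL_MARGIN;
  const top = camera.y - CULL_MARGIN;
  const right = camera.x + camera.width + CULL_MARGIN;
  const bottom = camera.y + camera.height + CULL_MARGIN;
  return (
    node.x + node.width >= left &&
    node.x <= right &&
    node.y + node.height >= top &&
    node.y <= bottom
  );
}
```

Edges then reuse the same predicate: an edge is shown if either endpoint passes `isVisible` or any of its routed path points falls inside the same inflated rectangle.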
Replace Vue's v-for culling (which creates/destroys DOM elements as nodes enter/leave the viewport) with fixed-size object pools. Node and edge pools use monotonic high-water marks — once a slot is allocated, it persists for the component's lifetime. Vue keys by slot index (0..N), so DOM elements are reused via attribute patching, never created or destroyed during camera movement.
- nodePool/edgePool: computed arrays with HWM growth, never shrink
- Active slots get current node/edge data; inactive slots get frozen sentinel objects positioned off-screen with v-show=false
- EMPTY_NODE/EMPTY_EDGE: Object.freeze'd sentinels to avoid reactive overhead on inactive slots
- Same viewport culling + pan/zoom as before, just zero GC pressure from DOM node lifecycle during interaction
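Stripped of the Vue specifics, the high-water-mark pool is a small idea; a framework-free sketch (names mirror the description, the helper itself is illustrative):

```javascript
// Hypothetical sketch of the HWM pool: slots are allocated once and never
// released. Inactive slots hold a frozen off-screen sentinel so a renderer
// can patch attributes in place instead of creating/destroying elements.
const EMPTY_NODE = Object.freeze({ id: '', x: -9999, y: -9999, active: false });

function makePool() {
  let highWaterMark = 0;
  return function fill(visibleNodes) {
    highWaterMark = Math.max(highWaterMark, visibleNodes.length);
    const slots = new Array(highWaterMark);
    for (let i = 0; i < highWaterMark; i++) {
      slots[i] = i < visibleNodes.length
        ? { ...visibleNodes[i], active: true }
        : EMPTY_NODE; // inactive slot: shared frozen sentinel
    }
    return slots;
  };
}
```

Because the array length only grows, keying by slot index gives each DOM element a stable identity for the component's lifetime.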
Three issues prevented materialize() from working in the browser:
1. **Buffer.byteLength in @git-stunts/trailer-codec**: The trailer codec's MessageNormalizer uses Buffer.byteLength(), which doesn't exist in browsers. This caused detectMessageKind() to silently return null (swallowed by try/catch), making the patch walker think every commit was a non-patch and return 0 patches. Fix: a Vite plugin that replaces Buffer.byteLength() calls with a TextEncoder-based equivalent, scoped only to trailer-codec files. A global Buffer polyfill was rejected because it breaks cbor-x, which detects Buffer and then expects Buffer.prototype.utf8Write (a V8-only method).
2. **Vue reactive Proxy vs ES private fields**: WarpGraph's ProvenanceIndex uses #private fields. Vue's reactive() wraps objects in Proxies, and private field access requires `this` to be the real instance — not a Proxy. This caused "Receiver must be an instance of class ProvenanceIndex" errors. Fix: markRaw() on WarpGraph instances to exclude them from Vue's reactivity system.
3. **SVG resize feedback loop**: The SVG element's intrinsic sizing caused the flex container to grow, triggering ResizeObserver, which changed the viewBox, which grew the SVG again — an infinite loop that pushed controls off-screen. Fix: position: absolute + overflow: hidden on the viewport body to decouple SVG sizing from flex layout.
Also adds InsecureCryptoAdapter for plain HTTP contexts (Docker) and server.allowedHosts for Vite dev server access.
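The TextEncoder-based replacement for Buffer.byteLength() in issue 1 amounts to counting encoded UTF-8 bytes rather than UTF-16 code units:

```javascript
// Browser-safe equivalent of Buffer.byteLength(str, 'utf8'):
// encode to UTF-8 and count bytes. String.prototype.length would count
// UTF-16 code units and undercount multi-byte characters.
function utf8ByteLength(str) {
  return new TextEncoder().encode(str).length;
}
```

For ASCII the two agree, which is exactly why the bug was silent until non-trivial content hit the trailer codec.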
Fix: materializeViewport() was iterating raw ORSet.entries.keys(), which includes tombstoned elements. Nodes and edges survived removal because the tombstone dots were never checked. Now each element's dots are checked against the tombstone set before rendering.
Also includes the pointer capture fix from the previous session: setPointerCapture is deferred until the pan threshold is exceeded, preventing clicks from being stolen by the SVG background.
Feat: add a scenario runner with five pre-built scenarios that demonstrate multi-writer CRDT behavior (Two Writers, Offline Divergence, Add & Remove, Edge Network, Time Travel). Each scenario plays a scripted sequence of graph operations with animated delays. Accessible from the header bar.
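The tombstone fix can be illustrated with a minimal OR-Set liveness check — the data shapes below (`Map<element, Set<dot>>` entries, a `Set<dot>` of tombstones) are illustrative, not git-warp's actual internals:

```javascript
// Hypothetical OR-Set sketch: an element is live only if at least one of
// its add-dots has NOT been tombstoned by a remove. Iterating entries.keys()
// alone (the original bug) would also return fully-tombstoned elements.
function liveElements(entries, tombstones) {
  const out = [];
  for (const [element, dots] of entries) {
    let alive = false;
    for (const dot of dots) {
      if (!tombstones.has(dot)) { alive = true; break; }
    }
    if (alive) out.push(element);
  }
  return out;
}
```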
InMemoryGraphAdapter (browser-safe) returns Uint8Array from readTree(), but TreePort declared Buffer. Since Buffer extends Uint8Array, widening the port type is non-breaking — GitGraphAdapter's Buffer return is still assignable. Fixes 22 pre-existing TS2322 errors in test files using InMemoryGraphAdapter with TreePort-typed parameters.
Rename the node inspector panel component from the informal "DaCone" to the professional "Inspector". Updates component file, CSS classes, imports, and template references across all three affected files.
Phase 1 of the browsa architecture pivot — from in-memory demo to a real tool connecting browsers to live git-warp repos via WebSocket.
WebSocketServerPort (src/ports/): Abstract port for WebSocket server creation. Adapters for Node (ws), Bun (Bun.serve), and Deno (Deno.upgradeWebSocket) can implement this without domain code depending on any runtime directly.
WarpServeService (src/domain/services/): Domain service that bridges WarpGraph instances to browser clients. Versioned JSON envelope protocol (v1) supports:
- hello: announce available graphs on connect
- open: subscribe to a graph, receive materialized state
- mutate: batch mutations (addNode, removeNode, addEdge, etc.)
- inspect: get full node properties
- seek: time-travel materialization with ceiling
- diff: live push of StateDiffResult when graph changes
- Error envelopes with typed codes (E_UNKNOWN_GRAPH, etc.)
Multi-graph aware: one server can serve all graphs in a repo. Version-guarded: rejects messages with unsupported protocol version.
Test coverage: 21 tests covering construction, connection lifecycle, all protocol message types, live diff push, malformed message handling, and clean shutdown.
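A sketch of the version guard at the edge of such a protocol. The envelope fields (`v`, `type`) and the error codes other than E_UNKNOWN_GRAPH are assumptions for illustration — the PR only confirms the code style, not the exact names:

```javascript
// Hypothetical v1 envelope guard: every message is { v, type, ... };
// unparseable or wrong-version messages get a typed error envelope back
// instead of reaching the dispatcher.
const PROTOCOL_VERSION = 1;

function guardEnvelope(raw) {
  let msg;
  try {
    msg = JSON.parse(raw);
  } catch {
    return { v: PROTOCOL_VERSION, type: 'error', code: 'E_MALFORMED' }; // illustrative code
  }
  if (msg.v !== PROTOCOL_VERSION) {
    return { v: PROTOCOL_VERSION, type: 'error', code: 'E_UNSUPPORTED_VERSION' }; // illustrative code
  }
  return null; // accepted: dispatch on msg.type (open, mutate, inspect, seek, ...)
}
```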
Detailed task breakdown (T1–T11) for transforming browsa from an in-memory demo into a real developer tool connecting browsers to live git-warp repos via WebSocket. Includes protocol specification, architecture diagram, per-task file lists, behavioral test requirements, implementation notes, dependency chains, and recovery notes for context resumption.
Implements WebSocketServerPort using the `ws` npm package (MIT, v8.19.0).
This is the only file that imports `ws` directly — all domain code goes
through the port abstraction.
NodeWsAdapter:
- createServer(onConnection) returns { listen, close }
- listen(port, host?) starts a WebSocketServer, resolves with actual port
- Incoming connections are wrapped into WsConnection objects
- send() guards on readyState to prevent writes to closed sockets
- close() terminates all active connections before shutting down
wrapConnection() helper extracts the ws-to-port bridging, keeping
createServer under the max-lines-per-function limit.
Test coverage: 9 integration tests using globalThis.WebSocket (Node 22+)
covering instantiation, start/stop, message delivery, round-trip JSON,
server-initiated close, concurrent connections, and custom host binding.
Starts a WebSocket server exposing graph(s) in the repository for
browser-based viewing and mutation via the browsa client.
CLI interface:
git warp serve [--port <number>] [--graph <name>] [--host <addr>]
--port WebSocket server port (default: 3000)
--graph Scope to a single graph (omit to serve all graphs in repo)
--host Bind address (default: 127.0.0.1)
Implementation:
- bin/cli/commands/serve.js: discovers graphs, opens WarpGraph
instances, wires NodeWsAdapter + WarpServeService, starts server
- bin/warp-graph.js: long-running command support — when a handler
returns a `close` function, the process stays alive and wires
SIGINT/SIGTERM for clean shutdown instead of calling process.exit()
- Zod schema (serveSchema) for port/host validation
- Registered in COMMANDS, KNOWN_COMMANDS, and HELP_TEXT
Test coverage: 7 unit tests with fully mocked dependencies.
Dependency: ws@8.19.0 (MIT) added to package.json.
Pure JavaScript WebSocket client for the git-warp serve protocol.
No framework dependencies — uses only globalThis.WebSocket (injectable
for testing via constructor options).
API:
connect() → Promise<HelloPayload>
open({ graph, writerId }) → Promise<StatePayload>
mutate({ graph, ops }) → Promise<AckPayload>
inspect({ graph, nodeId }) → Promise<InspectPayload>
seek({ graph, ceiling }) → Promise<StatePayload>
onDiff(handler) → push listener for live graph diffs
onDisconnect(handler) → connection loss notification
onError(handler) → server error push listener
close() → clean disconnect
Internals:
Request-response correlation via monotonic IDs (req-1, req-2, ...).
Pending promises stored in a Map, resolved/rejected by matching
response ID. Disconnect rejects all pending requests.
Test coverage: 13 tests with mock WebSocket covering connect, open,
mutate, inspect, seek, diff push, disconnect, pending rejection,
protocol versioning, and clean close.
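The request-response correlation described under Internals can be sketched independently of the WebSocket transport — the helper name and shape here are illustrative:

```javascript
// Hypothetical sketch of monotonic-ID correlation: each request gets
// req-N, the matching response settles the stored promise, and a
// disconnect rejects everything still pending.
function makeCorrelator(send) {
  let seq = 0;
  const pending = new Map();
  return {
    request(payload) {
      const id = `req-${++seq}`;
      const p = new Promise((resolve, reject) => pending.set(id, { resolve, reject }));
      send({ id, ...payload });
      return p;
    },
    handleResponse(msg) {
      const entry = pending.get(msg.id);
      if (!entry) return; // unknown or already-settled id
      pending.delete(msg.id);
      entry.resolve(msg);
    },
    disconnect() {
      for (const { reject } of pending.values()) reject(new Error('disconnected'));
      pending.clear();
    },
    pendingCount: () => pending.size,
  };
}
```

Push messages (diff, error) carry no request id, so they bypass the pending map entirely and go to their registered listeners.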
Complete architecture pivot: the browser now connects to `git warp serve` over WebSocket instead of running WarpGraph in-memory.
graphStore.js — full rewrite:
- Uses WarpSocket for all server communication
- Single-viewport model (multiple browser windows = multiple writers)
- Connection lifecycle: connect, hello, openGraph, reconnect
- Mutations via socket.mutate(), state via diffs/seek
- Node inspection via socket.inspect()
- Server URL from ?server= param / localStorage
- writerId persisted to localStorage via crypto.randomUUID()
Component changes:
- App.vue: connection bar (status dot, graph dropdown, writerId badge), single GraphViewport, reconnect prompt
- Controls.vue: removed viewportId prop, sync buttons, online/offline
- GraphViewport.vue: removed viewportId prop, reads store directly
- Inspector.vue: displays server-reported props via inspect()
- TimeSlider.vue: removed viewportId prop, reads store directly
Deleted:
- ScenarioPanel.vue (multi-writer scenarios don't apply)
- InProcessSyncBus.js (no in-memory sync needed)
- InsecureCryptoAdapter.js (no in-browser crypto needed)
Refs: TASKS.md T5, T6
Add 7 new backlog items for upgrading @git-stunts/git-cas from v3.0.0 to v5.2.4 and leveraging new capabilities:
- B158: version bump (S, unblocks all P7)
- B159: CDC chunking for seek cache (S, ~98% chunk reuse)
- B160: blob attachments via CAS (M, completes BlobValue ops)
- B161: encrypted seek cache (S)
- B162: observability alignment (M, bridge LoggerPort + ObservabilityPort)
- B163: streaming restore for large states (M)
- B164: graph encryption at rest (L, envelope encryption for patches)
Wave 7 added to execution order. B158 is the keystone — all others depend on it. Inventory updated: 141 total tracked items.
B158: Bumped @git-stunts/git-cas from ^3.0.0 to ^5.2.4 (two major versions). 4872 tests pass with zero regressions. New capabilities now available: ObservabilityPort, streaming restore, CDC chunking, envelope encryption (DEK/KEK), key rotation.
B159: CasSeekCacheAdapter now constructs CAS with CDC chunking (strategy: 'cdc') instead of fixed-size chunking. Consecutive seek snapshots share most of their content — CDC's rolling-hash boundaries yield ~98% chunk reuse on incremental edits, significantly reducing Git object storage for the seek cache.
Also fixed a pre-existing tar vulnerability (GHSA-qffp-2rhf-9h96) via npm audit fix.
B161: CasSeekCacheAdapter now accepts an optional encryptionKey constructor param. When set, store() and restore() pass the key to git-cas for AES-256-GCM encryption/decryption of cached state snapshots. 6 new tests.
B162: New LoggerObservabilityBridge adapter translates git-cas ObservabilityPort calls (metric, log, span) into git-warp LoggerPort calls. CasSeekCacheAdapter accepts an optional logger param; when provided, it constructs the bridge and passes it to the CAS instance as the observability port. 7 bridge tests + 2 adapter tests.
… attachments (B160) Introduces a hexagonal port (`BlobStoragePort`) and CAS adapter (`CasBlobAdapter`) for content blob storage. PatchBuilderV2's `attachContent()`/`attachEdgeContent()` now use CAS (CDC-chunked, optionally encrypted) when `blobStorage` is injected, falling back to raw `persistence.writeBlob()` without it. Query methods `getContent()`/`getEdgeContent()` retrieve via CAS with automatic fallback to raw Git blobs for backward compatibility. Wired through WarpGraph.open(), Writer, and all patch creation paths. 16 new tests across 4 test files (4909 total).
The browser import tree is fully isolated post-WebSocket pivot: main.js → Vue/Pinia, graphStore.js → WarpSocket (pure WebSocket), GraphCanvas.vue → elkAdapter/elkLayout (pure JS). Zero browsa files import any stubbed module.
Removed:
- demo/browsa/src/stubs/ (empty.js, node-crypto.js, node-stream.js, node-module.js)
- trailerCodecBufferShim() Vite plugin (trailer-codec not in browser import tree)
- All 9 resolve.alias entries (none referenced by browsa code)
- optimizeDeps block (cbor-x not in browser import tree)
Kept:
- vue() plugin
- target: 'es2022' (elkjs uses modern syntax)
- server.fs.allow (GraphCanvas imports from ../../src/visualization/)
- README: add `git warp serve` to CLI section, add Browser Demo (Browsa) subsection with quickstart instructions
- CHANGELOG: document Vite stubs removal in [Unreleased] Removed
- TASKS: mark T7 (Vite cleanup), T8 (connection UI), T9 (polish) as DONE
test/runtime/deno/*.test.ts import `jsr:@std/assert` which is a Deno-only package. These tests are designed to run inside Docker via `npm run test:deno`, not under Node/vitest. The default include glob was matching them, causing 7 spurious failures on every local `npx vitest run`. Add explicit exclude for test/runtime/deno/** to vitest.config.js.
…on (T10) BunWsAdapter uses Bun.serve() with the websocket handler option. DenoWsAdapter uses Deno.serve() + Deno.upgradeWebSocket(). The serve CLI command now auto-detects the runtime via globalThis.Bun / globalThis.Deno and dynamically imports only the relevant adapter, so the ws npm package is never loaded on non-Node runtimes.
- 26 new tests (13 per adapter) with mocked runtime globals
- globals.d.ts extended with Bun WS types and Deno.upgradeWebSocket
- ESLint test globals updated; removed redundant /* global */ comments
- CHANGELOG, README, and TASKS.md updated
All three WebSocket adapters (Node, Bun, Deno) now accept a staticDir constructor option. When set, non-WebSocket HTTP requests serve static files from the directory with correct MIME types and SPA client-side routing fallback (extensionless paths serve index.html). New shared staticFileHandler.js provides path traversal containment (resolve-based normalization), null byte rejection, and a minimal MIME type map covering typical SPA assets.
Usage: git warp serve --port 3000 --static demo/browsa/dist
- 18 new tests (14 static handler + 4 Node integration)
- CHANGELOG, README, TASKS.md, and HELP_TEXT updated
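The resolve-based containment mentioned above can be sketched as follows — the helper name `resolveStaticPath` is illustrative, not staticFileHandler.js's actual API (posix paths are used here for determinism):

```javascript
import { posix } from 'node:path';

// Hypothetical sketch of traversal containment: resolve the requested URL
// path against the static root, then verify the result never escapes it.
// Null bytes are rejected outright.
function resolveStaticPath(staticDir, urlPath) {
  if (urlPath.includes('\0')) return null; // reject null bytes
  const root = posix.resolve(staticDir);
  const resolved = posix.resolve(root, urlPath.replace(/^\/+/, ''));
  if (resolved !== root && !resolved.startsWith(root + '/')) {
    return null; // resolved outside the static root: traversal attempt
  }
  return resolved;
}
```

The key point is comparing fully resolved absolute paths rather than scanning the raw string for `..`, which encoded or nested sequences can evade.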
…g (B163) CasSeekCacheAdapter.get() now prefers cas.restoreStream() (git-cas v4+) when available, accumulating chunks via async iterator. Falls back to cas.restore() for older git-cas versions. This enables I/O pipelining for unencrypted caches where chunk reads overlap with accumulation.
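The accumulation step is the standard async-iterator pattern; a sketch assuming the CAS method names from the description above (`restoreStream`, `restore`) without knowing their exact signatures:

```javascript
// Hypothetical sketch: accumulate Uint8Array chunks from an async iterator
// into one contiguous buffer, falling back to one-shot restore() when
// streaming isn't available on the installed git-cas version.
async function restoreToBuffer(cas, manifest) {
  if (typeof cas.restoreStream !== 'function') {
    return (await cas.restore({ manifest })).buffer; // older git-cas
  }
  const chunks = [];
  let total = 0;
  for await (const chunk of cas.restoreStream({ manifest })) {
    chunks.push(chunk);
    total += chunk.length;
  }
  const out = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.length;
  }
  return out;
}
```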
…atch read/write sites (B164) When patchBlobStorage (a BlobStoragePort, e.g. CasBlobAdapter with an encryption key) is injected, patch CBOR is written/read via CAS instead of raw Git blobs. An `eg-encrypted: true` trailer marks encrypted patches. Read sites detect the trailer and throw EncryptionError if no patchBlobStorage is configured.
Threading: WarpGraph.open() → constructor → createPatch() → PatchBuilderV2 → commit(), and all 6 read sites (_loadWriterPatches, _loadPatchBySha, materializeAdvanced patchLoader, _hasSchema1Patches, SyncProtocol.loadPatchFromCommit, WormholeService.processCommit).
No behavior change when patchBlobStorage is not provided.
- 8 integration tests in WarpGraph.encryption.test.js covering: write/read encrypted, re-open with key, EncryptionError without key, mixed encrypted/plain, multi-patch accumulation, provenance compat - 6 unit tests in WarpMessageCodec.test.js for eg-encrypted trailer encode/decode roundtrip
- ROADMAP: mark B163 and B164 as DONE, move to standalone-done inventory - CHANGELOG: add streaming restore and graph encryption entries - index.d.ts: add BlobStoragePort, EncryptionError, blobStorage and patchBlobStorage params to WarpGraph.open()
- WarpServeService: double-listen guard, close active sockets on shutdown, reorder openGraphs after materialization, pre-validate full op batch before createPatch
- bytes.js: reject malformed base64 with RangeError
- NodeWsAdapter: reset state on failed listen/close
- isLoopback: handle full 127.0.0.0/8 range
- materializeAdvanced: wrap buildSeekCacheKey in try/catch
- BunWsAdapter test: save/restore globalThis.Bun
- WarpServeService test: replace 22 boolean vi.waitFor callbacks with assertion-based ones
PR Feedback Round 5 — Resolved in 135a833
Fixed (10 issues)
False Positives (12 threads)
@coderabbitai review
✅ Actions performed
Review triggered.
…B165, B167)
- listen() now defers _server assignment and subscription registration until server.listen() resolves. On bind failure (e.g., EADDRINUSE), subscriptions are cleaned up instead of leaking ghost broadcast handlers.
- _onConnection fallback catch now sends a generic "Internal error" instead of raw err.message, preventing internal detail leakage to untrusted WebSocket clients.
- Added tests: listen-failure cleanup, double-listen guard, error sanitization, attachContent/attachEdgeContent mutation pipeline smoke tests.
- Registered B165–B167 (Defensive Hardening Sprint) in ROADMAP.md.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
jsr.json (1)
11-24: ⚠️ Potential issue | 🔴 Critical
Publish the new ./browser entrypoint.
Line 11 adds ./browser, but Lines 15-24 never ship browser.js, so the JSR artifact will expose a broken public subpath import.
📦 Minimal manifest fix
 "publish": {
   "include": [
+    "browser.js",
     "index.js",
     "index.d.ts",
     "src/**/*.js",
     "src/**/*.d.ts",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@jsr.json` around lines 11 - 24, The manifest exposes a new public subpath "./browser" but the publish.include array doesn't list browser.js, causing published artifacts to miss that entrypoint; update the publish.include array to add browser.js so the "./browser" entrypoint is shipped (ensure the string "browser.js" is included alongside other entries in the "publish.include" list to match the new "./browser" mapping).
♻️ Duplicate comments (4)
eslint.config.js (1)
280-291: ⚠️ Potential issue | 🟠 Major
Also block dynamic Buffer loads in domain code.
Lines 280-291 only cover static import declarations. A domain module can still pull in Buffer via import("node:buffer") (and any require("buffer") path), so the Uint8Array boundary is still bypassable. Please extend the existing src/domain/**/*.js no-restricted-syntax rule at Lines 299-312 instead of adding a second no-restricted-syntax override.
Suggested hardening
 {
   files: ["src/domain/**/*.js"],
   rules: {
     "no-restricted-syntax": ["error",
+      {
+        "selector": "ImportExpression[source.value=/^(?:buffer|node:buffer)$/]",
+        "message": "Use Uint8Array + helpers from domain/utils/bytes.js. Buffer is confined to infrastructure adapters.",
+      },
+      {
+        "selector": "CallExpression[callee.name='require'][arguments.0.value=/^(?:buffer|node:buffer)$/]",
+        "message": "Use Uint8Array + helpers from domain/utils/bytes.js. Buffer is confined to infrastructure adapters.",
+      },
       {
         "selector": "CallExpression[callee.object.name='Date'][callee.property.name='now']",
         "message": "Date.now() is banned in domain code. Use ClockPort / ClockAdapter instead.",
       },
#!/bin/bash
# Verify the current config only restricts static imports, then look for any
# dynamic buffer loads in domain code.
sed -n '273,314p' eslint.config.js
rg -nP --type=js 'import\s*\(\s*["'"'"'](?:node:buffer|buffer)["'"'"']\s*\)|\brequire\s*\(\s*["'"'"'](?:node:buffer|buffer)["'"'"']\s*\)' src/domain
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@eslint.config.js` around lines 280 - 291, The current no-restricted-imports entries block only static imports for "node:buffer" and "buffer" but dynamic loads (import(...) / require(...)) are still allowed; modify the existing no-restricted-syntax override that targets domain files (the override with "files": ["src/domain/**/*.js"] and rule "no-restricted-syntax") to include AST selectors that reject ImportExpression and require CallExpression for those same module names (e.g., selectors targeting ImportExpression whose source value is "node:buffer" or "buffer", and CallExpression where callee is require and the first argument value is "node:buffer" or "buffer"), rather than adding a second no-restricted-syntax override, so dynamic imports/require of Buffer are forbidden in domain code just like static imports are.
src/infrastructure/adapters/CasBlobAdapter.js (1)
135-151: ⚠️ Potential issue | 🟠 Major
Only apply the legacy-blob fallback to readManifest() failures.
After readManifest() succeeds, oid is already known to be a CAS tree. With the current shared try/catch, a later restore() failure such as a missing chunk (bad object/does not exist) is treated as legacy content and falls back to readBlob(oid), which masks the real restore error.
Proposed fix
 async retrieve(oid) {
   const cas = await this._getCas();
-
-  try {
-    const manifest = await cas.readManifest({ treeOid: oid });
-    /** @type {{ manifest: *, encryptionKey?: Buffer|Uint8Array }} */
-    const restoreOpts = { manifest };
-    if (this._encryptionKey) {
-      restoreOpts.encryptionKey = this._encryptionKey;
-    }
-    const { buffer } = await cas.restore(restoreOpts);
-    return buffer;
-  } catch (err) {
-    // Fallback: OID may be a raw Git blob (pre-CAS content).
-    // Only fall through for "not a manifest" errors (missing tree, bad format).
-    // Rethrow corruption, decryption, and I/O errors.
-    if (!isLegacyBlobError(err)) {
-      throw err;
-    }
-    return await this._persistence.readBlob(oid);
-  }
+  let manifest;
+  try {
+    manifest = await cas.readManifest({ treeOid: oid });
+  } catch (err) {
+    if (!isLegacyBlobError(err)) {
+      throw err;
+    }
+    return await this._persistence.readBlob(oid);
+  }
+
+  /** @type {{ manifest: *, encryptionKey?: Buffer|Uint8Array }} */
+  const restoreOpts = { manifest };
+  if (this._encryptionKey) {
+    restoreOpts.encryptionKey = this._encryptionKey;
+  }
+  const { buffer } = await cas.restore(restoreOpts);
+  return buffer;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/infrastructure/adapters/CasBlobAdapter.js` around lines 135 - 151, The fallback to legacy blobs is too broad because the try/catch wraps both cas.readManifest and cas.restore; instead only catch errors from cas.readManifest and apply isLegacyBlobError-based fallback there. Concretely, call cas.readManifest({ treeOid: oid }) inside its own try/catch (using isLegacyBlobError to decide to call this._persistence.readBlob(oid)), and if readManifest succeeds then call cas.restore(restoreOpts) without that legacy-error catch so restore errors (e.g., missing chunks, corruption, decryption, I/O) propagate; adjust references to cas.readManifest, cas.restore, isLegacyBlobError, and this._persistence.readBlob accordingly.
src/domain/services/WarpServeService.js (1)
244-258: ⚠️ Potential issue | 🟠 Major
Delay _server and subscriptions until startup succeeds.
The guard at Lines 245-247 only covers the already-started path. Line 248 stores this._server and Lines 250-256 register subscriptions before the bind completes, so a thrown listen() or subscribe() leaves the instance stuck in "Server is already listening" with leaked diff callbacks.
Possible fix
 async listen(port, host) {
   if (this._server) {
     throw new Error('Server is already listening');
   }
-  this._server = this._wsPort.createServer((conn) => this._onConnection(conn));
-
-  // Subscribe to each graph for live diff push
-  for (const [graphName, graph] of this._graphs) {
-    const sub = graph.subscribe({
-      onChange: (/** @type {unknown} */ diff) => this._broadcastDiff(graphName, diff),
-    });
-    this._subscriptions.set(graphName, sub);
-  }
-
-  return await this._server.listen(port, host);
+  const server = this._wsPort.createServer((conn) => this._onConnection(conn));
+  /** @type {Map<string, { unsubscribe: () => void }>} */
+  const subscriptions = new Map();
+  try {
+    const addr = await server.listen(port, host);
+    for (const [graphName, graph] of this._graphs) {
+      const sub = graph.subscribe({
+        onChange: (/** @type {unknown} */ diff) => this._broadcastDiff(graphName, diff),
+      });
+      subscriptions.set(graphName, sub);
+    }
+    this._server = server;
+    this._subscriptions = subscriptions;
+    return addr;
+  } catch (err) {
+    for (const [, sub] of subscriptions) {
+      sub.unsubscribe();
+    }
+    await server.close().catch(() => {});
+    throw err;
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/services/WarpServeService.js` around lines 244 - 258, The listen method currently assigns this._server and registers subscriptions before the async bind completes, causing leaked state if listen() or subscribe() throws; change listen to create the server locally (e.g., const server = this._wsPort.createServer(...)) and perform await server.listen(port, host) first, then on success set this._server = server and move the subscription loop (graph.subscribe with this._broadcastDiff) after the successful bind, storing each subscription into this._subscriptions; also ensure you catch errors from server.listen or graph.subscribe and clean up any partial subscriptions or close the local server before rethrowing so the instance isn't left in a half-initialized state.
bin/cli/commands/serve.js (1)
43-45: ⚠️ Potential issue | 🟠 Major
Prefix-matching 127. weakens the --expose safety check.
Line 44 now whitelists any hostname beginning with 127.. Values like 127.example.com are not IP literals and can still resolve off-box, so they bypass the non-loopback bind guard. Only treat parsed IP literals in 127.0.0.0/8 as loopback, and include mapped ::ffff:127.x.x.x forms.
🔒 Safer loopback check
 import process from 'node:process';
 import { resolve } from 'node:path';
 import { stat } from 'node:fs/promises';
+import { isIP } from 'node:net';
@@
 function isLoopback(h) {
-  return h === '127.0.0.1' || h === '::1' || h === 'localhost' || h.startsWith('127.');
+  if (h.toLowerCase() === 'localhost' || h === '::1') {
+    return true;
+  }
+  if (h.startsWith('::ffff:')) {
+    const mapped = h.slice('::ffff:'.length);
+    return isIP(mapped) === 4 && mapped.split('.')[0] === '127';
+  }
+  return isIP(h) === 4 && h.split('.')[0] === '127';
 }
Please add a regression test for a 127.* hostname that is not an IP literal.
Verify each finding against the current code and only fix it if needed. In `@bin/cli/commands/serve.js` around lines 43 - 45, The isLoopback function incorrectly treats any hostname starting with "127." as loopback; update isLoopback to parse the hostname as an IP literal (use Node's net.isIP or equivalent) and then check that the parsed address is in 127.0.0.0/8 or the mapped ::ffff:127.x.x.x form rather than prefix-matching strings, and ensure "localhost" and "::1" still pass; also add a regression test that asserts a non-literal hostname like "127.example.com" is NOT considered loopback by isLoopback and that a literal "127.0.0.1" (and ::ffff:127.0.0.1) still are.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/domain/services/WarpServeService.js`:
- Around line 120-125: The props bag is created with a normal object literal
which allows graph-derived keys like "__proto__" or "constructor" to mutate the
prototype; when initializing the props entry in nodePropsMap for decoded.nodeId,
create a null-prototype object (e.g. via Object.create(null)) instead of {} so
decoded.propKey can be used safely, then continue assigning
props[decoded.propKey] = lwwValue(reg) as before; update the initialization
logic around nodePropsMap.get(decoded.nodeId) / nodePropsMap.set(...) to use the
null-prototype object.
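A minimal sketch of the null-prototype fix that prompt describes — the helper name `setProp` is illustrative, not WarpServeService's actual code:

```javascript
// Why Object.create(null) matters here: a plain {} inherits
// Object.prototype, so a graph-derived key like "__proto__" can poison the
// prototype chain. A null-prototype bag treats it as an ordinary own key.
function setProp(propsMap, nodeId, key, value) {
  let props = propsMap.get(nodeId);
  if (!props) {
    props = Object.create(null); // no prototype: "__proto__" is just a key
    propsMap.set(nodeId, props);
  }
  props[key] = value;
  return props;
}
```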
In `@src/infrastructure/adapters/NodeWsAdapter.js`:
- Around line 18-28: wrapConnection currently registers message and close
listeners on ws but omits a socket-level 'error' listener; add ws.on('error',
...) inside wrapConnection so socket errors don't crash the process, and route
the error to an existing error handler if one exists (or at minimum log the
error) — reference ws, wrapConnection, messageHandler and closeHandler when
adding this handler so it mirrors the other event wiring and safely consumes or
delegates socket errors.
In `@src/infrastructure/adapters/sha1sync.js`:
- Around line 107-119: Add a TypeScript declaration for the public sha1sync
module so the new ./sha1sync entrypoint has types; create a file named
src/infrastructure/adapters/sha1sync.d.ts that declares the exported function
sha1sync with the correct signature (accepting a Uint8Array and returning a
string) so TypeScript consumers and Release Preflight stop warning about a
missing declaration for the sha1sync export.
In `@test/unit/domain/WarpGraph.encryption.test.js`:
- Around line 127-138: The test only captures a rejection from WarpGraph.open
but not a later rejection from graph.materialize, so it can miss the error if
the implementation defers the check; change the test to handle both cases by
awaiting openPromise and, if it resolves to a graph, call graph.materialize()
and capture any thrown error, otherwise capture the open() rejection — then
assert the captured error is an EncryptionError and its message matches
/encrypted patches/; reference WarpGraph.open, the openPromise variable, and the
graph.materialize() call when implementing this.
- Around line 25-41: The in-memory blob store is too permissive and returns
mutable buffers by reference; update the store(…) and retrieve(…) in this test
helper so store only accepts/operates on Uint8Array (reject or coerce strings
upstream, or throw if a string is passed) and always store a copied Uint8Array
(use a new Uint8Array(buf) copy) into this._blobs; change retrieve(oid) to
return a copy of the stored buffer (not the original reference) so callers
cannot mutate the backing buffer. Ensure references to store and retrieve in
tests use the Uint8Array-only contract and that stored buffers are
created/copied via new Uint8Array(...) before insertion into this._blobs.
- Around line 182-203: The test currently materializes the same WarpGraph
instance that performed the patch, which can hide regressions in the persistence
read path; after performing graph.patch and asserting patchStorage._blobs is
empty, close the current graph and reopen a fresh WarpGraph via WarpGraph.open
(using the same persistence, graphName 'plain-test', and writerId) and then call
materialize(), hasNode('user:normal'), and getNodeProps('user:normal') on the
reopened instance to verify the unencrypted persistence path actually persisted
and can be read by a new graph instance.
In `@test/unit/infrastructure/adapters/CasBlobAdapter.test.js`:
- Around line 9-20: The test currently relies on lastConstructorArgs to infer
reuse but that can be identical across two instantiations; modify
MockContentAddressableStore to track explicit constructor calls by adding a
counter (e.g., casConstructorCount) and incrementing it in the constructor
(inside class MockContentAddressableStore); in the reuse test assert
casConstructorCount === 1 (or appropriate expected count) in addition to any
existing checks; update other referenced sections (around lines noted) to use
the same counter check so tests verify actual reuse rather than identical
options.
---
Outside diff comments:
In `@jsr.json`:
- Around line 11-24: The manifest exposes a new public subpath "./browser" but
the publish.include array doesn't list browser.js, causing published artifacts
to miss that entrypoint; update the publish.include array to add browser.js so
the "./browser" entrypoint is shipped (ensure the string "browser.js" is
included alongside other entries in the "publish.include" list to match the new
"./browser" mapping).
---
Duplicate comments:
In `@bin/cli/commands/serve.js`:
- Around line 43-45: The isLoopback function incorrectly treats any hostname
starting with "127." as loopback; update isLoopback to parse the hostname as an
IP literal (use Node's net.isIP or equivalent) and then check that the parsed
address is in 127.0.0.0/8 or the mapped ::ffff:127.x.x.x form rather than
prefix-matching strings, and ensure "localhost" and "::1" still pass; also add a
regression test that asserts a non-literal hostname like "127.example.com" is
NOT considered loopback by isLoopback and that a literal "127.0.0.1" (and
::ffff:127.0.0.1) still are.
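The stricter check described above can be sketched in a few lines. This is a hypothetical standalone version, not the actual serve.js helper; it assumes the function receives a hostname string and uses Node's `net.isIP` to reject non-literal hostnames:

```javascript
import net from 'node:net';

// Illustrative sketch: only IP literals (or the bare name "localhost")
// can qualify as loopback; DNS names like "127.example.com" never do.
function isLoopback(hostname) {
  if (hostname === 'localhost') return true;
  const family = net.isIP(hostname);
  if (family === 4) {
    // 127.0.0.0/8 — the first octet must be exactly 127.
    return hostname.split('.')[0] === '127';
  }
  if (family === 6) {
    const lower = hostname.toLowerCase();
    if (lower === '::1') return true;
    // IPv4-mapped loopback, e.g. ::ffff:127.0.0.1
    return lower.startsWith('::ffff:127.');
  }
  // Not an IP literal at all — never loopback.
  return false;
}

console.log(isLoopback('127.0.0.1'));        // true
console.log(isLoopback('127.example.com'));  // false
```

The key design point is parsing before matching: prefix-matching the raw string treats any hostname starting with "127." as safe, which a DNS name an attacker controls can satisfy.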
In `@eslint.config.js`:
- Around line 280-291: The current no-restricted-imports entries block only
static imports for "node:buffer" and "buffer" but dynamic loads (import(...) /
require(...)) are still allowed; modify the existing no-restricted-syntax
override that targets domain files (the override with "files":
["src/domain/**/*.js"] and rule "no-restricted-syntax") to include AST selectors
that reject ImportExpression and require CallExpression for those same module
names (e.g., selectors targeting ImportExpression whose source value is
"node:buffer" or "buffer", and CallExpression where callee is require and the
first argument value is "node:buffer" or "buffer"), rather than adding a second
no-restricted-syntax override, so dynamic imports/require of Buffer are
forbidden in domain code just like static imports are.
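The selector additions described above might look like the following. The surrounding config shape and messages are illustrative assumptions; only the esquery selector strings carry the substance:

```javascript
// Sketch of the merged domain-files override. The selectors reject both
// dynamic import() and require() of Buffer modules in domain code.
const domainBufferRestrictions = {
  files: ['src/domain/**/*.js'],
  rules: {
    'no-restricted-syntax': [
      'error',
      {
        selector:
          "ImportExpression[source.value='node:buffer'], ImportExpression[source.value='buffer']",
        message: 'Dynamic import() of Buffer is forbidden in domain code.',
      },
      {
        selector:
          "CallExpression[callee.name='require'][arguments.0.value='node:buffer'], CallExpression[callee.name='require'][arguments.0.value='buffer']",
        message: 'require() of Buffer is forbidden in domain code.',
      },
    ],
  },
};

console.log(domainBufferRestrictions.rules['no-restricted-syntax'].length);
```

Folding these into the existing override (rather than adding a second one) keeps all domain restrictions in one place, as the comment requests.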
In `@src/domain/services/WarpServeService.js`:
- Around line 244-258: The listen method currently assigns this._server and
registers subscriptions before the async bind completes, causing leaked state if
listen() or subscribe() throws; change listen to create the server locally
(e.g., const server = this._wsPort.createServer(...)) and perform await
server.listen(port, host) first, then on success set this._server = server and
move the subscription loop (graph.subscribe with this._broadcastDiff) after the
successful bind, storing each subscription into this._subscriptions; also ensure
you catch errors from server.listen or graph.subscribe and clean up any partial
subscriptions or close the local server before rethrowing so the instance isn't
left in a half-initialized state.
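The bind-then-commit ordering described above can be sketched as follows. The class, port, and subscription shapes here are assumptions standing in for the real WarpServeService API, not its actual implementation:

```javascript
// Minimal sketch: create locally, bind, subscribe, and only then commit
// to instance state — with rollback on any partial failure.
class ServeServiceSketch {
  constructor(wsPort, graphs) {
    this._wsPort = wsPort;
    this._graphs = graphs;
    this._server = null;
    this._subscriptions = [];
  }

  async listen(port, host) {
    const server = this._wsPort.createServer();
    await server.listen(port, host); // bind first; a throw leaks nothing
    const subscriptions = [];
    try {
      for (const graph of this._graphs) {
        subscriptions.push(graph.subscribe((diff) => this._broadcastDiff(diff)));
      }
    } catch (err) {
      // Roll back partial subscriptions and close the local server so the
      // instance is never left half-initialized.
      for (const sub of subscriptions) sub.unsubscribe();
      await server.close();
      throw err;
    }
    this._server = server;
    this._subscriptions = subscriptions;
  }

  _broadcastDiff() {}
}
```

Because `this._server` is only assigned on the success path, a failed `listen()` leaves the instance in its pristine pre-call state and is safe to retry.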
In `@src/infrastructure/adapters/CasBlobAdapter.js`:
- Around line 135-151: The fallback to legacy blobs is too broad because the
try/catch wraps both cas.readManifest and cas.restore; instead only catch errors
from cas.readManifest and apply isLegacyBlobError-based fallback there.
Concretely, call cas.readManifest({ treeOid: oid }) inside its own try/catch
(using isLegacyBlobError to decide to call this._persistence.readBlob(oid)), and
if readManifest succeeds then call cas.restore(restoreOpts) without that
legacy-error catch so restore errors (e.g., missing chunks, corruption,
decryption, I/O) propagate; adjust references to cas.readManifest, cas.restore,
isLegacyBlobError, and this._persistence.readBlob accordingly.
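The narrowed fallback can be sketched as a free function. The `cas`, `persistence`, and `isLegacyBlobError` shapes are assumptions standing in for the adapter's internals:

```javascript
// Only readManifest failures may route to the legacy blob path;
// restore errors (missing chunks, corruption, decryption, I/O) propagate.
async function retrieveSketch({ cas, persistence, isLegacyBlobError }, oid) {
  let manifest;
  try {
    manifest = await cas.readManifest({ treeOid: oid });
  } catch (err) {
    if (isLegacyBlobError(err)) {
      // Pre-CAS blob: read it directly from persistence.
      return persistence.readBlob(oid);
    }
    throw err;
  }
  // Manifest found — any restore failure must surface, never silently
  // fall back to legacy data that would mask real corruption.
  return cas.restore({ manifest });
}
```

The shape matters more than the names: one try/catch per failure mode, so the catch cannot accidentally swallow errors from the later call.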
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ad100dd8-619e-45d0-befb-152954b78393
📒 Files selected for processing (26)
- CHANGELOG.md
- bin/cli/commands/serve.js
- bin/cli/schemas.js
- bin/warp-graph.js
- eslint.config.js
- index.d.ts
- index.js
- jsr.json
- src/domain/services/WarpServeService.js
- src/domain/utils/bytes.js
- src/domain/utils/defaultCrypto.js
- src/domain/warp/materializeAdvanced.methods.js
- src/infrastructure/adapters/CasBlobAdapter.js
- src/infrastructure/adapters/NodeCryptoAdapter.js
- src/infrastructure/adapters/NodeWsAdapter.js
- src/infrastructure/adapters/TrustCryptoAdapter.js
- src/infrastructure/adapters/sha1sync.js
- test/unit/cli/commands/serve.test.js
- test/unit/cli/schemas.test.js
- test/unit/domain/WarpGraph.encryption.test.js
- test/unit/domain/services/WarpServeService.test.js
- test/unit/infrastructure/adapters/BunWsAdapter.test.js
- test/unit/infrastructure/adapters/CasBlobAdapter.test.js
- test/unit/infrastructure/adapters/httpAdapterUtils.test.js
- test/unit/infrastructure/adapters/lazyCasInit.test.js
- test/unit/infrastructure/adapters/sha1sync.test.js
✅ Files skipped from review due to trivial changes (1)
- test/unit/infrastructure/adapters/lazyCasInit.test.js
🚧 Files skipped from review as they are similar to previous changes (5)
- bin/warp-graph.js
- test/unit/infrastructure/adapters/BunWsAdapter.test.js
- test/unit/infrastructure/adapters/sha1sync.test.js
- src/domain/utils/bytes.js
- src/infrastructure/adapters/TrustCryptoAdapter.js
let props = nodePropsMap.get(decoded.nodeId);
if (!props) {
  props = {};
  nodePropsMap.set(decoded.nodeId, props);
}
props[decoded.propKey] = lwwValue(reg);
There was a problem hiding this comment.
Use a null-prototype object for serialized node props.
Line 122 starts each props bag with {} even though decoded.propKey comes from graph data. Keys like __proto__ or constructor won’t round-trip as ordinary properties here, and they can mutate the payload object’s prototype instead.
Possible fix
let props = nodePropsMap.get(decoded.nodeId);
if (!props) {
- props = {};
+ props = /** `@type` {Record<string, unknown>} */ (Object.create(null));
nodePropsMap.set(decoded.nodeId, props);
}
props[decoded.propKey] = lwwValue(reg);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/domain/services/WarpServeService.js` around lines 120 - 125, The props
bag is created with a normal object literal which allows graph-derived keys like
"__proto__" or "constructor" to mutate the prototype; when initializing the
props entry in nodePropsMap for decoded.nodeId, create a null-prototype object
(e.g. via Object.create(null)) instead of {} so decoded.propKey can be used
safely, then continue assigning props[decoded.propKey] = lwwValue(reg) as
before; update the initialization logic around nodePropsMap.get(decoded.nodeId)
/ nodePropsMap.set(...) to use the null-prototype object.
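The hazard the prompt describes can be demonstrated in isolation; this standalone demo uses no project code:

```javascript
// With a plain object literal, a graph-derived key like "__proto__"
// hits the Object.prototype setter and mutates the prototype chain
// instead of becoming an ordinary own property.
const plain = {};
plain['__proto__'] = { polluted: true };
console.log(Object.keys(plain).length); // 0 — the key vanished
console.log(plain.polluted);            // true — inherited via the new prototype

// A null-prototype bag has no such setter, so the key round-trips.
const safe = Object.create(null);
safe['__proto__'] = { ok: true };
console.log(Object.keys(safe).length);  // 1 — stored as an own property
```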
ws.on('message', (/** @type {import('ws').RawData} */ data) => {
  if (messageHandler) {
    messageHandler(messageToString(data));
  }
});

ws.on('close', (/** @type {number} */ code, /** @type {Buffer} */ reason) => {
  if (closeHandler) {
    closeHandler(code, reason?.toString());
  }
});
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed: `find . -name "NodeWsAdapter.js" -type f`
🏁 Script executed: `cat -n src/infrastructure/adapters/NodeWsAdapter.js`
🏁 Script executed: `cat -n package.json | grep -A 5 -B 5 '"ws"'`
Repository: git-stunts/git-warp
🌐 Web query:
ws npm library error handling unhandled error event EventEmitter
💡 Result:
In Node.js, any EventEmitter that emits an 'error' event will crash the process if you didn’t register at least one 'error' listener—this is what produces throw er; // Unhandled 'error' event. [1]
With the ws library, the fix is to always attach error handlers on:
- the server (
WebSocketServer) - each connection (
wssocket) - (optionally) the underlying HTTP server / upgrade path for bad handshakes
Minimal robust pattern (server)
import http from "http";
import WebSocket, { WebSocketServer } from "ws";
const server = http.createServer();
server.on("error", (err) => console.error("HTTP server error:", err));
const wss = new WebSocketServer({ server });
wss.on("error", (err) => console.error("WSS error:", err));
wss.on("connection", (ws, req) => {
ws.on("error", (err) => console.error("WS client error:", err)); // prevents “Unhandled 'error' event”
ws.on("close", (code, reason) => {
// cleanup / logging
});
ws.on("message", (data) => {
// handle message
});
});
server.listen(8080);

ws itself shows this same idea in its docs (e.g., ws.on('error', console.error)). [2]
Typical cause of the error you’re seeing
A network/protocol issue (very commonly ECONNRESET, bad frames, etc.) triggers an 'error' on the WebSocket instance; if you didn’t attach ws.on('error', ...), Node treats it as unhandled and throws. [3]
Sources:
[1] Node.js EventEmitter error-event behavior (no listener ⇒ throw) (r2.nodejs.org)
[2] ws package docs example using ws.on('error', ...) (npmjs.com)
[3] Example stack trace showing ws emitting an 'error' leading to “Unhandled 'error' event” (stackoverflow.com)
Citations:
- 1: https://r2.nodejs.org/docs/v5.10.0/api/all.html?utm_source=openai
- 2: https://www.npmjs.com/package/ws?utm_source=openai
- 3: https://stackoverflow.com/questions/69886630/node-js-websocket-unhandeled-error-events-js-websocket-js?utm_source=openai
Add a socket-level error handler in wrapConnection.
Lines 18–28 register message and close listeners, but each ws connection is an EventEmitter that can emit error. Without a listener, any socket-level error (e.g., ECONNRESET, malformed frames) will crash the process. Server-level error handlers do not prevent individual socket errors.
Possible fix
ws.on('message', (/** `@type` {import('ws').RawData} */ data) => {
if (messageHandler) {
messageHandler(messageToString(data));
}
});
+
+ ws.on('error', () => {
+ // Prevent unhandled socket-level errors; cleanup still happens via `close`.
+ });
ws.on('close', (/** `@type` {number} */ code, /** `@type` {Buffer} */ reason) => {
if (closeHandler) {
closeHandler(code, reason?.toString());
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/infrastructure/adapters/NodeWsAdapter.js` around lines 18 - 28,
wrapConnection currently registers message and close listeners on ws but omits a
socket-level 'error' listener; add ws.on('error', ...) inside wrapConnection so
socket errors don't crash the process, and route the error to an existing error
handler if one exists (or at minimum log the error) — reference ws,
wrapConnection, messageHandler and closeHandler when adding this handler so it
mirrors the other event wiring and safely consumes or delegates socket errors.
function preprocess(msg) {
  const bitLen = msg.length * 8;
  const totalBytes = msg.length + 1 + ((119 - (msg.length % 64)) % 64) + 8;
  const padded = new Uint8Array(totalBytes);
  padded.set(msg);
  padded[msg.length] = 0x80;
  const dv = new DataView(padded.buffer);
  // SHA-1 spec requires 64-bit big-endian message length. High 32 bits at
  // offset -8 are zero-initialized, so this is correct for messages < 512 MB.
  // Messages >= 2^32 bits (~512 MB) would need the high word set explicitly.
  dv.setUint32(totalBytes - 4, bitLen, false);

  const blocks = [];
  for (let i = 0; i < totalBytes; i += 64) {
    const block = new Uint32Array(80);
    for (let j = 0; j < 16; j++) {
      block[j] = dv.getUint32(i + j * 4, false);
    }
    for (let j = 16; j < 80; j++) {
      block[j] = rotl(block[j - 3] ^ block[j - 8] ^ block[j - 14] ^ block[j - 16], 1);
    }
    blocks.push(block);
  }
  return blocks;
There was a problem hiding this comment.
Don't materialize every expanded block before hashing.
Lines 38-47 allocate and retain an 80-word schedule for each 64-byte chunk. That is ~320 bytes of schedule data per 64-byte block, so sha1sync() can hit multi-GB transient memory long before the 512 MB guard on Line 108.
♻️ Stream the schedules instead of collecting them
-function preprocess(msg) {
+function* preprocess(msg) {
const bitLen = msg.length * 8;
const totalBytes = msg.length + 1 + ((119 - (msg.length % 64)) % 64) + 8;
const padded = new Uint8Array(totalBytes);
padded.set(msg);
padded[msg.length] = 0x80;
const dv = new DataView(padded.buffer);
dv.setUint32(totalBytes - 4, bitLen, false);
- const blocks = [];
for (let i = 0; i < totalBytes; i += 64) {
const block = new Uint32Array(80);
for (let j = 0; j < 16; j++) {
block[j] = dv.getUint32(i + j * 4, false);
}
for (let j = 16; j < 80; j++) {
block[j] = rotl(block[j - 3] ^ block[j - 8] ^ block[j - 14] ^ block[j - 16], 1);
}
- blocks.push(block);
+ yield block;
}
- return blocks;
}
@@
- const blocks = preprocess(data);
const state = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0];
- for (const w of blocks) {
+ for (const w of preprocess(data)) {
processBlock(state, w);
}Also applies to: 111-116
async store(/** @type {string|Uint8Array} */ content) {
  this._counter++;
  // Generate a fake OID (40-char hex)
  const oid = this._counter.toString(16).padStart(40, '0');
  const buf = typeof content === 'string'
    ? new TextEncoder().encode(content)
    : new Uint8Array(content);
  this._blobs.set(oid, buf);
  return oid;
}

async retrieve(/** @type {string} */ oid) {
  const buf = this._blobs.get(oid);
  if (!buf) {
    throw new Error(`InMemoryBlobStorage: OID not found: ${oid}`);
  }
  return buf;
There was a problem hiding this comment.
Keep the blob-storage double strict and immutable.
store() still accepts string, and retrieve() returns the backing Uint8Array by reference. That makes this double looser than the new Uint8Array-only port boundary and can hide in-place mutation bugs on repeated reads.
Suggested fix
- async store(/** `@type` {string|Uint8Array} */ content) {
+ async store(/** `@type` {Uint8Array} */ content) {
this._counter++;
// Generate a fake OID (40-char hex)
const oid = this._counter.toString(16).padStart(40, '0');
- const buf = typeof content === 'string'
- ? new TextEncoder().encode(content)
- : new Uint8Array(content);
+ const buf = new Uint8Array(content);
this._blobs.set(oid, buf);
return oid;
}
async retrieve(/** `@type` {string} */ oid) {
const buf = this._blobs.get(oid);
if (!buf) {
throw new Error(`InMemoryBlobStorage: OID not found: ${oid}`);
}
- return buf;
+ return new Uint8Array(buf);
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async store(/** @type {Uint8Array} */ content) {
  this._counter++;
  // Generate a fake OID (40-char hex)
  const oid = this._counter.toString(16).padStart(40, '0');
  const buf = new Uint8Array(content);
  this._blobs.set(oid, buf);
  return oid;
}

async retrieve(/** @type {string} */ oid) {
  const buf = this._blobs.get(oid);
  if (!buf) {
    throw new Error(`InMemoryBlobStorage: OID not found: ${oid}`);
  }
  return new Uint8Array(buf);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/WarpGraph.encryption.test.js` around lines 25 - 41, The
in-memory blob store is too permissive and returns mutable buffers by reference;
update the store(…) and retrieve(…) in this test helper so store only
accepts/operates on Uint8Array (reject or coerce strings upstream, or throw if a
string is passed) and always store a copied Uint8Array (use a new
Uint8Array(buf) copy) into this._blobs; change retrieve(oid) to return a copy of
the stored buffer (not the original reference) so callers cannot mutate the
backing buffer. Ensure references to store and retrieve in tests use the
Uint8Array-only contract and that stored buffers are created/copied via new
Uint8Array(...) before insertion into this._blobs.
// Re-open WITHOUT patchBlobStorage — should fail during open() or materialize()
// (the migration boundary check in open() reads the tip patch)
const openPromise = WarpGraph.open({
  persistence: repo.persistence,
  graphName: 'enc-test',
  writerId: 'writer-2',
});

/** @type {unknown} */
const err = await openPromise.then(() => null, (e) => e);
expect(err).toBeInstanceOf(EncryptionError);
expect(/** @type {Error} */ (err).message).toMatch(/encrypted patches/);
There was a problem hiding this comment.
Exercise both failure points this test claims to cover.
The comment says the missing patchBlobStorage check may fail in open() or materialize(), but the assertion only accepts an open() rejection. If the implementation defers the error until materialization, this becomes a false negative.
Suggested fix
- const openPromise = WarpGraph.open({
+ const openedOrErr = await WarpGraph.open({
persistence: repo.persistence,
graphName: 'enc-test',
writerId: 'writer-2',
- });
-
- /** `@type` {unknown} */
- const err = await openPromise.then(() => null, (e) => e);
+ }).then((graph) => graph, (e) => e);
+
+ /** `@type` {unknown} */
+ const err = openedOrErr instanceof Error
+ ? openedOrErr
+ : await openedOrErr.materialize().then(() => null, (e) => e);
expect(err).toBeInstanceOf(EncryptionError);
expect(/** `@type` {Error} */ (err).message).toMatch(/encrypted patches/);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/WarpGraph.encryption.test.js` around lines 127 - 138, The
test only captures a rejection from WarpGraph.open but not a later rejection
from graph.materialize, so it can miss the error if the implementation defers
the check; change the test to handle both cases by awaiting openPromise and, if
it resolves to a graph, call graph.materialize() and capture any thrown error,
otherwise capture the open() rejection — then assert the captured error is an
EncryptionError and its message matches /encrypted patches/; reference
WarpGraph.open, the openPromise variable, and the graph.materialize() call when
implementing this.
it('no behavior change when patchBlobStorage is not provided', async () => {
  const graph = await WarpGraph.open({
    persistence: repo.persistence,
    graphName: 'plain-test',
    writerId: 'writer-1',
  });

  await graph.patch(p => {
    p.addNode('user:normal');
    p.setProperty('user:normal', 'status', 'active');
  });

  // patchBlobStorage should be empty — patches went to persistence directly
  expect(patchStorage._blobs.size).toBe(0);

  const state = await graph.materialize();
  expect(state).toBeTruthy();

  expect(await graph.hasNode('user:normal')).toBe(true);
  const props = /** @type {any} */ (await graph.getNodeProps('user:normal'));
  expect(props.status).toBe('active');
});
There was a problem hiding this comment.
Reopen before asserting the plain persistence path.
This test materializes the same graph instance that wrote the patch. That can still pass from in-memory state even if the unencrypted read/write path regressed. Reopening a fresh WarpGraph is needed to verify the “no behavior change” claim.
Suggested fix
it('no behavior change when patchBlobStorage is not provided', async () => {
const graph = await WarpGraph.open({
persistence: repo.persistence,
graphName: 'plain-test',
writerId: 'writer-1',
});
await graph.patch(p => {
p.addNode('user:normal');
p.setProperty('user:normal', 'status', 'active');
});
// patchBlobStorage should be empty — patches went to persistence directly
expect(patchStorage._blobs.size).toBe(0);
- const state = await graph.materialize();
+ const reopened = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'plain-test',
+ writerId: 'writer-2',
+ });
+ const state = await reopened.materialize();
expect(state).toBeTruthy();
- expect(await graph.hasNode('user:normal')).toBe(true);
- const props = /** `@type` {any} */ (await graph.getNodeProps('user:normal'));
+ expect(await reopened.hasNode('user:normal')).toBe(true);
+ const props = /** `@type` {any} */ (await reopened.getNodeProps('user:normal'));
expect(props.status).toBe('active');
});

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/domain/WarpGraph.encryption.test.js` around lines 182 - 203, The
test currently materializes the same WarpGraph instance that performed the
patch, which can hide regressions in the persistence read path; after performing
graph.patch and asserting patchStorage._blobs is empty, close the current graph
and reopen a fresh WarpGraph via WarpGraph.open (using the same persistence,
graphName 'plain-test', and writerId) and then call materialize(),
hasNode('user:normal'), and getNodeProps('user:normal') on the reopened instance
to verify the unencrypted persistence path actually persisted and can be read by
a new graph instance.
/** Captures constructor args for assertion. @type {any} */
let lastConstructorArgs = {};

class MockContentAddressableStore {
  constructor(/** @type {any} */ opts) {
    lastConstructorArgs = opts;
    this.readManifest = mockReadManifest;
    this.restore = mockRestore;
    this.store = mockStore;
    this.createTree = mockCreateTree;
  }
}
There was a problem hiding this comment.
Count CAS constructor invocations explicitly in the reuse test.
lastConstructorArgs staying equal after two store() calls does not prove the CAS instance was reused; a second initialization with the same options would still pass this test.
Proposed fix
/** Captures constructor args for assertion. `@type` {any} */
let lastConstructorArgs = {};
+let constructorCallCount = 0;
class MockContentAddressableStore {
constructor(/** `@type` {any} */ opts) {
+ constructorCallCount += 1;
lastConstructorArgs = opts;
this.readManifest = mockReadManifest;
this.restore = mockRestore;
this.store = mockStore;
this.createTree = mockCreateTree;
@@
beforeEach(() => {
vi.clearAllMocks();
lastConstructorArgs = {};
+ constructorCallCount = 0;
});
@@
it('reuses CAS instance across multiple calls', async () => {
mockStore.mockResolvedValue({});
mockCreateTree.mockResolvedValue('tree-oid');
@@
await adapter.store('data1');
const firstArgs = { ...lastConstructorArgs };
await adapter.store('data2');
// Same instance (constructor called only once)
expect(lastConstructorArgs).toEqual(firstArgs);
+ expect(constructorCallCount).toBe(1);
});

Also applies to: 67-70, 321-335
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/unit/infrastructure/adapters/CasBlobAdapter.test.js` around lines 9 -
20, The test currently relies on lastConstructorArgs to infer reuse but that can
be identical across two instantiations; modify MockContentAddressableStore to
track explicit constructor calls by adding a counter (e.g., casConstructorCount)
and incrementing it in the constructor (inside class
MockContentAddressableStore); in the reuse test assert casConstructorCount === 1
(or appropriate expected count) in addition to any existing checks; update other
referenced sections (around lines noted) to use the same counter check so tests
verify actual reuse rather than identical options.
- Add "types" field to "./browser" and "./sha1sync" exports in package.json with new browser.d.ts and sha1sync.d.ts declaration files
- Add browser.js, browser.d.ts, sha1sync.d.ts to jsr.json publish.include
- Add --port, --host, --expose flags to serve help text
- Update B166 in ROADMAP.md to remove stale WarpSocket.js file reference
- Clarify defaultCrypto.js comment: failure is bundler-specific, not generic
…vability
- BunWsAdapter/DenoWsAdapter: buffer messages arriving before onMessage handler is set, flush on handler registration
- NodeWsAdapter: accept optional onError callback for runtime server errors, wire through listenWithHttp/listenWsOnly/createStaticHandler
- Rename startup error handler variables to avoid shadowing onError callback
- Extract createDenoHandler() to stay under max-lines-per-function limit
- wsAdapterUtils: hoist TextDecoder to module level for reuse
- staticFileHandler: Object.freeze() FORBIDDEN and NOT_FOUND constants
- sha1sync: clarify comment about uint32 bitLen guard
- BunWsData type: add messageBuffer field
- Tests: replace real timers with fake timers (BunWsAdapter close test), promise-based approach (NodeWsAdapter close test), add buffer tests for Bun/Deno adapters, add onError acceptance test for NodeWsAdapter
…tize errors
- WarpServeService: reject non-integer finite ceilings in seek (extract _validateCeiling), add 1 MiB message size limit, add 64 KiB property value size limit for wildcard args (extract validateWildcardArg)
- SyncProtocol/WormholeService: null-guard readBlob/retrieve results with PersistenceError(E_MISSING_OBJECT) instead of passing null to codec
- PatchBuilderV2: document "encrypted" field as legacy name for external blob storage (ref ADR-0002)
- bytes.js: replace regex in hexDecode with direct charCode validation loop via hexCharValue() helper
- patch.methods.js: align _readPatchBlob @this to WarpGraphWithMixins
- seekCacheKey.js: document intentional sync→async change (WebCrypto)
- eslint: add WormholeService.js and WarpServeService.js to relaxed block
- Tests: non-integer ceiling, oversized message, oversized property value
- WarpServeService tests: add afterEach(service.close()) to all describe blocks with beforeEach, add cross-graph mutate rejection test, improve _onMessage monkey-patch comment
- CasBlobAdapter tests: replace Buffer.from() with Uint8Array/TextEncoder
- CasSeekCacheAdapter tests: replace all Buffer.from()/Buffer.alloc() with Uint8Array/TextEncoder equivalents
- staticFileHandler tests: replace Buffer.from([...]) with Uint8Array
- WarpGraph.encryption test: replace manual .then(() => null, (e) => e) with expect().rejects.toThrow()
- serve CLI test: add positive regex assertion for ephemeral writerId format
- serve.js: simplify resolveTargetGraphs return type (remove redundant persistence passthrough)
- CasBlobAdapter: switch dynamic LoggerObservabilityBridge import to static
- eslint: add WebSocket to source file globals (used by DenoWsAdapter)
- index.d.ts: add WarpServeService and WebSocketServerPort type declarations
- bad_code.md: log WarpServeService domain/infrastructure coupling concern
Release Preflight
If you tag this commit as |
NodeWsAdapter.wrapConnection() dropped messages arriving before onMessage(handler) was called. Added a messageBuffer array matching the pattern already used in BunWsAdapter: messages are queued when no handler is set and flushed synchronously when the handler is registered.
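The buffer-then-flush pattern this commit describes can be sketched independently of the adapter; the connection shape below is illustrative, not the real NodeWsAdapter surface:

```javascript
// Messages delivered before a handler is registered are queued,
// then flushed synchronously when onMessage() installs the handler.
function wrapConnectionSketch() {
  /** @type {string[]} */
  const messageBuffer = [];
  let messageHandler = null;

  return {
    // Called by the socket layer for each inbound frame.
    deliver(msg) {
      if (messageHandler) {
        messageHandler(msg);
      } else {
        messageBuffer.push(msg); // queue until a handler exists
      }
    },
    // Registers the handler and drains the queue in arrival order.
    onMessage(handler) {
      messageHandler = handler;
      while (messageBuffer.length > 0) {
        handler(messageBuffer.shift());
      }
    },
  };
}
```

Flushing synchronously inside `onMessage` preserves ordering: buffered messages are observed before any frame that arrives after registration.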
Add JSDoc to isLoopback() explicitly documenting that wildcard addresses (0.0.0.0, ::, 0:0:0:0:0:0:0:0) are not loopback and require --expose. Add three tests verifying assertExposeSafety rejects each wildcard without --expose.
When readBlob() returns null (corrupt or missing blob), _readPatchBlob now throws PersistenceError with E_MISSING_OBJECT instead of passing null to the CBOR decoder, which would crash with a confusing error. Matches the existing guard pattern in SyncProtocol and WormholeService.
- Narrow encryptionKey JSDoc from Buffer|Uint8Array to Uint8Array per project convention (Buffer confined to adapter internals only)
- Add null guard on legacy blob fallback in retrieve() so a missing blob throws a descriptive error instead of silently returning null
- Add test covering the null-OID fallback path
Remove blank lines between bullet points within Fixed sections and rename non-standard "### Upgraded" section to "### Changed", merging its items into the existing Changed section per Keep a Changelog spec.
…size check
- Extract correlation ID from raw message before the async _onMessage call so the catch handler can correlate errors without re-parsing JSON.
- Replace bare `Function` types with a `GraphHandle` typedef carrying specific method signatures throughout WarpServeService.
- Add comment documenting that the 1 MiB size check uses String.length (UTF-16 code units), which is approximate for multi-byte content.
- Remove unused `getInstance()` static method and `instance` variable from MockWarpServeService in serve.test.js
- Tighten symlink-outside-root assertion from `.not.toBe(200)` to `.toBe(404)` in staticFileHandler.test.js
Also fix pre-existing complexity violation in bytes.js base64Decode by extracting validation into a separate validateBase64 helper.
…vice
- Use `/** @type {*} */` cast in readPatchBlob.test.js to avoid requiring full WarpGraphWithMixins surface in minimal mock contexts
- Replace inline GraphHandle typedef with direct WarpGraph import type to avoid wildcard ratchet breach and signature mismatch
- Cast dynamic patch[op] dispatch through Record<string, ...> to satisfy tsc index signature check
- Add `@type {string|undefined}` to pre-extracted message ID variable
Release Preflight
If you tag this commit as |
@coderabbitai review

✅ Actions performed: Review triggered.
…ce manifest

These exports were added to index.js in earlier commits but the type-surface.m8.json manifest was not updated, causing the CI check-dts-surface gate to fail.
Release Preflight
If you tag this commit as |
Summary
- `git warp serve` — New CLI command that opens a WebSocket server exposing WARP graphs to browser clients. Supports Node.js, Bun, and Deno via runtime-detected adapters (NodeWsAdapter, BunWsAdapter, DenoWsAdapter). Optional `--static <dir>` flag serves a built SPA on the same port.
- Browsa architecture pivot — Rewired the Vue demo app from in-memory WarpGraph instances to a live WebSocket connection via WarpSocket. The browser now views/edits a real Git-backed graph through `git warp serve`.
- Uint8Array migration (BREAKING) — Domain layer now uses Uint8Array exclusively; Buffer confined to infrastructure adapters. ESLint no-restricted-globals enforces the boundary. All port contracts, return types, and domain utilities updated.
- git-cas v5.2.4 — CDC chunking for seek cache (~98% chunk reuse), encrypted seek cache via AES-256-GCM, streaming restore, observability bridge.
- Graph encryption at rest (B164) — patchBlobStorage option on WarpGraph.open() encrypts/decrypts patch CBOR via CAS. Mixed encrypted/plain patches supported.
- Content attachments via CAS (B160) — BlobStoragePort and CasBlobAdapter for CDC-chunked, optionally encrypted content blob storage.
- Security hardening — Mutation op allowlist + per-op arg validation in WarpServeService, hexDecode input validation, protocol version checking, request timeouts.

Test plan

- `tsc --noEmit`: 0 errors (strict mode)
- `git warp serve` → open browsa in browser → add/remove nodes → time-travel slider → inspect node
New Features
Changed
Upgraded
Tests
Documentation