[kafka pr-1 · 01/6] Walking skeleton: register endpoint into named CG#390
Replace the developer-machine absolute path fallback with one derived from REPO_ROOT (already computed in the same file) so the e2e test is portable across machines and CI.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
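A minimal sketch of the change, assuming `REPO_ROOT` is the value the e2e file already computes; the path segments below are illustrative placeholders, not the real fixture location:

```typescript
import { join } from "node:path";

// Illustrative only: the real e2e file computes REPO_ROOT itself, and the
// "packages/kafka/test/fixtures" segments are placeholder path parts.
// Deriving from the repo root removes the machine-specific absolute path.
function fixtureDir(repoRoot: string): string {
  return join(repoRoot, "packages", "kafka", "test", "fixtures");
}
```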
Nothing in packages/kafka/src or tests imports from @origintrail-official/dkg-core; the dependency was carried over but never used. Removing it shrinks the install graph and clarifies the package's actual coupling.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The kafka package now hands a bare KA across the KafkaEndpointPublisher
boundary; the {public: <doc>} envelope expected by agent.publish is
applied by the route-handler adapter in packages/cli. This mirrors the
EpcisPublisher pattern and keeps the kafka package agnostic of the
agent.publish payload shape, which sets the contract for slices 02-07.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
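The boundary described above can be sketched as follows; the type and function names (`buildEndpointKa`, `toPublishPayload`) are hypothetical stand-ins, not the real identifiers:

```typescript
// Hypothetical stand-in for a Knowledge Asset document.
type KnowledgeAsset = Record<string, unknown>;

// kafka package side: emits a bare KA and knows nothing about the
// agent.publish payload shape.
function buildEndpointKa(topic: string): KnowledgeAsset {
  return { "@type": "KafkaTopicEndpoint", topic };
}

// packages/cli route-handler adapter side: applies the { public: <doc> }
// envelope that agent.publish expects, mirroring the EpcisPublisher pattern.
function toPublishPayload(ka: KnowledgeAsset): { public: KnowledgeAsset } {
  return { public: ka };
}
```

Keeping the envelope on the adapter side means slices 02–07 can change the publish payload shape without touching the kafka package.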
The publisher closure's first parameter was previously named targetContextGraphId, shadowing the outer targetContextGraphId variable defined in the same handler. Renaming the closure parameter to cgId removes the shadow and makes the two scopes obviously distinct.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replace the generic criticalityTargets.kosava floor (60% lines/funcs/statements, 50% branches) with a kafka-specific export pinned to the package's measured coverage (100% lines/funcs/statements, 50% branches). Without this, untested code can be added freely while CI still passes. Mirror the kosavaEpcisCoverage pattern, including the bare '../../vitest.coverage' import path used by EPCIS.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
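The shape of such a pinned export, as a sketch: the keys follow vitest's coverage `thresholds` options, and the constant name mirrors the pattern described above rather than being copied from the repo:

```typescript
// Pinned to the package's measured coverage so that any untested addition
// drops a metric below its floor and fails CI, instead of coasting on a
// generic 60% target. Name follows the kosavaEpcisCoverage pattern.
const kosavaKafkaCoverage = {
  lines: 100,
  functions: 100,
  statements: 100,
  branches: 50,
};
```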
Five backticks inside JS comments at lines 742, 868, 881, 898, 899 were not escaped, so bash interpreted them as command substitution inside the surrounding `node -e "..."` double-quoted string. On every `devnet start` this produced noisy errors like:

scripts/devnet.sh: line 694: continue: only meaningful in a `for'...
scripts/devnet.sh: line 694: nonce++: command not found
scripts/devnet.sh: line 694: syntax error near unexpected token `idId'

The text was inside JS `//` comments, so the staking script still ran, but the diagnostics drowned out real failures (e.g. the stale-bytecode revert this commit's sibling fixes). Lines 730-733 and 768 already use `\`` correctly; these five were oversights.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
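A standalone reproduction of the quoting rule at fault (not the devnet.sh code itself): inside a bash double-quoted string an unescaped backtick starts command substitution, while a backslash-escaped backtick stays literal.

```typescript
import { execSync } from "node:child_process";

// Unescaped: bash runs `true` as command substitution inside the double
// quotes, so the backticked text vanishes from the output.
const unescaped = execSync('echo "tick: `true` done"', { shell: "/bin/bash" })
  .toString()
  .trim();

// Escaped as \`: bash passes the backticks through literally, which is
// what the five fixed comment lines in devnet.sh needed.
const escaped = execSync('echo "tick: \\`literal\\` done"', { shell: "/bin/bash" })
  .toString()
  .trim();
```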
When hardhat is left running across edits to .sol sources, the on-chain bytecode lags the recompiled artifacts on disk. The next `devnet start` short-circuits in `start_hardhat` (alive PID), then `deploy_contracts` skips on the still-present `.devnet/hardhat/deployed` marker, so daemons connect to addresses whose code predates the latest source.

View methods added since that deploy revert with "function selector not recognized"; ethers surfaces this as `require(false)` (no return data, no fallback). On a 6-node devnet this manifests as `0/4 core node(s) staked` because the staking loop's `CSS.getNodeStakeV10` probe fails for every node.

Detect the mismatch by comparing artifact mtimes to the marker. If any contract artifact JSON is newer than the marker, kill the running hardhat so the existing fresh-start path (which clears the marker plus `localhost_contracts.json` and `hardhat_contracts.json`) runs and `deploy_contracts` re-deploys onto a fresh chain. No-op when artifacts are unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
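The detection step can be sketched as follows, in TypeScript rather than the shell that devnet.sh actually uses; the directory layout and names are illustrative:

```typescript
import { statSync, readdirSync } from "node:fs";
import { join } from "node:path";

// Sketch of the staleness check: true when any contract artifact JSON in
// artifactsDir has been written after the deployed marker, meaning the
// running chain's bytecode predates the recompiled artifacts.
function artifactsNewerThanMarker(artifactsDir: string, markerPath: string): boolean {
  const markerMtime = statSync(markerPath).mtimeMs;
  return readdirSync(artifactsDir)
    .filter((name) => name.endsWith(".json"))
    .some((name) => statSync(join(artifactsDir, name)).mtimeMs > markerMtime);
}
```

When this returns true, the script would kill the live hardhat PID so the existing fresh-start path clears the marker and re-deploys.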
Two issues kept the live e2e from passing:
1. The endpoint-row SPARQL queried the default graph, but the daemon
stores published triples in named per-CG graphs (see
packages/cli/src/daemon/routes/query.ts — every cg-scoped read in
the codebase wraps its WHERE in `GRAPH ?g { ... }`). Without the
wrapper the BGP returned zero bindings even though the data was
present, so the 20s wait always timed out.
2. The completion check gated on `response.result.type === 'bindings'`,
but `/api/query` returns `{ result: { bindings: [...] } }` with no
`type` discriminator on this path — `QueryResult`'s optional `type`
is only set on a couple of legacy callers. The check was always
false, so even when the SPARQL did match (after fix #1) the loop
still spun out.
Both surfaced together because the fresh devnet harness was finally
healthy enough to run the slice end-to-end (PR #383 unblocked the
chain side). Verified locally: `DKG_KAFKA_E2E=1 pnpm --filter
@origintrail-official/dkg-kafka exec vitest run
test/e2e/walking-skeleton.test.ts` now passes in <1s.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
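Both fixes can be sketched together; the vocabulary IRIs and helper names below are illustrative, not the ontology or identifiers the package actually uses:

```typescript
// Fix 1: wrap the BGP in GRAPH ?g, because the daemon stores published
// triples in named per-CG graphs, not the default graph.
const endpointQuery = `
  SELECT ?endpoint ?topic WHERE {
    GRAPH ?g {
      ?endpoint a <urn:example:KafkaTopicEndpoint> ;
                <urn:example:topic> ?topic .
    }
  }`;

// Fix 2: /api/query returns { result: { bindings: [...] } } with no
// `type` discriminator on this path, so the completion check keys on the
// bindings array itself instead of result.type === 'bindings'.
type QueryResponse = { result?: { bindings?: unknown[] } };

function hasBindings(response: QueryResponse): boolean {
  const bindings = response.result?.bindings;
  return Array.isArray(bindings) && bindings.length > 0;
}
```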
Stack position
This is the integration branch for PR-1 of the kafka-registry plan
(foundation: endpoint registration + auth scopes). It stays in DRAFT
until slices 02–06 have all merged into it. Then this PR becomes the single
review-and-merge surface for the foundation as a whole.
```
main
└── feat/kafka-walking-skeleton ← THIS PR (draft until full foundation lands)
├── feat/kafka-explicit-cg (slice 02 → sub-PR into this branch)
├── feat/kafka-default-private (slice 03 → sub-PR into this branch)
├── feat/kafka-probe (slice 04 → sub-PR into this branch)
├── feat/kafka-list-revoke-verify (slice 05 — later)
└── feat/kafka-auth-scopes (slice 06 — later, HITL)
```
Slice 07 (subscription verb) is the second PR-to-main and branches off
`main` after this one merges.
What slice 01 adds
Thinnest end-to-end path through `packages/kafka`: a caller registers a
Kafka topic endpoint as a Knowledge Asset in a named Context Graph, then
SPARQL-discovers it back. Walking-skeleton scope only — no probe, no auth
scopes, no privacy flag, no soft-revoke, no list/get. Those are explicit
non-goals for this slice and ship in 02–06.
Includes two cherry-picked devnet fixes (PR #383 lineage) that unblocked the live e2e: backtick escaping in `scripts/devnet.sh` and a stale-bytecode auto-restart guard in `start_hardhat`. Plus a kafka-only e2e SPARQL fix (`GRAPH ?g` wrapper + bindings-array check).
Test plan
Related