M1 foundations: roadmap governance + smoke scripts + deterministic graph commit digest #13
Conversation
Summary by CodeRabbit
Walkthrough

Introduces extensive documentation, tooling, and workspace changes: new roadmap DAG automation and a pre-commit hook; a new jitos-planner crate with SLAPS types; deterministic canonical graph-commit hashing and related tests; GraphQL smoke tests; many ARCH/SPEC/ROADMAP docs; TeX build targets and md→TeX tooling; an xtask wrapper for hooks and the updater.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client as GraphQL Client
participant Server as jitosd (GraphQL)
participant Kernel as Kernel State
rect rgba(200,230,255,0.7)
Note over Client,Kernel: M1 smoke test — canonical digest & SWS flow
Client->>Server: kernelInfo()
Server->>Kernel: read system digest
Kernel-->>Server: H0 (digest)
Server-->>Client: H0
Client->>Server: createSws()
Server->>Kernel: allocate SWS (swsId="0")
Kernel-->>Server: swsId
Server-->>Client: swsId
Client->>Server: applyRewrite(AddNode)
Server->>Kernel: validate + append rewrite to SWS
Kernel-->>Server: accepted (rewriteIdx, viewDigest)
Server-->>Client: accepted
Client->>Server: graph(view for SWS)
Server->>Kernel: read SWS digest
Kernel-->>Server: viewDigest ≠ H0
Server-->>Client: viewDigest
Client->>Server: kernelInfo() (again)
Server->>Kernel: read system digest
Kernel-->>Server: H0 (unchanged)
Server-->>Client: H0
Client->>Server: rewrites(view) query
Server->>Kernel: fetch rewrite log
Kernel-->>Server: [ordered rewrites]
Server-->>Client: ordered rewrites
end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Actionable comments posted: 53
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/jitos-graph/src/lib.rs (1)
1-1: Remove the TypeScript directive from this Rust file.
`// @ts-check` is a TypeScript compiler directive that has absolutely zero business being in a Rust source file. This is clearly a copy-paste artifact that should have been caught before commit. It's cosmetic but screams "I didn't proofread my code."

🔎 Proposed fix

```diff
-// @ts-check
 use jitos_core::Hash;
```
```sh
echo "pre-commit: updating ROADMAP DAG statuses from checkboxes..."

python3 scripts/update_roadmap_dags.py

# Stage any updates produced by the script.
git add -u docs/ROADMAP
```
CRITICAL: Auto-staging generated changes bypasses user review.

This pre-commit hook automatically stages generated DAG updates (`git add -u docs/ROADMAP`) without user review or confirmation. This is problematic because:

1. Security/Compliance Risk: Per project learnings, pre-commit hooks are part of security and compliance. Auto-staging untrusted generated output could introduce malicious or incorrect content.
2. Bug Amplification: If `update_roadmap_dags.py` has a bug, corrupted changes are silently committed.
3. Audit Trail: Users may not notice what was auto-generated, making it harder to debug issues later.
4. Unexpected Behavior: Users expect pre-commit hooks to validate, not modify their commits.
Based on learnings: "Treat pre-commit hooks as part of security and compliance; fix underlying issues rather than bypassing hooks with --no-verify"
🔎 Recommended approaches (pick one)
Option 1: Fail instead of auto-stage (force explicit user action)

```diff
 python3 scripts/update_roadmap_dags.py

-# Stage any updates produced by the script.
-git add -u docs/ROADMAP
+# Check if any files were modified
+if ! git diff --quiet docs/ROADMAP; then
+  echo "ERROR: ROADMAP DAG updates required but not staged." >&2
+  echo "Run: python3 scripts/update_roadmap_dags.py && git add docs/ROADMAP" >&2
+  exit 1
+fi
```

Option 2: Prompt for confirmation (requires a tty)

```diff
 python3 scripts/update_roadmap_dags.py
+# Check if updater made changes
+if ! git diff --quiet docs/ROADMAP; then
+  echo "ROADMAP DAG updates were generated. Review changes:"
+  git diff docs/ROADMAP
+  printf "Stage these changes? [y/N] "
+  read -r response
+  if [ "$response" = "y" ] || [ "$response" = "Y" ]; then
+    git add -u docs/ROADMAP
+  else
+    echo "Aborting commit. Stage changes manually when ready." >&2
+    exit 1
+  fi
+fi
+
-# Stage any updates produced by the script.
-git add -u docs/ROADMAP
```

Option 3: Document and accept risk (least recommended)
Add prominent comment explaining the auto-staging behavior and rationale.
🤖 Prompt for AI Agents
.githooks/pre-commit around lines 15-20: the hook currently runs
scripts/update_roadmap_dags.py and then auto-stages its output with "git add -u
docs/ROADMAP", which silently modifies commits and bypasses user review; remove
the automatic git add and instead either (a) make the hook fail when changes are
detected and print a clear message instructing the user to inspect and stage the
generated changes manually, or (b) if interactive shells are available, prompt
the user to confirm staging before running git add; implement one of these
behaviors and document the chosen approach in a comment at the top of the hook.
```sh
# Sanity check: ensure the updater is now clean.
python3 scripts/update_roadmap_dags.py --check >/dev/null
```
Sanity check runs AFTER staging, allowing inconsistent state.
The --check validation runs after git add -u docs/ROADMAP, meaning if the check fails, the repository is left with staged changes that don't pass validation. This violates the principle of atomic commits.
🔎 Fix: Validate before staging
```diff
 python3 scripts/update_roadmap_dags.py

+# Sanity check: ensure the updater produced valid output
+python3 scripts/update_roadmap_dags.py --check >/dev/null
+
 # Stage any updates produced by the script.
 git add -u docs/ROADMAP
-
-# Sanity check: ensure the updater is now clean.
-python3 scripts/update_roadmap_dags.py --check >/dev/null
```

This ensures validation passes before modifying the staging area.
🤖 Prompt for AI Agents
.githooks/pre-commit around lines 22-23: the script currently runs the updater
check after staging which can leave invalid changes staged; move the sanity
check to run before any git add -u docs/ROADMAP invocation, ensure the hook
executes python3 scripts/update_roadmap_dags.py --check and exits non-zero on
failure so staging is only performed when validation passes, then proceed to run
git add -u docs/ROADMAP.
```gitignore
# Generated PDFs (built by docs/tex Makefile targets)
*.pdf
```
CRITICAL: Overly broad PDF ignore pattern will hide potentially important files.
The pattern *.pdf ignores ALL PDF files repository-wide, not just those generated by docs/tex/Makefile. This is dangerous because:
- Documentation assets: Reference PDFs, diagrams, or specification documents that should be committed will be silently ignored
- Silent failures: Contributors won't see warnings when they try to commit legitimate PDFs
- Comment mismatch: The comment says "built by docs/tex Makefile targets" but the pattern isn't scoped to that directory
🔎 Proposed fix: Scope PDF ignore to build output directories
```diff
-# Generated PDFs (built by docs/tex Makefile targets)
-*.pdf
+# Generated PDFs (built by docs/tex Makefile targets)
+/docs/tex/**/*.pdf
+/target/**/*.pdf
```

If you need to ignore PDFs in additional specific locations, add them explicitly. Never use a root-level wildcard for common file extensions that might be legitimate documentation assets.
🤖 Prompt for AI Agents
In .gitignore around lines 47-48 the entry "*.pdf" is too broad and contradicts
the comment about PDFs generated by docs/tex/Makefile; replace the
repository-wide wildcard with a scoped ignore (e.g., the specific build output
path(s) used by the Makefile such as docs/tex/output/ or docs/build/**/*.pdf) or
list explicit paths for generated PDFs only, and remove the global "*.pdf" so
legitimate documentation PDFs in other locations are no longer ignored.
```diff
 /// Deterministic node ID (content-addressed, not insertion-order-dependent)
-#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
```
🧹 Nitpick | 🔵 Trivial
Ord derivation enables deterministic hashing but ordering is semantically arbitrary.
Adding PartialOrd and Ord to NodeId is correct for canonical graph ordering (as needed by SPEC-WARP-0001). The derived lexicographic ordering of the underlying Hash bytes provides a deterministic total order.
However, this ordering has no semantic meaning—it's purely for canonical serialization. Consider documenting this to prevent misuse:
🔎 Suggested documentation addition
```diff
 /// Deterministic node ID (content-addressed, not insertion-order-dependent)
+///
+/// ## Ordering
+/// `NodeId` implements `Ord` to enable canonical sorting for deterministic graph hashing.
+/// The ordering is lexicographic over the underlying hash bytes and has no semantic meaning
+/// beyond providing a consistent, deterministic total order.
 #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
 pub struct NodeId(pub Hash);
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
crates/jitos-graph/src/ids.rs around line 14: the derived PartialOrd/Ord on
NodeId provides a deterministic lexicographic order of the underlying hash for
canonical serialization (SPEC-WARP-0001) but has no semantic meaning; add a
short doc comment above the struct/derive clarifying that the ordering is only
for deterministic canonicalization and must not be used for semantic comparison
or logic, and mention SPEC-WARP-0001 as the rationale so future readers do not
misuse Ord.
```python
req = urllib.request.Request(
    url=url,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
try:
    with urllib.request.urlopen(req, timeout=timeout_s) as resp:
```
URL scheme not validated - allows file:// and custom schemes.
urllib.request.urlopen accepts file:// URLs, which could read local files if a malicious URL is passed via --url. While this script is intended for local use against jitosd, the pattern is dangerous if copied.
🔎 Proposed fix
```diff
 def _post_json(url: str, payload: dict[str, Any], timeout_s: float) -> dict[str, Any]:
+    if not url.startswith(("http://", "https://")):
+        raise SmokeTransportError(f"URL must use http:// or https:// scheme: {url}")
     body = json.dumps(payload).encode("utf-8")
```

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.14.10)
51-56: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.
(S310)
58-58: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.
(S310)
🤖 Prompt for AI Agents
In scripts/smoke_m1_graphql.py around lines 51-58, the code directly calls
urllib.request.urlopen(req, ...) which allows unsafe schemes (e.g., file://);
before creating or opening the request validate the URL: parse it with
urllib.parse.urlparse, ensure scheme.lower() is 'http' or 'https' and that
netloc is non-empty, and reject/raise/exit on any other scheme so only HTTP(S)
URLs are allowed; put this check before building the Request and provide a clear
error message on rejection.
| if "errors" in raw and raw["errors"]: | ||
| raise SmokeGraphQLError(json.dumps(raw["errors"], indent=2, sort_keys=True)) |
🧹 Nitpick | 🔵 Trivial
Unnecessary key check before dictionary access.
if "errors" in raw and raw["errors"] can be simplified. If errors key doesn't exist, raw.get("errors") returns None which is falsy.
🔎 Proposed fix
- if "errors" in raw and raw["errors"]:
+ if raw.get("errors"):📝 Committable suggestion
🧰 Tools
🪛 Ruff (0.14.10)
77-77: Unnecessary key check before dictionary access
Replace with dict.get
(RUF019)
🤖 Prompt for AI Agents
In scripts/smoke_m1_graphql.py around lines 77 to 78, the check 'if "errors" in
raw and raw["errors"]' is verbose; replace it with a single truthy check using
raw.get("errors") and raise SmokeGraphQLError when that returns a truthy value.
Update the condition to use raw.get("errors") so it safely handles missing keys
and falsy values in one check.
```python
except Exception as e:  # defensive
    print(f"FAIL: unexpected error: {e}", file=sys.stderr)
    return 2
```
🧹 Nitpick | 🔵 Trivial
Bare Exception catch obscures error types.
Catching Exception without re-raising or logging the traceback makes debugging harder. The defensive comment acknowledges this, but consider at minimum logging the full traceback in verbose mode.
🔎 Proposed fix
```diff
+import traceback
+
 except Exception as e:  # defensive
-    print(f"FAIL: unexpected error: {e}", file=sys.stderr)
+    if args.verbose:
+        traceback.print_exc()
+    print(f"FAIL: unexpected error: {type(e).__name__}: {e}", file=sys.stderr)
     return 2
```

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.14.10)
253-253: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
In scripts/smoke_m1_graphql.py around lines 253-255, the bare except Exception
swallows details; modify the handler to print the current error message to
stderr as before and also emit the full traceback (e.g., via
traceback.print_exc() or logging.exception) so callers can debug; if the script
has a verbose flag, only print the traceback when verbose is enabled, otherwise
write a short message and still return 2.
```toml
[package]
name = "xtask"
version = "0.1.0"
edition = "2021"
```
🧹 Nitpick | 🔵 Trivial
Missing license field in package metadata.
While xtask is typically an internal workspace tool, it's recommended to include at least a license field for consistency and to clarify licensing terms. This is especially important if the repository may be open-sourced or if any part of the xtask code could be referenced elsewhere.
🔎 Suggested metadata addition
```diff
 [package]
 name = "xtask"
 version = "0.1.0"
 edition = "2021"
+license = "Apache-2.0" # or match your project's license
```
🤖 Prompt for AI Agents
In xtask/Cargo.toml around lines 1 to 4, the package metadata is missing a
license field; add an appropriate license entry (e.g., license = "Apache-2.0" or
license = "MIT OR Apache-2.0") to the [package] section, or add license-file =
"LICENSE" if you prefer referencing a local license file, ensuring the chosen
value matches the repository license and CI/publishing expectations.
```rust
let cmd = args
    .next()
    .ok_or_else(|| anyhow!("missing command (try: `cargo run -p xtask -- roadmap-dags`)"))?;
```
Incomplete usage hint — only mentions one of two commands.
The error message suggests roadmap-dags but omits install-githooks. Either list both or emit a proper usage block.
🔎 Proposed fix
```diff
 let cmd = args
     .next()
-    .ok_or_else(|| anyhow!("missing command (try: `cargo run -p xtask -- roadmap-dags`)"))?;
+    .ok_or_else(|| anyhow!("missing command\n\nUsage: cargo run -p xtask -- <command>\n\nCommands:\n  roadmap-dags      Run scripts/update_roadmap_dags.py\n  install-githooks  Set core.hooksPath to .githooks"))?;
```
🤖 Prompt for AI Agents
In xtask/src/main.rs around lines 7 to 9, the missing-command error message only
mentions `roadmap-dags` and omits `install-githooks`; update the message to
provide a complete usage hint by either listing both supported commands (e.g.,
`roadmap-dags`, `install-githooks`) in the string or replace it with a short
usage block that shows correct invocation and available subcommands; ensure the
message remains user-friendly and includes the `cargo run -p xtask -- <command>`
form.
Review continued from previous batch...
crates/jitos-graph/src/lib.rs
Outdated
```rust
    .ok_or_else(|| {
        jitos_core::canonical::CanonicalError::Decode(
            "edge source references missing node".into(),
        )
    })?;
let to = self
    .nodes
    .get(e.target)
    .map(|n| n.id)
    .ok_or_else(|| {
        jitos_core::canonical::CanonicalError::Decode(
            "edge target references missing node".into(),
        )
    })?;
```
🧹 Nitpick | 🔵 Trivial
Semantic mismatch: using CanonicalError::Decode for graph integrity violations.
You're using CanonicalError::Decode to signal "edge references missing node," which is a graph integrity error, not a decoding error. This conflates two distinct failure modes and will confuse anyone debugging from error messages alone.
Either introduce a dedicated error variant (e.g., CanonicalError::InvalidGraph) or use a different error type entirely for graph validation failures.
```graphql
scalar JSON
scalar Timestamp # ISO-8601 string
scalar Hash # lowercase hex string, length 64, 32-byte BLAKE3 digest
scalar U64
scalar U32
```
🧹 Nitpick | 🔵 Trivial
Scalar definitions are precise but consider adding validation directives.
The Hash scalar comment specifies "lowercase hex string, length 64, 32-byte BLAKE3 digest" but GraphQL scalars don't inherently enforce this. Consider adding a note that server-side validation MUST reject non-conforming inputs, or document that this is handled by the resolver layer.
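For illustration, a minimal sketch of the kind of resolver-side check the spec could mandate (the `parse_hash` helper and its error type are hypothetical, not part of the SDL):

```rust
/// Hypothetical resolver-side validator for the `Hash` scalar:
/// exactly 64 lowercase hex characters decoding to 32 bytes.
fn parse_hash(s: &str) -> Result<[u8; 32], String> {
    if s.len() != 64 {
        return Err(format!("Hash must be 64 hex chars, got {}", s.len()));
    }
    if !s.bytes().all(|b| b.is_ascii_digit() || (b'a'..=b'f').contains(&b)) {
        return Err("Hash must be lowercase hex".to_string());
    }
    let mut out = [0u8; 32];
    for (i, chunk) in s.as_bytes().chunks(2).enumerate() {
        // Safe to unwrap: every byte was validated as lowercase hex above.
        let hi = (chunk[0] as char).to_digit(16).unwrap() as u8;
        let lo = (chunk[1] as char).to_digit(16).unwrap() as u8;
        out[i] = (hi << 4) | lo;
    }
    Ok(out)
}
```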
🤖 Prompt for AI Agents
In docs/SPECS/SPEC-NET-0001-graphql-sdl-v0.md around lines 154 to 158, the Hash
scalar is described as "lowercase hex string, length 64, 32-byte BLAKE3 digest"
but the spec doesn't require server-side enforcement; update the document to
explicitly state that servers MUST validate Hash inputs and reject
non-conforming values (or alternatively document that resolvers are responsible
for validation), and add a short example or directive indicating expected
validation behavior and error handling so implementers know to enforce
lowercase, 64-character hex and BLAKE3 digest correctness server-side.
```markdown
## External blockers (things v0 depends on elsewhere in JITOS)

These are the “you can’t GraphQL your way out of physics” constraints:

1. **Stable node/edge identity rules:** GraphQL needs stable IDs; the kernel must define canonical NodeId/EdgeId semantics.
2. **Canonical rewrite op schema:** `RewriteInput.ops: [JSON]` is a placeholder until `WarpOps`/`WarpTickPatchV1` are nailed.
3. **SWS correctness + collapse semantics:** Collapse needs deterministic diff/merge rules and conflict policy.
4. **Event stream model:** Subscriptions require a coherent event bus: tick events, rewrite applied events, task events.
5. **Auth/capabilities model:** Even if localhost-only initially, the architecture must map GraphQL calls → kernel capabilities.
6. **Pagination/index strategy:** Graph snapshots can be huge. Need sane limits and cursoring, or the viewer will DDoS your own daemon.
7. **echo-tasks integration (for submitIntent to be real):** Until planning exists, intent can be stored as nodes but not expanded into a DAG.
```
🧹 Nitpick | 🔵 Trivial
Consider cross-referencing resolved blockers.
Blocker #1 (stable node/edge identity rules) appears to be addressed by SPEC-WARP-0001 and the NodeId/canonical hashing implementation in this PR. Update this section to indicate partial resolution, or add status annotations.
🤖 Prompt for AI Agents
In docs/SPECS/SPEC-NET-0001-graphql-sdl-v0.md around lines 549 to 559, the
External blockers list still marks blocker #1 (stable node/edge identity rules)
as unresolved even though SPEC-WARP-0001 and the new NodeId/canonical hashing in
this PR partially address it; update this section to reflect that status by
adding a brief status annotation for blocker #1 (e.g., "partially resolved — see
SPEC-WARP-0001 and NodeId canonical hashing in this PR") or cross-reference the
SPEC-WARP-0001 spec and the PR identifier, and keep the rest of the blocker list
unchanged.
```
NodeLeafHashV0(node) = blake3( canonical_encode({
  version: "node-leaf-v0",
  node_id: NodeId,
  kind: String,
  payload_bytes: Bytes,
  attachment: Optional<Hash>,
}) )
```
🧹 Nitpick | 🔵 Trivial
Add language specifier to pseudo-code blocks.
The linter flags these code blocks for missing language specifiers. Use text or plaintext for pseudo-code notation to satisfy markdownlint while maintaining readability.
🔎 Proposed fix
````diff
-```
+```text
 NodeLeafHashV0(node) = blake3( canonical_encode({
````
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
47-47: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
docs/SPECS/SPEC-WARP-0002-incremental-graph-hash.md lines 47-55: the fenced
pseudo-code block lacks a language specifier which triggers the markdown linter;
update the opening fence to include a plain/text specifier (e.g., ```text or
```plaintext) so the block becomes a valid code fence with language, leaving the
block contents unchanged.
```python
for (num, _title, _stat), status in zip(phases, phase_statuses):
    assignments[f"P{num}"] = status
```
🧹 Nitpick | 🔵 Trivial
Add strict=True to zip() to enforce length invariant.
While phases and phase_statuses should always have equal length (since phase_statuses is derived from phases), using strict=True makes this invariant explicit and will raise ValueError if a future refactor breaks this assumption.
🔎 Proposed fix
```diff
-    for (num, _title, _stat), status in zip(phases, phase_statuses):
+    for (num, _title, _stat), status in zip(phases, phase_statuses, strict=True):
```
🧰 Tools
🪛 Ruff (0.14.10)
207-207: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
🤖 Prompt for AI Agents
In scripts/update_roadmap_dags.py around lines 207 to 208, the zip(phases,
phase_statuses) assumes equal lengths but does not enforce it; change the call
to zip(phases, phase_statuses, strict=True) so Python raises a ValueError if the
lengths diverge, making the invariant explicit and preventing silent mismatches
at runtime.
```python
if not moc.exists():
    raise SystemExit(f"missing {moc}")
```
🧹 Nitpick | 🔵 Trivial
Missing existence check parity for milestone READMEs.
You check moc.exists() for the main roadmap but only skip missing milestone READMEs silently at line 265-266. Consider whether a missing milestone README should warn rather than silently skip.
🧰 Tools
🪛 Ruff (0.14.10)
240-240: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In scripts/update_roadmap_dags.py around lines 239-240 and 265-266, you
currently raise SystemExit when the main roadmap moc file is missing but
silently skip missing milestone README files; make the behavior consistent by
replacing the silent skip for milestone READMEs with an explicit warning (or
conversely change the main check to a warning if you prefer non-fatal), e.g.,
detect milestone README existence where lines ~265-266 skip them and call the
repository/process logger to emit a clear warning message including the missing
path and continue, or change the earlier SystemExit to a non-fatal warning if
missing milestones should not fail the run.
```python
@dataclass(frozen=True)
class MilestoneProgress:
    num: int
    readme: Path
    phases: List[Tuple[int, str, CheckboxStats]]
    total_stats: CheckboxStats

    @property
    def any_checked(self) -> bool:
        return self.total_stats.any_checked

    @property
    def all_checked(self) -> bool:
        return self.total_stats.all_checked
```
🧹 Nitpick | 🔵 Trivial
Nested dataclass inside main() is unconventional.
Defining MilestoneProgress inside main() works but makes it untestable in isolation. Consider moving it to module level for consistency with CheckboxStats.
🤖 Prompt for AI Agents
In scripts/update_roadmap_dags.py around lines 247 to 260, the MilestoneProgress
dataclass is defined nested inside main(), which makes it hard to import and
test separately; move the @dataclass class MilestoneProgress (including its
fields and properties any_checked/all_checked) out of main() to module scope
(near CheckboxStats or other top-level dataclasses), update any references in
main() to use the relocated class, and run tests/linters to ensure imports and
type hints remain correct.
| "install-githooks" => { | ||
| // This sets a local repo config (not global). It's the simplest way to enable | ||
| // version-controlled hooks in `.githooks/`. | ||
| run("git", &["config".into(), "core.hooksPath".into(), ".githooks".into()])?; | ||
| } |
🧹 Nitpick | 🔵 Trivial
Unnecessary heap allocations from .into() on string literals.
Every call to .into() on a &str allocates a new String. The run function signature forces this. Consider accepting impl IntoIterator<Item = impl AsRef<OsStr>> or at minimum &[&str] to avoid the ceremony.
🔎 Proposed refactor for run signature
```diff
-fn run(bin: &str, args: &[String]) -> Result<()> {
+fn run<I, S>(bin: &str, args: I) -> Result<()>
+where
+    I: IntoIterator<Item = S>,
+    S: AsRef<std::ffi::OsStr>,
+{
     let status = Command::new(bin)
         .args(args)
         .status()
         .with_context(|| format!("failed to run `{bin}`"))?;
     if !status.success() {
-        bail!("command failed: `{bin} {}`", args.join(" "));
+        bail!("command failed: `{bin}` (exit code: {:?})", status.code());
     }
     Ok(())
 }
```

Then simplify the call site:

```diff
 "install-githooks" => {
-    run("git", &["config".into(), "core.hooksPath".into(), ".githooks".into()])?;
+    run("git", ["config", "core.hooksPath", ".githooks"])?;
 }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In xtask/src/main.rs around lines 17 to 21 the call site is creating unnecessary
String allocations by calling .into() on string literals when invoking
run("git", &["config".into(), "core.hooksPath".into(), ".githooks".into()])?.
Change the run signature to accept arguments without allocating, e.g. run(cmd:
impl AsRef<OsStr>, args: impl IntoIterator<Item = impl AsRef<OsStr>>) (or at
minimum run(cmd: &str, args: &[&str])) and update the run implementation to only
convert AsRef<OsStr> to OsString/OsStr when spawning the process; then update
this and other call sites to pass string slices (&"config", &"core.hooksPath",
&".githooks" or ["config", "core.hooksPath", ".githooks"]) so no intermediate
String allocations occur.
```rust
if !status.success() {
    bail!("command failed: `{bin} {}`", args.join(" "));
}
```
Error message omits exit code — useless for debugging.
When python3 or git fails, you currently report only that it failed. Include the exit code so operators can distinguish between "script returned error" (1) and "python3 not found" (127).
🔎 Proposed fix
```diff
 if !status.success() {
-    bail!("command failed: `{bin} {}`", args.join(" "));
+    bail!(
+        "command `{bin}` failed with exit code {:?}",
+        status.code()
+    );
 }
```
🤖 Prompt for AI Agents
In xtask/src/main.rs around lines 36 to 38, the error message when a spawned
command fails omits the process exit code; change the bail! call to include the
exit status by extracting status.code() (and handling None for signal
termination) and include that value in the formatted message so callers see e.g.
"command failed: `git ...` (exit code: 127)" or indicate signal termination when
code is None.
Actionable comments posted: 7
♻️ Duplicate comments (2)
crates/jitos-graph/src/lib.rs (1)
98-107: CRITICAL: Wrong error type for graph integrity violations.

Lines 99-101 and 103-106 abuse `CanonicalError::Decode` to signal graph invariant violations. This is categorically wrong:

- Decode errors = "I can't parse these bytes"
- Graph integrity errors = "Your graph has dangling edges"

These are COMPLETELY different failure modes. Using the same error type conflates them and will cause:

- Incorrect error handling by callers expecting decode failures
- Misleading error messages in logs
- Inability to distinguish serialization bugs from graph construction bugs

Introduce a proper error type (e.g., `GraphIntegrityError`, or add a variant to `CanonicalError`).

🔎 Recommended fix

Add a new error variant to `jitos_core::canonical::CanonicalError`:

```rust
// In crates/jitos-core/src/canonical.rs
pub enum CanonicalError {
    Decode(String),
    Encode(String),
    InvalidGraph(String), // NEW
}
```

Then update the error construction:

```diff
 let from = self.nodes.get(e.source).map(|n| n.id).ok_or_else(|| {
-    jitos_core::canonical::CanonicalError::Decode(
+    jitos_core::canonical::CanonicalError::InvalidGraph(
         "edge source references missing node".into(),
     )
 })?;
 let to = self.nodes.get(e.target).map(|n| n.id).ok_or_else(|| {
-    jitos_core::canonical::CanonicalError::Decode(
+    jitos_core::canonical::CanonicalError::InvalidGraph(
         "edge target references missing node".into(),
     )
 })?;
```

crates/jitos-graph/tests/graph_commit_digest_tests.rs (1)

13-25: Apply rustfmt to fix formatting violations (CI blocker).

The function signature and body violate rustfmt rules, causing CI to fail. This is a merge blocker.

Run `cargo fmt --all` immediately.

```bash
#!/bin/bash
cargo fmt --all -- --check || cargo fmt --all
```
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (3)
- crates/jitos-graph/src/lib.rs
- crates/jitos-graph/tests/graph_commit_digest_tests.rs
- crates/jitos-planner/src/lib.rs
🧰 Additional context used
🧬 Code graph analysis (2)
crates/jitos-graph/tests/graph_commit_digest_tests.rs (2)

- crates/jitos-graph/src/ids.rs (1): from_hash (19-21)
- crates/jitos-graph/src/lib.rs (1): new (64-66)

crates/jitos-graph/src/lib.rs (1)

- crates/jitos-core/src/canonical.rs (2): encode (75-80), hash_canonical (92-96)
🔇 Additional comments (3)
crates/jitos-graph/src/lib.rs (2)
74-92: LGTM: Node canonicalization is correct.

The node encoding and sorting logic properly implements deterministic hashing:
- Uses canonical CBOR encoding for payloads (line 84)
- Sorts by NodeId to eliminate insertion-order dependency (line 92)
- Propagates encoding errors appropriately
The doc comment clearly describes the invariants. Well done.
109-111: Remove hash collision claim; the hash is cryptographically secure at 256 bits.

The original concern conflates two distinct issues. The birthday-paradox claim (50% collision probability at ~2^16 edges) is mathematically incorrect: BLAKE3-256 reaches 50% collision probability only at ~2^128 inputs, a factor of 2^112 beyond the claimed threshold. At realistic edge counts, hash collisions are cryptographically impossible.

The legitimate concern is different: the graph's public `SlotMap<EdgeKey, WarpEdge>` field allows semantically identical edges (same `from`, `to`, `edge_type`, `attachment`) to coexist. During `compute_hash_checked()`, these produce identical `edge_id`s, resulting in deterministic but potentially unintended deduplication in the commitment. If semantic uniqueness is required, enforce it at insertion time; if it's acceptable, document it explicitly in the SPEC.

Likely an incorrect or invalid review comment.
crates/jitos-graph/tests/graph_commit_digest_tests.rs (1)
27-60: Good test, but insufficient coverage of insertion order invariance.

This test correctly verifies node insertion order doesn't affect the hash. However:

- Missing edge insertion order test: What if edges are inserted in different orders? The current test inserts edges in the same order relative to nodes.
- Missing determinism test: Call `compute_hash()` twice on `g1` and verify both calls return the same hash.
- Toy scale: Only 2 nodes. Test with 10+ nodes to catch subtle ordering bugs.

Add these test cases:

```rust
#[test]
fn graph_hash_is_deterministic() {
    let mut g = WarpGraph::new();
    let a = insert_node(&mut g, node_id(1), "demo.A", serde_json::json!({"k": "A"}));
    let b = insert_node(&mut g, node_id(2), "demo.B", serde_json::json!({"k": "B"}));
    g.edges.insert(WarpEdge {
        source: a,
        target: b,
        edge_type: "demo.edge".to_string(),
        attachment: None,
    });
    let h1 = g.compute_hash();
    let h2 = g.compute_hash();
    assert_eq!(h1, h2, "multiple calls must return same hash");
}

#[test]
fn graph_hash_is_invariant_under_edge_insertion_order() {
    let mut g1 = WarpGraph::new();
    let a1 = insert_node(&mut g1, node_id(1), "demo.A", serde_json::json!({"k": "A"}));
    let b1 = insert_node(&mut g1, node_id(2), "demo.B", serde_json::json!({"k": "B"}));
    // Insert edge A->B first
    g1.edges.insert(WarpEdge { source: a1, target: b1, edge_type: "e1".to_string(), attachment: None });
    g1.edges.insert(WarpEdge { source: b1, target: a1, edge_type: "e2".to_string(), attachment: None });

    let mut g2 = WarpGraph::new();
    let a2 = insert_node(&mut g2, node_id(1), "demo.A", serde_json::json!({"k": "A"}));
    let b2 = insert_node(&mut g2, node_id(2), "demo.B", serde_json::json!({"k": "B"}));
    // Insert edge B->A first (reversed order)
    g2.edges.insert(WarpEdge { source: b2, target: a2, edge_type: "e2".to_string(), attachment: None });
    g2.edges.insert(WarpEdge { source: a2, target: b2, edge_type: "e1".to_string(), attachment: None });

    assert_eq!(g1.compute_hash(), g2.compute_hash(), "edge insertion order must not affect hash");
}
```
```rust
#[derive(Debug, Clone, Serialize)]
struct EdgeCommitV0 {
    edge_id: Hash,
    from: NodeId,
    to: NodeId,
    kind: String,
    payload_bytes: Option<Vec<u8>>,
    attachment: Option<Hash>,
}
```
🛠️ Refactor suggestion | 🟠 Major
Remove the unused payload_bytes field from EdgeCommitV0.
Line 34 declares payload_bytes: Option<Vec<u8>> but it's hardcoded to None at line 118. WarpEdge has no payload field, so this is dead code masquerading as "future-proofing."
Either implement edge payloads now (add the field to WarpEdge, encode it, test it) or remove this field entirely. Carrying unused schema fields degrades clarity and makes the canonical format harder to reason about.
🔎 Proposed fix
```diff
 #[derive(Debug, Clone, Serialize)]
 struct EdgeCommitV0 {
     edge_id: Hash,
     from: NodeId,
     to: NodeId,
     kind: String,
-    payload_bytes: Option<Vec<u8>>,
     attachment: Option<Hash>,
 }
```

And remove the corresponding assignment at line 118:

```diff
 edges.push(EdgeCommitV0 {
     edge_id,
     from,
     to,
     kind: e.edge_type.clone(),
-    payload_bytes: None,
     attachment: e.attachment,
 });
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In crates/jitos-graph/src/lib.rs around lines 28 to 36 (and remove the
corresponding assignment at line 118), the EdgeCommitV0 struct contains an
unused payload_bytes: Option<Vec<u8>> field that is always set to None; remove
this field from EdgeCommitV0, delete the payload_bytes assignment where
EdgeCommitV0 is constructed (line ~118), and update any related
derives/serialization expectations and tests or schema code that referenced it
so the struct and its serialized form no longer include payload_bytes.
```diff
 pub fn compute_hash(&self) -> Hash {
-    // TODO: Implement deterministic sorting/hashing logic
-    // For now, a placeholder
-    Hash([0u8; 32])
+    self.compute_hash_checked()
+        .expect("canonical graph hashing must succeed")
 }
```
🧩 Analysis chain
🏁 Script executed:

```bash
# First, let's find and examine the file
fd -e rs "lib.rs" crates/jitos-graph/ -exec wc -l {} \;
```

Repository: flyingrobots/loom
Length of output: 302

🏁 Script executed:

```bash
# Read the relevant section around lines 69-72
cat -n crates/jitos-graph/src/lib.rs | sed -n '60,120p'
```

Repository: flyingrobots/loom
Length of output: 3011

🏁 Script executed:

```bash
# Search for compute_hash_checked implementation and documentation
rg "compute_hash_checked" crates/jitos-graph/src/lib.rs -A 15 -B 2
```

Repository: flyingrobots/loom
Length of output: 1340

🏁 Script executed:

```bash
# Look for dangling edge checks around lines 98-107 mentioned in the review
rg "dangling" crates/jitos-graph/src/lib.rs -B 3 -A 3
```

Repository: flyingrobots/loom
Length of output: 43

🏁 Script executed:

```bash
# Search for any documentation or tests related to compute_hash
rg "compute_hash" crates/jitos-graph/ -B 2 -A 2 --type rs
```

Repository: flyingrobots/loom
Length of output: 87

🏁 Script executed:

```bash
cat -n crates/jitos-graph/src/lib.rs
```

Repository: flyingrobots/loom
Length of output: 5158

🏁 Script executed:

```bash
# Also check if there are tests for compute_hash that demonstrate the panic behavior
rg "compute_hash" . --type-list | grep -i test
```

Repository: flyingrobots/loom
Length of output: 450

🏁 Script executed:

```bash
# Search for any comments or documentation about invariants or error conditions
rg "dangling\|invariant\|valid" crates/jitos-graph/src/lib.rs -B 2 -A 2
```

Repository: flyingrobots/loom
Length of output: 43
Document panic conditions for compute_hash() or make the fallible API primary.
The compute_hash() wrapper panics when edges reference deleted nodes (lines 98-107), but its doc comment (line 68) makes no mention of this. The .expect() message "canonical graph hashing must succeed" is false—it fails immediately on any edge with a missing source or target node.
This is a silent API contract violation. Callers relying on compute_hash() for production graphs with any chance of inconsistency will panic without warning. Three remedies:
- Add panic documentation to the doc comment explicitly stating when it panics, or
- Deprecate `compute_hash()` and promote `compute_hash_checked()` as the primary API, or
- Add a `validate()` method requiring callers to check graph invariants before hashing.
The current state is indefensible—a public method that unconditionally panics on corrupted data with zero documentation is a ticking bomb. compute_hash_checked() exists and is well-documented; make that the standard path.
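For the first remedy, a minimal sketch of the documented wrapper, assuming the existing `compute_hash_checked()` API:

```rust
/// Computes the canonical commit digest of the graph.
///
/// # Panics
///
/// Panics if any edge references a node missing from the graph (a dangling
/// edge), e.g. because the node was removed after the edge was inserted.
/// Use [`Self::compute_hash_checked`] to handle that case as an error.
pub fn compute_hash(&self) -> Hash {
    self.compute_hash_checked()
        .expect("graph contains dangling edges; use compute_hash_checked")
}
```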
```rust
let from = self.nodes.get(e.source).map(|n| n.id).ok_or_else(|| {
    jitos_core::canonical::CanonicalError::Decode(
        "edge source references missing node".into(),
    )
})?;
let to = self.nodes.get(e.target).map(|n| n.id).ok_or_else(|| {
    jitos_core::canonical::CanonicalError::Decode(
        "edge target references missing node".into(),
    )
})?;
```
🛠️ Refactor suggestion | 🟠 Major
Improve error messages with context.
The error strings "edge source references missing node" and "edge target references missing node" (lines 100, 105) don't identify:
- Which edge (by index or key)
- Which NodeKey was invalid
- What the target NodeId was supposed to be
This makes debugging impossible. Add structured context.
🔎 Proposed improvement
```diff
 let from = self.nodes.get(e.source).map(|n| n.id).ok_or_else(|| {
     jitos_core::canonical::CanonicalError::Decode(
-        "edge source references missing node".into(),
+        format!("edge source {:?} references missing node", e.source),
     )
 })?;
 let to = self.nodes.get(e.target).map(|n| n.id).ok_or_else(|| {
     jitos_core::canonical::CanonicalError::Decode(
-        "edge target references missing node".into(),
+        format!("edge target {:?} references missing node", e.target),
     )
 })?;
```

🤖 Prompt for AI Agents
In crates/jitos-graph/src/lib.rs around lines 98 to 107, the Decode errors for
missing nodes are too generic; update the ok_or_else closures to include
structured context (edge identifier or index, the offending NodeKey value
e.source/e.target, and the expected NodeId or target id) in the error message or
by constructing a richer CanonicalError variant. Capture the edge index/key (or
pass it into the closure), format the NodeKey using Debug/Display (e.g. {:?} or
to_string()), and build an informative message like "edge <edge_id_or_index>
source <node_key> missing (expected node id: ...)" for source and similarly for
target so logs pinpoint which edge and node caused the failure.
```rust
fn h(byte: u8) -> Hash {
    Hash([byte; 32])
}

fn node_id(byte: u8) -> NodeId {
    NodeId::from_hash(h(byte))
}
```
🧹 Nitpick | 🔵 Trivial
Test helper h() creates unrealistic hash values.
Repeating a single byte 32 times (line 6) produces hashes with no entropy. Real hashes have uniform bit distribution. If tests expand to cover hash-sensitive behavior (collisions, ordering edge cases), this helper will give false confidence.
Consider using blake3::hash(bytes) or at least [byte, 0, 0, ..., 0] to avoid internal collisions.
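A sketch of the suggested higher-entropy helper, assuming the `blake3` crate is available as a dev-dependency:

```rust
fn h(byte: u8) -> Hash {
    // Hash the distinguishing byte so every input maps to a uniformly
    // distributed 32-byte digest instead of 32 repeats of the same byte.
    Hash(*blake3::hash(&[byte]).as_bytes())
}
```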
🤖 Prompt for AI Agents
crates/jitos-graph/tests/graph_commit_digest_tests.rs around lines 5 to 11: the
test helper h(byte) builds a Hash by repeating a single byte 32 times which
yields unrealistic, low-entropy hashes and can mask hash-sensitive bugs; change
h to produce more realistic hashes (for example, compute blake3::hash(&[byte])
and use its 32-byte output for Hash, or at minimum construct a 32-byte array
like [byte, 0, 1, 2, ...] so values vary across positions) and keep
node_id(byte) using NodeId::from_hash(h(byte)); update imports for blake3 if
needed and adjust any test expectations that relied on the old deterministic
single-byte pattern.
```rust
#[test]
fn graph_hash_changes_when_node_payload_changes() {
    let mut g1 = WarpGraph::new();
    insert_node(&mut g1, node_id(1), "demo.A", serde_json::json!({"k": "A"}));

    let mut g2 = WarpGraph::new();
    insert_node(
        &mut g2,
        node_id(1),
        "demo.A",
        serde_json::json!({"k": "A2"}),
    );

    let h1 = g1.compute_hash();
    let h2 = g2.compute_hash();

    assert_ne!(h1, h2, "payload changes must change the graph hash");
}
```
Test is correct but coverage is woefully incomplete.
This test verifies payload sensitivity, which is good. But where are the tests for:

- Empty graph: Does `WarpGraph::new().compute_hash()` work?
- Dangling edges: Does `compute_hash()` panic as expected when edges reference deleted nodes?
- Attachments: Do node/edge attachments affect the hash?
- Type changes: Does changing `node_type` or `edge_type` change the hash?
- Edge semantics: Multiple edges between same nodes? Self-loops?

These are NOT edge cases - they're core functionality that SPEC-WARP-0001 must handle. Add them.
Do you want me to generate a comprehensive test suite covering these cases?
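As a concrete starting point, the empty-graph case might look like this (a sketch reusing this file's existing `WarpGraph` API):

```rust
#[test]
fn empty_graph_hash_is_stable() {
    // Hashing must not panic on the empty graph, and two fresh graphs
    // must agree on their commit digest.
    let h1 = WarpGraph::new().compute_hash();
    let h2 = WarpGraph::new().compute_hash();
    assert_eq!(h1, h2, "empty graphs must produce identical digests");
}
```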
🤖 Prompt for AI Agents
In crates/jitos-graph/tests/graph_commit_digest_tests.rs around lines 62 to 79,
the existing test only checks payload sensitivity; add a comprehensive test
suite covering empty graph hashing, dangling-edge behavior, attachment impact,
node/edge type changes, multiple edges and self-loops: write separate tests that
(1) assert WarpGraph::new().compute_hash() returns a stable hash (and does not
panic), (2) create a graph, insert an edge to a node, remove that node and
assert compute_hash() either panics as designed or returns a defined error (use
should_panic or Result assertions consistent with API), (3) add attachments to
nodes and edges and assert hashes change when attachments change, (4) change
node_type and edge_type values and assert hashes change, and (5) create multiple
parallel edges and self-loop edges and assert their presence affects the
computed hash; structure tests with clear setup/teardown and use existing
helpers like insert_node/insert_edge and consistent assertions
(assert_eq/assert_ne or should_panic) to match the library's panic/error
semantics.
```rust
//! `jitos-planner`
//!
//! Planner-side schemas and helpers.
//!
//! Note: This crate currently contains only the SLAPS schema types; it exists so the
//! workspace remains buildable while milestones are executed.
```
Documentation is self-contradicting and reveals placeholder architecture.
Lines 3-4 claim this crate provides "schemas and helpers" (plural, broad scope), but Lines 5-6 immediately contradict this by admitting it "currently contains only the SLAPS schema types." Pick one truth.
More damning: the justification "it exists so the workspace remains buildable while milestones are executed" is a massive architectural red flag. This reads as "we created an empty crate to avoid compiler errors." If this crate truly has no purpose beyond being a stub, it shouldn't exist yet. If it does have a purpose, document that purpose honestly.
Additional pedantic concerns:

- Wrapping the crate name in backticks in its own documentation (`jitos-planner`) is unconventional and serves no purpose—readers already know which crate they're in.
- The term "helpers" is never substantiated. What helpers? Where? If none exist, don't promise them.
- "SLAPS" is never explained or cross-referenced. What is SLAPS? Link to `slaps` module docs or external specifications.
🔎 Proposed documentation fix
```diff
-//! `jitos-planner`
+//! Planner-side schemas for the Jitos planning system.
 //!
-//! Planner-side schemas and helpers.
-//!
-//! Note: This crate currently contains only the SLAPS schema types; it exists so the
-//! workspace remains buildable while milestones are executed.
+//! This crate defines the SLAPS (Structured Logical Action Plan Schema) v2 data
+//! structures used for planning and coordination. See the [`slaps`] module for details.
```
🤖 Prompt for AI Agents
crates/jitos-planner/src/lib.rs lines 1-6: the crate-level docstring is
contradictory and placeholder-like; update it to a single truthful statement
about the crate’s current purpose (either remove the crate if it truly serves
only to silence the compiler, or document its real role), avoid wrapping the
crate name in backticks, remove or replace the vague term "helpers" unless such
APIs exist (or list/link them), and add a brief explanation or intra-crate link
for "SLAPS" (e.g., link to the slaps module or external spec) so the docstring
accurately reflects contents and future intent.
```rust
//! Note: This crate currently contains only the SLAPS schema types; it exists so the
//! workspace remains buildable while milestones are executed.
pub mod slaps;
```
🧹 Nitpick | 🔵 Trivial
Suboptimal API ergonomics: consider re-exporting SLAPS types at the crate root.
The bare `pub mod slaps;` declaration forces consumers to write `jitos_planner::slaps::Slaps`, `jitos_planner::slaps::Target`, etc. For a crate whose sole documented purpose is exposing SLAPS schema types, this extra module nesting is ceremony without benefit.
Standard Rust convention: when a crate has a single focused responsibility, re-export the primary types at the root for ergonomic access. Compare:
- Current: `use jitos_planner::slaps::{Slaps, Target, Context};`
- Better: `use jitos_planner::{Slaps, Target, Context};`
If the slaps module contains internal helpers or types that shouldn't be at the root, selectively re-export only the public API types.
🔎 Proposed ergonomic improvement
```diff
 pub mod slaps;
+
+// Re-export primary types for ergonomic access
+pub use slaps::{Context, ContextLink, Scope, Slaps, SuccessCriteria, Target};
```
Consolidates local milestone/roadmap/doc work and adds deterministic graph commit digest hashing (SPEC-WARP-0001) implementation + tests.
Key contents:
How to verify: