
feat(api)!: adopt checked Delaunay v0.7.7 APIs#128

Open
acgetchell wants to merge 2 commits into main from feat/checked-delaunay-api

Conversation

@acgetchell
Owner

  • Upgrade to delaunay 0.7.7 and route backend construction through checked, topology-aware validation.
  • Harden CDT initialization, checkpoint restore, and mutation paths so invalid Delaunay state is rejected with typed errors.
  • Replace public simulation result fields with constructors and accessors, and add typed output/checkpoint/backend operation categories.
  • Move mock backend fixtures into the testing prelude and widen Euler characteristic arithmetic to i128.

BREAKING CHANGE: DelaunayBackend::from_triangulation now returns Result, SimulationResultsBackend exposes accessors instead of public fields, TriangulationQuery::euler_characteristic returns i128, mock backend fixtures move from prelude::geometry to prelude::testing, and checked backend reconstruction may reject checkpoints or explicit toroidal connectivity that were previously accepted.

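The constructor and accessor changes described above follow a standard Rust encapsulation pattern: construction validates and returns `Result`, and formerly public fields become accessor methods. The sketch below is a simplified, hypothetical illustration of that migration shape; the names mirror the PR's API but are stand-ins, not the crate's actual types.

```rust
// Hypothetical stand-ins illustrating the migration pattern; not the
// crate's real types or validation logic.
#[derive(Debug, PartialEq)]
enum BackendError {
    NotDelaunay { cell: usize },
}

struct Backend {
    cells: usize, // now private: callers must go through accessors
}

impl Backend {
    // Before: construction returned `Self` and trusted its input.
    // After: construction validates and returns `Result`.
    fn from_triangulation(cells: usize) -> Result<Self, BackendError> {
        if cells == 0 {
            return Err(BackendError::NotDelaunay { cell: 0 });
        }
        Ok(Self { cells })
    }

    // Accessor replacing a formerly public field.
    fn cells(&self) -> usize {
        self.cells
    }
}

fn main() {
    // Callers now handle the Result instead of assuming success.
    let backend = Backend::from_triangulation(4).expect("valid triangulation");
    assert_eq!(backend.cells(), 4);
    assert!(Backend::from_triangulation(0).is_err());
}
```

Migrating code is mostly mechanical: add `?`/`.expect(...)` at construction sites and replace `results.field` with `results.field()`.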
@coderabbitai

coderabbitai Bot commented May 16, 2026

Walkthrough

This PR introduces typed error enums, makes SimulationResultsBackend fields private with validated constructors, enforces fallible Delaunay/backend validation, adds stale-foliation bookkeeping semantics, optimizes ergodic moves to avoid allocations, narrows CLI dimension support to 2, widens Euler-characteristic types to i128, and updates docs/tests/tooling accordingly.

Changes

Typed Error System and Type Widening

| Layer / File(s) | Summary |
| --- | --- |
| Error type enums and Display implementations (`src/errors.rs`) | Adds OutputFormat, CheckpointOperation, BackendMutationOperation enums; updates CdtError payloads and marks DelaunayValidationLevel non-exhaustive. |
| Delaunay operation enums and backend error typing (`src/geometry/backends/delaunay.rs`) | Introduces DelaunayOperation and NonFlippableEdgeReason, removes struct-level DataType bounds, updates constructor to return Result and updates tests. |
| Mock backend operation enums (`src/geometry/backends/mock.rs`) | Adds MockOperation, MockStorageTarget, MockNonFlippableReason and updates MockError variants and tests. |
| Euler characteristic widening to i128 (`src/geometry/traits.rs`, `src/util.rs`, `src/cdt/triangulation.rs`) | Replaces saturating usize→i32 helper with direct i128 casts and updates Euler-characteristic APIs and caches. |
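The i128 widening described above can be sketched in a few lines. This is an illustrative stand-in (the crate's real API lives on TriangulationQuery), showing why i128 makes the V - E + F arithmetic lossless for any usize counts instead of saturating at i32 bounds.

```rust
// Minimal sketch: Euler characteristic computed in i128 so large usize
// counts can never overflow or saturate. Illustrative stand-in only.
fn euler_characteristic(vertices: usize, edges: usize, faces: usize) -> i128 {
    // Any usize fits losslessly in i128, even on 64-bit targets,
    // so these `as` casts cannot truncate.
    vertices as i128 - edges as i128 + faces as i128
}

fn main() {
    // Tetrahedron boundary (a 2-sphere): V=4, E=6, F=4, so χ = 2.
    assert_eq!(euler_characteristic(4, 6, 4), 2);
    // A torus triangulation has χ = 0, e.g. V=9, E=27, F=18.
    assert_eq!(euler_characteristic(9, 27, 18), 0);
    // Counts far beyond i32 range stay exact in i128.
    assert_eq!(euler_characteristic(usize::MAX, usize::MAX, 2), 2);
}
```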

Delaunay Backend Validation and Construction

| Layer / File(s) | Summary |
| --- | --- |
| Fallible Delaunay backend construction (`src/geometry/backends/delaunay.rs`, `src/cdt/triangulation/builders.rs`) | from_triangulation now validates and returns Result; builders use validated_backend to remap errors. |
| Delaunay backend test updates and validation coverage (`src/geometry/backends/delaunay.rs`) | Extensive test updates to use validated_backend/.expect(...), plus new display tests and deserialization validations. |
| Triangulation builder validation refactor (`src/cdt/triangulation/builders.rs`) | Constructors now route through validated_backend; updated builder tests including explicit non-Delaunay failure assertions. |
| Triangulation construction and validation helpers (`src/cdt/triangulation.rs`, `src/cdt/triangulation/validation.rs`, `src/cdt/triangulation/moves.rs`) | Refactors try_new/with_topology to assemble via from_parts_before_validation, introduces stale_foliation_error, and updates helpers/tests to assert validation where needed. |

SimulationResultsBackend Encapsulation and Field Privacy

| Layer / File(s) | Summary |
| --- | --- |
| SimulationResultsBackend field encapsulation (`src/cdt/results.rs`, `src/lib.rs`) | Fields made private; adds SimulationResultsBackend::new that validates config, action config, and final triangulation. |
| SimulationResultsBackend accessor methods and examples (`src/cdt/results.rs`) | Examples and tests updated to use accessor methods (steps(), measurements(), triangulation(), etc.). |
| Equilibrium iteration and observables (`src/cdt/results.rs`) | Reworks average/variance computations to iterate without allocating full equilibrium collections. |
| Output error handling with typed OutputFormat (`src/cdt/results.rs`) | CSV/JSON writers now use OutputFormat and map filesystem/serialization errors into `CdtError::OutputWriteFailed { format, ... }`. |
| SimulationResultsBackend constructor validation tests (`src/cdt/results.rs`) | Adds tests rejecting invalid Metropolis config, NaN action couplings, and stale foliation final triangulation. |

Stale Foliation Bookkeeping Semantics

| Layer / File(s) | Summary |
| --- | --- |
| FoliationError::StaleBookkeeping variant (`src/cdt/foliation.rs`) | Adds `StaleBookkeeping { synced_at_modification: Option<u64>, current_modification_count: u64 }` and Display formatting. |
| Foliation validation and stale-bookkeeping checks (`src/cdt/triangulation/foliation.rs`) | Public queries now hide stale foliation and return StaleBookkeeping for foliation-dependent validations. |
| Cell classification and stale-bookkeeping rejection (`src/cdt/triangulation/foliation.rs`) | validate_cell_classification and classify_all_cells fail with StaleBookkeeping when bookkeeping is stale; synchronization ordering adjusted. |
| Backend mutation operation typing in foliation (`src/cdt/triangulation/foliation.rs`) | Replaces string operation names with BackendMutationOperation variants in error paths. |
| Foliation stale-bookkeeping test coverage (`src/cdt/triangulation/foliation.rs`) | Adds/updates tests to assert stale behavior and hidden query results. |
| Causality validation stale-foliation fast-fail (`src/cdt/triangulation/validation.rs`) | validate_causality_delaunay fails fast with stale-foliation error; tests adjusted accordingly. |
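A minimal sketch of the StaleBookkeeping variant described above, using the field names from the walkthrough. The Display wording here is illustrative, not the crate's actual message text.

```rust
use std::fmt;

// Sketch only: a stale-bookkeeping error carrying the modification
// counter at last sync and the triangulation's current counter.
#[derive(Debug, PartialEq)]
enum FoliationError {
    StaleBookkeeping {
        synced_at_modification: Option<u64>,
        current_modification_count: u64,
    },
}

impl fmt::Display for FoliationError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::StaleBookkeeping {
                synced_at_modification,
                current_modification_count,
            } => write!(
                f,
                "foliation bookkeeping is stale: synced at {synced_at_modification:?}, \
triangulation is at modification {current_modification_count}"
            ),
        }
    }
}

fn main() {
    let err = FoliationError::StaleBookkeeping {
        synced_at_modification: Some(3),
        current_modification_count: 7,
    };
    // Display surfaces both counters for diagnostics.
    assert!(err.to_string().contains("stale"));
    // PartialEq is field-sensitive, so differing sync state compares unequal.
    assert_ne!(
        err,
        FoliationError::StaleBookkeeping {
            synced_at_modification: None,
            current_modification_count: 7,
        }
    );
}
```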

Ergodic Moves Optimization and Typed Error Handling

| Layer / File(s) | Summary |
| --- | --- |
| Backend operation typing in ergodic moves (`src/cdt/ergodic_moves.rs`) | reject_backend and mutation error reporting now use BackendMutationOperation. |
| Allocation-free helper refactoring (`src/cdt/ergodic_moves.rs`) | Removes pick() and allocation-heavy labels(); introduces fixed-size helpers (vertex_labels3, neighbors3) and allocation-free centroid computation. |
| (1,3) and (3,1) move refactoring with reservoir sampling (`src/cdt/ergodic_moves.rs`) | Single-pass reservoir-style candidate selection replaces vector collection, preserving causality checks and error handling. |
| Ergodic moves test updates and toroidal coverage (`src/cdt/ergodic_moves.rs`) | Adds toroidal-centroid edge-case tests; updates fixtures and assertions to use typed operations and validated backends. |
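The single-pass, allocation-free candidate selection mentioned above is classic reservoir sampling with a reservoir of one: each qualifying item replaces the current pick with probability 1/k, which yields a uniform choice without ever collecting candidates into a Vec. A generic sketch, with a tiny stand-in RNG so the example stays dependency-free (a real implementation would use a proper RNG):

```rust
// Stand-in RNG (an LCG), used only so this sketch has no dependencies.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0 >> 33
    }
}

/// Pick one item uniformly from those satisfying `is_candidate`,
/// in a single pass and without allocating a candidate Vec.
fn pick_candidate<T: Copy>(
    items: &[T],
    is_candidate: impl Fn(&T) -> bool,
    rng: &mut Lcg,
) -> Option<T> {
    let mut chosen = None;
    let mut seen = 0u64;
    for item in items {
        if is_candidate(item) {
            seen += 1;
            // Replace the reservoir with probability 1/seen:
            // overall each candidate ends up chosen uniformly.
            if rng.next() % seen == 0 {
                chosen = Some(*item);
            }
        }
    }
    chosen
}

fn main() {
    let mut rng = Lcg(42);
    let faces = [1u32, 2, 3, 4, 5, 6];
    let picked = pick_candidate(&faces, |f| f % 2 == 0, &mut rng);
    // Some even "face" is picked; no intermediate Vec was built.
    assert!(matches!(picked, Some(f) if f % 2 == 0));
    assert_eq!(pick_candidate(&faces, |_| false, &mut rng), None);
}
```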

Checkpoint and Results API Updates

| Layer / File(s) | Summary |
| --- | --- |
| Metropolis checkpoint to results refactoring (`src/cdt/metropolis.rs`) | CdtMcmcCheckpoint::into_results constructs SimulationResultsBackend via validated from_parts(...); docs clarify in-memory vs serialized resume requirements. |
| Metropolis test accessor and checkpoint patterns (`src/cdt/metropolis.rs`) | Tests converted to accessor usage; introduces serializable_rejected_checkpoint helper and expands serialized-resume validations. |

Dimension Support Narrowing and Configuration

| Layer / File(s) | Summary |
| --- | --- |
| CLI dimension argument narrowing (`src/config.rs`, `Cargo.toml`) | Clap parser restricted to accept only 2; CdtConfig::validate updated and tests adjusted. |
| Dimension validation test updates (`src/config.rs`, `tests/cli.rs`) | Integration tests updated for parse-time range errors and merge-with-override semantics (DimensionOverride::Clear). |

Documentation and Example Updates

| Layer / File(s) | Summary |
| --- | --- |
| Prelude testing module and re-exports (`src/lib.rs`) | Adds prelude::testing with mock fixtures; expands re-exports for new error/operation enums. |
| Doc example and test import updates (various files) | Examples/tests now typically import from prelude::testing and use accessor methods; many doc-tests updated to use .expect(...) construction semantics. |
| CLI examples and code organization documentation (`docs/CLI_EXAMPLES.md`, `docs/code_organization.md`, `docs/dev/rust.md`) | Docs updated to reflect dimension-2-only support and prelude/testing guidance; bumps the delaunay version reference. |
| Generator documentation and toroidal limitations (`src/geometry/generators.rs`) | Documents and tests explicit-toroidal rejection for delaunay v0.7.7 and adjusts examples/tests. |
| Example code refactoring for accessor usage (`examples/*`, `tests/*`) | Examples and tests updated to use results.*() accessors and typed Output/Checkpoint enums. |

Dependency Update and Utility Cleanup

| Layer / File(s) | Summary |
| --- | --- |
| Delaunay crate version bump (`Cargo.toml`) | Bumps delaunay from 0.7.6 to 0.7.7 and removes the float-ord dependency. |
| Utility function removal (`src/util.rs`) | Removes saturating_usize_to_i32; adds usize_to_f64 returning Option. |
| Benchmark fixture updates (`benches/cdt_benchmarks.rs`) | Bench fixtures updated to use from_cdt_strip, validated SimulationResultsBackend::new, and other adjusted setup. |
| Tooling, scripts, semgrep, and tests (`justfile`, `scripts/*`, `semgrep.yaml`, `scripts/tests/*`) | Adds CI/bench tooling recipes, restructures changelog post-processing, expands benchmark model parsing, updates hardware utils and tests, and introduces semgrep rules and fixtures. |
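The usize_to_f64 helper noted above returns Option because f64 represents integers exactly only up to 2^53. A hedged sketch of one plausible implementation; the crate's actual bounds check may differ.

```rust
// Sketch of a checked usize -> f64 conversion: succeed only when the
// value is exactly representable (f64 has a 53-bit mantissa).
// Illustrative stand-in for the crate's `usize_to_f64`.
fn usize_to_f64(value: usize) -> Option<f64> {
    // 2^53 is the end of the contiguous exactly-representable integer range.
    const MAX_EXACT: u64 = 1 << 53;
    let v = value as u64;
    if v <= MAX_EXACT {
        Some(v as f64)
    } else {
        None
    }
}

fn main() {
    assert_eq!(usize_to_f64(10_315), Some(10_315.0));
    assert_eq!(usize_to_f64(1usize << 53), Some(9_007_199_254_740_992.0));
    // Beyond 2^53 the cast would silently round, so the helper refuses
    // (usize::MAX exceeds 2^53 on 64-bit targets).
    assert_eq!(usize_to_f64(usize::MAX), None);
}
```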

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • acgetchell/causal-triangulations#121: Both PRs modify the simulation output/checkpoint pipeline, including typed output/checkpoint error enums and the examples/output_and_checkpoint.rs example refactoring.
  • acgetchell/causal-triangulations#110: Both PRs modify ergodic-move implementation in src/cdt/ergodic_moves.rs, including move candidate selection and backend-mutation error mapping refactoring.
  • acgetchell/causal-triangulations#83: This PR extends foliation work by adding FoliationError::StaleBookkeeping semantics and updating related validation logic throughout the triangulation module.

"I hop to test and types with glee—
Enums, clean backends, and fewer allocs for me,
Foliation kept fresh, results tucked away,
Two-dimensional steps lead the way!" 🐇✨


@acgetchell acgetchell enabled auto-merge May 16, 2026 04:17
@codecov

codecov Bot commented May 16, 2026

Codecov Report

❌ Patch coverage is 94.47092% with 58 lines in your changes missing coverage. Please review.
✅ Project coverage is 92.26%. Comparing base (bd3f0f2) to head (b775761).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/cdt/results.rs | 88.66% | 23 Missing ⚠️ |
| src/cdt/ergodic_moves.rs | 89.04% | 16 Missing ⚠️ |
| src/geometry/backends/delaunay.rs | 96.74% | 7 Missing ⚠️ |
| src/cdt/triangulation/foliation.rs | 92.95% | 5 Missing ⚠️ |
| src/cdt/triangulation/builders.rs | 86.95% | 3 Missing ⚠️ |
| src/cdt/observables.rs | 85.71% | 2 Missing ⚠️ |
| src/geometry/backends/mock.rs | 97.18% | 2 Missing ⚠️ |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main     #128      +/-   ##
==========================================
- Coverage   92.49%   92.26%   -0.24%
==========================================
  Files          19       19
  Lines       10315    10828     +513
==========================================
+ Hits         9541     9990     +449
- Misses        774      838      +64
```

| Flag | Coverage Δ |
| --- | --- |
| unittests | 92.26% <94.47%> (-0.24%) ⬇️ |

Flags with carried forward coverage won't be shown.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (5)
src/cdt/triangulation/validation.rs (1)

180-182: ⚡ Quick win

Document the stale-foliation failure mode in validate_causality_delaunay docs.

This method now returns FoliationError::StaleBookkeeping when bookkeeping is out of sync, but the current error docs still only describe face/label/causality violations.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/cdt/triangulation/validation.rs` around lines 180 - 182, The docs for
validate_causality_delaunay are missing the new failure mode: when bookkeeping
is out of sync the function returns FoliationError::StaleBookkeeping (triggered
by has_current_foliation() -> stale_foliation_error()). Update the
function-level documentation for validate_causality_delaunay to mention this
failure case, describe the condition (when has_current_foliation() is false /
bookkeeping is stale), and note that callers should handle or propagate
FoliationError::StaleBookkeeping alongside the existing face/label/causality
errors.
src/cdt/foliation.rs (1)

191-197: ⚡ Quick win

Add direct unit coverage for StaleBookkeeping diagnostics.

This new variant is user-facing in error reporting but currently has no explicit equality/display assertions in this module’s test suite. Add one focused test for to_string() content and one for field-sensitive equality.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/cdt/foliation.rs` around lines 191 - 197, Add two unit tests in this
module that construct the StaleBookkeeping variant directly (e.g.,
StaleBookkeeping { synced_at_modification: Some(123),
current_modification_count: 456 }) and assert behavior: one test asserts that
variant.to_string() (or format!("{}", variant)) contains the expected
human-readable message and the key field values (check substrings for
"synced_at_modification" or the numeric values and
"current_modification_count"), and the other test asserts field-sensitive
equality by comparing instances with different synced_at_modification (Some vs
None) and different current_modification_count values to ensure PartialEq
distinguishes them (assert eq for identical fields and neq for differing
fields). Ensure tests are named clearly (e.g., stale_bookkeeping_display and
stale_bookkeeping_equality) and placed under the module's #[cfg(test)] tests.
src/geometry/backends/delaunay.rs (1)

297-333: ⚡ Quick win

Thread DelaunayOperation through post-mutation validation too.

The new typed operation enum is still bypassed by validate_mutation_or_restore, which takes a raw &'static str. That leaves one remaining place where operation names can drift from the structured taxonomy you just introduced. Passing DelaunayOperation into that helper and formatting it there would keep all mutation diagnostics aligned.

♻️ Proposed refactor

```diff
     fn validate_mutation_or_restore(
         &mut self,
         dt_before: RawTriangulation<VertexData, CellData, D>,
         facets_before: HashMap<EdgeKey, FacetHandle>,
-        operation: &'static str,
+        operation: DelaunayOperation,
         target: impl Display,
     ) -> Result<(), DelaunayError> {
```

and at each call site, for example:

```diff
         self.validate_mutation_or_restore(
             dt_before,
             facets_before,
-            "insert_vertex",
+            DelaunayOperation::InsertVertex,
             format!("{coords:?}"),
         )?;
```

Apply the same change at the other call sites (FlipK1Remove, FlipK2, FlipK1Insert).

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/geometry/backends/delaunay.rs` around lines 297 - 333,
validate_mutation_or_restore currently accepts a raw &'static str for the
operation name which bypasses the new DelaunayOperation enum; change
validate_mutation_or_restore to accept a DelaunayOperation (or generic
Into<String>/Display) and format the operation inside the function using
DelaunayOperation's Display impl, then update all call sites (e.g., places
handling InsertVertex, MoveVertex, SubdivideFace, FlipK1Remove, FlipK1Insert,
FlipK2, Clear, ReserveCapacity) to pass the enum variant instead of a string so
post-mutation validation uses the typed operation consistently.
src/cdt/ergodic_moves.rs (1)

912-946: 💤 Low value

Complex control flow in neighbors3 could benefit from a comment.

The nested loop with early returns and slot-filling logic is correct but dense. A brief inline comment explaining the invariant (collecting exactly 3 distinct non-self neighbors from 3 adjacent faces) would help future readers.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/cdt/ergodic_moves.rs` around lines 912 - 946, The neighbors3 function's
nested loops and slot-filling are correct but dense; add a concise inline
comment at the top of fn neighbors3 (near the call to
triangulation.geometry().adjacent_faces) that states the invariant: we expect
exactly 3 adjacent faces and we collect exactly three distinct vertex handles
that are not the input vertex (one neighbor per face), explaining the purpose of
neighbors array, neighbor_count, and the dedup/skip logic (skip self, skip
already seen) so future readers understand why early returns and slot checks
exist; reference the calls triangulation.geometry().adjacent_faces and
triangulation.geometry().face_vertices in the comment for clarity.
src/cdt/triangulation.rs (1)

1432-1445: 💤 Low value

Test assertion may be fragile if error message wording changes.

The test checkpoint_rejects_invalid_toroidal_period asserts that the error message contains "invalid toroidal period". If the upstream delaunay crate or internal error formatting changes this wording, the test will fail. Consider also matching on the structured error variant if possible, or documenting the dependency on this specific message.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/cdt/triangulation.rs` around lines 1432 - 1445, The test
checkpoint_rejects_invalid_toroidal_period is brittle because it asserts on an
exact substring of the error text; change it to match the structured
error/variant returned by from_str::<CdtTriangulation<DelaunayBackend2D>>()
instead of the message text: capture the Err value into a variable, then
pattern-match/assert on the concrete error enum/variant or use a
downcast/inspection helper to verify it is the "invalid toroidal period" error
(or the corresponding variant name) rather than using
error.to_string().contains(...); update the assertion in this test so it checks
the error type/variant for CdtTriangulation/DelaunayBackend2D deserialization
failure.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/cdt/results.rs`:
- Around line 477-487: The doctest examples pass a raw Delaunay mesh
(CdtTriangulation::from_seeded_points) into SimulationResultsBackend::new, but
the constructor now validates that the triangulation is a CDT; replace calls to
CdtTriangulation::from_seeded_points(...) with a CDT-valid builder such as
CdtTriangulation::from_cdt_strip(...) (or another CDT-valid constructor) in the
examples referenced (around the blocks that call SimulationResultsBackend::new)
so the tests/doc build use a CDT-valid triangulation; update all similar
occurrences (noted at lines ~519-529 and ~775-783 in the same file) to maintain
consistency.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: a3a3adee-3ed8-4231-b298-f34616a26220

📥 Commits

Reviewing files that changed from the base of the PR and between bd3f0f2 and ad0d31a.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (29)
  • Cargo.toml
  • benches/cdt_benchmarks.rs
  • docs/CLI_EXAMPLES.md
  • docs/code_organization.md
  • docs/dev/rust.md
  • examples/basic_cdt.rs
  • examples/output_and_checkpoint.rs
  • src/cdt/ergodic_moves.rs
  • src/cdt/foliation.rs
  • src/cdt/metropolis.rs
  • src/cdt/results.rs
  • src/cdt/triangulation.rs
  • src/cdt/triangulation/builders.rs
  • src/cdt/triangulation/foliation.rs
  • src/cdt/triangulation/moves.rs
  • src/cdt/triangulation/validation.rs
  • src/config.rs
  • src/errors.rs
  • src/geometry/backends/delaunay.rs
  • src/geometry/backends/mock.rs
  • src/geometry/generators.rs
  • src/geometry/operations.rs
  • src/geometry/traits.rs
  • src/lib.rs
  • src/util.rs
  • tests/cli.rs
  • tests/integration_tests.rs
  • tests/physics_integration.rs
  • tests/regressions.rs
💤 Files with no reviewable changes (1)
  • src/util.rs

Comment thread src/cdt/results.rs Outdated
- Add project Semgrep rules for silent conversion fallbacks, partial_cmp ordering defaults, function-local imports, and Clippy allow suppressions.
- Add slow CI, benchmark smoke/compile, unused-dependency, publish-check, and YAML formatting helpers to the justfile and developer docs.
- Harden changelog archive/postprocessing and benchmark baseline parsing for ID-keyed or unsized workloads.
- Preserve checked numeric conversions in CDT observables/results and keep doctests and benchmarks on CDT-valid fixtures.
- Expose config override and geometry result types through intended public preludes while narrowing backend handle exports.

BREAKING CHANGE: prelude::geometry no longer re-exports DelaunayVertexHandle, DelaunayEdgeHandle, or DelaunayFaceHandle. Import them from causal_triangulations::geometry::backends::delaunay when handle-specific code needs them.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@justfile`:
- Around line 336-338: The publish-check recipe fails if package keywords are
null because the inline jq '.packages[0].keywords[]' returns exit code 5; fix it
by capturing cargo metadata once (e.g., assign metadata=$(cargo metadata
--no-deps --format-version=1 2>/dev/null)) and then extract keywords with a
null-coalescing jq expression like '.packages[0].keywords // [] | .[]' so the
command becomes echo "$metadata" | jq -r '.packages[0].keywords // [] | .[]'
(keep the existing variables such as keywords and count) to ensure the step
succeeds when keywords are missing.

In `@scripts/archive_changelog.py`:
- Around line 352-359: archive_dir_rel = Path(os.path.relpath(...)) can raise
ValueError on Windows when paths are on different drives; wrap the
os.path.relpath(...) call used to build archive_dir_rel in a try/except
ValueError block in scripts/archive_changelog.py, catch the ValueError, set
archive_dir_rel to a safe fallback (for example str(archive_dir) or
Path(archive_dir).as_posix()), and emit a LOGGER.warning that includes the
exception details along with archive_dir and changelog_path.parent so the code
continues to the existing fallback logic instead of crashing.

In `@scripts/benchmark_models.py`:
- Around line 412-415: The sort over dim_benchmarks uses b.comparison_key which
can raise for unsized benchmarks; change the tiebreaker to a display-only key
that won't raise (e.g. use b.benchmark_id or b.header_line()) so the sort
remains robust while preserving comparison semantics from
extract_benchmark_data(); update the lambda in the sorted call that currently
references comparison_key to use (b.benchmark_id or b.header_line()) instead.

In `@scripts/benchmark_utils.py`:
- Around line 1781-1792: The _matching_baseline function currently returns early
when current.benchmark_id is set even if
baseline_results.get(current.comparison_key) returned None, preventing the
legacy "{points}_{dimension}" fallback from being tried; update the logic in
_matching_baseline so that you first try
baseline_results.get(current.comparison_key) and return it if found, but if it's
None and current.points is not None then attempt
baseline_results.get(f"{current.points}_{current.dimension}") (i.e., remove the
short-circuit that checks current.benchmark_id and ensure the legacy key
fallback runs when the direct comparison_key lookup misses).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yml

Review profile: CHILL

Plan: Pro

Run ID: 7ff41179-d035-4ac4-9289-ab60fa41561e

📥 Commits

Reviewing files that changed from the base of the PR and between ad0d31a and b775761.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (31)
  • Cargo.toml
  • benches/cdt_benchmarks.rs
  • docs/dev/commands.md
  • docs/dev/tooling-alignment.md
  • justfile
  • scripts/archive_changelog.py
  • scripts/benchmark_models.py
  • scripts/benchmark_utils.py
  • scripts/hardware_utils.py
  • scripts/postprocess_changelog.py
  • scripts/tests/test_archive_changelog.py
  • scripts/tests/test_benchmark_models.py
  • scripts/tests/test_benchmark_utils.py
  • scripts/tests/test_hardware_utils.py
  • scripts/tests/test_postprocess_changelog.py
  • semgrep.yaml
  • src/cdt/ergodic_moves.rs
  • src/cdt/foliation.rs
  • src/cdt/metropolis.rs
  • src/cdt/observables.rs
  • src/cdt/results.rs
  • src/cdt/triangulation.rs
  • src/cdt/triangulation/builders.rs
  • src/cdt/triangulation/validation.rs
  • src/config.rs
  • src/geometry/backends/delaunay.rs
  • src/geometry/traits.rs
  • src/lib.rs
  • src/util.rs
  • tests/proptest_foliation.rs
  • tests/semgrep/src/project_rules/rust_style.rs
✅ Files skipped from review due to trivial changes (1)
  • tests/proptest_foliation.rs
🚧 Files skipped from review as they are similar to previous changes (10)
  • src/cdt/foliation.rs
  • Cargo.toml
  • src/cdt/triangulation/validation.rs
  • src/cdt/triangulation/builders.rs
  • src/cdt/results.rs
  • src/cdt/metropolis.rs
  • src/cdt/triangulation.rs
  • src/lib.rs
  • src/cdt/ergodic_moves.rs
  • src/geometry/backends/delaunay.rs

Comment thread justfile
Comment on lines +336 to +338
```shell
keywords=$(cargo metadata --no-deps --format-version=1 2>/dev/null \
  | jq -r '.packages[0].keywords[]')
count=0
```

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

```shell
#!/usr/bin/env bash
set -euo pipefail

echo '{"packages":[{"description":"x"}]}' | jq -r '.packages[0].keywords[]' >/dev/null
# Expected: jq exits non-zero (current recipe can fail early).

echo '{"packages":[{"description":"x"}]}' | jq -r '.packages[0].keywords // [] | .[]' >/dev/null
# Expected: exits zero and prints nothing.
```

Repository: acgetchell/causal-triangulations

Length of output: 134


🏁 Script executed:

# First, check if the justfile exists and read the relevant section
if [ -f justfile ]; then
  echo "=== Lines 330-345 of justfile ==="
  sed -n '330,345p' justfile
  echo ""
  echo "=== Total lines in justfile ==="
  wc -l justfile
else
  echo "justfile not found"
  git ls-files | grep -i justfile
fi

Repository: acgetchell/causal-triangulations

Length of output: 654


🏁 Script executed:

#!/usr/bin/env bash
set -euo pipefail

# Test both approaches with clear separation
echo "=== Testing original code (should fail) ==="
if echo '{"packages":[{"description":"x"}]}' | jq -r '.packages[0].keywords[]' >/dev/null 2>&1; then
  echo "✓ Original approach succeeded"
else
  echo "✗ Original approach failed with exit code: $?"
fi

echo ""
echo "=== Testing proposed fix (should succeed) ==="
if echo '{"packages":[{"description":"x"}]}' | jq -r '.packages[0].keywords // [] | .[]' >/dev/null 2>&1; then
  echo "✓ Proposed fix succeeded"
else
  echo "✗ Proposed fix failed with exit code: $?"
fi

echo ""
echo "=== Testing with actual metadata structure ==="
test_json='{"packages":[{"keywords":["foo","bar"],"categories":["category1"],"description":"Test"}]}'
echo "With keywords present:"
echo "$test_json" | jq -r '.packages[0].keywords // [] | .[]'

Repository: acgetchell/causal-triangulations

Length of output: 316


🏁 Script executed:

# Check if jq is mentioned/used in the project
rg -i 'jq' justfile --context 3

Repository: acgetchell/causal-triangulations

Length of output: 2400


Harden publish-check against missing keywords metadata.

The jq command at line 337 (jq -r '.packages[0].keywords[]') fails with exit code 5 when keywords is null or absent, terminating the entire recipe before validation messages print. Under set -euo pipefail, this breaks the workflow without clear diagnostics.

Apply the proposed fix to capture metadata once and use null coalescing:

Suggested fix
 publish-check: _ensure-jq
     #!/usr/bin/env bash
     set -euo pipefail
     echo "🔍 Validating crates.io metadata..."
     errors=0
+    metadata_json="$(cargo metadata --no-deps --format-version=1 2>/dev/null)"
 
-    keywords=$(cargo metadata --no-deps --format-version=1 2>/dev/null \
-        | jq -r '.packages[0].keywords[]')
+    keywords=$(jq -r '.packages[0].keywords // [] | .[]' <<<"$metadata_json")
@@
-    cat_count=$(cargo metadata --no-deps --format-version=1 2>/dev/null \
-        | jq '.packages[0].categories | length')
+    cat_count=$(jq '.packages[0].categories // [] | length' <<<"$metadata_json")
@@
-    desc=$(cargo metadata --no-deps --format-version=1 2>/dev/null \
-        | jq -r '.packages[0].description // ""')
+    desc=$(jq -r '.packages[0].description // ""' <<<"$metadata_json")
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@justfile` around lines 336 - 338, The publish-check recipe fails if package
keywords are null because the inline jq '.packages[0].keywords[]' returns exit
code 5; fix it by capturing cargo metadata once (e.g., assign metadata=$(cargo
metadata --no-deps --format-version=1 2>/dev/null)) and then extract keywords
with a null-coalescing jq expression like '.packages[0].keywords // [] | .[]' so
the command becomes echo "$metadata" | jq -r '.packages[0].keywords // [] | .[]'
(keep the existing variables such as keywords and count) to ensure the step
succeeds when keywords are missing.
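As a standalone illustration of the null-coalescing pattern, here is a minimal Python analogue (the payloads and the extract_keywords helper are hypothetical, not project code); jq's `.packages[0].keywords // [] | .[]` behaves the same way when the field is missing:

```python
import json

# Hypothetical metadata payloads mirroring cargo metadata output:
# one with keywords, one where the field is absent entirely.
with_keywords = json.loads('{"packages":[{"keywords":["cdt","physics"]}]}')
without_keywords = json.loads('{"packages":[{"description":"demo"}]}')


def extract_keywords(metadata: dict) -> list:
    """Total lookup: a missing or null keywords field coalesces to an
    empty list instead of raising, mirroring the jq fix above."""
    return metadata["packages"][0].get("keywords") or []


print(extract_keywords(with_keywords))     # ['cdt', 'physics']
print(extract_keywords(without_keywords))  # []
```

The same principle applies in the recipe: make the lookup total so an absent field yields an empty result rather than an error that aborts the script under set -euo pipefail.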

Comment on lines +352 to +359
archive_dir_rel = Path(os.path.relpath(archive_dir, changelog_path.parent)).as_posix()
if archive_dir_rel == ".." or archive_dir_rel.startswith("../") or Path(archive_dir_rel).is_absolute():
    LOGGER.warning(
        "Archive directory %s is outside changelog directory %s; generated Markdown links use %s",
        archive_dir,
        changelog_path.parent,
        archive_dir_rel,
    )
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🌐 Web query:

Does Python's os.path.relpath raise ValueError on Windows when path and start are on different drives?

💡 Result:

Yes, Python's os.path.relpath function raises a ValueError on Windows when the path and the start directory are located on different drives [1][2]. Because Windows uses separate drive letters (e.g., C:, D:) to represent distinct filesystems, it is impossible to compute a relative path between them [3][4]. The official Python documentation explicitly states: "On Windows, ValueError is raised when path and start are on different drives" [1][2]. This behavior is a long-standing design choice in the ntpath module [4]. If you encounter this error, you must either use absolute paths or handle the drive difference manually, as a relative path cannot be constructed across different mount points [3].

🏁 Script executed:

cat -n scripts/archive_changelog.py | sed -n '340,365p'

Repository: acgetchell/causal-triangulations

Length of output: 1336


🏁 Script executed:

cat -n scripts/archive_changelog.py | sed -n '300,350p'

Repository: acgetchell/causal-triangulations

Length of output: 2158


Wrap os.path.relpath() in a try-except block to handle ValueError on Windows.

On line 352, os.path.relpath() can raise ValueError on Windows when paths are on different drives (e.g., C: and D:), causing the fallback path logic to crash instead of warning and continuing.

Suggested fix
     try:
         archive_dir_rel = archive_dir.relative_to(changelog_path.parent).as_posix()
     except ValueError:
-        archive_dir_rel = Path(os.path.relpath(archive_dir, changelog_path.parent)).as_posix()
+        try:
+            archive_dir_rel = Path(os.path.relpath(archive_dir, changelog_path.parent)).as_posix()
+        except ValueError:
+            archive_dir_rel = archive_dir.as_posix()
         if archive_dir_rel == ".." or archive_dir_rel.startswith("../") or Path(archive_dir_rel).is_absolute():
             LOGGER.warning(
                 "Archive directory %s is outside changelog directory %s; generated Markdown links use %s",
                 archive_dir,
                 changelog_path.parent,
                 archive_dir_rel,
             )
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/archive_changelog.py` around lines 352 - 359, archive_dir_rel =
Path(os.path.relpath(...)) can raise ValueError on Windows when paths are on
different drives; wrap the os.path.relpath(...) call used to build
archive_dir_rel in a try/except ValueError block in
scripts/archive_changelog.py, catch the ValueError, set archive_dir_rel to a
safe fallback (for example str(archive_dir) or Path(archive_dir).as_posix()),
and emit a LOGGER.warning that includes the exception details along with
archive_dir and changelog_path.parent so the code continues to the existing
fallback logic instead of crashing.
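The layered fallback can be sketched as a self-contained helper (the name safe_relative_link is illustrative, not the actual function in archive_changelog.py):

```python
import os
from pathlib import Path


def safe_relative_link(target: Path, base: Path) -> str:
    """Best-effort relative POSIX path from base to target.

    Tries Path.relative_to first, then os.path.relpath, and finally
    falls back to the absolute POSIX path when no relative path exists,
    e.g. cross-drive paths on Windows where os.path.relpath raises
    ValueError.
    """
    try:
        return target.relative_to(base).as_posix()
    except ValueError:
        try:
            return Path(os.path.relpath(target, base)).as_posix()
        except ValueError:
            # Different drives on Windows: no relative path exists.
            return target.as_posix()


print(safe_relative_link(Path("/repo/docs/archive"), Path("/repo/docs")))  # archive
print(safe_relative_link(Path("/a/b"), Path("/a/c")))  # ../b
```

On POSIX systems the inner except is unreachable, so the fallback only changes behavior on Windows, where it replaces a crash with a usable (if absolute) link target.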

Comment on lines +412 to 415
dim_benchmarks = sorted(
    by_dimension[dimension],
    key=lambda b: (b.points is None, b.points or 0, b.comparison_key),
)
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Avoid using comparison_key as a display-sort key.

comparison_key intentionally raises for unsized benchmarks without benchmark_id, so this sort can now fail on data that extract_benchmark_data() otherwise accepts. A display-only tiebreaker like benchmark_id or header_line() keeps table rendering robust without weakening comparison matching.

Suggested fix
         dim_benchmarks = sorted(
             by_dimension[dimension],
-            key=lambda b: (b.points is None, b.points or 0, b.comparison_key),
+            key=lambda b: (
+                b.points is None,
+                b.points or 0,
+                b.benchmark_id or b.header_line(),
+            ),
         )
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/benchmark_models.py` around lines 412 - 415, The sort over
dim_benchmarks uses b.comparison_key which can raise for unsized benchmarks;
change the tiebreaker to a display-only key that won't raise (e.g. use
b.benchmark_id or b.header_line()) so the sort remains robust while preserving
comparison semantics from extract_benchmark_data(); update the lambda in the
sorted call that currently references comparison_key to use (b.benchmark_id or
b.header_line()) instead.
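A minimal sketch of why the display-only tiebreaker is robust (the Bench class, its fields, and header_line are assumptions standing in for the real BenchmarkData):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Bench:
    points: int | None
    benchmark_id: str | None

    def header_line(self) -> str:
        # Display-only fallback label; never raises.
        return f"bench(points={self.points})"


benches = [
    Bench(points=None, benchmark_id=None),  # unsized, no id: comparison_key would raise
    Bench(points=100, benchmark_id="tri/100"),
    Bench(points=10, benchmark_id=None),
]

# None-points entries sort last; the string tiebreaker is total, so
# the sort cannot fail on entries the extractor otherwise accepts.
ordered = sorted(
    benches,
    key=lambda b: (b.points is None, b.points or 0, b.benchmark_id or b.header_line()),
)
print([b.points for b in ordered])  # [10, 100, None]
```

The key tuple first partitions sized before unsized benchmarks, then orders by point count, and only then falls back to a string that exists for every entry.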

Comment on lines +1781 to +1792
@staticmethod
def _matching_baseline(current: BenchmarkData, baseline_results: dict[str, BenchmarkData]) -> BenchmarkData | None:
    """Return the matching baseline entry, falling back to legacy keys for point-sized benchmarks."""
    try:
        baseline_benchmark = baseline_results.get(current.comparison_key)
    except ValueError:
        return None
    if baseline_benchmark is not None or current.benchmark_id:
        return baseline_benchmark
    if current.points is None:
        return None
    return baseline_results.get(f"{current.points}_{current.dimension}")
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Preserve the legacy key fallback after a missed benchmark_id lookup.

If current.benchmark_id is set but the baseline was generated before Benchmark ID: existed, this returns None immediately and never tries the existing {points}_{dimension} key. That will silently skip valid baseline matches for sized benchmarks and can hide real regressions.

Suggested fix
     @staticmethod
     def _matching_baseline(current: BenchmarkData, baseline_results: dict[str, BenchmarkData]) -> BenchmarkData | None:
         """Return the matching baseline entry, falling back to legacy keys for point-sized benchmarks."""
         try:
             baseline_benchmark = baseline_results.get(current.comparison_key)
         except ValueError:
-            return None
-        if baseline_benchmark is not None or current.benchmark_id:
+            baseline_benchmark = None
+        if baseline_benchmark is not None:
             return baseline_benchmark
         if current.points is None:
             return None
         return baseline_results.get(f"{current.points}_{current.dimension}")
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/benchmark_utils.py` around lines 1781 - 1792, The _matching_baseline
function currently returns early when current.benchmark_id is set even if
baseline_results.get(current.comparison_key) returned None, preventing the
legacy "{points}_{dimension}" fallback from being tried; update the logic in
_matching_baseline so that you first try
baseline_results.get(current.comparison_key) and return it if found, but if it's
None and current.points is not None then attempt
baseline_results.get(f"{current.points}_{current.dimension}") (i.e., remove the
short-circuit that checks current.benchmark_id and ensure the legacy key
fallback runs when the direct comparison_key lookup misses).
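The corrected lookup order can be demonstrated end to end with a small self-contained model (the BenchmarkData shape and field names here are assumptions based on this thread, not the project's actual class):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class BenchmarkData:
    points: int | None
    dimension: int
    benchmark_id: str | None

    @property
    def comparison_key(self) -> str:
        if self.benchmark_id:
            return self.benchmark_id
        if self.points is None:
            raise ValueError("unsized benchmark without benchmark_id")
        return f"{self.points}_{self.dimension}"


def matching_baseline(current, baseline_results):
    """Direct comparison_key lookup first, then the legacy
    '{points}_{dimension}' key whenever the direct lookup misses,
    regardless of whether current has a benchmark_id."""
    try:
        hit = baseline_results.get(current.comparison_key)
    except ValueError:
        hit = None
    if hit is not None:
        return hit
    if current.points is None:
        return None
    return baseline_results.get(f"{current.points}_{current.dimension}")


# A baseline written before "Benchmark ID:" existed still matches:
old_baseline = {"100_3": BenchmarkData(100, 3, None)}
current = BenchmarkData(100, 3, "tri/delaunay/100")
print(matching_baseline(current, old_baseline).points)  # 100
```

With the original short-circuit on current.benchmark_id, this lookup would return None and silently drop the baseline comparison for every sized benchmark in a pre-ID baseline file.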

@github-actions

🔴 Performance Regression Detected

Performance Analysis Report

CDT Performance Analysis Report

Generated: 2026-05-16T16:02:43+00:00

Summary

  • Total benchmarks: 44
  • Regressions: 12
  • Improvements: 16
  • Stable: 16
  • New benchmarks: 0
  • Average change: 104.3%
  • Median change: -0.0%

🔴 Performance Regressions

| Benchmark | Change | Current | Baseline | Ratio |
| --- | --- | --- | --- | --- |
| ergodic_moves/move/Move13Add | +1125.1% | 113.2µs | 9.2µs | 12.25x |
| ergodic_moves/move/EdgeFlip | +1070.7% | 109.5µs | 9.4µs | 11.71x |
| ergodic_moves/move/Move22 | +1068.4% | 109.6µs | 9.4µs | 11.68x |
| ergodic_moves/random_move_attempt | +769.7% | 84.2µs | 9.7µs | 8.70x |
| metropolis_simulation/metropolis_steps/100 | +354.5% | 50.8ms | 11.2ms | 4.55x |
| metropolis_simulation/metropolis_steps/50 | +235.1% | 16.2ms | 4.8ms | 3.35x |
| triangulation_creation/delaunay_backend/5 | +143.7% | 173.4µs | 71.2µs | 2.44x |
| metropolis_simulation/metropolis_steps/10 | +120.9% | 4.0ms | 1.8ms | 2.21x |
| triangulation_creation/delaunay_backend/10 | +81.1% | 700.2µs | 386.7µs | 1.81x |
| geometry_queries/is_valid | +51.8% | 427.3µs | 281.5µs | 1.52x |
| simulation_analysis/acceptance_rate | +33.5% | 3.8ns | 2.8ns | 1.33x |
| triangulation_creation/delaunay_backend/20 | +21.1% | 1.9ms | 1.6ms | 1.21x |

🟢 Performance Improvements

| Benchmark | Change | Current | Baseline | Ratio |
| --- | --- | --- | --- | --- |
| triangulation_creation/delaunay_backend/100 | -57.7% | 13.9ms | 32.8ms | 2.37x |
| ergodic_moves/move/Move31Remove | -57.4% | 3.6µs | 8.5µs | 2.35x |
| simulation_analysis/spectral_dimension_estimate | -49.0% | 25.5µs | 49.9µs | 1.96x |
| simulation_analysis/hausdorff_dimension_estimate | -38.9% | 4.8µs | 7.9µs | 1.64x |
| simulation_analysis/average_volume_profile | -38.4% | 49.1ns | 79.6ns | 1.62x |
| validation/validate | -36.9% | 232.8µs | 369.0µs | 1.58x |
| triangulation_creation/delaunay_backend/50 | -33.0% | 6.0ms | 8.9ms | 1.49x |
| simulation_analysis/volume_fluctuations | -29.2% | 87.4ns | 123.6ns | 1.41x |
| edge_counting/uncached/25 | -24.9% | 756.2ns | 1.0µs | 1.33x |
| edge_counting/uncached/100 | -24.6% | 3.4µs | 4.5µs | 1.33x |
| edge_counting/uncached/50 | -21.5% | 1.6µs | 2.0µs | 1.27x |
| edge_counting/uncached/200 | -18.9% | 7.2µs | 8.9µs | 1.23x |
| geometry_queries/euler_characteristic | -16.3% | 1.7µs | 2.0µs | 1.20x |
| geometry_queries/iterate_faces | -14.7% | 107.9ns | 126.6ns | 1.17x |
| action_calculations/calculate_action/100 | -13.6% | 5.9ns | 6.9ns | 1.16x |
| geometry_queries/iterate_edges | -13.1% | 2.0µs | 2.3µs | 1.15x |

✅ Stable Benchmarks

No significant changes detected in 16 benchmarks.

⚠️ This PR introduces performance regressions that exceed the threshold. Please review the changes.


Performance analysis powered by Criterion.rs
