feat(api)!: adopt checked Delaunay v0.7.7 APIs #128
Conversation
- Upgrade to delaunay 0.7.7 and route backend construction through checked, topology-aware validation.
- Harden CDT initialization, checkpoint restore, and mutation paths so invalid Delaunay state is rejected with typed errors.
- Replace public simulation result fields with constructors and accessors, and add typed output/checkpoint/backend operation categories.
- Move mock backend fixtures into the testing prelude and widen Euler characteristic arithmetic to i128.

BREAKING CHANGE: DelaunayBackend::from_triangulation now returns Result, SimulationResultsBackend exposes accessors instead of public fields, TriangulationQuery::euler_characteristic returns i128, mock backend fixtures move from prelude::geometry to prelude::testing, and checked backend reconstruction may reject checkpoints or explicit toroidal connectivity accepted previously.
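For call sites migrating across these breaking changes, a rough before/after sketch follows. Only the breaking-change facts come from the PR description; the builder arguments, accessor names, and error plumbing below are assumptions for illustration.

```rust
// Hypothetical migration sketch -- not the crate's exact API.
use causal_triangulations::prelude::testing::*; // mock backend fixtures moved here from prelude::geometry

fn migrate() -> Result<(), Box<dyn std::error::Error>> {
    // Backend construction is now checked and fallible:
    // let backend = DelaunayBackend::from_triangulation(raw)?; // previously returned the backend directly

    // Results go through a validating constructor and accessors instead of public fields
    // (from_cdt_strip is the CDT-valid builder named in the review; arguments are placeholders):
    let triangulation = CdtTriangulation::from_cdt_strip(8, 3)?;
    let results = SimulationResultsBackend::new(&triangulation)?;
    let _vertices = results.vertex_count(); // placeholder accessor name

    // TriangulationQuery::euler_characteristic now returns i128, so widen local bindings:
    let _chi: i128 = triangulation.euler_characteristic();
    Ok(())
}
```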
Walkthrough

This PR introduces typed error enums, makes SimulationResultsBackend fields private with validated constructors, enforces fallible Delaunay/backend validation, adds stale-foliation bookkeeping semantics, optimizes ergodic moves to avoid allocations, narrows CLI dimension support to 2, widens Euler-characteristic types to i128, and updates docs/tests/tooling accordingly.

Changes

- Typed Error System and Type Widening
- Delaunay Backend Validation and Construction
- SimulationResultsBackend Encapsulation and Field Privacy
- Stale Foliation Bookkeeping Semantics
- Ergodic Moves Optimization and Typed Error Handling
- Checkpoint and Results API Updates
- Dimension Support Narrowing and Configuration
- Documentation and Example Updates
- Dependency Update and Utility Cleanup
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Codecov Report

❌ Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## main #128 +/- ##
==========================================
- Coverage 92.49% 92.26% -0.24%
==========================================
Files 19 19
Lines 10315 10828 +513
==========================================
+ Hits 9541 9990 +449
- Misses 774 838 +64
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
🧹 Nitpick comments (5)
src/cdt/triangulation/validation.rs (1)
180-182: ⚡ Quick win: Document the stale-foliation failure mode in `validate_causality_delaunay` docs.

This method now returns `FoliationError::StaleBookkeeping` when bookkeeping is out of sync, but the current error docs still only describe face/label/causality violations.
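A minimal sketch of the suggested doc addition follows; the function signature and the stub error enum are assumptions for illustration, and only the error and method names come from the review.

```rust
// Stub so the sketch stands alone; the real FoliationError lives in src/cdt/foliation.rs.
#[derive(Debug)]
pub enum FoliationError {
    StaleBookkeeping { synced_at_modification: Option<u64>, current_modification_count: u64 },
    // ... existing face/label/causality variants ...
}

/// Validate causal structure through the checked Delaunay backend.
///
/// # Errors
///
/// Returns the existing face, label, and causality violations, plus
/// `FoliationError::StaleBookkeeping` when the foliation bookkeeping is out of
/// sync with the triangulation (`has_current_foliation()` is false). Callers
/// should handle or propagate this variant alongside the others.
pub fn validate_causality_delaunay() -> Result<(), FoliationError> {
    unimplemented!("illustrative stub; the real method validates the triangulation")
}
```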
🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/cdt/triangulation/validation.rs` around lines 180 - 182, The docs for validate_causality_delaunay are missing the new failure mode: when bookkeeping is out of sync the function returns FoliationError::StaleBookkeeping (triggered by has_current_foliation() -> stale_foliation_error()). Update the function-level documentation for validate_causality_delaunay to mention this failure case, describe the condition (when has_current_foliation() is false / bookkeeping is stale), and note that callers should handle or propagate FoliationError::StaleBookkeeping alongside the existing face/label/causality errors.

src/cdt/foliation.rs (1)
191-197: ⚡ Quick win: Add direct unit coverage for `StaleBookkeeping` diagnostics.

This new variant is user-facing in error reporting but currently has no explicit equality/display assertions in this module's test suite. Add one focused test for `to_string()` content and one for field-sensitive equality.
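A sketch of the two suggested tests, assuming `FoliationError::StaleBookkeeping` carries the fields named in the review and implements `Display` and `PartialEq`; the field types and the expected message substrings are guesses.

```rust
#[cfg(test)]
mod stale_bookkeeping_tests {
    use super::FoliationError;

    #[test]
    fn stale_bookkeeping_display() {
        // Field values follow the review's example; asserting on the numbers keeps the
        // test independent of exact wording.
        let err = FoliationError::StaleBookkeeping {
            synced_at_modification: Some(123),
            current_modification_count: 456,
        };
        let msg = err.to_string();
        assert!(msg.contains("123"));
        assert!(msg.contains("456"));
    }

    #[test]
    fn stale_bookkeeping_equality() {
        let a1 = FoliationError::StaleBookkeeping {
            synced_at_modification: Some(123),
            current_modification_count: 456,
        };
        let a2 = FoliationError::StaleBookkeeping {
            synced_at_modification: Some(123),
            current_modification_count: 456,
        };
        let b = FoliationError::StaleBookkeeping {
            synced_at_modification: None, // differs only in the sync marker
            current_modification_count: 456,
        };
        assert_eq!(a1, a2);
        assert_ne!(a1, b);
    }
}
```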
🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/cdt/foliation.rs` around lines 191 - 197, Add two unit tests in this module that construct the StaleBookkeeping variant directly (e.g., StaleBookkeeping { synced_at_modification: Some(123), current_modification_count: 456 }) and assert behavior: one test asserts that variant.to_string() (or format!("{}", variant)) contains the expected human-readable message and the key field values (check substrings for "synced_at_modification" or the numeric values and "current_modification_count"), and the other test asserts field-sensitive equality by comparing instances with different synced_at_modification (Some vs None) and different current_modification_count values to ensure PartialEq distinguishes them (assert eq for identical fields and neq for differing fields). Ensure tests are named clearly (e.g., stale_bookkeeping_display and stale_bookkeeping_equality) and placed under the module's #[cfg(test)] tests.

src/geometry/backends/delaunay.rs (1)
297-333: ⚡ Quick win: Thread `DelaunayOperation` through post-mutation validation too.

The new typed operation enum is still bypassed by `validate_mutation_or_restore`, which takes a raw `&'static str`. That leaves one remaining place where operation names can drift from the structured taxonomy you just introduced. Passing `DelaunayOperation` into that helper and formatting it there would keep all mutation diagnostics aligned.

♻️ Proposed refactor
```diff
-    fn validate_mutation_or_restore(
+    fn validate_mutation_or_restore(
         &mut self,
         dt_before: RawTriangulation<VertexData, CellData, D>,
         facets_before: HashMap<EdgeKey, FacetHandle>,
-        operation: &'static str,
+        operation: DelaunayOperation,
         target: impl Display,
     ) -> Result<(), DelaunayError> {
```

```diff
         self.validate_mutation_or_restore(
             dt_before,
             facets_before,
-            "insert_vertex",
+            DelaunayOperation::InsertVertex,
             format!("{coords:?}"),
         )?;
```

Apply the same change at the other call sites (`FlipK1Remove`, `FlipK2`, `FlipK1Insert`).
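For context, one way the typed operation enum and its `Display` formatting could look; the variant names are taken from the review comments, while the enum shape and the string forms are assumptions.

```rust
use std::fmt;

// Illustrative only: the crate's actual DelaunayOperation may carry data or differ in shape.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum DelaunayOperation {
    InsertVertex,
    MoveVertex,
    SubdivideFace,
    FlipK1Remove,
    FlipK1Insert,
    FlipK2,
    Clear,
    ReserveCapacity,
}

impl fmt::Display for DelaunayOperation {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Keep the strings aligned with the old &'static str operation names.
        let name = match self {
            Self::InsertVertex => "insert_vertex",
            Self::MoveVertex => "move_vertex",
            Self::SubdivideFace => "subdivide_face",
            Self::FlipK1Remove => "flip_k1_remove",
            Self::FlipK1Insert => "flip_k1_insert",
            Self::FlipK2 => "flip_k2",
            Self::Clear => "clear",
            Self::ReserveCapacity => "reserve_capacity",
        };
        f.write_str(name)
    }
}
```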
🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/geometry/backends/delaunay.rs` around lines 297 - 333, validate_mutation_or_restore currently accepts a raw &'static str for the operation name which bypasses the new DelaunayOperation enum; change validate_mutation_or_restore to accept a DelaunayOperation (or generic Into<String>/Display) and format the operation inside the function using DelaunayOperation's Display impl, then update all call sites (e.g., places handling InsertVertex, MoveVertex, SubdivideFace, FlipK1Remove, FlipK1Insert, FlipK2, Clear, ReserveCapacity) to pass the enum variant instead of a string so post-mutation validation uses the typed operation consistently.

src/cdt/ergodic_moves.rs (1)
912-946: 💤 Low value: Complex control flow in `neighbors3` could benefit from a comment.

The nested loop with early returns and slot-filling logic is correct but dense. A brief inline comment explaining the invariant (collecting exactly 3 distinct non-self neighbors from 3 adjacent faces) would help future readers.
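Something along these lines at the top of the function would capture the invariant; the signature below is a placeholder, and only the names mentioned in the review (`neighbors3`, `adjacent_faces`, `face_vertices`) are assumed to exist.

```rust
// Sketch of the suggested comment; the body is elided on purpose.
fn neighbors3(/* triangulation: &impl TriangulationQuery, vertex: VertexHandle */) {
    // Invariant: `adjacent_faces(vertex)` is expected to yield exactly 3 faces, and we fill
    // exactly 3 slots in `neighbors` with *distinct* vertex handles taken from
    // `face_vertices(face)`, skipping `vertex` itself and any handle already recorded.
    // Early returns signal that the expected 3-face adjacency does not hold.
}
```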
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/cdt/ergodic_moves.rs` around lines 912 - 946, The neighbors3 function's nested loops and slot-filling are correct but dense; add a concise inline comment at the top of fn neighbors3 (near the call to triangulation.geometry().adjacent_faces) that states the invariant: we expect exactly 3 adjacent faces and we collect exactly three distinct vertex handles that are not the input vertex (one neighbor per face), explaining the purpose of neighbors array, neighbor_count, and the dedup/skip logic (skip self, skip already seen) so future readers understand why early returns and slot checks exist; reference the calls triangulation.geometry().adjacent_faces and triangulation.geometry().face_vertices in the comment for clarity.

src/cdt/triangulation.rs (1)
1432-1445: 💤 Low value: Test assertion may be fragile if error message wording changes.

The test `checkpoint_rejects_invalid_toroidal_period` asserts that the error message contains "invalid toroidal period". If the upstream `delaunay` crate or internal error formatting changes this wording, the test will fail. Consider also matching on the structured error variant if possible, or documenting the dependency on this specific message.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/cdt/triangulation.rs` around lines 1432 - 1445, The test checkpoint_rejects_invalid_toroidal_period is brittle because it asserts on an exact substring of the error text; change it to match the structured error/variant returned by from_str::<CdtTriangulation<DelaunayBackend2D>>() instead of the message text: capture the Err value into a variable, then pattern-match/assert on the concrete error enum/variant or use a downcast/inspection helper to verify it is the "invalid toroidal period" error (or the corresponding variant name) rather than using error.to_string().contains(...); update the assertion in this test so it checks the error type/variant for CdtTriangulation/DelaunayBackend2D deserialization failure.
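A rough sketch of the structured-variant assertion the prompt asks for; the `CheckpointError` enum and the restore helper below are placeholders, since the actual error type returned by `from_str::<CdtTriangulation<DelaunayBackend2D>>` is not shown in this review.

```rust
// Hypothetical error type and restore entry point, for illustration only.
#[derive(Debug)]
enum CheckpointError {
    InvalidToroidalPeriod { period: i64 },
    Other(String),
}

fn restore_checkpoint(_json: &str) -> Result<(), CheckpointError> {
    // Stand-in for from_str::<CdtTriangulation<DelaunayBackend2D>>(...).
    Err(CheckpointError::InvalidToroidalPeriod { period: 0 })
}

#[test]
fn checkpoint_rejects_invalid_toroidal_period_structured() {
    let err = restore_checkpoint("{}").unwrap_err();
    // Match the variant instead of a substring of the rendered message.
    assert!(matches!(err, CheckpointError::InvalidToroidalPeriod { .. }));
}
```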
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/cdt/results.rs`:
- Around line 477-487: The doctest examples pass a raw Delaunay mesh
(CdtTriangulation::from_seeded_points) into SimulationResultsBackend::new, but
the constructor now validates that the triangulation is a CDT; replace calls to
CdtTriangulation::from_seeded_points(...) with a CDT-valid builder such as
CdtTriangulation::from_cdt_strip(...) (or another CDT-valid constructor) in the
examples referenced (around the blocks that call SimulationResultsBackend::new)
so the tests/doc build use a CDT-valid triangulation; update all similar
occurrences (noted at lines ~519-529 and ~775-783 in the same file) to maintain
consistency.
---
Nitpick comments:
In `@src/cdt/ergodic_moves.rs`:
- Around line 912-946: The neighbors3 function's nested loops and slot-filling
are correct but dense; add a concise inline comment at the top of fn neighbors3
(near the call to triangulation.geometry().adjacent_faces) that states the
invariant: we expect exactly 3 adjacent faces and we collect exactly three
distinct vertex handles that are not the input vertex (one neighbor per face),
explaining the purpose of neighbors array, neighbor_count, and the dedup/skip
logic (skip self, skip already seen) so future readers understand why early
returns and slot checks exist; reference the calls
triangulation.geometry().adjacent_faces and
triangulation.geometry().face_vertices in the comment for clarity.
In `@src/cdt/foliation.rs`:
- Around line 191-197: Add two unit tests in this module that construct the
StaleBookkeeping variant directly (e.g., StaleBookkeeping {
synced_at_modification: Some(123), current_modification_count: 456 }) and assert
behavior: one test asserts that variant.to_string() (or format!("{}", variant))
contains the expected human-readable message and the key field values (check
substrings for "synced_at_modification" or the numeric values and
"current_modification_count"), and the other test asserts field-sensitive
equality by comparing instances with different synced_at_modification (Some vs
None) and different current_modification_count values to ensure PartialEq
distinguishes them (assert eq for identical fields and neq for differing
fields). Ensure tests are named clearly (e.g., stale_bookkeeping_display and
stale_bookkeeping_equality) and placed under the module's #[cfg(test)] tests.
In `@src/cdt/triangulation.rs`:
- Around line 1432-1445: The test checkpoint_rejects_invalid_toroidal_period is
brittle because it asserts on an exact substring of the error text; change it to
match the structured error/variant returned by
from_str::<CdtTriangulation<DelaunayBackend2D>>() instead of the message text:
capture the Err value into a variable, then pattern-match/assert on the concrete
error enum/variant or use a downcast/inspection helper to verify it is the
"invalid toroidal period" error (or the corresponding variant name) rather than
using error.to_string().contains(...); update the assertion in this test so it
checks the error type/variant for CdtTriangulation/DelaunayBackend2D
deserialization failure.
In `@src/cdt/triangulation/validation.rs`:
- Around line 180-182: The docs for validate_causality_delaunay are missing the
new failure mode: when bookkeeping is out of sync the function returns
FoliationError::StaleBookkeeping (triggered by has_current_foliation() ->
stale_foliation_error()). Update the function-level documentation for
validate_causality_delaunay to mention this failure case, describe the condition
(when has_current_foliation() is false / bookkeeping is stale), and note that
callers should handle or propagate FoliationError::StaleBookkeeping alongside
the existing face/label/causality errors.
In `@src/geometry/backends/delaunay.rs`:
- Around line 297-333: validate_mutation_or_restore currently accepts a raw
&'static str for the operation name which bypasses the new DelaunayOperation
enum; change validate_mutation_or_restore to accept a DelaunayOperation (or
generic Into<String>/Display) and format the operation inside the function using
DelaunayOperation's Display impl, then update all call sites (e.g., places
handling InsertVertex, MoveVertex, SubdivideFace, FlipK1Remove, FlipK1Insert,
FlipK2, Clear, ReserveCapacity) to pass the enum variant instead of a string so
post-mutation validation uses the typed operation consistently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: a3a3adee-3ed8-4231-b298-f34616a26220
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (29)
- Cargo.toml
- benches/cdt_benchmarks.rs
- docs/CLI_EXAMPLES.md
- docs/code_organization.md
- docs/dev/rust.md
- examples/basic_cdt.rs
- examples/output_and_checkpoint.rs
- src/cdt/ergodic_moves.rs
- src/cdt/foliation.rs
- src/cdt/metropolis.rs
- src/cdt/results.rs
- src/cdt/triangulation.rs
- src/cdt/triangulation/builders.rs
- src/cdt/triangulation/foliation.rs
- src/cdt/triangulation/moves.rs
- src/cdt/triangulation/validation.rs
- src/config.rs
- src/errors.rs
- src/geometry/backends/delaunay.rs
- src/geometry/backends/mock.rs
- src/geometry/generators.rs
- src/geometry/operations.rs
- src/geometry/traits.rs
- src/lib.rs
- src/util.rs
- tests/cli.rs
- tests/integration_tests.rs
- tests/physics_integration.rs
- tests/regressions.rs
💤 Files with no reviewable changes (1)
- src/util.rs
- Add project Semgrep rules for silent conversion fallbacks, partial_cmp ordering defaults, function-local imports, and Clippy allow suppressions.
- Add slow CI, benchmark smoke/compile, unused-dependency, publish-check, and YAML formatting helpers to the justfile and developer docs.
- Harden changelog archive/postprocessing and benchmark baseline parsing for ID-keyed or unsized workloads.
- Preserve checked numeric conversions in CDT observables/results and keep doctests and benchmarks on CDT-valid fixtures.
- Expose config override and geometry result types through intended public preludes while narrowing backend handle exports.

BREAKING CHANGE: prelude::geometry no longer re-exports DelaunayVertexHandle, DelaunayEdgeHandle, or DelaunayFaceHandle. Import them from causal_triangulations::geometry::backends::delaunay when handle-specific code needs them.
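A minimal import migration sketch for the prelude change; the module paths follow the breaking-change note above, and any type parameters on the handle types are omitted.

```rust
// Before (removed from the prelude by this PR):
// use causal_triangulations::prelude::geometry::{DelaunayEdgeHandle, DelaunayFaceHandle, DelaunayVertexHandle};

// After: import the handle types from the backend module directly.
use causal_triangulations::geometry::backends::delaunay::{
    DelaunayEdgeHandle, DelaunayFaceHandle, DelaunayVertexHandle,
};
```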
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@justfile`:
- Around line 336-338: The publish-check recipe fails if package keywords are
null because the inline jq '.packages[0].keywords[]' returns exit code 5; fix it
by capturing cargo metadata once (e.g., assign metadata=$(cargo metadata
--no-deps --format-version=1 2>/dev/null)) and then extract keywords with a
null-coalescing jq expression like '.packages[0].keywords // [] | .[]' so the
command becomes echo "$metadata" | jq -r '.packages[0].keywords // [] | .[]'
(keep the existing variables such as keywords and count) to ensure the step
succeeds when keywords are missing.
In `@scripts/archive_changelog.py`:
- Around line 352-359: archive_dir_rel = Path(os.path.relpath(...)) can raise
ValueError on Windows when paths are on different drives; wrap the
os.path.relpath(...) call used to build archive_dir_rel in a try/except
ValueError block in scripts/archive_changelog.py, catch the ValueError, set
archive_dir_rel to a safe fallback (for example str(archive_dir) or
Path(archive_dir).as_posix()), and emit a LOGGER.warning that includes the
exception details along with archive_dir and changelog_path.parent so the code
continues to the existing fallback logic instead of crashing.
In `@scripts/benchmark_models.py`:
- Around line 412-415: The sort over dim_benchmarks uses b.comparison_key which
can raise for unsized benchmarks; change the tiebreaker to a display-only key
that won't raise (e.g. use b.benchmark_id or b.header_line()) so the sort
remains robust while preserving comparison semantics from
extract_benchmark_data(); update the lambda in the sorted call that currently
references comparison_key to use (b.benchmark_id or b.header_line()) instead.
In `@scripts/benchmark_utils.py`:
- Around line 1781-1792: The _matching_baseline function currently returns early
when current.benchmark_id is set even if
baseline_results.get(current.comparison_key) returned None, preventing the
legacy "{points}_{dimension}" fallback from being tried; update the logic in
_matching_baseline so that you first try
baseline_results.get(current.comparison_key) and return it if found, but if it's
None and current.points is not None then attempt
baseline_results.get(f"{current.points}_{current.dimension}") (i.e., remove the
short-circuit that checks current.benchmark_id and ensure the legacy key
fallback runs when the direct comparison_key lookup misses).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 7ff41179-d035-4ac4-9289-ab60fa41561e
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (31)
- Cargo.toml
- benches/cdt_benchmarks.rs
- docs/dev/commands.md
- docs/dev/tooling-alignment.md
- justfile
- scripts/archive_changelog.py
- scripts/benchmark_models.py
- scripts/benchmark_utils.py
- scripts/hardware_utils.py
- scripts/postprocess_changelog.py
- scripts/tests/test_archive_changelog.py
- scripts/tests/test_benchmark_models.py
- scripts/tests/test_benchmark_utils.py
- scripts/tests/test_hardware_utils.py
- scripts/tests/test_postprocess_changelog.py
- semgrep.yaml
- src/cdt/ergodic_moves.rs
- src/cdt/foliation.rs
- src/cdt/metropolis.rs
- src/cdt/observables.rs
- src/cdt/results.rs
- src/cdt/triangulation.rs
- src/cdt/triangulation/builders.rs
- src/cdt/triangulation/validation.rs
- src/config.rs
- src/geometry/backends/delaunay.rs
- src/geometry/traits.rs
- src/lib.rs
- src/util.rs
- tests/proptest_foliation.rs
- tests/semgrep/src/project_rules/rust_style.rs
✅ Files skipped from review due to trivial changes (1)
- tests/proptest_foliation.rs
🚧 Files skipped from review as they are similar to previous changes (10)
- src/cdt/foliation.rs
- Cargo.toml
- src/cdt/triangulation/validation.rs
- src/cdt/triangulation/builders.rs
- src/cdt/results.rs
- src/cdt/metropolis.rs
- src/cdt/triangulation.rs
- src/lib.rs
- src/cdt/ergodic_moves.rs
- src/geometry/backends/delaunay.rs
```bash
keywords=$(cargo metadata --no-deps --format-version=1 2>/dev/null \
    | jq -r '.packages[0].keywords[]')
count=0
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/usr/bin/env bash
set -euo pipefail
echo '{"packages":[{"description":"x"}]}' | jq -r '.packages[0].keywords[]' >/dev/null
# Expected: jq exits non-zero (current recipe can fail early).
echo '{"packages":[{"description":"x"}]}' | jq -r '.packages[0].keywords // [] | .[]' >/dev/null
# Expected: exits zero and prints nothing.
```

Repository: acgetchell/causal-triangulations
Length of output: 134
🏁 Script executed:
```bash
# First, check if the justfile exists and read the relevant section
if [ -f justfile ]; then
    echo "=== Lines 330-345 of justfile ==="
    sed -n '330,345p' justfile
    echo ""
    echo "=== Total lines in justfile ==="
    wc -l justfile
else
    echo "justfile not found"
    git ls-files | grep -i justfile
fi
```

Repository: acgetchell/causal-triangulations
Length of output: 654
🏁 Script executed:
```bash
#!/usr/bin/env bash
set -euo pipefail
# Test both approaches with clear separation
echo "=== Testing original code (should fail) ==="
if echo '{"packages":[{"description":"x"}]}' | jq -r '.packages[0].keywords[]' >/dev/null 2>&1; then
    echo "✓ Original approach succeeded"
else
    echo "✗ Original approach failed with exit code: $?"
fi
echo ""
echo "=== Testing proposed fix (should succeed) ==="
if echo '{"packages":[{"description":"x"}]}' | jq -r '.packages[0].keywords // [] | .[]' >/dev/null 2>&1; then
    echo "✓ Proposed fix succeeded"
else
    echo "✗ Proposed fix failed with exit code: $?"
fi
echo ""
echo "=== Testing with actual metadata structure ==="
test_json='{"packages":[{"keywords":["foo","bar"],"categories":["category1"],"description":"Test"}]}'
echo "With keywords present:"
echo "$test_json" | jq -r '.packages[0].keywords // [] | .[]'
```

Repository: acgetchell/causal-triangulations
Length of output: 316
🏁 Script executed:
```bash
# Check if jq is mentioned/used in the project
rg -i 'jq' justfile --context 3
```

Repository: acgetchell/causal-triangulations
Length of output: 2400
Harden publish-check against missing keywords metadata.
The jq command at line 337 (jq -r '.packages[0].keywords[]') fails with exit code 5 when keywords is null or absent, terminating the entire recipe before validation messages print. Under set -euo pipefail, this breaks the workflow without clear diagnostics.
Apply the proposed fix to capture metadata once and use null coalescing:
Suggested fix

```diff
 publish-check: _ensure-jq
     #!/usr/bin/env bash
     set -euo pipefail
     echo "🔍 Validating crates.io metadata..."
     errors=0
+    metadata_json="$(cargo metadata --no-deps --format-version=1 2>/dev/null)"
-    keywords=$(cargo metadata --no-deps --format-version=1 2>/dev/null \
-        | jq -r '.packages[0].keywords[]')
+    keywords=$(jq -r '.packages[0].keywords // [] | .[]' <<<"$metadata_json")
@@
-    cat_count=$(cargo metadata --no-deps --format-version=1 2>/dev/null \
-        | jq '.packages[0].categories | length')
+    cat_count=$(jq '.packages[0].categories // [] | length' <<<"$metadata_json")
@@
-    desc=$(cargo metadata --no-deps --format-version=1 2>/dev/null \
-        | jq -r '.packages[0].description // ""')
+    desc=$(jq -r '.packages[0].description // ""' <<<"$metadata_json")
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@justfile` around lines 336 - 338, The publish-check recipe fails if package
keywords are null because the inline jq '.packages[0].keywords[]' returns exit
code 5; fix it by capturing cargo metadata once (e.g., assign metadata=$(cargo
metadata --no-deps --format-version=1 2>/dev/null)) and then extract keywords
with a null-coalescing jq expression like '.packages[0].keywords // [] | .[]' so
the command becomes echo "$metadata" | jq -r '.packages[0].keywords // [] | .[]'
(keep the existing variables such as keywords and count) to ensure the step
succeeds when keywords are missing.
```python
archive_dir_rel = Path(os.path.relpath(archive_dir, changelog_path.parent)).as_posix()
if archive_dir_rel == ".." or archive_dir_rel.startswith("../") or Path(archive_dir_rel).is_absolute():
    LOGGER.warning(
        "Archive directory %s is outside changelog directory %s; generated Markdown links use %s",
        archive_dir,
        changelog_path.parent,
        archive_dir_rel,
    )
```
🧩 Analysis chain
🌐 Web query:
Does Python's os.path.relpath raise ValueError on Windows when path and start are on different drives?
💡 Result:
Yes, Python's os.path.relpath function raises a ValueError on Windows when the path and the start directory are located on different drives [1][2]. Because Windows uses separate drive letters (e.g., C:, D:) to represent distinct filesystems, it is impossible to compute a relative path between them [3][4]. The official Python documentation explicitly states: "On Windows, ValueError is raised when path and start are on different drives" [1][2]. This behavior is a long-standing design choice in the ntpath module [4]. If you encounter this error, you must either use absolute paths or handle the drive difference manually, as a relative path cannot be constructed across different mount points [3].
Citations:
- 1: https://docs.python.org/3.12/library/os.path.html
- 2: https://docs.python.org/3/library/os.path.html
- 3: https://stackoverflow.com/questions/75735028/using-os-path-relpath-on-windows-network-drive
- 4: os.path.relpath does not document ValueError on Windows with different drives python/cpython#85827
🏁 Script executed:
```bash
cat -n scripts/archive_changelog.py | sed -n '340,365p'
```

Repository: acgetchell/causal-triangulations
Length of output: 1336
🏁 Script executed:
```bash
cat -n scripts/archive_changelog.py | sed -n '300,350p'
```

Repository: acgetchell/causal-triangulations
Length of output: 2158
Wrap os.path.relpath() in a try-except block to handle ValueError on Windows.
On line 352, os.path.relpath() can raise ValueError on Windows when paths are on different drives (e.g., C: and D:), causing the fallback path logic to crash instead of warning and continuing.
Suggested fix

```diff
     try:
         archive_dir_rel = archive_dir.relative_to(changelog_path.parent).as_posix()
     except ValueError:
-        archive_dir_rel = Path(os.path.relpath(archive_dir, changelog_path.parent)).as_posix()
+        try:
+            archive_dir_rel = Path(os.path.relpath(archive_dir, changelog_path.parent)).as_posix()
+        except ValueError:
+            archive_dir_rel = archive_dir.as_posix()
     if archive_dir_rel == ".." or archive_dir_rel.startswith("../") or Path(archive_dir_rel).is_absolute():
         LOGGER.warning(
             "Archive directory %s is outside changelog directory %s; generated Markdown links use %s",
             archive_dir,
             changelog_path.parent,
             archive_dir_rel,
         )
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@scripts/archive_changelog.py` around lines 352 - 359, archive_dir_rel =
Path(os.path.relpath(...)) can raise ValueError on Windows when paths are on
different drives; wrap the os.path.relpath(...) call used to build
archive_dir_rel in a try/except ValueError block in
scripts/archive_changelog.py, catch the ValueError, set archive_dir_rel to a
safe fallback (for example str(archive_dir) or Path(archive_dir).as_posix()),
and emit a LOGGER.warning that includes the exception details along with
archive_dir and changelog_path.parent so the code continues to the existing
fallback logic instead of crashing.
```python
dim_benchmarks = sorted(
    by_dimension[dimension],
    key=lambda b: (b.points is None, b.points or 0, b.comparison_key),
)
```
Avoid using comparison_key as a display-sort key.
comparison_key intentionally raises for unsized benchmarks without benchmark_id, so this sort can now fail on data that extract_benchmark_data() otherwise accepts. A display-only tiebreaker like benchmark_id or header_line() keeps table rendering robust without weakening comparison matching.
Suggested fix

```diff
 dim_benchmarks = sorted(
     by_dimension[dimension],
-    key=lambda b: (b.points is None, b.points or 0, b.comparison_key),
+    key=lambda b: (
+        b.points is None,
+        b.points or 0,
+        b.benchmark_id or b.header_line(),
+    ),
 )
```
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-dim_benchmarks = sorted(
-    by_dimension[dimension],
-    key=lambda b: (b.points is None, b.points or 0, b.comparison_key),
-)
+dim_benchmarks = sorted(
+    by_dimension[dimension],
+    key=lambda b: (
+        b.points is None,
+        b.points or 0,
+        b.benchmark_id or b.header_line(),
+    ),
+)
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@scripts/benchmark_models.py` around lines 412 - 415, The sort over
dim_benchmarks uses b.comparison_key which can raise for unsized benchmarks;
change the tiebreaker to a display-only key that won't raise (e.g. use
b.benchmark_id or b.header_line()) so the sort remains robust while preserving
comparison semantics from extract_benchmark_data(); update the lambda in the
sorted call that currently references comparison_key to use (b.benchmark_id or
b.header_line()) instead.
```python
@staticmethod
def _matching_baseline(current: BenchmarkData, baseline_results: dict[str, BenchmarkData]) -> BenchmarkData | None:
    """Return the matching baseline entry, falling back to legacy keys for point-sized benchmarks."""
    try:
        baseline_benchmark = baseline_results.get(current.comparison_key)
    except ValueError:
        return None
    if baseline_benchmark is not None or current.benchmark_id:
        return baseline_benchmark
    if current.points is None:
        return None
    return baseline_results.get(f"{current.points}_{current.dimension}")
```
Preserve the legacy key fallback after a missed benchmark_id lookup.
If current.benchmark_id is set but the baseline was generated before Benchmark ID: existed, this returns None immediately and never tries the existing {points}_{dimension} key. That will silently skip valid baseline matches for sized benchmarks and can hide real regressions.
Suggested fix

```diff
 @staticmethod
 def _matching_baseline(current: BenchmarkData, baseline_results: dict[str, BenchmarkData]) -> BenchmarkData | None:
     """Return the matching baseline entry, falling back to legacy keys for point-sized benchmarks."""
     try:
         baseline_benchmark = baseline_results.get(current.comparison_key)
     except ValueError:
-        return None
-    if baseline_benchmark is not None or current.benchmark_id:
+        baseline_benchmark = None
+    if baseline_benchmark is not None:
         return baseline_benchmark
     if current.points is None:
         return None
     return baseline_results.get(f"{current.points}_{current.dimension}")
```
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@scripts/benchmark_utils.py` around lines 1781 - 1792, The _matching_baseline
function currently returns early when current.benchmark_id is set even if
baseline_results.get(current.comparison_key) returned None, preventing the
legacy "{points}_{dimension}" fallback from being tried; update the logic in
_matching_baseline so that you first try
baseline_results.get(current.comparison_key) and return it if found, but if it's
None and current.points is not None then attempt
baseline_results.get(f"{current.points}_{current.dimension}") (i.e., remove the
short-circuit that checks current.benchmark_id and ensure the legacy key
fallback runs when the direct comparison_key lookup misses).
🔴 Performance Regression Detected

CDT Performance Analysis Report
Generated: 2026-05-16T16:02:43+00:00

Summary
🔴 Performance Regressions
🟢 Performance Improvements
✅ Stable Benchmarks

No significant changes detected in 16 benchmarks.

Performance analysis powered by Criterion.rs