Conversation
📝 Walkthrough

Replaces the dynamic BatchDataSlice with a const-generic AddressBatchSnapshot, moves proof reconstruction to batched const-generic routines returning fixed-size proof arrays, updates callers and circuit inputs to use borrowed slices and fixed-size proofs, and converts several functions to propagate errors via Results.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Consumer as AddressQueueConsumer
    participant Queue as StreamingAddressQueue
    participant Indexer as AddressQueueData
    participant Proof as ProofReconstructor
    participant Staging as StagingTree
    Consumer->>Queue: get_batch_snapshot<const HEIGHT>(start,end,hashchain_idx)
    Queue->>Indexer: read batch raw data (addresses, low elements, leaves_hash_chains)
    alt leaves_hashchain present
        Indexer-->>Queue: leaves_hashchain
    else derive from addresses
        Queue->>Proof: create_hash_chain_from_slice(addresses_slice)
        Proof-->>Queue: leaves_hashchain / error
    end
    Queue->>Proof: reconstruct_proofs::<HEIGHT>(start..actual_end)
    Proof-->>Queue: low_element_proofs (fixed-size arrays)
    Queue-->>Consumer: AddressBatchSnapshot {addresses, low_element_*, low_element_proofs, leaves_hashchain}
    Consumer->>Staging: staging_tree.process_batch(snapshot...)
    Staging-->>Consumer: result / mapped error
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@forester/src/processor/v2/helpers.rs`:
- Around line 506-516: get_batch_snapshot currently panics if
data.leaves_hash_chains lacks an entry; either update the TestIndexer producer
in sdk-libs/program-test/src/indexer/test_indexer.rs to populate
AddressQueueData.leaves_hash_chains, or add a compatibility fallback inside
get_batch_snapshot: when data.leaves_hash_chains.get(hashchain_idx) is None,
derive the needed hash-chain from the corresponding leaves (using the same
hash-chain construction used elsewhere) or fall back to a sensible default
(e.g., compute from data.leaves or use an Option) rather than returning an
error; locate symbols get_batch_snapshot, AddressQueueData, and
leaves_hash_chains to implement the change.
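The fallback the comment describes can be sketched as follows. This is a self-contained illustration, not the crate's real API: `toy_chain` stands in for whatever `create_hash_chain_from_slice` actually computes, and `chain_for_batch` shows the lookup-with-fallback shape for `get_batch_snapshot`.

```rust
// Hypothetical sketch of the compatibility fallback: if the indexer did not
// populate `leaves_hash_chains`, derive the chain from the batch's addresses
// instead of panicking or returning an error.
// `toy_chain` is a placeholder for the real hash-chain construction (Poseidon
// in the actual codebase).
fn toy_chain(leaves: &[[u8; 32]]) -> [u8; 32] {
    // Fold each leaf into the accumulator byte-wise (stand-in only).
    let mut acc = [0u8; 32];
    for leaf in leaves {
        for (a, b) in acc.iter_mut().zip(leaf.iter()) {
            *a = a.wrapping_mul(31).wrapping_add(*b);
        }
    }
    acc
}

// Fallback lookup: use the stored chain if present, otherwise derive it
// from the batch's addresses.
fn chain_for_batch(
    leaves_hash_chains: &[[u8; 32]],
    hashchain_idx: usize,
    addresses: &[[u8; 32]],
) -> [u8; 32] {
    match leaves_hash_chains.get(hashchain_idx) {
        Some(chain) => *chain,
        None => toy_chain(addresses),
    }
}
```

Either fixing the TestIndexer producer or adding this fallback resolves the panic; the fallback has the advantage of also tolerating older indexer data.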
In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs`:
- Around line 246-257: The code attempts to return
ProverClientError::IntegerConversion when try_into() fails for
low_element_indices[i] and low_element_next_indices[i], but that variant doesn't
exist; either add an IntegerConversion variant to the ProverClientError enum in
prover/client/src/errors.rs or change the map_err calls in proof_inputs.rs
(around low_element_index and low_element_next_index) to use an existing
ProverClientError variant (e.g., the crate's conversion/invalid-input error
variant) and preserve the formatted message; update both occurrences to the
chosen existing variant so the code compiles.
In `@prover/client/tests/batch_address_append.rs`:
- Around line 25-26: The call to spawn_prover().await uses .unwrap() incorrectly
because spawn_prover returns () not Result; remove the .unwrap() so the line
becomes spawn_prover().await and leave the preceding println as-is; update the
test in batch_address_append.rs to match other tests that call
spawn_prover().await without unwrapping.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 83867487-eb0a-4fd2-82fd-9f90e44c794d
⛔ Files ignored due to path filters (5)
- forester-utils/src/address_staging_tree.rs is excluded by none and included by none
- program-tests/utils/src/e2e_test_env.rs is excluded by none and included by none
- program-tests/utils/src/test_batch_forester.rs is excluded by none and included by none
- sparse-merkle-tree/src/indexed_changelog.rs is excluded by none and included by none
- sparse-merkle-tree/tests/indexed_changelog.rs is excluded by none and included by none
📒 Files selected for processing (6)
- forester/src/processor/v2/helpers.rs
- forester/src/processor/v2/strategy/address.rs
- prover/client/src/proof_types/batch_address_append/proof_inputs.rs
- prover/client/tests/batch_address_append.rs
- sdk-libs/client/src/indexer/types/queue.rs
- sdk-libs/program-test/src/indexer/test_indexer.rs
Force-pushed from b07fd20 to 4650c0a
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
sdk-libs/client/src/indexer/types/queue.rs (1)
70-122: 🧹 Nitpick | 🔵 Trivial

Consider unifying `reconstruct_proof` with the lookup-based implementation.

There's significant code duplication between `reconstruct_proof` (lines 70-122) and `reconstruct_proof_with_lookup` (lines 155-205). More importantly, callers of the single-proof method get O(HEIGHT × n) performance due to the linear scan at lines 95-98, while batch callers enjoy O(HEIGHT) via HashMap. You could eliminate duplication and give everyone the faster path:
♻️ Proposed refactor to unify proof reconstruction
```diff
 pub fn reconstruct_proof<const HEIGHT: usize>(
     &self,
     address_idx: usize,
 ) -> Result<[[u8; 32]; HEIGHT], IndexerError> {
-    let leaf_index = *self.low_element_indices.get(address_idx).ok_or_else(|| {
-        IndexerError::MissingResult {
-            context: "reconstruct_proof".to_string(),
-            message: format!(
-                "address_idx {} out of bounds for low_element_indices (len {})",
-                address_idx,
-                self.low_element_indices.len(),
-            ),
-        }
-    })?;
-    let mut proof = [[0u8; 32]; HEIGHT];
-    let mut pos = leaf_index;
-
-    for (level, proof_element) in proof.iter_mut().enumerate() {
-        let sibling_pos = if pos.is_multiple_of(2) {
-            pos + 1
-        } else {
-            pos - 1
-        };
-        let sibling_idx = Self::encode_node_index(level, sibling_pos);
-
-        let hash_idx = self
-            .nodes
-            .iter()
-            .position(|&n| n == sibling_idx)
-            .ok_or_else(|| IndexerError::MissingResult {
-                context: "reconstruct_proof".to_string(),
-                message: format!(
-                    "Missing proof node at level {} position {} (encoded: {})",
-                    level, sibling_pos, sibling_idx
-                ),
-            })?;
-        let hash = self
-            .node_hashes
-            .get(hash_idx)
-            .ok_or_else(|| IndexerError::MissingResult {
-                context: "reconstruct_proof".to_string(),
-                message: format!(
-                    "node_hashes index {} out of bounds (len {})",
-                    hash_idx,
-                    self.node_hashes.len(),
-                ),
-            })?;
-        *proof_element = *hash;
-        pos /= 2;
-    }
-
-    Ok(proof)
+    let node_lookup = self.build_node_lookup();
+    self.reconstruct_proof_with_lookup::<HEIGHT>(address_idx, &node_lookup)
 }
```

The tradeoff: single-proof calls now pay the HashMap build cost. If you expect frequent single-proof calls on small node sets, you could keep both paths and document the performance characteristics. But for typical batch workloads, unification simplifies maintenance.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/client/src/indexer/types/queue.rs` around lines 70 - 122, reconstruct_proof duplicates logic and does an O(HEIGHT × n) linear scan over self.nodes/node_hashes; replace its loop with the lookup-based approach used in reconstruct_proof_with_lookup by building the HashMap (mapping node index -> hash) once and then using encode_node_index to fetch siblings in O(1), or extract the HashMap construction into a shared helper used by both reconstruct_proof and reconstruct_proof_with_lookup so single-proof callers pay only the HashMap build cost and both functions reuse the same fast lookup (refer to reconstruct_proof, reconstruct_proof_with_lookup, self.nodes, self.node_hashes, and Self::encode_node_index).

forester/src/processor/v2/strategy/address.rs (1)
335-366: 🧹 Nitpick | 🔵 Trivial

Error mapping is incomplete for all `AddressStagingTreeError` variants.

The `map_address_staging_error` function only handles two specific `CircuitInputs` error cases (`HashchainMismatch` and `ProofPatchFailed`), but the `AddressStagingTreeError` enum (context snippet 2) has four variants:

- `SparseRootMismatch` - not mapped
- `CircuitInputs` - partially mapped (only two inner variants)
- `MissingSubtrees` - not mapped
- `RootSerialization` - not mapped

The catch-all at line 365 converts these to generic `anyhow::anyhow!` errors, losing the structured error information that `V2Error` variants could provide. For better observability and consistent error handling, consider adding explicit mappings:

♻️ Suggested improvement
```diff
 fn map_address_staging_error(tree: &str, err: ForesterUtilsError) -> anyhow::Error {
     match err {
+        ForesterUtilsError::AddressStagingTree(AddressStagingTreeError::SparseRootMismatch {
+            computed,
+            expected,
+            start_index,
+        }) => V2Error::StaleTree {
+            tree_id: tree.to_string(),
+            details: format!(
+                "sparse root mismatch: computed {:?}[..4] != expected {:?}[..4] (start_index={})",
+                &computed[..4], &expected[..4], start_index
+            ),
+        }
+        .into(),
         ForesterUtilsError::AddressStagingTree(AddressStagingTreeError::CircuitInputs {
             // ... existing handling
         }) => { /* ... */ }
         // ... rest
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@forester/src/processor/v2/strategy/address.rs` around lines 335 - 366, map_address_staging_error currently only converts two specific CircuitInputs cases into V2Error and falls back to a generic anyhow for everything else; update it to exhaustively match all AddressStagingTreeError variants (SparseRootMismatch, CircuitInputs, MissingSubtrees, RootSerialization) and convert each into an appropriate V2Error variant (add new V2Error mappings if needed) so structured errors are preserved; inside the CircuitInputs arm also handle any remaining ProverClientError variants beyond HashchainMismatch and ProofPatchFailed (mapping them to distinct V2Error variants or a descriptive V2Error::ProverClientError) and return those .into() values instead of the generic anyhow fallback, ensuring map_address_staging_error returns meaningful V2Error::... variants for every AddressStagingTreeError case.
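The exhaustive-match shape the comment asks for can be sketched as below. The enums here are simplified stand-ins for `AddressStagingTreeError` and `V2Error` (the real variants carry richer payloads); the point is that with no catch-all arm, adding a new error variant becomes a compile error rather than a silently generic anyhow error.

```rust
// Illustrative only: simplified stand-ins for the real error enums.
#[derive(Debug, PartialEq)]
enum StagingError {
    SparseRootMismatch { start_index: u64 },
    CircuitInputs(String),
    MissingSubtrees,
    RootSerialization,
}

#[derive(Debug, PartialEq)]
enum MappedError {
    StaleTree(String),
    Prover(String),
    Internal(String),
}

// Exhaustive mapping: every StagingError variant gets a structured target,
// so no information is lost to a generic fallback.
fn map_staging_error(tree: &str, err: StagingError) -> MappedError {
    match err {
        StagingError::SparseRootMismatch { start_index } => MappedError::StaleTree(format!(
            "{tree}: sparse root mismatch at start_index={start_index}"
        )),
        StagingError::CircuitInputs(msg) => MappedError::Prover(format!("{tree}: {msg}")),
        StagingError::MissingSubtrees => {
            MappedError::Internal(format!("{tree}: missing subtrees"))
        }
        StagingError::RootSerialization => {
            MappedError::Internal(format!("{tree}: root serialization failed"))
        }
    }
}
```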
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs`:
- Around line 202-210: Validate slice lengths before slicing/indexing: ensure
zkp_batch_size <= new_element_values.len() before doing let new_element_values =
&new_element_values[..zkp_batch_size]; and also assert/evaluate that all
parallel input slices (e.g., patched_low_element_next_values,
patched_low_element_next_indices, patched_low_element_values,
patched_low_element_indices and any other input slices used later at lines
referencing indices 246, 252, 272) have matching lengths equal to zkp_batch_size
(or to each other as required). If they don’t match, return an Err or
early-return with a clear error instead of slicing/indexing; update the
surrounding function (the proof input construction routine) to perform these
preconditions at the top so subsequent Vec::with_capacity and indexed accesses
cannot panic.
In `@prover/client/tests/batch_address_append.rs`:
- Around line 61-67: Replace the bare unwrap on the proof conversion in the test
with an expect that supplies context so failures show proof depth vs expected
tree height; specifically change the call on
non_inclusion_proof.merkle_proof.as_slice().try_into().unwrap() to use
.expect(...) with a message that includes identifying values (e.g.,
non_inclusion_proof.merkle_proof.len() or the expected tree height) so the panic
explains why the conversion to the fixed-size array failed when running the
batch_address_append test.
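One way to sketch the suggested context-carrying conversion (names and the height of 4 are illustrative, not the test's real values):

```rust
// Hypothetical helper: convert a Vec-backed merkle proof into a fixed-size
// array, panicking with a message that reports the actual proof length
// against the expected tree height instead of a bare unwrap.
fn proof_to_array<const HEIGHT: usize>(proof: &[[u8; 32]]) -> [[u8; 32]; HEIGHT] {
    proof.try_into().unwrap_or_else(|_| {
        panic!(
            "merkle proof length {} does not match expected tree height {}",
            proof.len(),
            HEIGHT
        )
    })
}
```

A length mismatch now fails with an actionable message rather than `called Result::unwrap() on an Err value`.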
In `@sdk-libs/client/src/indexer/types/queue.rs`:
- Around line 269-281: Add edge-case tests around reconstruct_proofs and
reconstruct_proof: create tests named reconstruct_proofs_empty_range,
reconstruct_proofs_single_element, and reconstruct_proof_out_of_bounds that use
build_queue_data::<40>(...) to construct a queue; in
reconstruct_proofs_empty_range call queue.reconstruct_proofs::<40>(0..0) and
assert the returned Vec is empty; in reconstruct_proofs_single_element call
queue.reconstruct_proofs::<40>(3..4) and compare length and element equality
against queue.reconstruct_proof::<40>(3); and in reconstruct_proof_out_of_bounds
call queue.reconstruct_proof::<40>(10) on a small queue and assert it returns an
Err. Ensure each test uses the same generic parameter (::<40>) and unwrap/assert
appropriately to match existing test patterns.
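The three edge cases can be sketched against a toy queue (the real tests would use `build_queue_data::<40>` and the actual queue type; `ToyQueue` with HEIGHT = 2 is a stand-in to show the shape of the assertions):

```rust
use std::ops::Range;

// Toy stand-in for the queue: proofs are pre-stored per address index.
struct ToyQueue {
    proofs: Vec<[[u8; 32]; 2]>, // HEIGHT = 2 stand-in for the real 40
}

impl ToyQueue {
    // Single-proof path: errors on an out-of-bounds index.
    fn reconstruct_proof(&self, idx: usize) -> Result<[[u8; 32]; 2], String> {
        self.proofs
            .get(idx)
            .copied()
            .ok_or_else(|| format!("address_idx {idx} out of bounds"))
    }

    // Batch path: an empty range yields an empty Vec; any bad index is an Err.
    fn reconstruct_proofs(&self, range: Range<usize>) -> Result<Vec<[[u8; 32]; 2]>, String> {
        range.map(|i| self.reconstruct_proof(i)).collect()
    }
}
```

The assertions then mirror the requested tests: empty range is empty, a single-element range agrees with the single-proof call, and an out-of-bounds index returns `Err`.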
---
Outside diff comments:
In `@forester/src/processor/v2/strategy/address.rs`:
- Around line 335-366: map_address_staging_error currently only converts two
specific CircuitInputs cases into V2Error and falls back to a generic anyhow for
everything else; update it to exhaustively match all AddressStagingTreeError
variants (SparseRootMismatch, CircuitInputs, MissingSubtrees, RootSerialization)
and convert each into an appropriate V2Error variant (add new V2Error mappings
if needed) so structured errors are preserved; inside the CircuitInputs arm also
handle any remaining ProverClientError variants beyond HashchainMismatch and
ProofPatchFailed (mapping them to distinct V2Error variants or a descriptive
V2Error::ProverClientError) and return those .into() values instead of the
generic anyhow fallback, ensuring map_address_staging_error returns meaningful
V2Error::... variants for every AddressStagingTreeError case.
In `@sdk-libs/client/src/indexer/types/queue.rs`:
- Around line 70-122: reconstruct_proof duplicates logic and does an O(HEIGHT ×
n) linear scan over self.nodes/node_hashes; replace its loop with the
lookup-based approach used in reconstruct_proof_with_lookup by building the
HashMap (mapping node index -> hash) once and then using encode_node_index to
fetch siblings in O(1), or extract the HashMap construction into a shared helper
used by both reconstruct_proof and reconstruct_proof_with_lookup so single-proof
callers pay only the HashMap build cost and both functions reuse the same fast
lookup (refer to reconstruct_proof, reconstruct_proof_with_lookup, self.nodes,
self.node_hashes, and Self::encode_node_index).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 49a80497-febd-41e6-8b40-d03003ba8e77
⛔ Files ignored due to path filters (7)
- Cargo.lock is excluded by !**/*.lock and included by none
- forester-utils/src/address_staging_tree.rs is excluded by none and included by none
- program-tests/utils/src/e2e_test_env.rs is excluded by none and included by none
- program-tests/utils/src/mock_batched_forester.rs is excluded by none and included by none
- program-tests/utils/src/test_batch_forester.rs is excluded by none and included by none
- sparse-merkle-tree/src/indexed_changelog.rs is excluded by none and included by none
- sparse-merkle-tree/tests/indexed_changelog.rs is excluded by none and included by none
📒 Files selected for processing (12)
- forester/src/processor/v2/helpers.rs
- forester/src/processor/v2/strategy/address.rs
- prover/client/src/constants.rs
- prover/client/src/errors.rs
- prover/client/src/helpers.rs
- prover/client/src/proof_types/batch_address_append/proof_inputs.rs
- prover/client/src/proof_types/batch_append/proof_inputs.rs
- prover/client/src/proof_types/batch_update/proof_inputs.rs
- prover/client/src/prover.rs
- prover/client/tests/batch_address_append.rs
- sdk-libs/client/src/indexer/types/queue.rs
- sdk-libs/program-test/src/indexer/test_indexer.rs
Force-pushed from 4650c0a to b937409
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
prover/client/src/prover.rs (1)
53-71: ⚠️ Potential issue | 🟠 Major

Replace blocking `sleep` in async function with `tokio::time::sleep`.

The `health_check` function is async but uses `std::thread::sleep`, which blocks executor threads and degrades concurrent request handling. Replace with `tokio::time::sleep(...).await` at line 69, and remove `thread::sleep` from imports.

♻️ Proposed fix
```diff
 use std::{
     process::Command,
     sync::atomic::{AtomicBool, Ordering},
-    thread::sleep,
     time::Duration,
 };
@@
         Err(_) => {
-            sleep(Duration::from_secs(timeout as u64));
+            tokio::time::sleep(Duration::from_secs(timeout as u64)).await;
         }
```
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/prover.rs` around lines 53 - 71, The async function health_check is using blocking std::thread::sleep; replace that call with tokio::time::sleep(...).await inside the for-loop so the sleep is non-blocking, and remove any std::thread::sleep import; update the import list to use tokio::time::Duration or std::time::Duration as needed and ensure the call is awaited in health_check.

sdk-libs/client/src/indexer/types/queue.rs (1)
70-122: 🧹 Nitpick | 🔵 Trivial

Code duplication between `reconstruct_proof` and `reconstruct_proof_with_lookup`.

These two methods share nearly identical logic; only the node lookup differs. This creates a maintenance burden: if the proof reconstruction algorithm changes, both must be updated.

Consider having `reconstruct_proof` delegate to `reconstruct_proof_with_lookup`:

♻️ Suggested refactor
```diff
 pub fn reconstruct_proof<const HEIGHT: usize>(
     &self,
     address_idx: usize,
 ) -> Result<[[u8; 32]; HEIGHT], IndexerError> {
-    let leaf_index = *self.low_element_indices.get(address_idx).ok_or_else(|| {
-        IndexerError::MissingResult {
-            context: "reconstruct_proof".to_string(),
-            message: format!(
-                "address_idx {} out of bounds for low_element_indices (len {})",
-                address_idx,
-                self.low_element_indices.len(),
-            ),
-        }
-    })?;
-    let mut proof = [[0u8; 32]; HEIGHT];
-    let mut pos = leaf_index;
-
-    for (level, proof_element) in proof.iter_mut().enumerate() {
-        let sibling_pos = if pos.is_multiple_of(2) {
-            pos + 1
-        } else {
-            pos - 1
-        };
-        let sibling_idx = Self::encode_node_index(level, sibling_pos);
-
-        let hash_idx = self
-            .nodes
-            .iter()
-            .position(|&n| n == sibling_idx)
-            .ok_or_else(|| IndexerError::MissingResult {
-                context: "reconstruct_proof".to_string(),
-                message: format!(
-                    "Missing proof node at level {} position {} (encoded: {})",
-                    level, sibling_pos, sibling_idx
-                ),
-            })?;
-        let hash = self
-            .node_hashes
-            .get(hash_idx)
-            .ok_or_else(|| IndexerError::MissingResult {
-                context: "reconstruct_proof".to_string(),
-                message: format!(
-                    "node_hashes index {} out of bounds (len {})",
-                    hash_idx,
-                    self.node_hashes.len(),
-                ),
-            })?;
-        *proof_element = *hash;
-        pos /= 2;
-    }
-
-    Ok(proof)
+    let node_lookup = self.build_node_lookup();
+    self.reconstruct_proof_with_lookup::<HEIGHT>(address_idx, &node_lookup)
 }
```

This trades a small performance cost for single-proof calls (building the HashMap) in exchange for DRY code. If single-proof performance is critical, you could keep both but extract the core loop into a private generic helper.
Also applies to: 155-205
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sdk-libs/client/src/indexer/types/queue.rs` around lines 70 - 122, The reconstruct_proof implementation duplicates the logic in reconstruct_proof_with_lookup; refactor by making reconstruct_proof delegate to reconstruct_proof_with_lookup (or extract the shared loop into a private helper) so node-lookup differences are isolated: have reconstruct_proof build or call into the same lookup mechanism used by reconstruct_proof_with_lookup (using low_element_indices, nodes, node_hashes and Self::encode_node_index) or provide a private helper fn (e.g., reconstruct_proof_core) that takes a closure/lookup for finding sibling hashes and contains the iteration over levels and pos; update reconstruct_proof to call that helper to eliminate the duplicated loop and error construction.
♻️ Duplicate comments (1)
prover/client/src/proof_types/batch_address_append/proof_inputs.rs (1)
202-210: ⚠️ Potential issue | 🟠 Major

Missing input length validation can cause panics.

Line 202 slices `new_element_values[..zkp_batch_size]` without checking bounds. If `zkp_batch_size > new_element_values.len()`, this panics. Similarly, the loop starting at line 244 indexes into `low_element_*` arrays using `i` without validating their lengths match.

🛡️ Proposed fix: validate lengths up front
```diff
 pub fn get_batch_address_append_circuit_inputs<const HEIGHT: usize>(
     ...
 ) -> Result<BatchAddressAppendInputs, ProverClientError> {
+    if zkp_batch_size > new_element_values.len() {
+        return Err(ProverClientError::GenericError(format!(
+            "zkp_batch_size ({}) exceeds new_element_values length ({})",
+            zkp_batch_size,
+            new_element_values.len()
+        )));
+    }
+    let min_len = zkp_batch_size.min(new_element_values.len());
+    if low_element_values.len() < min_len
+        || low_element_next_values.len() < min_len
+        || low_element_indices.len() < min_len
+        || low_element_next_indices.len() < min_len
+        || low_element_proofs.len() < min_len
+    {
+        return Err(ProverClientError::GenericError(format!(
+            "input length mismatch: expected at least {}, got low_values={}, low_next_values={}, \
+             low_indices={}, low_next_indices={}, low_proofs={}",
+            min_len,
+            low_element_values.len(),
+            low_element_next_values.len(),
+            low_element_indices.len(),
+            low_element_next_indices.len(),
+            low_element_proofs.len()
+        )));
+    }
     let new_element_values = &new_element_values[..zkp_batch_size];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs` around lines 202 - 210, The code slices new_element_values with new_element_values[..zkp_batch_size] and later indexes low_element_values/low_element_indices/low_element_next_values/etc. without validating lengths, which can panic; add upfront length checks comparing zkp_batch_size to new_element_values.len() and ensure all low_element_* vecs have at least zkp_batch_size entries (or match new_element_values.len()) before creating the slice or looping, returning an error (or truncating safely) if validation fails so the code never performs out-of-bounds slicing/indexing in the functions handling new_element_values, zkp_batch_size, low_element_values, low_element_indices, low_element_next_values, low_element_next_indices, and related vectors.
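The precondition pattern generalizes beyond this one function. A minimal self-contained sketch (names are illustrative, not the crate's API): check the primary slice against the batch size, then verify every parallel input is long enough, and return `Err` before any slicing or indexed access can panic.

```rust
// Generic sketch of the up-front length validation: verify that every
// parallel input slice is long enough before slicing or indexing.
fn validate_batch_lens(
    zkp_batch_size: usize,
    new_element_values: &[[u8; 32]],
    parallel_lens: &[usize],
) -> Result<(), String> {
    if zkp_batch_size > new_element_values.len() {
        return Err(format!(
            "zkp_batch_size ({zkp_batch_size}) exceeds new_element_values length ({})",
            new_element_values.len()
        ));
    }
    // Every parallel input (low_element_values, low_element_indices, ...)
    // must provide at least zkp_batch_size entries.
    if let Some((i, &len)) = parallel_lens
        .iter()
        .enumerate()
        .find(|&(_, &len)| len < zkp_batch_size)
    {
        return Err(format!(
            "parallel input #{i} has length {len}, expected at least {zkp_batch_size}"
        ));
    }
    Ok(())
}
```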
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@prover/client/src/prover.rs`:
- Around line 53-71: The async function health_check is using blocking
std::thread::sleep; replace that call with tokio::time::sleep(...).await inside
the for-loop so the sleep is non-blocking, and remove any std::thread::sleep
import; update the import list to use tokio::time::Duration or
std::time::Duration as needed and ensure the call is awaited in health_check.
In `@sdk-libs/client/src/indexer/types/queue.rs`:
- Around line 70-122: The reconstruct_proof implementation duplicates the logic
in reconstruct_proof_with_lookup; refactor by making reconstruct_proof delegate
to reconstruct_proof_with_lookup (or extract the shared loop into a private
helper) so node-lookup differences are isolated: have reconstruct_proof build or
call into the same lookup mechanism used by reconstruct_proof_with_lookup (using
low_element_indices, nodes, node_hashes and Self::encode_node_index) or provide
a private helper fn (e.g., reconstruct_proof_core) that takes a closure/lookup
for finding sibling hashes and contains the iteration over levels and pos;
update reconstruct_proof to call that helper to eliminate the duplicated loop
and error construction.
---
Duplicate comments:
In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs`:
- Around line 202-210: The code slices new_element_values with
new_element_values[..zkp_batch_size] and later indexes
low_element_values/low_element_indices/low_element_next_values/etc. without
validating lengths, which can panic; add upfront length checks comparing
zkp_batch_size to new_element_values.len() and ensure all low_element_* vecs
have at least zkp_batch_size entries (or match new_element_values.len()) before
creating the slice or looping, returning an error (or truncating safely) if
validation fails so the code never performs out-of-bounds slicing/indexing in
the functions handling new_element_values, zkp_batch_size, low_element_values,
low_element_indices, low_element_next_values, low_element_next_indices, and
related vectors.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 58a5f9a3-24e1-49a0-b5b9-5befb0c33fa7
⛔ Files ignored due to path filters (8)
- Cargo.lock is excluded by !**/*.lock and included by none
- external/photon is excluded by none and included by none
- forester-utils/src/address_staging_tree.rs is excluded by none and included by none
- program-tests/utils/src/e2e_test_env.rs is excluded by none and included by none
- program-tests/utils/src/mock_batched_forester.rs is excluded by none and included by none
- program-tests/utils/src/test_batch_forester.rs is excluded by none and included by none
- sparse-merkle-tree/src/indexed_changelog.rs is excluded by none and included by none
- sparse-merkle-tree/tests/indexed_changelog.rs is excluded by none and included by none
📒 Files selected for processing (12)
- forester/src/processor/v2/helpers.rs
- forester/src/processor/v2/strategy/address.rs
- prover/client/src/constants.rs
- prover/client/src/errors.rs
- prover/client/src/helpers.rs
- prover/client/src/proof_types/batch_address_append/proof_inputs.rs
- prover/client/src/proof_types/batch_append/proof_inputs.rs
- prover/client/src/proof_types/batch_update/proof_inputs.rs
- prover/client/src/prover.rs
- prover/client/tests/batch_address_append.rs
- sdk-libs/client/src/indexer/types/queue.rs
- sdk-libs/program-test/src/indexer/test_indexer.rs
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
prover/client/src/proof_types/batch_address_append/proof_inputs.rs (1)
576-579: 🧹 Nitpick | 🔵 Trivial

Minor: Redundant slice syntax.

`new_element_values[0..]` is equivalent to just `new_element_values`. The slice already starts at index 0.

🧹 Suggested cleanup
```diff
-    new_element_values: new_element_values[0..]
+    new_element_values: new_element_values
         .iter()
         .map(|v| BigUint::from_bytes_be(v))
         .collect(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs` around lines 576 - 579, The code uses a redundant slice expression new_element_values[0..].iter() when mapping bytes to BigUint; simplify by iterating directly over new_element_values (e.g., use new_element_values.iter() or into_iter() as appropriate) in the map that constructs new_element_values for the struct—update the site that performs .map(|v| BigUint::from_bytes_be(v)).collect() to remove the unnecessary [0..] slice.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@prover/client/src/proof_types/batch_address_append/proof_inputs.rs`:
- Around line 576-579: The code uses a redundant slice expression
new_element_values[0..].iter() when mapping bytes to BigUint; simplify by
iterating directly over new_element_values (e.g., use new_element_values.iter()
or into_iter() as appropriate) in the map that constructs new_element_values for
the struct—update the site that performs .map(|v|
BigUint::from_bytes_be(v)).collect() to remove the unnecessary [0..] slice.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 66581aa8-7c8c-41f3-8820-f985926f6df8
⛔ Files ignored due to path filters (1)
- Cargo.lock is excluded by !**/*.lock and included by none
📒 Files selected for processing (1)
prover/client/src/proof_types/batch_address_append/proof_inputs.rs
```rust
        .collect()
}

fn reconstruct_proof_with_lookup<const HEIGHT: usize>(
```
reconstruct_proof and reconstruct_proof_with_lookup share ~40 lines of identical iteration/error-handling logic, differing only in the node lookup strategy (linear scan vs HashMap). Consider having reconstruct_proof delegate to reconstruct_proof_with_lookup with a one-off HashMap, or extracting the shared loop body into a helper that takes a generic lookup closure.
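The extraction suggested above can be sketched as follows. This is a self-contained toy, not the real queue type: the node encoding and error type are simplified stand-ins, and the two wrappers show how the linear-scan and HashMap paths become thin shells around one shared core that takes a lookup closure.

```rust
use std::collections::HashMap;

// Simplified stand-in for Self::encode_node_index.
fn encode_node_index(level: usize, pos: usize) -> u64 {
    ((level as u64) << 32) | pos as u64
}

// Shared core loop, generic over how a sibling hash is found.
fn reconstruct_core<const HEIGHT: usize>(
    leaf_index: usize,
    mut lookup: impl FnMut(u64) -> Option<[u8; 32]>,
) -> Result<[[u8; 32]; HEIGHT], String> {
    let mut proof = [[0u8; 32]; HEIGHT];
    let mut pos = leaf_index;
    for (level, slot) in proof.iter_mut().enumerate() {
        let sibling_pos = if pos % 2 == 0 { pos + 1 } else { pos - 1 };
        let key = encode_node_index(level, sibling_pos);
        *slot = lookup(key).ok_or_else(|| format!("missing node {key}"))?;
        pos /= 2;
    }
    Ok(proof)
}

// Linear-scan wrapper (what reconstruct_proof does today).
fn via_scan<const HEIGHT: usize>(
    nodes: &[u64],
    hashes: &[[u8; 32]],
    leaf_index: usize,
) -> Result<[[u8; 32]; HEIGHT], String> {
    reconstruct_core::<HEIGHT>(leaf_index, |key| {
        nodes.iter().position(|&n| n == key).map(|i| hashes[i])
    })
}

// HashMap wrapper (what reconstruct_proof_with_lookup does today).
fn via_map<const HEIGHT: usize>(
    nodes: &[u64],
    hashes: &[[u8; 32]],
    leaf_index: usize,
) -> Result<[[u8; 32]; HEIGHT], String> {
    let map: HashMap<u64, [u8; 32]> =
        nodes.iter().copied().zip(hashes.iter().copied()).collect();
    reconstruct_core::<HEIGHT>(leaf_index, |key| map.get(&key).copied())
}
```

With this shape the iteration and error handling live in one place, and the two public methods differ only in the closure they pass.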
```diff
 /// Reconstruct a merkle proof for a given low_element_index from the deduplicated nodes.
 /// The tree_height is needed to know how many levels to traverse.
-pub fn reconstruct_proof(
+pub fn reconstruct_proof<const HEIGHT: usize>(
```
reconstruct_proof (singular) is pub but only called in tests within this module. If there are no external consumers, narrowing to pub(crate) or gating with #[cfg(test)] would make that intent explicit.
Summary by CodeRabbit
Refactor
Bug Fixes
Tests
Chores