Enforce single AttestationData per block (leanSpec PR #510) #258
MegaRedHand merged 11 commits into devnet4
Conversation
Add block validation rejecting duplicate AttestationData entries and a compaction step in block building that merges proofs sharing the same AttestationData into a single attestation entry. For empty proofs (skip-sig mode), participant bitfields are unioned. For real proofs, the proof with the most participants is kept until lean-multisig supports recursive aggregation.
🤖 Codex Code Review

Findings
I did not run tests locally.

Automated review by OpenAI Codex · gpt-5.4 · custom prompt
🤖 Kimi Code Review

Review of PR #258 for ethlambda. Overall, this is a solid implementation of the "unique AttestationData per block" consensus rule with proper compaction logic for block production.

Critical Issues
None found. The validation logic correctly enforces consensus rules before state transition.

Correctness & Logic
crates/blockchain/src/store.rs:1014-1023: Consider adding a comment explaining this trade-off, or implementing subset merging (if one bitfield is a subset of another, keep the superset):

```rust
// TODO: If bitfields are disjoint, we could keep both by concatenating proof_data
// when lean-multisig supports proof aggregation.
```

crates/blockchain/src/store.rs:935

Automated review by Kimi (Moonshot AI) · kimi-k2.5 · custom prompt
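Kimi's subset-merging idea can be sketched as a small standalone helper. This is illustrative only: `Vec<bool>` stands in for the real `AggregationBits` type, which this sketch does not use.

```rust
// Hypothetical sketch: `Vec<bool>` is a stand-in for the real AggregationBits type.
type Bitfield = Vec<bool>;

/// Returns true if every participant set in `a` is also set in `b`,
/// i.e. `a`'s coverage is fully contained in `b`'s.
fn is_subset(a: &Bitfield, b: &Bitfield) -> bool {
    a.len() == b.len() && a.iter().zip(b).all(|(&x, &y)| !x || y)
}

fn main() {
    let small = vec![true, false, false];
    let big = vec![true, true, false];
    // `small`'s only participant is covered by `big`, so `big` can be kept alone.
    assert!(is_subset(&small, &big));
    assert!(!is_subset(&big, &small));
}
```

With such a check, a group where one proof's bitfield contains the other's could safely keep only the superset proof without losing coverage.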
🤖 Claude Code Review

Now I have enough context. Here's my review:

Review: PR #258 — Enforce single AttestationData per block

The overall approach is sound and well-structured. The validation/building split is correct, and the tests cover the happy paths well. A few issues worth addressing:

Bug: Mixed empty/real proofs selects wrong proof

In the real-proofs branch:

```rust
// Current: may select empty proof over real proof
let best_idx = indices
    .iter()
    .copied()
    .max_by_key(|&i| proofs[i].participant_indices().count())
    .expect("group is non-empty");
```

Fix: prefer real proofs, tiebreak by participant count:

```rust
.max_by_key(|&i| (!proofs[i].proof_data.is_empty(), proofs[i].participant_indices().count()))
```

There's no test for the mixed case. This scenario may not arise in practice today (all proofs in a given run are either all-real or all-empty), but the logic is silently wrong and becomes a latent consensus bug.

Silent coverage loss with real proofs

When multiple real proofs share the same AttestationData, only one is kept. The comment acknowledges "Full merge deferred until lean-multisig supports recursive aggregation", which is a fair constraint. But the behavior is only documented in the PR description, not in the code comment. The inline comment at line ~1004 should explicitly state the consequence:

```rust
// Cannot merge real proofs yet (lean-multisig doesn't support recursive aggregation).
// Keeping only the best-coverage proof; the other proofs' validator coverage is lost
// for this block. TODO: merge when lean-multisig supports recursive aggregation.
```
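The suggested tiebreak can be checked in isolation. The struct below is a mock with the field names the review uses (`proof_data`, `participants`), not the real `AggregatedSignatureProof` type:

```rust
// Mock of the relevant fields; the real type lives in crates/blockchain.
struct Proof {
    proof_data: Vec<u8>,
    participants: Vec<bool>,
}

impl Proof {
    fn participant_count(&self) -> usize {
        self.participants.iter().filter(|&&b| b).count()
    }
}

/// Prefer real proofs (non-empty proof_data), then tiebreak by participant count.
/// Tuple ordering makes any real proof beat any empty proof.
fn best_proof_index(proofs: &[Proof], indices: &[usize]) -> Option<usize> {
    indices
        .iter()
        .copied()
        .max_by_key(|&i| (!proofs[i].proof_data.is_empty(), proofs[i].participant_count()))
}

fn main() {
    let proofs = vec![
        Proof { proof_data: vec![], participants: vec![true; 10] },    // empty, large bitfield
        Proof { proof_data: vec![1, 2], participants: vec![true; 3] }, // real, smaller coverage
    ];
    // The real proof wins despite covering fewer participants.
    assert_eq!(best_proof_index(&proofs, &[0, 1]), Some(1));
}
```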
| Severity | Item |
|---|---|
| Medium bug | Mixed empty/real proofs: empty proof may win over real proof |
| High design gap | Silent validator coverage loss when discarding real proofs (needs better inline docs) |
| Medium | expect panics in union_aggregation_bits — no graceful error path |
| Low | Double clone per new unique entry |
| Low | Missing test for mixed empty/real proof case |
| Nit | debug_assert_eq for a load-bearing invariant |
Automated review by Claude (Anthropic) · sonnet · custom prompt
Greptile Summary

This PR enforces the new leanSpec protocol invariant that each unique AttestationData appears at most once per block. The implementation is logically sound and well-tested for the expected cases, with one subtle edge noted below.

Confidence Score: 4/5

Safe to merge with one latent correctness edge case worth addressing before recursive aggregation lands. Both findings are P2 in practice (skip-sig and real-proof modes are mutually exclusive today; spec-transition enforcement is explicitly deferred), but the mixed-proof participant_indices issue is a future correctness trap that could silently produce blocks with cryptographically invalid proofs once the modes interact.

crates/blockchain/src/store.rs — specifically the else branch of compact_attestations (lines 1004-1013) and the intermediate process_block call in the justification loop (lines 1144-1166).
| Filename | Overview |
|---|---|
| crates/blockchain/src/store.rs | Adds O(n) duplicate-AttestationData check in on_block_core and compact_attestations helper in build_block; five new unit tests cover the main paths; one latent mixed-proof edge case in the real-proofs branch. |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[build_block called] --> B[extend_proofs_greedily loop]
    B --> C{justification advanced?}
    C -- yes --> B
    C -- no --> D[compact_attestations]
    D --> E{any duplicate AttestationData?}
    E -- no --> F[fast path: return as-is]
    E -- yes --> G{all proofs empty?}
    G -- yes --> H[union aggregation_bits → single empty proof]
    G -- no --> I[keep proof with most participant_indices]
    H --> J[final build_block + process_block]
    I --> J
    F --> J
    J --> K[on_block_core receives block]
    K --> L[HashSet uniqueness check]
    L --> M{duplicates?}
    M -- yes --> N[Err: DuplicateAttestationData]
    M -- no --> O[sig verification + state transition]
    O --> P[block accepted]
```
Comments Outside Diff (1)

crates/blockchain/src/store.rs, line 1144-1166 (link)

Uncompacted intermediate block may conflict with future state-transition enforcement

The `aggregated_attestations` passed to the intermediate `process_block` call inside the justification-check loop can still contain duplicate `AttestationData` entries (produced by `extend_proofs_greedily`). Currently only `on_block_core` enforces the uniqueness invariant, so this is harmless. But when the leanSpec PR #510 state-transition fixtures land and `process_block` itself begins rejecting duplicates, these intermediate calls will start returning an error and the loop will break unexpectedly. The PR description acknowledges the deferred fixture update, but neither the comment on line 1169 nor the loop body flags this forward-compatibility gap.
crates/blockchain/src/store.rs (outdated)

```rust
} else {
    // Cannot merge real proofs; keep the one with the most participants
    let best_idx = indices
        .iter()
        .copied()
        .max_by_key(|&i| proofs[i].participant_indices().count())
        .expect("group is non-empty");
    compacted_atts.push(attestations[best_idx].clone());
    compacted_proofs.push(proofs[best_idx].clone());
}
```
participant_indices() called on empty proofs in mixed-proof groups
In the else branch (real-proofs path), max_by_key(|&i| proofs[i].participant_indices().count()) iterates over all indices in the group, including any that have proof_data.is_empty(). participant_indices() counts bits in the participants bitfield regardless of whether proof_data is present, so an empty proof with a large bitfield could win the comparison and be selected as "best". In a fully mixed group this would embed a proof with no cryptographic backing into the final block, which would then fail signature verification at the receiving node. In practice skip-sig mode and real-proof mode are mutually exclusive deployments, so this path won't trigger today — but it is a latent correctness trap worth guarding explicitly (e.g. by restricting the max_by_key iteration to indices where !proofs[i].proof_data.is_empty()).
Wrap the parallel attestation/proof vecs into `Vec<Option<(att, proof)>>` so items can be moved out by index via `.take()` instead of cloned. Also use early-exit duplicate detection in `on_block_core` and avoid a double clone in the grouping loop by cloning from the entry key.
The merge from devnet4 bumped the hash to ad9a322 (dual-key commit) which expects test key files with attestation_public/proposal_public fields that don't exist at that revision, causing 83 KeyError failures in the leanSpec test suite.
## Summary

- Expose `is_prepared_for(slot)` and `advance_preparation()` on `ValidatorSecretKey`, delegating to the leansig `SignatureSchemeSecretKey` trait
- Before signing in `KeyManager::sign_message()`, check if the target slot is within the prepared window and advance if needed
- Return a descriptive error if the key's activation interval is fully exhausted
- Add a timing test for `advance_preparation()` (526ms in release mode on Apple Silicon)

## Root Cause

XMSS keys use a Top-Bottom Tree Traversal scheme where only two consecutive bottom trees are loaded in memory at any time. Each bottom tree covers `sqrt(LIFETIME) = 2^16 = 65,536` slots, so the prepared window spans `131,072` slots (~6 days at 4s/slot). The leansig library provides `advance_preparation()` to slide this window forward by computing the next bottom tree, but ethlambda's `KeyManager` never called it. When the devnet at `admin@ethlambda-1` reached slot 131,072, all 4 nodes panicked simultaneously:

```
Signing: key not yet prepared for this epoch, try calling sk.advance_preparation.
```

The fix checks the prepared interval before every sign operation and advances the window on demand. This is a lazy approach — `advance_preparation` is called at signing time rather than proactively in the tick loop — because:

- It happens once every ~3 days (65,536 slots)
- The computation (one bottom tree of hash leaves) takes ~526ms in release mode
- It keeps the change minimal and avoids tick-loop complexity

## Test plan

- [x] `make fmt` clean
- [x] `make lint` clean
- [x] `make test` passes (existing tests use small lifetimes or skip verification)
- [x] `test_advance_preparation_duration` passes (`cargo test -p ethlambda-types test_advance_preparation_duration --release -- --ignored --nocapture`)
- [ ] Deploy to devnet with fresh genesis and verify it runs past slot 131,072 without panic
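The lazy check-and-advance described above can be sketched with a mock key. The window arithmetic mirrors the description (two loaded bottom trees of 2^16 slots each); the struct and method bodies are illustrative stand-ins, not leansig's actual implementation:

```rust
// Mock of the prepared-window bookkeeping; the real logic lives behind the
// leansig SignatureSchemeSecretKey trait.
const BOTTOM_TREE_SLOTS: u64 = 1 << 16; // 65,536 slots per bottom tree

struct MockKey {
    window_start: u64, // first slot of the older loaded bottom tree
    lifetime: u64,     // total activation interval of the key
}

impl MockKey {
    /// Two consecutive bottom trees are loaded, so the window spans 2 * 2^16 slots.
    fn is_prepared_for(&self, slot: u64) -> bool {
        slot >= self.window_start && slot < self.window_start + 2 * BOTTOM_TREE_SLOTS
    }

    /// Slide the window forward by one bottom tree (the expensive ~526ms step
    /// in the real scheme; trivial in this mock).
    fn advance_preparation(&mut self) -> Result<(), &'static str> {
        if self.window_start + 2 * BOTTOM_TREE_SLOTS >= self.lifetime {
            return Err("activation interval exhausted");
        }
        self.window_start += BOTTOM_TREE_SLOTS;
        Ok(())
    }
}

/// Lazy check-and-advance before signing, as the fix does in sign_message().
fn prepare_for_slot(key: &mut MockKey, slot: u64) -> Result<(), &'static str> {
    while !key.is_prepared_for(slot) {
        key.advance_preparation()?;
    }
    Ok(())
}

fn main() {
    let mut key = MockKey { window_start: 0, lifetime: 1 << 20 };
    // Slot 131,072 is the first slot outside the initial window: the devnet panic point.
    assert!(!key.is_prepared_for(131_072));
    prepare_for_slot(&mut key, 131_072).unwrap();
    assert!(key.is_prepared_for(131_072));
}
```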
This PR adds additional "big message" checks in the P2P layer: - when encoding a payload, check it doesn't go over the max uncompressed size - when responding to `BlocksByRoot` requests, skip big blocks None of these code paths should be triggered during normal operation, but they improve network stability by keeping errors local.
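The encode-side guard can be sketched as below. The constant value and error shape are assumptions for illustration, not ethlambda's actual P2P API:

```rust
// Hypothetical sketch of the encode-time size guard; the 10 MiB cap and
// error type are illustrative assumptions.
const MAX_UNCOMPRESSED_SIZE: usize = 10 * 1024 * 1024;

fn encode_payload(payload: &[u8]) -> Result<Vec<u8>, String> {
    if payload.len() > MAX_UNCOMPRESSED_SIZE {
        // Fail locally instead of emitting an oversized message peers would reject.
        return Err(format!(
            "payload of {} bytes exceeds max uncompressed size {}",
            payload.len(),
            MAX_UNCOMPRESSED_SIZE
        ));
    }
    Ok(payload.to_vec()) // real code would compress/frame here
}

fn main() {
    assert!(encode_payload(&[0u8; 16]).is_ok());
    assert!(encode_payload(&vec![0u8; MAX_UNCOMPRESSED_SIZE + 1]).is_err());
}
```

Keeping the check at encode time means an oversized block surfaces as a local error rather than a remote disconnect, which is what "keeping errors local" refers to.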
crates/blockchain/src/store.rs (outdated)

```rust
if all_empty {
    // Merge: take all entries and fold their participant bitfields
    let mut merged_bits = None;
    for &idx in indices {
        let (att, _) = items[idx].take().expect("index used once");
        merged_bits = Some(match merged_bits {
            None => att.aggregation_bits,
            Some(acc) => union_aggregation_bits(&acc, &att.aggregation_bits),
        });
    }
    let merged_bits = merged_bits.expect("group is non-empty");
    compacted_proofs.push(AggregatedSignatureProof::empty(merged_bits.clone()));
    compacted_atts.push(AggregatedAttestation {
        aggregation_bits: merged_bits,
        data,
    });
} else {
```
This isn't in the spec from what I see. It seems unnecessary
You're right, removed it in the latest commit along with its test. The merge path (union of participant bitfields) is now the only path for duplicate entries.
The else branch handling multiple real proofs for the same AttestationData is not in the spec and doesn't arise in practice (skip-sig and real-proof modes are mutually exclusive). Simplify to always merge via union of participant bitfields.
Motivation

Currently, `build_block` can produce multiple attestation entries sharing the same `AttestationData` -- each backed by a different aggregated signature proof. This happens when `extend_proofs_greedily` selects multiple proofs for the same vote data (e.g., from different aggregation intervals covering non-overlapping validator sets).

leanSpec PR #510 introduces a new protocol invariant: each unique `AttestationData` must appear at most once per block. Multiple proofs for the same vote are compacted into a single entry, reducing block size and simplifying downstream processing. The invariant is enforced on both the building and validation sides.

Description

Block validation (`on_block_core`)

A cheap O(n) uniqueness check is inserted before signature verification and state transition (the two most expensive steps). If a block contains duplicate `AttestationData` entries, it is rejected with a new `StoreError::DuplicateAttestationData` error. This uses `HashSet<&AttestationData>` -- possible because `AttestationData` already derives `Hash + Eq`.
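The uniqueness check can be sketched standalone. The struct below is a minimal mock (the real `AttestationData` has more fields but, as noted, already derives `Hash + Eq`):

```rust
use std::collections::HashSet;

// Minimal mock of AttestationData; the real type already derives Hash + Eq.
#[derive(Hash, PartialEq, Eq)]
struct AttestationData {
    slot: u64,
    head_root: [u8; 32],
}

/// O(n) duplicate detection via HashSet, run before the expensive
/// signature-verification and state-transition steps. `insert` returns
/// false on a repeat, so `any` short-circuits at the first duplicate.
fn has_duplicate_attestation_data(datas: &[AttestationData]) -> bool {
    let mut seen = HashSet::with_capacity(datas.len());
    datas.iter().any(|d| !seen.insert(d))
}

fn main() {
    let a = AttestationData { slot: 1, head_root: [0; 32] };
    let b = AttestationData { slot: 2, head_root: [0; 32] };
    assert!(!has_duplicate_attestation_data(&[a, b]));

    let c = AttestationData { slot: 1, head_root: [0; 32] };
    let dup = AttestationData { slot: 1, head_root: [0; 32] };
    assert!(has_duplicate_attestation_data(&[c, dup]));
}
```

A block failing this predicate would be rejected with the new error before any proof is verified.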
Block building (`build_block`)

After the existing greedy proof selection loop, a new `compact_attestations` step groups all `(attestation, proof)` pairs by their `AttestationData` and merges duplicates: participant bitfields are unioned via `union_aggregation_bits`, producing a single `AggregatedSignatureProof::empty(merged_bits)`.

The intermediate block built inside the justification-check loop is not compacted -- it only tests whether justification advances, and vote counting is identical regardless of entry layout.

New helpers

| Helper | Purpose |
|---|---|
| `union_aggregation_bits(a, b)` | Bitwise union of two `AggregationBits` bitfields |
| `compact_attestations(atts, proofs)` | Groups entries by `AttestationData`, merges duplicates, preserves first-occurrence order |
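The two helpers can be sketched together as follows. This is a simplified stand-in: `u64` replaces the real `AttestationData` struct, `Vec<bool>` replaces the SSZ bitlist, and proofs are omitted:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Stand-in types: the real helpers operate on AttestationData and SSZ bitlists.
type AttestationData = u64;
type AggregationBits = Vec<bool>;

/// Bitwise union of two equal-length bitfields (sketch of union_aggregation_bits).
fn union_aggregation_bits(a: &AggregationBits, b: &AggregationBits) -> AggregationBits {
    a.iter().zip(b).map(|(&x, &y)| x || y).collect()
}

/// Group attestations by data, union duplicate bitfields, and preserve
/// first-occurrence order (sketch of compact_attestations).
fn compact_attestations(
    atts: Vec<(AttestationData, AggregationBits)>,
) -> Vec<(AttestationData, AggregationBits)> {
    let mut order: Vec<AttestationData> = Vec::new();
    let mut merged: HashMap<AttestationData, AggregationBits> = HashMap::new();
    for (data, bits) in atts {
        match merged.entry(data) {
            Entry::Occupied(mut e) => {
                // Duplicate data: fold this entry's participants into the first one.
                let unioned = union_aggregation_bits(e.get(), &bits);
                e.insert(unioned);
            }
            Entry::Vacant(e) => {
                order.push(data);
                e.insert(bits);
            }
        }
    }
    order
        .into_iter()
        .map(|d| {
            let bits = merged.remove(&d).expect("every ordered key was inserted");
            (d, bits)
        })
        .collect()
}

fn main() {
    let out = compact_attestations(vec![
        (7, vec![true, false]),
        (9, vec![false, true]),
        (7, vec![false, true]), // duplicate of the first entry
    ]);
    // Duplicates merge into the first occurrence; order is preserved.
    assert_eq!(out, vec![(7, vec![true, true]), (9, vec![false, true])]);
}
```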
Future work

When lean-multisig adds recursive aggregation support, the `else` branch in `compact_attestations` (real proofs) can be upgraded to cryptographically merge all proofs instead of keeping only the best one. This will recover the small amount of validator coverage currently lost when multiple real proofs exist for the same `AttestationData`.

Test plan

- `compact_attestations_no_duplicates` -- distinct data passes through unchanged
- `compact_attestations_merges_empty_proofs` -- two entries with same data + empty proofs merge into one with unioned participants covering all validators
- `compact_attestations_real_proofs_keeps_best` -- two entries with same data + real proofs keeps the one with more participants
- `compact_attestations_preserves_order` -- multiple data entries (some duplicated) output in first-occurrence order
- `on_block_rejects_duplicate_attestation_data` -- block with duplicate entries returns `DuplicateAttestationData` error via `on_block_without_verification`
- `cargo fmt --all -- --check` clean
- `cargo clippy -p ethlambda-blockchain -- -D warnings` clean