fix(signer): handle missing/behind KES period on registration #2952
leepl37 wants to merge 4 commits into input-output-hk:main
Conversation
```rust
let mut attempt = 0;
let max_retries = 5;
let retry_delay = std::time::Duration::from_secs(1);

let (current_kes_period, kes_evolutions) = loop {
    attempt += 1;

    let kes_period = self.services.chain_observer.get_current_kes_period().await?;

    let kes_period = match kes_period {
        Some(kes_period) => kes_period,
        None => {
            if attempt >= max_retries {
                return Err(
                    RunnerError::NoValueError("current_kes_period".to_string()).into()
                );
            }
            warn!(
                self.logger,
                "Current KES period is not available yet. Retrying...";
                "attempt" => attempt,
                "max_retries" => max_retries
            );
            tokio::time::sleep(retry_delay).await;
            continue;
        }
    };

    let check_result = match &operational_certificate {
        Some(op_cert) => {
            let start_kes_period = op_cert.get_start_kes_period();
            if kes_period < start_kes_period {
                Err(start_kes_period)
            } else {
                Ok(Some(kes_period - start_kes_period))
            }
        }
        None => Ok(None),
    };

    match check_result {
        Ok(evolutions) => break (kes_period, evolutions),
        Err(start_kes_period) => {
            if attempt >= max_retries {
                warn!(
                    self.logger,
                    "Current KES period is behind operational certificate start period.";
                    "current_kes_period" => u64::from(kes_period),
                    "start_kes_period" => u64::from(start_kes_period)
                );
                return Err(
                    RunnerError::NoValueError("kes_period_underflow".to_string()).into(),
                );
            }
            warn!(
                self.logger,
                "KES period mismatch. Retrying...";
                "current_kes_period" => u64::from(kes_period),
                "start_kes_period" => u64::from(start_kes_period),
                "attempt" => attempt,
                "max_retries" => max_retries
            );
            tokio::time::sleep(retry_delay).await;
        }
    }
};
```
I agree that we could trigger an error if the Cardano node is not completely synced:
- `current_kes_period` is `None`
- or `current_kes_period` is less than `operational_certificate.get_start_kes_period()`

However, this code is too complex and does not take advantage of the retry mechanism of the state machine (an error will restart the state machine, which will eventually run this same `register_signer_to_aggregator` code later).
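The fail-fast alternative suggested here can be sketched as a single check whose errors simply bubble up to the state machine for retry. This is an illustrative reduction, not the crate's real API: `compute_kes_evolutions` and the plain `u32`/`String` types are stand-ins for the actual `KesPeriod` newtype and `RunnerError` enum.

```rust
// Hypothetical fail-fast version of the KES checks: no inner retry loop,
// every bad state is returned as an error so the signer's state machine
// re-runs registration on its next cycle. Types are simplified for the sketch.
fn compute_kes_evolutions(
    current_kes_period: Option<u32>,
    start_kes_period: Option<u32>, // None when there is no operational certificate
) -> Result<Option<u32>, String> {
    match (current_kes_period, start_kes_period) {
        // Node not synced yet: surface the missing value instead of defaulting.
        (None, _) => Err("current_kes_period is not available yet".to_string()),
        // Stale chain info: current period behind the certificate's start.
        (Some(current), Some(start)) if current < start => {
            Err(format!("kes_period_underflow: {current} < {start}"))
        }
        (Some(current), Some(start)) => Ok(Some(current - start)),
        (Some(_), None) => Ok(None),
    }
}

fn main() {
    println!("{:?}", compute_kes_evolutions(Some(10), Some(4)));
    println!("{:?}", compute_kes_evolutions(None, Some(4)));
}
```

The trade-off, as the thread below discusses, is that each failure then costs a full state-machine cycle rather than a one-second inner retry.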
Thanks for the review @jpraynaud!
I agree that the retry loop adds complexity and that relying on the State Machine is usually the cleaner default.
The main reason I tried this workaround was to solve a race condition I kept hitting in the E2E Devnet tests:
- The Trigger: When the `cardano-node` is briefly syncing at an epoch boundary, it reports an outdated KES period.
- The Problem: This error causes the State Machine to trigger its standard `run_interval` sleep (default 5s).
- The Failure: Since Devnet epochs are extremely short, even a single backoff cycle creates a risk of missing the registration window.
Evidence:
- Devnet Config: The epoch duration is only 75 seconds (0.75s slot length * 100 slots).
- The Error: Before adding this loop, I consistently encountered this failure in E2E tests:
```text
Error(Unretryable): Mithril End to End test failed
Caused by:
    0: Requesting aggregator `mithril-aggregator-1`
    1: Minimum expected mithril stake distribution epoch not reached: 20 < 21
```
This confirms the Signer missed the window for epoch 21 because it was sleeping during the transition time.
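The timing figures quoted above can be sanity-checked with a quick calculation. The numbers (0.75 s slots, 100 slots per epoch, 5 s default backoff) come straight from this comment; nothing here is read from the real Devnet config.

```rust
// Back-of-the-envelope check of the Devnet race window cited above.
fn epoch_duration_s(slot_length_s: f64, slots_per_epoch: f64) -> f64 {
    slot_length_s * slots_per_epoch
}

fn main() {
    let epoch = epoch_duration_s(0.75, 100.0); // 75 s per Devnet epoch
    let backoff = 5.0; // default state-machine run_interval sleep, in seconds
    // A single backoff cycle consumes a meaningful slice of the epoch,
    // so one unlucky sleep near the boundary can miss the registration window.
    println!(
        "epoch = {epoch} s; one backoff burns {:.1}% of it",
        100.0 * backoff / epoch
    );
}
```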
Mainnet & CI Verification:
- Devnet: The CI build passed with this fix, confirming it stabilizes the race condition.
- Mainnet: This logic is also safe for Mainnet. While the standard 5s backoff is negligible on Mainnet (due to 5-day epochs), adding this fast retry makes the Mainnet signer more robust against transient node glitches without introducing risks.
Question: Is there a preferred way to trigger an "immediate" retry (skipping the standard backoff) for this specific transient error? If not, I can remove the loop, but I am worried it will re-introduce these random CI failures.
Content
This PR hardens signer registration by avoiding implicit defaults for the KES period.
Previously, `get_current_kes_period()` returning `None` would fall back to `KesPeriod(0)` (via `unwrap_or_default`), which could produce incorrect `kes_evolutions`.

Changes
- Returns `RunnerError::NoValueError` when the KES period is missing (`None`), so the signer retries safely instead of defaulting to 0.
- Handles `current < start` by logging a warning and returning an error, preventing registration with invalid data.

Impact
Prevents registration with invalid KES evolution data during sync or stale chain info.
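The before/after behavior described in this PR can be sketched side by side. `KesPeriod` is reduced to a plain `u32` here for illustration; the real crate uses its own types and error enum, and the function names below are hypothetical.

```rust
// Illustrative sketch of the bug and the fix, assuming simplified types.
type KesPeriod = u32;

// Old behaviour: a missing period silently becomes KesPeriod(0) via
// `unwrap_or_default`, so the computation "succeeds" with bogus data
// (saturating here only to keep the sketch panic-free).
fn evolutions_with_default(current: Option<KesPeriod>, start: KesPeriod) -> KesPeriod {
    current.unwrap_or_default().saturating_sub(start)
}

// New behaviour: a missing period and `current < start` both surface as
// errors, so the signer retries instead of registering with invalid data.
fn evolutions_checked(current: Option<KesPeriod>, start: KesPeriod) -> Result<KesPeriod, String> {
    let current = current.ok_or_else(|| "current_kes_period".to_string())?;
    current
        .checked_sub(start)
        .ok_or_else(|| format!("kes_period_underflow: {current} < {start}"))
}

fn main() {
    // With the node still syncing (no KES period yet):
    println!("old: {}", evolutions_with_default(None, 0)); // silently 0
    println!("new: {:?}", evolutions_checked(None, 0)); // explicit error
}
```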
Pre-submit checklist
Comments
Issue(s)
N/A