fix(signer): handle missing/behind KES period on registration #2952

Open

leepl37 wants to merge 4 commits into input-output-hk:main from leepl37:fix/signer-kes-period

Conversation

Contributor

@leepl37 leepl37 commented Jan 24, 2026

Content

This PR hardens signer registration by removing an implicit default for the KES period.

Previously, when get_current_kes_period() returned None, the code fell back to KesPeriod(0) (via unwrap_or_default), which could produce an incorrect kes_evolutions value.

Changes

  • Return RunnerError::NoValueError when the KES period is missing (None), so the signer retries safely instead of defaulting to 0.
  • Guard against current < start by logging a warning and returning an error, preventing registration with invalid data.
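The change described above can be sketched with minimal stand-in types (the names below are illustrative only and differ from the actual mithril-signer codebase): the old path silently turned a missing period into KesPeriod(0), while the new path surfaces it as an error.

```rust
// Hypothetical minimal types for illustration; not the real mithril types.
#[derive(Debug, PartialEq)]
struct KesPeriod(u64);

#[derive(Debug, PartialEq)]
enum RunnerError {
    NoValueError(String),
}

// Before: a missing KES period silently became KesPeriod(0).
fn kes_period_before(observed: Option<u64>) -> KesPeriod {
    KesPeriod(observed.unwrap_or_default())
}

// After: a missing KES period is surfaced as an error so the caller can retry.
fn kes_period_after(observed: Option<u64>) -> Result<KesPeriod, RunnerError> {
    observed
        .map(KesPeriod)
        .ok_or_else(|| RunnerError::NoValueError("current_kes_period".to_string()))
}
```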

Impact
Prevents registration with invalid KES evolution data while the node is still syncing or reporting stale chain information.

Pre-submit checklist

  • Branch
    • Tests are provided (if possible)
    • Crates versions are updated (if relevant)
    • CHANGELOG file is updated (if relevant)
    • Commit sequence broadly makes sense
    • Key commits have useful messages
  • PR
    • All check jobs of the CI have succeeded
    • Self-reviewed the diff
    • Useful pull request description
    • Reviewer requested
  • Documentation
    • Update README file (if relevant)
    • Update documentation website (if relevant)
    • Add dev blog post (if relevant)
    • Add ADR blog post or Dev ADR entry (if relevant)
    • No new TODOs introduced

Comments

Issue(s)

N/A

@jpraynaud jpraynaud requested review from Alenar and jpraynaud January 26, 2026 07:16
Member

@jpraynaud jpraynaud left a comment


Thanks @leepl37 for this PR.

I left a comment explaining why it needs to be modified before we can merge it.

Comment on lines +201 to +266
let mut attempt = 0;
let max_retries = 5;
let retry_delay = std::time::Duration::from_secs(1);

let (current_kes_period, kes_evolutions) = loop {
    attempt += 1;

    let kes_period = self.services.chain_observer.get_current_kes_period().await?;

    let kes_period = match kes_period {
        Some(kes_period) => kes_period,
        None => {
            if attempt >= max_retries {
                return Err(
                    RunnerError::NoValueError("current_kes_period".to_string()).into(),
                );
            }
            warn!(
                self.logger,
                "Current KES period is not available yet. Retrying...";
                "attempt" => attempt,
                "max_retries" => max_retries
            );
            tokio::time::sleep(retry_delay).await;
            continue;
        }
    };

    let check_result = match &operational_certificate {
        Some(op_cert) => {
            let start_kes_period = op_cert.get_start_kes_period();
            if kes_period < start_kes_period {
                Err(start_kes_period)
            } else {
                Ok(Some(kes_period - start_kes_period))
            }
        }
        None => Ok(None),
    };

    match check_result {
        Ok(evolutions) => break (kes_period, evolutions),
        Err(start_kes_period) => {
            if attempt >= max_retries {
                warn!(
                    self.logger,
                    "Current KES period is behind operational certificate start period.";
                    "current_kes_period" => u64::from(kes_period),
                    "start_kes_period" => u64::from(start_kes_period)
                );
                return Err(
                    RunnerError::NoValueError("kes_period_underflow".to_string()).into(),
                );
            }
            warn!(
                self.logger,
                "KES period mismatch. Retrying...";
                "current_kes_period" => u64::from(kes_period),
                "start_kes_period" => u64::from(start_kes_period),
                "attempt" => attempt,
                "max_retries" => max_retries
            );
            tokio::time::sleep(retry_delay).await;
        }
    }
};
Member

@jpraynaud jpraynaud Jan 27, 2026


I agree that we could trigger an error if the Cardano node is not completely synced:

  • current_kes_period is None
  • or if current_kes_period is less than operational_certificate.get_start_kes_period()

However, this code is too complex and does not take advantage of the retry mechanism of the state machine (an error will restart the state machine which will eventually run this same register_signer_to_aggregator code later).
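The simpler shape suggested here can be sketched as a single pure check with no in-function retry loop: validate once, return an error on missing or behind data, and let the outer state machine drive the retry. The function and error type below are illustrative assumptions, not the real mithril-signer API.

```rust
// Hypothetical sketch: compute KES evolutions once; any error bubbles up to
// the state machine, which restarts and re-runs registration later.
fn compute_kes_evolutions(
    current: Option<u64>,
    start: Option<u64>, // None when there is no operational certificate
) -> Result<(u64, Option<u64>), String> {
    let current = current.ok_or_else(|| "missing current_kes_period".to_string())?;
    let evolutions = match start {
        Some(start) => Some(
            // checked_sub rejects the current < start case instead of panicking.
            current
                .checked_sub(start)
                .ok_or_else(|| "current KES period behind certificate start".to_string())?,
        ),
        None => None,
    };
    Ok((current, evolutions))
}
```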

Contributor Author


Thanks for the review @jpraynaud!
I agree that the retry loop adds complexity and that relying on the State Machine is usually the cleaner default.

The main reason I tried this workaround was to solve a race condition I kept hitting in the E2E Devnet tests:

  1. The Trigger: When the cardano-node is briefly syncing at an epoch boundary, it reports an outdated KES period.

  2. The Problem: This error causes the State Machine to trigger its standard run_interval sleep (default 5s).

  3. The Failure: Since Devnet epochs are extremely short, even a single backoff cycle creates a risk of missing the registration window.

Evidence:

Devnet Config: The epoch duration is only 75 seconds (0.75s slot length * 100 slots).

The Error: Before adding this loop, I consistently encountered this failure in E2E tests:

Error(Unretryable): Mithril End to End test failed
Caused by:
    0: Requesting aggregator `mithril-aggregator-1`
    1: Minimum expected mithril stake distribution epoch not reached: 20 < 21

This confirms the Signer missed the registration window for epoch 21 because it was sleeping during the epoch transition.

Mainnet & CI Verification:

  • Devnet: The CI build passed with this fix, confirming it stabilizes the race condition.

  • Mainnet: This logic is also safe for Mainnet. While the standard 5s backoff is negligible on Mainnet (due to 5-day epochs), adding this fast retry makes the Mainnet signer more robust against transient node glitches without introducing risks.

Question: Is there a preferred way to trigger an "immediate" retry (skipping the standard backoff) for this specific transient error? If not, I can remove the loop, but I am worried it will re-introduce these random CI failures.
