Merged
31 changes: 21 additions & 10 deletions src/net_processing.cpp
@@ -1193,6 +1193,16 @@ static uint16_t GetHeadersLimit(const CNode& pfrom, bool compressed)
return MAX_HEADERS_UNCOMPRESSED_RESULT;
}

// Returns true when peer is a verified masternode that has opted in to receive recsigs.
// Such peers participate in the signing flow that populates creatingInstantSendLocks, so
// they can reconstruct an ISDLOCK locally from the recsig and don't need the ISDLOCK inv.
// Non-MN peers (e.g. nodes running with -watchquorums) also opt in to recsigs via
// QSENDRECSIGS but still need ISDLOCK invs because they don't run the signing flow.
static bool PeerReconstructsISLockFromRecsig(const CNode& pnode, const Peer& peer)
{
return peer.m_wants_recsigs && !pnode.GetVerifiedProRegTxHash().IsNull();
}
Comment on lines +1196 to +1204
🟡 Suggestion: Helper assumes every verified-MN peer can reconstruct an ISDLOCK

PeerReconstructsISLockFromRecsig treats m_wants_recsigs && verified-MN as sufficient to assume the peer will reconstruct an ISDLOCK locally from the recsig. That holds for masternodes that are members of the relevant InstantSend quorum running the signer worker, but it is not strictly enforced: a verified MN that is not in the active quorum (or is outside the recovery window) will still receive recsigs but lacks the share data needed to assemble the ISDLOCK, and under this code we will not send it the inv either. The behavior is still strictly safer than the pre-fix state and other MN peers will fill the gap, but the comment overstates the invariant. Tightening the comment to call this out as a best-effort optimization (or confirming the IS team is comfortable with the edge case) would prevent future readers from over-trusting the predicate.

source: ['claude']
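
A hedged sketch of the tightened wording this suggestion asks for. The function body is the one added in the PR; only the comment changes, and the exact phrasing is illustrative rather than a committed fix:

```cpp
// Best-effort optimization, not a strict invariant: a verified masternode that sent
// QSENDRECSIGS normally runs the signing flow and can rebuild the ISDLOCK from the
// recsig, but an MN outside the active InstantSend quorum (or outside the recovery
// window) may lack the data to do so. Peers that still receive the ISDLOCK inv are
// expected to cover that gap.
static bool PeerReconstructsISLockFromRecsig(const CNode& pnode, const Peer& peer)
{
    return peer.m_wants_recsigs && !pnode.GetVerifiedProRegTxHash().IsNull();
}
```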



static void PushInv(Peer& peer, const CInv& inv)
{
auto inv_relay = peer.GetInvRelay();
@@ -1204,13 +1214,6 @@ static void PushInv(Peer& peer, const CInv& inv)
return;
}

// Skip ISDLOCK inv announcements for peers that want recsigs, as they can reconstruct
// the islock from the recsig
if (inv.type == MSG_ISDLOCK && peer.m_wants_recsigs) {
LogPrint(BCLog::NET, "%s -- skipping ISDLOCK inv (peer wants recsigs): %s peer=%d\n", __func__, inv.ToString(), peer.m_id);
return;
}

LOCK(inv_relay->m_tx_inventory_mutex);
if (inv_relay->m_tx_inventory_known_filter.contains(inv.hash)) {
LogPrint(BCLog::NET, "%s -- skipping known inv: %s peer=%d\n", __func__, inv.ToString(), peer.m_id);
@@ -2541,6 +2544,11 @@ void PeerManagerImpl::RelayInvFiltered(const CInv& inv, const CTransaction& rela
return;
}
} // LOCK(tx_relay->m_bloom_filter_mutex)
if (inv.type == MSG_ISDLOCK && PeerReconstructsISLockFromRecsig(*pnode, *peer)) {
LogPrint(BCLog::NET, "%s -- skipping ISDLOCK inv (peer wants recsigs): %s peer=%d\n",
__func__, inv.ToString(), peer->m_id);
return;
}
PushInv(*peer, inv);
});
}
@@ -2566,6 +2574,11 @@ void PeerManagerImpl::RelayInvFiltered(const CInv& inv, const uint256& relatedTx
return;
}
} // LOCK(tx_relay->m_bloom_filter_mutex)
if (inv.type == MSG_ISDLOCK && PeerReconstructsISLockFromRecsig(*pnode, *peer)) {
LogPrint(BCLog::NET, "%s -- skipping ISDLOCK inv (peer wants recsigs): %s peer=%d\n",
__func__, inv.ToString(), peer->m_id);
return;
}
Comment on lines 2547 to +2581
💬 Nitpick: Duplicated ISDLOCK skip-and-log block in both RelayInvFiltered overloads

The four-line if (inv.type == MSG_ISDLOCK && PeerReconstructsISLockFromRecsig(*pnode, *peer)) { LogPrint(...); return; } is copy-pasted into both RelayInvFiltered overloads (2547-2551 and 2577-2581) and is the structural twin of the guard at 6364. Extracting a small helper such as ShouldSkipISDLockInv(pnode, peer, inv) would avoid the three sites drifting apart in future edits. Pure readability/maintainability nit, not a defect.

source: ['claude']
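
One possible shape for the extracted helper (a sketch only: the name ShouldSkipISDLockInv comes from the review suggestion and is not in the PR, and __func__ in the log line would now report the helper rather than the calling function):

```cpp
// Returns true (and logs) when an ISDLOCK inv can be skipped because the peer is
// expected to rebuild the lock from the recsig it already receives.
static bool ShouldSkipISDLockInv(const CNode& pnode, const Peer& peer, const CInv& inv)
{
    if (inv.type != MSG_ISDLOCK || !PeerReconstructsISLockFromRecsig(pnode, peer)) return false;
    LogPrint(BCLog::NET, "%s -- skipping ISDLOCK inv (peer wants recsigs): %s peer=%d\n",
             __func__, inv.ToString(), peer.m_id);
    return true;
}
```

The two RelayInvFiltered call sites would then reduce to `if (ShouldSkipISDLockInv(*pnode, *peer, inv)) return;`. The SendMessages guard does not log, so it might keep calling PeerReconstructsISLockFromRecsig directly or accept the extra log line; that trade-off is left to the PR author.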

PushInv(*peer, inv);
});
}
@@ -6348,9 +6361,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
if (islock == nullptr) continue;
uint256 isLockHash{::SerializeHash(*islock)};
tx_relay->m_tx_inventory_known_filter.insert(isLockHash);
// Skip ISDLOCK inv announcements for peers that want recsigs, as they can reconstruct
// the islock from the recsig
if (!peer->m_wants_recsigs) {
if (!PeerReconstructsISLockFromRecsig(*pto, *peer)) {
queueAndMaybePushInv(CInv(MSG_ISDLOCK, isLockHash));
}
}