backport: bitcoin#18933, #23123, #23147, #23549, #26074, #26207, #26307, #26341, #26352, #26656, #26886, #26996, #27035 (#7192)

knst wants to merge 13 commits into dashpay:develop from
Conversation
Walkthrough

Adds a new blockchain.scanblocks RPC (actions: start/status/abort) that scans blockfilters for descriptor-derived scripts and returns matching block hashes; introduces scan concurrency state and a BlockFiltersScanReserver. REST /rest/mempool/contents.json now accepts query parameters.

Sequence Diagram(s)

sequenceDiagram
participant Client
participant RPCHandler as "RPC Handler\n(scanblocks)"
participant FilterIndex as "BlockFilterIndex"
participant GlobalState as "Global State\n(g_scanfilter_*)"
participant Collector as "Result Collector"
Client->>RPCHandler: start(action=start, descriptors, start_height, stop_height)
activate RPCHandler
RPCHandler->>GlobalState: set in_progress = true
RPCHandler->>RPCHandler: build script set from descriptors
loop for each block in range
RPCHandler->>FilterIndex: request filter for block
FilterIndex-->>RPCHandler: filter (or not found)
RPCHandler->>RPCHandler: match filter against script set
alt match
RPCHandler->>Collector: add block to relevant_blocks
end
RPCHandler->>GlobalState: update progress, progress_height
end
RPCHandler->>GlobalState: set in_progress = false
RPCHandler-->>Client: return {from_height, to_height, relevant_blocks}
deactivate RPCHandler
Client->>RPCHandler: status(action=status)
activate RPCHandler
RPCHandler->>GlobalState: read in_progress, progress, progress_height
RPCHandler-->>Client: return status
deactivate RPCHandler
Client->>RPCHandler: abort(action=abort)
activate RPCHandler
RPCHandler->>GlobalState: attempt reserve & set should_abort
alt reserved
RPCHandler-->>Client: {aborted: true}
else not reserved
RPCHandler-->>Client: {aborted: false}
end
deactivate RPCHandler
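The start/status/abort flow in the diagram can be sketched in miniature. This is an illustrative Python sketch only; the names (ScanReserver, scan_state, scanblocks_start) are hypothetical stand-ins for the PR's C++ BlockFiltersScanReserver and g_scanfilter_* globals, not the actual implementation:

```python
import threading

# Illustrative sketch only: ScanReserver and scan_state are hypothetical
# stand-ins for the PR's BlockFiltersScanReserver and g_scanfilter_* globals.
scan_state = {"in_progress": False, "should_abort": False, "progress_height": 0}
_lock = threading.Lock()

class ScanReserver:
    """RAII-style guard: only one scan may hold the state at a time."""
    def __init__(self):
        self.reserved = False

    def reserve(self):
        with _lock:
            if scan_state["in_progress"]:
                return False
            scan_state["in_progress"] = True
            self.reserved = True
            return True

    def release(self):
        with _lock:
            if self.reserved:
                scan_state["in_progress"] = False
                self.reserved = False

def scanblocks_start(filters_by_height, needles):
    """Walk per-height element sets, collecting heights that intersect needles
    (a stand-in for filter.GetFilter().MatchAny(needle_set))."""
    reserver = ScanReserver()
    if not reserver.reserve():
        raise RuntimeError('Scan already in progress, use action "abort" or "status"')
    relevant = []
    try:
        for height, elements in sorted(filters_by_height.items()):
            if scan_state["should_abort"]:
                break
            if elements & needles:
                relevant.append(height)
            scan_state["progress_height"] = height
    finally:
        reserver.release()
    return relevant
```

The reserver mirrors the diagram's reserve/release discipline: a second concurrent `start` fails fast instead of racing, and progress state is released even if the scan aborts.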
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/wallet/walletdb.cpp (1)
866-877: ⚠️ Potential issue | 🟠 Major

The tx_corrupt path still bypasses the new NEED_RESCAN recovery flow. When corrupt tx records are detected, this path sets DBErrors::CORRUPT, which prevents the later rescan_required -> NEED_RESCAN mapping from applying to this recoverable case.

Proposed fix

```diff
 } else if (wss.tx_corrupt) {
     pwallet->WalletLogPrintf("Error: Corrupt transaction found. This can be fixed by removing transactions from wallet and rescanning.\n");
     // Set tx_corrupt back to false so that the error is only printed once (per corrupt tx)
     wss.tx_corrupt = false;
-    result = DBErrors::CORRUPT;
+    fNoncriticalErrors = true;
+    rescan_required = true;
 } else {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wallet/walletdb.cpp` around lines 866 - 877, The tx_corrupt branch currently sets result = DBErrors::CORRUPT which prevents the later rescan_required -> NEED_RESCAN recovery from being applied; change the wss.tx_corrupt handling in walletdb.cpp so that after logging and clearing wss.tx_corrupt it also sets rescan_required = true (and does not set result = DBErrors::CORRUPT), or otherwise leave result as a non-fatal status so the later mapping can convert rescan_required into NEED_RESCAN; update the block around wss.tx_corrupt, pwallet->WalletLogPrintf, and result to implement this behavior.

src/wallet/wallet.cpp (1)
3403-3414: ⚠️ Potential issue | 🟠 Major

Replace stale -rescan gating with rescan_required in startup rescan logic. Line 3406 still controls "full rescan" behavior via -rescan==2, but startup -rescan is being removed in this PR. This leaves a removed-flag dependency in the critical attach/rescan path.

✅ Direct fix

```diff
         // No need to read and scan block if block was created before
         // our wallet birthday (as adjusted for block time variability)
-        // unless a full rescan was requested
-        if (gArgs.GetIntArg("-rescan", 0) != 2) {
+        // unless a full rescan is required
+        if (!rescan_required) {
             std::optional<int64_t> time_first_key;
             for (auto spk_man : walletInstance->GetAllScriptPubKeyMans()) {
                 int64_t time = spk_man->GetTimeFirstKey();
                 if (!time_first_key || time < *time_first_key) time_first_key = time;
             }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wallet/wallet.cpp` around lines 3403 - 3414, The startup rescan gating still checks gArgs.GetIntArg("-rescan", 0) != 2; replace that removed-flag check with the rescan_required boolean so the "skip reading/scanning blocks older than wallet birthday" optimization runs only when a full rescan is NOT required. Concretely, in the block surrounding walletInstance->GetAllScriptPubKeyMans(), change the if condition from gArgs.GetIntArg("-rescan", 0) != 2 to if (!rescan_required), keeping the time_first_key calculation and the chain.findFirstBlockWithTimeAndHeight(*time_first_key - TIMESTAMP_WINDOW, rescan_height, FoundBlock().height(rescan_height)) call unchanged.
🧹 Nitpick comments (7)
test/functional/feature_notifications.py (1)
97-97: Consider removing redundant reconnect call. At line 97, self.connect_nodes(0, 1) looks leftover from the old restart-based flow. Since this path no longer restarts/disconnects node 1, this adds unnecessary connection churn.

♻️ Proposed cleanup

```diff
-        self.connect_nodes(0, 1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/functional/feature_notifications.py` at line 97, Remove the redundant reconnection call self.connect_nodes(0, 1) from the test path that no longer restarts or disconnects node 1; locate the invocation of self.connect_nodes(0, 1) in the feature_notifications test and delete it, ensuring there are no subsequent assumptions that node 1 was disconnected (adjust any assertions if they incorrectly expect a reconnect), so the test uses the existing live connection without churn.

test/functional/wallet_import_rescan.py (1)
174-181: Avoid mining empty blocks at batch boundaries (i == 0) and on empty final flushes. Line 174 and Line 223 trigger block generation on the first iteration, and Line 189 always mines a final block even when no variants remain. This adds avoidable blocks and test time.

Proposed cleanup

```diff
@@
-        for i, variant in enumerate(IMPORT_VARIANTS):
-            if i % 10 == 0:
+        for i, variant in enumerate(IMPORT_VARIANTS):
+            if i > 0 and i % 10 == 0:
                 blockhash = self.generate(self.nodes[0], 1)[0]
                 conf_height = self.nodes[0].getblockcount()
                 timestamp = self.nodes[0].getblockheader(blockhash)["time"]
                 for var in last_variants:
                     var.confirmation_height = conf_height
                     var.timestamp = timestamp
                 last_variants.clear()
@@
-        blockhash = self.generate(self.nodes[0], 1)[0]
-        conf_height = self.nodes[0].getblockcount()
-        timestamp = self.nodes[0].getblockheader(blockhash)["time"]
-        for var in last_variants:
-            var.confirmation_height = conf_height
-            var.timestamp = timestamp
-        last_variants.clear()
+        if last_variants:
+            blockhash = self.generate(self.nodes[0], 1)[0]
+            conf_height = self.nodes[0].getblockcount()
+            timestamp = self.nodes[0].getblockheader(blockhash)["time"]
+            for var in last_variants:
+                var.confirmation_height = conf_height
+                var.timestamp = timestamp
+            last_variants.clear()
@@
-        for i, variant in enumerate(IMPORT_VARIANTS):
-            if i % 10 == 0:
-                blockhash = self.generate(self.nodes[0], 1)[0]
-                conf_height = self.nodes[0].getblockcount() + 1
+        conf_height = self.nodes[0].getblockcount() + 1
+        for i, variant in enumerate(IMPORT_VARIANTS):
+            if i > 0 and i % 10 == 0:
+                self.generate(self.nodes[0], 1)
+                conf_height = self.nodes[0].getblockcount() + 1
             variant.sent_amount = get_rand_amount()
             variant.sent_txid = self.nodes[0].sendtoaddress(variant.address["address"], variant.sent_amount)
             variant.confirmation_height = conf_height
```

Also applies to: 189-195, 223-226
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/functional/wallet_import_rescan.py` around lines 174 - 181, The test is mining unnecessary empty blocks at batch boundaries and at the final flush; change the block-generation guards so generate(self.nodes[0], 1) is only called when there are variants to confirm and not on the very first iteration: update the condition in the loop from "if i % 10 == 0:" to something like "if i % 10 == 0 and i != 0 and last_variants:" (or equivalent truthy check on last_variants) and similarly guard the final flush that always mines a block (the block generation around the final last_variants handling) so it only mines when last_variants is non-empty before updating confirmation_height/timestamp and calling last_variants.clear().

test/functional/wallet_upgradetohd.py (2)
185-185: Use node alias for consistency with the rest of the file. The file defines node = self.nodes[0] at line 49 and uses node.rescanblockchain() everywhere else (lines 97, 124, 139, 153, 176, 196, 223). This line should follow the same pattern.

Suggested fix

```diff
-        self.nodes[0].rescanblockchain()
+        node.rescanblockchain()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/functional/wallet_upgradetohd.py` at line 185, Replace the direct use of self.nodes[0].rescanblockchain() with the existing node alias to match the rest of the file; locate the test method where rescanblockchain() is called (currently using self.nodes[0]) and change it to node.rescanblockchain() — the alias node is defined earlier (node = self.nodes[0]) and is used throughout functions like the test routines that call rescanblockchain().
206-206: Use node alias and verify rescan placement. Same style inconsistency as line 185; should use node.rescanblockchain(). Additionally, this rescan is placed after encryptwallet() but before upgradetohd() (line 209). Since encryption doesn't change wallet keys, rescanning here won't discover new coins. The actual recovery happens via the rescan at line 223 after the HD upgrade and keypoolrefill(). Consider whether this rescan is needed at all, or if it was simply carried over from the previous restart logic.

Suggested fix for consistency

```diff
-        self.nodes[0].rescanblockchain()
+        node.rescanblockchain()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/functional/wallet_upgradetohd.py` at line 206, Replace the direct self.nodes[0].rescanblockchain() call with the node alias (node.rescanblockchain()) for consistency, and either remove this rescan (it is redundant because encryptwallet() does not add keys) or move it to after upgradetohd() and keypoolrefill() so the rescan actually recovers coins created by the HD upgrade; update the call site near encryptwallet()/upgradetohd()/keypoolrefill() accordingly (functions: encryptwallet(), upgradetohd(), keypoolrefill(), node.rescanblockchain()).

test/functional/mempool_updatefromblock.py (1)
50-53: Unused loop variable tx. The loop variable tx is not used within the loop body. Consider using underscore to indicate it's intentionally unused.

♻️ Suggested fix

```diff
-        for j, tx in enumerate(tx_id[0:i]):
+        for j, _ in enumerate(tx_id[0:i]):
```

Alternatively, use range(i) since you only need the index:

```diff
-        for j, tx in enumerate(tx_id[0:i]):
+        for j in range(i):
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/functional/mempool_updatefromblock.py` around lines 50 - 53, The loop in mempool_updatefromblock.py uses an unused variable `tx` in "for j, tx in enumerate(tx_id[0:i]):"; change the loop to avoid the unused variable (either "for j, _ in enumerate(tx_id[0:i]):" or use an index-only form like "for j in range(i):") and keep the body that computes vout and calls inputs.append(wallet.get_utxo(txid=tx_id[j], vout=vout)) unchanged.

src/rpc/util.h (1)
143-147: Align RPCArgOptions member defaults with project header convention. Use explicit default for POD only, and rely on default constructors for std::string/std::vector.

Based on learnings: In header files, prefer in-class initialization for POD types and avoid brace initialization for non-POD members that already have default constructors.

♻️ Suggested style-only cleanup

```diff
 struct RPCArgOptions {
-    std::string oneline_description{}; //!< Should be empty unless it is supposed to override the auto-generated summary line
-    std::vector<std::string> type_str{}; //!< Should be empty unless it is supposed to override the auto-generated type strings. Vector length is either 0 or 2, m_opts.type_str.at(0) will override the type of the value in a key-value pair, m_opts.type_str.at(1) will override the type in the argument description.
-    bool hidden{false}; //!< For testing only
+    std::string oneline_description; //!< Should be empty unless it is supposed to override the auto-generated summary line
+    std::vector<std::string> type_str; //!< Should be empty unless it is supposed to override the auto-generated type strings. Vector length is either 0 or 2, m_opts.type_str.at(0) will override the type of the value in a key-value pair, m_opts.type_str.at(1) will override the type in the argument description.
+    bool hidden = false; //!< For testing only
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/rpc/util.h` around lines 143 - 147, The struct RPCArgOptions currently uses brace-init for non-POD members; remove the explicit initializers for std::string oneline_description and std::vector<std::string> type_str and keep an in-class default only for the POD member hidden (e.g., initialize hidden to false using the project's preferred style), so update RPCArgOptions to rely on the members' default constructors for oneline_description and type_str and explicit default for hidden; locate the struct by name RPCArgOptions to make the change.

src/rpc/blockchain.cpp (1)
2698-2704: Add explicit scanobjects validation for start. start currently relies on get_array() to fail later with a generic type error. A direct check gives clearer RPC feedback.

💡 Proposed fix

```diff
     else if (request.params[0].get_str() == "start") {
         BlockFiltersScanReserver reserver;
         if (!reserver.reserve()) {
             throw JSONRPCError(RPC_INVALID_PARAMETER, "Scan already in progress, use action \"abort\" or \"status\"");
         }
+        if (request.params.size() < 2 || request.params[1].isNull()) {
+            throw JSONRPCError(RPC_MISC_ERROR, "scanobjects argument is required for the start action");
+        }
         const std::string filtertype_name{request.params[4].isNull() ? "basic" : request.params[4].get_str()};
```

Also applies to: 2743-2743
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/rpc/blockchain.cpp` around lines 2698 - 2704, In the "start" branch (else if (request.params[0].get_str() == "start")) add an explicit validation that the scanobjects parameter is present and is an array (e.g. check request.params[1].isArray() or !isNull() + isArray()) before calling get_array(), and if the check fails throw JSONRPCError(RPC_INVALID_PARAMETER, "scanobjects must be an array"); place this check immediately after reserving BlockFiltersScanReserver and before reading filtertype_name, and apply the same explicit validation in the other start-handling site referenced around the second spot (~2743) so both code paths return a clear RPC error instead of a generic type error.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/rpc/blockchain.cpp`:
- Around line 2710-2713: The code currently grabs the BlockFilterIndex via
GetBlockFilterIndex(filtertype) but does not verify the index is fully synced
before beginning the scan, which can yield incomplete results; update the logic
after obtaining BlockFilterIndex* index to check the index's sync state (e.g.
call index->IsSynced() or index->BlocksSynced()) and either block until sync
completes (using an available helper like index->BlockUntilSynced() if present)
or return a JSONRPCError indicating the index is still syncing and the caller
should retry later; reference GetBlockFilterIndex and the BlockFilterIndex
instance used by the start scanning code so the sync check happens before any
scan/iteration begins.
- Around line 2753-2786: The chunked scan loop mishandles failed lookups and
reuses state across iterations: ensure LookupFilterRange's boolean failure is
handled (log the error via LogPrintf/LogPrint and abort the RPC loop or return
an error to avoid returning partial results) rather than silently continuing;
clear the local std::vector<BlockFilter> filters before each chunk fetch so
filters from prior chunks aren't reused; and update start_index to the next
block after the processed range (use the previously-computed next pointer or
start_index = next) instead of setting start_index = block to avoid
overlap/reprocessing and keep g_scanfilter_progress_height/progress consistent
with the actual processed height. Also ensure blocks/results are only appended
when LookupFilterRange succeeds.
In `@test/functional/feature_fee_estimation.py`:
- Around line 63-65: The test currently appends outputs to unconflist and
memutxo before verifying the batch RPC sendrawtransaction results; change the
logic in the batch handling flow (where batch() is called for
from_node.sendrawtransaction requests) to inspect each batch response for an
error object and only append to unconflist/memutxo when the corresponding
sendrawtransaction succeeded, and if any entry has an error, fail the test or
handle the error path (e.g., raise/assert) instead of silently continuing;
update the code around the sendrawtransaction batch processing and any loop that
later uses memutxo/unconflist so it validates per-request results before
trusting memutxo state.
In `@test/functional/rpc_scanblocks.py`:
- Line 35: The literal tpub value assigned to variable parent_key in
test/functional/rpc_scanblocks.py is a test-only extended public key that
triggers secret-scan false positives; either replace the hard-coded string with
a runtime-derived test key (e.g., generate from a fixed test mnemonic or use a
helper function like generate_test_xpub()) or mark the literal as an allowed
test value for the scanner by adding the project’s approved suppression
tag/comment next to parent_key so the scanner treats it as a benign test
fixture.
---
Outside diff comments:
In `@src/wallet/wallet.cpp`:
- Around line 3403-3414: The startup rescan gating still checks
gArgs.GetIntArg("-rescan", 0) != 2; replace that removed-flag check with the
rescan_required boolean so the "skip reading/scanning blocks older than wallet
birthday" optimization runs only when a full rescan is NOT required. Concretely,
in the block surrounding walletInstance->GetAllScriptPubKeyMans(), change the if
condition from gArgs.GetIntArg("-rescan", 0) != 2 to if (!rescan_required),
keeping the time_first_key calculation and the
chain.findFirstBlockWithTimeAndHeight(*time_first_key - TIMESTAMP_WINDOW,
rescan_height, FoundBlock().height(rescan_height)) call unchanged.
In `@src/wallet/walletdb.cpp`:
- Around line 866-877: The tx_corrupt branch currently sets result =
DBErrors::CORRUPT which prevents the later rescan_required -> NEED_RESCAN
recovery from being applied; change the wss.tx_corrupt handling in walletdb.cpp
so that after logging and clearing wss.tx_corrupt it also sets rescan_required =
true (and does not set result = DBErrors::CORRUPT), or otherwise leave result as
a non-fatal status so the later mapping can convert rescan_required into
NEED_RESCAN; update the block around wss.tx_corrupt, pwallet->WalletLogPrintf,
and result to implement this behavior.
---
Nitpick comments:
In `@src/rpc/blockchain.cpp`:
- Around line 2698-2704: In the "start" branch (else if
(request.params[0].get_str() == "start")) add an explicit validation that the
scanobjects parameter is present and is an array (e.g. check
request.params[1].isArray() or !isNull() + isArray()) before calling
get_array(), and if the check fails throw JSONRPCError(RPC_INVALID_PARAMETER,
"scanobjects must be an array"); place this check immediately after reserving
BlockFiltersScanReserver and before reading filtertype_name, and apply the same
explicit validation in the other start-handling site referenced around the
second spot (~2743) so both code paths return a clear RPC error instead of a
generic type error.
In `@src/rpc/util.h`:
- Around line 143-147: The struct RPCArgOptions currently uses brace-init for
non-POD members; remove the explicit initializers for std::string
oneline_description and std::vector<std::string> type_str and keep an in-class
default only for the POD member hidden (e.g., initialize hidden to false using
the project's preferred style), so update RPCArgOptions to rely on the members'
default constructors for oneline_description and type_str and explicit default
for hidden; locate the struct by name RPCArgOptions to make the change.
In `@test/functional/feature_notifications.py`:
- Line 97: Remove the redundant reconnection call self.connect_nodes(0, 1) from
the test path that no longer restarts or disconnects node 1; locate the
invocation of self.connect_nodes(0, 1) in the feature_notifications test and
delete it, ensuring there are no subsequent assumptions that node 1 was
disconnected (adjust any assertions if they incorrectly expect a reconnect), so
the test uses the existing live connection without churn.
In `@test/functional/mempool_updatefromblock.py`:
- Around line 50-53: The loop in mempool_updatefromblock.py uses an unused
variable `tx` in "for j, tx in enumerate(tx_id[0:i]):"; change the loop to avoid
the unused variable (either "for j, _ in enumerate(tx_id[0:i]):" or use an
index-only form like "for j in range(i):") and keep the body that computes vout
and calls inputs.append(wallet.get_utxo(txid=tx_id[j], vout=vout)) unchanged.
In `@test/functional/wallet_import_rescan.py`:
- Around line 174-181: The test is mining unnecessary empty blocks at batch
boundaries and at the final flush; change the block-generation guards so
generate(self.nodes[0], 1) is only called when there are variants to confirm and
not on the very first iteration: update the condition in the loop from "if i %
10 == 0:" to something like "if i % 10 == 0 and i != 0 and last_variants:" (or
equivalent truthy check on last_variants) and similarly guard the final flush
that always mines a block (the block generation around the final last_variants
handling) so it only mines when last_variants is non-empty before updating
confirmation_height/timestamp and calling last_variants.clear().
In `@test/functional/wallet_upgradetohd.py`:
- Line 185: Replace the direct use of self.nodes[0].rescanblockchain() with the
existing node alias to match the rest of the file; locate the test method where
rescanblockchain() is called (currently using self.nodes[0]) and change it to
node.rescanblockchain() — the alias node is defined earlier (node =
self.nodes[0]) and is used throughout functions like the test routines that call
rescanblockchain().
- Line 206: Replace the direct self.nodes[0].rescanblockchain() call with the
node alias (node.rescanblockchain()) for consistency, and either remove this
rescan (it is redundant because encryptwallet() does not add keys) or move it to
after upgradetohd() and keypoolrefill() so the rescan actually recovers coins
created by the HD upgrade; update the call site near
encryptwallet()/upgradetohd()/keypoolrefill() accordingly (functions:
encryptwallet(), upgradetohd(), keypoolrefill(), node.rescanblockchain()).
ℹ️ Review info
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (59)
- doc/REST-interface.md
- doc/descriptors.md
- doc/release-notes-remove-rescan.md
- src/dummywallet.cpp
- src/init.cpp
- src/rest.cpp
- src/rpc/blockchain.cpp
- src/rpc/client.cpp
- src/rpc/mining.cpp
- src/rpc/server.cpp
- src/rpc/util.cpp
- src/rpc/util.h
- src/test/fuzz/rpc.cpp
- src/wallet/init.cpp
- src/wallet/rpc/backup.cpp
- src/wallet/rpc/coins.cpp
- src/wallet/rpc/spend.cpp
- src/wallet/rpc/wallet.cpp
- src/wallet/salvage.cpp
- src/wallet/test/wallet_tests.cpp
- src/wallet/wallet.cpp
- src/wallet/wallet.h
- src/wallet/walletdb.cpp
- src/wallet/walletdb.h
- src/wallet/wallettool.cpp
- test/functional/feature_bip68_sequence.py
- test/functional/feature_dbcrash.py
- test/functional/feature_fee_estimation.py
- test/functional/feature_notifications.py
- test/functional/interface_rest.py
- test/functional/interface_rpc.py
- test/functional/interface_usdt_utxocache.py
- test/functional/interface_zmq.py
- test/functional/mempool_accept.py
- test/functional/mempool_datacarrier.py
- test/functional/mempool_package_onemore.py
- test/functional/mempool_persist.py
- test/functional/mempool_reorg.py
- test/functional/mempool_spend_coinbase.py
- test/functional/mempool_unbroadcast.py
- test/functional/mempool_updatefromblock.py
- test/functional/mining_getblocktemplate_longpoll.py
- test/functional/mining_prioritisetransaction.py
- test/functional/p2p_blocksonly.py
- test/functional/p2p_filter.py
- test/functional/p2p_permissions.py
- test/functional/rpc_generate.py
- test/functional/rpc_mempool_info.py
- test/functional/rpc_packages.py
- test/functional/rpc_rawtransaction.py
- test/functional/rpc_scanblocks.py
- test/functional/rpc_scantxoutset.py
- test/functional/test_framework/wallet.py
- test/functional/test_runner.py
- test/functional/wallet_basic.py
- test/functional/wallet_fast_rescan.py
- test/functional/wallet_hd.py
- test/functional/wallet_import_rescan.py
- test/functional/wallet_upgradetohd.py
💤 Files with no reviewable changes (21)
- test/functional/rpc_mempool_info.py
- test/functional/p2p_blocksonly.py
- test/functional/rpc_rawtransaction.py
- test/functional/mempool_unbroadcast.py
- src/dummywallet.cpp
- test/functional/interface_zmq.py
- test/functional/mempool_datacarrier.py
- test/functional/feature_bip68_sequence.py
- test/functional/mempool_persist.py
- test/functional/mempool_package_onemore.py
- src/wallet/init.cpp
- test/functional/mining_prioritisetransaction.py
- test/functional/p2p_filter.py
- test/functional/mempool_spend_coinbase.py
- test/functional/wallet_fast_rescan.py
- test/functional/p2p_permissions.py
- test/functional/rpc_scantxoutset.py
- test/functional/mempool_reorg.py
- test/functional/feature_dbcrash.py
- test/functional/mempool_accept.py
- test/functional/interface_usdt_utxocache.py
```cpp
BlockFilterIndex* index = GetBlockFilterIndex(filtertype);
if (!index) {
    throw JSONRPCError(RPC_MISC_ERROR, "Index is not enabled for filtertype " + filtertype_name);
}
```
Block filter index sync should be checked before scanning starts.
Without waiting for sync, start can run against a lagging index and return incomplete results.
💡 Proposed fix

```diff
 BlockFilterIndex* index = GetBlockFilterIndex(filtertype);
 if (!index) {
     throw JSONRPCError(RPC_MISC_ERROR, "Index is not enabled for filtertype " + filtertype_name);
 }
+if (!index->BlockUntilSyncedToCurrentChain()) {
+    throw JSONRPCError(RPC_MISC_ERROR, "Block filters are still in the process of being indexed.");
+}
```

🤖 Prompt for AI Agents
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/rpc/blockchain.cpp` around lines 2710 - 2713, The code currently grabs
the BlockFilterIndex via GetBlockFilterIndex(filtertype) but does not verify the
index is fully synced before beginning the scan, which can yield incomplete
results; update the logic after obtaining BlockFilterIndex* index to check the
index's sync state (e.g. call index->IsSynced() or index->BlocksSynced()) and
either block until sync completes (using an available helper like
index->BlockUntilSynced() if present) or return a JSONRPCError indicating the
index is still syncing and the caller should retry later; reference
GetBlockFilterIndex and the BlockFilterIndex instance used by the start scanning
code so the sync check happens before any scan/iteration begins.
```cpp
std::vector<BlockFilter> filters;
const CBlockIndex* start_block = block; // for progress reporting
const int total_blocks_to_process = stop_block->nHeight - start_block->nHeight;

g_scanfilter_should_abort_scan = false;
g_scanfilter_progress = 0;
g_scanfilter_progress_height = start_block->nHeight;

while (block) {
    node.rpc_interruption_point(); // allow a clean shutdown
    if (g_scanfilter_should_abort_scan) {
        LogPrintf("scanblocks RPC aborted at height %d.\n", block->nHeight);
        break;
    }
    const CBlockIndex* next = nullptr;
    {
        LOCK(cs_main);
        CChain& active_chain = chainman.ActiveChain();
        next = active_chain.Next(block);
        if (block == stop_block) next = nullptr;
    }
    if (start_index->nHeight + amount_per_chunk == block->nHeight || next == nullptr) {
        LogPrint(BCLog::RPC, "Fetching blockfilters from height %d to height %d.\n", start_index->nHeight, block->nHeight);
        if (index->LookupFilterRange(start_index->nHeight, block, filters)) {
            for (const BlockFilter& filter : filters) {
                // compare the elements-set with each filter
                if (filter.GetFilter().MatchAny(needle_set)) {
                    blocks.push_back(filter.GetBlockHash().GetHex());
                    LogPrint(BCLog::RPC, "scanblocks: found match in %s\n", filter.GetBlockHash().GetHex());
                }
            }
        }
        start_index = block;
```
Chunk scan loop can silently degrade correctness and progress fidelity.
On Line 2776, a failed LookupFilterRange is ignored, so the RPC may return partial data as if successful.
Also, reusing filters across chunks and setting start_index = block on Line 2785 risks overlap/reprocessing behavior.
💡 Proposed fix
std::vector<BlockFilter> filters;
@@
if (start_index->nHeight + amount_per_chunk == block->nHeight || next == nullptr) {
LogPrint(BCLog::RPC, "Fetching blockfilters from height %d to height %d.\n", start_index->nHeight, block->nHeight);
- if (index->LookupFilterRange(start_index->nHeight, block, filters)) {
- for (const BlockFilter& filter : filters) {
- // compare the elements-set with each filter
- if (filter.GetFilter().MatchAny(needle_set)) {
- blocks.push_back(filter.GetBlockHash().GetHex());
- LogPrint(BCLog::RPC, "scanblocks: found match in %s\n", filter.GetBlockHash().GetHex());
- }
- }
- }
- start_index = block;
+ filters.clear();
+ if (!index->LookupFilterRange(start_index->nHeight, block, filters)) {
+ throw JSONRPCError(RPC_MISC_ERROR, "Failed to read block filters during scan");
+ }
+ for (const BlockFilter& filter : filters) {
+ // compare the elements-set with each filter
+ if (filter.GetFilter().MatchAny(needle_set)) {
+ blocks.push_back(filter.GetBlockHash().GetHex());
+ LogPrint(BCLog::RPC, "scanblocks: found match in %s\n", filter.GetBlockHash().GetHex());
+ }
+ }
+ start_index = next;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/rpc/blockchain.cpp` around lines 2753 - 2786, The chunked scan loop
mishandles failed lookups and reuses state across iterations: ensure
LookupFilterRange's boolean failure is handled (log the error via
LogPrintf/LogPrint and abort the RPC loop or return an error to avoid returning
partial results) rather than silently continuing; clear the local
std::vector<BlockFilter> filters before each chunk fetch so filters from prior
chunks aren't reused; and update start_index to the next block after the
processed range (use the previously-computed next pointer or start_index = next)
instead of setting start_index = block to avoid overlap/reprocessing and keep
g_scanfilter_progress_height/progress consistent with the actual processed
height. Also ensure blocks/results are only appended when LookupFilterRange
succeeds.
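The two properties the prompt asks for — fail fast when a range lookup errors out instead of returning partial results, and advance past the processed chunk so no block is scanned twice — can be sketched in Python. This is an illustrative model only: `scan_chunks`, `lookup`, and the set-intersection stand-in for `MatchAny` are hypothetical names, not the actual C++ API.

```python
def scan_chunks(heights, lookup, needles, chunk=4):
    """Chunked filter scan: fail fast on lookup errors, advance past each chunk."""
    matches = []
    start = 0
    while start < len(heights):
        end = min(start + chunk, len(heights))
        filters = lookup(heights[start:end])
        if filters is None:  # surface the failure instead of returning partial data
            raise RuntimeError("Failed to read block filters during scan")
        # set intersection stands in for GCS MatchAny against the needle set
        matches.extend(h for h, f in zip(heights[start:end], filters) if f & needles)
        start = end  # next chunk starts after the processed range, so no overlap
    return matches
```

Because `start` only ever moves to `end`, each height is matched exactly once, which is the invariant the `start_index = next` fix restores in the C++ loop.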
batch_reqs.append(from_node.sendrawtransaction.get_request(hexstring=tx_hex, maxfeerate=0))
unconflist.append({"txid": txid, "vout": 0, "value": total_in - amount - fee})
unconflist.append({"txid": txid, "vout": 1, "value": amount})
🧩 Analysis chain
🏁 Script executed:

# First, check the file exists and read the relevant sections
wc -l test/functional/feature_fee_estimation.py
echo "=== Lines 60-70 ==="
sed -n '60,70p' test/functional/feature_fee_estimation.py
echo
echo "=== Lines 165-175 ==="
sed -n '165,175p' test/functional/feature_fee_estimation.py

🏁 Script executed:

# Find and examine the authproxy batch implementation
fd -a 'authproxy.py' 2>/dev/null | head -n1 | xargs -I {} bash -c 'echo "File: {}"; wc -l {}'

🏁 Script executed:

# Check how batch() is typically used in functional tests
rg -n '\.batch\(' test/functional --max-count=10 -B2 -A2

🏁 Script executed:

# Examine authproxy.batch() implementation
cat -n /home/jailuser/git/test/functional/test_framework/authproxy.py | sed -n '1,206p' | tail -n 100

🏁 Script executed:

# Also look for the batch method specifically
rg -n 'def batch' /home/jailuser/git/test/functional/test_framework/authproxy.py -A 10

🏁 Script executed:

# Check how unconflist/memutxo is used after the batch call
sed -n '50,85p' test/functional/feature_fee_estimation.py

🏁 Script executed:

# Verify the complete context around mempool sync and utxo usage
sed -n '167,180p' test/functional/feature_fee_estimation.py

🏁 Script executed:

# Check what happens to unconflist/memutxo after population
grep -n "memutxo\|unconflist" test/functional/feature_fee_estimation.py | head -20
Check batch RPC results before trusting memutxo state.
At lines 64–65, outputs are added to unconflist before submission outcome is known, and line 168 ignores all batch results. If any sendrawtransaction fails, the code later iterates over corrupted memutxo containing nonexistent UTXOs, breaking fee estimation logic.
The batch() method returns a response array where individual requests can carry error objects. Currently those results are discarded without verification.
Fix
- for node in self.nodes:
- node.batch(batch_sendtx_reqs)
+ submit_results = self.nodes[0].batch(batch_sendtx_reqs)
+ for result in submit_results:
+     assert_equal(result["error"], None)

Also applies to: 64–65
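The shape of a JSON-RPC batch reply is an array of per-request objects, each with `result` and `error` fields. A minimal helper that fails fast on any per-request error can be sketched as follows; `check_batch_results` is a hypothetical illustration, not a helper that exists in the test framework:

```python
def check_batch_results(results):
    """Collect per-request errors from a JSON-RPC batch reply and fail fast."""
    errors = [r["error"] for r in results if r.get("error") is not None]
    if errors:
        # an exception here keeps later code from trusting a stale memutxo
        raise AssertionError("batched RPC calls failed: %r" % errors)
    return [r["result"] for r in results]
```

With a helper like this, the UTXO bookkeeping (`unconflist`/`memutxo`) would only be updated once every `sendrawtransaction` in the batch is known to have succeeded.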
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/functional/feature_fee_estimation.py` around lines 63 - 65, The test
currently appends outputs to unconflist and memutxo before verifying the batch
RPC sendrawtransaction results; change the logic in the batch handling flow
(where batch() is called for from_node.sendrawtransaction requests) to inspect
each batch response for an error object and only append to unconflist/memutxo
when the corresponding sendrawtransaction succeeded, and if any entry has an
error, fail the test or handle the error path (e.g., raise/assert) instead of
silently continuing; update the code around the sendrawtransaction batch
processing and any loop that later uses memutxo/unconflist so it validates
per-request results before trusting memutxo state.
_, spk_1, addr_1 = getnewdestination()
wallet.send_to(from_node=node, scriptPubKey=spk_1, amount=1 * COIN)

parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B"
Suppress secret-scan false positive for the test extended public key literal.
Line [35] is test data, but it is flagged as a generic key and can create high-noise security output. Mark it as an allowed test literal (or derive it at runtime) so real leaks remain high-signal.
🔧 Minimal suppression example
-parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B"
+parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B" # gitleaks:allow

🧰 Tools
🪛 Gitleaks (8.30.0)
[high] 35-35: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/functional/rpc_scanblocks.py` at line 35, The literal tpub value
assigned to variable parent_key in test/functional/rpc_scanblocks.py is a
test-only extended public key that triggers secret-scan false positives; either
replace the hard-coded string with a runtime-derived test key (e.g., generate
from a fixed test mnemonic or use a helper function like generate_test_xpub())
or mark the literal as an allowed test value for the scanner by adding the
project’s approved suppression tag/comment next to parent_key so the scanner
treats it as a benign test fixture.
♻️ Duplicate comments (1)
test/functional/rpc_scanblocks.py (1)
35-35: ⚠️ Potential issue | 🟡 Minor: Suppress secret-scan noise for the test xpub literal.
This test fixture still uses a hard-coded extended public key string and can keep triggering gitleaks noise. Please either derive it at runtime or add the approved inline suppression marker.
🔧 Minimal suppression
-parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B" +parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B" # gitleaks:allow🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/functional/rpc_scanblocks.py` at line 35, The hard-coded extended public key assigned to variable parent_key is causing gitleaks noise; either generate the xpub at runtime (e.g., implement a helper like derive_test_xpub(seed/mnemonic) and set parent_key = derive_test_xpub(...)) or keep the literal but append the approved inline suppression marker to the assignment (e.g., add the project's gitleaks allow comment like " # gitleaks:allow" per CI rules) so the test fixture no longer triggers secret-scan alerts.
🧹 Nitpick comments (1)
test/functional/rpc_scanblocks.py (1)
84-84: Use direct iterator access for single-element set extraction. Prefer `next(iter(genesis_spks))` over building a temporary list for one element.
♻️ Small refactor
-genesis_coinbase_spk = list(genesis_spks)[0]
+genesis_coinbase_spk = next(iter(genesis_spks))

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/functional/rpc_scanblocks.py` at line 84, The code builds a temporary list to extract a single element from the set genesis_spks (line creating genesis_coinbase_spk = list(genesis_spks)[0]); replace that with direct iterator access using next(iter(genesis_spks)) to avoid allocating a list—update the assignment to use next(iter(genesis_spks)) for genesis_coinbase_spk.
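For reference, the two idioms the nitpick compares behave identically on a single-element set; the only difference is the throwaway list allocation. The script-pubkey literal below is a made-up placeholder, not the real genesis value:

```python
# hypothetical single-element set standing in for genesis_spks
genesis_spks = {"76a914000000000000000000000000000000000000000088ac"}

# building a list just to grab one element allocates needlessly
spk_via_list = list(genesis_spks)[0]

# direct iterator access returns the same element without the temporary list
spk_via_iter = next(iter(genesis_spks))

assert spk_via_list == spk_via_iter
```

Note that `next(iter(s))` raises `StopIteration` on an empty set, which in a test is an acceptable failure mode for a fixture that is expected to hold exactly one element.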
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@test/functional/rpc_scanblocks.py`:
- Line 35: The hard-coded extended public key assigned to variable parent_key is
causing gitleaks noise; either generate the xpub at runtime (e.g., implement a
helper like derive_test_xpub(seed/mnemonic) and set parent_key =
derive_test_xpub(...)) or keep the literal but append the approved inline
suppression marker to the assignment (e.g., add the project's gitleaks allow
comment like " # gitleaks:allow" per CI rules) so the test fixture no longer
triggers secret-scan alerts.
---
Nitpick comments:
In `@test/functional/rpc_scanblocks.py`:
- Line 84: The code builds a temporary list to extract a single element from the
set genesis_spks (line creating genesis_coinbase_spk = list(genesis_spks)[0]);
replace that with direct iterator access using next(iter(genesis_spks)) to avoid
allocating a list—update the assignment to use next(iter(genesis_spks)) for
genesis_coinbase_spk.
thepastaclaw
left a comment
Code Review
This backport PR is well-executed overall, but has several Dash adaptation gaps: the new scanblocks RPC help text was not rebranded, a Dash-specific CoinJoin salt regression was introduced by the -rescan removal refactor, and dead code from the removed -rescan parameter was left behind in two locations.
Reviewed commit: a7c4a44
🟡 5 suggestion(s)
2 additional findings
🟡 suggestion: Dead code: Dash-specific `-rescan` mode validation left after parameter removal
src/wallet/init.cpp (lines 149-154)
The -rescan parameter was removed from AddWalletOptions() by the bitcoin#23123 backport, but the Dash-specific validation block at lines 149–154 still reads it via gArgs.GetIntArg("-rescan", 0). Since the argument is no longer registered, this code path is dead — GetIntArg returns the default 0, the range check < 0 || > 2 is never true, and the warning is never emitted. The associated translatable string in src/qt/dashstrings.cpp line 296 is also dead. This block should be removed.
🟡 suggestion: Dead code: Dash-specific `-rescan=2` full-rescan mode check in AttachChain
src/wallet/wallet.cpp (lines 3403-3415)
Dash extended the upstream boolean -rescan with an integer mode where mode 2 meant "rescan from genesis, bypassing wallet birthday optimization." The merge correctly replaced the main -rescan boolean check with the new rescan_required parameter (line 3347), but left the Dash-specific mode-2 check at line 3406: if (gArgs.GetIntArg("-rescan", 0) != 2). Since -rescan is no longer registered, GetIntArg always returns 0, the condition is always true, and the birthday optimization always applies. The comment on line 3405 ("unless a full rescan was requested") is now misleading. This block should be simplified to unconditionally apply the birthday optimization.
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/wallet/wallet.cpp`:
- [SUGGESTION] lines 2408-2411: CoinJoin salt initialization runs on corrupt/failed wallet loads
The upstream `-rescan` refactor (bitcoin#23123) removed the early return `if (nLoadWalletRet != DBErrors::LOAD_OK) return nLoadWalletRet;` that previously guarded the CoinJoin salt block. This means `GetCoinJoinSalt()`/`SetCoinJoinSalt()` now execute unconditionally — even when the DB load returned CORRUPT, TOO_NEW, LOAD_FAIL, or EXTERNAL_SIGNER_SUPPORT_REQUIRED.
While the caller (`Create()`) does check these error codes and returns nullptr, the damage is done inside `LoadWallet()` before the caller sees the error: `SetCoinJoinSalt()` writes to the wallet DB, and `GetCoinJoinSalt()` may trigger a legacy salt migration. Writing to a corrupt database can deepen corruption. Additionally, if `SetCoinJoinSalt()` fails on a corrupt DB, it returns `LOAD_FAIL`, masking the original `CORRUPT` error code.
This is a Dash-specific regression introduced by the backport.
- [SUGGESTION] lines 3403-3415: Dead code: Dash-specific `-rescan=2` full-rescan mode check in AttachChain
Dash extended the upstream boolean `-rescan` with an integer mode where mode 2 meant "rescan from genesis, bypassing wallet birthday optimization." The merge correctly replaced the main `-rescan` boolean check with the new `rescan_required` parameter (line 3347), but left the Dash-specific mode-2 check at line 3406: `if (gArgs.GetIntArg("-rescan", 0) != 2)`. Since `-rescan` is no longer registered, `GetIntArg` always returns 0, the condition is always true, and the birthday optimization always applies. The comment on line 3405 ("unless a full rescan was requested") is now misleading. This block should be simplified to unconditionally apply the birthday optimization.
In `src/rpc/blockchain.cpp`:
- [SUGGESTION] line 2645: `bitcoin-cli` not rebranded to `dash-cli` in scanblocks help text
The scanblocks RPC help reads `bitcoin-cli -rpcclienttimeout=0`. This is the only occurrence of `bitcoin-cli` in the entire `src/rpc/` directory — all other CLI references use `dash-cli` (see `src/rpc/util.cpp:203` and `src/rpc/node.cpp:1031`). This was copied verbatim from upstream and needs rebranding.
- [SUGGESTION] lines 2669-2673: scanblocks examples use Bitcoin bech32 addresses unsupported by Dash
All four `HelpExampleCli`/`HelpExampleRpc` calls use the Bitcoin regtest bech32 address `bcrt1q4u4nsgk6ug0sqz7r3rj9tykjxrsl0yy4d0wwte`. Dash does not support segwit/bech32 (no bech32 HRP in chainparams, `OutputType` only contains `LEGACY`). These examples are non-functional for Dash users. They should use a Dash-appropriate format such as a P2PKH address (e.g. starting with `y` for regtest) or a `raw()` descriptor like the neighboring `scantxoutset` RPC uses.
In `src/wallet/init.cpp`:
- [SUGGESTION] lines 149-154: Dead code: Dash-specific `-rescan` mode validation left after parameter removal
The `-rescan` parameter was removed from `AddWalletOptions()` by the #23123 backport, but the Dash-specific validation block at lines 149–154 still reads it via `gArgs.GetIntArg("-rescan", 0)`. Since the argument is no longer registered, this code path is dead — `GetIntArg` returns the default 0, the range check `< 0 || > 2` is never true, and the warning is never emitted. The associated translatable string in `src/qt/dashstrings.cpp` line 296 is also dead. This block should be removed.
/* If the CoinJoin salt is not set, try to set a new random hash as the salt */
if (GetCoinJoinSalt().IsNull() && !SetCoinJoinSalt(GetRandHash())) {
    return DBErrors::LOAD_FAIL;
}
🟡 Suggestion: CoinJoin salt initialization runs on corrupt/failed wallet loads
The upstream -rescan refactor (bitcoin#23123) removed the early return if (nLoadWalletRet != DBErrors::LOAD_OK) return nLoadWalletRet; that previously guarded the CoinJoin salt block. This means GetCoinJoinSalt()/SetCoinJoinSalt() now execute unconditionally — even when the DB load returned CORRUPT, TOO_NEW, LOAD_FAIL, or EXTERNAL_SIGNER_SUPPORT_REQUIRED.
While the caller (Create()) does check these error codes and returns nullptr, the damage is done inside LoadWallet() before the caller sees the error: SetCoinJoinSalt() writes to the wallet DB, and GetCoinJoinSalt() may trigger a legacy salt migration. Writing to a corrupt database can deepen corruption. Additionally, if SetCoinJoinSalt() fails on a corrupt DB, it returns LOAD_FAIL, masking the original CORRUPT error code.
This is a Dash-specific regression introduced by the backport.
💡 Suggested change
+if (nLoadWalletRet == DBErrors::CORRUPT ||
+    nLoadWalletRet == DBErrors::TOO_NEW ||
+    nLoadWalletRet == DBErrors::EXTERNAL_SIGNER_SUPPORT_REQUIRED ||
+    nLoadWalletRet == DBErrors::LOAD_FAIL) {
+    return nLoadWalletRet;
+}
 /* If the CoinJoin salt is not set, try to set a new random hash as the salt */
 if (GetCoinJoinSalt().IsNull() && !SetCoinJoinSalt(GetRandHash())) {
     return DBErrors::LOAD_FAIL;
 }
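The error-masking concern can be modeled with a toy Python sequence (all names here are illustrative stand-ins for the C++ `LoadWallet` flow, not the real API): if the salt write runs before the DB error is propagated, a failing write downgrades `CORRUPT` to `LOAD_FAIL`.

```python
def load_wallet(load_db, init_coinjoin_salt):
    """Toy load sequence: propagate the DB error before touching the wallet again."""
    err = load_db()
    if err != "LOAD_OK":
        return err  # preserve the original error code (e.g. CORRUPT, TOO_NEW)
    if not init_coinjoin_salt():
        return "LOAD_FAIL"  # only reachable once the DB is known to have loaded cleanly
    return "LOAD_OK"
```

With the guard first, the salt initialization never writes into a corrupt database and the caller always sees the original failure code.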
source: ['codex']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/wallet/wallet.cpp`:
- [SUGGESTION] lines 2408-2411: CoinJoin salt initialization runs on corrupt/failed wallet loads
The upstream `-rescan` refactor (bitcoin#23123) removed the early return `if (nLoadWalletRet != DBErrors::LOAD_OK) return nLoadWalletRet;` that previously guarded the CoinJoin salt block. This means `GetCoinJoinSalt()`/`SetCoinJoinSalt()` now execute unconditionally — even when the DB load returned CORRUPT, TOO_NEW, LOAD_FAIL, or EXTERNAL_SIGNER_SUPPORT_REQUIRED.
While the caller (`Create()`) does check these error codes and returns nullptr, the damage is done inside `LoadWallet()` before the caller sees the error: `SetCoinJoinSalt()` writes to the wallet DB, and `GetCoinJoinSalt()` may trigger a legacy salt migration. Writing to a corrupt database can deepen corruption. Additionally, if `SetCoinJoinSalt()` fails on a corrupt DB, it returns `LOAD_FAIL`, masking the original `CORRUPT` error code.
This is a Dash-specific regression introduced by the backport.
{
    return RPCHelpMan{"scanblocks",
        "\nReturn relevant blockhashes for given descriptors.\n"
        "This call may take several minutes. Make sure to use no RPC timeout (bitcoin-cli -rpcclienttimeout=0)",
🟡 Suggestion: bitcoin-cli not rebranded to dash-cli in scanblocks help text
The scanblocks RPC help reads bitcoin-cli -rpcclienttimeout=0. This is the only occurrence of bitcoin-cli in the entire src/rpc/ directory — all other CLI references use dash-cli (see src/rpc/util.cpp:203 and src/rpc/node.cpp:1031). This was copied verbatim from upstream and needs rebranding.
💡 Suggested change
-"This call may take several minutes. Make sure to use no RPC timeout (bitcoin-cli -rpcclienttimeout=0)",
+"This call may take several minutes. Make sure to use no RPC timeout (dash-cli -rpcclienttimeout=0)",
source: ['claude', 'codex']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/rpc/blockchain.cpp`:
- [SUGGESTION] line 2645: `bitcoin-cli` not rebranded to `dash-cli` in scanblocks help text
The scanblocks RPC help reads `bitcoin-cli -rpcclienttimeout=0`. This is the only occurrence of `bitcoin-cli` in the entire `src/rpc/` directory — all other CLI references use `dash-cli` (see `src/rpc/util.cpp:203` and `src/rpc/node.cpp:1031`). This was copied verbatim from upstream and needs rebranding.
HelpExampleCli("scanblocks", "start '[\"addr(bcrt1q4u4nsgk6ug0sqz7r3rj9tykjxrsl0yy4d0wwte)\"]' 300000") +
HelpExampleCli("scanblocks", "start '[\"addr(bcrt1q4u4nsgk6ug0sqz7r3rj9tykjxrsl0yy4d0wwte)\"]' 100 150 basic") +
HelpExampleCli("scanblocks", "status") +
HelpExampleRpc("scanblocks", "\"start\", [\"addr(bcrt1q4u4nsgk6ug0sqz7r3rj9tykjxrsl0yy4d0wwte)\"], 300000") +
HelpExampleRpc("scanblocks", "\"start\", [\"addr(bcrt1q4u4nsgk6ug0sqz7r3rj9tykjxrsl0yy4d0wwte)\"], 100, 150, \"basic\"") +
🟡 Suggestion: scanblocks examples use Bitcoin bech32 addresses unsupported by Dash
All four HelpExampleCli/HelpExampleRpc calls use the Bitcoin regtest bech32 address bcrt1q4u4nsgk6ug0sqz7r3rj9tykjxrsl0yy4d0wwte. Dash does not support segwit/bech32 (no bech32 HRP in chainparams, OutputType only contains LEGACY). These examples are non-functional for Dash users. They should use a Dash-appropriate format such as a P2PKH address (e.g. starting with y for regtest) or a raw() descriptor like the neighboring scantxoutset RPC uses.
source: ['claude', 'codex']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/rpc/blockchain.cpp`:
- [SUGGESTION] lines 2669-2673: scanblocks examples use Bitcoin bech32 addresses unsupported by Dash
All four `HelpExampleCli`/`HelpExampleRpc` calls use the Bitcoin regtest bech32 address `bcrt1q4u4nsgk6ug0sqz7r3rj9tykjxrsl0yy4d0wwte`. Dash does not support segwit/bech32 (no bech32 HRP in chainparams, `OutputType` only contains `LEGACY`). These examples are non-functional for Dash users. They should use a Dash-appropriate format such as a P2PKH address (e.g. starting with `y` for regtest) or a `raw()` descriptor like the neighboring `scantxoutset` RPC uses.
✅ Review complete (commit aab109e)

This pull request has conflicts, please rebase.
thepastaclaw
left a comment
No blockers from me on the current head; remaining comments are advisory suggestions/nits.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/rpc/blockchain.cpp`:
- Around line 2698-2743: In the "start" branch, validate that the scanobjects
parameter (request.params[1]) is present and is an array before calling
request.params[1].get_array(); add a check like verifying
!request.params[1].isNull() && request.params[1].isArray() and if it fails throw
JSONRPCError(RPC_INVALID_PARAMETER, "Missing or invalid scanobjects parameter")
so the error matches documented parameter errors; place this check early in the
start handling (before building needle_set / the loop over
request.params[1].get_array()) and reference request.params[1] and the loop over
request.params[1].get_array().
In `@src/rpc/mining.cpp`:
- Around line 558-570: The help metadata for the "template_request" RPCArg is
out of sync with the handler: add a "longpollid" RPCArg (Type::STR,
Optional::OMITTED) and a "data" RPCArg (Type::STR or OBJ as implemented,
required only for proposal mode) to match the handler's accepted keys, change
the "rules" entry from RPCArg::Optional::NO to RPCArg::Optional::OMITTED (since
the implementation treats it as optional), and update the descriptions to state
that "data" is required when mode == \"proposal\" and that "longpollid" is
optional; locate and edit the "template_request" block in src/rpc/mining.cpp and
ensure RPCArgOptions/oneline_description remains accurate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 4ab9640a-cdbc-4582-bc14-feff1973c7fe
📒 Files selected for processing (13)
doc/REST-interface.md
doc/descriptors.md
doc/release-notes-remove-rescan.md
src/dummywallet.cpp
src/init.cpp
src/rest.cpp
src/rpc/blockchain.cpp
src/rpc/client.cpp
src/rpc/mining.cpp
src/rpc/server.cpp
src/rpc/util.cpp
src/rpc/util.h
src/test/fuzz/rpc.cpp
💤 Files with no reviewable changes (1)
- src/dummywallet.cpp
✅ Files skipped from review due to trivial changes (4)
- doc/release-notes-remove-rescan.md
- src/init.cpp
- src/rpc/server.cpp
- doc/REST-interface.md
🚧 Files skipped from review as they are similar to previous changes (3)
- doc/descriptors.md
- src/test/fuzz/rpc.cpp
- src/rpc/util.h
else if (request.params[0].get_str() == "start") {
    BlockFiltersScanReserver reserver;
    if (!reserver.reserve()) {
        throw JSONRPCError(RPC_INVALID_PARAMETER, "Scan already in progress, use action \"abort\" or \"status\"");
    }
    const std::string filtertype_name{request.params[4].isNull() ? "basic" : request.params[4].get_str()};

    BlockFilterType filtertype;
    if (!BlockFilterTypeByName(filtertype_name, filtertype)) {
        throw JSONRPCError(RPC_INVALID_ADDRESS_OR_KEY, "Unknown filtertype");
    }

    BlockFilterIndex* index = GetBlockFilterIndex(filtertype);
    if (!index) {
        throw JSONRPCError(RPC_MISC_ERROR, "Index is not enabled for filtertype " + filtertype_name);
    }

    NodeContext& node = EnsureAnyNodeContext(request.context);
    ChainstateManager& chainman = EnsureChainman(node);

    // set the start-height
    const CBlockIndex* block = nullptr;
    const CBlockIndex* stop_block = nullptr;
    {
        LOCK(cs_main);
        CChain& active_chain = chainman.ActiveChain();
        block = active_chain.Genesis();
        stop_block = active_chain.Tip();
        if (!request.params[2].isNull()) {
            block = active_chain[request.params[2].getInt<int>()];
            if (!block) {
                throw JSONRPCError(RPC_MISC_ERROR, "Invalid start_height");
            }
        }
        if (!request.params[3].isNull()) {
            stop_block = active_chain[request.params[3].getInt<int>()];
            if (!stop_block || stop_block->nHeight < block->nHeight) {
                throw JSONRPCError(RPC_MISC_ERROR, "Invalid stop_height");
            }
        }
    }
    CHECK_NONFATAL(block);

    // loop through the scan objects, add scripts to the needle_set
    GCSFilter::ElementSet needle_set;
    for (const UniValue& scanobject : request.params[1].get_array().getValues()) {
Validate scanobjects before entering the start path.
Unlike scantxoutset, this branch never checks that scanobjects was actually provided. scanblocks start can therefore fall through to request.params[1].get_array() and return a generic type error instead of the documented parameter error.
💡 Proposed fix
else if (request.params[0].get_str() == "start") {
BlockFiltersScanReserver reserver;
if (!reserver.reserve()) {
throw JSONRPCError(RPC_INVALID_PARAMETER, "Scan already in progress, use action \"abort\" or \"status\"");
}
+ if (request.params.size() < 2) {
+ throw JSONRPCError(RPC_MISC_ERROR, "scanobjects argument is required for the start action");
+ }
     const std::string filtertype_name{request.params[4].isNull() ? "basic" : request.params[4].get_str()};

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/rpc/blockchain.cpp` around lines 2698 - 2743, In the "start" branch,
validate that the scanobjects parameter (request.params[1]) is present and is an
array before calling request.params[1].get_array(); add a check like verifying
!request.params[1].isNull() && request.params[1].isArray() and if it fails throw
JSONRPCError(RPC_INVALID_PARAMETER, "Missing or invalid scanobjects parameter")
so the error matches documented parameter errors; place this check early in the
start handling (before building needle_set / the loop over
request.params[1].get_array()) and reference request.params[1] and the loop over
request.params[1].get_array().
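The check the prompt asks for can be mirrored in Python for clarity; `validate_scanobjects` is a hypothetical illustration of the early validation, not code in either the C++ handler or the test framework, and the descriptor string in the test is a placeholder:

```python
def validate_scanobjects(params):
    """Early parameter check mirroring the requested C++ guard for scanblocks start."""
    # params[0] is the action; params[1] must be a present, non-null array
    if len(params) < 2 or params[1] is None or not isinstance(params[1], list):
        raise ValueError("Missing or invalid scanobjects parameter")
    return params[1]
```

Placing this check before the `needle_set` loop turns the generic UniValue type error into the documented `RPC_INVALID_PARAMETER`-style message.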
| {"template_request", RPCArg::Type::OBJ, RPCArg::Default{UniValue::VOBJ}, "Format of the template", | ||
| { | ||
| {"mode", RPCArg::Type::STR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "This must be set to \"template\", \"proposal\" (see BIP 23), or omitted"}, | ||
| {"capabilities", RPCArg::Type::ARR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "A list of strings", | ||
| { | ||
| {"mode", RPCArg::Type::STR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "This must be set to \"template\", \"proposal\" (see BIP 23), or omitted"}, | ||
| {"capabilities", RPCArg::Type::ARR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "A list of strings", | ||
| { | ||
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported feature, 'longpoll', 'coinbasevalue', 'proposal', 'serverlist', 'workid'"}, | ||
| }, | ||
| }, | ||
| {"rules", RPCArg::Type::ARR, RPCArg::Optional::NO, "A list of strings", | ||
| { | ||
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported softfork deployment"}, | ||
| }, | ||
| }, | ||
| }, | ||
| "\"template_request\""}, | ||
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported feature, 'longpoll', 'coinbasevalue', 'proposal', 'serverlist', 'workid'"}, | ||
| }}, | ||
| {"rules", RPCArg::Type::ARR, RPCArg::Optional::NO, "A list of strings", | ||
| { | ||
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported softfork deployment"}, | ||
| }}, | ||
| }, | ||
| RPCArgOptions{.oneline_description="\"template_request\""}}, |
template_request help metadata no longer matches the handler.
The RPC still accepts longpollid and requires data for proposal mode, but neither key is documented here, and rules is marked required even though the implementation treats it as optional. That makes help getblocktemplate misleading for valid request shapes.
💡 Suggested direction
{"template_request", RPCArg::Type::OBJ, RPCArg::Default{UniValue::VOBJ}, "Format of the template",
{
{"mode", RPCArg::Type::STR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "This must be set to \"template\", \"proposal\" (see BIP 23), or omitted"},
+ {"longpollid", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "Long poll ID from a previous template"},
+ {"data", RPCArg::Type::STR_HEX, RPCArg::Optional::OMITTED, "Hex-encoded block data to check when mode=\"proposal\""},
{"capabilities", RPCArg::Type::ARR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "A list of strings",
{
{"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported feature, 'longpoll', 'coinbasevalue', 'proposal', 'serverlist', 'workid'"},
}},
- {"rules", RPCArg::Type::ARR, RPCArg::Optional::NO, "A list of strings",
+ {"rules", RPCArg::Type::ARR, RPCArg::Optional::OMITTED, "A list of strings",
{
{"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported softfork deployment"},
         }},

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| {"template_request", RPCArg::Type::OBJ, RPCArg::Default{UniValue::VOBJ}, "Format of the template", | |
| { | |
| {"mode", RPCArg::Type::STR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "This must be set to \"template\", \"proposal\" (see BIP 23), or omitted"}, | |
| {"capabilities", RPCArg::Type::ARR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "A list of strings", | |
| { | |
| {"mode", RPCArg::Type::STR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "This must be set to \"template\", \"proposal\" (see BIP 23), or omitted"}, | |
| {"capabilities", RPCArg::Type::ARR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "A list of strings", | |
| { | |
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported feature, 'longpoll', 'coinbasevalue', 'proposal', 'serverlist', 'workid'"}, | |
| }, | |
| }, | |
| {"rules", RPCArg::Type::ARR, RPCArg::Optional::NO, "A list of strings", | |
| { | |
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported softfork deployment"}, | |
| }, | |
| }, | |
| }, | |
| "\"template_request\""}, | |
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported feature, 'longpoll', 'coinbasevalue', 'proposal', 'serverlist', 'workid'"}, | |
| }}, | |
| {"rules", RPCArg::Type::ARR, RPCArg::Optional::NO, "A list of strings", | |
| { | |
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported softfork deployment"}, | |
| }}, | |
| }, | |
| RPCArgOptions{.oneline_description="\"template_request\""}}, | |
| {"template_request", RPCArg::Type::OBJ, RPCArg::Default{UniValue::VOBJ}, "Format of the template", | |
| { | |
| {"mode", RPCArg::Type::STR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "This must be set to \"template\", \"proposal\" (see BIP 23), or omitted"}, | |
| {"longpollid", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "Long poll ID from a previous template"}, | |
| {"data", RPCArg::Type::STR_HEX, RPCArg::Optional::OMITTED, "Hex-encoded block data to check when mode=\"proposal\""}, | |
| {"capabilities", RPCArg::Type::ARR, /* treat as named arg */ RPCArg::Optional::OMITTED_NAMED_ARG, "A list of strings", | |
| { | |
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported feature, 'longpoll', 'coinbasevalue', 'proposal', 'serverlist', 'workid'"}, | |
| }}, | |
| {"rules", RPCArg::Type::ARR, RPCArg::Optional::OMITTED, "A list of strings", | |
| { | |
| {"str", RPCArg::Type::STR, RPCArg::Optional::OMITTED, "client side supported softfork deployment"}, | |
| }}, | |
| }, | |
| RPCArgOptions{.oneline_description="\"template_request\""}}, |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/rpc/mining.cpp` around lines 558 - 570, The help metadata for the
"template_request" RPCArg is out of sync with the handler: add a "longpollid"
RPCArg (Type::STR, Optional::OMITTED) and a "data" RPCArg (Type::STR or OBJ as
implemented, required only for proposal mode) to match the handler's accepted
keys, change the "rules" entry from RPCArg::Optional::NO to
RPCArg::Optional::OMITTED (since the implementation treats it as optional), and
update the descriptions to state that "data" is required when mode ==
\"proposal\" and that "longpollid" is optional; locate and edit the
"template_request" block in src/rpc/mining.cpp and ensure
RPCArgOptions/oneline_description remains accurate.
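Assembled as a client would send it, a `template_request` covering the keys named in the prompt above might look like the following sketch (the key names come from the RPCArg metadata discussed here; the values are purely illustrative):

```python
# Hypothetical getblocktemplate "template_request" payload; key names follow
# the RPCArg metadata above, values are illustrative only.
template_request = {
    "mode": "template",            # "template", "proposal" (BIP 23), or omitted
    "longpollid": "deadbeef-42",   # optional: ID returned by a previous template
    "capabilities": ["longpoll", "coinbasevalue", "proposal", "serverlist", "workid"],
    "rules": [],                   # client-side supported softfork deployments
}
# "data" (hex-encoded block) is only added when mode == "proposal".
assert "data" not in template_request
```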
…tializers
fa2c72d rpc: Set RPCArg options with designated initializers (MacroFake)
Pull request description:
For optional constructor arguments, use a new struct. This comes with two benefits:
* Earlier unused optional arguments can be omitted
* Designated initializers can be used
ACKs for top commit:
stickies-v: re-ACK fa2c72d
Tree-SHA512: 2a0619548187cc7437fee2466ac4780746490622f202659f53641be01bc2a1fea4416d1a77f3e963bf7c4cce62899b61fab0b9683440cf82f68be44f63826658
626b7c8 fuzz: add scanblocks as safe for fuzzing (James O'Beirne)
94fe545 test: rpc: add scanblocks functional test (Jonas Schnelli)
6ef2566 rpc: add scanblocks - scan for relevant blocks with descriptors (Jonas Schnelli)
a4258f6 rpc: move-only: consolidate blockchain scan args (James O'Beirne)
Pull request description:
Revives bitcoin#20664. All feedback from the previous PR has either been responded to inline or incorporated here.
---
Major changes from Jonas' PR:
- consolidated arguments for scantxoutset/scanblocks
- substantial cleanup of the functional test
Here's the range-diff (`git range-diff master jonasschnelli/2020/12/filterblocks_rpc jamesob/2021-11-scanblocks`): https://gist.github.com/jamesob/aa4a975344209f0316444b8de2ec1d18
### Original PR description
> The `scanblocks` RPC call allows one to get relevant blockhashes from a set of descriptors by scanning all blockfilters in a given range.
>
> **Example:**
>
> `scanblocks start '["addr(<bitcoin_address>)"]' 661000` (returns relevant blockhashes for `<bitcoin_address>` from blockrange 661000->tip)
>
> ## Why is this useful?
> **Fast wallet rescans**: get the relevant blocks and only rescan those via `rescanblockchain getblockheader(<hash>)[height] getblockheader(<hash>)[height]`. A future PR may add an option to allow to provide an array of blockhashes to `rescanblockchain`.
>
> **prune wallet rescans** (_needs additional changes_): together with a call to fetch blocks from the p2p network if they have been pruned, it would allow to rescan wallets back to the genesis block in pruned mode (relevant bitcoin#15946).
>
> **SPV mode** (_needs additional changes_): it would be possible to build the blockfilterindex from the p2p network (rather than deriving them from the blocks) and thus allow some sort of hybrid-SPV mode with moderate bandwidth consumption (related bitcoin#9483)
ACKs for top commit:
furszy: diff re-ACK 626b7c8
Tree-SHA512: f84e4dcb851b122b39e9700c58fbc31e899cdcf9b587df9505eaf1f45578cc4253e89ce2a45d1ff21bd213e31ddeedbbcad2c80810f46755b30acc17b07e2873
ae3626e test: use MiniWallet for rpc_scanblocks.py (Sebastian Falbesoner)
Pull request description:
This is a small follow-up for bitcoin#23549 which introduced `scanblocks`. Since that RPC doesn't need the wallet, we can switch the functional test to use MiniWallet.
ACKs for top commit:
MarcoFalke: review ACK ae3626e
Tree-SHA512: e0b0088103e059b29719299c58fd5173b1cff58cb73025a9a33ad493cd0ac50ba25a5f790e471c00a09b4dca80f3c011174ca4e0c8f34746c39831f5823dc8ba
dc3ec74 Add rescan removal release note (Samuel Dobson)
bccd1d9 Remove -rescan startup parameter (Samuel Dobson)
f963b0f Corrupt wallet tx shouldn't trigger rescan of all wallets (Samuel Dobson)
6c00649 Remove outdated dummy wallet -salvagewallet arg (Samuel Dobson)
Pull request description:
Remove the `-rescan` startup parameter. Rescans can be run with the `rescanblockchain` RPC. Rescans are still done on wallet-load if needed due to corruption, for example.
ACKs for top commit:
achow101: ACK dc3ec74
laanwj: re-ACK dc3ec74
Tree-SHA512: 608360d0e7d73737fd3ef408b01b33d97a75eebccd70c6d1b47a32fecb99b9105b520b111b225beb10611c09aa840a2b6d2b6e6e54be5d0362829e757289de5c
…ED_RESCAN
8615507 scripted-diff: rename DBErrors::RESCAN_REQUIRED to NEED_RESCAN (Samuel Dobson)
Pull request description:
Suggested here as a trivial follow-up: bitcoin#23123 (comment)
Makes RESCAN_REQUIRED consistent with NEED_REWRITE
ACKs for top commit:
achow101: ACK 8615507
jonatack: ACK 8615507
Tree-SHA512: 82d057c45e192cd6dd8a47675b52699e6cbc82272609a971e6e5d6796aad14a941a70e40d3913dbb611f79c8eadff8030c60ea6f203f2edc3720c0e78c166b97
…n `--enable-debug`
BACKPORT NOTE: missing changes for feature_taproot.py
1647a11 tests: Reorder longer running tests in test_runner (Andrew Chow)
ff6c9fe tests: Whitelist test p2p connection in rpc_packages (Andrew Chow)
8c20796 tests: Use waitfornewblock for work queue test in interface_rpc (Andrew Chow)
6c872d5 tests: Initialize sigops draining script with bytes in feature_taproot (Andrew Chow)
544cbf7 tests: Use batched RPC in feature_fee_estimation (Andrew Chow)
4ad7272 tests: reduce number of generated blocks for wallet_import_rescan (Andrew Chow)
Pull request description:
When configured with `--enable-debug`, many tests become dramatically slower. These slowdowns are particularly noticed in tests that generate a lot of blocks in separate calls, make a lot of RPC calls, or send a lot of data from the test framework's P2P connection. This PR aims to improve the runtime of some of the slower tests and improve the overall runtime of the test runner. This has improved the runtime of the test runner from ~400s to ~140s on my computer.
The slowest test by far was `wallet_import_rescan.py`. This was taking ~320s. Most of that time was spent waiting for blocks to be mined and then synced to the other nodes. It was generating a new block for every new transaction it was creating in a setup loop. However it is not necessary to have one tx per block. By mining a block only every 10 txs, the runtime is improved to ~61s.
The second slowest test was `feature_fee_estimation.py`. This test spends most of its time waiting for RPCs to respond. I was able to improve its runtime by batching RPC requests. This has improved the runtime from ~201s to ~140s.
In `feature_taproot.py`, the test was constructing a Python `CScript` using a very large list of `OP_CHECKSIG`s. The constructor for the Python implementation of `CScript` was iterating this list in order to create a `bytes` from it even though a `bytes` could be created from it without iterating. By making the `bytes` before passing it into the constructor, we are able to improve this test's runtime from ~131s to ~106s.
Although `interface_rpc.py` was not typically a slow test, I found that it would occasionally have a super long runtime. It typically takes ~7s, but I have observed it taking >400s to run on occasion. This longer runtime occurs more often with `--enable-debug`. This long runtime was caused by the "exceeding work queue" test which is really just trying to trigger a race condition. In this test, it would create a few threads and try an RPC in a loop in the hopes that eventually one of the RPCs would be added to the work queue while another was processing. It used `getrpcinfo` for this, but this function is fairly fast. I believe what was happening was that with `--enable-debug`, all of the code for receiving the RPC would often take longer to run than the RPC itself, so the majority of the requests would succeed, until we got lucky after tens of thousands of requests. By changing this to use a slow RPC, the race condition can be triggered more reliably, and much sooner as well. I've used `waitfornewblock` with a 500ms timeout. This improves the runtime to ~3s consistently.
The last test I've changed was `rpc_packages.py`. This test was one of the higher runtime variability tests. The main source of this variation appears to be waiting for the test node to relay a transaction to the test framework's P2P connection. By whitelisting that peer, the variability is reduced to nearly 0.
Lastly, I've reordered the tests in `test_runner.py` to account for the slower runtimes when configured with `--enable-debug`. Some of the slow tests I've looked at were listed as being fast, which was causing overall `test_runner.py` runtime to be extended. This change makes the test runner's runtime be bounded by the slowest test (currently `feature_fee_estimation.py` with my usual config, `-j 60`).
ACKs for top commit:
willcl-ark: ACK 1647a11
Tree-SHA512: 529e0da4bc51f12c78a40d6d70b3a492b97723c96a3526148c46943d923c118737b32d2aec23d246392e50ab48013891ef19fe6205bf538b61b70d4f16a203eb
…ck.py by using MiniWallet
dee8549 test: simplify and speedup mempool_updatefromblock.py by using MiniWallet (Sebastian Falbesoner)
Pull request description:
This PR simplifies the functional test mempool_updatefromblock.py by using MiniWallet in order to avoid manual low-level tx creation (signing, outputs selection, fee calculation). Most of the tedious work is done by the method `MiniWallet.send_self_transfer_multi` (calling `create_self_transfer_multi` internally) which supports spending a given set of UTXOs and creating a certain number of outputs.
As a nice side-effect, the test's performance increases significantly (~3.5x on my system):
```
master  1m56.80s real  1m50.10s user  0m06.36s system
PR      0m32.34s real  0m30.26s user  0m01.41s system
```
The arguments `start_input_txid` and `end_address` have been removed from the `transaction_graph_test` method, as they are currently unused and I don't see them being needed for future tests.
ACKs for top commit:
brunoerg: crACK dee8549
MarcoFalke: lgtm ACK dee8549 🚏
Tree-SHA512: 9f6da634bdc8c272f9a2af1cddaa364ee371d4e95554463a066249eecebb668d8c6cb123ec8a5404c41b3291010c0c8806a8a01dd227733cec03e73aa93b0103
…et's initialization
Missing changes are in the files:
deleted by us: test/functional/feature_txindex_compatibility.py
deleted by us: test/functional/mempool_dust.py
both modified: test/functional/mempool_expiry.py
both modified: test/functional/mempool_resurrect.py
both modified: test/functional/mining_getblocktemplate_longpoll.py
both modified: test/functional/p2p_leak_tx.py
deleted by us: test/functional/p2p_tx_privacy.py
both modified: test/functional/rpc_net.py
both modified: test/functional/rpc_txoutproof.py
6bd098a test: simplify tests by using the pre-mined chain (kouloumos)
42029a7 test: remove redundant blocks generation logic (kouloumos)
0377d6b test: add `rescan_utxos` in MiniWallet's initialization (kouloumos)
Pull request description:
When a pre-mined blockchain is used (default behavior), it [contains coinbase outputs in blocks 76-100](https://github.com/bitcoin/bitcoin/blob/07c54de550035c3441f34ef6c34209666267eb38/test/functional/test_framework/test_framework.py#L809-L813) to [the MiniWallet's default address](https://github.com/bitcoin/bitcoin/blob/07c54de550035c3441f34ef6c34209666267eb38/test/functional/test_framework/wallet.py#L99-L101). That's why we always* `rescan_utxos()` after initializing the MiniWallet, in order for the MiniWallet to account for those mature UTXOs.
> The tests following this usage pattern can be seen with:
> ```git grep -n "MiniWallet(" $(git grep -le "rescan_utxos()" $(git grep -Le "self.setup_clean_chain = True"))```
**This PR adds `rescan_utxos()` inside MiniWallet's initialization to simplify usage when the MiniWallet is used with a pre-mined chain.**
### secondary changes
- *There are a few tests that use the pre-mined blockchain but do not `rescan_utxos()`, they instead generate new blocks to create mature UTXOs.
> Those were written before the `rescan_utxos()` method was introduced with bitcoin#22955 (fac66d0) and can be seen with:
> `git grep -n "MiniWallet(" $(git grep -Le "rescan_utxos()" $(git grep -Le "self.setup_clean_chain = True"))`
>
After including `rescan_utxos()` inside MiniWallet's initialization, this block-generation logic is not needed as the MiniWallet already accounts for enough mature UTXOs to perform the tests. **Therefore the now-redundant block-generation logic is removed from those tests with the second commit.**
- The rest of the MiniWallet tests use a clean chain (`self.setup_clean_chain = True`) and can be seen with
`git grep -n "MiniWallet(" $(git grep -le "self.setup_clean_chain = True")`
From those, there are a few that start from a clean chain and then create enough mature UTXOs for the MiniWallet with this kind of logic:
https://github.com/bitcoin/bitcoin/blob/07c54de550035c3441f34ef6c34209666267eb38/test/functional/mempool_expiry.py#L36-L40
**Those tests are simplified in the third commit to instead utilize the mature UTXOs of the pre-mined chain.**
ACKs for top commit:
MarcoFalke:
ACK 6bd098a 🕷
theStack:
re-ACK 6bd098a
Tree-SHA512: 7f9361e36910e4000c33a32efdde4449f4a8a763bb42df96de826fcde469f9362f701b8c99e2a2c482d2d5a42a83ae5ae3844fdbed187ed0ff231f386c222493
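The pattern change can be shown with a toy sketch (both class names here are invented for illustration; the real MiniWallet lives in test/functional/test_framework/wallet.py and talks to an actual node):

```python
class FakeNode:
    """Invented stand-in for a test node exposing mature coinbase UTXOs."""
    def __init__(self, utxos):
        self._utxos = list(utxos)

    def scan_utxos(self):
        return list(self._utxos)

class ToyMiniWallet:
    """Sketch of the bitcoin#26886 change: rescan during construction, so
    tests on the pre-mined chain no longer need an explicit rescan call."""
    def __init__(self, node):
        self.node = node
        self.utxos = []
        self.rescan_utxos()  # previously every test had to call this itself

    def rescan_utxos(self):
        self.utxos = self.node.scan_utxos()

# Before: w = ToyMiniWallet(node); w.rescan_utxos()
# After:  w = ToyMiniWallet(node)  # UTXOs already available
```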
… fee in longpoll
fa0abcd test: Flatten miniwallet array and remove random fee in longpoll (MarcoFalke)
Pull request description:
* Using a single MiniWallet is enough.
* A random fee isn't needed either.
ACKs for top commit:
theStack: re-ACK fa0abcd
Tree-SHA512: 77b99885b3f0d325d067838122114be57ec999ebc82912de6a22c33e2ba28a341c5e053c5bbc424b9922c2616562289a57c7156bd3b431d779182c2e472da59c
fa18504 rpc: Add submit option to generateblock (MarcoFalke)
fab9a08 refactor: Replace block_hash with block_out (MarcoFalke)
Pull request description:
When submit is turned off, a block can be generated and returned as hex, to be used for further tests. For example, it can be submitted on a different node, on a different interface (like p2p), or just never submitted and be used for other testing purposes.
ACKs for top commit:
instagibbs: ACK fa18504
TheCharlatan: tACK fa18504
Tree-SHA512: 1b2ab6b71bb7e155c6482d75f5373f4e77de6446cb16bc2dfd19e7a4075b3a6ad87d7ad7a049a9eed934cb71574acfd27202f54c8bb3b03fac869f2e95db7ee5
…ams for mempool/contents
1ff5d61 doc: add mempool/contents rest verbose and mempool_sequence args (Andrew Toth)
52a31dc tests: mempool/contents verbose and mempool_sequence query params tests (Andrew Toth)
a518fff rest: add verbose and mempool_sequence query params for mempool/contents (Andrew Toth)
Pull request description:
The verbose mempool json response can get very large. This adds an option to return the non-verbose response of just the txids. It is identical to the rpc response so the diff here is minimal. This also adds the mempool_sequence parameter for rpc consistency. Verbose defaults to true to remain backwards compatible. It uses query parameters to be compatible with the efforts in bitcoin#25752.
ACKs for top commit:
achow101: ACK 1ff5d61
stickies-v: re-ACK 1ff5d61
pablomartin4btc: tested ACK 1ff5d61.
Tree-SHA512: 1bf08a7ffde2e7db14dc746e421feedf17d84c4b3f1141e79e36feb6014811dfde80e1d8dbc476c15ff705de2d3c967b3081dcd80536d76b7edf888f1a92e9d1
ff138f9 doc: add `scanblocks` to list of descriptor RPCs (Sebastian Falbesoner)
Pull request description:
This is a tiny documentation follow-up to bitcoin#23549.
ACKs for top commit:
aureleoules: ACK ff138f9
shaavan: ACK ff138f9
Tree-SHA512: cc45b78c13ec4aa5bac688648f8e83f04a9ae54376e67371b280428f0253e2585cf0575fa51e3835d4c13c940559bfcdd88d7504bf97a81b2a73bb34a0db7577
… rpc_scanblocks.py
fa54d30 test: check for false-positives in rpc_scanblocks.py (Sebastian Falbesoner)
3bca6cd test: add compact block filter (BIP158) helper routines (Sebastian Falbesoner)
25ee74d test: add SipHash implementation for generic data in Python (Sebastian Falbesoner)
Pull request description:
This PR adds a fixed false-positive element check to the functional test rpc_scanblocks.py by using a pre-calculated scriptPubKey that collides with the regtest genesis block's coinbase output. Note that determining a BIP158 false-positive at runtime would also be possible, but take too long (we'd need to create and check ~800k output scripts on average, which took at least 2 minutes on average on my machine).
The introduced check is related to issue bitcoin#26322 and more concretely inspired by PR bitcoin#26325 which introduces an "accurate" mode that filters out these false-positives. The introduced cryptography routines (siphash for generic data) and helpers (BIP158 ranged hash calculation, relevant scriptPubKey per block determination) could potentially also be useful for more tests in the future that involve compact block filters.
ACKs for top commit:
achow101: ACK fa54d30
Tree-SHA512: c6af50864146028228d197fb022ba2ff24d1ef48dc7d171bccfb21e62dd50ac80db5fae0c53f5d205edabd48b3493c7aa2040f628a223e68df086ec2243e5a93
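The element hashing those helpers implement can be sketched in Python as follows. This is our own illustrative rewrite, not the test framework's code: `siphash24` and `bip158_element_hash` are names we chose, while the constants follow BIP158 (the basic filter uses M = 784931 and keys SipHash-2-4 with the first 16 bytes of the block hash):

```python
MASK = (1 << 64) - 1
M = 784931  # BIP158 false-positive parameter for the basic filter

def _rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & MASK

def _sipround(v0, v1, v2, v3):
    v0 = (v0 + v1) & MASK; v1 = _rotl(v1, 13); v1 ^= v0; v0 = _rotl(v0, 32)
    v2 = (v2 + v3) & MASK; v3 = _rotl(v3, 16); v3 ^= v2
    v0 = (v0 + v3) & MASK; v3 = _rotl(v3, 21); v3 ^= v0
    v2 = (v2 + v1) & MASK; v1 = _rotl(v1, 17); v1 ^= v2; v2 = _rotl(v2, 32)
    return v0, v1, v2, v3

def siphash24(k0, k1, data):
    """SipHash-2-4 over arbitrary bytes, returning a 64-bit integer."""
    v0 = k0 ^ 0x736F6D6570736575
    v1 = k1 ^ 0x646F72616E646F6D
    v2 = k0 ^ 0x6C7967656E657261
    v3 = k1 ^ 0x7465646279746573
    whole = len(data) - len(data) % 8
    for i in range(0, whole, 8):
        m = int.from_bytes(data[i:i + 8], 'little')
        v3 ^= m
        v0, v1, v2, v3 = _sipround(*_sipround(v0, v1, v2, v3))
        v0 ^= m
    # final block: remaining bytes, with the message length in the top byte
    m = ((len(data) & 0xFF) << 56) | int.from_bytes(data[whole:], 'little')
    v3 ^= m
    v0, v1, v2, v3 = _sipround(*_sipround(v0, v1, v2, v3))
    v0 ^= m
    v2 ^= 0xFF
    for _ in range(4):
        v0, v1, v2, v3 = _sipround(v0, v1, v2, v3)
    return (v0 ^ v1 ^ v2 ^ v3) & MASK

def bip158_element_hash(element, n, block_hash):
    """Map one filter element into [0, n*M): SipHash-2-4 keyed with the first
    16 bytes of the block hash, then a 64x64 multiply keeping the top 64 bits."""
    k0 = int.from_bytes(block_hash[:8], 'little')
    k1 = int.from_bytes(block_hash[8:16], 'little')
    return (siphash24(k0, k1, element) * (n * M)) >> 64
```

Under this mapping, a BIP158 false positive is simply two distinct scriptPubKeys that hash to the same value for the same block hash and n, which is what the fixed collision element in the test exercises.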
🧹 Nitpick comments (1)
src/wallet/wallettool.cpp (1)
95-98: 💤 Low value
Consider guiding the user to the `rescanblockchain` RPC in the warning message.
Since `dash-wallet` runs without a chain connection and cannot perform a rescan itself, the message "Wallet requires a rescan" leaves the user without actionable guidance. Mentioning the `rescanblockchain` RPC would improve the UX.
💡 Suggested wording
```diff
-} else if (load_wallet_ret == DBErrors::NEED_RESCAN) {
-    tfm::format(std::cerr, "Error reading %s! Some transaction data might be missing or"
-                " incorrect. Wallet requires a rescan.",
-                name);
+} else if (load_wallet_ret == DBErrors::NEED_RESCAN) {
+    tfm::format(std::cerr, "Error reading %s! Some transaction data might be missing or"
+                " incorrect. Wallet requires a rescan via the rescanblockchain RPC.",
+                name);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wallet/wallettool.cpp` around lines 95 - 98, The error branch handling DBErrors::NEED_RESCAN (when load_wallet_ret equals DBErrors::NEED_RESCAN) prints a non-actionable "Wallet requires a rescan" message; update that tfm::format call in wallettool.cpp (where load_wallet_ret is checked) to include guidance to use the node RPC rescanblockchain (e.g., "Wallet requires a rescan; run the node RPC 'rescanblockchain' or restart the node with -rescan, as dash-wallet cannot perform a rescan itself") so users have a clear, actionable next step.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/wallet/wallettool.cpp`:
- Around line 95-98: The error branch handling DBErrors::NEED_RESCAN (when
load_wallet_ret equals DBErrors::NEED_RESCAN) prints a non-actionable "Wallet
requires a rescan" message; update that tfm::format call in wallettool.cpp
(where load_wallet_ret is checked) to include guidance to use the node RPC
rescanblockchain (e.g., "Wallet requires a rescan; run the node RPC
'rescanblockchain' or restart the node with -rescan, as dash-wallet cannot
perform a rescan itself") so users have a clear, actionable next step.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 477625c6-b302-47b2-8a9e-4cbad1851ffc
📒 Files selected for processing (59)
- doc/REST-interface.md
- doc/descriptors.md
- doc/release-notes-remove-rescan.md
- src/dummywallet.cpp
- src/init.cpp
- src/rest.cpp
- src/rpc/blockchain.cpp
- src/rpc/client.cpp
- src/rpc/mining.cpp
- src/rpc/server.cpp
- src/rpc/util.cpp
- src/rpc/util.h
- src/test/fuzz/rpc.cpp
- src/wallet/init.cpp
- src/wallet/rpc/backup.cpp
- src/wallet/rpc/coins.cpp
- src/wallet/rpc/spend.cpp
- src/wallet/rpc/wallet.cpp
- src/wallet/salvage.cpp
- src/wallet/test/wallet_tests.cpp
- src/wallet/wallet.cpp
- src/wallet/wallet.h
- src/wallet/walletdb.cpp
- src/wallet/walletdb.h
- src/wallet/wallettool.cpp
- test/functional/feature_bip68_sequence.py
- test/functional/feature_dbcrash.py
- test/functional/feature_fee_estimation.py
- test/functional/feature_notifications.py
- test/functional/interface_rest.py
- test/functional/interface_rpc.py
- test/functional/interface_usdt_utxocache.py
- test/functional/interface_zmq.py
- test/functional/mempool_accept.py
- test/functional/mempool_datacarrier.py
- test/functional/mempool_package_onemore.py
- test/functional/mempool_persist.py
- test/functional/mempool_reorg.py
- test/functional/mempool_spend_coinbase.py
- test/functional/mempool_unbroadcast.py
- test/functional/mempool_updatefromblock.py
- test/functional/mining_getblocktemplate_longpoll.py
- test/functional/mining_prioritisetransaction.py
- test/functional/p2p_blocksonly.py
- test/functional/p2p_filter.py
- test/functional/p2p_permissions.py
- test/functional/rpc_generate.py
- test/functional/rpc_mempool_info.py
- test/functional/rpc_packages.py
- test/functional/rpc_rawtransaction.py
- test/functional/rpc_scanblocks.py
- test/functional/rpc_scantxoutset.py
- test/functional/test_framework/wallet.py
- test/functional/test_runner.py
- test/functional/wallet_basic.py
- test/functional/wallet_fast_rescan.py
- test/functional/wallet_hd.py
- test/functional/wallet_import_rescan.py
- test/functional/wallet_upgradetohd.py
💤 Files with no reviewable changes (21)
- test/functional/mempool_datacarrier.py
- test/functional/mining_prioritisetransaction.py
- test/functional/mempool_package_onemore.py
- test/functional/feature_dbcrash.py
- src/wallet/init.cpp
- test/functional/p2p_filter.py
- test/functional/mempool_unbroadcast.py
- test/functional/wallet_fast_rescan.py
- test/functional/rpc_scantxoutset.py
- test/functional/mempool_reorg.py
- test/functional/p2p_permissions.py
- test/functional/p2p_blocksonly.py
- test/functional/rpc_rawtransaction.py
- test/functional/mempool_persist.py
- test/functional/interface_zmq.py
- src/dummywallet.cpp
- test/functional/rpc_mempool_info.py
- test/functional/interface_usdt_utxocache.py
- test/functional/feature_bip68_sequence.py
- test/functional/mempool_accept.py
- test/functional/mempool_spend_coinbase.py
✅ Files skipped from review due to trivial changes (10)
- src/wallet/salvage.cpp
- src/test/fuzz/rpc.cpp
- src/init.cpp
- doc/release-notes-remove-rescan.md
- src/wallet/rpc/coins.cpp
- src/rpc/server.cpp
- doc/REST-interface.md
- src/wallet/rpc/wallet.cpp
- src/rpc/util.h
- src/wallet/rpc/backup.cpp
🚧 Files skipped from review as they are similar to previous changes (16)
- doc/descriptors.md
- src/wallet/rpc/spend.cpp
- test/functional/interface_rest.py
- src/wallet/test/wallet_tests.cpp
- test/functional/feature_notifications.py
- src/rpc/util.cpp
- test/functional/interface_rpc.py
- test/functional/wallet_import_rescan.py
- test/functional/wallet_upgradetohd.py
- test/functional/feature_fee_estimation.py
- src/rest.cpp
- test/functional/rpc_packages.py
- test/functional/wallet_hd.py
- test/functional/wallet_basic.py
- src/wallet/wallet.cpp
- src/wallet/walletdb.cpp
thepastaclaw
left a comment
Code Review
PR #7192 backports 13 Bitcoin Core PRs around -rescan removal, scanblocks RPC, and test infrastructure. Two blocking Dash-specific issues: (1) the upstream -rescan removal deregisters the arg but leaves two Dash-specific consumers behind, breaking the Dash-extended -rescan=2 (full-rescan-from-genesis) at startup with a generic 'Invalid parameter' error and turning the related validation into dead code; (2) the bitcoin#26341 cherry-pick imports a hardcoded false-positive scriptPubKey that was computed against Bitcoin's regtest genesis and won't collide with Dash's distinct regtest genesis, breaking rpc_scanblocks.py. Additional suggestions cover a LoadWallet refactor that runs CoinJoin salt setup on corrupt wallets, an upstream off-by-one in scanblocks chunking, and incomplete cleanup from the partial bitcoin#26886 backport.
Reviewed commit: aab109e
🔴 2 blocking | 🟡 2 suggestion(s) | 💬 1 nitpick(s)
2 additional findings
🔴 blocking: Dash-specific `-rescan=` is broken — arg deregistered but consumers remain
src/wallet/wallet.cpp (line 3507)
The bitcoin#23123 backport drops `-rescan=<mode>` from `WalletInit::AddWalletOptions` (src/wallet/init.cpp) and `DummyWalletInit` (src/dummywallet.cpp), but two Dash-specific consumers of `gArgs.GetIntArg("-rescan", ...)` survive:
- `src/wallet/wallet.cpp:3507` — `if (gArgs.GetIntArg("-rescan", 0) != 2)` controls Dash's tri-state semantics where mode 2 = full rescan from genesis (skip the `findFirstBlockWithTimeAndHeight` time-window optimization).
- `src/wallet/init.cpp:155-160` — validates the mode is in [0,2] and warns/forces-removes otherwise.
User-facing consequences:
1. `ArgsManager::ParseParameters` (src/util/system.cpp:371-373) rejects unknown args with `Invalid parameter %s` and aborts startup. Any user/script passing `-rescan=2` to dashd will now fail at parse time, before either consumer ever runs. This is a Dash-specific regression — Bitcoin Core's `-rescan` was a boolean with no equivalent of mode 2.
2. With `-rescan` unregistered, `GetIntArg("-rescan", 0)` always returns 0 in normal flows, so the `!= 2` predicate is always true and the 'full rescan from genesis' code path is unreachable. The validation block in init.cpp is also dead code.
Resolution: either (a) fully remove these Dash-specific references and document the loss of the full-genesis-rescan capability in the release notes, or (b) re-introduce -rescan as a Dash-extended argument with the existing tri-state semantics. The current state silently breaks documented Dash behavior.
💬 nitpick: Partial bitcoin#26886 backport leaves stale pre-prerequisite generate() workarounds in several tests
test/functional/rpc_txoutproof.py (lines 23-38)
bitcoin#26886 was a 3-commit series: it added MiniWallet.__init__() -> rescan_utxos() (backported here at test/functional/test_framework/wallet.py:92-97), then removed the now-redundant clean-chain / coinbase-maturation setup from tests that relied on the manual workaround. The backport carries the prerequisite but skips multiple cleanup hunks for files that do exist in Dash. The merge commit lists only 3 excluded files (feature_txindex_compatibility.py, mempool_dust.py, mempool_expiry.py).
Surviving stale patterns:
- `test/functional/rpc_txoutproof.py:23-38` still has `self.setup_clean_chain = True`, manual `generate(miniwallet, 5)`, manual `generate(..., COINBASE_MATURITY)`, and the old `5 + COINBASE_MATURITY` height assertion.
- `test/functional/rpc_net.py:55-58` and `test/functional/mining_getblocktemplate_longpoll.py:63-64` keep equivalent `generate(...)` calls that upstream removed.
Non-blocking — tests still pass, just slower. Either complete the cleanup in a follow-up or document the intentional scope reduction in the merge commit message.
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/wallet/wallet.cpp`:
- [BLOCKING] line 3507: Dash-specific `-rescan=<mode>` is broken — arg deregistered but consumers remain
The bitcoin#23123 backport drops `-rescan=<mode>` from `WalletInit::AddWalletOptions` (src/wallet/init.cpp) and `DummyWalletInit` (src/dummywallet.cpp), but two Dash-specific consumers of `gArgs.GetIntArg("-rescan", ...)` survive:
- `src/wallet/wallet.cpp:3507` — `if (gArgs.GetIntArg("-rescan", 0) != 2)` controls Dash's tri-state semantics where mode 2 = full rescan from genesis (skip the `findFirstBlockWithTimeAndHeight` time-window optimization).
- `src/wallet/init.cpp:155-160` — validates the mode is in [0,2] and warns/forces-removes otherwise.
User-facing consequences:
1. `ArgsManager::ParseParameters` (src/util/system.cpp:371-373) rejects unknown args with `Invalid parameter %s` and aborts startup. Any user/script passing `-rescan=2` to dashd will now fail at parse time, before either consumer ever runs. This is a Dash-specific regression — Bitcoin Core's `-rescan` was a boolean with no equivalent of mode 2.
2. With `-rescan` unregistered, `GetIntArg("-rescan", 0)` always returns 0 in normal flows, so the `!= 2` predicate is always true and the 'full rescan from genesis' code path is unreachable. The validation block in init.cpp is also dead code.
Resolution: either (a) fully remove these Dash-specific references and document the loss of the full-genesis-rescan capability in the release notes, or (b) re-introduce `-rescan` as a Dash-extended argument with the existing tri-state semantics. The current state silently breaks documented Dash behavior.
- [SUGGESTION] lines 2441-2447: LoadWallet now runs CoinJoin salt setup on corrupt/too-new wallets and masks specific error codes
bitcoin#23147 removes the `if (nLoadWalletRet != DBErrors::LOAD_OK) return nLoadWalletRet;` early-return so that `NEED_RESCAN` can flow through. Upstream Bitcoin doesn't have CoinJoin, so removing the early-return was safe there. In Dash's adaptation the salt block sits *after* the (now-removed) early-return, so it now executes for *any* non-OK status (CORRUPT, TOO_NEW, NEED_REWRITE, NONCRITICAL_ERROR, NEED_RESCAN, etc.).
Two concerns:
1. `SetCoinJoinSalt(GetRandHash())` issues a write through `WalletBatch` against a database whose loader just reported it is in a bad state (e.g. CORRUPT or NEED_REWRITE). For NEED_REWRITE this also runs after `Rewrite("\x04pool")` and may interact with rewrite expectations.
2. If `SetCoinJoinSalt` fails, the function returns `DBErrors::LOAD_FAIL`, replacing what may have been the more informative `nLoadWalletRet` (NEED_RESCAN, CORRUPT, TOO_NEW). `CWallet::Create` then takes the generic 'Error loading %s' branch instead of the rescan-warning / corruption-specific recovery paths.
Gate the salt block on a successful-enough load state (e.g. `LOAD_OK || NEED_RESCAN`) or restore the early-return for the statuses that should not proceed.
```python
false_positive_spk = bytes.fromhex("1400000000000000000000000000000000000ad23b")

genesis_coinbase_hash = bip158_basic_element_hash(genesis_coinbase_spk, 1, genesis_blockhash)
false_positive_hash = bip158_basic_element_hash(false_positive_spk, 1, genesis_blockhash)
assert_equal(genesis_coinbase_hash, false_positive_hash)
```
🔴 Blocking: bitcoin#26341 false-positive hex is Bitcoin-regtest-only and won't collide with Dash's genesis
`false_positive_spk = bytes.fromhex("1400000000000000000000000000000000000ad23b")` is taken verbatim from upstream. That value was brute-force-computed by Bitcoin Core to BIP158-collide with Bitcoin's regtest genesis coinbase scriptPubKey under SipHash-2-4 keyed by the *Bitcoin* regtest genesis hash.
Dash's regtest genesis block has a different timestamp/coinbase script and therefore a different genesis hash, so the SipHash key differs and the assertion at line 89 (`assert_equal(genesis_coinbase_hash, false_positive_hash)`) will not hold. The test will fail.
The upstream PR description for this backport explicitly mentions a brute-force loop "to calculate false-positive for rpc_scanblocks.py" — strong indication the author knew a Dash-specific replacement was needed, but the merged commit still ships the upstream constant. A Dash-specific scriptPubKey collision needs to be computed and substituted before this test can pass.
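A Dash-specific collision could be recomputed with a brute-force loop along the lines the PR description alludes to. The sketch below is illustrative only, not the actual merged patch: `bip158_basic_element_hash` mirrors the helper the test uses (BIP158 basic filters use M = 784931, and the SipHash key is the first 16 bytes of the block hash), while `find_false_positive` and the candidate shape (a 0x14 length byte followed by 20 bytes, matching the upstream constant) are assumptions. The real genesis scriptPubKey and block hash would have to come from a running Dash regtest node.

```python
def _rotl64(x, b):
    return ((x << b) | (x >> (64 - b))) & 0xFFFFFFFFFFFFFFFF

def siphash_2_4(k0, k1, data):
    """Plain SipHash-2-4 over `data` with 128-bit key (k0, k1)."""
    v0 = 0x736f6d6570736575 ^ k0
    v1 = 0x646f72616e646f6d ^ k1
    v2 = 0x6c7967656e657261 ^ k0
    v3 = 0x7465646279746573 ^ k1
    mask = 0xFFFFFFFFFFFFFFFF

    def sipround():
        nonlocal v0, v1, v2, v3
        v0 = (v0 + v1) & mask; v1 = _rotl64(v1, 13); v1 ^= v0; v0 = _rotl64(v0, 32)
        v2 = (v2 + v3) & mask; v3 = _rotl64(v3, 16); v3 ^= v2
        v0 = (v0 + v3) & mask; v3 = _rotl64(v3, 21); v3 ^= v0
        v2 = (v2 + v1) & mask; v1 = _rotl64(v1, 17); v1 ^= v2; v2 = _rotl64(v2, 32)

    tail = len(data) % 8
    for i in range(0, len(data) - tail, 8):
        m = int.from_bytes(data[i:i + 8], "little")
        v3 ^= m; sipround(); sipround(); v0 ^= m
    # final block: remaining bytes plus the message length in the top byte
    last = ((len(data) & 0xFF) << 56) | int.from_bytes(data[len(data) - tail:], "little")
    v3 ^= last; sipround(); sipround(); v0 ^= last
    v2 ^= 0xFF
    for _ in range(4):
        sipround()
    return v0 ^ v1 ^ v2 ^ v3

def bip158_basic_element_hash(element, N, block_hash_hex):
    """Ranged hash of one element per BIP158: SipHash keyed by the first 16
    bytes of the block hash, mapped onto [0, N * M) with M = 784931."""
    M = 784931
    bh = bytes.fromhex(block_hash_hex)[::-1]  # RPC hex is byte-reversed
    k0 = int.from_bytes(bh[0:8], "little")
    k1 = int.from_bytes(bh[8:16], "little")
    return (siphash_2_4(k0, k1, element) * (N * M)) >> 64

def find_false_positive(target_spk, genesis_blockhash_hex, max_tries=1 << 32):
    """Brute-force a scriptPubKey whose ranged hash collides with
    target_spk's under this genesis hash (~M tries expected)."""
    target = bip158_basic_element_hash(target_spk, 1, genesis_blockhash_hex)
    for n in range(max_tries):
        candidate = bytes([0x14]) + n.to_bytes(20, "big")
        if candidate != target_spk and \
           bip158_basic_element_hash(candidate, 1, genesis_blockhash_hex) == target:
            return candidate
    return None
```

Since a match only has to land in the same bucket of size M = 784931, the expected number of candidates is well under a million, which is why a simple linear counter suffices.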
```diff
-    if (nLoadWalletRet != DBErrors::LOAD_OK)
-        return nLoadWalletRet;
-
     /* If the CoinJoin salt is not set, try to set a new random hash as the salt */
     if (GetCoinJoinSalt().IsNull() && !SetCoinJoinSalt(GetRandHash())) {
         return DBErrors::LOAD_FAIL;
     }
 
-    return DBErrors::LOAD_OK;
+    return nLoadWalletRet;
```
🟡 Suggestion: LoadWallet now runs CoinJoin salt setup on corrupt/too-new wallets and masks specific error codes
bitcoin#23147 removes the `if (nLoadWalletRet != DBErrors::LOAD_OK) return nLoadWalletRet;` early-return so that `NEED_RESCAN` can flow through. Upstream Bitcoin doesn't have CoinJoin, so removing the early-return was safe there. In Dash's adaptation the salt block sits *after* the (now-removed) early-return, so it now executes for *any* non-OK status (CORRUPT, TOO_NEW, NEED_REWRITE, NONCRITICAL_ERROR, NEED_RESCAN, etc.).
Two concerns:
1. `SetCoinJoinSalt(GetRandHash())` issues a write through `WalletBatch` against a database whose loader just reported it is in a bad state (e.g. CORRUPT or NEED_REWRITE). For NEED_REWRITE this also runs after `Rewrite("\x04pool")` and may interact with rewrite expectations.
2. If `SetCoinJoinSalt` fails, the function returns `DBErrors::LOAD_FAIL`, replacing what may have been the more informative `nLoadWalletRet` (NEED_RESCAN, CORRUPT, TOO_NEW). `CWallet::Create` then takes the generic 'Error loading %s' branch instead of the rescan-warning / corruption-specific recovery paths.
Gate the salt block on a successful-enough load state (e.g. `LOAD_OK || NEED_RESCAN`) or restore the early-return for the statuses that should not proceed.
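The error-code masking is easy to see in a toy model. This is a sketch only, in Python for brevity: the enum values, function names, and the `salt_write_ok` flag are inventions standing in for Dash's actual `LoadWallet` tail and `SetCoinJoinSalt` result, not the shipped code.

```python
from enum import Enum

class DBErrors(Enum):
    LOAD_OK = 0
    NONCRITICAL_ERROR = 1
    TOO_NEW = 2
    LOAD_FAIL = 3
    NEED_REWRITE = 4
    NEED_RESCAN = 5
    CORRUPT = 6

def finish_load_current(n_load_ret, salt_write_ok):
    """Shape of the backported tail of LoadWallet: the salt write runs for
    *any* status, and a failed write masks the original status."""
    if not salt_write_ok:  # SetCoinJoinSalt(GetRandHash()) failed
        return DBErrors.LOAD_FAIL
    return n_load_ret

def finish_load_gated(n_load_ret, salt_write_ok):
    """Suggested gating: only attempt the salt write when the load was
    good enough that writing to the database makes sense."""
    if n_load_ret in (DBErrors.LOAD_OK, DBErrors.NEED_RESCAN):
        if not salt_write_ok:
            return DBErrors.LOAD_FAIL
    return n_load_ret
```

In this model, a CORRUPT wallet whose salt write fails comes back as LOAD_FAIL from the current shape but keeps its CORRUPT status under the gated variant, which is what lets `CWallet::Create` pick the corruption-specific recovery path.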
```cpp
if (start_index->nHeight + amount_per_chunk == block->nHeight || next == nullptr) {
    LogPrint(BCLog::RPC, "Fetching blockfilters from height %d to height %d.\n", start_index->nHeight, block->nHeight);
    if (index->LookupFilterRange(start_index->nHeight, block, filters)) {
        for (const BlockFilter& filter : filters) {
            // compare the elements-set with each filter
            if (filter.GetFilter().MatchAny(needle_set)) {
                blocks.push_back(filter.GetBlockHash().GetHex());
                LogPrint(BCLog::RPC, "scanblocks: found match in %s\n", filter.GetBlockHash().GetHex());
            }
        }
    }
    start_index = block;
```
🟡 Suggestion: scanblocks can append the same matching block twice at 10,000-block chunk boundaries
`BlockFilterIndex::LookupFilterRange()` reads an inclusive range from `start_height` through `stop_index->nHeight` (src/index/blockfilterindex.cpp:455-470, ultimately src/index/blockfilterindex.cpp:377 — `for (int height = start_height; height <= stop_index->nHeight; ++height)`).
At the end of each chunk this loop sets `start_index = block`, so the next chunk re-reads the boundary block. On scans longer than 10,000 blocks, a boundary block whose filter matches will be scanned twice and its hash pushed twice into `relevant_blocks`. The fix is to advance the next chunk's start past the boundary (`start_index = next ? next : block`) so the inclusive ranges don't overlap.
This is upstream behavior from bitcoin#23549 and not a Dash-only regression, but it ships into Dash via this backport and will affect Dash users running long scans.
💡 Suggested change

```diff
             if (start_index->nHeight + amount_per_chunk == block->nHeight || next == nullptr) {
                 LogPrint(BCLog::RPC, "Fetching blockfilters from height %d to height %d.\n", start_index->nHeight, block->nHeight);
                 if (index->LookupFilterRange(start_index->nHeight, block, filters)) {
                     for (const BlockFilter& filter : filters) {
                         // compare the elements-set with each filter
                         if (filter.GetFilter().MatchAny(needle_set)) {
                             blocks.push_back(filter.GetBlockHash().GetHex());
                             LogPrint(BCLog::RPC, "scanblocks: found match in %s\n", filter.GetBlockHash().GetHex());
                         }
                     }
                 }
-                start_index = block;
+                start_index = next ? next : block;

                 // update progress
                 int blocks_processed = block->nHeight - start_block->nHeight;
                 if (total_blocks_to_process > 0) { // avoid division by zero
                     g_scanfilter_progress = (int)(100.0 / total_blocks_to_process * blocks_processed);
                 } else {
                     g_scanfilter_progress = 100;
                 }
                 g_scanfilter_progress_height = block->nHeight;
             }
```
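The overlap is easy to demonstrate with a small simulation of the chunking loop. This is a sketch under the assumptions stated in the finding (heights stand in for block indexes, `LookupFilterRange` reads an inclusive range on both ends, and `next_height is None` plays the role of `next == nullptr` at the tip); it is not the RPC code itself.

```python
def scanned_heights(start, stop, chunk, advance_past_boundary):
    """Simulate scanblocks' chunked scan. Each LookupFilterRange call reads
    the inclusive range [start_index, block]; return every height that gets
    read, duplicates included."""
    scanned = []
    start_index = start
    block = start
    while True:
        next_height = block + 1 if block < stop else None  # next == nullptr at tip
        if block - start_index == chunk or next_height is None:
            scanned.extend(range(start_index, block + 1))  # inclusive lookup
            if advance_past_boundary and next_height is not None:
                start_index = next_height  # start_index = next ? next : block
            else:
                start_index = block        # start_index = block (current code)
        if next_height is None:
            return scanned
        block = next_height

# With the current code, chunk-boundary heights are read twice:
buggy = scanned_heights(0, 25, chunk=10, advance_past_boundary=False)
fixed = scanned_heights(0, 25, chunk=10, advance_past_boundary=True)
```

In this model `buggy` reads heights 10 and 20 twice (once at the end of one chunk, once at the start of the next), while `fixed` reads every height in 0..25 exactly once.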
What was done?

Backports from Bitcoin Core v25, mostly related to `-rescan`, `rescanblock`

How Has This Been Tested?

Run unit & functional tests

Here's a patch to calculate false-positive for `rpc_scanblocks.py`:

Breaking Changes

Rescan startup parameter removed

The `-rescan` startup parameter has been removed. Wallets which require rescanning due to corruption will still be rescanned on startup. Otherwise, please use the `rescanblockchain` RPC to trigger a rescan.

Checklist: