Reject transactions that have already been submitted to the tx pool #946
peterargue wants to merge 23 commits into main from
Conversation
Co-authored-by: Peter Argue <89119817+peterargue@users.noreply.github.com>
📝 Walkthrough
This PR implements Mainnet 27 spork-specific height handling with hardcoded bounds, refactors transaction deduplication from hash-based to per-EOA nonce tracking, removes verbose error logging, introduces experimental feature flags, and improves concurrency handling in transaction pool implementations.
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 3
🧹 Nitpick comments (4)
services/requester/cross-spork_client.go (1)
99-101: Potential integer overflow in sorting comparator.
Converting uint64 heights to int for comparison can overflow on 64-bit systems when heights exceed math.MaxInt64. While current Cadence heights are well below this limit, consider using a safer comparison:
🔎 Proposed fix
```diff
 slices.SortFunc(*s, func(a, b *sporkClient) int {
-	return int(a.firstHeight) - int(b.firstHeight)
+	if a.firstHeight < b.firstHeight {
+		return -1
+	} else if a.firstHeight > b.firstHeight {
+		return 1
+	}
+	return 0
 })
```
services/requester/batch_tx_pool.go (3)
127-195: Lock held during network operations may cause contention.
The txMux is held from line 127 through the entire Add method, including during submitSingleTransaction, which performs network I/O. While submitSingleTransaction uses a goroutine with timeout, the lock isn't released until the method returns.
For high-throughput scenarios, consider restructuring to release the lock before submission:
- Check cache and decide path under lock
- Release lock
- Perform submission
- Re-acquire lock to update metadata
However, this adds complexity and the current 3-second timeout limits worst-case contention. The trade-off may be acceptable given the batching use case.
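For illustration only, a minimal, self-contained sketch of that lock scoping with toy types — pool, seen, and the submit callback are stand-ins, not the actual BatchTxPool API:

```go
package main

import (
	"context"
	"sync"
)

// pool is a toy stand-in for BatchTxPool, used only to illustrate the lock scoping.
type pool struct {
	mu   sync.Mutex
	seen map[uint64]bool
}

func (p *pool) add(ctx context.Context, nonce uint64, submit func(context.Context) error) error {
	p.mu.Lock()
	if p.seen[nonce] { // decide the path while holding the lock
		p.mu.Unlock()
		return nil
	}
	p.mu.Unlock() // release before any network I/O

	if err := submit(ctx); err != nil { // submission runs without the lock
		return err
	}

	p.mu.Lock() // re-acquire only to update metadata
	p.seen[nonce] = true
	p.mu.Unlock()
	return nil
}

func main() {
	p := &pool{seen: map[uint64]bool{}}
	_ = p.add(context.Background(), 1, func(context.Context) error { return nil })
}
```

Note the new race this opens: two callers can pass the duplicate check for the same nonce while the lock is released, which is exactly the extra complexity weighed against the 3-second timeout above.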
339-354: Nonce trimming may not preserve the most relevant nonces.
The trimming at lines 349-351 keeps the last maxTrackedTxNoncesPerEOA nonces by slice position, not by nonce value. If nonces arrive out of order (e.g., concurrent submissions), older nonces might be retained while newer ones are dropped.
Consider sorting before trimming or using a sorted data structure:
🔎 Proposed fix to sort before trimming
```diff
 func (t *BatchTxPool) updateEOAActivityMetadata(
 	from gethCommon.Address,
 	nonce uint64,
 ) {
 	eoaActivity, _ := t.eoaActivityCache.Get(from)
 	eoaActivity.lastSubmission = time.Now()
 	eoaActivity.txNonces = append(eoaActivity.txNonces, nonce)

+	// Sort nonces to ensure we keep the highest values
+	sort.Slice(eoaActivity.txNonces, func(i, j int) bool {
+		return eoaActivity.txNonces[i] < eoaActivity.txNonces[j]
+	})
+
 	if len(eoaActivity.txNonces) > maxTrackedTxNoncesPerEOA {
 		firstKeep := len(eoaActivity.txNonces) - maxTrackedTxNoncesPerEOA
 		eoaActivity.txNonces = eoaActivity.txNonces[firstKeep:]
 	}

 	t.eoaActivityCache.Add(from, eoaActivity)
 }
```
226-230: Retry mechanism for failed batch submissions.
Re-adding failed transactions to the pool provides resilience against transient failures. However, this could cause infinite retries if the failure is persistent.
Consider adding:
- A retry counter per transaction
- Maximum retry limit before dropping
- Exponential backoff between retries
For now, this is acceptable as a first implementation, but monitor for potential retry storms.
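For reference, a rough, self-contained sketch of a bounded retry with exponential backoff; retryWithBackoff and its parameters are illustrative and not part of the existing pool:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries submit up to maxAttempts times, doubling the delay
// between attempts, and drops the work with an error once the budget is spent.
func retryWithBackoff(maxAttempts int, baseDelay time.Duration, submit func() error) error {
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = submit(); err == nil {
			return nil
		}
		time.Sleep(baseDelay * time.Duration(1<<attempt)) // base, 2x base, 4x base, ...
	}
	return fmt.Errorf("dropping transaction after %d attempts: %w", maxAttempts, err)
}

func main() {
	err := retryWithBackoff(3, 100*time.Millisecond, func() error {
		return errors.New("transient failure")
	})
	fmt.Println(err)
}
```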
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- api/utils.go
- cmd/run/cmd.go
- models/errors/errors.go
- services/ingestion/event_subscriber.go
- services/requester/batch_tx_pool.go
- services/requester/cross-spork_client.go
- services/requester/requester.go
- services/requester/single_tx_pool.go
- tests/tx_batching_test.go
💤 Files with no reviewable changes (1)
- api/utils.go
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-10-06T10:14:49.706Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 890
File: services/ingestion/event_subscriber.go:235-239
Timestamp: 2025-10-06T10:14:49.706Z
Learning: In services/ingestion/event_subscriber.go, when reconnecting after disconnect errors (DeadlineExceeded, Internal, Unavailable), the subscription should reconnect at lastReceivedHeight rather than lastReceivedHeight+1. This avoids errors when the next height doesn't exist yet, and duplicate event processing is safe because the ingestion engine is explicitly designed to be idempotent (storage uses batch.Set() which overwrites existing entries).
Applied to files:
services/requester/cross-spork_client.go
services/ingestion/event_subscriber.go
📚 Learning: 2024-10-17T18:04:04.165Z
Learnt from: peterargue
Repo: onflow/flow-evm-gateway PR: 615
File: bootstrap/bootstrap.go:167-197
Timestamp: 2024-10-17T18:04:04.165Z
Learning: In the `flow-evm-gateway` Go project, the validation ensuring that `startHeight` is less than or equal to `endHeight` is performed before the `StartTraceDownloader` method in `bootstrap/bootstrap.go`, so additional checks in this method are unnecessary.
Applied to files:
services/ingestion/event_subscriber.go
📚 Learning: 2025-11-12T13:21:59.080Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 890
File: tests/integration_test.go:692-707
Timestamp: 2025-11-12T13:21:59.080Z
Learning: In the flow-evm-gateway integration tests (tests/integration_test.go), the gateway's RPC server continues to operate independently even after the context passed to bootstrap.Run() is cancelled. Context cancellation affects upstream dependencies (like the Emulator acting as an Access Node) but does not immediately shut down the RPC server, allowing tests to verify backup failover scenarios.
Applied to files:
services/ingestion/event_subscriber.go
tests/tx_batching_test.go
📚 Learning: 2025-06-17T10:29:35.941Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 831
File: services/requester/pool.go:202-214
Timestamp: 2025-06-17T10:29:35.941Z
Learning: In Flow EVM Gateway's transaction pool (services/requester/pool.go), signing keys acquired with keystore.Take() are intentionally held until the transaction is sealed, not released immediately after sending. The key release happens asynchronously when sealed transactions are ingested via event_subscriber.go NotifyTransaction() -> keystore unsafeUnlockKey() -> key.Done(). This pattern ensures transaction integrity by preventing key reuse before transaction finalization.
Applied to files:
services/requester/batch_tx_pool.go
services/requester/single_tx_pool.go
🧬 Code graph analysis (4)
services/requester/cross-spork_client.go (1)
models/errors/errors.go (1)
NewHeightOutOfRangeError(50-52)
services/ingestion/event_subscriber.go (2)
models/events.go (1)
NewBlockEventsError(252-256)
services/requester/cross-spork_client.go (1)
HardcodedMainnet27LastHeight(26-26)
tests/tx_batching_test.go (1)
config/config.go (2)
TxStateValidation(37-37)
LocalIndexValidation(40-40)
services/requester/single_tx_pool.go (2)
services/requester/tx_pool.go (1)
TxPool(16-18)
models/transaction.go (1)
DeriveTxSender(373-387)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Test
🔇 Additional comments (17)
services/requester/requester.go (1)
240-245: Logging field name consistency improvement.
The rename from evm-id to evm_tx aligns with the broader logging standardization in this PR, improving Grafana compatibility by using underscores instead of dashes.
services/ingestion/event_subscriber.go (2)
137-148: Verify the equality check for stopAtHardcodedLastHeight.
The flag is set when latestHeight == HardcodedMainnet27LastHeight. This works because GetLatestHeightForSpork clamps the height to the hardcoded value for Mainnet 27. However, if the AN is behind and reports a lower height, this check would return false, allowing blocks past the limit to be processed later.
Consider using IsMainnet27Client() directly for a more robust check:
```go
stopAtHardcodedLastHeight := r.client.IsMainnet27Client()
```
This would ensure the guard is always active for Mainnet 27 clients, regardless of the current reported height.
199-201: Graceful handling of blocks beyond the hardcoded limit.
Using continue instead of returning prevents the gateway from crashing while still ignoring blocks past the Mainnet 27 boundary. This is the correct approach for maintaining service availability during the spork transition.
services/requester/single_tx_pool.go (4)
42-42: Idiomatic interface assertion.
Using (*SingleTxPool)(nil) is more memory-efficient than &SingleTxPool{} for compile-time interface satisfaction checks.
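For context, the idiom in isolation looks like this; TxPool and SingleTxPool below are simplified stand-ins for the real types:

```go
package main

// TxPool is a trimmed-down stand-in for the real interface.
type TxPool interface {
	Add(tx string) error
}

type SingleTxPool struct{}

func (p *SingleTxPool) Add(tx string) error { return nil }

// Compile-time check that *SingleTxPool satisfies TxPool without constructing
// a value; the typed nil is discarded, so nothing is allocated.
var _ TxPool = (*SingleTxPool)(nil)

func main() {}
```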
98-101: Good refactor: Early sender derivation for better error context.
Deriving the sender upfront allows subsequent error logs to include the EOA address, improving observability when debugging transaction submission failures.
206-213: Clear documentation of thread-safety model.
The comment accurately explains why mutex protection isn't needed for keystore.Take(): the channel-based distribution ensures no two goroutines receive the same key simultaneously. Based on learnings, keys are intentionally held until transaction sealing.
167-170: Consistent log field naming.
The rename to flow_tx and evm_tx improves Grafana compatibility and aligns with the logging standardization across the codebase.
services/requester/cross-spork_client.go (3)
188-193: IsMainnet27Client() range check may need adjustment.
The check currentSporkFirstHeight >= HardcodedMainnet27SporkRootHeight && currentSporkFirstHeight <= HardcodedMainnet27LastHeight correctly identifies Mainnet 27 clients. However, the comment notes that currentSporkFirstHeight is the node's root block, not the spork root block, which could differ for late-bootstrapped nodes.
72-83: Automatic height adjustment with clear logging.
Good defensive coding: when the access node returns an unexpected height for Mainnet 27, the code logs a warning and uses the hardcoded value. This ensures consistent behavior across different AN states.
46-49: Guard rejects requests beyond spork boundary.
The simplified check correctly rejects requests where endHeight > lastHeight, returning a clear error. This prevents attempting to fetch events from heights that don't exist in the spork.
tests/tx_batching_test.go (3)
569-579: Correct event count verification.
The test correctly verifies that only 6 unique transactions (nonces 0-5) were executed despite submitting 24 transactions (8 nonces × 3 goroutines with overlapping nonces 2,3). This validates the duplicate nonce rejection logic.
632-634: Configuration aligns with production settings.
Using LocalIndexValidation and a 2.5-second batch interval matches the mainnet configuration, ensuring tests validate realistic behavior.
645-647: Sleep for indexing catch-up is pragmatic for tests.
While fixed sleeps are generally a code smell, this 2-second delay allows the gateway to index initial blocks before tests begin. This is acceptable for integration tests where deterministic synchronization would add significant complexity.
services/requester/batch_tx_pool.go (3)
283-337: Goroutine with timeout pattern for single transaction submission.
The implementation spawns a goroutine to avoid blocking the caller beyond the 3-second timeout. However, if the context times out, the goroutine continues running and may complete the submission after the caller has already received an error.
This could lead to:
- Transaction being submitted but caller receiving timeout error
- Metrics being updated after error is returned
Consider propagating the context cancellation to abort the submission:
```go
if err := t.client.SendTransaction(ctx, *flowTx); err != nil {
```
The ctx passed here already has the timeout, so SendTransaction should respect it. The pattern is mostly fine, but be aware of the fire-and-forget nature if timeout occurs.
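To make the fire-and-forget caveat concrete, here is a minimal sketch of the submit-with-timeout pattern, with a dummy send function standing in for client.SendTransaction:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// submitWithTimeout bounds how long the caller waits, but the spawned goroutine
// may still finish the submission after the timeout fires.
func submitWithTimeout(parent context.Context, send func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(parent, 3*time.Second)
	defer cancel()

	done := make(chan error, 1) // buffered so the goroutine never blocks after a timeout
	go func() { done <- send(ctx) }()

	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		return ctx.Err() // caller sees a timeout even if the send later succeeds
	}
}

func main() {
	err := submitWithTimeout(context.Background(), func(ctx context.Context) error {
		select {
		case <-time.After(5 * time.Second): // slower than the 3s budget
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	})
	fmt.Println(err) // context deadline exceeded
}
```

Passing ctx into the send call, as the existing code does with SendTransaction, is what lets a well-behaved client abort the work instead of completing it silently after the caller has given up.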
130-142: Silent skip of duplicate nonces with informative logging.
The duplicate detection using slices.Contains on tracked nonces correctly prevents re-submission of transactions with the same nonce. Returning nil instead of an error is a reasonable UX choice: clients don't need to handle duplicate submission errors.
The info-level log provides observability without cluttering error logs.
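A stripped-down sketch of that duplicate check; alreadySubmitted is a hypothetical helper, while the real code keeps the tracked nonces in a per-EOA LRU cache:

```go
package main

import (
	"fmt"
	"slices"
)

// alreadySubmitted reports whether the nonce appears in the recently tracked set.
func alreadySubmitted(trackedNonces []uint64, nonce uint64) bool {
	return slices.Contains(trackedNonces, nonce)
}

func main() {
	tracked := []uint64{3, 4, 5}
	fmt.Println(alreadySubmitted(tracked, 4)) // true  -> skip silently, return nil
	fmt.Println(alreadySubmitted(tracked, 6)) // false -> submit and record the nonce
}
```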
24-27: Reasonable defaults for nonce tracking.
maxTrackedTxNoncesPerEOA = 30 provides a good balance between memory usage and detection window. Combined with the LRU cache TTL from config, this should handle typical transaction patterns.
models/errors/errors.go (1)
29-32: Removal of ErrDuplicateTransaction aligns with new silent-skip behavior.
The batch pool implementation (lines 133-142 in batch_tx_pool.go) skips transactions with duplicate nonces and returns nil instead of raising an error. The PR shifts from rejecting duplicates with an error to silently skipping them, simplifying the API behavior for clients. No references to ErrDuplicateTransaction remain in the codebase.
```go
// soft finality specific flags. include them as disabled to allow operators to run this image without any config changes
var experimentalSoftFinalityEnabled, experimentalSealingVerificationEnabled bool
Cmd.Flags().BoolVar(&experimentalSoftFinalityEnabled, "experimental-soft-finality-enabled", false, "feature disabled in this release")
Cmd.Flags().BoolVar(&experimentalSealingVerificationEnabled, "experimental-sealing-verification-enabled", false, "feature disabled in this release")
_ = Cmd.Flags().MarkHidden("experimental-soft-finality-enabled")
_ = Cmd.Flags().MarkHidden("experimental-sealing-verification-enabled")
```
🛠️ Refactor suggestion | 🟠 Major
Handle errors from MarkHidden() consistently with MarkDeprecated().
Lines 300-301 ignore errors from MarkHidden(), while lines 303-306 check errors from MarkDeprecated() and panic if present. Both methods can fail (e.g., if the flag doesn't exist), so error handling should be consistent to catch initialization issues early.
🔎 Proposed fix for consistent error handling
```diff
-_ = Cmd.Flags().MarkHidden("experimental-soft-finality-enabled")
-_ = Cmd.Flags().MarkHidden("experimental-sealing-verification-enabled")
+if err := Cmd.Flags().MarkHidden("experimental-soft-finality-enabled"); err != nil {
+	panic(err)
+}
+if err := Cmd.Flags().MarkHidden("experimental-sealing-verification-enabled"); err != nil {
+	panic(err)
+}
```
🤖 Prompt for AI Agents
In cmd/run/cmd.go around lines 296 to 301, the calls to
Cmd.Flags().MarkHidden(...) ignore returned errors while later
MarkDeprecated(...) errors are checked and panic on failure; make error handling
consistent by capturing the error return values from MarkHidden and panic (or
log + panic) if non-nil. Replace the ignored assignments with error checks
(e.g., if err := Cmd.Flags().MarkHidden("experimental-soft-finality-enabled");
err != nil { panic(fmt.Errorf("failed to mark flag hidden: %w", err)) }) and do
the same for "experimental-sealing-verification-enabled" so initialization fails
fast if the flags are missing or the operation errors.
```go
const (
	// The following are hardcoded heights used to ensure gateways ignore blocks after the hard fork
	// height even if they are returned by the upstream access node.

	// HardcodedMainnet27SporkRootHeight is the spork root block height (cadence) for the Mainnet 27 network.
	HardcodedMainnet27SporkRootHeight = 130290659

	// HardcodedMainnet27LastHeight is the hardcoded last block height (cadence) for the Mainnet 27 network.
	HardcodedMainnet27LastHeight = 137363395 // TODO: confirm final height when ready
)
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Find all references to the Mainnet 27 constant
rg -n "HardcodedMainnet27LastHeight|137363395" --type go
```
Repository: onflow/flow-evm-gateway
Length of output: 1557
🏁 Script executed:
```bash
#!/bin/bash
# Check the actual file context around lines 18-27
cat -n services/requester/cross-spork_client.go | head -40
```
Repository: onflow/flow-evm-gateway
Length of output: 1621
🏁 Script executed:
```bash
#!/bin/bash
# Search for other similar hardcoded values for other sporks to understand the pattern
rg -n "Hardcoded.*Height|LastHeight" --type go | head -20
```
Repository: onflow/flow-evm-gateway
Length of output: 2272
🏁 Script executed:
```bash
#!/bin/bash
# Check for related issues or release notes
rg -n "137363395|Mainnet 27.*final.*height|confirm.*height" --type go --type md | head -20
```
Repository: onflow/flow-evm-gateway
Length of output: 193
🏁 Script executed:
```bash
#!/bin/bash
# Check commit history or PR references to this constant
git log --all --oneline --grep="137363395\|Mainnet 27" 2>/dev/null | head -10
git log --all --oneline -- services/requester/cross-spork_client.go 2>/dev/null | head -10
```
Repository: onflow/flow-evm-gateway
Length of output: 161
🏁 Script executed:
```bash
#!/bin/bash
# Look for any TODO tracking files or release documentation
find . -type f \( -name "*.md" -o -name "TODO*" -o -name "CHANGELOG*" -o -name "RELEASE*" \) -exec grep -l "Mainnet 27\|137363395" {} \;
```
Repository: onflow/flow-evm-gateway
Length of output: 49
Confirm the hardcoded Mainnet 27 last height (137363395) before release.
The TODO at line 26 remains unresolved. This constant is used to clamp block heights across multiple critical systems (cross-spork client and event subscriber), and an incorrect value will cause the gateway to either process blocks it shouldn't or stop processing prematurely.
🤖 Prompt for AI Agents
In services/requester/cross-spork_client.go around lines 18 to 27, the
HardcodedMainnet27LastHeight constant is left with a TODO and may be incorrect;
confirm the true Mainnet 27 final block height from an authoritative source
(official chain upgrade docs or the upstream access node's finalized block
height), update HardcodedMainnet27LastHeight to that confirmed value, remove the
TODO, and add a one-line comment referencing the authoritative source and date
checked (or move the value into a configuration/env variable if this may change)
so downstream systems use the correct clamping height.
```diff
-	txHash, err := rpcTester.sendRawTx(signed)
-	require.NoError(t, err)
-	hashes = append(hashes, txHash)
-
-	// Increment nonce for the duplicate test transactions that follow
-	nonce += 1
-	dupSigned, _, err := evmSign(big.NewInt(10), 15_000_000, eoaKey, nonce, &testAddr, nil)
-	require.NoError(t, err)
-
-	// Submit 5 identical transactions to test duplicate detection:
-	// the first should succeed, the rest should be rejected as duplicates.
-	for i := range 5 {
-		if i == 0 {
-			txHash, err := rpcTester.sendRawTx(dupSigned)
-			require.NoError(t, err)
-			hashes = append(hashes, txHash)
-		} else {
-			_, err := rpcTester.sendRawTx(dupSigned)
-			require.Error(t, err)
-			require.ErrorContains(t, err, "invalid: transaction already in pool")
-		}
-	}
+	g := errgroup.Group{}
+
+	startBlock, err := emu.GetLatestBlock()
+	require.NoError(t, err)
+
+	hashes := []common.Hash{}
+	// transfer some funds to the test address
+	transferAmount := int64(1_000_000_000)
+
+	for range 3 {
+		g.Go(func() error {
+			for _, nonce := range nonces {
+				signed, _, err := evmSign(big.NewInt(transferAmount), 23_500, eoaKey, nonce, &testAddr, nil)
+				require.NoError(t, err)
+
+				txHash, err := rpcTester.sendRawTx(signed)
+				require.NoError(t, err)
+				hashes = append(hashes, txHash)
+			}
+
+			return nil
+		})
+	}
```
Data race: concurrent writes to shared hashes slice.
Multiple goroutines append to the shared hashes slice at line 548 without synchronization. While the slice isn't used in assertions (only expectedBalance is checked), this is still a data race that can cause undefined behavior.
🔎 Proposed fix using a mutex
```diff
+var hashesMux sync.Mutex
 hashes := []common.Hash{}
 // ...
 for range 3 {
 	g.Go(func() error {
 		for _, nonce := range nonces {
 			// ...
 			txHash, err := rpcTester.sendRawTx(signed)
 			require.NoError(t, err)
+			hashesMux.Lock()
 			hashes = append(hashes, txHash)
+			hashesMux.Unlock()
 		}
 		return nil
 	})
 }
```
Alternatively, if hashes isn't needed for assertions, remove it entirely to avoid the race.
🤖 Prompt for AI Agents
In tests/tx_batching_test.go around lines 531 to 553 there is a data race:
multiple goroutines append to the shared `hashes` slice without synchronization.
Fix by either removing `hashes` entirely if its values are unused, or protect
access with a mutex (or a channel) so that each append is performed under
lock/serialized; ensure any reads happen after the goroutines finish
(errgroup.Wait) to avoid concurrent access.
If I'm not mistaken, the title of this PR is incorrect and the PR it's porting has been closed. Do we want to close this one too?
Closes: #???
Description
Backports #945 from v0.4.3
For contributor use:
- Targeted PR against master branch
- Re-reviewed Files changed in the Github PR explorer
Summary by CodeRabbit
New Features
Improvements