
Reject transactions that have already been submitted to the tx pool #946

Open

peterargue wants to merge 23 commits into main from peter/mainnet27-hardcoded-latest-block-v1.4.3

Conversation

@peterargue
Contributor

@peterargue commented Dec 28, 2025

Closes: #???

Description

Backports #945 from v0.4.3


For contributor use:

  • Targeted PR against master branch
  • Linked to GitHub issue with discussion and accepted design, OR link to spec that describes this work.
  • Code follows the standards mentioned here.
  • Updated relevant documentation
  • Re-reviewed Files changed in the GitHub PR explorer
  • Added appropriate labels

Summary by CodeRabbit

  • New Features

    • Added Mainnet 27 spork support with proper height boundary management
    • Enhanced transaction batching with nonce tracking for improved duplicate prevention
  • Improvements

    • Better error reporting in transaction submission with enhanced context information


@coderabbitai
Contributor

coderabbitai Bot commented Dec 28, 2025

📝 Walkthrough


This PR implements Mainnet 27 spork-specific height handling with hardcoded bounds, refactors transaction deduplication from hash-based to per-EOA nonce tracking, removes verbose error logging, introduces experimental feature flags, and improves concurrency handling in transaction pool implementations.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Mainnet 27 Spork Bounds**<br>`services/requester/cross-spork_client.go`, `services/ingestion/event_subscriber.go` | Added hardcoded Mainnet 27 spork root and last height constants. Implemented height clamping logic in `CrossSporkClient` to reject/clamp requests beyond `HardcodedMainnet27LastHeight`. The event subscriber now checks height limits and skips processing blocks exceeding the hardcoded maximum. |
| **Transaction Batching & Deduplication**<br>`services/requester/batch_tx_pool.go`, `models/errors/errors.go` | Replaced hash-based deduplication with per-EOA nonce tracking via the new `eoaActivityMetadata` struct. Removed the `txHash` field from `pooledEvmTx` and added nonce tracking. Introduced the `maxTrackedTxNoncesPerEOA` bound and `eoaActivityCache`. Removed the public `ErrDuplicateTransaction` error (no longer needed with nonce-based dedup). |
| **Transaction Pool Synchronization**<br>`services/requester/single_tx_pool.go` | Removed the mutex locking mechanism from `SingleTxPool`; keystore access is now an inline `Take()` call (no concurrency protection needed due to channel-based key distribution). Enhanced error logging with sender context and renamed log fields from `flow-id`/`evm-id` to `flow_tx`/`evm_tx`. Eliminated the `fetchSigningAccountKey` helper. |
| **Logging & Debugging**<br>`api/utils.go`, `services/requester/requester.go` | Removed verbose error logging in `resolveBlockTag` error paths. Changed the `SendRawTransaction` log key from `evm-id` to `evm_tx` for consistency. |
| **Experimental Features**<br>`cmd/run/cmd.go` | Added two hidden experimental flags, `--experimental-soft-finality-enabled` and `--experimental-sealing-verification-enabled`, both defaulting to false. The flags are not yet wired to runtime logic. |
| **Test Updates**<br>`tests/tx_batching_test.go` | Renamed `Test_MultipleTransactionSubmissionsWithDuplicates` to `Test_TransactionSubmissionWithPreviouslySubmittedTransactions`. Changed `TxStateValidation` to `LocalIndexValidation` and `TxBatchInterval` from 2s to 2.5s. Refactored the test flow to use an errgroup for concurrent nonce-based submission with balance- and event-level verification. |
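For orientation, here is a minimal self-contained sketch of the per-EOA tracking shape described above. The names `eoaActivityMetadata`, `txNonces`, `lastSubmission`, and `maxTrackedTxNoncesPerEOA` come from the change summary; the plain-map cache and the `seenNonce` helper are illustrative stand-ins (the real code uses an LRU cache):

```go
package requester

import (
	"time"

	gethCommon "github.com/ethereum/go-ethereum/common"
)

// Bound on how many recent nonces are remembered per sender (value from the summary).
const maxTrackedTxNoncesPerEOA = 30

// eoaActivityMetadata records recent activity for a single EOA so the pool
// can recognize a nonce it has already accepted.
type eoaActivityMetadata struct {
	lastSubmission time.Time
	txNonces       []uint64
}

// seenNonce reports whether a nonce was already accepted for `from`.
// The real implementation uses an LRU cache rather than a plain map.
func seenNonce(cache map[gethCommon.Address]eoaActivityMetadata, from gethCommon.Address, nonce uint64) bool {
	meta, ok := cache[from]
	if !ok {
		return false
	}
	for _, n := range meta.txNonces {
		if n == nonce {
			return true
		}
	}
	return false
}
```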

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested labels

Bugfix, EVM

Suggested reviewers

  • zhangchiqing
  • m-Peter
  • janezpodhostnik

Poem

🐰 A spork is split with heights now bound,
Nonces tracked where hashes drowned,
Mutexes shed, and logs refined,
Transaction batches better aligned! ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Run `@coderabbitai generate docstrings` to improve docstring coverage. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The PR title clearly and accurately summarizes the main objective: rejecting/skipping duplicate transactions based on nonce tracking in the tx pool, which is the primary focus across multiple files (`batch_tx_pool.go`, `single_tx_pool.go`, and the test changes). |

Contributor

@coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (4)
services/requester/cross-spork_client.go (1)

99-101: Potential integer overflow in sorting comparator.

Converting uint64 heights to int for comparison can overflow on 64-bit systems when heights exceed math.MaxInt64. While current Cadence heights are well below this limit, consider using a safer comparison:

🔎 Proposed fix

```diff
 slices.SortFunc(*s, func(a, b *sporkClient) int {
-    return int(a.firstHeight) - int(b.firstHeight)
+    if a.firstHeight < b.firstHeight {
+        return -1
+    } else if a.firstHeight > b.firstHeight {
+        return 1
+    }
+    return 0
 })
```
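On Go 1.21+, the standard library offers an equivalent one-liner via `cmp.Compare`, which is defined for all ordered types including `uint64`; a sketch against the same `sporkClient` type:

```go
import (
	"cmp"
	"slices"
)

slices.SortFunc(*s, func(a, b *sporkClient) int {
	return cmp.Compare(a.firstHeight, b.firstHeight) // -1, 0, or 1; no overflow
})
```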
services/requester/batch_tx_pool.go (3)

127-195: Lock held during network operations may cause contention.

The txMux is held from line 127 through the entire Add method, including during submitSingleTransaction which performs network I/O. While submitSingleTransaction uses a goroutine with timeout, the lock isn't released until the method returns.

For high-throughput scenarios, consider restructuring to release the lock before submission:

  1. Check cache and decide path under lock
  2. Release lock
  3. Perform submission
  4. Re-acquire lock to update metadata

However, this adds complexity, and the current 3-second timeout limits worst-case contention. The trade-off may be acceptable given the batching use case; a sketch of the restructure follows.
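A rough sketch of that restructuring; the method shape and the `alreadySeen`/`recordNonce` helpers are hypothetical, standing in for whatever the pool's actual cache accessors are:

```go
func (t *BatchTxPool) addWithShortLock(ctx context.Context, tx pooledEvmTx, from gethCommon.Address, nonce uint64) error {
	// 1. Check the cache and decide the path under the lock.
	t.txMux.Lock()
	if t.alreadySeen(from, nonce) {
		t.txMux.Unlock()
		return nil // duplicate nonce: skip silently
	}
	// 2. Release the lock before any network I/O.
	t.txMux.Unlock()

	// 3. Perform the submission without holding the lock.
	if err := t.submitSingleTransaction(ctx, tx); err != nil {
		return err
	}

	// 4. Re-acquire the lock only to update metadata.
	t.txMux.Lock()
	t.recordNonce(from, nonce)
	t.txMux.Unlock()
	return nil
}
```

Note that this opens a small window in which two concurrent calls with the same nonce can both pass step 1; the metadata update would need to tolerate that, which is part of the added complexity mentioned above.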


339-354: Nonce trimming may not preserve the most relevant nonces.

The trimming at lines 349-351 keeps the last maxTrackedTxNoncesPerEOA nonces by slice position, not by nonce value. If nonces arrive out of order (e.g., concurrent submissions), older nonces might be retained while newer ones are dropped.

Consider sorting before trimming or using a sorted data structure:

🔎 Proposed fix to sort before trimming

```diff
 func (t *BatchTxPool) updateEOAActivityMetadata(
     from gethCommon.Address,
     nonce uint64,
 ) {
     eoaActivity, _ := t.eoaActivityCache.Get(from)
     eoaActivity.lastSubmission = time.Now()
     eoaActivity.txNonces = append(eoaActivity.txNonces, nonce)
+    // Sort nonces to ensure we keep the highest values
+    sort.Slice(eoaActivity.txNonces, func(i, j int) bool {
+        return eoaActivity.txNonces[i] < eoaActivity.txNonces[j]
+    })
     if len(eoaActivity.txNonces) > maxTrackedTxNoncesPerEOA {
         firstKeep := len(eoaActivity.txNonces) - maxTrackedTxNoncesPerEOA
         eoaActivity.txNonces = eoaActivity.txNonces[firstKeep:]
     }
     t.eoaActivityCache.Add(from, eoaActivity)
 }
```
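On Go 1.21+, `slices.Sort` gives the same result without the closure:

```go
slices.Sort(eoaActivity.txNonces) // ascending, so the trim below keeps the highest nonces
```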

226-230: Retry mechanism for failed batch submissions.

Re-adding failed transactions to the pool provides resilience against transient failures. However, this could cause infinite retries if the failure is persistent.

Consider adding (a sketch follows the list):

  1. A retry counter per transaction
  2. Maximum retry limit before dropping
  3. Exponential backoff between retries

For now, this is acceptable as a first implementation, but monitor for potential retry storms.
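To make the suggestion concrete, here is a sketch of bounded retries with backoff; `pooledEvmTx` is the type named in the summary, while the retry fields, limits, and `readd` callback are invented for illustration:

```go
const (
	maxTxRetries  = 5                      // 2. maximum retry limit before dropping
	baseRetryWait = 500 * time.Millisecond // 3. base for exponential backoff
)

type retriableTx struct {
	tx       pooledEvmTx
	attempts int // 1. retry counter per transaction
}

// requeue re-adds a failed transaction with exponential backoff, and drops it
// once the retry budget is spent so persistent failures cannot retry forever.
func requeue(rt *retriableTx, readd func(pooledEvmTx)) bool {
	rt.attempts++
	if rt.attempts > maxTxRetries {
		return false // dropped
	}
	time.Sleep(baseRetryWait << (rt.attempts - 1)) // 0.5s, 1s, 2s, 4s, 8s
	readd(rt.tx)
	return true
}
```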

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e9c15c and 3586cce.

📒 Files selected for processing (9)
  • api/utils.go
  • cmd/run/cmd.go
  • models/errors/errors.go
  • services/ingestion/event_subscriber.go
  • services/requester/batch_tx_pool.go
  • services/requester/cross-spork_client.go
  • services/requester/requester.go
  • services/requester/single_tx_pool.go
  • tests/tx_batching_test.go
💤 Files with no reviewable changes (1)
  • api/utils.go
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-10-06T10:14:49.706Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 890
File: services/ingestion/event_subscriber.go:235-239
Timestamp: 2025-10-06T10:14:49.706Z
Learning: In services/ingestion/event_subscriber.go, when reconnecting after disconnect errors (DeadlineExceeded, Internal, Unavailable), the subscription should reconnect at lastReceivedHeight rather than lastReceivedHeight+1. This avoids errors when the next height doesn't exist yet, and duplicate event processing is safe because the ingestion engine is explicitly designed to be idempotent (storage uses batch.Set() which overwrites existing entries).

Applied to files:

  • services/requester/cross-spork_client.go
  • services/ingestion/event_subscriber.go
📚 Learning: 2024-10-17T18:04:04.165Z
Learnt from: peterargue
Repo: onflow/flow-evm-gateway PR: 615
File: bootstrap/bootstrap.go:167-197
Timestamp: 2024-10-17T18:04:04.165Z
Learning: In the `flow-evm-gateway` Go project, the validation ensuring that `startHeight` is less than or equal to `endHeight` is performed before the `StartTraceDownloader` method in `bootstrap/bootstrap.go`, so additional checks in this method are unnecessary.

Applied to files:

  • services/ingestion/event_subscriber.go
📚 Learning: 2025-11-12T13:21:59.080Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 890
File: tests/integration_test.go:692-707
Timestamp: 2025-11-12T13:21:59.080Z
Learning: In the flow-evm-gateway integration tests (tests/integration_test.go), the gateway's RPC server continues to operate independently even after the context passed to bootstrap.Run() is cancelled. Context cancellation affects upstream dependencies (like the Emulator acting as an Access Node) but does not immediately shut down the RPC server, allowing tests to verify backup failover scenarios.

Applied to files:

  • services/ingestion/event_subscriber.go
  • tests/tx_batching_test.go
📚 Learning: 2025-06-17T10:29:35.941Z
Learnt from: m-Peter
Repo: onflow/flow-evm-gateway PR: 831
File: services/requester/pool.go:202-214
Timestamp: 2025-06-17T10:29:35.941Z
Learning: In Flow EVM Gateway's transaction pool (services/requester/pool.go), signing keys acquired with keystore.Take() are intentionally held until the transaction is sealed, not released immediately after sending. The key release happens asynchronously when sealed transactions are ingested via event_subscriber.go NotifyTransaction() -> keystore unsafeUnlockKey() -> key.Done(). This pattern ensures transaction integrity by preventing key reuse before transaction finalization.

Applied to files:

  • services/requester/batch_tx_pool.go
  • services/requester/single_tx_pool.go
🧬 Code graph analysis (4)
services/requester/cross-spork_client.go (1)
models/errors/errors.go (1)
  • NewHeightOutOfRangeError (50-52)
services/ingestion/event_subscriber.go (2)
models/events.go (1)
  • NewBlockEventsError (252-256)
services/requester/cross-spork_client.go (1)
  • HardcodedMainnet27LastHeight (26-26)
tests/tx_batching_test.go (1)
config/config.go (2)
  • TxStateValidation (37-37)
  • LocalIndexValidation (40-40)
services/requester/single_tx_pool.go (2)
services/requester/tx_pool.go (1)
  • TxPool (16-18)
models/transaction.go (1)
  • DeriveTxSender (373-387)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Test
🔇 Additional comments (17)
services/requester/requester.go (1)

240-245: Logging field name consistency improvement.

The rename from evm-id to evm_tx aligns with the broader logging standardization in this PR, improving Grafana compatibility by using underscores instead of dashes.

services/ingestion/event_subscriber.go (2)

137-148: Verify the equality check for stopAtHardcodedLastHeight.

The flag is set when latestHeight == HardcodedMainnet27LastHeight. This works because GetLatestHeightForSpork clamps the height to the hardcoded value for Mainnet 27. However, if the AN is behind and reports a lower height, this check would return false, allowing blocks past the limit to be processed later.

Consider using IsMainnet27Client() directly for a more robust check:

```go
stopAtHardcodedLastHeight := r.client.IsMainnet27Client()
```

This would ensure the guard is always active for Mainnet 27 clients, regardless of the currently reported height.


199-201: Graceful handling of blocks beyond the hardcoded limit.

Using continue instead of returning prevents the gateway from crashing while still ignoring blocks past the Mainnet 27 boundary. This is the correct approach for maintaining service availability during the spork transition.

services/requester/single_tx_pool.go (4)

42-42: Idiomatic interface assertion.

Using (*SingleTxPool)(nil) is more memory-efficient than &SingleTxPool{} for compile-time interface satisfaction checks.


98-101: Good refactor: Early sender derivation for better error context.

Deriving the sender upfront allows subsequent error logs to include the EOA address, improving observability when debugging transaction submission failures.


206-213: Clear documentation of thread-safety model.

The comment accurately explains why mutex protection isn't needed for keystore.Take() - the channel-based distribution ensures no two goroutines receive the same key simultaneously. Based on learnings, keys are intentionally held until transaction sealing.
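As an illustration of that model, here is a toy channel-based keystore; the type and method names are assumptions, not the gateway's actual API:

```go
type accountKey struct{ /* signing key material */ }

type keystore struct {
	available chan *accountKey // holds every currently free signing key
}

// Take blocks until a key is free. The channel receive is the synchronization:
// a key sent once can be received by exactly one goroutine, so no mutex is needed.
func (k *keystore) Take() *accountKey {
	return <-k.available
}

// Done returns a key to the pool; per the learning above, in the gateway this
// happens only after the key's transaction is sealed.
func (k *keystore) Done(key *accountKey) {
	k.available <- key
}
```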


167-170: Consistent log field naming.

The rename to flow_tx and evm_tx improves Grafana compatibility and aligns with the logging standardization across the codebase.

services/requester/cross-spork_client.go (3)

188-193: IsMainnet27Client() range check may need adjustment.

The check currentSporkFirstHeight >= HardcodedMainnet27SporkRootHeight && currentSporkFirstHeight <= HardcodedMainnet27LastHeight correctly identifies Mainnet 27 clients. However, the comment notes currentSporkFirstHeight is the node's root block, not spork root block, which could differ for late-bootstrapped nodes.


72-83: Automatic height adjustment with clear logging.

Good defensive coding - when the access node returns an unexpected height for Mainnet 27, the code logs a warning and uses the hardcoded value. This ensures consistent behavior across different AN states.


46-49: Guard rejects requests beyond spork boundary.

The simplified check correctly rejects requests where endHeight > lastHeight, returning a clear error. This prevents attempting to fetch events from heights that don't exist in the spork.

tests/tx_batching_test.go (3)

569-579: Correct event count verification.

The test correctly verifies that only 6 unique transactions (nonces 0-5) were executed despite submitting 24 transactions (8 nonces × 3 goroutines with overlapping nonces 2,3). This validates the duplicate nonce rejection logic.


632-634: Configuration aligns with production settings.

Using LocalIndexValidation and a 2.5-second batch interval matches the mainnet configuration, ensuring tests validate realistic behavior.


645-647: Sleep for indexing catch-up is pragmatic for tests.

While fixed sleeps are generally a code smell, this 2-second delay allows the gateway to index initial blocks before tests begin. This is acceptable for integration tests where deterministic synchronization would add significant complexity.

services/requester/batch_tx_pool.go (3)

283-337: Goroutine with timeout pattern for single transaction submission.

The implementation spawns a goroutine to avoid blocking the caller beyond the 3-second timeout. However, if the context times out, the goroutine continues running and may complete the submission after the caller has already received an error.

This could lead to:

  1. Transaction being submitted but caller receiving timeout error
  2. Metrics being updated after error is returned

Consider propagating the context cancellation to abort the submission:

```go
if err := t.client.SendTransaction(ctx, *flowTx); err != nil {
```

The ctx passed here already carries the timeout, so SendTransaction should respect it. The pattern is mostly fine, but be aware of the fire-and-forget nature if the timeout fires.
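For reference, the general shape of that timeout pattern, with `send` standing in for the actual submission call:

```go
import (
	"context"
	"time"
)

func submitWithTimeout(parent context.Context, send func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(parent, 3*time.Second)
	defer cancel()

	done := make(chan error, 1) // buffered, so the goroutine never blocks after a timeout
	go func() { done <- send(ctx) }()

	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		// Fire-and-forget hazard: if send ignores ctx, the submission may
		// still land after the caller has already received this error.
		return ctx.Err()
	}
}
```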


130-142: Silent skip of duplicate nonces with informative logging.

The duplicate detection using slices.Contains on tracked nonces correctly prevents re-submission of transactions with the same nonce. Returning nil instead of an error is a reasonable UX choice - clients don't need to handle duplicate submission errors.

The info-level log provides observability without cluttering error logs.
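A minimal sketch of the described check, assuming the surrounding names (`eoaActivity`, `from`, a zerolog-style logger) and a `slices` import:

```go
if slices.Contains(eoaActivity.txNonces, tx.Nonce()) {
	t.logger.Info().
		Str("from", from.Hex()).
		Uint64("nonce", tx.Nonce()).
		Msg("transaction with already-seen nonce skipped")
	return nil // silent skip: the client sees success rather than an error
}
```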


24-27: Reasonable defaults for nonce tracking.

maxTrackedTxNoncesPerEOA = 30 provides a good balance between memory usage and detection window. Combined with the LRU cache TTL from config, this should handle typical transaction patterns.
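As one way such a cache could be wired, a sketch using hashicorp/golang-lru's expirable LRU; the dependency, size, and TTL here are assumptions for illustration, not the repo's actual configuration:

```go
import (
	"time"

	gethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hashicorp/golang-lru/v2/expirable"
)

// newEOAActivityCache bounds memory two ways: LRU eviction past the size cap,
// and TTL expiry of senders that have been idle longer than the window.
func newEOAActivityCache() *expirable.LRU[gethCommon.Address, eoaActivityMetadata] {
	return expirable.NewLRU[gethCommon.Address, eoaActivityMetadata](10_000, nil, 10*time.Minute)
}
```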

models/errors/errors.go (1)

29-32: Removal of ErrDuplicateTransaction aligns with new silent-skip behavior.

The batch pool implementation (lines 133-142 in batch_tx_pool.go) skips transactions with duplicate nonces and returns nil instead of raising an error. The PR shifts from rejecting duplicates with an error to silently skipping them, simplifying the API behavior for clients. No references to ErrDuplicateTransaction remain in the codebase.

Comment thread cmd/run/cmd.go
Comment on lines +296 to +301
```go
// soft finality specific flags. include them as disabled to allow operators to run this image without any config changes
var experimentalSoftFinalityEnabled, experimentalSealingVerificationEnabled bool
Cmd.Flags().BoolVar(&experimentalSoftFinalityEnabled, "experimental-soft-finality-enabled", false, "feature disabled in this release")
Cmd.Flags().BoolVar(&experimentalSealingVerificationEnabled, "experimental-sealing-verification-enabled", false, "feature disabled in this release")
_ = Cmd.Flags().MarkHidden("experimental-soft-finality-enabled")
_ = Cmd.Flags().MarkHidden("experimental-sealing-verification-enabled")
```
Contributor

🛠️ Refactor suggestion | 🟠 Major

Handle errors from MarkHidden() consistently with MarkDeprecated().

Lines 300-301 ignore errors from MarkHidden(), while lines 303-306 check errors from MarkDeprecated() and panic if present. Both methods can fail (e.g., if the flag doesn't exist), so error handling should be consistent to catch initialization issues early.

🔎 Proposed fix for consistent error handling

```diff
-	_ = Cmd.Flags().MarkHidden("experimental-soft-finality-enabled")
-	_ = Cmd.Flags().MarkHidden("experimental-sealing-verification-enabled")
+	if err := Cmd.Flags().MarkHidden("experimental-soft-finality-enabled"); err != nil {
+		panic(err)
+	}
+	if err := Cmd.Flags().MarkHidden("experimental-sealing-verification-enabled"); err != nil {
+		panic(err)
+	}
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
 // soft finality specific flags. include them as disabled to allow operators to run this image without any config changes
 var experimentalSoftFinalityEnabled, experimentalSealingVerificationEnabled bool
 Cmd.Flags().BoolVar(&experimentalSoftFinalityEnabled, "experimental-soft-finality-enabled", false, "feature disabled in this release")
 Cmd.Flags().BoolVar(&experimentalSealingVerificationEnabled, "experimental-sealing-verification-enabled", false, "feature disabled in this release")
-_ = Cmd.Flags().MarkHidden("experimental-soft-finality-enabled")
-_ = Cmd.Flags().MarkHidden("experimental-sealing-verification-enabled")
+if err := Cmd.Flags().MarkHidden("experimental-soft-finality-enabled"); err != nil {
+	panic(err)
+}
+if err := Cmd.Flags().MarkHidden("experimental-sealing-verification-enabled"); err != nil {
+	panic(err)
+}
```
🤖 Prompt for AI Agents
In cmd/run/cmd.go around lines 296 to 301, the calls to
Cmd.Flags().MarkHidden(...) ignore returned errors while later
MarkDeprecated(...) errors are checked and panic on failure; make error handling
consistent by capturing the error return values from MarkHidden and panic (or
log + panic) if non-nil. Replace the ignored assignments with error checks
(e.g., if err := Cmd.Flags().MarkHidden("experimental-soft-finality-enabled");
err != nil { panic(fmt.Errorf("failed to mark flag hidden: %w", err)) }) and do
the same for "experimental-sealing-verification-enabled" so initialization fails
fast if the flags are missing or the operation errors.

Comment on lines +18 to +27
```go
const (
	// The following are hardcoded heights used to ensure gateways ignore blocks after the hard fork
	// height even if they are returned by the upstream access node.

	// HardcodedMainnet27SporkRootHeight is the spork root block height (cadence) for the Mainnet 27 network.
	HardcodedMainnet27SporkRootHeight = 130290659

	// HardcodedMainnet27LastHeight is the hardcoded last block height (cadence) for the Mainnet 27 network.
	HardcodedMainnet27LastHeight = 137363395 // TODO: confirm final height when ready
)
```
Contributor

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Find all references to the Mainnet 27 constant
rg -n "HardcodedMainnet27LastHeight|137363395" --type go
```

Repository: onflow/flow-evm-gateway

Length of output: 1557

🏁 Script executed:

```bash
#!/bin/bash
# Check the actual file context around lines 18-27
cat -n services/requester/cross-spork_client.go | head -40
```

Repository: onflow/flow-evm-gateway

Length of output: 1621

🏁 Script executed:

```bash
#!/bin/bash
# Search for other similar hardcoded values for other sporks to understand the pattern
rg -n "Hardcoded.*Height|LastHeight" --type go | head -20
```

Repository: onflow/flow-evm-gateway

Length of output: 2272

🏁 Script executed:

```bash
#!/bin/bash
# Check for related issues or release notes
rg -n "137363395|Mainnet 27.*final.*height|confirm.*height" --type go --type md | head -20
```

Repository: onflow/flow-evm-gateway

Length of output: 193

🏁 Script executed:

```bash
#!/bin/bash
# Check commit history or PR references to this constant
git log --all --oneline --grep="137363395\|Mainnet 27" 2>/dev/null | head -10
git log --all --oneline -- services/requester/cross-spork_client.go 2>/dev/null | head -10
```

Repository: onflow/flow-evm-gateway

Length of output: 161

🏁 Script executed:

```bash
#!/bin/bash
# Look for any TODO tracking files or release documentation
find . -type f \( -name "*.md" -o -name "TODO*" -o -name "CHANGELOG*" -o -name "RELEASE*" \) -exec grep -l "Mainnet 27\|137363395" {} \;
```

Repository: onflow/flow-evm-gateway

Length of output: 49


Confirm the hardcoded Mainnet 27 last height (137363395) before release.

The TODO at line 26 remains unresolved. This constant is used to clamp block heights across multiple critical systems (cross-spork client and event subscriber), and an incorrect value will cause the gateway to either process blocks it shouldn't or stop processing prematurely.

🤖 Prompt for AI Agents
In services/requester/cross-spork_client.go around lines 18 to 27, the
HardcodedMainnet27LastHeight constant is left with a TODO and may be incorrect;
confirm the true Mainnet 27 final block height from an authoritative source
(official chain upgrade docs or the upstream access node's finalized block
height), update HardcodedMainnet27LastHeight to that confirmed value, remove the
TODO, and add a one-line comment referencing the authoritative source and date
checked (or move the value into a configuration/env variable if this may change)
so downstream systems use the correct clamping height.

Comment thread tests/tx_batching_test.go
Comment on lines +531 to 553
```diff
+	g := errgroup.Group{}
+
-	txHash, err := rpcTester.sendRawTx(signed)
+	startBlock, err := emu.GetLatestBlock()
 	require.NoError(t, err)
-	hashes = append(hashes, txHash)
-
-	// Increment nonce for the duplicate test transactions that follow
-	nonce += 1
-	dupSigned, _, err := evmSign(big.NewInt(10), 15_000_000, eoaKey, nonce, &testAddr, nil)
-	require.NoError(t, err)
+	hashes := []common.Hash{}
+	// transfer some funds to the test address
+	transferAmount := int64(1_000_000_000)
 
-	// Submit 5 identical transactions to test duplicate detection:
-	// the first should succeed, the rest should be rejected as duplicates.
-	for i := range 5 {
-		if i == 0 {
-			txHash, err := rpcTester.sendRawTx(dupSigned)
-			require.NoError(t, err)
-			hashes = append(hashes, txHash)
-		} else {
-			_, err := rpcTester.sendRawTx(dupSigned)
-			require.Error(t, err)
-			require.ErrorContains(t, err, "invalid: transaction already in pool")
-		}
+	for range 3 {
+		g.Go(func() error {
+			for _, nonce := range nonces {
+				signed, _, err := evmSign(big.NewInt(transferAmount), 23_500, eoaKey, nonce, &testAddr, nil)
+				require.NoError(t, err)
+
+				txHash, err := rpcTester.sendRawTx(signed)
+				require.NoError(t, err)
+				hashes = append(hashes, txHash)
+			}
+
+			return nil
+		})
 	}
```
Contributor

⚠️ Potential issue | 🟡 Minor

Data race: concurrent writes to shared hashes slice.

Multiple goroutines append to the shared hashes slice at line 548 without synchronization. While the slice isn't used in assertions (only expectedBalance is checked), this is still a data race that can cause undefined behavior.

🔎 Proposed fix using a mutex

```diff
+var hashesMux sync.Mutex
 hashes := []common.Hash{}
 // ...
 for range 3 {
     g.Go(func() error {
         for _, nonce := range nonces {
             // ...
             txHash, err := rpcTester.sendRawTx(signed)
             require.NoError(t, err)
+            hashesMux.Lock()
             hashes = append(hashes, txHash)
+            hashesMux.Unlock()
         }
         return nil
     })
 }
```

Alternatively, if hashes isn't needed for assertions, remove it entirely to avoid the race.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
 g := errgroup.Group{}
-txHash, err := rpcTester.sendRawTx(signed)
 startBlock, err := emu.GetLatestBlock()
 require.NoError(t, err)
-hashes = append(hashes, txHash)
-// Increment nonce for the duplicate test transactions that follow
-nonce += 1
-dupSigned, _, err := evmSign(big.NewInt(10), 15_000_000, eoaKey, nonce, &testAddr, nil)
-require.NoError(t, err)
+var hashesMux sync.Mutex
 hashes := []common.Hash{}
 // transfer some funds to the test address
 transferAmount := int64(1_000_000_000)
-// Submit 5 identical transactions to test duplicate detection:
-// the first should succeed, the rest should be rejected as duplicates.
-for i := range 5 {
-	if i == 0 {
-		txHash, err := rpcTester.sendRawTx(dupSigned)
-		require.NoError(t, err)
-		hashes = append(hashes, txHash)
-	} else {
-		_, err := rpcTester.sendRawTx(dupSigned)
-		require.Error(t, err)
-		require.ErrorContains(t, err, "invalid: transaction already in pool")
-	}
 for range 3 {
 	g.Go(func() error {
 		for _, nonce := range nonces {
 			signed, _, err := evmSign(big.NewInt(transferAmount), 23_500, eoaKey, nonce, &testAddr, nil)
 			require.NoError(t, err)
 			txHash, err := rpcTester.sendRawTx(signed)
 			require.NoError(t, err)
+			hashesMux.Lock()
 			hashes = append(hashes, txHash)
+			hashesMux.Unlock()
 		}
 		return nil
 	})
 }
```
🤖 Prompt for AI Agents
In tests/tx_batching_test.go around lines 531 to 553 there is a data race:
multiple goroutines append to the shared `hashes` slice without synchronization.
Fix by either removing `hashes` entirely if its values are unused, or protect
access with a mutex (or a channel) so that each append is performed under
lock/serialized; ensure any reads happen after the goroutines finish
(errgroup.Wait) to avoid concurrent access.

@janezpodhostnik
Contributor

If I'm not mistaken, the title of this PR is incorrect, and the PR it's porting has been closed. Do we want to close this one too?
