Description
Background
The JSON-RPC event subscription API eth_newFilter is a stateful service designed for efficiently querying newly generated transaction events. This mechanism works by matching user-defined filter criteria against the Bloom filters of block events, and returning the matched results through the eth_getFilterChanges API.
Currently, the event-matching process is executed in a single-threaded, sequential manner. Under high concurrency with a large number of active subscriptions, the matching latency increases significantly, causing the event processing throughput to fall behind the block synchronization speed. As a result, users experience noticeable delays when retrieving the latest events.
To address this issue, we propose optimizing the existing sequential matching mechanism into a parallel matching model to improve overall throughput and processing efficiency. In addition, an upper limit will be enforced on the number of concurrently created subscriptions. When this limit is exceeded, the system will return a clear and explicit error message to prevent resource exhaustion and further improve user experience.
Rationale
Why should this feature exist?
To improve user experience by preventing delayed delivery of subscribed events.
What are the use-cases?
The combination of eth_newFilter and eth_getFilterChanges can be used to build event-driven WebSocket services for real-time event subscriptions.
Specification
After a FullNode finishes processing a block, all transaction events generated within that block are written into a FIFO queue named filterCapsuleQueue. A background thread, filterProcessLoop, continuously consumes events from this queue and processes them block by block. For each block, the handleLogsFilter method is invoked to match the events against the subscription conditions created via eth_newFilter. Matched events are temporarily stored in a cache and can later be retrieved by users through eth_getFilterChanges.
If a subscription is not accessed within 300 seconds, it is considered expired, and its associated cached data will be automatically cleaned up.
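For orientation, here is a minimal sketch of the pipeline described above. Only filterCapsuleQueue, filterProcessLoop, and handleLogsFilter are names taken from this proposal; everything else (the Subscription and BlockEvents placeholders, the field and method shapes) is illustrative, not the existing code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

class LogFilterService {
  static final long EXPIRE_MS = 300_000;               // 300 s idle timeout

  // Events of one processed block, enqueued by the FullNode after block handling.
  record BlockEvents(long blockNum, List<Object> logs) {}

  // One eth_newFilter subscription: its criteria plus the matched-result cache
  // that eth_getFilterChanges later drains.
  static class Subscription {
    Object criteria;
    final List<Object> matched = new ArrayList<>();
    volatile long lastAccess = System.currentTimeMillis();
  }

  final BlockingQueue<BlockEvents> filterCapsuleQueue = new LinkedBlockingQueue<>();
  final Map<String, Subscription> filters = new ConcurrentHashMap<>();

  // Background consumer: drains the FIFO queue block by block.
  void filterProcessLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      BlockEvents block = filterCapsuleQueue.take();
      evictExpired();
      handleLogsFilter(block);
    }
  }

  // Current (sequential) behavior: every event is checked against every filter,
  // which is the O(m x n) cost discussed below.
  void handleLogsFilter(BlockEvents block) {
    for (Subscription sub : filters.values()) {
      for (Object log : block.logs()) {
        if (matches(sub.criteria, log)) {
          sub.matched.add(log);                         // later read via eth_getFilterChanges
        }
      }
    }
  }

  // Drop subscriptions not accessed within the 300-second window.
  void evictExpired() {
    long now = System.currentTimeMillis();
    filters.values().removeIf(s -> now - s.lastAccess > EXPIRE_MS);
  }

  boolean matches(Object criteria, Object log) { /* bloom / topic check */ return false; }
}
```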
In the current implementation, if there are m active subscriptions, each of the n events in a block must be matched against all subscriptions, resulting in a time complexity of O(m × n). When the number of subscriptions reaches approximately 200,000, the matching time for a single block can take up to 20 seconds, which significantly exceeds the block production interval (3 seconds). This leads to severe event processing lag: existing subscribers experience substantial delays, while newly created subscriptions starting from the latest block may fail to receive any events for an extended period of time.
To resolve this issue, the sequential matching model between block events and subscriptions is optimized by partitioning subscriptions into groups and matching them in parallel against the event list of a block. At the same time, the ordering of events within each individual subscription is strictly preserved.
Since event matching is a CPU-intensive operation, the level of parallelism is intentionally limited to 2–4 threads to avoid negatively impacting block synchronization and other core processing workflows.
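Under those constraints, the parallel matching could be sketched roughly as follows, reusing the placeholder Subscription type from the sketch above; MATCH_PARALLELISM, partition, and handleLogsFilterParallel are illustrative names, not existing code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ParallelLogMatcher {
  // Kept small (2-4) so matching does not starve block synchronization.
  static final int MATCH_PARALLELISM = 4;
  private final ExecutorService pool = Executors.newFixedThreadPool(MATCH_PARALLELISM);

  /**
   * Matches one block's events against all subscriptions.
   * Subscriptions are split into MATCH_PARALLELISM groups; each group is handled by
   * a single task that walks the block's events in order, so the ordering of matched
   * events within any individual subscription is preserved.
   */
  void handleLogsFilterParallel(List<LogFilterService.Subscription> subs,
                                List<Object> blockLogs) throws Exception {
    List<List<LogFilterService.Subscription>> groups = partition(subs, MATCH_PARALLELISM);
    List<Future<?>> futures = new ArrayList<>();
    for (List<LogFilterService.Subscription> group : groups) {
      futures.add(pool.submit(() -> {
        for (LogFilterService.Subscription sub : group) {
          for (Object log : blockLogs) {              // block order kept per subscription
            if (matches(sub.criteria, log)) {
              sub.matched.add(log);
            }
          }
        }
      }));
    }
    for (Future<?> f : futures) {
      f.get();                                        // wait before moving to the next block
    }
  }

  // Round-robin split of the subscription list into `parts` groups.
  private static <T> List<List<T>> partition(List<T> items, int parts) {
    List<List<T>> groups = new ArrayList<>();
    for (int i = 0; i < parts; i++) {
      groups.add(new ArrayList<>());
    }
    for (int i = 0; i < items.size(); i++) {
      groups.get(i % parts).add(items.get(i));
    }
    return groups;
  }

  boolean matches(Object criteria, Object log) { /* bloom / topic check */ return false; }
}
```

Because every subscription is assigned to exactly one group, its result cache is written by a single worker thread per block, which is what preserves per-subscription event ordering without additional locking.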
Furthermore, to prevent uncontrolled matching latency under extreme subscription volumes, a configuration parameter node.jsonrpc.maxLogFilterNum is introduced to cap the maximum number of log filters that can be created. When the number of active subscriptions exceeds this threshold, new subscription requests will be rejected with a clear error message: "exceed max log filters".
Service operators can adjust the default maximum subscription limit based on their hardware resources and system load.
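A possible shape of the limit check on filter creation is sketched below; the configuration key node.jsonrpc.maxLogFilterNum and the error text come from this proposal, while the class, method, and field names are assumptions:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class FilterRegistry {
  // Default cap; operators would override it via node.jsonrpc.maxLogFilterNum
  // in the node configuration.
  private final int maxLogFilterNum;
  private final Map<String, Object> logFilters = new ConcurrentHashMap<>();

  FilterRegistry(int maxLogFilterNum) {
    this.maxLogFilterNum = maxLogFilterNum;
  }

  // Called by eth_newFilter: reject the request once the configured cap is reached.
  String newLogFilter(Object criteria) {
    if (logFilters.size() >= maxLogFilterNum) {
      throw new IllegalStateException("exceed max log filters");
    }
    String id = "0x" + UUID.randomUUID().toString().replace("-", "");
    logFilters.put(id, criteria);
    return id;
  }
}
```

Note that the size check and the insertion above are not atomic; an actual implementation would need to guard them (for example with a lock or an atomic counter) so that concurrent eth_newFilter calls cannot overshoot the cap.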
Test Specification
Stress testing can be used to evaluate block-level event matching latency and CPU utilization. Test results will be provided in a follow-up update.
Scope of Impact
This change only affects the JSON-RPC eth_newFilter interface.
Implementation
Do you have ideas regarding the implementation of this feature?
Yes.
Are you willing to implement this feature?
Yes.