@@ -423,6 +423,7 @@ public List<String> getCheckpointList() {

private void deleteCheckpoint() {

🟢 This TODO comment indicates a potential issue in mock tests. While not directly related to the current PR's functionality, it's good to keep track of these for future improvements.

🟢 The comment `// only occurs in mock test. TODO fix test` suggests that there is an underlying issue with the tests that should be addressed. While this is a pre-existing issue, since this line is being touched in this PR, it's a good opportunity to highlight that the tests should be fixed to avoid having special conditions in the production code just for tests.

🟡 The TODO fix test comment indicates an existing issue in mock tests. While not directly introduced by this PR, addressing these mock test deficiencies is crucial for the reliability and completeness of the testing suite. This could affect lifecycle or state if the mock is not accurately reflecting production behavior.

if(checkTmpStore == null) {

🟢 It's good practice to address TODO comments. Could you please fix the test as the comment suggests? If not in this PR, please create a ticket to track this.

// only occurs in mock test. TODO fix test

🟢 Adding comments about mock tests in production code is generally discouraged. If this null check is required for safety in production, the comment might be misleading. If it is only for tests, the tests should ideally be fixed to set up the environment correctly rather than patching production code.

🟢 While this check prevents the test failure, leaving a TODO in production code regarding a test fix is technical debt. It is recommended to fix the test setup so that checkTmpStore is properly initialized, rather than patching the production code.

🟡 Medium - The // only occurs in mock test. TODO fix test comment indicates a pre-existing issue within the mock tests. While not introduced by this PR, such comments highlight potential gaps in test coverage or design, which could mask real issues or lead to unexpected behavior in non-mock environments. It is recommended to address these TODOs to improve the robustness of the test suite.

return;
}
try {
@@ -527,6 +527,9 @@ public class CommonParameter {
@Getter
@Setter
public int jsonRpcMaxSubTopics = 1000;
@Getter
@Setter
public int jsonRpcMaxBlockFilterNum = 50000;

@Getter
@Setter
1 change: 1 addition & 0 deletions common/src/main/java/org/tron/core/Constant.java
@@ -154,6 +154,7 @@ public class Constant {
public static final String NODE_JSONRPC_HTTP_PBFT_PORT = "node.jsonrpc.httpPBFTPort";
public static final String NODE_JSONRPC_MAX_BLOCK_RANGE = "node.jsonrpc.maxBlockRange";
public static final String NODE_JSONRPC_MAX_SUB_TOPICS = "node.jsonrpc.maxSubTopics";
public static final String NODE_JSONRPC_MAX_BLOCK_FILTER_NUM = "node.jsonrpc.maxBlockFilterNum";

public static final String NODE_DISABLED_API_LIST = "node.disabledApi";

@@ -0,0 +1,16 @@
package org.tron.core.exception.jsonrpc;

public class JsonRpcExceedLimitException extends JsonRpcException {

public JsonRpcExceedLimitException() {
super();
}

public JsonRpcExceedLimitException(String message) {
super(message);
}

public JsonRpcExceedLimitException(String message, Throwable cause) {
super(message, cause);
}
}

🟢 Nitpick: Please add a newline at the end of the file.

🟡 This new exception class is not used anywhere in the codebase. In TronJsonRpcImpl.java, a JsonRpcInvalidParamsException is thrown when the block filter limit is exceeded.

Consider either using this new exception there (and updating the interface and error mappings accordingly) or removing this class to avoid dead code.
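If the new exception is kept, the wiring could look roughly like the following. This is a minimal, self-contained sketch: the class names mirror the PR, but the guard method, the field names, and the 50,000 default are illustrative assumptions, not the actual TronJsonRpcImpl code.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ExceedLimitSketch {
  static class JsonRpcException extends Exception {
    JsonRpcException(String message) { super(message); }
  }

  static class JsonRpcExceedLimitException extends JsonRpcException {
    JsonRpcExceedLimitException(String message) { super(message); }
  }

  // Mirrors the jsonRpcMaxBlockFilterNum default added by this PR.
  static final int MAX_BLOCK_FILTER_NUM = 50_000;
  static final ConcurrentHashMap<String, Object> blockFilters = new ConcurrentHashMap<>();

  // A guard that newBlockFilter() could apply before registering a new filter.
  static void checkFilterLimit() throws JsonRpcExceedLimitException {
    if (blockFilters.size() >= MAX_BLOCK_FILTER_NUM) {
      throw new JsonRpcExceedLimitException("exceed max block filter num");
    }
  }

  public static void main(String[] args) {
    try {
      checkFilterLimit(); // passes while under the limit
      blockFilters.put("0x1", new Object());
    } catch (JsonRpcExceedLimitException e) {
      System.out.println("limit exceeded: " + e.getMessage());
    }
    System.out.println("filters=" + blockFilters.size());
  }
}
```

Using the dedicated exception type would also let the @JsonRpcErrors mapping distinguish a limit error from a genuine invalid-params error.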

🟢 It's a good practice to add a newline at the end of the file. Most Java style guides recommend it.

🟢 It's a good practice to have a newline at the end of the file.

Suggested change: add a trailing newline after the final `}`.

🟢 Consider adding a newline at the end of the file for better POSIX compatibility and to align with common Java formatting practices.

@@ -546,6 +546,7 @@ public boolean isBusy() {
}
int queueSize = 0;

🟡 Similar to the SnapshotManager comment, this TODO fix test indicates a potential gap or inaccuracy in the mock test setup for EventPluginLoader. Ensuring mocks accurately represent real-world scenarios is important for robust testing and preventing silent failures in the actual system.

if (eventListeners == null || eventListeners.isEmpty()) {

🟢 Similar to the SnapshotManager, this TODO indicates a potential issue in mock tests that might warrant a separate fix.

//only occurs in mock test. TODO fix test

🟢 Similar to the comment in SnapshotManager, try to address the root cause in the test environment instead of modifying production code with test-specific fixes.

return false;
}
for (IPluginEventListener listener : eventListeners) {
6 changes: 6 additions & 0 deletions framework/src/main/java/org/tron/core/config/args/Args.java
@@ -207,6 +207,7 @@ public static void clearParam() {
PARAMETER.jsonRpcHttpPBFTNodeEnable = false;
PARAMETER.jsonRpcMaxBlockRange = 5000;
PARAMETER.jsonRpcMaxSubTopics = 1000;
PARAMETER.jsonRpcMaxBlockFilterNum = 50000;
PARAMETER.nodeMetricsEnable = false;
PARAMETER.metricsStorageEnable = false;
PARAMETER.metricsPrometheusEnable = false;
@@ -491,6 +492,11 @@ public static void setParam(final Config config) {
config.getInt(Constant.NODE_JSONRPC_MAX_SUB_TOPICS);
}

if (config.hasPath(Constant.NODE_JSONRPC_MAX_BLOCK_FILTER_NUM)) {
PARAMETER.jsonRpcMaxBlockFilterNum =
config.getInt(Constant.NODE_JSONRPC_MAX_BLOCK_FILTER_NUM);
}

if (config.hasPath(Constant.VM_MIN_TIME_RATIO)) {
PARAMETER.minTimeRatio = config.getDouble(Constant.VM_MIN_TIME_RATIO);
}
9 changes: 6 additions & 3 deletions framework/src/main/java/org/tron/core/net/TronNetService.java

🟢 The changes in this file, moving the P2P service start to the end, seem unrelated to the PR's goal of optimizing memory allocation for the JSON-RPC filter interface. Could you please clarify why these changes are included in this PR or move them to a separate one?

@@ -87,9 +87,6 @@ private static void setP2pConfig(P2pConfig config) {
public void start() {
try {
init = true;
setP2pConfig(getConfig());
p2pService.start(p2pConfig);
p2pService.register(p2pEventHandler);
advService.init();
syncService.init();
peerStatusCheck.init();
@@ -101,6 +98,12 @@ public void start() {
PeerManager.init();

🟡 The reordering of the P2P service start to the end of the start() method is noted. Please ensure this change does not introduce any unexpected dependencies or issues with other services that might rely on the P2P service being initialized earlier.

🟠 The P2P service start has been reordered to the end of the start() method. Please provide more context on the rationale behind this reordering. Specifically, ensure that this change does not negatively impact network stability, peer discovery, or other lifecycle aspects, especially in a distributed blockchain environment where timing and initialization order can be critical for consensus and state synchronization.

🟠 Moving the P2P service start to the end of the start() method is a significant change in the node's initialization sequence. This could introduce subtle behavioral compatibility issues if other services implicitly depend on P2P being available earlier. Ensure thorough testing for any timing-dependent failures or delayed network participation during node startup. Consider adding explicit synchronization mechanisms if such dependencies exist.
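One way to make such an ordering dependency explicit, if one exists, is a startup gate: dependent services block on a latch released when the network layer is up, instead of relying on implicit call order. This is a hedged, self-contained sketch of the idea, not code from TronNetService.

```java
import java.util.concurrent.CountDownLatch;

public class StartupGateSketch {
  // Released once the (hypothetical) P2P layer is up; dependent services
  // await it rather than assuming a particular init order in start().
  static final CountDownLatch p2pReady = new CountDownLatch(1);

  static void startP2p() {
    // ... start networking here ...
    p2pReady.countDown();
  }

  static void startDependentService() throws InterruptedException {
    p2pReady.await(); // explicit ordering instead of implicit call order
    System.out.println("dependent service started after P2P");
  }

  public static void main(String[] args) {
    Thread t = new Thread(() -> {
      try {
        startDependentService();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    t.start();
    startP2p();
    try {
      t.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
```

With a gate like this, reordering the start() body cannot silently break a consumer of the P2P service; it would block (or time out loudly) instead.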

🔴 The P2P service initialization in TronNetService.java::start() has been moved to the end of the method. This reordering could introduce runtime issues if advService, syncService, peerStatusCheck, PeerManager, relayService, or effectiveCheckService (or any other service initialized before the P2P service) have implicit dependencies on a fully operational P2P network. This could lead to NullPointerException or incorrect network state during startup. Thorough testing of all service interdependencies during startup is critical.

Suggested change
PeerManager.init();
setP2pConfig(getConfig());
p2pService.start(p2pConfig);
p2pService.register(p2pEventHandler);
advService.init();
syncService.init();
peerStatusCheck.init();
...

relayService.init();
effectiveCheckService.init();

Comment on lines 98 to +101

🟡 The changes in this file, particularly moving the P2P service initialization, seem unrelated to the primary goal of this pull request, which is optimizing JSON-RPC filter memory allocation. Could you clarify the reason for this change or consider moving it to a separate PR to maintain focus and clarity?

// Move P2P service start to the end
setP2pConfig(getConfig());

🟡 This change alters the startup order of services and seems unrelated to the PR's objective of optimizing memory allocation for JSON-RPC. While postponing P2P start might be a valid fix, it should ideally be in a separate PR or clearly documented in the PR description to ensure it gets proper visibility and testing regarding side effects.

p2pService.start(p2pConfig);

🟠 High - The reordering of p2pService.start() to the end of the start() method could introduce race conditions or NullPointerException if services like advService, syncService, peerStatusCheck, etc., implicitly depend on p2pService being fully operational during their init() phase. This change should be carefully reviewed for implicit dependencies and potential startup issues.

p2pService.register(p2pEventHandler);
Comment on lines +102 to +105

🟠 This change modifies the service startup sequence by moving p2pService initialization to the end. This falls outside the scope of the PR's objective ("Optimize memory allocation for JSON-RPC").

Changing the startup order can have significant side effects if other services (like PeerManager, syncService, etc.) rely on the P2P service being active during their initialization.

Please revert this change unless it is strictly necessary for the memory optimization, in which case, please provide a detailed explanation of the dependency.


logger.info("Net service start successfully");
Comment on lines +104 to 107

🔴 The change in the P2P service initialization order is significant and seems unrelated to the primary goal of this PR, which is to optimize memory for JSON-RPC filters. Altering the startup sequence of core services can introduce subtle bugs, race conditions, or dependency issues that are hard to detect.

Please revert this change and move it to a separate PR where it can be reviewed with the appropriate context and justification. Bundling unrelated changes makes the review process difficult and increases the risk of introducing regressions.

} catch (Exception e) {
throw new TronError(e, TronError.ErrCode.TRON_NET_SERVICE_INIT);
@@ -43,7 +43,7 @@ public void statusCheck() {
long now = System.currentTimeMillis();

if (tronNetDelegate == null) {
//only occurs in mock test. TODO fix test
// only occurs in mock test. TODO fix test
return;
}
tronNetDelegate.getActivePeer().forEach(peer -> {
@@ -7,6 +7,7 @@
import com.googlecode.jsonrpc4j.JsonRpcMethod;
import java.io.IOException;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.ExecutionException;
import lombok.AllArgsConstructor;
import lombok.Getter;
@@ -18,6 +19,7 @@
import org.tron.common.utils.ByteArray;
import org.tron.core.exception.BadItemException;
import org.tron.core.exception.ItemNotFoundException;
import org.tron.core.exception.jsonrpc.JsonRpcExceedLimitException;
import org.tron.core.exception.jsonrpc.JsonRpcInternalException;
import org.tron.core.exception.jsonrpc.JsonRpcInvalidParamsException;
import org.tron.core.exception.jsonrpc.JsonRpcInvalidRequestException;
@@ -28,7 +30,11 @@
import org.tron.core.services.jsonrpc.types.CallArguments;
import org.tron.core.services.jsonrpc.types.TransactionReceipt;

🟢 The import `org.tron.core.net.peer.Item` in framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpc.java is unused and should be removed.

Suggested change
import org.tron.core.services.jsonrpc.types.TransactionReceipt;
-import org.tron.core.net.peer.Item;

import org.tron.core.services.jsonrpc.types.TransactionResult;
import org.tron.core.net.peer.Item;

/**
* Error code refers to https://www.quicknode.com/docs/ethereum/error-references
*/
@Component
public interface TronJsonRpc {

@@ -292,9 +298,10 @@ String newFilter(FilterRequest fr) throws JsonRpcInvalidParamsException,

@JsonRpcMethod("eth_newBlockFilter")

🔴 The eth_newBlockFilter API now throws JsonRpcInvalidParamsException when maxBlockFilterNum is exceeded. This is a breaking behavioral change for existing clients that may not be prepared to handle this new exception, potentially leading to client-side errors or crashes.

Suggested change
@JsonRpcMethod("eth_newBlockFilter")
String newBlockFilter() throws JsonRpcMethodNotFoundException;

@JsonRpcErrors({
@JsonRpcError(exception = JsonRpcInvalidParamsException.class, code = -32005, data = "{}"),
@JsonRpcError(exception = JsonRpcMethodNotFoundException.class, code = -32601, data = "{}"),
})
String newBlockFilter() throws JsonRpcMethodNotFoundException;
String newBlockFilter() throws JsonRpcInvalidParamsException, JsonRpcMethodNotFoundException;

@JsonRpcMethod("eth_uninstallFilter")
@JsonRpcErrors({
Expand Down Expand Up @@ -472,5 +479,41 @@ public LogFilterElement(String blockHash, Long blockNum, String txId, Integer tx
}
this.removed = removed;
}

@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || this.getClass() != o.getClass()) {
return false;
}
LogFilterElement item = (LogFilterElement) o;
if (!Objects.equals(blockHash, item.blockHash)) {
return false;
}
if (!Objects.equals(transactionHash, item.transactionHash)) {
return false;
}
if (!Objects.equals(transactionIndex, item.transactionIndex)) {
return false;
}
if (!Objects.equals(logIndex, item.logIndex)) {
return false;
}
return removed == item.removed;
}

@Override
public int hashCode() {
int result = 0;
result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
Comment on lines +485 to +512

🟢 These equals and hashCode implementations can be simplified using java.util.Objects.

Suggested change
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement that = (LogFilterElement) o;
  return removed == that.removed
      && Objects.equals(blockHash, that.blockHash)
      && Objects.equals(transactionHash, that.transactionHash)
      && Objects.equals(transactionIndex, that.transactionIndex)
      && Objects.equals(logIndex, that.logIndex);
}

@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
result = result + (removed ? 1 : 0);
Comment on lines +508 to +514

🟢 For better readability and conciseness, you can use `Objects.hash()` to implement the `hashCode` method.
Suggested change
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

return result;
Comment on lines +514 to +515

🟢 The `hashCode()` implementation can be made more concise and readable by using `java.util.Objects.hash()`.
Suggested change
@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

}
Comment on lines +509 to +516

🟡 The custom hashCode implementation could be improved. While it is functional, using java.util.Objects.hash() would be more concise, less error-prone, and is generally considered a best practice for generating hash codes from multiple fields.

Suggested change
@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}


Comment on lines +482 to +517

🟢 The equals and hashCode implementations can be simplified using java.util.Objects.

Suggested change
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement that = (LogFilterElement) o;
  return removed == that.removed
      && Objects.equals(blockHash, that.blockHash)
      && Objects.equals(transactionHash, that.transactionHash)
      && Objects.equals(transactionIndex, that.transactionIndex)
      && Objects.equals(logIndex, that.logIndex);
}

@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}

}
}
@@ -11,6 +11,8 @@
import static org.tron.core.services.jsonrpc.JsonRpcApiUtil.triggerCallContract;

import com.alibaba.fastjson.JSON;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.protobuf.ByteString;
import com.google.protobuf.GeneratedMessageV3;
import java.io.Closeable;
@@ -25,10 +27,10 @@
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.regex.Pattern;
import lombok.Getter;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.bouncycastle.util.encoders.Hex;
import org.springframework.beans.factory.annotation.Autowired;
@@ -51,6 +53,7 @@
import org.tron.core.Wallet;
import org.tron.core.capsule.BlockCapsule;
import org.tron.core.capsule.TransactionCapsule;
import org.tron.core.config.args.Args;
import org.tron.core.db.Manager;
import org.tron.core.db2.core.Chainbase;
import org.tron.core.exception.BadItemException;
@@ -59,6 +62,7 @@
import org.tron.core.exception.HeaderNotFound;
import org.tron.core.exception.ItemNotFoundException;
import org.tron.core.exception.VMIllegalException;
import org.tron.core.exception.jsonrpc.JsonRpcExceedLimitException;
import org.tron.core.exception.jsonrpc.JsonRpcInternalException;
import org.tron.core.exception.jsonrpc.JsonRpcInvalidParamsException;
import org.tron.core.exception.jsonrpc.JsonRpcInvalidRequestException;
@@ -110,6 +114,17 @@ public enum RequestSource {

private static final String FILTER_NOT_FOUND = "filter not found";

🟡 The logElementCache and blockHashCache are configured with relatively large maximum sizes (300,000 and 60,000 respectively). While the comments provide a rationale, it's important to monitor the memory footprint of these caches in a production environment, especially under sustained high load. Consider adding metrics to track cache size and hit/miss ratios to ensure efficient memory utilization and to detect potential memory pressure.
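Guava's recordStats() already collects the relevant counters, exposed via cache.stats() as hitCount, missCount, and evictionCount. As a self-contained illustration of the numbers worth tracking, here is a minimal stdlib LRU with hand-rolled counters; the tiny size and names are illustrative, not the PR's configuration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruStatsSketch {
  static final int MAX_SIZE = 4; // tiny for illustration; the PR uses 300_000 / 60_000

  static long hits = 0;
  static long misses = 0;

  // An access-order LinkedHashMap gives LRU eviction, mirroring the Guava caches.
  static final Map<String, String> cache =
      new LinkedHashMap<String, String>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
          return size() > MAX_SIZE;
        }
      };

  // Return the cached instance if present (hit), otherwise insert (miss).
  static String intern(String key) {
    String cached = cache.get(key);
    if (cached != null) {
      hits++;
      return cached;
    }
    misses++;
    cache.put(key, key);
    return key;
  }

  public static void main(String[] args) {
    intern("a");
    intern("a");
    intern("b");
    System.out.printf("size=%d hits=%d misses=%d%n", cache.size(), hits, misses);
  }
}
```

A low hit rate on logElementCache would mean the interning buys little deduplication and the cache is mostly overhead; that is the signal worth exporting as a metric.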

🟢 Adding a comment like // LRU cache is helpful for readability and maintainability, clarifying the cache eviction policy. Consider adding similar comments for other cache configurations where applicable for consistency.

public static final int EXPIRE_SECONDS = 5 * 60;
private static final int maxBlockFilterNum = Args.getInstance().getJsonRpcMaxBlockFilterNum();
private static final Cache<LogFilterElement, LogFilterElement> logElementCache =
CacheBuilder.newBuilder()
.maximumSize(300_000L) // 300s * tps(1000) * 1 log/tx ≈ 300_000

🟢 The cache maximum sizes are hardcoded. Consider making these configurable via Args or CommonParameter, similar to jsonRpcMaxBlockFilterNum. This would provide flexibility for nodes with different memory constraints.
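If the sizes were made configurable, the node's HOCON config could look like the sketch below. maxBlockFilterNum is the key this PR actually adds; the two cache-size keys are hypothetical suggestions that do not exist in the codebase.

```hocon
node.jsonrpc {
  maxBlockFilterNum = 50000      # added by this PR

  # Hypothetical keys for the currently hardcoded cache sizes:
  # logElementCacheSize = 300000
  # blockHashCacheSize = 60000
}
```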


.expireAfterWrite(EXPIRE_SECONDS, TimeUnit.SECONDS)
.recordStats().build(); //LRU cache
private static final Cache<String, String> blockHashCache =
CacheBuilder.newBuilder()
.maximumSize(60_000L) // 300s * 200 block/s when syncing
.expireAfterWrite(EXPIRE_SECONDS, TimeUnit.SECONDS)
.recordStats().build(); //LRU cache
/**
* for log filter in Full Json-RPC
*/
Expand Down Expand Up @@ -181,16 +196,31 @@ public static void handleBLockFilter(BlockFilterCapsule blockFilterCapsule) {
it = getBlockFilter2ResultFull().entrySet().iterator();
}

if (!it.hasNext()) {
return;
}
final String originalBlockHash = ByteArray.toJsonHex(blockFilterCapsule.getBlockHash());
String cachedBlockHash;
try {
// compare with hashcode() first, then with equals(). If not exist, put it.
cachedBlockHash = blockHashCache.get(originalBlockHash, () -> originalBlockHash);
} catch (ExecutionException e) {

🟡 Catching `ExecutionException` and only logging it with a comment "never happen" can hide potential issues. The cause of the `ExecutionException` should be properly handled or re-thrown as a runtime exception to ensure that any unexpected errors during cache loading are propagated.
Suggested change
try {
  // compare with hashcode() first, then with equals(). If not exist, put it.
  cachedBlockHash = blockHashCache.get(originalBlockHash, () -> originalBlockHash);
} catch (ExecutionException e) {
  logger.error("Getting/loading blockHash from cache failed", e);
  throw new RuntimeException(e);
}

logger.error("Getting/loading blockHash from cache failed", e); //never happen
cachedBlockHash = originalBlockHash;
}

🟢 The comment `//never happen` isn't very descriptive. It would be better to explain *why* it will never happen. For example, you could explain that the callable for the cache does not throw a checked exception.
Suggested change
// The callable cannot throw a checked exception, so this should never happen.
logger.error("Getting/loading blockHash from cache failed", e);
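The pattern being applied here is value interning: equal objects are collapsed to one shared instance before being fanned out to every filter's result list. A Guava-free sketch of the same idea uses ConcurrentHashMap.computeIfAbsent, which needs no checked-exception handling at all; note that unlike the PR's caches it has no size bound or TTL eviction, so this is an illustration of the interning step only.

```java
import java.util.concurrent.ConcurrentHashMap;

public class InternSketch {
  // Deduplicates equal values so many filters share one instance instead of
  // each holding its own copy.
  static final ConcurrentHashMap<String, String> pool = new ConcurrentHashMap<>();

  static String intern(String blockHash) {
    // computeIfAbsent cannot throw a checked exception, so no
    // ExecutionException handling is needed, unlike Cache.get(key, loader).
    return pool.computeIfAbsent(blockHash, h -> h);
  }

  public static void main(String[] args) {
    String a = intern(new String("0xabc"));
    String b = intern(new String("0xabc"));
    System.out.println(a == b); // both callers now share the pooled instance
  }
}
```

The Guava cache in the PR adds the missing pieces, bounded size and expiry, on top of this interning idea, at the cost of the ExecutionException ceremony discussed above.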

while (it.hasNext()) {
Entry<String, BlockFilterAndResult> entry = it.next();
if (entry.getValue().isExpire()) {
it.remove();
continue;
}
entry.getValue().getResult().add(ByteArray.toJsonHex(blockFilterCapsule.getBlockHash()));
entry.getValue().getResult().add(cachedBlockHash);
}
}

/**
* append LogsFilterCapsule's LogFilterElement list to each filter if matched
*/
public static void handleLogsFilter(LogsFilterCapsule logsFilterCapsule) {
Iterator<Entry<String, LogFilterAndResult>> it;

@@ -226,8 +256,17 @@ public static void handleLogsFilter(LogsFilterCapsule logsFilterCapsule) {
LogMatch.matchBlock(logFilter, logsFilterCapsule.getBlockNumber(),
logsFilterCapsule.getBlockHash(), logsFilterCapsule.getTxInfoList(),
logsFilterCapsule.isRemoved());

🔴 The List returned by BlockFilterAndResult.getResult() and LogFilterAndResult.getResult() is being modified concurrently in handleBLockFilter and handleLogsFilter through .add() operations. If these lists are not thread-safe (e.g., ArrayList), this will lead to ConcurrentModificationException or data corruption under concurrent access. Consider using thread-safe collections like CopyOnWriteArrayList or Collections.synchronizedList().

Suggested change
logsFilterCapsule.isRemoved());
// Change the return type of getResult() to a thread-safe List or
// ensure all modifications happen under a lock.
// For example, if getResult() returns a new CopyOnWriteArrayList:
// logFilterAndResult.getResult().add(cachedElement);
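A self-contained sketch of the reviewer's suggestion, assuming getResult() were backed by a CopyOnWriteArrayList; the actual list type used in the codebase is not visible in this diff, so this is illustrative only.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SafeResultListSketch {
  // With CopyOnWriteArrayList, concurrent add() calls from the filter-handling
  // thread and iteration from eth_getFilterChanges are safe without external
  // locking, at the cost of copying the array on every write.
  static final List<String> result = new CopyOnWriteArrayList<>();

  public static void main(String[] args) {
    Thread writer = new Thread(() -> {
      for (int i = 0; i < 1000; i++) {
        result.add("0x" + i);
      }
    });
    writer.start();
    // Safe to iterate while the writer is adding: the iterator sees a snapshot.
    for (String h : result) {
      h.length();
    }
    try {
      writer.join();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    System.out.println(result.size());
  }
}
```

Because these result lists are write-heavy and drained periodically, Collections.synchronizedList() may be the cheaper choice here; CopyOnWriteArrayList favors read-heavy workloads.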

if (CollectionUtils.isNotEmpty(elements)) {
logFilterAndResult.getResult().addAll(elements);

for (LogFilterElement element : elements) {
LogFilterElement cachedElement;
try {
// compare with hashcode() first, then with equals(). If not exist, put it.
cachedElement = logElementCache.get(element, () -> element);
} catch (ExecutionException e) {
logger.error("Getting/loading LogFilterElement from cache fails", e); // never happen
cachedElement = element;
}
logFilterAndResult.getResult().add(cachedElement);
}
}
}
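The `logElementCache.get(element, () -> element)` call above treats the cache as an interner: each element is its own key, so equal elements collapse to one shared instance. A stdlib-only sketch of the same idea (hypothetical `Interner` class; the real code uses a Guava cache with size and expiry bounds):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical interner: returns one canonical instance per equality class.
class Interner<T> {
    private final Map<T, T> pool = new ConcurrentHashMap<>();

    // Returns the canonical instance for any element equal to `element`,
    // storing it on first sight. Relies on correct equals()/hashCode().
    T intern(T element) {
        return pool.computeIfAbsent(element, e -> e);
    }
}
```

Deduplicating elements this way lets many filters share one object per log entry instead of holding separate copies.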
@@ -797,7 +836,7 @@ public TransactionReceipt getTransactionReceipt(String txId)
long blockNum = blockCapsule.getNum();


🟢 This line contains redundant spaces. Consider removing them for consistency with coding style guidelines.

Suggested change
long blockNum = blockCapsule.getNum();
long energyFee = wallet.getEnergyFee(blockCapsule.getTimeStamp());

TransactionInfoList transactionInfoList = wallet.getTransactionInfoByBlockNum(blockNum);
long energyFee = wallet.getEnergyFee(blockCapsule.getTimeStamp());

// Find transaction context
TransactionReceipt.TransactionContext context
= findTransactionContext(transactionInfoList,
@@ -806,7 +845,7 @@
if (context == null) {
return null; // Transaction not found in block
}

return new TransactionReceipt(blockCapsule, transactionInfo, context, energyFee);
}

@@ -1391,7 +1430,8 @@ public String newFilter(FilterRequest fr) throws JsonRpcInvalidParamsException,
}


🟡 Medium - The newBlockFilter() method's signature has been updated to throw JsonRpcInvalidParamsException. Clients calling this RPC method will need to handle this new exception, which indicates an intentional behavioral change when the maximum number of block filters is exceeded.

@Override
public String newBlockFilter() throws JsonRpcMethodNotFoundException {
public String newBlockFilter() throws JsonRpcMethodNotFoundException,
JsonRpcInvalidParamsException {
disableInPBFT("eth_newBlockFilter");

Map<String, BlockFilterAndResult> blockFilter2Result;
@@ -1400,6 +1440,10 @@ public String newBlockFilter() throws JsonRpcMethodNotFoundException {
} else {


🟠 The introduction of maxBlockFilterNum and the associated check in newBlockFilter() represents a new restriction on the eth_newBlockFilter API. While crucial for resource management, this change can affect existing API clients that might expect an unbounded number of filters. This behavioral change should be clearly documented for API consumers to prevent unexpected JsonRpcInvalidParamsException errors.

blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {


🟡 You've introduced a new exception `JsonRpcExceedLimitException`, but you're throwing `JsonRpcInvalidParamsException` here. It would be more consistent and specific to use the new exception you've created.
Suggested change
if (blockFilter2Result.size() >= maxBlockFilterNum) {
if (blockFilter2Result.size() >= maxBlockFilterNum) {
throw new JsonRpcExceedLimitException(
"exceed max block filters: " + maxBlockFilterNum + ", try again later");
}


🟡 The newly created `JsonRpcExceedLimitException` should be used here instead of `JsonRpcInvalidParamsException` to provide a more specific error when the block filter limit is exceeded. This improves error handling clarity.
Suggested change
if (blockFilter2Result.size() >= maxBlockFilterNum) {
throw new JsonRpcExceedLimitException(
"exceed max block filters: " + maxBlockFilterNum + ", try again later");

throw new JsonRpcInvalidParamsException(


🟡 The class JsonRpcExceedLimitException was added in this PR but is not used here. You are throwing JsonRpcInvalidParamsException instead.

If you intend to use JsonRpcExceedLimitException, please ensure you:

  1. Update this throw statement.
  2. Update the newBlockFilter method signature in TronJsonRpc.java to throw JsonRpcExceedLimitException.
  3. Add the corresponding @JsonRpcError annotation in TronJsonRpc.java.

"exceed max block filters: " + maxBlockFilterNum + ", try again later");


🟢 A new exception `JsonRpcExceedLimitException` was added in this PR, but it is not being used. Instead, a generic `JsonRpcInvalidParamsException` is thrown when the block filter limit is exceeded. To provide a more specific error to the API user, consider using the new exception here.
Suggested change
"exceed max block filters: " + maxBlockFilterNum + ", try again later");
if (blockFilter2Result.size() >= maxBlockFilterNum) {
throw new JsonRpcExceedLimitException(
"exceed max block filters: " + maxBlockFilterNum + ", try again later");
}

}

Comment on lines +1446 to 1447


🟡 You've created a specific exception JsonRpcExceedLimitException, but here you are throwing a generic JsonRpcInvalidParamsException. It would be more consistent to use the new exception.

Suggested change
}
throw new JsonRpcExceedLimitException(
"exceed max block filters: " + maxBlockFilterNum + ", try again later");

BlockFilterAndResult filterAndResult = new BlockFilterAndResult();
String filterID = generateFilterId();
@@ -1529,6 +1573,7 @@ public static Object[] getFilterResult(String filterId, Map<String, BlockFilterA


🟡 The blockHashCache also holds memory resources. Although expireAfterWrite provides eventual cleanup, explicitly calling blockHashCache.invalidateAll() alongside logElementCache.invalidateAll() in the close() method would ensure immediate and deterministic resource release upon shutdown, improving lifecycle safety.

Suggested change
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName);


🟠 blockHashCache is not explicitly invalidated in TronJsonRpcImpl.java::close(). For consistency and to prevent potential subtle resource leaks in scenarios involving classloader resets or dynamic class reloading, all static cache resources should be explicitly invalidated during shutdown.

Suggested change
logElementCache.invalidateAll();
blockHashCache.invalidateAll();

@Override


🟡 It seems like `blockHashCache` is not invalidated in the `close()` method, which could lead to a memory leak. You should invalidate it along with `logElementCache`.
Suggested change
@Override
logElementCache.invalidateAll();
blockHashCache.invalidateAll();

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟢 For consistency and to ensure all cache resources are released upon closing, you should also invalidate blockHashCache.

Suggested change
@Override
logElementCache.invalidateAll();
blockHashCache.invalidateAll();


🟠 High - blockHashCache is not explicitly invalidated during close(). Although expireAfterWrite provides some cleanup, explicit invalidation on shutdown is a safer practice for resource management.

Suggested change
@Override
logElementCache.invalidateAll();
blockHashCache.invalidateAll();

public void close() throws IOException {
logElementCache.invalidateAll();

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟢 For consistency and to ensure all resources are released, consider invalidating the blockHashCache here as well.

Suggested change
logElementCache.invalidateAll();
logElementCache.invalidateAll();
blockHashCache.invalidateAll();

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🟡 The newly introduced `blockHashCache` is not invalidated in the `close()` method, unlike `logElementCache`. While this might not be a critical issue if `TronJsonRpcImpl` is a singleton, it's a good practice to clean up all resources consistently. Please add `blockHashCache.invalidateAll()` to the `close()` method to prevent potential memory leaks in scenarios where the object might be re-initialized.
Suggested change
logElementCache.invalidateAll();
public void close() throws IOException {
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName);
}

ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName);
}
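The shared thrust of these comments can be sketched as a `close()` that clears every cache it owns before stopping its executor (hypothetical stand-ins below; the real class uses Guava caches and `ExecutorServiceManager`):

```java
import java.io.Closeable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical resource holder mirroring the review suggestion.
class RpcResources implements Closeable {
    // Stand-ins for logElementCache and blockHashCache.
    final Map<Object, Object> logElementCache = new ConcurrentHashMap<>();
    final Map<Object, Object> blockHashCache = new ConcurrentHashMap<>();
    private final ExecutorService sectionExecutor = Executors.newSingleThreadExecutor();

    @Override
    public void close() {
        // Invalidate every cache explicitly rather than relying on
        // time-based expiry, so shutdown is deterministic.
        logElementCache.clear();
        blockHashCache.clear();
        sectionExecutor.shutdown();
        try {
            sectionExecutor.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Releasing all owned state in one place keeps the shutdown path symmetric with initialization and avoids leaks if the object is ever re-created.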

1 change: 1 addition & 0 deletions framework/src/main/resources/config-localtest.conf
@@ -163,6 +163,7 @@ node {
# httpPBFTPort = 8565
# maxBlockRange = 5000
# maxSubTopics = 1000
# maxBlockFilterNum = 30000
}

}
2 changes: 2 additions & 0 deletions framework/src/main/resources/config.conf
@@ -375,6 +375,8 @@ node {
# The maximum number of allowed topics within a topic criteria, default value is 1000,
# should be > 0, otherwise means no limit.
maxSubTopics = 1000
# The maximum number of allowed block filters
maxBlockFilterNum = 50000
}

# Disabled api list, it will work for http, rpc and pbft, both FullNode and SolidityNode,
@@ -4,6 +4,7 @@

import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import org.junit.Assert;
import org.junit.Test;
import org.tron.common.runtime.vm.DataWord;
@@ -220,6 +221,18 @@ public void testMatchBlock() {
List<LogFilterElement> elementList =
matchBlock(logFilter, 100, null, transactionInfoList, false);
Assert.assertEquals(1, elementList.size());

// test LogFilterElement equals() and hashCode()
List<LogFilterElement> elementList2 =
matchBlock(logFilter, 100, null, transactionInfoList, false);
Assert.assertEquals(1, elementList2.size());

LogFilterElement logFilterElement1 = elementList.get(0);
LogFilterElement logFilterElement2 = elementList2.get(0);

Assert.assertEquals(logFilterElement1.hashCode(), logFilterElement2.hashCode());
Assert.assertEquals(logFilterElement1, logFilterElement2);

} catch (JsonRpcInvalidParamsException e) {
Assert.fail();
}
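The new assertions exercise the `equals()`/`hashCode()` contract that the element cache relies on; a minimal, hypothetical value class shows the contract being checked:

```java
import java.util.Objects;

// Hypothetical value class, not the real LogFilterElement.
class Element {
    private final String txId;
    private final long blockNumber;

    Element(String txId, long blockNumber) {
        this.txId = txId;
        this.blockNumber = blockNumber;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof Element)) {
            return false;
        }
        Element e = (Element) o;
        return blockNumber == e.blockNumber && Objects.equals(txId, e.txId);
    }

    @Override
    public int hashCode() {
        // Equal objects must produce equal hash codes, or cache lookups
        // keyed on the element will miss and defeat deduplication.
        return Objects.hash(txId, blockNumber);
    }
}
```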