IGNITE-28305 Add backpressure for partition operations#7950

Open
EgorKuts wants to merge 1 commit into apache:main from EgorKuts:ignite-28305

Conversation


@EgorKuts EgorKuts commented Apr 7, 2026

Before this change, a node could accept an unbounded number of partition operations, which could lead to the following problems:

Long-queued operations no longer make sense to the client by the time they are executed, forcing the node to perform useless work before it can start handling new requests.

That useless work still consumes node resources (CPU, memory, threads).

Eventually this can lead to OOM.

To prevent such scenarios, this change introduces a node-level semaphore shared across the replica manager and thin-client connector that limits the total number of concurrent partition operations to

maxInFlightPartitionOperationsPerCore * availableProcessors
When the limit is reached, new requests are rejected immediately with ReplicaOverloadedException so clients can back off, while already accepted operations complete uninterrupted.

The limit is disabled by default (maxInFlightPartitionOperationsPerCore = 0). Operators can start tuning it around 512 per core and adjust based on observed heap usage and rejection rate under peak load.
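The described mechanism can be sketched with a plain java.util.concurrent.Semaphore. This is a minimal illustration of the behavior the PR describes, not the actual Ignite class; the class name here is illustrative.

```java
import java.util.concurrent.Semaphore;

/**
 * Sketch of a node-level in-flight limiter: permits =
 * maxInFlightPartitionOperationsPerCore * availableProcessors,
 * with 0 meaning the limit is disabled. Illustrative only.
 */
class InFlightLimiterSketch {
    private final Semaphore permits; // null when the limit is disabled

    InFlightLimiterSketch(int maxInFlightPerCore) {
        int cores = Runtime.getRuntime().availableProcessors();
        this.permits = maxInFlightPerCore > 0 ? new Semaphore(maxInFlightPerCore * cores) : null;
    }

    /** Returns {@code false} once the limit is reached; the caller should reject the request. */
    boolean tryAcquire() {
        return permits == null || permits.tryAcquire();
    }

    /** Must be called exactly once per successful {@link #tryAcquire()} when the operation completes. */
    void release() {
        if (permits != null) {
            permits.release();
        }
    }
}
```

With the default value of 0 the semaphore is never created, so tryAcquire() always admits the request and the limiter adds no overhead.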

https://issues.apache.org/jira/browse/IGNITE-28305

@EgorKuts EgorKuts force-pushed the ignite-28305 branch 7 times, most recently from 9140021 to 4fd15ab on April 10, 2026 at 11:25
@EgorKuts EgorKuts marked this pull request as ready for review April 10, 2026 15:25
* When positive, {@link #tryAcquire()} returns {@code false} once the limit is reached and the caller should reject the request.
* A permit must be released via {@link #release()} when the operation completes.
*/
public class PartitionOperationInFlightLimiter {

Suggested change
public class PartitionOperationInFlightLimiter {
public class PartitionOperationInflightLimiter {

/** Constructor. */
public ReplicaOverloadedException() {
super(GROUP_OVERLOADED_ERR, "Node is overloaded: max in-flight partition operations limit reached.");

A separate error code is needed: REPLICA_OVERLOADED_ERR

if (ClientOp.isPartitionOperation(opCode)) {
long requestId0 = requestId;
int opCode0 = opCode;
if (!partitionOperationInFlightLimiter.tryAcquire()) {
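The accept/reject/release pattern around an asynchronous partition operation can be sketched as follows. This is an illustration of the pattern only; the names (ConnectorSketch, processAsync) and the permit count are hypothetical, not the actual connector code.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

/** Illustrative sketch: reject immediately when no permit is available, otherwise release on completion. */
class ConnectorSketch {
    private final Semaphore permits = new Semaphore(4); // hypothetical limit

    CompletableFuture<String> handle(String request) {
        if (!permits.tryAcquire()) {
            // Reject immediately so the client can back off; no permit was taken.
            return CompletableFuture.failedFuture(new IllegalStateException("Node is overloaded"));
        }
        return processAsync(request)
                // Release exactly once, on both success and failure paths.
                .whenComplete((res, err) -> permits.release());
    }

    private CompletableFuture<String> processAsync(String request) {
        // Stand-in for the real partition operation.
        return CompletableFuture.completedFuture("done:" + request);
    }
}
```

Releasing in whenComplete ties the permit's lifetime to the operation's future, so already-accepted operations complete uninterrupted while new ones are rejected.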

This approach doesn't look workable to me.

  • It makes a single get operation and a write batch of a thousand keys equal in terms of backpressure.
  • It's not clear how a user should choose the proper "per-core" limit value. A small value will throttle too much, and a big value basically disables throttling.
  • It is disabled by default, which means a cluster is not protected from overloading by this mechanism.

We already have natural backpressure in the form of the max lock table size.
Probably this is enough.
I would start by writing tests that actually overload the cluster and figure out how to avoid this by introducing new natural backpressure metrics.
For example, the number of active transactions per node.

@ptupitsyn ptupitsyn changed the title ignite-28305 backpressure to limit in flight partition operations per… IGNITE-28305 Add backpressure for client partition operations Apr 14, 2026
@ptupitsyn ptupitsyn changed the title IGNITE-28305 Add backpressure for client partition operations IGNITE-28305 Add backpressure for partition operations Apr 14, 2026