
Implement reverse performance optimization#4775

Open
ahkcs wants to merge 25 commits into opensearch-project:main from ahkcs:feat/reverse_optimization

Conversation


@ahkcs ahkcs commented Nov 10, 2025

Description

Originally from #4056 by @selsong

This PR implements a significant performance optimization for the reverse command by eliminating the expensive ROW_NUMBER() window function and replacing it with three-tier logic based on query context.

Resolves #3924
Resolves #5125

Motivation

The previous implementation used the ROW_NUMBER() window function, which:

  • Required materializing the entire dataset
  • Caused excessive memory usage
  • Failed on large datasets (100M+ records) with "insufficient resources" errors

Solution: Three-Tier Reverse Logic

The reverse command now follows context-aware behavior:

  1. With existing sort/collation: Reverses all sort directions (ASC ↔ DESC)
  2. With @timestamp field (no explicit sort): Sorts by @timestamp in descending order
  3. Without sort or @timestamp: The command is ignored (no-op)
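The tier selection can be sketched as a small decision function. This is an illustrative model only; ReverseTier and ReverseAction are hypothetical names, not the actual CalciteRelNodeVisitor code:

```java
import java.util.List;

// Illustrative sketch of the three-tier decision; ReverseTier/ReverseAction are
// hypothetical names, not the actual CalciteRelNodeVisitor implementation.
enum ReverseAction { FLIP_SORT, SORT_TIMESTAMP_DESC, NO_OP }

final class ReverseTier {
  static ReverseAction decide(boolean hasReversibleCollation, List<String> fieldNames) {
    if (hasReversibleCollation) {
      return ReverseAction.FLIP_SORT; // tier 1: flip ASC <-> DESC on the existing sort
    }
    if (fieldNames.contains("@timestamp")) {
      return ReverseAction.SORT_TIMESTAMP_DESC; // tier 2: newest events first
    }
    return ReverseAction.NO_OP; // tier 3: reverse has no meaningful interpretation
  }
}
```

The key property is that the tiers are checked in order, so an explicit sort always wins over the @timestamp fallback.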

Key Implementation Details

  • In-place sort replacement: When reverse is applied directly on top of a sort, the sort node is replaced in-place using PlanUtils.replaceTop() rather than stacking a new sort. This avoids a Calcite optimizer issue where consecutive LogicalSort nodes get merged, discarding the reversed direction (Reverse Optimization fails when creating consecutive sorts due to Calcite physical optimizer merging #5125).
  • Backtracking through non-blocking operators: When reverse is separated from the sort by non-blocking operators (where, eval, fields), the implementation walks the RelNode tree to find the upstream sort and inserts a reversed sort via RelHomogeneousShuttle.
  • Blocking operator detection: Aggregation, Join, SetOp, Uncollect, and window-function Projects are recognized as blocking operators that destroy collation — reverse will not backtrack past these.
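The backtracking rule can be illustrated by walking operator kinds from the reverse toward the scan, stopping at the first sort or blocking operator. This is a simplified stand-in for the RelNode tree walk, not the actual RelHomogeneousShuttle implementation:

```java
import java.util.List;

// Simplified model of backtracking: operator kinds stand in for RelNode classes.
// The pipeline is ordered from the node just below reverse down toward the scan.
enum Kind { SORT, FILTER, PROJECT, AGGREGATE, JOIN, SET_OP, UNCOLLECT }

final class Backtrack {
  static final List<Kind> BLOCKING =
      List.of(Kind.AGGREGATE, Kind.JOIN, Kind.SET_OP, Kind.UNCOLLECT);

  /** Returns the index of the upstream sort, or -1 if a blocking operator intervenes. */
  static int findUpstreamSort(List<Kind> pipeline) {
    for (int i = 0; i < pipeline.size(); i++) {
      Kind k = pipeline.get(i);
      if (k == Kind.SORT) {
        return i; // found a reversible collation source
      }
      if (BLOCKING.contains(k)) {
        return -1; // collation destroyed; do not backtrack past this operator
      }
      // FILTER/PROJECT (where, eval, fields) preserve order: keep walking
    }
    return -1; // no sort anywhere upstream
  }
}
```

For `sort age | where age > 20 | reverse`, the pipeline seen from reverse is [FILTER, SORT], so the sort is found; if an aggregation sits between them, the walk stops and reverse falls through to the other tiers.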

Examples

1. Reverse with Explicit Sort (Primary Use Case)

Query:

source=accounts | sort +balance, -firstname | reverse

Behavior: Flips all sort directions: +balance, -firstname → -balance, +firstname

Logical Plan:

LogicalSystemLimit(sort0=[$3], sort1=[$1], dir0=[DESC-nulls-last], dir1=[ASC-nulls-first], fetch=[10000], type=[QUERY_SIZE_LIMIT])
  LogicalProject(account_number=[$0], firstname=[$1], ...)
    LogicalSort(sort0=[$3], sort1=[$1], dir0=[DESC-nulls-last], dir1=[ASC-nulls-first])
      CalciteLogicalIndexScan(table=[[OpenSearch, accounts]])

Physical Plan: (efficiently pushes reversed sort to OpenSearch)

CalciteEnumerableIndexScan(table=[[OpenSearch, accounts]],
  PushDownContext=[[..., SORT->[
    {"balance": {"order": "desc", "missing": "_last"}},
    {"firstname.keyword": {"order": "asc", "missing": "_first"}}
  ], LIMIT->10000]])

2. Reverse with @timestamp (Time-Series Optimization)

Query:

source=time_series_logs | reverse | head 100

Behavior: When no explicit sort exists but the index has an @timestamp field, reverse automatically sorts by @timestamp DESC to show most recent events first.

Use Case: A common pattern in log analysis; users want the most recent logs first.

Logical Plan:

LogicalSystemLimit(sort0=[$0], dir0=[DESC], fetch=[10000], type=[QUERY_SIZE_LIMIT])
  LogicalProject(@timestamp=[$0], category=[$1], value=[$2])
    LogicalSort(sort0=[$0], dir0=[DESC])
      CalciteLogicalIndexScan(table=[[OpenSearch, time_data]])

3. Reverse Ignored (No-Op Case)

Query:

source=accounts | reverse | head 100

Behavior: When there's no explicit sort AND no @timestamp field, reverse is ignored. Results appear in natural index order.

Rationale: Avoid expensive operations when reverse has no meaningful semantic interpretation.

Logical Plan:

LogicalSystemLimit(fetch=[10000], type=[QUERY_SIZE_LIMIT])
  LogicalProject(account_number=[$0], firstname=[$1], ...)
    CalciteLogicalIndexScan(table=[[OpenSearch, accounts]])

Note: No sort node is added; reverse is completely ignored.


4. Double Reverse (Cancellation)

Query:

source=accounts | sort +balance, -firstname | reverse | reverse

Behavior: Two reverses cancel each other out, returning to original sort order.

Logical Plan:

LogicalSystemLimit(sort0=[$3], sort1=[$1], dir0=[ASC-nulls-first], dir1=[DESC-nulls-last], fetch=[10000])
  LogicalProject(account_number=[$0], firstname=[$1], ...)
    LogicalSort(sort0=[$3], sort1=[$1], dir0=[ASC-nulls-first], dir1=[DESC-nulls-last])
      CalciteLogicalIndexScan(table=[[OpenSearch, accounts]])

Final sort order matches original query: +balance, -firstname


5. Backtracking Through Non-Blocking Operators

Query:

source=accounts | sort age | where age > 20 | reverse

Behavior: Reverse walks past the where (filter) node to find and reverse the upstream sort.

Logical Plan:

LogicalSystemLimit(sort0=[$8], dir0=[DESC-nulls-last], fetch=[10000])
  LogicalProject(account_number=[$0], firstname=[$1], ...)
    LogicalSort(sort0=[$8], dir0=[DESC-nulls-last])
      LogicalSort(sort0=[$8], dir0=[ASC-nulls-first])
        LogicalFilter(condition=[>($8, 20)])
          CalciteLogicalIndexScan(table=[[OpenSearch, accounts]])

6. Reverse After Aggregation (Blocking Operator)

Query:

source=accounts | stats count() as cnt by gender | sort cnt | reverse

Behavior: Aggregation is a blocking operator. The sort after aggregation is visible to reverse and gets flipped.


7. Timechart with Reverse

Query:

source=web_logs | timechart count() as cnt | reverse

Behavior: Timechart produces a time-bucketed aggregation with an implicit sort. Reverse flips the time order from ASC to DESC.

Related Issues

Resolves #3924
Resolves #5125

Check List

  • New functionality includes testing.
  • New functionality has been documented.
    • New functionality has javadoc added.
    • New functionality has a user manual doc added.
  • API changes companion pull request created.
  • Commits are signed per the DCO using --signoff.
  • Public documentation issue/PR created.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.


@dai-chen dai-chen left a comment


QQ: I recall the major comment on original PR is early optimization in analyzer layer. Is this new PR trying to address the concern? Ref: #4056 (comment)


ahkcs commented Nov 11, 2025

QQ: I recall the major comment on original PR is early optimization in analyzer layer. Is this new PR trying to address the concern? Ref: #4056 (comment)

Hi Chen, I think that's a valid concern. However, after trying it out, I found it adds significant complexity compared to the current approach. CalciteRelNodeVisitor is used as a logical plan builder that constructs the logical representation of the query, so I think optimization can also happen here. In our approach, visitReverse chooses between LogicalSort(reversed) and LogicalSort(ROW_NUMBER), which I think is appropriate for a logical plan builder. If we moved the optimization to a Calcite rule, we'd be doing something more complex: starting with a naive representation (always ROW_NUMBER) and rewriting it. That adds significant complexity.

@ahkcs ahkcs requested a review from dai-chen November 11, 2025 22:29

@noCharger noCharger left a comment


Can you add benchmark results comparing before vs. after?

ahkcs added 11 commits February 6, 2026 14:33
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>

# Conflicts:
#	core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>

# Conflicts:
#	docs/user/ppl/cmd/reverse.md
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>
@ahkcs ahkcs force-pushed the feat/reverse_optimization branch from dd9f4d6 to 74d5997 on February 6, 2026 22:41

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java`:
- Around line 689-816: backtrackForCollation is allowing traversal through
limit-like Sorts (LogicalSystemLimit and Sorts with non-null fetch/offset), so
insertReversedSortInTree ends up placing the reversed sort under the limit and
changing semantics; update backtrackForCollation to treat LogicalSystemLimit and
org.apache.calcite.rel.core.Sort nodes that have non-null fetch/offset as
blocking operators (return null when encountered) and do not traverse past them,
and then adjust visitReverse to, when a limit/barrier is encountered, obtain the
barrier's collation (from the limit/sort itself), compute the reversed collation
and apply the reversed sort on top of that barrier (i.e., wrap the Limit/Sort
with the reversed sort using context.relBuilder.sort or by creating a
LogicalSort above the visited limit) instead of calling insertReversedSortInTree
to insert below the barrier; keep references to backtrackForCollation,
LogicalSystemLimit, org.apache.calcite.rel.core.Sort (fetch/offset),
insertReversedSortInTree, and visitReverse to locate the changes.

In `@docs/user/ppl/cmd/reverse.md`:
- Line 6: Update the warning for the reverse command to clarify its optimized
behavior: state that reverse may be pushed down or be a no-op when a reversible
collation exists, and that the heavy full-dataset materialization and high
coordinating-node memory usage only occur when no reversible collation is
available and pushdown cannot be applied; mention the terms "reverse",
"reversible collation", "pushdown", and "no-op" so readers can relate this to
the optimization section.
- Around line 209-218: Example 8 in the reverse command docs uses
source=time_test_data which is undocumented; update the example to use the
documented sample index by replacing source=time_test_data with source=accounts
in the Example 8 block (the code block that begins with "source=time_test_data"
and includes the reverse and head 5 pipeline) so it matches the other examples
that use the accounts sample index.
🧹 Nitpick comments (1)
integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java (1)

359-373: Consider asserting order since an explicit sort makes it deterministic.
Using verifyDataRows (unordered) doesn’t validate the reverse ordering. With a post-aggregation sort, order should be stable; consider verifyDataRowsInOrder (or add a deterministic tiebreaker).

💡 Suggested test assertion change
-    verifyDataRows(result, rows(4, "M"), rows(3, "F"));
+    verifyDataRowsInOrder(result, rows(4, "M"), rows(3, "F"));

@@ -5,6 +5,16 @@ The `reverse` command reverses the display order of the search results. It retur

> **Note**: The `reverse` command processes the entire dataset. If applied directly to millions of records, it consumes significant coordinating node memory resources. Only apply the `reverse` command to smaller datasets, typically after aggregation operations.

⚠️ Potential issue | 🟡 Minor

Update the warning note to reflect the optimized behavior.

The note says reverse always processes the entire dataset and consumes significant memory, but the new optimization section says it can be pushed down or even be a no‑op. This is now misleading for readers. Consider rephrasing to clarify that the heavy, full‑materialization behavior only applies when no reversible collation exists and pushdown can’t be applied.

📝 Possible rewording
-> **Note**: The `reverse` command processes the entire dataset. If applied directly to millions of records, it consumes significant coordinating node memory resources. Only apply the `reverse` command to smaller datasets, typically after aggregation operations.
+> **Note**: The `reverse` command can be optimized by reversing an existing sort or by using `@timestamp` when available. If no reversible collation is found and no `@timestamp` exists, `reverse` is a no-op. When `reverse` cannot be pushed down and must materialize results, it can consume significant coordinating node memory resources. Prefer using it after aggregation or with an explicit sort.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
> **Note**: The `reverse` command processes the entire dataset. If applied directly to millions of records, it consumes significant coordinating node memory resources. Only apply the `reverse` command to smaller datasets, typically after aggregation operations.
> **Note**: The `reverse` command can be optimized by reversing an existing sort or by using `@timestamp` when available. If no reversible collation is found and no `@timestamp` exists, `reverse` is a no-op. When `reverse` cannot be pushed down and must materialize results, it can consume significant coordinating node memory resources. Prefer using it after aggregation or with an explicit sort.
🤖 Prompt for AI Agents
In `@docs/user/ppl/cmd/reverse.md` at line 6, Update the warning for the reverse
command to clarify its optimized behavior: state that reverse may be pushed down
or be a no-op when a reversible collation exists, and that the heavy
full-dataset materialization and high coordinating-node memory usage only occur
when no reversible collation is available and pushdown cannot be applied;
mention the terms "reverse", "reversible collation", "pushdown", and "no-op" so
readers can relate this to the optimization section.

Comment on lines +209 to +218
## Example 8: Reverse with @timestamp field

When no explicit sort exists but the data source has an `@timestamp` field, `reverse` sorts by `@timestamp` in descending order:

```ppl
source=time_test_data
| fields value, category, `@timestamp`
| reverse
| head 5
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for the referenced indices and sample data documentation
echo "=== Searching for time_test_data index references ==="
rg -n 'time_test_data' docs --type md

echo -e "\n=== Searching for events index references ==="
rg -n 'source=events|index=events|"events"' docs --type md

echo -e "\n=== Searching for sample datasets documentation ==="
rg -n -i 'sample.*data|sample.*dataset|test.*data|fixture' docs --type md | head -30

echo -e "\n=== Check if there is any samples or test data documentation ==="
fd 'sample|fixture|data' docs --type f | head -20

Repository: opensearch-project/sql

Length of output: 6551


🏁 Script executed:

#!/bin/bash
# Check the reverse.md file for any sample data documentation or intro
echo "=== reverse.md structure and content around examples ==="
head -50 docs/user/ppl/cmd/reverse.md

echo -e "\n=== Check if there's an intro section defining sample data ==="
rg -B5 -A5 'time_test_data' docs/user/ppl/cmd/reverse.md

echo -e "\n=== Check timechart.md for events documentation ==="
head -100 docs/user/ppl/cmd/timechart.md | tail -50

echo -e "\n=== Search for sample data setup documentation ==="
rg -n -i 'sample.*dataset|how.*to.*use.*example|sample.*index' docs --type md | head -20

echo -e "\n=== Check for getting started or setup docs ==="
fd -e md docs | xargs grep -l -i 'sample.*data\|test.*index\|setup\|getting.*started' | head -10

Repository: opensearch-project/sql

Length of output: 5737


🏁 Script executed:

#!/bin/bash
# Check PPL documentation structure for intro/setup
echo "=== PPL docs structure ==="
ls -la docs/user/ppl/

echo -e "\n=== Check PPL intro/index page ==="
fd index.md docs/user/ppl --exec cat {} \;

echo -e "\n=== Look for reference to sample indices in PPL docs ==="
rg -n 'accounts|events' docs/user/ppl/general --type md

echo -e "\n=== Check example 13 in reverse.md ==="
sed -n '330,340p' docs/user/ppl/cmd/reverse.md

echo -e "\n=== Search for accounts index documentation ==="
rg -n 'source=accounts' docs --type md

echo -e "\n=== Check if there's a README or intro for PPL ==="
cat docs/user/ppl/README.md 2>/dev/null || echo "No README.md found"

Repository: opensearch-project/sql

Length of output: 37180


Replace time_test_data with a documented sample index.

Example 8 uses source=time_test_data, which has no references elsewhere in the documentation. All other examples in this file use source=accounts, which is the standard documented sample index across PPL documentation. Update Example 8 to use source=accounts for consistency. Example 13's use of source=events is already established in other command documentation (timechart.md, etc.).

🤖 Prompt for AI Agents
In `@docs/user/ppl/cmd/reverse.md` around lines 209 - 218, Example 8 in the
reverse command docs uses source=time_test_data which is undocumented; update
the example to use the documented sample index by replacing
source=time_test_data with source=accounts in the Example 8 block (the code
block that begins with "source=time_test_data" and includes the reverse and head
5 pipeline) so it matches the other examples that use the accounts sample index.


ahkcs commented Feb 6, 2026

There's a Calcite physical optimizer merging issue.
Created GitHub issue #5125 with the details.

Signed-off-by: Kai Huang <ahkcs@amazon.com>

ahkcs commented Feb 6, 2026

@songkant-aws Updated PR with a new approach to resolve the issue mentioned, details in the PR description


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java`:
- Around line 749-775: The current shuttle in insertReversedSortInTree inserts
the reversed LogicalSort immediately after the first Sort with collation, which
can place it below a limit barrier and change semantics; update the shuttle to
treat a Sort with getFetch() != null or getOffset() != null as a barrier and,
when such a barrier is encountered during traversal, wrap that barrier (the
visited barrier node) with the reversed sort instead of inserting below the
downstream Sort. Concretely, inside the anonymous RelHomogeneousShuttle in
insertReversedSortInTree, detect org.apache.calcite.rel.core.Sort where
getFetch()!=null || getOffset()!=null and, upon visiting it, if the downstream
collation Sort was found during traversal, return
org.apache.calcite.rel.logical.LogicalSort.create(visitedBarrier,
reversedCollation, null, null) so the reversed sort sits above the limit; keep
using sortFound to avoid multiple inserts and ensure you still recurse via
super.visit(other) to obtain visited children before wrapping.
- Around line 781-806: The current in-place replacement uses
MetadataQuery.collations() (which can return input-propagated collation) and
then replaces a Sort node including its fetch, which breaks fetch-only Sort
semantics; change the logic in CalciteRelNodeVisitor so that when currentNode is
an instance of org.apache.calcite.rel.core.Sort you first inspect
existingSort.getCollation(): if existingSort.getCollation() is non-empty,
compute reversedCollation from that and do the in-place replacement via
PlanUtils.replaceTop as before; but if existingSort.getCollation() is null or
empty (fetch-only Sort), do NOT replaceTop with a new LogicalSort containing
fetch — instead compute the reversed collation from metadata (if needed) but
call context.relBuilder.sort(reversedCollation) to add a separate Sort after the
limit so the "limit then reverse" semantics are preserved.

In
`@integ-test/src/test/java/org/opensearch/sql/calcite/CalciteNoPushdownIT.java`:
- Around line 21-111: Update the CalciteNoPushdownIT class javadoc to reflect
the actual suite contents (only CalciteExplainIT is enabled) and add an inline
code comment in the class (near the `@Suite.SuiteClasses` annotation) explaining
why the remaining tests are commented out or intentionally excluded (e.g.,
resource/time constraints, flaky tests, or strategic coverage choice),
referencing the class name CalciteNoPushdownIT and the enabled test symbol
CalciteExplainIT.class so maintainers can understand the reduced scope and
rationale.

In
`@integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java`:
- Around line 288-313: The top comment in the testStreamstatsByWithReverse
method is incorrect: replace the line that says "Test that reverse is ignored
after streamstats with partitioning (by clause)" with a brief description that
reverse is effective when backtracking is enabled (i.e., streamstats with
partitioning supports reversing the __stream_seq__ order), so the test's
expectations (reversed rows) match the comment; update only the comment in
testStreamstatsByWithReverse to reflect that behavior.

In `@ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLReverseTest.java`:
- Around line 214-235: Add a new unit test method named testSortHeadReverse that
constructs the PPL "source=EMP | sort SAL | head 5 | reverse", calls
getRelNode(ppl), and asserts the logical plan via verifyLogical(root,
expectedLogical) where expectedLogical represents a top-level DESC reverse sort
above an ASC sort+fetch (e.g., LogicalSort(sort0=[$5], dir0=[DESC-nulls-last])
above LogicalSort(sort0=[$5], dir0=[ASC-nulls-first], fetch=[5]) above
LogicalTableScan(...)); this ensures the reverse is applied after the limit and
prevents the backtracking bug.
🧹 Nitpick comments (1)
integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java (1)

181-192: Clarify the "natural order" assumption in the comment.

Line 190 states "Without sort or @timestamp, reverse is ignored, so data comes in natural order" and line 191 claims this is "ascending by account_number". OpenSearch's natural order is not guaranteed to be by account_number. Consider updating the comment to clarify this is the expected order for the specific test data loading, or use verifyDataRows (unordered) if the specific order isn't critical to the test's purpose.

ahkcs added 2 commits February 6, 2026 15:38
Signed-off-by: Kai Huang <ahkcs@amazon.com>
Signed-off-by: Kai Huang <ahkcs@amazon.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLTimechartTest.java`:
- Around line 365-419: Add null/boundary/error tests in CalcitePPLTimechartTest
to cover reverse+timechart edge cases: add a test method (e.g.,
testTimechartWithReverse_NullTimefield) that runs "source=events | timechart
timefield=@timestamp span=1m count() | reverse" on data where `@timestamp` can be
NULL and verify the logical plan and SQL via getRelNode, verifyLogical and
verifyPPLToSparkSQL include proper IS NULL handling or expected ordering; add a
boundary/empty-input test (e.g., testTimechartWithReverse_EmptySource) that runs
the pipeline on an empty source and asserts the plan still generates a valid
aggregate/sort; and add an invalid-input test (e.g.,
testTimechartWithReverse_InvalidTimefield) that calls getRelNode with a
nonexistent timefield and asserts the parser/optimizer throws the expected
exception using assertThrows (or the project's error-assertion helper). Ensure
each new test references the same helper methods (getRelNode, verifyLogical,
verifyPPLToSparkSQL) and asserts the specific failure message or null-handling
behavior.

Comment on lines +365 to +419
// ==================== Timechart with Reverse tests ====================
// These tests verify that reverse works correctly with timechart.
// Timechart always adds a sort at the end of its plan, so reverse will
// find the collation via metadata query (tier 1) and flip the sort direction.

@Test
public void testTimechartWithReverse() {
// Timechart adds ORDER BY @timestamp ASC at the end
// Reverse should flip it to DESC
String ppl = "source=events | timechart count() | reverse";
RelNode root = getRelNode(ppl);
// Reverse replaces the timechart's ASC sort in-place with DESC
String expectedLogical =
"LogicalSort(sort0=[$0], dir0=[DESC])\n"
+ " LogicalProject(@timestamp=[$0], count()=[$1])\n"
+ " LogicalAggregate(group=[{0}], count()=[COUNT()])\n"
+ " LogicalProject(@timestamp0=[SPAN($0, 1, 'm')])\n"
+ " LogicalFilter(condition=[IS NOT NULL($0)])\n"
+ " LogicalTableScan(table=[[scott, events]])\n";
verifyLogical(root, expectedLogical);

String expectedSparkSql =
"SELECT SPAN(`@timestamp`, 1, 'm') `@timestamp`, COUNT(*) `count()`\n"
+ "FROM `scott`.`events`\n"
+ "WHERE `@timestamp` IS NOT NULL\n"
+ "GROUP BY SPAN(`@timestamp`, 1, 'm')\n"
+ "ORDER BY 1 DESC NULLS FIRST";
verifyPPLToSparkSQL(root, expectedSparkSql);
}

@Test
public void testTimechartWithCustomTimefieldAndReverse() {
// Timechart with custom timefield should also work with reverse
// The sort is on created_at (the custom field), not @timestamp
String ppl = "source=events | timechart timefield=created_at span=1month count() | reverse";
RelNode root = getRelNode(ppl);

// Reverse replaces the timechart's ASC sort in-place with DESC
String expectedLogical =
"LogicalSort(sort0=[$0], dir0=[DESC])\n"
+ " LogicalProject(created_at=[$0], count()=[$1])\n"
+ " LogicalAggregate(group=[{0}], count()=[COUNT()])\n"
+ " LogicalProject(created_at0=[SPAN($1, 1, 'M')])\n"
+ " LogicalFilter(condition=[IS NOT NULL($1)])\n"
+ " LogicalTableScan(table=[[scott, events]])\n";
verifyLogical(root, expectedLogical);

String expectedSparkSql =
"SELECT SPAN(`created_at`, 1, 'M') `created_at`, COUNT(*) `count()`\n"
+ "FROM `scott`.`events`\n"
+ "WHERE `created_at` IS NOT NULL\n"
+ "GROUP BY SPAN(`created_at`, 1, 'M')\n"
+ "ORDER BY 1 DESC NULLS FIRST";
verifyPPLToSparkSQL(root, expectedSparkSql);
}

⚠️ Potential issue | 🟡 Minor

Add null/boundary/error coverage for the new reverse+timechart paths.

These new tests cover the happy path, but the test guidelines require null-input, boundary, and invalid-input coverage for new functionality. Please add at least one case (e.g., NULL timefield values and an invalid timefield) to satisfy those requirements. As per coding guidelines: "NULL input tests must be included for all new functions", "Include boundary condition tests (min/max values, empty inputs) for all new functions", and "Include error condition tests (invalid inputs, exceptions) for all new functions".

🤖 Prompt for AI Agents
In
`@ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLTimechartTest.java`
around lines 365 - 419, Add null/boundary/error tests in CalcitePPLTimechartTest
to cover reverse+timechart edge cases: add a test method (e.g.,
testTimechartWithReverse_NullTimefield) that runs "source=events | timechart
timefield=@timestamp span=1m count() | reverse" on data where `@timestamp` can be
NULL and verify the logical plan and SQL via getRelNode, verifyLogical and
verifyPPLToSparkSQL include proper IS NULL handling or expected ordering; add a
boundary/empty-input test (e.g., testTimechartWithReverse_EmptySource) that runs
the pipeline on an empty source and asserts the plan still generates a valid
aggregate/sort; and add an invalid-input test (e.g.,
testTimechartWithReverse_InvalidTimefield) that calls getRelNode with a
nonexistent timefield and asserts the parser/optimizer throws the expected
exception using assertThrows (or the project's error-assertion helper). Ensure
each new test references the same helper methods (getRelNode, verifyLogical,
verifyPPLToSparkSQL) and asserts the specific failure message or null-handling
behavior.

Signed-off-by: Kai Huang <ahkcs@amazon.com>

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java`:
- Around line 181-192: In testReverseIgnoredWithoutSortOrTimestamp, remove the
brittle order assertion by replacing the call to verifyDataRowsInOrder(result,
rows(1), rows(6), rows(13)) with an order-independent check: fetch the result
rows from executeQuery (same TEST_INDEX_BANK query) and assert that the set of
returned account_number values equals the expected set {1,6,13} (or compare the
result to a baseline executeQuery without the reverse pipe), using the test
helper that asserts unordered equality (or implement a small helper to compare
collections ignoring order) so the test no longer depends on natural execution
order; keep verifySchema as-is.
🧹 Nitpick comments (2)
ppl/src/test/java/org/opensearch/sql/ppl/calcite/CalcitePPLReverseTest.java (1)

252-257: Clarify comment: SAL is available during sorting, removed only in final output.

The comment states "The sort field (SAL) is removed from schema by fields command" which could be misleading. Looking at the logical plan, the LogicalProject(ENAME=[$1]) sits on top of both sorts, meaning SAL is fully available when sorting occurs. The projection only removes SAL from the final output, not from the sort's scope.

Consider rewording to clarify the behavior:

  `@Test`
  public void testSortFieldsReverse() {
-   // Test backtracking: sort on SAL, then project only ENAME, then reverse
-   // The sort field (SAL) is removed from schema by fields command
-   // But reverse should still work by backtracking to find the sort
+   // Test backtracking: sort on SAL, then project only ENAME, then reverse
+   // The sort happens before projection, so SAL is available for sorting
+   // Reverse backtracks through the project to find and reverse the sort
    String ppl = "source=EMP | sort SAL | fields ENAME | reverse";
core/src/main/java/org/opensearch/sql/calcite/CalciteRelNodeVisitor.java (1)

783-845: Method exceeds 50-line guideline; consider extracting helper methods.

The implementation is correct and properly handles all edge cases including the metadata propagation issue:

  • Lines 800-803 check existingSort.getCollation() (the node's own collation) rather than relying solely on metadata
  • The fetch == null && offset == null check ensures fetch-only Sorts go to the else branch, correctly adding the reversed sort on top

However, at ~62 lines this method exceeds the 50-line guideline for this file. Consider extracting the main branches into focused helpers:

♻️ Suggested structure
private void handleReverseSortInPlace(Sort existingSort, CalcitePlanContext context) {
  // Lines 804-811: Pure collation sort replacement
}

private void handleReverseViaBacktracking(RelNode currentNode, CalcitePlanContext context) {
  // Lines 820-842: Backtracking + `@timestamp` fallback
}

Comment on lines +181 to +192
@Test
public void testReverseIgnoredWithoutSortOrTimestamp() throws IOException {
  // Test that reverse is ignored when there's no explicit sort and no @timestamp field
  // BANK index doesn't have @timestamp, so reverse should be ignored
  JSONObject result =
      executeQuery(
          String.format("source=%s | fields account_number | reverse | head 3", TEST_INDEX_BANK));
  verifySchema(result, schema("account_number", "bigint"));
  // Without sort or @timestamp, reverse is ignored, so data comes in natural order
  // The first 3 documents in natural order (ascending by account_number)
  verifyDataRowsInOrder(result, rows(1), rows(6), rows(13));
}
Contributor

⚠️ Potential issue | 🟡 Minor

Avoid asserting natural order for the no-sort case.

Line 189–191 assumes a deterministic “natural order” without an explicit sort, which can be unstable and make the test flaky. Prefer an order-independent assertion (or compare to a baseline query without reverse) so the test doesn’t rely on execution order.

[suggested fix]

🔧 One option: remove head and assert unordered rows
-            String.format("source=%s | fields account_number | reverse | head 3", TEST_INDEX_BANK));
+            String.format("source=%s | fields account_number | reverse", TEST_INDEX_BANK));
 ...
-    // Without sort or `@timestamp`, reverse is ignored, so data comes in natural order
-    // The first 3 documents in natural order (ascending by account_number)
-    verifyDataRowsInOrder(result, rows(1), rows(6), rows(13));
+    // Without sort or `@timestamp`, reverse is ignored; assert content without order.
+    verifyDataRows(result, rows(1), rows(6), rows(13), rows(18), rows(20), rows(25), rows(32));

As per coding guidelines, “Tests must not rely on execution order; ensure test independence”.

🤖 Prompt for AI Agents
In
`@integ-test/src/test/java/org/opensearch/sql/calcite/remote/CalciteReverseCommandIT.java`
around lines 181 - 192, In testReverseIgnoredWithoutSortOrTimestamp, remove the
brittle order assertion by replacing the call to verifyDataRowsInOrder(result,
rows(1), rows(6), rows(13)) with an order-independent check: fetch the result
rows from executeQuery (same TEST_INDEX_BANK query) and assert that the set of
returned account_number values equals the expected set {1,6,13} (or compare the
result to a baseline executeQuery without the reverse pipe), using the test
helper that asserts unordered equality (or implement a small helper to compare
collections ignoring order) so the test no longer depends on natural execution
order; keep verifySchema as-is.
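A small helper along the lines the prompt suggests, comparing row collections as multisets so order does not matter, might look like this (`UnorderedRows` and `equalIgnoringOrder` are hypothetical names, not existing test helpers in the repository):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UnorderedRows {
  // Build a value -> count multiset so duplicate rows are handled correctly.
  static Map<Object, Integer> multiset(List<?> rows) {
    Map<Object, Integer> counts = new HashMap<>();
    for (Object row : rows) {
      counts.merge(row, 1, Integer::sum);
    }
    return counts;
  }

  // True when both lists contain the same rows, ignoring order.
  static boolean equalIgnoringOrder(List<?> expected, List<?> actual) {
    return multiset(expected).equals(multiset(actual));
  }

  public static void main(String[] args) {
    System.out.println(equalIgnoringOrder(List.of(1, 6, 13), List.of(13, 1, 6))); // prints "true"
    System.out.println(equalIgnoringOrder(List.of(1, 1, 6), List.of(1, 6, 6))); // prints "false"
  }
}
```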

Comment on lines 724 to 725
if (node instanceof org.apache.calcite.rel.core.Sort) {
  org.apache.calcite.rel.core.Sort sort = (org.apache.calcite.rel.core.Sort) node;
Contributor

Nit: Use imported class. -> node instanceof Sort

Contributor Author

Updated

private RelNode insertReversedSortInTree(
    RelNode root, RelCollation reversedCollation, CalcitePlanContext context) {
  return root.accept(
      new org.apache.calcite.rel.RelHomogeneousShuttle() {
Contributor

Nit: Same as above comment. Use imported class in this method

Contributor Author

Updated

Comment on lines 797 to 799
if (currentNode instanceof org.apache.calcite.rel.core.Sort) {
  org.apache.calcite.rel.core.Sort existingSort =
      (org.apache.calcite.rel.core.Sort) currentNode;
Contributor

Nit: Same here. Use imported class in this method

Contributor Author

Updated

Comment on lines 800 to 811
if (existingSort.getCollation() != null
    && !existingSort.getCollation().getFieldCollations().isEmpty()
    && existingSort.fetch == null
    && existingSort.offset == null) {
  // Pure collation sort (no fetch/offset) - replace in-place to avoid consecutive
  // sorts. Calcite's physical optimizer merges consecutive LogicalSort nodes and may
  // discard the reversed direction. Replacing in-place avoids this issue.
  RelCollation reversedFromSort = PlanUtils.reverseCollation(existingSort.getCollation());
  RelNode replacedSort =
      org.apache.calcite.rel.logical.LogicalSort.create(
          existingSort.getInput(), reversedFromSort, null, null);
  PlanUtils.replaceTop(context.relBuilder, replacedSort);
Contributor

@songkant-aws songkant-aws Feb 11, 2026

Is this observation correct? The logic seems odd, because it would mean an explicit query like source=t | sort -a | sort a is equivalent to source=t | sort -a. I don't think Calcite merges them. The worst case is a double sort when there is no pushdown; in the pushdown case the sort is pushed twice, and the top sort ultimately overrides the scan's trait set.

I checked out the PR in my local environment. When I remove this piece of code, explainIT still shows the expected collation in the physical plan.

Contributor Author

The in-place replacement is intentional and necessary. The issue is that explainIT only shows the logical plan before Calcite's physical optimizer runs. During actual execution, SortRemoveRule merges consecutive LogicalSort nodes and keeps the lower sort's direction, effectively discarding the reversed one. So for a query like sort a | reverse, the plan becomes Sort(ASC) → Sort(DESC), and the optimizer collapses it back to Sort(ASC).

I confirmed this by running the integration tests (CalciteReverseCommandIT) — they fail without the in-place replacement. The workaround replaces the existing Sort node directly instead of stacking a new one on top, so there are no consecutive sorts for the optimizer to merge.
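For intuition, the ASC ↔ DESC flip that PlanUtils.reverseCollation performs can be sketched without Calcite; `Direction` and `FieldCollation` below are hypothetical stand-ins for RelFieldCollation, not the real API:

```java
import java.util.List;

public class ReverseCollationSketch {
  enum Direction { ASC, DESC }

  // Hypothetical stand-in for Calcite's RelFieldCollation.
  record FieldCollation(int fieldIndex, Direction direction) {}

  // Flip every field's direction, which is what reversing a sort requires.
  static List<FieldCollation> reverse(List<FieldCollation> collation) {
    return collation.stream()
        .map(fc -> new FieldCollation(
            fc.fieldIndex(),
            fc.direction() == Direction.ASC ? Direction.DESC : Direction.ASC))
        .toList();
  }

  public static void main(String[] args) {
    List<FieldCollation> original =
        List.of(new FieldCollation(1, Direction.ASC), new FieldCollation(2, Direction.DESC));
    System.out.println(reverse(original));
    // prints "[FieldCollation[fieldIndex=1, direction=DESC], FieldCollation[fieldIndex=2, direction=ASC]]"
  }
}
```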

Contributor Author

A detailed description of this issue is in #5125.

Contributor

I think that conclusion is not correct. Explained plans in explainIT include both the original logical plan and the optimized physical plan. Calcite's VolcanoPlanner optimizes both logical and physical plans, and the final optimized physical plan generates compiled Java code to execute the query; rules never execute the query themselves.

SortRemoveRule replaces an explicit sort operator with a trait requirement on the child RelSubset. It doesn't do a simple merge; it assigns ordering traits to different RelSubsets, and VolcanoPlanner computes the cheapest plan across those RelSubsets. A simplified process: in the first round, SortRemoveRule rewrites Sort(1 Desc) - Sort(1 Asc) - RelSubset#0(NONE) => Sort(1 Desc) - RelSubset#1(1 Asc); in the second round, it rewrites Sort(1 Desc) - RelSubset#2(1 Asc) => RelSubset#3(1 Desc). The final plan must satisfy the (1 Desc) ordering requirement.

The actual root cause is a shallow-copy bug in AggPushDownAction. For the testReverseAfterAggregationWithSort failure you mentioned in the issue, the actual optimized physical plan is:

// query: source=%s | stats count() as c by gender | sort gender | reverse
calcite:
  logical: |
    LogicalSystemLimit(sort0=[$1], dir0=[DESC-nulls-last], fetch=[10000], type=[QUERY_SIZE_LIMIT])
      LogicalSort(sort0=[$1], dir0=[DESC-nulls-last])
        LogicalSort(sort0=[$1], dir0=[ASC-nulls-first])
          LogicalProject(c=[$1], gender=[$0])
            LogicalAggregate(group=[{0}], c=[COUNT()])
              LogicalProject(gender=[$4])
                CalciteLogicalIndexScan(table=[[OpenSearch, opensearch-sql_test_index_bank]])
  physical: |
    CalciteEnumerableIndexScan(table=[[OpenSearch, opensearch-sql_test_index_bank]], PushDownContext=[[AGGREGATION->rel#50:LogicalAggregate.NONE.[](input=RelSubset#49,group={0},c=COUNT()), PROJECT->[c, gender], SORT->[1 DESC LAST], LIMIT->10000], OpenSearchRequestBuilder(sourceBuilder={"from":0,"size":0,"timeout":"1m","aggregations":{"composite_buckets":{"composite":{"size":10000,"sources":[{"gender":{"terms":{"field":"gender.keyword","missing_bucket":true,"missing_order":"first","order":"asc"}}}]}}}}, requestedTotalSize=10000, pageSize=null, startFrom=0)])

You can see PushDownContext has the correct SORT action, namely SORT->[1 DESC LAST], which means SortRemoveRule did not pick ASC-nulls-first as the winner. The problem is that the aggregation request builder doesn't apply the correct ordering to the buckets. You can easily verify SortRemoveRule does nothing wrong by manually removing our own rules, such as SortIndexScanRule: testReverseAfterAggregationWithSort passes without SortIndexScanRule.

Regardless of the actual bug, I think your current approach still has value: collapsing consecutive sorts reduces the size of VolcanoPlanner's optimization space. You can keep the current design.

Comment on lines 768 to 774
// Found a collation Sort - insert reversed sort on top of it
if (sort.getCollation() != null
    && !sort.getCollation().getFieldCollations().isEmpty()) {
  sortFound = true;
  RelNode visitedSort = super.visit(other);
  return org.apache.calcite.rel.logical.LogicalSort.create(
      visitedSort, reversedCollation, null, null);
Contributor

If the sort-merge theory below is correct (I think it needs careful verification), this part should replace visitedSort with the reversed sort. The current behavior makes visitedSort the child of the reversed sort, which creates consecutive sorts.

Contributor Author

Good catch! I've updated this to replace the sort in-place: instead of super.visit(other) (which returns the Sort node and stacks on top), it now uses sort.getInput().accept(this) to visit only the children, then creates a new Sort with the reversed collation using those children as input. This is the same in-place replacement pattern used in visitReverse above.
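The difference between stacking and in-place replacement can be sketched on a toy plan tree; `Node`, `Sort`, and `Scan` below are hypothetical miniatures, not Calcite classes. Rebuilding the matched node with a new attribute keeps a single sort in the tree, while wrapping produces the consecutive-sorts shape the comment warns about:

```java
public class InPlaceRewrite {
  interface Node {}

  record Scan(String table) implements Node {}

  record Sort(Node input, boolean ascending) implements Node {}

  // In-place variant: rebuild the matched Sort with the reversed direction,
  // reusing its child as input -- no second Sort is stacked on top.
  static Node reverseSortInPlace(Node node) {
    if (node instanceof Sort sort) {
      return new Sort(sort.input(), !sort.ascending());
    }
    return node;
  }

  // Stacking variant: wraps the visited Sort in a second Sort, creating
  // consecutive sorts that an optimizer could collapse.
  static Node reverseSortByStacking(Node node) {
    if (node instanceof Sort sort) {
      return new Sort(sort, !sort.ascending());
    }
    return node;
  }

  public static void main(String[] args) {
    Node plan = new Sort(new Scan("EMP"), true);
    System.out.println(reverseSortInPlace(plan));
    // prints "Sort[input=Scan[table=EMP], ascending=false]"
    System.out.println(reverseSortByStacking(plan));
    // prints "Sort[input=Sort[input=Scan[table=EMP], ascending=true], ascending=false]"
  }
}
```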

Signed-off-by: Kai Huang <ahkcs@amazon.com>
Contributor

@songkant-aws songkant-aws left a comment

LGTM so far. Please let others review and approve.


Labels

enhancement New feature or request

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Reverse Optimization fails when creating consecutive sorts due to Calcite physical optimizer merging
[FEATURE] Support reverse pushdown with Calcite

7 participants