
WIP: NETOBSERV-2660 Add server performance benchmark improvements#1444

Open
Amoghrd wants to merge 6 commits into netobserv:main from Amoghrd:server-perf-followup

Conversation


@Amoghrd Amoghrd commented Apr 20, 2026

Description

Implement future improvements listed in BENCHMARKS.md:

  • Add BenchmarkExport for CSV export flows endpoint
  • Add BenchmarkLargeResultSets to test 100, 1K, 5K, 10K records
  • Add BenchmarkFilterHeavy for complex filter combinations
  • Add BenchmarkConcurrentTableView and BenchmarkConcurrentTopologyMetrics for parallel load testing
  • Update BENCHMARKS.md documentation with new scenarios and coverage

All benchmarks run successfully individually or in small groups. The concurrent benchmarks are separated to avoid macOS port exhaustion when running the full suite.

Dependencies

n/a

Checklist

  • Do the changes in this PR need specific configuration or environment setup for testing?
    • If so, please describe it in the PR description.
  • I have added thorough unit tests for the change.
  • QE requirements (check 1 from the list):
    • Standard QE validation, with pre-merge tests unless stated otherwise.
    • Regression tests only (e.g. refactoring with no user-facing change).
    • No QE (e.g. trivial change with high reviewer's confidence, or per agreement with the QE team).


openshift-ci Bot commented Apr 20, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign stleerh for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Amoghrd Amoghrd added no-qe This PR doesn't necessitate QE approval no-doc This PR doesn't require documentation change on the NetObserv operator labels Apr 20, 2026

coderabbitai Bot commented Apr 20, 2026

Important

Review skipped

Ignore keyword(s) in the title.

⛔ Ignored keywords (1)
  • WIP

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f3245a8e-9af6-4779-916c-dcb4113edd62

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@Amoghrd Amoghrd force-pushed the server-perf-followup branch 2 times, most recently from af91892 to 8152d50 Compare April 20, 2026 20:25
Implement future improvements listed in BENCHMARKS.md with full coverage
across all NetObserv UI views (Traffic, Topology, Overview):

**Export Flows:**
- Add BenchmarkExport with 7 scenarios testing CSV export from all tabs
- Test filters, limits, column selection, and complex query combinations

**Filter-Heavy Queries:**
- BenchmarkFilterHeavyTableView: 1, 2, 4, 8 filters on flow records
- BenchmarkFilterHeavyTopology: 1, 2, 4, 8 filters on topology metrics
- BenchmarkFilterHeavyOverview: 1, 2, 4, 8 filters on overview metrics

**Large Result Sets:**
- BenchmarkLargeResultSets: Test with 100, 1K, 5K, 10K flow records

**Aggregation Levels:**
- BenchmarkTopologyAggregations: Test by namespace, app, resource, owner
- BenchmarkOverviewAggregations: Test namespace, app, and mixed aggregations

**Concurrent User Scenarios:**
- BenchmarkConcurrentTableView: Parallel load on table view
- BenchmarkConcurrentTopology: Parallel load on topology view
- BenchmarkConcurrentOverview: Parallel load on overview page

**Makefile Targets:**
- Add convenient make targets for specific benchmark groups
- Users can now run `make benchmark-export`, `make benchmark-filters`, etc.
- Simplifies running benchmarks without remembering go test flags

All benchmarks run successfully individually or in small groups. The
concurrent benchmarks are separated to avoid macOS port exhaustion when
running the full suite.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@Amoghrd Amoghrd force-pushed the server-perf-followup branch from 8152d50 to 324e90a Compare April 20, 2026 20:29
Amoghrd and others added 5 commits April 20, 2026 16:35
Split the 816-line server_perf_test.go into 4 focused files for better
organization and maintainability:

**server_perf_test.go** (175 lines):
- Shared helper functions (setupBenchmarkServers, setupBenchmarkServersWithSize)
- BenchmarkExport (used across all views)

**server_perf_table_test.go** (169 lines):
- BenchmarkTable
- BenchmarkLargeResultSets
- BenchmarkFilterHeavyTableView
- BenchmarkConcurrentTableView

**server_perf_topology_test.go** (202 lines):
- BenchmarkTopologyLoki
- BenchmarkTopologyAuto
- BenchmarkFilterHeavyTopology
- BenchmarkConcurrentTopology
- BenchmarkTopologyAggregations
- runTopologyQuery helper

**server_perf_overview_test.go** (288 lines):
- BenchmarkOverviewLoki
- BenchmarkOverviewAuto
- BenchmarkFilterHeavyOverview
- BenchmarkConcurrentOverview
- BenchmarkOverviewAggregations
- Overview query builder helpers

All benchmarks verified and working correctly.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Remove duplication of filter test data across table, topology, and
overview benchmark files by extracting to a shared helper function.

**Changes:**
- Add getCommonFilterTests() to server_perf_test.go
- Returns standardized filter strings (1, 2, 4, 8 filter combinations)
- Update BenchmarkFilterHeavyTableView to use shared helper
- Update BenchmarkFilterHeavyTopology to use shared helper
- Update BenchmarkFilterHeavyOverview to use shared helper

**Impact:**
- Removed 58 lines of duplicated filter data
- Added 29 lines for shared helper
- Net reduction: 29 lines (834 → 805)
- Filter test data now defined in ONE place

**Benefits:**
- Single source of truth for filter test combinations
- Easier to add/modify filter test scenarios
- Consistent filter testing across all views

All benchmarks verified and working correctly.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Replace view-specific query helpers with generic shared helpers that
work for both topology and overview benchmarks.

**Changes:**
- Add runMetricsQuery() to server_perf_test.go (shared)
  - Generic helper to run a single /api/flow/metrics query
  - Used by topology benchmarks

- Add runMetricsQueries() to server_perf_test.go (shared)
  - Generic helper to run multiple /api/flow/metrics queries
  - Uses runMetricsQuery() internally
  - Used by overview benchmarks

- Remove runTopologyQuery() from topology_test.go
  - Replaced with runMetricsQuery()

- Remove runOverviewQueries() from overview_test.go
  - Replaced with runMetricsQueries()

**Impact:**
- Overview: 269 → 254 lines (-15)
- Topology: 183 → 170 lines (-13)
- Test: 204 → 224 lines (+20 for shared helpers)
- Total: 805 → 797 lines (-8)

**Benefits:**
- Single implementation of metrics query logic
- View-agnostic helpers work for both topology and overview
- Easier to maintain and enhance
- More consistent testing approach

All benchmarks verified and working correctly.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
- Replace duplicated HTTP request/response patterns with runFlowRecordsQuery()
- Use runMetricsQuery() for histogram metrics query
- Remove unused net/http import
- Reduce file from 150 to 100 lines (33% reduction)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Export flows: 7 → 4 scenarios (removed CSVWithColumns, WithMultipleFilters, WithLimit1000)
Large result sets: 4 → 3 sizes (removed 5000records - middle ground between 1K and 10K)
Data source tests: Removed Loki-only benchmarks, keeping Auto (more realistic - chooses best source automatically)
Filter combinations: 4 → 3 levels (removed 8 filters scenario, keeping 1, 2, 4)

- BenchmarkTopologyLoki: Removed (11 sub-benchmarks)
- BenchmarkOverviewLoki: Removed (5 sub-benchmarks)
- Total reduction: ~20 sub-benchmarks removed while maintaining comprehensive coverage

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

Labels

do-not-merge/work-in-progress no-doc This PR doesn't require documentation change on the NetObserv operator no-qe This PR doesn't necessitate QE approval
