
NETOBSERV-2387: Backend tests migration#2586

Open
oliver-smakal wants to merge 16 commits intonetobserv:mainfrom
oliver-smakal:backend-tests-migration/NETOBSERV-2387

Conversation


@oliver-smakal oliver-smakal commented Mar 24, 2026

Description

This PR is intended to show what the e2e backend migration of the tests could look like, and to serve as a place to discuss further details.

Changes compared to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite is possible, but we need to be careful not to use compat_otp.NewCLI and similar utilities, which make g.BeforeEach calls
  • yaml testdata files are not being compiled in and are just referenced from the file tree

The test cases were migrated with some changes made because the golangci linter was complaining. Minor fixes were applied; however, some findings, such as passing heavy structs without using *, I don't really see as a problem, so I have adjusted the Makefile to omit integration-tests from linting/testing.

This PR can be tested with a flexy cluster via the Network Observability Backend Tests.

Review tips

Important files to definitely check:

  • integration-tests/backend/backend_suite_test.go
  • integration-tests/backend/version_checker.go

To check changes in the test cases, check out the latest openshift-tests-private on the main branch and use:

git diff --no-index ~/Repos/openshift-tests-private/test/extended/netobserv/test_flowcollector.go test_flowcollector.go

to check a single file's diff, or

git diff --no-index ~/Repos/openshift-tests-private/test/extended/netobserv/ .

to compare whole directories.

How to run locally

Either:

go run github.com/onsi/ginkgo/v2/ginkgo # run all
go run github.com/onsi/ginkgo/v2/ginkgo --focus="VM" # use a regex to filter

or if the ginkgo cli is installed:

ginkgo # run all
ginkgo --focus="VM" # use a regex to filter

Other standard flags of ginkgo such as --dry-run or -v also work.

Example output

The following is an example output of a run:

[1776324075] Backend Suite - 3/7488 specs Running Suite: Backend Suite - /home/osmakal/Repos/network-observability-operator/integration-tests/backend
==========================================================================================================
Random Seed: 1776324075

Will run 3 specs
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-Critical-56362-High-53597-High-56326-Verify network flows are captured with Kafka with TLS [Serial][Slow]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2601
• PASSED [0.000 seconds]
•[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-High-57397-High-65116-Verify network-flows export with Kafka and netobserv installation without Loki[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2654
• PASSED [0.000 seconds]
•[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-High-64880-High-75340-Verify secrets copied for Loki and Kafka when deployed in NS other than flowcollector pods and networkPolicy enabled[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2780
• PASSED [0.000 seconds]
•------------------------------

Backend Suite - 3/3 specs • SUCCESS! [0.136 seconds]

Ran 3 tests
Passed: 3, Failed: 0, Skipped: 0
 SUCCESS! 136.307854ms PASS

Ginkgo ran 1 suite in 12.165523429s
Test Suite Passed

This implementation cannot control the [1776103444] Backend Suite - 3/7488 specs line. I don't think it will be an issue in any way, as the rest of the report makes it really clear what is actually being run. For running it in prow it should also not be a problem if we preserve the way it works with the openshift-tests-private repo and use a junit report.


How it could be used in CI.

Currently the openshift-tests-private implementation uses junit to handle the result.
See lines 398-459: the junit result is generated and parsed, and later used by the openshift-e2e-test-qe-report step to fail the prow job if necessary. For the nice output in prow, it seems the handle_result function in the openshift-extended-test step is used to format and rename the junit file with the help of the handleresult.py python script.

Though this is probably not directly reusable for our implementation, as the junit in openshift-tests-private does not use the default ginkgo implementation, we could use the same idea: take the junit output and apply a minor transformation in prow to report the result in a nice way.

Alternative approaches

We could create something a bit more custom using lower-level ginkgo functionality, like in commit fbb6fc36b17bc37b6eb847b8e51f42d26a9b29d8, to remove things like the [1776103444] Backend Suite - 3/7488 specs line. However, I don't think it would be worth the tradeoff of not using the default ginkgo runner via RunSpecs(), as we would lose the possibility to run tests in parallel and some other features.


openshift-ci Bot commented Mar 24, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


openshift-ci-robot commented Mar 24, 2026

@oliver-smakal: This pull request references NETOBSERV-2387 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Description

This PR is intended to show how the e2e backend migration of tests could look like and where we can discuss more details.

Changes compare to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite possible
  • yaml testdata files are not being compiled in and are just referenced from the file tree

TODO:

  • explore --dry-run and piping possibilities with ginkgo, so that the 'skipped tests' number does not include non-netobserv tests.
  • port latest netobserv test changes.

Please comment if you see any other things we could change.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


openshift-ci Bot commented Mar 24, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign stleerh for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


codecov Bot commented Mar 24, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 72.30%. Comparing base (fa26090) to head (15e2676).
⚠️ Report is 17 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2586      +/-   ##
==========================================
- Coverage   72.50%   72.30%   -0.21%     
==========================================
  Files         107      107              
  Lines       11482    11482              
==========================================
- Hits         8325     8302      -23     
- Misses       2658     2676      +18     
- Partials      499      504       +5     
Flag Coverage Δ
unittests 72.30% <ø> (-0.21%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.
see 5 files with indirect coverage changes





openshift-ci-robot commented Apr 13, 2026

@oliver-smakal: This pull request references NETOBSERV-2387 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Description

This PR is intended to show how the e2e backend migration of tests could look like and where we can discuss more details.

Changes compare to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite is possible, but we need to be careful not to use compat_otp.NewCLI and similar utilities, which make g.BeforeEach calls
  • yaml testdata files are not being compiled in and are just referenced from the file tree

TODO:

  • port latest netobserv test changes.

Please comment if you see any other things we could change.

The following is an example output of a run:

$ go run github.com/onsi/ginkgo/v2/ginkgo --focus="Kafka" -v --dry-run
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/networking
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/subscription
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/flowcollector_v1beta2_template.yaml
[1776103444] Backend Suite - 3/7488 specs
--------------------REPORT_BEFORE_SUITE_START--------------------
Running Suite: Backend Suite - /home/osmakal/Repos/network-observability-operator/integration-tests/backend
==========================================================================================================
Random Seed: 1776103444

Will run 3 specs

--------------------REPORT_BEFORE_SUITE_END--------------------
--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-Critical-56362-High-53597-High-56326-Verify network flows are captured with Kafka with TLS [Serial][Slow]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2606
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-High-57397-High-65116-Verify network-flows export with Kafka and netobserv installation without Loki[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2659
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-High-64880-High-75340-Verify secrets copied for Loki and Kafka when deployed in NS other than flowcollector pods and networkPolicy enabled[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2785
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•
--------------------REPORT_AFTER_SUITE_START--------------------
------------------------------

Backend Suite - 3/3 specs • SUCCESS! [0.122 seconds]

Ran 3 tests
Passed: 3, Failed: 0, Skipped: 7485

--------------------REPORT_AFTER_SUITE_END--------------------
SUCCESS! 122.331852ms PASS

Ginkgo ran 1 suite in 12.321528802s
Test Suite Passed

This way you can see which parts of the output we can control from which parts of the code. The only thing not under our control with the current implementation is anything above the --------------------REPORT_BEFORE_SUITE_START-------------------- line, e.g. the [1776103444] Backend Suite - 3/7488 specs line. I don't think it will be an issue in any way, as for people the rest of the report can be made really clear about what is actually being run. For running it in prow it should also not be a problem if we preserve the way it works with the openshift-tests-private repo, which is that a junit report is generated and then parsed there. Machine outputs from ginkgo like junit or json are not affected by the report hooks and can be parsed in prow if necessary.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


openshift-ci-robot commented Apr 13, 2026

@oliver-smakal: This pull request references NETOBSERV-2387 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Description

This PR is intended to show how the e2e backend migration of tests could look like and where we can discuss more details.

Changes compare to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite is possible, but we need to be careful not to use compat_otp.NewCLI and similar utilities, which make g.BeforeEach calls
  • yaml testdata files are not being compiled in and are just referenced from the file tree

Once agreed on the implementation, the testcases will need to be ported over one more time to reflect the latest changes.

Example output

The following is an example output of a run:

$ go run github.com/onsi/ginkgo/v2/ginkgo --focus="Kafka" -v --dry-run
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/networking
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/subscription
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/flowcollector_v1beta2_template.yaml
[1776103444] Backend Suite - 3/7488 specs
--------------------REPORT_BEFORE_SUITE_START--------------------
Running Suite: Backend Suite - /home/osmakal/Repos/network-observability-operator/integration-tests/backend
==========================================================================================================
Random Seed: 1776103444

Will run 3 specs

--------------------REPORT_BEFORE_SUITE_END--------------------
--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-Critical-56362-High-53597-High-56326-Verify network flows are captured with Kafka with TLS [Serial][Slow]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2606
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-High-57397-High-65116-Verify network-flows export with Kafka and netobserv installation without Loki[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2659
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-High-64880-High-75340-Verify secrets copied for Loki and Kafka when deployed in NS other than flowcollector pods and networkPolicy enabled[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2785
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•
--------------------REPORT_AFTER_SUITE_START--------------------
------------------------------

Backend Suite - 3/3 specs • SUCCESS! [0.122 seconds]

Ran 3 tests
Passed: 3, Failed: 0, Skipped: 7485

--------------------REPORT_AFTER_SUITE_END--------------------
SUCCESS! 122.331852ms PASS

Ginkgo ran 1 suite in 12.321528802s
Test Suite Passed

In the current form I have used the '---REPORT_xxx_START/END---' markers to indicate which parts of the output can be controlled by which parts of our code. This implementation cannot control anything above the REPORT_BEFORE_SUITE_START line or below the REPORT_AFTER_SUITE_END line. This means we cannot adjust the [1776103444] Backend Suite - 3/7488 specs line. I don't think it will be an issue in any way, as for people the rest of the report can be made really clear about what is actually being run. For running it in prow it should also not be a problem if we preserve the way it works with the openshift-tests-private repo, which is that a junit report is generated and then parsed there. Machine outputs from ginkgo like junit or json are not affected by the report hooks.

How it could be used in CI.

Currently the openshift-tests-private implementation uses junit to handle the result.
See lines 398-459: the junit result is generated and parsed, and later used by the openshift-e2e-test-qe-report step to fail the prow job if necessary. For the nice output in prow, it seems the handle_result function in the openshift-extended-test step is used to format and rename the junit file with the help of the handleresult.py python script.

Though this is probably not directly reusable for our implementation, as the junit in openshift-tests-private does not use the default ginkgo implementation, we could use the same idea: take the junit output and apply a minor transformation in prow to report the result in a nice way.

Alternative approaches

We could use something a bit more custom using more low-level ginkgo functionality, like in commit xxx, that could get us whatever console output we wish. But I don't think it would be worth the tradeoff of not using the default ginkgo runner, as we would lose the possibility to run tests in parallel and some other features.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@openshift-ci-robot
Copy link
Copy Markdown
Collaborator

openshift-ci-robot commented Apr 13, 2026

@oliver-smakal: This pull request references NETOBSERV-2387 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Description

This PR is intended to show how the e2e backend migration of tests could look like and where we can discuss more details.

Changes compare to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite possible, but we need to be carefull about not using compat_otp.NewCLI and similar utilities which have g.BeforeEach calls
  • yaml testdata files are not being compiled in and are just referenced from the file tree

Once agreed on the implementation, the testcases will need to be ported over one more time to reflect the latest changes.

Example output

The following is an example output of of a run:

$ go run github.com/onsi/ginkgo/v2/ginkgo --focus="Kafka" -v --dry-run
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/networking
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/subscription
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/flowcollector_v1beta2_template.yaml
[1776103444] Backend Suite - 3/7488 specs
--------------------REPORT_BEFORE_SUITE_START--------------------
Running Suite: Backend Suite - /home/osmakal/Repos/network-observability-operator/integration-tests/backend
==========================================================================================================
Random Seed: 1776103444

Will run 3 specs

--------------------REPORT_BEFORE_SUITE_END--------------------
--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-Critical-56362-High-53597-High-56326-Verify network flows are captured with Kafka with TLS [Serial][Slow]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2606
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-High-57397-High-65116-Verify network-flows export with Kafka and netobserv installation without Loki[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2659
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-High-64880-High-75340-Verify secrets copied for Loki and Kafka when deployed in NS other than flowcollector pods and networkPolicy enabled[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2785
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•
--------------------REPORT_AFTER_SUITE_START--------------------
------------------------------

Backend Suite - 3/3 specs • SUCCESS! [0.122 seconds]

Ran 3 tests
Passed: 3, Failed: 0, Skipped: 7485

--------------------REPORT_AFTER_SUITE_END--------------------
SUCCESS! 122.331852ms PASS

Ginkgo ran 1 suite in 12.321528802s
Test Suite Passed

In current form I have used the '---REPORTxxxxSTART/END---' to indicate which parts can be controlled by which parts of our code. This implementation cannot control anything above line REPORT_BEFORE_SUITE_START or below line REPORT_AFTER_SUITE_END . This means we cannot adjust the [1776103444] Backend Suite - 3/7488 specs line.

I don't think it will be issue in any way, as the rest of the report can make it really clear to us what is actually being run. For running it in prow it should also not be a problem if we preserve the way it works with the openshift-test-private repo and use junit report.

How it could be used in CI.

Currently the openshift-test-private implementation uses junit to handle the result.
See lines 398-459 that junit result is generated and parsed and later is used by the openshift-e2e-test-qe-report step to fail the prow job if necessary). For the nice output in prow, it seems like the function handle_result in the openshift-extended-test step is used as it seems to be formatting and renaming the junit file with the help of handleresult.py python script.

Though this is probably not directly reusable for our implementation, since the JUnit output in openshift-tests-private does not come from the default Ginkgo implementation, we could apply the same idea: use the JUnit output with a minor transformation in Prow to report results in a nice way.
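As a sketch of that "minor transformation" idea: the JUnit file Ginkgo emits can be parsed with the Go standard library alone, for example to decide whether a CI step should fail. The struct fields below are a subset based on common JUnit output, not the exact Ginkgo schema, and the sample XML is invented for illustration:

```go
// Sketch: summarizing a JUnit report with only the standard library.
package main

import (
	"encoding/xml"
	"fmt"
)

// Minimal subset of the JUnit schema; field names are assumptions
// based on typical JUnit output, check the real report for the full schema.
type testSuites struct {
	XMLName xml.Name    `xml:"testsuites"`
	Suites  []testSuite `xml:"testsuite"`
}

type testSuite struct {
	Name  string     `xml:"name,attr"`
	Cases []testCase `xml:"testcase"`
}

type testCase struct {
	Name    string   `xml:"name,attr"`
	Failure *failure `xml:"failure"`
	Skipped *skipped `xml:"skipped"`
}

type failure struct {
	Message string `xml:"message,attr"`
}

type skipped struct{}

// summarize counts executed and failed test cases, ignoring skipped ones
// (like the thousands of specs filtered out by --focus).
func summarize(data []byte) (ran, failed int, err error) {
	var ts testSuites
	if err = xml.Unmarshal(data, &ts); err != nil {
		return 0, 0, err
	}
	for _, s := range ts.Suites {
		for _, c := range s.Cases {
			if c.Skipped != nil {
				continue
			}
			ran++
			if c.Failure != nil {
				failed++
			}
		}
	}
	return ran, failed, nil
}

func main() {
	sample := []byte(`<testsuites>
  <testsuite name="Backend Suite">
    <testcase name="Kafka with TLS"/>
    <testcase name="export without Loki"><failure message="boom"/></testcase>
    <testcase name="other spec"><skipped/></testcase>
  </testsuite>
</testsuites>`)
	ran, failed, err := summarize(sample)
	if err != nil {
		panic(err)
	}
	fmt.Printf("ran=%d failed=%d\n", ran, failed) // prints "ran=2 failed=1"
}
```

A Prow step could run such a summary after the suite and exit non-zero when failed > 0.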

Alternative approaches

We could create something a bit more custom using lower-level Ginkgo functionality, like in commit fbb6fc36b17bc37b6eb847b8e51f42d26a9b29d8, which could get us whatever console output we wish. But I don't think it would be worth the tradeoff of not using the default Ginkgo runner, as we would lose the ability to run tests in parallel and some other features.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot (Collaborator) commented Apr 13, 2026

@oliver-smakal: This pull request references NETOBSERV-2387 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Description

This PR is intended to show how the e2e backend migration of tests could look like and where we can discuss more details.

Changes compared to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite is possible, but we need to be careful not to use compat_otp.NewCLI and similar utilities, which make g.BeforeEach calls
  • yaml testdata files are not being compiled in and are just referenced from the file tree

Once the implementation is agreed on, the test cases will need to be ported over one more time to reflect the latest changes.

Example output

The following is an example output of a run:

$ go run github.com/onsi/ginkgo/v2/ginkgo --focus="Kafka" -v --dry-run
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/networking
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/subscription
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/flowcollector_v1beta2_template.yaml
[1776103444] Backend Suite - 3/7488 specs
--------------------REPORT_BEFORE_SUITE_START--------------------
Running Suite: Backend Suite - /home/osmakal/Repos/network-observability-operator/integration-tests/backend
==========================================================================================================
Random Seed: 1776103444

Will run 3 specs

--------------------REPORT_BEFORE_SUITE_END--------------------
--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-Critical-56362-High-53597-High-56326-Verify network flows are captured with Kafka with TLS [Serial][Slow]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2606
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-High-57397-High-65116-Verify network-flows export with Kafka and netobserv installation without Loki[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2659
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-High-64880-High-75340-Verify secrets copied for Loki and Kafka when deployed in NS other than flowcollector pods and networkPolicy enabled[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2785
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•
--------------------REPORT_AFTER_SUITE_START--------------------
------------------------------

Backend Suite - 3/3 specs • SUCCESS! [0.122 seconds]

Ran 3 tests
Passed: 3, Failed: 0, Skipped: 7485

--------------------REPORT_AFTER_SUITE_END--------------------
SUCCESS! 122.331852ms PASS

Ginkgo ran 1 suite in 12.321528802s
Test Suite Passed

In its current form I have used the '---REPORTxxxxSTART/END---' markers to indicate which parts of the output can be controlled by which parts of our code. This implementation cannot control anything above the REPORT_BEFORE_SUITE_START line or below the REPORT_AFTER_SUITE_END line, which means we cannot adjust the [1776103444] Backend Suite - 3/7488 specs line.

I don't think that will be an issue in any way, as the rest of the report makes it clear what is actually being run. Running it in Prow should also not be a problem if we preserve the way it works with the openshift-tests-private repo and use the JUnit report.

How it could be used in CI.

Currently the openshift-tests-private implementation uses JUnit to handle the result.
See lines 398-459: the JUnit result is generated and parsed, and later used by the openshift-e2e-test-qe-report step to fail the Prow job if necessary. For the nice output in Prow, it looks like the handle_result function in the openshift-extended-test step formats and renames the JUnit file with the help of the handleresult.py Python script.

Though this is probably not directly reusable for our implementation, since the JUnit output in openshift-tests-private does not come from the default Ginkgo implementation, we could apply the same idea: use the JUnit output with a minor transformation in Prow to report results in a nice way.

Alternative approaches

We could create something a bit more custom using lower-level Ginkgo functionality, like in commit fbb6fc36b17bc37b6eb847b8e51f42d26a9b29d8, which could get us whatever console output we wish. But I don't think it would be worth the tradeoff of not using the default Ginkgo runner, as we would lose the ability to run tests in parallel and some other features.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai (Bot) commented Apr 13, 2026

Important

Review skipped

Too many files!

This PR contains 266 files, which is 116 over the limit of 150.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 26debe59-f747-4294-88b9-5534ced3d781

📥 Commits

Reviewing files that changed from the base of the PR and between e1afb44 and 15e2676.

⛔ Files ignored due to path filters (34)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (266)
  • .golangci-e2e.yaml
  • .golangci.yml
  • Makefile
  • bundle/manifests/flows.netobserv.io_flowcollectors.yaml
  • config/crd/bases/flows.netobserv.io_flowcollectors.yaml
  • docs/FlowCollector.md
  • go.mod
  • helm/crds/flows.netobserv.io_flowcollectors.yaml
  • integration-tests/backend/OWNERS
  • integration-tests/backend/alerts.go
  • integration-tests/backend/aws_sts.go
  • integration-tests/backend/azure_utils.go
  • integration-tests/backend/backend_suite_test.go
  • integration-tests/backend/custom_metrics.go
  • integration-tests/backend/flowcollector.go
  • integration-tests/backend/flowcollector_utils.go
  • integration-tests/backend/flowcollectorslice.go
  • integration-tests/backend/ip_utils.go
  • integration-tests/backend/kafka.go
  • integration-tests/backend/loki.go
  • integration-tests/backend/loki_client.go
  • integration-tests/backend/loki_storage.go
  • integration-tests/backend/metrics.go
  • integration-tests/backend/multitenants.go
  • integration-tests/backend/operator.go
  • integration-tests/backend/sctp.go
  • integration-tests/backend/test_exporters.go
  • integration-tests/backend/test_flowcollector.go
  • integration-tests/backend/test_flowcollectorslice.go
  • integration-tests/backend/test_flowmetrics.go
  • integration-tests/backend/testdata/DNS-pods.yaml
  • integration-tests/backend/testdata/SYN_flood_alert_template.yaml
  • integration-tests/backend/testdata/SYN_flood_metrics_template.yaml
  • integration-tests/backend/testdata/bpfman/catalog-source.yaml
  • integration-tests/backend/testdata/bpfman/image-digest-mirror-set.yaml
  • integration-tests/backend/testdata/bpfman/namespace.yaml
  • integration-tests/backend/testdata/cert_manager_certificates_template.yaml
  • integration-tests/backend/testdata/exporters/ipfix-collector.yaml
  • integration-tests/backend/testdata/exporters/otel-collector-tls.yaml
  • integration-tests/backend/testdata/exporters/otel-collector.yaml
  • integration-tests/backend/testdata/flowcollectorSlice_v1alpha1_template.yaml
  • integration-tests/backend/testdata/flowcollector_v1beta2_template.yaml
  • integration-tests/backend/testdata/flowlogs_pipeline_hpa_template.yaml
  • integration-tests/backend/testdata/flowmetrics_v1alpha1_template.yaml
  • integration-tests/backend/testdata/gateway-api-template.yaml
  • integration-tests/backend/testdata/kafka/kafka-default.yaml
  • integration-tests/backend/testdata/kafka/kafka-metrics-config.yaml
  • integration-tests/backend/testdata/kafka/kafka-node-pool.yaml
  • integration-tests/backend/testdata/kafka/kafka-tls.yaml
  • integration-tests/backend/testdata/kafka/kafka-topic.yaml
  • integration-tests/backend/testdata/kafka/kafka-user.yaml
  • integration-tests/backend/testdata/kafka/topic-consumer-tls.yaml
  • integration-tests/backend/testdata/logging/minIO/deploy.yaml
  • integration-tests/backend/testdata/logging/odf/objectBucketClaim.yaml
  • integration-tests/backend/testdata/logging/subscription/namespace.yaml
  • integration-tests/backend/testdata/loki/0-click-loki.yaml
  • integration-tests/backend/testdata/loki/loki-pvc.yaml
  • integration-tests/backend/testdata/loki/lokistack-simple.yaml
  • integration-tests/backend/testdata/netobserv-loki-reader-multitenant-crb.yaml
  • integration-tests/backend/testdata/networking/adminnetworkPolicy.yaml
  • integration-tests/backend/testdata/networking/baselineadminnetworkPolicy.yaml
  • integration-tests/backend/testdata/networking/egressQoS.yaml
  • integration-tests/backend/testdata/networking/networkPolicy.yaml
  • integration-tests/backend/testdata/networking/nmstate/catalogsource-template.yaml
  • integration-tests/backend/testdata/networking/nmstate/image-digest-mirrorset.yaml
  • integration-tests/backend/testdata/networking/nmstate/namespace-template.yaml
  • integration-tests/backend/testdata/networking/nmstate/nmstate-cr-template.yaml
  • integration-tests/backend/testdata/networking/nmstate/operatorgroup-template.yaml
  • integration-tests/backend/testdata/networking/nmstate/ovn-mapping-policy-template.yaml
  • integration-tests/backend/testdata/networking/nmstate/subscription-template.yaml
  • integration-tests/backend/testdata/networking/sctpclient.yaml
  • integration-tests/backend/testdata/networking/sctpserver.yaml
  • integration-tests/backend/testdata/networking/test-client-DSCP.yaml
  • integration-tests/backend/testdata/networking/udn/cudn_crd_dualstack_template.yaml
  • integration-tests/backend/testdata/networking/udn/cudn_crd_layer2_dualstack_template.yaml
  • integration-tests/backend/testdata/networking/udn/cudn_crd_layer2_singlestack_template.yaml
  • integration-tests/backend/testdata/networking/udn/cudn_crd_localnet_singlestack_template.yaml
  • integration-tests/backend/testdata/networking/udn/cudn_crd_localnet_singlestack_with_vlan_template.yaml
  • integration-tests/backend/testdata/networking/udn/cudn_crd_singlestack_template.yaml
  • integration-tests/backend/testdata/networking/udn/udn_crd_dualstack2_template.yaml
  • integration-tests/backend/testdata/networking/udn/udn_crd_layer2_dualstack_template.yaml
  • integration-tests/backend/testdata/networking/udn/udn_crd_layer2_singlestack_template.yaml
  • integration-tests/backend/testdata/networking/udn/udn_crd_singlestack_template.yaml
  • integration-tests/backend/testdata/networking/udn/udn_statefulset_template.yaml
  • integration-tests/backend/testdata/networking/udn/udn_test_pod_template.yaml
  • integration-tests/backend/testdata/subscription/allnamespace-og.yaml
  • integration-tests/backend/testdata/subscription/catalog-source.yaml
  • integration-tests/backend/testdata/subscription/image-digest-mirror-set.yaml
  • integration-tests/backend/testdata/subscription/namespace.yaml
  • integration-tests/backend/testdata/subscription/singlenamespace-og.yaml
  • integration-tests/backend/testdata/subscription/sub-template.yaml
  • integration-tests/backend/testdata/test-SYN-flood-client_template.yaml
  • integration-tests/backend/testdata/test-nginx-client_template.yaml
  • integration-tests/backend/testdata/test-nginx-server_template.yaml
  • integration-tests/backend/testdata/test-ping-pods_template.yaml
  • integration-tests/backend/testdata/test-tls-client_template.yaml
  • integration-tests/backend/testdata/test-tls-server_template.yaml
  • integration-tests/backend/testdata/testuser-client-server_template.yaml
  • integration-tests/backend/testdata/testuser-template-crb.yaml
  • integration-tests/backend/testdata/virtualization/kubevirt-hyperconverged.yaml
  • integration-tests/backend/testdata/virtualization/layer2-nad.yaml
  • integration-tests/backend/testdata/virtualization/test-vm-UDN_template.yaml
  • integration-tests/backend/testdata/virtualization/test-vm-localnet_template.yaml
  • integration-tests/backend/testdata/virtualization/test-vm-static-IP_template.yaml
  • integration-tests/backend/udn.go
  • integration-tests/backend/util.go
  • integration-tests/backend/version_checker.go
  • integration-tests/backend/virtualization.go


@openshift-ci-robot

openshift-ci-robot commented Apr 14, 2026

@oliver-smakal: This pull request references NETOBSERV-2387 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Description

This PR is intended to show how the e2e backend migration of tests could look like and where we can discuss more details.

Changes compared to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite is possible, but we need to be careful not to use compat_otp.NewCLI and similar utilities, which make g.BeforeEach calls
  • YAML testdata files are not compiled in; they are just referenced from the file tree

Once we agree on the implementation, the testcases will need to be ported over one more time to reflect the latest changes and to add guards that skip some of the testcases on certain OpenShift versions. Then I would move this PR out of Draft status.
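
The version guards could be as simple as a numeric comparison gating a ginkgo Skip; a sketch under that assumption (versionAtLeast is hypothetical, the real logic lives in version_checker.go):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// versionAtLeast compares dotted numeric versions like "4.18" segment by
// segment. Hypothetical helper mirroring what a version checker might do;
// missing segments are treated as zero.
func versionAtLeast(have, want string) bool {
	hs, ws := strings.Split(have, "."), strings.Split(want, ".")
	for i := 0; i < len(hs) || i < len(ws); i++ {
		h, w := 0, 0
		if i < len(hs) {
			h, _ = strconv.Atoi(hs[i])
		}
		if i < len(ws) {
			w, _ = strconv.Atoi(ws[i])
		}
		if h != w {
			return h > w
		}
	}
	return true
}

func main() {
	// In a spec this result would gate a ginkgo Skip() call.
	fmt.Println(versionAtLeast("4.18", "4.16")) // true
	fmt.Println(versionAtLeast("4.9", "4.16"))  // false
}
```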

Example output

The following is example output from a run:

$ go run github.com/onsi/ginkgo/v2/ginkgo --focus="Kafka" -v --dry-run
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/networking
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/subscription
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/flowcollector_v1beta2_template.yaml
[1776103444] Backend Suite - 3/7488 specs
--------------------REPORT_BEFORE_SUITE_START--------------------
Running Suite: Backend Suite - /home/osmakal/Repos/network-observability-operator/integration-tests/backend
==========================================================================================================
Random Seed: 1776103444

Will run 3 specs

--------------------REPORT_BEFORE_SUITE_END--------------------
--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-Critical-56362-High-53597-High-56326-Verify network flows are captured with Kafka with TLS [Serial][Slow]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2606
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-High-57397-High-65116-Verify network-flows export with Kafka and netobserv installation without Loki[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2659
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-High-64880-High-75340-Verify secrets copied for Loki and Kafka when deployed in NS other than flowcollector pods and networkPolicy enabled[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2785
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•
--------------------REPORT_AFTER_SUITE_START--------------------
------------------------------

Backend Suite - 3/3 specs • SUCCESS! [0.122 seconds]

Ran 3 tests
Passed: 3, Failed: 0, Skipped: 7485

--------------------REPORT_AFTER_SUITE_END--------------------
SUCCESS! 122.331852ms PASS

Ginkgo ran 1 suite in 12.321528802s
Test Suite Passed

In its current form I have used the '---REPORTxxxxSTART/END---' markers to indicate which parts of the output are controlled by which parts of our code. This implementation cannot control anything above the REPORT_BEFORE_SUITE_START line or below the REPORT_AFTER_SUITE_END line. This means we cannot adjust the [1776103444] Backend Suite - 3/7488 specs line.

I don't think this will be an issue, as the rest of the report makes it clear what is actually being run. For running it in prow it should also not be a problem if we preserve the way it works with the openshift-tests-private repo and use a junit report.

More examples:

How it could be used in CI.

Currently the openshift-tests-private implementation uses junit to handle the result.
See lines 398-459, where the junit result is generated and parsed, and later used by the openshift-e2e-test-qe-report step to fail the prow job if necessary. For the nice output in prow, the handle_result function in the openshift-extended-test step appears to format and rename the junit file with the help of the handleresult.py python script.

Though this is probably not directly reusable for our implementation, as the junit in openshift-tests-private does not use the default ginkgo implementation, we could apply the same idea and use the junit output with a minor transformation in prow to report the result in a nice way.

Alternative approaches

We could create something a bit more custom using lower-level ginkgo functionality, like in commit fbb6fc36b17bc37b6eb847b8e51f42d26a9b29d8, to remove things like the [1776103444] Backend Suite - 3/7488 specs line. However, I don't think it would be worth the tradeoff of not using the default ginkgo runner via RunSpecs(), as we would lose the ability to run tests in parallel and some other features.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@openshift-ci-robot

openshift-ci-robot commented Apr 14, 2026

@oliver-smakal: This pull request references NETOBSERV-2387 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Description

This PR is intended to show how the e2e backend migration of tests could look like and where we can discuss more details.

Changes compared to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite is possible, but we need to be careful about not using compat_otp.NewCLI and similar utilities, which have g.BeforeEach calls
  • yaml testdata files are not being compiled in and are just referenced from the file tree

Once agreed on the implementation, the testcases will need to be ported over one more time to reflect the latest changes and to add guards to skip some of the testcases for certain OpenShift versions. Then I would move this PR out of Draft status.

How to run locally

Either:

go run github.com/onsi/ginkgo/v2/ginkgo #to run all
go run github.com/onsi/ginkgo/v2/ginkgo --focus="VM" #use regex to filter

or if the ginkgo cli is installed:

ginkgo #to run all
ginkgo --focus="VM" #use regex to filter

Other standard flags of ginkgo such as --dry-run or -v also work.

Example output

The following is an example output of a run:

$ go run github.com/onsi/ginkgo/v2/ginkgo --focus="Kafka" -v --dry-run
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/networking
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/subscription
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/testdata/flowcollector_v1beta2_template.yaml
[1776103444] Backend Suite - 3/7488 specs
--------------------REPORT_BEFORE_SUITE_START--------------------
Running Suite: Backend Suite - /home/osmakal/Repos/network-observability-operator/integration-tests/backend
==========================================================================================================
Random Seed: 1776103444

Will run 3 specs

--------------------REPORT_BEFORE_SUITE_END--------------------
--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-Critical-56362-High-53597-High-56326-Verify network flows are captured with Kafka with TLS [Serial][Slow]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2606
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-High-57397-High-65116-Verify network-flows export with Kafka and netobserv installation without Loki[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2659
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•--------------------REPORT_AFTER_EACH_START--------------------
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-High-64880-High-75340-Verify secrets copied for Loki and Kafka when deployed in NS other than flowcollector pods and networkPolicy enabled[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2785
• PASSED [0.000 seconds]
--------------------REPORT_AFTER_EACH_END--------------------
•
--------------------REPORT_AFTER_SUITE_START--------------------
------------------------------

Backend Suite - 3/3 specs • SUCCESS! [0.122 seconds]

Ran 3 tests
Passed: 3, Failed: 0, Skipped: 7485

--------------------REPORT_AFTER_SUITE_END--------------------
SUCCESS! 122.331852ms PASS

Ginkgo ran 1 suite in 12.321528802s
Test Suite Passed

In the current form, I have used the ---REPORT_xxx_START/END--- markers to indicate which parts of the output can be controlled by which parts of our code. This implementation cannot control anything above the REPORT_BEFORE_SUITE_START line or below the REPORT_AFTER_SUITE_END line, which means we cannot adjust the [1776103444] Backend Suite - 3/7488 specs line.

I don't think this will be an issue, as the rest of the report makes it clear what is actually being run. Running it in Prow should also not be a problem if we preserve the way it works with the openshift-tests-private repo and use a JUnit report.

More examples:

How it could be used in CI.

Currently the openshift-tests-private implementation uses JUnit to handle the result.
See lines 398-459, where the JUnit result is generated and parsed, and later used by the openshift-e2e-test-qe-report step to fail the Prow job if necessary. For the nice output in Prow, the handle_result function in the openshift-extended-test step appears to format and rename the JUnit file with the help of the handleresult.py Python script.

Though this is probably not directly reusable, as the JUnit output in openshift-tests-private does not use the default Ginkgo implementation, we could apply the same idea and use the JUnit output with a minor transformation in Prow to report results in a nice way.

Alternative approaches

We could create something a bit more custom using lower-level Ginkgo functionality, like in commit fbb6fc36b17bc37b6eb847b8e51f42d26a9b29d8, to remove things like the [1776103444] Backend Suite - 3/7488 specs line. However, I don't think it would be worth the tradeoff of not using the default Ginkgo runner via RunSpecs(), as we would lose the ability to run tests in parallel and some other features.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


@openshift-ci-robot
Collaborator

openshift-ci-robot commented Apr 16, 2026

@oliver-smakal: This pull request references NETOBSERV-2387 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "5.0.0" version, but no target version was set.

Details

In response to this:

Description

This PR is intended to show how the e2e backend migration of tests could look like and where we can discuss more details.

Changes compared to the openshift-tests-private repo:

  • using ginkgo directly without wrapper
  • using BeforeSuite is possible, but we need to be careful about not using compat_otp.NewCLI and similar utilities, which have g.BeforeEach calls
  • yaml testdata files are not being compiled in and are just referenced from the file tree

Once agreed on the implementation, the testcases will need to be ported over one more time to reflect the latest changes and to add guards to skip some of the testcases for certain OpenShift versions. Then I would move this PR out of Draft status.

How to run locally

Either:

go run github.com/onsi/ginkgo/v2/ginkgo #to run all
go run github.com/onsi/ginkgo/v2/ginkgo --focus="VM" #use regex to filter

or if the ginkgo cli is installed:

ginkgo #to run all
ginkgo --focus="VM" #use regex to filter

Other standard flags of ginkgo such as --dry-run or -v also work.

Example output

The following is an example output of a run:

[1776324075] Backend Suite - 3/7488 specs Running Suite: Backend Suite - /home/osmakal/Repos/network-observability-operator/integration-tests/backend
==========================================================================================================
Random Seed: 1776324075

Will run 3 specs
[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-Critical-56362-High-53597-High-56326-Verify network flows are captured with Kafka with TLS [Serial][Slow]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2601
• PASSED [0.000 seconds]
•[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-Longduration-High-57397-High-65116-Verify network-flows export with Kafka and netobserv installation without Loki[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2654
• PASSED [0.000 seconds]
•[sig-netobserv] Network_Observability with Loki with Kafka Author:aramesha-NonPreRelease-High-64880-High-75340-Verify secrets copied for Loki and Kafka when deployed in NS other than flowcollector pods and networkPolicy enabled[Serial]
/home/osmakal/Repos/network-observability-operator/integration-tests/backend/test_flowcollector.go:2780
• PASSED [0.000 seconds]
•------------------------------

Backend Suite - 3/3 specs • SUCCESS! [0.136 seconds]

Ran 3 tests
Passed: 3, Failed: 0, Skipped: 0
SUCCESS! 136.307854ms PASS

Ginkgo ran 1 suite in 12.165523429s
Test Suite Passed

This implementation cannot control the [1776103444] Backend Suite - 3/7488 specs line. I don't think this will be an issue, as the rest of the report makes it clear what is actually being run. Running it in Prow should also not be a problem if we preserve the way it works with the openshift-tests-private repo and use a JUnit report.

More examples:

How it could be used in CI.

Currently the openshift-tests-private implementation uses JUnit to handle the result.
See lines 398-459, where the JUnit result is generated and parsed, and later used by the openshift-e2e-test-qe-report step to fail the Prow job if necessary. For the nice output in Prow, the handle_result function in the openshift-extended-test step appears to format and rename the JUnit file with the help of the handleresult.py Python script.

Though this is probably not directly reusable, as the JUnit output in openshift-tests-private does not use the default Ginkgo implementation, we could apply the same idea and use the JUnit output with a minor transformation in Prow to report results in a nice way.

Alternative approaches

We could create something a bit more custom using lower-level Ginkgo functionality, like in commit fbb6fc36b17bc37b6eb847b8e51f42d26a9b29d8, to remove things like the [1776103444] Backend Suite - 3/7488 specs line. However, I don't think it would be worth the tradeoff of not using the default Ginkgo runner via RunSpecs(), as we would lose the ability to run tests in parallel and some other features.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@oliver-smakal oliver-smakal marked this pull request as ready for review April 28, 2026 11:45
@oliver-smakal oliver-smakal force-pushed the backend-tests-migration/NETOBSERV-2387 branch from 77f49b5 to e9dff92 Compare April 28, 2026 12:17
@oliver-smakal oliver-smakal force-pushed the backend-tests-migration/NETOBSERV-2387 branch from e9dff92 to 77f49b5 Compare April 28, 2026 12:21
@oliver-smakal oliver-smakal force-pushed the backend-tests-migration/NETOBSERV-2387 branch from 77f49b5 to 299451f Compare April 28, 2026 12:29
Comment thread integration-tests/backend/example-test-fail.log Outdated
Member

@memodi memodi left a comment


thanks @oliver-smakal, some questions/comments.

/cc @jotak @OlivierCazade @leandroberetta @jpinsonneau
JSYK, in terms of code, the files in testdata and all the test code have already been reviewed in openshift-tests-private, except for a couple of new files (version_checker.go, backend_suite_test.go). Besides those new files, the most important thing to review here is go.mod for new dependencies.

Comment thread bundle/manifests/flows.netobserv.io_flowcollectors.yaml
Comment thread config/crd/bases/flows.netobserv.io_flowcollectors.yaml
Comment thread docs/FlowCollector.md
Comment thread integration-tests/backend/version_checker.go Outdated
Comment thread integration-tests/backend/backend_suite_test.go
Comment thread integration-tests/backend/backend_suite_test.go Outdated
Member

@memodi memodi left a comment


/lgtm

thanks @oliver-smakal

