
NO-ISSUE: remove etcd dependency from failing cases #1241

Merged
openshift-merge-bot[bot] merged 1 commit into openshift:main from bandrade:codex/remove-etcd-from-tests
Mar 20, 2026

Conversation

@bandrade
Contributor

@bandrade bandrade commented Feb 27, 2026

Summary

  • rewrite OCP-32613 to use the existing learn-operator path instead of the archived etcd-based catalog
  • rewrite OCP-27680 to use the community prometheus operator and validate a real Prometheus CR instead of the etcd service-monitor catalog
  • rewrite OCP-47181 to use the existing ditto-based dependency fixture instead of an etcd install path

Assisted-By: Claude

@bandrade
Contributor Author

/retitle NO-ISSUE: remove etcd dependency from failing cases

@openshift-ci openshift-ci bot changed the title from "tests-extension: remove etcd dependency from failing QE cases" to "NO-ISSUE: remove etcd dependency from failing cases" on Feb 27, 2026
@openshift-ci-robot

@bandrade: This pull request explicitly references no jira issue.

Details

In response to this:

Summary

  • rewrite OCP-32613 to use the existing learn-operator path instead of the archived etcd-based catalog
  • rewrite OCP-27680 to use the community prometheus operator and validate a real Prometheus CR instead of the etcd service-monitor catalog
  • rewrite OCP-47181 to use the existing ditto-based dependency fixture instead of an etcd install path

Testing

  • go test ./test/qe/util/olmv0util
  • go test ./test/qe/specs -run TestDoesNotExist

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference label (indicates that this PR references a valid Jira ticket of any type) on Feb 27, 2026
@jianzhangbjz
Member

Hi @bandrade, could you share the test log from the FIPS cluster? Thanks!

@jianzhangbjz
Member

Also, other test cases (OCP-47149, OCP-24387) still use the etcd operator, which fails on the OCP 4.22 FIPS-enabled cluster.

@jianzhangbjz
Member

/retest-required

@bandrade bandrade force-pushed the codex/remove-etcd-from-tests branch from 81b8410 to c11bf70 on March 10, 2026 22:50
@coderabbitai

coderabbitai bot commented Mar 10, 2026

Important

Review skipped

Auto reviews are limited based on label configuration.

🚫 Review skipped — only excluded labels are configured. (1)
  • do-not-merge/work-in-progress

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: dccc5467-cdd8-4e70-81ae-a89928f34c18

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Tests modified to dynamically resolve PackageManifest default channels and switch to a new "learn" operator/catalog context; CatalogSource and Subscription values were updated, multi-CSV readiness polling and template-driven Prometheus resource application were added, node-gating and cleanup/defer flows were introduced across two OLM test specs. (41 words)

Changes

  • Default-option OLM test — tests-extension/test/qe/specs/olmv0_defaultoption.go
    Reworked to discover packageName dynamically (uses odf-prometheus-operator lookup), resolve defaultChannel from PackageManifest, and drive currentCSVDesc fields from the selected channel. CatalogSource address/name/namespace updated to learn-operator-index variants; Subscriptions switched to the learn package and beta channels; replaced hard-coded CSV checks with multi-CSV readiness polling (learn, ditto, planetscale); added logging and defer cleanup for CSVs/Subscriptions; swapped some catalogsource-image YAML uses.
  • Non-all-namespaces OLM test — tests-extension/test/qe/specs/olmv0_nonallns.go
    Removed creation of a specific CatalogSource CR; added PackageManifest default-channel lookup and skip-if-missing behavior; gated on presence of schedulable Linux worker nodes; use Subscription.InstalledCSV for validations; apply Prometheus resources via templates with cleanup; replaced direct CSV/resource assertions with template-driven applies and generalized Prometheus resource checks.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    actor Tester as Test Runner
    participant K8s as Kubernetes API
    participant PM as PackageManifest
    participant CS as CatalogSource
    participant OLM as OLM (Subscription/CSV)
    participant Node as NodeChecker

    Tester->>PM: query PackageManifest for package & defaultChannel
    PM-->>Tester: package info or not found (skip)
    Tester->>Node: check schedulable Linux worker nodes (if required)
    Node-->>Tester: present / absent (skip if absent)
    Tester->>K8s: (optional) reference/create CatalogSource (`cs.Name`/`cs.Namespace`)
    Tester->>OLM: create Subscription (CatalogSourceName/Namespace, Channel, Package, StartingCSV)
    OLM-->>Tester: InstalledCSV name
    Tester->>K8s: poll CSV(s) status until Succeeded or timeout (multiple operators)
    K8s-->>Tester: CSV statuses (Succeeded / Failure / timeout)
    Tester->>K8s: apply template-driven Prometheus resources (ServiceMonitor/Rules)
    K8s-->>Tester: resource creation confirmation
    Tester->>OLM: cleanup: delete Subscription(s) and CSV(s)
    Tester->>CS: cleanup CatalogSource references (if dynamically created)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Test Structure And Quality — ⚠️ Warning
    Explanation: Test PolarionID:47181 violates test quality requirements: missing defer cleanup statements, improper error handling in the poll function, and multiple concerns in a single test.
    Resolution: Add explicit defer cleanup after resource creation, replace error handling in the poll function with graceful retry logic, and split the test into separate tests for each operator.
✅ Passed checks (4 passed)
  • Description Check — ✅ Passed: Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: The title 'NO-ISSUE: remove etcd dependency from failing cases' directly aligns with the PR objectives, which describe rewriting failing test cases to remove etcd-based dependencies and replace them with alternative operators.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
  • Stable And Deterministic Test Names — ✅ Passed: All test names in the modified test cases are stable and deterministic, containing only static descriptive strings without dynamic information.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@bandrade
Contributor Author

@jianzhangbjz here is the command output:


oc get cm cluster-config-v1 -n kube-system -o json | jq -r '.data."install-config"' | grep -i "fips"
fips: true

== Command: ./bin/olmv0-tests-ext run-test -o json -n <32613> -n <47181> -n <27680>

  I0310 19:42:22.435086 75591 test_context.go:566] The --provider flag is not set. Continuing as if --provider=skeleton had been used.
  Running Suite:  - /Users/bandrade/redhat/repositories/operator-framework-olm/tests-extension
  ============================================================================================
  Random Seed: 1773182542 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [sig-operator][Jira:OLM] OLMv0 optional should PolarionID:32613-[OTP][Skipped:Disconnected]Operators won't install if the CSV dependency is already installed
  /Users/bandrade/redhat/repositories/operator-framework-olm/tests-extension/test/qe/specs/olmv0_defaultoption.go:776
    STEP: Creating a kubernetes client @ 03/10/26 19:42:22.437
  I0310 19:42:27.446179 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig explain template.apiVersion'
  I0310 19:42:29.652198 75591 client.go:349] do not know if it is external oidc cluster or not, and try to check it again
  I0310 19:42:29.652695 75591 client.go:820] showInfo is true
  I0310 19:42:29.652723 75591 client.go:821] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get authentication/cluster -o=jsonpath={.spec.type}'
  I0310 19:42:30.717782 75591 clusters.go:572] Found authentication type used: 
  I0310 19:42:33.505577 75591 client.go:200] configPath is now "/var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/configfile3479064367"
  I0310 19:42:33.505643 75591 client.go:363] The user is now "e2e-test-default-f52p6-user"
  I0310 19:42:33.505656 75591 client.go:366] Creating project "e2e-test-default-f52p6"
  I0310 19:42:33.787774 75591 client.go:375] Waiting on permissions in project "e2e-test-default-f52p6" ...
  I0310 19:42:34.665272 75591 client.go:436] Waiting for ServiceAccount "default" to be provisioned...
  I0310 19:42:34.983831 75591 client.go:436] Waiting for ServiceAccount "builder" to be provisioned...
  I0310 19:42:35.306084 75591 client.go:436] Waiting for ServiceAccount "deployer" to be provisioned...
  I0310 19:42:35.621398 75591 client.go:446] Waiting for RoleBinding "system:image-builders" to be provisioned...
  I0310 19:42:36.065320 75591 client.go:446] Waiting for RoleBinding "system:deployers" to be provisioned...
  I0310 19:42:36.493407 75591 client.go:446] Waiting for RoleBinding "system:image-pullers" to be provisioned...
  I0310 19:42:36.933354 75591 client.go:477] Project "e2e-test-default-f52p6" has been fully provisioned.
  I0310 19:42:37.184652 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get nodes -o=jsonpath={.items[*].status.nodeInfo.architecture}'
  I0310 19:42:38.937227 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get operatorhubs cluster -o=jsonpath={.spec.disableAllDefaultSources}'
  I0310 19:42:39.998486 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get clusterversion version -o=jsonpath={.spec.capabilities.baselineCapabilitySet}'
  I0310 19:42:41.094418 75591 catalog_source.go:50] set interval to be 10m0s
  I0310 19:42:44.098011 75591 client.go:761] Running 'oc --namespace=e2e-test-default-f52p6 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig process --ignore-unknown-parameters=true -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/fixture-testdata-dir3477913146/test/qe/testdata/olm/catalogsource-image.yaml -p NAME=learn-32613 NAMESPACE=e2e-test-default-f52p6 ADDRESS=quay.io/olmqe/learn-operator-index:v25 SECRET= DISPLAYNAME="OLM QE" PUBLISHER="OLM QE" SOURCETYPE=grpc INTERVAL=10m0s IMAGETEMPLATE=""'
  I0310 19:42:45.165752 75591 helper.go:101] the file of resource is /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-f52p6-6609b77bolm-config.json
  I0310 19:42:45.166425 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig apply -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-f52p6-6609b77bolm-config.json'
  catalogsource.operators.coreos.com/learn-32613 created
  I0310 19:42:46.667025 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get infrastructures.config.openshift.io cluster -o=jsonpath={.status.controlPlaneTopology}'
  I0310 19:42:48.241003 75591 clusters.go:464] topology is HighlyAvailable
  I0310 19:42:48.241427 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get configmaps -n openshift-kube-apiserver config -o=jsonpath={.data.config\.yaml}'
  I0310 19:42:49.511485 75591 catalog_source.go:109] pod-security.kubernetes.io/enforce is restricted
  I0310 19:42:49.511804 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.spec.grpcPodConfig.securityContextConfig}'
  I0310 19:42:51.312349 75591 catalog_source.go:115] spec.grpcPodConfig.securityContextConfig is 
  I0310 19:42:51.312414 75591 catalog_source.go:118] set spec.grpcPodConfig.securityContextConfig to be restricted
  I0310 19:42:51.312774 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig patch catsrc learn-32613 -n e2e-test-default-f52p6 --type=merge -p {"spec":{"grpcPodConfig":{"securityContextConfig":"restricted"}}}'
  catalogsource.operators.coreos.com/learn-32613 patched
  I0310 19:42:52.661164 75591 catalog_source.go:74] create catsrc learn-32613 SUCCESS
  I0310 19:42:52.661212 75591 helper.go:225] Running: oc get AsAdmin(true) WithoutNamespace(true) catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}
  I0310 19:42:55.662777 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:42:56.850702 75591 helper.go:246] ---> we do expect value: READY, in returned value: CONNECTING
  I0310 19:42:56.850804 75591 helper.go:263] ---> Not as expected! Return false
  I0310 19:42:58.666098 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:42:59.718805 75591 helper.go:246] ---> we do expect value: READY, in returned value: CONNECTING
  I0310 19:42:59.718881 75591 helper.go:263] ---> Not as expected! Return false
  I0310 19:43:01.663286 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:43:02.715991 75591 helper.go:246] ---> we do expect value: READY, in returned value: CONNECTING
  I0310 19:43:02.716047 75591 helper.go:263] ---> Not as expected! Return false
  I0310 19:43:04.662805 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:43:05.666244 75591 helper.go:246] ---> we do expect value: READY, in returned value: CONNECTING
  I0310 19:43:05.666278 75591 helper.go:263] ---> Not as expected! Return false
  I0310 19:43:07.663506 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:43:08.694397 75591 helper.go:246] ---> we do expect value: READY, in returned value: CONNECTING
  I0310 19:43:08.694471 75591 helper.go:263] ---> Not as expected! Return false
  I0310 19:43:10.663501 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:43:11.725190 75591 helper.go:246] ---> we do expect value: READY, in returned value: TRANSIENT_FAILURE
  I0310 19:43:11.725250 75591 helper.go:263] ---> Not as expected! Return false
  I0310 19:43:13.663511 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:43:14.702806 75591 helper.go:246] ---> we do expect value: READY, in returned value: TRANSIENT_FAILURE
  I0310 19:43:14.702891 75591 helper.go:263] ---> Not as expected! Return false
  I0310 19:43:16.663563 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:43:17.814872 75591 helper.go:246] ---> we do expect value: READY, in returned value: TRANSIENT_FAILURE
  I0310 19:43:17.814949 75591 helper.go:263] ---> Not as expected! Return false
  I0310 19:43:19.663368 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status..lastObservedState}'
  I0310 19:43:20.717260 75591 helper.go:246] ---> we do expect value: READY, in returned value: READY
  I0310 19:43:20.717356 75591 helper.go:248] the output READY matches one of the content READY, expected
  I0310 19:43:20.718801 75591 client.go:761] Running 'oc --namespace=e2e-test-default-f52p6 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get operatorgroup'
  I0310 19:43:21.814574 75591 og.go:42] No operatorgroup in project: e2e-test-default-f52p6, create one: og-32613
  I0310 19:43:24.816530 75591 client.go:761] Running 'oc --namespace=e2e-test-default-f52p6 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig process --ignore-unknown-parameters=true -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/fixture-testdata-dir3477913146/test/qe/testdata/olm/operatorgroup.yaml -p NAME=og-32613 NAMESPACE=e2e-test-default-f52p6'
  I0310 19:43:25.865659 75591 helper.go:101] the file of resource is /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-f52p6-27f3e30colm-config.json
  I0310 19:43:25.865954 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig apply -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-f52p6-27f3e30colm-config.json'
  operatorgroup.operators.coreos.com/og-32613 created
  I0310 19:43:27.318209 75591 og.go:89] create og og-32613 success
  I0310 19:43:27.318465 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get csv --all-namespaces -o=jsonpath={range .items[*]}{@.metadata.name}{","}{@.metadata.namespace}{":"}{end}'
  I0310 19:43:28.599144 75591 subscription.go:135] getting csv is odf-prometheus-operator.v4.21.0-rhodf, the related NS is [ e2e-test-default-kfvv6                   e2e-test-default-ldt8f]
  I0310 19:43:28.599218 75591 subscription.go:135] getting csv is packageserver, the related NS is [ openshift-operator-lifecycle-manager                  ]
  I0310 19:43:28.599227 75591 subscription.go:138] create sub sub-32613
  I0310 19:43:31.601043 75591 client.go:761] Running 'oc --namespace=e2e-test-default-f52p6 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig process --ignore-unknown-parameters=true -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/fixture-testdata-dir3477913146/test/qe/testdata/olm/olm-subscription.yaml -p SUBNAME=sub-32613 SUBNAMESPACE=e2e-test-default-f52p6 CHANNEL=beta APPROVAL=Automatic OPERATORNAME=learn SOURCENAME=learn-32613 SOURCENAMESPACE=e2e-test-default-f52p6 STARTINGCSV=learn-operator.v0.0.3 CONFIGMAPREF= SECRETREF='
  I0310 19:43:32.720708 75591 helper.go:101] the file of resource is /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-f52p6-e1333c2aolm-config.json
  I0310 19:43:32.721090 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig apply -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-f52p6-e1333c2aolm-config.json'
  subscription.operators.coreos.com/sub-32613 created
  I0310 19:43:39.214201 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status.state}'
  I0310 19:43:40.310226 75591 subscription.go:177] sub sub-32613 state is , not AtLatestKnown
  I0310 19:43:44.213902 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status.state}'
  I0310 19:43:45.311067 75591 subscription.go:177] sub sub-32613 state is , not AtLatestKnown
  I0310 19:43:49.214032 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status.state}'
  I0310 19:43:53.312275 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status.installedCSV}'
  I0310 19:43:54.373410 75591 helper.go:182] $oc get [sub sub-32613 -n e2e-test-default-f52p6 -o=jsonpath={.status.installedCSV}], the returned resource:
  learn-operator.v0.0.3
  I0310 19:43:54.373518 75591 subscription.go:208] the installed CSV name is learn-operator.v0.0.3
  I0310 19:43:54.373546 75591 helper.go:225] Running: oc get AsAdmin(true) WithoutNamespace(true) csv learn-operator.v0.0.3 -n e2e-test-default-f52p6 -o=jsonpath={.status.phase}
  I0310 19:43:57.375715 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get csv learn-operator.v0.0.3 -n e2e-test-default-f52p6 -o=jsonpath={.status.phase}'
  I0310 19:43:59.134287 75591 helper.go:246] ---> we do expect value: Succeeded, in returned value: Succeeded
  I0310 19:43:59.134365 75591 helper.go:248] the output Succeeded matches one of the content Succeeded, expected
  I0310 19:43:59.134859 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get csv --all-namespaces -o=jsonpath={range .items[*]}{@.metadata.name}{","}{@.metadata.namespace}{":"}{end}'
  I0310 19:44:00.934717 75591 subscription.go:135] getting csv is odf-prometheus-operator.v4.21.0-rhodf, the related NS is [ e2e-test-default-kfvv6                   e2e-test-default-ldt8f]
  I0310 19:44:00.934814 75591 subscription.go:135] getting csv is packageserver, the related NS is [ openshift-operator-lifecycle-manager                  ]
  I0310 19:44:00.934822 75591 subscription.go:135] getting csv is learn-operator.v0.0.3, the related NS is [ e2e-test-default-f52p6                  ]
  I0310 19:44:00.934837 75591 subscription.go:138] create sub sub-32613-conflict
  I0310 19:44:03.937577 75591 client.go:761] Running 'oc --namespace=e2e-test-default-f52p6 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig process --ignore-unknown-parameters=true -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/fixture-testdata-dir3477913146/test/qe/testdata/olm/olm-subscription.yaml -p SUBNAME=sub-32613-conflict SUBNAMESPACE=e2e-test-default-f52p6 CHANNEL=beta APPROVAL=Automatic OPERATORNAME=learn SOURCENAME=learn-32613 SOURCENAMESPACE=e2e-test-default-f52p6 STARTINGCSV= CONFIGMAPREF= SECRETREF='
  I0310 19:44:05.112826 75591 helper.go:101] the file of resource is /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-f52p6-6fa47e29olm-config.json
  I0310 19:44:05.113174 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig apply -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-f52p6-6fa47e29olm-config.json'
  subscription.operators.coreos.com/sub-32613-conflict created
  I0310 19:44:06.581391 75591 helper.go:225] Running: oc get AsAdmin(true) WithoutNamespace(true) subs sub-32613-conflict -n e2e-test-default-f52p6 -o=jsonpath={.status.conditions..reason}
  I0310 19:44:09.583805 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get subs sub-32613-conflict -n e2e-test-default-f52p6 -o=jsonpath={.status.conditions..reason}'
  I0310 19:44:10.649060 75591 helper.go:246] ---> we do expect value: ConstraintsNotSatisfiable, in returned value: CatalogSourcesAdded ConstraintsNotSatisfiable
  I0310 19:44:10.649146 75591 helper.go:256] the output CatalogSourcesAdded ConstraintsNotSatisfiable Contains one of the content ConstraintsNotSatisfiable, expected
  I0310 19:44:10.649184 75591 subscription.go:382] remove csv , ns is e2e-test-default-f52p6, the subscription name is: sub-32613-conflict
  I0310 19:44:10.649583 75591 subscription.go:378] remove sub sub-32613-conflict, ns is e2e-test-default-f52p6
  I0310 19:44:10.649859 75591 helper.go:276] removeResource: parameters contain '-n' flag, parameters: [sub sub-32613-conflict -n e2e-test-default-f52p6]
  I0310 19:44:10.650253 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig delete sub sub-32613-conflict -n e2e-test-default-f52p6'
  I0310 19:44:16.935177 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-32613-conflict -n e2e-test-default-f52p6'
  I0310 19:44:18.200365 75591 client.go:795] Error running oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-32613-conflict -n e2e-test-default-f52p6:
  Error from server (NotFound): subscriptions.operators.coreos.com "sub-32613-conflict" not found
  I0310 19:44:18.200467 75591 helper.go:292] the resource is delete successfully
  I0310 19:44:18.200524 75591 catalog_source.go:207] delete carsrc learn-32613, ns is e2e-test-default-f52p6
  I0310 19:44:18.200585 75591 helper.go:276] removeResource: parameters contain '-n' flag, parameters: [catsrc learn-32613 -n e2e-test-default-f52p6]
  I0310 19:44:18.200958 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig delete catsrc learn-32613 -n e2e-test-default-f52p6'
  I0310 19:44:24.538021 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6'
  I0310 19:44:25.793819 75591 client.go:795] Error running oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc learn-32613 -n e2e-test-default-f52p6:
  Error from server (NotFound): catalogsources.operators.coreos.com "learn-32613" not found
  I0310 19:44:25.793893 75591 helper.go:292] the resource is delete successfully
  I0310 19:44:26.836832 75591 client.go:524] Deleted {user.openshift.io/v1, Resource=users  e2e-test-default-f52p6-user}, err: <nil>
  I0310 19:44:27.056501 75591 client.go:524] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-default-f52p6}, err: <nil>
  I0310 19:44:27.279938 75591 client.go:524] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~ZuZbn5OC6zAaAgJBGd6uXJnULB24UZ9dLLRcV-TtJv8}, err: <nil>
    STEP: Destroying namespace "e2e-test-default-f52p6" for this suite. @ 03/10/26 19:44:27.281
  • [125.065 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 125.066 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
  Running Suite:  - /Users/bandrade/redhat/repositories/operator-framework-olm/tests-extension
  ============================================================================================
  Random Seed: 1773182542 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [sig-operator][Jira:OLM] OLMv0 optional should PolarionID:make -[OTP][Skipped:Disconnected]Disjunctive constraint of one package and one GVK
  /Users/bandrade/redhat/repositories/operator-framework-olm/tests-extension/test/qe/specs/olmv0_defaultoption.go:3263
    STEP: Creating a kubernetes client @ 03/10/26 19:44:27.506
  I0310 19:44:32.509130 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig explain template.apiVersion'
  I0310 19:44:34.015346 75591 client.go:349] do not know if it is external oidc cluster or not, and try to check it again
  I0310 19:44:34.015771 75591 client.go:820] showInfo is true
  I0310 19:44:34.015798 75591 client.go:821] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get authentication/cluster -o=jsonpath={.spec.type}'
  I0310 19:44:35.102800 75591 clusters.go:572] Found authentication type used: 
  I0310 19:44:37.106319 75591 client.go:200] configPath is now "/var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/configfile2920095470"
  I0310 19:44:37.106393 75591 client.go:363] The user is now "e2e-test-default-ps9v8-user"
  I0310 19:44:37.106406 75591 client.go:366] Creating project "e2e-test-default-ps9v8"
  I0310 19:44:37.375464 75591 client.go:375] Waiting on permissions in project "e2e-test-default-ps9v8" ...
  I0310 19:44:38.258174 75591 client.go:436] Waiting for ServiceAccount "default" to be provisioned...
  I0310 19:44:38.599443 75591 client.go:436] Waiting for ServiceAccount "builder" to be provisioned...
  I0310 19:44:38.914894 75591 client.go:436] Waiting for ServiceAccount "deployer" to be provisioned...
  I0310 19:44:39.233206 75591 client.go:446] Waiting for RoleBinding "system:image-builders" to be provisioned...
  I0310 19:44:39.662568 75591 client.go:446] Waiting for RoleBinding "system:deployers" to be provisioned...
  I0310 19:44:40.085434 75591 client.go:446] Waiting for RoleBinding "system:image-pullers" to be provisioned...
  I0310 19:44:40.513133 75591 client.go:477] Project "e2e-test-default-ps9v8" has been fully provisioned.
  I0310 19:44:40.730995 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get proxy cluster -o=jsonpath={.status}'
  I0310 19:44:41.801943 75591 helper.go:944] No proxy configuration detected in cluster (status={})
  I0310 19:44:41.802038 75591 helper.go:883] Testing external network connectivity from master node using DebugNodeWithChroot
  I0310 19:44:41.802455 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get node -l node-role.kubernetes.io/master -o jsonpath='{.items[*].metadata.name}''
  I0310 19:44:43.275473 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get ns/default -o=jsonpath={.metadata.labels.pod-security\.kubernetes\.io/enforce}'
  I0310 19:44:44.309754 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get scheduler cluster -o=jsonpath={.spec.defaultNodeSelector}'
  I0310 19:44:45.378531 75591 client.go:820] showInfo is true
  I0310 19:44:45.378590 75591 client.go:821] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig debug node/ip-10-0-106-50.us-west-2.compute.internal --to-namespace=default -- chroot /host bash -c timeout 10 curl -k https://quay.io > /dev/null 2>&1; [ $? -eq 0 ] && echo "connected"'
  I0310 19:44:48.937373 75591 helper.go:902] External network connectivity test succeeded (output: connected
  Starting pod/ip-10-0-106-50us-west-2computeinternal-debug-8jk95 ...
  To use host binaries, run `chroot /host`

  Removing debug pod ...), cluster can access quay.io
  I0310 19:44:48.937461 75591 helper.go:985] Cluster has external network access (connected environment), no mirror validation needed
  I0310 19:44:48.937775 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get nodes -o=jsonpath={.items[*].status.nodeInfo.architecture}'
  I0310 19:44:50.625916 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get operatorhubs cluster -o=jsonpath={.spec.disableAllDefaultSources}'
  I0310 19:44:51.718102 75591 catalog_source.go:50] set interval to be 10m0s
  I0310 19:44:54.721900 75591 client.go:761] Running 'oc --namespace=e2e-test-default-ps9v8 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig process --ignore-unknown-parameters=true -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/fixture-testdata-dir3477913146/test/qe/testdata/olm/catalogsource-image-extract.yaml -p NAME=ocp-47181 NAMESPACE=e2e-test-default-ps9v8 ADDRESS=quay.io/olmqe/ditto-index:41565-cache SECRET= DISPLAYNAME="ocp-47181" PUBLISHER="OLM QE" SOURCETYPE=grpc INTERVAL=10m0s IMAGETEMPLATE=""'
  I0310 19:44:55.759018 75591 helper.go:101] the file of resource is /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-ps9v8-ee2985d2olm-config.json
  I0310 19:44:55.759411 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig apply -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-ps9v8-ee2985d2olm-config.json'
  catalogsource.operators.coreos.com/ocp-47181 created
  I0310 19:44:57.255924 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get infrastructures.config.openshift.io cluster -o=jsonpath={.status.controlPlaneTopology}'
  I0310 19:44:59.399564 75591 clusters.go:464] topology is HighlyAvailable
  I0310 19:44:59.400001 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get configmaps -n openshift-kube-apiserver config -o=jsonpath={.data.config\.yaml}'
  I0310 19:45:00.625582 75591 catalog_source.go:109] pod-security.kubernetes.io/enforce is restricted
  I0310 19:45:00.625970 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc ocp-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.spec.grpcPodConfig.securityContextConfig}'
  I0310 19:45:01.641845 75591 catalog_source.go:115] spec.grpcPodConfig.securityContextConfig is 
  I0310 19:45:01.641917 75591 catalog_source.go:118] set spec.grpcPodConfig.securityContextConfig to be restricted
  I0310 19:45:01.642292 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig patch catsrc ocp-47181 -n e2e-test-default-ps9v8 --type=merge -p {"spec":{"grpcPodConfig":{"securityContextConfig":"restricted"}}}'
  catalogsource.operators.coreos.com/ocp-47181 patched
  I0310 19:45:02.984014 75591 catalog_source.go:74] create catsrc ocp-47181 SUCCESS
  I0310 19:45:12.985552 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc ocp-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.status..lastObservedState}'
  I0310 19:45:14.038892 75591 catalog_source.go:158] catsrc ocp-47181 lastObservedState is TRANSIENT_FAILURE, not READY
  I0310 19:45:22.987546 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc ocp-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.status..lastObservedState}'
  I0310 19:45:24.034899 75591 catalog_source.go:158] catsrc ocp-47181 lastObservedState is TRANSIENT_FAILURE, not READY
  I0310 19:45:32.986349 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get catsrc ocp-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.status..lastObservedState}'
  I0310 19:45:34.031912 75591 catalog_source.go:175] catsrc ocp-47181 lastObservedState is READY
  I0310 19:45:34.032376 75591 client.go:761] Running 'oc --namespace=e2e-test-default-ps9v8 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get operatorgroup'
  I0310 19:45:35.159244 75591 og.go:42] No operatorgroup in project: e2e-test-default-ps9v8, create one: test-og-47181
  I0310 19:45:38.162715 75591 client.go:761] Running 'oc --namespace=e2e-test-default-ps9v8 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig process --ignore-unknown-parameters=true -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/fixture-testdata-dir3477913146/test/qe/testdata/olm/operatorgroup.yaml -p NAME=test-og-47181 NAMESPACE=e2e-test-default-ps9v8'
  I0310 19:45:39.221619 75591 helper.go:101] the file of resource is /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-ps9v8-5f3e8ca8olm-config.json
  I0310 19:45:39.222177 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig apply -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-ps9v8-5f3e8ca8olm-config.json'
  operatorgroup.operators.coreos.com/test-og-47181 created
  I0310 19:45:40.762807 75591 og.go:89] create og test-og-47181 success
  I0310 19:45:40.763201 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get csv --all-namespaces -o=jsonpath={range .items[*]}{@.metadata.name}{","}{@.metadata.namespace}{":"}{end}'
  I0310 19:45:42.018077 75591 subscription.go:135] getting csv is odf-prometheus-operator.v4.21.0-rhodf, the related NS is [ e2e-test-default-kfvv6                   e2e-test-default-ldt8f]
  I0310 19:45:42.018162 75591 subscription.go:135] getting csv is packageserver, the related NS is [ openshift-operator-lifecycle-manager                  ]
  I0310 19:45:42.018177 75591 subscription.go:138] create sub sub-47181
  I0310 19:45:45.021036 75591 client.go:761] Running 'oc --namespace=e2e-test-default-ps9v8 --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig process --ignore-unknown-parameters=true -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/fixture-testdata-dir3477913146/test/qe/testdata/olm/olm-subscription.yaml -p SUBNAME=sub-47181 SUBNAMESPACE=e2e-test-default-ps9v8 CHANNEL=alpha APPROVAL=Automatic OPERATORNAME=ditto-operator SOURCENAME=ocp-47181 SOURCENAMESPACE=e2e-test-default-ps9v8 STARTINGCSV= CONFIGMAPREF= SECRETREF='
  I0310 19:45:46.062646 75591 helper.go:101] the file of resource is /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-ps9v8-7733916folm-config.json
  I0310 19:45:46.063009 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig apply -f /var/folders/z3/8ny0f1dx2d55mpr_jsb0p0qc0000gn/T/e2e-test-default-ps9v8-7733916folm-config.json'
  subscription.operators.coreos.com/sub-47181 created
  I0310 19:45:52.530098 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.status.state}'
  I0310 19:45:53.688438 75591 subscription.go:177] sub sub-47181 state is , not AtLatestKnown
  I0310 19:45:57.530123 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.status.state}'
  I0310 19:45:58.569883 75591 subscription.go:177] sub sub-47181 state is , not AtLatestKnown
  I0310 19:46:02.530153 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.status.state}'
  I0310 19:46:06.604293 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get sub sub-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.status.installedCSV}'
  I0310 19:46:07.916495 75591 helper.go:182] $oc get [sub sub-47181 -n e2e-test-default-ps9v8 -o=jsonpath={.status.installedCSV}], the returned resource:
  ditto-operator.v0.2.0
  I0310 19:46:07.916608 75591 subscription.go:208] the installed CSV name is ditto-operator.v0.2.0
  I0310 19:46:17.918550 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get csv -n e2e-test-default-ps9v8'
  I0310 19:46:20.089536 75591 client.go:524] Deleted {user.openshift.io/v1, Resource=users  e2e-test-default-f52p6-user}, err: users.user.openshift.io "e2e-test-default-f52p6-user" not found
  I0310 19:46:20.308802 75591 client.go:524] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-default-f52p6}, err: oauthclients.oauth.openshift.io "e2e-client-e2e-test-default-f52p6" not found
  I0310 19:46:20.525599 75591 client.go:524] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~ZuZbn5OC6zAaAgJBGd6uXJnULB24UZ9dLLRcV-TtJv8}, err: oauthaccesstokens.oauth.openshift.io "sha256~ZuZbn5OC6zAaAgJBGd6uXJnULB24UZ9dLLRcV-TtJv8" not found
  I0310 19:46:20.764191 75591 client.go:524] Deleted {user.openshift.io/v1, Resource=users  e2e-test-default-ps9v8-user}, err: <nil>
  I0310 19:46:21.008580 75591 client.go:524] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-default-ps9v8}, err: <nil>
  I0310 19:46:21.231781 75591 client.go:524] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~bT9siHLDWBva9Nti9yC-D3zv9g_9BZuFvcY_ko2E-Ro}, err: <nil>
    STEP: Destroying namespace "e2e-test-default-ps9v8" for this suite. @ 03/10/26 19:46:21.232
  • [113.945 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 113.945 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
  Running Suite:  - /Users/bandrade/redhat/repositories/operator-framework-olm/tests-extension
  ============================================================================================
  Random Seed: 1773182542 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [sig-operator][Jira:OLM] OLMv0 within a namespace PolarionID:27680-[OTP][Skipped:Disconnected]OLM Bundle support for Prometheus Types [Serial] [NonHyperShiftHOST]
  /Users/bandrade/redhat/repositories/operator-framework-olm/tests-extension/test/qe/specs/olmv0_nonallns.go:372
    STEP: Creating a kubernetes client @ 03/10/26 19:46:21.452
  I0310 19:46:26.454645 75591 client.go:761] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig explain template.apiVersion'
  I0310 19:46:27.938097 75591 client.go:349] do not know if it is external oidc cluster or not, and try to check it again
  I0310 19:46:27.938321 75591 client.go:820] showInfo is true
  I0310 19:46:27.938332 75591 client.go:821] Running 'oc --kubeconfig=/Users/bandrade/redhat/kubeconfig/cluster-bot-2026-03-10-204700.kubeconfig get authentication/cluster -o=jsonpath={.spec.type}'
  I0310 19:46:28.985076 75591 clusters.go:572] Found authentication type used: 
  

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests-extension/test/qe/specs/olmv0_defaultoption.go`:
- Around line 3307-3315: The current poll in wait.PollUntilContextTimeout only
checks that CSV names appear in oc get csv output; change the predicate to
verify each CSV's .status.phase == "Succeeded" (e.g., by invoking
oc.AsAdmin().WithoutNamespace().Run("get").Args("csv", "<csv-name>", "-n",
sub.Namespace, "-o", "jsonpath={.status.phase}") or by parsing structured JSON
for both "ditto-operator" and "planetscale-operator") so the function
wait.PollUntilContextTimeout only returns true when both CSVs are Succeeded;
keep the existing olmv0util.LogDebugInfo(oc, sub.Namespace, ...) and
exutil.AssertWaitPollNoErr(waitErr, ...) behavior for failure reporting.
- Line 3266: The test currently calls exutil.SkipIfDisableDefaultCatalogsource
which incorrectly skips the entire self-contained CatalogSource scenario; remove
that call from olmv0_defaultoption.go so the test always runs (or replace it
with a short comment asserting the test creates and uses its own CatalogSource)
and make sure the code paths using create/consume of the test-local
CatalogSource (the surrounding test in olmv0_defaultoption.go) do not rely on
operatorhub/cluster or default sources; reference
exutil.SkipIfDisableDefaultCatalogsource to locate and remove/replace the guard.
- Around line 809-824: The test is creating a duplicate subscription to the same
package ("learn") so it triggers the same-package conflict instead of the
intended "dependency CSV already installed" scenario; update the second
SubscriptionDescription (SubName "sub-32613") so its OperatorPackage and
StartingCSV target a package that depends on the already-installed CSV (or swap
to the provided fixture that preserves the dependency relationship) and ensure
CatalogSourceName/CatalogSourceNamespace remain correct for that package; make
the same change for the other occurrence around the second block (the similar
SubscriptionDescription at the other location) so the test covers the
dependency-installed path rather than duplicate-subscription.

In `@tests-extension/test/qe/specs/olmv0_nonallns.go`:
- Around line 406-435: The code currently swallows errors from
olmv0util.ClusterPackageExists, the packagemanifest get, and
exutil.GetSchedulableLinuxWorkerNodes and fabricates defaults (e.g., setting
sub.Channel = "stable") or skips, which hides transient failures; change the
handling so that errors from ClusterPackageExists, the
oc.AsAdmin().WithoutNamespace().Run("get") call that fetches packagemanifest,
and exutil.GetSchedulableLinuxWorkerNodes are checked and cause the test to fail
with the actual error (use g.Fatalf or the test framework's fail function)
instead of ignoring them or defaulting the channel, and only proceed when the
packagemanifest lookup returns a non-empty defaultChannel (otherwise fail with
the error/output), and similarly fail with the worker node lookup error or a
clear failure message if no schedulable READY worker is found.
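
The CSV-phase predicate suggested in the first inline comment above can be sketched as a small pure helper: parse the `name,phase` pairs produced by the jsonpath query and report success only when every wanted CSV prefix is in phase `Succeeded`. The helper name and inputs are illustrative, not code from this PR.

```go
package main

import (
	"fmt"
	"strings"
)

// allSucceeded parses the "name,phase" lines produced by the jsonpath
// query and reports true only when every wanted CSV name prefix maps to
// a CSV in phase "Succeeded". Helper name and inputs are illustrative.
func allSucceeded(csvList string, wantPrefixes []string) bool {
	succeeded := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(csvList), "\n") {
		parts := strings.SplitN(line, ",", 2)
		if len(parts) != 2 {
			continue
		}
		if strings.TrimSpace(parts[1]) == "Succeeded" {
			succeeded[strings.TrimSpace(parts[0])] = true
		}
	}
	for _, prefix := range wantPrefixes {
		found := false
		for name := range succeeded {
			if strings.HasPrefix(name, prefix) {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	out := "ditto-operator.v0.2.0,Succeeded\nplanetscale-operator.v0.1.8,Installing\n"
	fmt.Println(allSucceeded(out, []string{"ditto-operator", "planetscale-operator"})) // prints "false"
}
```

Plugging a predicate like this into the poll makes the wait terminate only once both operators are actually installed, not merely listed.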

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: aeee2c3e-d796-4487-bf74-8f5dad3fbb0d

📥 Commits

Reviewing files that changed from the base of the PR and between bea777e and c11bf70.

📒 Files selected for processing (2)
  • tests-extension/test/qe/specs/olmv0_defaultoption.go
  • tests-extension/test/qe/specs/olmv0_nonallns.go

@bandrade bandrade force-pushed the codex/remove-etcd-from-tests branch from c11bf70 to ef86b52 on March 10, 2026 23:23
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests-extension/test/qe/specs/olmv0_defaultoption.go`:
- Around line 826-838: The test uses SubscriptionDescription (sub) with
OperatorPackage "sample-operator" and expects "learn-operator.v0.0.3" in the
external catalog but those packages/versions don't exist; either point the
subscription to available operator bundles or add the missing bundles. Update
the SubscriptionDescription fields (OperatorPackage, Channel,
CatalogSourceName/Namespace or Template) used by sub.CreateWithoutCheck and the
subsequent olmv0util.NewCheck invocation to reference existing packages (e.g.,
learn-operator v0.0.1 or v0.0.2) or create and publish the missing
sample-operator and learn-operator v0.0.3 bundles to
quay.io/olmqe/learn-operator-index:v25 so the subscription check
(ConstraintsNotSatisfiable) targets valid catalog content.

In `@tests-extension/test/qe/specs/olmv0_nonallns.go`:
- Around line 419-447: The code selects firstNode using worker.Name and
re-checks Ready conditions unnecessarily; change it to use the node's
kubernetes.io/hostname label value for NODE_NAME so the Prometheus affinity
matches the template: after calling GetSchedulableLinuxWorkerNodes(oc) pick the
first worker (no extra Ready condition loop) and extract its
labels["kubernetes.io/hostname"] (falling back to worker.Name only if the label
is missing) and pass that value into olmv0util.ApplyResourceFromTemplate for
NODE_NAME when rendering prometheusTemplate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 33abb12e-0049-4175-a7b7-955f1aab5571

📥 Commits

Reviewing files that changed from the base of the PR and between c11bf70 and ef86b52.

📒 Files selected for processing (2)
  • tests-extension/test/qe/specs/olmv0_defaultoption.go
  • tests-extension/test/qe/specs/olmv0_nonallns.go

Comment on lines +419 to +447
workerNodes, workerErr := exutil.GetSchedulableLinuxWorkerNodes(oc)
o.Expect(workerErr).NotTo(o.HaveOccurred(), "failed to list schedulable linux worker nodes")
firstNode := ""
for _, worker := range workerNodes {
for _, con := range worker.Status.Conditions {
if con.Type == "Ready" && con.Status == "True" {
firstNode = worker.Name
break
}
}
if firstNode != "" {
break
}
}
if firstNode == "" {
e2e.Failf("no schedulable worker node in READY state found")
}

g.By("Install the Prometheus operator with Automatic approval")
defer sub.Delete(itName, dr)
defer sub.DeleteCSV(itName, dr)
sub.Create(oc, itName, dr)
olmv0util.NewCheck("expect", exutil.AsAdmin, exutil.WithoutNamespace, exutil.Compare, "Succeeded", exutil.Ok, []string{"csv", csvName, "-n", ns, "-o=jsonpath={.status.phase}"}).Check(oc)
olmv0util.NewCheck("expect", exutil.AsAdmin, exutil.WithoutNamespace, exutil.Compare, "Succeeded", exutil.Ok, []string{"csv", sub.InstalledCSV, "-n", ns, "-o=jsonpath={.status.phase}"}).Check(oc)

g.By("Assert that prometheus dependency is resolved")
msg, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("csv", "-n", ns).Output()
g.By("Create a Prometheus resource that relies on Prometheus bundle types")
defer func() {
_ = oc.AsAdmin().WithoutNamespace().Run("delete").Args("prometheus", "example", "-n", ns, "--ignore-not-found").Execute()
}()
err = olmv0util.ApplyResourceFromTemplate(oc, "--ignore-unknown-parameters=true", "-f", prometheusTemplate, "-p", fmt.Sprintf("NAMESPACE=%s", ns), fmt.Sprintf("NODE_NAME=%s", firstNode))

⚠️ Potential issue | 🟡 Minor

Use the hostname label value for NODE_NAME.

Line 425 passes worker.Name, but the template matches on kubernetes.io/hostname. Those values are not guaranteed to be identical, so this can generate a Prometheus CR whose affinity matches no worker on some platforms. Also, GetSchedulableLinuxWorkerNodes already filters to Ready workers, so the extra condition scan is redundant.

♻️ Suggested fix
 		workerNodes, workerErr := exutil.GetSchedulableLinuxWorkerNodes(oc)
 		o.Expect(workerErr).NotTo(o.HaveOccurred(), "failed to list schedulable linux worker nodes")
 		firstNode := ""
 		for _, worker := range workerNodes {
-			for _, con := range worker.Status.Conditions {
-				if con.Type == "Ready" && con.Status == "True" {
-					firstNode = worker.Name
-					break
-				}
-			}
+			firstNode = strings.TrimSpace(worker.Labels["kubernetes.io/hostname"])
 			if firstNode != "" {
 				break
 			}
 		}
 		if firstNode == "" {
-			e2e.Failf("no schedulable worker node in READY state found")
+			e2e.Failf("no schedulable worker node with kubernetes.io/hostname label found")
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_nonallns.go` around lines 419 - 447, The
code selects firstNode using worker.Name and re-checks Ready conditions
unnecessarily; change it to use the node's kubernetes.io/hostname label value
for NODE_NAME so the Prometheus affinity matches the template: after calling
GetSchedulableLinuxWorkerNodes(oc) pick the first worker (no extra Ready
condition loop) and extract its labels["kubernetes.io/hostname"] (falling back
to worker.Name only if the label is missing) and pass that value into
olmv0util.ApplyResourceFromTemplate for NODE_NAME when rendering
prometheusTemplate.
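
The label-with-fallback selection described in that prompt can be shown with a minimal stand-in type (not the real `corev1.Node`):

```go
package main

import "fmt"

// node is an illustrative stand-in for corev1.Node.
type node struct {
	Name   string
	Labels map[string]string
}

// nodeHostname returns the kubernetes.io/hostname label of the first
// node that carries one, falling back to the node name only when the
// label is missing, so the rendered affinity matches what kubelet sets.
func nodeHostname(nodes []node) string {
	for _, n := range nodes {
		if h := n.Labels["kubernetes.io/hostname"]; h != "" {
			return h
		}
		if n.Name != "" {
			return n.Name
		}
	}
	return ""
}

func main() {
	nodes := []node{{
		Name:   "ip-10-0-106-50.us-west-2.compute.internal",
		Labels: map[string]string{"kubernetes.io/hostname": "ip-10-0-106-50"},
	}}
	fmt.Println(nodeHostname(nodes)) // prints "ip-10-0-106-50"
}
```

The value returned here would be what gets passed as `NODE_NAME` when rendering the Prometheus template.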

@jianzhangbjz
Member

/payload-job periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

@openshift-ci
Contributor

openshift-ci bot commented Mar 11, 2026

@jianzhangbjz: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/1a09b100-1ce4-11f1-8f8f-7ba332b0ae18-0

@jianzhangbjz
Member

@bandrade bandrade force-pushed the codex/remove-etcd-from-tests branch 2 times, most recently from 1bb1ce2 to a629db7 on March 12, 2026 21:24
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
tests-extension/test/qe/specs/olmv0_nonallns.go (1)

419-447: ⚠️ Potential issue | 🟠 Major

Use hostname label for NODE_NAME (and drop redundant Ready scan).

At Line 425, worker.Name is used for NODE_NAME, but the template affinity matches kubernetes.io/hostname. These values can differ on some platforms, causing false targeting. Also, GetSchedulableLinuxWorkerNodes already guarantees Ready workers, so the extra condition loop is redundant.

♻️ Proposed fix
 		workerNodes, workerErr := exutil.GetSchedulableLinuxWorkerNodes(oc)
 		o.Expect(workerErr).NotTo(o.HaveOccurred(), "failed to list schedulable linux worker nodes")
 		firstNode := ""
 		for _, worker := range workerNodes {
-			for _, con := range worker.Status.Conditions {
-				if con.Type == "Ready" && con.Status == "True" {
-					firstNode = worker.Name
-					break
-				}
-			}
+			firstNode = strings.TrimSpace(worker.Labels["kubernetes.io/hostname"])
+			if firstNode == "" {
+				firstNode = strings.TrimSpace(worker.Name)
+			}
 			if firstNode != "" {
 				break
 			}
 		}
 		if firstNode == "" {
-			e2e.Failf("no schedulable worker node in READY state found")
+			e2e.Failf("no schedulable worker node with hostname value found")
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_nonallns.go` around lines 419 - 447, The
code currently iterates over workerNodes from
exutil.GetSchedulableLinuxWorkerNodes and sets firstNode = worker.Name after an
extra Ready-condition scan (redundant because GetSchedulableLinuxWorkerNodes
already returns ready schedulable nodes) and uses that Name for NODE_NAME even
though the pod affinity in the template matches the kubernetes.io/hostname
label; update the logic in the block that computes firstNode to stop the manual
Ready scan and instead read the hostname label
(worker.Labels["kubernetes.io/hostname"]) from the first worker in workerNodes,
assign that value to firstNode, and only fallback to worker.Name if the hostname
label is missing so the ApplyResourceFromTemplate call uses the correct
NODE_NAME for affinity matching (refer to variables/funcs: workerNodes,
firstNode, exutil.GetSchedulableLinuxWorkerNodes, ApplyResourceFromTemplate).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests-extension/test/qe/specs/olmv0_defaultoption.go`:
- Around line 3306-3315: The poll callback used by wait.PollUntilContextTimeout
currently calls o.Expect(err).NotTo(o.HaveOccurred()) which will abort the test
on any transient oc.AsAdmin().WithoutNamespace().Run(...).Output() error;
instead, change the callback to not call o.Expect on the Output() error—if err
!= nil then return false, nil so the poll will retry (optionally log the error)
and only use o.Expect assertions after the poll succeeds (e.g., when evaluating
hasDittoReady/hasPlanetscaleReady or after the wait returns); keep the same
logic inside the callback but replace the o.Expect line with a transient-error
handling return to ensure retries.
- Around line 3274-3283: The CatalogSourceDescription instance `cs` is missing
the Secret field which causes the template to be rendered with an empty SECRET
value making `secrets: - ""` invalid; update the
`olmv0util.CatalogSourceDescription` literal (the `cs` variable) to set `Secret`
to a valid Kubernetes secret name (or switch to a template that does not require
SECRET) so that the `Create` call receives a non-empty secret name when applying
the `Template` (`csImageTemplate`).

---

Duplicate comments:
In `@tests-extension/test/qe/specs/olmv0_nonallns.go`:
- Around line 419-447: The code currently iterates over workerNodes from
exutil.GetSchedulableLinuxWorkerNodes and sets firstNode = worker.Name after an
extra Ready-condition scan (redundant because GetSchedulableLinuxWorkerNodes
already returns ready schedulable nodes) and uses that Name for NODE_NAME even
though the pod affinity in the template matches the kubernetes.io/hostname
label; update the logic in the block that computes firstNode to stop the manual
Ready scan and instead read the hostname label
(worker.Labels["kubernetes.io/hostname"]) from the first worker in workerNodes,
assign that value to firstNode, and only fallback to worker.Name if the hostname
label is missing so the ApplyResourceFromTemplate call uses the correct
NODE_NAME for affinity matching (refer to variables/funcs: workerNodes,
firstNode, exutil.GetSchedulableLinuxWorkerNodes, ApplyResourceFromTemplate).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 56b121c5-3e11-4011-8ddc-ca666ed48079

📥 Commits

Reviewing files that changed from the base of the PR and between ef86b52 and a629db7.

📒 Files selected for processing (2)
  • tests-extension/test/qe/specs/olmv0_defaultoption.go
  • tests-extension/test/qe/specs/olmv0_nonallns.go

Comment on lines +3274 to 3283
csImageTemplate := filepath.Join(buildPruningBaseDir, "catalogsource-image-extract.yaml")
cs := olmv0util.CatalogSourceDescription{
Name: "ocp-47181",
Namespace: oc.Namespace(),
DisplayName: "ocp-47181",
Publisher: "OLM QE",
SourceType: "grpc",
-Address: "quay.io/olmqe/etcd-47181:1.0",
+Address: "quay.io/olmqe/ditto-index:41565-cache",
Template: csImageTemplate,
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check CatalogSourceDescription struct definition
rg -nA 30 'type CatalogSourceDescription struct' tests-extension/test/qe/util/olmv0util/

# Check how the template is applied and if SECRET can be empty
rg -n 'SECRET' tests-extension/test/qe/testdata/olm/catalogsource-image-extract.yaml

Repository: openshift/operator-framework-olm

Length of output: 3613


🏁 Script executed:

#!/bin/bash
# Get the Create method implementation to see how Secret is handled
rg -nA 50 'func (catsrc \*CatalogSourceDescription) Create' tests-extension/test/qe/util/olmv0util/catalog_source.go

# Get the full catalogsource-image-extract.yaml template
cat -n tests-extension/test/qe/testdata/olm/catalogsource-image-extract.yaml

Repository: openshift/operator-framework-olm

Length of output: 1118


🏁 Script executed:

#!/bin/bash
# Get more of the Create method implementation
sed -n '46,150p' tests-extension/test/qe/util/olmv0util/catalog_source.go

Repository: openshift/operator-framework-olm

Length of output: 5395


Missing Secret field in CatalogSourceDescription.

The CatalogSourceDescription struct has a Secret field, but the test code does not set it. When the Create method applies the template, it passes SECRET= (empty string), which results in:

secrets:
- ""

An empty string is not a valid Kubernetes secret name and will cause the CatalogSource creation to fail.

Set the Secret field in the CatalogSourceDescription struct, or use a template that does not require a secret parameter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_defaultoption.go` around lines 3274 -
3283, The CatalogSourceDescription instance `cs` is missing the Secret field
which causes the template to be rendered with an empty SECRET value making
`secrets: - ""` invalid; update the `olmv0util.CatalogSourceDescription` literal
(the `cs` variable) to set `Secret` to a valid Kubernetes secret name (or switch
to a template that does not require SECRET) so that the `Create` call receives a
non-empty secret name when applying the `Template` (`csImageTemplate`).
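
One way to avoid the invalid `secrets: - ""` rendering is to make the secrets block conditional on a non-empty value. The sketch below uses Go's text/template to show the idea; the project's fixtures are actually `oc process` templates, so this is an illustration of the approach, not the repository's template engine.

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// manifestTmpl only emits a secrets list when Secret is non-empty,
// avoiding the invalid `secrets: - ""` rendering. Illustrative only.
const manifestTmpl = `apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: {{ .Name }}
spec:
  sourceType: grpc
  image: {{ .Image }}
{{- if .Secret }}
  secrets:
  - {{ .Secret }}
{{- end }}
`

type catsrc struct {
	Name, Image, Secret string
}

func render(c catsrc) string {
	var buf bytes.Buffer
	t := template.Must(template.New("cs").Parse(manifestTmpl))
	if err := t.Execute(&buf, c); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// With an empty Secret the manifest contains no secrets list at all.
	fmt.Print(render(catsrc{Name: "ocp-47181", Image: "quay.io/olmqe/ditto-index:41565-cache"}))
}
```

The equivalent in the test code would be either setting `Secret` on the `CatalogSourceDescription` or using a template variant that drops the `SECRET` parameter entirely.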

Comment on lines +3306 to +3315
waitErr := wait.PollUntilContextTimeout(context.TODO(), 10*time.Second, 360*time.Second, false, func(ctx context.Context) (bool, error) {
csvList, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("csv", "-n", sub.Namespace, "-o=jsonpath={range .items[*]}{@.metadata.name}{\",\"}{@.status.phase}{\"\\n\"}{end}").Output()
o.Expect(err).NotTo(o.HaveOccurred())
hasDittoReady := false
hasPlanetscaleReady := false
for _, line := range strings.Split(strings.TrimSpace(csvList), "\n") {
parts := strings.Split(line, ",")
if len(parts) != 2 {
continue
}

⚠️ Potential issue | 🟡 Minor

o.Expect inside poll function will abort on transient errors.

Line 3309 uses o.Expect(err).NotTo(o.HaveOccurred()) inside the polling function. If oc.Run fails transiently (e.g., temporary API unavailability), the test will fail immediately rather than retrying. The poll pattern should handle transient errors gracefully.

🛠️ Suggested fix
 waitErr := wait.PollUntilContextTimeout(context.TODO(), 10*time.Second, 360*time.Second, false, func(ctx context.Context) (bool, error) {
     csvList, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("csv", "-n", sub.Namespace, "-o=jsonpath={range .items[*]}{@.metadata.name}{\",\"}{@.status.phase}{\"\\n\"}{end}").Output()
-    o.Expect(err).NotTo(o.HaveOccurred())
+    if err != nil {
+        e2e.Logf("Failed to get CSV list, retrying: %v", err)
+        return false, nil
+    }
     hasDittoReady := false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_defaultoption.go` around lines 3306 -
3315, The poll callback used by wait.PollUntilContextTimeout currently calls
o.Expect(err).NotTo(o.HaveOccurred()) which will abort the test on any transient
oc.AsAdmin().WithoutNamespace().Run(...).Output() error; instead, change the
callback to not call o.Expect on the Output() error—if err != nil then return
false, nil so the poll will retry (optionally log the error) and only use
o.Expect assertions after the poll succeeds (e.g., when evaluating
hasDittoReady/hasPlanetscaleReady or after the wait returns); keep the same
logic inside the callback but replace the o.Expect line with a transient-error
handling return to ensure retries.

@bandrade
Contributor Author

/payload-job periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

@openshift-ci
Contributor

openshift-ci bot commented Mar 12, 2026

@bandrade: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/d0467c50-1e60-11f1-9352-ada36cddc92f-0

@jianzhangbjz
Member

@bandrade bandrade force-pushed the codex/remove-etcd-from-tests branch from a629db7 to d3d68f7 Compare March 17, 2026 17:06

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
tests-extension/test/qe/specs/olmv0_nonallns.go (1)

419-435: ⚠️ Potential issue | 🟠 Major

Use hostname label for NODE_NAME to avoid affinity mismatches.

At Line 425, firstNode = worker.Name can diverge from kubernetes.io/hostname, which can make the rendered Prometheus node affinity unschedulable on some platforms.

♻️ Suggested fix
 		workerNodes, workerErr := exutil.GetSchedulableLinuxWorkerNodes(oc)
 		o.Expect(workerErr).NotTo(o.HaveOccurred(), "failed to list schedulable linux worker nodes")
 		firstNode := ""
 		for _, worker := range workerNodes {
-			for _, con := range worker.Status.Conditions {
-				if con.Type == "Ready" && con.Status == "True" {
-					firstNode = worker.Name
-					break
-				}
-			}
+			firstNode = strings.TrimSpace(worker.Labels["kubernetes.io/hostname"])
+			if firstNode == "" {
+				firstNode = worker.Name
+			}
 			if firstNode != "" {
 				break
 			}
 		}
 		if firstNode == "" {
-			e2e.Failf("no schedulable worker node in READY state found")
+			e2e.Failf("no schedulable worker node with hostname label found")
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_nonallns.go` around lines 419 - 435, The
code selects a node by assigning worker.Name to firstNode, but Pod node affinity
uses the kubernetes.io/hostname label and this can differ from the Node.Name;
change the assignment in the loop to read the hostname label from the node
object (use worker.Labels["kubernetes.io/hostname"]) and set firstNode to that
value, falling back to worker.Name only if the label is missing; update
references that construct NODE_NAME to use this hostname-derived firstNode so
affinity matches actual node labels.
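The label-with-fallback selection the fix describes is a one-liner worth isolating. Below, `nodeName` is a hypothetical helper mirroring the suggested diff; the node name and label values are assumptions illustrating a platform (such as AWS) where the two differ:

```go
package main

import (
	"fmt"
	"strings"
)

// nodeName returns the value a node-affinity rule keyed on
// kubernetes.io/hostname will actually match: the hostname label,
// falling back to the object name only when the label is missing.
func nodeName(name string, labels map[string]string) string {
	if h := strings.TrimSpace(labels["kubernetes.io/hostname"]); h != "" {
		return h
	}
	return name
}

func main() {
	// On AWS, Node.Name and the hostname label commonly differ, so using
	// Node.Name in the rendered template makes the pod unschedulable.
	fmt.Println(nodeName("ip-10-0-1-23.ec2.internal",
		map[string]string{"kubernetes.io/hostname": "ip-10-0-1-23"}))
	// When the label is absent, the object name is the best remaining guess.
	fmt.Println(nodeName("worker-0", nil))
}
```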
🧹 Nitpick comments (1)
tests-extension/test/qe/specs/olmv0_defaultoption.go (1)

3389-3410: Consider adding debug logging on failure for consistency.

Unlike the PolarionID:47181 test at lines 3340-3342, this test doesn't log debug info when the poll fails. Adding olmv0util.LogDebugInfo here would improve troubleshooting consistency across similar tests.

♻️ Add debug logging on failure
     return hasDittoReady && hasPlanetscaleReady, nil
 })
+if waitErr != nil {
+    olmv0util.LogDebugInfo(oc, sub.Namespace, "pod", "ip", "csv", "events")
+}
 exutil.AssertWaitPollNoErr(waitErr, "csv ditto-operator or planetscale-operator was not Succeeded nor Installing")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_defaultoption.go` around lines 3389 -
3410, The poll loop can fail without emitting debug info; before calling
exutil.AssertWaitPollNoErr add a call to olmv0util.LogDebugInfo so failures get
the same troubleshooting output as the other test. Specifically, after waitErr
is set (the variable from wait.PollUntilContextTimeout) and before
exutil.AssertWaitPollNoErr, invoke olmv0util.LogDebugInfo passing the
subscription namespace and the oc client (use sub.Namespace and oc) so logs are
captured when the CSV readiness check fails.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3cb5a9a9-78d2-4a70-a02c-dfacaa33527f

📥 Commits

Reviewing files that changed from the base of the PR and between a629db7 and d3d68f7.

📒 Files selected for processing (2)
  • tests-extension/test/qe/specs/olmv0_defaultoption.go
  • tests-extension/test/qe/specs/olmv0_nonallns.go

@bandrade
Contributor Author

/payload-job periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

@openshift-ci
Contributor

openshift-ci bot commented Mar 17, 2026

@bandrade: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9a983770-2224-11f1-8063-3f702d4b503e-0

@bandrade
Contributor Author

@jianzhangbjz looks good now, can you please take a look? Thank you

@bandrade bandrade force-pushed the codex/remove-etcd-from-tests branch from d3d68f7 to 8bb4202 Compare March 18, 2026 14:30
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 18, 2026
@openshift-merge-robot
Contributor

PR needs rebase.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
tests-extension/test/qe/specs/olmv0_nonallns.go (1)

419-447: ⚠️ Potential issue | 🟠 Major

Use hostname label for NODE_NAME to avoid unschedulable Prometheus

Line 425 uses worker.Name, but the applied template matches on kubernetes.io/hostname. Those values are not guaranteed to match, so this can fail scheduling on some platforms.

Suggested fix
 		workerNodes, workerErr := exutil.GetSchedulableLinuxWorkerNodes(oc)
 		o.Expect(workerErr).NotTo(o.HaveOccurred(), "failed to list schedulable linux worker nodes")
 		firstNode := ""
 		for _, worker := range workerNodes {
-			for _, con := range worker.Status.Conditions {
-				if con.Type == "Ready" && con.Status == "True" {
-					firstNode = worker.Name
-					break
-				}
-			}
+			firstNode = strings.TrimSpace(worker.Labels["kubernetes.io/hostname"])
+			if firstNode == "" {
+				firstNode = worker.Name
+			}
 			if firstNode != "" {
 				break
 			}
 		}
 		if firstNode == "" {
-			e2e.Failf("no schedulable worker node in READY state found")
+			e2e.Failf("no schedulable worker node with usable hostname found")
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_nonallns.go` around lines 419 - 447, The
code sets NODE_NAME using worker.Name (firstNode) but the Prometheus podSelector
matches the kubernetes.io/hostname label; change the node selection to use the
node's kubernetes.io/hostname label: when iterating workerNodes in the block
that sets firstNode (and before calling olmv0util.ApplyResourceFromTemplate with
NODE_NAME and prometheusTemplate), read worker.Labels["kubernetes.io/hostname"]
and assign that to firstNode (with a safe fallback to worker.Name only if the
hostname label is missing) so the applied template uses the label value the
scheduler matches.
tests-extension/test/qe/specs/olmv0_defaultoption.go (2)

818-847: ⚠️ Potential issue | 🟠 Major

This has drifted back to the duplicate-subscription path.

Both subscriptions target learn from the same catalog and namespace, so PolarionID:32613 now overlaps with the same-package conflict case already covered later in this file at Lines 2198-2277 instead of exercising “dependency CSV already installed.” Point the second subscription at a package with a real dependency on the first install, or reuse a fixture that preserves that relationship.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_defaultoption.go` around lines 818 - 847,
The second SubscriptionDescription (currently SubName "sub-32613-conflict") is
pointing to the same OperatorPackage ("learn") and thus hits the
duplicate-subscription path; update the test so the second subscription targets
a package that has a declared dependency on the first installed CSV (or reuse an
existing fixture that preserves that dependency relationship) by changing the
OperatorPackage and any related fields in the SubscriptionDescription used in
the second block (the instance created with sub.CreateWithoutCheck and
referenced in the Contain check); ensure the jsonpath check still asserts for
"ConstraintsNotSatisfiable" against the correct subscription name.

3252-3273: ⚠️ Potential issue | 🟠 Major

Make these CSV polls retry-friendly and wait for Succeeded only.

o.Expect inside the callback turns a transient oc get csv failure into an immediate spec failure, and treating Installing as success lets the test pass before the dependency chain actually finishes.

Suggested change
 waitErr := wait.PollUntilContextTimeout(context.TODO(), 10*time.Second, 360*time.Second, false, func(ctx context.Context) (bool, error) {
     csvList, err := oc.AsAdmin().WithoutNamespace().Run("get").Args(
         "csv", "-n", sub.Namespace,
         "-o=jsonpath={range .items[*]}{@.metadata.name}{\",\"}{@.status.phase}{\"\\n\"}{end}",
     ).Output()
-    o.Expect(err).NotTo(o.HaveOccurred())
+    if err != nil {
+        e2e.Logf("failed to list CSVs in %s, retrying: %v", sub.Namespace, err)
+        return false, nil
+    }

     hasDittoReady := false
     hasPlanetscaleReady := false
     ...
-    if strings.Contains(name, "ditto-operator") && (phase == "Succeeded" || phase == "Installing") {
+    if strings.Contains(name, "ditto-operator") && phase == "Succeeded" {
         hasDittoReady = true
     }
-    if strings.Contains(name, "planetscale-operator") && (phase == "Succeeded" || phase == "Installing") {
+    if strings.Contains(name, "planetscale-operator") && phase == "Succeeded" {
         hasPlanetscaleReady = true
     }

     return hasDittoReady && hasPlanetscaleReady, nil
 })
-exutil.AssertWaitPollNoErr(waitErr, "csv ditto-operator or planetscale-operator was not Succeeded nor Installing")
+exutil.AssertWaitPollNoErr(waitErr, "csv ditto-operator or planetscale-operator was not Succeeded")

Also applies to: 3320-3344, 3390-3411

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_defaultoption.go` around lines 3252 -
3273, The callback passed to wait.PollUntilContextTimeout should be
retry-friendly and only consider CSV phase "Succeeded" as success: remove the
o.Expect(err).NotTo... assertion inside the callback (do not fail the spec on
transient oc get errors); instead, if err != nil return false, nil so the poll
retries, and only set hasDittoReady/hasPlanetscaleReady when phase ==
"Succeeded" (remove "Installing" as a success state). Update the code around
wait.PollUntilContextTimeout, the
oc.AsAdmin().WithoutNamespace().Run("get").Args(...).Output() call, and the
variables hasDittoReady/hasPlanetscaleReady accordingly; apply the same changes
to the similar blocks at the other locations referenced (lines 3320-3344 and
3390-3411).
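The "Succeeded only" half of this fix is plain string parsing of the jsonpath output, so it can be checked standalone. `csvsReady` below is a hypothetical helper sketching that logic under the same `name,phase` line format the test's jsonpath emits:

```go
package main

import (
	"fmt"
	"strings"
)

// csvsReady parses `name,phase` lines (the format produced by the test's
// jsonpath query over CSVs) and reports whether every operator in `want`
// has reached phase Succeeded. Installing is deliberately NOT treated as
// success: an Installing CSV can still fail, so counting it lets the test
// pass before the dependency chain actually finishes.
func csvsReady(csvList string, want ...string) bool {
	succeeded := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(csvList), "\n") {
		parts := strings.Split(line, ",")
		if len(parts) != 2 {
			continue // skip malformed lines
		}
		name, phase := parts[0], parts[1]
		if phase != "Succeeded" {
			continue
		}
		for _, w := range want {
			if strings.Contains(name, w) {
				succeeded[w] = true
			}
		}
	}
	for _, w := range want {
		if !succeeded[w] {
			return false
		}
	}
	return true
}

func main() {
	// Operand still Installing: the dependency chain is not finished yet.
	out := "ditto-operator.v0.1.1,Succeeded\nplanetscale-operator.v0.1.8,Installing\n"
	fmt.Println(csvsReady(out, "ditto-operator", "planetscale-operator")) // false
}
```

The CSV names and versions above are illustrative, not taken from a real cluster run.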

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e2a0a0f7-0afb-48e6-8cb8-d952ea91664d

📥 Commits

Reviewing files that changed from the base of the PR and between d3d68f7 and 8bb4202.

📒 Files selected for processing (2)
  • tests-extension/test/qe/specs/olmv0_defaultoption.go
  • tests-extension/test/qe/specs/olmv0_nonallns.go

Comment on lines +449 to +451
// The package used for this test guarantees Prometheus types are served, but may not report
// status.conditions[0].type=Available for this standalone CR in all environments.
olmv0util.NewCheck("expect", exutil.AsAdmin, exutil.WithoutNamespace, exutil.Compare, "example2", exutil.Ok, []string{"Prometheus", "example", "-n", ns, "-o=jsonpath={.metadata.name}{.spec.replicas}"}).Check(oc)

⚠️ Potential issue | 🟠 Major

Prometheus assertion expects the wrong replicas value

Line 451 expects "example2", but the referenced template sets spec.replicas: 1; {.metadata.name}{.spec.replicas} should be "example1" unless the template is intentionally changed.

Suggested fix
-		olmv0util.NewCheck("expect", exutil.AsAdmin, exutil.WithoutNamespace, exutil.Compare, "example2", exutil.Ok, []string{"Prometheus", "example", "-n", ns, "-o=jsonpath={.metadata.name}{.spec.replicas}"}).Check(oc)
+		olmv0util.NewCheck("expect", exutil.AsAdmin, exutil.WithoutNamespace, exutil.Compare, "example1", exutil.Ok, []string{"Prometheus", "example", "-n", ns, "-o=jsonpath={.metadata.name}{.spec.replicas}"}).Check(oc)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests-extension/test/qe/specs/olmv0_nonallns.go` around lines 449 - 451, The
assertion created by olmv0util.NewCheck is expecting the wrong combined
metadata/name+replica string: it passes "example2" but the template sets
spec.replicas: 1 (so the jsonpath {.metadata.name}{.spec.replicas} will produce
"example1"); update the NewCheck call to use the correct expected value (change
"example2" to "example1") or, if the template was meant to be different, update
the template to set spec.replicas to 2 so the current expectation remains valid;
locate the call to olmv0util.NewCheck("expect", exutil.AsAdmin,
exutil.WithoutNamespace, exutil.Compare, "example2", ...) and make the
corresponding change.

@bandrade bandrade force-pushed the codex/remove-etcd-from-tests branch from 8bb4202 to 0a7924a Compare March 18, 2026 22:16
@openshift-ci openshift-ci bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 18, 2026
@bandrade
Contributor Author

/payload-job periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

@openshift-ci
Contributor

openshift-ci bot commented Mar 18, 2026

@bandrade: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/5414f450-2319-11f1-9011-3a8f50c57933-0

@bandrade bandrade force-pushed the codex/remove-etcd-from-tests branch from 0a7924a to ed5bd37 Compare March 19, 2026 01:23
@bandrade bandrade force-pushed the codex/remove-etcd-from-tests branch from ed5bd37 to b51eec3 Compare March 19, 2026 01:39
@bandrade
Contributor Author

/payload-job periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

@openshift-ci
Contributor

openshift-ci bot commented Mar 19, 2026

@bandrade: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/0d868480-2388-11f1-9b9a-41e3499472d7-0

@bandrade
Contributor Author

/payload-job periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

@openshift-ci
Contributor

openshift-ci bot commented Mar 19, 2026

@bandrade: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-extended-f2

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/295447f0-2393-11f1-9a6d-aef1914bbbc6-0

@bandrade
Contributor Author

/payload-job periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-techpreview-extended-f1

@openshift-ci
Contributor

openshift-ci bot commented Mar 19, 2026

@bandrade: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-operator-framework-olm-release-4.22-periodics-e2e-aws-ovn-fips-techpreview-extended-f1

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/e03f0f30-239e-11f1-92fc-13ba1dfdca73-0

@bandrade
Contributor Author

/retest

@bandrade
Contributor Author

@jianzhangbjz, would you mind taking another look at this one? I replaced all cases that used etcd-operator with other operators. I changed 24387, 32613, 47149, 47179, 47181, and 27680.

@jianzhangbjz
Member

/approve
/lgtm
/verified by @bandrade

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label Mar 20, 2026
@openshift-ci-robot

@jianzhangbjz: This PR has been marked as verified by @bandrade.

Details

In response to this:

/approve
/lgtm
/verified by @bandrade

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Mar 20, 2026
@openshift-ci
Contributor

openshift-ci bot commented Mar 20, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bandrade, jianzhangbjz

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 20, 2026
@openshift-ci
Contributor

openshift-ci bot commented Mar 20, 2026

@bandrade: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot bot merged commit a9c6068 into openshift:main Mar 20, 2026
17 checks passed

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged. verified Signifies that the PR passed pre-merge verification criteria
