Merge https://github.com/IBM-Cloud/cloud-provider-ibm:release-1.36 (e91b23b) into main #96
Conversation
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: John Kasperski <jckasper@us.ibm.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Bump github.com/go-openapi/strfmt from 0.25.0 to 0.26.1

Bumps [github.com/go-openapi/strfmt](https://github.com/go-openapi/strfmt) from 0.25.0 to 0.26.1.
- [Release notes](https://github.com/go-openapi/strfmt/releases)
- [Commits](go-openapi/strfmt@v0.25.0...v0.26.1)

---
updated-dependencies:
- dependency-name: github.com/go-openapi/strfmt
  dependency-version: 0.26.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Fix go mod tidy issue

* Move exclude to bottom of go.mod

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: jckasper <jckasper@us.ibm.com>
…y (#801)

Bumps the go_modules group with 1 update in the / directory: [google.golang.org/grpc](https://github.com/grpc/grpc-go).

Updates `google.golang.org/grpc` from 1.79.1 to 1.79.3
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](grpc/grpc-go@v1.79.1...v1.79.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-version: 1.79.3
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Bumps the kubernetes-1-36 group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) | `0.36.0-alpha.2` | `0.36.0-beta.0` |
| [k8s.io/client-go](https://github.com/kubernetes/client-go) | `0.36.0-alpha.2` | `0.36.0-beta.0` |
| [k8s.io/cloud-provider](https://github.com/kubernetes/cloud-provider) | `0.36.0-alpha.2` | `0.36.0-beta.0` |
| [k8s.io/component-base](https://github.com/kubernetes/component-base) | `0.36.0-alpha.2` | `0.36.0-beta.0` |
| [k8s.io/component-helpers](https://github.com/kubernetes/component-helpers) | `0.36.0-alpha.2` | `0.36.0-beta.0` |

Updates `k8s.io/apimachinery` from 0.36.0-alpha.2 to 0.36.0-beta.0
- [Commits](kubernetes/apimachinery@v0.36.0-alpha.2...v0.36.0-beta.0)

Updates `k8s.io/client-go` from 0.36.0-alpha.2 to 0.36.0-beta.0
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](kubernetes/client-go@v0.36.0-alpha.2...v0.36.0-beta.0)

Updates `k8s.io/cloud-provider` from 0.36.0-alpha.2 to 0.36.0-beta.0
- [Commits](kubernetes/cloud-provider@v0.36.0-alpha.2...v0.36.0-beta.0)

Updates `k8s.io/component-base` from 0.36.0-alpha.2 to 0.36.0-beta.0
- [Commits](kubernetes/component-base@v0.36.0-alpha.2...v0.36.0-beta.0)

Updates `k8s.io/component-helpers` from 0.36.0-alpha.2 to 0.36.0-beta.0
- [Commits](kubernetes/component-helpers@v0.36.0-alpha.2...v0.36.0-beta.0)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.36.0-beta.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/client-go
  dependency-version: 0.36.0-beta.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/cloud-provider
  dependency-version: 0.36.0-beta.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/component-base
  dependency-version: 0.36.0-beta.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/component-helpers
  dependency-version: 0.36.0-beta.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: John Kasperski <jckasper@us.ibm.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Bumps [DavidAnson/markdownlint-cli2-action](https://github.com/davidanson/markdownlint-cli2-action) from 22 to 23.
- [Release notes](https://github.com/davidanson/markdownlint-cli2-action/releases)
- [Commits](DavidAnson/markdownlint-cli2-action@v22...v23)

---
updated-dependencies:
- dependency-name: DavidAnson/markdownlint-cli2-action
  dependency-version: '23'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…841)

Bumps [github.com/IBM/platform-services-go-sdk](https://github.com/IBM/platform-services-go-sdk) from 0.96.0 to 0.97.0.
- [Release notes](https://github.com/IBM/platform-services-go-sdk/releases)
- [Changelog](https://github.com/IBM/platform-services-go-sdk/blob/main/CHANGELOG.md)
- [Commits](IBM/platform-services-go-sdk@v0.96.0...v0.97.0)

---
updated-dependencies:
- dependency-name: github.com/IBM/platform-services-go-sdk
  dependency-version: 0.97.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…858)

Bumps [github.com/IBM/platform-services-go-sdk](https://github.com/IBM/platform-services-go-sdk) from 0.97.0 to 0.97.1.
- [Release notes](https://github.com/IBM/platform-services-go-sdk/releases)
- [Changelog](https://github.com/IBM/platform-services-go-sdk/blob/main/CHANGELOG.md)
- [Commits](IBM/platform-services-go-sdk@v0.97.0...v0.97.1)

---
updated-dependencies:
- dependency-name: github.com/IBM/platform-services-go-sdk
  dependency-version: 0.97.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [github.com/IBM/vpc-go-sdk](https://github.com/IBM/vpc-go-sdk) from 0.80.0 to 0.81.0.
- [Release notes](https://github.com/IBM/vpc-go-sdk/releases)
- [Changelog](https://github.com/IBM/vpc-go-sdk/blob/master/CHANGELOG.md)
- [Commits](IBM/vpc-go-sdk@v0.80.0...v0.81.0)

---
updated-dependencies:
- dependency-name: github.com/IBM/vpc-go-sdk
  dependency-version: 0.81.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps the go_modules group with 1 update in the / directory: [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go).

Updates `go.opentelemetry.io/otel/sdk` from 1.40.0 to 1.43.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md)
- [Commits](open-telemetry/opentelemetry-go@v1.40.0...v1.43.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/otel/sdk
  dependency-version: 1.43.0
  dependency-type: indirect
  dependency-group: go_modules
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [actions/github-script](https://github.com/actions/github-script) from 8 to 9.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](actions/github-script@v8...v9)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: '9'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps the kubernetes-1-36 group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [k8s.io/api](https://github.com/kubernetes/api) | `0.36.0-beta.0` | `0.36.0-rc.0` |
| [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) | `0.36.0-beta.0` | `0.36.0-rc.0` |
| [k8s.io/client-go](https://github.com/kubernetes/client-go) | `0.36.0-beta.0` | `0.36.0-rc.0` |
| [k8s.io/cloud-provider](https://github.com/kubernetes/cloud-provider) | `0.36.0-beta.0` | `0.36.0-rc.0` |
| [k8s.io/component-base](https://github.com/kubernetes/component-base) | `0.36.0-beta.0` | `0.36.0-rc.0` |
| [k8s.io/component-helpers](https://github.com/kubernetes/component-helpers) | `0.36.0-beta.0` | `0.36.0-rc.0` |

Updates `k8s.io/api` from 0.36.0-beta.0 to 0.36.0-rc.0
- [Commits](kubernetes/api@v0.36.0-beta.0...v0.36.0-rc.0)

Updates `k8s.io/apimachinery` from 0.36.0-beta.0 to 0.36.0-rc.0
- [Commits](kubernetes/apimachinery@v0.36.0-beta.0...v0.36.0-rc.0)

Updates `k8s.io/client-go` from 0.36.0-beta.0 to 0.36.0-rc.0
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](kubernetes/client-go@v0.36.0-beta.0...v0.36.0-rc.0)

Updates `k8s.io/cloud-provider` from 0.36.0-beta.0 to 0.36.0-rc.0
- [Commits](kubernetes/cloud-provider@v0.36.0-beta.0...v0.36.0-rc.0)

Updates `k8s.io/component-base` from 0.36.0-beta.0 to 0.36.0-rc.0
- [Commits](kubernetes/component-base@v0.36.0-beta.0...v0.36.0-rc.0)

Updates `k8s.io/component-helpers` from 0.36.0-beta.0 to 0.36.0-rc.0
- [Commits](kubernetes/component-helpers@v0.36.0-beta.0...v0.36.0-rc.0)

---
updated-dependencies:
- dependency-name: k8s.io/api
  dependency-version: 0.36.0-rc.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.36.0-rc.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/client-go
  dependency-version: 0.36.0-rc.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/cloud-provider
  dependency-version: 0.36.0-rc.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/component-base
  dependency-version: 0.36.0-rc.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/component-helpers
  dependency-version: 0.36.0-rc.0
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: John Kasperski <jckasper@us.ibm.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: John Kasperski <jckasper@us.ibm.com>
…884)

Bumps [github.com/IBM/platform-services-go-sdk](https://github.com/IBM/platform-services-go-sdk) from 0.97.1 to 0.97.2.
- [Release notes](https://github.com/IBM/platform-services-go-sdk/releases)
- [Changelog](https://github.com/IBM/platform-services-go-sdk/blob/main/CHANGELOG.md)
- [Commits](IBM/platform-services-go-sdk@v0.97.1...v0.97.2)

---
updated-dependencies:
- dependency-name: github.com/IBM/platform-services-go-sdk
  dependency-version: 0.97.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [github.com/IBM/vpc-go-sdk](https://github.com/IBM/vpc-go-sdk) from 0.81.0 to 0.82.0.
- [Release notes](https://github.com/IBM/vpc-go-sdk/releases)
- [Changelog](https://github.com/IBM/vpc-go-sdk/blob/master/CHANGELOG.md)
- [Commits](IBM/vpc-go-sdk@v0.81.0...v0.82.0)

---
updated-dependencies:
- dependency-name: github.com/IBM/vpc-go-sdk
  dependency-version: 0.82.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [github.com/IBM/vpc-go-sdk](https://github.com/IBM/vpc-go-sdk) from 0.82.0 to 0.82.1.
- [Release notes](https://github.com/IBM/vpc-go-sdk/releases)
- [Changelog](https://github.com/IBM/vpc-go-sdk/blob/master/CHANGELOG.md)
- [Commits](IBM/vpc-go-sdk@v0.82.0...v0.82.1)

---
updated-dependencies:
- dependency-name: github.com/IBM/vpc-go-sdk
  dependency-version: 0.82.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps the kubernetes-1-36 group with 6 updates:

| Package | From | To |
| --- | --- | --- |
| [k8s.io/api](https://github.com/kubernetes/api) | `0.36.0-rc.0` | `0.36.0-rc.1` |
| [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) | `0.36.0-rc.0` | `0.36.0-rc.1` |
| [k8s.io/client-go](https://github.com/kubernetes/client-go) | `0.36.0-rc.0` | `0.36.0-rc.1` |
| [k8s.io/cloud-provider](https://github.com/kubernetes/cloud-provider) | `0.36.0-rc.0` | `0.36.0-rc.1` |
| [k8s.io/component-base](https://github.com/kubernetes/component-base) | `0.36.0-rc.0` | `0.36.0-rc.1` |
| [k8s.io/component-helpers](https://github.com/kubernetes/component-helpers) | `0.36.0-rc.0` | `0.36.0-rc.1` |

Updates `k8s.io/api` from 0.36.0-rc.0 to 0.36.0-rc.1
- [Commits](kubernetes/api@v0.36.0-rc.0...v0.36.0-rc.1)

Updates `k8s.io/apimachinery` from 0.36.0-rc.0 to 0.36.0-rc.1
- [Commits](kubernetes/apimachinery@v0.36.0-rc.0...v0.36.0-rc.1)

Updates `k8s.io/client-go` from 0.36.0-rc.0 to 0.36.0-rc.1
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](kubernetes/client-go@v0.36.0-rc.0...v0.36.0-rc.1)

Updates `k8s.io/cloud-provider` from 0.36.0-rc.0 to 0.36.0-rc.1
- [Commits](kubernetes/cloud-provider@v0.36.0-rc.0...v0.36.0-rc.1)

Updates `k8s.io/component-base` from 0.36.0-rc.0 to 0.36.0-rc.1
- [Commits](kubernetes/component-base@v0.36.0-rc.0...v0.36.0-rc.1)

Updates `k8s.io/component-helpers` from 0.36.0-rc.0 to 0.36.0-rc.1
- [Commits](kubernetes/component-helpers@v0.36.0-rc.0...v0.36.0-rc.1)

---
updated-dependencies:
- dependency-name: k8s.io/api
  dependency-version: 0.36.0-rc.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.36.0-rc.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/client-go
  dependency-version: 0.36.0-rc.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/cloud-provider
  dependency-version: 0.36.0-rc.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/component-base
  dependency-version: 0.36.0-rc.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
- dependency-name: k8s.io/component-helpers
  dependency-version: 0.36.0-rc.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: kubernetes-1-36
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

⚙️ Run configuration
- Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
- Review profile: CHILL
- Plan: Enterprise
- Run ID:

⛔ Files ignored due to path filters (291)
📒 Files selected for processing (9)
✅ Files skipped from review due to trivial changes (5)
🚧 Files skipped from review as they are similar to previous changes (2)
Walkthrough

Bumps Kubernetes-related versions to v1.36.0, advances the Go toolchain to 1.26.0, upgrades golangci-lint to 2.12.2, refreshes many go.mod direct and indirect dependencies, updates the package version constant and docs, and adds Dependabot plus three GitHub Actions workflows to automate updates and CI for generated PRs.

Changes

Version & Dependency Bumps
Automation / GitHub Workflows
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Updater as Update Workflow (kube/golangci)
    participant Repo as GitHub Repo
    participant Runner as Actions Runner
    participant PR as Pull Request
    participant Dispatcher as Dispatch PR Create Workflow
    Updater->>Repo: Check current version (Makefile)
    Updater->>Repo: Query upstream (kubernetes / golangci releases)
    alt version differs
        Updater->>Repo: Commit bump and create PR
        Updater->>Repo: repository_dispatch event (pull-request-created) with pr_branch/pr_title
        Runner->>PR: (PR created) Actions trigger notifies Dispatcher
        Dispatcher->>Runner: Checkout PR branch, extract tool versions
        Runner->>Runner: Run make fmt/lint/vet/test/ccm
        Runner->>PR: Post success comment and enable auto-merge
    else no change
        Updater->>Repo: Log "No update needed"
    end
```
🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 10 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (10 passed)
Hi @cloud-team-rebase-bot[bot]. Thanks for your PR.

I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with the appropriate command. Regular contributors should join the org to skip this step. Once the patch is verified, the new status will be reflected on the PR.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Actionable comments posted: 7
🧹 Nitpick comments (1)
.github/workflows/dispatch-pr-create.yml (1)
49-50: Consider removing the redundant `gh` CLI installation.

The GitHub CLI (`gh`) is pre-installed on `ubuntu-latest` runners. This step adds unnecessary installation time.

♻️ Proposed fix
```diff
-      - name: Install GitHub CLI
-        run: sudo apt-get install gh
       - name: Get PR number for branch
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/dispatch-pr-create.yml around lines 49 - 50, Remove the redundant step that installs the GitHub CLI: delete the job/step named "Install GitHub CLI" that runs "sudo apt-get install gh" since the "gh" CLI is preinstalled on ubuntu-latest; ensure no downstream steps depend on that step name or its outputs and update any references to the step if necessary.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/dispatch-pr-create.yml:
- Around line 40-41: The workflow step name "Rnn go vet" contains a typo; update
the step's name string to "Run go vet" so the job reads correctly (locate the
step with name "Rnn go vet" in the dispatch-pr-create.yml workflow and change it
to "Run go vet").
- Around line 52-58: The step with id get-pr uses PR_NUMBER=$(gh pr list --head
"${{ github.event.client_payload.pr_branch }}" --json number --jq '.[0].number')
which can be empty; add validation immediately after that command to check if
PR_NUMBER is empty and handle it explicitly: if empty, print a clear error to
stderr (or to $GITHUB_OUTPUT) and exit with a non-zero status (or set a sentinel
output like pr_number=; export it) so downstream steps (the comment and
auto-merge steps) can be skipped via conditional checks; update the get-pr step
to include this check and ensure subsequent steps reference the pr_number output
and use if: ${{ steps.get-pr.outputs.pr_number != '' }} to avoid failing with
unclear errors.
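A minimal sketch of the validated step the comment asks for — the step id (`get-pr`), the `client_payload.pr_branch` field, and the `pr_number` output name are taken from the comment above, not from the actual workflow file, so treat this as an illustration rather than the repo's code:

```yaml
# Hypothetical sketch, assuming the names quoted in the review comment.
- name: Get PR number for branch
  id: get-pr
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    PR_NUMBER=$(gh pr list --head "${{ github.event.client_payload.pr_branch }}" \
      --json number --jq '.[0].number')
    # gh prints nothing (or "null") when no open PR matches the branch.
    if [ -z "$PR_NUMBER" ] || [ "$PR_NUMBER" = "null" ]; then
      echo "No open PR found for branch ${{ github.event.client_payload.pr_branch }}" >&2
      exit 1
    fi
    echo "pr_number=$PR_NUMBER" >> "$GITHUB_OUTPUT"

# Downstream steps can then guard on the output instead of failing opaquely:
- name: Comment on PR
  if: ${{ steps.get-pr.outputs.pr_number != '' }}
  run: echo "would comment on PR #${{ steps.get-pr.outputs.pr_number }}"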
In @.github/workflows/kube-update.yml:
- Around line 104-105: The PR branch name currently uses only
steps.new-k8s-ver.outputs.version which can collide across matrix bases; update
the branch: value to include the matrix branch as well (e.g. combine
matrix.branch and steps.new-k8s-ver.outputs.version) so each matrix base yields
a unique PR branch; apply the same change for the other occurrence where branch:
and base: are set alongside matrix.branch and steps.new-k8s-ver.outputs.version;
ensure the combined string uses the same YAML expression syntax (e.g. ${{
matrix.branch }}-${{ steps.new-k8s-ver.outputs.version }}) and optionally
sanitize slashes/characters if needed.
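Concretely, the branch input could combine both values so each matrix base gets its own PR branch. The action name and inputs below follow the `peter-evans/create-pull-request` usage described in the comment; the surrounding step is a sketch, not the repo's actual file:

```yaml
# Hypothetical sketch of the uniquified branch name.
- uses: peter-evans/create-pull-request@v7
  with:
    base: ${{ matrix.branch }}
    # Prefix with the matrix base so parallel matrix entries
    # never collide on a single PR branch:
    branch: ${{ matrix.branch }}-${{ steps.new-k8s-ver.outputs.version }}
```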
- Line 102: The PR body link points to the wrong repository; edit the string in
the kube-update.yml entry that reads "Check the PR results
[here](https://github.com/IBM-Cloud/cloud-provider-ibm/actions/workflows/dispatch-pr-create.yml)"
and replace the repository segment with "openshift/cloud-provider-powervs" so it
becomes the correct actions workflow URL for this repo; update the link target
text if necessary to preserve readability.
- Line 23: The workflow references a non-existent tag
actions/download-artifact@v8 which will break the run; update the download
action tag to a valid major version such as actions/download-artifact@v3 or
actions/download-artifact@v4 (keep actions/upload-artifact@v7 and
actions/checkout@v6 as-is) so the workflow uses an available release; locate the
download step that mentions actions/download-artifact@v8 and replace the tag
accordingly.
- Around line 31-34: The workflow-level continue-on-error: true is masking
failed updates; remove the continue-on-error key or set it to false so failures
are surfaced, relying on strategy.fail-fast: false and max-parallel: 1 to
preserve matrix behavior; update the YAML to delete or change the
continue-on-error entry so failed branch updates no longer appear successful.
- Around line 3-8: Add an explicit top-level permissions block to the workflow
enabling only the tokens required by the token-using steps: grant contents:
write and pull-requests: write for peter-evans/create-pull-request and
repository-dispatch: write for peter-evans/repository-dispatch; place this
permissions map at the top-level of the workflow (alongside the existing on:
block) so GITHUB_TOKEN has the minimal required rights for those actions.
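One caveat when applying this finding: GITHUB_TOKEN's `permissions` map has no `repository-dispatch` scope — creating a `repository_dispatch` event through the REST API is authorized by `contents: write`. A minimal top-level block might therefore look like the following sketch:

```yaml
# Minimal-permissions sketch for the workflow top level.
# Note: there is no "repository-dispatch" permission key;
# repository_dispatch events are covered by contents: write.
permissions:
  contents: write        # commit the version bump, send dispatch events
  pull-requests: write   # let create-pull-request open the PR
```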
---
Nitpick comments:
In @.github/workflows/dispatch-pr-create.yml:
- Around line 49-50: Remove the redundant step that installs the GitHub CLI:
delete the job/step named "Install GitHub CLI" that runs "sudo apt-get install
gh" since the "gh" CLI is preinstalled on ubuntu-latest; ensure no downstream
steps depend on that step name or its outputs and update any references to the
step if necessary.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 815f27c6-9185-44e3-a798-38941cfe8c19
⛔ Files ignored due to path filters (291)
by!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta1/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta1/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/autoscaling/v2beta1/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/autoscaling/v2beta1/zz_generated.prerelease-lifecycle.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/autoscaling/v2beta2/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta2/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta2/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta2/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta2/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/autoscaling/v2beta2/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/autoscaling/v2beta2/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/autoscaling/v2beta2/zz_generated.prerelease-lifecycle.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/batch/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/batch/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/batch/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/batch/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/batch/v1beta1/generated.protomessage.pb.gois excluded 
by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/batch/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1alpha1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1beta1/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1beta1/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/certificates/v1beta1/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/certificates/v1beta1/zz_generated.prerelease-lifecycle.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/coordination/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/coordination/v1alpha2/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/coordination/v1alpha2/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/coordination/v1alpha2/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/coordination/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/coordination/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/coordination/v1beta1/types.gois excluded 
by!**/vendor/**,!vendor/**vendor/k8s.io/api/core/v1/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/core/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/core/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/core/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/core/v1/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/core/v1/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/core/v1/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/discovery/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/discovery/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/discovery/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/discovery/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/discovery/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/discovery/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/extensions/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/extensions/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/extensions/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/extensions/v1beta1/zz_generated.validations.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/flowcontrol/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta1/generated.protois excluded 
by!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta2/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta2/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta2/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta3/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta3/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/flowcontrol/v1beta3/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/imagepolicy/v1alpha1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/imagepolicy/v1alpha1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/imagepolicy/v1alpha1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/networking/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/networking/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/networking/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/networking/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/networking/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/networking/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1alpha1/generated.protois excluded 
by!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1alpha1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1alpha1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1alpha1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1alpha1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1alpha1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/generated.protois excluded 
by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1alpha3/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/register.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1alpha3/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1alpha3/zz_generated.prerelease-lifecycle.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1beta1/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta1/types_swagger_doc_generated.gois excluded 
by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta1/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1beta1/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1beta2/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta2/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta2/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta2/register.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta2/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta2/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1beta2/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1beta2/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1beta2/zz_generated.prerelease-lifecycle.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/scheduling/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha1/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha1/types_swagger_doc_generated.gois excluded 
by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha1/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/scheduling/v1alpha1/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/scheduling/v1alpha2/doc.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha2/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha2/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha2/register.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha2/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha2/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1alpha2/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/scheduling/v1alpha2/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/scheduling/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/scheduling/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/storage/v1/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**
📒 Files selected for processing (9)
.github/dependabot.yml
.github/workflows/dispatch-pr-create.yml
.github/workflows/golangci-update.yml
.github/workflows/kube-update.yml
Makefile
README.md
go.mod
ibm/ibm_version.go
main.go
```yaml
- name: Get PR number for branch
  id: get-pr
  run: |
    PR_NUMBER=$(gh pr list --head "${{ github.event.client_payload.pr_branch }}" --json number --jq '.[0].number')
    echo "pr_number=${PR_NUMBER}" >> "$GITHUB_OUTPUT"
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
Add error handling for empty PR number.
If gh pr list returns no results (e.g., due to timing or the PR not being created), PR_NUMBER will be empty, causing the subsequent comment and auto-merge steps to fail with unclear errors.
🛡️ Proposed fix to add validation
```diff
 - name: Get PR number for branch
   id: get-pr
   run: |
     PR_NUMBER=$(gh pr list --head "${{ github.event.client_payload.pr_branch }}" --json number --jq '.[0].number')
+    if [ -z "$PR_NUMBER" ]; then
+      echo "::error::No PR found for branch ${{ github.event.client_payload.pr_branch }}"
+      exit 1
+    fi
     echo "pr_number=${PR_NUMBER}" >> "$GITHUB_OUTPUT"
   env:
     GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/dispatch-pr-create.yml around lines 52 - 58, The step with
id get-pr uses PR_NUMBER=$(gh pr list --head "${{
github.event.client_payload.pr_branch }}" --json number --jq '.[0].number')
which can be empty; add validation immediately after that command to check if
PR_NUMBER is empty and handle it explicitly: if empty, print a clear error to
stderr (or to $GITHUB_OUTPUT) and exit with a non-zero status (or set a sentinel
output like pr_number=; export it) so downstream steps (the comment and
auto-merge steps) can be skipped via conditional checks; update the get-pr step
to include this check and ensure subsequent steps reference the pr_number output
and use if: ${{ steps.get-pr.outputs.pr_number != '' }} to avoid failing with
unclear errors.
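The guard pattern the comment proposes generalizes beyond this workflow: validate a command's output before handing it to later steps. A minimal sketch in plain shell, outside Actions (`require_nonempty` is a hypothetical helper, not part of `gh` or the workflow):

```shell
#!/bin/sh
# Fail fast when a lookup produced no value, instead of letting an empty
# string flow into later steps. require_nonempty is a hypothetical helper.
require_nonempty() {
  if [ -z "$1" ]; then
    echo "::error::$2" >&2
    return 1
  fi
  printf '%s\n' "$1"
}

require_nonempty "1234" "No PR found for branch"
require_nonempty "" "No PR found for branch" || echo "guard tripped; step would exit 1"
```

Downstream steps can then key off the guard's exit status (or an `if:` condition on the step output) rather than failing later with an unrelated error.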
```yaml
on:
  schedule:
    - cron: "0 10 * * *" # Run at 10:00 UTC daily
  workflow_dispatch: # Allow manual triggering

jobs:
```
🧩 Analysis chain
🌐 Web query:
What are the minimum GITHUB_TOKEN permissions required for peter-evans/create-pull-request and peter-evans/repository-dispatch in GitHub Actions?
💡 Result:
For peter-evans/create-pull-request (latest v6+), the minimum GITHUB_TOKEN permissions required are:

```yaml
permissions:
  contents: write
  pull-requests: write
```

This is explicitly stated in the action's README. Repositories created after February 2023 have a read-only GITHUB_TOKEN by default, so these must be set in the job or workflow. If PRs modify workflows, add `workflows: write`.

For peter-evans/repository-dispatch (latest v4), the minimum GITHUB_TOKEN permission required is:

```yaml
permissions:
  contents: write
```

This works for dispatching to the same repository (the default). The action's README and examples show this, and issues confirm `contents: write` resolves "resource not accessible" errors for same-repo dispatch. For remote repositories, a PAT with `repo` scope (or fine-grained contents: read & write + metadata: read) is required instead of GITHUB_TOKEN. Additionally, ensure repository Settings > Actions > General > Workflow permissions allows "Read and write permissions" and "Allow GitHub Actions to create and approve pull requests" for create-pull-request.
Citations:
- 1: https://github.com/peter-evans/create-pull-request
- 2: https://github.com/peter-evans/repository-dispatch?tab=readme-ov-file
- 3: Breaking change with v6: Error: GitHub Actions is not permitted to create or approve pull requests. peter-evans/create-pull-request#2767
- 4: Error: Resource not accessible by integration peter-evans/repository-dispatch#196
🏁 Script executed:
cat -n .github/workflows/kube-update.yml

Repository: openshift/cloud-provider-powervs
Length of output: 5146
Declare explicit workflow permissions for token-dependent steps.
This workflow uses peter-evans/create-pull-request (line 94) and peter-evans/repository-dispatch (line 113), which require explicit permissions. Declaring minimal required permissions avoids failures when org/repo defaults are restrictive (repositories created after February 2023 have read-only GITHUB_TOKEN by default).
Suggested fix
```diff
 on:
   schedule:
     - cron: "0 10 * * *" # Run at 10:00 UTC daily
   workflow_dispatch: # Allow manual triggering
+permissions:
+  contents: write
+  pull-requests: write
+
 jobs:
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
on:
  schedule:
    - cron: "0 10 * * *" # Run at 10:00 UTC daily
  workflow_dispatch: # Allow manual triggering

permissions:
  contents: write
  pull-requests: write

jobs:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/kube-update.yml around lines 3 - 8, Add an explicit
top-level permissions block to the workflow enabling only the tokens required by
the token-using steps: grant contents: write and pull-requests: write for
peter-evans/create-pull-request, and contents: write for
peter-evans/repository-dispatch (same-repo dispatch); place this permissions map
at the top level of the workflow (alongside the existing on: block) so
GITHUB_TOKEN has the minimal required rights for those actions.
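An alternative to the workflow-wide block is scoping the grant to the single job that uses the token. A sketch, under the assumption that only one job needs these rights (the job name and step are illustrative, not taken from the actual workflow):

```yaml
# Minimal token rights declared on the job rather than workflow-wide.
# Job name and steps are illustrative.
jobs:
  kube-update:
    runs-on: ubuntu-latest
    permissions:
      contents: write        # create-pull-request; same-repo repository-dispatch
      pull-requests: write   # create-pull-request
    steps:
      - uses: actions/checkout@v6
```

Job-level permissions keep other jobs in the same workflow on the default read-only token.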
```yaml
run: curl -L --fail --retry 10 --retry-all-errors --retry-delay 60 --retry-max-time 3600 -o shared-files/kube-tags.txt https://api.github.com/repos/kubernetes/kubernetes/tags

- name: Upload both files as one artifact
  uses: actions/upload-artifact@v7
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
check_ref() {
  local repo="$1"
  local tag="$2"
  local code
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://api.github.com/repos/${repo}/git/ref/tags/${tag}")
  echo "${repo}@${tag} -> HTTP ${code} (200 means tag exists)"
}
check_ref actions/upload-artifact v7
check_ref actions/checkout v6
check_ref actions/download-artifact v8
```

Repository: openshift/cloud-provider-powervs
Length of output: 256
actions/download-artifact@v8 tag does not exist and will break the workflow.
Line 56 uses actions/download-artifact@v8, but this version tag is unavailable (HTTP 403). Change to an existing major version (e.g., v3, v4). Lines 23 and 51 (actions/upload-artifact@v7 and actions/checkout@v6) are valid.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/kube-update.yml at line 23, The workflow references a
non-existent tag actions/download-artifact@v8 which will break the run; update
the download action tag to a valid major version such as
actions/download-artifact@v3 or actions/download-artifact@v4 (keep
actions/upload-artifact@v7 and actions/checkout@v6 as-is) so the workflow uses
an available release; locate the download step that mentions
actions/download-artifact@v8 and replace the tag accordingly.
```yaml
continue-on-error: true
strategy:
  fail-fast: false
  max-parallel: 1
```
Job-level continue-on-error hides failed updates.
With Line 31 set to true, scheduled runs can appear successful even when one or more branches fail. fail-fast: false already preserves matrix continuity without masking failures.
Suggested fix
```diff
-    continue-on-error: true
     strategy:
       fail-fast: false
```

📝 Committable suggestion
```yaml
strategy:
  fail-fast: false
  max-parallel: 1
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/kube-update.yml around lines 31 - 34, The job-level
continue-on-error: true is masking failed updates; remove the continue-on-error
key or set it to false so failures are surfaced, relying on strategy.fail-fast:
false and max-parallel: 1 to preserve matrix behavior; update the YAML to delete
or change the continue-on-error entry so failed branch updates no longer appear
successful.
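The masking effect is the same one plain shell exhibits, which makes it easy to demonstrate locally (here `false` stands in for a failing branch update):

```shell
#!/bin/sh
# Masked: the failure is swallowed and the caller sees success,
# analogous to continue-on-error: true at the job level.
sh -c 'false || true'
echo "masked status: $?"

# Unmasked: the non-zero status propagates to the caller,
# analogous to removing continue-on-error.
sh -c 'false'
echo "unmasked status: $?"
```

The first line prints `masked status: 0`, the second `unmasked status: 1`, which is exactly the difference between a green and a red scheduled run.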
✅ Tests completed successfully

Check the PR results [here](https://github.com/IBM-Cloud/cloud-provider-ibm/actions/workflows/dispatch-pr-create.yml)
PR body points to the wrong repository workflow URL.
Line 102 links to IBM-Cloud/cloud-provider-ibm, but this workflow is in openshift/cloud-provider-powervs. That sends reviewers to the wrong checks page.
Suggested fix
```diff
- Check the PR results [here](https://github.com/IBM-Cloud/cloud-provider-ibm/actions/workflows/dispatch-pr-create.yml)
+ Check the PR results [here](https://github.com/${{ github.repository }}/actions/workflows/dispatch-pr-create.yml)
```

📝 Committable suggestion
Check the PR results [here](https://github.com/${{ github.repository }}/actions/workflows/dispatch-pr-create.yml)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/kube-update.yml at line 102, The PR body link points to
the wrong repository; edit the string in the kube-update.yml entry that reads
"Check the PR results
[here](https://github.com/IBM-Cloud/cloud-provider-ibm/actions/workflows/dispatch-pr-create.yml)"
and replace the repository segment with "openshift/cloud-provider-powervs" so it
becomes the correct actions workflow URL for this repo; update the link target
text if necessary to preserve readability.
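Inside a `run:` step the same slug is also exposed as the `GITHUB_REPOSITORY` environment variable, so the link can be built without hard-coding either repository. A sketch (the variable is set by hand here because we are outside the runner):

```shell
#!/bin/sh
# GITHUB_REPOSITORY is normally provided by the Actions runner;
# it is assigned manually here only for illustration.
GITHUB_REPOSITORY="openshift/cloud-provider-powervs"
printf 'Check the PR results at https://github.com/%s/actions/workflows/dispatch-pr-create.yml\n' \
  "$GITHUB_REPOSITORY"
```

Either form (`${{ github.repository }}` in workflow expressions, `$GITHUB_REPOSITORY` in shell) keeps the link correct in forks of the workflow.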
```yaml
branch: ${{ steps.new-k8s-ver.outputs.version }}
base: ${{ matrix.branch }}
```
Use a unique PR branch per matrix base branch.
Line 104 uses only the Kubernetes version as branch name. If multiple release branches bump to the same TAG, branch names collide and PR creation/updating becomes ambiguous.
Suggested fix
```diff
- branch: ${{ steps.new-k8s-ver.outputs.version }}
+ branch: kube-update/${{ matrix.branch }}-${{ steps.new-k8s-ver.outputs.version }}
 ...
- "pr_branch": "${{ steps.new-k8s-ver.outputs.version }}",
+ "pr_branch": "kube-update/${{ matrix.branch }}-${{ steps.new-k8s-ver.outputs.version }}",
```

Also applies to: 119-120
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/kube-update.yml around lines 104 - 105, The PR branch name
currently uses only steps.new-k8s-ver.outputs.version which can collide across
matrix bases; update the branch: value to include the matrix branch as well
(e.g. combine matrix.branch and steps.new-k8s-ver.outputs.version) so each
matrix base yields a unique PR branch; apply the same change for the other
occurrence where branch: and base: are set alongside matrix.branch and
steps.new-k8s-ver.outputs.version; ensure the combined string uses the same YAML
expression syntax (e.g. ${{ matrix.branch }}-${{
steps.new-k8s-ver.outputs.version }}) and optionally sanitize slashes/characters
if needed.
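The combined naming scheme, including the sanitization the prompt mentions, can be sketched in shell (the `branch_name` helper and the `kube-update/` prefix are illustrative assumptions, not the workflow's actual names):

```shell
#!/bin/sh
# Build a PR branch name that is unique per matrix base branch, replacing
# any character git refs dislike with '-'. branch_name is hypothetical.
branch_name() {
  printf 'kube-update/%s-%s' "$1" "$2" | sed 's|[^A-Za-z0-9._/-]|-|g'
}

branch_name "release-1.36" "v1.36.0-beta.0"   # -> kube-update/release-1.36-v1.36.0-beta.0
echo
```

Because the base branch is part of the name, two release branches bumping to the same Kubernetes tag no longer collide.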
Actionable comments posted: 3
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/vpcctl/vpc_sdk_gen2.go (1)
109-121: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Keep proxy-protocol behavior consistent for pools added later.
These initial pool prototypes now honor the service's proxy-protocol setting, but `CreateLoadBalancerPool` still creates new pools without mirroring that configuration. A service that enables proxy protocol and later adds another port will end up with mixed pool behavior after `UpdateLoadBalancer`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/vpcctl/vpc_sdk_gen2.go` around lines 109 - 121, CreateLoadBalancerPool is not propagating the service's proxy-protocol setting, causing newly created pools to differ from the initially built prototypes; update CreateLoadBalancerPool to set the LoadBalancerPoolPrototype.ProxyProtocol field based on options.isProxyProtocol() (use sdk.LoadBalancerPoolProxyProtocolV1Const when true, otherwise sdk.LoadBalancerPoolProxyProtocolDisabledConst) similar to the pool prototype creation above, referencing the CreateLoadBalancerPool function, LoadBalancerPoolPrototype.ProxyProtocol, and options.isProxyProtocol() to locate and fix the logic.
🟠 Major comments (35)
scripts/trigger_dev_deploy.sh-39-42 (1)
39-42: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Remote resolution can produce invalid `git push` input.
`git remote -v | grep "push"` can return multiple rows; command substitution then passes a malformed remote value to `git push`.
Suggested fix
-remote_branch=$(git remote -v | grep "push" | awk '{ print $1}')
+remote_branch=$(git remote | head -n1)
+if [[ -z "${remote_branch}" ]]; then
+  echo "No git remote configured"
+  exit 1
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/trigger_dev_deploy.sh` around lines 39 - 42, The remote_branch assignment can yield multiple rows and break git push; change the command that sets remote_branch to only capture the first "push" remote (e.g., use a one-line filter that prints the first match and exits) so remote_branch holds a single valid remote name, then use that variable with git push "${remote_branch}" "${current_tag}" as before; update the line that defines remote_branch (the current substitution) to a variant that stops after the first match to avoid passing a malformed remote to git push.
vagrant-kube-build/build.sh-28-35 (1)
28-35: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Build script masks provisioning failures by always exiting success.
`exit 0` at the end causes false-positive success even when `vagrant provision` or `vagrant up --provision` fails.
Suggested fix
 if vagrant status --machine-readable | grep state,running; then
   echo "INFO: Vagrant is running, running provision ..."
   CLOUD_PROVIDER_IBM_BUILD_STEPS="${CLOUD_PROVIDER_IBM_BUILD_STEPS[*]}" vagrant provision
+  rc=$?
 else
   echo "INFO: Bringing up and provisioning vm ..."
   CLOUD_PROVIDER_IBM_BUILD_STEPS="${CLOUD_PROVIDER_IBM_BUILD_STEPS[*]}" vagrant up --provision
+  rc=$?
 fi
-exit 0
+exit ${rc}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vagrant-kube-build/build.sh` around lines 28 - 35, The script currently forces success with a trailing "exit 0", hiding failures from "vagrant provision" or "vagrant up --provision"; remove the unconditional "exit 0" and instead propagate the real exit status: run the chosen vagrant command (the branches that invoke "CLOUD_PROVIDER_IBM_BUILD_STEPS=\"${CLOUD_PROVIDER_IBM_BUILD_STEPS[*]}\" vagrant provision" and "CLOUD_PROVIDER_IBM_BUILD_STEPS=\"${CLOUD_PROVIDER_IBM_BUILD_STEPS[*]}\" vagrant up --provision"), capture its exit code ($?), and exit the script with that code (or test for non-zero and exit 1) so provisioning failures are not masked.
scripts/trigger_dev_deploy.sh-26-35 (1)
26-35: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Stop retrying forever on non-collision tag failures.
The current `while true` loop retries on any `git tag` error, so auth/config failures can spin forever.
Suggested fix
-while true; do
-  git tag -a "${current_tag}" -m "Tag to trigger dev deploy From: ${username}"
-  return_code=$?
-  if [[ ${return_code} -eq 0 ]]; then
-    break
-  fi
-
-  count=$((count + 1))
-  current_tag="dev-${git_branch}-${count}"
-done
+while git rev-parse -q --verify "refs/tags/${current_tag}" >/dev/null; do
+  count=$((count + 1))
+  current_tag="dev-${git_branch}-${count}"
+done
+
+git tag -a "${current_tag}" -m "Tag to trigger dev deploy From: ${username}" || {
+  echo "Failed to create tag ${current_tag}"
+  exit 1
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/trigger_dev_deploy.sh` around lines 26 - 35, Replace the infinite retry loop around git tag with a bounded retry that only retries on tag-collision errors: run git tag -a "${current_tag}" -m "..." while capturing its stderr and exit code, and if return_code is 0 break; if stderr contains the tag-already-exists message (e.g. "already exists" / "tag '...’ already exists") then increment count and update current_tag="dev-${git_branch}-${count}" and retry up to a MAX_ATTEMPTS (e.g. 5); for any other non-zero return_code or when MAX_ATTEMPTS is exceeded, log the stderr and exit with a non-zero code. Ensure you reference the existing variables/current_tag, count, git_branch and username in the updated loop.
vagrant-kube-build/setupFVT.sh-29-30 (1)
29-30: ⚠️ Potential issue | 🟠 Major
Quoted wildcard paths prevent kubeconfig/PEM copy from working.
The glob patterns inside double quotes are not expanded by Bash, so these `cp` commands attempt to copy literal filenames like `kube-config-*.yml` instead of matching files. This will fail at runtime.
Suggested fix
-cp "${HOME}/.bluemix/plugins/container-service/clusters/${1}-admin/kube-config-*.yml" ../tests/fvt/kubeconfig
-cp "${HOME}/.bluemix/plugins/container-service/clusters/${1}-admin/*.pem" ../tests/fvt/
+cp "${HOME}/.bluemix/plugins/container-service/clusters/${1}-admin"/kube-config-*.yml ../tests/fvt/kubeconfig
+cp "${HOME}/.bluemix/plugins/container-service/clusters/${1}-admin"/*.pem ../tests/fvt/
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vagrant-kube-build/setupFVT.sh` around lines 29 - 30, The cp commands in setupFVT.sh are quoting the glob patterns so Bash won't expand kube-config-*.yml and *.pem; update the two cp invocations to let the globs expand (e.g., keep the ${HOME} and ${1} parts quoted but remove quotes around the wildcard segment so the shell can match files) and optionally guard with shopt -s nullglob or check for matches before copying so you don't pass a literal pattern to cp; reference the existing cp lines that mention kube-config-*.yml and *.pem and ensure the destination paths (../tests/fvt/kubeconfig and ../tests/fvt/) exist before copying.
pkg/klog/logger.go-31-35 (1)
31-35: ⚠️ Potential issue | 🟠 Major
`logStdout` is accessed without synchronization, creating a data race.
The module-level `logStdout` variable is written by `SetOutputToStdout()` (line 34) and read by the exported functions `Errorf`, `Infof`, and `Warningf` (lines 39, 49, 60). These functions are likely called concurrently, while `SetOutputToStdout()` can be called at any time, resulting in unsynchronized read-write access.
Use `atomic.Bool` with `Store()` and `Load()` methods to protect access:
Suggested fix
 import (
 	"fmt"
+	"sync/atomic"
 	"time"
 	"k8s.io/klog/v2"
 )
@@
-var logStdout = false
+var logStdout atomic.Bool
 func SetOutputToStdout() {
-	logStdout = true
+	logStdout.Store(true)
 }
@@
-	if logStdout {
+	if logStdout.Load() {
@@
-	if logStdout {
+	if logStdout.Load() {
@@
-	if logStdout {
+	if logStdout.Load() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/klog/logger.go` around lines 31 - 35, Replace the unsynchronized package-level bool logStdout with an atomic.Bool and update SetOutputToStdout to call Store(true); change all reads in the exported logging functions Errorf, Infof, and Warningf to use Load() on that atomic.Bool to avoid data races; ensure the atomic variable is package-scoped (e.g., var logStdout atomic.Bool) so SetOutputToStdout and the logging functions reference the same atomic instance.
scripts/updatePackage.sh-151-160 (1)
151-160: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Don't drop dependency-only updates when deciding whether to open a PR.
This guard only looks for `${TARGET_DIR}` in `git status`, so a run that changes only `go.mod`/`go.sum` exits early even though those files were explicitly staged above. That makes `UPDATE_GO_MOD=true` a no-op whenever the copied `.go` files are unchanged.
🤖 Prompt for AI Agents
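A widened guard could look like the following sketch. The `needs_pr` helper and its porcelain-on-stdin shape are illustrative, not the script's actual structure:

```shell
# Hypothetical sketch (names illustrative): read `git status --porcelain`
# output on stdin and report whether a PR is warranted, counting go.mod /
# go.sum changes as well as changes under the copied package directory.
needs_pr() {
  target_dir="$1"
  # Match either the package directory or the module files.
  if grep -E -q "${target_dir}|go\.(mod|sum)"; then
    echo "yes"
  else
    echo "no"
  fi
}
```

In the script itself, the same check would feed the existing "is a PR needed" decision in place of the grep that only looks for `${TARGET_DIR}`.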
Verify each finding against the current code and only fix it if needed. In `@scripts/updatePackage.sh` around lines 151 - 160, The script currently only checks git status for ${TARGET_DIR} and returns early even when UPDATE_GO_MOD=true has staged changes to go.mod/go.sum; update the PR-needed guard in scripts/updatePackage.sh so it also detects staged/modified go.mod or go.sum (or use a porcelain check like git diff --cached --name-only) instead of only grepping for ${TARGET_DIR}; ensure the logic references UPDATE_GO_MOD and checks for "go.mod" and "go.sum" (or uses git diff --cached) so dependency-only updates do not get dropped.
vagrant-kube-build/provisioning/push_image.sh-38-40 (1)
38-40: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Don't replace the whole Docker daemon config just to add insecure registries.
This unconditionally overwrites `/etc/docker/daemon.json`, so any existing mirrors, log settings, storage-driver config, or other required daemon options are lost. On a reused VM that can break Docker before the push even starts.
🤖 Prompt for AI Agents
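A merge along those lines might look like this sketch. It assumes `python3` is available on the VM; the file path and registry value are passed in, and the real script would still need `sudo` and a Docker restart:

```shell
# Hypothetical sketch: add a registry to the "insecure-registries" list of
# an existing daemon.json without dropping other keys.
merge_insecure_registry() {
  file="$1"; registry="$2"
  [ -s "$file" ] || echo '{}' > "$file"   # treat a missing file as {}
  python3 - "$file" "$registry" <<'EOF'
import json, sys
path, registry = sys.argv[1], sys.argv[2]
with open(path) as f:
    cfg = json.load(f)
regs = cfg.setdefault("insecure-registries", [])
if registry not in regs:  # idempotent: no duplicate entries
    regs.append(registry)
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
}
```

Because the edit only touches the `insecure-registries` key, existing mirrors, log settings, and storage-driver options survive the update.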
Verify each finding against the current code and only fix it if needed. In `@vagrant-kube-build/provisioning/push_image.sh` around lines 38 - 40, The script currently overwrites /etc/docker/daemon.json by writing a new JSON blob via the echo -> /tmp/docker-daemon.json and sudo cp steps (in push_image.sh), which loses existing daemon settings; update the logic to read and parse the existing daemon.json (if present), merge or append the LOCAL_REGISTRY and ALT_REGISTRY into the insecure-registries array without removing other keys, write the merged JSON atomically (create a backup of the original before replacing), and then restart docker only after a successful merge; implement the merge using a safe JSON tool (jq or a short python snippet) and ensure idempotency if the registries are already present.
scripts/updatePackage.sh-54-56 (1)
54-56: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Validate the source files before deleting the target package.
`rm -f "${TARGET_DIR}"/*.go` runs before we know `${REPO_BASE}/${SOURCE_DIR}/*.go` actually exists. If the addon file points at the wrong directory or the upstream layout changes, this script exits on `cp` after wiping the current package.
🤖 Prompt for AI Agents
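One way to order the check before the delete (illustrative helper; `sync_go_files` and its arguments stand in for the script's `${REPO_BASE}/${SOURCE_DIR}` and `${TARGET_DIR}` variables):

```shell
# Hypothetical sketch: only wipe the target package after confirming the
# source glob matches at least one .go file.
sync_go_files() {
  src="$1"; dst="$2"
  set -- "$src"/*.go
  # When nothing matches, the glob stays a literal pattern and the
  # first result does not exist on disk.
  if [ ! -e "$1" ]; then
    echo "ERROR: no .go files found in ${src}" >&2
    return 1
  fi
  rm -f "$dst"/*.go
  cp "$src"/*.go "$dst"/
}
```

With this ordering, a mispointed addon file fails loudly while the current package stays intact.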
Verify each finding against the current code and only fix it if needed. In `@scripts/updatePackage.sh` around lines 54 - 56, The script deletes TARGET_DIR Go files before confirming source files exist; change the sequence in scripts/updatePackage.sh so you first verify that any files match "${REPO_BASE}/${SOURCE_DIR}"/*.go (e.g., test for existence of that glob or count matches) and only then run rm -f "${TARGET_DIR}"/*.go and cp from the source; reference the symbols TARGET_DIR, REPO_BASE, SOURCE_DIR, and the cp/rm commands to locate the lines to update.
scripts/deploy.sh-132-152 (1)
132-152: ⚠️ Potential issue | 🟠 Major
Fix empty argument passed to `hub pull-request` in release path.
When `pr_option=""` (on the normal release path), this command passes an empty string as a positional argument: `hub pull-request --file ... --push "" --labels ...`. This breaks PR creation. Either conditionally include the flag only when needed or use an array-based approach to build the command.
🤖 Prompt for AI Agents
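The conditional-argument variant can be sketched like this. `build_pr_args` is an illustrative helper; it prints the argument vector instead of invoking `hub` so the shape is visible:

```shell
# Hypothetical sketch: assemble hub arguments and append --push only when
# requested, so no empty positional argument is ever passed.
build_pr_args() {
  message_file="$1"; labels="$2"; push="$3"
  set -- pull-request --file "$message_file" --labels "$labels"
  if [ "$push" = "true" ]; then
    set -- "$@" --push
  fi
  # The real script would now run: hub "$@"
  printf '%s\n' "$@"
}
```

Because `set --` rebuilds the positional parameters, the optional flag is either present or entirely absent; there is no empty-string placeholder for `hub` to misread.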
Verify each finding against the current code and only fix it if needed. In `@scripts/deploy.sh` around lines 132 - 152, The release path is passing an empty positional argument to hub via pr_option which causes hub pull-request to receive --push "" and fail; update the deploy script so the --push flag is only included when pr_option is non-empty (or build the command args in an array and append pr_option conditionally) before invoking hub pull-request; specifically modify the invocation that uses pr_option and pr_labels (the hub pull-request call) and the variable pr_option assignment/usage so that when pr_option=="" the script calls hub pull-request --file "${TRAVIS_BUILD_DIR}"/message.txt --labels "${pr_labels}" (or uses a dynamically assembled args array) rather than passing an empty string as a positional argument.
cmd/ibm-cloud-controller-manager/Dockerfile-27-48 (1)
27-48: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Run the container as non-root explicitly.
This image currently has no `USER` directive, so it runs as root by default. Please set a non-root runtime user.
Suggested change
 WORKDIR /bin/
 ADD cmd/ibm-cloud-controller-manager/calicoctl /bin/
 ADD ibm-cloud-controller-manager /bin/
+USER 65532:65532
 ENTRYPOINT ["/bin/ibm-cloud-controller-manager"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/ibm-cloud-controller-manager/Dockerfile` around lines 27 - 48, The image currently runs as root because there's no USER directive; create a non-root runtime user and switch to it before ENTRYPOINT. Add commands to create a user/group (e.g., userid like "appuser"), chown the added binaries (the files added by ADD: "cmd/ibm-cloud-controller-manager/calicoctl" and "ibm-cloud-controller-manager" in WORKDIR /bin/) to that user, and then add a USER instruction prior to ENTRYPOINT ["/bin/ibm-cloud-controller-manager"] so the container runs as the non-root account.
scripts/calculateCoverage.sh-21-21 (1)
21-21: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Fail fast when coverage input is missing/empty.
Current parsing can produce invalid coverage (or divide by zero behavior) while still exiting successfully when `cover.html` is absent or no `%)` rows match.
Suggested hardening
-COVERAGE=$(grep "%)" cover.html | sed 's/[][()><%]/ /g' | awk '{ print $4 }' | awk '{s+=$1}END{print s/NR}')
+if [[ ! -f cover.html ]]; then
+  echo "cover.html not found"
+  exit 1
+fi
+
+VALUES=$(grep "%)" cover.html | sed 's/[][()><%]/ /g' | awk '{ print $4 }')
+if [[ -z "${VALUES}" ]]; then
+  echo "No coverage values found in cover.html"
+  exit 1
+fi
+
+COVERAGE=$(printf "%s\n" "${VALUES}" | awk '{s+=$1} END { if (NR==0) exit 1; print s/NR }')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/calculateCoverage.sh` at line 21, The COVERAGE assignment can silently produce invalid/empty results or divide-by-zero when cover.html is missing or grep finds no "%)" lines; update the calculateCoverage.sh logic to first verify cover.html exists and is readable, then capture the grep output count (or the awk NR) before computing the average, and if no matching lines are found or the parsed COVERAGE is empty/non-numeric, print an error and exit non-zero; modify the pipeline around the COVERAGE variable assignment (the line using grep "%)" ... awk '{s+=$1}END{print s/NR}') to bail out with a clear error when inputs are absent or NR==0.
ibm/ibm_vpc_client.go-65-73 (1)
65-73: ⚠️ Potential issue | 🟠 Major
Guard `region.Endpoint` before dereference.
The IBM vpc-go-sdk does not guarantee that `Region.Endpoint` is non-nil. The Region struct fields are pointers that can be nil, and the API contract does not explicitly state Endpoint is always populated. Dereferencing `*region.Endpoint` without a nil check will panic if the SDK returns a nil or empty endpoint.
Defensive fix
 // Get Region and Set Service URL
 region, _, err := sdk.GetRegion(sdk.NewGetRegionOptions(provider.Region))
 if err != nil {
 	return nil, err
 }
+if region == nil || region.Endpoint == nil || *region.Endpoint == "" {
+	return nil, errors.New("region endpoint is empty")
+}
+
 // Set the Service URL
 err = sdk.SetServiceURL(*region.Endpoint + "/v1")
 if err != nil {
 	return nil, err
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm_vpc_client.go` around lines 65 - 73, The code dereferences region.Endpoint without a nil check causing a potential panic; update the block using sdk.GetRegion and sdk.SetServiceURL to validate that region and region.Endpoint are non-nil (and optionally non-empty) before using *region.Endpoint, and return a clear error (or use a sensible fallback) if the endpoint is missing; specifically change the logic around the GetRegion result to check region != nil && region.Endpoint != nil (and len(*region.Endpoint) > 0) before calling sdk.SetServiceURL(*region.Endpoint + "/v1") and handle the error path instead of dereferencing blindly.
ibm/ibm_metadata_service_test.go-98-101 (1)
98-101: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Check API call errors instead of inspecting stale `err`.
Line 98 drops the returned error, then Line 99 checks a previous `err` value; Line 122 and Line 167 also ignore delete errors. This can produce false-positive test passes.
✅ Suggested patch
-	k8sclient.CoreV1().Nodes().Update(context.TODO(), &k8snode, metav1.UpdateOptions{})
-	if nil != err {
+	_, err = k8sclient.CoreV1().Nodes().Update(context.TODO(), &k8snode, metav1.UpdateOptions{})
+	if nil != err {
 		t.Fatalf("Failed to update Node goodnode: %v", err)
 	}
@@
-	k8sclient.CoreV1().Nodes().Delete(context.TODO(), "goodnode", metav1.DeleteOptions{})
+	err = k8sclient.CoreV1().Nodes().Delete(context.TODO(), "goodnode", metav1.DeleteOptions{})
+	if nil != err {
+		t.Fatalf("Failed to delete Node goodnode: %v", err)
+	}
@@
-	k8sclient.CoreV1().Nodes().Delete(context.TODO(), "partialnode", metav1.DeleteOptions{})
+	err = k8sclient.CoreV1().Nodes().Delete(context.TODO(), "partialnode", metav1.DeleteOptions{})
+	if nil != err {
+		t.Fatalf("Failed to delete Node partialnode: %v", err)
+	}
Also applies to: 122-122, 167-167
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm_metadata_service_test.go` around lines 98 - 101, The test is dropping returned errors from Kubernetes API calls and then checking a stale err variable; update each call (e.g., k8sclient.CoreV1().Nodes().Update(...), k8sclient.CoreV1().Pods().Delete(...) or similar Delete calls around the test) to assign the returned error to err (err := ... or err = ...) and immediately check that err for nil with t.Fatalf on failure; replace any existing checks that reference an earlier err variable so each API call's result is validated against its own error value (ensuring Update and Delete errors are not ignored).
vagrant-kube-build/provisioning/build_docker_registry.sh-36-41 (1)
36-41: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
Handle existing `daemon.json` instead of only the missing-file case.
Line 36 only configures insecure registries when `/etc/docker/daemon.json` does not exist. On hosts where it exists without this registry entry, push operations can fail.
A safer approach is to merge/update the `insecure-registries` list in the existing JSON, then restart Docker only when the file changed.
🤖 Prompt for AI Agents
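The restart-only-when-changed part can be sketched separately. `update_if_changed` is an illustrative helper; the real script would write via `sudo` and run `service docker restart` only when it reports "changed":

```shell
# Hypothetical sketch: install a newly generated config only when it
# differs from the current one, and tell the caller whether a daemon
# restart is actually needed.
update_if_changed() {
  new="$1"; current="$2"
  if [ -f "$current" ] && cmp -s "$new" "$current"; then
    echo "unchanged"   # skip the docker restart
  else
    cp "$new" "$current"
    echo "changed"     # caller restarts docker
  fi
}
```

This keeps the provisioning step idempotent: reruns on an already-configured host do not bounce the Docker daemon.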
Verify each finding against the current code and only fix it if needed. In `@vagrant-kube-build/provisioning/build_docker_registry.sh` around lines 36 - 41, The script currently only writes /etc/docker/daemon.json when the file is missing; instead, modify the block that references /etc/docker/daemon.json and BUILD_REGISTRY_URL to read and parse the existing JSON (e.g., via jq or a short python snippet), ensure an "insecure-registries" array exists, add BUILD_REGISTRY_URL if not present, and write the merged JSON to a temp file then atomically replace /etc/docker/daemon.json; only call sudo service docker restart (and sleep) if the file contents actually changed. Ensure error handling for invalid JSON and preserve other keys when updating the file.
scripts/setup-artifactory-build.sh-25-35 (1)
25-35: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Harden token file handling and credential file permissions.
Line 26 writes a bearer token to `/tmp`, and Line 32-35 append secrets without permission controls or cleanup. That increases credential exposure risk.
🔒 Suggested patch
 export ARTIFACTORY_TOKEN_PATH="/tmp/.artifactory-token-path"
-echo "${ARTIFACTORY_JFRT_TOKEN}" > ${ARTIFACTORY_TOKEN_PATH}
+umask 077
+printf '%s' "${ARTIFACTORY_JFRT_TOKEN}" > "${ARTIFACTORY_TOKEN_PATH}"
+trap 'rm -f "${ARTIFACTORY_TOKEN_PATH}"' EXIT
@@
-echo "machine na.artifactory.swg-devops.com login ${ARTIFACTORY_USER_NAME} password " >> ~/.netrc
-cat "${ARTIFACTORY_TOKEN_PATH}" >> ~/.netrc
+printf 'machine na.artifactory.swg-devops.com login %s password %s\n' \
+  "${ARTIFACTORY_USER_NAME}" "${ARTIFACTORY_JFRT_TOKEN}" >> ~/.netrc
+chmod 600 ~/.netrc
@@
 printf "Authorization: Bearer " > "${ARTIFACTORY_AUTH_HEADER_FILE}"
 cat "${ARTIFACTORY_TOKEN_PATH}" >> "${ARTIFACTORY_AUTH_HEADER_FILE}"
+chmod 600 "${ARTIFACTORY_AUTH_HEADER_FILE}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/setup-artifactory-build.sh` around lines 25 - 35, Replace the insecure writes to temporary and credential files by creating a secure temp file (use mktemp) for ARTIFACTORY_TOKEN_PATH, write the token with a single redirect rather than echo piping, set restrictive permissions (chmod 600) on ARTIFACTORY_TOKEN_PATH, ARTIFACTORY_AUTH_HEADER_FILE and ~/.netrc/ ~/.pip/pip.conf after creation, and add a trap to securely remove the temp token and auth header on EXIT; update references to ARTIFACTORY_TOKEN_PATH and ARTIFACTORY_AUTH_HEADER_FILE in the docker login and cat commands accordingly so the same secure files are used.
scripts/setup-artifactory-build.sh-20-24 (1)
20-24: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
Avoid streaming unverified remote content into execution path.
Line 21 pipes a download directly into extraction, then Line 22 executes from that payload. Add fail-fast flags and verify artifact integrity before running install scripts.
🔐 Suggested hardening
+set -euo pipefail
+
 echo "Preparing artifactory build setup."
-curl -s https://s3.us.cloud-object-storage.appdomain.cloud/armada-build-tools-prod-us-geo/build-tools/build-tools.tar.gz | tar -xvz
+curl -fsSLo /tmp/build-tools.tar.gz \
+  "https://s3.us.cloud-object-storage.appdomain.cloud/armada-build-tools-prod-us-geo/build-tools/build-tools.tar.gz"
+# Example: verify checksum provided by trusted channel before extraction.
+# echo "${BUILD_TOOLS_SHA256} /tmp/build-tools.tar.gz" | sha256sum -c -
+tar -xvz -f /tmp/build-tools.tar.gz
 ./build-tools/install.sh
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/setup-artifactory-build.sh` around lines 20 - 24, The script currently streams a remote tarball directly into tar and then runs ./build-tools/install.sh and sources ./build-tools/JFrog/setJfrtToken.sh without verifying integrity; change the flow to download the tarball to a temporary file (use curl --fail --show-error --location), download a checksum/signature alongside it and verify the artifact before extracting, extract only after verification, and only then run ./build-tools/install.sh and source setJfrtToken.sh; specifically update the steps that download/extract/run the payload (the curl | tar -xvz pipeline, the subsequent ./build-tools/install.sh invocation, and the source ./build-tools/JFrog/setJfrtToken.sh) to implement fail-fast flags, checksum/signature verification, and safe extraction into a temporary directory.
vagrant-kube-build/provisioning/build_docker_registry.sh-52-58 (1)
52-58: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Fail explicitly when registry never becomes ready.
The wait loop breaks on a successful curl but does not error if readiness is never reached, so subsequent push failures are delayed and less diagnosable.
🛠️ Suggested patch
 echo "Waiting for docker registry container to start ..."
+ready=0
 for _ in $(seq 1 10); do
   sleep 3
-  if curl ${BUILD_REGISTRY_URL} >/dev/null 2>&1; then
+  if curl -fsS "http://${BUILD_REGISTRY_URL}/v2/" >/dev/null 2>&1; then
     echo "Docker registry container has started."
+    ready=1
     break
   fi
 done
+if [[ "${ready}" -ne 1 ]]; then
+  exit_build
+fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vagrant-kube-build/provisioning/build_docker_registry.sh` around lines 52 - 58, The readiness loop in build_docker_registry.sh that polls BUILD_REGISTRY_URL up to 10 times may exit silently if the registry never becomes ready; modify the loop logic to detect failure after the retries and exit non‑zero with an error message. Specifically, keep the existing for ... seq retry loop that curls BUILD_REGISTRY_URL, but after the loop finishes check a success flag or the curl exit status and if it never succeeded call echo with a clear error (including BUILD_REGISTRY_URL) and exit 1 so the provisioning fails fast and clearly.
ibm/ibm_metadata_service.go-114-117 (1)
114-117: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
ProviderID invalidation logic does not execute for cached hits
Line 115 returns cached data before ProviderID consistency is checked. The mismatch block at Line 151 compares against a non-hit path value, so stale cached ProviderIDs are not invalidated as intended.
Suggested direction
-	node, ok := ms.getCachedNode(name)
-	if ok {
-		return node, nil
-	}
+	cachedNode, cached := ms.getCachedNode(name)
 	k8sNode, err := ms.kubeClient.CoreV1().Nodes().Get(context.TODO(), string(name), metav1.GetOptions{})
 	if nil != err {
-		return node, err
+		return cachedNode, err
 	}
 	newNode.ProviderID = k8sNode.Spec.ProviderID
-	if newNode.ProviderID != "" && newNode.ProviderID != node.ProviderID {
+	if cached && newNode.ProviderID != "" && newNode.ProviderID != cachedNode.ProviderID {
 		ms.deleteCachedNode(name)
 	}
+	if cached && newNode.ProviderID != "" && newNode.ProviderID == cachedNode.ProviderID {
+		return cachedNode, nil
+	}
Also applies to: 150-154
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm_metadata_service.go` around lines 114 - 117, The cached-path returns early from ms.getCachedNode (variable node from getCachedNode(name)) before the ProviderID consistency check runs; change the flow so that when getCachedNode(name) returns ok you still perform the existing ProviderID mismatch/invalidation logic (the block that currently runs around the ProviderID comparison) against the cached node prior to returning, and only return the cached node if ProviderIDs are consistent (or after you update/invalidate the cache as the mismatch logic requires); reference ms.getCachedNode, the local variable node, and the existing ProviderID mismatch/invalidation block to locate where to move/insert the check.
docs/examples/ibm-cloud-controller-manager.yaml-75-77 (1)
75-77: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Harden container security context defaults
Line 75 sets only partial hardening. Please also disable privilege escalation and enforce read-only root FS to avoid permissive runtime defaults.
Suggested patch
 securityContext:
   runAsNonRoot: true
+  allowPrivilegeEscalation: false
+  readOnlyRootFilesystem: true
+  capabilities:
+    drop: ["ALL"]
+  seccompProfile:
+    type: RuntimeDefault
   runAsGroup: 0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/examples/ibm-cloud-controller-manager.yaml` around lines 75 - 77, The container's securityContext currently sets runAsNonRoot and runAsGroup but misses privilege and filesystem restrictions; update the same securityContext block (where runAsNonRoot and runAsGroup are defined) to also set allowPrivilegeEscalation: false and readOnlyRootFilesystem: true so the container cannot escalate privileges and the root filesystem is read-only; keep the existing runAsNonRoot and runAsGroup entries and add those two fields under the securityContext for the container definition.
vagrant-kube-build/Vagrantfile-58-60 (1)
58-60: ⚠️ Potential issue | 🟠 Major
Go version mismatch: Vagrantfile installs Go 1.22.5 but go.mod requires Go 1.24.0
The hardcoded Go version on line 58 conflicts with the repository's Go toolchain requirement. This will cause local VM builds to fail since Go 1.22.5 cannot satisfy the `go 1.24.0` directive in go.mod. Update the version to match the repository requirement:
Affected code
curl -O https://dl.google.com/go/go1.24.0.linux-amd64.tar.gz
if [[ $? -ne 0 ]]; then exit_build ; fi
tar -C /usr/local -xzf go1.24.0.linux-amd64.tar.gz
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@vagrant-kube-build/Vagrantfile` around lines 58 - 60, Update the hardcoded Go download in the Vagrantfile to match the repository's go.mod requirement (Go 1.24.0): change the curl target and the tarball referenced in the install block that currently downloads "go1.22.5.linux-amd64.tar.gz" so it instead downloads and extracts the "go1.24.0.linux-amd64.tar.gz" artifact, keeping the existing error check and tar extraction lines intact.
scripts/verify_file_md5.sh-39-39 (1)
39-39: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Avoid xtrace in this script to prevent secret leakage
Line 39 enables `-x`, which can print curl arguments from Line 81/84 (including auth headers/modifiers) into logs.
Suggested patch
-set -ex
+set -euo pipefail
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/verify_file_md5.sh` at line 39, The script currently enables xtrace via "set -ex", which risks leaking sensitive curl arguments; change the option to disable xtrace by removing "-x" (use "set -e" only) and, if you must keep debugging traces elsewhere, wrap only non-sensitive sections with temporary tracing (turn xtrace on/off around safe blocks) or explicitly disable tracing around the curl calls in the script so the curl invocations do not print auth headers or modifiers; update the invocation referenced as "set -ex" and protect the curl commands.
scripts/process_nancy_log.sh-72-80 (1)
72-80: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Match branch names as fixed strings when selecting issues
Lines 72/79 use regex `grep` with raw branch names. Branches containing regex metacharacters or substrings can update/close the wrong issue.
Suggested patch
-	if ! grep -q "${TRAVIS_BRANCH}" issues.txt; then
+	if ! grep -Fq -- "${TRAVIS_BRANCH}" issues.txt; then
 		if [ "${status}" = "failed" ]; then
 			echo "Create new issue"
 			hub issue create --file "${TRAVIS_BUILD_DIR}"/create-issue.txt --labels "ccm,depcheck,security,${TRAVIS_BRANCH}"
 		fi
 	else
 		echo "Update existing issue"
-		num=$(grep "${TRAVIS_BRANCH}" issues.txt | awk '{print $1 }' | tr -d '#')
+		num=$(grep -F -- "${TRAVIS_BRANCH}" issues.txt | awk '{print $1 }' | tr -d '#')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/process_nancy_log.sh` around lines 72 - 80, The branch-name lookup uses regex grep which can mis-match branches; change both grep invocations to fixed-string matching and exact-field matching: use grep -F -x "${TRAVIS_BRANCH}" issues.txt (or grep -F "${TRAVIS_BRANCH}" issues.txt piped into awk restricting to the branch field) instead of plain grep so TRAVIS_BRANCH is treated as a literal and only whole-line or exact-field matches are accepted; update the two places referencing TRAVIS_BRANCH (the conditional grep -q and the grep used to set num) accordingly to use grep -F and/or -x to prevent substring/regex matches.
pkg/vpcctl/vpc_provider.go-103-104 (1)
103-104: ⚠️ Potential issue | 🟠 Major
Use explicit format strings with `klog.*f` calls
These calls pass runtime strings directly to format-style APIs. If message text contains `%`, logs are formatted incorrectly. Use `"%s"` explicitly.
Representative fix pattern
-	klog.Errorf(errString)
+	klog.Errorf("%s", errString)
-	klog.Infof(lb.GetSummary())
+	klog.Infof("%s", lb.GetSummary())
Affected lines: 103, 112, 119, 126, 133, 142, 157, 168, 174, 189, 194, 198, 203, 212, 328, 339
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/vpcctl/vpc_provider.go` around lines 103 - 104, The klog.*f calls are being given runtime strings directly (e.g., klog.Errorf(errString)), which misformats logs if the text contains percent signs; update each klog.*f invocation in vpc_provider.go to use an explicit format string like klog.Errorf("%s", errString) (or klog.Warningf("%s", msg) as appropriate) for every occurrence (e.g., the klog.Errorf calls that precede recordServiceWarningEvent(...) and other klog.*f usages around the errString/msg variables) so the message is passed as a single string argument rather than as a format pattern.
scripts/checkCopyrights.sh-4-4 (1)
4-4: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Update the header year before enabling this lint.
This file is being added in 2026, but its header still ends at 2021. `lint_file()` now requires changed files to mention the current year in the first 10 lines, so `make lint-copyright` will fail on this script as written. The same applies to the other touched headers in this PR.
📝 Proposed fix
-# (C) Copyright IBM Corp. 2017, 2021 All Rights Reserved.
+# (C) Copyright IBM Corp. 2017, 2026 All Rights Reserved.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/checkCopyrights.sh` at line 4, update the copyright header in scripts/checkCopyrights.sh so the first 10 lines include the current year (2026) — e.g., change the trailing year range/text that currently ends at 2021 to include 2026 (for example "2017, 2026" or "2017-2026"); also scan the other files modified in this PR and update their top-of-file copyright headers similarly so lint_file() will detect the current year.

.travis.yml-79-79 (1)

79-79: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don't pass an empty Docker tag on non-tag builds.

`TRAVIS_TAG` is empty for branch and PR jobs, so this expands to `BUILD_TAG=` and `make containers` ends up tagging the image with an empty suffix. That breaks the main `script` phase outside release tags.

🐳 Proposed fix

```diff
- - BUILD_SHA=${TRAVIS_COMMIT} BUILD_ID=${TRAVIS_BUILD_NUMBER} BUILD_TAG=${TRAVIS_TAG} make containers
+ - BUILD_SHA=${TRAVIS_COMMIT} BUILD_ID=${TRAVIS_BUILD_NUMBER} BUILD_TAG=${TRAVIS_TAG:-${TRAVIS_COMMIT:0:12}} make containers
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.travis.yml at line 79, the current Travis step unconditionally injects BUILD_TAG=${TRAVIS_TAG} into the environment which yields BUILD_TAG= when TRAVIS_TAG is empty and causes an empty Docker tag; change the invocation so BUILD_TAG is only exported when TRAVIS_TAG is non-empty (e.g. use a shell conditional or parameter expansion) before calling make containers — for example use `BUILD_SHA=${TRAVIS_COMMIT} BUILD_ID=${TRAVIS_BUILD_NUMBER} ${TRAVIS_TAG:+BUILD_TAG=${TRAVIS_TAG}} make containers` or perform an `if [ -n "$TRAVIS_TAG" ]` check to set BUILD_TAG only when TRAVIS_TAG is present to avoid passing an empty BUILD_TAG to make containers.

pkg/vpcctl/vpc_config.go-89-98 (1)

89-98: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Don't silently ignore configured endpoint overrides.

`Provider` still accepts `g2EndpointOverride`, `iamEndpointOverride`, `rmEndpointOverride`, and `iksPrivateEndpointHostname` in ibm/ibm.go, but `ConfigVpc` no longer carries them and these helpers now always derive URLs from region/private mode. Existing configs will still parse successfully and then talk to different endpoints at runtime. Either keep plumbing these values through or fail fast during config validation instead of ignoring them.

Also applies to: 106-129
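A fail-fast variant can be sketched in isolation; the struct below is a reduced stand-in for the real `ConfigVpc`, and the field names are assumptions taken from the comment, not the provider's actual definitions:

```go
package main

import "fmt"

// ConfigVpc is a minimal stand-in; only the override fields matter here.
type ConfigVpc struct {
	Region                     string
	G2EndpointOverride         string // hypothetical mapping of g2EndpointOverride
	IAMEndpointOverride        string
	RMEndpointOverride         string
	IKSPrivateEndpointHostname string
}

// Validate rejects configs that still carry endpoint overrides, so a stale
// config fails at load time instead of silently talking to derived endpoints.
func (c *ConfigVpc) Validate() error {
	overrides := map[string]string{
		"g2EndpointOverride":         c.G2EndpointOverride,
		"iamEndpointOverride":        c.IAMEndpointOverride,
		"rmEndpointOverride":         c.RMEndpointOverride,
		"iksPrivateEndpointHostname": c.IKSPrivateEndpointHostname,
	}
	for name, value := range overrides {
		if value != "" {
			return fmt.Errorf("config option %q is no longer honored; remove it", name)
		}
	}
	return nil
}

func main() {
	stale := &ConfigVpc{Region: "us-south", G2EndpointOverride: "https://example.test"}
	fmt.Println(stale.Validate()) // non-nil: fails fast instead of misrouting
	fmt.Println((&ConfigVpc{Region: "us-south"}).Validate())
}
```

Calling this once where the config is parsed would turn a silent endpoint change into an immediate, diagnosable startup error.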
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/vpcctl/vpc_config.go` around lines 89 - 98, ConfigVpc currently drops endpoint override fields so existing configs with g2EndpointOverride, iamEndpointOverride, rmEndpointOverride, or iksPrivateEndpointHostname are ignored at runtime; either re-add these fields to the ConfigVpc struct and pass them through to the Provider in ibm/ibm.go (ensure the Provider's endpoint-resolution logic consumes ConfigVpc.g2EndpointOverride / iamEndpointOverride / rmEndpointOverride / iksPrivateEndpointHostname), or add validation in ConfigVpc (e.g., a Validate method called at load time) that fails fast when any of those overrides are present to prevent silent misrouting; update the code paths that construct Provider (the function(s) that instantiate Provider from ConfigVpc) to wire the fields or trigger validation accordingly.

ibm/ibm.go-187-190 (1)

187-190: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Handle endpoint add events too.

Only registering `UpdateFunc` skips existing `Endpoints` objects seen during informer sync and newly created ones until some later update/resync. That delays the keepalived cleanup path after controller restarts and initial service creation.

💡 Proposed fix

```diff
 endpointInformer := informerFactory.Core().V1().Endpoints().Informer()
 // `#nosec` G104 Error is ignored for now
 endpointInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
+	AddFunc: func(obj interface{}) {
+		c.handleEndpointUpdate(nil, obj)
+	},
 	UpdateFunc: c.handleEndpointUpdate,
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm.go` around lines 187 - 190, the informer is only registering UpdateFunc so initial and newly-created Endpoints are missed; update the endpoint informer registration (the call that creates endpointInformer via informerFactory.Core().V1().Endpoints().Informer()) to include an AddFunc in the cache.ResourceEventHandlerFuncs alongside UpdateFunc and point it to c.handleEndpointUpdate (or a small wrapper that converts add events to the same handling path) so Add events are processed immediately for initial sync and new Endpoints.

ibm/ibm_loadbalancer.go-1739-1763 (1)

1739-1763: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Failed IPVS creates leave orphaned ConfigMaps and Calico policy behind.

The IPVS ConfigMap and Calico policy are created before the Deployment exists. If Deployment creation loses the IP race or fails for any other reason, this path continues/returns without cleaning them up. That leaves stale IPVS state for a load balancer that was never created and can poison retries.

Also applies to: 1923-1935
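The deferred-rollback shape the comment asks for can be sketched with stubs; none of these helper names are the provider's real functions:

```go
package main

import "fmt"

// createLoadBalancer sketches create-with-rollback: each successfully created
// resource registers an undo action, and the undos run in reverse order only
// when a later step fails. deployFails simulates losing the Deployment race.
func createLoadBalancer(deployFails bool) (log []string, err error) {
	var cleanup []func()
	defer func() {
		if err != nil {
			for i := len(cleanup) - 1; i >= 0; i-- {
				cleanup[i]()
			}
		}
	}()

	log = append(log, "create configmap")
	cleanup = append(cleanup, func() { log = append(log, "delete configmap") })

	log = append(log, "create calico policy")
	cleanup = append(cleanup, func() { log = append(log, "delete calico policy") })

	if deployFails {
		return log, fmt.Errorf("deployment creation failed")
	}
	log = append(log, "create deployment")
	return log, nil
}

func main() {
	log, err := createLoadBalancer(true)
	fmt.Println(err, log) // undo actions ran in reverse creation order
}
```

Registering each undo immediately after its create succeeds keeps the rollback list in sync with reality, even if a mid-sequence step is the one that fails.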
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm_loadbalancer.go` around lines 1739 - 1763, the code creates IPVS ConfigMaps and a Calico policy via createIPVSConfigMapStruct -> createIPVSConfigMap and createCalicoIngressPolicy before creating the Deployment, but on subsequent failures those resources are left orphaned; update the logic so that after successfully creating the IPVS ConfigMap (ipvsCm) and the Calico policy you register cleanup actions (either with deferred calls or an explicit rollback) that delete the created ConfigMap and remove the Calico ingress policy if any later step (e.g., Deployment creation) fails or returns early; locate and modify the paths around createIPVSConfigMapStruct/createIPVSConfigMap/createCalicoIngressPolicy (also mirror the fix in the 1923-1935 block) to call the corresponding delete functions when an error occurs.

kube-update.sh-82-82 (1)

82-82: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Same-Go-version Kubernetes bumps can leave `go.mod` and `go.sum` stale.

`go.*` is excluded from the replacement sweep, and the dependency refresh only runs when the upstream Go version changes. A `v1.26.x -> v1.26.y` update with the same Go toolchain will therefore update repo text but keep the Kubernetes module graph pinned to the old patch.

Also applies to: 91-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kube-update.sh` at line 82, the find invocation assigning ALL_FILES excludes go.* which prevents go.mod/go.sum from being updated on same-major Go toolchain bumps (e.g., v1.26.x→v1.26.y); change the logic so go.mod and go.sum are included in the replacement sweep (remove './go.*' from the prune list or explicitly add go.mod and go.sum into ALL_FILES) and ensure the dependency refresh step (the routine that runs when Kubernetes is bumped, referenced by the ALL_FILES handling and the dependency refresh block around the same area) always runs go mod tidy and updates module dependencies (e.g., run go get -u or equivalent) when Kubernetes is bumped even if GO_VERSION string did not change.

Makefile-218-220 (1)

218-220: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`runfvt` should depend on the binary it executes.

This target runs `./ibm_loadbalancer`, but `clean` deletes that binary and `runfvt` does not depend on `fvttest`. `make clean && make runfvt` will fail before the test starts.

Suggested fix

```diff
-runfvt: kubectlcli vpcctlcli
+runfvt: fvttest kubectlcli vpcctlcli
 	cd ./tests/fvt && LOCAL_IBM_ARMADA_LB_FVT_TEST=true ./ibm_loadbalancer --logtostderr=true -v=4 ${TEST_FVT_OPTIONS}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Makefile` around lines 218 - 220, the runfvt target runs ./ibm_loadbalancer but doesn't depend on the target that builds that binary, so make clean can remove it and make runfvt will fail; update the runfvt dependency list to include the fvttest (or the specific target that produces ibm_loadbalancer) so runfvt: kubectlcli vpcctlcli fvttest (or the ibm_loadbalancer build target) ensures the binary is built before changing directory and executing ./ibm_loadbalancer; reference the runfvt target and the ibm_loadbalancer/fvttest build target when making the change.

ibm/ibm_loadbalancer.go-505-516 (1)

505-516: ⚠️ Potential issue | 🟠 Major

The image-update check is not safe for registries with ports.

Both comparisons at lines 505-516 and 578-589 use `strings.Split(..., ":")` to parse image tags, but valid references like `registry.example.com:5000/lb:12` contain multiple colons. Splitting on all colons and taking index [1] mis-parses the image—yielding `5000/lb` instead of the tag `12`—causing incorrect version comparison and failed updates. Use the rightmost colon to extract the tag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm_loadbalancer.go` around lines 505 - 516, the current image comparison logic splits image strings by ":" (lbDeployment.Spec.Template.Spec.Containers[0].Image and c.Config.LBDeployment.Image) and assumes the tag is at index [1], which breaks for registries with ports; change the extraction to use the rightmost colon (e.g., via strings.LastIndex) to isolate the tag portion for both lbDeploymentImageList and configImageList before converting to int and comparing, and keep the existing fallback branch that compares full image strings to set updateImage accordingly.

kube-update.sh-45-45 (1)

45-45: ⚠️ Potential issue | 🟠 Major

Paginated release lookup will miss updates for older Kubernetes branches.

The GitHub releases endpoint defaults to 30 items per page. Once releases for older branches (e.g., `release-1.26`) fall beyond the first 30 releases in the API response, this cron path will leave `K8S_UPDATE_VERSION` empty even though newer patch versions still exist. Add pagination (`?per_page=100`) or switch to a tag-based lookup for the target minor version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@kube-update.sh` at line 45, the curl command that sets K8S_UPDATE_VERSION currently hits the releases API without pagination, causing misses when target branch releases are beyond the first page; update the command that defines K8S_UPDATE_VERSION to request more items (e.g., add ?per_page=100 to the releases URL) or switch to a tag-based lookup that queries tags for the desired MAJOR_MINOR to reliably find the latest patch for the release branch referenced by the K8S_UPDATE_VERSION assignment.

tests/fvt/ibm_loadbalancer.go-248-249 (1)

248-249: ⚠️ Potential issue | 🟠 Major

Fix image tag parsing to handle registry URLs with ports.

`strings.Split(image, ":")` incorrectly splits images like `registry.example.com:5000/lb:42` into multiple parts, causing `imageSplit[0]` to contain only `registry.example.com` instead of the full image reference. Use `strings.LastIndex` or similar to split only on the rightmost colon, which separates the image name from the tag.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/fvt/ibm_loadbalancer.go` around lines 248 - 249, the current parsing using strings.Split on ibmCloud.Config.LBDeployment.Image incorrectly breaks registry URLs with ports; update the logic in the block that assigns ibmCloud.Config.LBDeployment.Image (where imageSplit is used) to find the rightmost colon (e.g., strings.LastIndex) and split there so you preserve the full image reference including any registry:port portion, then reconstruct the image as "<imageWithoutTag>:<td.imageVersion>" (handle the case where no tag exists by simply appending the new tag). Ensure you update the code that references imageSplit[0] to use the computed imageWithoutTag variable instead.

tests/fvt/ibm_loadbalancer.go-317-323 (1)

317-323: ⚠️ Potential issue | 🟠 Major

Check for `NotFound` error specifically before attempting to create the service.

The current code treats all Get() errors identically and falls through to the create path. This masks RBAC denials, transient API failures, and network errors—turning a read problem into a misleading create error. Return non-`NotFound` errors directly:

```go
_, err := c.KubeClient.CoreV1().Services(s.ObjectMeta.Namespace).Get(context.TODO(), s.ObjectMeta.Name, metav1.GetOptions{})
if err == nil {
	klog.Infof("Service %v/%v already exists", s.ObjectMeta.Namespace, s.ObjectMeta.Name)
	return nil
}
if !apierrors.IsNotFound(err) {
	return err
}
// Service does not exist, create it...
```

Requires importing `k8s.io/apimachinery/pkg/api/errors` as `apierrors`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/fvt/ibm_loadbalancer.go` around lines 317 - 323, the Get() call against the cluster (c.KubeClient.CoreV1().Services(...).Get) currently treats all errors the same and proceeds to create the Service, which hides RBAC/transport errors; change the error handling to return any error that is not a NotFound by using apierrors.IsNotFound(err) (import k8s.io/apimachinery/pkg/api/errors as apierrors), only proceed to create the Service when apierrors.IsNotFound(err), and keep the early-return when err == nil and the klog.Infof as-is.

ibm/ibm_loadbalancer.go-727-734 (1)

727-734: ⚠️ Potential issue | 🟠 Major

This loop never updates the dedicated selector value in-place.

`matchExpression` is a copy in a `for _, matchExpression := range ...` loop over a slice of structs, so assigning `matchExpression.Values` modifies only the copy, not the original slice element. The deployment remains pinned to the old dedicated node value. Use an index-based loop instead:

Fix

```diff
-	for _, matchExpression := range matchExpressions {
-		if lbDedicatedLabel == matchExpression.Key {
-			matchExpression.Values = []string{expectedSelectorValue}
+	for i := range matchExpressions {
+		if lbDedicatedLabel == matchExpressions[i].Key {
+			matchExpressions[i].Values = []string{expectedSelectorValue}
 			break
 		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm_loadbalancer.go` around lines 727 - 734, The loop over matchExpressions in nodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions is modifying a copy (for _, matchExpression := range ...) so matchExpression.Values assignment doesn't persist; change to an index-based loop (for i := range matchExpressions) and update matchExpressions[i].Values = []string{expectedSelectorValue} when matchExpressions[i].Key == lbDedicatedLabel, then reassign the slice back to nodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchExpressions to ensure the dedicated selector value is updated in-place.
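The copy-semantics pitfall behind this finding can be checked in isolation; `requirement` below is a stand-in for the real `NodeSelectorRequirement`, not the Kubernetes type:

```go
package main

import "fmt"

type requirement struct {
	Key    string
	Values []string
}

// updateByValue mirrors the buggy loop: the range variable is a copy,
// so the assignment never reaches the slice element.
func updateByValue(reqs []requirement, key, val string) {
	for _, r := range reqs {
		if r.Key == key {
			r.Values = []string{val}
		}
	}
}

// updateByIndex mirrors the suggested fix: indexing mutates in place.
func updateByIndex(reqs []requirement, key, val string) {
	for i := range reqs {
		if reqs[i].Key == key {
			reqs[i].Values = []string{val}
		}
	}
}

func main() {
	reqs := []requirement{{Key: "dedicated", Values: []string{"old"}}}
	updateByValue(reqs, "dedicated", "new")
	fmt.Println(reqs[0].Values) // still [old]
	updateByIndex(reqs, "dedicated", "new")
	fmt.Println(reqs[0].Values) // [new]
}
```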
🟡 Minor comments (5)
CONTRIBUTING.md-9-13 (1)
9-13: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix doc typo and markdown lint nits in Legal section.

There’s a spelling typo (`documentaion`), an extra space inside the Apache link text, and the fenced block should declare a language.

Suggested fix

````diff
-All files, except documentaion, must include a license header for the
-[Apache License, Version 2.0 ](./LICENSE). Using the SPDX format is the
+All files, except documentation, must include a license header for the
+[Apache License, Version 2.0](./LICENSE). Using the SPDX format is the
 simplest approach. Here is an example:
-```
+```text
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CONTRIBUTING.md` around lines 9 - 13, fix the typos and markdown lint nits in CONTRIBUTING.md: correct the misspelling "documentaion" to "documentation", remove the trailing extra space inside the link text "Apache License, Version 2.0 " so it reads "Apache License, Version 2.0", and update the fenced code block to declare a language (e.g., use a "text" info string) to satisfy markdown linting; apply these edits where the example block and Legal section text appear.

main_test.go-28-33 (1)

28-33: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Restore `os.Args` after mutation to keep tests isolated.

Line 31 and Line 50 overwrite global process args but never restore them, which can leak state into other tests.

💡 Suggested patch

```diff
 func TestCommandVersion(t *testing.T) {
+	origArgs := os.Args
+	t.Cleanup(func() { os.Args = origArgs })
+
 	// if TEST_COMMAND_VERSION is set we are inside the process created below
 	if os.Getenv("TEST_COMMAND_VERSION") == "1" {
 		os.Args = []string{"ibm-cloud-controller-manager", "--version"}
 		main()
 		return
 	}
@@
 func TestCommandHelp(t *testing.T) {
+	origArgs := os.Args
+	t.Cleanup(func() { os.Args = origArgs })
+
 	os.Args = []string{"ibm-cloud-controller-manager", "--help"}
 	main()
 }
```

Also applies to: 49-52
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main_test.go` around lines 28 - 33, TestCommandVersion mutates the global os.Args and never restores it, which can leak state to other tests; save the original args (orig := os.Args) before setting os.Args, then restore them after the test (using defer or an explicit assignment) both in the branch where TEST_COMMAND_VERSION == "1" and in the parent process path that launches the subprocess; apply the same save-and-restore pattern to the other test in the file that mutates os.Args (the one around lines 49-52) so each test leaves os.Args unchanged after completion.

README.md-18-19 (1)

18-19: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Use the canonical spelling “macOS”

Line 19 should use `macOS` for consistency with Apple naming.

Suggested patch

```diff
-MacOS version 12.5.
+macOS version 12.5.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 18 - 19, update the README line that currently reads "MacOS version 12.5" to use Apple's canonical capitalization "macOS" (i.e., change "MacOS version 12.5" to "macOS version 12.5") so the project documentation matches the official naming convention.

ibm/ibm_endpoint_watch_test.go-525-529 (1)

525-529: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fail the test when the fixture service is missing.

Returning here turns a broken setup into a passing test, so this case can silently stop exercising `listPodsViaLabel`.

✅ Proposed fix

```diff
 _, err := c.KubeClient.CoreV1().Services(newObj.Namespace).Get(context.TODO(), newObj.Name, metav1.GetOptions{})
 if err != nil {
-	klog.Errorf("ERROR: Failed to get service: %v", err)
-	return
+	t.Fatalf("Failed to get service fixture: %v", err)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm_endpoint_watch_test.go` around lines 525 - 529, the test currently swallows a missing fixture Service by logging an error and returning, which lets the test pass incorrectly; replace the klog.Errorf + return with a test failure call (e.g. t.Fatalf("failed to get service %s/%s: %v", newObj.Namespace, newObj.Name, err)) so the test fails fast and continues to exercise listPodsViaLabel; locate the lookup using c.KubeClient.CoreV1().Services(...).Get(...) and change the error handling there to call t.Fatalf (or require.NoError(t, err)) including the error and service names.

ibm/ibm_eventrecorder.go-131-148 (1)

131-148: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Build the aggregated error string after the VLAN scan, not during it.

`errors` is cumulative across all VLANs, but Lines 142-148 append the entire accumulated map on every outer iteration. With multiple VLAN entries, earlier reasons are repeated in `errMsg`.

Suggested fix

```diff
 for _, portableSubnetVlanError := range portableSubnetVlanErrors {
 	// Loop through each subnet error in the vlan
 	for _, portableSubnetError := range portableSubnetVlanError {
 		if _, ok := errors[portableSubnetError.ErrorReasonCode]; !ok {
 			errors[portableSubnetError.ErrorReasonCode] = &subnetConfigErrors{portableSubnetError, 1}
 		} else {
 			errors[portableSubnetError.ErrorReasonCode].occurrences++
 		}
 	}
-
-	// Loop through the subnet errors and append the occurrences of the error
-	for _, tempError := range errors {
-		if errMsg != "" {
-			errMsg += ", "
-		}
-		errMsg += fmt.Sprintf("[%s: %s - Number of Occurrences: %d.]", tempError.subnetConfigErrorField.ErrorReasonCode, tempError.subnetConfigErrorField.ErrorMessage, tempError.occurrences)
-	}
 }
+
+// Loop through the subnet errors and append the occurrences of the error
+for _, tempError := range errors {
+	if errMsg != "" {
+		errMsg += ", "
+	}
+	errMsg += fmt.Sprintf("[%s: %s - Number of Occurrences: %d.]", tempError.subnetConfigErrorField.ErrorReasonCode, tempError.subnetConfigErrorField.ErrorMessage, tempError.occurrences)
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ibm/ibm_eventrecorder.go` around lines 131 - 148, The aggregation loop currently appends the full cumulative errors map inside the outer VLAN loop (iterating portableSubnetVlanErrors), causing earlier error reasons to be repeated; to fix, populate the errors map (map[string]*subnetConfigErrors) while iterating each portableSubnetError as you already do, but move the block that builds errMsg (the loop over errors that does fmt.Sprintf and concatenation) to run once after the outer loop completes so errMsg is constructed from the final aggregated errors; ensure the errors map is initialized before the outer loop and keep the existing symbols (portableSubnetVlanErrors, portableSubnetError, errors, subnetConfigErrors, errMsg) to locate and adjust the code.
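The count-then-render shape can be verified with a simplified model; the map-of-slices input and string payloads below stand in for the real `portableSubnetVlanErrors` structures:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// countThenReport tallies reason codes across every VLAN bucket first, then
// renders the summary exactly once, so no reason is repeated in the message.
func countThenReport(vlans map[string][]string) string {
	counts := map[string]int{}
	for _, reasons := range vlans {
		for _, reason := range reasons {
			counts[reason]++
		}
	}
	// Build the message only after the scan completes.
	keys := make([]string, 0, len(counts))
	for k := range counts {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering for the example
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("[%s - Number of Occurrences: %d.]", k, counts[k]))
	}
	return strings.Join(parts, ", ")
}

func main() {
	fmt.Println(countThenReport(map[string][]string{
		"vlan1": {"E1", "E2"},
		"vlan2": {"E1"},
	}))
}
```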
```go
// Determine what nodes the service endpoints are running on
endpointNodes := map[string]string{}
if len(ep.Subsets) > 0 {
	for _, subset := range ep.Subsets {
		for _, address := range subset.Addresses {
			if address.NodeName != nil {
				endpointNodes[*address.NodeName] = address.IP
```
Compare node identity consistently before deleting keepalived pods.
endpointNodes is keyed by EndpointAddress.NodeName, but the eviction check looks it up with pod.Status.HostIP. Those are different identifiers in Kubernetes, so a keepalived pod on the correct node will still look misplaced and get deleted. The current test fixtures hide this because they set NodeName to an IP literal.
🐛 Proposed fix
```diff
-	endpointNodes := map[string]string{}
+	endpointNodes := map[string]struct{}{}
 	if len(ep.Subsets) > 0 {
 		for _, subset := range ep.Subsets {
 			for _, address := range subset.Addresses {
 				if address.NodeName != nil {
-					endpointNodes[*address.NodeName] = address.IP
+					endpointNodes[*address.NodeName] = struct{}{}
 				}
 			}
 		}
 	}
@@
-	if pod.Status.HostIP == "" {
-		klog.V(3).Infof("Host IP is not set for pod: Name: %v, Namespace: %v", pod.Name, pod.Namespace)
+	if pod.Spec.NodeName == "" {
+		klog.V(3).Infof("Node name is not set for pod: Name: %v, Namespace: %v", pod.Name, pod.Namespace)
 		continue
 	}
 	// Determine where the pod is running
-	if podIP, exist := endpointNodes[pod.Status.HostIP]; exist {
-		klog.V(3).Infof("Endpoint %v is running on this node", podIP)
+	if _, exist := endpointNodes[pod.Spec.NodeName]; exist {
+		klog.V(3).Infof("Endpoint is running on node %v", pod.Spec.NodeName)
 		continue
 	}
```

Also applies to: 173-180
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ibm/ibm_endpoint_watch.go` around lines 103 - 109, The endpointNodes map is
keyed by EndpointAddress.NodeName but the eviction logic looks up pods by
pod.Status.HostIP, causing mismatches; change the eviction lookup to use the
pod's node name (pod.Spec.NodeName) so both sides compare Kubernetes node
identity consistently (EndpointAddress.NodeName ↔ pod.Spec.NodeName). Update the
two places that perform this check (the block that builds endpointNodes and the
eviction check(s) that currently reference pod.Status.HostIP) to use
NodeName-based keys/lookup; ensure any nil checks handle missing
pod.Spec.NodeName or EndpointAddress.NodeName before comparing.
```sh
set -xe

DOCKER_IMAGE_NAME="${1}"
DOCKER_IMAGE_TAG="${2}"

# Push the docker image.
./build-tools/docker/pushDockerImage.sh "${DOCKER_IMAGE_NAME}" "${DOCKER_IMAGE_TAG}"

# Initialize image, kube release, and tag information
bom_image="k8s_cloud_controller_manager_image"
current_release=$(grep "^TAG ?=" Makefile | awk '{ print $3 }')
kube_major=$(echo "${current_release}" | cut -d'.' -f1 | tr -d 'v')
kube_minor=$(echo "${current_release}" | cut -d'.' -f2)
image_name=$(echo "${DOCKER_IMAGE_NAME}" | cut -d'/' -f2)
new_image_tag=${DOCKER_IMAGE_TAG}

# Clone the armada-update-release repo
git clone --depth=1 --single-branch "https://${GHE_USER}:${GHE_TOKEN}@github.ibm.com/alchemy-containers/armada-update-release.git"

# Clone the armada-ansible repo
git clone --filter=blob:none --depth=1 --sparse "https://${GHE_USER}:${GHE_TOKEN}@github.ibm.com/alchemy-containers/armada-ansible.git"
```
Stop echoing repo and Jenkins tokens into CI logs.
With set -x, the credentialed git clone URLs, export GITHUB_TOKEN=${GHE_TOKEN}, and the go run ... -token "${JENKINS_TOKEN}" command all get written to the job log. That leaks live secrets on a successful run.
Suggested hardening
```diff
-set -xe
+set -e
@@
-git clone --depth=1 --single-branch "https://${GHE_USER}:${GHE_TOKEN}@github.ibm.com/alchemy-containers/armada-update-release.git"
+git clone --depth=1 --single-branch "https://${GHE_USER}:${GHE_TOKEN}@github.ibm.com/alchemy-containers/armada-update-release.git"
@@
-export GITHUB_TOKEN=${GHE_TOKEN}
+export GITHUB_TOKEN=${GHE_TOKEN}
 hub pull-request --file "${TRAVIS_BUILD_DIR}"/message.txt --push "${pr_option}" --labels "${pr_labels}"
@@
- go run main.go -action createTestBOM -ansibleBranch "armada-lb-${new_image_tag}" -clusterType "$cluster_type" -clusterVersion "$cluster_version" -user "${JENKINS_USER}" -token "${JENKINS_TOKEN}"
+ go run main.go -action createTestBOM -ansibleBranch "armada-lb-${new_image_tag}" -clusterType "$cluster_type" -clusterVersion "$cluster_version" -user "${JENKINS_USER}" -token "${JENKINS_TOKEN}"
```

Also applies to: 151-161
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/deploy.sh` around lines 20 - 40, The script currently uses set -x and
interpolates secrets into commands (e.g., the git clone lines using
"${GHE_USER}:${GHE_TOKEN}", any export GITHUB_TOKEN=${GHE_TOKEN}, and
invocations that pass -token "${JENKINS_TOKEN}"), which leaks credentials into
CI logs; disable or scope shell xtrace (remove or limit set -x) around
secret-containing commands, stop embedding secrets directly in command arguments
by using credential helpers or reading tokens from environment inside the
process (e.g., use git credential store, .netrc, or pass tokens via stdin), and
ensure any export or command that references GHE_TOKEN or JENKINS_TOKEN is not
printed (wrap sensitive sections with "set +x" before and "set -x" after or
avoid set -x entirely); update the git clone lines and any export/ go run
invocation to use these non-echoing mechanisms so secrets are never emitted to
logs.
Bumps [github.com/go-openapi/strfmt](https://github.com/go-openapi/strfmt) from 0.26.1 to 0.26.2. - [Release notes](https://github.com/go-openapi/strfmt/releases) - [Commits](go-openapi/strfmt@v0.26.1...v0.26.2) --- updated-dependencies: - dependency-name: github.com/go-openapi/strfmt dependency-version: 0.26.2 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…935) Bumps [github.com/IBM/platform-services-go-sdk](https://github.com/IBM/platform-services-go-sdk) from 0.97.2 to 0.97.4. - [Release notes](https://github.com/IBM/platform-services-go-sdk/releases) - [Changelog](https://github.com/IBM/platform-services-go-sdk/blob/main/CHANGELOG.md) - [Commits](IBM/platform-services-go-sdk@v0.97.2...v0.97.4) --- updated-dependencies: - dependency-name: github.com/IBM/platform-services-go-sdk dependency-version: 0.97.4 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [github.com/fsnotify/fsnotify](https://github.com/fsnotify/fsnotify) from 1.9.0 to 1.10.0. - [Release notes](https://github.com/fsnotify/fsnotify/releases) - [Changelog](https://github.com/fsnotify/fsnotify/blob/main/CHANGELOG.md) - [Commits](fsnotify/fsnotify@v1.9.0...v1.10.0) --- updated-dependencies: - dependency-name: github.com/fsnotify/fsnotify dependency-version: 1.10.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Force-pushed fe35749 to c2af6ad
♻️ Duplicate comments (8)
.github/workflows/golangci-update.yml (2)
79-79: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix PR body link to this repository.

Line 79 hardcodes `IBM-Cloud/cloud-provider-ibm`, which sends reviewers to the wrong workflow page for this repo.

Suggested fix

```diff
- Check the PR results [here](https://github.com/IBM-Cloud/cloud-provider-ibm/actions/workflows/dispatch-pr-create.yml)
+ Check the PR results [here](https://github.com/${{ github.repository }}/actions/workflows/dispatch-pr-create.yml)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/golangci-update.yml at line 79, The PR body link is hardcoded to "IBM-Cloud/cloud-provider-ibm" and must point to the current repository; update the link text in the golangci-update workflow (the line containing "Check the PR results [here](https://github.com/IBM-Cloud/cloud-provider-ibm/actions/workflows/dispatch-pr-create.yml)") to reference the current repo dynamically (e.g., use the repository context or a variable such as github.repository or GITHUB_REPOSITORY) or replace the hardcoded owner/repo with the correct repository name so the link directs reviewers to this repo's dispatch-pr-create workflow.
3-10: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add explicit workflow permissions for PR creation/dispatch.

At Line 8, no top-level permissions are defined, but Lines 71 and 90 rely on write-capable tokens. This is fragile under read-only default `GITHUB_TOKEN` settings.

Suggested fix

```diff
 on:
   schedule:
     - cron: "0 11 * * *" # Run at 11:00 UTC daily
   workflow_dispatch: # Allow manual triggering

+permissions:
+  contents: write
+  pull-requests: write
+
 jobs:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/golangci-update.yml around lines 3 - 10, the workflow "golangci-update" currently lacks top-level permissions but later steps require write-capable tokens; add an explicit top-level permissions block (e.g., permissions: contents: write, issues: write, pull-requests: write, workflows: write as needed) to grant the GITHUB_TOKEN the required scopes for PR creation and dispatch and ensure these permissions appear before the jobs definition so steps that call the REST/Dispatch APIs have the needed write access.

.github/workflows/kube-update.yml (4)

31-31: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don’t mask matrix failures with `continue-on-error: true`.

Line 31 causes branch update failures to look successful. `strategy.fail-fast: false` (Line 33) already preserves matrix progression without hiding failures.

Suggested fix

```diff
-    continue-on-error: true
     strategy:
       fail-fast: false
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/kube-update.yml at line 31, Remove the continue-on-error: true setting so matrix job failures are not masked; instead rely on strategy.fail-fast: false to allow other matrix entries to run while still surfacing failures. Locate the continue-on-error property in the workflow job definition (the key "continue-on-error" present near the matrix/strategy block) and delete it or set it to false so branch update failures are reported correctly.
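For context, the matrix block after the fix would look roughly like this; the branch list is an assumed example, not taken from the workflow:

```yaml
# Sketch of the job strategy after removing continue-on-error.
# fail-fast: false lets the remaining matrix entries keep running,
# while a failed entry still marks its job (and the run) as failed.
strategy:
  fail-fast: false
  matrix:
    branch: [main, release-1.36]  # illustrative; the real list lives in the workflow
```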
104-105: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use a branch name that includes the matrix base branch.

Line 104 and Line 119 only use the Kubernetes version, so concurrent matrix runs can collide on the same PR branch name.

Suggested fix

```diff
-          branch: ${{ steps.new-k8s-ver.outputs.version }}
+          branch: kube-update/${{ matrix.branch }}-${{ steps.new-k8s-ver.outputs.version }}
 ...
-            "pr_branch": "${{ steps.new-k8s-ver.outputs.version }}",
+            "pr_branch": "kube-update/${{ matrix.branch }}-${{ steps.new-k8s-ver.outputs.version }}",
```

Also applies to: 119-120

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/kube-update.yml around lines 104 - 105, the branch name currently uses only steps.new-k8s-ver.outputs.version, which can collide across matrix runs; change the branch assignment to include the matrix base branch (matrix.branch) so the PR branch is unique per matrix entry (e.g., combine matrix.branch and steps.new-k8s-ver.outputs.version into the branch value). Update both occurrences that set branch to steps.new-k8s-ver.outputs.version (and the similar use at the second location) to construct a composite name that includes matrix.branch and the version output.
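As a quick illustration of the composite naming, this is how the proposed branch value expands for one matrix entry; both input values are assumed examples standing in for `matrix.branch` and the `new-k8s-ver` step output:

```shell
# Hypothetical inputs standing in for matrix.branch and the
# new-k8s-ver step output in the workflow.
matrix_branch="release-1.36"
version="v1.36.0-beta.0"

# Composite branch name as in the suggested fix: unique per matrix entry.
pr_branch="kube-update/${matrix_branch}-${version}"
echo "$pr_branch"  # prints kube-update/release-1.36-v1.36.0-beta.0
```

Because the base branch is part of the name, two matrix entries that land on the same Kubernetes version can no longer race on one PR branch.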
102-102: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix PR body link to the current repository.

Line 102 points reviewers to IBM-Cloud/cloud-provider-ibm, which is not this repository. Use `${{ github.repository }}` to avoid stale/wrong links.

Suggested fix

```diff
-            Check the PR results [here](https://github.com/IBM-Cloud/cloud-provider-ibm/actions/workflows/dispatch-pr-create.yml)
+            Check the PR results [here](https://github.com/${{ github.repository }}/actions/workflows/dispatch-pr-create.yml)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/kube-update.yml at line 102, replace the hard-coded PR results link that points to "IBM-Cloud/cloud-provider-ibm" with a dynamic repository reference using ${{ github.repository }} so the URL always targets the current repo (e.g., update the string "https://github.com/IBM-Cloud/cloud-provider-ibm/actions/workflows/dispatch-pr-create.yml" to use ${{ github.repository }}); locate and modify the literal URL in the kube-update.yml workflow (the PR results link text) so reviewers are directed to the current repository's workflow run.
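To see how the dynamic reference resolves, the runner substitutes the owner/repo pair into the URL at run time; the repository value below is an assumed example, not this repo's actual slug:

```shell
# GITHUB_REPOSITORY is provided by the Actions runner as "owner/repo";
# the value assigned here is illustrative only.
GITHUB_REPOSITORY="example-org/cloud-provider-ibm"
url="https://github.com/${GITHUB_REPOSITORY}/actions/workflows/dispatch-pr-create.yml"
echo "$url"
```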
3-8: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Add explicit workflow permissions for token-based actions.

At Line 8, the workflow starts jobs without a top-level `permissions` block, but Lines 94 and 113 use token-dependent actions that commonly require write scopes. This can fail on repos/orgs with a read-only default `GITHUB_TOKEN`.

Suggested fix

```diff
 on:
   schedule:
     - cron: "0 10 * * *" # Run at 10:00 UTC daily
   workflow_dispatch: # Allow manual triggering

+permissions:
+  contents: write
+  pull-requests: write
+
 jobs:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/kube-update.yml around lines 3 - 8, add a top-level permissions block to explicitly grant the GITHUB_TOKEN the scopes required by the token-dependent actions used later in the workflow (the actions that perform write operations with GITHUB_TOKEN). Insert a permissions section directly under the on: block and include at minimum the necessary write scopes (for example: contents: write and id-token: write, plus actions: write/packages: write or others needed by those specific actions); keep the permissions minimal and enumerate any extra scopes required by those token-using steps so the workflow will succeed even on repos/orgs with a read-only default GITHUB_TOKEN.

.github/workflows/dispatch-pr-create.yml (2)
5-11: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Declare explicit token permissions for PR comment/automerge steps.

At Line 9, jobs start without explicit permissions while later steps write PR comments and enable auto-merge. On read-only default tokens this fails intermittently across repos/orgs.

Suggested fix

```diff
 on:
   repository_dispatch:
     types: [pull-request-created]

+permissions:
+  contents: read
+  pull-requests: write
+  issues: write
+
 jobs:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/dispatch-pr-create.yml around lines 5 - 11, the workflow lacks explicit token permissions, which breaks later steps that comment on PRs and enable auto-merge; add a permissions stanza to the job named dispatch-pr-create (or at the workflow root) granting the minimum write rights needed, e.g. set permissions: pull-requests: write and contents: read (or contents: write if your steps modify repo files), so the steps that post PR comments and enable auto-merge succeed reliably when triggered by repository_dispatch.
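The prompt's job-level alternative would look roughly like this; the job name comes from the prompt, while the runner image is an assumption:

```yaml
# Sketch: granting the token scopes on the job itself instead of the
# workflow root, as the prompt allows. runs-on value is illustrative.
jobs:
  dispatch-pr-create:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      issues: write
```

Job-level permissions keep the grant scoped to the one job that needs it, which is the tighter choice when other jobs in the same workflow only read.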
55-57: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Guard against empty PR lookup before comment/automerge steps.

Line 55 can return no matches; Lines 64 and 72 then consume an empty PR number and fail with low-signal errors.

Suggested fix

```diff
       - name: Get PR number for branch
         id: get-pr
         run: |
           PR_NUMBER=$(gh pr list --head "${{ github.event.client_payload.pr_branch }}" --json number --jq '.[0].number')
+          if [ -z "$PR_NUMBER" ] || [ "$PR_NUMBER" = "null" ]; then
+            echo "::error::No PR found for branch ${{ github.event.client_payload.pr_branch }}"
+            echo "pr_number=" >> "$GITHUB_OUTPUT"
+            exit 1
+          fi
           echo "pr_number=${PR_NUMBER}" >> "$GITHUB_OUTPUT"
         env:
           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

       - name: Comment on PR
+        if: ${{ steps.get-pr.outputs.pr_number != '' }}
         uses: peter-evans/create-or-update-comment@v5
         with:
           token: ${{ secrets.GITHUB_TOKEN }}
           issue-number: ${{ steps.get-pr.outputs.pr_number }}
           body: |
             ✅ Tests completed successfully

       - name: Enable auto-merge
+        if: ${{ steps.get-pr.outputs.pr_number != '' }}
         uses: peter-evans/enable-pull-request-automerge@v3
```

Also applies to: 60-73

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/dispatch-pr-create.yml around lines 55 - 57, the gh lookup can return an empty PR_NUMBER; add a guard immediately after PR_NUMBER is set: check whether the variable is empty and either fail early with a clear error log or set a sentinel output and exit successfully to skip downstream actions; then update the steps that consume PR_NUMBER (the steps referencing the pr_number output and the comment/automerge steps) to run only when the pr_number output is non-empty (use an if: condition checking that the lookup step output is not '' or the sentinel). Ensure you reference the PR_NUMBER variable (from the gh pr list command) and the consuming steps' use of that output so those steps do not run when PR_NUMBER is empty.
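The guard logic from the suggested fix can be exercised outside Actions with a small sketch; `check_pr_number` is a hypothetical helper standing in for the inline `run:` script, and its argument stands in for the `gh pr list ... --jq '.[0].number'` result, which is empty or the literal string "null" when no PR matches the branch:

```shell
# Hypothetical helper mirroring the guard in the suggested fix.
check_pr_number() {
  pr_number="$1"
  # Empty output or jq's literal "null" both mean the lookup found no PR.
  if [ -z "$pr_number" ] || [ "$pr_number" = "null" ]; then
    echo "::error::No PR found for branch" >&2
    return 1
  fi
  printf 'pr_number=%s\n' "$pr_number"
}

check_pr_number "123"                           # emits pr_number=123
check_pr_number ""     || echo "guard tripped (empty)"
check_pr_number "null" || echo "guard tripped (null)"
```

Failing the lookup step early (or gating the later steps with `if:`) turns the current low-signal downstream errors into one clear annotation at the point of failure.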
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: f9a615ab-1f3e-4713-b95b-4f37661932bf
⛔ Files ignored due to path filters (291)

`go.sum` is excluded by `!**/*.sum`. The remaining ignored files live under `vendor/` (IBM platform-services and vpc Go SDKs, fsnotify, go-openapi/strfmt, google/btree, google/go-cmp, grpc-ecosystem middleware and prometheus packages, golang.org/x/net, golang.org/x/sync, golang.org/x/sys, google.golang.org/grpc, google.golang.org/protobuf, and k8s.io/api packages) and are excluded by `!**/vendor/**` and `!vendor/**`, with generated files additionally matched by `!**/*.pb.go` and `!**/zz_generated*`.
by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1alpha1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/node/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/policy/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1alpha1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1alpha1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1alpha1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1beta1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1beta1/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/rbac/v1beta1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/generated.protomessage.pb.gois excluded 
by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1/zz_generated.model_name.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*vendor/k8s.io/api/resource/v1alpha3/generated.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/generated.protois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/generated.protomessage.pb.gois excluded by!**/*.pb.go,!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/register.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/types.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/types_swagger_doc_generated.gois excluded by!**/vendor/**,!vendor/**vendor/k8s.io/api/resource/v1alpha3/zz_generated.deepcopy.gois excluded by!**/vendor/**,!vendor/**,!**/zz_generated*
📒 Files selected for processing (9)
- .github/dependabot.yml
- .github/workflows/dispatch-pr-create.yml
- .github/workflows/golangci-update.yml
- .github/workflows/kube-update.yml
- Makefile
- README.md
- go.mod
- ibm/ibm_version.go
- main.go
✅ Files skipped from review due to trivial changes (4)
- ibm/ibm_version.go
- README.md
- main.go
- Makefile
Bumps [github.com/fsnotify/fsnotify](https://github.com/fsnotify/fsnotify) from 1.10.0 to 1.10.1. - [Release notes](https://github.com/fsnotify/fsnotify/releases) - [Changelog](https://github.com/fsnotify/fsnotify/blob/main/CHANGELOG.md) - [Commits](fsnotify/fsnotify@v1.10.0...v1.10.1) --- updated-dependencies: - dependency-name: github.com/fsnotify/fsnotify dependency-version: 1.10.1 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Fetch internal IPs of VMs from the DHCP server
* Remove unnecessary field from config
…ainer image to be consistent with ART for 4.17 Reconciling with https://github.com/openshift/ocp-build-data/tree/4c1326094222f9209876f06833179a1b9178faf7/images/ose-powervs-cloud-controller-manager.yml
…ainer image to be consistent with ART for 4.18 Reconciling with https://github.com/openshift/ocp-build-data/tree/827ab4ccce9cbbcf82c9dbaf6398b61d6cff8d7a/images/ose-powervs-cloud-controller-manager.yml
…ainer image to be consistent with ART for 4.19 Reconciling with https://github.com/openshift/ocp-build-data/tree/a39508c86497b4e5e463d7b2c78e51e577be9e7d/images/ose-powervs-cloud-controller-manager.yml
Signed-off-by: Prajyot Parab <Prajyot.Parab2@ibm.com> # Conflicts: # vendor/golang.org/x/crypto/internal/poly1305/sum_amd64.go # vendor/golang.org/x/crypto/poly1305/sum_amd64.go # vendor/golang.org/x/crypto/poly1305/sum_asm.go # vendor/golang.org/x/crypto/poly1305/sum_loong64.s # vendor/golang.org/x/crypto/poly1305/sum_ppc64le.go # vendor/golang.org/x/net/context/go17.go # vendor/golang.org/x/net/context/go19.go # vendor/golang.org/x/net/context/pre_go17.go # vendor/golang.org/x/net/context/pre_go19.go # vendor/golang.org/x/net/html/atom/table.go # vendor/golang.org/x/net/html/parse.go # vendor/golang.org/x/net/html/token.go # vendor/golang.org/x/sys/cpu/cpu_loong64.go # vendor/golang.org/x/sys/cpu/parse.go # vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go # vendor/golang.org/x/sys/unix/zsysnum_linux_loong64.go # vendor/golang.org/x/sys/windows/registry/key.go # vendor/golang.org/x/sys/windows/registry/value.go # vendor/golang.org/x/text/internal/number/format.go # vendor/golang.org/x/text/language/parse.go
…ainer image to be consistent with ART for 4.20 Reconciling with https://github.com/openshift/ocp-build-data/tree/8f77fc475c95f9d98c348deb2feb88f5952d7357/images/ose-powervs-cloud-controller-manager.yml
…ainer image to be consistent with ART for 4.21 Reconciling with https://github.com/openshift/ocp-build-data/tree/4fbe3fab45239dc4be6f5d9d98a0bf36e0274ec9/images/ose-powervs-cloud-controller-manager.yml
…ainer image to be consistent with ART for 4.22 Reconciling with https://github.com/openshift/ocp-build-data/tree/56cb39ad358cdec1db7c84ea1919fe8849c2550b/images/ose-powervs-cloud-controller-manager.yml
c2af6ad to 6402114
Summary by CodeRabbit
Chores
New Workflows