CNTRLPLANE-3184: Create network policies for image-registry components #1301
Conversation
@andreacv98: This pull request references CNTRLPLANE-2655 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.22.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed c161959 to 0993702
Walkthrough: Adds NetworkPolicy YAMLs and corresponding mutators, RBAC rules, client/lister fields, and informer wiring to manage image-registry and image-pruner network policies in the openshift-image-registry namespace.
Actionable comments posted: 2
🧹 Nitpick comments (1)
pkg/resource/imageregistrynetworkpolicy.go (1)
Lines 22-88: Consolidate duplicated mutator logic to reduce drift risk.

This implementation is almost identical to `pkg/resource/imageprunernetworkpolicy.go` (same fields/method bodies; only the policy/asset names differ). Extracting a shared NetworkPolicy mutator (parameterized by policy name + asset filename) would reduce copy/paste maintenance and future divergence.

Refactor sketch:

```diff
+type staticNetworkPolicyMutator struct {
+	eventRecorder       events.Recorder
+	networkPolicyLister networkingv1listers.NetworkPolicyNamespaceLister
+	client              networkingv1client.NetworkingV1Interface
+	name                string
+	assetFile           string
+}
+
+func (np *staticNetworkPolicyMutator) GetName() string { return np.name }
+
+func (np *staticNetworkPolicyMutator) expected() *networkingv1.NetworkPolicy {
+	return resourceread.ReadNetworkPolicyV1OrDie(assets.MustAsset(np.assetFile))
+}

-func NewGeneratorImageRegistryNetworkPolicy(...) Mutator {
-	return &generatorImageRegistryNetworkPolicy{...}
-}
+func NewGeneratorImageRegistryNetworkPolicy(...) Mutator {
+	return &staticNetworkPolicyMutator{
+		eventRecorder:       eventRecorder,
+		networkPolicyLister: networkPolicyLister,
+		client:              client,
+		name:                "image-registry-networkpolicy",
+		assetFile:           "image-registry-networkpolicy.yaml",
+	}
+}
```

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bindata/image-registry-networkpolicy.yaml`:
- Around line 7-15: The NetworkPolicy ingress currently only allows traffic from
the namespaceSelector with matchLabels kubernetes.io/metadata.name:
openshift-monitoring to port: 5000, which together with the namespace-wide
default-deny blocks builds/push/pull to the registry (docker-registry: default);
update the policy's ingress rules to also permit legitimate registry clients by
adding additional from entries (e.g., namespaceSelector(s) for build/ci
namespaces or podSelector(s)/serviceAccount selectors matching your
build/push/pull clients) and ensure the allowed ports include the registry ports
used by clients (5000 and any TLS ports), so registry traffic from those
namespaces/pods is explicitly allowed.
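To make this concrete, a hedged sketch of the expanded ingress rule (the `openshift-builds` namespace selector is an illustrative assumption, not something specified in the PR):

```yaml
# Sketch only: allow monitoring scrapes plus registry clients to reach port 5000.
# The second namespaceSelector is a hypothetical example of a client namespace.
ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-monitoring
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: openshift-builds  # assumed client namespace
    ports:
      - protocol: TCP
        port: 5000
```

Note that multiple entries under a single `from` are ORed together, so each listed namespace independently gains access to port 5000.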
In `@manifests/07-image-registry-operator-networkpolicy.yaml`:
- Around line 45-60: The external egress rule that sets ipBlock: cidr: 0.0.0.0/0
currently allows ports 443 and 6443; remove port 6443 from the ports list so
only 443 remains for S3/internet access (leave the existing
kube-apiserver-specific rule using namespace/pod selectors intact), and add the
missing except ranges 169.254.0.0/16 and 100.64.0.0/10 to the ipBlock.except
list to block metadata and CGNAT addresses while preserving the RFC1918
exclusions.
---
Nitpick comments:
In `@pkg/resource/imageregistrynetworkpolicy.go`:
- Around line 22-88: The generatorImageRegistryNetworkPolicy mutator duplicates
logic from imageprunernetworkpolicy; extract a reusable parameterized network
policy mutator struct (e.g., networkPolicyMutator) that accepts the policy name
and asset filename plus eventRecorder, networkPolicyLister and client, move
shared methods (Type, GetNamespace, GetName via parameter, Get using
networkPolicyLister, expected reading the asset, Create delegating to Update,
Update calling resourceapply.ApplyNetworkPolicy, Delete and Owned) into it, then
implement generatorImageRegistryNetworkPolicy and the pruner variant as thin
wrappers that construct the shared networkPolicyMutator with their specific name
and asset; update references to functions/methods like expected(), Update(),
Create(), Delete() and GetName() to use the generic implementation to avoid
duplication.
📒 Files selected for processing (10)
- bindata/image-pruner-networkpolicy.yaml
- bindata/image-registry-networkpolicy.yaml
- manifests/07-default-deny-all-networkpolicy.yaml
- manifests/07-image-registry-operator-networkpolicy.yaml
- pkg/client/clients.go
- pkg/client/listers.go
- pkg/operator/controller.go
- pkg/resource/generator.go
- pkg/resource/imageprunernetworkpolicy.go
- pkg/resource/imageregistrynetworkpolicy.go
```yaml
# allow egress to ONLY internet (external)
# i.e. S3 access
- to:
    - ipBlock:
        cidr: 0.0.0.0/0
        # refine the rule with the private network exception
        except:
          - 10.0.0.0/8     # RFC1918: Block access towards other private internal networks
          - 172.16.0.0/12  # RFC1918: Block access towards other private internal networks
          - 192.168.0.0/16 # RFC1918: Block access towards other private internal networks
  ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 6443
```
Repository: openshift/cluster-image-registry-operator
Remove port 6443 from the external egress rule or justify its necessity.
Port 6443 to the kube-apiserver is already permitted by the specific rule on lines 19–29 using namespace and pod selectors. The broad external egress rule (0.0.0.0/0) on line 59 reopens this port to any public IP, which violates least-privilege. Since the comment indicates S3 access (which uses port 443), port 6443 should be removed unless the operator has documented external API connections requiring it.
Additionally, consider adding missing exception ranges to prevent egress to metadata services:

- 169.254.0.0/16 (link-local/AWS metadata)
- 100.64.0.0/10 (Carrier Grade NAT)
Suggested changes
```diff
 - to:
     - ipBlock:
         cidr: 0.0.0.0/0
         # refine the rule with the private network exception
         except:
           - 10.0.0.0/8
           - 172.16.0.0/12
           - 192.168.0.0/16
+          - 169.254.0.0/16
+          - 100.64.0.0/10
   ports:
     - protocol: TCP
       port: 443
-    - protocol: TCP
-      port: 6443
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
# allow egress to ONLY internet (external)
# i.e. S3 access
- to:
    - ipBlock:
        cidr: 0.0.0.0/0
        # refine the rule with the private network exception
        except:
          - 10.0.0.0/8     # RFC1918: Block access towards other private internal networks
          - 172.16.0.0/12  # RFC1918: Block access towards other private internal networks
          - 192.168.0.0/16 # RFC1918: Block access towards other private internal networks
          - 169.254.0.0/16
          - 100.64.0.0/10
  ports:
    - protocol: TCP
      port: 443
```
Force-pushed 0993702 to 1d2e2cf
🧹 Nitpick comments (1)
bindata/image-registry-networkpolicy.yaml (1)
Lines 61-73: Consider adding metadata service IP ranges to the exception list.

The external egress rule allows 0.0.0.0/0 with RFC1918 exceptions, which is appropriate for S3 access. However, cloud metadata services (e.g., AWS at 169.254.169.254) are not blocked. If a registry pod were compromised, it could potentially query instance metadata and leak credentials.

Adding 169.254.0.0/16 (link-local) and optionally 100.64.0.0/10 (CGNAT) to the exception list would provide defense-in-depth.

Suggested improvement:

```diff
 - to:
     - ipBlock:
         cidr: 0.0.0.0/0
         # refine the rule with the private network exception
         except:
           - 10.0.0.0/8     # RFC1918: Block access towards other private internal networks
           - 172.16.0.0/12  # RFC1918: Block access towards other private internal networks
           - 192.168.0.0/16 # RFC1918: Block access towards other private internal networks
+          - 169.254.0.0/16 # Link-local: Block access to cloud metadata services
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindata/image-registry-networkpolicy.yaml` around lines 61 - 73, Update the ipBlock "except" list in the external egress rule so it also blocks link-local and optional CGNAT ranges: add 169.254.0.0/16 to the existing except entries (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) and optionally add 100.64.0.0/10; modify the ipBlock except array that currently contains those RFC1918 CIDRs to include these additional CIDRs to prevent access to cloud metadata and CGNAT addresses.
dusk125 left a comment:
Still reviewing, but these stood out to me
Force-pushed 1d2e2cf to 78caff2
Actionable comments posted: 1
🧹 Nitpick comments (1)
bindata/image-registry-networkpolicy.yaml (1)
Lines 63-73: Consider IPv6 for dual-stack clusters.

The external egress rule uses 0.0.0.0/0, which only covers IPv4 traffic. On dual-stack clusters, IPv6 traffic to cloud storage endpoints may be blocked or behave inconsistently depending on the CNI implementation.

If the cluster supports IPv6, add a corresponding rule for ::/0 (excluding private IPv6 ranges like fc00::/7). Note: the identical rule exists in manifests/07-image-registry-operator-networkpolicy.yaml and should be updated consistently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindata/image-registry-networkpolicy.yaml` around lines 63 - 73, The egress ipBlock currently only allows IPv4 (ipBlock.cidr: 0.0.0.0/0 with except entries) which will block IPv6 on dual‑stack clusters; add a matching IPv6 ipBlock entry (ipBlock.cidr: ::/0) with corresponding except ranges (e.g., fc00::/7) and the same ports (protocol: TCP, port: 443) so IPv6 egress to cloud storage is allowed, and apply the same change to the other identical NetworkPolicy block in your manifests to keep them consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/resource/imageregistrynetworkpolicy.go`:
- Around line 62-77: The Update method recreates the ResourceCache each call
(resourceapply.NewResourceCache()), breaking caching and forcing unnecessary
applies; modify the generatorImageRegistryNetworkPolicy struct to include a
persistent ResourceCache field (e.g., cache *resourceapply.ResourceCache),
initialize that cache in the generatorImageRegistryNetworkPolicy constructor,
and change Update to pass the stored cache to resourceapply.ApplyNetworkPolicy
instead of calling NewResourceCache() so SafeToSkipApply and retry loops can use
the persisted cache.
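A hedged YAML sketch of the dual-stack addition the reviewer describes (the CIDRs come from the comment; exact placement and indentation in the manifest are assumptions):

```yaml
# Sketch of an IPv6 counterpart to the IPv4 egress rule (assumption: the
# cluster's CNI enforces NetworkPolicy for IPv6 traffic).
- to:
    - ipBlock:
        cidr: "::/0"
        except:
          - fc00::/7  # Unique local addresses (private IPv6)
  ports:
    - protocol: TCP
      port: 443
```

Because ipBlock entries are address-family specific, the IPv6 rule must be added alongside the IPv4 one rather than merged into it.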
```yaml
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: openshift-dns
  podSelector:
```

I don't think you need the podSelector here (see the doc for the DNS egress)

```yaml
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: openshift-dns
  podSelector:
```

I don't think you need the podSelector here (see the doc for the DNS egress)
```yaml
egress:
  # allow egress to kube-apiserver
```

See other comments about egress to apiserver
Force-pushed 78caff2 to dd9b321
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pkg/client/clients.go`:
- Around line 17-28: The Clients struct can be partially instantiated (e.g.
`&regopclient.Clients{}`) leaving Networking nil and causing runtime nil derefs;
add an atomic constructor and/or validator to ensure all required interfaces are
non-nil. Implement a NewClients(...) factory that accepts the required interface
instances and returns a fully populated *Clients (or an error) and/or add a
Clients.Validate() method that checks Networking and other required fields are
non-nil; update controller wiring to call NewClients or Validate() before using
Clients.Networking so no code path ever operates on a partially initialized
Clients.
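A minimal sketch of the suggested constructor/validator pattern, using stand-in types (`Clients`, `NetworkingInterface`, and `fakeNetworking` here are illustrative, not the operator's real definitions):

```go
// Hypothetical sketch of the pattern the review suggests: a factory that
// returns a fully populated Clients (or an error) plus a Validate method,
// so no code path can operate on a partially initialized value.
package main

import (
	"errors"
	"fmt"
)

// NetworkingInterface stands in for networkingv1client.NetworkingV1Interface.
type NetworkingInterface interface {
	Name() string
}

type Clients struct {
	Networking NetworkingInterface
}

// NewClients validates before returning, so callers never receive a
// partially initialized *Clients from this constructor.
func NewClients(networking NetworkingInterface) (*Clients, error) {
	c := &Clients{Networking: networking}
	if err := c.Validate(); err != nil {
		return nil, err
	}
	return c, nil
}

// Validate checks that every required field is non-nil.
func (c *Clients) Validate() error {
	if c.Networking == nil {
		return errors.New("clients: Networking must not be nil")
	}
	return nil
}

type fakeNetworking struct{}

func (fakeNetworking) Name() string { return "fake" }

func main() {
	if _, err := NewClients(nil); err == nil {
		panic("expected error for nil Networking")
	}
	c, err := NewClients(fakeNetworking{})
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Networking.Name()) // prints "fake"
}
```

Controller wiring would then call NewClients (or Validate before first use) instead of constructing the struct literal directly.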
Force-pushed dd9b321 to 8b39835
@andreacv98 looks like you may need to grant the networkpolicies permission in the rbac.yaml, something like:

```yaml
- apiGroups:
    - networking.k8s.io
  resources:
    - networkpolicies
  verbs:
    - get
    - list
    - watch
    - create
    - update
    - patch
    - delete
```
Adds NetworkPolicy resources to the openshift-image-registry namespace to secure network traffic for the image registry operator, registry, and image pruner components:

- default-deny-all: Denies all ingress and egress traffic by default
- image-registry-operator-networkpolicy: Allows the operator to communicate with kube-apiserver, DNS, and monitoring, plus external S3-compatible storage
- image-registry-networkpolicy: Allows the registry to receive monitoring traffic and communicate with kube-apiserver, DNS, and external storage
- image-pruner-networkpolicy: Allows the pruner to communicate with kube-apiserver, DNS, and the image registry for pruning operations

These policies follow the principle of least privilege by explicitly allowing only required network paths while blocking all other traffic.
Force-pushed 8b39835 to bca66ae
Moves the image-pruner-networkpolicy from the main Generator to ImagePrunerGenerator to ensure it's deployed by the pruner controller alongside the pruner CronJob. This fixes a race condition where the pruner CronJob could be created before the network policy, causing pruner pods to be blocked by the default-deny-all network policy.

Adds the necessary infrastructure to ImagePrunerController:

- eventRecorder and resourceCache for network policy application
- NetworkPolicies lister and Networking client initialization
- NetworkPolicies informer to watch for changes
The default-deny-all network policy should be applied after the allow network policies, to minimize the chance of traffic being blocked in the interim.
The image-pruner-networkpolicy was using a podSelector with label "created-by: image-pruner", but this label was only set on the Job object, not on the pods created by the Job. This meant the network policy never applied to any pods. This change adds the "app: image-pruner" label to the pod template in the CronJob spec so that pods created by the image-pruner CronJob are properly selected by the network policy.
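Since a NetworkPolicy podSelector matches labels on pods, not on the Job or CronJob object, the fix amounts to labeling the pod template. A hedged sketch (the label value comes from the commit message; the schedule, image, and surrounding fields are placeholders):

```yaml
# Sketch only: the pod template carries the label that the network policy's
# podSelector matches. Schedule and image are illustrative placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-pruner
  namespace: openshift-image-registry
spec:
  schedule: "0 0 * * *"  # placeholder
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: image-pruner  # selected by the network policy's podSelector
        spec:
          containers:
            - name: image-pruner
              image: registry.example/image-pruner:latest  # placeholder
          restartPolicy: OnFailure
```

Labels set on the CronJob or Job metadata alone never reach the pods, which is why the original "created-by: image-pruner" selector matched nothing.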
/retest

1 similar comment

/retest
@dusk125: trigger 13 job(s) of type blocking for the nightly release of OCP 4.22. See details on https://pr-payload-tests.ci.openshift.org/runs/ci/0430a6d0-32b6-11f1-9b03-807b4577810c-0
/approve

Holding for QE verification, feel free to remove when appropriate.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: andreacv98, dusk125, flavianmissi. The full list of commands accepted by this bot can be found here; the pull request process is described here.
/jira refresh
@dusk125: This pull request references CNTRLPLANE-2655 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.22.0" version, but no target version was set.
/payload-job periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3 periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4 periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6
@andreacv98: trigger 4 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command. See details on https://pr-payload-tests.ci.openshift.org/runs/ci/60422e30-3356-11f1-9cb8-dc3ea2713e67-0
/jira refresh
@dusk125: This pull request references CNTRLPLANE-2655 which is a valid jira issue.
Pre-merge tested PR.
/verified by @gangwgr
@gangwgr: This PR has been marked as verified.
/payload 4.22 nightly informinng
@gangwgr: An error was encountered. No known errors were detected, please see the full error message for details.

Full error message: could not resolve jobs for 4.22 nightly informinng: job type is not supported: informinng

Please contact an administrator to resolve this issue.
/payload 4.22 nightly informing
@gangwgr: trigger 66 job(s) of type informing for the nightly release of OCP 4.22. See details on https://pr-payload-tests.ci.openshift.org/runs/ci/a25116a0-3366-11f1-95c4-c04ff32d7d9a-0
/hold cancel
/retitle CNTRLPLANE-3184: Create network policies for image-registry components
@andreacv98: This pull request references CNTRLPLANE-3184 which is a valid jira issue.
/jira refresh
@dusk125: This pull request references CNTRLPLANE-3184 which is a valid jira issue.
This PR adds NetworkPolicy resources to the openshift-image-registry namespace to secure network traffic for the image registry operator, registry, and image pruner components:

- default-deny-all: Denies all ingress and egress traffic by default
- image-registry-operator-networkpolicy: Allows the operator to communicate with kube-apiserver, DNS, and monitoring, plus external S3-compatible storage
- image-registry-networkpolicy: Allows the registry to receive monitoring traffic and communicate with kube-apiserver, DNS, and external storage
- image-pruner-networkpolicy: Allows the pruner to communicate with kube-apiserver, DNS, and the image registry for pruning operations

These policies follow the principle of least privilege by explicitly allowing only required network paths while blocking all other traffic.
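For reference, a minimal sketch of what a namespace-wide default-deny policy conventionally looks like (names here follow the PR's description, but the exact manifest may differ):

```yaml
# Illustrative default-deny-all NetworkPolicy: an empty podSelector selects
# every pod in the namespace; listing both policyTypes with no ingress or
# egress rules denies all traffic in both directions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: openshift-image-registry
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

The allow policies above then punch specific holes in this baseline; pods not matched by any allow policy remain fully isolated.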