
CNTRLPLANE-3184: Create network policies for image-registry components#1301

Merged
openshift-merge-bot[bot] merged 8 commits into openshift:main from andreacv98:network-policies on Apr 8, 2026

Conversation

@andreacv98
Contributor

This PR adds NetworkPolicy resources to the openshift-image-registry namespace to secure network traffic for the image registry operator, registry, and image pruner components:

  • default-deny-all: Denies all ingress and egress traffic by default in the namespace
  • image-registry-operator-networkpolicy: Allows operator to communicate with kube-apiserver, DNS, and monitoring, plus external IP communication
  • image-registry-networkpolicy: Allows registry to receive monitoring traffic and communicate with kube-apiserver, DNS, and external storage
  • image-pruner-networkpolicy: Allows pruner to communicate with kube-apiserver, DNS, and the image registry for pruning operations

These policies follow the principle of least privilege by explicitly allowing only required network paths while blocking all other traffic.
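For illustration, a namespace-wide default-deny policy of the kind described above conventionally takes the form of a small manifest like the following. This is a sketch of the standard Kubernetes pattern, not necessarily the exact manifest in this PR:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: openshift-image-registry
spec:
  # An empty podSelector matches every pod in the namespace.
  podSelector: {}
  # Listing both policyTypes with no ingress/egress rules denies all traffic
  # in both directions; allow-policies then punch specific holes.
  policyTypes:
  - Ingress
  - Egress
```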

@openshift-ci-robot added the jira/valid-reference label (indicates that this PR references a valid Jira ticket of any type) on Mar 5, 2026
@openshift-ci-robot
Contributor

openshift-ci-robot commented Mar 5, 2026

@andreacv98: This pull request references CNTRLPLANE-2655 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.22.0" version, but no target version was set.

Details

In response to the PR description quoted above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai

coderabbitai Bot commented Mar 16, 2026

Important

Review skipped

Auto reviews are limited based on label configuration.

🚫 Review skipped — only excluded labels are configured. (1)
  • do-not-merge/work-in-progress

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a4533c99-5976-400b-ac61-d4e5b9dd25e1

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds NetworkPolicy YAMLs and corresponding mutators, RBAC rules, client/lister fields, and informer wiring to manage image-registry and image-pruner network policies in the openshift-image-registry namespace.

Changes

Cohort / File(s) Summary
NetworkPolicy Manifests
bindata/image-pruner-networkpolicy.yaml, bindata/image-registry-networkpolicy.yaml, manifests/07-default-deny-all-networkpolicy.yaml, manifests/07-image-registry-operator-networkpolicy.yaml
New NetworkPolicy resources added: image-pruner and image-registry policies, a default deny-all policy, and an operator policy. Define podSelectors, ingress/egress rules (kube-apiserver, DNS, registry ports, internet egress with RFC1918 exceptions) and policyTypes.
RBAC
manifests/02-rbac.yaml
Added RBAC rules granting the operator permissions on networkpolicies (networking.k8s.io) with verbs create/delete/get/list/patch/update/watch.
Client & Lister Additions
pkg/client/clients.go, pkg/client/listers.go
Added Networking client field (Networking networkingset.NetworkingV1Interface) and NetworkPolicy lister field (NetworkPolicies knetworkinglisters.NetworkPolicyNamespaceLister), plus networking imports.
Controller Informer Wiring
pkg/operator/controller.go
Initialize NetworkingV1 client, register a NetworkPolicies informer, wire its namespace lister into controller listers, and add its HasSynced to informer sync checks.
Generator Registration
pkg/resource/generator.go
Generator.List now errors if clients.Networking is nil and appends two new mutator constructors: NewGeneratorImageRegistryNetworkPolicy and NewGeneratorImagePrunerNetworkPolicy.
Image Registry Mutator
pkg/resource/imageregistrynetworkpolicy.go
New exported mutator type and constructor managing image-registry-networkpolicy: implements Mutator methods (Type, GetNamespace, GetName, Get, Create, Update, Delete, Owned), loads embedded YAML asset, reconciles via resourceapply using lister and NetworkingV1 client.
Image Pruner Mutator
pkg/resource/imageprunernetworkpolicy.go
New exported mutator type and constructor managing image-pruner-networkpolicy: similar Mutator implementation, uses namespace-scoped lister and NetworkingV1 client, reconciles and deletes via typed client.
Image Pruner Generator Validation
pkg/resource/generatorimagepruner.go
ImagePrunerGenerator.List added nil checks for required clients (Core, RBAC, Batch) and returns an error if any are uninitialized.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
pkg/resource/imageregistrynetworkpolicy.go (1)

22-88: Consolidate duplicated mutator logic to reduce drift risk

This implementation is almost identical to pkg/resource/imageprunernetworkpolicy.go (same fields/method bodies, only policy/asset names differ). Extracting a shared NetworkPolicy mutator (parameterized by policy name + asset filename) would reduce copy/paste maintenance and future divergence.

Refactor sketch
+type staticNetworkPolicyMutator struct {
+	eventRecorder       events.Recorder
+	networkPolicyLister networkingv1listers.NetworkPolicyNamespaceLister
+	client              networkingv1client.NetworkingV1Interface
+	name                string
+	assetFile           string
+}
+
+func (np *staticNetworkPolicyMutator) GetName() string { return np.name }
+func (np *staticNetworkPolicyMutator) expected() *networkingv1.NetworkPolicy {
+	return resourceread.ReadNetworkPolicyV1OrDie(assets.MustAsset(np.assetFile))
+}
-func NewGeneratorImageRegistryNetworkPolicy(...) Mutator {
-	return &generatorImageRegistryNetworkPolicy{...}
-}
+func NewGeneratorImageRegistryNetworkPolicy(...) Mutator {
+	return &staticNetworkPolicyMutator{
+		eventRecorder: eventRecorder, networkPolicyLister: networkPolicyLister, client: client,
+		name: "image-registry-networkpolicy", assetFile: "image-registry-networkpolicy.yaml",
+	}
+}

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/resource/imageregistrynetworkpolicy.go` around lines 22 - 88, The
generatorImageRegistryNetworkPolicy mutator duplicates logic from
imageprunernetworkpolicy; extract a reusable parameterized network policy
mutator struct (e.g., networkPolicyMutator) that accepts the policy name and
asset filename plus eventRecorder, networkPolicyLister and client, move shared
methods (Type, GetNamespace, GetName via parameter, Get using
networkPolicyLister, expected reading the asset, Create delegating to Update,
Update calling resourceapply.ApplyNetworkPolicy, Delete and Owned) into it, then
implement generatorImageRegistryNetworkPolicy and the pruner variant as thin
wrappers that construct the shared networkPolicyMutator with their specific name
and asset; update references to functions/methods like expected(), Update(),
Create(), Delete() and GetName() to use the generic implementation to avoid
duplication.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bindata/image-registry-networkpolicy.yaml`:
- Around line 7-15: The NetworkPolicy ingress currently only allows traffic from
the namespaceSelector with matchLabels kubernetes.io/metadata.name:
openshift-monitoring to port: 5000, which together with the namespace-wide
default-deny blocks builds/push/pull to the registry (docker-registry: default);
update the policy's ingress rules to also permit legitimate registry clients by
adding additional from entries (e.g., namespaceSelector(s) for build/ci
namespaces or podSelector(s)/serviceAccount selectors matching your
build/push/pull clients) and ensure the allowed ports include the registry ports
used by clients (5000 and any TLS ports), so registry traffic from those
namespaces/pods is explicitly allowed.

In `@manifests/07-image-registry-operator-networkpolicy.yaml`:
- Around line 45-60: The external egress rule that sets ipBlock: cidr: 0.0.0.0/0
currently allows ports 443 and 6443; remove port 6443 from the ports list so
only 443 remains for S3/internet access (leave the existing
kube-apiserver-specific rule using namespace/pod selectors intact), and add the
missing except ranges 169.254.0.0/16 and 100.64.0.0/10 to the ipBlock.except
list to block metadata and CGNAT addresses while preserving the RFC1918
exclusions.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 138094b3-cdda-46f7-aacb-ea098bdbe0a2

📥 Commits

Reviewing files that changed from the base of the PR and between e000262 and 0993702.

📒 Files selected for processing (10)
  • bindata/image-pruner-networkpolicy.yaml
  • bindata/image-registry-networkpolicy.yaml
  • manifests/07-default-deny-all-networkpolicy.yaml
  • manifests/07-image-registry-operator-networkpolicy.yaml
  • pkg/client/clients.go
  • pkg/client/listers.go
  • pkg/operator/controller.go
  • pkg/resource/generator.go
  • pkg/resource/imageprunernetworkpolicy.go
  • pkg/resource/imageregistrynetworkpolicy.go

Comment thread bindata/image-registry-networkpolicy.yaml
Comment on lines +45 to +60

  # allow egress to ONLY internet (external)
  # i.e. S3 access
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        # refine the rule with the private network exception
        except:
        - 10.0.0.0/8     # RFC1918: Block access towards other private internal networks
        - 172.16.0.0/12  # RFC1918: Block access towards other private internal networks
        - 192.168.0.0/16 # RFC1918: Block access towards other private internal networks
    ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 6443


⚠️ Potential issue | 🟠 Major


Remove port 6443 from the external egress rule or justify its necessity.

Port 6443 to the kube-apiserver is already permitted by the specific rule on lines 19–29 using namespace and pod selectors. The broad external egress rule (0.0.0.0/0) on line 59 reopens this port to any public IP, which violates least-privilege. Since the comment indicates S3 access (which uses port 443), port 6443 should be removed unless the operator has documented external API connections requiring it.

Additionally, consider adding missing exception ranges to prevent egress to metadata services:

  • 169.254.0.0/16 (link-local/AWS metadata)
  • 100.64.0.0/10 (Carrier Grade NAT)
Suggested changes
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        # refine the rule with the private network exception
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
+       - 169.254.0.0/16
+       - 100.64.0.0/10
    ports:
    - protocol: TCP
      port: 443
-   - protocol: TCP
-     port: 6443

@coderabbitai coderabbitai Bot left a comment

🧹 Nitpick comments (1)
bindata/image-registry-networkpolicy.yaml (1)

61-73: Consider adding metadata service IP ranges to the exception list.

The external egress rule allows 0.0.0.0/0 with RFC1918 exceptions, which is appropriate for S3 access. However, cloud metadata services (e.g., AWS at 169.254.169.254) are not blocked. If a registry pod were compromised, it could potentially query instance metadata and leak credentials.

Adding 169.254.0.0/16 (link-local) and optionally 100.64.0.0/10 (CGNAT) to the exception list would provide defense-in-depth.

Suggested improvement
   - to:
     - ipBlock:
         cidr: 0.0.0.0/0
         # refine the rule with the private network exception
         except:
         - 10.0.0.0/8     # RFC1918: Block access towards other private internal networks
         - 172.16.0.0/12  # RFC1918: Block access towards other private internal networks
         - 192.168.0.0/16 # RFC1918: Block access towards other private internal networks
+        - 169.254.0.0/16 # Link-local: Block access to cloud metadata services
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindata/image-registry-networkpolicy.yaml` around lines 61 - 73, Update the
ipBlock "except" list in the external egress rule so it also blocks link-local
and optional CGNAT ranges: add 169.254.0.0/16 to the existing except entries
(10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) and optionally add 100.64.0.0/10;
modify the ipBlock except array that currently contains those RFC1918 CIDRs to
include these additional CIDRs to prevent access to cloud metadata and CGNAT
addresses.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6b7eaf30-906d-4230-a4de-871f00591f49

📥 Commits

Reviewing files that changed from the base of the PR and between 0993702 and 1d2e2cf.

📒 Files selected for processing (10)
  • bindata/image-pruner-networkpolicy.yaml
  • bindata/image-registry-networkpolicy.yaml
  • manifests/07-default-deny-all-networkpolicy.yaml
  • manifests/07-image-registry-operator-networkpolicy.yaml
  • pkg/client/clients.go
  • pkg/client/listers.go
  • pkg/operator/controller.go
  • pkg/resource/generator.go
  • pkg/resource/imageprunernetworkpolicy.go
  • pkg/resource/imageregistrynetworkpolicy.go
🚧 Files skipped from review as they are similar to previous changes (4)
  • manifests/07-default-deny-all-networkpolicy.yaml
  • pkg/resource/generator.go
  • pkg/client/clients.go
  • pkg/resource/imageregistrynetworkpolicy.go

@dusk125 dusk125 left a comment

Still reviewing, but these stood out to me

Comment thread manifests/08-default-deny-all-networkpolicy.yaml
Comment thread manifests/07-image-registry-operator-networkpolicy.yaml
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
bindata/image-registry-networkpolicy.yaml (1)

63-73: Consider IPv6 for dual-stack clusters.

The external egress rule uses 0.0.0.0/0 which only covers IPv4 traffic. On dual-stack clusters, IPv6 traffic to cloud storage endpoints may be blocked or behave inconsistently depending on CNI implementation.

If the cluster supports IPv6, add a corresponding rule for ::/0 (excluding private IPv6 ranges like fc00::/7). Note: The identical rule exists in manifests/07-image-registry-operator-networkpolicy.yaml and should be updated consistently.
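A dual-stack variant of this egress rule might look like the sketch below. The IPv6 CIDR and its private-range exceptions are assumptions based on the comment above, not manifests taken from this PR:

```yaml
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
      except:
      - 10.0.0.0/8     # RFC1918 private ranges
      - 172.16.0.0/12
      - 192.168.0.0/16
  - ipBlock:
      cidr: ::/0
      except:
      - fc00::/7   # IPv6 unique local addresses (private ranges)
      - fe80::/10  # IPv6 link-local
  ports:
  - protocol: TCP
    port: 443
```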

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindata/image-registry-networkpolicy.yaml` around lines 63 - 73, The egress
ipBlock currently only allows IPv4 (ipBlock.cidr: 0.0.0.0/0 with except entries)
which will block IPv6 on dual‑stack clusters; add a matching IPv6 ipBlock entry
(ipBlock.cidr: ::/0) with corresponding except ranges (e.g., fc00::/7) and the
same ports (protocol: TCP, port: 443) so IPv6 egress to cloud storage is
allowed, and apply the same change to the other identical NetworkPolicy block in
your manifests to keep them consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/resource/imageregistrynetworkpolicy.go`:
- Around line 62-77: The Update method recreates the ResourceCache each call
(resourceapply.NewResourceCache()), breaking caching and forcing unnecessary
applies; modify the generatorImageRegistryNetworkPolicy struct to include a
persistent ResourceCache field (e.g., cache *resourceapply.ResourceCache),
initialize that cache in the generatorImageRegistryNetworkPolicy constructor,
and change Update to pass the stored cache to resourceapply.ApplyNetworkPolicy
instead of calling NewResourceCache() so SafeToSkipApply and retry loops can use
the persisted cache.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4c22b96c-f038-4d8d-846e-55f89798719d

📥 Commits

Reviewing files that changed from the base of the PR and between 1d2e2cf and 78caff2.

📒 Files selected for processing (10)
  • bindata/image-pruner-networkpolicy.yaml
  • bindata/image-registry-networkpolicy.yaml
  • manifests/07-default-deny-all-networkpolicy.yaml
  • manifests/07-image-registry-operator-networkpolicy.yaml
  • pkg/client/clients.go
  • pkg/client/listers.go
  • pkg/operator/controller.go
  • pkg/resource/generator.go
  • pkg/resource/imageprunernetworkpolicy.go
  • pkg/resource/imageregistrynetworkpolicy.go
🚧 Files skipped from review as they are similar to previous changes (6)
  • bindata/image-pruner-networkpolicy.yaml
  • pkg/client/listers.go
  • manifests/07-default-deny-all-networkpolicy.yaml
  • pkg/operator/controller.go
  • manifests/07-image-registry-operator-networkpolicy.yaml
  • pkg/resource/imageprunernetworkpolicy.go

Comment thread pkg/resource/imageregistrynetworkpolicy.go
Comment thread bindata/image-pruner-networkpolicy.yaml Outdated
Comment thread bindata/image-pruner-networkpolicy.yaml Outdated
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: openshift-dns
    podSelector:

I don't think you need the podSelector here (see the doc for the DNS egress)
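Under standard NetworkPolicy semantics, a namespaceSelector without a podSelector matches all pods in the selected namespace, so the rule can be simplified as sketched below. The port numbers are an assumption based on OpenShift's CoreDNS convention, not copied from this PR:

```yaml
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: openshift-dns
  ports:
  - protocol: UDP
    port: 5353
  - protocol: TCP
    port: 5353
```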

Comment thread bindata/image-pruner-networkpolicy.yaml
Comment thread manifests/07-image-registry-operator-networkpolicy.yaml
Comment thread bindata/image-registry-networkpolicy.yaml Outdated
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: openshift-dns
    podSelector:

I don't think you need the podSelector here (see the doc for the DNS egress)

Comment thread bindata/image-registry-networkpolicy.yaml
protocol: TCP

egress:
# allow egress to kube-apiserver

See other comments about egress to apiserver

  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: openshift-dns
    podSelector:

see other comments about podselector

Comment thread manifests/07-image-registry-operator-networkpolicy.yaml
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@pkg/client/clients.go`:
- Around line 17-28: The Clients struct can be partially instantiated (e.g.
&regopclient.Clients{}) leaving Networking nil and causing runtime nil derefs;
add an atomic constructor and/or validator to ensure all required interfaces are
non-nil. Implement a NewClients(...) factory that accepts the required interface
instances and returns a fully populated *Clients (or an error) and/or add a
Clients.Validate() method that checks Networking and other required fields are
non-nil; update controller wiring to call NewClients or Validate() before using
Clients.Networking so no code path ever operates on a partially initialized
Clients.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2953217b-3f66-4891-afac-738923eddfb5

📥 Commits

Reviewing files that changed from the base of the PR and between 78caff2 and dd9b321.

📒 Files selected for processing (10)
  • bindata/image-pruner-networkpolicy.yaml
  • bindata/image-registry-networkpolicy.yaml
  • manifests/07-default-deny-all-networkpolicy.yaml
  • manifests/07-image-registry-operator-networkpolicy.yaml
  • pkg/client/clients.go
  • pkg/client/listers.go
  • pkg/operator/controller.go
  • pkg/resource/generator.go
  • pkg/resource/imageprunernetworkpolicy.go
  • pkg/resource/imageregistrynetworkpolicy.go
✅ Files skipped from review due to trivial changes (3)
  • manifests/07-default-deny-all-networkpolicy.yaml
  • pkg/client/listers.go
  • bindata/image-pruner-networkpolicy.yaml
🚧 Files skipped from review as they are similar to previous changes (6)
  • pkg/resource/generator.go
  • pkg/operator/controller.go
  • bindata/image-registry-networkpolicy.yaml
  • pkg/resource/imageregistrynetworkpolicy.go
  • manifests/07-image-registry-operator-networkpolicy.yaml
  • pkg/resource/imageprunernetworkpolicy.go

Comment thread pkg/client/clients.go
@dusk125

dusk125 commented Mar 25, 2026

@andreacv98 looks like you may need to grant the networkpolicies permission in the rbac.yaml: something like

- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete

Comment thread bindata/image-pruner-networkpolicy.yaml Outdated
Comment thread bindata/image-registry-networkpolicy.yaml Outdated
Comment thread manifests/07-image-registry-operator-networkpolicy.yaml Outdated
Adds NetworkPolicy resources to the openshift-image-registry namespace
to secure network traffic for the image registry operator, registry, and
image pruner components:

- default-deny-all: Denies all ingress and egress traffic by default
- image-registry-operator-networkpolicy: Allows operator to communicate
  with kube-apiserver, DNS, and monitoring, plus external S3-compatible
  storage
- image-registry-networkpolicy: Allows registry to receive monitoring
  traffic and communicate with kube-apiserver, DNS, and external storage
- image-pruner-networkpolicy: Allows pruner to communicate with
  kube-apiserver, DNS, and the image registry for pruning operations

These policies follow the principle of least privilege by explicitly
allowing only required network paths while blocking all other traffic.
Comment thread manifests/08-default-deny-all-networkpolicy.yaml
Comment thread pkg/resource/imageprunernetworkpolicy.go Outdated
Comment thread pkg/resource/imageregistrynetworkpolicy.go Outdated
Moves the image-pruner-networkpolicy from the main Generator to
ImagePrunerGenerator to ensure it's deployed by the pruner controller
alongside the pruner CronJob. This fixes a race condition where the
pruner CronJob could be created before the network policy, causing
pruner pods to be blocked by the default-deny-all network policy.

Adds necessary infrastructure to ImagePrunerController:
- eventRecorder and resourceCache for network policy application
- NetworkPolicies lister and Networking client initialization
- NetworkPolicies informer to watch for changes
Apply the default-deny-all network policy only after the allow network policies are in place, to minimize the chance of transiently blocking required traffic.
The image-pruner-networkpolicy was using a podSelector with label
"created-by: image-pruner", but this label was only set on the Job
object, not on the pods created by the Job. This meant the network
policy never applied to any pods.

This change adds the "app: image-pruner" label to the pod template
in the CronJob spec so that pods created by the image-pruner CronJob
are properly selected by the network policy.
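The fix above pairs a pod-template label with the policy's podSelector. A minimal sketch of the relationship, with illustrative field values (the PR's CronJob spec and bindata/image-pruner-networkpolicy.yaml are authoritative):

```yaml
# Illustrative sketch: the label must be on the pod template, because
# NetworkPolicy podSelectors match pods, not Job or CronJob objects.
# Image, schedule, and policy rules here are placeholders.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-pruner
  namespace: openshift-image-registry
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: image-pruner   # label on the pods, not just the Job object
        spec:
          restartPolicy: Never
          containers:
          - name: image-pruner
            image: example.invalid/image-pruner:latest
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: image-pruner-networkpolicy
  namespace: openshift-image-registry
spec:
  podSelector:
    matchLabels:
      app: image-pruner         # now matches the pods the CronJob creates
  policyTypes:
  - Egress
```

With the previous "created-by: image-pruner" label only on the Job object, the podSelector matched nothing and the pruner pods fell through to default-deny-all.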
@andreacv98
Contributor Author

/retest

1 similar comment
@andreacv98
Contributor Author

/retest

@openshift-ci
Contributor

openshift-ci Bot commented Apr 7, 2026

@dusk125: trigger 13 job(s) of type blocking for the nightly release of OCP 4.22

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-serial-2of2
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-1of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-3of3
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/0430a6d0-32b6-11f1-9b03-807b4577810c-0

@flavianmissi
Member

/approve

Holding for QE verification, feel free to remove when appropriate.
/hold

@openshift-ci openshift-ci Bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 8, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Apr 8, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andreacv98, dusk125, flavianmissi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 8, 2026
@dusk125

dusk125 commented Apr 8, 2026

/jira refresh

@openshift-ci-robot
Contributor

openshift-ci-robot commented Apr 8, 2026

@dusk125: This pull request references CNTRLPLANE-2655 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.22.0" version, but no target version was set.

Details

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@andreacv98
Contributor Author

/payload-job periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3 periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4 periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

@openshift-ci
Contributor

openshift-ci Bot commented Apr 8, 2026

@andreacv98: trigger 4 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-techpreview-serial-2of3
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/60422e30-3356-11f1-9cb8-dc3ea2713e67-0

@dusk125

dusk125 commented Apr 8, 2026

/jira refresh

@openshift-ci-robot
Contributor

openshift-ci-robot commented Apr 8, 2026

@dusk125: This pull request references CNTRLPLANE-2655 which is a valid jira issue.

Details

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@gangwgr
Contributor

gangwgr commented Apr 8, 2026

Pre-merge tested the PR.

@gangwgr
Contributor

gangwgr commented Apr 8, 2026

/verified by @gangwgr
Pre-merge testing completed; waiting for the payload jobs to pass. Please feel free to unhold the PR when the jobs complete.

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label Apr 8, 2026
@openshift-ci-robot
Contributor

@gangwgr: This PR has been marked as verified by @gangwgr.

Details

In response to this:

/verified by @gangwgr
Pre-merge testing completed; waiting for the payload jobs to pass. Please feel free to unhold the PR when the jobs complete.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@gangwgr
Contributor

gangwgr commented Apr 8, 2026

/payload 4.22 nightly informinng

@openshift-ci
Contributor

openshift-ci Bot commented Apr 8, 2026

@gangwgr: An error was encountered. No known errors were detected, please see the full error message for details.

Full error message. could not resolve jobs for 4.22 nightly informinng: job type is not supported: informinng

Please contact an administrator to resolve this issue.

@gangwgr
Contributor

gangwgr commented Apr 8, 2026

/payload 4.22 nightly informing

@openshift-ci
Contributor

openshift-ci Bot commented Apr 8, 2026

@gangwgr: trigger 66 job(s) of type informing for the nightly release of OCP 4.22

  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-aks-ovn-conformance
  • periodic-ci-openshift-release-main-nightly-4.22-console-aws
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-cgroupsv2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-fips
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-single-node-techpreview-serial
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upgrade-fips-no-nat-instance
  • periodic-ci-openshift-release-main-ci-4.22-e2e-aws-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-upi
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-azure
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-azure-csi
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-azure-ovn-upgrade-out-of-change
  • periodic-ci-openshift-release-main-cnv-nightly-4.22-deploy-azure-kubevirt-ovn
  • periodic-ci-openshift-cluster-control-plane-machine-set-operator-release-4.22-periodics-e2e-gcp
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-rt
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-gcp-ovn-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-hypershift-release-4.22-periodics-e2e-azure-kubevirt-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-dualstack-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-ipv4
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-1of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-serial-virtualmedia-2of2
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-serial-ovn-dualstack
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-upgrade-from-stable-4.21-e2e-metal-ipi-upgrade-ovn-ipv6
  • periodic-ci-openshift-release-main-nightly-4.22-metal-ovn-single-node-recert-cluster-rename
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-aws
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-osd-ccs-gcp
  • periodic-ci-openshift-osde2e-main-nightly-4.22-osd-gcp
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-aws-ovn-proxy
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-single-node-live-iso
  • periodic-ci-openshift-eng-ocp-qe-perfscale-ci-main-aws-4.22-nightly-x86-payload-control-plane-6nodes
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-telco5g
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-csi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-techpreview-serial
  • periodic-ci-openshift-release-main-ci-4.22-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-ci-4.22-upgrade-from-stable-4.21-e2e-vsphere-ovn-upgrade
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-ovn-upi-serial
  • periodic-ci-openshift-release-main-nightly-4.22-e2e-vsphere-static-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/a25116a0-3366-11f1-95c4-c04ff32d7d9a-0

@dusk125

dusk125 commented Apr 8, 2026

/hold cancel
Blocking payloads look good, save one unrelated metal ipv6 failure

@openshift-ci openshift-ci Bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 8, 2026
@dusk125

dusk125 commented Apr 8, 2026

/retitle CNTRLPLANE-3184: Create network policies for image-registry components

@openshift-ci openshift-ci Bot changed the title CNTRLPLANE-2655: Create network policies for image-registry components CNTRLPLANE-3184: Create network policies for image-registry components Apr 8, 2026
@openshift-ci-robot
Contributor

openshift-ci-robot commented Apr 8, 2026

@andreacv98: This pull request references CNTRLPLANE-3184 which is a valid jira issue.

Details

In response to this:

This PR adds NetworkPolicy resources to the openshift-image-registry namespace to secure network traffic for the image registry operator, registry, and image pruner components:

  • default-deny-all: Denies all ingress and egress traffic by default in the namespace
  • image-registry-operator-networkpolicy: Allows operator to communicate with kube-apiserver, DNS, and monitoring, plus external IP communication
  • image-registry-networkpolicy: Allows registry to receive monitoring traffic and communicate with kube-apiserver, DNS, and external storage
  • image-pruner-networkpolicy: Allows pruner to communicate with kube-apiserver, DNS, and the image registry for pruning operations

These policies follow the principle of least privilege by explicitly allowing only required network paths while blocking all other traffic.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@dusk125

dusk125 commented Apr 8, 2026

/jira refresh

@openshift-ci-robot
Contributor

openshift-ci-robot commented Apr 8, 2026

@dusk125: This pull request references CNTRLPLANE-3184 which is a valid jira issue.

Details

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@benluddy benluddy added the jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. label Apr 8, 2026
@openshift-merge-bot
Contributor

/retest-required

Remaining retests: 0 against base HEAD d32b6f9 and 2 for PR HEAD 68cc087 in total

@openshift-merge-bot openshift-merge-bot Bot merged commit 3a32428 into openshift:main Apr 8, 2026
14 checks passed

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged. verified Signifies that the PR passed pre-merge verification criteria
