Skip device plugin alert when devicePlugin is disabled in ClusterPolicy #2238

Open

harche wants to merge 1 commit into NVIDIA:main from harche:fix/device-plugin-disabled-alert-false-positive

Conversation

@harche
Contributor

@harche harche commented Mar 20, 2026

Summary

When devicePlugin.enabled is set to false in the ClusterPolicy, the nvidia-node-status-exporter still monitors gpu_operator_node_device_plugin_devices_total, which reports 0 (since no device plugin pods are running). This triggers a false positive GPUOperatorNodeDeploymentFailed alert after 30 minutes.

This is a valid configuration — for example, when GPU allocation is managed externally via MIG partitioning through a third-party operator.

Changes

  • controllers/object_controls.go: The operator now injects a DEVICE_PLUGIN_ENABLED env var into the node-status-exporter daemonset based on config.DevicePlugin.IsEnabled() (see the sketch after this list).
  • cmd/nvidia-validator/metrics.go: When DEVICE_PLUGIN_ENABLED=false, the exporter skips device plugin validation and sets the gauge to -1 (a documented sentinel value), preventing the == 0 alert from firing.
  • assets/state-node-status-exporter/0800_prometheus_rule_openshift.yaml: Updated comment to document the behavior.
  • controllers/transforms_test.go: Unit tests for env var injection (enabled + disabled).
  • tests/e2e/helpers/clusterpolicy.go: EnableDevicePlugin/DisableDevicePlugin helpers.
  • tests/e2e/suites/clusterpolicy_test.go: E2E tests verifying env var propagation to node-status-exporter daemonset.
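A minimal sketch of the operator-side and exporter-side pieces above, for orientation. The function, package, and variable names here are illustrative, not the exact identifiers in the PR; only the DEVICE_PLUGIN_ENABLED env var, the gpu_operator_node_device_plugin_devices_total metric, and the -1 sentinel come from the change itself.

```go
// Sketch only: addDevicePluginEnabledEnv and observeDevicePluginDevices are
// illustrative names, not the identifiers used in the actual PR.
package sketch

import (
	"os"
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// Operator side (controllers/object_controls.go): the node-status-exporter
// daemonset transform injects DEVICE_PLUGIN_ENABLED from the ClusterPolicy's
// devicePlugin.enabled flag.
func addDevicePluginEnabledEnv(ds *appsv1.DaemonSet, devicePluginEnabled bool) {
	env := corev1.EnvVar{
		Name:  "DEVICE_PLUGIN_ENABLED",
		Value: strconv.FormatBool(devicePluginEnabled),
	}
	for i := range ds.Spec.Template.Spec.Containers {
		c := &ds.Spec.Template.Spec.Containers[i]
		c.Env = append(c.Env, env)
	}
}

// Exporter side (cmd/nvidia-validator/metrics.go): when the device plugin is
// disabled, skip validation and publish the -1 sentinel so the alert's
// `== 0` expression never matches.
var devicePluginDevicesTotal = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "gpu_operator_node_device_plugin_devices_total",
	Help: "GPU devices advertised by the device plugin; -1 when the plugin is disabled in ClusterPolicy",
})

func observeDevicePluginDevices(countDevices func() (float64, error)) {
	if os.Getenv("DEVICE_PLUGIN_ENABLED") == "false" {
		devicePluginDevicesTotal.Set(-1) // documented sentinel value
		return
	}
	if n, err := countDevices(); err == nil {
		devicePluginDevicesTotal.Set(n)
	}
}
```

In the actual code the gauge is registered with the exporter's registry and the device count comes from the existing validation path; the sketch only isolates the enabled/disabled branching.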

Test plan

  • Unit tests pass (TestTransformNodeStatusExporter)
  • Code compiles (go build ./cmd/nvidia-validator/ and go build ./controllers/)
  • E2E tests compile (go build ./tests/e2e/...)
  • Manual validation on hardware with MIG-capable GPUs and devicePlugin.enabled: false

Fixes: #2237

@copy-pr-bot

copy-pr-bot Bot commented Mar 20, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@harche
Contributor Author

harche commented Mar 20, 2026

Initial code review looks good and unit tests pass. Keeping this as a draft; hardware testing with MIG-capable GPUs and devicePlugin.enabled: false is pending before marking it ready for review.

@harche
Contributor Author

harche commented May 1, 2026

Verified the fix on OpenShift 4.21.5 / GCP with NVIDIA T4

Environment:

  • OpenShift 4.21.5 on GCP (us-central1)
  • GPU Operator v26.3.1 baseline, patched with this PR
  • NFD 4.21.0-202604200440 (redhat-operators)
  • Node: n1-standard-4 with 1x NVIDIA Tesla T4

How I tested:

  1. Built the PR branch (6b55de9) on-cluster using an OpenShift BuildConfig — multi-stage build replacing gpu-operator and nvidia-validator binaries in the v26.3.1 base image.

  2. Reproduced the bug first with the unpatched operator:

    • Set devicePlugin.enabled: false in ClusterPolicy
    • Metric dropped to 0, alert entered pending state:
      gpu_operator_node_device_plugin_devices_total{node="...gpu-worker-a-zdd2q"} 0
      GPUOperatorNodeDeploymentFailed: state=pending
      
  3. Deployed the patched image to the nvidia-node-status-exporter daemonset with DEVICE_PLUGIN_ENABLED=false env var injected:

    gpu_operator_node_device_plugin_devices_total{node="...gpu-worker-a-zdd2q"} -1
    GPUOperatorNodeDeploymentFailed: state=inactive, activeAlerts=0
    

Results:

|                                               | Before fix | After fix     |
|-----------------------------------------------|------------|---------------|
| DEVICE_PLUGIN_ENABLED env var                 | absent     | false         |
| gpu_operator_node_device_plugin_devices_total | 0          | -1 (sentinel) |
| Alert expr == 0 matches                       | yes        | no            |
| GPUOperatorNodeDeploymentFailed               | pending    | inactive      |

Fix works as expected. No operator errors observed during testing.

@harche harche marked this pull request as ready for review May 1, 2026 15:44
@harche harche force-pushed the fix/device-plugin-disabled-alert-false-positive branch from 6b55de9 to 71be3d1 on May 1, 2026 15:56
# There is no GPU exposed on the node.
# When the device plugin is intentionally disabled in the ClusterPolicy
# (devicePlugin.enabled: false), the metric is set to -1, so this
# alert will not fire in that case.
Contributor

@shivamerla shivamerla May 1, 2026

Alternatively, we could check for both gpu_operator_node_plugin_ready == 1 and gpu_operator_node_device_plugin_devices_total == 0 before firing an alert, but the caveat is that if the plugin pod itself is crashing, we would not raise an alert, which is not desired. So the current approach looks good.

@shivamerla
Contributor

shivamerla commented May 1, 2026

/ok-to-test 71be3d1

@copy-pr-bot

copy-pr-bot Bot commented May 1, 2026

/ok-to-test

@shivamerla, there was an error processing your request: E1

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/1/

@shivamerla
Contributor

/ok-to-test 71be3d1

When devicePlugin.enabled is set to false in the ClusterPolicy, the
nvidia-node-status-exporter still monitors the device_plugin_devices_total
metric which reports 0 (since no device plugin pods are running). This
triggers a false positive GPUOperatorNodeDeploymentFailed alert.

Fix: The operator now injects a DEVICE_PLUGIN_ENABLED env var into the
node-status-exporter daemonset based on the ClusterPolicy. When set to
"false", the exporter skips device plugin validation entirely, so the
metric is never emitted and the alert does not fire.

Fixes: NVIDIA#2237

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Harshal Patil <12152047+harche@users.noreply.github.com>
@harche harche force-pushed the fix/device-plugin-disabled-alert-false-positive branch from 71be3d1 to eb2d90f on May 4, 2026 16:52

Development

Successfully merging this pull request may close these issues.

GPUOperatorNodeDeploymentFailed alert false positive when devicePlugin is disabled in ClusterPolicy
