Skip device plugin alert when devicePlugin is disabled in ClusterPolicy #2238

harche wants to merge 1 commit into NVIDIA:main
Conversation
Initial code review looks good and unit tests pass. Keeping this as a draft; hardware testing with MIG-capable GPUs and …
Verified the fix on OpenShift 4.21.5 / GCP with NVIDIA T4.

Environment:
How I tested:
Results:

Fix works as expected. No operator errors observed during testing.
Force-pushed from 6b55de9 to 71be3d1
```yaml
# There is no GPU exposed on the node.
# When the device plugin is intentionally disabled in the ClusterPolicy
# (devicePlugin.enabled: false), the metric is set to -1, so this
# alert will not fire in that case.
```
Alternatively, we could check for both `gpu_operator_node_plugin_ready == 1` and `gpu_operator_node_device_plugin_devices_total == 0` before firing the alert, but the caveat is that if the plugin pod itself is crashing, we would not raise an alert, which is not desired. So the current approach looks good.
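For reference, a minimal sketch of the two alert shapes discussed here. The alert name, metric names, `-1` sentinel, and 30-minute window come from this PR; the exact labels and layout of the real rule in `0800_prometheus_rule_openshift.yaml` may differ.

```yaml
# Current approach: the exporter publishes -1 when the device plugin is
# disabled, so the `== 0` comparison can never match.
- alert: GPUOperatorNodeDeploymentFailed
  expr: gpu_operator_node_device_plugin_devices_total == 0
  for: 30m

# Rejected alternative: alert only when the plugin reports ready but exposes
# zero devices. A crash-looping plugin pod would then suppress the alert.
# expr: >-
#   gpu_operator_node_plugin_ready == 1
#   and gpu_operator_node_device_plugin_devices_total == 0
```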
/ok-to-test 71be3d1

@shivamerla, there was an error processing your request. See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/1/

/ok-to-test 71be3d1
When devicePlugin.enabled is set to false in the ClusterPolicy, the nvidia-node-status-exporter still monitors the device_plugin_devices_total metric, which reports 0 (since no device plugin pods are running). This triggers a false positive GPUOperatorNodeDeploymentFailed alert.

Fix: The operator now injects a DEVICE_PLUGIN_ENABLED env var into the node-status-exporter daemonset based on the ClusterPolicy. When set to "false", the exporter skips device plugin validation entirely, so the metric is never emitted and the alert does not fire.

Fixes: NVIDIA#2237

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Harshal Patil <12152047+harche@users.noreply.github.com>
Force-pushed from 71be3d1 to eb2d90f
Summary
When `devicePlugin.enabled` is set to `false` in the ClusterPolicy, the `nvidia-node-status-exporter` still monitors `gpu_operator_node_device_plugin_devices_total`, which reports `0` (since no device plugin pods are running). This triggers a false positive `GPUOperatorNodeDeploymentFailed` alert after 30 minutes.

This is a valid configuration, for example when GPU allocation is managed externally via MIG partitioning through a third-party operator.
Changes
- `controllers/object_controls.go`: The operator now injects a `DEVICE_PLUGIN_ENABLED` env var into the node-status-exporter daemonset based on `config.DevicePlugin.IsEnabled()`.
- `cmd/nvidia-validator/metrics.go`: When `DEVICE_PLUGIN_ENABLED=false`, the exporter skips device plugin validation and sets the gauge to `-1` (documented sentinel value), preventing the `== 0` alert from firing.
- `assets/state-node-status-exporter/0800_prometheus_rule_openshift.yaml`: Updated the comment to document the behavior.
- `controllers/transforms_test.go`: Unit tests for env var injection (enabled + disabled).
- `tests/e2e/helpers/clusterpolicy.go`: `EnableDevicePlugin`/`DisableDevicePlugin` helpers.
- `tests/e2e/suites/clusterpolicy_test.go`: E2E tests verifying env var propagation to the node-status-exporter daemonset.

A sketch of both sides of the change follows this list.
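A minimal Go sketch of both sides of the change, under assumptions: the function names, the `devicePluginEnabled` parameter, and the gauge wiring are illustrative stand-ins for the actual code in `object_controls.go` and `metrics.go`; only the `DEVICE_PLUGIN_ENABLED` env var and the `-1` sentinel come from this PR.

```go
package sketch

import (
	"os"
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// Operator side (sketch): inject DEVICE_PLUGIN_ENABLED into every container
// of the node-status-exporter daemonset, mirroring devicePlugin.enabled
// from the ClusterPolicy.
func transformNodeStatusExporter(ds *appsv1.DaemonSet, devicePluginEnabled bool) {
	for i := range ds.Spec.Template.Spec.Containers {
		c := &ds.Spec.Template.Spec.Containers[i]
		c.Env = append(c.Env, corev1.EnvVar{
			Name:  "DEVICE_PLUGIN_ENABLED",
			Value: strconv.FormatBool(devicePluginEnabled),
		})
	}
}

// Exporter side (sketch): skip device plugin validation when the env var is
// "false" and publish the -1 sentinel so the `== 0` alert never matches.
func observeDevicePluginDevices(devicesTotal prometheus.Gauge, countDevices func() (int, error)) {
	if os.Getenv("DEVICE_PLUGIN_ENABLED") == "false" {
		devicesTotal.Set(-1) // documented sentinel: validation intentionally skipped
		return
	}
	n, err := countDevices()
	if err != nil {
		devicesTotal.Set(0) // zero devices observed; alert may fire after 30m
		return
	}
	devicesTotal.Set(float64(n))
}
```

The sentinel keeps the time series present while guaranteeing the `== 0` alert expression never matches, which is what the updated rule comment documents.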
Test plan

- Unit tests pass (`TestTransformNodeStatusExporter`)
- Builds pass (`go build ./cmd/nvidia-validator/` and `go build ./controllers/`)
- E2E suite builds (`go build ./tests/e2e/...`)
- Verified manually with `devicePlugin.enabled: false`

Fixes: #2237