In my k8s cluster, we're finding that Jobs create Pods, the Pods run successfully, and yet the Job reports that the backoff limit was reached. Why? I can't tell: Kubernetes by default cleans up the Pod objects, so I can't see any metadata about how the Job finished.
I want my cluster to keep Pods around (until the TTL expires). This is a field in the JobSpec called `podRetentionPolicy`; valid values are `Delete` and `Retain`. We need to be able to specify this parameter to properly support our teams.
Missing from here:
https://github.com/kcl-lang/modules/blob/main/k8s/1.33/api/batch/v1/job_spec.k#L44
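For illustration, a minimal sketch of what the addition might look like in `job_spec.k`, following the style of the existing schema fields. This assumes the field name `podRetentionPolicy` and the values `Delete`/`Retain` as described above; the exact name and doc comment should match the upstream Kubernetes API:

```kcl
schema JobSpec:
    r"""
    ...existing JobSpec fields...

    Attributes
    ----------
    podRetentionPolicy : str, optional
        Hypothetical sketch: whether finished Pods are kept or deleted.
        "Retain" keeps Pods (until ttlSecondsAfterFinished expires) so
        their status and metadata remain inspectable; "Delete" removes
        them once the Job completes.
    """

    podRetentionPolicy?: "Delete" | "Retain"
```

Declaring the field as the literal union `"Delete" | "Retain"` lets KCL reject any other value at compile time, rather than deferring the error to the API server.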