diff --git a/.vale/styles/config/vocabularies/OpenShiftDocs/accept.txt b/.vale/styles/config/vocabularies/OpenShiftDocs/accept.txt index 7faa7bca3121..4c4682012587 100644 --- a/.vale/styles/config/vocabularies/OpenShiftDocs/accept.txt +++ b/.vale/styles/config/vocabularies/OpenShiftDocs/accept.txt @@ -12,7 +12,9 @@ [Tt]elco [Uu]npause Assisted Installer +ClusterObjectSets? Control Plane Machine Set Operator +[Dd]eployment [Cc]onfigurations? GHz gpsd gpspipe diff --git a/.vale/styles/config/vocabularies/OpenShiftDocs/reject.txt b/.vale/styles/config/vocabularies/OpenShiftDocs/reject.txt index fcb24dfb1e44..9021905b2f8e 100644 --- a/.vale/styles/config/vocabularies/OpenShiftDocs/reject.txt +++ b/.vale/styles/config/vocabularies/OpenShiftDocs/reject.txt @@ -2,7 +2,6 @@ # Add terms that have a corresponding correctly capitalized form to accept.txt. [Dd]eployment [Cc]onfigs? -[Dd]eployment [Cc]onfigurations? [Oo]peratorize [Ss]ingle [Nn]ode OpenShift [Tt]hree [Nn]ode OpenShift diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 71de1f00da27..e8abf766f270 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2485,6 +2485,8 @@ Topics: File: mco-update-boot-skew-mgmt - Name: Manually updating the boot image File: mco-update-boot-images-manual +- Name: Creating custom machine config pools + File: machine-config-custom-mcp - Name: Managing unused rendered machine configs File: machine-configs-garbage-collection - Name: Image mode for OpenShift @@ -2955,6 +2957,8 @@ Topics: File: nodes-nodes-resources-configuring - Name: Allocating specific CPUs for nodes in a cluster File: nodes-nodes-resources-cpus + - Name: Additional CRI-O storage locations for faster container startup + File: nodes-nodes-additional-crio-storage - Name: Enabling TLS security profiles for the kubelet File: nodes-nodes-tls Distros: openshift-enterprise,openshift-origin diff --git a/extensions/ce/olmv1-configuring-extensions.adoc 
b/extensions/ce/olmv1-configuring-extensions.adoc index 746dec4812b0..e4ec8199101d 100644 --- a/extensions/ce/olmv1-configuring-extensions.adoc +++ b/extensions/ce/olmv1-configuring-extensions.adoc @@ -7,9 +7,9 @@ include::_attributes/common-attributes.adoc[] toc::[] [role="_abstract"] -In {olmv1-first}, extensions watch all namespaces by default. Some Operators support only namespace-scoped watching based on {olmv0} install modes. To install these Operators, configure the watch namespace for the extension. For more information, see "Discovering bundle install modes". +In {olmv1-first}, you can configure cluster extensions to customize installation behavior. This includes configuring watch namespaces for Operators that support only namespace-scoped watching based on {olmv0} install modes, and customizing operator pod deployments with resource limits, node placement, and other settings. -:FeatureName: Configuring a watch namespace for a cluster extension +:FeatureName: Configuring cluster extensions include::snippets/technology-preview.adoc[] include::modules/olmv1-config-api.adoc[leveloffset=+1] @@ -29,8 +29,22 @@ include::modules/olmv1-config-api-watch-namespace-examples.adoc[leveloffset=+2] include::modules/olmv1-clusterextension-watchnamespace-validation-errors.adoc[leveloffset=+2] +include::modules/olmv1-deployment-config-api.adoc[leveloffset=+1] + +include::modules/olmv1-clusterobjectsets-deployment-mechanism.adoc[leveloffset=+1] + +include::modules/olmv1-customizing-operator-deployments.adoc[leveloffset=+1] + +include::modules/olmv1-deployment-config-examples.adoc[leveloffset=+1] + +include::modules/olmv1-deployment-config-reference.adoc[leveloffset=+1] + +include::modules/olmv1-deployment-config-troubleshooting.adoc[leveloffset=+1] + [id="olmv1-configuring-extensions_additional-resources"] [role="_additional-resources"] == Additional resources * xref:../../extensions/ce/managing-ce.adoc#olmv1-installing-an-operator_managing-ce[Installing a cluster extension 
in all namespaces] +* link:https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[Kubernetes: Assigning Pods to Nodes] +* link:https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[Kubernetes: Taints and Tolerations] diff --git a/machine_configuration/machine-config-custom-mcp.adoc b/machine_configuration/machine-config-custom-mcp.adoc new file mode 100644 index 000000000000..297706cea19e --- /dev/null +++ b/machine_configuration/machine-config-custom-mcp.adoc @@ -0,0 +1,34 @@ +:_mod-docs-content-type: ASSEMBLY +[id="machine-config-creating-custom-mcp"] += Creating custom machine config pools +include::_attributes/common-attributes.adoc[] +:context: machine-config-creating-custom-mcp + +toc::[] + +[role="_abstract"] +You can create custom machine config pools (MCPs) to manage compute nodes for custom use cases that extend beyond the default node types. By using a custom machine config pool, you can deploy changes targeted only at nodes in the custom pool. + +Custom machine config pools inherit their configurations from the `worker` machine config pool. Changes made to the `worker` machine config pool apply to nodes in the custom pool. However, changes made to the custom machine config pool apply only to the nodes in the custom pool. For more information about custom machine config pools, see "Node configuration management with machine config pools". + +[NOTE] +==== +Custom machine config pools for control plane nodes are not supported. +==== + +For example, you could use a custom machine config pool to create an _infrastructure_ node. Components that you move to an infrastructure node do not need to be accounted for during sizing. For more information about infrastructure nodes, see "Creating infrastructure machine sets". + +After you create the custom machine config pool, you can boot new nodes directly into the pool by creating a new machine set. Alternatively, you can add existing nodes to the custom pool by using labels.
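For reference, a custom pool definition pairs a `machineConfigSelector` with a `nodeSelector`. The following is a minimal sketch only; the `infra` pool name and label are example values, not part of this change:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra # example custom pool name
spec:
  machineConfigSelector:
    matchExpressions:
      # Select both worker and infra machine configs so the pool inherits from worker
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker, infra]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: "" # nodes carrying this label join the pool
----

With a pool like this in place, an existing node joins the custom pool when you apply the matching label, for example `$ oc label node <node_name> node-role.kubernetes.io/infra=`.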
+ +include::modules/machine-config-custom-mcp-automatic.adoc[leveloffset=+1] +include::modules/machine-config-custom-mcp-existing.adoc[leveloffset=+1] + +[role="_additional-resources"] +[id="additional-resources_{context}"] +== Additional resources + +* xref:../machine_configuration/index.adoc#architecture-machine-config-pools_machine-config-overview[Node configuration management with machine config pools] +* xref:../machine_configuration/mco-update-boot-images-manual.adoc#mco-update-boot-images-manual[Manually updating the boot image] +* xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets[Creating infrastructure machine sets] + + diff --git a/modules/ai-adding-worker-nodes-to-cluster.adoc b/modules/ai-adding-worker-nodes-to-cluster.adoc index fadcfdffcc92..9d0b1f9023f8 100644 --- a/modules/ai-adding-worker-nodes-to-cluster.adoc +++ b/modules/ai-adding-worker-nodes-to-cluster.adoc @@ -323,6 +323,6 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane-1.example.com Ready master,worker 56m v1.34.2 -compute-1.example.com Ready worker 11m v1.34.2 +control-plane-1.example.com Ready master,worker 56m v1.35.4 +compute-1.example.com Ready worker 11m v1.35.4 ---- diff --git a/modules/aws-outposts-load-balancer-clb.adoc b/modules/aws-outposts-load-balancer-clb.adoc index 6ebfbf2430f7..51974baae4ff 100644 --- a/modules/aws-outposts-load-balancer-clb.adoc +++ b/modules/aws-outposts-load-balancer-clb.adoc @@ -64,9 +64,9 @@ $ oc get nodes -l = [source,terminal] ---- NAME STATUS ROLES AGE VERSION -node1.example.com Ready worker 7h v1.34.2 -node2.example.com Ready worker 7h v1.34.2 -node3.example.com Ready worker 7h v1.34.2 +node1.example.com Ready worker 7h v1.35.4 +node2.example.com Ready worker 7h v1.35.4 +node3.example.com Ready worker 7h v1.35.4 ---- . 
Configure the Classic Load Balancer service by adding the cloud-based subnet information to the `annotations` field of the `Service` manifest: diff --git a/modules/cleaning-crio-storage.adoc b/modules/cleaning-crio-storage.adoc index 65b8bc4a24aa..da7fc31f13d6 100644 --- a/modules/cleaning-crio-storage.adoc +++ b/modules/cleaning-crio-storage.adoc @@ -123,7 +123,7 @@ $ oc get nodes + ---- NAME STATUS ROLES AGE VERSION -ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.34.2 +ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.35.4 ---- + . Mark the node schedulable. You will know that the scheduling is enabled when `SchedulingDisabled` is no longer in status: @@ -138,5 +138,5 @@ $ oc adm uncordon + ---- NAME STATUS ROLES AGE VERSION -ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.34.2 +ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.35.4 ---- diff --git a/modules/cnf-configuring-nrop-on-schedlable-control-planes.adoc b/modules/cnf-configuring-nrop-on-schedlable-control-planes.adoc index e4a5e609652c..97c544b25209 100644 --- a/modules/cnf-configuring-nrop-on-schedlable-control-planes.adoc +++ b/modules/cnf-configuring-nrop-on-schedlable-control-planes.adoc @@ -122,12 +122,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -worker-0 Ready worker,worker-cnf 100m v1.34.2 -worker-1 Ready worker 93m v1.34.2 -master-0 Ready control-plane,master,worker 108m v1.34.2 -master-1 Ready control-plane,master,worker 107m v1.34.2 -master-2 Ready control-plane,master,worker 107m v1.34.2 -worker-2 Ready worker 100m v1.34.2 +worker-0 Ready worker,worker-cnf 100m v1.35.4 +worker-1 Ready worker 93m v1.35.4 +master-0 Ready control-plane,master,worker 108m v1.35.4 +master-1 Ready control-plane,master,worker 107m v1.35.4 +master-2 Ready control-plane,master,worker 107m v1.35.4 +worker-2 Ready worker 100m v1.35.4 ---- . 
Verify that the NUMA Resources Operator’s pods are running on the intended nodes by running the following command. You should see a numaresourcesoperator pod for each node group you specified in the CR: diff --git a/modules/compliance-apply-remediation-for-customized-mcp.adoc b/modules/compliance-apply-remediation-for-customized-mcp.adoc index 2d0dde8968bc..307740caafa1 100644 --- a/modules/compliance-apply-remediation-for-customized-mcp.adoc +++ b/modules/compliance-apply-remediation-for-customized-mcp.adoc @@ -23,11 +23,11 @@ $ oc get nodes -n openshift-compliance [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.34.2 -ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.34.2 -ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.34.2 -ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.34.2 -ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.34.2 +ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.35.4 +ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.35.4 +ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.35.4 +ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.35.4 +ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.35.4 ---- . Add a label to nodes. 
diff --git a/modules/connected-to-disconnected-verify.adoc b/modules/connected-to-disconnected-verify.adoc index 7799d1252364..844f5af0a10f 100644 --- a/modules/connected-to-disconnected-verify.adoc +++ b/modules/connected-to-disconnected-verify.adoc @@ -44,10 +44,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.34.2 +ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.35.4 ---- diff --git a/modules/coreos-layering-configuring-on-revert.adoc b/modules/coreos-layering-configuring-on-revert.adoc index 1cc67fc38353..b0a51624c73a 100644 --- a/modules/coreos-layering-configuring-on-revert.adoc +++ b/modules/coreos-layering-configuring-on-revert.adoc @@ -64,12 +64,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.34.2 -ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.34.2 -ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.34.2 +ip-10-0-148-79.us-west-1.compute.internal Ready worker 
32m v1.35.4 +ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.35.4 +ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.35.4 ---- ** When the node is back in the `Ready` state, check that the node is using the base image: diff --git a/modules/coreos-layering-configuring.adoc b/modules/coreos-layering-configuring.adoc index 0287bcead5fa..95d4dcfdcb34 100644 --- a/modules/coreos-layering-configuring.adoc +++ b/modules/coreos-layering-configuring.adoc @@ -165,12 +165,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.34.2 -ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.34.2 -ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.34.2 +ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.35.4 +ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.35.4 +ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.35.4 ---- . 
When the node is back in the `Ready` state, check that the node is using the custom layered image: diff --git a/modules/coreos-layering-removing.adoc b/modules/coreos-layering-removing.adoc index c8af6ead46b4..7452b8bf5dbd 100644 --- a/modules/coreos-layering-removing.adoc +++ b/modules/coreos-layering-removing.adoc @@ -52,12 +52,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.34.2 -ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.34.2 -ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.34.2 +ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.35.4 +ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.35.4 +ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.35.4 ---- . 
When the node is back in the `Ready` state, check that the node is using the base image: diff --git a/modules/graceful-restart.adoc b/modules/graceful-restart.adoc index 68d33121c862..d4c0b192dd41 100644 --- a/modules/graceful-restart.adoc +++ b/modules/graceful-restart.adoc @@ -58,9 +58,9 @@ The control plane nodes are ready if the status is `Ready`, as shown in the foll [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-168-251.ec2.internal Ready control-plane,master 75m v1.34.2 -ip-10-0-170-223.ec2.internal Ready control-plane,master 75m v1.34.2 -ip-10-0-211-16.ec2.internal Ready control-plane,master 75m v1.34.2 +ip-10-0-168-251.ec2.internal Ready control-plane,master 75m v1.35.4 +ip-10-0-170-223.ec2.internal Ready control-plane,master 75m v1.35.4 +ip-10-0-211-16.ec2.internal Ready control-plane,master 75m v1.35.4 ---- . If the control plane nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. @@ -99,9 +99,9 @@ The worker nodes are ready if the status is `Ready`, as shown in the following o [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-179-95.ec2.internal Ready worker 64m v1.34.2 -ip-10-0-182-134.ec2.internal Ready worker 64m v1.34.2 -ip-10-0-250-100.ec2.internal Ready worker 64m v1.34.2 +ip-10-0-179-95.ec2.internal Ready worker 64m v1.35.4 +ip-10-0-182-134.ec2.internal Ready worker 64m v1.35.4 +ip-10-0-250-100.ec2.internal Ready worker 64m v1.35.4 ---- . If the worker nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. @@ -172,12 +172,12 @@ Check that the status for all nodes is `Ready`. 
[source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-168-251.ec2.internal Ready control-plane,master 82m v1.34.2 -ip-10-0-170-223.ec2.internal Ready control-plane,master 82m v1.34.2 -ip-10-0-179-95.ec2.internal Ready worker 70m v1.34.2 -ip-10-0-182-134.ec2.internal Ready worker 70m v1.34.2 -ip-10-0-211-16.ec2.internal Ready control-plane,master 82m v1.34.2 -ip-10-0-250-100.ec2.internal Ready worker 69m v1.34.2 +ip-10-0-168-251.ec2.internal Ready control-plane,master 82m v1.35.4 +ip-10-0-170-223.ec2.internal Ready control-plane,master 82m v1.35.4 +ip-10-0-179-95.ec2.internal Ready worker 70m v1.35.4 +ip-10-0-182-134.ec2.internal Ready worker 70m v1.35.4 +ip-10-0-211-16.ec2.internal Ready control-plane,master 82m v1.35.4 +ip-10-0-250-100.ec2.internal Ready worker 69m v1.35.4 ---- + If the cluster did not start properly, you might need to restore your cluster using an etcd backup. For more information, see "Restoring to a previous cluster state". diff --git a/modules/hcp-np-capacity-blocks.adoc b/modules/hcp-np-capacity-blocks.adoc index d55bb27af383..45810d5e01c8 100644 --- a/modules/hcp-np-capacity-blocks.adoc +++ b/modules/hcp-np-capacity-blocks.adoc @@ -137,6 +137,6 @@ $ oc get nodes [source, terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-132-74.ec2.internal Ready worker 17m v1.34.2 -ip-10-0-134-183.ec2.internal Ready worker 4h5m v1.34.2 +ip-10-0-132-74.ec2.internal Ready worker 17m v1.35.4 +ip-10-0-134-183.ec2.internal Ready worker 4h5m v1.35.4 ---- diff --git a/modules/hibernating-cluster-hibernate.adoc b/modules/hibernating-cluster-hibernate.adoc index 4faebbd44e4f..8afb08e4ec01 100644 --- a/modules/hibernating-cluster-hibernate.adoc +++ b/modules/hibernating-cluster-hibernate.adoc @@ -36,12 +36,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-812tb4k-72292-8bcj7-master-0 Ready control-plane,master 32m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-master-1 Ready control-plane,master 32m v1.34.2 
-ci-ln-812tb4k-72292-8bcj7-master-2 Ready control-plane,master 32m v1.34.2 -Ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk Ready worker 19m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv Ready worker 19m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2 Ready worker 19m v1.34.2 +ci-ln-812tb4k-72292-8bcj7-master-0 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-master-1 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-master-2 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk Ready worker 19m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv Ready worker 19m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2 Ready worker 19m v1.35.4 ---- + All nodes should show `Ready` in the `STATUS` column. diff --git a/modules/hibernating-cluster-resume.adoc b/modules/hibernating-cluster-resume.adoc index 10e94b43a170..fc607f2dd0bb 100644 --- a/modules/hibernating-cluster-resume.adoc +++ b/modules/hibernating-cluster-resume.adoc @@ -79,12 +79,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-812tb4k-72292-8bcj7-master-0 Ready control-plane,master 32m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-master-1 Ready control-plane,master 32m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-master-2 Ready control-plane,master 32m v1.34.2 -Ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk Ready worker 19m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv Ready worker 19m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2 Ready worker 19m v1.34.2 +ci-ln-812tb4k-72292-8bcj7-master-0 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-master-1 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-master-2 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk Ready worker 19m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv Ready worker 19m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2 Ready worker 19m v1.35.4 ---- + All nodes should show `Ready` in the `STATUS` column.
It might take a few minutes for all nodes to become ready after approving the CSRs. diff --git a/modules/ibi-create-standalone-config-iso.adoc b/modules/ibi-create-standalone-config-iso.adoc index cd8f8e29f901..2dc3fc53825e 100644 --- a/modules/ibi-create-standalone-config-iso.adoc +++ b/modules/ibi-create-standalone-config-iso.adoc @@ -226,5 +226,5 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -node/sno-cluster-name.host.example.com Ready control-plane,master 5h15m v1.34.2 +node/sno-cluster-name.host.example.com Ready control-plane,master 5h15m v1.35.4 ---- \ No newline at end of file diff --git a/modules/images-configuration-file.adoc b/modules/images-configuration-file.adoc index 890489fcc97e..3728d6061f16 100644 --- a/modules/images-configuration-file.adoc +++ b/modules/images-configuration-file.adoc @@ -73,10 +73,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.34.2 -ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.34.2 -ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.34.2 -ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.34.2 -ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.34.2 -ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.34.2 +ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.35.4 +ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.35.4 +ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.35.4 +ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.35.4 +ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.35.4 +ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.35.4 ---- diff --git a/modules/images-configuration-image-registry-settings-hcp.adoc 
b/modules/images-configuration-image-registry-settings-hcp.adoc index 45344bdadeed..9d8b4bfedebe 100644 --- a/modules/images-configuration-image-registry-settings-hcp.adoc +++ b/modules/images-configuration-image-registry-settings-hcp.adoc @@ -130,7 +130,7 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.34.2 -ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.34.2 -ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.34.2 +ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.35.4 +ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.35.4 +ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.35.4 ---- \ No newline at end of file diff --git a/modules/images-configuration-registry-mirror-configuring.adoc b/modules/images-configuration-registry-mirror-configuring.adoc index e95453542d41..8b18854fa4a8 100644 --- a/modules/images-configuration-registry-mirror-configuring.adoc +++ b/modules/images-configuration-registry-mirror-configuring.adoc @@ -184,12 +184,12 @@ $ oc get node [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-137-44.ec2.internal Ready worker 7m v1.34.2 -ip-10-0-138-148.ec2.internal Ready master 11m v1.34.2 -ip-10-0-139-122.ec2.internal Ready master 11m v1.34.2 -ip-10-0-147-35.ec2.internal Ready worker 7m v1.34.2 -ip-10-0-153-12.ec2.internal Ready worker 7m v1.34.2 -ip-10-0-154-10.ec2.internal Ready master 11m v1.34.2 +ip-10-0-137-44.ec2.internal Ready worker 7m v1.35.4 +ip-10-0-138-148.ec2.internal Ready master 11m v1.35.4 +ip-10-0-139-122.ec2.internal Ready master 11m v1.35.4 +ip-10-0-147-35.ec2.internal Ready worker 7m v1.35.4 +ip-10-0-153-12.ec2.internal Ready worker 7m v1.35.4 +ip-10-0-154-10.ec2.internal Ready master 11m v1.35.4 ---- .. 
Start the debugging process to access the node: diff --git a/modules/infrastructure-moving-router.adoc b/modules/infrastructure-moving-router.adoc index 7f15b3f37daa..8bac5a407cc2 100644 --- a/modules/infrastructure-moving-router.adoc +++ b/modules/infrastructure-moving-router.adoc @@ -111,7 +111,7 @@ $ oc get node <1> [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.34.2 +ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.35.4 ---- + Because the role list includes `infra`, the pod is running on the correct node. diff --git a/modules/install-sno-monitoring-the-installation-manually.adoc b/modules/install-sno-monitoring-the-installation-manually.adoc index 14c7eee0f85b..bfcf0dc27d7a 100644 --- a/modules/install-sno-monitoring-the-installation-manually.adoc +++ b/modules/install-sno-monitoring-the-installation-manually.adoc @@ -66,7 +66,7 @@ ifndef::openshift-origin[] [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane.example.com Ready master,worker 10m v1.34.2 +control-plane.example.com Ready master,worker 10m v1.35.4 ---- endif::openshift-origin[] ifdef::openshift-origin[] diff --git a/modules/installation-approve-csrs.adoc b/modules/installation-approve-csrs.adoc index a80f2a2b571b..f6478b9b6c5d 100644 --- a/modules/installation-approve-csrs.adoc +++ b/modules/installation-approve-csrs.adoc @@ -66,9 +66,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master-0 Ready master 63m v1.34.2 -master-1 Ready master 63m v1.34.2 -master-2 Ready master 64m v1.34.2 +master-0 Ready master 63m v1.35.4 +master-1 Ready master 63m v1.35.4 +master-2 Ready master 64m v1.35.4 ---- + The output lists all of the machines that you created. 
@@ -203,11 +203,11 @@ ifndef::ibm-power[] [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master-0 Ready master 73m v1.34.2 -master-1 Ready master 73m v1.34.2 -master-2 Ready master 74m v1.34.2 -worker-0 Ready worker 11m v1.34.2 -worker-1 Ready worker 11m v1.34.2 +master-0 Ready master 73m v1.35.4 +master-1 Ready master 73m v1.35.4 +master-2 Ready master 74m v1.35.4 +worker-0 Ready worker 11m v1.35.4 +worker-1 Ready worker 11m v1.35.4 ---- endif::ibm-power[] ifdef::ibm-power[] @@ -215,13 +215,13 @@ ifdef::ibm-power[] [source,terminal] ---- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME -worker-0-ppc64le Ready worker 42d v1.34.2 192.168.200.21 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -worker-1-ppc64le Ready worker 42d v1.34.2 192.168.200.20 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -master-0-x86 Ready control-plane,master 75d v1.34.2 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -master-1-x86 Ready control-plane,master 75d v1.34.2 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -master-2-x86 Ready control-plane,master 75d v1.34.2 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -worker-0-x86 Ready worker 75d v1.34.2 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -worker-1-x86 Ready worker 75d v1.34.2 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 
5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 +worker-0-ppc64le Ready worker 42d v1.35.4 192.168.200.21 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +worker-1-ppc64le Ready worker 42d v1.35.4 192.168.200.20 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +master-0-x86 Ready control-plane,master 75d v1.35.4 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +master-1-x86 Ready control-plane,master 75d v1.35.4 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +master-2-x86 Ready control-plane,master 75d v1.35.4 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +worker-0-x86 Ready worker 75d v1.35.4 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +worker-1-x86 Ready worker 75d v1.35.4 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 ---- endif::ibm-power[] + diff --git a/modules/installation-aws-user-infra-bootstrap.adoc b/modules/installation-aws-user-infra-bootstrap.adoc index 53a4c4674962..a01ef397fd2a 100644 --- a/modules/installation-aws-user-infra-bootstrap.adoc +++ b/modules/installation-aws-user-infra-bootstrap.adoc @@ -32,7 +32,7 @@ stored the installation files in. [source,terminal] ---- INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... 
-INFO API v1.34.2 up +INFO API v1.35.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s diff --git a/modules/installation-installing-bare-metal.adoc b/modules/installation-installing-bare-metal.adoc index d952d4b43c5a..7790aa758230 100644 --- a/modules/installation-installing-bare-metal.adoc +++ b/modules/installation-installing-bare-metal.adoc @@ -66,7 +66,7 @@ where: [source,terminal] ---- INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... -INFO API v1.34.2 up +INFO API v1.35.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources ---- diff --git a/modules/installation-osp-creating-control-plane.adoc b/modules/installation-osp-creating-control-plane.adoc index 6ae916791f27..f9badebec2ff 100644 --- a/modules/installation-osp-creating-control-plane.adoc +++ b/modules/installation-osp-creating-control-plane.adoc @@ -40,7 +40,7 @@ You will see messages that confirm that the control plane machines are running a + [source,terminal] ---- -INFO API v1.34.2 up +INFO API v1.35.4 up INFO Waiting up to 30m0s for bootstrapping to complete... ... 
INFO It is now safe to remove the bootstrap resources diff --git a/modules/installation-special-config-rtkernel.adoc b/modules/installation-special-config-rtkernel.adoc index 9bbaa286b574..30aa34253c3f 100644 --- a/modules/installation-special-config-rtkernel.adoc +++ b/modules/installation-special-config-rtkernel.adoc @@ -92,12 +92,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-139-200.us-east-2.compute.internal Ready master 111m v1.34.2 -ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.34.2 -ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.34.2 -ip-10-0-156-255.us-east-2.compute.internal Ready master 111m v1.34.2 -ip-10-0-164-74.us-east-2.compute.internal Ready master 111m v1.34.2 -ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.34.2 +ip-10-0-139-200.us-east-2.compute.internal Ready master 111m v1.35.4 +ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.35.4 +ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.35.4 +ip-10-0-156-255.us-east-2.compute.internal Ready master 111m v1.35.4 +ip-10-0-164-74.us-east-2.compute.internal Ready master 111m v1.35.4 +ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.35.4 ---- + [source,terminal] diff --git a/modules/ipi-install-provisioning-the-bare-metal-node.adoc b/modules/ipi-install-provisioning-the-bare-metal-node.adoc index ac5b18f9fe6b..1e242b689024 100644 --- a/modules/ipi-install-provisioning-the-bare-metal-node.adoc +++ b/modules/ipi-install-provisioning-the-bare-metal-node.adoc @@ -35,11 +35,11 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -openshift-master-1.openshift.example.com Ready master 30h v1.34.2 -openshift-master-2.openshift.example.com Ready master 30h v1.34.2 -openshift-master-3.openshift.example.com Ready master 30h v1.34.2 -openshift-worker-0.openshift.example.com Ready worker 30h v1.34.2 -openshift-worker-1.openshift.example.com Ready worker 30h v1.34.2 
+openshift-master-1.openshift.example.com Ready master 30h v1.35.4 +openshift-master-2.openshift.example.com Ready master 30h v1.35.4 +openshift-master-3.openshift.example.com Ready master 30h v1.35.4 +openshift-worker-0.openshift.example.com Ready worker 30h v1.35.4 +openshift-worker-1.openshift.example.com Ready worker 30h v1.35.4 ---- . Get the compute machine set. @@ -99,12 +99,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -openshift-master-1.openshift.example.com Ready master 30h v1.34.2 -openshift-master-2.openshift.example.com Ready master 30h v1.34.2 -openshift-master-3.openshift.example.com Ready master 30h v1.34.2 -openshift-worker-0.openshift.example.com Ready worker 30h v1.34.2 -openshift-worker-1.openshift.example.com Ready worker 30h v1.34.2 -openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.34.2 +openshift-master-1.openshift.example.com Ready master 30h v1.35.4 +openshift-master-2.openshift.example.com Ready master 30h v1.35.4 +openshift-master-3.openshift.example.com Ready master 30h v1.35.4 +openshift-worker-0.openshift.example.com Ready worker 30h v1.35.4 +openshift-worker-1.openshift.example.com Ready worker 30h v1.35.4 +openshift-worker-<num>.openshift.example.com Ready worker 3m27s v1.35.4 ---- + You can also check the kubelet. 
diff --git a/modules/ipi-install-replacing-a-bare-metal-control-plane-node.adoc b/modules/ipi-install-replacing-a-bare-metal-control-plane-node.adoc index 92a3364ae377..2e2dd8f69d0e 100644 --- a/modules/ipi-install-replacing-a-bare-metal-control-plane-node.adoc +++ b/modules/ipi-install-replacing-a-bare-metal-control-plane-node.adoc @@ -183,11 +183,11 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane-1.example.com available master 4m2s v1.34.2 -control-plane-2.example.com available master 141m v1.34.2 -control-plane-3.example.com available master 141m v1.34.2 -compute-1.example.com available worker 87m v1.34.2 -compute-2.example.com available worker 87m v1.34.2 +control-plane-1.example.com available master 4m2s v1.35.4 +control-plane-2.example.com available master 141m v1.35.4 +control-plane-3.example.com available master 141m v1.35.4 +compute-1.example.com available worker 87m v1.35.4 +compute-2.example.com available worker 87m v1.35.4 ---- + [NOTE] diff --git a/modules/ipi-install-troubleshooting-ntp-out-of-sync.adoc b/modules/ipi-install-troubleshooting-ntp-out-of-sync.adoc index f91f170637ab..9175ee78763c 100644 --- a/modules/ipi-install-troubleshooting-ntp-out-of-sync.adoc +++ b/modules/ipi-install-troubleshooting-ntp-out-of-sync.adoc @@ -21,10 +21,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master-0.cloud.example.com Ready master 145m v1.34.2 -master-1.cloud.example.com Ready master 135m v1.34.2 -master-2.cloud.example.com Ready master 145m v1.34.2 -worker-2.cloud.example.com Ready worker 100m v1.34.2 +master-0.cloud.example.com Ready master 145m v1.35.4 +master-1.cloud.example.com Ready master 135m v1.35.4 +master-2.cloud.example.com Ready master 145m v1.35.4 +worker-2.cloud.example.com Ready worker 100m v1.35.4 ---- . Check for inconsistent timing delays due to clock drift. 
For example: diff --git a/modules/ipi-install-troubleshooting-reviewing-the-installation.adoc b/modules/ipi-install-troubleshooting-reviewing-the-installation.adoc index c0ca53496e99..74728b874f7a 100644 --- a/modules/ipi-install-troubleshooting-reviewing-the-installation.adoc +++ b/modules/ipi-install-troubleshooting-reviewing-the-installation.adoc @@ -21,9 +21,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master-0.example.com Ready master,worker 4h v1.34.2 -master-1.example.com Ready master,worker 4h v1.34.2 -master-2.example.com Ready master,worker 4h v1.34.2 +master-0.example.com Ready master,worker 4h v1.35.4 +master-1.example.com Ready master,worker 4h v1.35.4 +master-2.example.com Ready master,worker 4h v1.35.4 ---- . Confirm the installation program deployed all pods successfully. The following command diff --git a/modules/learning-deploying-application-storage-end-session.adoc b/modules/learning-deploying-application-storage-end-session.adoc index fbc6acf0687d..2edda6414666 100644 --- a/modules/learning-deploying-application-storage-end-session.adoc +++ b/modules/learning-deploying-application-storage-end-session.adoc @@ -9,4 +9,4 @@ To securely close your workspace and free up system resources, end your session from the terminal. .Procedure -* Type `exit` in your terminal to quit the session and return to the CLI. \ No newline at end of file +* Type `exit` in your terminal to quit the session and return to the command line interface (CLI). \ No newline at end of file diff --git a/modules/learning-getting-started-accessing-cli.adoc b/modules/learning-getting-started-accessing-cli.adoc index bf3ef456121a..d87441640c0c 100644 --- a/modules/learning-getting-started-accessing-cli.adoc +++ b/modules/learning-getting-started-accessing-cli.adoc @@ -6,7 +6,7 @@ = Accessing your cluster using the CLI [role="_abstract"] -To access the cluster using the CLI, you must have the `oc` CLI installed. 
With the `oc` CLI, you can work directly with project source code, and manage projects in bandwidth-restricted environments where the web console might be unavailable. If you are following the tutorials, you already installed the `oc` CLI. +To access the cluster using the command line interface (CLI), you must have the `oc` CLI installed. With the `oc` CLI, you can work directly with project source code, and manage projects in bandwidth-restricted environments where the web console might be unavailable. If you are following the tutorials, you already installed the `oc` CLI. .Procedure . Log in to the {cluster-manager-url}. diff --git a/modules/learning-getting-started-admin-cli.adoc b/modules/learning-getting-started-admin-cli.adoc index 53e8b2ab9428..d9c58149af7d 100644 --- a/modules/learning-getting-started-admin-cli.adoc +++ b/modules/learning-getting-started-admin-cli.adoc @@ -6,7 +6,7 @@ = Creating an admin user using the CLI [role="_abstract"] -You can use the {rosa-cli-first} to create an admin user for your clusters. Admin users can create new clusters, schedule cluster upgrades, monitor health, manage cluster resources, and so on. +You can use the {rosa-cli-first} to create an admin user for your clusters. Admin users perform tasks such as creating new clusters, scheduling cluster upgrades, monitoring health, and managing cluster resources. [NOTE] ==== diff --git a/modules/learning-getting-started-create-vpc.adoc b/modules/learning-getting-started-create-vpc.adoc index 0041d00edd71..a4126a157bf1 100644 --- a/modules/learning-getting-started-create-vpc.adoc +++ b/modules/learning-getting-started-create-vpc.adoc @@ -6,7 +6,7 @@ = Creating a VPC [role="_abstract"] -Before deploying a {product-title} cluster, you must have both a VPC and OIDC resources. We will create these resources first. {product-title} uses the bring your own VPC (BYO-VPC) model. 
+Before deploying a {product-title} cluster, you must have both a Virtual Private Cloud (VPC) and OpenID Connect (OIDC) resources. We will create these resources first. {product-title} uses the bring your own VPC (BYO-VPC) model. .Procedure . Make sure your AWS CLI (`aws`) is configured to use a region where {product-title} is available. See the regions supported by the AWS CLI by running the following command: diff --git a/modules/learning-getting-started-oidc-config.adoc b/modules/learning-getting-started-oidc-config.adoc index c5d3f0734be0..41a40355f039 100644 --- a/modules/learning-getting-started-oidc-config.adoc +++ b/modules/learning-getting-started-oidc-config.adoc @@ -6,7 +6,7 @@ = Creating your OIDC configuration [role="_abstract"] -In this workshop, we will use the automatic mode when creating the OIDC configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the {rosa-cli} to create your cluster's unique OIDC configuration. +In this workshop, we will use the automatic mode when creating the OpenID Connect (OIDC) configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the {rosa-cli} to create your cluster's unique OIDC configuration. .Procedure * Create the OIDC configuration by running the following command: diff --git a/modules/learning-getting-started-support-ui.adoc b/modules/learning-getting-started-support-ui.adoc index a80ed6fe33cf..d0a717416e06 100644 --- a/modules/learning-getting-started-support-ui.adoc +++ b/modules/learning-getting-started-support-ui.adoc @@ -6,9 +6,9 @@ = Contacting Red{nbsp}Hat for support using the UI [role="_abstract"] -You can request support within {cluster-manager-url}. +You can request support within {cluster-manager-first}. .Procedure -. On the {cluster-manager} UI, click the *Support* tab. +. On the {cluster-manager-url} UI, click the *Support* tab. . Click *Open support case*. 
\ No newline at end of file diff --git a/modules/learning-getting-started-upgrading-recurring-updates.adoc b/modules/learning-getting-started-upgrading-recurring-updates.adoc index 7b0034b02a9e..1f2c39fe6680 100644 --- a/modules/learning-getting-started-upgrading-recurring-updates.adoc +++ b/modules/learning-getting-started-upgrading-recurring-updates.adoc @@ -6,7 +6,7 @@ = Setting up automatic recurring upgrades [role="_abstract"] -To schedule your cluster to automatically receive new patch (z-stream) updates, you can set your cluster to upgrade on a recurring basis within {cluster-manager}. +To schedule your cluster to automatically receive new patch (z-stream) updates, you can set your cluster to upgrade on a recurring basis within {cluster-manager-url}. .Procedure . Log in to the {cluster-manager}, and select the cluster you want to upgrade. diff --git a/modules/learning-lab-overview-about-ostoy.adoc b/modules/learning-lab-overview-about-ostoy.adoc index 5dd651abfeec..f27c8f4ec5a6 100644 --- a/modules/learning-lab-overview-about-ostoy.adoc +++ b/modules/learning-lab-overview-about-ostoy.adoc @@ -15,8 +15,8 @@ This application has a user interface where you can: * Toggle a liveness probe and monitor OpenShift behavior * Read ConfigMaps, secrets, and environment variables * Read and write files when connected to shared storage -* Check network connectivity, intra-cluster DNS, and intra-communication with the included microservice -* Increase the load to view automatic scaling of the pods by using the HPA +* Check network connectivity, intra-cluster Domain Name System (DNS), and intra-communication with the included microservice +* Increase the load to view automatic scaling of the pods by using the Horizontal Pod Autoscaler (HPA) //* Connect to an AWS S3 bucket to read and write objects image::ostoy-arch.png[OSToy architecture diagram] \ No newline at end of file diff --git a/modules/machine-config-custom-mcp-automatic.adoc 
b/modules/machine-config-custom-mcp-automatic.adoc new file mode 100644 index 000000000000..29977cbf8bb5 --- /dev/null +++ b/modules/machine-config-custom-mcp-automatic.adoc @@ -0,0 +1,177 @@ +// Module included in the following assemblies: +// +// * machine_configuration/machine-config-creating-custom-mcp.adoc + +:_mod-docs-content-type: PROCEDURE +[id="machine-config-custom-mcp-automatic_{context}"] += Creating a custom machine config pool with a new node + +[role="_abstract"] +You can create a custom machine config pool (MCP) and launch a new node directly into that pool. By launching the node directly into the new pool, you save a node reboot cycle that would be required when moving the nodes from the worker machine config pool to the custom pool. + +Use the `userDataSecret` parameter in the machine set to instruct the Machine Config Operator (MCO) to add the node to a specific machine config pool. The secret contains the endpoint of the custom machine config pool. You must prefix the name of this new secret with the name of the custom machine config pool. + +The following procedure shows you how to create a new custom machine config pool and launch a new node into that pool. + +.Procedure + +. Create a custom machine config pool: + +.. Create a YAML file similar to the following: ++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: MachineConfigPool +metadata: + name: custom +spec: + machineConfigSelector: + matchExpressions: + - {key: machineconfiguration.openshift.io/role, operator: In, values: [custom,worker]} + nodeSelector: + matchLabels: + node-role.kubernetes.io/custom: "" +---- +where: ++ +-- +`metadata.name`:: Specifies a name for the custom machine config pool. +`spec.machineConfigSelector.matchExpressions`:: Specifies the node roles for the new node. This must include the `worker` role and the custom role. +`spec.nodeSelector.matchLabels`:: Specifies a node selector to use when adding nodes to this pool. +-- + +.. 
Create the machine config pool by running the following command: ++ +[source,terminal] +---- +$ oc create -f <file_name>.yaml +---- + +. Create a new machine set that creates a new node in the new custom machine config pool: + +.. Create a YAML file, for example by making a copy of an existing compute machine set YAML and making the following changes: ++ +[source,yaml] +---- +apiVersion: machine.openshift.io/v1beta1 +kind: MachineSet +metadata: +# ... + name: <machine_set_name> + namespace: openshift-machine-api +# ... +spec: +# ... + template: +# ... + spec: +# ... + providerSpec: +# ... + value: +# ... + userDataSecret: + name: <custom_pool_name>-user-data-managed +# ... +---- ++ +where: ++ +-- +`metadata.name`:: Specifies a name for the machine set. +`metadata.namespace`:: Specifies a namespace for the machine set. This must be `openshift-machine-api`. +`spec.template.spec.providerSpec.value.userDataSecret`:: Specifies a name for the user data secret that is created. The name of the secret must start with the name of the custom machine config pool and end with `-user-data-managed`. For example, `custom-user-data-managed`. +-- ++ +These are the minimum changes required to create the new machine set. For more configuration options or to configure an all-new machine set, see "Creating infrastructure machine sets" for your platform. ++ +[NOTE] +==== +When creating a new machine set, you should specify the latest image to use for the boot image. For more information about configuring the boot image on your cluster, see "Manually updating the boot image" for your platform. The method to specify the image varies by provider. +==== + +.. Create the machine set by running the following command: ++ +[source,terminal] +---- +$ oc create -f <file_name>.yaml +---- ++ +The MCO creates a new node in the new custom machine config pool. + +.Verification + +. 
Check to see that the MCO created the new machine config pool by running the following command: ++ +[source,terminal] +---- +$ oc get mcp +---- ++ +.Example output +[source,terminal] +---- +NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE +custom rendered-custom-72be15c95699b6d39f70fce525f51bb2 True False False 1 1 1 0 12s +master rendered-master-9e25b616b551d6c77f490191f45161d7 True False False 3 3 3 0 32m +worker rendered-worker-72be15c95699b6d39f70fce525f51bb2 True False False 2 2 2 0 32m +---- ++ +In this example, `custom` is the new machine config pool. + +. Check to see that the MCO created the new machine set by running the following command: ++ +[source,terminal] +---- +$ oc get machineset -n openshift-machine-api +---- ++ +.Example output +[source,terminal] +---- +NAME DESIRED CURRENT READY AVAILABLE AGE +ci-ln-7x179fk-72292-tc5qz-custom-a 1 1 1 1 62s +ci-ln-7x179fk-72292-tc5qz-worker-a 1 1 1 1 91m +ci-ln-7x179fk-72292-tc5qz-worker-b 1 1 1 1 91m +ci-ln-7x179fk-72292-tc5qz-worker-f 1 1 1 1 91m +---- ++ +In this example, `ci-ln-7x179fk-72292-tc5qz-custom-a` is the new machine set. + +. Check that the MCO created the required secret by running the following command: ++ +[source,terminal] +---- +$ oc get secrets -n openshift-machine-api +---- ++ +.Example output +[source,terminal] +---- +NAME TYPE DATA AGE +# ... +custom-user-data-managed Opaque 2 9m +# ... +---- + +. 
Check to see that the node is in the new custom machine config pool by running the following command: ++ +[source,terminal] +---- +$ oc get nodes +---- ++ +.Example output +[source,terminal] +---- +NAME STATUS ROLES AGE VERSION +ci-ln-i61xqwb-72292-hz2mw-custom-9r496 Ready custom,worker 9m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-master-0 Ready control-plane,master 42m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-master-1 Ready control-plane,master 44m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-master-2 Ready control-plane,master 43m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-worker-c-2lhcl Ready worker 36m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-worker-f-qgdb7 Ready worker 36m v1.35.3 +---- ++ +In this example, the `ci-ln-i61xqwb-72292-hz2mw-custom-9r496` node is a new node that was added to the `custom` machine config pool. diff --git a/modules/machine-config-custom-mcp-existing.adoc b/modules/machine-config-custom-mcp-existing.adoc new file mode 100644 index 000000000000..64214c89c662 --- /dev/null +++ b/modules/machine-config-custom-mcp-existing.adoc @@ -0,0 +1,103 @@ +// Module included in the following assemblies: +// +// * machine_configuration/machine-config-creating-custom-mcp.adoc + +:_mod-docs-content-type: PROCEDURE +[id="machine-config-custom-mcp-existing_{context}"] += Creating a custom machine config pool for an existing node + +[role="_abstract"] +You can create custom machine config pools (MCP) and manually add an existing node into that pool. With custom machine config pools, you can deploy changes targeted at the nodes in the custom pool. + +The following procedure shows you how to create a new custom machine config pool and add an existing node into that pool. + +.Procedure + +. Create a custom machine config pool: + +.. 
Create a YAML file similar to the following: ++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: MachineConfigPool +metadata: + name: custom +spec: + machineConfigSelector: + matchExpressions: + - {key: machineconfiguration.openshift.io/role, operator: In, values: [custom,worker]} + nodeSelector: + matchLabels: + node-role.kubernetes.io/custom: "" +---- ++ +where: + +`metadata.name`:: Specifies a name for the machine config pool. +`spec.machineConfigSelector.matchExpressions`:: Specifies the node roles for the node. This must include the `worker` role and the custom role. +`spec.nodeSelector.matchLabels`:: Specifies a node selector to use when adding nodes to this pool. + +.. Create the machine config pool by running the following command: ++ +[source,terminal] +---- +$ oc create -f <file_name>.yaml +---- + +. Add a label to the worker nodes that you want to move to the new custom pool by running the following command: ++ +[source,terminal] +---- +$ oc label node <node_name> <node_selector> +---- ++ +Replace `<node_name>` with the name of the node that you want to move and replace `<node_selector>` with the node selector you added to the machine config pool. ++ +.Example command +[source,terminal] +---- +$ oc label node ci-ln-g5tpp5k-72292-hz2mw-worker-b-ps8xh node-role.kubernetes.io/custom="" +---- + +.Verification + +. Check to see that the MCO created the new machine config pool by running the following command: ++ +[source,terminal] +---- +$ oc get mcp +---- ++ +.Example output +[source,terminal] +---- +NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE +custom rendered-custom-72be15c95699b6d39f70fce525f51bb2 True False False 1 1 1 0 12s +master rendered-master-9e25b616b551d6c77f490191f45161d7 True False False 3 3 3 0 32m +worker rendered-worker-72be15c95699b6d39f70fce525f51bb2 True False False 2 2 2 0 32m +---- ++ +In this example, `custom` is the new machine config pool. + +. 
Check to see if the node is in that pool by running the following command: ++ +[source,terminal] +---- +$ oc get nodes +---- ++ +.Example output +[source,terminal] +---- +NAME STATUS ROLES AGE VERSION +ci-ln-i61xqwb-72292-ftjn8-master-0 Ready control-plane,master 42m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-master-1 Ready control-plane,master 44m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-master-2 Ready control-plane,master 43m v1.35.3 +ci-ln-g5tpp5k-72292-hz2mw-worker-b-ps8xh Ready custom,worker 36m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-worker-c-2lhcl Ready worker 36m v1.35.3 +ci-ln-i61xqwb-72292-ftjn8-worker-f-qgdb7 Ready worker 36m v1.35.3 +---- ++ +In this example, the `ci-ln-g5tpp5k-72292-hz2mw-worker-b-ps8xh` node is an existing node that was moved to the `custom` machine config pool. + diff --git a/modules/machine-node-custom-partition.adoc b/modules/machine-node-custom-partition.adoc index 81289d0886e9..a70241ac46ad 100644 --- a/modules/machine-node-custom-partition.adoc +++ b/modules/machine-node-custom-partition.adoc @@ -236,13 +236,13 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-128-78.ec2.internal Ready worker 117m v1.34.2 -ip-10-0-146-113.ec2.internal Ready master 127m v1.34.2 -ip-10-0-153-35.ec2.internal Ready worker 118m v1.34.2 -ip-10-0-176-58.ec2.internal Ready master 126m v1.34.2 -ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.34.2 <1> -ip-10-0-225-248.ec2.internal Ready master 127m v1.34.2 -ip-10-0-245-59.ec2.internal Ready worker 116m v1.34.2 +ip-10-0-128-78.ec2.internal Ready worker 117m v1.35.4 +ip-10-0-146-113.ec2.internal Ready master 127m v1.35.4 +ip-10-0-153-35.ec2.internal Ready worker 118m v1.35.4 +ip-10-0-176-58.ec2.internal Ready master 126m v1.35.4 +ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.35.4 <1> +ip-10-0-225-248.ec2.internal Ready master 127m v1.35.4 +ip-10-0-245-59.ec2.internal Ready worker 116m v1.35.4 ---- <1> This is the new node. 
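The custom machine config pool modules above say that a custom pool lets you "deploy changes targeted at the nodes in the custom pool", but neither module shows what such a change looks like. As a minimal sketch (the `99-custom-kargs` name and the kernel argument are illustrative, not from the source), a `MachineConfig` is routed to the pool through its `machineconfiguration.openshift.io/role` label, which the pool's `machineConfigSelector` matches:

```yaml
# Hypothetical MachineConfig that only the "custom" pool picks up.
# The role label below is what the pool's machineConfigSelector
# (values: [custom,worker]) matches against.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-custom-kargs             # illustrative name
  labels:
    machineconfiguration.openshift.io/role: custom
spec:
  kernelArguments:
    - quiet                         # illustrative change; applied only to nodes in the custom pool
```

After you create a file like this, the MCO renders a new `rendered-custom-*` machine config and rolls it out to only the nodes in the `custom` pool, which you can watch with `oc get mcp custom`.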
diff --git a/modules/machineset-azure-confidential-vms.adoc b/modules/machineset-azure-confidential-vms.adoc index 13f5226a9a23..2bd275cf49d9 100644 --- a/modules/machineset-azure-confidential-vms.adoc +++ b/modules/machineset-azure-confidential-vms.adoc @@ -9,9 +9,10 @@ endif::[] :_mod-docs-content-type: PROCEDURE [id="machineset-azure-confidential-vms_{context}"] -= Configuring Azure confidential virtual machines by using machine sets += Configuring {azure-short} confidential virtual machines by using machine sets -{product-title} {product-version} supports Azure confidential virtual machines (VMs). +[role="_abstract"] +{product-title} {product-version} supports {azure-full} confidential virtual machines (VMs). By enabling {azure-short} confidential VMs, you can use memory encryption to improve data confidentiality. [NOTE] ==== @@ -27,7 +28,7 @@ Not all instance types support confidential VMs. Do not change the instance type ==== endif::cpmso[] -For more information about related features and functionality, see the Microsoft Azure documentation about link:https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-vm-overview[Confidential virtual machines]. +For more information about related features and functionality, see the {azure-full} documentation about link:https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-vm-overview[Confidential virtual machines]. .Procedure @@ -56,32 +57,35 @@ spec: osDisk: # ... managedDisk: - securityProfile: # <1> - securityEncryptionType: VMGuestStateOnly # <2> + securityProfile: + securityEncryptionType: VMGuestStateOnly # ... 
- securityProfile: # <3> + securityProfile: settings: - securityType: ConfidentialVM # <4> + securityType: ConfidentialVM confidentialVM: - uefiSettings: # <5> - secureBoot: Disabled # <6> - virtualizedTrustedPlatformModule: Enabled # <7> - vmSize: Standard_DC16ads_v5 # <8> + uefiSettings: + secureBoot: Disabled + virtualizedTrustedPlatformModule: Enabled + vmSize: Standard_DC16ads_v5 # ... ---- -<1> Specifies security profile settings for the managed disk when using a confidential VM. -<2> Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM. -<3> Specifies security profile settings for the confidential VM. -<4> Enables the use of confidential VMs. This value is required for all valid configurations. -<5> Specifies which UEFI security features to use. This section is required for all valid configurations. -<6> Disables UEFI Secure Boot. -<7> Enables the use of a vTPM. -<8> Specifies an instance type that supports confidential VMs. + +where: + +`spec.template.spec.providerSpec.value.osDisk.managedDisk.securityProfile`:: Specifies security profile settings for the managed disk when using a confidential VM. +`spec.template.spec.providerSpec.value.osDisk.managedDisk.securityProfile.securityEncryptionType`:: Enables encryption of the {azure-full} VM Guest State (VMGS) blob. This setting requires the use of vTPM. +`spec.template.spec.providerSpec.value.securityProfile`:: Specifies security profile settings for the confidential VM. +`spec.template.spec.providerSpec.value.securityProfile.settings.securityType`:: Enables the use of confidential VMs. This value is required for all valid configurations. +`spec.template.spec.providerSpec.value.securityProfile.settings.confidentialVM.uefiSettings`:: Specifies which UEFI security features to use. This section is required for all valid configurations. +`spec.template.spec.providerSpec.value.securityProfile.settings.confidentialVM.uefiSettings.secureBoot`:: Disables UEFI Secure Boot. 
+`spec.template.spec.providerSpec.value.securityProfile.settings.confidentialVM.uefiSettings.virtualizedTrustedPlatformModule`:: Enables the use of a vTPM. +`spec.template.spec.providerSpec.value.vmSize`:: Specifies an instance type that supports confidential VMs. -- .Verification -* On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. +* On the {azure-full} portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. ifeval::["{context}" == "cpmso-supported-features-azure"] :!cpmso: diff --git a/modules/machineset-azure-enabling-accelerated-networking-existing.adoc b/modules/machineset-azure-enabling-accelerated-networking-existing.adoc index 159c2ce0a875..37e4b4764d37 100644 --- a/modules/machineset-azure-enabling-accelerated-networking-existing.adoc +++ b/modules/machineset-azure-enabling-accelerated-networking-existing.adoc @@ -12,13 +12,14 @@ endif::[] :_mod-docs-content-type: PROCEDURE [id="machineset-azure-enabling-accelerated-networking-existing_{context}"] -= Enabling Accelerated Networking on an existing Microsoft Azure cluster += Enabling Accelerated Networking on an existing {azure-full} cluster -You can enable Accelerated Networking on Azure by adding `acceleratedNetworking` to your machine set YAML file. +[role="_abstract"] +You can enable Accelerated Networking on {azure-full} by adding `acceleratedNetworking` to your machine set YAML file. This setting uses single root I/O virtualization (SR-IOV) to help improve network performance for new nodes. .Prerequisites -* Have an existing Microsoft Azure cluster where the Machine API is operational. +* Have an existing {azure-short} cluster where the Machine API is operational. 
.Procedure //// @@ -58,12 +59,13 @@ $ oc edit machineset ---- providerSpec: value: - acceleratedNetworking: true <1> - vmSize: <2> + acceleratedNetworking: true + vmSize: <vm_size> ---- -+ -<1> This line enables Accelerated Networking. -<2> Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see link:https://docs.microsoft.com/en-us/azure/virtual-machines/sizes[Microsoft Azure documentation]. +where: + +`providerSpec.value.acceleratedNetworking`:: Enables Accelerated Networking. +`providerSpec.value.vmSize`:: Specifies an {azure-short} VM size that includes at least four vCPUs. For information about VM sizes, see the {azure-full} documentation link:https://docs.microsoft.com/en-us/azure/virtual-machines/sizes[Sizes for virtual machines in {azure-short}]. ifdef::compute[] .Next steps @@ -73,7 +75,7 @@ endif::compute[] .Verification -* On the Microsoft Azure portal, review the *Networking* settings page for a machine provisioned by the machine set, and verify that the `Accelerated networking` field is set to `Enabled`. +* On the {azure-full} portal, review the *Networking* settings page for a machine provisioned by the machine set, and verify that the `Accelerated networking` field is set to `Enabled`. ifeval::["{context}" == "creating-machineset-azure"] :!compute: diff --git a/modules/machineset-azure-ephemeral-os.adoc b/modules/machineset-azure-ephemeral-os.adoc index e2e79293c704..c901f32fcace 100644 --- a/modules/machineset-azure-ephemeral-os.adoc +++ b/modules/machineset-azure-ephemeral-os.adoc @@ -6,9 +6,10 @@ [id="machineset-azure-ephemeral-os_{context}"] = Machine sets that deploy machines on Ephemeral OS disks -You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging. 
+[role="_abstract"] +You can create a compute machine set running on {azure-first} that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote {azure-short} Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging. [role="_additional-resources"] .Additional resources -* For more information, see the Microsoft Azure documentation about link:https://docs.microsoft.com/en-us/azure/virtual-machines/ephemeral-os-disks[Ephemeral OS disks for Azure VMs]. +* link:https://docs.microsoft.com/en-us/azure/virtual-machines/ephemeral-os-disks[Ephemeral OS disks for {azure-short} VMs ({azure-full} documentation)] diff --git a/modules/machineset-azure-trusted-launch.adoc b/modules/machineset-azure-trusted-launch.adoc index 0e8d07672778..07ee5f87a3cd 100644 --- a/modules/machineset-azure-trusted-launch.adoc +++ b/modules/machineset-azure-trusted-launch.adoc @@ -9,9 +9,10 @@ endif::[] :_mod-docs-content-type: PROCEDURE [id="machineset-azure-trusted-launch_{context}"] -= Configuring trusted launch for Azure virtual machines by using machine sets += Configuring trusted launch for {azure-short} virtual machines by using machine sets -{product-title} {product-version} supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. +[role="_abstract"] +{product-title} {product-version} supports trusted launch for {azure-full} virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. 
For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. [NOTE] ==== @@ -60,7 +61,7 @@ Some feature combinations result in an invalid configuration. 2. Using the `virtualizedTrustedPlatformModule` field. -- -For more information about related features and functionality, see the Microsoft Azure documentation about link:https://learn.microsoft.com/en-us/azure/virtual-machines/trusted-launch[Trusted launch for Azure virtual machines]. +For more information about related features and functionality, see the {azure-full} documentation about link:https://learn.microsoft.com/en-us/azure/virtual-machines/trusted-launch[Trusted launch for {azure-short} virtual machines]. .Procedure @@ -88,21 +89,24 @@ spec: value: securityProfile: settings: - securityType: TrustedLaunch # <1> + securityType: TrustedLaunch trustedLaunch: - uefiSettings: # <2> - secureBoot: Enabled # <3> - virtualizedTrustedPlatformModule: Enabled # <4> + uefiSettings: + secureBoot: Enabled + virtualizedTrustedPlatformModule: Enabled # ... ---- -<1> Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. -<2> Specifies which UEFI security features to use. This section is required for all valid configurations. -<3> Enables UEFI Secure Boot. -<4> Enables the use of a vTPM. ++ +where: ++ +`spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.securityProfile.settings.securityType`:: Enables the use of trusted launch for {azure-short} virtual machines. This value is required for all valid configurations. +`spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.securityProfile.settings.trustedLaunch.uefiSettings`:: Specifies which UEFI security features to use. This section is required for all valid configurations. 
+`spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.securityProfile.settings.trustedLaunch.uefiSettings.secureBoot`:: Enables UEFI Secure Boot. +`spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.securityProfile.settings.trustedLaunch.uefiSettings.virtualizedTrustedPlatformModule`:: Enables the use of a vTPM. .Verification -* On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. +* On the {azure-full} portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. ifeval::["{context}" == "cpmso-supported-features-azure"] :!cpmso: diff --git a/modules/microshift-custom-ca-proc.adoc b/modules/microshift-custom-ca-proc.adoc index 6d10373709ca..1d8318e0e244 100644 --- a/modules/microshift-custom-ca-proc.adoc +++ b/modules/microshift-custom-ca-proc.adoc @@ -81,7 +81,7 @@ $ oc --certificate-authority ~/certs/ca.ca get node ---- oc get node NAME STATUS ROLES AGE VERSION -dhcp-1-235-195.arm.example.com Ready control-plane,master,worker 76m v1.34.2 +dhcp-1-235-195.arm.example.com Ready control-plane,master,worker 76m v1.35.4 ---- .. 
Add the new CA file to the $KUBECONFIG environment variable by running the following command: diff --git a/modules/nodes-cluster-resource-override-move-infra.adoc b/modules/nodes-cluster-resource-override-move-infra.adoc index 7251175ea425..87a4613d10a8 100644 --- a/modules/nodes-cluster-resource-override-move-infra.adoc +++ b/modules/nodes-cluster-resource-override-move-infra.adoc @@ -33,15 +33,15 @@ clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.34.2 -ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.34.2 -ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.34.2 -ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.34.2 -ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.34.2 -ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.34.2 -ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.34.2 -ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.34.2 -ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.34.2 +ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.35.4 +ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.35.4 +ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.35.4 +ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.35.4 +ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.35.4 +ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.35.4 +ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.35.4 +ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.35.4 +ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.35.4 ---- .Procedure diff --git a/modules/nodes-descheduler-rn-5.3.0.adoc b/modules/nodes-descheduler-rn-5.3.0.adoc deleted file mode 100644 index b01f2812d5f7..000000000000 --- 
a/modules/nodes-descheduler-rn-5.3.0.adoc +++ /dev/null @@ -1,39 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. - -:_mod-docs-content-type: REFERENCE -[id="descheduler-operator-release-notes-5.3.0_{context}"] -= Release notes for {descheduler-operator} 5.3.0 - -[role="_abstract"] -Review the release notes for {descheduler-operator} 5.3.0 to learn what is new and updated with this release. - -Issued: 29 October 2025 - -The following advisory is available for the {descheduler-operator} 5.3.0: - -* link:https://access.redhat.com/errata/RHBA-2025:19249[RHBA-2025:19249] - -[id="descheduler-operator-5.3.0-new-features_{context}"] -== New features and enhancements - -* The descheduler profile `DevKubeVirtRelieveAndMigrate` has been renamed to `KubeVirtRelieveAndMigrate` and is now generally available. The updated profile improves VM eviction stability during live migrations by enabling background evictions and reducing oscillatory behavior. This profile is only available for use with {VirtProductName}. -+ -For more information, see xref:../../../virt/managing_vms/advanced_vm_management/virt-enabling-descheduler-evictions.adoc#virt-configuring-descheduler-evictions_virt-enabling-descheduler-evictions[Configuring descheduler evictions for virtual machines]. - -* This release of the {descheduler-operator} updates the Kubernetes version to 1.33. - -// No bug fixes or CVEs to list -// [id="descheduler-operator-5.3.0-bug-fixes_{context}"] -// === Bug fixes -// -// * This release of the {descheduler-operator} addresses several Common Vulnerabilities and Exposures (CVEs). 
- -// No known issues to list -// [id="descheduler-operator-5.3.0-known-issues_{context}"] -// === Known issues -// -// * TODO diff --git a/modules/nodes-descheduler-rn-5.3.1.adoc b/modules/nodes-descheduler-rn-5.3.1.adoc deleted file mode 100644 index 9a31e6f409a6..000000000000 --- a/modules/nodes-descheduler-rn-5.3.1.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. - -:_mod-docs-content-type: REFERENCE -[id="descheduler-operator-release-notes-5.3.1_{context}"] -= Release notes for {descheduler-operator} 5.3.1 - -[role="_abstract"] -Review the release notes for {descheduler-operator} 5.3.1 to learn what is new and updated with this release. - -Issued: 4 December 2025 - -The following advisory is available for the {descheduler-operator} 5.3.1: - -* link:https://access.redhat.com/errata/RHBA-2025:22737[RHBA-2025:22737] - -[id="descheduler-operator-5.3.1-new-features_{context}"] -== New features and enhancements - -* This release rebuilds the {descheduler-operator} to improve its image grade. diff --git a/modules/nodes-descheduler-rn-5.3.2.adoc b/modules/nodes-descheduler-rn-5.3.2.adoc deleted file mode 100644 index 884847e63d12..000000000000 --- a/modules/nodes-descheduler-rn-5.3.2.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. - -:_mod-docs-content-type: REFERENCE -[id="descheduler-operator-release-notes-5.3.2_{context}"] -= Release notes for {descheduler-operator} 5.3.2 - -[role="_abstract"] -Review the release notes for {descheduler-operator} 5.3.2 to learn what is new and updated with this release. 
- -Issued: 12 February 2026 - -The following advisory is available for the {descheduler-operator} 5.3.2: - -* link:https://access.redhat.com/errata/RHBA-2026:2641[RHBA-2026:2641] - -[id="descheduler-operator-5.3.2-new-features_{context}"] -== New features and enhancements - -* This release of the {descheduler-operator} updates the Kubernetes version to 1.34. diff --git a/modules/nodes-nodes-additional-crio-storage-about.adoc b/modules/nodes-nodes-additional-crio-storage-about.adoc new file mode 100644 index 000000000000..d2a5b6fbbbba --- /dev/null +++ b/modules/nodes-nodes-additional-crio-storage-about.adoc @@ -0,0 +1,98 @@ +// Module included in the following assemblies: +// +// * nodes/nodes/nodes-nodes-additional-crio-storage.adoc + +:_mod-docs-content-type: CONCEPT +[id="nodes-nodes-additional-crio-storage-about_{context}"] += About additional storage locations for CRI-O + +[role="_abstract"] +To reduce application startup time and make your applications run more efficiently, you can configure additional storage locations for the CRI-O container engine. + +Using storage locations other than the default for the CRI-O container engine gives you control over where CRI-O stores and retrieves OCI artifacts, complete container images, and container image layers. Using additional storage locations for these CRI-O objects can reduce application startup time and make your applications run more efficiently through dedicated solid-state drive (SSD) storage, shared image caches, or lazy pulling. + +By default, CRI-O stores all container data under a single root directory, `/var/lib/containers/storage`. This works well for typical workloads, but can create problems in clusters that use large images or artifacts, such as artificial intelligence and machine learning (AI/ML) workloads. + +For example, large OCI artifacts, such as machine learning models, are stored in the default location, consuming space and preventing the use of faster dedicated storage.
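The lookup behavior that this new module documents (CRI-O checks each additional read-only store in the configured order, then falls back to the default `/var/lib/containers/storage`) can be sketched as follows. This is an illustrative model of the documented behavior only, not CRI-O source code; the function name and directory layout are hypothetical.

```python
from pathlib import Path

DEFAULT_STORE = "/var/lib/containers/storage"

def resolve_artifact_store(name: str, additional_stores: list[str]) -> str:
    """Sketch of the documented lookup order: additional artifact stores are
    read-only and must hold prepopulated content in an 'artifacts/'
    subdirectory; CRI-O checks them in the configured order and falls back
    to the default store when the artifact is not found."""
    for store in additional_stores:
        # e.g. /mnt/ssd-artifacts/artifacts/<name>
        if (Path(store) / "artifacts" / name).is_file():
            return store
    return DEFAULT_STORE
```

The same first-match-wins ordering applies to additional image stores: a hit in a configured cache means no registry pull is needed.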
By configuring the `additionalArtifactStores` field, you can store large AI/ML models on high-performance solid-state drives (SSDs) separate from the root file system. As a result, your workloads can experience faster start times and your clusters can use storage more efficiently. + +Also, you can use the `additionalImageStores` field to mount an NFS share with prepopulated images across all worker nodes. Nodes read from the shared cache instead of pulling from an external registry. This is useful in disconnected environments or when many nodes run the same workloads. + +With the `additionalLayerStores` field, you can enable lazy pulling through a third-party storage plugin, such as stargz-store. With lazy pulling, containers start after downloading only the required file chunks. The remaining data is fetched during runtime. + +After you configure any of these new storage locations, the Machine Config Operator (MCO) reboots the affected nodes with the new configuration. After the reboot, CRI-O begins resolving storage from the additional locations. + +Additional storage for OCI artifacts:: +Use the `additionalArtifactStores` field in a container runtime config to specify read-only locations where CRI-O resolves OCI artifacts, such as machine learning models pulled as OCI volume images. CRI-O checks these locations in order before falling back to the default storage location. CRI-O requires an existing, prepopulated `artifacts/` subdirectory within each configured path. For example, if the path is `/mnt/ssd-artifacts`, place the artifacts in the `/mnt/ssd-artifacts/artifacts/` directory.
++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: ContainerRuntimeConfig +metadata: + name: ssd-artifact-stores +spec: + machineConfigPoolSelector: + matchLabels: + pools.operator.machineconfiguration.openshift.io/worker: "" + containerRuntimeConfig: + additionalArtifactStores: + - path: /mnt/ssd-artifacts + - path: /mnt/nfs-shared-artifacts +---- ++ +When you create the container runtime config, the Machine Config Operator (MCO) writes the configuration to the `/etc/crio/crio.conf.d/01-ctrcfg-additionalArtifactStores` file on the target nodes. + +Additional storage for container images:: +Use the `additionalImageStores` field to specify read-only container image caches on shared or high-performance storage. When CRI-O needs an image, it checks the additional image stores first. If the image exists there, no registry pull happens. ++ +The following example container runtime config configures storage for container images. ++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: ContainerRuntimeConfig +metadata: + name: shared-image-cache +spec: + machineConfigPoolSelector: + matchLabels: + pools.operator.machineconfiguration.openshift.io/worker: "" + containerRuntimeConfig: + additionalImageStores: + - path: /mnt/nfs-image-cache + - path: /mnt/ssd-images +---- ++ +When you create the container runtime config, the Machine Config Operator (MCO) writes the configuration to the `/etc/containers/storage.conf` file on the target nodes. + +Additional container image layers for lazy pulling:: +Use the `additionalLayerStores` field to enable lazy pulling through a third-party storage plugin. ++ +Note that CRI-O falls back to a standard image pull in the following cases: ++ +-- +* The registry does not support HTTP range requests. +* The image is in standard OCI format, not a lazy-pull-compatible format such as eStargz or Nydus. +* The storage plugin is not running. 
+-- ++ +The following example container runtime config configures container image layers for lazy pulling. ++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: ContainerRuntimeConfig +metadata: + name: lazy-pulling +spec: + machineConfigPoolSelector: + matchLabels: + pools.operator.machineconfiguration.openshift.io/worker: "" + containerRuntimeConfig: + additionalLayerStores: + - path: /var/lib/stargz-store +---- ++ +When you create the container runtime config, the Machine Config Operator (MCO) writes the configuration to the `/etc/containers/storage.conf` file on the target nodes. diff --git a/modules/nodes-nodes-additional-crio-storage-configuring.adoc b/modules/nodes-nodes-additional-crio-storage-configuring.adoc new file mode 100644 index 000000000000..89043a161f2a --- /dev/null +++ b/modules/nodes-nodes-additional-crio-storage-configuring.adoc @@ -0,0 +1,165 @@ +// Module included in the following assemblies: +// +// * nodes/nodes/nodes-nodes-additional-crio-storage.adoc + +:_mod-docs-content-type: PROCEDURE +[id="nodes-nodes-additional-crio-storage-configuring_{context}"] += Configuring additional storage locations for CRI-O + +[role="_abstract"] +To reduce application startup time and make your applications run more efficiently, you can configure additional storage locations for the CRI-O container engine to store OCI objects by using the `ContainerRuntimeConfig` custom resource (CR). + +Use the `additionalArtifactStores`, `additionalImageStores`, and `additionalLayerStores` fields in a `ContainerRuntimeConfig` to specify read-only locations where CRI-O stores and resolves OCI artifacts, container images, or container image layers. CRI-O checks these locations in order before falling back to the default storage location. + +[IMPORTANT] +==== +When using multiple `ContainerRuntimeConfig` resources, merge all additional storage configurations into a single `ContainerRuntimeConfig` for each machine config pool. 
Multiple `ContainerRuntimeConfig` resources affecting the same configuration file might result in only a subset of the changes taking effect. +==== + +.Prerequisites + +* You enabled the required Technology Preview features for your cluster by adding the `TechPreviewNoUpgrade` feature set to the `FeatureGate` CR named `cluster`. For information about enabling Feature Gates, see "Enabling features using feature gates". ++ +[WARNING] +==== +Enabling the `TechPreviewNoUpgrade` feature set on your cluster cannot be undone and prevents minor version updates. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them. Do not enable this feature set on production clusters. +==== + +* If you are configuring the `additionalImageStores` or `additionalLayerStores` field, the target storage paths must exist and be accessible on the nodes, and the container images or layers must be present in the directory. For network storage, ensure that the paths are mounted before applying the configuration. + +* If you are configuring the `additionalLayerStores` field, you must meet the following additional prerequisites: + +** A supported storage plugin binary must be installed on each node, such as Stargz Store or Nydus Storage Plugin. See "Stargz Store plugin" or "Nydus Storage Plugin" for more information. You must have installed the plugin by using one of the following methods: +*** Use a daemon set to run the plugin as a privileged container. +*** Use a machine config to install the binary and configure it as a systemd service. +*** Use Image mode for OpenShift to install the plugin in a custom {op-system} image. + +** You converted the container images to a lazy-pull-compatible format, such as eStargz or Nydus. See "eStargz format" or "Nydus format" for more information. +** Your container registry must support HTTP range requests. + +.Procedure + +. 
Create a YAML file for the `ContainerRuntimeConfig` CR similar to the following example: ++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: ContainerRuntimeConfig +metadata: + name: crio-additional-stores +spec: + machineConfigPoolSelector: + matchLabels: + pools.operator.machineconfiguration.openshift.io/worker: "" + containerRuntimeConfig: + additionalArtifactStores: + - path: /mnt/ssd-artifacts + - path: /mnt/nfs-shared-artifacts + additionalImageStores: + - path: /mnt/nfs-image-cache + - path: /mnt/ssd-images + additionalLayerStores: + - path: /var/lib/stargz-store +---- ++ +where: ++ +-- +`spec.machineConfigPoolSelector.matchLabels`:: Specifies a label associated with the nodes that you want to update. +`spec.containerRuntimeConfig.additionalArtifactStores.path`:: Optional: Specifies the path to the directory that contains OCI artifacts. CRI-O searches for content in an `artifacts/` subdirectory within this path. You can specify up to 10 directories. +`spec.containerRuntimeConfig.additionalImageStores.path`:: Optional: Specifies the path to an NFS share or other location that contains prepopulated container images. You can specify up to 10 directories. +`spec.containerRuntimeConfig.additionalLayerStores.path`:: Optional: Specifies the path to the directory that contains container image layers in a lazy-pull-compatible format. You can specify up to 5 directories. +-- ++ +The specified path must meet the following criteria: ++ +-- +* Contains between 1 and 256 characters +* Is an absolute path, starting with the `/` character +* Contains only alphanumeric characters and the `/`, `.`, `_`, and `-` characters +* Cannot contain consecutive forward slashes +-- ++ +You can configure any combination of these three additional CRI-O storage locations. ++ +For a layer store, the MCO automatically appends the `:ref` suffix to the path when writing to the `storage.conf` file.
This suffix switches the container storage library from storing actual image layers (blobs) to storing references (pointers) to where those layers can be found, which is required for the lazy-pulling plugins. You do not need to include the suffix in the `ContainerRuntimeConfig` path. ++ +[NOTE] +==== +If a path does not exist or is inaccessible at runtime, CRI-O generates a warning and continues with the remaining stores. The default storage location is always used as a fallback. +==== + +. Create the `ContainerRuntimeConfig` CR by running the following command: ++ +[source,terminal] +---- +$ oc create -f <file_name>.yaml +---- ++ +Replace `<file_name>` with the name of the YAML file. ++ +After you configure any of these new storage locations, the Machine Config Operator (MCO) reboots the affected nodes with the new configuration. + +.Verification + +. After the nodes have returned to the `Ready` status, check that the new stores have been added to the node configuration: + +.. Start a debug pod by running the following command: ++ +[source,terminal] +---- +$ oc debug node/<node_name> +---- ++ +where `<node_name>` specifies the name of one of the nodes in the affected machine config pool. + +..
Set `/host` as the root directory within the debug shell by running the following command: ++ +[source,terminal] +---- +sh-5.1# chroot /host +---- + +** For an artifact store, review the contents of the `/etc/crio/crio.conf.d/01-ctrcfg-additionalArtifactStores` file by running the following command: ++ +[source,terminal] +---- +sh-5.1# cat /etc/crio/crio.conf.d/01-ctrcfg-additionalArtifactStores +---- ++ +.Example output +[source,terminal] +---- +[crio] + [crio.runtime] + additional_artifact_stores = ["/mnt/ssd-artifacts", "/mnt/nfs-shared-artifacts"] +---- + +** For an image store, review the contents of the `/etc/containers/storage.conf` file by running the following command: ++ +[source,terminal] +---- +sh-5.1# cat /etc/containers/storage.conf +---- ++ +.Example output +[source,terminal] +---- +[storage] + [storage.options] + additionalimagestores = ["/mnt/nfs-image-cache", "/mnt/ssd-images"] +---- + +** For a layer store, review the contents of the `/etc/containers/storage.conf` file by running the following command: ++ +[source,terminal] +---- +sh-5.1# cat /etc/containers/storage.conf +---- ++ +.Example output +[source,terminal] +---- +[storage] + [storage.options] + additionallayerstores = ["/var/lib/stargz-store:ref"] +---- diff --git a/modules/nodes-nodes-kernel-arguments.adoc b/modules/nodes-nodes-kernel-arguments.adoc index 4632945d438d..12f6cd68f106 100644 --- a/modules/nodes-nodes-kernel-arguments.adoc +++ b/modules/nodes-nodes-kernel-arguments.adoc @@ -134,12 +134,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-136-161.ec2.internal Ready worker 28m v1.34.2 -ip-10-0-136-243.ec2.internal Ready master 34m v1.34.2 -ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.34.2 -ip-10-0-142-249.ec2.internal Ready master 34m v1.34.2 -ip-10-0-153-11.ec2.internal Ready worker 28m v1.34.2 -ip-10-0-153-150.ec2.internal Ready master 34m v1.34.2 +ip-10-0-136-161.ec2.internal Ready worker 28m v1.35.4 
+ip-10-0-136-243.ec2.internal Ready master 34m v1.35.4 +ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.35.4 +ip-10-0-142-249.ec2.internal Ready master 34m v1.35.4 +ip-10-0-153-11.ec2.internal Ready worker 28m v1.35.4 +ip-10-0-153-150.ec2.internal Ready master 34m v1.35.4 ---- + You can see that scheduling on each worker node is disabled as the change is being applied. diff --git a/modules/nodes-nodes-rtkernel-arguments.adoc b/modules/nodes-nodes-rtkernel-arguments.adoc index 9f853346645e..1c51b3d8439e 100644 --- a/modules/nodes-nodes-rtkernel-arguments.adoc +++ b/modules/nodes-nodes-rtkernel-arguments.adoc @@ -61,9 +61,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.34.2 -ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.34.2 -ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.34.2 +ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.35.4 +ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.35.4 +ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.35.4 ---- + [source,terminal] diff --git a/modules/nodes-nodes-viewing-listing.adoc b/modules/nodes-nodes-viewing-listing.adoc index e7a13772106f..757b560fe094 100644 --- a/modules/nodes-nodes-viewing-listing.adoc +++ b/modules/nodes-nodes-viewing-listing.adoc @@ -27,9 +27,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master.example.com Ready master 7h v1.34.2 -node1.example.com Ready worker 7h v1.34.2 -node2.example.com Ready worker 7h v1.34.2 +master.example.com Ready master 7h v1.35.4 +node1.example.com Ready worker 7h v1.35.4 +node2.example.com Ready worker 7h v1.35.4 ---- + The following example is a cluster with one unhealthy node: @@ -43,9 +43,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master.example.com Ready master 7h v1.34.2 -node1.example.com NotReady,SchedulingDisabled worker 7h v1.34.2 
-node2.example.com Ready worker 7h v1.34.2 +master.example.com Ready master 7h v1.35.4 +node1.example.com NotReady,SchedulingDisabled worker 7h v1.35.4 +node2.example.com Ready worker 7h v1.35.4 ---- + The conditions that trigger a `NotReady` status are shown later in this section. @@ -61,9 +61,9 @@ $ oc get nodes -o wide [source,terminal] ---- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME -master.example.com Ready master 171m v1.34.2 10.0.129.108 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev -node1.example.com Ready worker 72m v1.34.2 10.0.129.222 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev -node2.example.com Ready worker 164m v1.34.2 10.0.142.150 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev +master.example.com Ready master 171m v1.35.4 10.0.129.108 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.35.4-30.rhaos4.10.gitf2f339d.el8-dev +node1.example.com Ready worker 72m v1.35.4 10.0.129.222 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.35.4-30.rhaos4.10.gitf2f339d.el8-dev +node2.example.com Ready worker 164m v1.35.4 10.0.142.150 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.35.4-30.rhaos4.10.gitf2f339d.el8-dev ---- * The following command lists information about a single node: @@ -84,7 +84,7 @@ $ oc get node node1.example.com [source,terminal] ---- NAME STATUS ROLES AGE VERSION -node1.example.com Ready worker 7h v1.34.2 +node1.example.com Ready worker 7h v1.35.4 ---- * The following command provides more detailed information about a specific node, including the reason for @@ -163,9 +163,9 
@@ System Info: OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 - Container Runtime Version: cri-o://1.34.2-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 - Kubelet Version: v1.34.2 - Kube-Proxy Version: v1.34.2 + Container Runtime Version: cri-o://1.35.4-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 + Kubelet Version: v1.35.4 + Kube-Proxy Version: v1.35.4 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) diff --git a/modules/nodes-nodes-working-evacuating.adoc b/modules/nodes-nodes-working-evacuating.adoc index cd27e2201bfa..ec49e83b2646 100644 --- a/modules/nodes-nodes-working-evacuating.adoc +++ b/modules/nodes-nodes-working-evacuating.adoc @@ -44,7 +44,7 @@ $ oc get node [source,terminal] ---- NAME STATUS ROLES AGE VERSION - Ready,SchedulingDisabled worker 1d v1.34.2 + Ready,SchedulingDisabled worker 1d v1.35.4 ---- . Evacuate the pods by using one of the following methods: diff --git a/modules/nodes-scheduler-node-selectors-cluster.adoc b/modules/nodes-scheduler-node-selectors-cluster.adoc index daa8f636d4f1..e00c95ddeb44 100644 --- a/modules/nodes-scheduler-node-selectors-cluster.adoc +++ b/modules/nodes-scheduler-node-selectors-cluster.adoc @@ -145,7 +145,7 @@ $ oc get nodes -l type=user-node [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.34.2 +ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.35.4 ---- * Add labels directly to a node: @@ -198,5 +198,5 @@ $ oc get nodes -l type=user-node,region=east [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.34.2 +ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.35.4 ---- diff --git a/modules/nodes-scheduler-node-selectors-pod.adoc b/modules/nodes-scheduler-node-selectors-pod.adoc index 4ca5f9fb0d3b..db0f5ff87899 100644 --- 
a/modules/nodes-scheduler-node-selectors-pod.adoc +++ b/modules/nodes-scheduler-node-selectors-pod.adoc @@ -180,7 +180,7 @@ $ oc get nodes -l type=user-node,region=east [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-142-25.ec2.internal Ready worker 17m v1.34.2 +ip-10-0-142-25.ec2.internal Ready worker 17m v1.35.4 ---- . Add the matching node selector to a pod: diff --git a/modules/nodes-scheduler-node-selectors-project.adoc b/modules/nodes-scheduler-node-selectors-project.adoc index 4b689829da30..36d48eec3619 100644 --- a/modules/nodes-scheduler-node-selectors-project.adoc +++ b/modules/nodes-scheduler-node-selectors-project.adoc @@ -161,7 +161,7 @@ $ oc get nodes -l type=user-node [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.34.2 +ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.35.4 ---- * Add labels directly to a node: @@ -214,5 +214,5 @@ $ oc get nodes -l type=user-node,region=east [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.34.2 +ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.35.4 ---- diff --git a/modules/nodes-secondary-scheduler-rn-1.5.0.adoc b/modules/nodes-secondary-scheduler-rn-1.5.0.adoc deleted file mode 100644 index a01a1af26043..000000000000 --- a/modules/nodes-secondary-scheduler-rn-1.5.0.adoc +++ /dev/null @@ -1,34 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. 
- -:_mod-docs-content-type: REFERENCE -[id="secondary-scheduler-operator-release-notes-1.5.0_{context}"] -= Release notes for {secondary-scheduler-operator-full} 1.5.0 - -[role="_abstract"] -Review the release notes for {secondary-scheduler-operator} 1.5.0 to learn what is new and updated with this release. - -Issued: 29 October 2025 - -The following advisory is available for the {secondary-scheduler-operator-full} 1.5.0: - -* link:https://access.redhat.com/errata/RHBA-2025:19251[RHBA-2025:19251] - -[id="secondary-scheduler-1.5.0-new-features_{context}"] -== New features and enhancements - -* This release of the {secondary-scheduler-operator} updates the Kubernetes version to 1.33. - -// No bug fixes or CVEs to list -// [id="secondary-scheduler-1.5.0-bug-fixes_{context}"] -// === Bug fixes -// -// * This release of the {secondary-scheduler-operator} addresses several Common Vulnerabilities and Exposures (CVEs). - -[id="secondary-scheduler-operator-1.5.0-known-issues_{context}"] -== Known issues - -* Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the {secondary-scheduler-operator}. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. (link:https://issues.redhat.com/browse/WRKLDS-645[WRKLDS-645]) diff --git a/modules/nodes-secondary-scheduler-rn-1.5.1.adoc b/modules/nodes-secondary-scheduler-rn-1.5.1.adoc deleted file mode 100644 index 803c202cefba..000000000000 --- a/modules/nodes-secondary-scheduler-rn-1.5.1.adoc +++ /dev/null @@ -1,28 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. 
- -:_mod-docs-content-type: REFERENCE -[id="secondary-scheduler-operator-release-notes-1.5.1_{context}"] -= Release notes for {secondary-scheduler-operator-full} 1.5.1 - -[role="_abstract"] -Review the release notes for {secondary-scheduler-operator} 1.5.1 to learn what is new and updated with this release. - -Issued: 12 February 2026 - -The following advisory is available for the {secondary-scheduler-operator-full} 1.5.1: - -* link:https://access.redhat.com/errata/RHBA-2026:2642[RHBA-2026:2642] - -[id="secondary-scheduler-1.5.1-new-features_{context}"] -== New features and enhancements - -* This release of the {secondary-scheduler-operator} updates the Kubernetes version to 1.34. - -[id="secondary-scheduler-operator-1.5.1-known-issues_{context}"] -== Known issues - -* Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the {secondary-scheduler-operator}. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. (link:https://issues.redhat.com/browse/WRKLDS-645[WRKLDS-645]) diff --git a/modules/nodes-verify-failed-node-deleted.adoc b/modules/nodes-verify-failed-node-deleted.adoc index 6bb564bbb12e..7f40b39bb8e6 100644 --- a/modules/nodes-verify-failed-node-deleted.adoc +++ b/modules/nodes-verify-failed-node-deleted.adoc @@ -38,10 +38,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -openshift-control-plane-0 Ready master 3h24m v1.34.2 -openshift-control-plane-1 Ready master 3h24m v1.34.2 -openshift-compute-0 Ready worker 176m v1.34.2 -openshift-compute-1 Ready worker 176m v1.34.2 +openshift-control-plane-0 Ready master 3h24m v1.35.4 +openshift-control-plane-1 Ready master 3h24m v1.35.4 +openshift-compute-0 Ready worker 176m v1.35.4 +openshift-compute-1 Ready worker 176m v1.35.4 ---- . Wait for all of the cluster Operators to complete rolling out changes. 
diff --git a/modules/nvidia-gpu-aws-adding-a-gpu-node.adoc b/modules/nvidia-gpu-aws-adding-a-gpu-node.adoc index 986321b1ac35..5c84951a1b5e 100644 --- a/modules/nvidia-gpu-aws-adding-a-gpu-node.adoc +++ b/modules/nvidia-gpu-aws-adding-a-gpu-node.adoc @@ -29,12 +29,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.34.2 -ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.34.2 -ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.34.2 -ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.34.2 -ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.34.2 -ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.34.2 +ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.35.4 +ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.35.4 +ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.35.4 +ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.35.4 +ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.35.4 +ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.35.4 ---- . View the machines and machine sets that exist in the `openshift-machine-api` namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones. 
diff --git a/modules/nvidia-gpu-azure-adding-a-gpu-node.adoc b/modules/nvidia-gpu-azure-adding-a-gpu-node.adoc index d626e413ef14..4eebb1f47250 100644 --- a/modules/nvidia-gpu-azure-adding-a-gpu-node.adoc +++ b/modules/nvidia-gpu-azure-adding-a-gpu-node.adoc @@ -346,13 +346,13 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -myclustername-master-0 Ready control-plane,master 6h39m v1.34.2 -myclustername-master-1 Ready control-plane,master 6h41m v1.34.2 -myclustername-master-2 Ready control-plane,master 6h39m v1.34.2 -myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.34.2 -myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.34.2 -myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.34.2 -myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.34.2 +myclustername-master-0 Ready control-plane,master 6h39m v1.35.4 +myclustername-master-1 Ready control-plane,master 6h41m v1.35.4 +myclustername-master-2 Ready control-plane,master 6h39m v1.35.4 +myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.35.4 +myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.35.4 +myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.35.4 +myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.35.4 ---- . 
View the list of compute machine sets: diff --git a/modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc b/modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc index 7b46b8659d0b..401c95aba2ba 100644 --- a/modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc +++ b/modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc @@ -158,13 +158,13 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.34.2 -myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.34.2 -myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.34.2 -myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.34.2 -myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.34.2 -myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.34.2 -myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.34.2 +myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.35.4 +myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.35.4 +myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.35.4 +myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.35.4 +myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.35.4 +myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.35.4 +myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.35.4 ---- . View the machines and machine sets that exist in the `openshift-machine-api` namespace by running the following command. Each compute machine set is associated with a different availability zone within the {gcp-short} region. The installation program automatically load balances compute machines across availability zones. 
diff --git a/modules/nw-ptp-configuring-gnss-to-ntp-failover-sno.adoc b/modules/nw-ptp-configuring-gnss-to-ntp-failover-sno.adoc index cf53edd01d5b..d59d194de18d 100644 --- a/modules/nw-ptp-configuring-gnss-to-ntp-failover-sno.adoc +++ b/modules/nw-ptp-configuring-gnss-to-ntp-failover-sno.adoc @@ -446,7 +446,7 @@ The output is similar to the following: [source,terminal] ---- NAME STATUS ROLES AGE VERSION -mysno-sno.demo.lab Ready control-plane,master,worker 4h19m v1.34.1 +mysno-sno.demo.lab Ready control-plane,master,worker 4h19m v1.35.4 ---- + Then describe the NodePtpDevice using your node name: @@ -530,4 +530,3 @@ The output shows synchronization status messages for `phc2sys`. ---- phc2sys[xxx]: CLOCK_REALTIME phc offset -17 s2 freq -13865 delay 2305 ---- - diff --git a/modules/nw-sriov-hwol-configuring-machine-config-pool.adoc b/modules/nw-sriov-hwol-configuring-machine-config-pool.adoc index 044400eb854b..c1ec2b3c87ab 100644 --- a/modules/nw-sriov-hwol-configuring-machine-config-pool.adoc +++ b/modules/nw-sriov-hwol-configuring-machine-config-pool.adoc @@ -61,11 +61,11 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master-0 Ready master 2d v1.34.2 -master-1 Ready master 2d v1.34.2 -worker-0 Ready worker 2d v1.34.2 -worker-1 Ready worker 2d v1.34.2 -worker-2 Ready mcp-offloading,worker 47h v1.34.2 +master-0 Ready master 2d v1.35.4 +master-1 Ready master 2d v1.35.4 +worker-0 Ready worker 2d v1.35.4 +worker-1 Ready worker 2d v1.35.4 +worker-2 Ready mcp-offloading,worker 47h v1.35.4 ---- -- diff --git a/modules/olm-catalogsource-image-template.adoc b/modules/olm-catalogsource-image-template.adoc index 8fee34a5bf10..7026f7d591f4 100644 --- a/modules/olm-catalogsource-image-template.adoc +++ b/modules/olm-catalogsource-image-template.adoc @@ -24,14 +24,14 @@ During a cluster upgrade, the index image tag for the default Red Hat-provided c [source,terminal] ---- -registry.redhat.io/redhat/redhat-operator-index:v4.21 
+registry.redhat.io/redhat/redhat-operator-index:v4.21 ---- to: [source,terminal] ---- -registry.redhat.io/redhat/redhat-operator-index:v4.21 +registry.redhat.io/redhat/redhat-operator-index:v4.22 ---- However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image. @@ -70,7 +70,7 @@ metadata: "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}" spec: displayName: Example Catalog - image: quay.io/example-org/example-catalog:v1.34 + image: quay.io/example-org/example-catalog:v1.35 priority: -400 publisher: Example Org ---- @@ -90,11 +90,11 @@ endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] a {product-title} endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[] -cluster, which uses Kubernetes 1.34, the `olm.catalogImageTemplate` annotation in the preceding example resolves to the following image reference: +cluster, which uses Kubernetes 1.35, the `olm.catalogImageTemplate` annotation in the preceding example resolves to the following image reference: [source,terminal] ---- -quay.io/example-org/example-catalog:v1.34 +quay.io/example-org/example-catalog:v1.35 ---- For future releases of {product-title}, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later {product-title} version. With the `olm.catalogImageTemplate` annotation set before the upgrade, upgrading the cluster to the later {product-title} version would then automatically update the catalog's index image as well. 
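The `olm.catalogImageTemplate` resolution described above is plain string substitution of the Kubernetes version variables. The following Python sketch illustrates that behavior only; it is not OLM's actual implementation, and the image name is the hypothetical example from the preceding annotation:

```python
def resolve_catalog_template(template: str, kube_version: str) -> str:
    """Substitute Kubernetes version variables into a catalog image template.

    Illustrative sketch of the olm.catalogImageTemplate substitution
    behavior described above; not OLM's actual code.
    """
    major, minor = kube_version.split(".")[:2]
    return (template
            .replace("{kube_major_version}", major)
            .replace("{kube_minor_version}", minor))

# On a cluster running Kubernetes 1.35, the example template resolves to:
image = resolve_catalog_template(
    "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}",
    "1.35.4",
)
print(image)  # quay.io/example-org/example-catalog:v1.35
```

This mirrors why the annotation only tracks the major and minor version: the patch component is dropped before substitution.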
diff --git a/modules/olmv1-clusterobjectsets-deployment-mechanism.adoc b/modules/olmv1-clusterobjectsets-deployment-mechanism.adoc new file mode 100644 index 000000000000..6afb60de2c98 --- /dev/null +++ b/modules/olmv1-clusterobjectsets-deployment-mechanism.adoc @@ -0,0 +1,91 @@ +// Module included in the following assemblies: +// +// * extensions/ce/olmv1-configuring-extensions.adoc + +:_mod-docs-content-type: CONCEPT +[id="olmv1-clusterobjectsets-deployment-mechanism_{context}"] += ClusterObjectSets deployment mechanism + +[role="_abstract"] +{olmv1} uses ClusterObjectSets as the underlying mechanism to deploy cluster extensions with phased rollouts and safe upgrades. + +:FeatureName: {olmv1} ClusterObjectSets +include::snippets/technology-preview.adoc[] + +ClusterObjectSets are cluster-scoped APIs representing versioned resource sets organized into ordered phases. {olmv1} uses ClusterObjectSets to deploy operator resources sequentially. + +[id="olmv1-clusterobjectsets-benefits_{context}"] +== Benefits + +Phased rollouts:: Resources deploy in a defined order by kind. For example, CRDs are created before deployments that use them. + +Safe upgrades:: Both old and new revisions remain active until the new version succeeds, preventing service disruption. + +Immutable revision records:: Each revision is immutable, providing a clear record of every deployment. + +Large bundle support:: ClusterObjectSets reference externalized secrets to bypass the etcd 1.5 MiB object size limit, enabling large bundle deployments. + +[id="olmv1-clusterobjectsets-relationship_{context}"] +== Relationship to deployment configuration + +{olmv1} applies deployment configurations during the ClusterObjectSet deployment process, modifying operator manifests before organizing them into phases. + +[id="olmv1-clusterobjectsets-phases_{context}"] +== Deployment phases + +Resources deploy in the following phases, in order: + +. Namespaces +. Policies +. Identity resources +. Configuration resources +. Storage resources +. Custom resource definitions +. Roles +. Role bindings +. 
Infrastructure resources +. Deployments +. Scaling resources +. Publishing resources +. Admission resources + +Each phase completes before the next begins, ensuring foundational resources exist before dependent resources deploy. + +[id="olmv1-clusterobjectsets-inspecting_{context}"] +== Inspecting ClusterObjectSets + +Inspect ClusterObjectSets to view deployment status and revision history. + +. List all ClusterObjectSets in the cluster: ++ +[source,terminal] +---- +$ oc get clusterobjectsets +---- + +. List ClusterObjectSets for a specific extension: ++ +[source,terminal] +---- +$ oc get clusterobjectsets -l olm.operatorframework.io/owner-name=<extension_name> +---- ++ +Replace `<extension_name>` with the name of your `ClusterExtension` resource. + +. View the details of a specific ClusterObjectSet: ++ +[source,terminal] +---- +$ oc get clusterobjectset <clusterobjectset_name> -o yaml +---- ++ +The output shows deployment phases, resource status, and conditions. + +. Check the `ClusterExtension` status to see active revisions: ++ +[source,terminal] +---- +$ oc get clusterextension <extension_name> -o jsonpath='{.status.conditions}' | jq +---- ++ +The output shows the active revisions and their conditions. diff --git a/modules/olmv1-customizing-operator-deployments.adoc b/modules/olmv1-customizing-operator-deployments.adoc new file mode 100644 index 000000000000..efc2b4642416 --- /dev/null +++ b/modules/olmv1-customizing-operator-deployments.adoc @@ -0,0 +1,89 @@ +// Module included in the following assemblies: +// +// * extensions/ce/olmv1-configuring-extensions.adoc + +:_mod-docs-content-type: PROCEDURE +[id="olmv1-customizing-operator-deployments_{context}"] += Customizing operator deployments + +[role="_abstract"] +You can customize how operator pods are deployed by configuring deployment settings in the `ClusterExtension` resource. + +:FeatureName: {olmv1} deployment configuration API +include::snippets/technology-preview.adoc[] + +.Prerequisites + +* You have installed the {oc-first}. +* You have identified the operator you want to install and customize. 
+ +.Procedure + +. Create a `ClusterExtension` resource with deployment configuration customizations: ++ +[source,yaml] +---- +apiVersion: olm.operatorframework.io/v1 +kind: ClusterExtension +metadata: + name: my-operator +spec: + namespace: my-operator-ns + serviceAccount: + name: my-operator-installer + config: + configType: Inline + inline: + deploymentConfig: + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + nodeSelector: + node-role.kubernetes.io/infra: "" + tolerations: + - key: node-role.kubernetes.io/infra + operator: Exists + effect: NoSchedule + source: + sourceType: Catalog + catalog: + packageName: my-operator + version: 1.0.0 +---- ++ +where: ++ +-- +`resources:`:: Specifies CPU and memory resource requests and limits for the operator pod. +`nodeSelector:`:: Restricts pod scheduling to infrastructure nodes. +`tolerations:`:: Allows the pod to be scheduled on nodes with the specified taint. +-- + +. Apply the `ClusterExtension` resource: ++ +[source,terminal] +---- +$ oc apply -f my-operator.yaml +---- + +. Verify the installation: ++ +[source,terminal] +---- +$ oc get clusterextension my-operator -o yaml +---- + +.Verification + +* Verify that the operator pod is running with the configured settings: ++ +[source,terminal] +---- +$ oc get pods -n my-operator-ns +---- ++ +The output shows the operator pod in the `Running` state with the configured deployment settings applied. 
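The customizations in the procedure above modify the operator's bundle deployment manifest before rollout. The following Python sketch illustrates the combine rules documented for these fields (`resources` and `nodeSelector` replace bundle values; `tolerations` append); it is an illustration of the described semantics, not the actual {olmv1} code, and the sample bundle values are hypothetical:

```python
import copy

def apply_deployment_config(bundle_pod_spec: dict, cfg: dict) -> dict:
    """Sketch of how deploymentConfig values combine with a bundle pod spec.

    Assumed semantics (from the documented merge behavior, not OLM's code):
    resources and nodeSelector replace bundle values; tolerations append.
    """
    spec = copy.deepcopy(bundle_pod_spec)  # never mutate the bundle manifest
    if "resources" in cfg:
        for container in spec.get("containers", []):
            container["resources"] = cfg["resources"]      # replace
    if "nodeSelector" in cfg:
        spec["nodeSelector"] = cfg["nodeSelector"]         # replace
    if "tolerations" in cfg:
        spec.setdefault("tolerations", []).extend(cfg["tolerations"])  # append
    return spec

# Hypothetical bundle pod spec and deploymentConfig values:
bundle = {
    "containers": [{"name": "manager"}],
    "tolerations": [{"key": "existing", "operator": "Exists"}],
}
cfg = {
    "resources": {"requests": {"cpu": "100m", "memory": "128Mi"}},
    "nodeSelector": {"node-role.kubernetes.io/infra": ""},
    "tolerations": [{"key": "node-role.kubernetes.io/infra",
                     "operator": "Exists", "effect": "NoSchedule"}],
}
merged = apply_deployment_config(bundle, cfg)
print(len(merged["tolerations"]))  # 2
```

Note that the appended toleration does not displace the bundle's own toleration, which is why the merged spec carries both.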
diff --git a/modules/olmv1-deployment-config-api.adoc b/modules/olmv1-deployment-config-api.adoc new file mode 100644 index 000000000000..1d824dfaec5e --- /dev/null +++ b/modules/olmv1-deployment-config-api.adoc @@ -0,0 +1,139 @@ +// Module included in the following assemblies: +// +// * extensions/ce/olmv1-configuring-extensions.adoc + +:_mod-docs-content-type: CONCEPT +[id="olmv1-deployment-config-api_{context}"] += Deployment configuration API + +[role="_abstract"] +Customize operator pod deployments by using the deployment configuration API in the `ClusterExtension` resource. + +:FeatureName: {olmv1} deployment configuration API +include::snippets/technology-preview.adoc[] + +The deployment configuration API provides feature parity with the `Subscription.spec.config` field in {olmv0}. Use it to configure resources, node placement, storage, environment variables, and other deployment settings. + +[id="olmv1-deployment-config-structure_{context}"] +== Deployment configuration structure + +Specify deployment configuration in the `spec.config.inline.deploymentConfig` field as a JSON object. + +.Example deployment configuration +[source,yaml] +---- +apiVersion: olm.operatorframework.io/v1 +kind: ClusterExtension +metadata: + name: <extension_name> +spec: + namespace: <namespace> + config: + configType: Inline + inline: + deploymentConfig: + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 500m + memory: 512Mi + nodeSelector: + node-role.kubernetes.io/infra: "" + tolerations: + - key: node-role.kubernetes.io/infra + operator: Exists + effect: NoSchedule +---- ++ +where: ++ +-- +`deploymentConfig:`:: The deployment configuration object. +`resources:`:: CPU and memory requests and limits. +`nodeSelector:`:: Node placement selector. +`tolerations:`:: Node taint tolerations. +-- + +[id="olmv1-deployment-config-fields_{context}"] +== Supported configuration fields + +Environment variables:: Add or override environment variables with `env` and `envFrom`. 
Values are merged with existing container environment variables, with `deploymentConfig` values taking precedence. + +Resource requirements:: Specify CPU and memory requests and limits with `resources`. Replaces existing resource requirements. + +Node selector:: Control pod node placement with `nodeSelector`. Replaces existing node selector. + +Tolerations:: Schedule pods on nodes with taints by using `tolerations`. Appended to existing tolerations. + +Affinity rules:: Define pod affinity and anti-affinity rules with `affinity`. Non-nil fields replace corresponding bundle fields. + +Volumes and volume mounts:: Add `emptyDir`, `configMap`, or `secret` volumes. Appended to existing volumes. + +Annotations:: Add custom pod annotations. Merged with existing annotations, with bundle values taking precedence on conflicts. + +[id="olmv1-deployment-config-validation_{context}"] +== Configuration validation + +{olmv1} validates configuration against a JSON schema generated from Kubernetes API definitions. The schema derives from the `SubscriptionConfig` type used in {olmv0}, providing consistent validation across versions. + +Invalid configurations prevent installation and report errors in the `ClusterExtension` resource's `Progressing` condition. Common validation errors include: + +* Unknown field errors when using unsupported configuration options +* Type mismatch errors when field values do not match the expected type +* Required field errors when mandatory nested fields are missing + +[NOTE] +==== +{olmv1} applies configurations during the ClusterObjectSet deployment process, modifying operator manifests before organizing them into phases. +==== + +[id="olmv1-deployment-config-migration_{context}"] +== Migrating from {olmv0} + +Transfer existing `Subscription.spec.config` settings to the `deploymentConfig` object. The format is identical. 
+ +.Example {olmv0} subscription configuration +[source,yaml] +---- +apiVersion: operators.coreos.com/v1alpha1 +kind: Subscription +metadata: + name: my-operator +spec: + package: my-operator + channel: stable + config: + nodeSelector: + node-role.kubernetes.io/infra: "" + tolerations: + - key: node-role.kubernetes.io/infra + operator: Exists + effect: NoSchedule +---- + +.Equivalent {olmv1} cluster extension configuration +[source,yaml] +---- +apiVersion: olm.operatorframework.io/v1 +kind: ClusterExtension +metadata: + name: my-operator +spec: + namespace: my-operator-ns + config: + configType: Inline + inline: + deploymentConfig: + nodeSelector: + node-role.kubernetes.io/infra: "" + tolerations: + - key: node-role.kubernetes.io/infra + operator: Exists + effect: NoSchedule + source: + sourceType: Catalog + catalog: + packageName: my-operator +---- diff --git a/modules/olmv1-deployment-config-examples.adoc b/modules/olmv1-deployment-config-examples.adoc new file mode 100644 index 000000000000..016165146053 --- /dev/null +++ b/modules/olmv1-deployment-config-examples.adoc @@ -0,0 +1,191 @@ +// Module included in the following assemblies: +// +// * extensions/ce/olmv1-configuring-extensions.adoc + +:_mod-docs-content-type: CONCEPT +[id="olmv1-deployment-config-examples_{context}"] += Deployment configuration examples + +[role="_abstract"] +Common deployment configuration examples. + +:FeatureName: {olmv1} deployment configuration API +include::snippets/technology-preview.adoc[] + +[id="olmv1-deployment-config-env-vars_{context}"] +== Environment variables + +Add environment variables for runtime configuration. 
+ +.Adding environment variables +[source,yaml] +---- +apiVersion: olm.operatorframework.io/v1 +kind: ClusterExtension +metadata: + name: kmm-operator +spec: + namespace: openshift-kmm + config: + configType: Inline + inline: + deploymentConfig: + env: + - name: KMM_MANAGED + value: "1" + source: + sourceType: Catalog + catalog: + packageName: kernel-module-management +---- ++ +where: ++ +-- +`KMM_MANAGED`:: Sets the environment variable used when deploying the Kernel Module Management Operator in a hub-and-spoke configuration. +-- + +[id="olmv1-deployment-config-volumes_{context}"] +== Custom volumes + +Mount a custom CA certificate for HTTPS communication through a proxy. + +.Mounting a custom CA certificate +[source,yaml] +---- +apiVersion: olm.operatorframework.io/v1 +kind: ClusterExtension +metadata: + name: my-operator +spec: + namespace: my-operator-ns + config: + configType: Inline + inline: + deploymentConfig: + volumes: + - name: trusted-ca + configMap: + name: trusted-ca + items: + - key: ca-bundle.crt + path: tls-ca-bundle.pem + volumeMounts: + - name: trusted-ca + mountPath: /etc/pki/ca-trust/extracted/pem + readOnly: true + source: + sourceType: Catalog + catalog: + packageName: my-operator +---- ++ +where: ++ +-- +`volumes:`:: Creates a volume from the `trusted-ca` config map. +`volumeMounts:`:: Mounts the volume to the operator container at the specified path. +`mountPath:`:: The path where the certificate bundle is available inside the container. +-- + +[id="olmv1-deployment-config-affinity_{context}"] +== Pod anti-affinity + +Spread operator pods across nodes for high availability. 
+ +.Pod anti-affinity for high availability +[source,yaml] +---- +apiVersion: olm.operatorframework.io/v1 +kind: ClusterExtension +metadata: + name: my-operator +spec: + namespace: my-operator-ns + config: + configType: Inline + inline: + deploymentConfig: + affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: app.kubernetes.io/name + operator: In + values: + - my-operator + topologyKey: kubernetes.io/hostname + source: + sourceType: Catalog + catalog: + packageName: my-operator +---- ++ +where: ++ +-- +`podAntiAffinity:`:: Configures anti-affinity rules for the operator pod. +`preferredDuringSchedulingIgnoredDuringExecution:`:: Specifies soft constraints that the scheduler tries to enforce but does not guarantee. +`topologyKey`:: Groups nodes by hostname to ensure pods are spread across different nodes. +-- + +[id="olmv1-deployment-config-combined_{context}"] +== Multiple customizations + +Combine multiple deployment customizations. 
+ +.Production operator with combined customizations +[source,yaml] +---- +apiVersion: olm.operatorframework.io/v1 +kind: ClusterExtension +metadata: + name: production-operator +spec: + namespace: production-operators + serviceAccount: + name: production-operator-installer + config: + configType: Inline + inline: + deploymentConfig: + resources: + requests: + cpu: 200m + memory: 256Mi + limits: + cpu: 1000m + memory: 1Gi + env: + - name: LOG_LEVEL + value: info + - name: ENABLE_METRICS + value: "true" + nodeSelector: + node-role.kubernetes.io/infra: "" + tolerations: + - key: node-role.kubernetes.io/infra + operator: Exists + effect: NoSchedule + annotations: + monitoring.openshift.io/scrape: "true" + monitoring.openshift.io/port: "8080" + source: + sourceType: Catalog + catalog: + packageName: production-operator + version: 2.1.0 +---- ++ +where: ++ +-- +`resources:`:: Specifies memory and CPU requests and limits for the operator pod. +`env:`:: Defines environment variables for the operator. +`nodeSelector:`:: Restricts the pod to run on infrastructure nodes. +`tolerations:`:: Allows the pod to be scheduled on nodes with the specified taint. +`annotations:`:: Adds Prometheus monitoring annotations to the pod. +-- diff --git a/modules/olmv1-deployment-config-reference.adoc b/modules/olmv1-deployment-config-reference.adoc new file mode 100644 index 000000000000..dfd98332b4c0 --- /dev/null +++ b/modules/olmv1-deployment-config-reference.adoc @@ -0,0 +1,161 @@ +// Module included in the following assemblies: +// +// * extensions/ce/olmv1-configuring-extensions.adoc + +:_mod-docs-content-type: REFERENCE +[id="olmv1-deployment-config-reference_{context}"] += Deployment configuration field reference + +[role="_abstract"] +Deployment configuration field reference and {olmv0} to {olmv1} mapping. 
+ +:FeatureName: {olmv1} deployment configuration API +include::snippets/technology-preview.adoc[] + +[id="olmv1-deployment-config-field-mapping_{context}"] +== Field mapping from {olmv0} to {olmv1} + +Field conversion from {olmv0} to {olmv1}: + +.{olmv0} to {olmv1} configuration field mapping +[cols="1,1,2",options="header"] +|=== +|{olmv0} field path +|{olmv1} field path +|Notes + +|`spec.config.env` +|`spec.config.inline.deploymentConfig.env` +|Environment variables are merged. {olmv1} values take precedence over bundle values. + +|`spec.config.envFrom` +|`spec.config.inline.deploymentConfig.envFrom` +|Environment variable sources are merged. + +|`spec.config.resources` +|`spec.config.inline.deploymentConfig.resources` +|Resource specifications completely replace bundle resource requirements. + +|`spec.config.nodeSelector` +|`spec.config.inline.deploymentConfig.nodeSelector` +|Node selectors completely replace bundle node selectors. + +|`spec.config.tolerations` +|`spec.config.inline.deploymentConfig.tolerations` +|Tolerations are appended to bundle tolerations. + +|`spec.config.affinity` +|`spec.config.inline.deploymentConfig.affinity` +|Affinity rules selectively override bundle affinity. Non-nil fields replace corresponding bundle fields. + +|`spec.config.volumes` +|`spec.config.inline.deploymentConfig.volumes` +|Volumes are appended to bundle volumes. + +|`spec.config.volumeMounts` +|`spec.config.inline.deploymentConfig.volumeMounts` +|Volume mounts are appended to bundle volume mounts. + +|`spec.config.selector` +|Not supported +|The `selector` field from {olmv0} is not supported in {olmv1}. This field was non-functional in {olmv0}. + +|=== + +[id="olmv1-deployment-config-merge-behavior_{context}"] +== Merge and override behavior + +Configuration fields have different merge behaviors: + +Replace:: Completely replaces bundle values. Applies to: `resources`, `nodeSelector` + +Append:: Adds to existing bundle values. 
Applies to: `tolerations`, `volumes`, `volumeMounts` + +Merge with precedence:: Merges with bundle values. Deployment configuration takes precedence on conflicts. Applies to: `env`, `envFrom` + +Merge with bundle precedence:: Merges with bundle values. Bundle takes precedence on conflicts. Applies to: `annotations` + +Selective override:: Non-nil fields replace corresponding bundle fields. Applies to: `affinity` + +[id="olmv1-deployment-config-env-reference_{context}"] +== Environment variable fields + +`env`:: An array of environment variable objects. Merged with existing container environment variables, with deployment configuration values taking precedence. Each object has: ++ +* `name`: Environment variable name (string, required). +* `value`: Environment variable value (string, optional). +* `valueFrom`: Reference to a secret or config map key (object, optional). + +`envFrom`:: An array of environment variable source objects merged with existing sources. Each object can reference: ++ +* `configMapRef`: Config map containing environment variables. +* `secretRef`: Secret containing environment variables. + +[id="olmv1-deployment-config-resources-reference_{context}"] +== Resource requirements fields + +`resources`:: Compute resource requirements that completely replace existing bundle resource requirements. Contains: ++ +* `requests`: Minimum resources required. +** `cpu`: CPU request (string, for example, `"100m"`, `"0.5"`). +** `memory`: Memory request (string, for example, `"128Mi"`, `"1Gi"`). +* `limits`: Maximum resources allowed. +** `cpu`: CPU limit (string). +** `memory`: Memory limit (string). + +[id="olmv1-deployment-config-node-placement-reference_{context}"] +== Node placement fields + +`nodeSelector`:: Map of key-value pairs for node selection. Completely replaces any existing node selector. Pods schedule only on nodes with all specified labels. 
++ +.Example node selector +[source,yaml] +---- +nodeSelector: + node-role.kubernetes.io/infra: "" + disktype: ssd +---- + +`tolerations`:: Array of toleration objects appended to existing bundle tolerations. Each toleration has: ++ +* `key`: Taint key (string). +* `operator`: Operator (string: `Exists`, `Equal`). +* `value`: Taint value (string, required if `operator` is `Equal`). +* `effect`: Taint effect (string: `NoSchedule`, `PreferNoSchedule`, `NoExecute`). +* `tolerationSeconds`: Time before pod eviction for `NoExecute` effect (integer). + +`affinity`:: Affinity rules object. Non-nil fields replace corresponding bundle fields. Contains: ++ +* `nodeAffinity`: Node label-based scheduling rules. +* `podAffinity`: Pod label-based scheduling rules. +* `podAntiAffinity`: Pod spreading rules across nodes. + +[id="olmv1-deployment-config-storage-reference_{context}"] +== Storage fields + +`volumes`:: Array of volume objects appended to existing bundle volumes. Supported types: ++ +* `configMap`: Config map volume. +* `secret`: Secret volume. +* `emptyDir`: Empty directory volume. +* Each volume requires a `name` field (string). + +`volumeMounts`:: Array of volume mount objects appended to existing bundle volume mounts. Each mount has: ++ +* `name`: Volume name to mount (string, required). +* `mountPath`: The path within the container (string, required). +* `readOnly`: Whether the volume is read-only (boolean, optional). +* `subPath`: A path within the volume (string, optional). + +[id="olmv1-deployment-config-metadata-reference_{context}"] +== Metadata fields + +`annotations`:: A map of key-value pairs for pod annotations. Annotations from the deployment configuration are merged with bundle annotations. When keys conflict, bundle annotations take precedence. 
+ +.Example annotations +[source,yaml] +---- +annotations: + monitoring.openshift.io/scrape: "true" + monitoring.openshift.io/port: "8080" +---- diff --git a/modules/olmv1-deployment-config-troubleshooting.adoc b/modules/olmv1-deployment-config-troubleshooting.adoc new file mode 100644 index 000000000000..9c2e8c155ae9 --- /dev/null +++ b/modules/olmv1-deployment-config-troubleshooting.adoc @@ -0,0 +1,63 @@ +// Module included in the following assemblies: +// +// * extensions/ce/olmv1-configuring-extensions.adoc + +:_mod-docs-content-type: CONCEPT +[id="olmv1-deployment-config-troubleshooting_{context}"] += Troubleshooting deployment configuration + +[role="_abstract"] +Review common deployment configuration issues and their resolutions. + +:FeatureName: {olmv1} deployment configuration API +include::snippets/technology-preview.adoc[] + +[id="olmv1-deployment-config-troubleshooting-validation_{context}"] +== Validation errors + +When installation fails, check the `Progressing` condition for validation errors: + +[source,terminal] +---- +$ oc get clusterextension <extension_name> -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}' +---- + +Common validation errors and resolutions: + +Unknown field:: The configuration includes an unsupported field. Remove unsupported fields. + +Type mismatch:: A field value does not match the expected type. Verify that field types match Kubernetes specifications. + +Required field missing:: A mandatory nested field is missing. Complete all required fields in nested structures. 
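A frequent source of confusion when troubleshooting is the merge direction: `env` entries from `deploymentConfig` override bundle values on name conflicts, while conflicting `annotations` keep the bundle value. The following Python sketch illustrates those two documented directions only; it is not {olmv1}'s actual merge code, and the sample values are hypothetical:

```python
def merge_env(bundle_env: list, config_env: list) -> list:
    """env merge: deploymentConfig values win on name conflicts."""
    merged = {e["name"]: e for e in bundle_env}
    merged.update({e["name"]: e for e in config_env})  # config wins
    return list(merged.values())

def merge_annotations(bundle_ann: dict, config_ann: dict) -> dict:
    """annotations merge: bundle values win on key conflicts."""
    return {**config_ann, **bundle_ann}  # bundle wins

env = merge_env(
    [{"name": "LOG_LEVEL", "value": "debug"}],                # from the bundle
    [{"name": "LOG_LEVEL", "value": "info"},                  # from deploymentConfig
     {"name": "ENABLE_METRICS", "value": "true"}],
)
print(sorted(e["value"] for e in env))  # ['info', 'true']

ann = merge_annotations({"example/key": "bundle"}, {"example/key": "config"})
print(ann)  # {'example/key': 'bundle'}
```

In other words, an `env` override that "does not take" usually indicates a typo in the variable name, while an `annotations` value that "does not take" is expected behavior whenever the bundle already sets that key.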
+ +[id="olmv1-deployment-config-troubleshooting-applied_{context}"] +== Verifying applied configuration + +Inspect the operator deployment to verify applied configurations: + +[source,terminal] +---- +$ oc get deployment -n <namespace> -l olm.operatorframework.io/owner-name=<cluster_extension_name> -o yaml +---- + +Configuration locations in the deployment specification: + +* **Environment variables**: `spec.template.spec.containers[].env` and `envFrom` +* **Resources**: `spec.template.spec.containers[].resources` +* **Node selector**: `spec.template.spec.nodeSelector` +* **Tolerations**: `spec.template.spec.tolerations` +* **Affinity**: `spec.template.spec.affinity` +* **Volumes**: `spec.template.spec.volumes` and `volumeMounts` +* **Annotations**: `spec.template.metadata.annotations` + +[id="olmv1-deployment-config-troubleshooting-conflicts_{context}"] +== Annotation conflicts + +Bundle annotations take precedence over deployment configuration annotations when keys conflict. Check bundle annotations: + +[source,terminal] +---- +$ oc get clusterextension <cluster_extension_name> -o jsonpath='{.status.install.bundle}' +---- + +You cannot override a bundle annotation from the deployment configuration; either modify the bundle or accept the bundle value. diff --git a/modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc b/modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc new file mode 100644 index 000000000000..294c6b3b4b33 --- /dev/null +++ b/modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc @@ -0,0 +1,19 @@ +// Module included in the following assemblies: +// +// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc + +:_mod-docs-content-type: CONCEPT +[id="persistent-storage-csi-gcp-images-snapshot-class-overview_{context}"] += Volume snapshot class csi-gce-pd-vsc-images + +[role="_abstract"] +By default, you cannot restore more than six volumes per snapshot per hour.
So in KubeVirt environments, you normally cannot create more than six VMs per hour from a "golden image" (templates saved as snapshots). + +For Google Cloud Platform (GCP) persistent disk (PD) storage CSI, there is a non-default `VolumeSnapshotClass`, named `csi-gce-pd-vsc-images`, that uses the `snapshot-type: images` parameter. When using KubeVirt, it allows you to overcome the six VMs per hour restriction, so that you can create VMs from "golden images". + +[NOTE] +==== +Snapshots using the images snapshot class are strictly limited to ReadWriteOnce (RWO) sources, but you can restore them to ReadWriteMany (RWX) hyperdisk-balanced disks. +==== + +For more information, see Section _Volume snapshots CRD: VolumeSnapshotClass_ under _Additional resources_. diff --git a/modules/persistent-storage-csi-snapshots-operator.adoc b/modules/persistent-storage-csi-snapshots-operator.adoc index 9f5689b3957f..697fad192f54 100644 --- a/modules/persistent-storage-csi-snapshots-operator.adoc +++ b/modules/persistent-storage-csi-snapshots-operator.adoc @@ -6,7 +6,7 @@ [id="persistent-storage-csi-snapshots-operator_{context}"] = About the CSI Snapshot Controller Operator -The CSI Snapshot Controller Operator runs in the `openshift-cluster-storage-operator` namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default. +The Container Storage Interface (CSI) Snapshot Controller Operator runs in the `openshift-cluster-storage-operator` namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default. The CSI Snapshot Controller Operator installs the CSI snapshot controller, which runs in the `openshift-cluster-storage-operator` namespace. @@ -32,10 +32,36 @@ The `VolumeSnapshot` CRD is namespaced. A developer uses the CRD as a distinct r `VolumeSnapshotClass`:: -Allows a cluster administrator to specify different attributes belonging to a `VolumeSnapshot` object.
These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim. +The `VolumeSnapshotClass` CRD allows a cluster administrator to specify different attributes belonging to a `VolumeSnapshot` object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim. + The `VolumeSnapshotClass` CRD defines the parameters for the `csi-external-snapshotter` sidecar to use when creating a snapshot. This allows the storage back end to know what kind of snapshot to dynamically create if multiple options are supported. + Dynamically provisioned snapshots use the `VolumeSnapshotClass` CRD to specify storage-provider-specific parameters to use when creating a snapshot. + The `VolumeSnapshotContentClass` CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage back end. ++ +For Google Cloud Platform (GCP) persistent disk (PD) storage CSI, there is a non-default `VolumeSnapshotClass`, named `csi-gce-pd-vsc-images`, that uses the `snapshot-type: images` parameter. When using KubeVirt, this allows you to create VMs from "golden images" (templates saved as snapshots). ++ +If you want to use the images volume snapshot class for dynamic snapshot provisioning, use one of the following methods: + +* Make the images volume snapshot class the default by changing the `snapshot.storage.kubernetes.io/is-default-class` annotation to `true`. Also, for the normal default volume snapshot class, `csi-gce-pd-vsc`, be sure to change this annotation to `false`. + +* When creating the snapshot object, be sure to set `volumeSnapshotClassName` to `csi-gce-pd-vsc-images`. ++ +For information about creating volume snapshots, see Section _Creating a volume snapshot_.
++ +.Example images volume snapshot class YAML file +[source,yaml] +---- +apiVersion: snapshot.storage.k8s.io/v1 +kind: VolumeSnapshotClass +metadata: + name: csi-gce-pd-vsc-images +driver: pd.csi.storage.gke.io +parameters: + snapshot-type: images +---- ++ +* `metadata.name: csi-gce-pd-vsc-images`: The name for the non-default images volume snapshot class. + +* `parameters: snapshot-type: images`: Defines the snapshot as a "golden image", or bootable template, rather than a standard disk backup. diff --git a/modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc b/modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc index 49dc0aeeec74..c3ac6b3a4077 100644 --- a/modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc +++ b/modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc @@ -8,8 +8,6 @@ = Adding bare-metal nodes [role="_abstract"] -Adding bare-metal nodes to an {product-title} cluster on vSphere is supported as a Technology Preview feature. - -However, if you add bare-metal nodes, you must remove the vSphere CSI Driver, otherwise the cluster is marked as degraded. For information about how to remove the driver and the consequences of doing this, see Section _Disabling and enabling storage on vSphere_. +Adding bare-metal nodes to an {product-title} cluster on vSphere is supported. However, if you add bare-metal nodes, you must remove the vSphere CSI Driver; otherwise, the cluster is marked as degraded. For information about how to remove the driver and the consequences of doing this, see Section _Disabling and enabling storage on vSphere_. For information about how to add bare-metal nodes, under _Additional resources_, see Section _Adding bare-metal compute machines to a vSphere cluster_.
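Reviewer note on the `csi-gce-pd-vsc-images` content above: for the per-object method, where you set `volumeSnapshotClassName` instead of changing the cluster default, a minimal `VolumeSnapshot` manifest might look like the following sketch. The snapshot and claim names are illustrative, not part of the product:

[source,yaml]
----
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: golden-image-snapshot
spec:
  volumeSnapshotClassName: csi-gce-pd-vsc-images
  source:
    persistentVolumeClaimName: golden-image-pvc
----

Because image-type snapshots are limited to ReadWriteOnce (RWO) sources, this sketch assumes `golden-image-pvc` is an RWO claim; as noted above, the resulting snapshot can still be restored to ReadWriteMany (RWX) hyperdisk-balanced disks.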
diff --git a/modules/querying-the-status-of-cluster-nodes-using-the-cli.adoc b/modules/querying-the-status-of-cluster-nodes-using-the-cli.adoc index bbfaca27fb66..9ea4da590577 100644 --- a/modules/querying-the-status-of-cluster-nodes-using-the-cli.adoc +++ b/modules/querying-the-status-of-cluster-nodes-using-the-cli.adoc @@ -26,12 +26,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -compute-1.example.com Ready worker 33m v1.34.2 -control-plane-1.example.com Ready master 41m v1.34.2 -control-plane-2.example.com Ready master 45m v1.34.2 -compute-2.example.com Ready worker 38m v1.34.2 -compute-3.example.com Ready worker 33m v1.34.2 -control-plane-3.example.com Ready master 41m v1.34.2 +compute-1.example.com Ready worker 33m v1.35.4 +control-plane-1.example.com Ready master 41m v1.35.4 +control-plane-2.example.com Ready master 45m v1.35.4 +compute-2.example.com Ready worker 38m v1.35.4 +compute-3.example.com Ready worker 33m v1.35.4 +control-plane-3.example.com Ready master 41m v1.35.4 ---- . Review CPU and memory resource availability for each cluster node: diff --git a/modules/restore-determine-state-etcd-member.adoc b/modules/restore-determine-state-etcd-member.adoc index 17879f54ee6d..d218c70b85ab 100644 --- a/modules/restore-determine-state-etcd-member.adoc +++ b/modules/restore-determine-state-etcd-member.adoc @@ -72,7 +72,7 @@ $ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady" .Example output [source,terminal] ---- -ip-10-0-131-183.ec2.internal NotReady master 122m v1.34.2 <1> +ip-10-0-131-183.ec2.internal NotReady master 122m v1.35.4 <1> ---- <1> If the node is listed as `NotReady`, then the *node is not ready*. 
@@ -96,9 +96,9 @@ $ oc get nodes -l node-role.kubernetes.io/master [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-131-183.ec2.internal Ready master 6h13m v1.34.2 -ip-10-0-164-97.ec2.internal Ready master 6h13m v1.34.2 -ip-10-0-154-204.ec2.internal Ready master 6h13m v1.34.2 +ip-10-0-131-183.ec2.internal Ready master 6h13m v1.35.4 +ip-10-0-164-97.ec2.internal Ready master 6h13m v1.35.4 +ip-10-0-154-204.ec2.internal Ready master 6h13m v1.35.4 ---- .. Check whether the status of an etcd pod is either `Error` or `CrashloopBackoff`: diff --git a/modules/restore-replace-stopped-baremetal-etcd-member.adoc b/modules/restore-replace-stopped-baremetal-etcd-member.adoc index 67798840071e..46f00c85a852 100644 --- a/modules/restore-replace-stopped-baremetal-etcd-member.adoc +++ b/modules/restore-replace-stopped-baremetal-etcd-member.adoc @@ -285,10 +285,10 @@ examplecluster-compute-1 Running 165m opens $ oc get nodes NAME STATUS ROLES AGE VERSION -openshift-control-plane-0 Ready master 3h24m v1.34.2 -openshift-control-plane-1 Ready master 3h24m v1.34.2 -openshift-compute-0 Ready worker 176m v1.34.2 -openshift-compute-1 Ready worker 176m v1.34.2 +openshift-control-plane-0 Ready master 3h24m v1.35.4 +openshift-control-plane-1 Ready master 3h24m v1.35.4 +openshift-compute-0 Ready worker 176m v1.35.4 +openshift-compute-1 Ready worker 176m v1.35.4 ---- . 
Create the new `BareMetalHost` object and the secret to store the BMC credentials: @@ -413,11 +413,11 @@ $ oc get nodes ---- $ oc get nodes NAME STATUS ROLES AGE VERSION -openshift-control-plane-0 Ready master 4h26m v1.34.2 -openshift-control-plane-1 Ready master 4h26m v1.34.2 -openshift-control-plane-2 Ready master 12m v1.34.2 -openshift-compute-0 Ready worker 3h58m v1.34.2 -openshift-compute-1 Ready worker 3h58m v1.34.2 +openshift-control-plane-0 Ready master 4h26m v1.35.4 +openshift-control-plane-1 Ready master 4h26m v1.35.4 +openshift-control-plane-2 Ready master 12m v1.35.4 +openshift-compute-0 Ready worker 3h58m v1.35.4 +openshift-compute-1 Ready worker 3h58m v1.35.4 ---- . Turn the quorum guard back on by entering the following command: diff --git a/modules/rhcos-add-extensions.adoc b/modules/rhcos-add-extensions.adoc index f24865e9ffe7..29bae0d44098 100644 --- a/modules/rhcos-add-extensions.adoc +++ b/modules/rhcos-add-extensions.adoc @@ -105,7 +105,7 @@ $ oc get node | grep worker [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.34.2 +ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.35.4 ---- + [source,terminal] diff --git a/modules/rhcos-enabling-multipath-day-2.adoc b/modules/rhcos-enabling-multipath-day-2.adoc index a5a6406a9ed2..bd4a790e7da6 100644 --- a/modules/rhcos-enabling-multipath-day-2.adoc +++ b/modules/rhcos-enabling-multipath-day-2.adoc @@ -111,12 +111,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-136-161.ec2.internal Ready worker 28m v1.34.2 -ip-10-0-136-243.ec2.internal Ready master 34m v1.34.2 -ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.34.2 -ip-10-0-142-249.ec2.internal Ready master 34m v1.34.2 -ip-10-0-153-11.ec2.internal Ready worker 28m v1.34.2 -ip-10-0-153-150.ec2.internal Ready master 34m v1.34.2 +ip-10-0-136-161.ec2.internal Ready worker 28m v1.35.4 +ip-10-0-136-243.ec2.internal Ready master 
34m v1.35.4 +ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.35.4 +ip-10-0-142-249.ec2.internal Ready master 34m v1.35.4 +ip-10-0-153-11.ec2.internal Ready worker 28m v1.35.4 +ip-10-0-153-150.ec2.internal Ready master 34m v1.35.4 ---- + You can see that scheduling on each worker node is disabled as the change is being applied. diff --git a/modules/sno-adding-worker-nodes-to-sno-clusters-manually.adoc b/modules/sno-adding-worker-nodes-to-sno-clusters-manually.adoc index ce58089b5847..5c89e5df4f3f 100644 --- a/modules/sno-adding-worker-nodes-to-sno-clusters-manually.adoc +++ b/modules/sno-adding-worker-nodes-to-sno-clusters-manually.adoc @@ -218,6 +218,6 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane-1.example.com Ready master,worker 56m v1.34.2 -compute-1.example.com Ready worker 11m v1.34.2 +control-plane-1.example.com Ready master,worker 56m v1.35.4 +compute-1.example.com Ready worker 11m v1.35.4 ---- diff --git a/modules/update-upgrading-cli.adoc b/modules/update-upgrading-cli.adoc index 45f81535b54e..2301cf64fa48 100644 --- a/modules/update-upgrading-cli.adoc +++ b/modules/update-upgrading-cli.adoc @@ -208,10 +208,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-168-251.ec2.internal Ready master 82m v1.34.2 -ip-10-0-170-223.ec2.internal Ready master 82m v1.34.2 -ip-10-0-179-95.ec2.internal Ready worker 70m v1.34.2 -ip-10-0-182-134.ec2.internal Ready worker 70m v1.34.2 -ip-10-0-211-16.ec2.internal Ready master 82m v1.34.2 -ip-10-0-250-100.ec2.internal Ready worker 69m v1.34.2 +ip-10-0-168-251.ec2.internal Ready master 82m v1.35.4 +ip-10-0-170-223.ec2.internal Ready master 82m v1.35.4 +ip-10-0-179-95.ec2.internal Ready worker 70m v1.35.4 +ip-10-0-182-134.ec2.internal Ready worker 70m v1.35.4 +ip-10-0-211-16.ec2.internal Ready master 82m v1.35.4 +ip-10-0-250-100.ec2.internal Ready worker 69m v1.35.4 ---- \ No newline at end of file diff --git 
a/modules/update-vsphere-virtual-hardware-on-compute-nodes.adoc b/modules/update-vsphere-virtual-hardware-on-compute-nodes.adoc index a47265f5e7cd..90cd3705099e 100644 --- a/modules/update-vsphere-virtual-hardware-on-compute-nodes.adoc +++ b/modules/update-vsphere-virtual-hardware-on-compute-nodes.adoc @@ -34,9 +34,9 @@ $ oc get nodes -l node-role.kubernetes.io/worker [source,terminal] ---- NAME STATUS ROLES AGE VERSION -compute-node-0 Ready worker 30m v1.34.2 -compute-node-1 Ready worker 30m v1.34.2 -compute-node-2 Ready worker 30m v1.34.2 +compute-node-0 Ready worker 30m v1.35.4 +compute-node-1 Ready worker 30m v1.35.4 +compute-node-2 Ready worker 30m v1.35.4 ---- + Note the names of your compute nodes. diff --git a/modules/update-vsphere-virtual-hardware-on-control-plane-nodes.adoc b/modules/update-vsphere-virtual-hardware-on-control-plane-nodes.adoc index 97d47d4a3397..d17046f96f92 100644 --- a/modules/update-vsphere-virtual-hardware-on-control-plane-nodes.adoc +++ b/modules/update-vsphere-virtual-hardware-on-control-plane-nodes.adoc @@ -29,9 +29,9 @@ $ oc get nodes -l node-role.kubernetes.io/master [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane-node-0 Ready master 75m v1.34.2 -control-plane-node-1 Ready master 75m v1.34.2 -control-plane-node-2 Ready master 75m v1.34.2 +control-plane-node-0 Ready master 75m v1.35.4 +control-plane-node-1 Ready master 75m v1.35.4 +control-plane-node-2 Ready master 75m v1.35.4 ---- + Note the names of your control plane nodes. diff --git a/modules/virt-booting-vms-uefi-mode.adoc b/modules/virt-booting-vms-uefi-mode.adoc index 19845a409334..0c7cd714b018 100644 --- a/modules/virt-booting-vms-uefi-mode.adoc +++ b/modules/virt-booting-vms-uefi-mode.adoc @@ -15,9 +15,8 @@ You can configure a virtual machine to boot in UEFI mode by editing the `Virtual .Procedure -. Edit or create a `VirtualMachine` manifest file. Use the `spec.firmware.bootloader` stanza to configure UEFI mode. +. 
To boot a virtual machine (VM) in UEFI mode with secure boot active, edit or create a `VirtualMachine` manifest file. Use the `spec.firmware.bootloader` stanza to configure UEFI mode: + -Booting in UEFI mode with secure boot active: [source,yaml] ---- apiversion: kubevirt.io/v1 diff --git a/modules/virt-collecting-data-about-vms.adoc b/modules/virt-collecting-data-about-vms.adoc index f6291cef8463..314b13c20a36 100644 --- a/modules/virt-collecting-data-about-vms.adoc +++ b/modules/virt-collecting-data-about-vms.adoc @@ -14,7 +14,7 @@ Collecting data about malfunctioning virtual machines (VMs) minimizes the time r * For Linux VMs, you have installed the latest QEMU guest agent. * For Windows VMs, you have: ** Recorded the Windows patch update details. -** link:https://access.redhat.com/solutions/6957701[Installed the latest VirtIO drivers]. +** Installed the latest VirtIO drivers. ** Installed the latest QEMU guest agent. ** If Remote Desktop Protocol (RDP) is enabled, you have connected by using the desktop viewer to determine whether there is a problem with the connection software. diff --git a/modules/virt-collecting-data-about-your-environment.adoc b/modules/virt-collecting-data-about-your-environment.adoc index 484709ad8c30..b36b6a65e9b6 100644 --- a/modules/virt-collecting-data-about-your-environment.adoc +++ b/modules/virt-collecting-data-about-your-environment.adoc @@ -12,13 +12,13 @@ Collecting data about your environment minimizes the time required to analyze an .Prerequisites //link needs to be added for HCP when available ifdef::openshift-dedicated,openshift-rosa[] -* You have link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_user_workload_monitoring/storing-and-recording-data-uwm#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data-uwm[set the retention time for Prometheus metrics data] to a minimum of seven days. 
-* You have link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_user_workload_monitoring/storing-and-recording-data-uwm#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data-uwm[configured the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox] so that they can be viewed and persisted outside the cluster. +* You have set the retention time for Prometheus metrics data to a minimum of seven days. +* You have configured the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster. endif::openshift-dedicated,openshift-rosa[] ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -* You have link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/storing-and-recording-data#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data[set the retention time for Prometheus metrics data] to a minimum of seven days. -* You have link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/configuring-alerts-and-notifications[configured the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox] so that they can be viewed and persisted outside the cluster. +* You have set the retention time for Prometheus metrics data to a minimum of seven days. +* You have configured the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster. endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * You have recorded the exact number of affected nodes and virtual machines. 
@@ -27,10 +27,10 @@ endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // must-gather not supported for ROSA/OSD, per Dustin Row ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[] . Collect must-gather data for the cluster. -. link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/latest/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Collect must-gather data for {rh-storage-first}], if necessary. +. Collect must-gather data for {rh-storage-first}, if necessary. . Collect must-gather data for {VirtProductName}. endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] -. link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/accessing_metrics/accessing-metrics-as-an-administrator#querying-metrics-for-all-projects-with-mon-dashboard_accessing-metrics-as-an-administrator[Collect Prometheus metrics for the cluster]. +. Collect Prometheus metrics for the cluster. endif::openshift-rosa-hcp[] //link needs to be added for HCP when available diff --git a/modules/virt-generating-a-vm-memory-dump.adoc b/modules/virt-generating-a-vm-memory-dump.adoc index d9bfa60232c4..4ef6ce12f7db 100644 --- a/modules/virt-generating-a-vm-memory-dump.adoc +++ b/modules/virt-generating-a-vm-memory-dump.adoc @@ -44,7 +44,7 @@ $ virtctl memory-dump download --output= . Attach the memory dump to a Red Hat Support case. + -Alternatively, you can inspect the memory dump, for example by using link:https://github.com/volatilityfoundation/volatility3[the volatility3 tool]. +Alternatively, you can inspect the memory dump, for example by using the volatility3 tool. . 
Optional: Remove the memory dump: + diff --git a/modules/virt-support-create-jira-issue.adoc b/modules/virt-support-create-jira-issue.adoc index 3489ddd297b8..171d8666eb09 100644 --- a/modules/virt-support-create-jira-issue.adoc +++ b/modules/virt-support-create-jira-issue.adoc @@ -13,7 +13,7 @@ To report an issue with your environment to Red{nbsp}Hat Support, create a Jira . Log in to Red Hat Atlassian Jira. -. Click the following link to open a *Create Issue* page: link:https://redhat.atlassian.net/secure/CreateIssue.jspa[Create issue]. +. Access the *Create Issue* page. . Select {VirtProductName} (CNV) as the *Project*. diff --git a/modules/virt-support-submit-support-case.adoc b/modules/virt-support-submit-support-case.adoc index e73f8ae5279d..f91ff011232f 100644 --- a/modules/virt-support-submit-support-case.adoc +++ b/modules/virt-support-submit-support-case.adoc @@ -9,4 +9,4 @@ [role="_abstract"] Submit a support case to resolve a cluster issue that is affecting the ability of {VirtProductName} to function properly in your environment. -You can submit a support case to Red{nbsp}Hat Support by using the link:https://access.redhat.com/support/cases/#/case/list[Customer Support] page. Include data that you collected about your issue with your support request. \ No newline at end of file +You can submit a support case to Red{nbsp}Hat Support by using the Customer Support page. Include data that you collected about your issue with your support request. 
\ No newline at end of file diff --git a/nodes/nodes/nodes-nodes-additional-crio-storage.adoc b/nodes/nodes/nodes-nodes-additional-crio-storage.adoc new file mode 100644 index 000000000000..94b46282e062 --- /dev/null +++ b/nodes/nodes/nodes-nodes-additional-crio-storage.adoc @@ -0,0 +1,30 @@ +:_mod-docs-content-type: ASSEMBLY +[id="nodes-nodes-additional-crio-storage"] += Additional CRI-O storage locations for faster container startup +include::_attributes/common-attributes.adoc[] +:context: nodes-nodes-additional-crio-storage + +toc::[] + +[role="_abstract"] +To reduce application startup time, make your applications run more efficiently, and configure lazy pulling, you can configure additional storage locations for the CRI-O container engine. + +Fields in the `ContainerRuntimeConfig` custom resource (CR) let you specify where CRI-O stores and resolves container image layers, complete container images, and OCI artifacts. + +:FeatureName: Using additional CRI-O storage locations +include::snippets/technology-preview.adoc[] + +include::modules/nodes-nodes-additional-crio-storage-about.adoc[leveloffset=+1] +include::modules/nodes-nodes-additional-crio-storage-configuring.adoc[leveloffset=+1] + +== Additional resources + +* link:https://github.com/containerd/stargz-snapshotter[Stargz Store plugin] +* link:https://github.com/containerd/stargz-snapshotter/blob/main/docs/INSTALL.md[Install Stargz Snapshotter and Stargz Store] +* link:https://github.com/containers/nydus-storage-plugin[Nydus Storage Plugin] +* link:https://github.com/containerd/stargz-snapshotter/blob/main/docs/estargz.md[eStargz format] +* link:https://nydus.dev/[Nydus format] +* xref:../../nodes/jobs/nodes-pods-daemonsets.adoc#nodes-pods-daemonsets[Running background tasks on nodes automatically with daemon sets] +* xref:../../machine_configuration/machine-configs-configure.adoc#machine-configs-configure[Using machine config objects to configure nodes] +* 
xref:../../machine_configuration/mco-coreos-layering.adoc#mco-coreos-layering[Image mode for OpenShift] + diff --git a/nodes/scheduling/descheduler/index.adoc b/nodes/scheduling/descheduler/index.adoc index b8488889dc84..2b089d229466 100644 --- a/nodes/scheduling/descheduler/index.adoc +++ b/nodes/scheduling/descheduler/index.adoc @@ -9,6 +9,9 @@ toc::[] [role="_abstract"] While the xref:../../../nodes/scheduling/nodes-scheduler-about.adoc#nodes-scheduler-about[scheduler] is used to determine the most suitable node to host a new pod, the descheduler can be used to evict a running pod so that the pod can be rescheduled onto a more suitable node. +:operator-name: The {descheduler-operator} +include::snippets/operator-not-available.adoc[] + // About the descheduler include::modules/nodes-descheduler-about.adoc[leveloffset=+1] diff --git a/nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc b/nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc index b32eec97b4c7..c7a6b53976e4 100644 --- a/nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc +++ b/nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc @@ -9,6 +9,9 @@ toc::[] [role="_abstract"] You can run the descheduler in {product-title} by installing the {descheduler-operator} and setting the required profiles and other customizations. 
+:operator-name: The {descheduler-operator} +include::snippets/operator-not-available.adoc[] + // Installing the descheduler include::modules/nodes-descheduler-installing.adoc[leveloffset=+1] diff --git a/nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc b/nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc index 2a4c44c6ca8e..87e9ee7cf936 100644 --- a/nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc +++ b/nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc @@ -11,13 +11,10 @@ Review the {descheduler-operator} release notes to track its development and lea The {descheduler-operator} allows you to evict pods so that they can be rescheduled on more appropriate nodes. -For more information, see xref:../../../nodes/scheduling/descheduler/index.adoc#nodes-descheduler-about_nodes-descheduler-about[About the descheduler]. - -// Release notes for Kube Descheduler Operator 5.3.2 -include::modules/nodes-descheduler-rn-5.3.2.adoc[leveloffset=+1] +:operator-name: The {descheduler-operator} +include::snippets/operator-not-available.adoc[] -// Release notes for Kube Descheduler Operator 5.3.1 -include::modules/nodes-descheduler-rn-5.3.1.adoc[leveloffset=+1] +For more information, see xref:../../../nodes/scheduling/descheduler/index.adoc#nodes-descheduler-about_nodes-descheduler-about[About the descheduler]. 
-// Release notes for Kube Descheduler Operator 5.3.0 -include::modules/nodes-descheduler-rn-5.3.0.adoc[leveloffset=+1] +// Release notes for Kube Descheduler Operator x.y.z +// include::modules/nodes-descheduler-rn-x.y.z.adoc[leveloffset=+1] diff --git a/nodes/scheduling/descheduler/nodes-descheduler-uninstalling.adoc b/nodes/scheduling/descheduler/nodes-descheduler-uninstalling.adoc index c6599e813e35..318af35b8be7 100644 --- a/nodes/scheduling/descheduler/nodes-descheduler-uninstalling.adoc +++ b/nodes/scheduling/descheduler/nodes-descheduler-uninstalling.adoc @@ -9,5 +9,8 @@ toc::[] [role="_abstract"] If you no longer need the {descheduler-operator} in your cluster, you can uninstall the Operator and remove its related resources. +:operator-name: The {descheduler-operator} +include::snippets/operator-not-available.adoc[] + // Uninstalling the descheduler include::modules/nodes-descheduler-uninstalling.adoc[leveloffset=+1] diff --git a/nodes/scheduling/secondary_scheduler/index.adoc b/nodes/scheduling/secondary_scheduler/index.adoc index 033cc2c0548d..17d11f4ba6cc 100644 --- a/nodes/scheduling/secondary_scheduler/index.adoc +++ b/nodes/scheduling/secondary_scheduler/index.adoc @@ -9,5 +9,8 @@ toc::[] [role="_abstract"] You can install the {secondary-scheduler-operator} to run a custom secondary scheduler alongside the default scheduler to schedule pods. 
+:operator-name: The {secondary-scheduler-operator}
+include::snippets/operator-not-available.adoc[]
+
 // About the {secondary-scheduler-operator}
 include::modules/nodes-secondary-scheduler-about.adoc[leveloffset=+1]
diff --git a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-configuring.adoc b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-configuring.adoc
index 1669eecf922c..75c112df4876 100644
--- a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-configuring.adoc
+++ b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-configuring.adoc
@@ -9,6 +9,9 @@ toc::[]
 [role="_abstract"]
 You can run a custom secondary scheduler in {product-title} by installing the {secondary-scheduler-operator}, deploying the secondary scheduler, and setting the secondary scheduler in the pod definition.
+:operator-name: The {secondary-scheduler-operator}
+include::snippets/operator-not-available.adoc[]
+
 // Installing the {secondary-scheduler-operator}
 include::modules/nodes-secondary-scheduler-install-console.adoc[leveloffset=+1]
diff --git a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc
index d14a4fb0bd79..ed17031df0fb 100644
--- a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc
+++ b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc
@@ -11,10 +11,10 @@ Review the {secondary-scheduler-operator-full} release notes to track its develo
 The {secondary-scheduler-operator} allows you to deploy a custom secondary scheduler in your {product-title} cluster.
 
-For more information, see xref:../../../nodes/scheduling/secondary_scheduler/index.adoc#nodes-secondary-scheduler-about_nodes-secondary-scheduler-about[About the {secondary-scheduler-operator}].
+:operator-name: The {secondary-scheduler-operator}
+include::snippets/operator-not-available.adoc[]
 
-// Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.5.1
-include::modules/nodes-secondary-scheduler-rn-1.5.1.adoc[leveloffset=+1]
+For more information, see xref:../../../nodes/scheduling/secondary_scheduler/index.adoc#nodes-secondary-scheduler-about_nodes-secondary-scheduler-about[About the {secondary-scheduler-operator}].
 
-// Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.5.0
-include::modules/nodes-secondary-scheduler-rn-1.5.0.adoc[leveloffset=+1]
+// Release notes for Secondary Scheduler Operator for Red Hat OpenShift x.y.z
+// include::modules/nodes-secondary-scheduler-rn-x.y.z.adoc[leveloffset=+1]
diff --git a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-uninstalling.adoc b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-uninstalling.adoc
index 9b2345998767..7902f24c19e6 100644
--- a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-uninstalling.adoc
+++ b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-uninstalling.adoc
@@ -9,6 +9,9 @@ toc::[]
 [role="_abstract"]
 If you no longer need the {secondary-scheduler-operator-full} in your cluster, you can uninstall the Operator and remove its related resources.
+:operator-name: The {secondary-scheduler-operator}
+include::snippets/operator-not-available.adoc[]
+
 // Uninstalling the {secondary-scheduler-operator}
 include::modules/nodes-secondary-scheduler-uninstall-console.adoc[leveloffset=+1]
diff --git a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
index c73bc3e21a28..ad9c5341cf86 100644
--- a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
@@ -63,8 +63,17 @@ include::modules/persistent-storage-byok.adoc[leveloffset=+1]
 For information about installing with user-managed encryption for GCP PD, see xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#installation-configuration-parameters_installing-gcp-customizations[Installation configuration parameters].
 endif::openshift-rosa,openshift-dedicated[]
+ifndef::openshift-rosa,openshift-dedicated[]
+
+include::modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc[leveloffset=+1]
+
+endif::openshift-rosa,openshift-dedicated[]
+
 [id="resources-for-gcp"]
 [role="_additional-resources"]
 == Additional resources
 * xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk]
 * xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi[Configuring CSI volumes]
+ifndef::openshift-rosa,openshift-dedicated[]
+* xref:../../storage/container_storage_interface/persistent-storage-csi-snapshots.adoc#volume-snapshot-crds[Volume snapshots CRD: VolumeSnapshotClass]
+endif::openshift-rosa,openshift-dedicated[]
diff --git a/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc b/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
index 9c7b44d43474..699529b8e0fa 100644
--- a/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
@@ -129,9 +129,6 @@ include::modules/persistent-storage-csi-vsphere-disable-storage-procedure.adoc[l
 include::modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc[leveloffset=+1]
 
-:FeatureName: Adding bare-metal nodes
-include::snippets/technology-preview.adoc[leveloffset=+2]
-
 [role="_additional-resources"]
 .Additional resources
 * xref:../../machine_management/user_infra/adding-bare-metal-compute-vsphere-user-infra.adoc[Adding bare-metal compute machines to a vSphere cluster]
diff --git a/virt/support/virt-collecting-virt-data.adoc b/virt/support/virt-collecting-virt-data.adoc
index 19b8ab5b0f85..ffc291c258d6 100644
--- a/virt/support/virt-collecting-virt-data.adoc
+++ b/virt/support/virt-collecting-virt-data.adoc
@@ -7,7 +7,7 @@ include::_attributes/common-attributes.adoc[]
 toc::[]
 
 [role="_abstract"]
-When you submit a xref:../../support/getting-support.adoc#support-submitting-a-case_getting-support[support case] to Red{nbsp}Hat Support, it is helpful to provide debugging information for {product-title} and {VirtProductName} by using the following tools:
+When you submit a support case to Red{nbsp}Hat Support, it is helpful to provide debugging information for {product-title} and {VirtProductName} by using the following tools:
 
 // must-gather not supported for ROSA/OSD, per Dustin Row
 ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[]
@@ -51,4 +51,15 @@ endif::openshift-dedicated,openshift-rosa[]
 * xref:../../virt/managing_vms/virt-install-virtio-drivers-on-windows-vms.adoc#virt-installing-virtio-drivers-existing-windows_virt-install-virtio-drivers-on-windows-vms[Installing VirtIO drivers from a SATA CD drive on an existing Windows VM]
 * xref:../../virt/managing_vms/virt-accessing-vm-consoles.adoc#virt-connecting-desktop-viewer-web_virt-accessing-vm-consoles[Connect to the desktop viewer by using the web console]
 * xref:../../virt/support/virt-collecting-virt-data.adoc#virt-generating-a-vm-memory-dump_virt-collecting-virt-data[Collect memory dumps from VMs]
+* xref:../../support/getting-support.adoc#support-submitting-a-case_getting-support[Submitting a support case]
+* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_user_workload_monitoring/storing-and-recording-data-uwm#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data-uwm[Modifying retention time and size for Prometheus metrics data]
+* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_user_workload_monitoring/storing-and-recording-data-uwm#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data-uwm[Configuring the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox]
+* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/storing-and-recording-data#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data[Modifying retention time and size for Prometheus metrics data]
+* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/configuring-alerts-and-notifications[Configuring alerts and notifications]
+* link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/latest/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Downloading log files and diagnostic information]
+* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/accessing_metrics/accessing-metrics-as-an-administrator#querying-metrics-for-all-projects-with-mon-dashboard_accessing-metrics-as-an-administrator[Querying metrics for all projects with the monitoring dashboard]
+* link:https://access.redhat.com/solutions/6957701[Installing the latest VirtIO drivers]
+* link:https://github.com/volatilityfoundation/volatility3[Volatility3 tool]
+* link:https://access.redhat.com/support/cases/#/case/list[Customer Support]
+* link:https://redhat.atlassian.net/secure/CreateIssue.jspa[Create issue]
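For reviewers: the configuring abstract touched in this diff says you run a custom secondary scheduler by "setting the secondary scheduler in the pod definition". As a minimal sketch of what that step looks like (not part of the diff itself; the scheduler name `secondary-scheduler` and the pod details are assumed example values), a pod opts in through the standard Kubernetes `spec.schedulerName` field:

```yaml
# Illustrative only: a pod that requests scheduling by a custom
# secondary scheduler instead of the default scheduler.
# "secondary-scheduler" is an assumed example name; it must match
# the name of the scheduler deployed by the Operator.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  schedulerName: secondary-scheduler
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
```

If `schedulerName` names a scheduler that is not running in the cluster, the pod remains in the `Pending` state, which is why the value must match the deployed secondary scheduler exactly.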