From 4dfef7852561027fc44f84199ec9672454556b01 Mon Sep 17 00:00:00 2001
From: Lisa Pettyjohn
Date: Tue, 24 Feb 2026 14:43:32 -0500
Subject: [PATCH 01/17] OSDOCS-18432#Adding BM nodes to vSphere clusters TP -> GA

---
 modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc | 4 +---
 .../persistent-storage-csi-vsphere.adoc                     | 3 ---
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc b/modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc
index 49dc0aeeec74..c3ac6b3a4077 100644
--- a/modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc
+++ b/modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc
@@ -8,8 +8,6 @@
 = Adding bare-metal nodes
 
 [role="_abstract"]
-Adding bare-metal nodes to an {product-title} cluster on vSphere is supported as a Technology Preview feature.
-
-However, if you add bare-metal nodes, you must remove the vSphere CSI Driver, otherwise the cluster is marked as degraded. For information about how to remove the driver and the consequences of doing this, see Section _Disabling and enabling storage on vSphere_.
+Adding bare-metal nodes to an {product-title} cluster on vSphere is supported. However, if you add bare-metal nodes, you must remove the vSphere CSI Driver; otherwise, the cluster is marked as degraded. For information about how to remove the driver and the consequences of doing this, see Section _Disabling and enabling storage on vSphere_.
 
 For information about how to add bare-metal nodes, under _Additional resources_, see Section _Adding bare-metal compute machines to a vSphere cluster_.
 
diff --git a/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc b/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
index 9c7b44d43474..699529b8e0fa 100644
--- a/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-vsphere.adoc
@@ -129,9 +129,6 @@ include::modules/persistent-storage-csi-vsphere-disable-storage-procedure.adoc[l
 
 include::modules/persistent-storage-csi-vsphere-adding-bm-nodes.adoc[leveloffset=+1]
 
-:FeatureName: Adding bare-metal nodes
-include::snippets/technology-preview.adoc[leveloffset=+2]
-
 [role="_additional-resources"]
 .Additional resources
 * xref:../../machine_management/user_infra/adding-bare-metal-compute-vsphere-user-infra.adoc[Adding bare-metal compute machines to a vSphere cluster]

From a6393fc2271471920a4cf4b906f224855faf07e4 Mon Sep 17 00:00:00 2001
From: Lisa Pettyjohn
Date: Thu, 26 Feb 2026 10:58:45 -0500
Subject: [PATCH 02/17] OSDOCS-18456#GCP PD images snapshot class

---
 ...si-gcp-images-snapshot-class-overview.adoc | 19 ++++++++++++
 ...istent-storage-csi-snapshots-operator.adoc | 30 +++++++++++++++++--
 .../persistent-storage-csi-gcp-pd.adoc        |  9 ++++++
 3 files changed, 56 insertions(+), 2 deletions(-)
 create mode 100644 modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc

diff --git a/modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc b/modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc
new file mode 100644
index 000000000000..294c6b3b4b33
--- /dev/null
+++ b/modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc
@@ -0,0 +1,19 @@
+// Module included in the following assemblies:
+//
+// * storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="persistent-storage-csi-gcp-images-snapshot-class-overview_{context}"]
+= Volume snapshot class csi-gce-pd-vsc-images
+
+[role="_abstract"]
+By default, you cannot restore more than six volumes per snapshot per hour. So in KubeVirt environments, you normally cannot create more than six VMs per hour from a "golden image" (a template saved as a snapshot).
+
+For Google Cloud Platform (GCP) persistent disk (PD) storage CSI, there is a non-default `VolumeSnapshotClass`, named `csi-gce-pd-vsc-images`, that uses the `snapshot-type: images` parameter. When using KubeVirt, it allows you to overcome the six VMs per hour restriction, so that you can create VMs from "golden images".
+
+[NOTE]
+====
+Snapshots using the images snapshot class are strictly limited to ReadWriteOnce (RWO) sources, but you can restore them to ReadWriteMany (RWX) hyperdisk-balanced disks.
+====
+
+For more information, under _Additional resources_, see Section _Volume snapshots CRD: VolumeSnapshotClass_.
diff --git a/modules/persistent-storage-csi-snapshots-operator.adoc b/modules/persistent-storage-csi-snapshots-operator.adoc
index 9f5689b3957f..697fad192f54 100644
--- a/modules/persistent-storage-csi-snapshots-operator.adoc
+++ b/modules/persistent-storage-csi-snapshots-operator.adoc
@@ -6,7 +6,7 @@
 [id="persistent-storage-csi-snapshots-operator_{context}"]
 = About the CSI Snapshot Controller Operator
 
-The CSI Snapshot Controller Operator runs in the `openshift-cluster-storage-operator` namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default.
+The Container Storage Interface (CSI) Snapshot Controller Operator runs in the `openshift-cluster-storage-operator` namespace. It is installed by the Cluster Version Operator (CVO) in all clusters by default.
 
 The CSI Snapshot Controller Operator installs the CSI snapshot controller, which runs in the `openshift-cluster-storage-operator` namespace.
 
@@ -32,10 +32,36 @@ The `VolumeSnapshot` CRD is namespaced. A developer uses the CRD as a distinct r
 
 `VolumeSnapshotClass`::
-Allows a cluster administrator to specify different attributes belonging to a `VolumeSnapshot` object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim.
+The `VolumeSnapshotClass` CRD allows a cluster administrator to specify different attributes belonging to a `VolumeSnapshot` object. These attributes may differ among snapshots taken of the same volume on the storage system, in which case they would not be expressed by using the same storage class of a persistent volume claim.
 +
 The `VolumeSnapshotClass` CRD defines the parameters for the `csi-external-snapshotter` sidecar to use when creating a snapshot. This allows the storage back end to know what kind of snapshot to dynamically create if multiple options are supported.
 +
 Dynamically provisioned snapshots use the `VolumeSnapshotClass` CRD to specify storage-provider-specific parameters to use when creating a snapshot.
 +
 The `VolumeSnapshotContentClass` CRD is not namespaced and is for use by a cluster administrator to enable global configuration options for their storage back end.
++
+For Google Cloud Platform (GCP) persistent disk (PD) storage CSI, there is a non-default `VolumeSnapshotClass`, named `csi-gce-pd-vsc-images`, that uses the `snapshot-type: images` parameter. When using KubeVirt, this allows you to create VMs from "golden images" (templates saved as snapshots).
++
+If you want to use the images volume snapshot class for dynamic snapshot provisioning, you can either:
+
+* Make the images volume snapshot class the default by changing the `snapshot.storage.kubernetes.io/is-default-class` annotation to `true`. Also, for the normal default volume snapshot class, `csi-gce-pd-vsc`, be sure to change this annotation to `false`.
+
+* When creating the snapshot object, be sure to set `volumeSnapshotClassName` to `csi-gce-pd-vsc-images`.
++
+For information about creating volume snapshots, see Section _Creating a volume snapshot_.
++
+.Example images volume snapshot class YAML file
+[source,yaml]
+----
+apiVersion: snapshot.storage.k8s.io/v1
+kind: VolumeSnapshotClass
+metadata:
+  name: csi-gce-pd-vsc-images
+driver: pd.csi.storage.gke.io
+parameters:
+  snapshot-type: images
+----
++
+* `metadata.name: csi-gce-pd-vsc-images`: Name for the non-default images volume snapshot class.
+
+* `parameters: snapshot-type: images`: Defines the snapshot as a "golden image", or a bootable template, rather than a standard disk backup.
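The second bullet above sets the class on each snapshot object, but no such object appears in this patch. A minimal sketch of what it might look like; the names `golden-image-snapshot` and `rhel9-golden-pvc` are hypothetical placeholders, not values from the patch:

[source,yaml]
----
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: golden-image-snapshot # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-gce-pd-vsc-images # select the images class explicitly
  source:
    persistentVolumeClaimName: rhel9-golden-pvc # hypothetical RWO source PVC
----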
diff --git a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
index c73bc3e21a28..ad9c5341cf86 100644
--- a/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
+++ b/storage/container_storage_interface/persistent-storage-csi-gcp-pd.adoc
@@ -63,8 +63,17 @@ include::modules/persistent-storage-byok.adoc[leveloffset=+1]
 For information about installing with user-managed encryption for GCP PD, see xref:../../installing/installing_gcp/installing-gcp-customizations.adoc#installation-configuration-parameters_installing-gcp-customizations[Installation configuration parameters].
 endif::openshift-rosa,openshift-dedicated[]
 
+ifndef::openshift-rosa,openshift-dedicated[]
+
+include::modules/persistent-storage-csi-gcp-images-snapshot-class-overview.adoc[leveloffset=+1]
+
+endif::openshift-rosa,openshift-dedicated[]
+
 [id="resources-for-gcp"]
 [role="_additional-resources"]
 == Additional resources
 * xref:../../storage/persistent_storage/persistent-storage-gce.adoc#persistent-storage-using-gce[Persistent storage using GCE Persistent Disk]
 * xref:../../storage/container_storage_interface/persistent-storage-csi.adoc#persistent-storage-csi[Configuring CSI volumes]
+ifndef::openshift-rosa,openshift-dedicated[]
+* xref:../../storage/container_storage_interface/persistent-storage-csi-snapshots.adoc#volume-snapshot-crds[Volume snapshots CRD: VolumeSnapshotClass]
+endif::openshift-rosa,openshift-dedicated[]

From 10d479cfb7d0b4f646b0973b0febd7144a38a445 Mon Sep 17 00:00:00 2001
From: Andrea Hoffer
Date: Thu, 23 Apr 2026 09:51:26 -0400
Subject: [PATCH 03/17] OSDOCS#18166: Removing unsupported OSSO versions for 4.22

---
 .../nodes-secondary-scheduler-rn-1.5.0.adoc   | 34 -------------------
 .../nodes-secondary-scheduler-rn-1.5.1.adoc   | 28 ---------------
 .../scheduling/secondary_scheduler/index.adoc |  3 ++
 ...nodes-secondary-scheduler-configuring.adoc |  3 ++
 ...des-secondary-scheduler-release-notes.adoc | 10 +++---
 ...odes-secondary-scheduler-uninstalling.adoc |  3 ++
 6 files changed, 14 insertions(+), 67 deletions(-)
 delete mode 100644 modules/nodes-secondary-scheduler-rn-1.5.0.adoc
 delete mode 100644 modules/nodes-secondary-scheduler-rn-1.5.1.adoc

diff --git a/modules/nodes-secondary-scheduler-rn-1.5.0.adoc b/modules/nodes-secondary-scheduler-rn-1.5.0.adoc
deleted file mode 100644
index a01a1af26043..000000000000
--- 
a/modules/nodes-secondary-scheduler-rn-1.5.0.adoc +++ /dev/null @@ -1,34 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. - -:_mod-docs-content-type: REFERENCE -[id="secondary-scheduler-operator-release-notes-1.5.0_{context}"] -= Release notes for {secondary-scheduler-operator-full} 1.5.0 - -[role="_abstract"] -Review the release notes for {secondary-scheduler-operator} 1.5.0 to learn what is new and updated with this release. - -Issued: 29 October 2025 - -The following advisory is available for the {secondary-scheduler-operator-full} 1.5.0: - -* link:https://access.redhat.com/errata/RHBA-2025:19251[RHBA-2025:19251] - -[id="secondary-scheduler-1.5.0-new-features_{context}"] -== New features and enhancements - -* This release of the {secondary-scheduler-operator} updates the Kubernetes version to 1.33. - -// No bug fixes or CVEs to list -// [id="secondary-scheduler-1.5.0-bug-fixes_{context}"] -// === Bug fixes -// -// * This release of the {secondary-scheduler-operator} addresses several Common Vulnerabilities and Exposures (CVEs). - -[id="secondary-scheduler-operator-1.5.0-known-issues_{context}"] -== Known issues - -* Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the {secondary-scheduler-operator}. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. (link:https://issues.redhat.com/browse/WRKLDS-645[WRKLDS-645]) diff --git a/modules/nodes-secondary-scheduler-rn-1.5.1.adoc b/modules/nodes-secondary-scheduler-rn-1.5.1.adoc deleted file mode 100644 index 803c202cefba..000000000000 --- a/modules/nodes-secondary-scheduler-rn-1.5.1.adoc +++ /dev/null @@ -1,28 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. - -:_mod-docs-content-type: REFERENCE -[id="secondary-scheduler-operator-release-notes-1.5.1_{context}"] -= Release notes for {secondary-scheduler-operator-full} 1.5.1 - -[role="_abstract"] -Review the release notes for {secondary-scheduler-operator} 1.5.1 to learn what is new and updated with this release. - -Issued: 12 February 2026 - -The following advisory is available for the {secondary-scheduler-operator-full} 1.5.1: - -* link:https://access.redhat.com/errata/RHBA-2026:2642[RHBA-2026:2642] - -[id="secondary-scheduler-1.5.1-new-features_{context}"] -== New features and enhancements - -* This release of the {secondary-scheduler-operator} updates the Kubernetes version to 1.34. - -[id="secondary-scheduler-operator-1.5.1-known-issues_{context}"] -== Known issues - -* Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the {secondary-scheduler-operator}. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. 
(link:https://issues.redhat.com/browse/WRKLDS-645[WRKLDS-645]) diff --git a/nodes/scheduling/secondary_scheduler/index.adoc b/nodes/scheduling/secondary_scheduler/index.adoc index 033cc2c0548d..17d11f4ba6cc 100644 --- a/nodes/scheduling/secondary_scheduler/index.adoc +++ b/nodes/scheduling/secondary_scheduler/index.adoc @@ -9,5 +9,8 @@ toc::[] [role="_abstract"] You can install the {secondary-scheduler-operator} to run a custom secondary scheduler alongside the default scheduler to schedule pods. +:operator-name: The {secondary-scheduler-operator} +include::snippets/operator-not-available.adoc[] + // About the {secondary-scheduler-operator} include::modules/nodes-secondary-scheduler-about.adoc[leveloffset=+1] diff --git a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-configuring.adoc b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-configuring.adoc index 1669eecf922c..75c112df4876 100644 --- a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-configuring.adoc +++ b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-configuring.adoc @@ -9,6 +9,9 @@ toc::[] [role="_abstract"] You can run a custom secondary scheduler in {product-title} by installing the {secondary-scheduler-operator}, deploying the secondary scheduler, and setting the secondary scheduler in the pod definition. +:operator-name: The {secondary-scheduler-operator} +include::snippets/operator-not-available.adoc[] + // Installing the {secondary-scheduler-operator} include::modules/nodes-secondary-scheduler-install-console.adoc[leveloffset=+1] diff --git a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc index d14a4fb0bd79..ed17031df0fb 100644 --- a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc +++ b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-release-notes.adoc @@ -11,10 +11,10 @@ Review the {secondary-scheduler-operator-full} release notes to track its develo The {secondary-scheduler-operator} allows you to deploy a custom secondary scheduler in your {product-title} cluster. -For more information, see xref:../../../nodes/scheduling/secondary_scheduler/index.adoc#nodes-secondary-scheduler-about_nodes-secondary-scheduler-about[About the {secondary-scheduler-operator}]. +:operator-name: The {secondary-scheduler-operator} +include::snippets/operator-not-available.adoc[] -// Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.5.1 -include::modules/nodes-secondary-scheduler-rn-1.5.1.adoc[leveloffset=+1] +For more information, see xref:../../../nodes/scheduling/secondary_scheduler/index.adoc#nodes-secondary-scheduler-about_nodes-secondary-scheduler-about[About the {secondary-scheduler-operator}]. 
-// Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.5.0 -include::modules/nodes-secondary-scheduler-rn-1.5.0.adoc[leveloffset=+1] +// Release notes for Secondary Scheduler Operator for Red Hat OpenShift x.y.z +// include::modules/nodes-secondary-scheduler-rn-x.y.z.adoc[leveloffset=+1] diff --git a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-uninstalling.adoc b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-uninstalling.adoc index 9b2345998767..7902f24c19e6 100644 --- a/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-uninstalling.adoc +++ b/nodes/scheduling/secondary_scheduler/nodes-secondary-scheduler-uninstalling.adoc @@ -9,6 +9,9 @@ toc::[] [role="_abstract"] If you no longer need the {secondary-scheduler-operator-full} in your cluster, you can uninstall the Operator and remove its related resources. +:operator-name: The {secondary-scheduler-operator} +include::snippets/operator-not-available.adoc[] + // Uninstalling the {secondary-scheduler-operator} include::modules/nodes-secondary-scheduler-uninstall-console.adoc[leveloffset=+1] From bbf2711b37fe53766485896e779483188c1ced0b Mon Sep 17 00:00:00 2001 From: Andrea Hoffer Date: Thu, 23 Apr 2026 09:54:33 -0400 Subject: [PATCH 04/17] OSDOCS#18167: Removing unsupported KDO versions for 4.22 --- modules/nodes-descheduler-rn-5.3.0.adoc | 39 ------------------- modules/nodes-descheduler-rn-5.3.1.adoc | 23 ----------- modules/nodes-descheduler-rn-5.3.2.adoc | 23 ----------- nodes/scheduling/descheduler/index.adoc | 3 ++ .../nodes-descheduler-configuring.adoc | 3 ++ .../nodes-descheduler-release-notes.adoc | 13 +++---- .../nodes-descheduler-uninstalling.adoc | 3 ++ 7 files changed, 14 insertions(+), 93 deletions(-) delete mode 100644 modules/nodes-descheduler-rn-5.3.0.adoc delete mode 100644 modules/nodes-descheduler-rn-5.3.1.adoc delete mode 100644 modules/nodes-descheduler-rn-5.3.2.adoc diff --git a/modules/nodes-descheduler-rn-5.3.0.adoc b/modules/nodes-descheduler-rn-5.3.0.adoc deleted file mode 100644 index b01f2812d5f7..000000000000 --- a/modules/nodes-descheduler-rn-5.3.0.adoc +++ /dev/null @@ -1,39 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. - -:_mod-docs-content-type: REFERENCE -[id="descheduler-operator-release-notes-5.3.0_{context}"] -= Release notes for {descheduler-operator} 5.3.0 - -[role="_abstract"] -Review the release notes for {descheduler-operator} 5.3.0 to learn what is new and updated with this release. - -Issued: 29 October 2025 - -The following advisory is available for the {descheduler-operator} 5.3.0: - -* link:https://access.redhat.com/errata/RHBA-2025:19249[RHBA-2025:19249] - -[id="descheduler-operator-5.3.0-new-features_{context}"] -== New features and enhancements - -* The descheduler profile `DevKubeVirtRelieveAndMigrate` has been renamed to `KubeVirtRelieveAndMigrate` and is now generally available. The updated profile improves VM eviction stability during live migrations by enabling background evictions and reducing oscillatory behavior. This profile is only available for use with {VirtProductName}. 
-+ -For more information, see xref:../../../virt/managing_vms/advanced_vm_management/virt-enabling-descheduler-evictions.adoc#virt-configuring-descheduler-evictions_virt-enabling-descheduler-evictions[Configuring descheduler evictions for virtual machines]. - -* This release of the {descheduler-operator} updates the Kubernetes version to 1.33. - -// No bug fixes or CVEs to list -// [id="descheduler-operator-5.3.0-bug-fixes_{context}"] -// === Bug fixes -// -// * This release of the {descheduler-operator} addresses several Common Vulnerabilities and Exposures (CVEs). - -// No known issues to list -// [id="descheduler-operator-5.3.0-known-issues_{context}"] -// === Known issues -// -// * TODO diff --git a/modules/nodes-descheduler-rn-5.3.1.adoc b/modules/nodes-descheduler-rn-5.3.1.adoc deleted file mode 100644 index 9a31e6f409a6..000000000000 --- a/modules/nodes-descheduler-rn-5.3.1.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. - -:_mod-docs-content-type: REFERENCE -[id="descheduler-operator-release-notes-5.3.1_{context}"] -= Release notes for {descheduler-operator} 5.3.1 - -[role="_abstract"] -Review the release notes for {descheduler-operator} 5.3.1 to learn what is new and updated with this release. - -Issued: 4 December 2025 - -The following advisory is available for the {descheduler-operator} 5.3.1: - -* link:https://access.redhat.com/errata/RHBA-2025:22737[RHBA-2025:22737] - -[id="descheduler-operator-5.3.1-new-features_{context}"] -== New features and enhancements - -* This release rebuilds the {descheduler-operator} to improve its image grade. diff --git a/modules/nodes-descheduler-rn-5.3.2.adoc b/modules/nodes-descheduler-rn-5.3.2.adoc deleted file mode 100644 index 884847e63d12..000000000000 --- a/modules/nodes-descheduler-rn-5.3.2.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc - -// This release notes module is allowed to contain xrefs. It must only ever be included from one assembly. - -:_mod-docs-content-type: REFERENCE -[id="descheduler-operator-release-notes-5.3.2_{context}"] -= Release notes for {descheduler-operator} 5.3.2 - -[role="_abstract"] -Review the release notes for {descheduler-operator} 5.3.2 to learn what is new and updated with this release. - -Issued: 12 February 2026 - -The following advisory is available for the {descheduler-operator} 5.3.2: - -* link:https://access.redhat.com/errata/RHBA-2026:2641[RHBA-2026:2641] - -[id="descheduler-operator-5.3.2-new-features_{context}"] -== New features and enhancements - -* This release of the {descheduler-operator} updates the Kubernetes version to 1.34. diff --git a/nodes/scheduling/descheduler/index.adoc b/nodes/scheduling/descheduler/index.adoc index b8488889dc84..2b089d229466 100644 --- a/nodes/scheduling/descheduler/index.adoc +++ b/nodes/scheduling/descheduler/index.adoc @@ -9,6 +9,9 @@ toc::[] [role="_abstract"] While the xref:../../../nodes/scheduling/nodes-scheduler-about.adoc#nodes-scheduler-about[scheduler] is used to determine the most suitable node to host a new pod, the descheduler can be used to evict a running pod so that the pod can be rescheduled onto a more suitable node. 
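For readers of this assembly, a sketch of the `KubeDescheduler` custom resource that the configuring module works with; the interval and profile shown here are illustrative assumptions, not values taken from this patch:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster # the Operator watches this fixed resource name
  namespace: openshift-kube-descheduler-operator # assumed Operator namespace
spec:
  deschedulingIntervalSeconds: 3600 # evaluate evictions hourly
  profiles:
  - AffinityAndTaints # evict pods that violate affinity rules or taints
----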
+:operator-name: The {descheduler-operator} +include::snippets/operator-not-available.adoc[] + // About the descheduler include::modules/nodes-descheduler-about.adoc[leveloffset=+1] diff --git a/nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc b/nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc index b32eec97b4c7..c7a6b53976e4 100644 --- a/nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc +++ b/nodes/scheduling/descheduler/nodes-descheduler-configuring.adoc @@ -9,6 +9,9 @@ toc::[] [role="_abstract"] You can run the descheduler in {product-title} by installing the {descheduler-operator} and setting the required profiles and other customizations. +:operator-name: The {descheduler-operator} +include::snippets/operator-not-available.adoc[] + // Installing the descheduler include::modules/nodes-descheduler-installing.adoc[leveloffset=+1] diff --git a/nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc b/nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc index 2a4c44c6ca8e..87e9ee7cf936 100644 --- a/nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc +++ b/nodes/scheduling/descheduler/nodes-descheduler-release-notes.adoc @@ -11,13 +11,10 @@ Review the {descheduler-operator} release notes to track its development and lea The {descheduler-operator} allows you to evict pods so that they can be rescheduled on more appropriate nodes. -For more information, see xref:../../../nodes/scheduling/descheduler/index.adoc#nodes-descheduler-about_nodes-descheduler-about[About the descheduler]. - -// Release notes for Kube Descheduler Operator 5.3.2 -include::modules/nodes-descheduler-rn-5.3.2.adoc[leveloffset=+1] +:operator-name: The {descheduler-operator} +include::snippets/operator-not-available.adoc[] -// Release notes for Kube Descheduler Operator 5.3.1 -include::modules/nodes-descheduler-rn-5.3.1.adoc[leveloffset=+1] +For more information, see xref:../../../nodes/scheduling/descheduler/index.adoc#nodes-descheduler-about_nodes-descheduler-about[About the descheduler]. -// Release notes for Kube Descheduler Operator 5.3.0 -include::modules/nodes-descheduler-rn-5.3.0.adoc[leveloffset=+1] +// Release notes for Kube Descheduler Operator x.y.z +// include::modules/nodes-descheduler-rn-x.y.z.adoc[leveloffset=+1] diff --git a/nodes/scheduling/descheduler/nodes-descheduler-uninstalling.adoc b/nodes/scheduling/descheduler/nodes-descheduler-uninstalling.adoc index c6599e813e35..318af35b8be7 100644 --- a/nodes/scheduling/descheduler/nodes-descheduler-uninstalling.adoc +++ b/nodes/scheduling/descheduler/nodes-descheduler-uninstalling.adoc @@ -9,5 +9,8 @@ toc::[] [role="_abstract"] If you no longer need the {descheduler-operator} in your cluster, you can uninstall the Operator and remove its related resources. 
+:operator-name: The {descheduler-operator} +include::snippets/operator-not-available.adoc[] + // Uninstalling the descheduler include::modules/nodes-descheduler-uninstalling.adoc[leveloffset=+1] From 1c72c9b476ecfeacce3472b2cd8faed0f408163b Mon Sep 17 00:00:00 2001 From: John Heraghty Date: Fri, 1 May 2026 13:26:35 +0100 Subject: [PATCH 05/17] OSDOCS-17649:Update acronyms where not defined_CQA jobs --- .../learning-deploying-application-storage-end-session.adoc | 2 +- modules/learning-getting-started-accessing-cli.adoc | 2 +- modules/learning-getting-started-admin-cli.adoc | 2 +- modules/learning-getting-started-create-vpc.adoc | 2 +- modules/learning-getting-started-oidc-config.adoc | 2 +- modules/learning-getting-started-support-ui.adoc | 4 ++-- .../learning-getting-started-upgrading-recurring-updates.adoc | 2 +- modules/learning-lab-overview-about-ostoy.adoc | 4 ++-- 8 files changed, 10 insertions(+), 10 deletions(-) diff --git a/modules/learning-deploying-application-storage-end-session.adoc b/modules/learning-deploying-application-storage-end-session.adoc index fbc6acf0687d..2edda6414666 100644 --- a/modules/learning-deploying-application-storage-end-session.adoc +++ b/modules/learning-deploying-application-storage-end-session.adoc @@ -9,4 +9,4 @@ To securely close your workspace and free up system resources, end your session from the terminal. .Procedure -* Type `exit` in your terminal to quit the session and return to the CLI. \ No newline at end of file +* Type `exit` in your terminal to quit the session and return to the command line interface (CLI). \ No newline at end of file diff --git a/modules/learning-getting-started-accessing-cli.adoc b/modules/learning-getting-started-accessing-cli.adoc index bf3ef456121a..d87441640c0c 100644 --- a/modules/learning-getting-started-accessing-cli.adoc +++ b/modules/learning-getting-started-accessing-cli.adoc @@ -6,7 +6,7 @@ = Accessing your cluster using the CLI [role="_abstract"] -To access the cluster using the CLI, you must have the `oc` CLI installed. With the `oc` CLI, you can work directly with project source code, and manage projects in bandwidth-restricted environments where the web console might be unavailable. If you are following the tutorials, you already installed the `oc` CLI. +To access the cluster using the command line interface (CLI), you must have the `oc` CLI installed. With the `oc` CLI, you can work directly with project source code, and manage projects in bandwidth-restricted environments where the web console might be unavailable. If you are following the tutorials, you already installed the `oc` CLI. .Procedure . Log in to the {cluster-manager-url}. diff --git a/modules/learning-getting-started-admin-cli.adoc b/modules/learning-getting-started-admin-cli.adoc index 53e8b2ab9428..d9c58149af7d 100644 --- a/modules/learning-getting-started-admin-cli.adoc +++ b/modules/learning-getting-started-admin-cli.adoc @@ -6,7 +6,7 @@ = Creating an admin user using the CLI [role="_abstract"] -You can use the {rosa-cli-first} to create an admin user for your clusters. Admin users can create new clusters, schedule cluster upgrades, monitor health, manage cluster resources, and so on. +You can use the {rosa-cli-first} to create an admin user for your clusters. Admin users perform tasks such as creating new clusters, scheduling cluster upgrades, monitoring health, and managing cluster resources. 
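The command itself sits outside this hunk; creating the admin user with the {rosa-cli} typically looks like the following, where `<cluster_name>` is a placeholder:

[source,terminal]
----
$ rosa create admin --cluster=<cluster_name>
----

The command prints an `oc login` invocation with the generated credentials.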
[NOTE] ==== diff --git a/modules/learning-getting-started-create-vpc.adoc b/modules/learning-getting-started-create-vpc.adoc index 0041d00edd71..a4126a157bf1 100644 --- a/modules/learning-getting-started-create-vpc.adoc +++ b/modules/learning-getting-started-create-vpc.adoc @@ -6,7 +6,7 @@ = Creating a VPC [role="_abstract"] -Before deploying a {product-title} cluster, you must have both a VPC and OIDC resources. We will create these resources first. {product-title} uses the bring your own VPC (BYO-VPC) model. +Before deploying a {product-title} cluster, you must have both a Virtual Private Cloud (VPC) and OpenID Connect (OIDC) resources. We will create these resources first. {product-title} uses the bring your own VPC (BYO-VPC) model. .Procedure . Make sure your AWS CLI (`aws`) is configured to use a region where {product-title} is available. See the regions supported by the AWS CLI by running the following command: diff --git a/modules/learning-getting-started-oidc-config.adoc b/modules/learning-getting-started-oidc-config.adoc index c5d3f0734be0..41a40355f039 100644 --- a/modules/learning-getting-started-oidc-config.adoc +++ b/modules/learning-getting-started-oidc-config.adoc @@ -6,7 +6,7 @@ = Creating your OIDC configuration [role="_abstract"] -In this workshop, we will use the automatic mode when creating the OIDC configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the {rosa-cli} to create your cluster's unique OIDC configuration. +In this workshop, we will use the automatic mode when creating the OpenID Connect (OIDC) configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the {rosa-cli} to create your cluster's unique OIDC configuration. .Procedure * Create the OIDC configuration by running the following command: diff --git a/modules/learning-getting-started-support-ui.adoc b/modules/learning-getting-started-support-ui.adoc index a80ed6fe33cf..d0a717416e06 100644 --- a/modules/learning-getting-started-support-ui.adoc +++ b/modules/learning-getting-started-support-ui.adoc @@ -6,9 +6,9 @@ = Contacting Red{nbsp}Hat for support using the UI [role="_abstract"] -You can request support within {cluster-manager-url}. +You can request support within {cluster-manager-first}. .Procedure -. On the {cluster-manager} UI, click the *Support* tab. +. On the {cluster-manager-url} UI, click the *Support* tab. . Click *Open support case*. \ No newline at end of file diff --git a/modules/learning-getting-started-upgrading-recurring-updates.adoc b/modules/learning-getting-started-upgrading-recurring-updates.adoc index 7b0034b02a9e..1f2c39fe6680 100644 --- a/modules/learning-getting-started-upgrading-recurring-updates.adoc +++ b/modules/learning-getting-started-upgrading-recurring-updates.adoc @@ -6,7 +6,7 @@ = Setting up automatic recurring upgrades [role="_abstract"] -To schedule your cluster to automatically receive new patch (z-stream) updates, you can set your cluster to upgrade on a recurring basis within {cluster-manager}. +To schedule your cluster to automatically receive new patch (z-stream) updates, you can set your cluster to upgrade on a recurring basis within {cluster-manager-url}. .Procedure . Log in to the {cluster-manager}, and select the cluster you want to upgrade. 
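The console procedure above configures recurring updates; for comparison, an individual update can be scheduled from the CLI. A sketch, assuming the `--schedule-date` and `--schedule-time` flags of the `rosa` CLI:

[source,terminal]
----
$ rosa upgrade cluster --cluster=<cluster_name> --schedule-date 2026-06-01 --schedule-time 12:00
----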
diff --git a/modules/learning-lab-overview-about-ostoy.adoc b/modules/learning-lab-overview-about-ostoy.adoc index 5dd651abfeec..f27c8f4ec5a6 100644 --- a/modules/learning-lab-overview-about-ostoy.adoc +++ b/modules/learning-lab-overview-about-ostoy.adoc @@ -15,8 +15,8 @@ This application has a user interface where you can: * Toggle a liveness probe and monitor OpenShift behavior * Read ConfigMaps, secrets, and environment variables * Read and write files when connected to shared storage -* Check network connectivity, intra-cluster DNS, and intra-communication with the included microservice -* Increase the load to view automatic scaling of the pods by using the HPA +* Check network connectivity, intra-cluster Domain Name System (DNS), and intra-communication with the included microservice +* Increase the load to view automatic scaling of the pods by using the Horizontal Pod Autoscaler (HPA) //* Connect to an AWS S3 bucket to read and write objects image::ostoy-arch.png[OSToy architecture diagram] \ No newline at end of file From 3aa0a3559f114e3e1b13b0fe950e3ca7e6823e13 Mon Sep 17 00:00:00 2001 From: Andrea Hoffer Date: Tue, 5 May 2026 14:29:56 -0400 Subject: [PATCH 06/17] OSDOCS#18076: Updating the Kubernetes version in example output to 1.35 --- .../ai-adding-worker-nodes-to-cluster.adoc | 4 +-- modules/aws-outposts-load-balancer-clb.adoc | 6 ++-- modules/cleaning-crio-storage.adoc | 4 +-- ...ing-nrop-on-schedlable-control-planes.adoc | 12 ++++---- ...-apply-remediation-for-customized-mcp.adoc | 10 +++---- modules/connected-to-disconnected-verify.adoc | 12 ++++---- ...coreos-layering-configuring-on-revert.adoc | 12 ++++---- modules/coreos-layering-configuring.adoc | 12 ++++---- modules/coreos-layering-removing.adoc | 12 ++++---- modules/graceful-restart.adoc | 24 +++++++-------- modules/hcp-np-capacity-blocks.adoc | 4 +-- modules/hibernating-cluster-hibernate.adoc | 12 ++++---- modules/hibernating-cluster-resume.adoc | 12 ++++---- modules/ibi-create-standalone-config-iso.adoc | 2 +- modules/images-configuration-file.adoc | 12 ++++---- ...iguration-image-registry-settings-hcp.adoc | 6 ++-- ...iguration-registry-mirror-configuring.adoc | 12 ++++---- modules/infrastructure-moving-router.adoc | 2 +- ...-monitoring-the-installation-manually.adoc | 2 +- modules/installation-approve-csrs.adoc | 30 +++++++++---------- ...installation-aws-user-infra-bootstrap.adoc | 2 +- .../installation-installing-bare-metal.adoc | 2 +- ...stallation-osp-creating-control-plane.adoc | 2 +- .../installation-special-config-rtkernel.adoc | 12 ++++---- ...tall-provisioning-the-bare-metal-node.adoc | 22 +++++++------- ...acing-a-bare-metal-control-plane-node.adoc | 10 +++---- ...stall-troubleshooting-ntp-out-of-sync.adoc | 8 ++--- ...leshooting-reviewing-the-installation.adoc | 6 ++-- modules/machine-node-custom-partition.adoc | 14 ++++----- modules/microshift-custom-ca-proc.adoc | 2 +- ...-cluster-resource-override-move-infra.adoc | 18 +++++------ modules/nodes-nodes-kernel-arguments.adoc | 12 ++++---- modules/nodes-nodes-rtkernel-arguments.adoc | 6 ++-- modules/nodes-nodes-viewing-listing.adoc | 26 ++++++++-------- modules/nodes-nodes-working-evacuating.adoc | 2 +- ...odes-scheduler-node-selectors-cluster.adoc | 4 +-- .../nodes-scheduler-node-selectors-pod.adoc | 2 +- ...odes-scheduler-node-selectors-project.adoc | 4 +-- modules/nodes-verify-failed-node-deleted.adoc | 8 ++--- modules/nvidia-gpu-aws-adding-a-gpu-node.adoc | 12 ++++---- .../nvidia-gpu-azure-adding-a-gpu-node.adoc | 14 ++++----- 
modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc | 14 ++++----- ...-configuring-gnss-to-ntp-failover-sno.adoc | 3 +- ...-hwol-configuring-machine-config-pool.adoc | 10 +++---- modules/olm-catalogsource-image-template.adoc | 10 +++---- ...status-of-cluster-nodes-using-the-cli.adoc | 12 ++++---- .../restore-determine-state-etcd-member.adoc | 8 ++--- ...replace-stopped-baremetal-etcd-member.adoc | 18 +++++------ modules/rhcos-add-extensions.adoc | 2 +- modules/rhcos-enabling-multipath-day-2.adoc | 12 ++++---- ...worker-nodes-to-sno-clusters-manually.adoc | 4 +-- modules/update-upgrading-cli.adoc | 12 ++++---- ...ere-virtual-hardware-on-compute-nodes.adoc | 6 ++-- ...rtual-hardware-on-control-plane-nodes.adoc | 6 ++-- 54 files changed, 253 insertions(+), 254 deletions(-) diff --git a/modules/ai-adding-worker-nodes-to-cluster.adoc b/modules/ai-adding-worker-nodes-to-cluster.adoc index fadcfdffcc92..9d0b1f9023f8 100644 --- a/modules/ai-adding-worker-nodes-to-cluster.adoc +++ b/modules/ai-adding-worker-nodes-to-cluster.adoc @@ -323,6 +323,6 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane-1.example.com Ready master,worker 56m v1.34.2 -compute-1.example.com Ready worker 11m v1.34.2 +control-plane-1.example.com Ready master,worker 56m v1.35.4 +compute-1.example.com Ready worker 11m v1.35.4 ---- diff --git a/modules/aws-outposts-load-balancer-clb.adoc b/modules/aws-outposts-load-balancer-clb.adoc index 6ebfbf2430f7..51974baae4ff 100644 --- a/modules/aws-outposts-load-balancer-clb.adoc +++ b/modules/aws-outposts-load-balancer-clb.adoc @@ -64,9 +64,9 @@ $ oc get nodes -l = [source,terminal] ---- NAME STATUS ROLES AGE VERSION -node1.example.com Ready worker 7h v1.34.2 -node2.example.com Ready worker 7h v1.34.2 -node3.example.com Ready worker 7h v1.34.2 +node1.example.com Ready worker 7h v1.35.4 +node2.example.com Ready worker 7h v1.35.4 +node3.example.com Ready worker 7h v1.35.4 ---- . Configure the Classic Load Balancer service by adding the cloud-based subnet information to the `annotations` field of the `Service` manifest: diff --git a/modules/cleaning-crio-storage.adoc b/modules/cleaning-crio-storage.adoc index 65b8bc4a24aa..da7fc31f13d6 100644 --- a/modules/cleaning-crio-storage.adoc +++ b/modules/cleaning-crio-storage.adoc @@ -123,7 +123,7 @@ $ oc get nodes + ---- NAME STATUS ROLES AGE VERSION -ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.34.2 +ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.35.4 ---- + . Mark the node schedulable. 
You will know that the scheduling is enabled when `SchedulingDisabled` is no longer in status: @@ -138,5 +138,5 @@ $ oc adm uncordon + ---- NAME STATUS ROLES AGE VERSION -ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.34.2 +ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.35.4 ---- diff --git a/modules/cnf-configuring-nrop-on-schedlable-control-planes.adoc b/modules/cnf-configuring-nrop-on-schedlable-control-planes.adoc index e4a5e609652c..97c544b25209 100644 --- a/modules/cnf-configuring-nrop-on-schedlable-control-planes.adoc +++ b/modules/cnf-configuring-nrop-on-schedlable-control-planes.adoc @@ -122,12 +122,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -worker-0 Ready worker,worker-cnf 100m v1.34.2 -worker-1 Ready worker 93m v1.34.2 -master-0 Ready control-plane,master,worker 108m v1.34.2 -master-1 Ready control-plane,master,worker 107m v1.34.2 -master-2 Ready control-plane,master,worker 107m v1.34.2 -worker-2 Ready worker 100m v1.34.2 +worker-0 Ready worker,worker-cnf 100m v1.35.4 +worker-1 Ready worker 93m v1.35.4 +master-0 Ready control-plane,master,worker 108m v1.35.4 +master-1 Ready control-plane,master,worker 107m v1.35.4 +master-2 Ready control-plane,master,worker 107m v1.35.4 +worker-2 Ready worker 100m v1.35.4 ---- . Verify that the NUMA Resources Operator’s pods are running on the intended nodes by running the following command. You should see a numaresourcesoperator pod for each node group you specified in the CR: diff --git a/modules/compliance-apply-remediation-for-customized-mcp.adoc b/modules/compliance-apply-remediation-for-customized-mcp.adoc index 2d0dde8968bc..307740caafa1 100644 --- a/modules/compliance-apply-remediation-for-customized-mcp.adoc +++ b/modules/compliance-apply-remediation-for-customized-mcp.adoc @@ -23,11 +23,11 @@ $ oc get nodes -n openshift-compliance [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.34.2 -ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.34.2 -ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.34.2 -ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.34.2 -ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.34.2 +ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.35.4 +ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.35.4 +ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.35.4 +ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.35.4 +ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.35.4 ---- . Add a label to nodes. 
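The labeling command falls outside the changed hunk; a node is typically assigned to a custom pool as follows, where `<mcp_name>` is a placeholder matching the custom `MachineConfigPool` selector:

[source,terminal]
----
$ oc label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<mcp_name>=
----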
diff --git a/modules/connected-to-disconnected-verify.adoc b/modules/connected-to-disconnected-verify.adoc index 7799d1252364..844f5af0a10f 100644 --- a/modules/connected-to-disconnected-verify.adoc +++ b/modules/connected-to-disconnected-verify.adoc @@ -44,10 +44,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.34.2 -ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.34.2 +ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.35.4 +ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.35.4 ---- diff --git a/modules/coreos-layering-configuring-on-revert.adoc b/modules/coreos-layering-configuring-on-revert.adoc index 1cc67fc38353..b0a51624c73a 100644 --- a/modules/coreos-layering-configuring-on-revert.adoc +++ b/modules/coreos-layering-configuring-on-revert.adoc @@ -64,12 +64,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.34.2 -ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.34.2 -ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.34.2 +ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.35.4 +ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.35.4 +ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.35.4 ---- ** When the node is back in the `Ready` state, check that the node is using the base image: diff --git a/modules/coreos-layering-configuring.adoc b/modules/coreos-layering-configuring.adoc index 0287bcead5fa..95d4dcfdcb34 100644 --- a/modules/coreos-layering-configuring.adoc +++ b/modules/coreos-layering-configuring.adoc @@ -165,12 +165,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.34.2 -ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.34.2 -ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.34.2 +ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.35.4 +ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.35.4 +ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 
+ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.35.4 ---- . When the node is back in the `Ready` state, check that the node is using the custom layered image: diff --git a/modules/coreos-layering-removing.adoc b/modules/coreos-layering-removing.adoc index c8af6ead46b4..7452b8bf5dbd 100644 --- a/modules/coreos-layering-removing.adoc +++ b/modules/coreos-layering-removing.adoc @@ -52,12 +52,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.34.2 -ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.34.2 -ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.34.2 -ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.34.2 +ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.35.4 +ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.35.4 +ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.35.4 +ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.35.4 ---- . When the node is back in the `Ready` state, check that the node is using the base image: diff --git a/modules/graceful-restart.adoc b/modules/graceful-restart.adoc index 68d33121c862..d4c0b192dd41 100644 --- a/modules/graceful-restart.adoc +++ b/modules/graceful-restart.adoc @@ -58,9 +58,9 @@ The control plane nodes are ready if the status is `Ready`, as shown in the foll [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-168-251.ec2.internal Ready control-plane,master 75m v1.34.2 -ip-10-0-170-223.ec2.internal Ready control-plane,master 75m v1.34.2 -ip-10-0-211-16.ec2.internal Ready control-plane,master 75m v1.34.2 +ip-10-0-168-251.ec2.internal Ready control-plane,master 75m v1.35.4 +ip-10-0-170-223.ec2.internal Ready control-plane,master 75m v1.35.4 +ip-10-0-211-16.ec2.internal Ready control-plane,master 75m v1.35.4 ---- . If the control plane nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. @@ -99,9 +99,9 @@ The worker nodes are ready if the status is `Ready`, as shown in the following o [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-179-95.ec2.internal Ready worker 64m v1.34.2 -ip-10-0-182-134.ec2.internal Ready worker 64m v1.34.2 -ip-10-0-250-100.ec2.internal Ready worker 64m v1.34.2 +ip-10-0-179-95.ec2.internal Ready worker 64m v1.35.4 +ip-10-0-182-134.ec2.internal Ready worker 64m v1.35.4 +ip-10-0-250-100.ec2.internal Ready worker 64m v1.35.4 ---- . If the worker nodes are _not_ ready, then check whether there are any pending certificate signing requests (CSRs) that must be approved. @@ -172,12 +172,12 @@ Check that the status for all nodes is `Ready`. 
[source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-168-251.ec2.internal Ready control-plane,master 82m v1.34.2 -ip-10-0-170-223.ec2.internal Ready control-plane,master 82m v1.34.2 -ip-10-0-179-95.ec2.internal Ready worker 70m v1.34.2 -ip-10-0-182-134.ec2.internal Ready worker 70m v1.34.2 -ip-10-0-211-16.ec2.internal Ready control-plane,master 82m v1.34.2 -ip-10-0-250-100.ec2.internal Ready worker 69m v1.34.2 +ip-10-0-168-251.ec2.internal Ready control-plane,master 82m v1.35.4 +ip-10-0-170-223.ec2.internal Ready control-plane,master 82m v1.35.4 +ip-10-0-179-95.ec2.internal Ready worker 70m v1.35.4 +ip-10-0-182-134.ec2.internal Ready worker 70m v1.35.4 +ip-10-0-211-16.ec2.internal Ready control-plane,master 82m v1.35.4 +ip-10-0-250-100.ec2.internal Ready worker 69m v1.35.4 ---- + If the cluster did not start properly, you might need to restore your cluster using an etcd backup. For more information, see "Restoring to a previous cluster state". diff --git a/modules/hcp-np-capacity-blocks.adoc b/modules/hcp-np-capacity-blocks.adoc index d55bb27af383..45810d5e01c8 100644 --- a/modules/hcp-np-capacity-blocks.adoc +++ b/modules/hcp-np-capacity-blocks.adoc @@ -137,6 +137,6 @@ $ oc get nodes [source, terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-132-74.ec2.internal Ready worker 17m v1.34.2 -ip-10-0-134-183.ec2.internal Ready worker 4h5m v1.34.2 +ip-10-0-132-74.ec2.internal Ready worker 17m v1.35.4 +ip-10-0-134-183.ec2.internal Ready worker 4h5m v1.35.4 ---- diff --git a/modules/hibernating-cluster-hibernate.adoc b/modules/hibernating-cluster-hibernate.adoc index 4faebbd44e4f..8afb08e4ec01 100644 --- a/modules/hibernating-cluster-hibernate.adoc +++ b/modules/hibernating-cluster-hibernate.adoc @@ -36,12 +36,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-812tb4k-72292-8bcj7-master-0 Ready control-plane,master 32m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-master-1 Ready control-plane,master 32m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-master-2 Ready control-plane,master 32m v1.34.2 -Ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk Ready worker 19m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv Ready worker 19m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2 Ready worker 19m v1.34.2 +ci-ln-812tb4k-72292-8bcj7-master-0 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-master-1 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-master-2 Ready control-plane,master 32m v1.35.4 +Ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk Ready worker 19m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv Ready worker 19m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2 Ready worker 19m v1.35.4 ---- + All nodes should show `Ready` in the `STATUS` column. 
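Rather than re-running `oc get nodes` until every entry reports `Ready`, the readiness checks in these modules can also be expressed as a single blocking command; a sketch using `oc wait`:

[source,terminal]
----
$ oc wait --for=condition=Ready node --all --timeout=15m
----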
diff --git a/modules/hibernating-cluster-resume.adoc b/modules/hibernating-cluster-resume.adoc index 10e94b43a170..fc607f2dd0bb 100644 --- a/modules/hibernating-cluster-resume.adoc +++ b/modules/hibernating-cluster-resume.adoc @@ -79,12 +79,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-812tb4k-72292-8bcj7-master-0 Ready control-plane,master 32m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-master-1 Ready control-plane,master 32m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-master-2 Ready control-plane,master 32m v1.34.2 -Ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk Ready worker 19m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv Ready worker 19m v1.34.2 -ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2 Ready worker 19m v1.34.2 +ci-ln-812tb4k-72292-8bcj7-master-0 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-master-1 Ready control-plane,master 32m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-master-2 Ready control-plane,master 32m v1.35.4 +Ci-ln-812tb4k-72292-8bcj7-worker-a-zhdvk Ready worker 19m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-b-9hrmv Ready worker 19m v1.35.4 +ci-ln-812tb4k-72292-8bcj7-worker-c-q8mw2 Ready worker 19m v1.35.4 ---- + All nodes should show `Ready` in the `STATUS` column. It might take a few minutes for all nodes to become ready after approving the CSRs. diff --git a/modules/ibi-create-standalone-config-iso.adoc b/modules/ibi-create-standalone-config-iso.adoc index cd8f8e29f901..2dc3fc53825e 100644 --- a/modules/ibi-create-standalone-config-iso.adoc +++ b/modules/ibi-create-standalone-config-iso.adoc @@ -226,5 +226,5 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -node/sno-cluster-name.host.example.com Ready control-plane,master 5h15m v1.34.2 +node/sno-cluster-name.host.example.com Ready control-plane,master 5h15m v1.35.4 ---- \ No newline at end of file diff --git a/modules/images-configuration-file.adoc b/modules/images-configuration-file.adoc index 890489fcc97e..3728d6061f16 100644 --- a/modules/images-configuration-file.adoc +++ b/modules/images-configuration-file.adoc @@ -73,10 +73,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.34.2 -ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.34.2 -ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.34.2 -ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.34.2 -ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.34.2 -ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.34.2 +ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.35.4 +ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.35.4 +ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.35.4 +ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.35.4 +ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.35.4 +ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.35.4 ---- diff --git a/modules/images-configuration-image-registry-settings-hcp.adoc b/modules/images-configuration-image-registry-settings-hcp.adoc index 45344bdadeed..9d8b4bfedebe 100644 --- a/modules/images-configuration-image-registry-settings-hcp.adoc +++ b/modules/images-configuration-image-registry-settings-hcp.adoc @@ -130,7 +130,7 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-137-182.us-east-2.compute.internal 
Ready,SchedulingDisabled worker 65m v1.34.2 -ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.34.2 -ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.34.2 +ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.35.4 +ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.35.4 +ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.35.4 ---- \ No newline at end of file diff --git a/modules/images-configuration-registry-mirror-configuring.adoc b/modules/images-configuration-registry-mirror-configuring.adoc index e95453542d41..8b18854fa4a8 100644 --- a/modules/images-configuration-registry-mirror-configuring.adoc +++ b/modules/images-configuration-registry-mirror-configuring.adoc @@ -184,12 +184,12 @@ $ oc get node [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-137-44.ec2.internal Ready worker 7m v1.34.2 -ip-10-0-138-148.ec2.internal Ready master 11m v1.34.2 -ip-10-0-139-122.ec2.internal Ready master 11m v1.34.2 -ip-10-0-147-35.ec2.internal Ready worker 7m v1.34.2 -ip-10-0-153-12.ec2.internal Ready worker 7m v1.34.2 -ip-10-0-154-10.ec2.internal Ready master 11m v1.34.2 +ip-10-0-137-44.ec2.internal Ready worker 7m v1.35.4 +ip-10-0-138-148.ec2.internal Ready master 11m v1.35.4 +ip-10-0-139-122.ec2.internal Ready master 11m v1.35.4 +ip-10-0-147-35.ec2.internal Ready worker 7m v1.35.4 +ip-10-0-153-12.ec2.internal Ready worker 7m v1.35.4 +ip-10-0-154-10.ec2.internal Ready master 11m v1.35.4 ---- .. Start the debugging process to access the node: diff --git a/modules/infrastructure-moving-router.adoc b/modules/infrastructure-moving-router.adoc index 7f15b3f37daa..8bac5a407cc2 100644 --- a/modules/infrastructure-moving-router.adoc +++ b/modules/infrastructure-moving-router.adoc @@ -111,7 +111,7 @@ $ oc get node <1> [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.34.2 +ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.35.4 ---- + Because the role list includes `infra`, the pod is running on the correct node. diff --git a/modules/install-sno-monitoring-the-installation-manually.adoc b/modules/install-sno-monitoring-the-installation-manually.adoc index 14c7eee0f85b..bfcf0dc27d7a 100644 --- a/modules/install-sno-monitoring-the-installation-manually.adoc +++ b/modules/install-sno-monitoring-the-installation-manually.adoc @@ -66,7 +66,7 @@ ifndef::openshift-origin[] [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane.example.com Ready master,worker 10m v1.34.2 +control-plane.example.com Ready master,worker 10m v1.35.4 ---- endif::openshift-origin[] ifdef::openshift-origin[] diff --git a/modules/installation-approve-csrs.adoc b/modules/installation-approve-csrs.adoc index a80f2a2b571b..f6478b9b6c5d 100644 --- a/modules/installation-approve-csrs.adoc +++ b/modules/installation-approve-csrs.adoc @@ -66,9 +66,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master-0 Ready master 63m v1.34.2 -master-1 Ready master 63m v1.34.2 -master-2 Ready master 64m v1.34.2 +master-0 Ready master 63m v1.35.4 +master-1 Ready master 63m v1.35.4 +master-2 Ready master 64m v1.35.4 ---- + The output lists all of the machines that you created. 
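The CSR approval commands this module relies on are not visible in the changed hunks; the standard sequence lists pending requests and approves them individually:

[source,terminal]
----
$ oc get csr
$ oc adm certificate approve <csr_name>
----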
@@ -203,11 +203,11 @@ ifndef::ibm-power[] [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master-0 Ready master 73m v1.34.2 -master-1 Ready master 73m v1.34.2 -master-2 Ready master 74m v1.34.2 -worker-0 Ready worker 11m v1.34.2 -worker-1 Ready worker 11m v1.34.2 +master-0 Ready master 73m v1.35.4 +master-1 Ready master 73m v1.35.4 +master-2 Ready master 74m v1.35.4 +worker-0 Ready worker 11m v1.35.4 +worker-1 Ready worker 11m v1.35.4 ---- endif::ibm-power[] ifdef::ibm-power[] @@ -215,13 +215,13 @@ ifdef::ibm-power[] [source,terminal] ---- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME -worker-0-ppc64le Ready worker 42d v1.34.2 192.168.200.21 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -worker-1-ppc64le Ready worker 42d v1.34.2 192.168.200.20 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -master-0-x86 Ready control-plane,master 75d v1.34.2 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -master-1-x86 Ready control-plane,master 75d v1.34.2 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -master-2-x86 Ready control-plane,master 75d v1.34.2 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -worker-0-x86 Ready worker 75d v1.34.2 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 -worker-1-x86 Ready worker 75d v1.34.2 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.34.2-3.rhaos4.15.gitb36169e.el9 +worker-0-ppc64le Ready worker 42d v1.35.4 192.168.200.21 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +worker-1-ppc64le Ready worker 42d v1.35.4 192.168.200.20 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +master-0-x86 Ready control-plane,master 75d v1.35.4 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +master-1-x86 Ready control-plane,master 75d v1.35.4 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +master-2-x86 Ready control-plane,master 75d v1.35.4 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +worker-0-x86 Ready worker 75d v1.35.4 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 +worker-1-x86 Ready worker 75d v1.35.4 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.35.4-3.rhaos4.15.gitb36169e.el9 ---- endif::ibm-power[] + diff --git a/modules/installation-aws-user-infra-bootstrap.adoc 
b/modules/installation-aws-user-infra-bootstrap.adoc index 53a4c4674962..a01ef397fd2a 100644 --- a/modules/installation-aws-user-infra-bootstrap.adoc +++ b/modules/installation-aws-user-infra-bootstrap.adoc @@ -32,7 +32,7 @@ stored the installation files in. [source,terminal] ---- INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... -INFO API v1.34.2 up +INFO API v1.35.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s diff --git a/modules/installation-installing-bare-metal.adoc b/modules/installation-installing-bare-metal.adoc index d952d4b43c5a..7790aa758230 100644 --- a/modules/installation-installing-bare-metal.adoc +++ b/modules/installation-installing-bare-metal.adoc @@ -66,7 +66,7 @@ where: [source,terminal] ---- INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... -INFO API v1.34.2 up +INFO API v1.35.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources ---- diff --git a/modules/installation-osp-creating-control-plane.adoc b/modules/installation-osp-creating-control-plane.adoc index 6ae916791f27..f9badebec2ff 100644 --- a/modules/installation-osp-creating-control-plane.adoc +++ b/modules/installation-osp-creating-control-plane.adoc @@ -40,7 +40,7 @@ You will see messages that confirm that the control plane machines are running a + [source,terminal] ---- -INFO API v1.34.2 up +INFO API v1.35.4 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources diff --git a/modules/installation-special-config-rtkernel.adoc b/modules/installation-special-config-rtkernel.adoc index 9bbaa286b574..30aa34253c3f 100644 --- a/modules/installation-special-config-rtkernel.adoc +++ b/modules/installation-special-config-rtkernel.adoc @@ -92,12 +92,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-139-200.us-east-2.compute.internal Ready master 111m v1.34.2 -ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.34.2 -ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.34.2 -ip-10-0-156-255.us-east-2.compute.internal Ready master 111m v1.34.2 -ip-10-0-164-74.us-east-2.compute.internal Ready master 111m v1.34.2 -ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.34.2 +ip-10-0-139-200.us-east-2.compute.internal Ready master 111m v1.35.4 +ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.35.4 +ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.35.4 +ip-10-0-156-255.us-east-2.compute.internal Ready master 111m v1.35.4 +ip-10-0-164-74.us-east-2.compute.internal Ready master 111m v1.35.4 +ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.35.4 ---- + [source,terminal] diff --git a/modules/ipi-install-provisioning-the-bare-metal-node.adoc b/modules/ipi-install-provisioning-the-bare-metal-node.adoc index ac5b18f9fe6b..1e242b689024 100644 --- a/modules/ipi-install-provisioning-the-bare-metal-node.adoc +++ b/modules/ipi-install-provisioning-the-bare-metal-node.adoc @@ -35,11 +35,11 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -openshift-master-1.openshift.example.com Ready master 30h v1.34.2 -openshift-master-2.openshift.example.com Ready master 30h v1.34.2 -openshift-master-3.openshift.example.com Ready master 30h v1.34.2 -openshift-worker-0.openshift.example.com Ready worker 30h v1.34.2 
-openshift-worker-1.openshift.example.com Ready worker 30h v1.34.2 +openshift-master-1.openshift.example.com Ready master 30h v1.35.4 +openshift-master-2.openshift.example.com Ready master 30h v1.35.4 +openshift-master-3.openshift.example.com Ready master 30h v1.35.4 +openshift-worker-0.openshift.example.com Ready worker 30h v1.35.4 +openshift-worker-1.openshift.example.com Ready worker 30h v1.35.4 ---- . Get the compute machine set. @@ -99,12 +99,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -openshift-master-1.openshift.example.com Ready master 30h v1.34.2 -openshift-master-2.openshift.example.com Ready master 30h v1.34.2 -openshift-master-3.openshift.example.com Ready master 30h v1.34.2 -openshift-worker-0.openshift.example.com Ready worker 30h v1.34.2 -openshift-worker-1.openshift.example.com Ready worker 30h v1.34.2 -openshift-worker-.openshift.example.com Ready worker 3m27s v1.34.2 +openshift-master-1.openshift.example.com Ready master 30h v1.35.4 +openshift-master-2.openshift.example.com Ready master 30h v1.35.4 +openshift-master-3.openshift.example.com Ready master 30h v1.35.4 +openshift-worker-0.openshift.example.com Ready worker 30h v1.35.4 +openshift-worker-1.openshift.example.com Ready worker 30h v1.35.4 +openshift-worker-.openshift.example.com Ready worker 3m27s v1.35.4 ---- + You can also check the kubelet. diff --git a/modules/ipi-install-replacing-a-bare-metal-control-plane-node.adoc b/modules/ipi-install-replacing-a-bare-metal-control-plane-node.adoc index 92a3364ae377..2e2dd8f69d0e 100644 --- a/modules/ipi-install-replacing-a-bare-metal-control-plane-node.adoc +++ b/modules/ipi-install-replacing-a-bare-metal-control-plane-node.adoc @@ -183,11 +183,11 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane-1.example.com available master 4m2s v1.34.2 -control-plane-2.example.com available master 141m v1.34.2 -control-plane-3.example.com available master 141m v1.34.2 -compute-1.example.com available worker 87m v1.34.2 -compute-2.example.com available worker 87m v1.34.2 +control-plane-1.example.com available master 4m2s v1.35.4 +control-plane-2.example.com available master 141m v1.35.4 +control-plane-3.example.com available master 141m v1.35.4 +compute-1.example.com available worker 87m v1.35.4 +compute-2.example.com available worker 87m v1.35.4 ---- + [NOTE] diff --git a/modules/ipi-install-troubleshooting-ntp-out-of-sync.adoc b/modules/ipi-install-troubleshooting-ntp-out-of-sync.adoc index f91f170637ab..9175ee78763c 100644 --- a/modules/ipi-install-troubleshooting-ntp-out-of-sync.adoc +++ b/modules/ipi-install-troubleshooting-ntp-out-of-sync.adoc @@ -21,10 +21,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master-0.cloud.example.com Ready master 145m v1.34.2 -master-1.cloud.example.com Ready master 135m v1.34.2 -master-2.cloud.example.com Ready master 145m v1.34.2 -worker-2.cloud.example.com Ready worker 100m v1.34.2 +master-0.cloud.example.com Ready master 145m v1.35.4 +master-1.cloud.example.com Ready master 135m v1.35.4 +master-2.cloud.example.com Ready master 145m v1.35.4 +worker-2.cloud.example.com Ready worker 100m v1.35.4 ---- . Check for inconsistent timing delays due to clock drift. 
For example:
diff --git a/modules/ipi-install-troubleshooting-reviewing-the-installation.adoc b/modules/ipi-install-troubleshooting-reviewing-the-installation.adoc
index c0ca53496e99..74728b874f7a 100644
--- a/modules/ipi-install-troubleshooting-reviewing-the-installation.adoc
+++ b/modules/ipi-install-troubleshooting-reviewing-the-installation.adoc
@@ -21,9 +21,9 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-master-0.example.com Ready master,worker 4h v1.34.2
-master-1.example.com Ready master,worker 4h v1.34.2
-master-2.example.com Ready master,worker 4h v1.34.2
+master-0.example.com Ready master,worker 4h v1.35.4
+master-1.example.com Ready master,worker 4h v1.35.4
+master-2.example.com Ready master,worker 4h v1.35.4
----
. Confirm the installation program deployed all pods successfully. The following command
diff --git a/modules/machine-node-custom-partition.adoc b/modules/machine-node-custom-partition.adoc
index 81289d0886e9..a70241ac46ad 100644
--- a/modules/machine-node-custom-partition.adoc
+++ b/modules/machine-node-custom-partition.adoc
@@ -236,13 +236,13 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-ip-10-0-128-78.ec2.internal Ready worker 117m v1.34.2
-ip-10-0-146-113.ec2.internal Ready master 127m v1.34.2
-ip-10-0-153-35.ec2.internal Ready worker 118m v1.34.2
-ip-10-0-176-58.ec2.internal Ready master 126m v1.34.2
-ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.34.2 <1>
-ip-10-0-225-248.ec2.internal Ready master 127m v1.34.2
-ip-10-0-245-59.ec2.internal Ready worker 116m v1.34.2
+ip-10-0-128-78.ec2.internal Ready worker 117m v1.35.4
+ip-10-0-146-113.ec2.internal Ready master 127m v1.35.4
+ip-10-0-153-35.ec2.internal Ready worker 118m v1.35.4
+ip-10-0-176-58.ec2.internal Ready master 126m v1.35.4
+ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.35.4 <1>
+ip-10-0-225-248.ec2.internal Ready master 127m v1.35.4
+ip-10-0-245-59.ec2.internal Ready worker 116m v1.35.4
----
<1> This is the new node.
diff --git a/modules/microshift-custom-ca-proc.adoc b/modules/microshift-custom-ca-proc.adoc
index 6d10373709ca..1d8318e0e244 100644
--- a/modules/microshift-custom-ca-proc.adoc
+++ b/modules/microshift-custom-ca-proc.adoc
@@ -81,7 +81,7 @@ $ oc --certificate-authority ~/certs/ca.ca get node
----
oc get node
NAME STATUS ROLES AGE VERSION
-dhcp-1-235-195.arm.example.com Ready control-plane,master,worker 76m v1.34.2
+dhcp-1-235-195.arm.example.com Ready control-plane,master,worker 76m v1.35.4
----
..
Add the new CA file to the $KUBECONFIG environment variable by running the following command: diff --git a/modules/nodes-cluster-resource-override-move-infra.adoc b/modules/nodes-cluster-resource-override-move-infra.adoc index 7251175ea425..87a4613d10a8 100644 --- a/modules/nodes-cluster-resource-override-move-infra.adoc +++ b/modules/nodes-cluster-resource-override-move-infra.adoc @@ -33,15 +33,15 @@ clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.34.2 -ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.34.2 -ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.34.2 -ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.34.2 -ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.34.2 -ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.34.2 -ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.34.2 -ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.34.2 -ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.34.2 +ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.35.4 +ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.35.4 +ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.35.4 +ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.35.4 +ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.35.4 +ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.35.4 +ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.35.4 +ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.35.4 +ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.35.4 ---- .Procedure diff --git a/modules/nodes-nodes-kernel-arguments.adoc b/modules/nodes-nodes-kernel-arguments.adoc index 4632945d438d..12f6cd68f106 100644 --- a/modules/nodes-nodes-kernel-arguments.adoc +++ b/modules/nodes-nodes-kernel-arguments.adoc @@ -134,12 +134,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-136-161.ec2.internal Ready worker 28m v1.34.2 -ip-10-0-136-243.ec2.internal Ready master 34m v1.34.2 -ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.34.2 -ip-10-0-142-249.ec2.internal Ready master 34m v1.34.2 -ip-10-0-153-11.ec2.internal Ready worker 28m v1.34.2 -ip-10-0-153-150.ec2.internal Ready master 34m v1.34.2 +ip-10-0-136-161.ec2.internal Ready worker 28m v1.35.4 +ip-10-0-136-243.ec2.internal Ready master 34m v1.35.4 +ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.35.4 +ip-10-0-142-249.ec2.internal Ready master 34m v1.35.4 +ip-10-0-153-11.ec2.internal Ready worker 28m v1.35.4 +ip-10-0-153-150.ec2.internal Ready master 34m v1.35.4 ---- + You can see that scheduling on each worker node is disabled as the change is being applied. 
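++
+You can also follow the rollout from the machine config pool status. For example, the following check assumes the default `worker` pool name; the update is complete when the `UPDATED` column reports `True`:
++
+[source,terminal]
+----
+$ oc get machineconfigpool worker
+----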
diff --git a/modules/nodes-nodes-rtkernel-arguments.adoc b/modules/nodes-nodes-rtkernel-arguments.adoc index 9f853346645e..1c51b3d8439e 100644 --- a/modules/nodes-nodes-rtkernel-arguments.adoc +++ b/modules/nodes-nodes-rtkernel-arguments.adoc @@ -61,9 +61,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.34.2 -ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.34.2 -ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.34.2 +ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.35.4 +ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.35.4 +ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.35.4 ---- + [source,terminal] diff --git a/modules/nodes-nodes-viewing-listing.adoc b/modules/nodes-nodes-viewing-listing.adoc index e7a13772106f..757b560fe094 100644 --- a/modules/nodes-nodes-viewing-listing.adoc +++ b/modules/nodes-nodes-viewing-listing.adoc @@ -27,9 +27,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master.example.com Ready master 7h v1.34.2 -node1.example.com Ready worker 7h v1.34.2 -node2.example.com Ready worker 7h v1.34.2 +master.example.com Ready master 7h v1.35.4 +node1.example.com Ready worker 7h v1.35.4 +node2.example.com Ready worker 7h v1.35.4 ---- + The following example is a cluster with one unhealthy node: @@ -43,9 +43,9 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -master.example.com Ready master 7h v1.34.2 -node1.example.com NotReady,SchedulingDisabled worker 7h v1.34.2 -node2.example.com Ready worker 7h v1.34.2 +master.example.com Ready master 7h v1.35.4 +node1.example.com NotReady,SchedulingDisabled worker 7h v1.35.4 +node2.example.com Ready worker 7h v1.35.4 ---- + The conditions that trigger a `NotReady` status are shown later in this section. 
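++
+To inspect only the `Ready` condition for each node, a JSONPath query similar to the following sketch can be helpful:
++
+[source,terminal]
+----
+$ oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
+----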
@@ -61,9 +61,9 @@ $ oc get nodes -o wide [source,terminal] ---- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME -master.example.com Ready master 171m v1.34.2 10.0.129.108 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev -node1.example.com Ready worker 72m v1.34.2 10.0.129.222 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev -node2.example.com Ready worker 164m v1.34.2 10.0.142.150 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.34.2-30.rhaos4.10.gitf2f339d.el8-dev +master.example.com Ready master 171m v1.35.4 10.0.129.108 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.35.4-30.rhaos4.10.gitf2f339d.el8-dev +node1.example.com Ready worker 72m v1.35.4 10.0.129.222 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.35.4-30.rhaos4.10.gitf2f339d.el8-dev +node2.example.com Ready worker 164m v1.35.4 10.0.142.150 Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.21.0-240.15.1.el8_3.x86_64 cri-o://1.35.4-30.rhaos4.10.gitf2f339d.el8-dev ---- * The following command lists information about a single node: @@ -84,7 +84,7 @@ $ oc get node node1.example.com [source,terminal] ---- NAME STATUS ROLES AGE VERSION -node1.example.com Ready worker 7h v1.34.2 +node1.example.com Ready worker 7h v1.35.4 ---- * The following command provides more detailed information about a specific node, including the reason for @@ -163,9 +163,9 @@ System Info: OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 - Container Runtime Version: cri-o://1.34.2-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 - Kubelet Version: v1.34.2 - Kube-Proxy Version: v1.34.2 + Container Runtime Version: cri-o://1.35.4-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 + Kubelet Version: v1.35.4 + Kube-Proxy Version: v1.35.4 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) diff --git a/modules/nodes-nodes-working-evacuating.adoc b/modules/nodes-nodes-working-evacuating.adoc index cd27e2201bfa..ec49e83b2646 100644 --- a/modules/nodes-nodes-working-evacuating.adoc +++ b/modules/nodes-nodes-working-evacuating.adoc @@ -44,7 +44,7 @@ $ oc get node [source,terminal] ---- NAME STATUS ROLES AGE VERSION - Ready,SchedulingDisabled worker 1d v1.34.2 + Ready,SchedulingDisabled worker 1d v1.35.4 ---- . 
Evacuate the pods by using one of the following methods: diff --git a/modules/nodes-scheduler-node-selectors-cluster.adoc b/modules/nodes-scheduler-node-selectors-cluster.adoc index daa8f636d4f1..e00c95ddeb44 100644 --- a/modules/nodes-scheduler-node-selectors-cluster.adoc +++ b/modules/nodes-scheduler-node-selectors-cluster.adoc @@ -145,7 +145,7 @@ $ oc get nodes -l type=user-node [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.34.2 +ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.35.4 ---- * Add labels directly to a node: @@ -198,5 +198,5 @@ $ oc get nodes -l type=user-node,region=east [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.34.2 +ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.35.4 ---- diff --git a/modules/nodes-scheduler-node-selectors-pod.adoc b/modules/nodes-scheduler-node-selectors-pod.adoc index 4ca5f9fb0d3b..db0f5ff87899 100644 --- a/modules/nodes-scheduler-node-selectors-pod.adoc +++ b/modules/nodes-scheduler-node-selectors-pod.adoc @@ -180,7 +180,7 @@ $ oc get nodes -l type=user-node,region=east [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-142-25.ec2.internal Ready worker 17m v1.34.2 +ip-10-0-142-25.ec2.internal Ready worker 17m v1.35.4 ---- . Add the matching node selector to a pod: diff --git a/modules/nodes-scheduler-node-selectors-project.adoc b/modules/nodes-scheduler-node-selectors-project.adoc index 4b689829da30..36d48eec3619 100644 --- a/modules/nodes-scheduler-node-selectors-project.adoc +++ b/modules/nodes-scheduler-node-selectors-project.adoc @@ -161,7 +161,7 @@ $ oc get nodes -l type=user-node [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.34.2 +ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.35.4 ---- * Add labels directly to a node: @@ -214,5 +214,5 @@ $ oc get nodes -l type=user-node,region=east [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.34.2 +ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.35.4 ---- diff --git a/modules/nodes-verify-failed-node-deleted.adoc b/modules/nodes-verify-failed-node-deleted.adoc index 6bb564bbb12e..7f40b39bb8e6 100644 --- a/modules/nodes-verify-failed-node-deleted.adoc +++ b/modules/nodes-verify-failed-node-deleted.adoc @@ -38,10 +38,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -openshift-control-plane-0 Ready master 3h24m v1.34.2 -openshift-control-plane-1 Ready master 3h24m v1.34.2 -openshift-compute-0 Ready worker 176m v1.34.2 -openshift-compute-1 Ready worker 176m v1.34.2 +openshift-control-plane-0 Ready master 3h24m v1.35.4 +openshift-control-plane-1 Ready master 3h24m v1.35.4 +openshift-compute-0 Ready worker 176m v1.35.4 +openshift-compute-1 Ready worker 176m v1.35.4 ---- . Wait for all of the cluster Operators to complete rolling out changes. 
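++
+For example, you can check Operator progress with the following command; the rollout is complete when every Operator reports `AVAILABLE` as `True` and `PROGRESSING` as `False`:
++
+[source,terminal]
+----
+$ oc get clusteroperators
+----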
diff --git a/modules/nvidia-gpu-aws-adding-a-gpu-node.adoc b/modules/nvidia-gpu-aws-adding-a-gpu-node.adoc index 986321b1ac35..5c84951a1b5e 100644 --- a/modules/nvidia-gpu-aws-adding-a-gpu-node.adoc +++ b/modules/nvidia-gpu-aws-adding-a-gpu-node.adoc @@ -29,12 +29,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.34.2 -ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.34.2 -ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.34.2 -ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.34.2 -ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.34.2 -ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.34.2 +ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.35.4 +ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.35.4 +ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.35.4 +ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.35.4 +ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.35.4 +ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.35.4 ---- . View the machines and machine sets that exist in the `openshift-machine-api` namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones. diff --git a/modules/nvidia-gpu-azure-adding-a-gpu-node.adoc b/modules/nvidia-gpu-azure-adding-a-gpu-node.adoc index d626e413ef14..4eebb1f47250 100644 --- a/modules/nvidia-gpu-azure-adding-a-gpu-node.adoc +++ b/modules/nvidia-gpu-azure-adding-a-gpu-node.adoc @@ -346,13 +346,13 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -myclustername-master-0 Ready control-plane,master 6h39m v1.34.2 -myclustername-master-1 Ready control-plane,master 6h41m v1.34.2 -myclustername-master-2 Ready control-plane,master 6h39m v1.34.2 -myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.34.2 -myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.34.2 -myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.34.2 -myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.34.2 +myclustername-master-0 Ready control-plane,master 6h39m v1.35.4 +myclustername-master-1 Ready control-plane,master 6h41m v1.35.4 +myclustername-master-2 Ready control-plane,master 6h39m v1.35.4 +myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.35.4 +myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.35.4 +myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.35.4 +myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.35.4 ---- . 
View the list of compute machine sets: diff --git a/modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc b/modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc index 7b46b8659d0b..401c95aba2ba 100644 --- a/modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc +++ b/modules/nvidia-gpu-gcp-adding-a-gpu-node.adoc @@ -158,13 +158,13 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.34.2 -myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.34.2 -myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.34.2 -myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.34.2 -myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.34.2 -myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.34.2 -myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.34.2 +myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.35.4 +myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.35.4 +myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.35.4 +myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.35.4 +myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.35.4 +myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.35.4 +myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.35.4 ---- . View the machines and machine sets that exist in the `openshift-machine-api` namespace by running the following command. Each compute machine set is associated with a different availability zone within the {gcp-short} region. The installation program automatically load balances compute machines across availability zones. diff --git a/modules/nw-ptp-configuring-gnss-to-ntp-failover-sno.adoc b/modules/nw-ptp-configuring-gnss-to-ntp-failover-sno.adoc index cf53edd01d5b..d59d194de18d 100644 --- a/modules/nw-ptp-configuring-gnss-to-ntp-failover-sno.adoc +++ b/modules/nw-ptp-configuring-gnss-to-ntp-failover-sno.adoc @@ -446,7 +446,7 @@ The output is similar to the following: [source,terminal] ---- NAME STATUS ROLES AGE VERSION -mysno-sno.demo.lab Ready control-plane,master,worker 4h19m v1.34.1 +mysno-sno.demo.lab Ready control-plane,master,worker 4h19m v1.35.4 ---- + Then describe the NodePtpDevice using your node name: @@ -530,4 +530,3 @@ The output shows synchronization status messages for `phc2sys`. 
----
phc2sys[xxx]: CLOCK_REALTIME phc offset -17 s2 freq -13865 delay 2305
----
-
diff --git a/modules/nw-sriov-hwol-configuring-machine-config-pool.adoc b/modules/nw-sriov-hwol-configuring-machine-config-pool.adoc
index 044400eb854b..c1ec2b3c87ab 100644
--- a/modules/nw-sriov-hwol-configuring-machine-config-pool.adoc
+++ b/modules/nw-sriov-hwol-configuring-machine-config-pool.adoc
@@ -61,11 +61,11 @@ $ oc get nodes
[source,terminal]
----
NAME STATUS ROLES AGE VERSION
-master-0 Ready master 2d v1.34.2
-master-1 Ready master 2d v1.34.2
-worker-0 Ready worker 2d v1.34.2
-worker-1 Ready worker 2d v1.34.2
-worker-2 Ready mcp-offloading,worker 47h v1.34.2
+master-0 Ready master 2d v1.35.4
+master-1 Ready master 2d v1.35.4
+worker-0 Ready worker 2d v1.35.4
+worker-1 Ready worker 2d v1.35.4
+worker-2 Ready mcp-offloading,worker 47h v1.35.4
----
--
diff --git a/modules/olm-catalogsource-image-template.adoc b/modules/olm-catalogsource-image-template.adoc
index 8fee34a5bf10..7026f7d591f4 100644
--- a/modules/olm-catalogsource-image-template.adoc
+++ b/modules/olm-catalogsource-image-template.adoc
@@ -24,14 +24,14 @@ During a cluster upgrade, the index image tag for the default Red Hat-provided c
[source,terminal]
----
registry.redhat.io/redhat/redhat-operator-index:v4.21
----
to:
[source,terminal]
----
-registry.redhat.io/redhat/redhat-operator-index:v4.21
+registry.redhat.io/redhat/redhat-operator-index:v4.22
----
However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image.
@@ -70,7 +70,7 @@ metadata:
"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}"
spec:
displayName: Example Catalog
- image: quay.io/example-org/example-catalog:v1.34
+ image: quay.io/example-org/example-catalog:v1.35
priority: -400
publisher: Example Org
----
@@ -90,11 +90,11 @@ endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
ifdef::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
a {product-title}
endif::openshift-rosa,openshift-rosa-hcp,openshift-dedicated[]
-cluster, which uses Kubernetes 1.34, the `olm.catalogImageTemplate` annotation in the preceding example resolves to the following image reference:
+cluster, which uses Kubernetes 1.35, the `olm.catalogImageTemplate` annotation in the preceding example resolves to the following image reference:
[source,terminal]
----
-quay.io/example-org/example-catalog:v1.34
+quay.io/example-org/example-catalog:v1.35
----
For future releases of {product-title}, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later {product-title} version. With the `olm.catalogImageTemplate` annotation set before the upgrade, upgrading the cluster to the later {product-title} version would then automatically update the catalog's index image as well.
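+
+As a quick check after an upgrade, you can inspect the image reference that the template resolved to. The following sketch assumes the catalog source from the preceding example is named `example-catalog` and was created in the `openshift-marketplace` namespace:
+
+[source,terminal]
+----
+$ oc get catalogsource example-catalog -n openshift-marketplace -o jsonpath='{.spec.image}'
+----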
diff --git a/modules/querying-the-status-of-cluster-nodes-using-the-cli.adoc b/modules/querying-the-status-of-cluster-nodes-using-the-cli.adoc index bbfaca27fb66..9ea4da590577 100644 --- a/modules/querying-the-status-of-cluster-nodes-using-the-cli.adoc +++ b/modules/querying-the-status-of-cluster-nodes-using-the-cli.adoc @@ -26,12 +26,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -compute-1.example.com Ready worker 33m v1.34.2 -control-plane-1.example.com Ready master 41m v1.34.2 -control-plane-2.example.com Ready master 45m v1.34.2 -compute-2.example.com Ready worker 38m v1.34.2 -compute-3.example.com Ready worker 33m v1.34.2 -control-plane-3.example.com Ready master 41m v1.34.2 +compute-1.example.com Ready worker 33m v1.35.4 +control-plane-1.example.com Ready master 41m v1.35.4 +control-plane-2.example.com Ready master 45m v1.35.4 +compute-2.example.com Ready worker 38m v1.35.4 +compute-3.example.com Ready worker 33m v1.35.4 +control-plane-3.example.com Ready master 41m v1.35.4 ---- . Review CPU and memory resource availability for each cluster node: diff --git a/modules/restore-determine-state-etcd-member.adoc b/modules/restore-determine-state-etcd-member.adoc index 17879f54ee6d..d218c70b85ab 100644 --- a/modules/restore-determine-state-etcd-member.adoc +++ b/modules/restore-determine-state-etcd-member.adoc @@ -72,7 +72,7 @@ $ oc get nodes -l node-role.kubernetes.io/master | grep "NotReady" .Example output [source,terminal] ---- -ip-10-0-131-183.ec2.internal NotReady master 122m v1.34.2 <1> +ip-10-0-131-183.ec2.internal NotReady master 122m v1.35.4 <1> ---- <1> If the node is listed as `NotReady`, then the *node is not ready*. @@ -96,9 +96,9 @@ $ oc get nodes -l node-role.kubernetes.io/master [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-131-183.ec2.internal Ready master 6h13m v1.34.2 -ip-10-0-164-97.ec2.internal Ready master 6h13m v1.34.2 -ip-10-0-154-204.ec2.internal Ready master 6h13m v1.34.2 +ip-10-0-131-183.ec2.internal Ready master 6h13m v1.35.4 +ip-10-0-164-97.ec2.internal Ready master 6h13m v1.35.4 +ip-10-0-154-204.ec2.internal Ready master 6h13m v1.35.4 ---- .. Check whether the status of an etcd pod is either `Error` or `CrashloopBackoff`: diff --git a/modules/restore-replace-stopped-baremetal-etcd-member.adoc b/modules/restore-replace-stopped-baremetal-etcd-member.adoc index 67798840071e..46f00c85a852 100644 --- a/modules/restore-replace-stopped-baremetal-etcd-member.adoc +++ b/modules/restore-replace-stopped-baremetal-etcd-member.adoc @@ -285,10 +285,10 @@ examplecluster-compute-1 Running 165m opens $ oc get nodes NAME STATUS ROLES AGE VERSION -openshift-control-plane-0 Ready master 3h24m v1.34.2 -openshift-control-plane-1 Ready master 3h24m v1.34.2 -openshift-compute-0 Ready worker 176m v1.34.2 -openshift-compute-1 Ready worker 176m v1.34.2 +openshift-control-plane-0 Ready master 3h24m v1.35.4 +openshift-control-plane-1 Ready master 3h24m v1.35.4 +openshift-compute-0 Ready worker 176m v1.35.4 +openshift-compute-1 Ready worker 176m v1.35.4 ---- . 
Create the new `BareMetalHost` object and the secret to store the BMC credentials: @@ -413,11 +413,11 @@ $ oc get nodes ---- $ oc get nodes NAME STATUS ROLES AGE VERSION -openshift-control-plane-0 Ready master 4h26m v1.34.2 -openshift-control-plane-1 Ready master 4h26m v1.34.2 -openshift-control-plane-2 Ready master 12m v1.34.2 -openshift-compute-0 Ready worker 3h58m v1.34.2 -openshift-compute-1 Ready worker 3h58m v1.34.2 +openshift-control-plane-0 Ready master 4h26m v1.35.4 +openshift-control-plane-1 Ready master 4h26m v1.35.4 +openshift-control-plane-2 Ready master 12m v1.35.4 +openshift-compute-0 Ready worker 3h58m v1.35.4 +openshift-compute-1 Ready worker 3h58m v1.35.4 ---- . Turn the quorum guard back on by entering the following command: diff --git a/modules/rhcos-add-extensions.adoc b/modules/rhcos-add-extensions.adoc index f24865e9ffe7..29bae0d44098 100644 --- a/modules/rhcos-add-extensions.adoc +++ b/modules/rhcos-add-extensions.adoc @@ -105,7 +105,7 @@ $ oc get node | grep worker [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.34.2 +ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.35.4 ---- + [source,terminal] diff --git a/modules/rhcos-enabling-multipath-day-2.adoc b/modules/rhcos-enabling-multipath-day-2.adoc index a5a6406a9ed2..bd4a790e7da6 100644 --- a/modules/rhcos-enabling-multipath-day-2.adoc +++ b/modules/rhcos-enabling-multipath-day-2.adoc @@ -111,12 +111,12 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-136-161.ec2.internal Ready worker 28m v1.34.2 -ip-10-0-136-243.ec2.internal Ready master 34m v1.34.2 -ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.34.2 -ip-10-0-142-249.ec2.internal Ready master 34m v1.34.2 -ip-10-0-153-11.ec2.internal Ready worker 28m v1.34.2 -ip-10-0-153-150.ec2.internal Ready master 34m v1.34.2 +ip-10-0-136-161.ec2.internal Ready worker 28m v1.35.4 +ip-10-0-136-243.ec2.internal Ready master 34m v1.35.4 +ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.35.4 +ip-10-0-142-249.ec2.internal Ready master 34m v1.35.4 +ip-10-0-153-11.ec2.internal Ready worker 28m v1.35.4 +ip-10-0-153-150.ec2.internal Ready master 34m v1.35.4 ---- + You can see that scheduling on each worker node is disabled as the change is being applied. 
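++
+After a node reboots with the new configuration, you can spot-check multipathing from a debug shell. The following is a sketch only; it assumes at least one multipath device is attached and reuses a node name from the preceding output:
++
+[source,terminal]
+----
+$ oc debug node/ip-10-0-141-105.ec2.internal -- chroot /host multipath -ll
+----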
diff --git a/modules/sno-adding-worker-nodes-to-sno-clusters-manually.adoc b/modules/sno-adding-worker-nodes-to-sno-clusters-manually.adoc index ce58089b5847..5c89e5df4f3f 100644 --- a/modules/sno-adding-worker-nodes-to-sno-clusters-manually.adoc +++ b/modules/sno-adding-worker-nodes-to-sno-clusters-manually.adoc @@ -218,6 +218,6 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane-1.example.com Ready master,worker 56m v1.34.2 -compute-1.example.com Ready worker 11m v1.34.2 +control-plane-1.example.com Ready master,worker 56m v1.35.4 +compute-1.example.com Ready worker 11m v1.35.4 ---- diff --git a/modules/update-upgrading-cli.adoc b/modules/update-upgrading-cli.adoc index 45f81535b54e..2301cf64fa48 100644 --- a/modules/update-upgrading-cli.adoc +++ b/modules/update-upgrading-cli.adoc @@ -208,10 +208,10 @@ $ oc get nodes [source,terminal] ---- NAME STATUS ROLES AGE VERSION -ip-10-0-168-251.ec2.internal Ready master 82m v1.34.2 -ip-10-0-170-223.ec2.internal Ready master 82m v1.34.2 -ip-10-0-179-95.ec2.internal Ready worker 70m v1.34.2 -ip-10-0-182-134.ec2.internal Ready worker 70m v1.34.2 -ip-10-0-211-16.ec2.internal Ready master 82m v1.34.2 -ip-10-0-250-100.ec2.internal Ready worker 69m v1.34.2 +ip-10-0-168-251.ec2.internal Ready master 82m v1.35.4 +ip-10-0-170-223.ec2.internal Ready master 82m v1.35.4 +ip-10-0-179-95.ec2.internal Ready worker 70m v1.35.4 +ip-10-0-182-134.ec2.internal Ready worker 70m v1.35.4 +ip-10-0-211-16.ec2.internal Ready master 82m v1.35.4 +ip-10-0-250-100.ec2.internal Ready worker 69m v1.35.4 ---- \ No newline at end of file diff --git a/modules/update-vsphere-virtual-hardware-on-compute-nodes.adoc b/modules/update-vsphere-virtual-hardware-on-compute-nodes.adoc index a47265f5e7cd..90cd3705099e 100644 --- a/modules/update-vsphere-virtual-hardware-on-compute-nodes.adoc +++ b/modules/update-vsphere-virtual-hardware-on-compute-nodes.adoc @@ -34,9 +34,9 @@ $ oc get nodes -l node-role.kubernetes.io/worker [source,terminal] ---- NAME STATUS ROLES AGE VERSION -compute-node-0 Ready worker 30m v1.34.2 -compute-node-1 Ready worker 30m v1.34.2 -compute-node-2 Ready worker 30m v1.34.2 +compute-node-0 Ready worker 30m v1.35.4 +compute-node-1 Ready worker 30m v1.35.4 +compute-node-2 Ready worker 30m v1.35.4 ---- + Note the names of your compute nodes. diff --git a/modules/update-vsphere-virtual-hardware-on-control-plane-nodes.adoc b/modules/update-vsphere-virtual-hardware-on-control-plane-nodes.adoc index 97d47d4a3397..d17046f96f92 100644 --- a/modules/update-vsphere-virtual-hardware-on-control-plane-nodes.adoc +++ b/modules/update-vsphere-virtual-hardware-on-control-plane-nodes.adoc @@ -29,9 +29,9 @@ $ oc get nodes -l node-role.kubernetes.io/master [source,terminal] ---- NAME STATUS ROLES AGE VERSION -control-plane-node-0 Ready master 75m v1.34.2 -control-plane-node-1 Ready master 75m v1.34.2 -control-plane-node-2 Ready master 75m v1.34.2 +control-plane-node-0 Ready master 75m v1.35.4 +control-plane-node-1 Ready master 75m v1.35.4 +control-plane-node-2 Ready master 75m v1.35.4 ---- + Note the names of your control plane nodes. 
From 930dc398859e9651361441db2503826df9b988a7 Mon Sep 17 00:00:00 2001 From: Brendan Daly Date: Wed, 6 May 2026 10:32:59 +0100 Subject: [PATCH 07/17] OSDOCS-17043_b:adding CQA updates --- .../machineset-azure-confidential-vms.adoc | 44 ++++++++++--------- ...bling-accelerated-networking-existing.adoc | 20 +++++---- modules/machineset-azure-ephemeral-os.adoc | 5 ++- modules/machineset-azure-trusted-launch.adoc | 28 +++++++----- 4 files changed, 54 insertions(+), 43 deletions(-) diff --git a/modules/machineset-azure-confidential-vms.adoc b/modules/machineset-azure-confidential-vms.adoc index 13f5226a9a23..2bd275cf49d9 100644 --- a/modules/machineset-azure-confidential-vms.adoc +++ b/modules/machineset-azure-confidential-vms.adoc @@ -9,9 +9,10 @@ endif::[] :_mod-docs-content-type: PROCEDURE [id="machineset-azure-confidential-vms_{context}"] -= Configuring Azure confidential virtual machines by using machine sets += Configuring {azure-short} confidential virtual machines by using machine sets -{product-title} {product-version} supports Azure confidential virtual machines (VMs). +[role="_abstract"] +{product-title} {product-version} supports {azure-full} confidential virtual machines (VMs). By enabling {azure-short} confidential VMs, you can use memory encryption to improve data confidentiality. [NOTE] ==== @@ -27,7 +28,7 @@ Not all instance types support confidential VMs. Do not change the instance type ==== endif::cpmso[] -For more information about related features and functionality, see the Microsoft Azure documentation about link:https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-vm-overview[Confidential virtual machines]. +For more information about related features and functionality, see the {azure-full} documentation about link:https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-vm-overview[Confidential virtual machines]. .Procedure @@ -56,32 +57,35 @@ spec: osDisk: # ... managedDisk: - securityProfile: # <1> - securityEncryptionType: VMGuestStateOnly # <2> + securityProfile: + securityEncryptionType: VMGuestStateOnly # ... - securityProfile: # <3> + securityProfile: settings: - securityType: ConfidentialVM # <4> + securityType: ConfidentialVM confidentialVM: - uefiSettings: # <5> - secureBoot: Disabled # <6> - virtualizedTrustedPlatformModule: Enabled # <7> - vmSize: Standard_DC16ads_v5 # <8> + uefiSettings: + secureBoot: Disabled + virtualizedTrustedPlatformModule: Enabled + vmSize: Standard_DC16ads_v5 # ... ---- -<1> Specifies security profile settings for the managed disk when using a confidential VM. -<2> Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM. -<3> Specifies security profile settings for the confidential VM. -<4> Enables the use of confidential VMs. This value is required for all valid configurations. -<5> Specifies which UEFI security features to use. This section is required for all valid configurations. -<6> Disables UEFI Secure Boot. -<7> Enables the use of a vTPM. -<8> Specifies an instance type that supports confidential VMs. + +where: + +`spec.template.spec.providerSpec.value.osDisk.managedDisk.securityProfile`:: Specifies security profile settings for the managed disk when using a confidential VM. +`spec.template.spec.providerSpec.value.osDisk.managedDisk.securityProfile.securityEncryptionType`:: Enables encryption of the {azure-full} VM Guest State (VMGS) blob. This setting requires the use of vTPM. 
+`spec.template.spec.providerSpec.value.securityProfile`:: Specifies security profile settings for the confidential VM.
+`spec.template.spec.providerSpec.value.securityProfile.settings.securityType`:: Enables the use of confidential VMs. This value is required for all valid configurations.
+`spec.template.spec.providerSpec.value.securityProfile.settings.confidentialVM.uefiSettings`:: Specifies which UEFI security features to use. This section is required for all valid configurations.
+`spec.template.spec.providerSpec.value.securityProfile.settings.confidentialVM.uefiSettings.secureBoot`:: Disables UEFI Secure Boot.
+`spec.template.spec.providerSpec.value.securityProfile.settings.confidentialVM.uefiSettings.virtualizedTrustedPlatformModule`:: Enables the use of a vTPM.
+`spec.template.spec.providerSpec.value.vmSize`:: Specifies an instance type that supports confidential VMs.
--
.Verification
-* On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured.
+* On the {azure-full} portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured.
ifeval::["{context}" == "cpmso-supported-features-azure"]
:!cpmso:
diff --git a/modules/machineset-azure-enabling-accelerated-networking-existing.adoc b/modules/machineset-azure-enabling-accelerated-networking-existing.adoc
index 159c2ce0a875..37e4b4764d37 100644
--- a/modules/machineset-azure-enabling-accelerated-networking-existing.adoc
+++ b/modules/machineset-azure-enabling-accelerated-networking-existing.adoc
@@ -12,13 +12,14 @@ endif::[]
:_mod-docs-content-type: PROCEDURE
[id="machineset-azure-enabling-accelerated-networking-existing_{context}"]
-= Enabling Accelerated Networking on an existing Microsoft Azure cluster
+= Enabling Accelerated Networking on an existing {azure-full} cluster
-You can enable Accelerated Networking on Azure by adding `acceleratedNetworking` to your machine set YAML file.
+[role="_abstract"]
+You can enable Accelerated Networking on {azure-full} by adding `acceleratedNetworking` to your machine set YAML file. This uses SR-IOV to help improve network performance for new nodes.
.Prerequisites
-* Have an existing Microsoft Azure cluster where the Machine API is operational.
+* Have an existing {azure-short} cluster where the Machine API is operational.
.Procedure
////
$ oc edit machineset
----
providerSpec:
  value:
-    acceleratedNetworking: true <1>
-    vmSize: <2>
+    acceleratedNetworking: true
+    vmSize:
----
-+
-<1> This line enables Accelerated Networking.
-<2> Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see link:https://docs.microsoft.com/en-us/azure/virtual-machines/sizes[Microsoft Azure documentation].
+where:
+
+`providerSpec.value.acceleratedNetworking`:: Enables Accelerated Networking.
+`providerSpec.value.vmSize`:: Specifies an {azure-short} VM size that includes at least four vCPUs. For information about VM sizes, see the {azure-full} documentation link:https://docs.microsoft.com/en-us/azure/virtual-machines/sizes[Sizes for virtual machines in {azure-short}].
ifdef::compute[]
.Next steps
endif::compute[]
.Verification
-* On the Microsoft Azure portal, review the *Networking* settings page for a machine provisioned by the machine set, and verify that the `Accelerated networking` field is set to `Enabled`.
+* On the {azure-full} portal, review the *Networking* settings page for a machine provisioned by the machine set, and verify that the `Accelerated networking` field is set to `Enabled`. ifeval::["{context}" == "creating-machineset-azure"] :!compute: diff --git a/modules/machineset-azure-ephemeral-os.adoc b/modules/machineset-azure-ephemeral-os.adoc index e2e79293c704..c901f32fcace 100644 --- a/modules/machineset-azure-ephemeral-os.adoc +++ b/modules/machineset-azure-ephemeral-os.adoc @@ -6,9 +6,10 @@ [id="machineset-azure-ephemeral-os_{context}"] = Machine sets that deploy machines on Ephemeral OS disks -You can create a compute machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging. +[role="_abstract"] +You can create a compute machine set running on {azure-first} that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote {azure-short} Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging. [role="_additional-resources"] .Additional resources -* For more information, see the Microsoft Azure documentation about link:https://docs.microsoft.com/en-us/azure/virtual-machines/ephemeral-os-disks[Ephemeral OS disks for Azure VMs]. +* link:https://docs.microsoft.com/en-us/azure/virtual-machines/ephemeral-os-disks[Ephemeral OS disks for {azure-short} VMs ({azure-full} documentation)] diff --git a/modules/machineset-azure-trusted-launch.adoc b/modules/machineset-azure-trusted-launch.adoc index 0e8d07672778..07ee5f87a3cd 100644 --- a/modules/machineset-azure-trusted-launch.adoc +++ b/modules/machineset-azure-trusted-launch.adoc @@ -9,9 +9,10 @@ endif::[] :_mod-docs-content-type: PROCEDURE [id="machineset-azure-trusted-launch_{context}"] -= Configuring trusted launch for Azure virtual machines by using machine sets += Configuring trusted launch for {azure-short} virtual machines by using machine sets -{product-title} {product-version} supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. +[role="_abstract"] +{product-title} {product-version} supports trusted launch for {azure-full} virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. [NOTE] ==== @@ -60,7 +61,7 @@ Some feature combinations result in an invalid configuration. 2. Using the `virtualizedTrustedPlatformModule` field. -- -For more information about related features and functionality, see the Microsoft Azure documentation about link:https://learn.microsoft.com/en-us/azure/virtual-machines/trusted-launch[Trusted launch for Azure virtual machines]. 
+For more information about related features and functionality, see the {azure-full} documentation about link:https://learn.microsoft.com/en-us/azure/virtual-machines/trusted-launch[Trusted launch for {azure-short} virtual machines]. .Procedure @@ -88,21 +89,24 @@ spec: value: securityProfile: settings: - securityType: TrustedLaunch # <1> + securityType: TrustedLaunch trustedLaunch: - uefiSettings: # <2> - secureBoot: Enabled # <3> - virtualizedTrustedPlatformModule: Enabled # <4> + uefiSettings: + secureBoot: Enabled + virtualizedTrustedPlatformModule: Enabled # ... ---- -<1> Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. -<2> Specifies which UEFI security features to use. This section is required for all valid configurations. -<3> Enables UEFI Secure Boot. -<4> Enables the use of a vTPM. ++ +where: ++ +`spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.securityProfile.settings.securityType`:: Enables the use of trusted launch for {azure-short} virtual machines. This value is required for all valid configurations. +`spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.securityProfile.settings.trustedLaunch.uefiSettings`:: Specifies which UEFI security features to use. This section is required for all valid configurations. +`spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.securityProfile.settings.trustedLaunch.uefiSettings.secureBoot`:: Enables UEFI Secure Boot. +`spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.securityProfile.settings.trustedLaunch.uefiSettings.virtualizedTrustedPlatformModule`:: Enables the use of a vTPM. .Verification -* On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. +* On the {azure-full} portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. 
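++
+If you prefer the command line to the portal, the same values are visible through the {azure-short} CLI. The following is a sketch only, with placeholder resource group and machine names:
++
+[source,terminal]
+----
+$ az vm show --resource-group <resource_group> --name <machine_name> --query "securityProfile"
+----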
ifeval::["{context}" == "cpmso-supported-features-azure"]
:!cpmso:
From ff9857fafa26b48207f9e915c137d81300592f89 Mon Sep 17 00:00:00 2001
From: Michael Burke
Date: Fri, 24 Apr 2026 16:27:13 -0400
Subject: [PATCH 08/17] Speeding Up Pulling Container Images/CRI-O Additional Storage Support
---
_topic_maps/_topic_map.yml | 2 +
...s-nodes-additional-crio-storage-about.adoc | 98 +++++++++++
...s-additional-crio-storage-configuring.adoc | 165 ++++++++++++++++++
.../nodes-nodes-additional-crio-storage.adoc | 30 ++++
4 files changed, 295 insertions(+)
create mode 100644 modules/nodes-nodes-additional-crio-storage-about.adoc
create mode 100644 modules/nodes-nodes-additional-crio-storage-configuring.adoc
create mode 100644 nodes/nodes/nodes-nodes-additional-crio-storage.adoc
diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml
index 949e35df3a7b..1920282c31a4 100644
--- a/_topic_maps/_topic_map.yml
+++ b/_topic_maps/_topic_map.yml
@@ -2955,6 +2955,8 @@ Topics:
File: nodes-nodes-resources-configuring
- Name: Allocating specific CPUs for nodes in a cluster
File: nodes-nodes-resources-cpus
+ - Name: Additional CRI-O storage locations for faster container startup
+ File: nodes-nodes-additional-crio-storage
- Name: Enabling TLS security profiles for the kubelet
File: nodes-nodes-tls
Distros: openshift-enterprise,openshift-origin
diff --git a/modules/nodes-nodes-additional-crio-storage-about.adoc b/modules/nodes-nodes-additional-crio-storage-about.adoc
new file mode 100644
index 000000000000..d2a5b6fbbbba
--- /dev/null
+++ b/modules/nodes-nodes-additional-crio-storage-about.adoc
@@ -0,0 +1,98 @@
+// Module included in the following assemblies:
+//
+// * nodes/nodes/nodes-nodes-additional-crio-storage.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="nodes-nodes-additional-crio-storage-about_{context}"]
+= About additional storage locations for CRI-O
+
+[role="_abstract"]
+To reduce application startup time and make your applications run more efficiently, you can configure additional storage locations for the CRI-O container engine.
+
+Using storage locations other than the default for the CRI-O container engine gives you control over where CRI-O stores and retrieves OCI artifacts, complete container images, and container image layers. Using additional storage locations for these CRI-O objects can reduce application startup time and make your applications run more efficiently through dedicated solid-state drive (SSD) storage, shared image caches, or lazy pulling.
+
+By default, CRI-O stores all container data under a single root directory, `/var/lib/containers/storage`. This works well for typical workloads, but can create problems in clusters that use large images or artifacts, such as artificial intelligence and machine learning (AI/ML) workloads.
+
+For example, large OCI artifacts, such as machine learning models, are stored in the default location, consuming space and preventing the use of faster dedicated storage. By configuring the `additionalArtifactStores` field, you can store large AI/ML models on high-performance SSDs separate from the root file system. As a result, your workloads can experience faster start times and your clusters can use storage more efficiently.
+
+You can also use the `additionalImageStores` field with an NFS share of prepopulated images that is mounted across all worker nodes. Nodes read from the shared cache instead of pulling from an external registry. This is useful in disconnected environments or when many nodes run the same workloads.
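+
+For reference, the `additionalImageStores` entries are written to the `/etc/containers/storage.conf` file on each node, as noted later in this module. A configured node might therefore contain a stanza similar to the following sketch; the path is an example only:
+
+[source,toml]
+----
+[storage.options]
+additionalimagestores = [
+  "/mnt/nfs-image-cache",
+]
+----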
+ +With the `additionalLayerStores` field, you could enable lazy pulling through a third-party storage plugin, such as stargz-store. With lazy pulling, containers start after downloading only the required file chunks. The remaining data is fetched during runtime. + +After you configure any of these new storage locations, the Machine Config Operator (MCO) reboots the affected nodes with the new configuration. After the reboot, CRI-O begins resolving storage from the additional locations. + +Additional storage for OCI artifacts:: +Use the `additionalArtifactStores` field in a container runtime config to specify read-only locations where CRI-O resolves OCI artifacts, such as machine learning models pulled as OCI volume images. CRI-O checks these locations in order before falling back to the default storage location. CRI-O requires an existing, prepopulated `artifacts/` subdirectory within each configured path. For example, if the path is `/mnt/ssd-artifacts`, place the artifacts in the `/mnt/ssd-artifacts/artifacts/` directory. ++ +The following example container runtime config configures storage for OCI artifacts. ++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: ContainerRuntimeConfig +metadata: + name: ssd-artifact-stores +spec: + machineConfigPoolSelector: + matchLabels: + pools.operator.machineconfiguration.openshift.io/worker: "" + containerRuntimeConfig: + additionalArtifactStores: + - path: /mnt/ssd-artifacts + - path: /mnt/nfs-shared-artifacts +---- ++ +When you create the container runtime config, the Machine Config Operator (MCO) writes the configuration to the `/etc/crio/crio.conf.d/01-ctrcfg-additionalArtifactStores` file on the target nodes. + +Additional storage for container images:: +Use the `additionalImageStores` field to specify read-only container image caches on shared or high-performance storage. When CRI-O needs an image, it checks the additional image stores first. If the image exists there, no registry pull happens. ++ +The following example container runtime config configures storage for container images. ++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: ContainerRuntimeConfig +metadata: + name: shared-image-cache +spec: + machineConfigPoolSelector: + matchLabels: + pools.operator.machineconfiguration.openshift.io/worker: "" + containerRuntimeConfig: + additionalImageStores: + - path: /mnt/nfs-image-cache + - path: /mnt/ssd-images +---- ++ +When you create the container runtime config, the Machine Config Operator (MCO) writes the configuration to the `/etc/containers/storage.conf` file on the target nodes. + +Additional container image layers for lazy pulling:: +Use the `additionalLayerStores` field to enable lazy pulling through a third-party storage plugin. ++ +Note that CRI-O falls back to a standard image pull in the following cases: ++ +-- +* The registry does not support HTTP range requests. +* The image is in standard OCI format, not a lazy-pull-compatible format such as eStargz or Nydus. +* The storage plugin is not running. +-- ++ +The following example container runtime config configures container image layers for lazy pulling. 
++ +[source,yaml] +---- +apiVersion: machineconfiguration.openshift.io/v1 +kind: ContainerRuntimeConfig +metadata: + name: lazy-pulling +spec: + machineConfigPoolSelector: + matchLabels: + pools.operator.machineconfiguration.openshift.io/worker: "" + containerRuntimeConfig: + additionalLayerStores: + - path: /var/lib/stargz-store +---- ++ +When you create the container runtime config, the Machine Config Operator (MCO) writes the configuration to the `/etc/containers/storage.conf` file on the target nodes. diff --git a/modules/nodes-nodes-additional-crio-storage-configuring.adoc b/modules/nodes-nodes-additional-crio-storage-configuring.adoc new file mode 100644 index 000000000000..89043a161f2a --- /dev/null +++ b/modules/nodes-nodes-additional-crio-storage-configuring.adoc @@ -0,0 +1,165 @@ +// Module included in the following assemblies: +// +// * nodes/nodes/nodes-nodes-additional-crio-storage.adoc + +:_mod-docs-content-type: PROCEDURE +[id="nodes-nodes-additional-crio-storage-configuring_{context}"] += Configuring additional storage locations for CRI-O + +[role="_abstract"] +To reduce application startup time and make your applications run more efficiently, you can configure additional storage locations for the CRI-O container engine to store OCI objects by using the `ContainerRuntimeConfig` custom resource (CR). + +Use the `additionalArtifactStores`, `additionalImageStores`, and `additionalLayerStores` fields in a `ContainerRuntimeConfig` to specify read-only locations where CRI-O stores and resolves OCI artifacts, container images, or container image layers. CRI-O checks these locations in order before falling back to the default storage location. + +[IMPORTANT] +==== +When using multiple `ContainerRuntimeConfig` resources, merge all additional storage configurations into a single `ContainerRuntimeConfig` for each machine config pool. Multiple `ContainerRuntimeConfig` resources affecting the same configuration file might result in only a subset of the changes taking effect. +==== + +.Prerequisites + +* You enabled the required Technology Preview features for your cluster by adding the `TechPreviewNoUpgrade` feature set to the `FeatureGate` CR named `cluster`. For information about enabling Feature Gates, see "Enabling features using feature gates". ++ +[WARNING] +==== +Enabling the `TechPreviewNoUpgrade` feature set on your cluster cannot be undone and prevents minor version updates. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them. Do not enable this feature set on production clusters. +==== + +* If you are configuring the `additionalImageStores` or `additionalLayerStores` field, the target storage paths must exist and be accessible on the nodes and the container image or layers must be present in the directory. For network storage, ensure the paths are mounted before applying the configuration. + +* If you are configuring the `additionalLayerStores` field, you must meet the following additional prerequisites: + +** A supported storage plugin binary must be installed on each node, such as Stargz Store or Nydus Storage Plugin. See "Stargz Store plugin" or "Nydus Storage Plugin" for more information. You must have installed the plugin by using one of the following methods: +*** Use a daemon set to run the plugin as a privileged container. +*** Use a machine config to install the binary and configure it as a systemd service. +*** Use Image mode for OpenShift to install the plugin in a custom {op-system} image. 
+
+** You converted the container images to a lazy-pull-compatible format, such as eStargz or Nydus. See "eStargz format" or "Nydus format" for more information.
+** Your container registry must support HTTP range requests.
+
+.Procedure
+
+. Create a YAML file for the `ContainerRuntimeConfig` CR similar to the following example:
++
+[source,yaml]
+----
+apiVersion: machineconfiguration.openshift.io/v1
+kind: ContainerRuntimeConfig
+metadata:
+  name: crio-additional-stores
+spec:
+  machineConfigPoolSelector:
+    matchLabels:
+      pools.operator.machineconfiguration.openshift.io/worker: ""
+  containerRuntimeConfig:
+    additionalArtifactStores:
+    - path: /mnt/ssd-artifacts
+    - path: /mnt/nfs-shared-artifacts
+    additionalImageStores:
+    - path: /mnt/nfs-image-cache
+    - path: /mnt/ssd-images
+    additionalLayerStores:
+    - path: /var/lib/stargz-store
+----
+where:
++
+--
+`spec.machineConfigPoolSelector.matchLabels`:: Specifies a label associated with the nodes that you want to update.
+`spec.containerRuntimeConfig.additionalArtifactStores.path`:: Optional: Specifies the path to the directory that contains OCI artifacts. CRI-O searches for content in an `artifacts/` subdirectory within this path. You can specify up to 10 directories.
+`spec.containerRuntimeConfig.additionalImageStores.path`:: Optional: Specifies the path to an NFS share or other location that contains pre-populated container images. You can specify up to 10 directories.
+`spec.containerRuntimeConfig.additionalLayerStores.path`:: Optional: Specifies the path to the directory that contains container image layers in a lazy-pull-compatible format. You can specify up to 5 directories.
+--
++
+The specified path must meet the following criteria:
++
+--
+* Contains between 1 and 256 characters
+* Is an absolute path, starting with the `/` character
+* Contains only the following characters: `a-z`, `A-Z`, `0-9`, `/`, `.`, `_`, and `-`
+* Cannot contain consecutive forward slashes
+--
++
+You can configure any combination of these three additional CRI-O storage locations.
++
+For a layer store, the MCO automatically appends the `:ref` suffix to the path when writing to the `storage.conf` file. This suffix switches the container storage library from storing actual image layers (blobs) to storing references (pointers) to where those layers can be found, which is required for the lazy-pulling plugins. You do not need to include the suffix in the `ContainerRuntimeConfig` path.
++
+[NOTE]
+====
+If a path does not exist or is inaccessible at runtime, CRI-O generates a warning and continues with the remaining stores. The default storage location is always used as a fallback.
+====
+
+. Create the `ContainerRuntimeConfig` CR by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name>.yaml
+----
++
+Replace `<file_name>` with the name of the YAML file.
++
+After you configure any of these new storage locations, the Machine Config Operator (MCO) reboots the affected nodes with the new configuration.
+
+.Verification
+
+. After the nodes have returned to the `Ready` status, check that the new stores have been added to the node configuration:
+
+.. Start a debug pod by running the following command:
++
+[source,terminal]
+----
+$ oc debug node/<node_name>
+----
++
+where `<node_name>` specifies the name of one of the nodes in the affected machine config pool.
+
+.. 
Set `/host` as the root directory within the debug shell by running the following command: ++ +[source,terminal] +---- +sh-5.1# chroot /host +---- + +** For an artifact store, review the contents of the `/etc/crio/crio.conf.d/01-ctrcfg-additionalArtifactStores` file by running the following command: ++ +[source,terminal] +---- +sh-5.1# cat /etc/crio/crio.conf.d/01-ctrcfg-additionalArtifactStores +---- ++ +.Example output +[source,terminal] +---- +[crio] + [crio.runtime] + additional_artifact_stores = ["/mnt/ssd-artifacts", "/mnt/nfs-shared-artifacts"] +---- + +** For an image store, review the contents of the `/etc/containers/storage.conf` file by running the following command: ++ +[source,terminal] +---- +sh-5.1# cat /etc/containers/storage.conf +---- ++ +.Example output +[source,terminal] +---- +[storage] + [storage.options] + additionalimagestores = ["/mnt/nfs-image-cache", "/mnt/ssd-images"] +---- + +** For a layer store, review the contents of the `/etc/containers/storage.conf` file by running the following command: ++ +[source,terminal] +---- +sh-5.1# cat /etc/containers/storage.conf +---- ++ +.Example output +[source,terminal] +---- +[storage] + [storage.options] + additionallayerstores = ["/var/lib/stargz-store:ref"] +---- diff --git a/nodes/nodes/nodes-nodes-additional-crio-storage.adoc b/nodes/nodes/nodes-nodes-additional-crio-storage.adoc new file mode 100644 index 000000000000..94b46282e062 --- /dev/null +++ b/nodes/nodes/nodes-nodes-additional-crio-storage.adoc @@ -0,0 +1,30 @@ +:_mod-docs-content-type: ASSEMBLY +[id="nodes-nodes-additional-crio-storage"] += Additional CRI-O storage locations for faster container startup +include::_attributes/common-attributes.adoc[] +:context: nodes-nodes-additional-crio-storage + +toc::[] + +[role="_abstract"] +To reduce application startup time, make your applications run more efficiently, and configure lazy pulling, you can configure additional storage locations for the CRI-O container engine. + +Fields in the `ContainerRuntimeConfig` custom resource (CR) let you specify where CRI-O stores and resolves container image layers, complete container images, and OCI artifacts. 
+ +:FeatureName: Using additional CRI-O storage locations +include::snippets/technology-preview.adoc[] + +include::modules/nodes-nodes-additional-crio-storage-about.adoc[leveloffset=+1] +include::modules/nodes-nodes-additional-crio-storage-configuring.adoc[leveloffset=+1] + +== Additional resources + +* link:https://github.com/containerd/stargz-snapshotter[Stargz Store plugin] +* link:https://github.com/containerd/stargz-snapshotter/blob/main/docs/INSTALL.md[Install Stargz Snapshotter and Stargz Store] +* link:https://github.com/containers/nydus-storage-plugin[Nydus Storage Plugin] +* link:https://github.com/containerd/stargz-snapshotter/blob/main/docs/estargz.md[eStargz format] +* link:https://nydus.dev/[Nydus format] +* xref:../../nodes/jobs/nodes-pods-daemonsets.adoc#nodes-pods-daemonsets[Running background tasks on nodes automatically with daemon sets] +* xref:../../machine_configuration/machine-configs-configure.adoc#machine-configs-configure[Using machine config objects to configure nodes] +* xref:../../machine_configuration/mco-coreos-layering.adoc#mco-coreos-layering[Image mode for OpenShift] + From 000e68d79768d1959089be5c082a2835b10a69c1 Mon Sep 17 00:00:00 2001 From: Ashleigh Brennan Date: Fri, 8 May 2026 12:34:43 -0500 Subject: [PATCH 09/17] Fix ConceptLink warnings in virt/support Move cross-references and links to Additional resources section in the assembly file to resolve ConceptLink warnings. Links from included modules are consolidated in the assembly's Additional resources section. Changes: - Moved 12 links/xrefs across 1 assembly and 5 modules - Replaced links/xrefs with plain text in module content - Added all links to assembly's Additional resources section - Modules do NOT have Additional resources sections (only assemblies have them) - Preserved all empty lines and formatting Link text improvements: - Used exact heading titles for xrefs (e.g., "Submitting a support case") - Used gerund forms for procedures (e.g., "Modifying retention time...", "Configuring...") - Proper capitalization and removed leading articles - Matched URL fragments to actual heading titles where possible Files modified: - virt/support/virt-collecting-virt-data.adoc (added 12 links to Additional resources) - modules/virt-collecting-data-about-your-environment.adoc (removed 6 links from prose) - modules/virt-collecting-data-about-vms.adoc (removed 1 link from prose) - modules/virt-generating-a-vm-memory-dump.adoc (removed 1 link from prose) - modules/virt-support-create-jira-issue.adoc (removed 1 link from prose) - modules/virt-support-submit-support-case.adoc (removed 1 link from prose) Co-Authored-By: Claude Sonnet 4.5 --- modules/virt-collecting-data-about-vms.adoc | 2 +- ...virt-collecting-data-about-your-environment.adoc | 12 ++++++------ modules/virt-generating-a-vm-memory-dump.adoc | 2 +- modules/virt-support-create-jira-issue.adoc | 2 +- modules/virt-support-submit-support-case.adoc | 2 +- virt/support/virt-collecting-virt-data.adoc | 13 ++++++++++++- 6 files changed, 22 insertions(+), 11 deletions(-) diff --git a/modules/virt-collecting-data-about-vms.adoc b/modules/virt-collecting-data-about-vms.adoc index f6291cef8463..314b13c20a36 100644 --- a/modules/virt-collecting-data-about-vms.adoc +++ b/modules/virt-collecting-data-about-vms.adoc @@ -14,7 +14,7 @@ Collecting data about malfunctioning virtual machines (VMs) minimizes the time r * For Linux VMs, you have installed the latest QEMU guest agent. * For Windows VMs, you have: ** Recorded the Windows patch update details. 
-** link:https://access.redhat.com/solutions/6957701[Installed the latest VirtIO drivers]. +** Installed the latest VirtIO drivers. ** Installed the latest QEMU guest agent. ** If Remote Desktop Protocol (RDP) is enabled, you have connected by using the desktop viewer to determine whether there is a problem with the connection software. diff --git a/modules/virt-collecting-data-about-your-environment.adoc b/modules/virt-collecting-data-about-your-environment.adoc index 484709ad8c30..b36b6a65e9b6 100644 --- a/modules/virt-collecting-data-about-your-environment.adoc +++ b/modules/virt-collecting-data-about-your-environment.adoc @@ -12,13 +12,13 @@ Collecting data about your environment minimizes the time required to analyze an .Prerequisites //link needs to be added for HCP when available ifdef::openshift-dedicated,openshift-rosa[] -* You have link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_user_workload_monitoring/storing-and-recording-data-uwm#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data-uwm[set the retention time for Prometheus metrics data] to a minimum of seven days. -* You have link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_user_workload_monitoring/storing-and-recording-data-uwm#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data-uwm[configured the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox] so that they can be viewed and persisted outside the cluster. +* You have set the retention time for Prometheus metrics data to a minimum of seven days. +* You have configured the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster. endif::openshift-dedicated,openshift-rosa[] ifndef::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] -* You have link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/storing-and-recording-data#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data[set the retention time for Prometheus metrics data] to a minimum of seven days. -* You have link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/configuring-alerts-and-notifications[configured the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox] so that they can be viewed and persisted outside the cluster. +* You have set the retention time for Prometheus metrics data to a minimum of seven days. +* You have configured the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox so that they can be viewed and persisted outside the cluster. endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] * You have recorded the exact number of affected nodes and virtual machines. @@ -27,10 +27,10 @@ endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] // must-gather not supported for ROSA/OSD, per Dustin Row ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[] . Collect must-gather data for the cluster. -. 
link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/latest/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Collect must-gather data for {rh-storage-first}], if necessary. +. Collect must-gather data for {rh-storage-first}, if necessary. . Collect must-gather data for {VirtProductName}. endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] -. link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/accessing_metrics/accessing-metrics-as-an-administrator#querying-metrics-for-all-projects-with-mon-dashboard_accessing-metrics-as-an-administrator[Collect Prometheus metrics for the cluster]. +. Collect Prometheus metrics for the cluster. endif::openshift-rosa-hcp[] //link needs to be added for HCP when available diff --git a/modules/virt-generating-a-vm-memory-dump.adoc b/modules/virt-generating-a-vm-memory-dump.adoc index d9bfa60232c4..4ef6ce12f7db 100644 --- a/modules/virt-generating-a-vm-memory-dump.adoc +++ b/modules/virt-generating-a-vm-memory-dump.adoc @@ -44,7 +44,7 @@ $ virtctl memory-dump download --output= . Attach the memory dump to a Red Hat Support case. + -Alternatively, you can inspect the memory dump, for example by using link:https://github.com/volatilityfoundation/volatility3[the volatility3 tool]. +Alternatively, you can inspect the memory dump, for example by using the volatility3 tool. . Optional: Remove the memory dump: + diff --git a/modules/virt-support-create-jira-issue.adoc b/modules/virt-support-create-jira-issue.adoc index 3489ddd297b8..171d8666eb09 100644 --- a/modules/virt-support-create-jira-issue.adoc +++ b/modules/virt-support-create-jira-issue.adoc @@ -13,7 +13,7 @@ To report an issue with your environment to Red{nbsp}Hat Support, create a Jira . Log in to Red Hat Atlassian Jira. -. Click the following link to open a *Create Issue* page: link:https://redhat.atlassian.net/secure/CreateIssue.jspa[Create issue]. +. Access the *Create Issue* page. . Select {VirtProductName} (CNV) as the *Project*. diff --git a/modules/virt-support-submit-support-case.adoc b/modules/virt-support-submit-support-case.adoc index e73f8ae5279d..f91ff011232f 100644 --- a/modules/virt-support-submit-support-case.adoc +++ b/modules/virt-support-submit-support-case.adoc @@ -9,4 +9,4 @@ [role="_abstract"] Submit a support case to resolve a cluster issue that is affecting the ability of {VirtProductName} to function properly in your environment. -You can submit a support case to Red{nbsp}Hat Support by using the link:https://access.redhat.com/support/cases/#/case/list[Customer Support] page. Include data that you collected about your issue with your support request. \ No newline at end of file +You can submit a support case to Red{nbsp}Hat Support by using the Customer Support page. Include data that you collected about your issue with your support request. 
\ No newline at end of file diff --git a/virt/support/virt-collecting-virt-data.adoc b/virt/support/virt-collecting-virt-data.adoc index 19b8ab5b0f85..ffc291c258d6 100644 --- a/virt/support/virt-collecting-virt-data.adoc +++ b/virt/support/virt-collecting-virt-data.adoc @@ -7,7 +7,7 @@ include::_attributes/common-attributes.adoc[] toc::[] [role="_abstract"] -When you submit a xref:../../support/getting-support.adoc#support-submitting-a-case_getting-support[support case] to Red{nbsp}Hat Support, it is helpful to provide debugging information for {product-title} and {VirtProductName} by using the following tools: +When you submit a support case to Red{nbsp}Hat Support, it is helpful to provide debugging information for {product-title} and {VirtProductName} by using the following tools: // must-gather not supported for ROSA/OSD, per Dustin Row ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[] @@ -51,4 +51,15 @@ endif::openshift-dedicated,openshift-rosa[] * xref:../../virt/managing_vms/virt-install-virtio-drivers-on-windows-vms.adoc#virt-installing-virtio-drivers-existing-windows_virt-install-virtio-drivers-on-windows-vms[Installing VirtIO drivers from a SATA CD drive on an existing Windows VM] * xref:../../virt/managing_vms/virt-accessing-vm-consoles.adoc#virt-connecting-desktop-viewer-web_virt-accessing-vm-consoles[Connect to the desktop viewer by using the web console] * xref:../../virt/support/virt-collecting-virt-data.adoc#virt-generating-a-vm-memory-dump_virt-collecting-virt-data[Collect memory dumps from VMs] +* xref:../../support/getting-support.adoc#support-submitting-a-case_getting-support[Submitting a support case] +* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_user_workload_monitoring/storing-and-recording-data-uwm#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data-uwm[Modifying retention time and size for Prometheus metrics data] +* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_user_workload_monitoring/storing-and-recording-data-uwm#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data-uwm[Configuring the Alertmanager to capture relevant alerts and to send alert notifications to a dedicated mailbox] +* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/storing-and-recording-data#modifying-retention-time-and-size-for-prometheus-metrics-data_storing-and-recording-data[Modifying retention time and size for Prometheus metrics data] +* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/configuring_core_platform_monitoring/configuring-alerts-and-notifications[Configuring alerts and notifications] +* link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/latest/html-single/troubleshooting_openshift_data_foundation/index#downloading-log-files-and-diagnostic-information_rhodf[Downloading log files and diagnostic information] +* link:https://docs.redhat.com/en/documentation/monitoring_stack_for_red_hat_openshift/4.21/html/accessing_metrics/accessing-metrics-as-an-administrator#querying-metrics-for-all-projects-with-mon-dashboard_accessing-metrics-as-an-administrator[Querying metrics for all projects with the monitoring dashboard] +* link:https://access.redhat.com/solutions/6957701[Installing the latest VirtIO drivers] +* 
link:https://github.com/volatilityfoundation/volatility3[Volatility3 tool] +* link:https://access.redhat.com/support/cases/#/case/list[Customer Support] +* link:https://redhat.atlassian.net/secure/CreateIssue.jspa[Create issue] From 20f031df0320373f681566ae330df3b2d0cbc86a Mon Sep 17 00:00:00 2001 From: Roger Heslop Date: Fri, 8 May 2026 13:22:19 -0500 Subject: [PATCH 10/17] Fix asciidoc rendering error --- modules/virt-booting-vms-uefi-mode.adoc | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/modules/virt-booting-vms-uefi-mode.adoc b/modules/virt-booting-vms-uefi-mode.adoc index 19845a409334..0c7cd714b018 100644 --- a/modules/virt-booting-vms-uefi-mode.adoc +++ b/modules/virt-booting-vms-uefi-mode.adoc @@ -15,9 +15,8 @@ You can configure a virtual machine to boot in UEFI mode by editing the `Virtual .Procedure -. Edit or create a `VirtualMachine` manifest file. Use the `spec.firmware.bootloader` stanza to configure UEFI mode. +. To boot a virtual machine (VM) in UEFI mode with secure boot active, edit or create a `VirtualMachine` manifest file. Use the `spec.firmware.bootloader` stanza to configure UEFI mode: + -Booting in UEFI mode with secure boot active: [source,yaml] ---- apiversion: kubevirt.io/v1 From 8fe968e639d72f4aff5cfa306b80d0d114c1f1a2 Mon Sep 17 00:00:00 2001 From: Jake Berger Date: Tue, 28 Apr 2026 11:57:33 -0400 Subject: [PATCH 11/17] vale and dita checks for ROSA getting started --- adding_service_cluster/adding-service.adoc | 1 + architecture/rosa-architecture-models.adoc | 1 + ...ted-configure-an-idp-and-grant-access.adoc | 1 + ...rosa-getting-started-configure-an-idp.adoc | 29 +++-- ...ing-started-create-cluster-admin-user.adoc | 11 +- ...ing-started-creating-cluster-overview.adoc | 14 ++ modules/rosa-getting-started-enable-rosa.adoc | 6 +- modules/rosa-getting-started-learn.adoc | 1 + .../rosa-getting-started-prerequisites.adoc | 18 +++ ...-quickstart-creating-cluster-overview.adoc | 16 +++ modules/rosa-quickstart-prerequisites.adoc | 18 +++ ...rosa-sts-deployment-workflow-overview.adoc | 20 +++ rosa_architecture/rosa-understanding.adoc | 15 +-- .../rosa-managing-worker-nodes.adoc | 18 ++- .../rosa-getting-started.adoc | 100 ++++++-------- .../rosa-quickstart-guide-ui.adoc | 123 ++++++------------ .../rosa-sts-getting-started-workflow.adoc | 30 ++--- .../rosa-sts-creating-a-cluster-quickly.adoc | 9 +- ...reating-a-cluster-with-customizations.adoc | 2 +- .../rosa-sts-required-aws-service-quotas.adoc | 5 +- .../rosa-sts-setting-up-environment.adoc | 14 +- 21 files changed, 235 insertions(+), 217 deletions(-) create mode 100644 modules/rosa-getting-started-creating-cluster-overview.adoc create mode 100644 modules/rosa-getting-started-prerequisites.adoc create mode 100644 modules/rosa-quickstart-creating-cluster-overview.adoc create mode 100644 modules/rosa-quickstart-prerequisites.adoc create mode 100644 modules/rosa-sts-deployment-workflow-overview.adoc diff --git a/adding_service_cluster/adding-service.adoc b/adding_service_cluster/adding-service.adoc index 363bf2106eda..105ea531a281 100644 --- a/adding_service_cluster/adding-service.adoc +++ b/adding_service_cluster/adding-service.adoc @@ -6,6 +6,7 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] +[role="_abstract"] You can add, access, and remove add-on services for your {product-title} ifdef::openshift-rosa[] (ROSA) diff --git a/architecture/rosa-architecture-models.adoc b/architecture/rosa-architecture-models.adoc index 8c0e0f15d642..6f69790f9dad 100644 --- 
a/architecture/rosa-architecture-models.adoc
+++ b/architecture/rosa-architecture-models.adoc
@@ -7,6 +7,7 @@ include::_attributes/common-attributes.adoc[]
 
toc::[]
 
+[role="_abstract"]
{product-title} has a classic architecture cluster topology meaning the control plane and the worker nodes are deployed in the customer's AWS account.
 
include::modules/rosa-hcp-classic-comparison.adoc[leveloffset=+1]
diff --git a/modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc b/modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc
index c67d9976a352..22e437250fe9 100644
--- a/modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc
+++ b/modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc
@@ -7,6 +7,7 @@
[id="rosa-getting-started-configure-an-idp-and-grant-access_{context}"]
= Configuring an identity provider and granting cluster access
 
+[role="_abstract"]
{product-title} (ROSA) includes a built-in OAuth server. After your ROSA cluster is created, you must configure OAuth to use an identity provider. You can then add members to your configured identity provider to grant them access to your cluster.
 
You can also grant the identity provider users with `cluster-admin` or `dedicated-admin` privileges as required.
diff --git a/modules/rosa-getting-started-configure-an-idp.adoc b/modules/rosa-getting-started-configure-an-idp.adoc
index 5789d95f7e1a..b616d4962b26 100644
--- a/modules/rosa-getting-started-configure-an-idp.adoc
+++ b/modules/rosa-getting-started-configure-an-idp.adoc
@@ -14,6 +14,7 @@ ifeval::["{context}" == "rosa-quickstart"]
:quickstart:
endif::[]
 
+[role="_abstract"]
You can configure different identity provider types for your {product-title} (ROSA) cluster. Supported types include GitHub, GitHub Enterprise, GitLab, Google, LDAP, OpenID Connect and htpasswd identity providers.
 
[IMPORTANT]
@@ -40,13 +41,12 @@ endif::[]
. If you do not have an existing GitHub organization to use for identity provisioning for your ROSA cluster, create one. Follow the steps in the link:https://docs.github.com/en/organizations/collaborating-with-groups-in-organizations/creating-a-new-organization-from-scratch[GitHub documentation].
 
. Configure a GitHub identity provider for your cluster that is restricted to the members of your GitHub organization.
-.. Configure an identity provider using the interactive mode:
+.. Configure an identity provider using the interactive mode, replacing `<cluster_name>` with the name of your cluster:
+
[source,terminal]
----
-$ rosa create idp --cluster=<cluster_name> --interactive <1>
+$ rosa create idp --cluster=<cluster_name> --interactive
----
-<1> Replace `<cluster_name>` with the name of your cluster.
+
.Example output
[source,terminal]
----
@@ -56,40 +56,43 @@ Any optional fields can be left empty and a default will be selected.
? Type of identity provider: github
? Identity provider name: github-1
? Restrict to members of: organizations
-? GitHub organizations: <github_org_name> <1>
+? GitHub organizations: <github_org_name>
? To use GitHub as an identity provider, you must first register the application:
  - Open the following URL: https://github.com/organizations/<github_org_name>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.<cluster_name>/<random_hash>.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=<cluster_name>&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.<cluster_name>/<random_hash>.p1.openshiftapps.com
  - Click on 'Register application'
...
----
-<1> Replace `<github_org_name>` with the name of your GitHub organization.

-.. Follow the URL in the output and select *Register application* to register a new OAuth application in your GitHub organization. By registering the application, you enable the OAuth server that is built into ROSA to authenticate members of your GitHub organization into your cluster.
+.. Follow the URL in the output and select *Register application* to register a new OAuth application in your GitHub organization, replacing `<github_org_name>` with the name of your GitHub organization. By registering the application, you enable the OAuth server that is built into ROSA to authenticate members of your GitHub organization into your cluster.
+
[NOTE]
====
The fields in the *Register a new OAuth application* GitHub form are automatically filled with the required values through the URL defined by the ROSA CLI.
====
-.. Use the information from your GitHub OAuth application page to populate the remaining `rosa create idp` interactive prompts.
+.. Use the information from your GitHub OAuth application page to populate the remaining `rosa create idp` interactive prompts:
+
.Continued example output
[source,terminal]
----
...
-? Client ID: <github_client_id> <1>
-? Client Secret: [? for help] <github_client_secret> <2>
+? Client ID: <github_client_id>
+? Client Secret: [? for help] <github_client_secret>
? GitHub Enterprise Hostname (optional):
-? Mapping method: claim <3>
+? Mapping method: claim
I: Configuring IDP for cluster '<cluster_name>'
I: Identity Provider 'github-1' has been created. It will take up to 1 minute for this configuration to be enabled. To add cluster administrators, see 'rosa grant user --help'. To login into the console, open https://console-openshift-console.apps.<cluster_name>.<random_hash>.p1.openshiftapps.com and click on github-1.
----
-<1> Replace `<github_client_id>` with the client ID for your GitHub OAuth application.
-<2> Replace `<github_client_secret>` with a client secret for your GitHub OAuth application.
-<3> Specify `claim` as the mapping method.
+
+--
+where:
+
+`github_client_id`:: Specifies the client ID for your GitHub OAuth application.
+`github_client_secret`:: Specifies a client secret for your GitHub OAuth application.
+`claim`:: Specifies the mapping method.
+--
 
[NOTE]
====
It might take approximately two minutes for the identity provider configuration to become active. If you have configured a `cluster-admin` user, you can watch the OAuth pods redeploy with the updated configuration by running `oc get pods -n openshift-authentication --watch`.
diff --git a/modules/rosa-getting-started-create-cluster-admin-user.adoc b/modules/rosa-getting-started-create-cluster-admin-user.adoc
index 1fa872abe0a5..5a103dcd86e1 100644
--- a/modules/rosa-getting-started-create-cluster-admin-user.adoc
+++ b/modules/rosa-getting-started-create-cluster-admin-user.adoc
@@ -14,6 +14,7 @@ ifeval::["{context}" == "rosa-quickstart"]
:quickstart:
endif::[]
 
+[role="_abstract"]
Before configuring an identity provider, you can create a user with `cluster-admin` privileges for immediate access to your {product-title} (ROSA) cluster.
 
[NOTE]
@@ -32,13 +33,12 @@ endif::[]
 
.Procedure
 
-. Create a cluster administrator user:
+. Create a cluster administrator user, replacing `<cluster_name>` with the name of your cluster:
+
[source,terminal]
----
-$ rosa create admin --cluster=<cluster_name> <1>
+$ rosa create admin --cluster=<cluster_name>
----
-<1> Replace `<cluster_name>` with the name of your cluster.
+
.Example output
[source,terminal]
----
@@ -60,13 +60,12 @@ It might take approximately one minute for the `cluster-admin` user to become ac
 
ifdef::getting-started[]
. Log in to the cluster through the CLI:
 
-.. Run the command provided in the output of the preceding step to log in:
+.. Run the command provided in the output of the preceding step to log in, replacing `<api_url>` and `<cluster_admin_password>` with the API URL and cluster administrator password for your environment:
+
[source,terminal]
----
-$ oc login <api_url> --username cluster-admin --password <cluster_admin_password> <1>
+$ oc login <api_url> --username cluster-admin --password <cluster_admin_password>
----
-<1> Replace `<api_url>` and `<cluster_admin_password>` with the API URL and cluster administrator password for your environment.
.. Verify if you are logged in to the ROSA cluster as the `cluster-admin` user:
+
[source,terminal]
diff --git a/modules/rosa-getting-started-creating-cluster-overview.adoc b/modules/rosa-getting-started-creating-cluster-overview.adoc
new file mode 100644
index 000000000000..663433117c74
--- /dev/null
+++ b/modules/rosa-getting-started-creating-cluster-overview.adoc
@@ -0,0 +1,14 @@
+// Module included in the following assemblies:
+//
+// * rosa_getting_started/rosa-getting-started.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="rosa-getting-started-creating-a-cluster_{context}"]
+= Creating a ROSA cluster with STS
+
+[role="_abstract"]
+Choose from one of the following methods to deploy a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS). In each scenario, you can deploy your cluster by using {cluster-manager-first} or the ROSA CLI (`rosa`):
+
+* *Creating a ROSA cluster with STS using the default options*: You can create a ROSA cluster with STS quickly by using the default options and automatic STS resource creation.
+
+* *Creating a ROSA cluster with STS using customizations*: You can create a ROSA cluster with STS using customizations. You can also choose between the `auto` and `manual` modes when creating the required STS resources.
diff --git a/modules/rosa-getting-started-enable-rosa.adoc b/modules/rosa-getting-started-enable-rosa.adoc
index d90849e98a37..01b42bc2b9db 100644
--- a/modules/rosa-getting-started-enable-rosa.adoc
+++ b/modules/rosa-getting-started-enable-rosa.adoc
@@ -30,20 +30,20 @@ Consider using a dedicated AWS account to run production clusters. If you are us
+
The *Verify ROSA prerequisites* page opens.
 
-. Under *ROSA enablement*, ensure that a green check mark and `You previously enabled ROSA` are displayed.
+. Under *ROSA enablement*, ensure that a green checkmark and `You previously enabled ROSA` are displayed.
+
If not, follow these steps:
 
.. Select the checkbox beside `I agree to share my contact information with Red{nbsp}Hat`.
 
.. Click *Enable ROSA*.
+
-After a short wait, a green check mark and `You enabled ROSA` message are displayed.
+After a short wait, a green checkmark and `You enabled ROSA` message are displayed.
 
. Under *Service Quotas*, ensure that a green check and `Your quotas meet the requirements for ROSA` are displayed.
+
If you see `Your quotas don't meet the minimum requirements`, take note of the quota type and the minimum listed in the error message. See the Amazon documentation on link:https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html[requesting a quota increase] for guidance. It may take several hours for Amazon to approve your quota request.
 
-. Under *ELB service-linked role*, ensure that a green check mark and `AWSServiceRoleForElasticLoadBalancing already exists` are displayed.
+. Under *ELB service-linked role*, ensure that a green checkmark and `AWSServiceRoleForElasticLoadBalancing already exists` are displayed.
 
. Click *Continue to Red{nbsp}Hat*.
+
diff --git a/modules/rosa-getting-started-learn.adoc b/modules/rosa-getting-started-learn.adoc
index 6b5bafebfcc4..5aa99e194a5b 100644
--- a/modules/rosa-getting-started-learn.adoc
+++ b/modules/rosa-getting-started-learn.adoc
@@ -5,4 +5,5 @@
[id="rosa-getting-started-learn_{context}"]
= Getting started with {product-title}
 
+[role="_abstract"]
Use the following sections to find content to help you learn about and use {product-title}.
\ No newline at end of file
diff --git a/modules/rosa-getting-started-prerequisites.adoc b/modules/rosa-getting-started-prerequisites.adoc
new file mode 100644
index 000000000000..e8234859507a
--- /dev/null
+++ b/modules/rosa-getting-started-prerequisites.adoc
@@ -0,0 +1,18 @@
+// Module included in the following assemblies:
+//
+// * rosa_getting_started/rosa-getting-started.adoc
+
+:_mod-docs-content-type: REFERENCE
+[id="rosa-getting-started-prerequisites_{context}"]
+= Prerequisites
+
+[role="_abstract"]
+* You reviewed the introduction to {product-title} (ROSA), and the documentation on ROSA architecture models and architecture concepts.
+
+* You have read the documentation on the guidelines for planning your environment.
+// Removed as part of OSDOCS-13310, until figures are verified.
+//xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] and
+
+* You have reviewed the detailed AWS prerequisites for ROSA with STS.
+
+* You have the AWS service quotas required to run a ROSA cluster.
diff --git a/modules/rosa-quickstart-creating-cluster-overview.adoc b/modules/rosa-quickstart-creating-cluster-overview.adoc
new file mode 100644
index 000000000000..0d45ff10b6f8
--- /dev/null
+++ b/modules/rosa-quickstart-creating-cluster-overview.adoc
@@ -0,0 +1,16 @@
+// Module included in the following assemblies:
+//
+// * rosa_getting_started/rosa-quickstart-guide-ui.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="rosa-quickstart-creating-a-cluster_{context}"]
+= Creating a ROSA cluster with AWS STS using the default auto mode
+
+[role="_abstract"]
+{cluster-manager-first} is a managed service on the {hybrid-console-url} where you can install, modify, operate, and upgrade your Red{nbsp}Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard.
+
+The procedures in this document use the `auto` mode in {cluster-manager} to immediately create the required Identity and Access Management (IAM) resources by using the current AWS account. The required resources include the account-wide IAM roles and policies, cluster-specific Operator roles and policies, and OpenID Connect (OIDC) identity provider.
+
+When using the {cluster-manager} {hybrid-console-second} to create a {product-title} (ROSA) cluster that uses STS, you can select the default options to create the cluster quickly.
+
+Before you can use the {cluster-manager} {hybrid-console-second} to deploy ROSA with STS clusters, you must associate your AWS account with your Red{nbsp}Hat organization and create the required account-wide STS roles and policies.
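+
+As a sketch of this prerequisite step, assuming you have already installed the ROSA CLI (`rosa`) and logged in to your Red{nbsp}Hat and AWS accounts, you can typically create the account-wide roles and policies in `auto` mode with the following command. The `ManagedOpenShift` prefix is the default prefix and is shown only as an example:
+
+[source,terminal]
+----
+$ rosa create account-roles --mode auto --prefix ManagedOpenShift --yes
+----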
diff --git a/modules/rosa-quickstart-prerequisites.adoc b/modules/rosa-quickstart-prerequisites.adoc new file mode 100644 index 000000000000..c96aef4df587 --- /dev/null +++ b/modules/rosa-quickstart-prerequisites.adoc @@ -0,0 +1,18 @@ +// Module included in the following assemblies: +// +// * rosa_getting_started/rosa-quickstart-guide-ui.adoc + +:_mod-docs-content-type: REFERENCE +[id="rosa-getting-started-prerequisites_{context}"] += Prerequisites + +[role="_abstract"] +* You reviewed the introduction to {product-title}, and the documentation on {product-title} architecture models and concepts. + +* You have read the documentation on the guidelines for planning your environment. +// Removed as part of OSDOCS-13310, until figures are verified. +// xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] and + +* You have reviewed the detailed AWS prerequisites for ROSA with STS. + +* You have the AWS service quotas required to run a ROSA cluster. diff --git a/modules/rosa-sts-deployment-workflow-overview.adoc b/modules/rosa-sts-deployment-workflow-overview.adoc new file mode 100644 index 000000000000..7c42bb3f1699 --- /dev/null +++ b/modules/rosa-sts-deployment-workflow-overview.adoc @@ -0,0 +1,20 @@ +// Module included in the following assemblies: +// +// * rosa_getting_started/rosa-sts-getting-started-workflow.adoc + +:_mod-docs-content-type: CONCEPT +[id="rosa-sts-overview-of-the-deployment-workflow_{context}"] += Overview of the ROSA with STS deployment workflow + +[role="_abstract"] +The AWS Security Token Service (STS) is a global web service that provides short-term credentials for IAM or federated users. You can use AWS STS with {product-title} (ROSA) to allocate temporary, limited-privilege credentials for component-specific IAM roles. The service enables cluster components to make AWS API calls using secure cloud resource management practices. + +You can follow the workflow stages outlined below to set up and access a ROSA cluster that uses STS. + +. *Complete the AWS prerequisites for ROSA with STS*. To deploy a ROSA cluster with STS, your AWS account must meet the prerequisite requirements. +. *Review the required AWS service quotas*. To prepare for your cluster deployment, review the AWS service quotas that are required to run a ROSA cluster. +. *Set up the environment and install ROSA using STS*. Before you create a ROSA with STS cluster, you must enable ROSA in your AWS account, install and configure the required CLI tools, and verify the configuration of the CLI tools. You must also verify that the AWS Elastic Load Balancing (ELB) service role exists and that the required AWS resource quotas are available. +. *Create a ROSA cluster with STS quickly or create a cluster using customizations*. Use the ROSA CLI (`rosa`) or {cluster-manager-first} to create a cluster with STS. You can create a cluster quickly by using the default options, or you can apply customizations to suit the needs of your organization. +. *Access your cluster*. You can configure an identity provider and grant cluster administrator privileges to the identity provider users as required. You can also access a newly-deployed cluster quickly by configuring a `cluster-admin` user. +. *Revoke access to a ROSA cluster for a user*. You can revoke access to a ROSA with STS cluster from a user by using the ROSA CLI or the web console. +. *Delete a ROSA cluster*. You can delete a ROSA with STS cluster by using the ROSA CLI (`rosa`). 
After deleting a cluster, you can delete the STS resources by using the AWS Identity and Access Management (IAM) Console.
diff --git a/rosa_architecture/rosa-understanding.adoc b/rosa_architecture/rosa-understanding.adoc
index 946db885bd48..09e40226f6de 100644
--- a/rosa_architecture/rosa-understanding.adoc
+++ b/rosa_architecture/rosa-understanding.adoc
@@ -17,10 +17,14 @@ ROSA is a fully-managed, turnkey application platform that allows you to focus o
 
You subscribe to the service directly from your AWS account. After you create clusters, you can operate your clusters with the OpenShift web console, the ROSA CLI, or through {cluster-manager-first}.
 
-You receive OpenShift updates with new feature releases and a shared, common source for alignment with OpenShift Container Platform. ROSA supports the same versions of OpenShift as Red{nbsp}Hat OpenShift Dedicated and OpenShift Container Platform to achieve version consistency.
+You receive {OCP-short} updates with new feature releases and a shared, common source for alignment with {OCP-short}. ROSA supports the same versions of {OCP-short} as Red{nbsp}Hat OpenShift Dedicated and {product-title} to achieve version consistency.
 
image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}]
-For additional information about ROSA installation, see link:https://www.redhat.com/en/products/interactive-walkthrough/install-rosa[Installing Red{nbsp}Hat OpenShift Service on AWS (ROSA) interactive walkthrough].
+
+[id="rosa-understanding-getting-started_{context}"]
+== Getting started
+
+To get started with deploying your cluster, ensure your AWS account has met the prerequisites and you have a Red{nbsp}Hat account ready.
 
//[id="rosa-understanding-credential-modes_{context}"]
//== Credential modes
@@ -53,16 +57,11 @@ For additional information about ROSA installation, see link:https://www.redhat.
 
include::modules/rosa-sdpolicy-am-billing.adoc[leveloffset=+1]
 
-[id="rosa-understanding-getting-started_{context}"]
-== Getting started
-
-To get started with deploying your cluster, ensure your AWS account has met the prerequisites, you have a Red{nbsp}Hat account ready, and follow the procedures outlined in xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Getting started with {product-title}].
- [role="_additional-resources"] [id="additional-resources_{context}"] == Additional resources -* xref:../ocm/ocm-overview.adoc#ocm-overview[OpenShift Cluster Manager] +* xref:../ocm/ocm-overview.adoc#ocm-overview[{cluster-manager}] //* xref ../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] * xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Getting started with {product-title}] * link:https://aws.amazon.com/rosa/pricing/[AWS pricing page] \ No newline at end of file diff --git a/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc b/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc index bf3f70babedc..eb2f046a9121 100644 --- a/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc +++ b/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc @@ -3,6 +3,7 @@ include::_attributes/attributes-openshift-dedicated.adoc[] [id="rosa-managing-worker-nodes"] = Managing compute nodes :context: rosa-managing-worker-nodes + toc::[] [role="_abstract"] @@ -15,9 +16,9 @@ You can edit machine pool configuration options such as scaling, adding node lab ifdef::openshift-rosa-hcp[] You can also create new machine pools with Capacity Reservations. -.Overview of AWS Capacity Reservations +*Overview of AWS Capacity Reservations* -If you have reserved compute capacity using link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-reservation-overview.html[AWS Capacity Reservations] for a specific instance type and Availability Zone (AZ), you can use it for your {product-title} worker nodes. Both On-Demand Capacity Reservations and Capacity Blocks for machine learning (ML) workloads are supported. +If you have reserved compute capacity using AWS Capacity Reservations for a specific instance type and Availability Zone (AZ), you can use it for your {product-title} worker nodes. Both On-Demand Capacity Reservations and Capacity Blocks for machine learning (ML) workloads are supported. Purchase and manage a Capacity Reservation directly with AWS. After reserving the capacity, add a Capacity Reservation ID to a new machine pool when you create it in your {product-title} cluster. You can also use a Capacity Reservation shared with you from another AWS account within your AWS Organization. @@ -32,7 +33,13 @@ Using Capacity Reservations on machine pools in {product-title} clusters has the * You can only add a Capacity Reservation ID to a new machine pool. * You cannot use autoscaling with Capacity Reservations if you create a machine pool using the {rosa-cli}. However, you can enable both autoscaling and Capacity Reservations on machine pools created using {cluster-manager}. -You can create a machine pool with a Capacity Reservation using either xref:../../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#creating_machine_pools_ocm_rosa-managing-worker-nodes[{cluster-manager}] or xref:../../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#creating_machine_pools_cli_capres_rosa-managing-worker-nodes[the {rosa-cli}]. +You can create a machine pool with a Capacity Reservation using either {cluster-manager} or the {rosa-cli}. 
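+
+For illustration only, the following sketch shows what a {rosa-cli} invocation might look like. The cluster name, instance type, zone, and replica count are example values, `<capacity_reservation_id>` stands for the reservation ID that you copied from AWS, and the flag that passes the reservation ID is a hypothetical name; verify the exact flag with `rosa create machinepool --help` for your CLI version:
+
+[source,terminal]
+----
+$ rosa create machinepool --cluster=mycluster \
+    --name=reserved-pool \
+    --instance-type=m5.xlarge \
+    --replicas=2 \
+    --availability-zone=us-east-1a \
+    --ec2-capacity-reservation-id=<capacity_reservation_id> # hypothetical flag; check --help
+----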
+ +[role="_additional-resources"] +.Additional resources + +* link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-reservation-overview.html[AWS Capacity Reservations] + endif::openshift-rosa-hcp[] include::modules/creating-a-machine-pool.adoc[leveloffset=+1] @@ -48,7 +55,7 @@ endif::openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources -* xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-aws-prereqs.adoc#rosa-security-groups_prerequisites[Additional custom security groups] +* xref:../../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-security-groups-custom_rosa-sts-aws-prereqs[Additional custom security groups] endif::openshift-rosa-hcp[] include::modules/configuring-machine-pool-disk-volume.adoc[leveloffset=+1] @@ -59,7 +66,7 @@ include::modules/configuring-machine-pool-disk-volume-cli.adoc[leveloffset=+2] ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources -* xref:../../cli_reference/rosa_cli/rosa-cli-commands.adoc#rosa-create-machinepool[`rosa create machinepool` command] +* xref:../../cli_reference/rosa_cli/rosa-cli-commands.adoc#_rosa_create_machinepool[`rosa create machinepool` command] endif::openshift-rosa-hcp[] include::modules/deleting-machine-pools.adoc[leveloffset=+1] @@ -92,6 +99,7 @@ endif::openshift-rosa-hcp[] * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[About autoscaling] * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[Enabling autoscaling] * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#nodes-disabling-autoscaling-nodes[Disabling autoscaling] +* link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-reservation-overview.html[AWS Capacity Reservations] ifdef::openshift-rosa[] * xref:../../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-service-definition[{rosa-classic} Service Definition] endif::openshift-rosa[] diff --git a/rosa_getting_started/rosa-getting-started.adoc b/rosa_getting_started/rosa-getting-started.adoc index 150b37dedae1..5e586a5f461d 100644 --- a/rosa_getting_started/rosa-getting-started.adoc +++ b/rosa_getting_started/rosa-getting-started.adoc @@ -6,91 +6,65 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] +[role="_abstract"] +Create a {product-title} cluster, grant user access, deploy your first application, and learn how to revoke user access and delete your cluster. + [NOTE] ==== -If you are looking for a quickstart guide for ROSA, see xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-quickstart-guide-ui[{product-title} quickstart guide]. +If you need a quick start guide, see the {product-title} quick start guide. ==== -Follow this getting started document to create a {product-title} (ROSA) cluster, grant user access, deploy your first application, and learn how to revoke user access and delete your cluster. - -You can create a ROSA cluster either with or without the AWS Security Token Service (STS). The procedures in this document enable you to create a cluster that uses AWS STS. For more information about using AWS STS with ROSA clusters, see xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding-aws-sts_rosa-understanding[Using the AWS Security Token Service]. 
- -[id="rosa-getting-started-prerequisites_{context}"] -== Prerequisites - -* You reviewed the xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[introduction to {product-title} (ROSA)], and the documentation on ROSA xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[architecture models] and xref:../architecture/architecture.adoc#architecture[architecture concepts]. - -* You have read the documentation on the xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[guidelines for planning your environment]. -// Removed as part of OSDOCS-13310, until figures are verified. -//xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] and - -* You have reviewed the detailed xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for ROSA with STS]. - -* You have the xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas that are required to run a ROSA cluster]. - -include::modules/rosa-getting-started-environment-setup.adoc[leveloffset=+1] -include::modules/rosa-getting-started-enable-rosa.adoc[leveloffset=+2] -include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+2] - -[id="rosa-getting-started-creating-a-cluster"] -== Creating a ROSA cluster with STS - -Choose from one of the following methods to deploy a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS). In each scenario, you can deploy your cluster by using {cluster-manager-first} or the ROSA CLI (`rosa`): - -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[*Creating a ROSA cluster with STS using the default options*]: You can create a ROSA cluster with STS quickly by using the default options and automatic STS resource creation. +You can create a {product-title} cluster that uses AWS Security Token Service (STS). -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[*Creating a ROSA cluster with STS using customizations*]: You can create a ROSA cluster with STS using customizations. You can also choose between the `auto` and `manual` modes when creating the required STS resources. 
+include::modules/rosa-getting-started-prerequisites.adoc[leveloffset=+1] -.Additional resources - -* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-creating-cluster.adoc#rosa-creating-cluster[Creating a ROSA cluster without AWS STS] -* xref:../rosa_install_access_delete_clusters/rosa-aws-privatelink-creating-cluster.adoc#rosa-aws-privatelink-creating-cluster[Creating an AWS PrivateLink cluster on ROSA] -* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle] +include::modules/rosa-getting-started-creating-cluster-overview.adoc[leveloffset=+1] +include::modules/rosa-getting-started-environment-setup.adoc[leveloffset=+2] +include::modules/rosa-getting-started-enable-rosa.adoc[leveloffset=+3] +include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+3] include::modules/rosa-getting-started-create-cluster-admin-user.adoc[leveloffset=+1] -.Additional resource - -* For steps to log in to the ROSA web console, see xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started-access-cluster-web-console_rosa-getting-started[Accessing a cluster through the web console] - include::modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc[leveloffset=+1] include::modules/rosa-getting-started-configure-an-idp.adoc[leveloffset=+2] - -.Additional resource - -* For detailed steps to configure each of the supported identity provider types, see xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers for STS] - include::modules/rosa-getting-started-grant-user-access.adoc[leveloffset=+2] include::modules/rosa-getting-started-grant-admin-privileges.adoc[leveloffset=+2] -[role="_additional-resources"] -.Additional resources - -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-cluster-admin-role_rosa-service-definition[Cluster administration role] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-customer-admin-user_rosa-service-definition[Customer administrator user] - include::modules/rosa-getting-started-access-cluster-web-console.adoc[leveloffset=+1] include::modules/deploy-app.adoc[leveloffset=+1] + include::modules/rosa-getting-started-revoking-admin-privileges-and-user-access.adoc[leveloffset=+1] include::modules/rosa-getting-started-revoke-admin-privileges.adoc[leveloffset=+2] include::modules/rosa-getting-started-revoke-user-access.adoc[leveloffset=+2] -include::modules/rosa-getting-started-deleting-a-cluster.adoc[leveloffset=+1] -[id="next-steps_{context}"] -== Next steps - -* xref:../adding_service_cluster/adding-service.adoc#adding-service[Adding services to a cluster using the {cluster-manager} console] -* xref:../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes] -* 
xref:../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#preparing-to-configure-the-monitoring-stack-uwm[Preparing to configure the user workload monitoring stack] +include::modules/rosa-getting-started-deleting-a-cluster.adoc[leveloffset=+1] [role="_additional-resources"] [id="additional-resources_{context}"] == Additional resources -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-overview-of-the-deployment-workflow[Understanding the ROSA with STS deployment workflow] - -* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the ROSA deployment workflow] - -* xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading ROSA Classic clusters] +* xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[Introduction to {product-title}] +* xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{product-title} architecture models] +* xref:../architecture/architecture.adoc#architecture[Architecture concepts] +* xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[Guidelines for planning your environment] +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for {product-title} with STS] +* xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas required to run a {product-title} cluster] +* xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding-aws-sts_rosa-understanding[Using the AWS Security Token Service] +* xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-quickstart-guide-ui[{product-title} quick start guide] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a {product-title} cluster with STS using the default options] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Creating a {product-title} cluster with STS using customizations] +* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-creating-cluster.adoc#rosa-creating-cluster[Creating a {product-title} cluster without AWS STS] +* xref:../rosa_install_access_delete_clusters/rosa-aws-privatelink-creating-cluster.adoc#rosa-aws-privatelink-creating-cluster[Creating an AWS PrivateLink cluster on ROSA] +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes] +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle] +* xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started-access-cluster-web-console_rosa-getting-started[Accessing a cluster through the web console] +* xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers for STS] +* 
xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-cluster-admin-role_rosa-service-definition[Cluster administration role] +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-customer-admin-user_rosa-service-definition[Customer administrator user] +* xref:../adding_service_cluster/adding-service.adoc#adding-service[Adding services to a cluster using the {cluster-manager} console] +* xref:../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes] +* xref:../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#preparing-to-configure-the-monitoring-stack-uwm[Preparing to configure the user workload monitoring stack] +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-overview-of-the-deployment-workflow[Understanding the {product-title} with STS deployment workflow] +* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the {product-title} deployment workflow] +* xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading {product-title} classic clusters] diff --git a/rosa_getting_started/rosa-quickstart-guide-ui.adoc b/rosa_getting_started/rosa-quickstart-guide-ui.adoc index 02e906a3bca0..cd69febc503c 100644 --- a/rosa_getting_started/rosa-quickstart-guide-ui.adoc +++ b/rosa_getting_started/rosa-quickstart-guide-ui.adoc @@ -6,116 +6,62 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] +[role="_abstract"] +Create a {product-title} cluster by using {cluster-manager-first} on the {hybrid-console-url}. After you create your cluster, you can grant user access, deploy your application, revoke user access, and delete your cluster. + [NOTE] ==== -If you are looking for a comprehensive getting started guide for {product-title} (ROSA), see xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Comprehensive guide to getting started with {product-title}]. For additional information on ROSA installation, see link:https://www.redhat.com/en/products/interactive-walkthrough/install-rosa[Installing Red{nbsp}Hat OpenShift Service on AWS (ROSA) interactive walkthrough]. +If you are looking for a comprehensive getting started guide for {product-title}, see the comprehensive guide to getting started with {product-title}. ==== -Follow this guide to quickly create a {product-title} (ROSA) cluster using {cluster-manager-first} on the {hybrid-console-url}, grant user access, deploy your first application, and learn how to revoke user access and delete your cluster. - -The procedures in this document enable you to create a cluster that uses AWS Security Token Service (STS). For more information about using AWS STS with ROSA clusters, see xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding-aws-sts_rosa-understanding[Using the AWS Security Token Service]. +You can create a cluster that uses AWS Security Token Service (STS). 
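+
+For example, after the CLI tools described later in this guide are set up, a command such as the following is a minimal sketch of creating an STS cluster from the terminal; `<cluster_name>` is a placeholder:
+
+[source,terminal]
+----
+$ rosa create cluster --sts --mode auto --cluster-name <cluster_name>
+----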
image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}] -[id="rosa-getting-started-prerequisites_{context}"] -== Prerequisites - -* You reviewed the xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[introduction to {product-title} (ROSA)], and the documentation on ROSA xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[architecture models] and xref:../architecture/architecture.adoc#architecture[architecture concepts]. - -* You have read the documentation on the xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[guidelines for planning your environment]. -// Removed as part of OSDOCS-13310, until figures are verified. -// xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] and - -* You have reviewed the detailed xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for ROSA with STS]. - -* You have the xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas that are required to run a ROSA cluster]. +include::modules/rosa-quickstart-prerequisites.adoc[leveloffset=+1] +include::modules/rosa-quickstart-creating-cluster-overview.adoc[leveloffset=+1] //This content is pulled from rosa-getting-started-environment-setup.adoc -include::modules/rosa-getting-started-environment-setup.adoc[leveloffset=+1] +include::modules/rosa-getting-started-environment-setup.adoc[leveloffset=+2] //This content is pulled from rosa-getting-started-enable-rosa.adoc -include::modules/rosa-getting-started-enable-rosa.adoc[leveloffset=+2] - +include::modules/rosa-getting-started-enable-rosa.adoc[leveloffset=+3] //This content is pulled from rosa-getting-started-install-configure-cli-tools -include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+2] - - -//This content is pulled from rosa-sts-creating-a-cluster-quickly.adoc -[id="rosa-quickstart-creating-a-cluster"] -== Creating a ROSA cluster with AWS STS using the default auto mode - -{cluster-manager-first} is a managed service on the {hybrid-console-url} where you can install, modify, operate, and upgrade your Red{nbsp}Hat OpenShift clusters. This service allows you to work with all of your organization’s clusters from a single dashboard. -The procedures in this document use the `auto` modes in {cluster-manager} to immediately create the required Identity and Access Management (IAM) resources using the current AWS account. The required resources include the account-wide IAM roles and policies, cluster-specific Operator roles and policies, and OpenID Connect (OIDC) identity provider. - -//This content is pulled from rosa-sts-creating-a-cluster-quickly-ocm.adoc -When using the {cluster-manager} {hybrid-console-second} to create a {product-title} (ROSA) cluster that uses the STS, you can select the default options to create the cluster quickly. - -Before you can use the {cluster-manager} {hybrid-console-second} to deploy ROSA with STS clusters, you must associate your AWS account with your Red{nbsp}Hat organization and create the required account-wide STS roles and policies. 
- +include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+3] //This content is pulled from rosa-sts-overview-of-the-default-cluster-specifications.adoc -include::modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc[leveloffset=+2] - +include::modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc[leveloffset=+3] //This content is pulled from rosa-sts-understanding-aws-account-association.adoc -include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset=+2] - +include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset=+3] //This content is pulled from rosa-sts-associating-your-aws-account.adoc -include::modules/rosa-sts-associating-your-aws-account.adoc[leveloffset=+2] - +include::modules/rosa-sts-associating-your-aws-account.adoc[leveloffset=+3] //This content is pulled from rosa-sts-creating-account-wide-sts-roles-and-policies.adoc -include::modules/rosa-sts-creating-account-wide-sts-roles-and-policies.adoc[leveloffset=+2] - +include::modules/rosa-sts-creating-account-wide-sts-roles-and-policies.adoc[leveloffset=+3] //This content is pulled from rosa-sts-creating-a-cluster-using-defaults-ocm.adoc -include::modules/rosa-sts-creating-a-cluster-using-defaults-ocm.adoc[leveloffset=+2] - -//// -.Additional resources - -* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle] -//// - +include::modules/rosa-sts-creating-a-cluster-using-defaults-ocm.adoc[leveloffset=+3] //This content is pulled from rosa-getting-started-create-cluster-admin-user.adoc include::modules/rosa-getting-started-create-cluster-admin-user.adoc[leveloffset=+1] -.Additional resource - -* For steps to log in to the ROSA web console, see xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-getting-started-access-cluster-web-console_rosa-quickstart-guide-ui[Accessing a cluster through the web console]. - - //This content is pulled from rosa-getting-started-configure-an-idp-and-grant-access.adoc include::modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc[leveloffset=+1] //This content is pulled from rosa-getting-started-configure-an-idp.adoc include::modules/rosa-getting-started-configure-an-idp.adoc[leveloffset=+2] -.Additional resource - -* For detailed steps to configure each of the supported identity provider types, see xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers for STS]. 
- - //This content is pulled from rosa-getting-started-grant-user-access.adoc include::modules/rosa-getting-started-grant-user-access.adoc[leveloffset=+2] - //This content is pulled from rosa-getting-started-grant-admin-privileges.adoc include::modules/rosa-getting-started-grant-admin-privileges.adoc[leveloffset=+2] -[role="_additional-resources"] -.Additional resources - -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-cluster-admin-role_rosa-service-definition[Cluster administration role] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-customer-admin-user_rosa-service-definition[Customer administrator user] - //This content is pulled from rosa-getting-started-access-cluster-web-console.adoc include::modules/rosa-getting-started-access-cluster-web-console.adoc[leveloffset=+1] @@ -123,35 +69,40 @@ include::modules/rosa-getting-started-access-cluster-web-console.adoc[leveloffse //This content is pulled from deploy-app.adoc include::modules/deploy-app.adoc[leveloffset=+1] - //This content is pulled from rosa-getting-started-revoking-admin-privileges-and-user-access.adoc include::modules/rosa-getting-started-revoking-admin-privileges-and-user-access.adoc[leveloffset=+1] - //This content is pulled from rosa-getting-started-revoke-admin-privileges.adoc include::modules/rosa-getting-started-revoke-admin-privileges.adoc[leveloffset=+2] - -//This content is pulled from rosa-getting-started-revoke-admin-privileges.adoc +//This content is pulled from rosa-getting-started-revoke-user-access.adoc include::modules/rosa-getting-started-revoke-user-access.adoc[leveloffset=+2] - //This content is pulled from rosa-getting-started-deleting-a-cluster.adoc include::modules/rosa-getting-started-deleting-a-cluster.adoc[leveloffset=+1] -[id="next-steps_{context}"] -== Next steps - -* xref:../adding_service_cluster/adding-service.adoc#adding-service[Adding services to a cluster using the {cluster-manager} console] -* xref:../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes] -* xref:../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#preparing-to-configure-the-monitoring-stack-uwm[Preparing to configure the user workload monitoring stack] - [role="_additional-resources"] [id="additional-resources_{context}"] == Additional resources -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-overview-of-the-deployment-workflow[Understanding the ROSA with STS deployment workflow] - -* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the ROSA deployment workflow] - -* xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading ROSA Classic clusters] +* xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[Introduction to {product-title}] +* xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{product-title} architecture models] +* xref:../architecture/architecture.adoc#architecture[Architecture concepts] +* xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[Guidelines for planning your environment] +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for {product-title} with STS] +* 
xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas required to run a {product-title} cluster]
+* xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding-aws-sts_rosa-understanding[Using the AWS Security Token Service]
+* xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Comprehensive guide to getting started with {product-title}]
+* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference]
+* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes]
+* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle]
+* xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-getting-started-access-cluster-web-console_rosa-quickstart-guide-ui[Accessing a cluster through the web console]
+* xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers for STS]
+* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-cluster-admin-role_rosa-service-definition[Cluster administration role]
+* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-customer-admin-user_rosa-service-definition[Customer administrator user]
+* xref:../adding_service_cluster/adding-service.adoc#adding-service[Adding services to a cluster using the {cluster-manager} console]
+* xref:../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes]
+* xref:../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#preparing-to-configure-the-monitoring-stack-uwm[Preparing to configure the user workload monitoring stack]
+* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-overview-of-the-deployment-workflow[Understanding the {product-title} with STS deployment workflow]
+* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the {product-title} deployment workflow]
+* xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading {product-title} classic clusters]
diff --git a/rosa_getting_started/rosa-sts-getting-started-workflow.adoc b/rosa_getting_started/rosa-sts-getting-started-workflow.adoc
index 7ef6a476d6f5..8715972316a6 100644
--- a/rosa_getting_started/rosa-sts-getting-started-workflow.adoc
+++ b/rosa_getting_started/rosa-sts-getting-started-workflow.adoc
@@ -6,27 +6,21 @@ include::_attributes/attributes-openshift-dedicated.adoc[]
 
 toc::[]
 
-Before you create a {product-title} (ROSA) cluster, you must complete the AWS prerequisites, verify that the required AWS service quotas are available, and set up your environment.
+[role="_abstract"]
+Before you create a {product-title} cluster, you must complete the AWS prerequisites. Verify that the required AWS service quotas are available, and set up your environment.
 
-This document provides an overview of the ROSA with STS deployment workflow stages and refers to detailed resources for each stage.
- -[id="rosa-sts-overview-of-the-deployment-workflow"] -== Overview of the ROSA with STS deployment workflow - -The AWS Security Token Service (STS) is a global web service that provides short-term credentials for IAM or federated users. You can use AWS STS with {product-title} (ROSA) to allocate temporary, limited-privilege credentials for component-specific IAM roles. The service enables cluster components to make AWS API calls using secure cloud resource management practices. - -You can follow the workflow stages outlined in this section to set up and access a ROSA cluster that uses STS. - -. xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[Complete the AWS prerequisites for ROSA with STS]. To deploy a ROSA cluster with STS, your AWS account must meet the prerequisite requirements. -. xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Review the required AWS service quotas]. To prepare for your cluster deployment, review the AWS service quotas that are required to run a ROSA cluster. -. xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-sts-setting-up-environment[Set up the environment and install ROSA using STS]. Before you create a ROSA with STS cluster, you must enable ROSA in your AWS account, install and configure the required CLI tools, and verify the configuration of the CLI tools. You must also verify that the AWS Elastic Load Balancing (ELB) service role exists and that the required AWS resource quotas are available. -. xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Create a ROSA cluster with STS quickly] or xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[create a cluster using customizations]. Use the ROSA CLI (`rosa`) or {cluster-manager-first} to create a cluster with STS. You can create a cluster quickly by using the default options, or you can apply customizations to suit the needs of your organization. -. xref:../rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc#rosa-sts-accessing-cluster[Access your cluster]. You can configure an identity provider and grant cluster administrator privileges to the identity provider users as required. You can also access a newly-deployed cluster quickly by configuring a `cluster-admin` user. -. xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-access-cluster.adoc#rosa-sts-deleting-access-cluster[Revoke access to a ROSA cluster for a user]. You can revoke access to a ROSA with STS cluster from a user by using the ROSA CLI or the web console. -. xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc#rosa-sts-deleting-cluster[Delete a ROSA cluster]. You can delete a ROSA with STS cluster by using the ROSA CLI (`rosa`). After deleting a cluster, you can delete the STS resources by using the AWS Identity and Access Management (IAM) Console. 
+include::modules/rosa-sts-deployment-workflow-overview.adoc[leveloffset=+1] [id="additional_resources_{context}"] [role="_additional-resources"] == Additional resources -* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the ROSA deployment workflow] +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for {product-title} with STS] +* xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Required AWS service quotas] +* xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-sts-setting-up-environment[Setting up the environment and installing {product-title} using STS] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a {product-title} cluster with STS quickly] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Creating a cluster using customizations] +* xref:../rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc#rosa-sts-accessing-cluster[Accessing your cluster] +* xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-access-cluster.adoc#rosa-sts-deleting-access-cluster[Revoking access to a {product-title} cluster for a user] +* xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc#rosa-sts-deleting-cluster[Deleting a {product-title} cluster] +* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the {product-title} deployment workflow] diff --git a/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc b/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc index 703fd3785038..ce54ae1b04fa 100644 --- a/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc +++ b/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc @@ -11,19 +11,19 @@ Create a {product-title} cluster quickly by using the default options and automa [NOTE] ==== -If you are looking for a quickstart guide for ROSA, see xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-quickstart-guide-ui[{product-title} quickstart guide]. +If you are looking for a quick start guide for ROSA, see the {product-title} quick start guide. ==== The procedures in this document use the `auto` modes in the ROSA CLI (`rosa`) and {cluster-manager} to immediately create the required IAM resources using the current AWS account. The required resources include the account-wide IAM roles and policies, cluster-specific Operator roles and policies, and OpenID Connect (OIDC) identity provider. -Alternatively, you can use `manual` mode, which outputs the `aws` commands needed to create the IAM resources instead of deploying them automatically. For steps to deploy a {product-title} cluster by using `manual` mode or with customizations, see xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-using-customizations_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster using customizations]. +Alternatively, you can use `manual` mode, which outputs the `aws` commands needed to create the IAM resources instead of deploying them automatically. 
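+
+For example, the following command is a sketch of `manual` mode, assuming that the `rosa` CLI is installed and logged in; it prints the `aws iam` commands for the account-wide roles instead of creating the roles directly:
+
+[source,terminal]
+----
+$ rosa create account-roles --mode manual
+----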
include::snippets/oidc-cloudfront.adoc[] [id="prerequisites_{context}"] == Prerequisites -* Ensure that you have completed the xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites]. +* Ensure that you have completed the AWS prerequisites. include::modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc[leveloffset=+1] include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset=+1] @@ -31,6 +31,9 @@ include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset [role="_additional-resources"] .Additional resources +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites] +* xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-quickstart-guide-ui[{product-title} quick start guide] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-using-customizations_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster using customizations] * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-associating-your-aws-account_rosa-sts-creating-a-cluster-quickly[Associating your AWS account with your Red{nbsp}Hat organization] include::modules/osd-aws-vpc-required-resources.adoc[leveloffset=+1] diff --git a/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc b/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc index 4bc7baf16bc5..bda44c021228 100644 --- a/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc +++ b/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc @@ -18,7 +18,7 @@ include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset [role="_additional-resources"] .Additional resources -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using OpenShift Cluster Manager] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using {cluster-manager}] include::modules/rosa-sts-arn-path-customization-for-iam-roles-and-policies.adoc[leveloffset=+1] diff --git a/rosa_planning/rosa-sts-required-aws-service-quotas.adoc b/rosa_planning/rosa-sts-required-aws-service-quotas.adoc index 14c9ee446dc8..c2c9c0effd85 100644 --- a/rosa_planning/rosa-sts-required-aws-service-quotas.adoc +++ b/rosa_planning/rosa-sts-required-aws-service-quotas.adoc @@ -11,7 +11,10 @@ Review this list of the required Amazon Web Service (AWS) service quotas that ar include::modules/rosa-required-aws-service-quotas.adoc[leveloffset=+1] -== Next steps +[role="_additional-resources"] +[id="additional-resources_{context}"] +== Additional resources + ifndef::openshift-rosa-hcp[] * xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-sts-setting-up-environment[Setting up the environment] endif::openshift-rosa-hcp[] diff --git a/rosa_planning/rosa-sts-setting-up-environment.adoc b/rosa_planning/rosa-sts-setting-up-environment.adoc index eb861e7bf052..de32ae9bc09c 100644 --- a/rosa_planning/rosa-sts-setting-up-environment.adoc +++ 
b/rosa_planning/rosa-sts-setting-up-environment.adoc @@ -30,23 +30,17 @@ ifdef::openshift-rosa-hcp[] include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+1] endif::openshift-rosa-hcp[] -[id="next-steps_rosa-sts-setting-up-environment"] -== Next steps -ifndef::openshift-rosa-hcp[] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Create a {product-title} cluster with STS quickly] or xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[create a cluster using customizations]. -endif::openshift-rosa-hcp[] -ifdef::openshift-rosa-hcp[] -* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Create a {product-title} cluster] -endif::openshift-rosa-hcp[] - -[id="additional-resources"] [role="_additional-resources"] +[id="additional-resources_{context}"] == Additional resources ifndef::openshift-rosa-hcp[] * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS Prerequisites] * xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Required AWS service quotas and increase requests] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a {product-title} cluster with STS quickly] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Creating a cluster using customizations] endif::openshift-rosa-hcp[] ifdef::openshift-rosa-hcp[] * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-hcp-prereqs[AWS Prerequisites] +* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Creating a {product-title} cluster] // // TODO OSDOCS-11789: AWS quotas for HCP endif::openshift-rosa-hcp[] From 0bf64257bd18aacfa30be6ca9251350d9c784d12 Mon Sep 17 00:00:00 2001 From: Jake Berger Date: Fri, 8 May 2026 14:16:51 -0400 Subject: [PATCH 12/17] more doc feedback --- .../rosa-architecture-topology-overview.adoc | 12 +++++++++ ...rosa-getting-started-configure-an-idp.adoc | 2 ++ modules/rosa-sts-ocm-roles-permissions.adoc | 22 ++++++++++++++++ modules/rosa-sts-oidc-overview.adoc | 10 ++++++++ modules/rosa-understanding-about.adoc | 16 ++++++++++++ .../rosa-architecture-models.adoc | 8 ++++-- .../rosa-sts-about-iam-resources.adoc | 25 ++++--------------- rosa_architecture/rosa-understanding.adoc | 22 +++------------- .../rosa-managing-worker-nodes.adoc | 4 ++- 9 files changed, 80 insertions(+), 41 deletions(-) create mode 100644 modules/rosa-architecture-topology-overview.adoc create mode 100644 modules/rosa-sts-ocm-roles-permissions.adoc create mode 100644 modules/rosa-sts-oidc-overview.adoc create mode 100644 modules/rosa-understanding-about.adoc diff --git a/modules/rosa-architecture-topology-overview.adoc b/modules/rosa-architecture-topology-overview.adoc new file mode 100644 index 000000000000..03698c1111eb --- /dev/null +++ b/modules/rosa-architecture-topology-overview.adoc @@ -0,0 +1,12 @@ +// Module included in the following assemblies: +// +// * rosa_architecture/rosa-architecture-models.adoc + +:_mod-docs-content-type: CONCEPT +[id="rosa-architecture-topology-overview_{context}"] += Cluster topology overview + +[role="_abstract"] +{product-title} has the following cluster topology: + +Hosted 
control plane (HCP) - The control plane is hosted in a Red{nbsp}Hat account and the worker nodes are deployed in the customer's AWS account.
diff --git a/modules/rosa-getting-started-configure-an-idp.adoc b/modules/rosa-getting-started-configure-an-idp.adoc
index b616d4962b26..d01df746db92 100644
--- a/modules/rosa-getting-started-configure-an-idp.adoc
+++ b/modules/rosa-getting-started-configure-an-idp.adoc
@@ -93,10 +93,12 @@ where:
 `github_client_secret`:: Specifies a client secret for your GitHub OAuth application.
 `claim`:: Specifies the mapping method.
 --
++
 [NOTE]
 ====
 It might take approximately two minutes for the identity provider configuration to become active. If you have configured a `cluster-admin` user, you can watch the OAuth pods redeploy with the updated configuration by running `oc get pods -n openshift-authentication --watch`.
 ====
++
 .. Enter the following command to verify that the identity provider has been configured correctly:
 +
 [source,terminal]
diff --git a/modules/rosa-sts-ocm-roles-permissions.adoc b/modules/rosa-sts-ocm-roles-permissions.adoc
new file mode 100644
index 000000000000..0d4dae812e9b
--- /dev/null
+++ b/modules/rosa-sts-ocm-roles-permissions.adoc
@@ -0,0 +1,22 @@
+// Module included in the following assemblies:
+//
+// * rosa_architecture/rosa-sts-about-iam-resources.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="rosa-sts-ocm-roles-and-permissions_{context}"]
+= {cluster-manager} roles and permissions
+
+[role="_abstract"]
+If you create {product-title} clusters by using {cluster-manager-url}, you must have the following AWS IAM roles linked to your AWS account to create and manage the clusters.
+
+These AWS IAM roles are as follows:
+
+* The {product-title} user role (`user-role`) is an AWS role that Red{nbsp}Hat uses to verify the customer's AWS identity. This role has no additional permissions, and the role has a trust relationship with the Red{nbsp}Hat installer account.
+* An `ocm-role` resource grants the required permissions for installation of {product-title} clusters in {cluster-manager}. You can apply basic or administrative permissions to the `ocm-role` resource. If you create an administrative `ocm-role` resource, {cluster-manager} can create the required AWS Operator roles and OpenID Connect (OIDC) provider. This IAM role also has a trust relationship with the Red{nbsp}Hat installer account.
++
+[NOTE]
+====
+The `ocm-role` IAM resource refers to the combination of the IAM role and the necessary policies created with it.
+====
+
+If you want to use the auto mode in {cluster-manager} to create your Operator role policies and OIDC provider, you must create this user role as well as an administrative `ocm-role` resource.
diff --git a/modules/rosa-sts-oidc-overview.adoc b/modules/rosa-sts-oidc-overview.adoc
new file mode 100644
index 000000000000..8fd003431964
--- /dev/null
+++ b/modules/rosa-sts-oidc-overview.adoc
@@ -0,0 +1,10 @@
+// Module included in the following assemblies:
+//
+// * rosa_architecture/rosa-sts-about-iam-resources.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="rosa-sts-oidc-overview_{context}"]
+= OpenID Connect (OIDC) requirements for Operator authentication
+
+[role="_abstract"]
+For ROSA installations that use STS, you must create a cluster-specific OIDC provider that the cluster Operators use to authenticate, or create your own OIDC configuration to use your own OIDC provider.
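+
+For example, the following commands are a sketch of creating and referencing an OIDC configuration with the `rosa` CLI; the `<oidc_config_id>` value is a placeholder for the ID that the first command returns:
+
+[source,terminal]
+----
+$ rosa create oidc-config --mode auto
+
+$ rosa create cluster --sts --oidc-config-id <oidc_config_id>
+----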
diff --git a/modules/rosa-understanding-about.adoc b/modules/rosa-understanding-about.adoc new file mode 100644 index 000000000000..b67910ac1f01 --- /dev/null +++ b/modules/rosa-understanding-about.adoc @@ -0,0 +1,16 @@ +// Module included in the following assemblies: +// +// * rosa_architecture/rosa-understanding.adoc + +:_mod-docs-content-type: CONCEPT +[id="rosa-understanding-about_{context}"] += About {product-title} + +[role="_abstract"] +{product-title} is a fully-managed, turnkey application platform that allows you to focus on delivering value to your customers by building and deploying applications. Red{nbsp}Hat site reliability engineering (SRE) experts manage the underlying platform so you do not have to worry about the complexity of infrastructure management. {product-title} provides seamless integration with Amazon CloudWatch, AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (VPC), and a wide range of additional AWS services to further accelerate the building and delivering of differentiating experiences to your customers. + +You subscribe to the service directly from your AWS account. After you create clusters, you can operate your clusters with the OpenShift web console, the ROSA CLI, or through {cluster-manager-first}. + +You receive {OCP-short} updates with new feature releases and a shared, common source for alignment with {OCP-short}. {product-title} supports the same versions of {OCP-short} as Red{nbsp}Hat OpenShift Dedicated to achieve version consistency. + +image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}] diff --git a/rosa_architecture/rosa-architecture-models.adoc b/rosa_architecture/rosa-architecture-models.adoc index fe37fa7ba2ac..ec899296e520 100644 --- a/rosa_architecture/rosa-architecture-models.adoc +++ b/rosa_architecture/rosa-architecture-models.adoc @@ -9,12 +9,14 @@ include::_attributes/common-attributes.adoc[] toc::[] [role="_abstract"] -{product-title} has the following cluster topology: +Learn about {product-title} architecture models and cluster topologies. -Hosted control plane (HCP) - The control plane is hosted in a Red{nbsp}Hat account and the worker nodes are deployed in the customer's AWS account. 
+include::modules/rosa-architecture-topology-overview.adoc[leveloffset=+1] include::modules/rosa-hcp-classic-comparison.adoc[leveloffset=+1] +[role="_additional-resources"] +[id="additional-resources_{context}"] .Additional resources * xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-sdpolicy-regions-az_rosa-hcp-service-definition[Regions and availability zones] @@ -33,6 +35,8 @@ endif::openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] +[role="_additional-resources"] +[id="additional-resources-classic_{context}"] .Additional resources * xref:../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-configuring.html[Configuring machine pools in Local Zones] diff --git a/rosa_architecture/rosa-sts-about-iam-resources.adoc b/rosa_architecture/rosa-sts-about-iam-resources.adoc index 01dedbf9798e..7cc9d9a536c9 100644 --- a/rosa_architecture/rosa-sts-about-iam-resources.adoc +++ b/rosa_architecture/rosa-sts-about-iam-resources.adoc @@ -55,27 +55,14 @@ ifdef::openshift-rosa-hcp[] * xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Creating a {hcp-title} cluster quickly] endif::openshift-rosa-hcp[] -[id="rosa-sts-ocm-roles-and-permissions_{context}"] -== {cluster-manager} roles and permissions - -If you create ROSA clusters by using {cluster-manager-url}, you must have the following AWS IAM roles linked to your AWS account to create and manage the clusters. For more information, see xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-associating-account_rosa-sts-aws-prereqs[Associating your AWS account]. - -These AWS IAM roles are as follows: - -* The ROSA user role (`user-role`) is an AWS role used by Red{nbsp}Hat to verify the customer's AWS identity. This role has no additional permissions, and the role has a trust relationship with the Red{nbsp}Hat installer account. -* An `ocm-role` resource grants the required permissions for installation of ROSA clusters in {cluster-manager}. You can apply basic or administrative permissions to the `ocm-role` resource. If you create an administrative `ocm-role` resource, {cluster-manager} can create the needed AWS Operator roles and OpenID Connect (OIDC) provider. This IAM role also creates a trust relationship with the Red{nbsp}Hat installer account as well. -+ -[NOTE] -==== -The `ocm-role` IAM resource refers to the combination of the IAM role and the necessary policies created with it. -==== - -You must create this user role as well as an administrative `ocm-role` resource, if you want to use the auto mode in {cluster-manager} to create your Operator role policies and OIDC provider. 
+include::modules/rosa-sts-ocm-roles-permissions.adoc[leveloffset=+1] include::modules/rosa-sts-understanding-ocm-role.adoc[leveloffset=+2] [role="_additional-resources"] +[id="additional-resources-ocm-roles_{context}"] .Additional resources +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-associating-account_rosa-sts-aws-prereqs[Associating your AWS account] * xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] include::modules/rosa-sts-ocm-role-creation.adoc[leveloffset=+2] @@ -128,15 +115,13 @@ include::modules/rosa-sts-about-operator-role-prefixes.adoc[leveloffset=+2] ifdef::openshift-rosa[] [role="_additional-resources"] +[id="additional-resources_{context}"] .Additional resources * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-cli_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations using the CLI] * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using {cluster-manager}] endif::openshift-rosa[] -[id="rosa-sts-oidc-provider-requirements-for-operators_{context}"] -== Open ID Connect (OIDC) requirements for Operator authentication - -For ROSA installations that use STS, you must create a cluster-specific OIDC provider that is used by the cluster Operators to authenticate or create your own OIDC configuration for your own OIDC provider. +include::modules/rosa-sts-oidc-overview.adoc[leveloffset=+1] include::modules/rosa-sts-oidc-provider-command.adoc[leveloffset=+2] diff --git a/rosa_architecture/rosa-understanding.adoc b/rosa_architecture/rosa-understanding.adoc index 09e40226f6de..f7f8414ef149 100644 --- a/rosa_architecture/rosa-understanding.adoc +++ b/rosa_architecture/rosa-understanding.adoc @@ -1,6 +1,6 @@ :_mod-docs-content-type: ASSEMBLY [id="rosa-understanding"] -= Understanding ROSA += Understanding {product-title} include::_attributes/attributes-openshift-dedicated.adoc[] :context: rosa-understanding @@ -8,23 +8,9 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] [role="_abstract"] -Learn about {product-title} (ROSA), interacting with ROSA by using {cluster-manager-first} and command-line interface (CLI) tools, consumption experience, and integration with Amazon Web Services (AWS) services. +Learn about and manage containerized applications that integrate {product-title} with Amazon Web Services (AWS). Use the {cluster-manager-first}, the OpenShift web console, or the {rosa-cli} to operate clusters while Red Hat site reliability engineering (SRE) experts manage the underlying infrastructure. To get started, ensure that you have an AWS account and Red{nbsp}Hat account. -[id="rosa-understanding-about_{context}"] -== About ROSA - -ROSA is a fully-managed, turnkey application platform that allows you to focus on delivering value to your customers by building and deploying applications. Red{nbsp}Hat site reliability engineering (SRE) experts manage the underlying platform so you do not have to worry about the complexity of infrastructure management. 
ROSA provides seamless integration with Amazon CloudWatch, AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (VPC), and a wide range of additional AWS services to further accelerate the building and delivering of differentiating experiences to your customers. - -You subscribe to the service directly from your AWS account. After you create clusters, you can operate your clusters with the OpenShift web console, the ROSA CLI, or through {cluster-manager-first}. - -You receive {OCP-short} updates with new feature releases and a shared, common source for alignment with {OCP-short}. {product-title}. ROSA supports the same versions of {OCP-short} as Red{nbsp}Hat OpenShift Dedicated and {product-title} to achieve version consistency. - -image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}] - -[id="rosa-understanding-getting-started_{context}"] -== Getting started - -To get started with deploying your cluster, ensure your AWS account has met the prerequisites and you have a Red{nbsp}Hat account ready. +include::modules/rosa-understanding-about.adoc[leveloffset=+1] //[id="rosa-understanding-credential-modes_{context}"] //== Credential modes @@ -59,7 +45,7 @@ include::modules/rosa-sdpolicy-am-billing.adoc[leveloffset=+1] [role="_additional-resources"] [id="additional-resources_{context}"] -== Additional resources += Additional resources * xref:../ocm/ocm-overview.adoc#ocm-overview[{cluster-manager}] //* xref ../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] diff --git a/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc b/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc index eb2f046a9121..d094199f6d28 100644 --- a/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc +++ b/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc @@ -94,6 +94,8 @@ include::modules/rosa-adding-tuning.adoc[leveloffset=+1] include::modules/rosa-node-drain-grace-period.adoc[leveloffset=+1] endif::openshift-rosa-hcp[] +[role="_additional-resources"] +[id="additional-resources_{context}"] == Additional resources * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-about.adoc#rosa-nodes-machinepools-about[About machine pools] * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[About autoscaling] @@ -101,7 +103,7 @@ endif::openshift-rosa-hcp[] * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#nodes-disabling-autoscaling-nodes[Disabling autoscaling] * link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-reservation-overview.html[AWS Capacity Reservations] ifdef::openshift-rosa[] -* xref:../../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-service-definition[{rosa-classic} Service Definition] +* xref:../../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-service-definition[{rosa-classic-title} Service Definition] endif::openshift-rosa[] ifdef::openshift-rosa-hcp[] * xref:../../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-hcp-service-definition[{product-title} Service Definition] From 0a074a5ce95b3e61201695f043db0798fc991d60 Mon Sep 17 00:00:00 2001 From: Jake Berger Date: Fri, 8 May 2026 14:43:47 -0400 Subject: [PATCH 13/17] fix build error --- rosa_architecture/rosa-architecture-models.adoc | 4 ++-- rosa_architecture/rosa-sts-about-iam-resources.adoc | 2 +- 
rosa_architecture/rosa-understanding.adoc | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/rosa_architecture/rosa-architecture-models.adoc b/rosa_architecture/rosa-architecture-models.adoc index ec899296e520..712710ca357f 100644 --- a/rosa_architecture/rosa-architecture-models.adoc +++ b/rosa_architecture/rosa-architecture-models.adoc @@ -36,8 +36,8 @@ endif::openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] [role="_additional-resources"] -[id="additional-resources-classic_{context}"] -.Additional resources +[id="additional-resources_{context}"] +== Additional resources * xref:../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-configuring.html[Configuring machine pools in Local Zones] endif::openshift-rosa-hcp[] \ No newline at end of file diff --git a/rosa_architecture/rosa-sts-about-iam-resources.adoc b/rosa_architecture/rosa-sts-about-iam-resources.adoc index 7cc9d9a536c9..040233912dfb 100644 --- a/rosa_architecture/rosa-sts-about-iam-resources.adoc +++ b/rosa_architecture/rosa-sts-about-iam-resources.adoc @@ -116,7 +116,7 @@ include::modules/rosa-sts-about-operator-role-prefixes.adoc[leveloffset=+2] ifdef::openshift-rosa[] [role="_additional-resources"] [id="additional-resources_{context}"] -.Additional resources +== Additional resources * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-cli_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations using the CLI] * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using {cluster-manager}] endif::openshift-rosa[] diff --git a/rosa_architecture/rosa-understanding.adoc b/rosa_architecture/rosa-understanding.adoc index f7f8414ef149..2e7909ed4aad 100644 --- a/rosa_architecture/rosa-understanding.adoc +++ b/rosa_architecture/rosa-understanding.adoc @@ -45,7 +45,7 @@ include::modules/rosa-sdpolicy-am-billing.adoc[leveloffset=+1] [role="_additional-resources"] [id="additional-resources_{context}"] -= Additional resources +== Additional resources * xref:../ocm/ocm-overview.adoc#ocm-overview[{cluster-manager}] //* xref ../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] From 95e0f460b3924d309e2c065512fb2967a8eed2f4 Mon Sep 17 00:00:00 2001 From: Michael Burke Date: Thu, 2 Apr 2026 17:19:01 -0400 Subject: [PATCH 14/17] OSDOCS 8353 Boot new workers directly into custom pool configuration --- _topic_maps/_topic_map.yml | 2 + .../machine-config-custom-mcp.adoc | 34 ++++ .../machine-config-custom-mcp-automatic.adoc | 177 ++++++++++++++++++ .../machine-config-custom-mcp-existing.adoc | 103 ++++++++++ 4 files changed, 316 insertions(+) create mode 100644 machine_configuration/machine-config-custom-mcp.adoc create mode 100644 modules/machine-config-custom-mcp-automatic.adoc create mode 100644 modules/machine-config-custom-mcp-existing.adoc diff --git a/_topic_maps/_topic_map.yml b/_topic_maps/_topic_map.yml index 5b5f6b46ae69..cb40dadb65e0 100644 --- a/_topic_maps/_topic_map.yml +++ b/_topic_maps/_topic_map.yml @@ -2465,6 +2465,8 @@ Topics: File: mco-update-boot-images - Name: Manually updating the boot image File: mco-update-boot-images-manual +- Name: Creating custom machine config pools + File: machine-config-custom-mcp - Name: Managing unused rendered machine configs File: 
machine-configs-garbage-collection - Name: Image mode for OpenShift diff --git a/machine_configuration/machine-config-custom-mcp.adoc b/machine_configuration/machine-config-custom-mcp.adoc new file mode 100644 index 000000000000..297706cea19e --- /dev/null +++ b/machine_configuration/machine-config-custom-mcp.adoc @@ -0,0 +1,34 @@ +:_mod-docs-content-type: ASSEMBLY +[id="machine-config-creating-custom-mcp"] += Creating custom machine config pools +include::_attributes/common-attributes.adoc[] +:context: machine-config-creating-custom-mcp + +toc::[] + +[role="_abstract"] +You can create custom machine config pools (MCP) to manage compute nodes for custom use cases that extend outside of the default node types. By using a custom machine config pool, you can deploy changes targeted only at nodes in the custom pool. + +Custom machine config pools inherit their configurations from the `worker` machine config pool. Changes made to the `worker` machine config pool apply to nodes in the custom pool. However, changes made to the custom machine config pool apply only to the nodes in the custom pool. For more information on custom machine config pools, see "Node configuration management with machine config pools". + +[NOTE] +==== +Custom machine config pools for the control plane nodes are not supported. +==== + +For example, you could use a custom machine config pool to create an _infrastructure_ node. Components that you move to an infrastructure node do not need to be accounted for during sizing. For more information on infrastructure nodes, see "Creating infrastructure machine sets". + +After you create the custom machine config pool, you can boot new nodes directly to the pool by creating a new machine set. Or, you can add existing nodes to the custom pool by using labels. + +include::modules/machine-config-custom-mcp-automatic.adoc[leveloffset=+1] +include::modules/machine-config-custom-mcp-existing.adoc[leveloffset=+1] + +[role="_additional-resources"] +[id="additional-resources_{context}"] +== Additional resources + +* xref:../machine_configuration/index.adoc#architecture-machine-config-pools_machine-config-overview[Node configuration management with machine config pools] +* xref:../machine_configuration/mco-update-boot-images-manual.adoc#mco-update-boot-images-manual[Manually updating the boot image] +* xref:../machine_management/creating-infrastructure-machinesets.adoc#creating-infrastructure-machinesets[Creating infrastructure machine sets] + + diff --git a/modules/machine-config-custom-mcp-automatic.adoc b/modules/machine-config-custom-mcp-automatic.adoc new file mode 100644 index 000000000000..29977cbf8bb5 --- /dev/null +++ b/modules/machine-config-custom-mcp-automatic.adoc @@ -0,0 +1,177 @@ +// Module included in the following assemblies: +// +// * machine_configuration/machine-config-creating-custom-mcp.adoc + +:_mod-docs-content-type: PROCEDURE +[id="machine-config-custom-mcp-automatic_{context}"] += Creating a custom machine config pool with a new node + +[role="_abstract"] +You can create a custom machine config pool (MCP) and launch a new node directly into that pool. By launching the node directly into the new pool, you save a node reboot cycle that would be required when moving the nodes from the worker machine config pool to the custom pool. + +Use the `userDataSecret` parameter in the machine set to instruct the Machine Config Operator (MCO) to add the node to a specific machine config pool. The secret contains the endpoint of the custom machine config pool. 
You must prefix the name of this new secret with the name of the custom machine config pool.
+
+The following procedure shows you how to create a new custom machine config pool and launch a new node into that pool.
+
+.Procedure
+
+. Create a custom machine config pool:
+
+.. Create a YAML file similar to the following:
++
+[source,yaml]
+----
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfigPool
+metadata:
+  name: custom
+spec:
+  machineConfigSelector:
+    matchExpressions:
+      - {key: machineconfiguration.openshift.io/role, operator: In, values: [custom,worker]}
+  nodeSelector:
+    matchLabels:
+      node-role.kubernetes.io/custom: ""
+----
+where:
++
+--
+`metadata.name`:: Specifies a name for the custom machine config pool.
+`spec.machineConfigSelector.matchExpressions`:: Specifies the node roles for the new node. This must include the `worker` role and the custom role.
+`spec.nodeSelector.matchLabels`:: Specifies a node selector to use when adding nodes to this pool.
+--
+
+.. Create the machine config pool by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name>.yaml
+----
+
+. Create a new machine set that creates a new node in the new custom machine config pool:
+
+.. Create a YAML file, for example by making a copy of an existing compute machine set YAML and making the following changes:
++
+[source,yaml]
+----
+apiVersion: machine.openshift.io/v1beta1
+kind: MachineSet
+metadata:
+# ...
+  name: <machine_set_name>
+  namespace: openshift-machine-api
+# ...
+spec:
+# ...
+  template:
+# ...
+    spec:
+# ...
+      providerSpec:
+# ...
+        value:
+# ...
+          userDataSecret:
+            name: <machine_config_pool_name>-user-data-managed
+# ...
+----
++
+where:
++
+--
+`metadata.name`:: Specifies a name for the machine set.
+`metadata.namespace`:: Specifies a namespace for the machine set. This must be `openshift-machine-api`.
+`spec.template.spec.providerSpec.value.userDataSecret`:: Specifies a name for the user data secret that is created. The name of the secret must start with the name of the custom machine config pool and end with `-user-data-managed`. For example, `custom-user-data-managed`.
+--
++
+These are the minimum changes required to create the new machine set. For more configuration options or to configure an all-new machine set, see "Creating infrastructure machine sets" for your platform.
++
+[NOTE]
+====
+When creating a new machine set, you should specify the latest image to use for the boot image. For more information about configuring the boot image on your cluster, see "Manually updating the boot image" for your platform. The method to specify the image varies by provider.
+====
+
+.. Create the machine set by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name>.yaml
+----
++
+The MCO creates a new node in the new custom machine config pool.
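++
+If you need more nodes in the custom pool later, one option is to scale this machine set. The following command is a sketch, where `<machine_set_name>` is the name that you set in the machine set YAML:
++
+[source,terminal]
+----
+$ oc scale machineset <machine_set_name> --replicas=2 -n openshift-machine-api
+----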
+. Check to see that the MCO created the new machine set by running the following command:
++
+[source,terminal]
+----
+$ oc get machineset -n openshift-machine-api
+----
++
+.Example output
+[source,terminal]
+----
+NAME                                 DESIRED   CURRENT   READY   AVAILABLE   AGE
+ci-ln-7x179fk-72292-tc5qz-custom-a   1         1         1       1           62s
+ci-ln-7x179fk-72292-tc5qz-worker-a   1         1         1       1           91m
+ci-ln-7x179fk-72292-tc5qz-worker-b   1         1         1       1           91m
+ci-ln-7x179fk-72292-tc5qz-worker-f   1         1         1       1           91m
+----
++
+In this example, `ci-ln-7x179fk-72292-tc5qz-custom-a` is the new machine set.
+
+. Check that the MCO created the required secret by running the following command:
++
+[source,terminal]
+----
+$ oc get secrets -n openshift-machine-api
+----
++
+.Example output
+[source,terminal]
+----
+NAME                       TYPE     DATA   AGE
+# ...
+custom-user-data-managed   Opaque   2      9m
+# ...
+----
+
+. Check to see that the node is in the new custom machine config pool by running the following command:
++
+[source,terminal]
+----
+$ oc get nodes
+----
++
+.Example output
+[source,terminal]
+----
+NAME                                       STATUS   ROLES                  AGE   VERSION
+ci-ln-i61xqwb-72292-hz2mw-custom-9r496     Ready    custom,worker          9m    v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-master-0         Ready    control-plane,master   42m   v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-master-1         Ready    control-plane,master   44m   v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-master-2         Ready    control-plane,master   43m   v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-worker-c-2lhcl   Ready    worker                 36m   v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-worker-f-qgdb7   Ready    worker                 36m   v1.35.3
+----
++
+In this example, `ci-ln-i61xqwb-72292-hz2mw-custom-9r496` is a new node that was added to the `custom` machine config pool.
diff --git a/modules/machine-config-custom-mcp-existing.adoc b/modules/machine-config-custom-mcp-existing.adoc
new file mode 100644
index 000000000000..64214c89c662
--- /dev/null
+++ b/modules/machine-config-custom-mcp-existing.adoc
@@ -0,0 +1,103 @@
+// Module included in the following assemblies:
+//
+// * machine_configuration/machine-config-custom-mcp.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="machine-config-custom-mcp-existing_{context}"]
+= Creating a custom machine config pool for an existing node
+
+[role="_abstract"]
+You can create custom machine config pools (MCP) and manually add an existing node into that pool. With custom machine config pools, you can deploy changes targeted at the nodes in the custom pool.
+
+The following procedure shows you how to create a new custom machine config pool and add an existing node into that pool.
+
+.Procedure
+
+. Create a custom machine config pool:
+
+.. Create a YAML file similar to the following:
++
+[source,yaml]
+----
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfigPool
+metadata:
+  name: custom
+spec:
+  machineConfigSelector:
+    matchExpressions:
+      - {key: machineconfiguration.openshift.io/role, operator: In, values: [custom,worker]}
+  nodeSelector:
+    matchLabels:
+      node-role.kubernetes.io/custom: ""
+----
++
+where:
++
+--
+`metadata.name`:: Specifies a name for the machine config pool.
+`spec.machineConfigSelector.matchExpressions`:: Specifies the node roles for the new node. This must include the `worker` role and the custom role.
+`spec.nodeSelector.matchLabels`:: Specifies a node selector to use when adding nodes to this pool.
+--
+
+.. Create the machine config pool by running the following command:
++
+[source,terminal]
+----
+$ oc create -f <file_name>.yaml
+----
+
+. Add a label to the worker node that you want to move to the new custom pool by running the following command:
++
+[source,terminal]
+----
+$ oc label node <node_name> <node_label>
+----
++
+Replace `<node_name>` with the name of the node that you want to move and replace `<node_label>` with the node selector label that you added to the machine config pool.
++
+.Example command
+[source,terminal]
+----
+$ oc label node ci-ln-g5tpp5k-72292-hz2mw-worker-b-ps8xh node-role.kubernetes.io/custom=""
+----
+
+.Verification
+
+. Check to see that the MCO created the new machine config pool by running the following command:
++
+[source,terminal]
+----
+$ oc get mcp
+----
++
+.Example output
+[source,terminal]
+----
+NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
+custom   rendered-custom-72be15c95699b6d39f70fce525f51bb2   True      False      False      1              1                   1                     0                      12s
+master   rendered-master-9e25b616b551d6c77f490191f45161d7   True      False      False      3              3                   3                     0                      32m
+worker   rendered-worker-72be15c95699b6d39f70fce525f51bb2   True      False      False      2              2                   2                     0                      32m
+----
++
+In this example, `custom` is the new machine config pool.
+
+. Check to see if the node is in that pool by running the following command:
++
+[source,terminal]
+----
+$ oc get nodes
+----
++
+.Example output
+[source,terminal]
+----
+NAME                                       STATUS   ROLES                  AGE   VERSION
+ci-ln-i61xqwb-72292-ftjn8-master-0         Ready    control-plane,master   42m   v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-master-1         Ready    control-plane,master   44m   v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-master-2         Ready    control-plane,master   43m   v1.35.3
+ci-ln-g5tpp5k-72292-hz2mw-worker-b-ps8xh   Ready    custom,worker          36m   v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-worker-c-2lhcl   Ready    worker                 36m   v1.35.3
+ci-ln-i61xqwb-72292-ftjn8-worker-f-qgdb7   Ready    worker                 36m   v1.35.3
+----
++
+In this example, the `ci-ln-g5tpp5k-72292-hz2mw-worker-b-ps8xh` node is an existing node that was moved to the `custom` machine config pool.
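+
+After the node is in the custom pool, you can deploy changes that target only that pool. The following machine config is a minimal, hypothetical sketch; the file name, path, and contents are placeholder values only. Because its `machineconfiguration.openshift.io/role: custom` label matches the `machineConfigSelector` of the `custom` machine config pool, the MCO applies it only to nodes in that pool:
+
+[source,yaml]
+----
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineConfig
+metadata:
+  labels:
+    machineconfiguration.openshift.io/role: custom
+  name: 99-custom-example
+spec:
+  config:
+    ignition:
+      version: 3.2.0
+    storage:
+      files:
+      - path: /etc/custom-pool-example.conf
+        mode: 0644
+        contents:
+          source: data:,custom-pool-example
+----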
+ From 937ff1b4b0913e4c792c33c20205856e7838dd9c Mon Sep 17 00:00:00 2001 From: Jake Berger Date: Fri, 8 May 2026 15:17:39 -0400 Subject: [PATCH 15/17] more fixing of build log errors --- rosa_architecture/rosa-architecture-models.adoc | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/rosa_architecture/rosa-architecture-models.adoc b/rosa_architecture/rosa-architecture-models.adoc index 712710ca357f..721915b48b9b 100644 --- a/rosa_architecture/rosa-architecture-models.adoc +++ b/rosa_architecture/rosa-architecture-models.adoc @@ -36,7 +36,7 @@ endif::openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] [role="_additional-resources"] -[id="additional-resources_{context}"] +[id="additional-resources-classic_{context}"] == Additional resources * xref:../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-configuring.html[Configuring machine pools in Local Zones] From 132fe2b1d60692085873bed6a85ac04763d6fd8c Mon Sep 17 00:00:00 2001 From: Eric Ponvelle Date: Fri, 8 May 2026 15:17:00 -0500 Subject: [PATCH 16/17] Revert "OSDOCS-18319 [ROSA]: Vale and Dita checks for ROSA getting started" --- adding_service_cluster/adding-service.adoc | 1 - architecture/rosa-architecture-models.adoc | 1 - .../rosa-architecture-topology-overview.adoc | 12 -- ...ted-configure-an-idp-and-grant-access.adoc | 1 - ...rosa-getting-started-configure-an-idp.adoc | 31 ++--- ...ing-started-create-cluster-admin-user.adoc | 11 +- ...ing-started-creating-cluster-overview.adoc | 14 -- modules/rosa-getting-started-enable-rosa.adoc | 6 +- modules/rosa-getting-started-learn.adoc | 1 - .../rosa-getting-started-prerequisites.adoc | 18 --- ...-quickstart-creating-cluster-overview.adoc | 16 --- modules/rosa-quickstart-prerequisites.adoc | 18 --- ...rosa-sts-deployment-workflow-overview.adoc | 20 --- modules/rosa-sts-ocm-roles-permissions.adoc | 22 ---- modules/rosa-sts-oidc-overview.adoc | 10 -- modules/rosa-understanding-about.adoc | 16 --- .../rosa-architecture-models.adoc | 10 +- .../rosa-sts-about-iam-resources.adoc | 27 +++- rosa_architecture/rosa-understanding.adoc | 23 +++- .../rosa-managing-worker-nodes.adoc | 22 +--- .../rosa-getting-started.adoc | 100 ++++++++------ .../rosa-quickstart-guide-ui.adoc | 123 ++++++++++++------ .../rosa-sts-getting-started-workflow.adoc | 30 +++-- .../rosa-sts-creating-a-cluster-quickly.adoc | 9 +- ...reating-a-cluster-with-customizations.adoc | 2 +- .../rosa-sts-required-aws-service-quotas.adoc | 5 +- .../rosa-sts-setting-up-environment.adoc | 14 +- 27 files changed, 253 insertions(+), 310 deletions(-) delete mode 100644 modules/rosa-architecture-topology-overview.adoc delete mode 100644 modules/rosa-getting-started-creating-cluster-overview.adoc delete mode 100644 modules/rosa-getting-started-prerequisites.adoc delete mode 100644 modules/rosa-quickstart-creating-cluster-overview.adoc delete mode 100644 modules/rosa-quickstart-prerequisites.adoc delete mode 100644 modules/rosa-sts-deployment-workflow-overview.adoc delete mode 100644 modules/rosa-sts-ocm-roles-permissions.adoc delete mode 100644 modules/rosa-sts-oidc-overview.adoc delete mode 100644 modules/rosa-understanding-about.adoc diff --git a/adding_service_cluster/adding-service.adoc b/adding_service_cluster/adding-service.adoc index 105ea531a281..363bf2106eda 100644 --- a/adding_service_cluster/adding-service.adoc +++ b/adding_service_cluster/adding-service.adoc @@ -6,7 +6,6 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] -[role="_abstract"] You can add, access, and remove add-on services for your 
{product-title} ifdef::openshift-rosa[] (ROSA) diff --git a/architecture/rosa-architecture-models.adoc b/architecture/rosa-architecture-models.adoc index 6f69790f9dad..8c0e0f15d642 100644 --- a/architecture/rosa-architecture-models.adoc +++ b/architecture/rosa-architecture-models.adoc @@ -7,7 +7,6 @@ include::_attributes/common-attributes.adoc[] toc::[] -[role="_abstract"] {product-title} has a classic architecture cluster topology meaning the control plane and the worker nodes are deployed in the customer's AWS account. include::modules/rosa-hcp-classic-comparison.adoc[leveloffset=+1] diff --git a/modules/rosa-architecture-topology-overview.adoc b/modules/rosa-architecture-topology-overview.adoc deleted file mode 100644 index 03698c1111eb..000000000000 --- a/modules/rosa-architecture-topology-overview.adoc +++ /dev/null @@ -1,12 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_architecture/rosa-architecture-models.adoc - -:_mod-docs-content-type: CONCEPT -[id="rosa-architecture-topology-overview_{context}"] -= Cluster topology overview - -[role="_abstract"] -{product-title} has the following cluster topology: - -Hosted control plane (HCP) - The control plane is hosted in a Red{nbsp}Hat account and the worker nodes are deployed in the customer's AWS account. diff --git a/modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc b/modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc index 22e437250fe9..c67d9976a352 100644 --- a/modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc +++ b/modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc @@ -7,7 +7,6 @@ [id="rosa-getting-started-configure-an-idp-and-grant-access_{context}"] = Configuring an identity provider and granting cluster access -[role="_abstract"] {product-title} (ROSA) includes a built-in OAuth server. After your ROSA cluster is created, you must configure OAuth to use an identity provider. You can then add members to your configured identity provider to grant them access to your cluster. You can also grant the identity provider users with `cluster-admin` or `dedicated-admin` privileges as required. diff --git a/modules/rosa-getting-started-configure-an-idp.adoc b/modules/rosa-getting-started-configure-an-idp.adoc index d01df746db92..5789d95f7e1a 100644 --- a/modules/rosa-getting-started-configure-an-idp.adoc +++ b/modules/rosa-getting-started-configure-an-idp.adoc @@ -14,7 +14,6 @@ ifeval::["{context}" == "rosa-quickstart"] :quickstart: endif::[] -[role="_abstract"] You can configure different identity provider types for your {product-title} (ROSA) cluster. Supported types include GitHub, GitHub Enterprise, GitLab, Google, LDAP, OpenID Connect and htpasswd identity providers. [IMPORTANT] @@ -41,12 +40,13 @@ endif::[] . If you do not have an existing GitHub organization to use for identity provisioning for your ROSA cluster, create one. Follow the steps in the link:https://docs.github.com/en/organizations/collaborating-with-groups-in-organizations/creating-a-new-organization-from-scratch[GitHub documentation]. . Configure a GitHub identity provider for your cluster that is restricted to the members of your GitHub organization. -.. Configure an identity provider using the interactive mode, replacing `` with the name of your cluster: +.. 
Configure an identity provider using the interactive mode:
+
[source,terminal]
----
-$ rosa create idp --cluster=<cluster_name> --interactive
+$ rosa create idp --cluster=<cluster_name> --interactive <1>
----
+<1> Replace `<cluster_name>` with the name of your cluster.
+
.Example output
[source,terminal]
----
I: Interactive mode enabled.
Any optional fields can be left empty and a default will be selected.
? Type of identity provider: github
? Identity provider name: github-1
? Restrict to members of: organizations
-? GitHub organizations: <github_org_name>
+? GitHub organizations: <github_org_name> <1>
? To use GitHub as an identity provider, you must first register the application:
  - Open the following URL: https://github.com/organizations/<github_org_name>/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps./.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps./.p1.openshiftapps.com
  - Click on 'Register application'
...
----
-.. Follow the URL in the output and select *Register application* to register a new OAuth application in your GitHub organization, replacing `<github_org_name>` with the name of your GitHub organization. By registering the application, you enable the OAuth server that is built into ROSA to authenticate members of your GitHub organization into your cluster.
+<1> Replace `<github_org_name>` with the name of your GitHub organization.
+.. Follow the URL in the output and select *Register application* to register a new OAuth application in your GitHub organization. By registering the application, you enable the OAuth server that is built into ROSA to authenticate members of your GitHub organization into your cluster.
+
[NOTE]
====
The fields in the *Register a new OAuth application* GitHub form are automatically filled with the required values through the URL defined by the ROSA CLI.
====
-.. Use the information from your GitHub OAuth application page to populate the remaining `rosa create idp` interactive prompts:
+.. Use the information from your GitHub OAuth application page to populate the remaining `rosa create idp` interactive prompts.
+
.Continued example output
[source,terminal]
----
...
-? Client ID: <github_client_id>
-? Client Secret: [? for help] <github_client_secret>
+? Client ID: <github_client_id> <1>
+? Client Secret: [? for help] <github_client_secret> <2>
? GitHub Enterprise Hostname (optional):
-? Mapping method: claim
+? Mapping method: claim <3>
I: Configuring IDP for cluster '<cluster_name>'
I: Identity Provider 'github-1' has been created.
   It will take up to 1 minute for this configuration to be enabled.
   To add cluster administrators, see 'rosa grant user --help'.
   To login into the console, open https://console-openshift-console.apps...p1.openshiftapps.com and click on github-1.
----
-+
---
-where:
-
-`github_client_id`:: Specifies the client ID for your GitHub OAuth application.
-`github_client_secret`:: Specifies a client secret for your GitHub OAuth application.
-`claim`:: Specifies the mapping method.
---
+<1> Replace `<github_client_id>` with the client ID for your GitHub OAuth application.
+<2> Replace `<github_client_secret>` with a client secret for your GitHub OAuth application.
+<3> Specify `claim` as the mapping method.
+
[NOTE]
====
It might take approximately two minutes for the identity provider configuration to become active. If you have configured a `cluster-admin` user, you can watch the OAuth pods redeploy with the updated configuration by running `oc get pods -n openshift-authentication --watch`.
====
-+
..
Enter the following command to verify that the identity provider has been configured correctly: + [source,terminal] diff --git a/modules/rosa-getting-started-create-cluster-admin-user.adoc b/modules/rosa-getting-started-create-cluster-admin-user.adoc index 5a103dcd86e1..1fa872abe0a5 100644 --- a/modules/rosa-getting-started-create-cluster-admin-user.adoc +++ b/modules/rosa-getting-started-create-cluster-admin-user.adoc @@ -14,7 +14,6 @@ ifeval::["{context}" == "rosa-quickstart"] :quickstart: endif::[] -[role="_abstract"] Before configuring an identity provider, you can create a user with `cluster-admin` privileges for immediate access to your {product-title} (ROSA) cluster. [NOTE] @@ -33,12 +32,13 @@ endif::[] .Procedure -. Create a cluster administrator user, replacing `` with the name of your cluster: +. Create a cluster administrator user: + [source,terminal] ---- -$ rosa create admin --cluster= +$ rosa create admin --cluster= <1> ---- +<1> Replace `` with the name of your cluster. + .Example output [source,terminal] @@ -60,12 +60,13 @@ It might take approximately one minute for the `cluster-admin` user to become ac ifdef::getting-started[] . Log in to the cluster through the CLI: -.. Run the command provided in the output of the preceding step to log in, replacing `` and `` with the API URL and cluster administrator password for your environment: +.. Run the command provided in the output of the preceding step to log in: + [source,terminal] ---- -$ oc login --username cluster-admin --password +$ oc login --username cluster-admin --password <1> ---- +<1> Replace `` and `` with the API URL and cluster administrator password for your environment. .. Verify if you are logged in to the ROSA cluster as the `cluster-admin` user: + [source,terminal] diff --git a/modules/rosa-getting-started-creating-cluster-overview.adoc b/modules/rosa-getting-started-creating-cluster-overview.adoc deleted file mode 100644 index 663433117c74..000000000000 --- a/modules/rosa-getting-started-creating-cluster-overview.adoc +++ /dev/null @@ -1,14 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_getting_started/rosa-getting-started.adoc - -:_mod-docs-content-type: CONCEPT -[id="rosa-getting-started-creating-a-cluster_{context}"] -= Creating a ROSA cluster with STS - -[role="_abstract"] -Choose from one of the following methods to deploy a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS). In each scenario, you can deploy your cluster by using {cluster-manager-first} or the ROSA CLI (`rosa`): - -* *Creating a ROSA cluster with STS using the default options*: You can create a ROSA cluster with STS quickly by using the default options and automatic STS resource creation. - -* *Creating a ROSA cluster with STS using customizations*: You can create a ROSA cluster with STS using customizations. You can also choose between the `auto` and `manual` modes when creating the required STS resources. diff --git a/modules/rosa-getting-started-enable-rosa.adoc b/modules/rosa-getting-started-enable-rosa.adoc index 01b42bc2b9db..d90849e98a37 100644 --- a/modules/rosa-getting-started-enable-rosa.adoc +++ b/modules/rosa-getting-started-enable-rosa.adoc @@ -30,20 +30,20 @@ Consider using a dedicated AWS account to run production clusters. If you are us + The *Verify ROSA prerequisites* page opens. -. Under *ROSA enablement*, ensure that a green checkmark and `You previously enabled ROSA` are displayed. +. 
Under *ROSA enablement*, ensure that a green check mark and `You previously enabled ROSA` are displayed. + If not, follow these steps: .. Select the checkbox beside `I agree to share my contact information with Red{nbsp}Hat`. .. Click *Enable ROSA*. + -After a short wait, a green checkmark and `You enabled ROSA` message are displayed. +After a short wait, a green check mark and `You enabled ROSA` message are displayed. . Under *Service Quotas*, ensure that a green check and `Your quotas meet the requirements for ROSA` are displayed. + If you see `Your quotas don't meet the minimum requirements`, take note of the quota type and the minimum listed in the error message. See the Amazon documentation on link:https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html[requesting a quota increase] for guidance. It may take several hours for Amazon to approve your quota request. -. Under *ELB service-linked role*, ensure that a green checkmark and `AWSServiceRoleForElasticLoadBalancing already exists` are displayed. +. Under *ELB service-linked role*, ensure that a green check mark and `AWSServiceRoleForElasticLoadBalancing already exists` are displayed. . Click *Continue to Red{nbsp}Hat*. + diff --git a/modules/rosa-getting-started-learn.adoc b/modules/rosa-getting-started-learn.adoc index 5aa99e194a5b..6b5bafebfcc4 100644 --- a/modules/rosa-getting-started-learn.adoc +++ b/modules/rosa-getting-started-learn.adoc @@ -5,5 +5,4 @@ [id="rosa-getting-started-learn_{context}"] = Getting started with {product-title} -[role="_abstract"] Use the following sections to find content to help you learn about and use {product-title}. \ No newline at end of file diff --git a/modules/rosa-getting-started-prerequisites.adoc b/modules/rosa-getting-started-prerequisites.adoc deleted file mode 100644 index e8234859507a..000000000000 --- a/modules/rosa-getting-started-prerequisites.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_getting_started/rosa-getting-started.adoc - -:_mod-docs-content-type: REFERENCE -[id="rosa-getting-started-prerequisites_{context}"] -= Prerequisites - -[role="_abstract"] -* You reviewed the introduction to {product-title} (ROSA), and the documentation on ROSA architecture models and architecture concepts. - -* You have read the documentation on the guidelines for planning your environment. -// Removed as part of OSDOCS-13310, until figures are verified. -//xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] and - -* You have reviewed the detailed AWS prerequisites for ROSA with STS. - -* You have the AWS service quotas required to run a ROSA cluster. diff --git a/modules/rosa-quickstart-creating-cluster-overview.adoc b/modules/rosa-quickstart-creating-cluster-overview.adoc deleted file mode 100644 index 0d45ff10b6f8..000000000000 --- a/modules/rosa-quickstart-creating-cluster-overview.adoc +++ /dev/null @@ -1,16 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_getting_started/rosa-quickstart-guide-ui.adoc - -:_mod-docs-content-type: CONCEPT -[id="rosa-quickstart-creating-a-cluster_{context}"] -= Creating a ROSA cluster with AWS STS using the default auto mode - -[role="_abstract"] -{cluster-manager-first} is a managed service on the {hybrid-console-url} where you can install, modify, operate, and upgrade your Red{nbsp}Hat OpenShift clusters. This service allows you to work with all of your organization's clusters from a single dashboard. 
- -The procedures in this document use the `auto` modes in {cluster-manager} to immediately create the required Identity and Access Management (IAM) resources by using the current AWS account. The required resources include the account-wide IAM roles and policies, cluster-specific Operator roles and policies, and OpenID Connect (OIDC) identity provider. - -When using the {cluster-manager} {hybrid-console-second} to create a {product-title} (ROSA) cluster that uses the STS, you can select the default options to create the cluster quickly. - -Before you can use the {cluster-manager} {hybrid-console-second} to deploy ROSA with STS clusters, you must associate your AWS account with your Red{nbsp}Hat organization and create the required account-wide STS roles and policies. diff --git a/modules/rosa-quickstart-prerequisites.adoc b/modules/rosa-quickstart-prerequisites.adoc deleted file mode 100644 index c96aef4df587..000000000000 --- a/modules/rosa-quickstart-prerequisites.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_getting_started/rosa-quickstart-guide-ui.adoc - -:_mod-docs-content-type: REFERENCE -[id="rosa-getting-started-prerequisites_{context}"] -= Prerequisites - -[role="_abstract"] -* You reviewed the introduction to {product-title}, and the documentation on {product-title} architecture models and concepts. - -* You have read the documentation on the guidelines for planning your environment. -// Removed as part of OSDOCS-13310, until figures are verified. -// xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] and - -* You have reviewed the detailed AWS prerequisites for ROSA with STS. - -* You have the AWS service quotas required to run a ROSA cluster. diff --git a/modules/rosa-sts-deployment-workflow-overview.adoc b/modules/rosa-sts-deployment-workflow-overview.adoc deleted file mode 100644 index 7c42bb3f1699..000000000000 --- a/modules/rosa-sts-deployment-workflow-overview.adoc +++ /dev/null @@ -1,20 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_getting_started/rosa-sts-getting-started-workflow.adoc - -:_mod-docs-content-type: CONCEPT -[id="rosa-sts-overview-of-the-deployment-workflow_{context}"] -= Overview of the ROSA with STS deployment workflow - -[role="_abstract"] -The AWS Security Token Service (STS) is a global web service that provides short-term credentials for IAM or federated users. You can use AWS STS with {product-title} (ROSA) to allocate temporary, limited-privilege credentials for component-specific IAM roles. The service enables cluster components to make AWS API calls using secure cloud resource management practices. - -You can follow the workflow stages outlined below to set up and access a ROSA cluster that uses STS. - -. *Complete the AWS prerequisites for ROSA with STS*. To deploy a ROSA cluster with STS, your AWS account must meet the prerequisite requirements. -. *Review the required AWS service quotas*. To prepare for your cluster deployment, review the AWS service quotas that are required to run a ROSA cluster. -. *Set up the environment and install ROSA using STS*. Before you create a ROSA with STS cluster, you must enable ROSA in your AWS account, install and configure the required CLI tools, and verify the configuration of the CLI tools. You must also verify that the AWS Elastic Load Balancing (ELB) service role exists and that the required AWS resource quotas are available. -. 
*Create a ROSA cluster with STS quickly or create a cluster using customizations*. Use the ROSA CLI (`rosa`) or {cluster-manager-first} to create a cluster with STS. You can create a cluster quickly by using the default options, or you can apply customizations to suit the needs of your organization. -. *Access your cluster*. You can configure an identity provider and grant cluster administrator privileges to the identity provider users as required. You can also access a newly-deployed cluster quickly by configuring a `cluster-admin` user. -. *Revoke access to a ROSA cluster for a user*. You can revoke access to a ROSA with STS cluster from a user by using the ROSA CLI or the web console. -. *Delete a ROSA cluster*. You can delete a ROSA with STS cluster by using the ROSA CLI (`rosa`). After deleting a cluster, you can delete the STS resources by using the AWS Identity and Access Management (IAM) Console. diff --git a/modules/rosa-sts-ocm-roles-permissions.adoc b/modules/rosa-sts-ocm-roles-permissions.adoc deleted file mode 100644 index 0d4dae812e9b..000000000000 --- a/modules/rosa-sts-ocm-roles-permissions.adoc +++ /dev/null @@ -1,22 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_architecture/rosa-sts-about-iam-resources.adoc - -:_mod-docs-content-type: CONCEPT -[id="rosa-sts-ocm-roles-and-permissions_{context}"] -= {cluster-manager} roles and permissions - -[role="_abstract"] -If you create {product-title} clusters by using {cluster-manager-url}, you must have the following AWS IAM roles linked to your AWS account to create and manage the clusters. - -These AWS IAM roles are as follows: - -* The {product-title} user role (`user-role`) is an AWS role used by Red{nbsp}Hat to verify the customer's AWS identity. This role has no additional permissions, and the role has a trust relationship with the Red{nbsp}Hat installer account. -* An `ocm-role` resource grants the required permissions for installation of {product-title} clusters in {cluster-manager}. You can apply basic or administrative permissions to the `ocm-role` resource. If you create an administrative `ocm-role` resource, {cluster-manager} can create the needed AWS Operator roles and OpenID Connect (OIDC) provider. This IAM role also creates a trust relationship with the Red{nbsp}Hat installer account as well. -+ -[NOTE] -==== -The `ocm-role` IAM resource refers to the combination of the IAM role and the necessary policies created with it. -==== - -You must create this user role as well as an administrative `ocm-role` resource, if you want to use the auto mode in {cluster-manager} to create your Operator role policies and OIDC provider. diff --git a/modules/rosa-sts-oidc-overview.adoc b/modules/rosa-sts-oidc-overview.adoc deleted file mode 100644 index 8fd003431964..000000000000 --- a/modules/rosa-sts-oidc-overview.adoc +++ /dev/null @@ -1,10 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_architecture/rosa-sts-about-iam-resources.adoc - -:_mod-docs-content-type: CONCEPT -[id="rosa-sts-oidc-overview_{context}"] -= Open ID Connect (OIDC) requirements for Operator authentication - -[role="_abstract"] -For ROSA installations that use STS, you must create a cluster-specific OIDC provider that is used by the cluster Operators to authenticate or create your own OIDC configuration for your own OIDC provider. 
diff --git a/modules/rosa-understanding-about.adoc b/modules/rosa-understanding-about.adoc deleted file mode 100644 index b67910ac1f01..000000000000 --- a/modules/rosa-understanding-about.adoc +++ /dev/null @@ -1,16 +0,0 @@ -// Module included in the following assemblies: -// -// * rosa_architecture/rosa-understanding.adoc - -:_mod-docs-content-type: CONCEPT -[id="rosa-understanding-about_{context}"] -= About {product-title} - -[role="_abstract"] -{product-title} is a fully-managed, turnkey application platform that allows you to focus on delivering value to your customers by building and deploying applications. Red{nbsp}Hat site reliability engineering (SRE) experts manage the underlying platform so you do not have to worry about the complexity of infrastructure management. {product-title} provides seamless integration with Amazon CloudWatch, AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (VPC), and a wide range of additional AWS services to further accelerate the building and delivering of differentiating experiences to your customers. - -You subscribe to the service directly from your AWS account. After you create clusters, you can operate your clusters with the OpenShift web console, the ROSA CLI, or through {cluster-manager-first}. - -You receive {OCP-short} updates with new feature releases and a shared, common source for alignment with {OCP-short}. {product-title} supports the same versions of {OCP-short} as Red{nbsp}Hat OpenShift Dedicated to achieve version consistency. - -image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}] diff --git a/rosa_architecture/rosa-architecture-models.adoc b/rosa_architecture/rosa-architecture-models.adoc index 721915b48b9b..fe37fa7ba2ac 100644 --- a/rosa_architecture/rosa-architecture-models.adoc +++ b/rosa_architecture/rosa-architecture-models.adoc @@ -9,14 +9,12 @@ include::_attributes/common-attributes.adoc[] toc::[] [role="_abstract"] -Learn about {product-title} architecture models and cluster topologies. +{product-title} has the following cluster topology: -include::modules/rosa-architecture-topology-overview.adoc[leveloffset=+1] +Hosted control plane (HCP) - The control plane is hosted in a Red{nbsp}Hat account and the worker nodes are deployed in the customer's AWS account. 
include::modules/rosa-hcp-classic-comparison.adoc[leveloffset=+1] -[role="_additional-resources"] -[id="additional-resources_{context}"] .Additional resources * xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-sdpolicy-regions-az_rosa-hcp-service-definition[Regions and availability zones] @@ -35,9 +33,7 @@ endif::openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] -[role="_additional-resources"] -[id="additional-resources-classic_{context}"] -== Additional resources +.Additional resources * xref:../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-configuring.html[Configuring machine pools in Local Zones] endif::openshift-rosa-hcp[] \ No newline at end of file diff --git a/rosa_architecture/rosa-sts-about-iam-resources.adoc b/rosa_architecture/rosa-sts-about-iam-resources.adoc index 040233912dfb..01dedbf9798e 100644 --- a/rosa_architecture/rosa-sts-about-iam-resources.adoc +++ b/rosa_architecture/rosa-sts-about-iam-resources.adoc @@ -55,14 +55,27 @@ ifdef::openshift-rosa-hcp[] * xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Creating a {hcp-title} cluster quickly] endif::openshift-rosa-hcp[] -include::modules/rosa-sts-ocm-roles-permissions.adoc[leveloffset=+1] +[id="rosa-sts-ocm-roles-and-permissions_{context}"] +== {cluster-manager} roles and permissions + +If you create ROSA clusters by using {cluster-manager-url}, you must have the following AWS IAM roles linked to your AWS account to create and manage the clusters. For more information, see xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-associating-account_rosa-sts-aws-prereqs[Associating your AWS account]. + +These AWS IAM roles are as follows: + +* The ROSA user role (`user-role`) is an AWS role used by Red{nbsp}Hat to verify the customer's AWS identity. This role has no additional permissions, and the role has a trust relationship with the Red{nbsp}Hat installer account. +* An `ocm-role` resource grants the required permissions for installation of ROSA clusters in {cluster-manager}. You can apply basic or administrative permissions to the `ocm-role` resource. If you create an administrative `ocm-role` resource, {cluster-manager} can create the needed AWS Operator roles and OpenID Connect (OIDC) provider. This IAM role also creates a trust relationship with the Red{nbsp}Hat installer account as well. ++ +[NOTE] +==== +The `ocm-role` IAM resource refers to the combination of the IAM role and the necessary policies created with it. +==== + +You must create this user role as well as an administrative `ocm-role` resource, if you want to use the auto mode in {cluster-manager} to create your Operator role policies and OIDC provider. 
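+
+For example, you can create both resources by using the ROSA CLI (`rosa`). The following commands are a minimal sketch that assumes you are already logged in to the `rosa` CLI and the AWS CLI; the commands prompt you interactively, and the available flags can vary between `rosa` versions:
+
+[source,terminal]
+----
+$ rosa create ocm-role --admin
+$ rosa create user-role
+----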
include::modules/rosa-sts-understanding-ocm-role.adoc[leveloffset=+2] [role="_additional-resources"] -[id="additional-resources-ocm-roles_{context}"] .Additional resources -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-associating-account_rosa-sts-aws-prereqs[Associating your AWS account] * xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies-creation-methods_rosa-sts-about-iam-resources[Methods of account-wide role creation] include::modules/rosa-sts-ocm-role-creation.adoc[leveloffset=+2] @@ -115,13 +128,15 @@ include::modules/rosa-sts-about-operator-role-prefixes.adoc[leveloffset=+2] ifdef::openshift-rosa[] [role="_additional-resources"] -[id="additional-resources_{context}"] -== Additional resources +.Additional resources * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-cli_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations using the CLI] * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using {cluster-manager}] endif::openshift-rosa[] -include::modules/rosa-sts-oidc-overview.adoc[leveloffset=+1] +[id="rosa-sts-oidc-provider-requirements-for-operators_{context}"] +== Open ID Connect (OIDC) requirements for Operator authentication + +For ROSA installations that use STS, you must create a cluster-specific OIDC provider that is used by the cluster Operators to authenticate or create your own OIDC configuration for your own OIDC provider. include::modules/rosa-sts-oidc-provider-command.adoc[leveloffset=+2] diff --git a/rosa_architecture/rosa-understanding.adoc b/rosa_architecture/rosa-understanding.adoc index 2e7909ed4aad..946db885bd48 100644 --- a/rosa_architecture/rosa-understanding.adoc +++ b/rosa_architecture/rosa-understanding.adoc @@ -1,6 +1,6 @@ :_mod-docs-content-type: ASSEMBLY [id="rosa-understanding"] -= Understanding {product-title} += Understanding ROSA include::_attributes/attributes-openshift-dedicated.adoc[] :context: rosa-understanding @@ -8,9 +8,19 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] [role="_abstract"] -Learn about and manage containerized applications that integrate {product-title} with Amazon Web Services (AWS). Use the {cluster-manager-first}, the OpenShift web console, or the {rosa-cli} to operate clusters while Red Hat site reliability engineering (SRE) experts manage the underlying infrastructure. To get started, ensure that you have an AWS account and Red{nbsp}Hat account. +Learn about {product-title} (ROSA), interacting with ROSA by using {cluster-manager-first} and command-line interface (CLI) tools, consumption experience, and integration with Amazon Web Services (AWS) services. -include::modules/rosa-understanding-about.adoc[leveloffset=+1] +[id="rosa-understanding-about_{context}"] +== About ROSA + +ROSA is a fully-managed, turnkey application platform that allows you to focus on delivering value to your customers by building and deploying applications. Red{nbsp}Hat site reliability engineering (SRE) experts manage the underlying platform so you do not have to worry about the complexity of infrastructure management. 
ROSA provides seamless integration with Amazon CloudWatch, AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (VPC), and a wide range of additional AWS services to further accelerate the building and delivering of differentiating experiences to your customers. + +You subscribe to the service directly from your AWS account. After you create clusters, you can operate your clusters with the OpenShift web console, the ROSA CLI, or through {cluster-manager-first}. + +You receive OpenShift updates with new feature releases and a shared, common source for alignment with OpenShift Container Platform. ROSA supports the same versions of OpenShift as Red{nbsp}Hat OpenShift Dedicated and OpenShift Container Platform to achieve version consistency. + +image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}] +For additional information about ROSA installation, see link:https://www.redhat.com/en/products/interactive-walkthrough/install-rosa[Installing Red{nbsp}Hat OpenShift Service on AWS (ROSA) interactive walkthrough]. //[id="rosa-understanding-credential-modes_{context}"] //== Credential modes @@ -43,11 +53,16 @@ include::modules/rosa-understanding-about.adoc[leveloffset=+1] include::modules/rosa-sdpolicy-am-billing.adoc[leveloffset=+1] +[id="rosa-understanding-getting-started_{context}"] +== Getting started + +To get started with deploying your cluster, ensure your AWS account has met the prerequisites, you have a Red{nbsp}Hat account ready, and follow the procedures outlined in xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Getting started with {product-title}]. + [role="_additional-resources"] [id="additional-resources_{context}"] == Additional resources -* xref:../ocm/ocm-overview.adoc#ocm-overview[{cluster-manager}] +* xref:../ocm/ocm-overview.adoc#ocm-overview[OpenShift Cluster Manager] //* xref ../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] * xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Getting started with {product-title}] * link:https://aws.amazon.com/rosa/pricing/[AWS pricing page] \ No newline at end of file diff --git a/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc b/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc index d094199f6d28..bf3f70babedc 100644 --- a/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc +++ b/rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc @@ -3,7 +3,6 @@ include::_attributes/attributes-openshift-dedicated.adoc[] [id="rosa-managing-worker-nodes"] = Managing compute nodes :context: rosa-managing-worker-nodes - toc::[] [role="_abstract"] @@ -16,9 +15,9 @@ You can edit machine pool configuration options such as scaling, adding node lab ifdef::openshift-rosa-hcp[] You can also create new machine pools with Capacity Reservations. -*Overview of AWS Capacity Reservations* +.Overview of AWS Capacity Reservations -If you have reserved compute capacity using AWS Capacity Reservations for a specific instance type and Availability Zone (AZ), you can use it for your {product-title} worker nodes. Both On-Demand Capacity Reservations and Capacity Blocks for machine learning (ML) workloads are supported. +If you have reserved compute capacity using link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-reservation-overview.html[AWS Capacity Reservations] for a specific instance type and Availability Zone (AZ), you can use it for your {product-title} worker nodes. 
Both On-Demand Capacity Reservations and Capacity Blocks for machine learning (ML) workloads are supported. Purchase and manage a Capacity Reservation directly with AWS. After reserving the capacity, add a Capacity Reservation ID to a new machine pool when you create it in your {product-title} cluster. You can also use a Capacity Reservation shared with you from another AWS account within your AWS Organization. @@ -33,13 +32,7 @@ Using Capacity Reservations on machine pools in {product-title} clusters has the * You can only add a Capacity Reservation ID to a new machine pool. * You cannot use autoscaling with Capacity Reservations if you create a machine pool using the {rosa-cli}. However, you can enable both autoscaling and Capacity Reservations on machine pools created using {cluster-manager}. -You can create a machine pool with a Capacity Reservation using either {cluster-manager} or the {rosa-cli}. - -[role="_additional-resources"] -.Additional resources - -* link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-reservation-overview.html[AWS Capacity Reservations] - +You can create a machine pool with a Capacity Reservation using either xref:../../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#creating_machine_pools_ocm_rosa-managing-worker-nodes[{cluster-manager}] or xref:../../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#creating_machine_pools_cli_capres_rosa-managing-worker-nodes[the {rosa-cli}]. endif::openshift-rosa-hcp[] include::modules/creating-a-machine-pool.adoc[leveloffset=+1] @@ -55,7 +48,7 @@ endif::openshift-rosa-hcp[] ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources -* xref:../../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-security-groups-custom_rosa-sts-aws-prereqs[Additional custom security groups] +* xref:../../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-aws-prereqs.adoc#rosa-security-groups_prerequisites[Additional custom security groups] endif::openshift-rosa-hcp[] include::modules/configuring-machine-pool-disk-volume.adoc[leveloffset=+1] @@ -66,7 +59,7 @@ include::modules/configuring-machine-pool-disk-volume-cli.adoc[leveloffset=+2] ifndef::openshift-rosa-hcp[] [role="_additional-resources"] .Additional resources -* xref:../../cli_reference/rosa_cli/rosa-cli-commands.adoc#_rosa_create_machinepool[`rosa create machinepool` command] +* xref:../../cli_reference/rosa_cli/rosa-cli-commands.adoc#rosa-create-machinepool[`rosa create machinepool` command] endif::openshift-rosa-hcp[] include::modules/deleting-machine-pools.adoc[leveloffset=+1] @@ -94,16 +87,13 @@ include::modules/rosa-adding-tuning.adoc[leveloffset=+1] include::modules/rosa-node-drain-grace-period.adoc[leveloffset=+1] endif::openshift-rosa-hcp[] -[role="_additional-resources"] -[id="additional-resources_{context}"] == Additional resources * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-machinepools-about.adoc#rosa-nodes-machinepools-about[About machine pools] * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[About autoscaling] * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#rosa-nodes-about-autoscaling-nodes[Enabling autoscaling] * xref:../../rosa_cluster_admin/rosa_nodes/rosa-nodes-about-autoscaling-nodes.adoc#nodes-disabling-autoscaling-nodes[Disabling autoscaling] -* link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/capacity-reservation-overview.html[AWS Capacity Reservations] 
ifdef::openshift-rosa[] -* xref:../../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-service-definition[{rosa-classic-title} Service Definition] +* xref:../../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-service-definition[{rosa-classic} Service Definition] endif::openshift-rosa[] ifdef::openshift-rosa-hcp[] * xref:../../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-hcp-service-definition[{product-title} Service Definition] diff --git a/rosa_getting_started/rosa-getting-started.adoc b/rosa_getting_started/rosa-getting-started.adoc index 5e586a5f461d..150b37dedae1 100644 --- a/rosa_getting_started/rosa-getting-started.adoc +++ b/rosa_getting_started/rosa-getting-started.adoc @@ -6,65 +6,91 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] -[role="_abstract"] -Create a {product-title} cluster, grant user access, deploy your first application, and learn how to revoke user access and delete your cluster. - [NOTE] ==== -If you need a quick start guide, see the {product-title} quick start guide. +If you are looking for a quickstart guide for ROSA, see xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-quickstart-guide-ui[{product-title} quickstart guide]. ==== -You can create a {product-title} cluster that uses AWS Security Token Service (STS). +Follow this getting started document to create a {product-title} (ROSA) cluster, grant user access, deploy your first application, and learn how to revoke user access and delete your cluster. + +You can create a ROSA cluster either with or without the AWS Security Token Service (STS). The procedures in this document enable you to create a cluster that uses AWS STS. For more information about using AWS STS with ROSA clusters, see xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding-aws-sts_rosa-understanding[Using the AWS Security Token Service]. + +[id="rosa-getting-started-prerequisites_{context}"] +== Prerequisites + +* You reviewed the xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[introduction to {product-title} (ROSA)], and the documentation on ROSA xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[architecture models] and xref:../architecture/architecture.adoc#architecture[architecture concepts]. + +* You have read the documentation on the xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[guidelines for planning your environment]. +// Removed as part of OSDOCS-13310, until figures are verified. +//xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] and + +* You have reviewed the detailed xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for ROSA with STS]. + +* You have the xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas that are required to run a ROSA cluster]. + +include::modules/rosa-getting-started-environment-setup.adoc[leveloffset=+1] +include::modules/rosa-getting-started-enable-rosa.adoc[leveloffset=+2] +include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+2] + +[id="rosa-getting-started-creating-a-cluster"] +== Creating a ROSA cluster with STS + +Choose from one of the following methods to deploy a {product-title} (ROSA) cluster that uses the AWS Security Token Service (STS). 
In each scenario, you can deploy your cluster by using {cluster-manager-first} or the ROSA CLI (`rosa`): + +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[*Creating a ROSA cluster with STS using the default options*]: You can create a ROSA cluster with STS quickly by using the default options and automatic STS resource creation. -include::modules/rosa-getting-started-prerequisites.adoc[leveloffset=+1] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[*Creating a ROSA cluster with STS using customizations*]: You can create a ROSA cluster with STS using customizations. You can also choose between the `auto` and `manual` modes when creating the required STS resources. -include::modules/rosa-getting-started-creating-cluster-overview.adoc[leveloffset=+1] -include::modules/rosa-getting-started-environment-setup.adoc[leveloffset=+2] -include::modules/rosa-getting-started-enable-rosa.adoc[leveloffset=+3] -include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+3] +.Additional resources + +* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-creating-cluster.adoc#rosa-creating-cluster[Creating a ROSA cluster without AWS STS] +* xref:../rosa_install_access_delete_clusters/rosa-aws-privatelink-creating-cluster.adoc#rosa-aws-privatelink-creating-cluster[Creating an AWS PrivateLink cluster on ROSA] +* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes] +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle] include::modules/rosa-getting-started-create-cluster-admin-user.adoc[leveloffset=+1] +.Additional resource + +* For steps to log in to the ROSA web console, see xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started-access-cluster-web-console_rosa-getting-started[Accessing a cluster through the web console] + include::modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc[leveloffset=+1] include::modules/rosa-getting-started-configure-an-idp.adoc[leveloffset=+2] + +.Additional resource + +* For detailed steps to configure each of the supported identity provider types, see xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers for STS] + include::modules/rosa-getting-started-grant-user-access.adoc[leveloffset=+2] include::modules/rosa-getting-started-grant-admin-privileges.adoc[leveloffset=+2] +[role="_additional-resources"] +.Additional resources + +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-cluster-admin-role_rosa-service-definition[Cluster administration role] +* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-customer-admin-user_rosa-service-definition[Customer administrator user] + include::modules/rosa-getting-started-access-cluster-web-console.adoc[leveloffset=+1] include::modules/deploy-app.adoc[leveloffset=+1] 
- include::modules/rosa-getting-started-revoking-admin-privileges-and-user-access.adoc[leveloffset=+1] include::modules/rosa-getting-started-revoke-admin-privileges.adoc[leveloffset=+2] include::modules/rosa-getting-started-revoke-user-access.adoc[leveloffset=+2] - include::modules/rosa-getting-started-deleting-a-cluster.adoc[leveloffset=+1] -[role="_additional-resources"] -[id="additional-resources_{context}"] -== Additional resources +[id="next-steps_{context}"] +== Next steps -* xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[Introduction to {product-title}] -* xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{product-title} architecture models] -* xref:../architecture/architecture.adoc#architecture[Architecture concepts] -* xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[Guidelines for planning your environment] -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for {product-title} with STS] -* xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas required to run a {product-title} cluster] -* xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding-aws-sts_rosa-understanding[Using the AWS Security Token Service] -* xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-quickstart-guide-ui[{product-title} quick start guide] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a {product-title} cluster with STS using the default options] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Creating a {product-title} cluster with STS using customizations] -* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-creating-cluster.adoc#rosa-creating-cluster[Creating a {product-title} cluster without AWS STS] -* xref:../rosa_install_access_delete_clusters/rosa-aws-privatelink-creating-cluster.adoc#rosa-aws-privatelink-creating-cluster[Creating an AWS PrivateLink cluster on ROSA] -* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle] -* xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started-access-cluster-web-console_rosa-getting-started[Accessing a cluster through the web console] -* xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers for STS] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-cluster-admin-role_rosa-service-definition[Cluster administration role] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-customer-admin-user_rosa-service-definition[Customer administrator user] * xref:../adding_service_cluster/adding-service.adoc#adding-service[Adding services to a cluster using the 
{cluster-manager} console] * xref:../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes] * xref:../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#preparing-to-configure-the-monitoring-stack-uwm[Preparing to configure the user workload monitoring stack] -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-overview-of-the-deployment-workflow[Understanding the {product-title} with STS deployment workflow] -* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the {product-title} deployment workflow] -* xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading {product-title} classic clusters] + +[role="_additional-resources"] +[id="additional-resources_{context}"] +== Additional resources + +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-overview-of-the-deployment-workflow[Understanding the ROSA with STS deployment workflow] + +* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the ROSA deployment workflow] + +* xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading ROSA Classic clusters] diff --git a/rosa_getting_started/rosa-quickstart-guide-ui.adoc b/rosa_getting_started/rosa-quickstart-guide-ui.adoc index cd69febc503c..02e906a3bca0 100644 --- a/rosa_getting_started/rosa-quickstart-guide-ui.adoc +++ b/rosa_getting_started/rosa-quickstart-guide-ui.adoc @@ -6,62 +6,116 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] -[role="_abstract"] -Create a {product-title} cluster by using {cluster-manager-first} on the {hybrid-console-url}. After you create your cluster, you can grant user access, deploy your application, revoke user access, and delete your cluster. - [NOTE] ==== -If you are looking for a comprehensive getting started guide for {product-title}, see the comprehensive guide to getting started with {product-title}. +If you are looking for a comprehensive getting started guide for {product-title} (ROSA), see xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Comprehensive guide to getting started with {product-title}]. For additional information on ROSA installation, see link:https://www.redhat.com/en/products/interactive-walkthrough/install-rosa[Installing Red{nbsp}Hat OpenShift Service on AWS (ROSA) interactive walkthrough]. ==== -You can create a cluster that uses AWS Security Token Service (STS). +Follow this guide to quickly create a {product-title} (ROSA) cluster using {cluster-manager-first} on the {hybrid-console-url}, grant user access, deploy your first application, and learn how to revoke user access and delete your cluster. + +The procedures in this document enable you to create a cluster that uses AWS Security Token Service (STS). For more information about using AWS STS with ROSA clusters, see xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding-aws-sts_rosa-understanding[Using the AWS Security Token Service]. 
image::291_OpenShift_on_AWS_Intro_1122_docs.png[{product-title}]
-include::modules/rosa-quickstart-prerequisites.adoc[leveloffset=+1]
+[id="rosa-getting-started-prerequisites_{context}"]
+== Prerequisites
+
+* You reviewed the xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[introduction to {product-title} (ROSA)], and the documentation on ROSA xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[architecture models] and xref:../architecture/architecture.adoc#architecture[architecture concepts].
+
+* You have read the documentation on the xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[guidelines for planning your environment].
+// Removed as part of OSDOCS-13310, until figures are verified.
+// xref:../rosa_planning/rosa-limits-scalability.adoc#rosa-limits-scalability[limits and scalability] and
+
+* You have reviewed the detailed xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for ROSA with STS].
+
+* You have the xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas that are required to run a ROSA cluster].
-include::modules/rosa-quickstart-creating-cluster-overview.adoc[leveloffset=+1]
 //This content is pulled from rosa-getting-started-environment-setup.adoc
-include::modules/rosa-getting-started-environment-setup.adoc[leveloffset=+2]
+include::modules/rosa-getting-started-environment-setup.adoc[leveloffset=+1]
 //This content is pulled from rosa-getting-started-enable-rosa.adoc
-include::modules/rosa-getting-started-enable-rosa.adoc[leveloffset=+3]
+include::modules/rosa-getting-started-enable-rosa.adoc[leveloffset=+2]
+
 //This content is pulled from rosa-getting-started-install-configure-cli-tools
-include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+3]
+include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+2]
+
+
 //This content is pulled from rosa-sts-creating-a-cluster-quickly.adoc
+[id="rosa-quickstart-creating-a-cluster"]
+== Creating a ROSA cluster with AWS STS using the default auto mode
+
+{cluster-manager-first} is a managed service on the {hybrid-console-url} where you can install, modify, operate, and upgrade your Red{nbsp}Hat OpenShift clusters. This service allows you to work with all of your organization’s clusters from a single dashboard.
+The procedures in this document use the `auto` mode in {cluster-manager} to immediately create the required Identity and Access Management (IAM) resources using the current AWS account. The required resources include the account-wide IAM roles and policies, cluster-specific Operator roles and policies, and OpenID Connect (OIDC) identity provider.
+
+//This content is pulled from rosa-sts-creating-a-cluster-quickly-ocm.adoc
+When using the {cluster-manager} {hybrid-console-second} to create a {product-title} (ROSA) cluster that uses AWS STS, you can select the default options to create the cluster quickly.
+
+Before you can use the {cluster-manager} {hybrid-console-second} to deploy ROSA with STS clusters, you must associate your AWS account with your Red{nbsp}Hat organization and create the required account-wide STS roles and policies.
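+
+If you prefer to prepare these prerequisites from the command line, the following is a minimal sketch that uses the ROSA CLI (`rosa`). These are standard `rosa` subcommands, but verify the flags against your installed CLI version; the console flow described in the modules that follow is equivalent:
+
+[source,terminal]
+----
+$ rosa login
+$ rosa create ocm-role --mode auto
+$ rosa create user-role --mode auto
+$ rosa create account-roles --mode auto
+----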
+
 //This content is pulled from rosa-sts-overview-of-the-default-cluster-specifications.adoc
-include::modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc[leveloffset=+3]
+include::modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc[leveloffset=+2]
+
 //This content is pulled from rosa-sts-understanding-aws-account-association.adoc
-include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset=+3]
+include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset=+2]
+
 //This content is pulled from rosa-sts-associating-your-aws-account.adoc
-include::modules/rosa-sts-associating-your-aws-account.adoc[leveloffset=+3]
+include::modules/rosa-sts-associating-your-aws-account.adoc[leveloffset=+2]
+
 //This content is pulled from rosa-sts-creating-account-wide-sts-roles-and-policies.adoc
-include::modules/rosa-sts-creating-account-wide-sts-roles-and-policies.adoc[leveloffset=+3]
+include::modules/rosa-sts-creating-account-wide-sts-roles-and-policies.adoc[leveloffset=+2]
+
 //This content is pulled from rosa-sts-creating-a-cluster-using-defaults-ocm.adoc
-include::modules/rosa-sts-creating-a-cluster-using-defaults-ocm.adoc[leveloffset=+3]
+include::modules/rosa-sts-creating-a-cluster-using-defaults-ocm.adoc[leveloffset=+2]
+
+////
+.Additional resources
+
+* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference]
+* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes]
+* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle]
+////
+
 //This content is pulled from rosa-getting-started-create-cluster-admin-user.adoc
 include::modules/rosa-getting-started-create-cluster-admin-user.adoc[leveloffset=+1]
+[role="_additional-resources"]
+.Additional resources
+
+* For steps to log in to the ROSA web console, see xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-getting-started-access-cluster-web-console_rosa-quickstart-guide-ui[Accessing a cluster through the web console].
+
+
 //This content is pulled from rosa-getting-started-configure-an-idp-and-grant-access.adoc
 include::modules/rosa-getting-started-configure-an-idp-and-grant-access.adoc[leveloffset=+1]
 //This content is pulled from rosa-getting-started-configure-an-idp.adoc
 include::modules/rosa-getting-started-configure-an-idp.adoc[leveloffset=+2]
+[role="_additional-resources"]
+.Additional resources
+
+* For detailed steps to configure each of the supported identity provider types, see xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers for STS].
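+
+As a CLI alternative to the console steps referenced above, you can also create an identity provider with the ROSA CLI. This is a hedged sketch that assumes a GitHub identity provider, a cluster named `my-cluster`, and a GitHub organization named `my-org`, all of which are placeholders:
+
+[source,terminal]
+----
+$ rosa create idp --cluster=my-cluster --type=github --organizations=my-org
+----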
+
+
 //This content is pulled from rosa-getting-started-grant-user-access.adoc
 include::modules/rosa-getting-started-grant-user-access.adoc[leveloffset=+2]
+
 //This content is pulled from rosa-getting-started-grant-admin-privileges.adoc
 include::modules/rosa-getting-started-grant-admin-privileges.adoc[leveloffset=+2]
+[role="_additional-resources"]
+.Additional resources
+
+* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-cluster-admin-role_rosa-service-definition[Cluster administration role]
+* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-customer-admin-user_rosa-service-definition[Customer administrator user]
+
 //This content is pulled from rosa-getting-started-access-cluster-web-console.adoc
 include::modules/rosa-getting-started-access-cluster-web-console.adoc[leveloffset=+1]
@@ -69,40 +123,35 @@ include::modules/rosa-getting-started-access-cluster-web-console.adoc[leveloffse
 //This content is pulled from deploy-app.adoc
 include::modules/deploy-app.adoc[leveloffset=+1]
+
 //This content is pulled from rosa-getting-started-revoking-admin-privileges-and-user-access.adoc
 include::modules/rosa-getting-started-revoking-admin-privileges-and-user-access.adoc[leveloffset=+1]
+
 //This content is pulled from rosa-getting-started-revoke-admin-privileges.adoc
 include::modules/rosa-getting-started-revoke-admin-privileges.adoc[leveloffset=+2]
-//This content is pulled from rosa-getting-started-revoke-user-access.adoc
+
+//This content is pulled from rosa-getting-started-revoke-user-access.adoc
 include::modules/rosa-getting-started-revoke-user-access.adoc[leveloffset=+2]
+
 //This content is pulled from rosa-getting-started-deleting-a-cluster.adoc
 include::modules/rosa-getting-started-deleting-a-cluster.adoc[leveloffset=+1]
-[role="_additional-resources"]
-[id="additional-resources_{context}"]
-== Additional resources
+[id="next-steps_{context}"]
+== Next steps
-* xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding[Introduction to {product-title}]
-* xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{product-title} architecture models]
-* xref:../architecture/architecture.adoc#architecture[Architecture concepts]
-* xref:../rosa_planning/rosa-planning-environment.adoc#rosa-planning-environment[Guidelines for planning your environment]
-* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for {product-title} with STS]
-* xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[AWS service quotas required to run a {product-title} cluster]
-* xref:../rosa_architecture/rosa-understanding.adoc#rosa-understanding-aws-sts_rosa-understanding[Using the AWS Security Token Service]
-* xref:../rosa_getting_started/rosa-getting-started.adoc#rosa-getting-started[Comprehensive guide to getting started with {product-title}]
-* xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-account-wide-roles-and-policies_rosa-sts-about-iam-resources[Account-wide IAM role and policy reference]
-* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-understanding-deployment-modes_rosa-sts-creating-a-cluster-with-customizations[Understanding the auto and manual deployment modes]
-* xref:../rosa_architecture/rosa_policy_service_definition/rosa-life-cycle.adoc#rosa-life-cycle[{product-title} update life cycle]
-* 
xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-getting-started-access-cluster-web-console_rosa-quickstart-guide-ui[Accessing a cluster through the web console] -* xref:../rosa_install_access_delete_clusters/rosa-sts-config-identity-providers.adoc#rosa-sts-config-identity-providers[Configuring identity providers for STS] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-cluster-admin-role_rosa-service-definition[Cluster administration role] -* xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-customer-admin-user_rosa-service-definition[Customer administrator user] * xref:../adding_service_cluster/adding-service.adoc#adding-service[Adding services to a cluster using the {cluster-manager} console] * xref:../rosa_cluster_admin/rosa_nodes/rosa-managing-worker-nodes.adoc#rosa-managing-worker-nodes[Managing compute nodes] * xref:../observability/monitoring/configuring-user-workload-monitoring/preparing-to-configure-the-monitoring-stack-uwm.adoc#preparing-to-configure-the-monitoring-stack-uwm[Preparing to configure the user workload monitoring stack] -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-overview-of-the-deployment-workflow[Understanding the {product-title} with STS deployment workflow] -* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the {product-title} deployment workflow] -* xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading {product-title} classic clusters] + +[role="_additional-resources"] +[id="additional-resources_{context}"] +== Additional resources + +* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-overview-of-the-deployment-workflow[Understanding the ROSA with STS deployment workflow] + +* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the ROSA deployment workflow] + +* xref:../upgrading/rosa-upgrading-sts.adoc#rosa-upgrading-sts[Upgrading ROSA Classic clusters] diff --git a/rosa_getting_started/rosa-sts-getting-started-workflow.adoc b/rosa_getting_started/rosa-sts-getting-started-workflow.adoc index 8715972316a6..7ef6a476d6f5 100644 --- a/rosa_getting_started/rosa-sts-getting-started-workflow.adoc +++ b/rosa_getting_started/rosa-sts-getting-started-workflow.adoc @@ -6,21 +6,27 @@ include::_attributes/attributes-openshift-dedicated.adoc[] toc::[] -[role="_abstract"] -Before you create a {product-title} cluster, you must complete the {AWS} prerequisites. Verify that the required AWS service quotas are available, and set up your environment. +Before you create a {product-title} (ROSA) cluster, you must complete the AWS prerequisites, verify that the required AWS service quotas are available, and set up your environment. -include::modules/rosa-sts-deployment-workflow-overview.adoc[leveloffset=+1] +This document provides an overview of the ROSA with STS deployment workflow stages and refers to detailed resources for each stage. + +[id="rosa-sts-overview-of-the-deployment-workflow"] +== Overview of the ROSA with STS deployment workflow + +The AWS Security Token Service (STS) is a global web service that provides short-term credentials for IAM or federated users. You can use AWS STS with {product-title} (ROSA) to allocate temporary, limited-privilege credentials for component-specific IAM roles. 
The service enables cluster components to make AWS API calls using secure cloud resource management practices.
+
+You can follow the workflow stages outlined in this section to set up and access a ROSA cluster that uses STS.
+
+. xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[Complete the AWS prerequisites for ROSA with STS]. To deploy a ROSA cluster with STS, your AWS account must meet the prerequisite requirements.
+. xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Review the required AWS service quotas]. To prepare for your cluster deployment, review the AWS service quotas that are required to run a ROSA cluster.
+. xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-sts-setting-up-environment[Set up the environment and install ROSA using STS]. Before you create a ROSA with STS cluster, you must enable ROSA in your AWS account, install and configure the required CLI tools, and verify the configuration of the CLI tools. You must also verify that the AWS Elastic Load Balancing (ELB) service role exists and that the required AWS resource quotas are available.
+. xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Create a ROSA cluster with STS quickly] or xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[create a cluster using customizations]. Use the ROSA CLI (`rosa`) or {cluster-manager-first} to create a cluster with STS. You can create a cluster quickly by using the default options, or you can apply customizations to suit the needs of your organization. A condensed CLI sketch of the workflow follows this list.
+. xref:../rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc#rosa-sts-accessing-cluster[Access your cluster]. You can configure an identity provider and grant cluster administrator privileges to the identity provider users as required. You can also access a newly deployed cluster quickly by configuring a `cluster-admin` user.
+. xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-access-cluster.adoc#rosa-sts-deleting-access-cluster[Revoke access to a ROSA cluster for a user]. You can revoke access to a ROSA with STS cluster for a user by using the ROSA CLI or the web console.
+. xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc#rosa-sts-deleting-cluster[Delete a ROSA cluster]. You can delete a ROSA with STS cluster by using the ROSA CLI (`rosa`). After deleting a cluster, you can delete the STS resources by using the AWS Identity and Access Management (IAM) Console.
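+
+The following is a condensed sketch of the same workflow by using only the ROSA CLI (`rosa`). The cluster name `my-cluster` is a placeholder, and each command is covered in detail in the sections linked above:
+
+[source,terminal]
+----
+$ rosa create account-roles --mode auto
+$ rosa create cluster --cluster-name=my-cluster --sts --mode auto
+$ rosa create admin --cluster=my-cluster
+$ rosa delete cluster --cluster=my-cluster
+----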
[id="additional_resources_{context}"] [role="_additional-resources"] == Additional resources -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites for {product-title} with STS] -* xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Required AWS service quotas] -* xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-sts-setting-up-environment[Setting up the environment and installing {product-title} using STS] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a {product-title} cluster with STS quickly] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Creating a cluster using customizations] -* xref:../rosa_install_access_delete_clusters/rosa-sts-accessing-cluster.adoc#rosa-sts-accessing-cluster[Accessing your cluster] -* xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-access-cluster.adoc#rosa-sts-deleting-access-cluster[Revoking access to a {product-title} cluster for a user] -* xref:../rosa_install_access_delete_clusters/rosa-sts-deleting-cluster.adoc#rosa-sts-deleting-cluster[Deleting a {product-title} cluster] -* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the {product-title} deployment workflow] +* xref:../rosa_install_access_delete_clusters/rosa_getting_started_iam/rosa-getting-started-workflow.adoc#rosa-understanding-the-deployment-workflow[Understanding the ROSA deployment workflow] diff --git a/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc b/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc index ce54ae1b04fa..703fd3785038 100644 --- a/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc +++ b/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc @@ -11,19 +11,19 @@ Create a {product-title} cluster quickly by using the default options and automa [NOTE] ==== -If you are looking for a quick start guide for ROSA, see the {product-title} quick start guide. +If you are looking for a quickstart guide for ROSA, see xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-quickstart-guide-ui[{product-title} quickstart guide]. ==== The procedures in this document use the `auto` modes in the ROSA CLI (`rosa`) and {cluster-manager} to immediately create the required IAM resources using the current AWS account. The required resources include the account-wide IAM roles and policies, cluster-specific Operator roles and policies, and OpenID Connect (OIDC) identity provider. -Alternatively, you can use `manual` mode, which outputs the `aws` commands needed to create the IAM resources instead of deploying them automatically. +Alternatively, you can use `manual` mode, which outputs the `aws` commands needed to create the IAM resources instead of deploying them automatically. For steps to deploy a {product-title} cluster by using `manual` mode or with customizations, see xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-using-customizations_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster using customizations]. 
include::snippets/oidc-cloudfront.adoc[] [id="prerequisites_{context}"] == Prerequisites -* Ensure that you have completed the AWS prerequisites. +* Ensure that you have completed the xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites]. include::modules/rosa-sts-overview-of-the-default-cluster-specifications.adoc[leveloffset=+1] include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset=+1] @@ -31,9 +31,6 @@ include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset [role="_additional-resources"] .Additional resources -* xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS prerequisites] -* xref:../rosa_getting_started/rosa-quickstart-guide-ui.adoc#rosa-quickstart-guide-ui[{product-title} quick start guide] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-using-customizations_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster using customizations] * xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-associating-your-aws-account_rosa-sts-creating-a-cluster-quickly[Associating your AWS account with your Red{nbsp}Hat organization] include::modules/osd-aws-vpc-required-resources.adoc[leveloffset=+1] diff --git a/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc b/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc index bda44c021228..4bc7baf16bc5 100644 --- a/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc +++ b/rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc @@ -18,7 +18,7 @@ include::modules/rosa-sts-understanding-aws-account-association.adoc[leveloffset [role="_additional-resources"] .Additional resources -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using {cluster-manager}] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-cluster-customizations-ocm_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations by using OpenShift Cluster Manager] include::modules/rosa-sts-arn-path-customization-for-iam-roles-and-policies.adoc[leveloffset=+1] diff --git a/rosa_planning/rosa-sts-required-aws-service-quotas.adoc b/rosa_planning/rosa-sts-required-aws-service-quotas.adoc index c2c9c0effd85..14c9ee446dc8 100644 --- a/rosa_planning/rosa-sts-required-aws-service-quotas.adoc +++ b/rosa_planning/rosa-sts-required-aws-service-quotas.adoc @@ -11,10 +11,7 @@ Review this list of the required Amazon Web Service (AWS) service quotas that ar include::modules/rosa-required-aws-service-quotas.adoc[leveloffset=+1] -[role="_additional-resources"] -[id="additional-resources_{context}"] -== Additional resources - +== Next steps ifndef::openshift-rosa-hcp[] * xref:../rosa_planning/rosa-sts-setting-up-environment.adoc#rosa-sts-setting-up-environment[Setting up the environment] endif::openshift-rosa-hcp[] diff --git a/rosa_planning/rosa-sts-setting-up-environment.adoc b/rosa_planning/rosa-sts-setting-up-environment.adoc index de32ae9bc09c..eb861e7bf052 100644 --- a/rosa_planning/rosa-sts-setting-up-environment.adoc +++ 
b/rosa_planning/rosa-sts-setting-up-environment.adoc @@ -30,17 +30,23 @@ ifdef::openshift-rosa-hcp[] include::modules/rosa-getting-started-install-configure-cli-tools.adoc[leveloffset=+1] endif::openshift-rosa-hcp[] +[id="next-steps_rosa-sts-setting-up-environment"] +== Next steps +ifndef::openshift-rosa-hcp[] +* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Create a {product-title} cluster with STS quickly] or xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[create a cluster using customizations]. +endif::openshift-rosa-hcp[] +ifdef::openshift-rosa-hcp[] +* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Create a {product-title} cluster] +endif::openshift-rosa-hcp[] + +[id="additional-resources"] [role="_additional-resources"] -[id="additional-resources_{context}"] == Additional resources ifndef::openshift-rosa-hcp[] * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-sts-aws-prereqs[AWS Prerequisites] * xref:../rosa_planning/rosa-sts-required-aws-service-quotas.adoc#rosa-sts-required-aws-service-quotas[Required AWS service quotas and increase requests] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-a-cluster-quickly[Creating a {product-title} cluster with STS quickly] -* xref:../rosa_install_access_delete_clusters/rosa-sts-creating-a-cluster-with-customizations.adoc#rosa-sts-creating-a-cluster-with-customizations[Creating a cluster using customizations] endif::openshift-rosa-hcp[] ifdef::openshift-rosa-hcp[] * xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-hcp-prereqs[AWS Prerequisites] -* xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Creating a {product-title} cluster] // // TODO OSDOCS-11789: AWS quotas for HCP endif::openshift-rosa-hcp[] From 2cf74621c5bbbafab60be7780d547756fc214842 Mon Sep 17 00:00:00 2001 From: cbippley Date: Mon, 11 May 2026 11:16:56 -0400 Subject: [PATCH 17/17] OSDOCS-18910: Document OLM v1 deployment configuration API Add comprehensive documentation for the deployment configuration API in OLM v1, which provides feature parity with OLM v0 SubscriptionConfig. New modules: - olmv1-deployment-config-api.adoc: Core concepts and validation - olmv1-clusterobjectsets-deployment-mechanism.adoc: Underlying deployment mechanism - olmv1-customizing-operator-deployments.adoc: Step-by-step procedure - olmv1-deployment-config-examples.adoc: Common configuration examples - olmv1-deployment-config-reference.adoc: Complete field reference with merge behaviors - olmv1-deployment-config-troubleshooting.adoc: Troubleshooting guide Key features documented: - Environment variables, resources, node placement (nodeSelector, tolerations, affinity) - Volumes and volume mounts, annotations - Explicit merge/override behavior for each field - Schema validation and error handling - OLM v0 to v1 migration guidance All content is DITA-compliant and follows Red Hat/IBM style guidelines. 
Co-Authored-By: Claude Sonnet 4.5 --- .../vocabularies/OpenShiftDocs/accept.txt | 2 + .../vocabularies/OpenShiftDocs/reject.txt | 1 - .../ce/olmv1-configuring-extensions.adoc | 18 +- ...lusterobjectsets-deployment-mechanism.adoc | 91 +++++++++ ...lmv1-customizing-operator-deployments.adoc | 89 ++++++++ modules/olmv1-deployment-config-api.adoc | 139 +++++++++++++ modules/olmv1-deployment-config-examples.adoc | 191 ++++++++++++++++++ .../olmv1-deployment-config-reference.adoc | 161 +++++++++++++++ ...mv1-deployment-config-troubleshooting.adoc | 63 ++++++ 9 files changed, 752 insertions(+), 3 deletions(-) create mode 100644 modules/olmv1-clusterobjectsets-deployment-mechanism.adoc create mode 100644 modules/olmv1-customizing-operator-deployments.adoc create mode 100644 modules/olmv1-deployment-config-api.adoc create mode 100644 modules/olmv1-deployment-config-examples.adoc create mode 100644 modules/olmv1-deployment-config-reference.adoc create mode 100644 modules/olmv1-deployment-config-troubleshooting.adoc diff --git a/.vale/styles/config/vocabularies/OpenShiftDocs/accept.txt b/.vale/styles/config/vocabularies/OpenShiftDocs/accept.txt index 7faa7bca3121..4c4682012587 100644 --- a/.vale/styles/config/vocabularies/OpenShiftDocs/accept.txt +++ b/.vale/styles/config/vocabularies/OpenShiftDocs/accept.txt @@ -12,7 +12,9 @@ [Tt]elco [Uu]npause Assisted Installer +ClusterObjectSets? Control Plane Machine Set Operator +[Dd]eployment [Cc]onfigurations? GHz gpsd gpspipe diff --git a/.vale/styles/config/vocabularies/OpenShiftDocs/reject.txt b/.vale/styles/config/vocabularies/OpenShiftDocs/reject.txt index fcb24dfb1e44..9021905b2f8e 100644 --- a/.vale/styles/config/vocabularies/OpenShiftDocs/reject.txt +++ b/.vale/styles/config/vocabularies/OpenShiftDocs/reject.txt @@ -2,7 +2,6 @@ # Add terms that have a corresponding correctly capitalized form to accept.txt. [Dd]eployment [Cc]onfigs? -[Dd]eployment [Cc]onfigurations? [Oo]peratorize [Ss]ingle [Nn]ode OpenShift [Tt]hree [Nn]ode OpenShift diff --git a/extensions/ce/olmv1-configuring-extensions.adoc b/extensions/ce/olmv1-configuring-extensions.adoc index 746dec4812b0..e4ec8199101d 100644 --- a/extensions/ce/olmv1-configuring-extensions.adoc +++ b/extensions/ce/olmv1-configuring-extensions.adoc @@ -7,9 +7,9 @@ include::_attributes/common-attributes.adoc[] toc::[] [role="_abstract"] -In {olmv1-first}, extensions watch all namespaces by default. Some Operators support only namespace-scoped watching based on {olmv0} install modes. To install these Operators, configure the watch namespace for the extension. For more information, see "Discovering bundle install modes". +In {olmv1-first}, you can configure cluster extensions to customize installation behavior. This includes configuring watch namespaces for Operators that support only namespace-scoped watching based on {olmv0} install modes, and customizing operator pod deployments with resource limits, node placement, and other settings. 
-:FeatureName: Configuring a watch namespace for a cluster extension
+:FeatureName: Configuring cluster extensions
 include::snippets/technology-preview.adoc[]
 include::modules/olmv1-config-api.adoc[leveloffset=+1]
@@ -29,8 +29,22 @@ include::modules/olmv1-config-api-watch-namespace-examples.adoc[leveloffset=+2]
 include::modules/olmv1-clusterextension-watchnamespace-validation-errors.adoc[leveloffset=+2]
+include::modules/olmv1-deployment-config-api.adoc[leveloffset=+1]
+
+include::modules/olmv1-clusterobjectsets-deployment-mechanism.adoc[leveloffset=+1]
+
+include::modules/olmv1-customizing-operator-deployments.adoc[leveloffset=+1]
+
+include::modules/olmv1-deployment-config-examples.adoc[leveloffset=+1]
+
+include::modules/olmv1-deployment-config-reference.adoc[leveloffset=+1]
+
+include::modules/olmv1-deployment-config-troubleshooting.adoc[leveloffset=+1]
+
 [id="olmv1-configuring-extensions_additional-resources"]
 [role="_additional-resources"]
 == Additional resources
 * xref:../../extensions/ce/managing-ce.adoc#olmv1-installing-an-operator_managing-ce[Installing a cluster extension in all namespaces]
+* link:https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/[Kubernetes: Assigning Pods to Nodes]
+* link:https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/[Kubernetes: Taints and Tolerations]
diff --git a/modules/olmv1-clusterobjectsets-deployment-mechanism.adoc b/modules/olmv1-clusterobjectsets-deployment-mechanism.adoc
new file mode 100644
index 000000000000..6afb60de2c98
--- /dev/null
+++ b/modules/olmv1-clusterobjectsets-deployment-mechanism.adoc
@@ -0,0 +1,91 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/olmv1-configuring-extensions.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="olmv1-clusterobjectsets-deployment-mechanism_{context}"]
+= ClusterObjectSets deployment mechanism
+
+[role="_abstract"]
+{olmv1} uses ClusterObjectSets as the underlying mechanism to deploy cluster extensions with phased rollouts and safe upgrades.
+
+:FeatureName: {olmv1} ClusterObjectSets
+include::snippets/technology-preview.adoc[]
+
+ClusterObjectSets are cluster-scoped APIs representing versioned resource sets organized into ordered phases. {olmv1} uses ClusterObjectSets to deploy operator resources sequentially.
+
+[id="olmv1-clusterobjectsets-benefits_{context}"]
+== Benefits
+
+Phased rollouts:: Resources deploy in a defined order by kind. For example, custom resource definitions (CRDs) are created before deployments that use them.
+
+Safe upgrades:: Both old and new revisions remain active until the new version succeeds, preventing service disruption.
+
+Immutable revision records:: Each rollout is recorded as an immutable revision, providing a clear history of deployments.
+
+Large bundle support:: Referencing externalized secrets bypasses the etcd 1.5 MiB size limit, enabling large bundle deployments.
+
+[id="olmv1-clusterobjectsets-relationship_{context}"]
+== Relationship to deployment configuration
+
+{olmv1} applies deployment configurations during the ClusterObjectSet deployment process, modifying operator manifests before organizing them into phases.
+
+[id="olmv1-clusterobjectsets-phases_{context}"]
+== Deployment phases
+
+Resources deploy in the following phases, in order:
+
+. Namespaces
+. Policies
+. Identity resources
+. Configuration resources
+. Storage resources
+. Custom resource definitions
+. Roles
+. Role bindings
+. Infrastructure resources
+. Deployments
+. Scaling resources
+. Publishing resources
+. Admission resources
+
+Each phase completes before the next begins, ensuring foundational resources exist before dependent resources deploy.
+
+[id="olmv1-clusterobjectsets-inspecting_{context}"]
+== Inspecting ClusterObjectSets
+
+Inspect ClusterObjectSets to view deployment status and revision history.
+
+. List all ClusterObjectSets in the cluster:
++
+[source,terminal]
+----
+$ oc get clusterobjectsets
+----
+
+. List ClusterObjectSets for a specific extension:
++
+[source,terminal]
+----
+$ oc get clusterobjectsets -l olm.operatorframework.io/owner-name=<cluster_extension_name>
+----
++
+Replace `<cluster_extension_name>` with the name of your `ClusterExtension` resource.
+
+. View the details of a specific ClusterObjectSet:
++
+[source,terminal]
+----
+$ oc get clusterobjectset <cluster_object_set_name> -o yaml
+----
++
+The output shows deployment phases, resource status, and conditions.
+
+. Check the `ClusterExtension` status to see active revisions:
++
+[source,terminal]
+----
+$ oc get clusterextension <cluster_extension_name> -o jsonpath='{.status.conditions}' | jq
+----
++
+The output shows active revisions and their conditions.
diff --git a/modules/olmv1-customizing-operator-deployments.adoc b/modules/olmv1-customizing-operator-deployments.adoc
new file mode 100644
index 000000000000..efc2b4642416
--- /dev/null
+++ b/modules/olmv1-customizing-operator-deployments.adoc
@@ -0,0 +1,89 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/olmv1-configuring-extensions.adoc
+
+:_mod-docs-content-type: PROCEDURE
+[id="olmv1-customizing-operator-deployments_{context}"]
+= Customizing operator deployments
+
+[role="_abstract"]
+You can customize how operator pods are deployed by configuring deployment settings in the `ClusterExtension` resource.
+
+:FeatureName: {olmv1} deployment configuration API
+include::snippets/technology-preview.adoc[]
+
+.Prerequisites
+
+* You have installed the {oc-first}.
+* You have identified the operator you want to install and customize.
+
+.Procedure
+
+. Create a `ClusterExtension` resource with deployment configuration customizations:
++
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: my-operator
+spec:
+  namespace: my-operator-ns
+  serviceAccount:
+    name: my-operator-installer
+  config:
+    configType: Inline
+    inline:
+      deploymentConfig:
+        resources:
+          requests:
+            cpu: 100m
+            memory: 128Mi
+          limits:
+            cpu: 500m
+            memory: 512Mi
+        nodeSelector:
+          node-role.kubernetes.io/infra: ""
+        tolerations:
+        - key: node-role.kubernetes.io/infra
+          operator: Exists
+          effect: NoSchedule
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: my-operator
+      version: 1.0.0
+----
++
+where:
++
+--
+`resources:`:: Specifies CPU and memory resource requests and limits for the operator pod.
+`nodeSelector:`:: Restricts pod scheduling to infrastructure nodes.
+`tolerations:`:: Allows the pod to be scheduled on nodes with the specified taint.
+--
+
+. Apply the `ClusterExtension` resource:
++
+[source,terminal]
+----
+$ oc apply -f my-operator.yaml
+----
+
+. Verify the installation:
++
+[source,terminal]
+----
+$ oc get clusterextension my-operator -o yaml
+----
+
+.Verification
+
+* Verify that the operator pod is running with the configured settings:
++
+[source,terminal]
+----
+$ oc get pods -n my-operator-ns
+----
++
+The output shows the operator pod in the `Running` state with the configured deployment settings applied.
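+
+As an additional check, you can confirm that a specific `deploymentConfig` setting reached the rendered deployment. This sketch assumes the `my-operator-ns` namespace from the preceding example and reads the node selector from the first deployment in that namespace:
+
+[source,terminal]
+----
+$ oc get deployment -n my-operator-ns -o jsonpath='{.items[0].spec.template.spec.nodeSelector}'
+----
+
+The output should include the `node-role.kubernetes.io/infra` key that was configured in the `ClusterExtension` resource.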
diff --git a/modules/olmv1-deployment-config-api.adoc b/modules/olmv1-deployment-config-api.adoc
new file mode 100644
index 000000000000..1d824dfaec5e
--- /dev/null
+++ b/modules/olmv1-deployment-config-api.adoc
@@ -0,0 +1,139 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/olmv1-configuring-extensions.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="olmv1-deployment-config-api_{context}"]
+= Deployment configuration API
+
+[role="_abstract"]
+You can customize operator pod deployments by using the deployment configuration API in the `ClusterExtension` resource.
+
+:FeatureName: {olmv1} deployment configuration API
+include::snippets/technology-preview.adoc[]
+
+The deployment configuration API provides feature parity with the `Subscription.spec.config` API in {olmv0}. You can configure resources, node placement, storage, environment variables, and other deployment settings.
+
+[id="olmv1-deployment-config-structure_{context}"]
+== Deployment configuration structure
+
+Specify the deployment configuration as a JSON object in the `spec.config.inline.deploymentConfig` field.
+
+.Example deployment configuration
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: <cluster_extension_name>
+spec:
+  namespace: <namespace_name>
+  config:
+    configType: Inline
+    inline:
+      deploymentConfig:
+        resources:
+          requests:
+            cpu: 100m
+            memory: 128Mi
+          limits:
+            cpu: 500m
+            memory: 512Mi
+        nodeSelector:
+          node-role.kubernetes.io/infra: ""
+        tolerations:
+        - key: node-role.kubernetes.io/infra
+          operator: Exists
+          effect: NoSchedule
+----
+
+where:
+
+--
+`deploymentConfig:`:: Deployment configuration object.
+`resources:`:: CPU and memory requests and limits.
+`nodeSelector:`:: Node placement selector.
+`tolerations:`:: Node taint tolerations.
+--
+
+[id="olmv1-deployment-config-fields_{context}"]
+== Supported configuration fields
+
+Environment variables:: Add or override environment variables with `env` and `envFrom`. Values are merged with existing container environment variables, with `deploymentConfig` values taking precedence.
+
+Resource requirements:: Specify CPU and memory requests and limits with `resources`. Replaces existing resource requirements.
+
+Node selector:: Control pod node placement with `nodeSelector`. Replaces existing node selector.
+
+Tolerations:: Schedule pods on nodes with taints by using `tolerations`. Appended to existing tolerations.
+
+Affinity rules:: Define pod affinity and anti-affinity rules with `affinity`. Non-nil fields replace corresponding bundle fields.
+
+Volumes and volume mounts:: Add `emptyDir`, `configMap`, or `secret` volumes. Appended to existing volumes.
+
+Annotations:: Add custom pod annotations. Merged with existing annotations, with bundle values taking precedence on conflicts.
+
+[id="olmv1-deployment-config-validation_{context}"]
+== Configuration validation
+
+{olmv1} validates configuration against a JSON schema generated from Kubernetes API definitions. The schema derives from the `SubscriptionConfig` type used in {olmv0}, providing consistent validation across versions.
+
+Invalid configurations prevent installation and report errors in the `ClusterExtension` resource's `Progressing` condition.
Common validation errors include:
+
+* Unknown field errors when using unsupported configuration options
+* Type mismatch errors when field values do not match the expected type
+* Required field errors when mandatory nested fields are missing
+
+[NOTE]
+====
+{olmv1} applies configurations during the ClusterObjectSet deployment process, modifying operator manifests before organizing them into phases.
+====
+
+[id="olmv1-deployment-config-migration_{context}"]
+== Migrating from {olmv0}
+
+Transfer existing `Subscription.spec.config` settings to the `deploymentConfig` object. The format is identical.
+
+.Example {olmv0} subscription configuration
+[source,yaml]
+----
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: my-operator
+spec:
+  package: my-operator
+  channel: stable
+  config:
+    nodeSelector:
+      node-role.kubernetes.io/infra: ""
+    tolerations:
+    - key: node-role.kubernetes.io/infra
+      operator: Exists
+      effect: NoSchedule
+----
+
+.Equivalent {olmv1} cluster extension configuration
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: my-operator
+spec:
+  namespace: my-operator-ns
+  config:
+    configType: Inline
+    inline:
+      deploymentConfig:
+        nodeSelector:
+          node-role.kubernetes.io/infra: ""
+        tolerations:
+        - key: node-role.kubernetes.io/infra
+          operator: Exists
+          effect: NoSchedule
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: my-operator
+----
diff --git a/modules/olmv1-deployment-config-examples.adoc b/modules/olmv1-deployment-config-examples.adoc
new file mode 100644
index 000000000000..016165146053
--- /dev/null
+++ b/modules/olmv1-deployment-config-examples.adoc
@@ -0,0 +1,191 @@
+// Module included in the following assemblies:
+//
+// * extensions/ce/olmv1-configuring-extensions.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="olmv1-deployment-config-examples_{context}"]
+= Deployment configuration examples
+
+[role="_abstract"]
+The following examples show common deployment configuration use cases.
+
+:FeatureName: {olmv1} deployment configuration API
+include::snippets/technology-preview.adoc[]
+
+[id="olmv1-deployment-config-env-vars_{context}"]
+== Environment variables
+
+Add environment variables for runtime configuration.
+
+.Adding environment variables
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: kmm-operator
+spec:
+  namespace: openshift-kmm
+  config:
+    configType: Inline
+    inline:
+      deploymentConfig:
+        env:
+        - name: KMM_MANAGED
+          value: "1"
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: kernel-module-management
----
+
+where:
+
+--
+`KMM_MANAGED`:: Sets the environment variable used when deploying the Kernel Module Management Operator in a hub-and-spoke configuration.
+--
+
+[id="olmv1-deployment-config-volumes_{context}"]
+== Custom volumes
+
+Mount a custom CA certificate for HTTPS communication through a proxy.
+
+.Mounting a custom CA certificate
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: my-operator
+spec:
+  namespace: my-operator-ns
+  config:
+    configType: Inline
+    inline:
+      deploymentConfig:
+        volumes:
+        - name: trusted-ca
+          configMap:
+            name: trusted-ca
+            items:
+            - key: ca-bundle.crt
+              path: tls-ca-bundle.pem
+        volumeMounts:
+        - name: trusted-ca
+          mountPath: /etc/pki/ca-trust/extracted/pem
+          readOnly: true
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: my-operator
+----
+
+where:
+
+--
+`volumes:`:: Creates a volume from the `trusted-ca` config map.
+`volumeMounts:`:: Mounts the volume to the operator container at the specified path.
+`mountPath:`:: The path where the certificate bundle is available inside the container.
+--
+
+[id="olmv1-deployment-config-affinity_{context}"]
+== Pod anti-affinity
+
+Spread operator pods across nodes for high availability.
+
+.Pod anti-affinity for high availability
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: my-operator
+spec:
+  namespace: my-operator-ns
+  config:
+    configType: Inline
+    inline:
+      deploymentConfig:
+        affinity:
+          podAntiAffinity:
+            preferredDuringSchedulingIgnoredDuringExecution:
+            - weight: 100
+              podAffinityTerm:
+                labelSelector:
+                  matchExpressions:
+                  - key: app.kubernetes.io/name
+                    operator: In
+                    values:
+                    - my-operator
+                topologyKey: kubernetes.io/hostname
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: my-operator
+----
+
+where:
+
+--
+`podAntiAffinity:`:: Configures anti-affinity rules for the operator pod.
+`preferredDuringSchedulingIgnoredDuringExecution:`:: Specifies soft constraints that the scheduler tries to enforce but does not guarantee.
+`topologyKey:`:: Groups nodes by hostname to ensure pods are spread across different nodes.
+--
+
+[id="olmv1-deployment-config-combined_{context}"]
+== Multiple customizations
+
+Combine multiple deployment customizations.
+
+.Production operator with combined customizations
+[source,yaml]
+----
+apiVersion: olm.operatorframework.io/v1
+kind: ClusterExtension
+metadata:
+  name: production-operator
+spec:
+  namespace: production-operators
+  serviceAccount:
+    name: production-operator-installer
+  config:
+    configType: Inline
+    inline:
+      deploymentConfig:
+        resources:
+          requests:
+            cpu: 200m
+            memory: 256Mi
+          limits:
+            cpu: 1000m
+            memory: 1Gi
+        env:
+        - name: LOG_LEVEL
+          value: info
+        - name: ENABLE_METRICS
+          value: "true"
+        nodeSelector:
+          node-role.kubernetes.io/infra: ""
+        tolerations:
+        - key: node-role.kubernetes.io/infra
+          operator: Exists
+          effect: NoSchedule
+        annotations:
+          monitoring.openshift.io/scrape: "true"
+          monitoring.openshift.io/port: "8080"
+  source:
+    sourceType: Catalog
+    catalog:
+      packageName: production-operator
+      version: 2.1.0
+----
+
+where:
+
+--
+`resources:`:: Specifies memory and CPU requests and limits for the operator pod.
+`env:`:: Defines environment variables for the operator.
+`nodeSelector:`:: Restricts the pod to run on infrastructure nodes.
+`tolerations:`:: Allows the pod to be scheduled on nodes with the specified taint.
+`annotations:`:: Adds Prometheus monitoring annotations to the pod.
+-- diff --git a/modules/olmv1-deployment-config-reference.adoc b/modules/olmv1-deployment-config-reference.adoc new file mode 100644 index 000000000000..dfd98332b4c0 --- /dev/null +++ b/modules/olmv1-deployment-config-reference.adoc @@ -0,0 +1,161 @@ +// Module included in the following assemblies: +// +// * extensions/ce/olmv1-configuring-extensions.adoc + +:_mod-docs-content-type: REFERENCE +[id="olmv1-deployment-config-reference_{context}"] += Deployment configuration field reference + +[role="_abstract"] +Deployment configuration field reference and {olmv0} to {olmv1} mapping. + +:FeatureName: {olmv1} deployment configuration API +include::snippets/technology-preview.adoc[] + +[id="olmv1-deployment-config-field-mapping_{context}"] +== Field mapping from {olmv0} to {olmv1} + +Field conversion from {olmv0} to {olmv1}: + +.{olmv0} to {olmv1} configuration field mapping +[cols="1,1,2",options="header"] +|=== +|{olmv0} field path +|{olmv1} field path +|Notes + +|`spec.config.env` +|`spec.config.inline.deploymentConfig.env` +|Environment variables are merged. {olmv1} values take precedence over bundle values. + +|`spec.config.envFrom` +|`spec.config.inline.deploymentConfig.envFrom` +|Environment variable sources are merged. + +|`spec.config.resources` +|`spec.config.inline.deploymentConfig.resources` +|Resource specifications completely replace bundle resource requirements. + +|`spec.config.nodeSelector` +|`spec.config.inline.deploymentConfig.nodeSelector` +|Node selectors completely replace bundle node selectors. + +|`spec.config.tolerations` +|`spec.config.inline.deploymentConfig.tolerations` +|Tolerations are appended to bundle tolerations. + +|`spec.config.affinity` +|`spec.config.inline.deploymentConfig.affinity` +|Affinity rules selectively override bundle affinity. Non-nil fields replace corresponding bundle fields. + +|`spec.config.volumes` +|`spec.config.inline.deploymentConfig.volumes` +|Volumes are appended to bundle volumes. + +|`spec.config.volumeMounts` +|`spec.config.inline.deploymentConfig.volumeMounts` +|Volume mounts are appended to bundle volume mounts. + +|`spec.config.selector` +|Not supported +|The `selector` field from {olmv0} is not supported in {olmv1}. This field was non-functional in {olmv0}. + +|=== + +[id="olmv1-deployment-config-merge-behavior_{context}"] +== Merge and override behavior + +Configuration fields have different merge behaviors: + +Replace:: Completely replaces bundle values. Applies to: `resources`, `nodeSelector` + +Append:: Adds to existing bundle values. Applies to: `tolerations`, `volumes`, `volumeMounts` + +Merge with precedence:: Merges with bundle values. Deployment configuration takes precedence on conflicts. Applies to: `env`, `envFrom` + +Merge with bundle precedence:: Merges with bundle values. Bundle takes precedence on conflicts. Applies to: `annotations` + +Selective override:: Non-nil fields replace corresponding bundle fields. Applies to: `affinity` + +[id="olmv1-deployment-config-env-reference_{context}"] +== Environment variable fields + +`env`:: An array of environment variable objects. Merged with existing container environment variables, with deployment configuration values taking precedence. Each object has: ++ +* `name`: Environment variable name (string, required). +* `value`: Environment variable value (string, optional). +* `valueFrom`: Reference to a secret or config map key (object, optional). + +`envFrom`:: An array of environment variable source objects merged with existing sources. 
Each object can reference: ++ +* `configMapRef`: Config map containing environment variables. +* `secretRef`: Secret containing environment variables. + +[id="olmv1-deployment-config-resources-reference_{context}"] +== Resource requirements fields + +`resources`:: Compute resource requirements that completely replace existing bundle resource requirements. Contains: ++ +* `requests`: Minimum resources required. +** `cpu`: CPU request (string, for example, `"100m"`, `"0.5"`). +** `memory`: Memory request (string, for example, `"128Mi"`, `"1Gi"`). +* `limits`: Maximum resources allowed. +** `cpu`: CPU limit (string). +** `memory`: Memory limit (string). + +[id="olmv1-deployment-config-node-placement-reference_{context}"] +== Node placement fields + +`nodeSelector`:: Map of key-value pairs for node selection. Completely replaces any existing node selector. Pods schedule only on nodes with all specified labels. ++ +.Example node selector +[source,yaml] +---- +nodeSelector: + node-role.kubernetes.io/infra: "" + disktype: ssd +---- + +`tolerations`:: Array of toleration objects appended to existing bundle tolerations. Each toleration has: ++ +* `key`: Taint key (string). +* `operator`: Operator (string: `Exists`, `Equal`). +* `value`: Taint value (string, required if `operator` is `Equal`). +* `effect`: Taint effect (string: `NoSchedule`, `PreferNoSchedule`, `NoExecute`). +* `tolerationSeconds`: Time before pod eviction for `NoExecute` effect (integer). + +`affinity`:: Affinity rules object. Non-nil fields replace corresponding bundle fields. Contains: ++ +* `nodeAffinity`: Node label-based scheduling rules. +* `podAffinity`: Pod label-based scheduling rules. +* `podAntiAffinity`: Pod spreading rules across nodes. + +[id="olmv1-deployment-config-storage-reference_{context}"] +== Storage fields + +`volumes`:: Array of volume objects appended to existing bundle volumes. Supported types: ++ +* `configMap`: Config map volume. +* `secret`: Secret volume. +* `emptyDir`: Empty directory volume. +* Each volume requires a `name` field (string). + +`volumeMounts`:: Array of volume mount objects appended to existing bundle volume mounts. Each mount has: ++ +* `name`: Volume name to mount (string, required). +* `mountPath`: The path within the container (string, required). +* `readOnly`: Whether the volume is read-only (boolean, optional). +* `subPath`: A path within the volume (string, optional). + +[id="olmv1-deployment-config-metadata-reference_{context}"] +== Metadata fields + +`annotations`:: A map of key-value pairs for pod annotations. Annotations from the deployment configuration are merged with bundle annotations. When keys conflict, bundle annotations take precedence. ++ +.Example annotations +[source,yaml] +---- +annotations: + monitoring.openshift.io/scrape: "true" + monitoring.openshift.io/port: "8080" +---- diff --git a/modules/olmv1-deployment-config-troubleshooting.adoc b/modules/olmv1-deployment-config-troubleshooting.adoc new file mode 100644 index 000000000000..9c2e8c155ae9 --- /dev/null +++ b/modules/olmv1-deployment-config-troubleshooting.adoc @@ -0,0 +1,63 @@ +// Module included in the following assemblies: +// +// * extensions/ce/olmv1-configuring-extensions.adoc + +:_mod-docs-content-type: CONCEPT +[id="olmv1-deployment-config-troubleshooting_{context}"] += Troubleshooting deployment configuration + +[role="_abstract"] +Common deployment configuration issues and their resolutions. 
+
+:FeatureName: {olmv1} deployment configuration API
+include::snippets/technology-preview.adoc[]
+
+[id="olmv1-deployment-config-troubleshooting-validation_{context}"]
+== Validation errors
+
+Check the `Progressing` condition for validation errors when installation fails:
+
+[source,terminal]
+----
+$ oc get clusterextension <cluster_extension_name> -o jsonpath='{.status.conditions[?(@.type=="Progressing")].message}'
+----
+
+Common validation errors and resolutions:
+
+Unknown field:: Configuration includes an unsupported field. Remove unsupported fields.
+
+Type mismatch:: Field value does not match the expected type. Verify field types match Kubernetes specifications.
+
+Required field missing:: Mandatory nested field is missing. Complete all required fields in nested structures.
+
+[id="olmv1-deployment-config-troubleshooting-applied_{context}"]
+== Verifying applied configuration
+
+Inspect the operator deployment to verify applied configurations:
+
+[source,terminal]
+----
+$ oc get deployment -n <namespace> -l olm.operatorframework.io/owner-name=<cluster_extension_name> -o yaml
+----
+
+Configuration locations in the deployment specification:
+
+* **Environment variables**: `spec.template.spec.containers[].env` and `envFrom`
+* **Resources**: `spec.template.spec.containers[].resources`
+* **Node selector**: `spec.template.spec.nodeSelector`
+* **Tolerations**: `spec.template.spec.tolerations`
+* **Affinity**: `spec.template.spec.affinity`
+* **Volumes**: `spec.template.spec.volumes` and `volumeMounts`
+* **Annotations**: `spec.template.metadata.annotations`
+
+[id="olmv1-deployment-config-troubleshooting-conflicts_{context}"]
+== Annotation conflicts
+
+Bundle annotations take precedence over deployment configuration annotations when keys conflict. Check bundle annotations:
+
+[source,terminal]
+----
+$ oc get clusterextension <cluster_extension_name> -o jsonpath='{.status.install.bundle}'
+----
+
+To override a bundle annotation, modify the bundle or accept the bundle value.
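+
+When the `jsonpath` queries in this section are not detailed enough, a plain `oc describe` of the extension surfaces the same conditions and recent events in a readable form. `<cluster_extension_name>` is a placeholder:
+
+[source,terminal]
+----
+$ oc describe clusterextension <cluster_extension_name>
+----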