The openshift-appliance utility enables self-contained OpenShift cluster installations, meaning the installation does not rely on Internet connectivity or external registries.
It is a container-based utility that builds a disk image that includes the Agent-based installer.
That disk image can then be used to install multiple OpenShift clusters.
OpenShift Appliance is available for download at: https://quay.io/edge-infrastructure/openshift-appliance
- This is where `openshift-appliance` gets used to create a raw sparse disk image.
  - raw: so it can be copied as-is to multiple servers.
  - sparse: to minimize the physical size.
- The end result is a generic disk image with a partition layout as follows:

| Name      | Type       | VFS     | Label      | Size | Parent |
|-----------|------------|---------|------------|------|--------|
| /dev/sda2 | filesystem | vfat    | EFI-SYSTEM | 127M | -      |
| /dev/sda3 | filesystem | ext4    | boot       | 350M | -      |
| /dev/sda4 | filesystem | xfs     | root       | 180G | -      |
| /dev/sda5 | filesystem | ext4    | agentboot  | 1.2G | -      |
| /dev/sda6 | filesystem | iso9660 | agentdata  | 18G  | -      |
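Because the image is sparse, its allocated size on disk is much smaller than its virtual size. A minimal sanity check, assuming the built image is at `./appliance.raw` (see the Build section below):

```bash
du -h appliance.raw                  # physical (allocated) size
du -h --apparent-size appliance.raw  # virtual size, matching diskSizeGB
```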
- The two additional partitions:
  - `agentboot`: Agent-based installer ISO:
    - Allows a first boot.
    - Used as a recovery / re-install partition (with an added GRUB menu entry).
  - `agentdata`: OCP release images payload.
- Note that sizes may change depending on the configured `diskSizeGB` and the selected OpenShift version configured in `appliance-config.yaml` (described below).
- This is where the disk image is written to the disk using tools such as `dd`.
- As mentioned above, the image is generic. Thus, the same image can be used on multiple servers for multiple clusters, assuming they have the same disk size.
- This is where the cluster will be deployed.
- The user boots the machine and mounts the configuration ISO (cluster configuration).
- The OpenShift installation will run until completion.
```bash
export APPLIANCE_IMAGE="quay.io/edge-infrastructure/openshift-appliance"
export APPLIANCE_ASSETS="/home/test/appliance_assets"
```

A configuration file named `appliance-config.yaml` is required for running `openshift-appliance`.

```bash
podman run --rm -it --pull newer -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE generate-config
```

Result:

```
INFO Generated config file in assets directory: appliance-config.yaml
```

- Initially, the template will include comments about each option and will look as follows:
- Check the appliance-config details on how to set each parameter.
```yaml
#
# Note: This is a sample ApplianceConfig file showing
# which fields are available to aid you in creating your
# own appliance-config.yaml file.
#
apiVersion: v1beta1
kind: ApplianceConfig
ocpRelease:
  # OCP release version in major.minor or major.minor.patch format
  # (in case of major.minor - latest patch version will be used)
  # If the specified version is not yet available, the latest supported version will be used.
  version: ocp-release-version
  # OCP release update channel: stable|fast|eus|candidate
  # Default: stable
  # [Optional]
  channel: ocp-release-channel
  # OCP release CPU architecture: x86_64|aarch64|ppc64le
  # Default: x86_64
  # [Optional]
  cpuArchitecture: cpu-architecture
  # OCP release URL (use instead of channel/architecture)
  # [Optional]
  # url: oc-release-url
# Virtual size of the appliance disk image.
# If specified, should be at least 150GiB.
# If not specified, the disk image should be resized when
# cloning to a device (e.g. using virt-resize tool).
# [Optional]
diskSizeGB: disk-size
# PullSecret is required for mirroring the OCP release payload
# Can be obtained from: https://console.redhat.com/openshift/install/pull-secret
pullSecret: pull-secret
# Public SSH key for accessing the appliance during the bootstrap phase
# [Optional]
sshKey: ssh-key
# Password of user 'core' for connecting from console
# [Optional]
userCorePass: user-core-pass
# Local image registry details (used when building the appliance)
# Note: building an image internally by default.
# [Optional]
imageRegistry:
  # The URI for the image
  # Default: ""
  # Examples:
  #  - docker.io/library/registry:2
  #  - quay.io/libpod/registry:2.8
  # [Optional]
  uri: uri
  # The image registry container TCP port to bind. A valid port number is between 1024 and 65535.
  # Default: 5005
  # [Optional]
  port: port
  # Use the registry binary built internally (to avoid nested containers when running the image).
  # Default: false
  # [Optional]
  useBinary: use-binary
# Enable all default CatalogSources (on openshift-marketplace namespace).
# Should be disabled for disconnected environments.
# Default: false
# [Optional]
enableDefaultSources: enable-default-sources
# Stop the local registry post cluster installation.
# Note that additional images and operators won't be available when stopped.
# Default: false
# [Optional]
stopLocalRegistry: stop-local-registry
# Create PinnedImageSets for both the master and worker MCPs.
# The PinnedImageSets will include all the images included in the appliance disk image.
# Requires openshift version 4.16 or above.
# WARNING:
# As of 4.18, PinnedImageSets feature is still not GA.
# Thus, enabling it will set the cluster to tech preview,
# which means the cluster cannot be upgraded
# (i.e. should only be used for testing purposes).
# Default: false
# [Optional]
createPinnedImageSets: create-pinned-image-sets
# Enable FIPS mode for the cluster.
# Note: 'fips' should be enabled also in install-config.yaml.
# Default: false
# [Optional]
enableFips: enable-fips
# Enable the interactive installation flow.
# Should be enabled to provide cluster configuration through the web UI
# (i.e. instead of using a config-image).
# Default: false
# [Optional]
enableInteractiveFlow: enable-interactive-flow
# Rename CatalogSource names generated by oc-mirror to the default naming.
# E.g. 'redhat-operators' instead of 'cs-redhat-operator-index-v4-19'.
# Default: false
# [Optional]
useDefaultSourceNames: use-default-source-names
# Additional images to be included in the appliance disk image.
# [Optional]
additionalImages:
  - name: image-url
# Images to avoid including in the appliance disk image (by name or regular expression).
# [Optional]
blockedImages:
  - name: image-url
# Operators to be included in the appliance disk image.
# See examples in https://github.com/openshift/oc-mirror/blob/main/docs/imageset-config-ref.yaml.
# [Optional]
operators:
  - catalog: catalog-uri
    packages:
      - name: package-name
        channels:
          - name: channel-name
```

- Modify it based on your needs. Note that:
  - `diskSizeGB`: Must be set according to the actual server disk size. If you have several server specs, you need an appliance image per each spec.
  - `ocpRelease.channel`: OCP release update channel (stable|fast|eus|candidate).
  - `pullSecret`: May be obtained from https://console.redhat.com/openshift/install/pull-secret (requires registration).
  - `imageRegistry.uri`: Change it only if needed; otherwise the default should work.
  - `imageRegistry.port`: Change the port number in case another app uses TCP 5005.
```yaml
apiVersion: v1beta1
kind: ApplianceConfig
ocpRelease:
  version: 4.14
  channel: candidate
  cpuArchitecture: x86_64
diskSizeGB: 200
pullSecret: '{"auths":{<redacted>}}'
sshKey: <redacted>
userCorePass: <redacted>
```

Custom manifests can optionally be baked into the appliance disk image:

- Note that any manifest added here will apply to any of the clusters installed using this image.
- Find more details and additional examples in the OpenShift documentation.
- Create the openshift manifests directory:

  ```bash
  mkdir ${APPLIANCE_ASSETS}/openshift
  ```

- Add one or more custom manifests under `${APPLIANCE_ASSETS}/openshift`, e.g.:
```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 50-master-custom-file-factory
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,dGhpcyBjb250ZW50IGNhbWUgZnJvbSBidWlsZGluZyB0aGUgYXBwbGlhbmNlIGltYWdlCg==
          mode: 420
          path: /etc/custom_factory1.txt
          overwrite: true
```

Add any additional images that should be included as part of the appliance disk image. These images will be pulled during the oc-mirror procedure that downloads the release images.
E.g. use the `additionalImages` array in `appliance-config.yaml` as follows:

```yaml
additionalImages:
  - name: quay.io/fedora/httpd-24
  - name: quay.io/openshift/origin-cli
```

After installing the cluster, images should be available for pulling using the image digest. To fetch the digest, use `skopeo` from inside the node. E.g.

```bash
skopeo inspect docker://registry.appliance.openshift.com:22625/fedora/httpd-24 | jq .Digest
"sha256:5d98ffbb97ea86633aed7ae2445b9d939e29639a292d3052efb078e72606ba04"

podman pull quay.io/fedora/httpd-24@sha256:5d98ffbb97ea86633aed7ae2445b9d939e29639a292d3052efb078e72606ba04
```

The image can be used, for example, to create a new application:

```bash
oc --kubeconfig auth/kubeconfig new-app --name httpd --image quay.io/fedora/httpd-24@sha256:5d98ffbb97ea86633aed7ae2445b9d939e29639a292d3052efb078e72606ba04 --allow-missing-images

oc --kubeconfig auth/kubeconfig get deployment
NAME    READY   UP-TO-DATE   AVAILABLE
httpd   1/1     1            1
```

Operator packages can be included in the appliance disk image using the `operators` property in `appliance-config.yaml`. The relevant images will be pulled during the oc-mirror procedure, and the appropriate CatalogSources and ImageContentSourcePolicies will be automatically created in the installed cluster.
E.g. to include the `elasticsearch-operator` from the `redhat-operators` catalog:

```yaml
operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14
    packages:
      - name: elasticsearch-operator
        channels:
          - name: stable-5.8
```

Note: for each operator, ensure the name and channel are correct by listing the available operators in the catalog:

```bash
oc-mirror list operators --catalog=registry.redhat.io/redhat/redhat-operator-index:v4.14
```

To automatically install the included operators during cluster installation, add the relevant custom manifests to `${APPLIANCE_ASSETS}/openshift`.

Note: these manifests will deploy the operators for any cluster installation, i.e. the manifests will be incorporated in the appliance disk image.
E.g. cluster manifests to install the OpenShift Elasticsearch Operator:

`openshift/namespace.yaml`

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: operators
  labels:
    name: operators
```

`openshift/operatorgroup.yaml`

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: operator-group
  namespace: operators
spec:
  targetNamespaces:
    - operators
```

`openshift/subscription.yaml`

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch
  namespace: operators
spec:
  installPlanApproval: Automatic
  name: elasticsearch-operator
  source: cs-redhat-operator-index
  channel: stable
  sourceNamespace: openshift-marketplace
```

If a CustomResource should be applied post operator installation, add the relevant CR to `${APPLIANCE_ASSETS}/openshift/crs`.
The CRs will be applied automatically during cluster installation.

E.g. a CR for the kubevirt-hyperconverged operator:

`openshift/crs/cnv_cr.yaml`

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: "openshift-cnv"
```

- Make sure you have enough free disk space.
  - The amount of space needed is defined by the configured `diskSizeGB` value mentioned above, which is at least 150GiB (see the quick check after this list).
- Building the image may take several minutes.
- The `--privileged` option is used because the `openshift-appliance` container needs to use `guestfish` to build the image.
- The `--net=host` option is used because the `openshift-appliance` container needs to use the host networking for the image registry container it runs as part of the build process.
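As noted in the list above, a quick free-space check of the assets directory before building; a minimal sketch using standard tooling:

```bash
df -h $APPLIANCE_ASSETS   # available space should exceed the configured diskSizeGB
```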
```bash
sudo podman run --rm -it --pull newer --privileged --net=host -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE build
```

Result:
```
INFO Successfully downloaded CoreOS ISO
INFO Successfully generated recovery CoreOS ISO
INFO Successfully pulled container registry image
INFO Successfully pulled OpenShift 4.14.0-rc.0 release images required for bootstrap
INFO Successfully pulled OpenShift 4.14.0-rc.0 release images required for installation
INFO Successfully generated data ISO
INFO Successfully downloaded appliance base disk image
INFO Successfully extracted appliance base disk image
INFO Successfully generated appliance disk image
INFO Time elapsed: 8m0s
INFO
INFO Appliance disk image was successfully created in assets directory: assets/appliance.raw
INFO
INFO Create configuration ISO using: openshift-install agent create config-image
INFO Download openshift-install from: https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.14.0-rc.0/openshift-install-linux.tar.gz
```

Before rebuilding the appliance, e.g. for changing `diskSizeGB` or `ocpRelease`, use the `clean` command. This command removes the temp folder and prepares the assets folder for a rebuild:
```bash
sudo podman run --rm -it -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE clean
```

Note: the command keeps the cache folder under assets intact; use the `--cache` flag to clean the entire cache as well:
```bash
sudo podman run --rm -it -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE clean --cache
```

Use a tool like `dd` to clone the disk image. E.g.

```bash
dd if=appliance.raw of=/dev/sdX bs=1M status=progress
```

This will clone the appliance disk image onto sdX. To initiate the cluster installation, boot the machine from the sdX device.
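After the clone completes, the device should expose the partition layout shown earlier; a hedged verification sketch (replace /dev/sdX with the actual target device):

```bash
sync             # flush buffered writes to the device
lsblk /dev/sdX   # list the cloned partitions
```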
Use the `virt-resize` tool to resize and clone the disk image. E.g.

```bash
export APPLIANCE_IMAGE="quay.io/edge-infrastructure/openshift-appliance"
export APPLIANCE_ASSETS="/home/test/appliance_assets"
export TARGET_DEVICE="/dev/sda"

sudo podman run --rm -it --privileged --net=host -v $APPLIANCE_ASSETS:/assets --entrypoint virt-resize $APPLIANCE_IMAGE --expand /dev/sda4 /assets/appliance.raw $TARGET_DEVICE --no-sparse
```

This will resize and clone the disk image onto the specified TARGET_DEVICE. To initiate the cluster installation, boot the machine from the TARGET_DEVICE.
If the target device is empty (zeroed), the `--no-sparse` flag can be removed, which will improve the cloning speed.
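To inspect the filesystems inside the image before cloning or resizing, libguestfs tooling can be used directly; a sketch assuming libguestfs-tools is installed on the host:

```bash
# prints name, type, VFS, label, and size for each filesystem in the image
virt-filesystems --long -h --all -a $APPLIANCE_ASSETS/appliance.raw
```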
As an alternative to manually cloning the disk image, see the Deployment ISO section below for instructions on generating an ISO that automates the flow.
Configure the disk to use `/path/to/appliance.raw`.
- So far, the generated image has been completely generic. To install the cluster, the installer needs cluster-specific configuration.
- To generate the configuration image using `openshift-install`, download the binary from the URL specified in the build output. E.g. for `4.14.0-rc.0`: https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.14.0-rc.0/openshift-install-linux.tar.gz
- Create a configuration directory:

  ```bash
  export CLUSTER_CONFIG=/home/test/cluster_config
  mkdir $CLUSTER_CONFIG && cd $CLUSTER_CONFIG
  ```

- Place both `install-config.yaml` and `agent-config.yaml` files in that directory.

Notes:

- See details and examples in the ABI documentation: Installing an OpenShift Container Platform cluster with the Agent-based Installer.
- For disconnected environments, specify a dummy pull-secret in install-config.yaml (e.g. `'{"auths":{"":{"auth":"dXNlcjpwYXNz"}}}'`).
- The SSH public key for the `core` user can be specified in install-config.yaml under the `sshKey` property. It can be used for logging into the machines post cluster installation.
E.g. for a single-node cluster:

`agent-config.yaml`

```yaml
apiVersion: v1alpha1
kind: AgentConfig
rendezvousIP: 192.168.122.100
```

`install-config.yaml`

```yaml
apiVersion: v1
metadata:
  name: appliance
baseDomain: example.com
controlPlane:
  name: master
  replicas: 1
compute:
  - name: worker
    replicas: 0
networking:
  networkType: OVNKubernetes
  machineNetwork:
    - cidr: 192.168.122.0/24
platform:
  none: {}
# Dummy pull-secret for disconnected environments
pullSecret: '{"auths":{"":{"auth":"dXNlcjpwYXNz"}}}'
# SSH public key for `core` user, can be used for logging into the machine post cluster installation
sshKey: 'ssh-rsa ...'
```
E.g. for a multi-node cluster:

`agent-config.yaml`

```yaml
apiVersion: v1alpha1
kind: AgentConfig
rendezvousIP: 192.168.122.100
```

`install-config.yaml`

```yaml
apiVersion: v1
metadata:
  name: appliance
baseDomain: example.com
controlPlane:
  name: master
  replicas: 3
compute:
  - name: worker
    replicas: 2
networking:
  networkType: OVNKubernetes
  machineNetwork:
    - cidr: 192.168.122.0/24
platform:
  baremetal:
    apiVIPs:
      - 192.168.122.200
    ingressVIPs:
      - 192.168.122.201
# Dummy pull-secret for disconnected environments
pullSecret: '{"auths":{"":{"auth":"dXNlcjpwYXNz"}}}'
# SSH public key for `core` user, can be used for logging into the machine post cluster installation
sshKey: 'ssh-rsa ...'
```

- Note that any manifest added here will apply only to the cluster installed using this config-iso.
- Find more details and additional examples in the OpenShift documentation.
- Create the openshift manifests directory:

  ```bash
  mkdir $CLUSTER_CONFIG/openshift
  ```

- Add one or more custom manifests under `$CLUSTER_CONFIG/openshift`, same as in the MachineConfig example above.

To automatically install operators during cluster installation, add the relevant custom manifests (see example above) to `$CLUSTER_CONFIG/openshift`.
Note: for disconnected environments, the operators should be included in the appliance.
When ready, generate the config ISO.

Note: the command deletes the `install-config.yaml` and `agent-config.yaml` files - back them up first.

```bash
./openshift-install agent create config-image --dir $CLUSTER_CONFIG
```

The content of the cluster_config directory should be:
```
├── agentconfig.noarch.iso
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
```

Note: the config ISO contains configurations and cannot be used as a bootable ISO.
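Since the ISO only carries configuration, its contents can be listed without booting it; a sketch assuming the isoinfo utility (from genisoimage) is available:

```bash
isoinfo -l -i $CLUSTER_CONFIG/agentconfig.noarch.iso   # list files inside the config ISO
```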
- Mount the `agentconfig.noarch.iso` as a CD-ROM on every node, or attach it using a USB stick.
- Start the machine(s).
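For libvirt-based VMs, the config ISO can be attached as a CD-ROM with virsh; a hypothetical sketch (the domain name appliance-node is an assumption):

```bash
virsh attach-disk appliance-node \
  $CLUSTER_CONFIG/agentconfig.noarch.iso sdb \
  --type cdrom --mode readonly --config
```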
Use `openshift-install` to monitor the bootstrap and installation process:

```bash
./openshift-install --dir $CLUSTER_CONFIG agent wait-for bootstrap-complete
```

- Review OpenShift documentation: Waiting for the bootstrap process to complete

```bash
./openshift-install --dir $CLUSTER_CONFIG agent wait-for install-complete
```

- Review OpenShift documentation: Completing installation on user-provisioned infrastructure
```bash
export KUBECONFIG=$CLUSTER_CONFIG/auth/kubeconfig

oc get clusterversion
oc get clusteroperator
```

- To reinstall the cluster using the above-mentioned `agentboot` partition, reboot all the nodes and select the `Recovery: Agent-Based Installer` option.
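As an additional check once KUBECONFIG is set as above, one can block until all cluster operators report ready; a minimal sketch using standard oc functionality:

```bash
# wait (up to 30 minutes) for every ClusterOperator to become Available
oc wait clusteroperator --all --for=condition=Available --timeout=30m
```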
To simplify the deployment process of the appliance disk image (appliance.raw), the deployment ISO can be used. Upon booting a machine with this ISO, the appliance disk image is automatically cloned into the specified target device.

Notes:

- To build the ISO, the appliance.raw disk image should be available under the assets directory, i.e. the appliance disk image should be built first.
- The target device should be large enough for the configured `diskSizeGB` property in `appliance-config.yaml`.
Use the `build iso` command for generating the ISO:

```bash
export APPLIANCE_IMAGE="quay.io/edge-infrastructure/openshift-appliance"
export APPLIANCE_ASSETS="/home/test/appliance_assets"

sudo podman run --rm -it --privileged -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE build iso --target-device /dev/sda
```

The result should be an appliance.iso file under the assets directory. To initiate the deployment, attach/mount the ISO to the machine and boot it. After the deployment is completed, boot from the target device to start the cluster installation.
The command supports the following flags:

```
--target-device string   Target device name to clone the appliance into (default "/dev/sda")
--post-script string     Script file to invoke on completion (should be under assets directory)
--sparse-clone           Use sparse cloning - requires an empty (zeroed) device
--dry-run                Skip appliance cloning (useful for getting the target device name)
```
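For instance, the flags above can be combined to build an ISO that targets an NVMe device and skips the actual cloning, which is useful for discovering the right device name on the target machine (the device name here is an assumption):

```bash
sudo podman run --rm -it --privileged -v $APPLIANCE_ASSETS:/assets:Z \
  $APPLIANCE_IMAGE build iso --target-device /dev/nvme0n1 --dry-run
```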
To perform post-deployment operations, create a bash script file under the assets directory.

E.g. shutting down the machine post deployment:

```bash
cat $APPLIANCE_ASSETS/post.sh
#!/bin/bash
echo Shutting down the machine...
shutdown -h now
```

```bash
sudo podman run --rm -it --privileged -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE build iso --post-script post.sh
```

In order to upgrade a cluster without an external registry, the upgrade ISO flow can be used. An upgrade ISO includes the entire release payload of a specific OCP version, which allows upgrading clusters in disconnected environments.
The process for upgrading a cluster is as follows:

- Create an `appliance-config.yaml` file:
  - Set the requested version under `ocpRelease`.
  - Set `pullSecret`.
- Generate an ISO using the `build upgrade-iso` command.
- Attach the ISO to each node in the cluster.
- To start the upgrade, apply the generated MachineConfig yaml.

Notes:

- This process is currently experimental.
- After upgrading a cluster, the ISO should not be detached.
  - This is required to allow pulling images post-upgrade (might be needed, in some scenarios, for images missing from CRI-O containers-storage).
  - Will be resolved using PinnedImageSet in a future version (probably OCP >= 4.19.0).
- Upgrading an old existing cluster is not supported, i.e. only clusters created after the introduction of the Upgrade ISO functionality can be upgraded.
Specify the requested OCP version in `appliance-config.yaml`.

E.g. for upgrading a cluster to the latest stable 4.17:

```yaml
apiVersion: v1beta1
kind: ApplianceConfig
ocpRelease:
  version: 4.17
  channel: stable
  cpuArchitecture: x86_64
pullSecret: '{"auths":{<redacted>}}'
```

Use the `build upgrade-iso` command for generating an Upgrade ISO:
```bash
export APPLIANCE_IMAGE="quay.io/edge-infrastructure/openshift-appliance"
export APPLIANCE_ASSETS="/home/test/upgrade_assets"

sudo podman run --rm -it --pull newer --privileged -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE build upgrade-iso
```

Notes:

- A configuration file named `appliance-config.yaml` is required in the `APPLIANCE_ASSETS` dir for building.
- Ensure the `APPLIANCE_ASSETS` dir is different from the one used for the disk image build.
The result should be the following two files (in the APPLIANCE_ASSETS dir):

- An upgrade ISO: `upgrade-x.y.z.iso`
- A MachineConfig yaml: `upgrade-machine-config-x.y.z.yaml`
- Attach the ISO to each node.
- Apply the MachineConfig to initiate the upgrade (see the sketch after this list).
- Note: the upgrade starts post-reboot of the nodes, i.e. it can take a few minutes.
- Ensure a recent etcd backup is available in order to restore the cluster to a previous state in case the upgrade fails.
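To apply the MachineConfig and watch the rollout, standard oc commands can be used; a hedged sketch (the yaml file name follows the pattern above, with x.y.z standing in for the actual version):

```bash
export KUBECONFIG=$CLUSTER_CONFIG/auth/kubeconfig

oc apply -f upgrade-machine-config-x.y.z.yaml   # initiates the upgrade; nodes will reboot
oc get machineconfigpool -w                     # watch the pools roll out the new config
oc get clusterversion                           # track the version change
```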
As an alternative to building an appliance disk image (appliance.raw), a live ISO can be generated instead. The live ISO flow is useful for use cases in which cloning a disk image to a device is cumbersome or not applicable. Similarly to the disk image flow, generating a config-image is required as well. Note that a recovery GRUB menu entry is not supported.
Use the `build live-iso` command for generating the ISO:

```bash
export APPLIANCE_IMAGE="quay.io/edge-infrastructure/openshift-appliance"
export APPLIANCE_ASSETS="/home/test/appliance_assets"

sudo podman run --rm -it --pull newer --privileged --net=host -v $APPLIANCE_ASSETS:/assets:Z $APPLIANCE_IMAGE build live-iso
```

The result should be an appliance.iso file under the assets directory.
To initiate the deployment:
- Attach the config ISO.
- Attach the generated appliance ISO.
- Boot the machine.
Notes:

- The configuration file appliance-config.yaml is required in the `APPLIANCE_ASSETS` dir for building, i.e. similar to the disk image flow.
  - The `diskSizeGB` property is not required for the live ISO flow.
- Ensure the target device is first in the boot order (i.e. the live ISO should be booted only once). Or, if the target device isn't empty, select the live ISO manually during boot.
- A recovery GRUB menu entry is not available using this flow.
- It's mandatory to keep the ISO attached during cluster installation.
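As an illustration, for a libvirt-based trial both ISOs can be attached when creating the VM; a hypothetical sketch (VM name, resources, and disk size are assumptions):

```bash
virt-install \
  --name appliance-live-test \
  --memory 16384 --vcpus 8 \
  --disk size=200 \
  --disk path=$APPLIANCE_ASSETS/appliance.iso,device=cdrom \
  --disk path=$CLUSTER_CONFIG/agentconfig.noarch.iso,device=cdrom \
  --os-variant generic \
  --boot hd,cdrom   # an empty disk falls through to the live ISO on first boot only
```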