OpenShift Container Platform 4.17 Installation configuration
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document describes how to perform initial OpenShift Container Platform cluster configuration.
Table of Contents
CHAPTER 1. CUSTOMIZING NODES 3
1.1. CREATING MACHINE CONFIGS WITH BUTANE 3
1.1.1. About Butane 3
1.1.2. Installing Butane 3
1.1.3. Creating a MachineConfig object by using Butane 4
1.2. ADDING DAY-1 KERNEL ARGUMENTS 5
1.3. ADDING KERNEL MODULES TO NODES 6
1.3.1. Building and testing the kernel module container 7
1.3.2. Provisioning a kernel module to OpenShift Container Platform 10
1.3.2.1. Provision kernel modules via a MachineConfig object 10
1.4. ENCRYPTING AND MIRRORING DISKS DURING INSTALLATION 12
1.4.1. About disk encryption 12
1.4.1.1. Configuring an encryption threshold 13
1.4.2. About disk mirroring 15
1.4.3. Configuring disk encryption and mirroring 15
1.4.4. Configuring a RAID-enabled data volume 22
1.4.5. Configuring an Intel® Virtual RAID on CPU (VROC) data volume 25
1.5. CONFIGURING CHRONY TIME SERVICE 26
1.6. ADDITIONAL RESOURCES 27
CHAPTER 2. CONFIGURING YOUR FIREWALL 28
2.1. CONFIGURING YOUR FIREWALL FOR OPENSHIFT CONTAINER PLATFORM 28
2.2. OPENSHIFT CONTAINER PLATFORM NETWORK FLOW MATRIX 33
CHAPTER 3. ENABLING LINUX CONTROL GROUP VERSION 1 (CGROUP V1) 42
3.1. ENABLING LINUX CGROUP V1 DURING INSTALLATION 42
CHAPTER 1. CUSTOMIZING NODES
You can make changes to the operating systems on OpenShift Container Platform nodes in two supported ways:
Creating machine configs that are included in manifest files to start up a cluster during openshift-install.
Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator.
Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes, allows per-machine configuration. These changes are currently not visible to the Machine Config Operator.
The following sections describe features that you might want to configure on your nodes in this way.
Because modifying machine configs can be difficult, you can use Butane configs to create machine
configs for you, thereby making node configuration much easier.
TIP
Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config
Transpiler (FCCT).
Procedure
a. For the newest version of Butane, save the latest butane image to your current directory:
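The download command itself did not survive extraction. A minimal sketch, assuming the mirror URL layout shown in the architecture-specific example below (the block only computes and prints the URL; the actual download is left commented out):

```shell
# Compose the Butane download URL for the host architecture.
# The URL layout is an assumption based on the aarch64 example below.
ARCH=$(uname -m)   # x86_64, aarch64, ppc64le, ...
BASE=https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest
case "$ARCH" in
  x86_64) URL=$BASE/butane ;;        # the plain binary name is the amd64 build
  *)      URL=$BASE/butane-$ARCH ;;
esac
echo "$URL"
# To fetch and install it:
# curl "$URL" --output butane && chmod +x butane
```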
b. Optional: If you are installing Butane on a specific architecture, such as aarch64 or
ppc64le, specify the appropriate URL. For example:
$ curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane
$ chmod +x butane
$ echo $PATH
Verification steps
You can now use the Butane tool by running the butane command:
$ butane <butane_file>
Prerequisites
Procedure
1. Create a Butane config file. The following example creates a file named 99-worker-custom.bu
that configures the system console to show kernel debug messages and specifies custom
settings for the chrony time service:
variant: openshift
version: 4.17.0
metadata:
  name: 99-worker-custom
  labels:
    machineconfiguration.openshift.io/role: worker
openshift:
  kernel_arguments:
    - loglevel=7
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644
      overwrite: true
      contents:
        inline: |
          pool 0.rhel.pool.ntp.org iburst
          driftfile /var/lib/chrony/drift
          makestep 1.0 3
          rtcsync
          logdir /var/log/chrony
2. Create a MachineConfig object by giving Butane the file that you created in the previous step:
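The transpile command is missing from the extracted text; a sketch assuming the file names from step 1, guarded so it is a no-op on machines where Butane is not installed:

```shell
# Transpile the Butane config into a MachineConfig manifest.
# File names follow the 99-worker-custom example in step 1.
if command -v butane >/dev/null 2>&1 && [ -f 99-worker-custom.bu ]; then
  butane 99-worker-custom.bu -o 99-worker-custom.yaml
else
  echo "skipping: butane or 99-worker-custom.bu not available"
fi
```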
A MachineConfig object YAML file is created for you to finish configuring your machines.
3. Save the Butane config in case you need to update the MachineConfig object in the future.
4. If the cluster is not running yet, generate manifest files and add the MachineConfig object
YAML file to the openshift directory. If the cluster is already running, apply the file as follows:
$ oc create -f 99-worker-custom.yaml
Additional resources
You need to do some low-level network configuration before the systems start.
You want to disable a feature, such as SELinux, so it has no impact on the systems when they
first come up.
To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject
that object into the set of manifest files used by Ignition during cluster setup.
For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel
parameters. It is best to only add kernel arguments with this procedure if they are needed to complete
the initial OpenShift Container Platform installation.
Procedure
1. Change to the directory that contains the installation program and generate the Kubernetes
manifests for the cluster:
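The command itself is not shown in the extracted text; a sketch under the assumption that the standard manifest-generation invocation was intended, with the installation directory represented as a variable:

```shell
# Generate Kubernetes manifests from the install-config.yaml in the
# installation directory. Guarded: a no-op where the installer binary
# is not present.
INSTALL_DIR=${INSTALL_DIR:-./install_dir}
if command -v openshift-install >/dev/null 2>&1; then
  openshift-install create manifests --dir "$INSTALL_DIR" \
    || echo "no install-config.yaml found in $INSTALL_DIR"
else
  echo "skipping: openshift-install not found in PATH"
fi
```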
2. Decide if you want to add kernel arguments to worker or control plane nodes.
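The example manifest referenced by the surrounding text is missing from the extraction. A hedged reconstruction follows; the loglevel=7 argument and the 99- file name prefix are assumptions carried over from this chapter's other examples:

```shell
# Write a MachineConfig that adds a kernel argument to control plane
# nodes; change "master" to "worker" to target worker nodes instead.
cat << 'EOF' > 99-openshift-machineconfig-master-kargs.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - loglevel=7
EOF
```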
You can change master to worker to add kernel arguments to worker nodes instead. Create a
separate YAML file to add to both master and worker nodes.
When a kernel module is first deployed by following these instructions, the module is made available for
the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy
the module so a compatible version of that module is available with the new kernel.
The way that this feature keeps the module up to date on each node is by:
Adding a systemd service to each node that starts at boot time to detect whether a new
kernel has been installed
Rebuilding the module and installing it into the kernel when a new kernel is detected
For information on the software needed for this procedure, see the kmods-via-containers github site.
Software tools and examples are not yet available in official RPM form and can only be obtained
for now from unofficial github.com sites noted in the procedure.
Third-party kernel modules you might add through these procedures are not supported by Red
Hat.
In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8
container. Keep in mind that modules are rebuilt automatically on each node when that node
gets a new kernel. For that reason, each node needs access to a yum repository that contains
the kernel and related packages needed to rebuild the module. That content is best provided
with a valid RHEL subscription.
Procedure
# subscription-manager register
5. Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a
kmods-via-container systemd service and loads it:
$ cd kmods-via-containers/
6. Get the kernel module source code. The source code might be used to build a third-party
module that you do not have control over, but is supplied by others. You will need content
similar to the content shown in the kvc-simple-kmod example that can be cloned to your
system as follows:
7. Edit the configuration file, simple-kmod.conf file, in this example, and change the name of the
Dockerfile to Dockerfile.rhel:
$ cd kvc-simple-kmod
$ cat simple-kmod.conf
Example output
KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git"
KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel
KMOD_SOFTWARE_VERSION=dd1a7d4
KMOD_NAMES="simple-kmod simple-procfs-kmod"
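The edit in step 7 can also be scripted. A sketch that recreates the file with the stock Dockerfile name (an assumption about the pre-edit state) and switches it to Dockerfile.rhel with sed:

```shell
# Recreate simple-kmod.conf with the assumed stock Dockerfile name,
# then point it at Dockerfile.rhel as step 7 requires.
cat << 'EOF' > simple-kmod.conf
KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git"
KMOD_CONTAINER_BUILD_FILE=Dockerfile
KMOD_SOFTWARE_VERSION=dd1a7d4
KMOD_NAMES="simple-kmod simple-procfs-kmod"
EOF
sed -i 's|^KMOD_CONTAINER_BUILD_FILE=.*|KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel|' simple-kmod.conf
grep KMOD_CONTAINER_BUILD_FILE simple-kmod.conf
```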
11. To confirm that the kernel modules are loaded, use the lsmod command to list the modules:
Example output
simple_procfs_kmod 16384 0
simple_kmod 16384 0
12. Optional: Use other methods to check that the simple-kmod example is working:
Look for a "Hello world" message in the kernel ring buffer with dmesg:
Example output
simple-procfs-kmod number = 0
Run the spkut command to get more information from the module:
$ sudo spkut 44
Going forward, when the system boots, this service checks whether a new kernel is running. If there is a new
kernel, the service builds a new version of the kernel module and then loads it. If the module is already
built, the service just loads it.
Provision kernel modules at cluster install time (day-1): You can create the content as a
MachineConfig object and provide it to openshift-install by including it with a set of manifest
files.
Provision kernel modules via Machine Config Operator (day-2): If you can wait until the
cluster is up and running to add your kernel module, you can deploy the kernel module software
via the Machine Config Operator (MCO).
In either case, each node needs to be able to get the kernel packages and related software packages at
the time that a new kernel is detected. There are a few ways you can set up each node to be able to
obtain that content.
Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and
copy them to the same location as the other files you provide when you build your Ignition
config.
Inside the Dockerfile, add pointers to a yum repository containing the kernel and other
packages. This must include new kernel packages as they are needed to match newly installed
kernels.
By packaging kernel module software with a MachineConfig object, you can deliver that software to
worker or control plane nodes at installation time or via the Machine Config Operator.
Procedure
# subscription-manager register
7. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using
the repositories cloned earlier:
$ FAKEROOT=$(mktemp -d)
$ cd kmods-via-containers
$ cd ../kvc-simple-kmod
8. Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running
the following command:
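The clone command is missing from the extracted text. A sketch using cp -L, which replaces symbolic links with copies of their targets; the kmod-tree name matches the storage.trees entry in the Butane config that follows, and the sample files here are placeholders:

```shell
# Build a placeholder fakeroot, then clone it into kmod-tree with
# symlinks dereferenced (-L), attributes preserved (-p), recursion (-r).
FAKEROOT=$(mktemp -d)
mkdir -p "$FAKEROOT/usr/local/bin"
echo demo > "$FAKEROOT/usr/local/bin/real-file"
ln -s real-file "$FAKEROOT/usr/local/bin/link-file"
rm -rf kmod-tree
cp -Lpr "$FAKEROOT" kmod-tree
```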
9. Create a Butane config file, 99-simple-kmod.bu, that embeds the kernel module tree and
enables the systemd service.
NOTE
See "Creating machine configs with Butane" for information about Butane.
variant: openshift
version: 4.17.0
metadata:
  name: 99-simple-kmod
  labels:
    machineconfiguration.openshift.io/role: worker 1
storage:
  trees:
    - local: kmod-tree
systemd:
  units:
    - name: [email protected]
      enabled: true
1 To deploy on control plane nodes, change worker to master. To deploy on both control
plane and worker nodes, perform the remainder of these instructions once for each node
type.
10. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml, containing the files
and configuration to be delivered:
11. If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If
the cluster is already running, apply the file as follows:
$ oc create -f 99-simple-kmod.yaml
Your nodes will start the [email protected] service and the kernel
modules will be loaded.
12. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug
node/<openshift-node>, then chroot /host). To list the modules, use the lsmod command:
Example output
simple_procfs_kmod 16384 0
simple_kmod 16384 0
TPM v2
This is the preferred mode. TPM v2 stores passphrases in a secure cryptoprocessor on the server.
You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is
removed from the server.
Tang
Tang and Clevis are server and client components that enable network-bound disk encryption
(NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This
prevents decryption of the data unless the nodes are on a secure network where the Tang servers are
accessible. Clevis is an automated decryption framework used to implement decryption on the client
side.
IMPORTANT
The use of the Tang encryption mode to encrypt your disks is only supported for bare
metal and vSphere installations on user-provisioned infrastructure.
In earlier versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by
specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with
OpenShift Container Platform 4.7 or later. Configure disk encryption by using the following procedure.
When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using
the LUKS2 format.
This feature:
Supports only a single encryption method for each cluster, Tang or TPM
Applies encryption to the installation disks only, not to the workload disks
Sets up disk encryption during the manifest installation phase, encrypting all data written to disk,
from first boot forward
In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can
also configure the TPM v2 and Tang encryption modes simultaneously. This enables boot disk data
decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a
secure network.
You can use the threshold attribute in your Butane configuration to define the minimum number of
TPM v2 and Tang encryption conditions required for decryption to occur.
The threshold is met when the stated value is reached through any combination of the declared
conditions. In the case of offline provisioning, the offline server is accessed by using an included
advertisement, and the supplied advertisement is used only if the number of online servers does not
meet the set threshold.
For example, the threshold value of 2 in the following configuration can be reached by accessing two
Tang servers, with the offline server available as a backup, or by accessing the TPM secure
cryptoprocessor and one of the Tang servers:
variant: openshift
version: 4.17.0
metadata:
  name: worker-storage
  labels:
    machineconfiguration.openshift.io/role: worker
boot_device:
  layout: x86_64 1
  luks:
    tpm2: true 2
    tang: 3
      - url: http://tang1.example.com:7500
        thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX
      - url: http://tang2.example.com:7500
        thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF
      - url: http://tang3.example.com:7500
        thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9
        advertisement: "{\"payload\": \"...\", \"protected\": \"...\", \"signature\": \"...\"}" 4
    threshold: 2 5
openshift:
  fips: true
1 Set this field to the instruction set architecture of the cluster nodes. Examples include
x86_64, aarch64, and ppc64le.
2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file
system.
3 Include this section if you want to use one or more Tang servers.
4 Optional: Include this field for offline provisioning. Ignition will provision the Tang server binding
rather than fetching the advertisement from the server at runtime. This lets the server be
unavailable at provisioning time.
5 Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to
occur.
IMPORTANT
The default threshold value is 1. If you include multiple encryption conditions in your
configuration but do not specify a threshold, decryption can occur if any of the conditions
are met.
NOTE
If you require TPM v2 and Tang for decryption, the value of the threshold attribute must
equal the total number of stated Tang servers plus one. If the threshold value is lower, it
is possible to reach the threshold value by using a single encryption mode. For example, if
you set tpm2 to true and specify two Tang servers, a threshold of 2 can be met by
accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available.
Mirroring does not support replacement of a failed disk. Reprovision the node to restore the mirror to a
pristine, non-degraded state.
Prerequisites
You have downloaded the OpenShift Container Platform installation program on your
installation node.
You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate
a thumbprint of the Tang exchange key.
Procedure
1. If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be
enabled in the host firmware for each node. This is required on most Dell systems. Check the
manual for your specific system.
2. If you want to use Tang to encrypt your cluster, follow these preparatory steps:
a. Set up a Tang server or access an existing one. See Network-bound disk encryption for
instructions.
c. On the RHEL 8 machine, run the following command to generate a thumbprint of the
exchange key. Replace http://tang1.example.com:7500 with the URL of your Tang server:
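The thumbprint command is not present in the extracted text. Based on the trust prompt quoted below, it was likely clevis-encrypt-tang, which prints the server key thumbprint as part of its trust prompt; this is an assumption. Guarded so it is skipped where Clevis is not installed or no server is reachable:

```shell
# Generate a thumbprint of the Tang exchange key; input and output
# are discarded because only the printed thumbprint matters.
TANG_URL=http://tang1.example.com:7500
if command -v clevis-encrypt-tang >/dev/null 2>&1; then
  clevis-encrypt-tang "{\"url\":\"$TANG_URL\"}" < /dev/null > /dev/null \
    || echo "no Tang server reachable at $TANG_URL"
else
  echo "skipping: clevis-encrypt-tang not installed"
fi
```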
Example output
PLjNyRdGw03zlRoGjQYMahSZGu9 1
When the Do you wish to trust these keys? [ynYN] prompt displays, type Y.
i. Obtain the advertisement from the server using the curl command. Replace
http://tang2.example.com:7500 with the URL of your Tang server:
e. If the nodes are configured with static IP addressing, run coreos-installer iso customize --
dest-karg-append or use the coreos-installer --append-karg option when installing
RHCOS nodes to set the IP address of the installed system. Append the ip= and other
arguments needed for your network.
IMPORTANT
Some methods for configuring static IPs do not affect the initramfs after the
first boot and will not work with Tang encryption. These include the coreos-
installer --copy-network option, the coreos-installer iso customize --
network-keyfile option, and the coreos-installer pxe customize --
network-keyfile option, as well as adding ip= arguments to the kernel
command line of the live ISO or PXE image during installation. Incorrect
static IP configuration causes the second boot of the node to fail.
3. On your installation node, change to the directory that contains the installation program and
generate the Kubernetes manifests for the cluster:
1 Replace <installation_directory> with the path to the directory that you want to store the
installation files in.
4. Create a Butane config that configures disk encryption, mirroring, or both. For example, to
configure storage for compute nodes, create a $HOME/clusterconfig/worker-storage.bu file.
variant: openshift
version: 4.17.0
metadata:
  name: worker-storage 1
  labels:
    machineconfiguration.openshift.io/role: worker 2
boot_device:
  layout: x86_64 3
  luks: 4
    tpm2: true 5
    tang: 6
      - url: http://tang1.example.com:7500 7
        thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8
      - url: http://tang2.example.com:7500
        thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF
        advertisement: "{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}" 9
    threshold: 1 10
  mirror: 11
    devices: 12
      - /dev/sda
      - /dev/sdb
openshift:
  fips: true 13
1 2 For control plane configurations, replace worker with master in both of these locations.
3 Set this field to the instruction set architecture of the cluster nodes. Examples
include x86_64, aarch64, and ppc64le.
4 Include this section if you want to encrypt the root file system. For more details, see "About
disk encryption".
5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root
file system.
6 Include this section if you want to use one or more Tang servers.
7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500
on the Tang server.
8 Specify the exchange key thumbprint, which was generated in a preceding step.
9 Optional: Specify the advertisement for your offline Tang server in valid JSON format.
10 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met
for decryption to occur. The default value is 1. For more information about this topic, see
"Configuring an encryption threshold".
11 Include this section if you want to mirror the boot disk. For more details, see "About disk
mirroring".
12 List all disk devices that should be included in the boot disk mirror, including the disk that
RHCOS will be installed onto.
IMPORTANT
To enable FIPS mode for your cluster, you must run the installation program from
a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS
mode. For more information about configuring FIPS mode on RHEL, see
Installing the system in FIPS mode . If you are configuring nodes to use both disk
encryption and mirroring, both features must be configured in the same Butane
configuration file. If you are configuring disk encryption on a node with FIPS
mode enabled, you must include the fips directive in the same Butane
configuration file, even if FIPS mode is also enabled in a separate manifest.
5. Create a control plane or compute node manifest from the corresponding Butane configuration
file and save it to the <installation_directory>/openshift directory. For example, to create a
manifest for the compute nodes, run the following command:
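The command is missing from the extracted text here; by analogy with the RAID manifest command later in this chapter, it is likely of the following shape. The sketch only prints the command rather than executing it, because it needs a real installation directory:

```shell
# Likely shape of the missing command (printed, not executed).
# worker-storage.bu matches the Butane file created in step 4.
CMD='butane $HOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml'
echo "$CMD"
```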
Repeat this step for each node type that requires disk encryption or mirroring.
6. Save the Butane configuration file in case you need to update the manifests in the future.
TIP
You can monitor the console log on the RHCOS nodes during installation for error messages
relating to disk encryption or mirroring.
IMPORTANT
If you configure additional data partitions, they will not be encrypted unless
encryption is explicitly requested.
Verification
After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is
enabled on the cluster nodes.
1. From the installation host, access a cluster node by using a debug pod:
$ oc debug node/compute-1
b. Set /host as the root directory within the debug shell. The debug pod mounts the root file
system of the node in /host within the pod. By changing the root directory to /host, you can
run binaries contained in the executable paths on the node:
# chroot /host
a. From the debug shell, review the status of the root mapping on the node:
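The command for this step is missing from the extracted text. On RHCOS the LUKS mapping is named root, so the likely invocation is cryptsetup status root; this is an assumption. Guarded, because most machines have no such mapping:

```shell
# Review the status of the LUKS root mapping on the node.
MAPPING=root
if [ -e /dev/mapper/$MAPPING ] && command -v cryptsetup >/dev/null 2>&1; then
  cryptsetup status "$MAPPING" || echo "cannot query $MAPPING"
else
  echo "skipping: no /dev/mapper/$MAPPING LUKS mapping on this machine"
fi
```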
Example output
/dev/mapper/root is active and is in use.
  type: LUKS2 1
  cipher: aes-xts-plain64 2
  keysize: 512 bits
  key location: keyring
  device: /dev/sda4 3
  sector size: 512
  offset: 32768 sectors
  size: 15683456 sectors
  mode: read/write
1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the
RHCOS boot disks are encrypted using the LUKS2 format.
2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-
essiv:sha256 cipher is used if FIPS mode is enabled.
3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the
value will represent a software mirror device, for example /dev/md126.
b. List the Clevis plugins that are bound to the encrypted device:
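The command is missing here. clevis luks list reports the pins bound to a LUKS device; /dev/sda4 is taken from the device field in the example output above. Guarded for machines without Clevis or that device:

```shell
# List the Clevis pins bound to the encrypted device.
DEV=/dev/sda4
if command -v clevis >/dev/null 2>&1 && [ -b "$DEV" ]; then
  clevis luks list -d "$DEV" || echo "no Clevis bindings on $DEV"
else
  echo "skipping: clevis or $DEV not available on this machine"
fi
```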
1 Specify the device that is listed in the device field in the output of the preceding step.
Example output
1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1
1 In the example output, the Tang plugin is used by the Shamir’s Secret Sharing (SSS)
Clevis plugin for the /dev/sda4 device.
a. From the debug shell, list the software RAID devices on the node:
# cat /proc/mdstat
Example output
Personalities : [raid1]
md126 : active raid1 sdb3[1] sda3[0] 1
393152 blocks super 1.0 [2/2] [UU]
1 The /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk
devices on the cluster node.
2 The /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk
devices on the cluster node.
b. Review the details of each of the software RAID devices listed in the output of the
preceding command. The following example lists the details of the /dev/md126 device:
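The command for this step is missing from the extracted text; mdadm --detail prints the fields shown in the example output, so it is the likely invocation. Guarded for machines without the array:

```shell
# Show details of one software RAID device from /proc/mdstat.
MD_DEV=/dev/md126
if [ -b "$MD_DEV" ] && command -v mdadm >/dev/null 2>&1; then
  mdadm --detail "$MD_DEV" || echo "cannot read $MD_DEV"
else
  echo "skipping: $MD_DEV not present on this machine"
fi
```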
Example output
/dev/md126:
Version : 1.0
Creation Time : Wed Jul 7 11:07:36 2021
Raid Level : raid1 1
Array Size : 393152 (383.94 MiB 402.59 MB)
Used Dev Size : 393152 (383.94 MiB 402.59 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Name : any:md-boot 6
UUID : ccfa3801:c520e0b5:2bee2755:69043055
Events : 19
1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring.
3 4 States the number of underlying disk devices that are active and working.
5 States the number of underlying disk devices that are in a failed state.
7 8 Provides information about the underlying disk devices used by the software RAID
device.
Example output
In the example output, the /boot file system is mounted on the /dev/md126 software RAID
device and the root file system is mounted on /dev/md127.
4. Repeat the verification steps for each OpenShift Container Platform node type.
Additional resources
For more information about the TPM v2 and Tang encryption modes, see Configuring
automated unlocking of encrypted volumes using policy-based decryption.
Prerequisites
You have downloaded the OpenShift Container Platform installation program on your
installation node.
Procedure
1. Create a Butane config that configures a data volume by using software RAID.
To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot
disk, create a $HOME/clusterconfig/raid1-storage.bu file, for example:
variant: openshift
version: 4.17.0
metadata:
  name: raid1-storage
  labels:
    machineconfiguration.openshift.io/role: worker
boot_device:
  mirror:
    devices:
      - /dev/disk/by-id/scsi-3600508b400105e210000900000490000
      - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6
storage:
  disks:
    - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000
      partitions:
        - label: root-1
          size_mib: 25000 1
        - label: var-1
    - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6
      partitions:
        - label: root-2
          size_mib: 25000 2
        - label: var-2
  raid:
    - name: md-var
      level: raid1
      devices:
        - /dev/disk/by-partlabel/var-1
        - /dev/disk/by-partlabel/var-2
  filesystems:
    - device: /dev/md/md-var
      path: /var
      format: xfs
      wipe_filesystem: true
      with_mount_unit: true
1 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is
recommended. If no value is specified, or if the specified value is smaller than the
recommended minimum, the resulting root file system will be too small, and future
reinstalls of RHCOS might overwrite the beginning of the data partition.
To configure a data volume with RAID 1 on secondary disks, create a
$HOME/clusterconfig/raid1-alt-storage.bu file, for example:
variant: openshift
version: 4.17.0
metadata:
  name: raid1-alt-storage
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  disks:
    - device: /dev/sdc
      wipe_table: true
      partitions:
        - label: data-1
    - device: /dev/sdd
      wipe_table: true
      partitions:
        - label: data-2
  raid:
    - name: md-var-lib-containers
      level: raid1
      devices:
        - /dev/disk/by-partlabel/data-1
        - /dev/disk/by-partlabel/data-2
  filesystems:
    - device: /dev/md/md-var-lib-containers
      path: /var/lib/containers
      format: xfs
      wipe_filesystem: true
      with_mount_unit: true
2. Create a RAID manifest from the Butane config you created in the previous step and save it to
the <installation_directory>/openshift directory. For example, to create a manifest for the
compute nodes, run the following command:
$ butane $HOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1
1 Replace <butane_config> and <manifest_name> with the file names from the previous
step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks.
3. Save the Butane config in case you need to update the manifest in the future.
Prerequisites
You have a system with Intel® Volume Management Device (VMD) enabled.
Procedure
1. Create the Intel® Matrix Storage Manager (IMSM) RAID container by running the following
command:
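The creation command is missing from the extracted text. A non-destructive sketch that only prints the likely invocation, since running mdadm -CR would destroy data on the listed devices; the NVMe device names are assumptions:

```shell
# Likely shape of the missing command (printed, not executed):
# -CR creates the container, -e imsm selects IMSM metadata,
# -n2 declares two member devices.
CMD='mdadm -CR /dev/md/imsm0 -e imsm -n2 /dev/nvme0n1 /dev/nvme1n1'
echo "$CMD"
```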
1 The RAID device names. In this example, there are two devices listed. If you provide more
than two device names, you must adjust the -n flag. For example, listing three devices
would use the flag -n3.
a. Create a dummy RAID0 volume in front of the real RAID1 volume by running the following
command:
c. Stop both RAID0 and RAID1 member arrays and delete the dummy RAID0 array with the
following commands:
$ mdadm -S /dev/md/dummy
$ mdadm -S /dev/md/coreos
$ mdadm --kill-subarray=0 /dev/md/imsm0
a. Get the UUID of the IMSM container by running the following command:
b. Install RHCOS and include the rd.md.uuid kernel argument by running the following
command:
Procedure
1. Create a Butane config including the contents of the chrony.conf file. For example, to
configure chrony on worker nodes, create a 99-worker-chrony.bu file.
NOTE
See "Creating machine configs with Butane" for information about Butane.
variant: openshift
version: 4.17.0
metadata:
  name: 99-worker-chrony 1
  labels:
    machineconfiguration.openshift.io/role: worker 2
storage:
  files:
    - path: /etc/chrony.conf
      mode: 0644 3
      overwrite: true
      contents:
        inline: |
          pool 0.rhel.pool.ntp.org iburst 4
          driftfile /var/lib/chrony/drift
          makestep 1.0 3
          rtcsync
          logdir /var/log/chrony
1 2 On control plane nodes, substitute master for worker in both of these locations.
3 Specify an octal value mode for the mode field in the machine config file. After creating
the file and applying the changes, the mode is converted to a decimal value. You can check
the YAML file with the command oc get mc <mc-name> -o yaml.
4 Specify any valid, reachable time source, such as the one provided by your DHCP server.
Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org,
2.rhel.pool.ntp.org, or 3.rhel.pool.ntp.org.
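As a quick check of the octal-to-decimal conversion described in callout 3, POSIX printf interprets a leading-zero operand as octal:

```shell
# Butane accepts the octal mode 0644; the rendered MachineConfig
# reports the same permissions as the decimal value 420.
printf '%d\n' 0644
```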
If the cluster is not running yet, after you generate manifest files, add the MachineConfig
object file to the <installation_directory>/openshift directory, and then continue to create
the cluster.
If the cluster is already running, apply the file by running the following command:
$ oc apply -f ./99-worker-chrony.yaml
There are no special configuration considerations for services that run only on controller
nodes compared to worker nodes.
NOTE
If your environment has a dedicated load balancer in front of your OpenShift Container
Platform cluster, review the allowlists between your firewall and load balancer to prevent
unwanted network restrictions to your cluster.
Procedure
CHAPTER 2. CONFIGURING YOUR FIREWALL
You can use the wildcards *.quay.io and *.openshiftapps.com instead of cdn.quay.io and
cdn0[1-6].quay.io in your allowlist.
You can use the wildcard *.access.redhat.com to simplify the configuration and ensure
that all subdomains, including registry.access.redhat.com, are allowed.
When you add a site, such as quay.io, to your allowlist, do not add a wildcard entry, such as
*.quay.io, to your denylist. In most cases, image registries use a content delivery network
(CDN) to serve images. If a firewall blocks access, image downloads are denied when the
initial download request redirects to a hostname such as cdn01.quay.io.
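To see why such a denylist entry is harmful, shell glob matching (which many firewalls approximate, though vendor semantics vary) shows that the CDN hostnames fall under *.quay.io:

```shell
# Both cdn.quay.io and the numbered cdn0[1-6].quay.io hosts match a
# *.quay.io pattern, so a denylist with that entry blocks the redirect.
for host in cdn.quay.io cdn01.quay.io cdn06.quay.io; do
  case $host in
    *.quay.io) echo "$host matches *.quay.io" ;;
    *)         echo "$host does not match" ;;
  esac
done
```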
2. Set your firewall’s allowlist to include any site that provides resources for a language or
framework that your builds require.
3. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat
Insights:
4. If you use Alibaba Cloud, Amazon Web Services (AWS), Microsoft Azure, or Google Cloud
Platform (GCP) to host your cluster, you must grant access to the URLs that offer the cloud
provider API and DNS for that cloud:
Operators require route access to perform health checks. Specifically, the authentication and
web console Operators connect to two routes to verify that the routes work. If you are the
cluster administrator and do not want to allow *.apps.<cluster_name>.<base_domain>, then
allow these routes:
oauth-openshift.apps.<cluster_name>.<base_domain>
canary-openshift-ingress-canary.apps.<cluster_name>.<base_domain>
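When scripting a firewall allowlist, the two route hostnames can be derived from your cluster name and base domain; the values below are hypothetical placeholders:

```shell
# Hypothetical example values; substitute your own cluster details.
cluster_name=mycluster
base_domain=example.com

apps_domain="apps.${cluster_name}.${base_domain}"
echo "oauth-openshift.${apps_domain}"
echo "canary-openshift-ingress-canary.${apps_domain}"
```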
7. If you use a default Red Hat Network Time Protocol (NTP) server, allow the following URLs:
1.rhel.pool.ntp.org
2.rhel.pool.ntp.org
3.rhel.pool.ntp.org
NOTE
If you do not use a default Red Hat NTP server, verify the NTP server for your platform
and allow it in your firewall.
Additional resources
Additionally, consider the following dynamic port ranges when managing ingress traffic:
NOTE
The network flow matrix describes ingress traffic flows for a base OpenShift Container
Platform installation. It does not describe network flows for additional components, such
as optional Operators available from the Red Hat Marketplace. The matrix does not apply
for hosted control planes, MicroShift, or standalone clusters.
IMPORTANT
For the most recent list of major functionality that has been deprecated or removed
within OpenShift Container Platform, refer to the Deprecated and removed features
section of the OpenShift Container Platform release notes.
cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over
cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall
Information, and enhanced resource management and isolation. However, cgroup v2 has different CPU,
memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might
experience slight differences in memory or CPU usage on clusters that run cgroup v2.
You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For
more information, see "Configuring the Linux cgroup on your nodes" in the "Additional resources" of this
section.
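One way to confirm which cgroup version a node currently runs (assuming shell access to a Linux node) is to check the filesystem type mounted at /sys/fs/cgroup:

```shell
# cgroup2fs indicates cgroup v2; tmpfs indicates the cgroup v1 hierarchy.
stat -fc %T /sys/fs/cgroup
```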
Procedure
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v1"
Additional resources