OpenShift Container Platform 4.4: Installing
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides information about installing OpenShift Container Platform and details
about some configuration processes.
Table of Contents

CHAPTER 1. GATHERING INSTALLATION LOGS
1.1. GATHERING LOGS FROM A FAILED INSTALLATION
1.2. MANUALLY GATHERING LOGS WITH SSH ACCESS TO YOUR HOST(S)
1.3. MANUALLY GATHERING LOGS WITHOUT SSH ACCESS TO YOUR HOST(S)
CHAPTER 2. SUPPORT FOR FIPS CRYPTOGRAPHY
2.1. FIPS VALIDATION IN OPENSHIFT CONTAINER PLATFORM
2.2. FIPS SUPPORT IN COMPONENTS THAT THE CLUSTER USES
2.2.1. etcd
2.2.2. Storage
2.2.3. Runtimes
2.3. INSTALLING A CLUSTER IN FIPS MODE
CHAPTER 3. INSTALLATION CONFIGURATION
3.1. INSTALLATION METHODS FOR DIFFERENT PLATFORMS
3.2. CUSTOMIZING NODES
3.2.1. Adding day-1 kernel arguments
3.2.2. Adding kernel modules to nodes
3.2.2.1. Building and testing the kernel module container
3.2.2.2. Provisioning a kernel module to OpenShift Container Platform
3.2.2.2.1. Provision kernel modules via a MachineConfig
3.2.3. Encrypting disks during installation
3.2.3.1. Enabling TPM v2 disk encryption
3.2.3.2. Enabling Tang disk encryption
3.2.4. Configuring chrony time service
3.2.5. Additional resources
3.3. CREATING A MIRROR REGISTRY FOR INSTALLATION IN A RESTRICTED NETWORK
3.3.1. About the mirror registry
3.3.2. Preparing the bastion host
3.3.2.1. Installing the CLI by downloading the binary
3.3.2.1.1. Installing the CLI on Linux
3.3.2.1.2. Installing the CLI on Windows
3.3.2.1.3. Installing the CLI on macOS
3.3.3. Creating a mirror registry
3.3.4. Adding the registry to your pull secret
3.3.5. Mirroring the OpenShift Container Platform image repository
3.3.6. Using Samples Operator imagestreams with alternate or mirrored registries
3.4. AVAILABLE CLUSTER CUSTOMIZATIONS
3.4.1. Cluster configuration resources
3.4.2. Operator configuration resources
3.4.3. Additional configuration resources
3.4.4. Informational Resources
3.5. CONFIGURING YOUR FIREWALL
3.5.1. Configuring your firewall for OpenShift Container Platform
3.6. CONFIGURING A PRIVATE CLUSTER
3.6.1. About private clusters
DNS
Ingress Controller
API server
3.6.2. Setting DNS to private
3.6.3. Setting the Ingress Controller to private
CHAPTER 1. GATHERING INSTALLATION LOGS
Prerequisites
You attempted to install an OpenShift Container Platform cluster, and installation failed.
You provided an SSH key to the installation program, and that key is in your running ssh-agent
process.
NOTE
You use a different command to gather logs about an unsuccessful installation than to
gather logs from a running cluster. If you must gather logs from a running cluster, use the
oc adm must-gather command.
Prerequisites
Your OpenShift Container Platform installation failed before the bootstrap process finished.
The bootstrap node must be running and accessible through SSH.
The ssh-agent process is active on your computer, and you provided both the ssh-agent
process and the installation program the same SSH key.
If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the control plane, or master, machines.
Procedure
1. Generate the commands that are required to obtain the installation logs from the bootstrap and
control plane machines:
If you used infrastructure that you provisioned yourself, run the following command:

$ ./openshift-install gather bootstrap --dir=<installation_directory> \ 1
--bootstrap <bootstrap_address> \ 2
--master <master_1_address> \ 3
--master <master_2_address> \ 4
--master <master_3_address> 5
1 For installation_directory, specify the same directory you specified when you ran
./openshift-install create cluster. This directory contains the OpenShift Container
Platform definition files that the installation program creates.
NOTE
A default cluster contains three control plane machines. List all of your
control plane machines as shown, no matter how many your cluster uses.
If you open a Red Hat support case about your installation failure, include the compressed logs
in the case.
Prerequisites
Procedure
1. Collect the bootkube.service service logs from the bootstrap host using the journalctl
command by running:
$ journalctl -b -f -u bootkube.service
2. Collect the bootstrap host’s container logs using podman logs. This is shown as a loop to get
all of the container logs from the host:
$ for pod in $(sudo podman ps -a -q); do sudo podman logs $pod; done
3. Alternatively, collect the host’s container logs using the tail command by running:
# tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log
4. Collect the kubelet.service and crio.service service logs from the master and worker hosts
using the journalctl command by running:

$ journalctl -b -f -u kubelet.service -u crio.service
5. Collect the master and worker host container logs using the tail command by running:

# tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log
If you do not have SSH access to your node, you can access the systems journal to investigate what is
happening on your host.
Prerequisites
Procedure
CHAPTER 2. SUPPORT FOR FIPS CRYPTOGRAPHY
For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied
when the machines are deployed based on the status of an option in the install-config.yaml file, which
governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise
Linux machines, you must enable FIPS mode when you install the operating system on the machines
that you plan to use as worker machines. These configuration methods ensure that your cluster meets the
requirements of a FIPS compliance audit: only FIPS validated / Implementation Under Test
cryptography packages are enabled before the initial system boot.
Because FIPS must be enabled before the operating system that your cluster uses boots for the first
time, you cannot enable FIPS after you deploy a cluster.
OpenShift Container Platform components are written in Go and built with Red Hat’s golang compiler.
When you enable FIPS mode for your cluster, Red Hat’s golang compiler calls RHEL and RHCOS
cryptographic libraries for all OpenShift Container Platform components that require cryptographic
signing. At the initial release of OpenShift Container Platform version 4.3, only the ose-sdn package
uses the native golang cryptography, which is not FIPS validated / Implementation Under Test. Red Hat
verifies that all other packages use the FIPS validated / Implementation Under Test OpenSSL module.
Table 2.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.4

Attributes:
FIPS support in RHEL 7 operating systems.
FIPS support in CRI-O runtimes.
FIPS support in OpenShift Container Platform services.
Use of FIPS compatible golang compiler.

Limitations:
The FIPS implementation does not offer a single function that both computes hash functions and validates the keys that are based on that hash. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases.
TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases.
Although the OpenShift Container Platform cluster itself uses FIPS validated / Implementation Under
Test modules, ensure that the systems that support your OpenShift Container Platform cluster use
FIPS validated / Implementation Under Test modules for cryptography.
2.2.1. etcd
To ensure that the secrets that are stored in etcd use FIPS validated / Implementation Under Test
encryption, encrypt the etcd datastore by using a FIPS-approved cryptographic algorithm. After you
install the cluster, you can encrypt the etcd data by using the aes cbc algorithm.
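As a sketch of what this looks like in practice, etcd encryption is enabled after installation by setting the encryption type on the cluster-scoped APIServer resource. The resource shape below follows the config.openshift.io/v1 API; verify the field names against your cluster version before applying:

```yaml
# Hypothetical sketch: enable aescbc encryption of etcd-backed resources
# by editing the cluster-scoped APIServer object (for example, with
# "oc edit apiserver").
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc   # AES-CBC encryption for data stored in etcd
```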
2.2.2. Storage
For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-
provided disk encryption. By storing all data in volumes that use RHEL-provided disk encryption and
enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are
protected by FIPS validated / Implementation Under Test encryption. You can configure your cluster to
encrypt the root filesystem of each node, as described in Customizing nodes.
2.2.3. Runtimes
To ensure that containers know that they are running on a host that is using FIPS validated /
Implementation Under Test cryptography modules, use CRI-O to manage your runtimes. CRI-O
supports FIPS mode, in that it configures containers to know that they are running in FIPS mode.
Microsoft Azure
Bare metal
VMware vSphere
To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after
you install your cluster.
If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their
initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and
Enabling FIPS Mode in the RHEL 7 documentation.
CHAPTER 3. INSTALLATION CONFIGURATION
NOTE
Not all installation options are currently available for all platforms, as shown in the
following tables.
Default X X X X
Custom X X X X X
Network Operator X X X
Private clusters X X X
Existing virtual private networks X X X
Custom X X X X X X
Network Operator X X
Restricted network X X X X
Creating MachineConfigs that are included in manifest files to start up a cluster during
openshift-install.
Creating MachineConfigs that are passed to running OpenShift Container Platform nodes via
the Machine Config Operator.
The following sections describe features that you might want to configure on your nodes in this way.
You want to disable a feature, such as SELinux, so it has no impact on the systems when they
first come up.
You need to do some low-level network configuration before the systems start.
To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject
that object into the set of manifest files used by Ignition during cluster setup.
For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel
parameters. It is best to only add kernel arguments with this procedure if they are needed to complete
the initial OpenShift Container Platform installation.
Procedure
$ cat << EOF > 99-openshift-machineconfig-master-kargs.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-openshift-machineconfig-master-kargs
spec:
  kernelArguments:
    - 'loglevel=7'
EOF
You can change master to worker to add kernel arguments to worker nodes instead. Create a
separate YAML file to add to both master and worker nodes.
When a kernel module is first deployed by following these instructions, the module is made available for
the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy
the module so a compatible version of that module is available with the new kernel.
This feature keeps the module up to date on each node by:
Adding a systemd service to each node that starts at boot time to detect whether a new kernel
has been installed
If a new kernel is detected, the service rebuilds the module and installs it to the kernel
For information on the software needed for this procedure, see the kmods-via-containers github site.
Software tools and examples are not yet available in official RPM form and can only be obtained
for now from unofficial github.com sites noted in the procedure.
Third-party kernel modules you might add through these procedures are not supported by Red
Hat.
In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8
container. Keep in mind that modules are rebuilt automatically on each node when that node
gets a new kernel. For that reason, each node needs access to a yum repository that contains
the kernel and related packages needed to rebuild the module. That content is best provided
with a valid RHEL subscription.
Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the
process on a separate RHEL system. Gather the kernel module’s source code, the KVC framework, and
the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do
the following:
Procedure
# subscription-manager register
Username: yourname
Password: ***************
# subscription-manager attach --auto
4. Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a
kmods-via-containers systemd service and loads it:
$ cd kmods-via-containers/
$ sudo make install
$ sudo systemctl daemon-reload
5. Get the kernel module source code. The source code might be for a third-party
module that you do not have control over, but that is supplied by others. You will need content
similar to the content shown in the kvc-simple-kmod example, which can be cloned to your
system as follows:
$ cd ..
$ git clone https://github.com/kmods-via-containers/kvc-simple-kmod
6. Edit the configuration file, simple-kmod.conf, in this example, and change the name of the
Dockerfile to Dockerfile.rhel so the file appears as shown here:
$ cd kvc-simple-kmod
$ cat simple-kmod.conf
KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-
simple-kmod.git"
KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel
KMOD_SOFTWARE_VERSION=dd1a7d4
KMOD_NAMES="simple-kmod simple-procfs-kmod"
8. Enable and start the systemd service, then check the status:
9. To confirm that the kernel modules are loaded, use the lsmod command to list the modules:
10. The simple-kmod example has a few other ways to test that it is working. Look for a "Hello world"
message in the kernel ring buffer with dmesg:
Run the spkut command to get more information from the module:
$ sudo spkut 44
KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64
Running userspace wrapper using the kernel module container...
+ podman run -i --rm --privileged
simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44
simple-procfs-kmod number = 0
simple-procfs-kmod number = 44
Going forward, when the system boots this service will check if a new kernel is running. If there is a new
kernel, the service builds a new version of the kernel module and then loads it. If the module is already
built, it will just load it.
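The boot-time behavior described above can be sketched as a simple version comparison. This is illustrative only: both kernel version strings are placeholders (a real node would use uname -r), and the actual rebuild logic lives in the kmods-via-containers service:

```shell
#!/bin/bash
# Sketch of the check the service performs at boot: compare the running
# kernel to the kernel the module was last built for, and rebuild only
# when they differ. Version strings here are hypothetical placeholders.
BUILT_FOR="4.18.0-147.el8.x86_64"
RUNNING="4.18.0-147.3.1.el8_1.x86_64"   # on a real node: $(uname -r)

if [ "$RUNNING" != "$BUILT_FOR" ]; then
  action="rebuild"    # new kernel detected: rebuild and load the module
else
  action="load"       # same kernel: just load the already-built module
fi
echo "kernel $RUNNING: $action module"
```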
Depending on whether or not you must have the kernel module in place when the OpenShift Container
Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways:
Provision kernel modules at cluster install time (day-1): You can create the content as a
MachineConfig and provide it to openshift-install by including it with a set of manifest files.
Provision kernel modules via Machine Config Operator (day-2): If you can wait until the
cluster is up and running to add your kernel module, you can deploy the kernel module software
via the Machine Config Operator (MCO).
In either case, each node needs to be able to get the kernel packages and related software packages at
the time that a new kernel is detected. There are a few ways you can set up each node to be able to
obtain that content.
Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and
copy them to the same location as the other files you provide when you build your Ignition
config.
Inside the Dockerfile, add pointers to a yum repository containing the kernel and other
packages. This must include new kernel packages as they are needed to match newly installed
kernels.
By packaging kernel module software with a MachineConfig you can deliver that software to worker or
master nodes at installation time or via the Machine Config Operator.
First create a base Ignition config that you would like to use. At installation time, the Ignition config will
contain the SSH public key to add to the authorized_keys file for the core user on the cluster. To add
the MachineConfig later via the MCO instead, the SSH public key is not required. For both types, the
example simple-kmod service creates a systemd unit file, which requires a
[email protected].
NOTE
The systemd unit is a workaround for an upstream bug and makes sure that the
[email protected] gets started on boot:
# subscription-manager register
Username: yourname
Password: ***************
# subscription-manager attach --auto
"name": "require-kvc-simple-kmod.service",
"enabled": true,
"contents": "[Unit]\nRequires=kmods-via-containers@simple-
kmod.service\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n\n[Install]\nWantedBy=multi-
user.target"
}]
}
}
EOF
NOTE
You must add your public SSH key to the baseconfig.ign file to use the file
during openshift-install. The public SSH key is not needed if you create the
MachineConfig via the MCO.
NOTE
The mc-base.yaml is set to deploy the kernel module on worker nodes. To deploy on
master nodes, change the role from worker to master. To do both, you could repeat the
whole procedure using different file names for the two types of deployments.
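The mc-base.yaml file referenced in this note is not shown in this excerpt. A minimal sketch of such a base MachineConfig, with illustrative name and label values, might look like the following; the Ignition content generated in the later steps is appended under spec.config:

```yaml
# Hypothetical base MachineConfig skeleton targeting worker nodes.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker   # change to master for master nodes
  name: 10-kvc-simple-kmod
spec:
  config:
```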
3. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using
the repositories cloned earlier:
$ FAKEROOT=$(mktemp -d)
$ cd kmods-via-containers
$ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
$ cd ../kvc-simple-kmod
$ make install DESTDIR=${FAKEROOT}/usr/local CONFDIR=${FAKEROOT}/etc/
$ cd ..
$ sudo yum install -y python3
$ git clone https://github.com/ashcrow/filetranspiler.git
5. Generate a final MachineConfig YAML (mc.yaml) and have it include the base Ignition config,
base MachineConfig, and the fakeroot directory with files you would like to deliver:
$ ./filetranspiler/filetranspile -i ./baseconfig.ign \
-f ${FAKEROOT} --format=yaml --dereference-symlinks \
| sed 's/^/ /' | (cat mc-base.yaml -) > 99-simple-kmod.yaml
6. If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If
the cluster is already running, apply the file as follows:
$ oc create -f 99-simple-kmod.yaml
Your nodes will start the [email protected] service and the kernel
modules will be loaded.
7. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug
node/<openshift-node>, then chroot /host). To list the modules, use the lsmod command:
Sets up disk encryption during the manifest installation phase so all data written to disk, from
first boot forward, is encrypted
TPM v2: This is the preferred mode. TPM v2 stores passphrases in a secure cryptoprocessor. To
implement TPM v2 disk encryption, create an Ignition config file as described below.
Tang: To use Tang to encrypt your cluster, you need to use a Tang server. Clevis implements
decryption on the client side. Tang encryption mode is only supported for bare metal installs.
Follow one of the two procedures to enable disk encryption for the nodes in your cluster.
Use this procedure to enable TPM v2 mode disk encryption during OpenShift Container Platform
deployment.
Procedure
1. Check to see if TPM v2 encryption needs to be enabled in the BIOS on each node. This is
required on most Dell systems. Check the manual for your computer.
3. In the openshift directory, create a master and/or worker file to encrypt disks for those nodes.
Here are examples of those two files:
- contents:
source: data:text/plain;base64,e30K
filesystem: root
mode: 420
path: /etc/clevis.json
EOF
4. Make a backup copy of the YAML file. You should do this because the file will be deleted when
you create the cluster.
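The source value data:text/plain;base64,e30K in the example files above is an empty Clevis configuration: e30K is just the base64 encoding of {} followed by a newline, which you can confirm locally:

```shell
# Reproduce the e30K payload used in the TPM v2 example's data URL:
# base64-encode an empty JSON object (echo appends a trailing newline).
payload=$(echo '{}' | base64)
echo "$payload"   # e30K
```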
Use this procedure to enable Tang mode disk encryption during OpenShift Container Platform
deployment.
Procedure
1. Access a Red Hat Enterprise Linux server from which you can configure the encryption settings
and run openshift-install to install a cluster and oc to work with it.
2. Set up or access an existing Tang server. See Network-bound disk encryption for instructions.
See Securing Automated Decryption New Cryptography and Techniques for a presentation on
Tang.
3. Add kernel arguments to configure networking when you do the RHCOS installations for your
cluster. If you miss this step, the second boot will fail. For example, to configure DHCP
networking, add ip=dhcp, or set static networking, when you add parameters to the kernel
command line.
4. Generate the thumbprint. Install the clevis package, if it is not already installed, and generate a
thumbprint from the Tang server. Replace the value of url with the Tang server URL:
PLjNyRdGw03zlRoGjQYMahSZGu9
5. Create a Base64 encoded file, replacing the URL of the Tang server (url) and thumbprint (thp)
you just generated:
$ (cat <<EOM
{
"url": "https://tang.example.com",
"thp": "PLjNyRdGw03zlRoGjQYMahSZGu9"
}
EOM
) | base64 -w0
ewogInVybCI6ICJodHRwczovL3RhbmcuZXhhbXBsZS5jb20iLAogInRocCI6ICJaUk1leTFjR3cw
N3psVExHYlhuUWFoUzBHdTAiCn0K
6. Replace the “source” in the TPM2 example with the Base64 encoded file for one or both of
these examples for worker and/or master nodes:
Procedure
1. Create the contents of the chrony.conf file and encode it as base64. For example:

$ cat << EOF | base64
    server clock.redhat.com iburst
    driftfile /var/lib/chrony/drift
    makestep 1.0 3
    rtcsync
    logdir /var/log/chrony
EOF

ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli
L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv
dmFyL2xvZy9jaHJvbnkK
2. Create the MachineConfig file, replacing the base64 string with the one you just created
yourself. This example adds the file to master nodes. You can change it to worker or make an
additional MachineConfig for the worker role:
path: /etc/chrony.conf
osImageURL: ""
EOF
4. If the cluster is not up yet, generate manifest files, add this file to the openshift directory, and
then continue to create the cluster. If the cluster is already running, apply the file as follows:

$ oc apply -f ./masters-chrony-configuration.yaml
IMPORTANT
You must have access to the internet to obtain the data that populates the mirror
repository. In this procedure, you place the mirror registry on a bastion host that has
access to both your network and the internet. If you do not have access to a bastion host,
use the method that best fits your restrictions to bring the contents of the mirror registry
into your restricted network.
The mirror registry is a key component that is required to complete an installation in a restricted
network. You can create this mirror on a bastion host, which can access both the internet and your
closed network, or by using other methods that meet your restrictions.
Because of the way that OpenShift Container Platform verifies integrity for the release payload, the
image references in your local registry are identical to the ones that are hosted by Red Hat on Quay.io.
During the bootstrapping process of installation, the images must have the same digests no matter
which repository they are pulled from. To ensure that the release payload is identical, you mirror the
images to your local repository.
You can install the OpenShift CLI (oc) in order to interact with OpenShift Container Platform from a
command-line interface. You can install oc on Linux, Windows, or macOS.
IMPORTANT
If you installed an earlier version of oc, you cannot use it to complete all of the commands
in OpenShift Container Platform 4.4. Download and install the new version of oc.
You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.
Procedure
1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
3. In the Command-line interface section, select Linux from the drop-down menu and click
Download command-line tools.
$ echo $PATH
$ oc <command>
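As a quick sanity check after unpacking oc, you can confirm that the directory containing the binary appears in your PATH. The /usr/local/bin location below is only a placeholder for wherever you unpacked it:

```shell
# Check whether a directory (hypothetical unpack location) is on PATH.
dir=/usr/local/bin
PATH="$dir:$PATH"   # ensure it is present for this demonstration
case ":$PATH:" in
  *":$dir:"*) result="on PATH" ;;
  *)          result="missing" ;;
esac
echo "oc directory is $result"
```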
You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.
Procedure
1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
3. In the Command-line interface section, select Windows from the drop-down menu and click
Download command-line tools.
C:\> path
C:\> oc <command>
You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.
Procedure
1. Navigate to the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site.
3. In the Command-line interface section, select MacOS from the drop-down menu and click
Download command-line tools.
$ echo $PATH
$ oc <command>
NOTE
The following procedure creates a simple registry that stores data in the /opt/registry
folder and runs in a podman container. You can use a different registry solution, such as
Red Hat Quay. Review the following procedure to ensure that your registry functions
correctly.
Prerequisites
You have a Red Hat Enterprise Linux (RHEL) server on your network to use as the registry host.
Procedure
On the bastion host, take the following actions:
# yum -y install podman httpd-tools

The podman package provides the container runtime that you run the registry in. The httpd-
tools package provides the htpasswd utility, which you use to create users.
# mkdir -p /opt/registry/{auth,certs,data}
3. Provide a certificate for the registry. If you do not have an existing, trusted certificate authority,
you can generate a self-signed certificate:
$ cd /opt/registry/certs
# openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt
Country Name (2 letter code): Specify the two-letter ISO country code for your location. See the ISO 3166 country codes standard.
Common Name (eg, your name or your server’s hostname): Enter the host name for the registry host. Ensure that your hostname is in DNS and that it resolves to the expected IP address.
Email Address: Enter your email address. For more information, see the req description in the OpenSSL documentation.
4. Generate a user name and a password for your registry that uses the bcrypt format:

# htpasswd -bBc /opt/registry/auth/htpasswd <user_name> <password>
1 For <local_registry_host_port>, specify the port that your mirror registry uses to serve
content.
1 2 For <local_registry_host_port>, specify the port that your mirror registry uses to serve
content.
# cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
# update-ca-trust
You must trust your certificate to log in to your registry during the mirror process.
Confirm that the registry is accessible by listing its (empty) catalog:

$ curl -u <user_name>:<password> https://<local_registry_host_name>:<local_registry_host_port>/v2/_catalog
{"repositories":[]}
1 For <user_name> and <password>, specify the user name and password for your
registry. For <local_registry_host_name>, specify the registry domain name that you
specified in your certificate, such as registry.example.com. For
<local_registry_host_port>, specify the port that your mirror registry uses to serve
content.
Prerequisites
Procedure
Complete the following steps on the bastion host:
1. Download your registry.redhat.io pull secret from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Generate the base64-encoded user name and password or token for your mirror registry:

$ echo -n '<user_name>:<password>' | base64 -w0 1
BGVtbYk3ZHAtqXs=
1 For <user_name> and <password>, specify the user name and password that you
configured for your registry.
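To see the encoding concretely, here is the same operation with throwaway placeholder credentials (myuser and mypass are not real values):

```shell
# Encode placeholder registry credentials as base64, the same way the
# real command encodes <user_name>:<password>.
creds=$(echo -n 'myuser:mypass' | base64 -w0)
echo "$creds"   # bXl1c2VyOm15cGFzcw==
```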
1 Specify the path to the folder to store the pull secret in and a name for the JSON file that
you create.
{
"auths": {
"cloud.openshift.com": {
"auth": "b3BlbnNo...",
"email": "[email protected]"
},
"quay.io": {
"auth": "b3BlbnNo...",
"email": "[email protected]"
},
"registry.connect.redhat.com": {
"auth": "NTE3Njg5Nj...",
"email": "[email protected]"
},
"registry.redhat.io": {
"auth": "NTE3Njg5Nj...",
"email": "[email protected]"
}
}
}
4. Edit the new file and add a section that describes your registry to it:
"auths": {
...
"<local_registry_host_name>:<local_registry_host_port>": { 1
"auth": "<credentials>", 2
"email": "[email protected]"
},
...
1 For <local_registry_host_name>, specify the registry domain name that you specified in
your certificate, and for <local_registry_host_port>, specify the port that your mirror
registry uses to serve content.
2 For <credentials>, specify the base64-encoded user name and password for the mirror
registry that you generated.
{
"auths": {
"cloud.openshift.com": {
"auth": "b3BlbnNo...",
"email": "[email protected]"
},
"quay.io": {
"auth": "b3BlbnNo...",
"email": "[email protected]"
},
"registry.connect.redhat.com": {
"auth": "NTE3Njg5Nj...",
"email": "[email protected]"
},
"<local_registry_host_name>:<local_registry_host_port>": {
"auth": "<credentials>",
"email": "[email protected]"
},
"registry.redhat.io": {
"auth": "NTE3Njg5Nj...",
"email": "[email protected]"
}
}
}
Prerequisites
You configured a mirror registry to use in your restricted network and can access the certificate
and credentials that you configured.
You downloaded the pull secret from the Pull Secret page on the Red Hat OpenShift Cluster
Manager site and modified it to include authentication to your mirror repository.
Procedure
Complete the following steps on the bastion host:
1. Review the OpenShift Container Platform downloads page to determine the version of
OpenShift Container Platform that you want to install and determine the corresponding tag on
the Repository Tags page.
2. Set the required environment variables:
$ export OCP_RELEASE=<release_version> 1
$ export LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' 2
$ export LOCAL_REPOSITORY='<repository_name>' 3
$ export PRODUCT_REPO='openshift-release-dev' 4
$ export LOCAL_SECRET_JSON='<path_to_pull_secret>' 5
$ export RELEASE_NAME="ocp-release" 6
1 For <release_version>, specify the tag that corresponds to the version of OpenShift
Container Platform to install for your architecture, such as 4.4.0-x86_64.
2 For <local_registry_host_name>, specify the registry domain name for your mirror
repository, and for <local_registry_host_port>, specify the port that it serves content on.
3 For <repository_name>, specify the name of the repository to create in your registry, such
as ocp4/openshift4.
4 The repository to mirror. For a production release, you must specify openshift-release-
dev.
5 For <path_to_pull_secret>, specify the absolute path to and file name of the pull secret
for your mirror registry that you created.
6 The release mirror. For a production release, you must specify ocp-release.
3. Mirror the version images to the internal container registry:
$ oc adm -a ${LOCAL_SECRET_JSON} release mirror \
--from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE} \
--to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
--to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}
This command pulls the release information as a digest, and its output includes the
imageContentSources data that you require when you install your cluster.
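The exported variables above combine into the source and destination image references for the mirror operation. A sketch with placeholder values (mirror.example.com:5000 and ocp4/openshift4 are assumptions):

```shell
# Placeholder values mirroring the exports above.
OCP_RELEASE='4.4.0-x86_64'
LOCAL_REGISTRY='mirror.example.com:5000'
LOCAL_REPOSITORY='ocp4/openshift4'
PRODUCT_REPO='openshift-release-dev'
RELEASE_NAME='ocp-release'

# The release image to pull from Quay, and the repository in the
# mirror registry that receives the mirrored content.
from="quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}"
to="${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}"

echo "$from"
echo "$to"
```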
4. Record the entire imageContentSources section from the output of the previous command.
The information about your mirrors is unique to your mirrored repository, and you must add the
imageContentSources section to the install-config.yaml file during installation.
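For reference, an imageContentSources section in install-config.yaml has the following shape; the mirror host and repository below are assumed placeholder values, and the actual source repositories come from your own command output:

```yaml
imageContentSources:
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror.example.com:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```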
5. To create the installation program that is based on the content that you mirrored, extract it and
pin it to the release:
IMPORTANT
To ensure that you use the correct images for the version of OpenShift
Container Platform that you selected, you must extract the installation program
from the mirrored content.
You must perform this step on a machine with an active internet connection.
$ oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}"
IMPORTANT
The Samples Operator prevents the use of the following registries for the Jenkins
imagestreams:
docker.io
registry.redhat.io
registry.access.redhat.com
quay.io
NOTE
The cli, installer, must-gather, and tests imagestreams, while part of the install payload,
are not managed by the Samples Operator. These are not addressed in this procedure.
Prerequisites
Procedure
2. Mirror images from registry.redhat.io associated with any imagestreams you need in the
restricted network environment into one of the defined mirrors, for example:
3. Add the required trusted CAs for the mirror in the cluster’s image configuration object:
4. Update the samplesRegistry field in the Samples Operator configuration object to contain the
hostname portion of the mirror location defined in the mirror configuration:
NOTE
This is required because the imagestream import process does not use the mirror
or search mechanism at this time.
5. Add any imagestreams that are not mirrored into the skippedImagestreams field of the
Samples Operator configuration object. Or if you do not want to support any of the sample
imagestreams, set the Samples Operator to Removed in the Samples Operator configuration
object.
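A sketch of the resulting Samples Operator configuration object, with an assumed mirror host name and an illustrative list of skipped imagestreams:

```yaml
apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  # Host name portion of the mirror location (assumed value).
  samplesRegistry: mirror.example.com:5000
  # Imagestreams that were not mirrored and should be skipped
  # (illustrative names only).
  skippedImagestreams:
  - jenkins
  - jenkins-agent-maven
  - jenkins-agent-nodejs
```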
NOTE
If any unmirrored imagestreams are not skipped, or if the Samples Operator is not
changed to Removed, the Samples Operator reports a Degraded status two hours
after the imagestream imports start failing.
Many of the templates in the OpenShift namespace reference the imagestreams. Setting the
Samples Operator to Removed purges both the imagestreams and the templates, which
prevents attempts to use them when they are not functional because of missing imagestreams.
Next steps
Install a cluster on infrastructure that you provision in your restricted network, such as on VMware
vSphere, bare metal, or Amazon Web Services.
You modify the configuration resources to configure the major features of the cluster, such as the
image registry, networking configuration, image build behavior, and the identity provider.
For current documentation of the settings that you control by using these resources, use the oc explain
command, for example oc explain builds --api-version=config.openshift.io/v1.
authentication.config.openshift.io
Controls the identity provider and authentication configuration for the cluster.
build.config.openshift.io
Controls default and enforced configuration for all builds on the cluster.
console.config.openshift.io
Configures the behavior of the web console interface, including the logout behavior.
featuregate.config.openshift.io
Enables FeatureGates so that you can use Tech Preview features.
image.config.openshift.io
Configures how specific image registries should be treated (allowed, disallowed, insecure, CA details).
ingress.config.openshift.io
Configuration details related to routing such as the default domain for routes.
oauth.config.openshift.io
Configures identity providers and other behavior related to internal OAuth server flows.
project.config.openshift.io
Configures how projects are created including the project template.
proxy.config.openshift.io
Defines proxies to be used by components needing external network access. Note: not all components currently consume this value.
config.imageregistry.operator.openshift.io
Configures internal image registry settings such as public routing, log levels, proxy settings, resource constraints, replica counts, and storage type.
config.samples.operator.openshift.io
Configures the Samples Operator to control which example imagestreams and templates are installed on the cluster.
clusterversion.config.openshift.io (instance: version)
In OpenShift Container Platform 4.4, you must not customize the ClusterVersion resource for production clusters. Instead, follow the process to update a cluster.
dns.config.openshift.io (instance: cluster)
You cannot modify the DNS settings for your cluster. You can view the DNS Operator status.
infrastructure.config.openshift.io (instance: cluster)
Configuration details allowing the cluster to interact with its cloud provider.
network.config.openshift.io (instance: cluster)
You cannot modify your cluster networking after installation. To customize your network, follow the process to customize networking during installation.
Procedure
URL Function
2. Whitelist any site that provides resources for a language or framework that your builds require.
3. If you do not disable Telemetry, you must grant access to the following URLs to access Red Hat
Insights:
URL Function
4. If you use Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to
host your cluster, you must grant access to the URLs that provide the cloud provider API and
DNS for that cloud:
URL Function
IMPORTANT
You can configure this change only for clusters that use infrastructure that you provision
to a cloud provider.
DNS
If you install OpenShift Container Platform on installer-provisioned infrastructure, the installation
program creates records in a pre-existing public zone and, where possible, creates a private zone for the
cluster’s own DNS resolution. In both the public zone and the private zone, the installation program or
cluster creates DNS entries for *.apps (for Ingress) and api (for the API server).
The *.apps records in the public and private zone are identical, so when you delete the public zone, the
private zone seamlessly provides all DNS resolution for the cluster.
Ingress Controller
Because the default Ingress object is created as public, the load balancer is internet-facing and in the
public subnets. You can replace the default Ingress Controller with an internal one.
API server
By default, the installation program creates appropriate network load balancers for the API server to
use for both internal and external traffic.
On Amazon Web Services (AWS), separate public and private load balancers are created. The load
balancers are identical except that an additional port is available on the internal one for use within the
cluster. Although the installation program automatically creates or destroys the load balancer based on
API server requirements, the cluster does not manage or maintain them. As long as you preserve the
cluster’s access to the API server, you can manually modify or move the load balancers. For the public
load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
On Google Cloud Platform, a single load balancer is created to manage both internal and external API
traffic, so you do not need to modify the load balancer.
On Microsoft Azure, both public and private load balancers are created. However, because of limitations
in the current implementation, both load balancers must be retained in a private cluster.
After you deploy a cluster, you can modify its DNS to use only a private zone.
Procedure
1. Review the DNS custom resource for your cluster:
$ oc get dnses.config.openshift.io/cluster -o yaml
Note that the spec section contains both a private and a public zone.
2. Patch the DNS custom resource to remove the public zone:
$ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
Because the Ingress Controller consults the DNS definition when it creates Ingress objects,
only private records are created when you create or modify Ingress objects.
IMPORTANT
DNS records for the existing Ingress objects are not modified when you remove
the public zone.
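Removing the public zone amounts to deleting the publicZone field from the DNS spec. As a local sketch, with placeholder zone IDs, of how a JSON merge patch (RFC 7386) deletes a field when the patch value is null:

```shell
# Simulate the effect of the merge patch {"spec": {"publicZone": null}}
# on a DNS spec: a null value deletes the key (RFC 7386).
merged=$(python3 - <<'EOF'
import json

spec = {"baseDomain": "example.com",
        "publicZone": {"id": "Z123"},
        "privateZone": {"id": "Z456"}}
patch = {"publicZone": None}

# Minimal merge for this flat case: null deletes, otherwise replace.
for key, value in patch.items():
    if value is None:
        spec.pop(key, None)
    else:
        spec[key] = value

print(json.dumps(spec, sort_keys=True))
EOF
)
echo "$merged"
```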
3. Optional: Review the DNS custom resource for your cluster and confirm that the public zone
was removed:
$ oc get dnses.config.openshift.io/cluster -o yaml
uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
spec:
baseDomain: <base_domain>
privateZone:
tags:
Name: <infrastructureID>-int
kubernetes.io/cluster/<infrastructureID>-wfpg4: owned
status: {}
Procedure
The public DNS entry is removed, and the private zone entry is updated.
Prerequisites
Procedure
1. In the web portal or console for AWS or Azure, take the following actions:
For AWS, delete the external load balancer. The API DNS entry in the private zone
already points to the internal load balancer, which uses an identical configuration, so you
do not need to modify the internal load balancer.
For Azure, delete the api-internal rule for the load balancer.
2. List the machines in the openshift-machine-api namespace:
$ oc get machine -n openshift-machine-api
You modify the control plane machines, which contain master in the name, in the following step.
3. Remove the external load balancer from each control plane machine.
a. Edit a master Machine object to remove the reference to the external load balancer.
$ oc edit machines -n openshift-machine-api <master_name> 1
1 Specify the name of the control plane Machine object to modify.
b. Remove the lines that describe the external load balancer, which are marked in the following
example, and save and exit the object specification:
...
spec:
providerSpec:
value:
...
loadBalancers:
- name: lk4pj-ext 1
type: network 2
- name: lk4pj-int
type: network
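After the marked lines are removed, the loadBalancers list in the example above retains only the internal entry:

```yaml
spec:
  providerSpec:
    value:
      loadBalancers:
      - name: lk4pj-int
        type: network
```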
c. Repeat this process for each of the machines that contains master in the name.