
OpenShift Container Platform 4.6

Installing on OpenStack

Installing OpenShift Container Platform OpenStack clusters

Last Updated: 2021-02-17
Legal Notice
Copyright © 2021 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.

Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract
This document provides instructions for installing and uninstalling OpenShift Container Platform
clusters on OpenStack Platform.
Table of Contents

CHAPTER 1. INSTALLING ON OPENSTACK 6
1.1. INSTALLING A CLUSTER ON OPENSTACK WITH CUSTOMIZATIONS 6
1.1.1. Prerequisites 6
1.1.2. Resource guidelines for installing OpenShift Container Platform on RHOSP 6
1.1.2.1. Control plane and compute machines 7
1.1.2.2. Bootstrap machine 7
1.1.3. Internet and Telemetry access for OpenShift Container Platform 8
1.1.4. Enabling Swift on RHOSP 8
1.1.5. Verifying external network access 9
1.1.6. Defining parameters for the installation program 10
1.1.7. Obtaining the installation program 12
1.1.8. Creating the installation configuration file 12
1.1.9. Installation configuration parameters 14
1.1.9.1. Custom subnets in RHOSP deployments 23
1.1.9.2. Sample customized install-config.yaml file for RHOSP 24
1.1.10. Generating an SSH private key and adding it to the agent 25
1.1.11. Enabling access to the environment 26
1.1.11.1. Enabling access with floating IP addresses 26
1.1.11.2. Completing installation without floating IP addresses 27
1.1.12. Deploying the cluster 27
1.1.13. Verifying cluster status 29
1.1.14. Logging in to the cluster by using the CLI 29
1.1.15. Next steps 30
1.2. INSTALLING A CLUSTER ON OPENSTACK WITH KURYR 30
1.2.1. Prerequisites 30
1.2.2. About Kuryr SDN 31
1.2.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr 31
1.2.3.1. Increasing quota 33
1.2.3.2. Configuring Neutron 33
1.2.3.3. Configuring Octavia 34
1.2.3.3.1. The Octavia OVN Driver 37
1.2.3.4. Known limitations of installing with Kuryr 38
RHOSP general limitations 38
RHOSP version limitations 38
RHOSP environment limitations 38
RHOSP upgrade limitations 39
1.2.3.5. Control plane and compute machines 39
1.2.3.6. Bootstrap machine 39
1.2.4. Internet and Telemetry access for OpenShift Container Platform 40
1.2.5. Enabling Swift on RHOSP 40
1.2.6. Verifying external network access 41
1.2.7. Defining parameters for the installation program 42
1.2.8. Obtaining the installation program 44
1.2.9. Creating the installation configuration file 44
1.2.10. Installation configuration parameters 46
1.2.10.1. Custom subnets in RHOSP deployments 55
1.2.10.2. Sample customized install-config.yaml file for RHOSP with Kuryr 56
1.2.11. Generating an SSH private key and adding it to the agent 57
1.2.12. Enabling access to the environment 58
1.2.12.1. Enabling access with floating IP addresses 58
1.2.12.2. Completing installation without floating IP addresses 59

1.2.13. Deploying the cluster 60
1.2.14. Verifying cluster status 61
1.2.15. Logging in to the cluster by using the CLI 61
1.2.16. Next steps 62
1.3. INSTALLING A CLUSTER ON OPENSTACK ON YOUR OWN INFRASTRUCTURE 62
1.3.1. Prerequisites 62
1.3.2. Internet and Telemetry access for OpenShift Container Platform 63
1.3.3. Resource guidelines for installing OpenShift Container Platform on RHOSP 63
1.3.3.1. Control plane and compute machines 64
1.3.3.2. Bootstrap machine 65
1.3.4. Downloading playbook dependencies 65
1.3.5. Downloading the installation playbooks 66
1.3.6. Obtaining the installation program 67
1.3.7. Generating an SSH private key and adding it to the agent 68
1.3.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image 69
1.3.9. Verifying external network access 70
1.3.10. Enabling access to the environment 71
1.3.10.1. Enabling access with floating IP addresses 71
1.3.10.2. Completing installation without floating IP addresses 72
1.3.11. Defining parameters for the installation program 72
1.3.12. Creating the installation configuration file 74
1.3.13. Installation configuration parameters 75
1.3.13.1. Custom subnets in RHOSP deployments 85
1.3.13.2. Sample customized install-config.yaml file for RHOSP 86
1.3.13.3. Setting a custom subnet for machines 86
1.3.13.4. Emptying compute machine pools 87
1.3.14. Creating the Kubernetes manifest and Ignition config files 88
1.3.15. Preparing the bootstrap Ignition files 89
1.3.16. Creating control plane Ignition config files on RHOSP 92
1.3.17. Creating network resources on RHOSP 93
1.3.18. Creating the bootstrap machine on RHOSP 94
1.3.19. Creating the control plane machines on RHOSP 95
1.3.20. Logging in to the cluster by using the CLI 96
1.3.21. Deleting bootstrap resources from RHOSP 96
1.3.22. Creating compute machines on RHOSP 97
1.3.23. Approving the certificate signing requests for your machines 97
1.3.24. Verifying a successful installation 100
1.3.25. Next steps 101
1.4. INSTALLING A CLUSTER ON OPENSTACK WITH KURYR ON YOUR OWN INFRASTRUCTURE 101
1.4.1. Prerequisites 101
1.4.2. About Kuryr SDN 101
1.4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr 102
1.4.3.1. Increasing quota 104
1.4.3.2. Configuring Neutron 104
1.4.3.3. Configuring Octavia 104
1.4.3.3.1. The Octavia OVN Driver 108
1.4.3.4. Known limitations of installing with Kuryr 108
RHOSP general limitations 108
RHOSP version limitations 109
RHOSP environment limitations 109
RHOSP upgrade limitations 109
1.4.3.5. Control plane and compute machines 110
1.4.3.6. Bootstrap machine 110

1.4.4. Internet and Telemetry access for OpenShift Container Platform 110
1.4.5. Downloading playbook dependencies 111
1.4.6. Downloading the installation playbooks 112
1.4.7. Obtaining the installation program 113
1.4.8. Generating an SSH private key and adding it to the agent 114
1.4.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image 115
1.4.10. Verifying external network access 116
1.4.11. Enabling access to the environment 116
1.4.11.1. Enabling access with floating IP addresses 117
1.4.11.2. Completing installation without floating IP addresses 117
1.4.12. Defining parameters for the installation program 118
1.4.13. Creating the installation configuration file 120
1.4.14. Installation configuration parameters 121
1.4.14.1. Custom subnets in RHOSP deployments 130
1.4.14.2. Sample customized install-config.yaml file for RHOSP with Kuryr 131
1.4.14.3. Setting a custom subnet for machines 132
1.4.14.4. Emptying compute machine pools 133
1.4.14.5. Modifying the network type 133
1.4.15. Creating the Kubernetes manifest and Ignition config files 134
1.4.16. Preparing the bootstrap Ignition files 135
1.4.17. Creating control plane Ignition config files on RHOSP 138
1.4.18. Creating network resources on RHOSP 139
1.4.19. Creating the bootstrap machine on RHOSP 140
1.4.20. Creating the control plane machines on RHOSP 141
1.4.21. Logging in to the cluster by using the CLI 142
1.4.22. Deleting bootstrap resources from RHOSP 142
1.4.23. Creating compute machines on RHOSP 143
1.4.24. Approving the certificate signing requests for your machines 143
1.4.25. Verifying a successful installation 146
1.4.26. Next steps 146
1.5. INSTALLING A CLUSTER ON OPENSTACK IN A RESTRICTED NETWORK 147
1.5.1. About installations in restricted networks 147
1.5.1.1. Additional limits 147
1.5.2. Resource guidelines for installing OpenShift Container Platform on RHOSP 148
1.5.2.1. Control plane and compute machines 149
1.5.2.2. Bootstrap machine 149
1.5.3. Internet and Telemetry access for OpenShift Container Platform 149
1.5.4. Enabling Swift on RHOSP 150
1.5.5. Defining parameters for the installation program 150
1.5.6. Creating the RHCOS image for restricted network installations 152
1.5.7. Creating the installation configuration file 153
1.5.7.1. Installation configuration parameters 155
1.5.7.2. Sample customized install-config.yaml file for restricted OpenStack installations 165
1.5.8. Generating an SSH private key and adding it to the agent 167
1.5.9. Enabling access to the environment 168
1.5.9.1. Enabling access with floating IP addresses 168
1.5.9.2. Completing installation without floating IP addresses 169
1.5.10. Deploying the cluster 169
1.5.11. Verifying cluster status 171
1.5.12. Logging in to the cluster by using the CLI 171
1.6. UNINSTALLING A CLUSTER ON OPENSTACK 172
1.6.1. Removing a cluster that uses installer-provisioned infrastructure 172
1.7. UNINSTALLING A CLUSTER ON RHOSP FROM YOUR OWN INFRASTRUCTURE 173

1.7.1. Downloading playbook dependencies 173
1.7.2. Removing a cluster from RHOSP that uses your own infrastructure 174


CHAPTER 1. INSTALLING ON OPENSTACK

1.1. INSTALLING A CLUSTER ON OPENSTACK WITH CUSTOMIZATIONS


In OpenShift Container Platform version 4.6, you can install a customized cluster on Red Hat OpenStack
Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml file before
you install the cluster.

1.1.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version in the
Available platforms section. You can also compare platform support across different
versions by viewing the OpenShift Container Platform on RHOSP support matrix.

Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.

Have a storage service installed in RHOSP, like block storage (Cinder) or object storage (Swift).
Object storage is the recommended storage technology for OpenShift Container Platform
registry cluster deployment. For more information, see Optimizing storage .

Have the metadata service enabled in RHOSP.

1.1.2. Resource guidelines for installing OpenShift Container Platform on RHOSP


To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP)
quota must meet the following requirements:

Table 1.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP

Resource Value

Floating IP addresses 3

Ports 15

Routers 1

Subnets 1

RAM 112 GB

vCPUs 28

Volume storage 275 GB

Instances 7

Security groups 3

Security group rules 60

A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.

IMPORTANT

If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.

NOTE

By default, your security group and security group rule quotas might be low. If you
encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60
<project> as an administrator to increase them.
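
For example, you can review your project's current quota before you begin and compare it against the
values in the preceding table. The project name is a placeholder:

$ openstack quota show <project>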

An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.

1.1.2.1. Control plane and compute machines

By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.

Each machine requires:

An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

TIP

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.

1.1.2.2. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

1.1.3. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM).

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.1.4. Enabling Swift on RHOSP


Swift is operated by a user account with the swiftoperator role. Add the role to an account before you
run the installation program.

IMPORTANT

If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as
Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it
is unavailable, the installation program relies on the RHOSP block storage service,
commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present,
or if you do not want to use it, skip this section.

Prerequisites

You have a RHOSP administrator account on the target environment.

The Swift service is installed.

On Ceph RGW , the account in url option is enabled.

Procedure
To enable Swift on RHOSP:

1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will
access Swift:

$ openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry.
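
To confirm that the role was applied, you can list the role assignments for the account. The user and
project names are placeholders:

$ openstack role assignment list --user <user> --project <project> --names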

1.1.5. Verifying external network access


The OpenShift Container Platform installation process requires external network access. You must
provide an external network value to it, or deployment fails. Before you begin the process, verify that a
network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Prerequisites

Configure OpenStack’s networking service to have DHCP agents forward instances' DNS
queries

Procedure

1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

$ openstack network list --long -c ID -c Name -c "Router Type"

Example output

+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating
a default floating IP network and Creating a default provider network .

IMPORTANT

If the external network’s CIDR range overlaps one of the default network ranges, you
must change the matching network ranges in the install-config.yaml file before you start
the installation process.

The default network ranges are:

Network Range

machineNetwork 10.0.0.0/16

serviceNetwork 172.30.0.0/16

clusterNetwork 10.128.0.0/14


WARNING

If the installation program finds multiple networks with the same name, it sets one
of them at random. To avoid this behavior, create unique names for resources in
RHOSP.

NOTE

If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .
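
If you are unsure whether the trunk service plug-in is enabled, you can list the enabled networking
extensions and look for a trunk entry:

$ openstack extension list --network | grep -i trunk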

1.1.6. Defining parameters for the installation program


The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The
file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project
name, log in information, and authorization service URLs.

Procedure

1. Create the clouds.yaml file:

If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

IMPORTANT

Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.

If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.

clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'

2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:

a. Copy the certificate authority file to your machine.

b. Add the machine to the certificate authority trust bundle:

$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/

c. Update the trust bundle:

$ sudo update-ca-trust extract

d. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-
accessible path to the CA certificate:

clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"

TIP

After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a
command line, run:

$ oc edit configmap -n openshift-config cloud-provider-config

3. Place the clouds.yaml file in one of the following locations:

a. The value of the OS_CLIENT_CONFIG_FILE environment variable

b. The current directory

c. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml

d. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml


The installation program searches for clouds.yaml in that order.
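
For example, if you keep clouds.yaml outside of the default search locations, you can point the
OS_CLIENT_CONFIG_FILE environment variable at it and confirm that authentication works. The path and
the cloud name shiftstack are placeholders that match the earlier example:

$ export OS_CLIENT_CONFIG_FILE=/path/to/clouds.yaml
$ openstack --os-cloud shiftstack token issue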

1.1.7. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz
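
After you extract the archive, you can confirm that the installation program runs by printing its
version. The output varies with the release that you downloaded:

$ ./openshift-install version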

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.1.8. Creating the installation configuration file


You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack
Platform (RHOSP).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509
certificates have short expiration intervals, so you must not reuse an
installation directory. If you want to reuse individual files from another cluster
installation, you can copy them into your directory. However, the file names
for the installation assets might change between releases. Use caution when
copying installation files from an earlier OpenShift Container Platform
version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want
to perform installation debugging or disaster recovery, specify an SSH
key that your ssh-agent process uses.

ii. Select openstack as the platform to target.

iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.

iv. Specify the floating IP address to use for external access to the OpenShift API.

v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.

vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.

vii. Enter a name for your cluster. The name must be 14 or fewer characters long.

viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you
want to reuse the file, you must back it up now.
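
For example, you can copy the file to a backup name before you run the installer. The backup file
name is arbitrary:

$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup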

1.1.9. Installation configuration parameters


Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.2. Required parameters

apiVersion
    Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
    Values: String

baseDomain
    Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
    Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    Values: Object

metadata.name
    Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform
    Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the table for your specific platform that follows.
    Values: Object

pullSecret
    Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
    Values: For example:

    {
       "auths":{
          "cloud.openshift.com":{
             "auth":"b3Blb=",
             "email":"[email protected]"
          },
          "quay.io":{
             "auth":"b3Blb=",
             "email":"[email protected]"
          }
       }
    }

Table 1.3. Optional parameters

additionalTrustBundle
    Description: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
    Values: String

compute
    Description: The configuration for the machines that comprise the compute nodes.
    Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

compute.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

compute.name
    Description: Required if you use compute. The name of the machine pool.
    Values: worker

compute.platform
    Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    Description: The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    Description: The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

controlPlane.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

controlPlane.name
    Description: Required if you use controlPlane. The name of the machine pool.
    Values: master

controlPlane.platform
    Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
    Description: The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.

credentialsMode
    Description: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
    Values: Mint, Passthrough, Manual, or an empty string ("").

fips
    Description: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
    Values: false or true

imageContentSources
    Description: Sources and repositories for the release-image content.
    Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following entries.

imageContentSources.source
    Description: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
    Values: String

imageContentSources.mirrors
    Description: Specify one or more repositories that may also contain the same images.
    Values: Array of strings

networking
    Description: The configuration for the pod network provider in the cluster.
    Values: Object

networking.clusterNetwork
    Description: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
    Values: Array of objects

networking.clusterNetwork.cidr
    Description: Required if you use networking.clusterNetwork. The IP block address pool.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
    Description: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
    Values: Integer

networking.machineNetwork
    Description: The IP address pools for machines.
    Values: Array of objects

networking.machineNetwork.cidr
    Description: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
    Values: IP network, in CIDR notation as described for networking.clusterNetwork.cidr.

networking.networkType
    Description: The type of network to install. The default is OpenShiftSDN.
    Values: String

networking.serviceNetwork
    Description: The IP address pools for services. The default is 172.30.0.0/16.
    Values: Array of IP networks, in CIDR notation as described for networking.clusterNetwork.cidr.

publish
    Description: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.
    Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
    Description: The SSH key or keys to authenticate access to your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
    Values: One or more keys. For example:

    sshKey:
      <key1>
      <key2>
      <key3>

Table 1.4. Additional Red Hat OpenStack Platform (RHOSP) parameters

compute.platform.openstack.rootVolume.size
    Description: For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
    Values: Integer, for example 30.

compute.platform.openstack.rootVolume.type
    Description: For compute machines, the root volume's type.
    Values: String, for example performance.

controlPlane.platform.openstack.rootVolume.size
    Description: For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
    Values: Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type
    Description: For control plane machines, the root volume's type.
    Values: String, for example performance.

platform.openstack.cloud
    Description: The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.
    Values: String, for example MyCloud.

platform.openstack.externalNetwork
    Description: The RHOSP external network name to be used for installation.
    Values: String, for example external.

platform.openstack.computeFlavor
    Description: The RHOSP flavor to use for control plane and compute machines.
    Values: String, for example m1.xlarge.

Table 1.5. Optional RHOSP parameters

compute.platform.openstack.additionalNetworkIDs
    Description: Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.
    Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs
    Description: Additional security groups that are associated with compute machines.
    Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones
    Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
    Values: A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.additionalNetworkIDs
    Description: Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.
    Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs
    Description: Additional security groups that are associated with control plane machines.
    Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones
    Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
    Values: A list of strings. For example, ["zone-1", "zone-2"].

platform.openstack.clusterOSImage
    Description: The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.
    Values: An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.defaultMachinePlatform
    Description: The default machine pool platform configuration.
    Values: For example:

    {
       "type": "ml.large",
       "rootVolume": {
          "size": 30,
          "type": "performance"
       }
    }

platform.openstack.ingressFloatingIP
    Description: An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.
    Values: An IP address, for example 128.0.0.1.

platform.openstack.lbFloatingIP
    Description: An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.
    Values: An IP address, for example 128.0.0.1.

platform.openstack.externalDNS
    Description: IP addresses for external DNS servers that cluster instances use for DNS resolution.
    Values: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet
    Description: The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.
    Values: A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
1.1.9.1. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice.
The subnet’s GUID is passed as the value of platform.openstack.machinesSubnet in the install-
config.yaml file.

This subnet is used as the cluster’s primary subnet; nodes and ports are created on it.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that:

The target network and subnet are available.

DHCP is enabled on the target subnet.

You can provide installer credentials that have permission to create ports on the target
network.

If your network configuration requires a router, it is created in RHOSP. Some configurations rely
on routers for floating IP address translation.

Your network configuration does not rely on a provider network. Provider networks are not
supported.

NOTE

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s
CIDR block. To override these default values, set values for platform.openstack.apiVIP
and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
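
A minimal sketch of the relevant install-config.yaml fields for a custom subnet follows. The UUID and
addresses are placeholders; choose apiVIP and ingressVIP values that are inside the subnet's CIDR block
but outside its DHCP allocation pool:

networking:
  machineNetwork:
  - cidr: 192.0.2.0/24
platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    apiVIP: 192.0.2.5
    ingressVIP: 192.0.2.7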

1.1.9.2. Sample customized install-config.yaml file for RHOSP

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform
(RHOSP) customization options.

IMPORTANT

This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.

apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

1.1.10. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
-f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
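
You can confirm that the key is loaded before you start the installation program:

$ ssh-add -l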

1.1.11. Enabling access to the environment


At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack
Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP
deployments.

You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.

1.1.11.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and
cluster applications.

Procedure

1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>"
<external_network>

2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>"
<external_network>

3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>

NOTE

If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.
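
A sketch of such /etc/hosts entries follows. The addresses and names are placeholders, and because
/etc/hosts does not support wildcards, you list the specific application routes, such as the console
and OAuth routes, that you need to reach:

<API_FIP>   api.<cluster_name>.<base_domain>
<apps_FIP>  console-openshift-console.apps.<cluster_name>.<base_domain>
<apps_FIP>  oauth-openshift.apps.<cluster_name>.<base_domain>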

4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

platform.openstack.ingressFloatingIP

platform.openstack.lbFloatingIP

If you use these values, you must also enter an external network as the value of the
platform.openstack.externalNetwork parameter in the install-config.yaml file.
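
A minimal sketch of those values in the install-config.yaml file follows. The addresses and the network
name are placeholders:

platform:
  openstack:
    externalNetwork: external
    lbFloatingIP: 203.0.113.10
    ingressFloatingIP: 203.0.113.19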

TIP

You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.

1.1.11.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses.

In the install-config.yaml file, do not define the following parameters:

platform.openstack.ingressFloatingIP

platform.openstack.lbFloatingIP

If you cannot provide an external network, you can also leave platform.openstack.externalNetwork
blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for
you, and, without additional action, the installer will fail to retrieve an image from Glance. You must
configure external connectivity on your own.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP
addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use
a proxy network or run the installer from a system that is on the same network as your machines.

NOTE

You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:

api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.

1.1.12. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster
deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
--log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-
config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to
<installation_directory>/.openshift_install.log when an installation succeeds.

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.

1.1.13. Verifying cluster status


You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

1. In the cluster environment, export the administrator’s kubeconfig file:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

The kubeconfig file contains information about the cluster that is used by the CLI to connect a
client to the correct cluster and API server.

2. View the control plane and compute machines created after a deployment:

$ oc get nodes

3. View your cluster’s version:

$ oc get clusterversion

4. View your Operators' status:

$ oc get clusteroperator

5. View all running pods in the cluster:

$ oc get pods -A

1.1.14. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.1.15. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting .

If you need to enable external access to node ports, configure ingress cluster traffic by using a
node port.

If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .

1.2. INSTALLING A CLUSTER ON OPENSTACK WITH KURYR


In OpenShift Container Platform version 4.6, you can install a customized cluster on Red Hat OpenStack
Platform (RHOSP) that uses Kuryr SDN. To customize the installation, modify parameters in the install-
config.yaml file before you install the cluster.

1.2.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version in the
Available platforms section. You can also compare platform support across different
versions by viewing the OpenShift Container Platform on RHOSP support matrix.

Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.

Have a storage service installed in RHOSP, like block storage (Cinder) or object storage (Swift).
Object storage is the recommended storage technology for OpenShift Container Platform
registry cluster deployment. For more information, see Optimizing storage .

1.2.2. About Kuryr SDN


Kuryr is a container network interface (CNI) plug-in solution that uses the Neutron and Octavia Red Hat
OpenStack Platform (RHOSP) services to provide networking for pods and Services.

Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container
Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging
OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between
pods and RHOSP virtual instances.

Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr
namespace:

kuryr-controller - a single service instance installed on a master node. This is modeled in
OpenShift Container Platform as a Deployment object.

kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift
Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet
object.

The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and
namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to
corresponding objects in Neutron and Octavia. This means that every network solution that implements
the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This
includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as
Neutron-compatible commercial SDNs.

Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant
networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform
SDN over an RHOSP network.

If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double
encapsulation. The performance benefit is negligible. Depending on your configuration, though, using
Kuryr to avoid having two overlays might still be beneficial.

Kuryr is not recommended in deployments where all of the following criteria are true:

The RHOSP version is less than 16.

The deployment uses UDP services, or a large number of TCP services on few hypervisors.

Or, alternatively, where both of the following criteria are true:

The ovn-octavia Octavia driver is disabled.

The deployment uses a large number of TCP services on few hypervisors.

1.2.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr


When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from
the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional
requirements on top of what a default install requires.

Use the following quota to satisfy a default cluster’s minimum requirements:

Table 1.6. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
with Kuryr

Resource Value

Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type

Ports 1500 - 1 needed per Pod

Routers 1

Subnets 250 - 1 needed per Namespace/Project

Networks 250 - 1 needed per Namespace/Project

RAM 112 GB

vCPUs 28

Volume storage 275 GB

Instances 7

Security groups 250 - 1 needed per Service and per NetworkPolicy

Security group rules 1000

Load balancers 100 - 1 needed per Service

Load balancer listeners 500 - 1 needed per Service-exposed port

Load balancer pools 500 - 1 needed per Service-exposed port

A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.

IMPORTANT

If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.


IMPORTANT

If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora
driver rather than the OVN Octavia driver, security groups are associated with service
accounts instead of user projects.

Take the following notes into consideration when setting resources:

The number of ports that are required is larger than the number of pods. Kuryr uses ports pools
to have pre-created ports ready to be used by pods and speed up the pods' booting time.

Each network policy is mapped into an RHOSP security group, and depending on the
NetworkPolicy spec, one or more rules are added to the security group.

Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating
the number of security groups required for the quota.
If you are using RHOSP version 15 or earlier, or the ovn-octavia driver, each load balancer has a
security group with the user project.

The quota does not account for load balancer resources (such as VM resources), but you must
consider these resources when you decide the RHOSP deployment’s size. The default
installation will have more than 50 load balancers; the clusters must be able to accommodate
them.
If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer
VM is generated; services are load balanced through OVN flows.

An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.

To enable Kuryr SDN, your environment must meet the following requirements:

Run RHOSP 13+.

Have Overcloud with Octavia.

Use Neutron Trunk ports extension.

Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid.

1.2.3.1. Increasing quota

When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP)
resources used by pods, services, namespaces, and network policies.

Procedure

Increase the quotas for a project by running the following command:

$ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>
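
To confirm that the new values are in effect, you can review the project's quota afterward. This is a
minimal verification sketch; the exact field names in the output vary by RHOSP release:

$ openstack quota show <project> | egrep 'secgroup|port|subnet|network'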

1.2.3.2. Configuring Neutron

Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack
Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work.


In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to
openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr
can properly handle network policies.
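
The following is a minimal sketch of a director (TripleO) environment file that sets the firewall driver;
the file name and path are illustrative, and the NeutronOVSFirewallDriver parameter applies only to
ML2/OVS deployments:

$ cat > /home/stack/templates/neutron-ovs-firewall.yaml <<EOF
# Use the openvswitch firewall driver so that security groups are
# enforced on trunk subports, which Kuryr needs for network policies.
parameter_defaults:
  NeutronOVSFirewallDriver: openvswitch
EOF

If you use a file like this, pass it to the overcloud deploy command with the -e option, alongside your
other environment files.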

1.2.3.3. Configuring Octavia

Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift
Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use
Kuryr SDN.

To enable Octavia, you must include the Octavia service during the installation of the RHOSP
Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for
enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update.

NOTE

The following steps only capture the key pieces required during the deployment of
RHOSP when dealing with Octavia. It is also important to note that registry methods vary.

This example uses the local registry method.

Procedure

1. If you are using the local registry, create a template to upload the images to the registry. For
example:

(undercloud) $ openstack overcloud container image prepare \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
--namespace=registry.access.redhat.com/rhosp13 \
--push-destination=<local-ip-from-undercloud.conf>:8787 \
--prefix=openstack- \
--tag-from-label {version}-{release} \
--output-env-file=/home/stack/templates/overcloud_images.yaml \
--output-images-file /home/stack/local_registry_images.yaml

2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:

...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
push_destination: <local-ip-from-undercloud.conf>:8787

NOTE

The Octavia container versions vary depending upon the specific RHOSP release
installed.

3. Pull the container images from registry.redhat.io to the Undercloud node:


(undercloud) $ sudo openstack overcloud container image upload \
--config-file /home/stack/local_registry_images.yaml \
--verbose

This may take some time depending on the speed of your network and Undercloud disk.

4. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you
must increase its listeners' default timeouts for the connections. The default timeout is 50
seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud
deploy command:

(undercloud) $ cat octavia_timeouts.yaml
parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000

NOTE

This is not needed for RHOSP 14+.

5. Install or update your Overcloud environment with Octavia:

$ openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
-e octavia_timeouts.yaml

NOTE

This command only includes the files associated with Octavia; it varies based on
your specific installation of RHOSP. See the RHOSP documentation for further
information. For more information on customizing your Octavia installation, see
installation of Octavia using Director.

NOTE

When leveraging Kuryr SDN, the Overcloud installation requires the Neutron
trunk extension. This is available by default on director deployments. Use the
openvswitch firewall instead of the default ovs-hybrid when the Neutron
backend is ML2/OVS. There is no need for modifications if the backend is
ML2/OVN.

6. In RHOSP versions 13 and 15, add the project ID to the octavia.conf configuration file after you
create the project.

To enforce network policies across services, like when traffic goes through the Octavia load
balancer, you must ensure Octavia creates the Amphora VM security groups on the user
project.
This change ensures that required load balancer security groups belong to that project, and
that they can be updated to enforce services isolation.

NOTE

This task is unnecessary in RHOSP version 16 or later.

Octavia implements a new ACL API that restricts access to the load balancers' VIPs.

a. Get the project ID

$ openstack project show <project>

Example output

+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | |
| domain_id | default |
| enabled | True |
| id | PROJECT_ID |
| is_domain | False |
| name | *<project>* |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+

b. Add the project ID to octavia.conf for the controllers.

i. Source the stackrc file:

$ source stackrc # Undercloud credentials

ii. List the Overcloud controllers:

$ openstack server list

Example output

+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| ID                                   | Name         | Status | Networks              | Image          | Flavor     |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
| dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+

iii. SSH into the controller(s).

$ ssh heat-admin@192.168.24.8

iv. Edit the octavia.conf file to add the project into the list of projects where Amphora
security groups are on the user’s account.

# List of project IDs that are allowed to have Load balancer security groups
# belonging to them.
amp_secgroup_allowed_projects = PROJECT_ID

c. Restart the Octavia worker so the new configuration loads.

controller-0$ sudo docker restart octavia_worker
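
You can optionally confirm that the running configuration contains the project ID. This is a minimal
sketch that assumes the containerized Octavia configuration is kept under
/var/lib/config-data/puppet-generated/, which is typical for director-based deployments; adjust the
path to match your environment:

controller-0$ sudo grep amp_secgroup_allowed_projects /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf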

NOTE

Depending on your RHOSP environment, Octavia might not support UDP listeners. If you
use Kuryr SDN on RHOSP version 15 or earlier, UDP services are not supported. RHOSP
version 16 and later support UDP.

1.2.3.3.1. The Octavia OVN Driver

Octavia supports multiple provider drivers through the Octavia API.

To see all available Octavia provider drivers, on a command line, enter:

$ openstack loadbalancer provider list

Example output

+---------+-------------------------------------------------+
| name | description |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver. |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn | Octavia OVN driver. |
+---------+-------------------------------------------------+
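
To check which provider driver your existing load balancers use, for example to confirm whether they
are backed by Amphora VMs or by OVN, you can list the provider column. This is a small sketch; column
selection with -c depends on your OpenStack client version:

$ openstack loadbalancer list -c name -c provider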

Beginning with RHOSP version 16, the Octavia OVN provider driver (ovn) is supported on OpenShift
Container Platform on RHOSP deployments.

ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load
balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by
Director on deployments that use OVN Neutron ML2.

The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it.

If Kuryr uses ovn instead of Amphora, it offers the following benefits:


Decreased resource requirements. Kuryr does not require a load balancer VM for each service.

Reduced network latency.

Increased service creation speed by using OpenFlow rules instead of a VM for each service.

Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.

You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from
version 13 to version 16.

1.2.3.4. Known limitations of installing with Kuryr

Using OpenShift Container Platform with Kuryr SDN has several known limitations.

RHOSP general limitations


OpenShift Container Platform with Kuryr SDN does not support Service objects with type NodePort.

If the machines subnet is not connected to a router, or if the subnet is connected, but the router has no
external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.

RHOSP version limitations


Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP
version.

RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver
requires that one Amphora load balancer VM is deployed per OpenShift Container Platform
service. Creating too many services can cause you to run out of resources.
Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use
the Amphora driver. They are subject to the same resource concerns as earlier versions of
RHOSP.

Octavia RHOSP versions before 16 do not support UDP listeners. Therefore, OpenShift
Container Platform UDP services are not supported.

Octavia RHOSP versions before 16 cannot listen to multiple protocols on the same port.
Services that expose the same port to different protocols, like TCP and UDP, are not
supported.

RHOSP environment limitations


There are limitations when using Kuryr SDN that depend on your deployment environment.

Because of Octavia’s lack of support for the UDP protocol and multiple listeners, if the RHOSP version
is earlier than 16, Kuryr forces pods to use TCP for DNS resolution.

In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only.
In this case, the native Go resolver does not recognize the use-vc option in resolv.conf, which controls
whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails.

To ensure that TCP forcing is allowed, compile applications either with the environment variable
CGO_ENABLED set to 1, i.e. CGO_ENABLED=1, or ensure that the variable is absent.
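
For example, a minimal sketch of building a Go application with CGO enabled so that the glibc
resolver, which honors the use-vc option, is used instead of the native Go resolver; the module path
is illustrative:

$ CGO_ENABLED=1 go build -o myapp ./cmd/myapp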

In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails.

NOTE

musl-based containers, including Alpine-based containers, do not support the use-vc option.

RHOSP upgrade limitations


As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the
Amphora images that are used for load balancers might be required.

You can address API changes on an individual basis.

If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two
ways:

Upgrade each VM by triggering a load balancer failover.

Leave responsibility for upgrading the VMs to users.

If the operator takes the first option, there might be short downtimes during failovers.

If the operator takes the second option, the existing load balancers will not support upgraded Octavia
API features, like UDP listeners. In this case, users must recreate their Services to use these features.

IMPORTANT

If OpenShift Container Platform detects a new Octavia version that supports UDP load
balancing, it recreates the DNS service automatically. The service recreation ensures that
the service default supports UDP load balancing.

The recreation causes approximately one minute of downtime for the DNS service.

1.2.3.5. Control plane and compute machines

By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.

Each machine requires:

An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

TIP

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.
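
You can check whether an existing flavor meets these minimums, or create a suitable one, with the
RHOSP CLI. This is a minimal sketch; the flavor names are examples, and creating flavors usually
requires administrator privileges:

$ openstack flavor show m1.xlarge -c ram -c vcpus -c disk
$ openstack flavor create --ram 16384 --vcpus 4 --disk 25 ocp-node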

1.2.3.6. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:


An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

1.2.4. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM) .

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.2.5. Enabling Swift on RHOSP


Swift is operated by a user account with the swiftoperator role. Add the role to an account before you
run the installation program.

IMPORTANT

If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as
Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it
is unavailable, the installation program relies on the RHOSP block storage service,
commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present,
or if you do not want to use it, skip this section.

Prerequisites

You have a RHOSP administrator account on the target environment.


The Swift service is installed.

On Ceph RGW , the account in url option is enabled.

Procedure
To enable Swift on RHOSP:

1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will
access Swift:

$ openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry.
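
You can optionally confirm the role assignment. This is a minimal verification sketch:

$ openstack role assignment list --user <user> --project <project> --names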

1.2.6. Verifying external network access


The OpenShift Container Platform installation process requires external network access. You must
provide an external network value to it, or deployment fails. Before you begin the process, verify that a
network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Prerequisites

Configure OpenStack’s networking service to have DHCP agents forward instances' DNS
queries

Procedure

1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

$ openstack network list --long -c ID -c Name -c "Router Type"

Example output

+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating
a default floating IP network and Creating a default provider network .

IMPORTANT

If the external network’s CIDR range overlaps one of the default network ranges, you
must change the matching network ranges in the install-config.yaml file before you start
the installation process.

The default network ranges are:

Network Range

machineNetwork 10.0.0.0/16

serviceNetwork 172.30.0.0/16

clusterNetwork 10.128.0.0/14


WARNING

If the installation program finds multiple networks with the same name, it sets one
of them at random. To avoid this behavior, create unique names for resources in
RHOSP.

NOTE

If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .

1.2.7. Defining parameters for the installation program


The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The
file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project
name, log in information, and authorization service URLs.

Procedure

1. Create the clouds.yaml file:

If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

IMPORTANT

Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.

If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.


clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'

2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:

a. Copy the certificate authority file to your machine.

b. Add the machine to the certificate authority trust bundle:

$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/

c. Update the trust bundle:

$ sudo update-ca-trust extract

d. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-
accessible path to the CA certificate:

clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"

TIP

After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a
command line, run:

$ oc edit configmap -n openshift-config cloud-provider-config

3. Place the clouds.yaml file in one of the following locations:

a. The value of the OS_CLIENT_CONFIG_FILE environment variable

b. The current directory

c. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml


d. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml


The installation program searches for clouds.yaml in that order.
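
For example, a minimal sketch of pointing the tools at a non-default location and verifying that the
credentials work; the file path is illustrative and the cloud name matches the sample clouds.yaml
shown above:

$ export OS_CLIENT_CONFIG_FILE=/home/user/openstack/clouds.yaml
$ openstack --os-cloud shiftstack token issue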

1.2.8. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.2.9. Creating the installation configuration file


You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack
Platform (RHOSP).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an
installation directory. If you want to reuse individual files from another cluster
installation, you can copy them into your directory. However, the file names
for the installation assets might change between releases. Use caution when
copying installation files from an earlier OpenShift Container Platform
version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

ii. Select openstack as the platform to target.

iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.

iv. Specify the floating IP address to use for external access to the OpenShift API.

v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.

vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.

vii. Enter a name for your cluster. The name must be 14 or fewer characters long.

viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
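
For example, a simple copy keeps a reusable version of the file; the backup file name is illustrative:

$ cp install-config.yaml install-config.yaml.backup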

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.

1.2.10. Installation configuration parameters


Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.7. Required parameters

Parameter Description Values

apiVersion The API version for the String


install-config.yaml
content. The current version is
v1. The installer may also
support older API versions.

baseDomain The base domain of your A fully-qualified domain or subdomain name, such as
cloud provider. The base example.com .
domain is used to create
routes to your OpenShift
Container Platform cluster
components. The full DNS
name for your cluster is a
combination of the
baseDomain and
metadata.name parameter
values that uses the
<metadata.name>.
<baseDomain> format.

metadata Kubernetes resource Object


ObjectMeta, from which only
the name parameter is
consumed.


metadata.name The name of the cluster. DNS String of lowercase letters, hyphens (- ), and periods
records for the cluster are all (.), such as dev. The string must be 14 characters or
subdomains of fewer long.
{{.metadata.name}}.
{{.baseDomain}}.

platform The configuration for the Object


specific platform upon which
to perform the installation:
aws, baremetal, azure ,
openstack, ovirt, vsphere.
For additional information
about platform.<platform>
parameters, consult the
following table for your
specific platform.

pullSecret - Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. Values: A pull secret in JSON format, for example:

{
   "auths":{
      "cloud.openshift.com":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      },
      "quay.io":{
         "auth":"b3Blb=",
         "email":"you@example.com"
      }
   }
}

Table 1.8. Optional parameters

Parameter Description Values

additionalTrustBund A PEM-encoded X.509 certificate String


le bundle that is added to the nodes'
trusted certificate store. This trust
bundle may also be used when a proxy
has been configured.

compute The configuration for the machines Array of machine-pool objects. For
that comprise the compute nodes. details, see the following "Machine-
pool" table.


compute.architectur Determines the instruction set String


e architecture of the machines in the
pool. Currently, heterogeneous
clusters are not supported, so all pools
must specify the same architecture.
Valid values are amd64 (the default).

compute.hyperthrea Whether to enable or disable Enabled or Disabled


ding simultaneous multithreading, or
hyperthreading, on compute
machines. By default, simultaneous
multithreading is enabled to increase
the performance of your machines'
cores.

IMPORTANT

If you disable
simultaneous
multithreading, ensure
that your capacity
planning accounts for
the dramatically
decreased machine
performance.

compute.name Required if you use compute. The worker


name of the machine pool.

compute.platform Required if you use compute. Use this aws, azure , gcp , openstack, ovirt,
parameter to specify the cloud vsphere, or {}
provider to host the worker machines.
This parameter value must match the
controlPlane.platform parameter
value.

compute.replicas The number of compute machines, A positive integer greater than or equal
which are also known as worker to 2. The default value is 3.
machines, to provision.

controlPlane The configuration for the machines Array of MachinePool objects. For
that comprise the control plane. details, see the following "Machine-
pool" table.


controlPlane.archite Determines the instruction set String


cture architecture of the machines in the
pool. Currently, heterogeneous
clusters are not supported, so all pools
must specify the same architecture.
Valid values are amd64 (the default).

controlPlane.hypert Whether to enable or disable Enabled or Disabled


hreading simultaneous multithreading, or
hyperthreading, on control plane
machines. By default, simultaneous
multithreading is enabled to increase
the performance of your machines'
cores.

IMPORTANT

If you disable
simultaneous
multithreading, ensure
that your capacity
planning accounts for
the dramatically
decreased machine
performance.

controlPlane.name Required if you use controlPlane . master


The name of the machine pool.

controlPlane.platfor Required if you use controlPlane . aws, azure , gcp , openstack, ovirt,
m Use this parameter to specify the cloud vsphere, or {}
provider that hosts the control plane
machines. This parameter value must
match the compute.platform
parameter value.

controlPlane.replica The number of control plane machines The only supported value is 3, which is
s to provision. the default value.


credentialsMode The Cloud Credential Operator (CCO) Mint , Passthrough, Manual, or an


mode. If no mode is specified, the empty string ( "").
CCO dynamically tries to determine
the capabilities of the provided
credentials, with a preference for mint
mode on the platforms where multiple
modes are supported.

NOTE

Not all CCO modes


are supported for all
cloud providers. For
more information on
CCO modes, see the
Cloud Credential
Operator entry in the
Red Hat Operators
reference content.

fips Enable or disable FIPS mode. The false or true


default is false (disabled). If FIPS
mode is enabled, the Red Hat
Enterprise Linux CoreOS (RHCOS)
machines that OpenShift Container
Platform runs on bypass the default
Kubernetes cryptography suite and use
the cryptography modules that are
provided with RHCOS instead.

imageContentSourc Sources and repositories for the Array of objects. Includes a source
es release-image content. and, optionally, mirrors, as described
in the following rows of this table.

imageContentSourc Required if you use String


es.source imageContentSources . Specify the
repository that users refer to, for
example, in image pull specifications.

imageContentSourc Specify one or more repositories that Array of strings


es.mirrors may also contain the same images.

networking The configuration for the pod network Object


provider in the cluster.

networking.clusterN The IP address pools for pods. The Array of objects


etwork default is 10.128.0.0/14 with a host
prefix of /23.


networking.clusterN Required if you use IP network. IP networks are


etwork.cidr networking.clusterNetwork. The IP represented as strings using Classless
block address pool. Inter-Domain Routing (CIDR) notation
with a traditional IP address or network
number, followed by the forward slash
(/) character, followed by a decimal
value between 0 and 32 that describes
the number of significant bits. For
example, 10.0.0.0/16 represents IP
addresses 10.0.0.0 through
10.0.255.255.

networking.clusterN Required if you use Integer


etwork.hostPrefix networking.clusterNetwork. The
prefix size to allocate to each node
from the CIDR. For example, 24 would
allocate 2^8=256 addresses to each
node.

networking.machine The IP address pools for machines. Array of objects


Network

networking.machine Required if you use IP network. IP networks are


Network.cidr networking.machineNetwork . The represented as strings using Classless
IP block address pool. The default is Inter-Domain Routing (CIDR) notation
10.0.0.0/16 for all platforms other with a traditional IP address or network
than libvirt. For libvirt, the default is number, followed by the forward slash
192.168.126.0/24 . (/) character, followed by a decimal
value between 0 and 32 that describes
the number of significant bits. For
example, 10.0.0.0/16 represents IP
addresses 10.0.0.0 through
10.0.255.255.

networking.network The type of network to install. The String


Type default is OpenShiftSDN .

networking.serviceN The IP address pools for services. The Array of IP networks. IP networks are
etwork default is 172.30.0.0/16. represented as strings using Classless
Inter-Domain Routing (CIDR) notation
with a traditional IP address or network
number, followed by the forward slash
(/) character, followed by a decimal
value between 0 and 32 that describes
the number of significant bits. For
example, 10.0.0.0/16 represents IP
addresses 10.0.0.0 through
10.0.255.255.


publish How to publish or expose the user- Internal or External. To deploy a


facing endpoints of your cluster, such private cluster, which cannot be
as the Kubernetes API, OpenShift accessed from the internet, set
routes. publish to Internal . The default
value is External.

sshKey - The SSH key or keys to authenticate access to your cluster machines. Values: One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>

NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

Table 1.9. Additional Red Hat OpenStack Platform (RHOSP) parameters

compute.platform.openstack.rootVolume.size - For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Values: Integer, for example 30.

compute.platform.openstack.rootVolume.type - For compute machines, the root volume's type. Values: String, for example performance.

controlPlane.platform.openstack.rootVolume.size - For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Values: Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type - For control plane machines, the root volume's type. Values: String, for example performance.

platform.openstack.cloud - The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. Values: String, for example MyCloud.

platform.openstack.externalNetwork - The RHOSP external network name to be used for installation. Values: String, for example external.

platform.openstack.computeFlavor - The RHOSP flavor to use for control plane and compute machines. Values: String, for example m1.xlarge.

Table 1.10. Optional RHOSP parameters

compute.platform.openstack.additionalNetworkIDs - Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs - Additional security groups that are associated with compute machines. Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones - RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones; load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. Values: A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.additionalNetworkIDs - Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs - Additional security groups that are associated with control plane machines. Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones - RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones; load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. Values: A list of strings. For example, ["zone-1", "zone-2"].

platform.openstack.clusterOSImage - The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. Values: An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.defaultMachinePlatform - The default machine pool platform configuration. Values: An object, for example:

{
   "type": "ml.large",
   "rootVolume": {
      "size": 30,
      "type": "performance"
   }
}

platform.openstack.ingressFloatingIP - An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. Values: An IP address, for example 128.0.0.1.

platform.openstack.lbFloatingIP - An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. Values: An IP address, for example 128.0.0.1.

platform.openstack.externalDNS - IP addresses for external DNS servers that cluster instances use for DNS resolution. Values: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet - The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP. Values: A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

1.2.10.1. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice.
The subnet’s GUID is passed as the value of platform.openstack.machinesSubnet in the install-
config.yaml file.

This subnet is used as the cluster’s primary subnet; nodes and ports are created on it.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that:

The target network and subnet are available.

DHCP is enabled on the target subnet.

You can provide installer credentials that have permission to create ports on the target
network.

If your network configuration requires a router, it is created in RHOSP. Some configurations rely
on routers for floating IP address translation.

Your network configuration does not rely on a provider network. Provider networks are not
supported.

NOTE

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s
CIDR block. To override these default values, set values for platform.openstack.apiVIP
and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.

1.2.10.2. Sample customized install-config.yaml file for RHOSP with Kuryr

To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-
config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default
OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all
of the possible Red Hat OpenStack Platform (RHOSP) customization options.

IMPORTANT

This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.

apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
    trunkSupport: true
    octaviaSupport: true
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

NOTE

Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no
need to set them. But if your environment does not meet both requirements, Kuryr SDN will not
properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required
to create the OpenShift Container Platform services.
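
If you want to check both prerequisites manually before you start the installation, you can query
RHOSP for the trunk extension and the available Octavia providers. This is a minimal sketch:

$ openstack extension list --network -c Name -c Alias | grep -i trunk
$ openstack loadbalancer provider list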

1.2.11. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.


2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.

1.2.12. Enabling access to the environment


At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack
Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP
deployments.

You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.

1.2.12.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and
cluster applications.

Procedure

1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>

2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
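
Before you create the DNS records, you can list the floating IP addresses to confirm which address
was allocated for the API and which for Ingress traffic. This is a hedged sketch; the available columns
depend on your OpenStack client version:

$ openstack floating ip list --long -c "Floating IP Address" -c Description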

3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:


api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>

NOTE

If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.
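
For example, a minimal /etc/hosts sketch that uses the document's placeholders; because /etc/hosts
does not support wildcard entries, add one line for each application route that you need, such as the
console and OAuth routes:

$ sudo tee -a /etc/hosts <<EOF
<API_FIP> api.<cluster_name>.<base_domain>
<apps_FIP> console-openshift-console.apps.<cluster_name>.<base_domain>
<apps_FIP> oauth-openshift.apps.<cluster_name>.<base_domain>
EOF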

4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

platform.openstack.ingressFloatingIP

platform.openstack.lbFloatingIP

If you use these values, you must also enter an external network as the value of the
platform.openstack.externalNetwork parameter in the install-config.yaml file.

TIP

You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.

1.2.12.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses.

In the install-config.yaml file, do not define the following parameters:

platform.openstack.ingressFloatingIP

platform.openstack.lbFloatingIP

If you cannot provide an external network, you can also leave platform.openstack.externalNetwork
blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for
you, and, without additional action, the installer will fail to retrieve an image from Glance. You must
configure external connectivity on your own.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP
addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use
a proxy network or run the installer from a system that is on the same network as your machines.

NOTE

You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:

api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.


1.2.13. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Change to the directory that contains the installation program and initialize the cluster
deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to
<installation_directory>/.openshift_install.log when an installation succeeds.


IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.

1.2.14. Verifying cluster status


You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

1. In the cluster environment, export the administrator’s kubeconfig file:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

The kubeconfig file contains information about the cluster that is used by the CLI to connect a
client to the correct cluster and API server.

2. View the control plane and compute machines created after a deployment:

$ oc get nodes

3. View your cluster’s version:

$ oc get clusterversion

4. View your Operators' status:

$ oc get clusteroperator

5. View all running pods in the cluster:

$ oc get pods -A

1.2.15. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.2.16. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting .

If you need to enable external access to node ports, configure ingress cluster traffic by using a
node port.

If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .

1.3. INSTALLING A CLUSTER ON OPENSTACK ON YOUR OWN INFRASTRUCTURE

In OpenShift Container Platform version 4.6, you can install a cluster on Red Hat OpenStack Platform
(RHOSP) that runs on user-provisioned infrastructure.

Using your own infrastructure allows you to integrate your cluster with existing infrastructure and
modifications. The process requires more labor on your part than installer-provisioned installations,
because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups.
However, Red Hat provides Ansible playbooks to help you in the deployment process.

1.3.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.


Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version in the
Available platforms section. You can also compare platform support across different
versions by viewing the OpenShift Container Platform on RHOSP support matrix .

Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.

Have an RHOSP account where you want to install OpenShift Container Platform.

On the machine from which you run the installation program, have:

A single directory in which you can keep the files you create during the installation process

Python 3

1.3.2. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM) .

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.3.3. Resource guidelines for installing OpenShift Container Platform on RHOSP


To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP)
quota must meet the following requirements:

Table 1.11. Recommended resources for a default OpenShift Container Platform cluster on RHOSP


Resource Value

Floating IP addresses 3

Ports 15

Routers 1

Subnets 1

RAM 112 GB

vCPUs 28

Volume storage 275 GB

Instances 7

Security groups 3

Security group rules 60

A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.

IMPORTANT

If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.

NOTE

By default, your security group and security group rule quotas might be low. If you
encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60
<project> as an administrator to increase them.

An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.

1.3.3.1. Control plane and compute machines

By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.

Each machine requires:

An instance from the RHOSP quota


A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

TIP

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.

1.3.3.2. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

1.3.4. Downloading playbook dependencies


The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require
several Python modules. On the machine where you will run the installer, add the modules' repositories
and then download them.

NOTE

These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.

Prerequisites

Python 3 is installed on your machine.

Procedure

1. On a command line, add the repositories:

a. Register with Red Hat Subscription Manager:

$ sudo subscription-manager register # If not done already

b. Pull the latest subscription data:

$ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already

c. Disable the current repositories:

$ sudo subscription-manager repos --disable=* # If not done already

d. Add the required repositories:


$ sudo subscription-manager repos \
    --enable=rhel-8-for-x86_64-baseos-rpms \
    --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
    --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
    --enable=rhel-8-for-x86_64-appstream-rpms

2. Install the modules:

$ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr

3. Ensure that the python command points to python3:

$ sudo alternatives --set python /usr/bin/python3
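
To confirm that the alternatives change took effect, you can check which interpreter the python command now resolves to, for example:

$ python --version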

1.3.5. Downloading the installation playbooks


Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red
Hat OpenStack Platform (RHOSP) infrastructure.

Prerequisites

The curl command-line tool is available on your machine.

Procedure

To download the playbooks to your working directory, run the following script from a command
line:

$ xargs -n 1 curl -O <<< '
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/bootstrap.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/common.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/compute-nodes.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/control-plane.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/inventory.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/network.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/security-groups.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-bootstrap.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-compute-nodes.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-control-plane.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-load-balancers.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-network.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-security-groups.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-containers.yaml'

The playbooks are downloaded to your machine.

IMPORTANT

During the installation process, you can modify the playbooks to configure your
deployment.

Retain all playbooks for the life of your cluster. You must have the playbooks to remove
your OpenShift Container Platform cluster from RHOSP.

IMPORTANT

You must match any edits you make in the bootstrap.yaml, compute-nodes.yaml,
control-plane.yaml, network.yaml, and security-groups.yaml files to the
corresponding playbooks that are prefixed with down-. For example, edits to the
bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not
edit both files, the supported cluster removal process will fail.

1.3.6. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.


4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz

5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.3.7. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \


-f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:


$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.

1.3.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image
The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux
CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the
latest RHCOS image, then upload it using the RHOSP CLI.

Prerequisites

The RHOSP CLI is installed.

Procedure

1. Log in to the Red Hat customer portal’s Product Downloads page .

2. Under Version, select the most recent release of OpenShift Container Platform 4.6 for Red Hat
Enterprise Linux (RHEL) 8.

IMPORTANT

The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available.

3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) .

4. Decompress the image.

NOTE

You must decompress the RHOSP image before the cluster can use it. The
name of the downloaded file might not contain a compression extension, like .gz
or .tgz. To find out if or how the file is compressed, in a command line, enter:

$ file <name_of_downloaded_file>
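
For example, if the file command reports gzip compression, you can decompress the image with gunzip before you upload it. The file name below is illustrative; use the name of the file that you downloaded:

$ gunzip rhcos-<version>-openstack.x86_64.qcow2.gz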


5. From the image that you downloaded, create an image that is named rhcos in your cluster by
using the RHOSP CLI:

$ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos

IMPORTANT

Depending on your RHOSP environment, you might be able to upload the image
in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.

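If your environment requires the .raw format, one possible approach is to convert the image locally before uploading it. This is a sketch only; it assumes that the qemu-img tool is installed on your machine and reuses the file name from the previous command:

$ qemu-img convert -f qcow2 -O raw rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos-${RHCOS_VERSION}-openstack.raw
$ openstack image create --container-format=bare --disk-format=raw --file rhcos-${RHCOS_VERSION}-openstack.raw rhcos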

WARNING

If the installation program finds multiple images with the same name, it
chooses one of them at random. To avoid this behavior, create unique
names for resources in RHOSP.

After you upload the image to RHOSP, it is usable in the installation process.

1.3.9. Verifying external network access


The OpenShift Container Platform installation process requires external network access. You must
provide an external network value to it, or deployment fails. Before you begin the process, verify that a
network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Prerequisites

Configure OpenStack’s networking service to have DHCP agents forward instances' DNS
queries

Procedure

1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

$ openstack network list --long -c ID -c Name -c "Router Type"

Example output

+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating
a default floating IP network and Creating a default provider network .


NOTE

If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .

1.3.10. Enabling access to the environment


At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack
Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP
deployments.

You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.

1.3.10.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster
applications, and the bootstrap process.

Procedure

1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>"


<external_network>

2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>"


<external_network>

3. By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:

$ openstack floating ip create --description "bootstrap machine" <external_network>

4. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>

NOTE

If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.

5. Add the FIPs to the inventory.yaml file as the values of the following variables:

os_api_fip

os_bootstrap_fip


os_ingress_fip

If you use these values, you must also enter an external network as the value of the
os_external_network variable in the inventory.yaml file.
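
For example, the added inventory.yaml entries might look like the following sketch, where the addresses and the network name are placeholders for your own values:

os_external_network: 'external'
os_api_fip: '203.0.113.23'
os_ingress_fip: '203.0.113.19'
os_bootstrap_fip: '203.0.113.20'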

TIP

You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.

1.3.10.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses.

In the inventory.yaml file, do not define the following variables:

os_api_fip

os_bootstrap_fip

os_ingress_fip

If you cannot provide an external network, you can also leave os_external_network blank. If you do not
provide a value for os_external_network, a router is not created for you, and, without additional action,
the installer will fail to retrieve an image from Glance. Later in the installation process, when you create
network resources, you must configure external connectivity on your own.

If you run the installer with the wait-for command from a system that cannot reach the cluster API due
to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in
these cases, you can use a proxy network or run the installer from a system that is on the same network
as your machines.

NOTE

You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:

api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.
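
Because /etc/hosts does not support wildcard entries, a sketch of such records lists each application host name individually. The console and OAuth host names below are only examples of routes that you might need:

<api_port_IP> api.<cluster_name>.<base_domain>
<ingress_port_IP> console-openshift-console.apps.<cluster_name>.<base_domain>
<ingress_port_IP> oauth-openshift.apps.<cluster_name>.<base_domain>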

1.3.11. Defining parameters for the installation program


The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The
file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project
name, login information, and authorization service URLs.

Procedure

1. Create the clouds.yaml file:

If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.


IMPORTANT

Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.

If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.

clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
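
If you prefer to keep the password out of clouds.yaml, the OpenStack client configuration also reads a secure.yaml file that is merged with clouds.yaml. The following is a minimal sketch of that split, reusing the shiftstack entry from the example above:

clouds.yaml, with the password omitted:

clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      user_domain_name: Default
      project_domain_name: Default

secure.yaml, stored with restrictive permissions:

clouds:
  shiftstack:
    auth:
      password: XXX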

2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:

a. Copy the certificate authority file to your machine.

b. Add the machine to the certificate authority trust bundle:

$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/

c. Update the trust bundle:

$ sudo update-ca-trust extract

d. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-
accessible path to the CA certificate:

clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"


TIP

After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a
command line, run:

$ oc edit configmap -n openshift-config cloud-provider-config

3. Place the clouds.yaml file in one of the following locations:

a. The value of the OS_CLIENT_CONFIG_FILE environment variable

b. The current directory

c. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml

d. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml


The installation program searches for clouds.yaml in that order.
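
For example, to point the installation program at a clouds.yaml file that is stored outside of those default locations, you can set the environment variable before you run it. The path is illustrative:

$ export OS_CLIENT_CONFIG_FILE=/home/<user>/openstack/clouds.yaml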

1.3.12. Creating the installation configuration file


You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack
Platform (RHOSP).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509 certificates,
have short expiration intervals, so you must not reuse an installation directory. If you
want to reuse individual files from another cluster installation, you can copy them into
your directory. However, the file names for the installation assets might change between
releases. Use caution when copying installation files from an earlier OpenShift Container
Platform version.

b. At the prompts, provide the configuration details for your cloud:


i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.

ii. Select openstack as the platform to target.

iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.

iv. Specify the floating IP address to use for external access to the OpenShift API.

v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.

vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.

vii. Enter a name for your cluster. The name must be 14 or fewer characters long.

viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you want to
reuse the file, you must back it up now.

You now have the file install-config.yaml in the directory that you specified.

1.3.13. Installation configuration parameters


Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.12. Required parameters

apiVersion
    Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
    Values: String

baseDomain
    Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
    Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    Values: Object

metadata.name
    Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform
    Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
    Values: Object

pullSecret
    Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
    Values: A JSON object, for example:
    {
      "auths":{
        "cloud.openshift.com":{
          "auth":"b3Blb=",
          "email":"[email protected]"
        },
        "quay.io":{
          "auth":"b3Blb=",
          "email":"[email protected]"
        }
      }
    }

Table 1.13. Optional parameters

additionalTrustBundle
    Description: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
    Values: String

compute
    Description: The configuration for the machines that comprise the compute nodes.
    Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

compute.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

compute.name
    Description: Required if you use compute. The name of the machine pool.
    Values: worker

compute.platform
    Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    Description: The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    Description: The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
    Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
    Values: String

controlPlane.hyperthreading
    Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled

controlPlane.name
    Description: Required if you use controlPlane. The name of the machine pool.
    Values: master

controlPlane.platform
    Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
    Description: The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.

credentialsMode
    Description: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
    Values: Mint, Passthrough, Manual, or an empty string ("").

fips
    Description: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
    Values: false or true

imageContentSources
    Description: Sources and repositories for the release-image content.
    Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
    Description: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
    Values: String

imageContentSources.mirrors
    Description: Specify one or more repositories that may also contain the same images.
    Values: Array of strings

networking
    Description: The configuration for the pod network provider in the cluster.
    Values: Object

networking.clusterNetwork
    Description: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
    Values: Array of objects

networking.clusterNetwork.cidr
    Description: Required if you use networking.clusterNetwork. The IP block address pool.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
    Description: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
    Values: Integer

networking.machineNetwork
    Description: The IP address pools for machines.
    Values: Array of objects

networking.machineNetwork.cidr
    Description: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
    Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType
    Description: The type of network to install. The default is OpenShiftSDN.
    Values: String

networking.serviceNetwork
    Description: The IP address pools for services. The default is 172.30.0.0/16.
    Values: Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

publish
    Description: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
    Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
    Description: The SSH key or keys to authenticate access to your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
    Values: One or more keys. For example:
    sshKey:
      <key1>
      <key2>
      <key3>

Table 1.14. Additional Red Hat OpenStack Platform (RHOSP) parameters

compute.platform.openstack.rootVolume.size
    Description: For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
    Values: Integer, for example 30.

compute.platform.openstack.rootVolume.type
    Description: For compute machines, the root volume's type.
    Values: String, for example performance.

controlPlane.platform.openstack.rootVolume.size
    Description: For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
    Values: Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type
    Description: For control plane machines, the root volume's type.
    Values: String, for example performance.

platform.openstack.cloud
    Description: The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.
    Values: String, for example MyCloud.

platform.openstack.externalNetwork
    Description: The RHOSP external network name to be used for installation.
    Values: String, for example external.

platform.openstack.computeFlavor
    Description: The RHOSP flavor to use for control plane and compute machines.
    Values: String, for example m1.xlarge.

Table 1.15. Optional RHOSP parameters

compute.platform.openstack.additionalNetworkIDs
    Description: Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.
    Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs
    Description: Additional security groups that are associated with compute machines.
    Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones
    Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
    Values: A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.additionalNetworkIDs
    Description: Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.
    Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs
    Description: Additional security groups that are associated with control plane machines.
    Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones
    Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
    Values: A list of strings. For example, ["zone-1", "zone-2"].

platform.openstack.clusterOSImage
    Description: The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.
    Values: An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.defaultMachinePlatform
    Description: The default machine pool platform configuration.
    Values: For example:
    {
      "type": "ml.large",
      "rootVolume": {
        "size": 30,
        "type": "performance"
      }
    }

platform.openstack.ingressFloatingIP
    Description: An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.
    Values: An IP address, for example 128.0.0.1.

platform.openstack.lbFloatingIP
    Description: An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.
    Values: An IP address, for example 128.0.0.1.

platform.openstack.externalDNS
    Description: IP addresses for external DNS servers that cluster instances use for DNS resolution.
    Values: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet
    Description: The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.
    Values: A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

1.3.13.1. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice.
The subnet’s UUID is passed as the value of platform.openstack.machinesSubnet in the install-
config.yaml file.

This subnet is used as the cluster’s primary subnet; nodes and ports are created on it.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that:

The target network and subnet are available.

DHCP is enabled on the target subnet.

You can provide installer credentials that have permission to create ports on the target
network.

If your network configuration requires a router, it is created in RHOSP. Some configurations rely
on routers for floating IP address translation.

Your network configuration does not rely on a provider network. Provider networks are not
supported.


NOTE

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s
CIDR block. To override these default values, set values for platform.openstack.apiVIP
and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
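
As a sketch, the related install-config.yaml entries might look like the following, where the subnet UUID is a placeholder and the VIP addresses are chosen outside of the DHCP allocation pool:

networking:
  machineNetwork:
  - cidr: 192.0.2.0/24
platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    apiVIP: 192.0.2.5
    ingressVIP: 192.0.2.7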

1.3.13.2. Sample customized install-config.yaml file for RHOSP

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform
(RHOSP) customization options.

IMPORTANT

This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.

apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

1.3.13.3. Setting a custom subnet for machines


The IP range that the installation program uses by default might not match the Neutron subnet that you
create when you install OpenShift Container Platform. If necessary, update the CIDR value for new
machines by editing the installation configuration file.

Prerequisites

You have the install-config.yaml file that was generated by the OpenShift Container Platform
installation program.

Procedure

1. On a command line, browse to the directory that contains install-config.yaml.

2. From that directory, either run a script to edit the install-config.yaml file or update the file
manually:

To set the value by using a script, run:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

1 Insert a value that matches your intended Neutron subnet, e.g. 192.0.2.0/24.

To set the value manually, open the file and set the value of networking.machineNetwork to
something that matches your intended Neutron subnet, as in the sketch that follows.
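
For example, a sketch of the edited networking stanza, using an illustrative CIDR value:

networking:
  machineNetwork:
  - cidr: 192.0.2.0/24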

1.3.13.4. Emptying compute machine pools

To proceed with an installation that uses your own infrastructure, set the number of compute machines
in the installation configuration file to zero. Later, you create these machines manually.

Prerequisites

You have the install-config.yaml file that was generated by the OpenShift Container Platform
installation program.

Procedure

1. On a command line, browse to the directory that contains install-config.yaml.

2. From that directory, either run a script to edit the install-config.yaml file or update the file
manually:

To set the value by using a script, run:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

To set the value manually, open the file and set the value of compute.<first
entry>.replicas to 0.
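
For example, a sketch of the resulting compute stanza, mirroring the sample configuration shown earlier with the replica count set to zero:

compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 0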

1.3.14. Creating the Kubernetes manifest and Ignition config files


Because you must modify some cluster definition files and manually start the cluster machines, you must
generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the
Ignition configuration files, which are later used to create the cluster.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that
expire after 24 hours, which are then renewed at that time. If the cluster is shut down
before renewing the certificates and the cluster is later restarted after the 24 hours have
elapsed, the cluster automatically recovers the expired certificates. The exception is that
you must manually approve the pending node-bootstrapper certificate signing requests
(CSRs) to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes
manifests for the cluster:

$ ./openshift-install create manifests --dir=<installation_directory> 1

Example output

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"


INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines and compute
machine sets:

$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml


Because you create and manage these resources yourself, you do not have to initialize them.

You can preserve the machine set files to create compute machines by using the machine
API, but you must update references to them to match your environment.

3. Check that the mastersSchedulable parameter in the
<installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest
file is set to false. This setting prevents pods from being scheduled on the control plane
machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.

4. To create the Ignition configuration files, run the following command from the directory that
contains the installation program:

$ ./openshift-install create ignition-configs --dir=<installation_directory> 1

1 For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

5. Export the metadata file’s infraID key as an environment variable:

$ export INFRA_ID=$(jq -r .infraID metadata.json)

TIP

Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that
you create. By doing so, you avoid name conflicts when making multiple deployments in the same
project.
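
For example, after you export the variable, you can confirm its value and later use it to locate the RHOSP resources that belong to this deployment:

$ echo "$INFRA_ID"
$ openstack server list | grep "$INFRA_ID"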

1.3.15. Preparing the bootstrap Ignition files


The OpenShift Container Platform installation process relies on bootstrap machines that are created
from a bootstrap Ignition configuration file.

Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat
OpenStack Platform (RHOSP) uses to download the primary file.

Prerequisites


You have the bootstrap Ignition file that the installer program generates, bootstrap.ign.

The infrastructure ID from the installer’s metadata file is set as an environment variable
($INFRA_ID).

If the variable is not set, see Creating the Kubernetes manifest and Ignition config files.

You have an HTTP(S)-accessible way to store the bootstrap Ignition file.

The documented procedure uses the RHOSP image service (Glance), but you can also use
the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova
server.

Procedure

1. Run the following Python script. The script modifies the bootstrap Ignition file to set the host
name and, if available, CA certificate file when it runs:

import base64
import json
import os

with open('bootstrap.ign', 'r') as f:
    ignition = json.load(f)

files = ignition['storage'].get('files', [])

# Set the bootstrap machine's host name from the infrastructure ID.
infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
files.append(
{
    'path': '/etc/hostname',
    'mode': 420,
    'contents': {
        'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
    }
})

# If a CA certificate file is configured, embed it as well.
ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
    with open(ca_cert_path, 'r') as f:
        ca_cert = f.read().encode()
    ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()

    files.append(
    {
        'path': '/opt/openshift/tls/cloud-ca-cert.pem',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
        }
    })

ignition['storage']['files'] = files

with open('bootstrap.ign', 'w') as f:
    json.dump(ignition, f)

2. Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:

$ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>

3. Get the image’s details:

$ openstack image show <image_name>

Make a note of the file value; it follows the pattern v2/images/<image_ID>/file.

NOTE

Verify that the image you created is active.

4. Retrieve the image service’s public address:

$ openstack catalog show image

5. Combine the public address with the image file value and save the result as the storage
location. The location follows the pattern
<image_service_public_URL>/v2/images/<image_ID>/file.
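
For example, if the image service's public endpoint were http://glance.example.com:9292 (an illustrative address), the resulting storage URL would be:

http://glance.example.com:9292/v2/images/<image_ID>/file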

6. Generate an auth token and save the token ID:

$ openstack token issue -c id -f value

7. Insert the following content into a file called $INFRA_ID-bootstrap-ignition.json and edit the
placeholders to match your own values:

{
  "ignition": {
    "config": {
      "merge": [{
        "source": "<storage_url>", 1
        "httpHeaders": [{
          "name": "X-Auth-Token", 2
          "value": "<token_ID>" 3
        }]
      }]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{
          "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
        }]
      }
    },
    "version": "3.1.0"
  }
}

1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage
URL.

2 Set name in httpHeaders to "X-Auth-Token".

3 Set value in httpHeaders to your token’s ID.

4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-
encoded certificate.

8. Save the secondary Ignition config file.

The bootstrap Ignition data will be passed to RHOSP during installation.


WARNING

The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials.
Ensure that you store it in a secure place, and delete it after you complete the
installation process.

1.3.16. Creating control plane Ignition config files on RHOSP


Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own
infrastructure requires control plane Ignition config files. You must create multiple config files.

NOTE

As with the bootstrap Ignition configuration, you must explicitly define a host name for
each control plane machine.

Prerequisites

The infrastructure ID from the installation program’s metadata file is set as an environment
variable ($INFRA_ID).

If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files."

Procedure

On a command line, run the following Python script:

$ for index in $(seq 0 2); do
    MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
    python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
storage = ignition.get('storage', {});
files = storage.get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
storage['files'] = files;
ignition['storage'] = storage
json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
done

You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json,
<INFRA_ID>-master-1-ignition.json, and <INFRA_ID>-master-2-ignition.json.

1.3.17. Creating network resources on RHOSP


Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform
(RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks
that generate security groups, networks, subnets, routers, and ports.

Prerequisites

Python 3 is installed on your machine.

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

Procedure

1. Optional: Add an external network value to the inventory.yaml playbook:

Example external network value in the inventory.yaml Ansible playbook

...
# The public network providing connectivity to the cluster. If not
# provided, the cluster external connectivity must be provided in another
# way.

# Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.


os_external_network: 'external'
...

IMPORTANT

If you did not provide a value for os_external_network in the inventory.yaml file, you
must ensure that VMs can access Glance and an external connection yourself.

2. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml
playbook:

Example FIP values in the inventory.yaml Ansible playbook

...


# OpenShift API floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the Control Plane to
# serve the OpenShift API.
os_api_fip: '203.0.113.23'

# OpenShift Ingress floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the worker nodes to serve
# the applications.
os_ingress_fip: '203.0.113.19'

# If this value is non-empty, the corresponding floating IP will be
# attached to the bootstrap machine. This is needed for collecting logs
# in case of install failure.
os_bootstrap_fip: '203.0.113.20'

IMPORTANT

If you do not define values for os_api_fip and os_ingress_fip, you must perform
post-installation network configuration.

If you do not define a value for os_bootstrap_fip, the installer cannot download
debugging information from failed installations.

See "Enabling access to the environment" for more information.

3. On a command line, create security groups by running the security-groups.yaml playbook:

$ ansible-playbook -i inventory.yaml security-groups.yaml

4. On a command line, create a network, subnet, and router by running the network.yaml
playbook:

$ ansible-playbook -i inventory.yaml network.yaml

5. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI
command:

$ openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "$INFRA_ID-nodes"

1.3.18. Creating the bootstrap machine on RHOSP


Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack
Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

The inventory.yaml, common.yaml, and bootstrap.yaml Ansible playbooks are in a common
directory.


The metadata.json file that the installation program created is in the same directory as the
Ansible playbooks.

Procedure

1. On a command line, change the working directory to the location of the playbooks.

2. On a command line, run the bootstrap.yaml playbook:

$ ansible-playbook -i inventory.yaml bootstrap.yaml

3. After the bootstrap server is active, view the logs to verify that the Ignition files were received:

$ openstack console log show "$INFRA_ID-bootstrap"
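
If the console log is long, you can filter it for Ignition-related lines. The grep pattern below is only a convenience; the exact wording of the Ignition messages can vary between RHCOS releases:

$ openstack console log show "$INFRA_ID-bootstrap" | grep -i ignition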

1.3.19. Creating the control plane machines on RHOSP


Create three control plane machines by using the Ignition config files that you generated. Red Hat
provides an Ansible playbook that you run to simplify this process.

Prerequisites

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

The infrastructure ID from the installation program’s metadata file is set as an environment
variable ($INFRA_ID).

The inventory.yaml, common.yaml, and control-plane.yaml Ansible playbooks are in a
common directory.

You have the three Ignition files that were created in "Creating control plane Ignition config
files."

Procedure

1. On a command line, change the working directory to the location of the playbooks.

2. If the control plane Ignition config files aren’t already in your working directory, copy them into
it.

3. On a command line, run the control-plane.yaml playbook:

$ ansible-playbook -i inventory.yaml control-plane.yaml

4. Run the following command to monitor the bootstrapping process:

$ openshift-install wait-for bootstrap-complete

You will see messages that confirm that the control plane machines are running and have joined
the cluster:

INFO API v1.14.6+f9b5405 up
INFO Waiting up to 30m0s for bootstrapping to complete...
...
INFO It is now safe to remove the bootstrap resources

1.3.20. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.3.21. Deleting bootstrap resources from RHOSP


Delete the bootstrap resources that you no longer need.

Prerequisites

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

The inventory.yaml, common.yaml, and down-bootstrap.yaml Ansible playbooks are in a
common directory.

The control plane machines are running.

If you do not know the status of the machines, see "Verifying cluster status."

Procedure


1. On a command line, change the working directory to the location of the playbooks.

2. On a command line, run the down-bootstrap.yaml playbook:

$ ansible-playbook -i inventory.yaml down-bootstrap.yaml

The bootstrap port, server, and floating IP address are deleted.


WARNING

If you did not disable the bootstrap Ignition file URL earlier, do so now.

1.3.22. Creating compute machines on RHOSP


After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook
that you run to simplify this process.

Prerequisites

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

The inventory.yaml, common.yaml, and compute-nodes.yaml Ansible playbooks are in a
common directory.

The metadata.yaml file that the installation program created is in the same directory as the
Ansible playbooks.

The control plane is active.

Procedure

1. On a command line, change the working directory to the location of the playbooks.

2. On a command line, run the playbook:

$ ansible-playbook -i inventory.yaml compute-nodes.yaml
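
To confirm that the compute servers were created, you can list the servers that carry the cluster's infrastructure ID. This optional check assumes that $INFRA_ID is still exported in your shell and that the playbooks' default naming, which prefixes server names with the infrastructure ID, is unchanged:

$ openstack server list --name "$INFRA_ID" -c Name -c Status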

Next steps

Approve the certificate signing requests for the machines.

1.3.23. Approving the certificate signing requests for your machines


When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for
each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve
them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites


You added machines to your cluster.

Procedure

1. Confirm that the cluster recognizes the machines:

$ oc get nodes

Example output

NAME       STATUS     ROLES    AGE   VERSION
master-0   Ready      master   63m   v1.19.0
master-1   Ready      master   63m   v1.19.0
master-2   Ready      master   64m   v1.19.0
worker-0   NotReady   worker   76s   v1.19.0
worker-1   NotReady   worker   70s   v1.19.0

The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as
worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or
Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME AGE REQUESTOR CONDITION


csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-
bootstrapper Pending
csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-
bootstrapper Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After you approve the initial
CSRs, the subsequent node client CSRs are automatically approved by the
cluster kube-controller-manager.

NOTE

For clusters running on platforms that are not machine API enabled, such as bare
metal and other user-provisioned infrastructure, you must implement a method
of automatically approving the kubelet serving certificate requests (CSRs). If a
request is not approved, then the oc exec, oc rsh, and oc logs commands
cannot succeed, because a serving certificate is required when the API server
connects to the kubelet. Any operation that contacts the Kubelet endpoint
requires this certificate approval to be in place. The method must watch for new
CSRs, confirm that the CSR was submitted by the node-bootstrapper service
account in the system:node or system:admin groups, and confirm the identity
of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

NOTE

Some Operators might not become available until some CSRs are approved.
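
If you want to approve pending CSRs continuously while nodes join, a small polling loop like the one below can help. This is only a sketch: it approves every pending CSR without verifying the requesting node's identity, so use it only in environments where that is acceptable:

$ while true; do
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve
    sleep 30
  done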

4. Now that your client requests are approved, you must review the server requests for each
machine that you added to the cluster:

$ oc get csr

Example output

NAME AGE REQUESTOR CONDITION


csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal
Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal
Pending
...

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for
your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

6. After all client and server CSRs have been approved, the machines have the Ready status.
Verify this by running the following command:

$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.20.0
master-1   Ready    master   73m   v1.20.0
master-2   Ready    master   74m   v1.20.0
worker-0   Ready    worker   11m   v1.20.0
worker-1   Ready    worker   11m   v1.20.0

NOTE

It can take a few minutes after approval of the server CSRs for the machines to
transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests .

1.3.24. Verifying a successful installation


Verify that the OpenShift Container Platform installation is complete.

Prerequisites

You have the installation program (openshift-install).

Procedure

On a command line, enter:

$ openshift-install --log-level debug wait-for install-complete

The program outputs the console URL, as well as the administrator’s login information.
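
As an additional check, you can confirm that all cluster Operators report as available. This is a supplementary verification rather than a required step:

$ oc get clusteroperators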


1.3.25. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting .

If you need to enable external access to node ports, configure ingress cluster traffic by using a
node port.

If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .

1.4. INSTALLING A CLUSTER ON OPENSTACK WITH KURYR ON YOUR OWN INFRASTRUCTURE

In OpenShift Container Platform version 4.6, you can install a cluster on Red Hat OpenStack Platform
(RHOSP) that runs on user-provisioned infrastructure.

Using your own infrastructure allows you to integrate your cluster with existing infrastructure and
modifications. The process requires more labor on your part than installer-provisioned installations,
because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups.
However, Red Hat provides Ansible playbooks to help you in the deployment process.

1.4.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.

Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version in the
Available platforms section. You can also compare platform support across different
versions by viewing the OpenShift Container Platform on RHOSP support matrix .

Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.

Have an RHOSP account where you want to install OpenShift Container Platform.

On the machine from which you run the installation program, have:

A single directory in which you can keep the files you create during the installation process

Python 3

1.4.2. About Kuryr SDN


Kuryr is a container network interface (CNI) plug-in solution that uses the Neutron and Octavia Red Hat
OpenStack Platform (RHOSP) services to provide networking for pods and Services.

Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container
Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging
OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between
pods and RHOSP virtual instances.

Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr
namespace:

kuryr-controller - a single service instance installed on a master node. This is modeled in
OpenShift Container Platform as a Deployment object.

kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift
Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet
object.
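
After a Kuryr-based cluster is running, you can view these components in the openshift-kuryr namespace. The command below is a simple verification; pod names and counts depend on your cluster:

$ oc get pods -n openshift-kuryr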

The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and
namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to
corresponding objects in Neutron and Octavia. This means that every network solution that implements
the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This
includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as
Neutron-compatible commercial SDNs.

Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant
networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform
SDN over an RHOSP network.

If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double
encapsulation. The performance benefit is negligible. Depending on your configuration, though, using
Kuryr to avoid having two overlays might still be beneficial.

Kuryr is not recommended in deployments where all of the following criteria are true:

The RHOSP version is less than 16.

The deployment uses UDP services, or a large number of TCP services on few hypervisors.

or

The ovn-octavia Octavia driver is disabled.

The deployment uses a large number of TCP services on few hypervisors.

1.4.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr

When using Kuryr SDN, the pods, services, namespaces, and network policies use resources from
the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional
requirements on top of what a default install requires.

Use the following quota to satisfy a default cluster’s minimum requirements:

Table 1.16. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
with Kuryr

Resource Value

Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type

Ports 1500 - 1 needed per Pod

Routers 1


Subnets 250 - 1 needed per Namespace/Project

Networks 250 - 1 needed per Namespace/Project

RAM 112 GB

vCPUs 28

Volume storage 275 GB

Instances 7

Security groups 250 - 1 needed per Service and per NetworkPolicy

Security group rules 1000

Load balancers 100 - 1 needed per Service

Load balancer listeners 500 - 1 needed per Service-exposed port

Load balancer pools 500 - 1 needed per Service-exposed port

A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.

IMPORTANT

If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.

IMPORTANT

If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora
driver rather than the OVN Octavia driver, security groups are associated with service
accounts instead of user projects.

Take the following notes into consideration when setting resources:

The number of ports that are required is larger than the number of pods. Kuryr uses ports pools
to have pre-created ports ready to be used by pods and speed up the pods' booting time.

Each network policy is mapped into an RHOSP security group, and depending on the
NetworkPolicy spec, one or more rules are added to the security group.

Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating
the number of security groups required for the quota.
If you are using RHOSP version 15 or earlier, or the ovn-octavia driver, each load balancer has a
security group with the user project.

The quota does not account for load balancer resources (such as VM resources), but you must
consider these resources when you decide the RHOSP deployment’s size. The default
installation will have more than 50 load balancers; the clusters must be able to accommodate
them.
If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer
VM is generated; services are load balanced through OVN flows.

An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.

To enable Kuryr SDN, your environment must meet the following requirements:

Run RHOSP 13+.

Have Overcloud with Octavia.

Use Neutron Trunk ports extension.

Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid.

1.4.3.1. Increasing quota

When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP)
resources used by pods, services, namespaces, and network policies.

Procedure

Increase the quotas for a project by running the following command:

$ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets
250 --networks 250 <project>
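
To review the resulting quota values, you can display them afterward. This verification step is optional:

$ openstack quota show <project>
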

1.4.3.2. Configuring Neutron

Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack
Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work.

In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to
openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr
can properly handle network policies.
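
How you apply this setting depends on your deployment tooling. As one hedged example, on director-based deployments the driver is commonly selected through a Heat environment file similar to the hypothetical neutron-ovs-firewall.yaml below, which is then passed to the overcloud deploy command with -e; confirm the exact parameter name for your RHOSP version before relying on it:

(undercloud) $ cat neutron-ovs-firewall.yaml
parameter_defaults:
  # Enforce security groups on trunk subports by using the openvswitch firewall driver
  NeutronOVSFirewallDriver: openvswitch
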

1.4.3.3. Configuring Octavia

Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift
Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use
Kuryr SDN.

To enable Octavia, you must include the Octavia service during the installation of the RHOSP
Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for
enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update.


NOTE

The following steps only capture the key pieces required during the deployment of
RHOSP when dealing with Octavia. It is also important to note that registry methods vary.

This example uses the local registry method.

Procedure

1. If you are using the local registry, create a template to upload the images to the registry. For
example:

(undercloud) $ openstack overcloud container image prepare \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
--namespace=registry.access.redhat.com/rhosp13 \
--push-destination=<local-ip-from-undercloud.conf>:8787 \
--prefix=openstack- \
--tag-from-label {version}-{release} \
--output-env-file=/home/stack/templates/overcloud_images.yaml \
--output-images-file /home/stack/local_registry_images.yaml

2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:

...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
  push_destination: <local-ip-from-undercloud.conf>:8787

NOTE

The Octavia container versions vary depending upon the specific RHOSP release
installed.

3. Pull the container images from registry.redhat.io to the Undercloud node:

(undercloud) $ sudo openstack overcloud container image upload \
--config-file /home/stack/local_registry_images.yaml \
--verbose

This may take some time depending on the speed of your network and Undercloud disk.

4. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you
must increase their listeners' default timeouts for the connections. The default timeout is 50
seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud
deploy command:

(undercloud) $ cat octavia_timeouts.yaml
parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000

NOTE

This is not needed for RHOSP 14+.

5. Install or update your Overcloud environment with Octavia:

$ openstack overcloud deploy --templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
-e octavia_timeouts.yaml

NOTE

This command only includes the files associated with Octavia; it varies based on
your specific installation of RHOSP. See the RHOSP documentation for further
information. For more information on customizing your Octavia installation, see
installation of Octavia using Director.

NOTE

When leveraging Kuryr SDN, the Overcloud installation requires the Neutron
trunk extension. This is available by default on director deployments. Use the
openvswitch firewall instead of the default ovs-hybrid when the Neutron
backend is ML2/OVS. There is no need for modifications if the backend is
ML2/OVN.

6. In RHOSP versions 13 and 15, add the project ID to the octavia.conf configuration file after you
create the project.

To enforce network policies across services, like when traffic goes through the Octavia load
balancer, you must ensure Octavia creates the Amphora VM security groups on the user
project.
This change ensures that required load balancer security groups belong to that project, and
that they can be updated to enforce services isolation.

NOTE

This task is unnecessary in RHOSP version 16 or later.

Octavia implements a new ACL API that restricts access to the load
balancers' VIPs.

a. Get the project ID

$ openstack project show <project>

Example output

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | PROJECT_ID                       |
| is_domain   | False                            |
| name        | *<project>*                      |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

b. Add the project ID to octavia.conf for the controllers.

i. Source the stackrc file:

$ source stackrc # Undercloud credentials

ii. List the Overcloud controllers:

$ openstack server list

Example output

+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| ID                                   | Name         | Status | Networks              | Image          | Flavor     |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
| dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+

iii. SSH into the controller(s).

$ ssh heat-admin@192.168.24.8

iv. Edit the octavia.conf file to add the project into the list of projects where Amphora
security groups are on the user’s account.

# List of project IDs that are allowed to have Load balancer security groups
# belonging to them.
amp_secgroup_allowed_projects = PROJECT_ID

c. Restart the Octavia worker so the new configuration loads.


controller-0$ sudo docker restart octavia_worker

NOTE

Depending on your RHOSP environment, Octavia might not support UDP listeners. If you
use Kuryr SDN on RHOSP version 15 or earlier, UDP services are not supported. RHOSP
version 16 or later supports UDP.

1.4.3.3.1. The Octavia OVN Driver

Octavia supports multiple provider drivers through the Octavia API.

To see all available Octavia provider drivers, on a command line, enter:

$ openstack loadbalancer provider list

Example output

+---------+-------------------------------------------------+
| name | description |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver. |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn | Octavia OVN driver. |
+---------+-------------------------------------------------+

Beginning with RHOSP version 16, the Octavia OVN provider driver (ovn) is supported on OpenShift
Container Platform on RHOSP deployments.

ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load
balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by
Director on deployments that use OVN Neutron ML2.

The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it.

If Kuryr uses ovn instead of Amphora, it offers the following benefits:

Decreased resource requirements. Kuryr does not require a load balancer VM for each service.

Reduced network latency.

Increased service creation speed by using OpenFlow rules instead of a VM for each service.

Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.

1.4.3.4. Known limitations of installing with Kuryr

Using OpenShift Container Platform with Kuryr SDN has several known limitations.

RHOSP general limitations


OpenShift Container Platform with Kuryr SDN does not support Service objects with type NodePort.

If the machines subnet is not connected to a router, or if the subnet is connected, but the router has no
external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.


RHOSP version limitations


Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP
version.

RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver
requires that one Amphora load balancer VM is deployed per OpenShift Container Platform
service. Creating too many services can cause you to run out of resources.
Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use
the Amphora driver. They are subject to the same resource concerns as earlier versions of
RHOSP.

Octavia RHOSP versions before 16 do not support UDP listeners. Therefore, OpenShift
Container Platform UDP services are not supported.

Octavia RHOSP versions before 16 cannot listen to multiple protocols on the same port.
Services that expose the same port to different protocols, like TCP and UDP, are not
supported.

RHOSP environment limitations


There are limitations when using Kuryr SDN that depend on your deployment environment.

Because of Octavia’s lack of support for the UDP protocol and multiple listeners, if the RHOSP version
is earlier than 16, Kuryr forces pods to use TCP for DNS resolution.

In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only.
In this case, the native Go resolver does not recognize the use-vc option in resolv.conf, which controls
whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails.

To ensure that TCP forcing is allowed, compile applications either with the environment variable
CGO_ENABLED set to 1, i.e. CGO_ENABLED=1, or ensure that the variable is absent.
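
For example, a Go application that must force TCP DNS resolution inside such a cluster could be built with CGO enabled. The module path below is a placeholder:

$ CGO_ENABLED=1 go build -o myapp ./cmd/myapp
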

In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails.

NOTE

musl-based containers, including Alpine-based containers, do not support the use-vc
option.

RHOSP upgrade limitations


As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the
Amphora images that are used for load balancers might be required.

You can address API changes on an individual basis.

If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two
ways:

Upgrade each VM by triggering a load balancer failover.

Leave responsibility for upgrading the VMs to users.

If the operator takes the first option, there might be short downtimes during failovers.

If the operator takes the second option, the existing load balancers will not support upgraded Octavia
API features, like UDP listeners. In this case, users must recreate their Services to use these features.

IMPORTANT

If OpenShift Container Platform detects a new Octavia version that supports UDP load
balancing, it recreates the DNS service automatically. The service recreation ensures that
the service default supports UDP load balancing.

The recreation causes approximately one minute of downtime for the DNS service.

1.4.3.5. Control plane and compute machines

By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.

Each machine requires:

An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

TIP

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.

1.4.3.6. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

1.4.4. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM) .

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.


Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.

IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.4.5. Downloading playbook dependencies


The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require
several Python modules. On the machine where you will run the installer, add the modules' repositories
and then download them.

NOTE

These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.

Prerequisites

Python 3 is installed on your machine.

Procedure

1. On a command line, add the repositories:

a. Register with Red Hat Subscription Manager:

$ sudo subscription-manager register # If not done already

b. Pull the latest subscription data:

$ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already

c. Disable the current repositories:

$ sudo subscription-manager repos --disable=* # If not done already

d. Add the required repositories:

$ sudo subscription-manager repos \


--enable=rhel-8-for-x86_64-baseos-rpms \
--enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
--enable=ansible-2.9-for-rhel-8-x86_64-rpms \
--enable=rhel-8-for-x86_64-appstream-rpms

2. Install the modules:


$ sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr

3. Ensure that the python command points to python3:

$ sudo alternatives --set python /usr/bin/python3
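
To confirm that the modules are importable and that Ansible is available, you can run a quick, optional check:

$ python -c 'import openstack, netaddr; print("openstacksdk and netaddr are available")'
$ ansible --version
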

1.4.6. Downloading the installation playbooks


Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red
Hat OpenStack Platform (RHOSP) infrastructure.

Prerequisites

The curl command-line tool is available on your machine.

Procedure

To download the playbooks to your working directory, run the following script from a command
line:

$ xargs -n 1 curl -O <<< '
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/bootstrap.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/common.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/compute-nodes.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/control-plane.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/inventory.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/network.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/security-groups.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-bootstrap.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-compute-nodes.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-control-plane.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-load-balancers.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-network.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-security-groups.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.5/upi/openstack/down-containers.yaml'

The playbooks are downloaded to your machine.

IMPORTANT

During the installation process, you can modify the playbooks to configure your
deployment.

Retain all playbooks for the life of your cluster. You must have the playbooks to remove
your OpenShift Container Platform cluster from RHOSP.

IMPORTANT

You must match any edits you make in the bootstrap.yaml, compute-nodes.yaml,
control-plane.yaml, network.yaml, and security-groups.yaml files to the
corresponding playbooks that are prefixed with down-. For example, edits to the
bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not
edit both files, the supported cluster removal process will fail.

1.4.7. Obtaining the installation program


Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space

Procedure

1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.

2. Select your infrastructure provider.

3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.

IMPORTANT

The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.

IMPORTANT

Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.

4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:

$ tar xvf openshift-install-linux.tar.gz


5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.

1.4.8. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
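
You can confirm that the key was added by listing the identities that the agent holds:

$ ssh-add -l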


Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.

1.4.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image
The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux
CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the
latest RHCOS image, then upload it using the RHOSP CLI.

Prerequisites

The RHOSP CLI is installed.

Procedure

1. Log in to the Red Hat customer portal’s Product Downloads page .

2. Under Version, select the most recent release of OpenShift Container Platform 4.6 for Red Hat
Enterprise Linux (RHEL) 8.

IMPORTANT

The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available.

3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) .

4. Decompress the image.

NOTE

You must decompress the RHOSP image before the cluster can use it. The
name of the downloaded file might not contain a compression extension, like .gz
or .tgz. To find out if or how the file is compressed, in a command line, enter:

$ file <name_of_downloaded_file>
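
For example, if the file command reports gzip compression, you could decompress the image as follows. The file name is illustrative and depends on the RHCOS version that you downloaded:

$ gzip -d rhcos-${RHCOS_VERSION}-openstack.qcow2.gz
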

5. From the image that you downloaded, create an image that is named rhcos in your cluster by
using the RHOSP CLI:

$ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.qcow2 rhcos

IMPORTANT

Depending on your RHOSP environment, you might be able to upload the image
in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.



WARNING

If the installation program finds multiple images with the same name, it
chooses one of them at random. To avoid this behavior, create unique
names for resources in RHOSP.

After you upload the image to RHOSP, it is usable in the installation process.

1.4.10. Verifying external network access


The OpenShift Container Platform installation process requires external network access. You must
provide an external network value to it, or deployment fails. Before you begin the process, verify that a
network with the external router type exists in Red Hat OpenStack Platform (RHOSP).

Prerequisites

Configure OpenStack’s networking service to have DHCP agents forward instances' DNS
queries

Procedure

1. Using the RHOSP CLI, verify the name and ID of the 'External' network:

$ openstack network list --long -c ID -c Name -c "Router Type"

Example output

+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+

A network with an external router type appears in the network list. If at least one does not, see Creating
a default floating IP network and Creating a default provider network .

NOTE

If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .

1.4.11. Enabling access to the environment


At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack
Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP
deployments.

You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.

1.4.11.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster
applications, and the bootstrap process.

Procedure

1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>

2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>

3. By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:

$ openstack floating ip create --description "bootstrap machine" <external_network>

4. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>

NOTE

If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.

5. Add the FIPs to the inventory.yaml file as the values of the following variables:

os_api_fip

os_bootstrap_fip

os_ingress_fip

If you use these values, you must also enter an external network as the value of the
os_external_network variable in the inventory.yaml file.

TIP

You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.

1.4.11.2. Completing installation without floating IP addresses


You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses.

In the inventory.yaml file, do not define the following variables:

os_api_fip

os_bootstrap_fip

os_ingress_fip

If you cannot provide an external network, you can also leave os_external_network blank. If you do not
provide a value for os_external_network, a router is not created for you, and, without additional action,
the installer will fail to retrieve an image from Glance. Later in the installation process, when you create
network resources, you must configure external connectivity on your own.

If you run the installer with the wait-for command from a system that cannot reach the cluster API due
to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in
these cases, you can use a proxy network or run the installer from a system that is on the same network
as your machines.

NOTE

You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:

api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.

1.4.12. Defining parameters for the installation program


The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The
file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project
name, log in information, and authorization service URLs.

Procedure

1. Create the clouds.yaml file:

If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.

IMPORTANT

Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.

If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.


clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'

2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:

a. Copy the certificate authority file to your machine.

b. Add the machine to the certificate authority trust bundle:

$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/

c. Update the trust bundle:

$ sudo update-ca-trust extract

d. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-
accessible path to the CA certificate:

clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"

TIP

After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a
command line, run:

$ oc edit configmap -n openshift-config cloud-provider-config

3. Place the clouds.yaml file in one of the following locations:

a. The value of the OS_CLIENT_CONFIG_FILE environment variable

b. The current directory

c. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml


d. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml


The installation program searches for clouds.yaml in that order.

1.4.13. Creating the installation configuration file


You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack
Platform (RHOSP).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.

IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509
certificates have short expiration intervals, so you must not reuse an
installation directory. If you want to reuse individual files from another cluster
installation, you can copy them into your directory. However, the file names
for the installation assets might change between releases. Use caution when
copying installation files from an earlier OpenShift Container Platform
version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want
to perform installation debugging or disaster recovery, specify an SSH
key that your ssh-agent process uses.

ii. Select openstack as the platform to target.

iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.

iv. Specify the floating IP address to use for external access to the OpenShift API.

v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.

vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.

vii. Enter a name for your cluster. The name must be 14 or fewer characters long.

viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.

3. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you
want to reuse the file, you must back it up now.

You now have the file install-config.yaml in the directory that you specified.

1.4.14. Installation configuration parameters


Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.17. Required parameters

apiVersion
    Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
    Values: String

baseDomain
    Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata
    Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
    Values: Object

metadata.name
    Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

platform
    Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
    Values: Object

pullSecret
    Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
    Values: For example:

    {
       "auths":{
          "cloud.openshift.com":{
             "auth":"b3Blb=",
             "email":"you@example.com"
          },
          "quay.io":{
             "auth":"b3Blb=",
             "email":"you@example.com"
          }
       }
    }

Table 1.18. Optional parameters

Parameter Description Values

additionalTrustBundle
  Description: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
  Values: String

compute
  Description: The configuration for the machines that comprise the compute nodes.
  Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.architecture
  Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
  Values: String

compute.hyperthreading
  Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
  Values: Enabled or Disabled

compute.name
  Description: Required if you use compute. The name of the machine pool.
  Values: worker

compute.platform
  Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
  Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
  Description: The number of compute machines, which are also known as worker machines, to provision.
  Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
  Description: The configuration for the machines that comprise the control plane.
  Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.architecture
  Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
  Values: String

controlPlane.hyperthreading
  Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
  Values: Enabled or Disabled

controlPlane.name
  Description: Required if you use controlPlane. The name of the machine pool.
  Values: master

controlPlane.platform
  Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
  Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
  Description: The number of control plane machines to provision.
  Values: The only supported value is 3, which is the default value.

credentialsMode
  Description: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
  Values: Mint, Passthrough, Manual, or an empty string ("").

fips
  Description: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
  Values: false or true

imageContentSources
  Description: Sources and repositories for the release-image content.
  Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

imageContentSources.source
  Description: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
  Values: String

imageContentSources.mirrors
  Description: Specify one or more repositories that may also contain the same images.
  Values: Array of strings

networking
  Description: The configuration for the pod network provider in the cluster.
  Values: Object

networking.clusterNetwork
  Description: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
  Values: Array of objects

networking.clusterNetwork.cidr
  Description: Required if you use networking.clusterNetwork. The IP block address pool.
  Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.clusterNetwork.hostPrefix
  Description: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
  Values: Integer

networking.machineNetwork
  Description: The IP address pools for machines.
  Values: Array of objects

networking.machineNetwork.cidr
  Description: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
  Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

networking.networkType
  Description: The type of network to install. The default is OpenShiftSDN.
  Values: String

networking.serviceNetwork
  Description: The IP address pools for services. The default is 172.30.0.0/16.
  Values: Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

publish
  Description: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes.
  Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

sshKey
  Description: The SSH key or keys to authenticate access to your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
  Values: One or more keys. For example:

  sshKey:
    <key1>
    <key2>
    <key3>

Table 1.19. Additional Red Hat OpenStack Platform (RHOSP) parameters

Parameter Description Values

compute.platform.openstack.rootVolume.size
  Description: For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
  Values: Integer, for example 30.

compute.platform.openstack.rootVolume.type
  Description: For compute machines, the root volume's type.
  Values: String, for example performance.

controlPlane.platform.openstack.rootVolume.size
  Description: For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
  Values: Integer, for example 30.

controlPlane.platform.openstack.rootVolume.type
  Description: For control plane machines, the root volume's type.
  Values: String, for example performance.

platform.openstack.cloud
  Description: The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.
  Values: String, for example MyCloud.

platform.openstack.externalNetwork
  Description: The RHOSP external network name to be used for installation.
  Values: String, for example external.

platform.openstack.computeFlavor
  Description: The RHOSP flavor to use for control plane and compute machines.
  Values: String, for example m1.xlarge.

Table 1.20. Optional RHOSP parameters

Parameter Description Values

compute.platform.openstack.additionalNetworkIDs
  Description: Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.
  Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs
  Description: Additional security groups that are associated with compute machines.
  Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones
  Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
  Values: A list of strings. For example, ["zone-1", "zone-2"].

controlPlane.platform.openstack.additionalNetworkIDs
  Description: Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.
  Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs
  Description: Additional security groups that are associated with control plane machines.
  Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones
  Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
  Values: A list of strings. For example, ["zone-1", "zone-2"].

platform.openstack.clusterOSImage
  Description: The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.
  Values: An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.defaultMachinePlatform
  Description: The default machine pool platform configuration.
  Values: For example:

  {
     "type": "ml.large",
     "rootVolume": {
        "size": 30,
        "type": "performance"
     }
  }

platform.openstack.ingressFloatingIP
  Description: An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.
  Values: An IP address, for example 128.0.0.1.

platform.openstack.lbFloatingIP
  Description: An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.
  Values: An IP address, for example 128.0.0.1.

platform.openstack.externalDNS
  Description: IP addresses for external DNS servers that cluster instances use for DNS resolution.
  Values: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet
  Description: The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.
  Values: A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

1.4.14.1. Custom subnets in RHOSP deployments


Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice.
The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-
config.yaml file.

This subnet is used as the cluster’s primary subnet; nodes and ports are created on it.

Before you run the OpenShift Container Platform installer with a custom subnet, verify that:

The target network and subnet are available.

DHCP is enabled on the target subnet.

You can provide installer credentials that have permission to create ports on the target
network.

If your network configuration requires a router, it is created in RHOSP. Some configurations rely
on routers for floating IP address translation.

Your network configuration does not rely on a provider network. Provider networks are not
supported.

NOTE

By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s
CIDR block. To override these default values, set values for platform.openstack.apiVIP
and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
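For illustration only, the relevant install-config.yaml entries for a custom subnet might look like the following sketch. The subnet UUID, VIP addresses, and CIDR are placeholders, not values from your environment:

platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf
    apiVIP: 192.0.2.8
    ingressVIP: 192.0.2.9
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24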

1.4.14.2. Sample customized install-config.yaml file for RHOSP with Kuryr

To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-
config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default
OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all
of the possible Red Hat OpenStack Platform (RHOSP) customization options.

IMPORTANT

This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.

apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
    trunkSupport: true
    octaviaSupport: true
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...

NOTE

Both trunkSupport and octaviaSupport are automatically discovered by the installer, so


there is no need to set them. But if your environment does not meet both requirements,
Kuryr SDN will not work properly. Trunks are needed to connect the pods to the RHOSP
network and Octavia is required to create the OpenShift Container Platform services.

1.4.14.3. Setting a custom subnet for machines

The IP range that the installation program uses by default might not match the Neutron subnet that you
create when you install OpenShift Container Platform. If necessary, update the CIDR value for new
machines by editing the installation configuration file.

Prerequisites

You have the install-config.yaml file that was generated by the OpenShift Container Platform
installation program.

Procedure

1. On a command line, browse to the directory that contains install-config.yaml.

2. From that directory, either run a script to edit the install-config.yaml file or update the file
manually:

To set the value by using a script, run:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

1 Insert a value that matches your intended Neutron subnet, for example, 192.0.2.0/24.


To set the value manually, open the file and set the cidr value in networking.machineNetwork to
something that matches your intended Neutron subnet.
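For reference, after either edit the relevant section of install-config.yaml resembles this sketch, where the CIDR is a placeholder for your own Neutron subnet:

networking:
  machineNetwork:
  - cidr: 192.168.0.0/18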

1.4.14.4. Emptying compute machine pools

To proceed with an installation that uses your own infrastructure, set the number of compute machines
in the installation configuration file to zero. Later, you create these machines manually.

Prerequisites

You have the install-config.yaml file that was generated by the OpenShift Container Platform
installation program.

Procedure

1. On a command line, browse to the directory that contains install-config.yaml.

2. From that directory, either run a script to edit the install-config.yaml file or update the file
manually:

To set the value by using a script, run:

$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

To set the value manually, open the file and set the value of compute.<first
entry>.replicas to 0.
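For reference, the resulting compute stanza in install-config.yaml resembles the following sketch:

compute:
- name: worker
  platform: {}
  replicas: 0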

1.4.14.5. Modifying the network type

By default, the installation program selects the OpenShiftSDN network type. To use Kuryr instead,
change the value in the installation configuration file that the program generated.

Prerequisites

You have the file install-config.yaml that was generated by the OpenShift Container Platform
installation program

Procedure

1. In a command prompt, browse to the directory that contains install-config.yaml.

2. From that directory, either run a script to edit the install-config.yaml file or update the file
manually:

To set the value by using a script, run:

$ python -c '
import yaml;
path = "install-config.yaml";


data = yaml.safe_load(open(path));
data["networking"]["networkType"] = "Kuryr";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'

To set the value manually, open the file and set networking.networkType to "Kuryr".
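Either way, the resulting entry in install-config.yaml is a single field, for example:

networking:
  networkType: Kuryr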

1.4.15. Creating the Kubernetes manifest and Ignition config files


Because you must modify some cluster definition files and manually start the cluster machines, you must
generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the
Ignition configuration files, which are later used to create the cluster.

IMPORTANT

The Ignition config files that the installation program generates contain certificates that
expire after 24 hours, which are then renewed at that time. If the cluster is shut down
before renewing the certificates and the cluster is later restarted after the 24 hours have
elapsed, the cluster automatically recovers the expired certificates. The exception is that
you must manually approve the pending node-bootstrapper certificate signing requests
(CSRs) to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.

Prerequisites

You obtained the OpenShift Container Platform installation program.

You created the install-config.yaml installation configuration file.

Procedure

1. Change to the directory that contains the installation program and generate the Kubernetes
manifests for the cluster:

$ ./openshift-install create manifests --dir=<installation_directory> 1

Example output

INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"


INFO Consuming Install Config from target directory
INFO Manifests created in: install_dir/manifests and install_dir/openshift

1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.

2. Remove the Kubernetes manifest files that define the control plane machines and compute
machine sets:

$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage these resources yourself, you do not have to initialize them.


You can preserve the machine set files to create compute machines by using the machine
API, but you must update references to them to match your environment.

3. Check that the mastersSchedulable parameter in the


<installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest
file is set to false. This setting prevents pods from being scheduled on the control plane
machines:

a. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

b. Locate the mastersSchedulable parameter and ensure that it is set to false.

c. Save and exit the file.

4. To create the Ignition configuration files, run the following command from the directory that
contains the installation program:

$ ./openshift-install create ignition-configs --dir=<installation_directory> 1

1 For <installation_directory>, specify the same installation directory.

The following files are generated in the directory:

.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign

5. Export the metadata file’s infraID key as an environment variable:

$ export INFRA_ID=$(jq -r .infraID metadata.json)

TIP

Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that
you create. By doing so, you avoid name conflicts when making multiple deployments in the same
project.
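As an illustration only, you can confirm the variable and then reuse it in resource names that you choose yourself, for example when you create the bootstrap Ignition image in the next section; the image name shown here is hypothetical, not a required value:

$ echo $INFRA_ID
$ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign "${INFRA_ID}-bootstrap-ignition"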

1.4.16. Preparing the bootstrap Ignition files


The OpenShift Container Platform installation process relies on bootstrap machines that are created
from a bootstrap Ignition configuration file.

Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat
OpenStack Platform (RHOSP) uses to download the primary file.

Prerequisites

You have the bootstrap Ignition file that the installer program generates, bootstrap.ign.


The infrastructure ID from the installer’s metadata file is set as an environment variable
($INFRA_ID).

If the variable is not set, see Creating the Kubernetes manifest and Ignition config files.

You have an HTTP(S)-accessible way to store the bootstrap Ignition file.

The documented procedure uses the RHOSP image service (Glance), but you can also use
the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova
server.

Procedure

1. Run the following Python script. The script modifies the bootstrap Ignition file to set the host
name and, if available, CA certificate file when it runs:

import base64
import json
import os

with open('bootstrap.ign', 'r') as f:
    ignition = json.load(f)

files = ignition['storage'].get('files', [])

# Set the host name of the bootstrap machine.
infra_id = os.environ.get('INFRA_ID', 'openshift').encode()
hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip()
files.append(
{
    'path': '/etc/hostname',
    'mode': 420,
    'contents': {
        'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
    }
})

# If a CA certificate file is available, add it to the Ignition files.
ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
    with open(ca_cert_path, 'r') as f:
        ca_cert = f.read().encode()
    ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip()

    files.append(
    {
        'path': '/opt/openshift/tls/cloud-ca-cert.pem',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
        }
    })

ignition['storage']['files'] = files

with open('bootstrap.ign', 'w') as f:
    json.dump(ignition, f)


2. Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:

$ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>

3. Get the image’s details:

$ openstack image show <image_name>

Make a note of the file value; it follows the pattern v2/images/<image_ID>/file.

NOTE

Verify that the image you created is active.

4. Retrieve the image service’s public address:

$ openstack catalog show image

5. Combine the public address with the image file value and save the result as the storage
location. The location follows the pattern
<image_service_public_URL>/v2/images/<image_ID>/file.

6. Generate an auth token and save the token ID:

$ openstack token issue -c id -f value
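If you prefer to keep these values in shell variables for the next step, the following sketch works with the standard RHOSP CLI; <image_name> and <image_service_public_URL> are the same placeholders used above:

$ IMAGE_ID=$(openstack image show <image_name> -f value -c id)
$ STORAGE_URL="<image_service_public_URL>/v2/images/${IMAGE_ID}/file"
$ TOKEN_ID=$(openstack token issue -c id -f value)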

7. Insert the following content into a file called $INFRA_ID-bootstrap-ignition.json and edit the
placeholders to match your own values:

{
  "ignition": {
    "config": {
      "merge": [{
        "source": "<storage_url>", 1
        "httpHeaders": [{
          "name": "X-Auth-Token", 2
          "value": "<token_ID>" 3
        }]
      }]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{
          "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
        }]
      }
    },
    "version": "3.1.0"
  }
}


1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage
URL.

2 Set name in httpHeaders to "X-Auth-Token".

3 Set value in httpHeaders to your token’s ID.

4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-
encoded certificate.
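If you need the base64-encoded value for callout 4, one way to produce it on Linux, assuming GNU coreutils is available, is:

$ base64 -w0 <path_to_certificate_file>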

8. Save the secondary Ignition config file.

The bootstrap Ignition data will be passed to RHOSP during installation.


WARNING

The bootstrap Ignition file contains sensitive information, like clouds.yaml


credentials. Ensure that you store it in a secure place, and delete it after you
complete the installation process.

1.4.17. Creating control plane Ignition config files on RHOSP


Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own
infrastructure requires control plane Ignition config files. You must create multiple config files.

NOTE

As with the bootstrap Ignition configuration, you must explicitly define a host name for
each control plane machine.

Prerequisites

The infrastructure ID from the installation program’s metadata file is set as an environment
variable ($INFRA_ID).

If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files."

Procedure

On a command line, run the following Python script:

$ for index in $(seq 0 2); do


MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
storage = ignition.get('storage', {});
files = storage.get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source':
'data:text/plain;charset=utf-8;base64,' +
base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}},


'filesystem': 'root'});
storage['files'] = files;
ignition['storage'] = storage
json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
done

You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json,


<INFRA_ID>-master-1-ignition.json, and <INFRA_ID>-master-2-ignition.json.

1.4.18. Creating network resources on RHOSP


Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform
(RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks
that generate security groups, networks, subnets, routers, and ports.

Prerequisites

Python 3 is installed on your machine.

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

Procedure

1. Optional: Add an external network value to the inventory.yaml playbook:

Example external network value in the inventory.yaml Ansible playbook

...
# The public network providing connectivity to the cluster. If not
# provided, the cluster external connectivity must be provided in another
# way.

# Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.


os_external_network: 'external'
...

IMPORTANT

If you did not provide a value for os_external_network in the inventory.yaml


file, you must ensure that VMs can access Glance and an external connection
yourself.

2. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml
playbook:

Example FIP values in the inventory.yaml Ansible playbook

...
# OpenShift API floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the Control Plane to
# serve the OpenShift API.
os_api_fip: '203.0.113.23'


# OpenShift Ingress floating IP address. If this value is non-empty, the


# corresponding floating IP will be attached to the worker nodes to serve
# the applications.
os_ingress_fip: '203.0.113.19'

# If this value is non-empty, the corresponding floating IP will be


# attached to the bootstrap machine. This is needed for collecting logs
# in case of install failure.
os_bootstrap_fip: '203.0.113.20'

IMPORTANT

If you do not define values for os_api_fip and os_ingress_fip, you must perform
post-installation network configuration.

If you do not define a value for os_bootstrap_fip, the installer cannot download
debugging information from failed installations.

See "Enabling access to the environment" for more information.

3. On a command line, create security groups by running the security-groups.yaml playbook:

$ ansible-playbook -i inventory.yaml security-groups.yaml

4. On a command line, create a network, subnet, and router by running the network.yaml
playbook:

$ ansible-playbook -i inventory.yaml network.yaml

5. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI
command:

$ openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "$INFRA_ID-nodes"

1.4.19. Creating the bootstrap machine on RHOSP


Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack
Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process.

Prerequisites

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

The inventory.yaml, common.yaml, and bootstrap.yaml Ansible playbooks are in a common


directory.

The metadata.yaml file that the installation program created is in the same directory as the
Ansible playbooks.


Procedure

1. On a command line, change the working directory to the location of the playbooks.

2. On a command line, run the bootstrap.yaml playbook:

$ ansible-playbook -i inventory.yaml bootstrap.yaml

3. After the bootstrap server is active, view the logs to verify that the Ignition files were received:

$ openstack console log show "$INFRA_ID-bootstrap"

1.4.20. Creating the control plane machines on RHOSP


Create three control plane machines by using the Ignition config files that you generated. Red Hat
provides an Ansible playbook that you run to simplify this process.

Prerequisites

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

The infrastructure ID from the installation program’s metadata file is set as an environment
variable ($INFRA_ID).

The inventory.yaml, common.yaml, and control-plane.yaml Ansible playbooks are in a


common directory.

You have the three Ignition files that were created in "Creating control plane Ignition config
files."

Procedure

1. On a command line, change the working directory to the location of the playbooks.

2. If the control plane Ignition config files aren’t already in your working directory, copy them into
it.

3. On a command line, run the control-plane.yaml playbook:

$ ansible-playbook -i inventory.yaml control-plane.yaml

4. Run the following command to monitor the bootstrapping process:

$ openshift-install wait-for bootstrap-complete

You will see messages that confirm that the control plane machines are running and have joined
the cluster:

INFO API v1.14.6+f9b5405 up


INFO Waiting up to 30m0s for bootstrapping to complete...
...
INFO It is now safe to remove the bootstrap resources


1.4.21. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

1.4.22. Deleting bootstrap resources from RHOSP


Delete the bootstrap resources that you no longer need.

Prerequisites

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

The inventory.yaml, common.yaml, and down-bootstrap.yaml Ansible playbooks are in a


common directory.

The control plane machines are running.

If you do not know the status of the machines, see "Verifying cluster status."

Procedure

1. On a command line, change the working directory to the location of the playbooks.

2. On a command line, run the down-bootstrap.yaml playbook:


$ ansible-playbook -i inventory.yaml down-bootstrap.yaml

The bootstrap port, server, and floating IP address are deleted.


WARNING

If you did not disable the bootstrap Ignition file URL earlier, do so now.
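For example, if you stored the bootstrap Ignition file as a Glance image earlier in this procedure, you can remove that image with the RHOSP CLI; use whatever name you gave the image when you created it:

$ openstack image delete <image_name>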

1.4.23. Creating compute machines on RHOSP


After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook
that you run to simplify this process.

Prerequisites

You downloaded the modules in "Downloading playbook dependencies."

You downloaded the playbooks in "Downloading the installation playbooks."

The inventory.yaml, common.yaml, and compute-nodes.yaml Ansible playbooks are in a


common directory.

The metadata.yaml file that the installation program created is in the same directory as the
Ansible playbooks.

The control plane is active.

Procedure

1. On a command line, change the working directory to the location of the playbooks.

2. On a command line, run the playbook:

$ ansible-playbook -i inventory.yaml compute-nodes.yaml

Next steps

Approve the certificate signing requests for the machines.

1.4.24. Approving the certificate signing requests for your machines


When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for
each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve
them yourself. The client requests must be approved first, followed by the server requests.

Prerequisites

You added machines to your cluster.


Procedure

1. Confirm that the cluster recognizes the machines:

$ oc get nodes

Example output

NAME STATUS ROLES AGE VERSION


master-0 Ready master 63m v1.19.0
master-1 Ready master 63m v1.19.0
master-2 Ready master 64m v1.19.0
worker-0 NotReady worker 76s v1.19.0
worker-1 NotReady worker 70s v1.19.0

The output lists all of the machines that you created.

NOTE

The preceding output might not include the compute nodes, also known as
worker nodes, until some CSRs are approved.

2. Review the pending CSRs and ensure that you see the client requests with the Pending or
Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME AGE REQUESTOR CONDITION


csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-
bootstrapper Pending
csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-
bootstrapper Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.

3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:

NOTE

Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After you approve the initial
CSRs, the subsequent node client CSRs are automatically approved by the
cluster kube-controller-manager.


NOTE

For clusters running on platforms that are not machine API enabled, such as bare
metal and other user-provisioned infrastructure, you must implement a method
of automatically approving the kubelet serving certificate requests (CSRs). If a
request is not approved, then the oc exec, oc rsh, and oc logs commands
cannot succeed, because a serving certificate is required when the API server
connects to the kubelet. Any operation that contacts the Kubelet endpoint
requires this certificate approval to be in place. The method must watch for new
CSRs, confirm that the CSR was submitted by the node-bootstrapper service
account in the system:node or system:admin groups, and confirm the identity
of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

NOTE

Some Operators might not become available until some CSRs are approved.

4. Now that your client requests are approved, you must review the server requests for each
machine that you added to the cluster:

$ oc get csr

Example output

NAME AGE REQUESTOR CONDITION


csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal
Pending
csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal
Pending
...

5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for
your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.


To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

6. After all client and server CSRs have been approved, the machines have the Ready status.
Verify this by running the following command:

$ oc get nodes

Example output

NAME STATUS ROLES AGE VERSION


master-0 Ready master 73m v1.20.0
master-1 Ready master 73m v1.20.0
master-2 Ready master 74m v1.20.0
worker-0 Ready worker 11m v1.20.0
worker-1 Ready worker 11m v1.20.0

NOTE

It can take a few minutes after approval of the server CSRs for the machines to
transition to the Ready status.

Additional information

For more information on CSRs, see Certificate Signing Requests .

1.4.25. Verifying a successful installation


Verify that the OpenShift Container Platform installation is complete.

Prerequisites

You have the installation program (openshift-install)

Procedure

On a command line, enter:

$ openshift-install --log-level debug wait-for install-complete

The program outputs the console URL, as well as the administrator’s login information.
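The exact messages vary by cluster, but the end of the output resembles the following sketch; the URL, user, and password shown are placeholders:

INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=<installation_directory>/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.<cluster_name>.<base_domain>
INFO Login to the console with user: kubeadmin, password: <password>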

1.4.26. Next steps


Customize your cluster.

If necessary, you can opt out of remote health reporting .

If you need to enable external access to node ports, configure ingress cluster traffic by using a
node port.


If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .

1.5. INSTALLING A CLUSTER ON OPENSTACK IN A RESTRICTED


NETWORK
In OpenShift Container Platform 4.6, you can install a cluster on Red Hat OpenStack Platform (RHOSP)
in a restricted network by creating an internal mirror of the installation release content.

Prerequisites

Create a mirror registry on your bastion host and obtain the imageContentSources data for
your version of OpenShift Container Platform.

IMPORTANT

Because the installation media is on the bastion host, use that computer to
complete all installation steps.

Review details about the OpenShift Container Platform installation and update processes .

Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version by
consulting the architecture documentation’s list of available platforms. You can also
compare platform support across different versions by viewing the OpenShift Container
Platform on RHOSP support matrix.

Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.

Have the metadata service enabled in RHOSP.

1.5.1. About installations in restricted networks


In OpenShift Container Platform 4.6, you can perform an installation that does not require an active
connection to the Internet to obtain software components. You complete an installation in a restricted
network on only infrastructure that you provision, not infrastructure that the installation program
provisions, so your platform selection is limited.

If you choose to perform a restricted network installation on a cloud platform, you still require access to
its cloud APIs. Some cloud functions, like Amazon Web Service’s IAM service, require Internet access, so
you might still require Internet access. Depending on your network, you might require less Internet
access for an installation on bare metal hardware or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the
OpenShift Container Platform registry and contains the installation media. You can create this registry
on a mirror host, which can access both the Internet and your closed network, or by using other methods
that meet your restrictions.

1.5.1.1. Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

The ClusterVersion status includes an Unable to retrieve available updates error.


By default, you cannot use the contents of the Developer Catalog because you cannot access
the required image stream tags.

1.5.2. Resource guidelines for installing OpenShift Container Platform on RHOSP


To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP)
quota must meet the following requirements:

Table 1.21. Recommended resources for a default OpenShift Container Platform cluster on RHOSP

Resource Value

Floating IP addresses 3

Ports 15

Routers 1

Subnets 1

RAM 112 GB

vCPUs 28

Volume storage 275 GB

Instances 7

Security groups 3

Security group rules 60

A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.

IMPORTANT

If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.

NOTE

By default, your security group and security group rule quotas might be low. If you
encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60
<project> as an administrator to increase them.

An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.


1.5.2.1. Control plane and compute machines

By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.

Each machine requires:

An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

TIP

Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.

1.5.2.2. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.

The bootstrap machine requires:

An instance from the RHOSP quota

A port from the RHOSP quota

A flavor with at least 16 GB memory, 4 vCPUs, and 25 GB storage space

1.5.3. Internet and Telemetry access for OpenShift Container Platform


In OpenShift Container Platform 4.6, you require access to the Internet to install your cluster. The
Telemetry service, which runs by default to provide metrics about cluster health and the success of
updates, also requires Internet access. If your cluster is connected to the Internet, Telemetry runs
automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM) .

Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:

Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.

Access Quay.io to obtain the packages that are required to install your cluster.

Obtain the packages that are required to perform cluster updates.


IMPORTANT

If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.

1.5.4. Enabling Swift on RHOSP


Swift is operated by a user account with the swiftoperator role. Add the role to an account before you
run the installation program.

IMPORTANT

If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as
Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it
is unavailable, the installation program relies on the RHOSP block storage service,
commonly known as Cinder.

If Swift is present and you want to use it, you must enable access to it. If it is not present,
or if you do not want to use it, skip this section.

Prerequisites

You have a RHOSP administrator account on the target environment.

The Swift service is installed.

On Ceph RGW, the account in url option is enabled.

Procedure
To enable Swift on RHOSP:

1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will
access Swift:

$ openstack role add --user <user> --project <project> swiftoperator

Your RHOSP deployment can now use Swift for the image registry.

1.5.5. Defining parameters for the installation program


The OpenShift Container Platform installation program relies on a file that is called clouds.yaml. The
file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project
name, login information, and authorization service URLs.

Procedure

1. Create the clouds.yaml file:

If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.


IMPORTANT

Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.

If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.

clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'

2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:

a. Copy the certificate authority file to your machine.

b. Add the machine to the certificate authority trust bundle:

$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/

c. Update the trust bundle:

$ sudo update-ca-trust extract

d. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-
accessible path to the CA certificate:

clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"


TIP

After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a
command line, run:

$ oc edit configmap -n openshift-config cloud-provider-config

3. Place the clouds.yaml file in one of the following locations:

a. The value of the OS_CLIENT_CONFIG_FILE environment variable

b. The current directory

c. A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml

d. A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml


The installation program searches for clouds.yaml in that order.
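For example, to point the installation program at a clouds.yaml file that is stored outside of these default locations, export the environment variable before you run the installer; the path shown is an example only:

$ export OS_CLIENT_CONFIG_FILE=/home/<user>/openstack/clouds.yaml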

1.5.6. Creating the RHCOS image for restricted network installations


Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container
Platform on a restricted-network Red Hat OpenStack Platform (RHOSP) environment.

Prerequisites

Obtain the OpenShift Container Platform installation program. For a restricted network
installation, the program is on your bastion host.

Procedure

1. Log in to the Red Hat Customer Portal’s Product Downloads page .

2. Under Version, select the most recent release of OpenShift Container Platform 4.6 for RHEL 8.

IMPORTANT

The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available.

3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW).

4. Decompress the image.


NOTE

You must decompress the RHOSP image before the cluster can use it. The
name of the downloaded file might not contain a compression extension, like .gz
or .tgz. To find out if or how the file is compressed, in a command line, enter:

$ file <name_of_downloaded_file>
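If the file command reports gzip data, you can decompress the image with standard tooling; the file name is an example only:

$ gzip -d rhcos-<version>-openstack.x86_64.qcow2.gz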

5. Upload the image that you decompressed to a location that is accessible from the bastion
server, like Glance. For example:

$ openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-${RHCOS_VERSION}

IMPORTANT

Depending on your RHOSP environment, you might be able to upload the image
in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.

CAUTION

If the installation program finds multiple images with the same name, it chooses one of them at
random. To avoid this behavior, create unique names for resources in RHOSP.

The image is now available for a restricted installation. Note the image name or location for use in
OpenShift Container Platform deployment.

1.5.7. Creating the installation configuration file


You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack
Platform (RHOSP).

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster. For a restricted network installation, these files are on your bastion host.

Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible
location.

Have the imageContentSources values that were generated during mirror registry creation.

Procedure

1. Create the install-config.yaml file.

a. Change to the directory that contains the installation program and run the following
command:

$ ./openshift-install create install-config --dir=<installation_directory> 1

1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.


IMPORTANT

Specify an empty directory. Some installation assets, like bootstrap X.509


certificates have short expiration intervals, so you must not reuse an
installation directory. If you want to reuse individual files from another cluster
installation, you can copy them into your directory. However, the file names
for the installation assets might change between releases. Use caution when
copying installation files from an earlier OpenShift Container Platform
version.

b. At the prompts, provide the configuration details for your cloud:

i. Optional: Select an SSH key to use to access your cluster machines.

NOTE

For production OpenShift Container Platform clusters on which you want


to perform installation debugging or disaster recovery, specify an SSH
key that your ssh-agent process uses.

ii. Select openstack as the platform to target.

iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.

iv. Specify the floating IP address to use for external access to the OpenShift API.

v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.

vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.

vii. Enter a name for your cluster. The name must be 14 or fewer characters long.

viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.

2. In install-config.yaml, set the value of platform.openstack.clusterOSImage to the image


location or name. For example:

platform:
  openstack:
    clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d

3. Edit the install-config.yaml file to provide the additional information that is required for an
installation in a restricted network.

a. Update the pullSecret value to contain the authentication information for your registry:

pullSecret: '{"auths":{"<bastion_host_name>:5000": {"auth": "<credentials>","email":


"[email protected]"}}}'


For <bastion_host_name>, specify the registry domain name that you specified in the
certificate for your mirror registry, and for <credentials>, specify the base64-encoded user
name and password for your mirror registry.

b. Add the additionalTrustBundle parameter and value.

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

The value must be the contents of the certificate file that you used for your mirror registry,
which can be an existing, trusted certificate authority or the self-signed certificate that you
generated for the mirror registry.

c. Add the image content resources, which look like this excerpt:

imageContentSources:
- mirrors:
  - <bastion_host_name>:5000/<repo_name>/release
  source: quay.example.com/openshift-release-dev/ocp-release
- mirrors:
  - <bastion_host_name>:5000/<repo_name>/release
  source: registry.example.com/ocp/release

To complete these values, use the imageContentSources that you recorded during mirror
registry creation.

4. Make any other modifications to the install-config.yaml file that you require. You can find more
information about the available parameters in the Installation configuration parameters
section.

5. Back up the install-config.yaml file so that you can use it to install multiple clusters.

IMPORTANT

The install-config.yaml file is consumed during the installation process. If you


want to reuse the file, you must back it up now.
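A plain copy is sufficient for the backup, for example:

$ cp install-config.yaml install-config.yaml.backup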

1.5.7.1. Installation configuration parameters

Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.

Table 1.22. Required parameters


Parameter: apiVersion
Description: The API version for the install-config.yaml content. The current version is v1. The installer may also support older API versions.
Values: String

Parameter: baseDomain
Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
Values: A fully-qualified domain or subdomain name, such as example.com.

Parameter: metadata
Description: Kubernetes resource ObjectMeta, from which only the name parameter is consumed.
Values: Object

Parameter: metadata.name
Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
Values: String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 or fewer characters long.

Parameter: platform
Description: The configuration for the specific platform upon which to perform the installation: aws, baremetal, azure, openstack, ovirt, vsphere. For additional information about platform.<platform> parameters, consult the following table for your specific platform.
Values: Object

Parameter: pullSecret
Description: Get this pull secret from https://cloud.redhat.com/openshift/install/pull-secret to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
Values:

{
  "auths":{
    "cloud.openshift.com":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    },
    "quay.io":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    }
  }
}

Table 1.23. Optional parameters

Parameter: additionalTrustBundle
Description: A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
Values: String

Parameter: compute
Description: The configuration for the machines that comprise the compute nodes.
Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

Parameter: compute.architecture
Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
Values: String

Parameter: compute.hyperthreading
Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Values: Enabled or Disabled

Parameter: compute.name
Description: Required if you use compute. The name of the machine pool.
Values: worker

Parameter: compute.platform
Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

Parameter: compute.replicas
Description: The number of compute machines, which are also known as worker machines, to provision.
Values: A positive integer greater than or equal to 2. The default value is 3.

Parameter: controlPlane
Description: The configuration for the machines that comprise the control plane.
Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

Parameter: controlPlane.architecture
Description: Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default).
Values: String

Parameter: controlPlane.hyperthreading
Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
Values: Enabled or Disabled

Parameter: controlPlane.name
Description: Required if you use controlPlane. The name of the machine pool.
Values: master

Parameter: controlPlane.platform
Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

Parameter: controlPlane.replicas
Description: The number of control plane machines to provision.
Values: The only supported value is 3, which is the default value.

Parameter: credentialsMode
Description: The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. NOTE: Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content.
Values: Mint, Passthrough, Manual, or an empty string ("").

Parameter: fips
Description: Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Values: false or true

Parameter: imageContentSources
Description: Sources and repositories for the release-image content.
Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.

Parameter: imageContentSources.source
Description: Required if you use imageContentSources. Specify the repository that users refer to, for example, in image pull specifications.
Values: String

Parameter: imageContentSources.mirrors
Description: Specify one or more repositories that may also contain the same images.
Values: Array of strings

Parameter: networking
Description: The configuration for the pod network provider in the cluster.
Values: Object

Parameter: networking.clusterNetwork
Description: The IP address pools for pods. The default is 10.128.0.0/14 with a host prefix of /23.
Values: Array of objects

Parameter: networking.clusterNetwork.cidr
Description: Required if you use networking.clusterNetwork. The IP block address pool.
Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

Parameter: networking.clusterNetwork.hostPrefix
Description: Required if you use networking.clusterNetwork. The prefix size to allocate to each node from the CIDR. For example, 24 would allocate 2^8=256 addresses to each node.
Values: Integer

Parameter: networking.machineNetwork
Description: The IP address pools for machines.
Values: Array of objects

Parameter: networking.machineNetwork.cidr
Description: Required if you use networking.machineNetwork. The IP block address pool. The default is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default is 192.168.126.0/24.
Values: IP network. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

Parameter: networking.networkType
Description: The type of network to install. The default is OpenShiftSDN.
Values: String

Parameter: networking.serviceNetwork
Description: The IP address pools for services. The default is 172.30.0.0/16.
Values: Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.

Parameter: publish
Description: How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API and OpenShift routes.
Values: Internal or External. To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal. The default value is External.

Parameter: sshKey
Description: The SSH key or keys to authenticate access to your cluster machines. NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Values: One or more keys. For example:

sshKey:
  <key1>
  <key2>
  <key3>
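
For reference, a networking stanza that uses the default values described in this table would look like the following sketch:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN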

Table 1.24. Additional Red Hat OpenStack Platform (RHOSP) parameters

Parameter: compute.platform.openstack.rootVolume.size
Description: For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
Values: Integer, for example 30.

Parameter: compute.platform.openstack.rootVolume.type
Description: For compute machines, the root volume's type.
Values: String, for example performance.

Parameter: controlPlane.platform.openstack.rootVolume.size
Description: For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage.
Values: Integer, for example 30.

Parameter: controlPlane.platform.openstack.rootVolume.type
Description: For control plane machines, the root volume's type.
Values: String, for example performance.

Parameter: platform.openstack.cloud
Description: The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file.
Values: String, for example MyCloud.

Parameter: platform.openstack.externalNetwork
Description: The RHOSP external network name to be used for installation.
Values: String, for example external.

Parameter: platform.openstack.computeFlavor
Description: The RHOSP flavor to use for control plane and compute machines.
Values: String, for example m1.xlarge.
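
Taken together, these required RHOSP values might appear in install-config.yaml as in the following sketch; the cloud, network, and flavor names are example values only:

platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge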

Table 1.25. Optional RHOSP parameters

Parameter: compute.platform.openstack.additionalNetworkIDs
Description: Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.
Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

Parameter: compute.platform.openstack.additionalSecurityGroupIDs
Description: Additional security groups that are associated with compute machines.
Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

Parameter: compute.platform.openstack.zones
Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
Values: A list of strings. For example, ["zone-1", "zone-2"].

Parameter: controlPlane.platform.openstack.additionalNetworkIDs
Description: Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.
Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

Parameter: controlPlane.platform.openstack.additionalSecurityGroupIDs
Description: Additional security groups that are associated with control plane machines.
Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

Parameter: controlPlane.platform.openstack.zones
Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property.
Values: A list of strings. For example, ["zone-1", "zone-2"].

Parameter: platform.openstack.clusterOSImage
Description: The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.
Values: An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.

Parameter: platform.openstack.defaultMachinePlatform
Description: The default machine pool platform configuration.
Values:

{
  "type": "ml.large",
  "rootVolume": {
    "size": 30,
    "type": "performance"
  }
}

Parameter: platform.openstack.ingressFloatingIP
Description: An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property.
Values: An IP address, for example 128.0.0.1.

Parameter: platform.openstack.lbFloatingIP
Description: An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property.
Values: An IP address, for example 128.0.0.1.

Parameter: platform.openstack.externalDNS
Description: IP addresses for external DNS servers that cluster instances use for DNS resolution.
Values: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

Parameter: platform.openstack.machinesSubnet
Description: The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet. If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP.
Values: A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
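
For example, when you deploy to an existing subnet, the first networking.machineNetwork entry must match the subnet that machinesSubnet identifies. In the following sketch, both the CIDR and the UUID are placeholders for your own subnet's values:

networking:
  machineNetwork:
  - cidr: 192.0.2.0/24
platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf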

1.5.7.2. Sample customized install-config.yaml file for restricted OpenStack installations

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform
(RHOSP) customization options.


IMPORTANT

This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.

apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    region: region1
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: registry.svc.ci.openshift.org/ocp/release


1.5.8. Generating an SSH private key and adding it to the agent


If you want to perform installation debugging or disaster recovery on your cluster, you must provide an
SSH key to both your ssh-agent and the installation program. You can use this key to access the
bootstrap machine in a public cluster to troubleshoot installation issues.

NOTE

In a production environment, you require disaster recovery and debugging.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.

NOTE

You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.

Procedure

1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:

$ ssh-keygen -t ed25519 -N '' \
    -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.

Running this command generates an SSH key that does not require a password in the location
that you specified.
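
Optionally, display the public key; this is the value that you later provide to the installation program as sshKey:

$ cat <path>/<file_name>.pub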

2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa

167
OpenShift Container Platform 4.6 Installing on OpenStack

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation
program.

1.5.9. Enabling access to the environment


At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack
Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP
deployments.

You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.

1.5.9.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and
cluster applications.

Procedure

1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>"


<external_network>

2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:

$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>"


<external_network>

3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:

api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>

NOTE

If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.

4. Add the FIPs to the install-config.yaml file as the values of the following parameters:

platform.openstack.ingressFloatingIP

platform.openstack.lbFloatingIP

If you use these values, you must also enter an external network as the value of the
platform.openstack.externalNetwork parameter in the install-config.yaml file.
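
For example, the resulting section of install-config.yaml might look like the following sketch; the network name and floating IP addresses are placeholders for the values that you created:

platform:
  openstack:
    externalNetwork: external
    lbFloatingIP: 128.0.0.1
    ingressFloatingIP: 128.0.0.2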

TIP

You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.

1.5.9.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses.

In the install-config.yaml file, do not define the following parameters:

platform.openstack.ingressFloatingIP

platform.openstack.lbFloatingIP

If you cannot provide an external network, you can also leave platform.openstack.externalNetwork
blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for
you, and, without additional action, the installer will fail to retrieve an image from Glance. You must
configure external connectivity on your own.

If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP
addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use
a proxy network or run the installer from a system that is on the same network as your machines.

NOTE

You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:

api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>

If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.

1.5.10. Deploying the cluster


You can install OpenShift Container Platform on a compatible cloud platform.

IMPORTANT

You can run the create cluster command of the installation program only once, during
initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.

Procedure

169
OpenShift Container Platform 4.6 Installing on OpenStack

1. Change to the directory that contains the installation program and initialize the cluster
deployment:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.

2 To view different installation details, specify warn, debug, or error instead of info.

NOTE

If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s

NOTE

The cluster access and credential information also outputs to
<installation_directory>/.openshift_install.log when an installation succeeds.
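
If you want to follow the installation as it progresses, you can watch the same log file from another terminal, for example:

$ tail -f <installation_directory>/.openshift_install.log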

IMPORTANT

The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.

IMPORTANT

You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.


1.5.11. Verifying cluster status


You can verify your OpenShift Container Platform cluster’s status during or after installation.

Procedure

1. In the cluster environment, export the administrator’s kubeconfig file:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

The kubeconfig file contains information about the cluster that is used by the CLI to connect a
client to the correct cluster and API server.

2. View the control plane and compute machines created after a deployment:

$ oc get nodes

3. View your cluster’s version:

$ oc get clusterversion

4. View your Operators' status:

$ oc get clusteroperator

5. View all running pods in the cluster:

$ oc get pods -A

1.5.12. Logging in to the cluster by using the CLI


You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1


1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2. Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

Next steps

Customize your cluster.

If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by
configuring additional trust stores.

If necessary, you can opt out of remote health reporting.

Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.

If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses.

1.6. UNINSTALLING A CLUSTER ON OPENSTACK


You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP).

1.6.1. Removing a cluster that uses installer-provisioned infrastructure


You can remove a cluster that uses installer-provisioned infrastructure from your cloud.

Prerequisites

Have a copy of the installation program that you used to deploy the cluster.

Have the files that the installation program generated when you created your cluster.

Procedure

1. From the directory that contains the installation program on the computer that you used to
install the cluster, run the following command:

$ ./openshift-install destroy cluster \
    --dir=<installation_directory> --log-level=info 1 2

1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.

2 To view different details, specify warn, debug, or error instead of info.

NOTE

You must specify the directory that contains the cluster definition files for your
cluster. The installation program requires the metadata.json file in this directory
to delete the cluster.

2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform
installation program.
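
For example, on a Linux system:

$ rm -rf <installation_directory>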

1.7. UNINSTALLING A CLUSTER ON RHOSP FROM YOUR OWN INFRASTRUCTURE

You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-
provisioned infrastructure.

1.7.1. Downloading playbook dependencies


The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require
several Python modules. On the machine where you will run the process, add the modules' repositories
and then download them.

NOTE

These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.

Prerequisites

Python 3 is installed on your machine.

Procedure

1. On a command line, add the repositories:

a. Register with Red Hat Subscription Manager:

$ sudo subscription-manager register # If not done already

b. Attach a subscription that provides the required repositories:

$ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already

c. Disable the current repositories:

$ sudo subscription-manager repos --disable=* # If not done already

d. Add the required repositories:

$ sudo subscription-manager repos \


--enable=rhel-8-for-x86_64-baseos-rpms \
--enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
--enable=ansible-2.9-for-rhel-8-x86_64-rpms \
--enable=rhel-8-for-x86_64-appstream-rpms


2. Install the modules:

$ sudo yum install python3-openstackclient ansible python3-openstacksdk

3. Ensure that the python command points to python3:

$ sudo alternatives --set python /usr/bin/python3
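
Optionally, verify that python now resolves to Python 3:

$ python --version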

1.7.2. Removing a cluster from RHOSP that uses your own infrastructure

You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP)
that uses your own infrastructure. To complete the removal process quickly, run several Ansible
playbooks.

Prerequisites

Python 3 is installed on your machine.

You downloaded the modules in "Downloading playbook dependencies."

You have the playbooks that you used to install the cluster.

You modified the playbooks that are prefixed with down- to reflect any changes that you made
to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file
are reflected in the down-bootstrap.yaml file.

All of the playbooks are in a common directory.

Procedure

1. On a command line, run the playbooks that you downloaded:

$ ansible-playbook -i inventory.yaml \
down-bootstrap.yaml \
down-control-plane.yaml \
down-compute-nodes.yaml \
down-load-balancers.yaml \
down-network.yaml \
down-security-groups.yaml

2. Remove any DNS record changes you made for the OpenShift Container Platform installation.
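
For example, if you added A records for the API and Ingress endpoints during installation, the records to remove follow the patterns shown earlier in this chapter:

api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>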

OpenShift Container Platform is removed from your infrastructure.
