OpenShift Container Platform-4.6-Installing On OpenStack-en-US
Installing on OpenStack
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This document provides instructions for installing and uninstalling OpenShift Container Platform
clusters on Red Hat OpenStack Platform (RHOSP).
Table of Contents

CHAPTER 1. INSTALLING ON OPENSTACK
1.1. INSTALLING A CLUSTER ON OPENSTACK WITH CUSTOMIZATIONS
1.1.1. Prerequisites
1.1.2. Resource guidelines for installing OpenShift Container Platform on RHOSP
1.1.2.1. Control plane and compute machines
1.1.2.2. Bootstrap machine
1.1.3. Internet and Telemetry access for OpenShift Container Platform
1.1.4. Enabling Swift on RHOSP
1.1.5. Verifying external network access
1.1.6. Defining parameters for the installation program
1.1.7. Obtaining the installation program
1.1.8. Creating the installation configuration file
1.1.9. Installation configuration parameters
1.1.9.1. Custom subnets in RHOSP deployments
1.1.9.2. Sample customized install-config.yaml file for RHOSP
1.1.10. Generating an SSH private key and adding it to the agent
1.1.11. Enabling access to the environment
1.1.11.1. Enabling access with floating IP addresses
1.1.11.2. Completing installation without floating IP addresses
1.1.12. Deploying the cluster
1.1.13. Verifying cluster status
1.1.14. Logging in to the cluster by using the CLI
1.1.15. Next steps
1.2. INSTALLING A CLUSTER ON OPENSTACK WITH KURYR
1.2.1. Prerequisites
1.2.2. About Kuryr SDN
1.2.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr
1.2.3.1. Increasing quota
1.2.3.2. Configuring Neutron
1.2.3.3. Configuring Octavia
1.2.3.3.1. The Octavia OVN Driver
1.2.3.4. Known limitations of installing with Kuryr
RHOSP general limitations
RHOSP version limitations
RHOSP environment limitations
RHOSP upgrade limitations
1.2.3.5. Control plane and compute machines
1.2.3.6. Bootstrap machine
1.2.4. Internet and Telemetry access for OpenShift Container Platform
1.2.5. Enabling Swift on RHOSP
1.2.6. Verifying external network access
1.2.7. Defining parameters for the installation program
1.2.8. Obtaining the installation program
1.2.9. Creating the installation configuration file
1.2.10. Installation configuration parameters
1.2.10.1. Custom subnets in RHOSP deployments
1.2.10.2. Sample customized install-config.yaml file for RHOSP with Kuryr
1.2.11. Generating an SSH private key and adding it to the agent
1.2.12. Enabling access to the environment
1.2.12.1. Enabling access with floating IP addresses
1.2.12.2. Completing installation without floating IP addresses
1.4.4. Internet and Telemetry access for OpenShift Container Platform
1.4.5. Downloading playbook dependencies
1.4.6. Downloading the installation playbooks
1.4.7. Obtaining the installation program
1.4.8. Generating an SSH private key and adding it to the agent
1.4.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image
1.4.10. Verifying external network access
1.4.11. Enabling access to the environment
1.4.11.1. Enabling access with floating IP addresses
1.4.11.2. Completing installation without floating IP addresses
1.4.12. Defining parameters for the installation program
1.4.13. Creating the installation configuration file
1.4.14. Installation configuration parameters
1.4.14.1. Custom subnets in RHOSP deployments
1.4.14.2. Sample customized install-config.yaml file for RHOSP with Kuryr
1.4.14.3. Setting a custom subnet for machines
1.4.14.4. Emptying compute machine pools
1.4.14.5. Modifying the network type
1.4.15. Creating the Kubernetes manifest and Ignition config files
1.4.16. Preparing the bootstrap Ignition files
1.4.17. Creating control plane Ignition config files on RHOSP
1.4.18. Creating network resources on RHOSP
1.4.19. Creating the bootstrap machine on RHOSP
1.4.20. Creating the control plane machines on RHOSP
1.4.21. Logging in to the cluster by using the CLI
1.4.22. Deleting bootstrap resources from RHOSP
1.4.23. Creating compute machines on RHOSP
1.4.24. Approving the certificate signing requests for your machines
1.4.25. Verifying a successful installation
1.4.26. Next steps
1.5. INSTALLING A CLUSTER ON OPENSTACK IN A RESTRICTED NETWORK
1.5.1. About installations in restricted networks
1.5.1.1. Additional limits
1.5.2. Resource guidelines for installing OpenShift Container Platform on RHOSP
1.5.2.1. Control plane and compute machines
1.5.2.2. Bootstrap machine
1.5.3. Internet and Telemetry access for OpenShift Container Platform
1.5.4. Enabling Swift on RHOSP
1.5.5. Defining parameters for the installation program
1.5.6. Creating the RHCOS image for restricted network installations
1.5.7. Creating the installation configuration file
1.5.7.1. Installation configuration parameters
1.5.7.2. Sample customized install-config.yaml file for restricted OpenStack installations
1.5.8. Generating an SSH private key and adding it to the agent
1.5.9. Enabling access to the environment
1.5.9.1. Enabling access with floating IP addresses
1.5.9.2. Completing installation without floating IP addresses
1.5.10. Deploying the cluster
1.5.11. Verifying cluster status
1.5.12. Logging in to the cluster by using the CLI
1.6. UNINSTALLING A CLUSTER ON OPENSTACK
1.6.1. Removing a cluster that uses installer-provisioned infrastructure
1.7. UNINSTALLING A CLUSTER ON RHOSP FROM YOUR OWN INFRASTRUCTURE

CHAPTER 1. INSTALLING ON OPENSTACK

1.1. INSTALLING A CLUSTER ON OPENSTACK WITH CUSTOMIZATIONS
1.1.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version in the
Available platforms section. You can also compare platform support across different
versions by viewing the OpenShift Container Platform on RHOSP support matrix.
Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.
Have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift).
Object storage is the recommended storage technology for the OpenShift Container Platform
image registry. For more information, see Optimizing storage.
1.1.2. Resource guidelines for installing OpenShift Container Platform on RHOSP

To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements:

Table 1.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP

Resource                Value
Floating IP addresses   3
Ports                   15
Routers                 1
Subnets                 1
RAM                     112 GB
vCPUs                   28
Instances               7
Security groups         3
A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.
IMPORTANT
If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.
NOTE
By default, your security group and security group rule quotas might be low. If you
encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60
<project> as an administrator to increase them.
An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.
1.1.2.1. Control plane and compute machines

By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.
TIP
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.
1.1.2.2. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.
1.1.3. Internet and Telemetry access for OpenShift Container Platform
If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM). After you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.
1.1.4. Enabling Swift on RHOSP

IMPORTANT

If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as
Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it
is unavailable, the installation program relies on the RHOSP block storage service,
commonly known as Cinder.
If Swift is present and you want to use it, you must enable access to it. If it is not present,
or if you do not want to use it, skip this section.
Prerequisites
Procedure
To enable Swift on RHOSP:
1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will
access Swift:
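The role is granted with a single identity command; for example, with placeholder user and project names:

$ openstack role add --user <user> --project <project> swiftoperator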
Your RHOSP deployment can now use Swift for the image registry.
1.1.5. Verifying external network access

Prerequisites
Configure OpenStack’s networking service to have DHCP agents forward instances' DNS
queries
Procedure
1. Using the RHOSP CLI, verify the name and ID of the 'External' network:
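A listing restricted to the relevant columns, using standard openstack CLI column selection, looks like this:

$ openstack network list --long -c ID -c Name -c "Router Type"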
Example output
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
At least one network with an External router type must appear in the network list. If none does, see Creating
a default floating IP network and Creating a default provider network.
IMPORTANT
If the external network’s CIDR range overlaps one of the default network ranges, you
must change the matching network ranges in the install-config.yaml file before you start
the installation process.
Network           Range
machineNetwork    10.0.0.0/16
serviceNetwork    172.30.0.0/16
clusterNetwork    10.128.0.0/14
WARNING
If the installation program finds multiple networks with the same name, it sets one
of them at random. To avoid this behavior, create unique names for resources in
RHOSP.
NOTE
If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .
1.1.6. Defining parameters for the installation program

Procedure
If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
IMPORTANT
Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.
If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:
d. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-
accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
TIP
After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a
command line, run:
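For example, the following standard oc command opens that config map, which lives in the openshift-config namespace, for editing:

$ oc edit configmap -n openshift-config cloud-provider-config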
1.1.7. Obtaining the installation program
Prerequisites
You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.
IMPORTANT
Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.
4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
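Assuming the archive name used on the download page for Linux, the extraction command is:

$ tar xvf openshift-install-linux.tar.gz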
5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
1.1.8. Creating the installation configuration file

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
a. Change to the directory that contains the installation program and run the following
command:
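The command takes only the target directory; a typical invocation is:

$ ./openshift-install create install-config --dir=<installation_directory> 1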
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.
iv. Specify the floating IP address to use for external access to the OpenShift API.
v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.
vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.
vii. Enter a name for your cluster. The name must be 14 or fewer characters long.
viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
1.1.9. Installation configuration parameters

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.
baseDomain
    Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata.name
    Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

compute
    Description: The configuration for the machines that comprise the compute nodes.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.
IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

compute.platform
    Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    Description: The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    Description: The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.
IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

controlPlane.platform
    Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
    Description: The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.

imageContentSources
    Description: Sources and repositories for the release-image content.
    Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.
networking.serviceNetwork
    Description: The IP address pools for services. The default is 172.30.0.0/16.
    Values: Array of IP networks. IP networks are represented as strings using Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.
sshKey
    Description: The SSH key or keys to authenticate access to your cluster machines.

    NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

    Values: One or more keys. For example:

    sshKey:
      <key1>
      <key2>
      <key3>
platform.openstack.computeFlavor
    Description: The RHOSP flavor to use for control plane and compute machines.
    Values: String, for example m1.xlarge.

compute.platform.openstack.additionalNetworkIDs
    Description: Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.
    Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs
    Description: Additional security groups that are associated with compute machines.
    Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones
    Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.
    Values: A list of strings. For example, ["zone-1", "zone-2"].
controlPlane.platform.openstack.additionalNetworkIDs
    Description: Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.
    Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs
    Description: Additional security groups that are associated with control plane machines.
    Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones
    Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.
    Values: A list of strings. For example, ["zone-1", "zone-2"].

platform.openstack.clusterOSImage
    Description: The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.
    Values: An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.
platform.openstack.externalDNS
    Description: IP addresses for external DNS servers that cluster instances use for DNS resolution.
    Values: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet
    Description: The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.
    Values: A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
1.1.9.1. Custom subnets in RHOSP deployments

Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice.
The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-
config.yaml file.
This subnet is used as the cluster’s primary subnet; nodes and ports are created on it.
Before you run the OpenShift Container Platform installer with a custom subnet, verify that:
You can provide installer credentials that have permission to create ports on the target
network.
If your network configuration requires a router, it is created in RHOSP. Some configurations rely
on routers for floating IP address translation.
Your network configuration does not rely on a provider network. Provider networks are not
supported.
NOTE
By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s
CIDR block. To override these default values, set values for platform.openstack.apiVIP
and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
1.1.9.2. Sample customized install-config.yaml file for RHOSP

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform
(RHOSP) customization options.
IMPORTANT
This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.
apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: m1.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
1.1.10. Generating an SSH private key and adding it to the agent

NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
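One common invocation, matching the ~/.ssh/id_rsa path referenced in the callout below (an ed25519 key works equally well):

$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1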
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
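If the ssh-agent process is not already running for your local user, start it as a background task, and then add your SSH private key to the agent:

$ eval "$(ssh-agent -s)"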
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
1.1.11. Enabling access to the environment

You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.
1.1.11.1. Enabling access with floating IP addresses

Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and
cluster applications.
Procedure
1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
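For example, with placeholder names for the cluster, base domain, and external network:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>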
2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
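The same command form is used, with a description that identifies the Ingress FIP:

$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>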
3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
NOTE
If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.
4. Add the FIPs to the install-config.yaml file as the values of the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.lbFloatingIP
If you use these values, you must also enter an external network as the value of the
platform.openstack.externalNetwork parameter in the install-config.yaml file.
TIP
You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.
1.1.11.2. Completing installation without floating IP addresses

You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses.
In the install-config.yaml file, do not define the following parameters:

platform.openstack.ingressFloatingIP

platform.openstack.lbFloatingIP
If you cannot provide an external network, you can also leave platform.openstack.externalNetwork
blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for
you, and, without additional action, the installer will fail to retrieve an image from Glance. You must
configure external connectivity on your own.
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP
addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use
a proxy network or run the installer from a system that is on the same network as your machines.
NOTE
You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.
1.1.12. Deploying the cluster

IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Change to the directory that contains the installation program and initialize the cluster
deployment:
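A representative invocation follows; the two callouts are explained below:

$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.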
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
1.1.13. Verifying cluster status

Procedure

1. In the cluster environment, export the administrator's kubeconfig:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a
client to the correct cluster and API server.
2. View the control plane and compute machines created after a deployment:

$ oc get nodes

3. View your cluster's version:

$ oc get clusterversion

4. View your Operators' status:

$ oc get clusteroperator

5. View all running pods in the cluster:

$ oc get pods -A
1.1.14. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.
Prerequisites

You deployed an OpenShift Container Platform cluster.

You installed the oc CLI.

Procedure

1. Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
1.1.15. Next steps

If you need to enable external access to node ports, configure ingress cluster traffic by using a
node port.
If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .
1.2. INSTALLING A CLUSTER ON OPENSTACK WITH KURYR

1.2.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version in the
Available platforms section. You can also compare platform support across different
versions by viewing the OpenShift Container Platform on RHOSP support matrix.
Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.
Have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift).
Object storage is the recommended storage technology for the OpenShift Container Platform
image registry. For more information, see Optimizing storage.
1.2.2. About Kuryr SDN

Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container
Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging
OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between
pods and RHOSP virtual instances.
Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr
namespace:

kuryr-controller - a single service instance installed on a master node. This is modeled in
OpenShift Container Platform as a Deployment object.

kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift
Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet
object.
The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and
namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to
corresponding objects in Neutron and Octavia. This means that every network solution that implements
the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This
includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as
Neutron-compatible commercial SDNs.
Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant
networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform
SDN over an RHOSP network.
If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double
encapsulation. The performance benefit is negligible. Depending on your configuration, though, using
Kuryr to avoid having two overlays might still be beneficial.
Kuryr is not recommended in deployments where all of the following criteria are true:

The RHOSP version is less than 16.

The deployment uses UDP services, or a large number of TCP services on few hypervisors.

or

The ovn-octavia Octavia driver is disabled.

The deployment uses a large number of TCP services on few hypervisors.
1.2.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr
When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from
the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional
requirements on top of what a default install requires.
Table 1.6. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr

Resource    Value
Routers     1
RAM         112 GB
vCPUs       28
Instances   7
A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.
IMPORTANT
If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.
IMPORTANT
If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora
driver rather than the OVN Octavia driver, security groups are associated with service
accounts instead of user projects.
The number of ports that are required is larger than the number of pods. Kuryr uses port pools
to have pre-created ports ready to be used by pods and speed up the pods' booting time.
Each network policy is mapped into an RHOSP security group, and depending on the
NetworkPolicy spec, one or more rules are added to the security group.
Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating
the number of security groups required for the quota.
If you are using RHOSP version 15 or earlier, or the ovn-octavia driver, each load balancer has a
security group with the user project.
The quota does not account for load balancer resources (such as VM resources), but you must
consider these resources when you decide the RHOSP deployment’s size. The default
installation will have more than 50 load balancers; the clusters must be able to accommodate
them.
If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer
VM is generated; services are load balanced through OVN flows.
An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.
To enable Kuryr SDN, your environment must meet the following requirements:

Use the openvswitch firewall driver, not ovs-hybrid, if the ML2/OVS Neutron driver is used.
1.2.3.1. Increasing quota

When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP)
resources used by pods, services, namespaces, and network policies.
Procedure
$ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets
250 --networks 250 <project>
1.2.3.2. Configuring Neutron

Kuryr CNI leverages the Neutron trunks extension to plug containers into the Red Hat OpenStack
Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to work properly.
33
OpenShift Container Platform 4.6 Installing on OpenStack
In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to
openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr
can properly handle network policies.
1.2.3.3. Configuring Octavia

Kuryr SDN uses Red Hat OpenStack Platform (RHOSP) Octavia LBaaS to implement OpenShift
Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use
Kuryr SDN.
To enable Octavia, you must include the Octavia service during the installation of the RHOSP
Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for
enabling Octavia apply to both a clean install of the Overcloud and an Overcloud update.
NOTE
The following steps only capture the key pieces required during the deployment of
RHOSP when dealing with Octavia. It is also important to note that registry methods vary.
Procedure
1. If you are using the local registry, create a template to upload the images to the registry. For
example:
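A template-creation command of roughly the following shape is used for RHOSP 13; the registry namespace, push destination, and output path are environment-specific assumptions:

(undercloud) $ sudo openstack overcloud container image prepare \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  --namespace=registry.access.redhat.com/rhosp13 \
  --push-destination=<local-ip-from-undercloud.conf>:8787 \
  --output-images-file /home/stack/local_registry_images.yaml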
2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:
...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-
45
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
push_destination: <local-ip-from-undercloud.conf>:8787
NOTE
The Octavia container versions vary depending upon the specific RHOSP release
installed.
34
CHAPTER 1. INSTALLING ON OPENSTACK
3. Upload the container images to the local registry. This may take some time depending on the speed of your network and Undercloud disk.
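An upload command of the following shape, assuming the images file produced in the previous step, performs this step:

(undercloud) $ sudo openstack overcloud container image upload \
  --config-file /home/stack/local_registry_images.yaml \
  --verbose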
4. Since an Octavia load balancer is used to access the OpenShift Container Platform API, you
must increase the default timeouts for the listeners' connections. The default timeout is 50
seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud
deploy command:
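Per the RHOSP documentation, the timeout file sets two Heat parameters; the values are in milliseconds (20 minutes = 1200000 ms):

(undercloud) $ cat octavia_timeouts.yaml
parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000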
5. Deploy or update the Overcloud environment with your Octavia customizations:
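A deploy command of this shape passes both the Octavia service environment and the timeout settings; the exact set of -e files is an assumption that varies by installation:

$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  -e octavia_timeouts.yaml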
NOTE
This command only includes the files associated with Octavia; it varies based on
your specific installation of RHOSP. See the RHOSP documentation for further
information. For more information on customizing your Octavia installation, see
installation of Octavia using Director.
NOTE
When leveraging Kuryr SDN, the Overcloud installation requires the Neutron
trunk extension. This is available by default on director deployments. Use the
openvswitch firewall instead of the default ovs-hybrid when the Neutron
backend is ML2/OVS. There is no need for modifications if the backend is
ML2/OVN.
6. In RHOSP versions 13 and 15, add the project ID to the octavia.conf configuration file after you
create the project.
To enforce network policies across services, like when traffic goes through the Octavia load
balancer, you must ensure Octavia creates the Amphora VM security groups on the user
project.
This change ensures that required load balancer security groups belong to that project, and
that they can be updated to enforce services isolation.
NOTE

Octavia implements a new ACL API that restricts access to the load balancer's VIP.
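The project ID shown in the following output can be retrieved with a standard identity query:

$ openstack project show <project>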
Example output
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | |
| domain_id | default |
| enabled | True |
| id | PROJECT_ID |
| is_domain | False |
| name | *<project>* |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
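The controller addresses referenced below come from listing the Overcloud nodes on the undercloud, for example with the standard TripleO credentials file:

$ source ~/stackrc
(undercloud) $ openstack server list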
Example output
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| ID                                   | Name         | Status | Networks              | Image          | Flavor     |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
| dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
iii. SSH into one of the controllers:

$ ssh [email protected]
iv. Edit the octavia.conf file to add the project into the list of projects where Amphora
security groups are on the user’s account.
# List of project IDs that are allowed to have Load balancer security groups
# belonging to them.
amp_secgroup_allowed_projects = PROJECT_ID
NOTE
Depending on your RHOSP environment, Octavia might not support UDP listeners. If you
use Kuryr SDN on RHOSP version 15 or earlier, UDP services are not supported. RHOSP
version 16 or later supports UDP.
1.2.3.3.1. The Octavia OVN Driver

Octavia supports multiple provider drivers when deployed with RHOSP version 16 or later. To see the list of enabled drivers, run openstack loadbalancer provider list on a command line.

Example output
+---------+-------------------------------------------------+
| name | description |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver. |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn | Octavia OVN driver. |
+---------+-------------------------------------------------+
Beginning with RHOSP version 16, the Octavia OVN provider driver (ovn) is supported on OpenShift
Container Platform on RHOSP deployments.
ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load
balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by
Director on deployments that use OVN Neutron ML2.
The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it.
If Kuryr uses ovn instead of Amphora, it offers the following benefits:
Decreased resource requirements. Kuryr does not require a load balancer VM for each service.
Increased service creation speed by using OpenFlow rules instead of a VM for each service.
Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.
You can configure your cluster to use the Octavia OVN driver after your RHOSP cloud is upgraded from
version 13 to version 16.
1.2.3.4. Known limitations of installing with Kuryr

Using OpenShift Container Platform with Kuryr SDN has several known limitations.
RHOSP general limitations

If the machines subnet is not connected to a router, or if the subnet is connected but the router has no
external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.
RHOSP version limitations

RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver
requires that one Amphora load balancer VM is deployed per OpenShift Container Platform
service. Creating too many services can cause you to run out of resources.
Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use
the Amphora driver. They are subject to the same resource concerns as earlier versions of
RHOSP.
Octavia RHOSP versions before 16 do not support UDP listeners. Therefore, OpenShift
Container Platform UDP services are not supported.
Octavia RHOSP versions before 16 cannot listen to multiple protocols on the same port.
Services that expose the same port to different protocols, like TCP and UDP, are not
supported.
RHOSP environment limitations

Because of Octavia's lack of support for the UDP protocol and multiple listeners, if the RHOSP version
is earlier than 16, Kuryr forces pods to use TCP for DNS resolution.
In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only.
In this case, the native Go resolver does not recognize the use-vc option in resolv.conf, which controls
whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails.
To ensure that TCP forcing is allowed, compile applications either with the environment variable
CGO_ENABLED set to 1, i.e. CGO_ENABLED=1, or ensure that the variable is absent.
In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails.
RHOSP upgrade limitations

As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the
Amphora images that are used for load balancers might be required.

If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two
ways:

Upgrade each VM by triggering a load balancer failover.

Leave responsibility for upgrading the VMs to users.
If the operator takes the first option, there might be short downtimes during failovers.
If the operator takes the second option, the existing load balancers will not support upgraded Octavia
API features, like UDP listeners. In this case, users must recreate their Services to use these features.
IMPORTANT
If OpenShift Container Platform detects a new Octavia version that supports UDP load
balancing, it recreates the DNS service automatically. The service recreation ensures that
the service default supports UDP load balancing.
The recreation causes approximately one minute of downtime for the DNS service.
1.2.3.5. Control plane and compute machines

By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.
TIP
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.
1.2.3.6. Bootstrap machine

During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.
1.2.4. Internet and Telemetry access for OpenShift Container Platform
If your cluster is connected to the Internet, Telemetry runs automatically, and your cluster is registered to the Red Hat OpenShift Cluster Manager (OCM). After you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually by using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.

You must have Internet access to:
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.
1.2.5. Enabling Swift on RHOSP

IMPORTANT

If the Red Hat OpenStack Platform (RHOSP) object storage service, commonly known as
Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it
is unavailable, the installation program relies on the RHOSP block storage service,
commonly known as Cinder.
If Swift is present and you want to use it, you must enable access to it. If it is not present,
or if you do not want to use it, skip this section.
Prerequisites
Procedure
To enable Swift on RHOSP:
1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will
access Swift:
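The role is granted with a single identity command; for example, with placeholder user and project names:

$ openstack role add --user <user> --project <project> swiftoperator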
Your RHOSP deployment can now use Swift for the image registry.
1.2.6. Verifying external network access

Prerequisites
Configure OpenStack’s networking service to have DHCP agents forward instances' DNS
queries
Procedure
1. Using the RHOSP CLI, verify the name and ID of the 'External' network:
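A listing restricted to the relevant columns, using standard openstack CLI column selection, looks like this:

$ openstack network list --long -c ID -c Name -c "Router Type"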
Example output
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
At least one network with an External router type must appear in the network list. If none does, see Creating
a default floating IP network and Creating a default provider network.
IMPORTANT
If the external network’s CIDR range overlaps one of the default network ranges, you
must change the matching network ranges in the install-config.yaml file before you start
the installation process.
Network           Range
machineNetwork    10.0.0.0/16
serviceNetwork    172.30.0.0/16
clusterNetwork    10.128.0.0/14
WARNING
If the installation program finds multiple networks with the same name, it sets one
of them at random. To avoid this behavior, create unique names for resources in
RHOSP.
NOTE
If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .
1.2.7. Defining parameters for the installation program

Procedure
If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
IMPORTANT
Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.
If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:
d. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-
accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
TIP
After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a
command line, run:
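For example, the following standard oc command opens that config map, which lives in the openshift-config namespace, for editing:

$ oc edit configmap -n openshift-config cloud-provider-config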
1.2.8. Obtaining the installation program
Prerequisites
You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.
IMPORTANT
Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.
4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
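Assuming the archive name used on the download page for Linux, the extraction command is:

$ tar xvf openshift-install-linux.tar.gz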
5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
1.2.9. Creating the installation configuration file

Prerequisites

Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
a. Change to the directory that contains the installation program and run the following
command:
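The command takes only the target directory; a typical invocation is:

$ ./openshift-install create install-config --dir=<installation_directory> 1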
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.
iv. Specify the floating IP address to use for external access to the OpenShift API.
v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.
vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.
vii. Enter a name for your cluster. The name must be 14 or fewer characters long.
viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
1.2.10. Installation configuration parameters

NOTE

After installation, you cannot modify these parameters in the install-config.yaml file.
baseDomain
    Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.

metadata.name
    Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 characters or fewer.

compute
    Description: The configuration for the machines that comprise the compute nodes.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.
IMPORTANT

If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.

compute.platform
    Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
    Description: The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.

controlPlane
    Description: The configuration for the machines that comprise the control plane.
    Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.
IMPORTANT
If you disable
simultaneous
multithreading, ensure
that your capacity
planning accounts for
the dramatically
decreased machine
performance.
controlPlane.platfor Required if you use controlPlane . aws, azure , gcp , openstack, ovirt,
m Use this parameter to specify the cloud vsphere, or {}
provider that hosts the control plane
machines. This parameter value must
match the compute.platform
parameter value.
controlPlane.replica The number of control plane machines The only supported value is 3, which is
s to provision. the default value.
49
OpenShift Container Platform 4.6 Installing on OpenStack
NOTE
imageContentSourc Sources and repositories for the Array of objects. Includes a source
es release-image content. and, optionally, mirrors, as described
in the following rows of this table.
50
CHAPTER 1. INSTALLING ON OPENSTACK
networking.serviceN The IP address pools for services. The Array of IP networks. IP networks are
etwork default is 172.30.0.0/16. represented as strings using Classless
Inter-Domain Routing (CIDR) notation
with a traditional IP address or network
number, followed by the forward slash
(/) character, followed by a decimal
value between 0 and 32 that describes
the number of significant bits. For
example, 10.0.0.0/16 represents IP
addresses 10.0.0.0 through
10.0.255.255.
51
OpenShift Container Platform 4.6 Installing on OpenStack
sshKey The SSH key or keys to authenticate One or more keys. For example:
access your cluster machines.
sshKey:
NOTE <key1>
<key2>
For production <key3>
OpenShift Container
Platform clusters on
which you want to
perform installation
debugging or disaster
recovery, specify an
SSH key that your
ssh-agent process
uses.
52
CHAPTER 1. INSTALLING ON OPENSTACK
platform.openst The RHOSP flavor to use for String, for example m1.xlarge.
ack.computeFla control plane and compute
vor machines.
compute.platfor Additional networks that are A list of one or more UUIDs as strings. For example,
m.openstack.ad associated with compute fa806b2f-ac49-4bce-b9db-124bc64209bf.
ditionalNetworkI machines. Allowed address
Ds pairs are not created for
additional networks.
compute.platfor Additional security groups A list of one or more UUIDs as strings. For example,
m.openstack.ad that are associated with 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.
ditionalSecurity compute machines.
GroupIDs
compute.platfor RHOSP Compute (Nova) A list of strings. For example, ["zone-1", "zone-2"].
m.openstack.zo availability zones (AZs) to
nes install machines on. If this
parameter is not set, the
installer relies on the default
settings for Nova that the
RHOSP administrator
configured.
53
OpenShift Container Platform 4.6 Installing on OpenStack
controlPlane.pla Additional networks that are A list of one or more UUIDs as strings. For example,
tform.openstack associated with control plane fa806b2f-ac49-4bce-b9db-124bc64209bf.
.additionalNetw machines. Allowed address
orkIDs pairs are not created for
additional networks.
controlPlane.pla Additional security groups A list of one or more UUIDs as strings. For example,
tform.openstack that are associated with 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.
.additionalSecur control plane machines.
ityGroupIDs
controlPlane.pla RHOSP Compute (Nova) A list of strings. For example, ["zone-1", "zone-2"].
tform.openstack availability zones (AZs) to
.zones install machines on. If this
parameter is not set, the
installer relies on the default
settings for Nova that the
RHOSP administrator
configured.
platform.openst The location from which the An HTTP or HTTPS URL, optionally with an SHA-256
ack.clusterOSI installer downloads the checksum.
mage RHCOS image.
For example,
You must set this parameter http://mirror.example.com/images/rhcos-
to perform an installation in a 43.81.201912131630.0-
restricted network. openstack.x86_64.qcow2.gz?
sha256=ffebbd68e8a1f2a245ca19522c16c86f6
7f9ac8e4e0c1f0a812b068b16f7265d. The value
can also be the name of an existing Glance image, for
example my-rhcos.
54
CHAPTER 1. INSTALLING ON OPENSTACK
platform.openst IP addresses for external DNS A list of IP addresses as strings. For example,
ack.externalDN servers that cluster instances ["8.8.8.8", "192.168.1.12"].
S use for DNS resolution.
platform.openst The UUID of a RHOSP subnet A UUID as a string. For example, fa806b2f-ac49-
ack.machinesS that the cluster’s nodes use. 4bce-b9db-124bc64209bf.
ubnet Nodes and virtual IP (VIP)
ports are created on this
subnet.
Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice.
The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the
install-config.yaml file.
This subnet is used as the cluster’s primary subnet; nodes and ports are created on it.
Before you run the OpenShift Container Platform installer with a custom subnet, verify that:
You can provide installer credentials that have permission to create ports on the target
network.
If your network configuration requires a router, it is created in RHOSP. Some configurations rely
on routers for floating IP address translation.
Your network configuration does not rely on a provider network. Provider networks are not
supported.
NOTE
By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s
CIDR block. To override these default values, set values for platform.openstack.apiVIP
and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the
install-config.yaml file to include Kuryr as the desired networking.networkType; you can then
proceed with the default installation steps.

This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform
(RHOSP) customization options.
IMPORTANT
This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.
apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: m1.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
    trunkSupport: true
    octaviaSupport: true
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
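$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

(The ed25519 key type matches the ssh-ed25519 sample key used elsewhere in this guide;
substitute another type if your environment requires it.)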
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and
cluster applications.
Procedure
1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
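Both FIPs can be created with RHOSP CLI commands of this shape, where
<external_network> is the external network you verified earlier:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>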
3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
NOTE
If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.
4. Add the FIPs to the install-config.yaml file as the values of the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.lbFloatingIP
If you use these values, you must also enter an external network as the value of the
platform.openstack.externalNetwork parameter in the install-config.yaml file.
TIP
You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses. In the install-config.yaml file, do not define the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.lbFloatingIP
If you cannot provide an external network, you can also leave platform.openstack.externalNetwork
blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for
you, and, without additional action, the installer will fail to retrieve an image from Glance. You must
configure external connectivity on your own.
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP
addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use
a proxy network or run the installer from a system that is on the same network as your machines.
NOTE
You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Change to the directory that contains the installation program and initialize the cluster
deployment:
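$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the location of your customized
./install-config.yaml file.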
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s
NOTE
The cluster access and credential information also outputs to
<installation_directory>/.openshift_install.log when an installation succeeds.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
Procedure
1. In your terminal environment, export the administrator's kubeconfig:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a
client to the correct cluster and API server.
2. View the control plane and compute machines created after a deployment:

$ oc get nodes

3. View your cluster's version:

$ oc get clusterversion

4. View your Operators' status:

$ oc get clusteroperator

5. View all running pods in the cluster:

$ oc get pods -A
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The
kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the
correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container
Platform installation.
Prerequisites
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.
Procedure
1. Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
If you need to enable external access to node ports, configure ingress cluster traffic by using a
node port.
If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .
Using your own infrastructure allows you to integrate your cluster with existing infrastructure and
modifications. The process requires more labor on your part than installer-provisioned installations,
because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups.
However, Red Hat provides Ansible playbooks to help you in the deployment process.
1.3.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version in the
Available platforms section. You can also compare platform support across different
versions by viewing the OpenShift Container Platform on RHOSP support matrix .
Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.
Have an RHOSP account where you want to install OpenShift Container Platform.
On the machine from which you run the installation program, have:
A single directory in which you can keep the files you create during the installation process
Python 3
Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.
Table 1.11. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Resource Value
Floating IP addresses 3
Ports 15
Routers 1
Subnets 1
RAM 112 GB
vCPUs 28
Instances 7
Security groups 3
A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.
IMPORTANT
If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.
NOTE
By default, your security group and security group rule quotas might be low. If you
encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60
<project> as an administrator to increase them.
An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.
By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.
TIP
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.
NOTE
These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.
Prerequisites
Procedure
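On RHEL 8, the playbook dependencies can be installed with a command along these lines (the
exact package set is an assumption based on what the playbooks require):

$ sudo dnf install python3-openstackclient ansible python3-openstacksdk python3-netaddr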
Prerequisites
Procedure
To download the playbooks to your working directory, run the following script from a command
line:
$ xargs -n 1 curl -O <<< '
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/bootstrap.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/common.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/compute-nodes.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/control-plane.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/inventory.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/network.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/security-groups.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/down-bootstrap.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/down-compute-nodes.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/down-control-plane.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/down-network.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/down-security-groups.yaml
        https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/down-containers.yaml'
IMPORTANT
During the installation process, you can modify the playbooks to configure your
deployment.
Retain all playbooks for the life of your cluster. You must have the playbooks to remove
your OpenShift Container Platform cluster from RHOSP.
IMPORTANT
You must match any edits you make in the bootstrap.yaml, compute-nodes.yaml,
control-plane.yaml, network.yaml, and security-groups.yaml files to the
corresponding playbooks that are prefixed with down-. For example, edits to the
bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not
edit both files, the supported cluster removal process will fail.
Prerequisites
You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you
have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Select your infrastructure provider.
3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.
IMPORTANT
Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.
4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
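$ tar xvf openshift-install-linux.tar.gz

(The archive file name shown reflects the Linux download and can vary by release.)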
5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
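$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

(The ed25519 key type matches the ssh-ed25519 sample key used elsewhere in this guide;
substitute another type if your environment requires it.)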
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
2. Start the ssh-agent process as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

3. Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa.

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
1.3.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image
The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux
CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the
latest RHCOS image, then upload it using the RHOSP CLI.
Prerequisites
Procedure
1. Log in to the Red Hat Customer Portal's Product Downloads page.
2. Under Version, select the most recent release of OpenShift Container Platform 4.6 for Red Hat
Enterprise Linux (RHEL) 8.
IMPORTANT
The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available.
3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) .
NOTE
You must decompress the RHOSP image before the cluster can use it. The
name of the downloaded file might not contain a compression extension, like .gz
or .tgz. To find out if or how the file is compressed, in a command line, enter:
$ file <name_of_downloaded_file>
5. From the image that you downloaded, create an image that is named rhcos in your cluster by
using the RHOSP CLI:
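$ openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-${RHCOS_VERSION}-openstack.x86_64.qcow2 rhcos

(The file name here is an assumption; use the name of the image file that you downloaded
and decompressed.)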
IMPORTANT
Depending on your RHOSP environment, you might be able to upload the image
in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.
WARNING
If the installation program finds multiple images with the same name, it
chooses one of them at random. To avoid this behavior, create unique
names for resources in RHOSP.
After you upload the image to RHOSP, it is usable in the installation process.
Prerequisites
Configure OpenStack’s networking service to have DHCP agents forward instances' DNS
queries
Procedure
1. Using the RHOSP CLI, verify the name and ID of the 'External' network:
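$ openstack network list --long -c ID -c Name -c "Router Type"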
Example output
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
A network with an external router type appears in the network list. If at least one does not, see Creating
a default floating IP network and Creating a default provider network .
NOTE
If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .
You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster
applications, and the bootstrap process.
Procedure
1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
3. By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:
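The three FIPs can be created with RHOSP CLI commands of this shape, where
<external_network> is the external network you verified earlier:

$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
$ openstack floating ip create --description "bootstrap machine" <external_network>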
4. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
NOTE
If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.
5. Add the FIPs to the inventory.yaml file as the values of the following variables:
os_api_fip
os_bootstrap_fip
os_ingress_fip
If you use these values, you must also enter an external network as the value of the
os_external_network variable in the inventory.yaml file.
TIP
You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses. In the inventory.yaml file, do not define the following variables:
os_api_fip
os_bootstrap_fip
os_ingress_fip
If you cannot provide an external network, you can also leave os_external_network blank. If you do not
provide a value for os_external_network, a router is not created for you, and, without additional action,
the installer will fail to retrieve an image from Glance. Later in the installation process, when you create
network resources, you must configure external connectivity on your own.
If you run the installer with the wait-for command from a system that cannot reach the cluster API due
to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in
these cases, you can use a proxy network or run the installer from a system that is on the same network
as your machines.
NOTE
You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.
Procedure
If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
IMPORTANT
Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.
If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:
a. Copy the certificate authority file to your machine.
b. Add the machine to the certificate authority trust bundle:
$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
c. Update the trust bundle:
$ sudo update-ca-trust extract
d. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-
accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
TIP
After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a
command line, run:
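$ oc edit configmap -n openshift-config cloud-provider-config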
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
a. Change to the directory that contains the installation program and run the following
command:
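$ ./openshift-install create install-config --dir <installation_directory> 1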
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
IMPORTANT
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates,
have short expiration intervals, so you must not reuse an installation directory.
b. At the prompts, provide the configuration details for your cloud:
i. Optional: Select an SSH key to use to access your cluster machines.
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.
ii. Select openstack as the platform to target.
iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.
iv. Specify the floating IP address to use for external access to the OpenShift API.
v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.
vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.
vii. Enter a name for your cluster. The name must be 14 or fewer characters long.
viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the installation process. If you want to
reuse the file, you must back it up now.
You now have the file install-config.yaml in the directory that you specified.
NOTE
After installation, you cannot modify these parameters in the install-config.yaml file.
baseDomain: The base domain of your cloud provider. The base domain is used to create
routes to your OpenShift Container Platform cluster components. The full DNS name for
your cluster is a combination of the baseDomain and metadata.name parameter values that
uses the <metadata.name>.<baseDomain> format. Values: a fully-qualified domain or
subdomain name, such as example.com.

metadata.name: The name of the cluster. DNS records for the cluster are all subdomains of
{{.metadata.name}}.{{.baseDomain}}. Values: a string of lowercase letters, hyphens (-), and
periods (.), such as dev. The string must be 14 or fewer characters long.

compute: The configuration for the machines that comprise the compute nodes. Values: an
array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.hyperthreading: Whether to enable or disable simultaneous multithreading, or
hyperthreading, on compute machines. By default, simultaneous multithreading is enabled
to increase the performance of your machines' cores. Values: Enabled or Disabled.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts
for the dramatically decreased machine performance.

compute.platform: Required if you use compute. Use this parameter to specify the cloud
provider to host the worker machines. This parameter value must match the
controlPlane.platform parameter value. Values: aws, azure, gcp, openstack, ovirt,
vsphere, or {}.

compute.replicas: The number of compute machines, which are also known as worker
machines, to provision. Values: a positive integer greater than or equal to 2. The default
value is 3.

controlPlane: The configuration for the machines that comprise the control plane. Values:
an array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.hyperthreading: Whether to enable or disable simultaneous multithreading,
or hyperthreading, on control plane machines. By default, simultaneous multithreading is
enabled to increase the performance of your machines' cores. Values: Enabled or Disabled.
IMPORTANT
If you disable simultaneous multithreading, ensure that your capacity planning accounts
for the dramatically decreased machine performance.

controlPlane.platform: Required if you use controlPlane. Use this parameter to specify the
cloud provider that hosts the control plane machines. This parameter value must match the
compute.platform parameter value. Values: aws, azure, gcp, openstack, ovirt, vsphere,
or {}.

controlPlane.replicas: The number of control plane machines to provision. Values: the only
supported value is 3, which is the default value.

imageContentSources: Sources and repositories for the release-image content. Values: an
array of objects. Includes a source and, optionally, mirrors, as described in the following
rows of this table.

networking.serviceNetwork: The IP address pools for services. The default is 172.30.0.0/16.
Values: an array of IP networks. IP networks are represented as strings in Classless
Inter-Domain Routing (CIDR) notation, with a traditional IP address or network number,
followed by the forward slash (/) character, followed by a decimal value between 0 and 32
that describes the number of significant bits. For example, 10.0.0.0/16 represents IP
addresses 10.0.0.0 through 10.0.255.255.

sshKey: The SSH key or keys to authenticate access to your cluster machines. Values: one
or more keys. For example:
sshKey:
  <key1>
  <key2>
  <key3>
NOTE
For production OpenShift Container Platform clusters on which you want to perform
installation debugging or disaster recovery, specify an SSH key that your ssh-agent
process uses.

platform.openstack.computeFlavor: The RHOSP flavor to use for control plane and
compute machines. Values: a string, for example m1.xlarge.

compute.platform.openstack.additionalNetworkIDs: Additional networks that are
associated with compute machines. Allowed address pairs are not created for additional
networks. Values: a list of one or more UUIDs as strings. For example,
fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs: Additional security groups that
are associated with compute machines. Values: a list of one or more UUIDs as strings. For
example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones: RHOSP Compute (Nova) availability zones (AZs) to
install machines on. If this parameter is not set, the installer relies on the default settings
for Nova that the RHOSP administrator configured. Values: a list of strings. For example,
["zone-1", "zone-2"].

controlPlane.platform.openstack.additionalNetworkIDs: Additional networks that are
associated with control plane machines. Allowed address pairs are not created for
additional networks. Values: a list of one or more UUIDs as strings. For example,
fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs: Additional security groups
that are associated with control plane machines. Values: a list of one or more UUIDs as
strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones: RHOSP Compute (Nova) availability zones (AZs)
to install machines on. If this parameter is not set, the installer relies on the default
settings for Nova that the RHOSP administrator configured. Values: a list of strings. For
example, ["zone-1", "zone-2"].

platform.openstack.clusterOSImage: The location from which the installer downloads the
RHCOS image. You must set this parameter to perform an installation in a restricted
network. Values: an HTTP or HTTPS URL, optionally with an SHA-256 checksum. For
example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d.
The value can also be the name of an existing Glance image, for example my-rhcos.

platform.openstack.externalDNS: IP addresses for external DNS servers that cluster
instances use for DNS resolution. Values: a list of IP addresses as strings. For example,
["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet: The UUID of a RHOSP subnet that the cluster's
nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. Values: a UUID as
a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice.
The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-
config.yaml file.
This subnet is used as the cluster’s primary subnet; nodes and ports are created on it.
Before you run the OpenShift Container Platform installer with a custom subnet, verify that:
You can provide installer credentials that have permission to create ports on the target
network.
If your network configuration requires a router, it is created in RHOSP. Some configurations rely
on routers for floating IP address translation.
Your network configuration does not rely on a provider network. Provider networks are not
supported.
NOTE
By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s
CIDR block. To override these default values, set values for platform.openstack.apiVIP
and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform
(RHOSP) customization options.
IMPORTANT
This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.
apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: m1.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
The IP range that the installation program uses by default might not match the Neutron subnet that you
create when you install OpenShift Container Platform. If necessary, update the CIDR value for new
machines by editing the installation configuration file.
Prerequisites
You have the install-config.yaml file that was generated by the OpenShift Container Platform
installation program.
Procedure
1. On a command line, browse to the directory that contains install-config.yaml.
2. From that directory, either run a script to edit the install-config.yaml file or update the file
manually:
$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
1 Insert a value that matches your intended Neutron subnet, for example, 192.0.2.0/24.
To set the value manually, open the file and set the value of networking.machineNetwork to
match your intended Neutron subnet.
To proceed with an installation that uses your own infrastructure, set the number of compute machines
in the installation configuration file to zero. Later, you create these machines manually.
Prerequisites
You have the install-config.yaml file that was generated by the OpenShift Container Platform
installation program.
Procedure
1. On a command line, browse to the directory that contains install-config.yaml.
2. From that directory, either run a script to edit the install-config.yaml file or update the file
manually:
$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
To set the value manually, open the file and set the value of compute.<first
entry>.replicas to 0.
The installation configuration file transforms into the Kubernetes manifests, which are then wrapped
into the Ignition configuration files that are later used to create the cluster.
IMPORTANT
The Ignition config files that the installation program generates contain certificates that
expire after 24 hours, which are then renewed at that time. If the cluster is shut down
before renewing the certificates and the cluster is later restarted after the 24 hours have
elapsed, the cluster automatically recovers the expired certificates. The exception is that
you must manually approve the pending node-bootstrapper certificate signing requests
(CSRs) to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.
Prerequisites
Procedure
1. Change to the directory that contains the installation program and generate the Kubernetes
manifests for the cluster:
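$ ./openshift-install create manifests --dir <installation_directory> 1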
1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.
2. Remove the Kubernetes manifest files that define the control plane machines and compute
machine sets:
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-
cluster-api_worker-machineset-*.yaml
Because you create and manage these resources yourself, you do not have to initialize them.
You can preserve the machine set files to create compute machines by using the machine
API, but you must update references to them to match your environment.
3. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file, set the
value of the mastersSchedulable parameter to false, and save the file. This prevents pods from
being scheduled on the control plane machines.
4. To create the Ignition configuration files, run the following command from the directory that
contains the installation program:
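$ ./openshift-install create ignition-configs --dir <installation_directory>

The following files are generated in the directory: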
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
TIP
Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that
you create. By doing so, you avoid name conflicts when making multiple deployments in the same
project.
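If the jq tool is available, one way to do this is:

$ export INFRA_ID=$(jq -r .infraID metadata.json)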
Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat
OpenStack Platform (RHOSP) uses to download the primary file.
Prerequisites
You have the bootstrap Ignition file that the installer program generates, bootstrap.ign.
The infrastructure ID from the installer’s metadata file is set as an environment variable
($INFRA_ID).
If the variable is not set, see Creating the Kubernetes manifest and Ignition config files.
The documented procedure uses the RHOSP image service (Glance), but you can also use
the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova
server.
Procedure
1. Run the following Python script. The script modifies the bootstrap Ignition file to set the host
name and, if available, CA certificate file when it runs. This is a reconstructed, runnable version
of the script; the $INFRA_ID environment variable is set as in the prerequisites, and $OS_CACERT
is assumed to point to your CA certificate file, if you use one:

import base64
import json
import os

with open('bootstrap.ign', 'r') as f:
    ignition = json.load(f)

files = ignition['storage'].get('files', [])

# Set the host name of the bootstrap machine from $INFRA_ID
infra_id = os.environ.get('INFRA_ID', 'openshift')
hostname_b64 = base64.standard_b64encode(f'{infra_id}-bootstrap\n'.encode()).decode().strip()
files.append({'path': '/etc/hostname', 'mode': 420,
              'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64}})

# If $OS_CACERT points to a CA certificate file, embed it as well
ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
    with open(ca_cert_path, 'rb') as f:
        ca_cert_b64 = base64.standard_b64encode(f.read()).decode().strip()
    files.append({'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420,
                  'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64}})

ignition['storage']['files'] = files

with open('bootstrap.ign', 'w') as f:
    json.dump(ignition, f)
2. Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:
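$ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>

(The raw disk format is an assumption; depending on your Glance configuration, another
format may be required.)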
3. Using the RHOSP CLI, retrieve the image service's public address:

$ openstack catalog show image

4. Using the RHOSP CLI, retrieve the image's file value:

$ openstack image show <image_name>
5. Combine the public address with the image file value and save the result as the storage
location. The location follows the pattern
<image_service_public_URL>/v2/images/<image_ID>/file.
6. Using the RHOSP CLI, generate an auth token and save the token ID:

$ openstack token issue -c id -f value

7. Insert the following content into a file called $INFRA_ID-bootstrap-ignition.json and edit the
placeholders to match your own values:
{
"ignition": {
"config": {
"merge": [{
"source": "<storage_url>", 1
"httpHeaders": [{
"name": "X-Auth-Token", 2
"value": "<token_ID>" 3
}]
}]
},
"security": {
"tls": {
"certificateAuthorities": [{
"source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
}]
}
},
"version": "3.1.0"
}
}
1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage
URL.
4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-
encoded certificate.
NOTE
As with the bootstrap Ignition configuration, you must explicitly define a host name for
each control plane machine.
Prerequisites
The infrastructure ID from the installation program’s metadata file is set as an environment
variable ($INFRA_ID).
If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files."
Procedure
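On a command line, in the directory that contains master.ign, a loop along these lines (an
approximation of the upstream procedure; the quoting may need adjustment for your shell)
creates one Ignition shim per control plane machine:

$ for index in $(seq 0 2); do
    MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
    python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
storage = ignition.get('storage', {});
files = storage.get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip()}});
storage['files'] = files;
ignition['storage'] = storage;
json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
done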
Prerequisites
Procedure
...
# The public network providing connectivity to the cluster. If not
# provided, the cluster external connectivity must be provided in another
# way.
2. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml
playbook:
...
IMPORTANT
If you do not define values for os_api_fip and os_ingress_fip, you must perform
post-installation network configuration.
If you do not define a value for os_bootstrap_fip, the installer cannot download
debugging information from failed installations.
4. On a command line, create a network, subnet, and router by running the network.yaml
playbook:
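$ ansible-playbook -i inventory.yaml network.yaml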
5. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI
command:
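$ openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "$INFRA_ID-nodes"

(The subnet name $INFRA_ID-nodes is an assumption based on the infraID prefix convention
used by the playbooks.)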
Prerequisites
The metadata.yaml file that the installation program created is in the same directory as the
Ansible playbooks.
Procedure
1. On a command line, change the working directory to the location of the playbooks.
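2. Create the bootstrap resources by running the bootstrap.yaml playbook:

$ ansible-playbook -i inventory.yaml bootstrap.yaml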
3. After the bootstrap server is active, view the logs to verify that the Ignition files were received:
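One way to do this is with the RHOSP console log, assuming the $INFRA_ID-bootstrap server
name that the playbooks use:

$ openstack console log show "$INFRA_ID-bootstrap"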
Prerequisites
The infrastructure ID from the installation program’s metadata file is set as an environment
variable ($INFRA_ID).
You have the three Ignition files that were created in "Creating control plane Ignition config
files."
Procedure
1. On a command line, change the working directory to the location of the playbooks.
2. If the control plane Ignition config files aren’t already in your working directory, copy them into
it.
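3. On a command line, run the control-plane.yaml playbook:

$ ansible-playbook -i inventory.yaml control-plane.yaml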
You will see messages that confirm that the control plane machines are running and have joined
the cluster:
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Prerequisites
If you do not know the status of the machines, see "Verifying cluster status."
Procedure
1. On a command line, change the working directory to the location of the playbooks.
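2. Delete the bootstrap resources by running the down-bootstrap.yaml playbook:

$ ansible-playbook -i inventory.yaml down-bootstrap.yaml

The bootstrap port, server, and floating IP address are deleted.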
WARNING
If you did not disable the bootstrap Ignition file URL earlier, do so now.
Prerequisites
The metadata.yaml file that the installation program created is in the same directory as the
Ansible playbooks.
Procedure
1. On a command line, change the working directory to the location of the playbooks.
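2. Create the compute machines by running the compute-nodes.yaml playbook:

$ ansible-playbook -i inventory.yaml compute-nodes.yaml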
Next steps
Approve the certificate signing requests for your machines.
Prerequisites
Procedure
1. Confirm that the cluster recognizes the machines:

$ oc get nodes
Example output
NOTE
The preceding output might not include the compute nodes, also known as
worker nodes, until some CSRs are approved.
2. Review the pending CSRs and ensure that you see the client requests with the Pending or
Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.
3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:
NOTE
Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After you approve the initial
CSRs, the subsequent node client CSRs are automatically approved by the
cluster kube-controller-manager.
NOTE
For clusters running on platforms that are not machine API enabled, such as bare
metal and other user-provisioned infrastructure, you must implement a method
of automatically approving the kubelet serving certificate requests (CSRs). If a
request is not approved, then the oc exec, oc rsh, and oc logs commands
cannot succeed, because a serving certificate is required when the API server
connects to the kubelet. Any operation that contacts the Kubelet endpoint
requires this certificate approval to be in place. The method must watch for new
CSRs, confirm that the CSR was submitted by the node-bootstrapper service
account in the system:node or system:admin groups, and confirm the identity
of the node.
To approve them individually, run the following command for each valid CSR:
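$ oc adm certificate approve <csr_name>

Alternatively, to approve all pending CSRs at once, run:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve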
NOTE
Some Operators might not become available until some CSRs are approved.
4. Now that your client requests are approved, you must review the server requests for each
machine that you added to the cluster:
$ oc get csr
Example output
5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for
your cluster machines:
To approve them individually, run the following command for each valid CSR:
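$ oc adm certificate approve <csr_name>

Alternatively, to approve all pending CSRs at once, run:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve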
6. After all client and server CSRs have been approved, the machines have the Ready status.
Verify this by running the following command:
$ oc get nodes
Example output
NOTE
It can take a few minutes after approval of the server CSRs for the machines to
transition to the Ready status.
Additional information
Prerequisites
Procedure
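The relevant command here is the installer's wait-for subcommand; for example:

$ openshift-install --log-level debug wait-for install-complete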
The program outputs the console URL, as well as the administrator’s login information.
If you need to enable external access to node ports, configure ingress cluster traffic by using a
node port.
If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .
Using your own infrastructure allows you to integrate your cluster with existing infrastructure and
modifications. The process requires more labor on your part than installer-provisioned installations,
because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups.
However, Red Hat provides Ansible playbooks to help you in the deployment process.
1.4.1. Prerequisites
Review details about the OpenShift Container Platform installation and update processes.
Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version in the
Available platforms section. You can also compare platform support across different
versions by viewing the OpenShift Container Platform on RHOSP support matrix .
Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.
Have an RHOSP account where you want to install OpenShift Container Platform.
On the machine from which you run the installation program, have:
A single directory in which you can keep the files you create during the installation process
Python 3
Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container
Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging
OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between
pods and RHOSP virtual instances.
Kuryr components are installed as pods in OpenShift Container Platform in the openshift-kuryr
namespace:
kuryr-controller - a single service instance installed on a master node. This is modeled in
OpenShift Container Platform as a Deployment object.
kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift
Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet
object.
The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and
namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to
corresponding objects in Neutron and Octavia. This means that every network solution that implements
the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This
includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as
Neutron-compatible commercial SDNs.
Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant
networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform
SDN over an RHOSP network.
If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double
encapsulation. The performance benefit is negligible. Depending on your configuration, though, using
Kuryr to avoid having two overlays might still be beneficial.
Kuryr is not recommended in deployments where all of the following criteria are true:
The deployment uses UDP services, or a large number of TCP services on few hypervisors.
or
Table 1.16. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
with Kuryr
Resource Value
Routers 1
RAM 112 GB
vCPUs 28
Instances 7
A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.
IMPORTANT
If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.
IMPORTANT
If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora
driver rather than the OVN Octavia driver, security groups are associated with service
accounts instead of user projects.
The number of ports that are required is larger than the number of pods. Kuryr uses port pools
to have pre-created ports ready to be used by pods and to speed up the pods' booting time.
Each network policy is mapped into an RHOSP security group, and depending on the
NetworkPolicy spec, one or more rules are added to the security group.
Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating
the number of security groups required for the quota.
If you are using RHOSP version 15 or earlier, or the ovn-octavia driver, each load balancer has a
security group with the user project.
The quota does not account for load balancer resources (such as VM resources), but you must
consider these resources when you decide the RHOSP deployment’s size. The default
installation will have more than 50 load balancers; the clusters must be able to accommodate
them.
If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer
VM is generated; services are load balanced through OVN flows.
An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.
To enable Kuryr SDN, your environment must meet the following requirements:
Use the openvswitch firewall driver, rather than ovs-hybrid, if the ML2/OVS Neutron driver is used.
When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP)
resources used by pods, services, namespaces, and network policies.
Procedure
$ sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>
Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack
Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work.
In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to
openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr
can properly handle network policies.
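For example, you can verify up front that the trunk extension is available and check the firewall driver that the ML2/OVS agent uses. These commands are a sketch and are not part of the official procedure; the configuration file path can differ in containerized deployments:
$ openstack extension list --network | grep -i trunk
$ grep -A 2 '^\[securitygroup\]' /etc/neutron/plugins/ml2/openvswitch_agent.ini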
Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift
Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use
Kuryr SDN.
To enable Octavia, you must include the Octavia service during the installation of the RHOSP
Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for
enabling Octavia apply to both a clean install of the Overcloud and an Overcloud update.
NOTE
The following steps only capture the key pieces required during the deployment of
RHOSP when dealing with Octavia. It is also important to note that registry methods vary.
Procedure
1. If you are using the local registry, create a template to upload the images to the registry. For
example:
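A sketch of such a template-generation command on RHOSP 13; the namespace, paths, and flags are assumptions that you must verify against your release:
$ sudo openstack overcloud container image prepare \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  --namespace=registry.access.redhat.com/rhosp13 \
  --push-destination=<local-ip-from-undercloud.conf>:8787 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/templates/overcloud_images.yaml \
  --output-images-file /home/stack/local_registry_images.yaml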
2. Verify that the local_registry_images.yaml file contains the Octavia images. For example:
...
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45
  push_destination: <local-ip-from-undercloud.conf>:8787
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44
  push_destination: <local-ip-from-undercloud.conf>:8787
NOTE
The Octavia container versions vary depending upon the specific RHOSP release
installed.
3. Pull the Octavia container images that are listed in the file to the local registry. This may take some time depending on the speed of your network and Undercloud disk.
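A sketch of the pull command on RHOSP 13; verify the exact command for your release:
$ sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose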
4. Because an Octavia load balancer is used to access the OpenShift Container Platform API, you must increase the listeners' default timeouts for the connections. The default timeout is 50
seconds. Increase the timeout to 20 minutes by passing the following file to the Overcloud
deploy command:
parameter_defaults:
  OctaviaTimeoutClientData: 1200000
  OctaviaTimeoutMemberData: 1200000
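5. Deploy or update the Overcloud with Octavia enabled. A sketch of the deploy command, assuming the default tripleo-heat-templates path and that the timeout values above are saved in octavia_timeouts.yaml:
$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \
  -e octavia_timeouts.yaml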
NOTE
This command only includes the files associated with Octavia; it varies based on
your specific installation of RHOSP. See the RHOSP documentation for further
information. For more information on customizing your Octavia installation, see
installation of Octavia using Director.
NOTE
When leveraging Kuryr SDN, the Overcloud installation requires the Neutron
trunk extension. This is available by default on director deployments. Use the
openvswitch firewall instead of the default ovs-hybrid when the Neutron
backend is ML2/OVS. There is no need for modifications if the backend is
ML2/OVN.
6. In RHOSP versions 13 and 15, add the project ID to the octavia.conf configuration file after you
create the project.
To enforce network policies across services, like when traffic goes through the Octavia load
balancer, you must ensure Octavia creates the Amphora VM security groups on the user
project.
This change ensures that required load balancer security groups belong to that project, and
that they can be updated to enforce services isolation.
NOTE
Octavia implements a new ACL API that restricts access to the load balancers' VIPs.
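Some sub-steps are abbreviated in this guide; the following output is what a command such as this sketch produces:
$ openstack project show <project>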
Example output
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | |
| domain_id | default |
| enabled | True |
| id | PROJECT_ID |
| is_domain | False |
| name | <project> |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
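From the Undercloud, you can list the Overcloud controller nodes; a sketch of the command that produces the following output:
$ openstack server list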
Example output
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| ID                                   | Name         | Status | Networks              | Image          | Flavor     |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
| 6bef8e73-2ba5-4860-a0b1-3937f8ca7e01 | controller-0 | ACTIVE | ctlplane=192.168.24.8 | overcloud-full | controller |
| dda3173a-ab26-47f8-a2dc-8473b4a67ab9 | compute-0    | ACTIVE | ctlplane=192.168.24.6 | overcloud-full | compute    |
+--------------------------------------+--------------+--------+-----------------------+----------------+------------+
$ ssh [email protected]
iv. Edit the octavia.conf file to add the project into the list of projects where Amphora
security groups are on the user’s account.
# List of project IDs that are allowed to have Load balancer security groups
# belonging to them.
amp_secgroup_allowed_projects = PROJECT_ID
NOTE
Depending on your RHOSP environment, Octavia might not support UDP listeners. If you
use Kuryr SDN on RHOSP version 15 or earlier, UDP services are not supported. RHOSP
versions 16 and later support UDP.
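You can check which load balancer provider drivers your deployment offers; a sketch, assuming the Octavia CLI plug-in is installed:
$ openstack loadbalancer provider list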
Example output
+---------+-------------------------------------------------+
| name | description |
+---------+-------------------------------------------------+
| amphora | The Octavia Amphora driver. |
| octavia | Deprecated alias of the Octavia Amphora driver. |
| ovn | Octavia OVN driver. |
+---------+-------------------------------------------------+
Beginning with RHOSP version 16, the Octavia OVN provider driver (ovn) is supported on OpenShift
Container Platform on RHOSP deployments.
ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load
balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by
Director on deployments that use OVN Neutron ML2.
The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it instead. Using ovn rather than Amphora offers the following benefits:
Decreased resource requirements. Kuryr does not require a load balancer VM for each service.
Increased service creation speed by using OpenFlow rules instead of a VM for each service.
Distributed load balancing actions across all nodes instead of centralized on Amphora VMs.
Using OpenShift Container Platform with Kuryr SDN has several known limitations.
If the machines subnet is not connected to a router, or if the subnet is connected, but the router has no
external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer.
RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver
requires that one Amphora load balancer VM is deployed per OpenShift Container Platform
service. Creating too many services can cause you to run out of resources.
Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use
the Amphora driver. They are subject to the same resource concerns as earlier versions of
RHOSP.
Octavia RHOSP versions before 16 do not support UDP listeners. Therefore, OpenShift
Container Platform UDP services are not supported.
Octavia RHOSP versions before 16 cannot listen to multiple protocols on the same port.
Services that expose the same port to different protocols, like TCP and UDP, are not
supported.
Because of Octavia’s lack of support for the UDP protocol and multiple listeners, if the RHOSP version
is earlier than 16, Kuryr forces pods to use TCP for DNS resolution.
In Go versions 1.12 and earlier, applications that are compiled with CGO support disabled use UDP only.
In this case, the native Go resolver does not recognize the use-vc option in resolv.conf, which controls
whether TCP is forced for DNS resolution. As a result, UDP is still used for DNS resolution, which fails.
To ensure that TCP forcing is allowed, compile applications either with the environment variable
CGO_ENABLED set to 1, i.e. CGO_ENABLED=1, or ensure that the variable is absent.
In Go versions 1.13 and later, TCP is used automatically if DNS resolution using UDP fails.
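For example, a build invocation with CGO enabled so that the application uses the glibc resolver, which honors the use-vc option (a sketch; the package path ./cmd/app is hypothetical):
$ CGO_ENABLED=1 go build -o app ./cmd/app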
NOTE
If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways:
By triggering a load balancer failover, which moves each load balancer onto a VM that uses the upgraded image.
By leaving the existing load balancers in place until users recreate them.
If the operator takes the first option, there might be short downtimes during failovers.
If the operator takes the second option, the existing load balancers will not support upgraded Octavia
API features, like UDP listeners. In this case, users must recreate their Services to use these features.
IMPORTANT
If OpenShift Container Platform detects a new Octavia version that supports UDP load
balancing, it recreates the DNS service automatically. The service recreation ensures that
the service default supports UDP load balancing.
The recreation causes approximately one minute of downtime for the DNS service.
By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.
TIP
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.
Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.
NOTE
These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.
Prerequisites
Procedure
To download the playbooks to your working directory, run the following script from a command
line:
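A sketch of such a script, assuming the playbooks are published in the openshift/installer repository on the release-4.6 branch; verify the URLs for your version:
$ for playbook in bootstrap.yaml common.yaml compute-nodes.yaml control-plane.yaml \
    down-bootstrap.yaml down-compute-nodes.yaml down-control-plane.yaml \
    down-network.yaml down-security-groups.yaml inventory.yaml \
    network.yaml security-groups.yaml; do
    curl -O "https://raw.githubusercontent.com/openshift/installer/release-4.6/upi/openstack/${playbook}"
done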
IMPORTANT
During the installation process, you can modify the playbooks to configure your
deployment.
Retain all playbooks for the life of your cluster. You must have the playbooks to remove
your OpenShift Container Platform cluster from RHOSP.
IMPORTANT
You must match any edits you make in the bootstrap.yaml, compute-nodes.yaml,
control-plane.yaml, network.yaml, and security-groups.yaml files to the
corresponding playbooks that are prefixed with down-. For example, edits to the
bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not
edit both files, the supported cluster removal process will fail.
Prerequisites
You have a computer that runs Linux or macOS, with 500 MB of local disk space
Procedure
1. Access the Infrastructure Provider page on the Red Hat OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
2. Select your infrastructure provider.
3. Navigate to the page for your installation type, download the installation program for your
operating system, and place the file in the directory where you will store the installation
configuration files.
IMPORTANT
The installation program creates several files on the computer that you use to
install your cluster. You must keep the installation program and the files that the
installation program creates after you finish installing the cluster. Both files are
required to delete the cluster.
IMPORTANT
Deleting the files created by the installation program does not remove your
cluster, even if the cluster failed during installation. To remove your cluster,
complete the OpenShift Container Platform uninstallation procedures for your
specific cloud provider.
4. Extract the installation program. For example, on a computer that uses a Linux operating
system, run the following command:
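For example, assuming the Linux archive name for this release:
$ tar xvf openshift-install-linux.tar.gz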
5. From the Pull Secret page on the Red Hat OpenShift Cluster Manager site, download your
installation pull secret as a .txt file. This pull secret allows you to authenticate with the services
that are provided by the included authorities, including Quay.io, which serves the container
images for OpenShift Container Platform components.
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
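A sketch of the command, assuming an RSA key with no passphrase:
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 1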
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
2. Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
3. Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
1.4.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image
The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux
CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the
latest RHCOS image, then upload it using the RHOSP CLI.
Prerequisites
Procedure
1. Log in to the Red Hat Customer Portal's Product Downloads page.
2. Under Version, select the most recent release of OpenShift Container Platform 4.6 for Red Hat Enterprise Linux (RHEL) 8.
IMPORTANT
The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available.
3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) .
NOTE
You must decompress the RHOSP image before the cluster can use it. The
name of the downloaded file might not contain a compression extension, like .gz
or .tgz. To find out if or how the file is compressed, in a command line, enter:
$ file <name_of_downloaded_file>
4. Decompress the image.
5. From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI:
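A sketch of the upload command, assuming a QCOW2 file; substitute the actual file name that you downloaded:
$ openstack image create --container-format=bare --disk-format=qcow2 --file <downloaded_image_file> rhcos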
IMPORTANT
Depending on your RHOSP environment, you might be able to upload the image
in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.
WARNING
If the installation program finds multiple images with the same name, it
chooses one of them at random. To avoid this behavior, create unique
names for resources in RHOSP.
After you upload the image to RHOSP, it is usable in the installation process.
Prerequisites
Configure OpenStack’s networking service to have DHCP agents forward instances' DNS
queries
Procedure
1. Using the RHOSP CLI, verify the name and ID of the 'External' network:
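A sketch of the verification command:
$ openstack network list --long -c ID -c Name -c "Router Type"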
Example output
+--------------------------------------+----------------+-------------+
| ID | Name | Router Type |
+--------------------------------------+----------------+-------------+
| 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External |
+--------------------------------------+----------------+-------------+
A network with an external router type appears in the network list. If no such network appears, see Creating a default floating IP network and Creating a default provider network.
NOTE
If the Neutron trunk service plug-in is enabled, a trunk port is created by default. For
more information, see Neutron trunk port .
You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster
applications, and the bootstrap process.
Procedure
1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
3. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP:
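A sketch of the three commands, assuming your external network is named <external_network>:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
$ openstack floating ip create --description "bootstrap machine" <external_network>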
4. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
NOTE
If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.
5. Add the FIPs to the inventory.yaml file as the values of the following variables:
os_api_fip
os_bootstrap_fip
os_ingress_fip
If you use these values, you must also enter an external network as the value of the
os_external_network variable in the inventory.yaml file.
TIP
You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses.
In the inventory.yaml file, do not define the following variables:
os_api_fip
os_bootstrap_fip
os_ingress_fip
If you cannot provide an external network, you can also leave os_external_network blank. If you do not
provide a value for os_external_network, a router is not created for you, and, without additional action,
the installer will fail to retrieve an image from Glance. Later in the installation process, when you create
network resources, you must configure external connectivity on your own.
If you run the installer with the wait-for command from a system that cannot reach the cluster API due
to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in
these cases, you can use a proxy network or run the installer from a system that is on the same network
as your machines.
NOTE
You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.
Procedure
If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
IMPORTANT
Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.
If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:
d. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
TIP
After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a
command line, run:
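For example:
$ oc edit configmap -n openshift-config cloud-provider-config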
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
a. Change to the directory that contains the installation program and run the following
command:
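A sketch of the command, assuming the installation program binary was extracted into this directory:
$ ./openshift-install create install-config --dir=<installation_directory> 1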
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
IMPORTANT
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
b. At the prompts, provide the configuration details for your cloud:
i. Optional: Select an SSH key to use to access your cluster machines.
NOTE
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
ii. Select openstack as the platform to target.
iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.
iv. Specify the floating IP address to use for external access to the OpenShift API.
v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.
vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.
vii. Enter a name for your cluster. The name must be 14 or fewer characters long.
viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. Modify the install-config.yaml file. You can find more information about the available
parameters in the Installation configuration parameters section.
3. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
You now have the file install-config.yaml in the directory that you specified.
NOTE
After installation, you cannot modify these parameters in the install-config.yaml file.
baseDomain
  Description: The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format.
  Values: A fully-qualified domain or subdomain name, such as example.com.

metadata.name
  Description: The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
  Values: String of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 or fewer characters long.
compute
  Description: The configuration for the machines that comprise the compute nodes.
  Values: Array of machine-pool objects. For details, see the following "Machine-pool" table.

compute.hyperthreading
  Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
  IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
  Values: Enabled or Disabled

compute.platform
  Description: Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
  Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

compute.replicas
  Description: The number of compute machines, which are also known as worker machines, to provision.
  Values: A positive integer greater than or equal to 2. The default value is 3.
controlPlane
  Description: The configuration for the machines that comprise the control plane.
  Values: Array of MachinePool objects. For details, see the following "Machine-pool" table.

controlPlane.hyperthreading
  Description: Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
  IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
  Values: Enabled or Disabled

controlPlane.platform
  Description: Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
  Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}

controlPlane.replicas
  Description: The number of control plane machines to provision.
  Values: The only supported value is 3, which is the default value.
imageContentSources
  Description: Sources and repositories for the release-image content.
  Values: Array of objects. Includes a source and, optionally, mirrors, as described in the following rows of this table.
networking.serviceNetwork
  Description: The IP address pools for services. The default is 172.30.0.0/16.
  Values: Array of IP networks. IP networks are represented as strings in Classless Inter-Domain Routing (CIDR) notation with a traditional IP address or network number, followed by the forward slash (/) character, followed by a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.
sshKey
  Description: The SSH key or keys to authenticate access to your cluster machines.
  NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
  Values: One or more keys. For example:
    sshKey:
      <key1>
      <key2>
      <key3>
platform.openstack.computeFlavor
  Description: The RHOSP flavor to use for control plane and compute machines.
  Values: String, for example m1.xlarge.

compute.platform.openstack.additionalNetworkIDs
  Description: Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.
  Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

compute.platform.openstack.additionalSecurityGroupIDs
  Description: Additional security groups that are associated with compute machines.
  Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

compute.platform.openstack.zones
  Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.
  Values: A list of strings. For example, ["zone-1", "zone-2"].
controlPlane.platform.openstack.additionalNetworkIDs
  Description: Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.
  Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.

controlPlane.platform.openstack.additionalSecurityGroupIDs
  Description: Additional security groups that are associated with control plane machines.
  Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.

controlPlane.platform.openstack.zones
  Description: RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.
  Values: A list of strings. For example, ["zone-1", "zone-2"].

platform.openstack.clusterOSImage
  Description: The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.
  Values: An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.
platform.openstack.externalDNS
  Description: IP addresses for external DNS servers that cluster instances use for DNS resolution.
  Values: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].

platform.openstack.machinesSubnet
  Description: The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.
  Values: A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice.
The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file.
This subnet is used as the cluster’s primary subnet; nodes and ports are created on it.
Before you run the OpenShift Container Platform installer with a custom subnet, verify that:
You can provide installer credentials that have permission to create ports on the target
network.
If your network configuration requires a router, it is created in RHOSP. Some configurations rely
on routers for floating IP address translation.
Your network configuration does not rely on a provider network. Provider networks are not
supported.
NOTE
By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network’s
CIDR block. To override these default values, set values for platform.openstack.apiVIP
and platform.openstack.ingressVIP that are outside of the DHCP allocation pool.
To deploy with Kuryr SDN instead of the default OpenShift SDN, you must modify the install-
config.yaml file to include Kuryr as the desired networking.networkType and proceed with the default
OpenShift Container Platform SDN installation steps. This sample install-config.yaml demonstrates all
of the possible Red Hat OpenStack Platform (RHOSP) customization options.
IMPORTANT
This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.
apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: ml.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: Kuryr
platform:
  openstack:
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
    trunkSupport: true
    octaviaSupport: true
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
NOTE
Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to plug the pods into the RHOSP network, and Octavia is required to create the OpenShift Container Platform services.
The IP range that the installation program uses by default might not match the Neutron subnet that you
create when you install OpenShift Container Platform. If necessary, update the CIDR value for new
machines by editing the installation configuration file.
Prerequisites
You have the install-config.yaml file that was generated by the OpenShift Container Platform
installation program.
Procedure
1. On a command line, browse to the directory that contains the install-config.yaml file.
2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:
$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
1 Insert a value that matches your intended Neutron subnet, e.g. 192.0.2.0/24.
To set the value manually, open the file and set the value of networking.machineNetwork to match your intended Neutron subnet.
To proceed with an installation that uses your own infrastructure, set the number of compute machines
in the installation configuration file to zero. Later, you create these machines manually.
Prerequisites
You have the install-config.yaml file that was generated by the OpenShift Container Platform
installation program.
Procedure
1. On a command line, browse to the directory that contains the install-config.yaml file.
2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:
$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["compute"][0]["replicas"] = 0;
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
To set the value manually, open the file and set the value of compute.<first
entry>.replicas to 0.
By default, the installation program selects the OpenShiftSDN network type. To use Kuryr instead,
change the value in the installation configuration file that the program generated.
Prerequisites
You have the file install-config.yaml that was generated by the OpenShift Container Platform
installation program
Procedure
1. On a command line, browse to the directory that contains the install-config.yaml file.
2. From that directory, either run a script to edit the install-config.yaml file or update the file manually:
$ python -c '
import yaml;
path = "install-config.yaml";
data = yaml.safe_load(open(path));
data["networking"]["networkType"] = "Kuryr";
open(path, "w").write(yaml.dump(data, default_flow_style=False))'
To set the value manually, open the file and set networking.networkType to "Kuryr".
The installation program transforms the installation configuration file into Kubernetes manifests. The manifests are wrapped into Ignition configuration files, which are later used to create the cluster.
IMPORTANT
The Ignition config files that the installation program generates contain certificates that
expire after 24 hours, which are then renewed at that time. If the cluster is shut down
before renewing the certificates and the cluster is later restarted after the 24 hours have
elapsed, the cluster automatically recovers the expired certificates. The exception is that
you must manually approve the pending node-bootstrapper certificate signing requests
(CSRs) to recover kubelet certificates. See the documentation for Recovering from
expired control plane certificates for more information.
Prerequisites
Procedure
1. Change to the directory that contains the installation program and generate the Kubernetes
manifests for the cluster:
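A sketch of the command, assuming the installation program binary in the current directory:
$ ./openshift-install create manifests --dir=<installation_directory> 1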
1 For <installation_directory>, specify the installation directory that contains the install-
config.yaml file you created.
2. Remove the Kubernetes manifest files that define the control plane machines and compute
machine sets:
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage these resources yourself, you do not have to initialize them.
You can preserve the machine set files to create compute machines by using the machine
API, but you must update references to them to match your environment.
4. To create the Ignition configuration files, run the following command from the directory that
contains the installation program:
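A sketch of the command:
$ ./openshift-install create ignition-configs --dir=<installation_directory>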
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
TIP
Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that
you create. By doing so, you avoid name conflicts when making multiple deployments in the same
project.
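For example, assuming jq is installed:
$ export INFRA_ID=$(jq -r .infraID metadata.json)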
Edit the bootstrap.ign file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file.
Prerequisites
You have the bootstrap Ignition file that the installation program generates, bootstrap.ign.
135
OpenShift Container Platform 4.6 Installing on OpenStack
The infrastructure ID from the installer’s metadata file is set as an environment variable
($INFRA_ID).
If the variable is not set, see Creating the Kubernetes manifest and Ignition config files.
The documented procedure uses the RHOSP image service (Glance), but you can also use
the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova
server.
Procedure
1. Run the following Python script. The script modifies the bootstrap Ignition file to set the host
name and, if available, CA certificate file when it runs:
import base64
import json
import os

with open('bootstrap.ign', 'r') as f:
    ignition = json.load(f)

files = ignition['storage'].get('files', [])

# Set the bootstrap host name from the infrastructure ID.
infra_id = os.environ.get('INFRA_ID', 'openshift')
hostname_b64 = base64.standard_b64encode(
    infra_id.encode() + b'-bootstrap\n').decode().strip()
files.append(
    {
        'path': '/etc/hostname',
        'mode': 420,
        'contents': {
            'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64
        }
    })

# If a CA certificate file is configured, embed it as well.
ca_cert_path = os.environ.get('OS_CACERT', '')
if ca_cert_path:
    with open(ca_cert_path, 'r') as f:
        ca_cert_b64 = base64.standard_b64encode(f.read().encode()).decode().strip()
    files.append(
        {
            'path': '/opt/openshift/tls/cloud-ca-cert.pem',
            'mode': 420,
            'contents': {
                'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64
            }
        })

ignition['storage']['files'] = files

with open('bootstrap.ign', 'w') as f:
    json.dump(ignition, f)
2. Using the RHOSP CLI, create an image that uses the bootstrap Ignition file:
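A sketch of the command, assuming raw disk format:
$ openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>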
5. Combine the public address with the image file value and save the result as the storage
location. The location follows the pattern
<image_service_public_URL>/v2/images/<image_ID>/file.
7. Insert the following content into a file called $INFRA_ID-bootstrap-ignition.json and edit the
placeholders to match your own values:
{
  "ignition": {
    "config": {
      "merge": [{
        "source": "<storage_url>", 1
        "httpHeaders": [{
          "name": "X-Auth-Token", 2
          "value": "<token_ID>" 3
        }]
      }]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [{
          "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4
        }]
      }
    },
    "version": "3.1.0"
  }
}
1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage
URL.
4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-
encoded certificate.
NOTE
As with the bootstrap Ignition configuration, you must explicitly define a host name for
each control plane machine.
Prerequisites
The infrastructure ID from the installation program’s metadata file is set as an environment
variable ($INFRA_ID).
If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files."
Procedure
1. On a command line, run the following script, which appends a host name to each control plane Ignition config file:

$ for index in $(seq 0 2); do
    MASTER_HOSTNAME="$INFRA_ID-master-$index\n"
    python -c "import base64, json, sys;
ignition = json.load(sys.stdin);
storage = ignition.get('storage', {});
files = storage.get('files', []);
files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'$MASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'});
storage['files'] = files;
ignition['storage'] = storage
json.dump(ignition, sys.stdout)" <master.ign >"$INFRA_ID-master-$index-ignition.json"
done
Prerequisites
Procedure
1. Optional: Add an external network value to the inventory.yaml playbook:
...
# The public network providing connectivity to the cluster. If not
# provided, the cluster external connectivity must be provided in another
# way.
# Required for os_api_fip, os_ingress_fip, os_bootstrap_fip.
os_external_network: 'external'
...
2. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml
playbook:
...
# OpenShift API floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the Control Plane to
# serve the OpenShift API.
os_api_fip: '203.0.113.23'

# OpenShift Ingress floating IP address. If this value is non-empty, the
# corresponding floating IP will be attached to the worker nodes to serve
# the applications.
os_ingress_fip: '203.0.113.19'

# If this value is non-empty, the corresponding floating IP will be
# attached to the bootstrap machine. This is needed for collecting logs
# in case of install failure.
os_bootstrap_fip: '203.0.113.20'
IMPORTANT
If you do not define values for os_api_fip and os_ingress_fip, you must perform
post-installation network configuration.
If you do not define a value for os_bootstrap_fip, the installer cannot download
debugging information from failed installations.
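3. On a command line, create the security groups by running the security-groups.yaml playbook. The invocation below is a sketch that assumes the inventory file sits next to the playbooks:
$ ansible-playbook -i inventory.yaml security-groups.yaml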
4. On a command line, create a network, subnet, and router by running the network.yaml
playbook:
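For example:
$ ansible-playbook -i inventory.yaml network.yaml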
5. Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI
command:
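A sketch, assuming the nodes subnet is named with the infrastructure ID prefix:
$ openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "$INFRA_ID-nodes"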
Prerequisites
The metadata.yaml file that the installation program created is in the same directory as the
Ansible playbooks.
Procedure
1. On a command line, change the working directory to the location of the playbooks.
2. On a command line, run the bootstrap.yaml playbook:
$ ansible-playbook -i inventory.yaml bootstrap.yaml
3. After the bootstrap server is active, view the logs to verify that the Ignition files were received:
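A sketch, using the RHOSP CLI:
$ openstack console log show "$INFRA_ID-bootstrap"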
Prerequisites
The infrastructure ID from the installation program’s metadata file is set as an environment
variable ($INFRA_ID).
You have the three Ignition files that were created in "Creating control plane Ignition config
files."
Procedure
1. On a command line, change the working directory to the location of the playbooks.
2. If the control plane Ignition config files aren’t already in your working directory, copy them into
it.
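3. On a command line, run the control-plane.yaml playbook:
$ ansible-playbook -i inventory.yaml control-plane.yaml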
You will see messages that confirm that the control plane machines are running and have joined
the cluster:
Prerequisites
Procedure
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify that you can run oc commands successfully by using the exported configuration:
$ oc whoami
Example output
system:admin
Prerequisites
If you do not know the status of the machines, see "Verifying cluster status."
Procedure
1. On a command line, change the working directory to the location of the playbooks.
2. On a command line, run the down-bootstrap.yaml playbook:
$ ansible-playbook -i inventory.yaml down-bootstrap.yaml
The bootstrap port, server, and floating IP address are deleted.
WARNING
If you did not disable the bootstrap Ignition file URL earlier, do so now.
Prerequisites
The metadata.yaml file that the installation program created is in the same directory as the
Ansible playbooks.
Procedure
1. On a command line, change the working directory to the location of the playbooks.
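2. On a command line, run the compute-nodes.yaml playbook:
$ ansible-playbook -i inventory.yaml compute-nodes.yaml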
Next steps
Approve the certificate signing requests for your machines.
Prerequisites
Procedure
1. Confirm that the cluster recognizes the machines:
$ oc get nodes
Example output
NOTE
The preceding output might not include the compute nodes, also known as
worker nodes, until some CSRs are approved.
2. Review the pending CSRs and ensure that you see the client requests with the Pending or
Approved status for each machine that you added to the cluster:
$ oc get csr
Example output
In this example, two machines are joining the cluster. You might see more approved CSRs in the
list.
3. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pending status, approve the CSRs for your cluster machines:
NOTE
Because the CSRs rotate automatically, approve your CSRs within an hour of
adding the machines to the cluster. If you do not approve them within an hour, the
certificates will rotate, and more than two certificates will be present for each
node. You must approve all of these certificates. After you approve the initial
CSRs, the subsequent node client CSRs are automatically approved by the
cluster kube-controller-manager.
NOTE
For clusters running on platforms that are not machine API enabled, such as bare
metal and other user-provisioned infrastructure, you must implement a method
of automatically approving the kubelet serving certificate requests (CSRs). If a
request is not approved, then the oc exec, oc rsh, and oc logs commands
cannot succeed, because a serving certificate is required when the API server
connects to the kubelet. Any operation that contacts the Kubelet endpoint
requires this certificate approval to be in place. The method must watch for new
CSRs, confirm that the CSR was submitted by the node-bootstrapper service
account in the system:node or system:admin groups, and confirm the identity
of the node.
To approve them individually, run the following command for each valid CSR:
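For example:
$ oc adm certificate approve <csr_name>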
NOTE
Some Operators might not become available until some CSRs are approved.
4. Now that your client requests are approved, you must review the server requests for each
machine that you added to the cluster:
$ oc get csr
Example output
5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for
your cluster machines:
To approve them individually, run the following command for each valid CSR:
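For example:
$ oc adm certificate approve <csr_name>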
6. After all client and server CSRs have been approved, the machines have the Ready status.
Verify this by running the following command:
$ oc get nodes
Example output
NOTE
It can take a few minutes after approval of the server CSRs for the machines to
transition to the Ready status.
Additional information
For more information on CSRs, see Certificate Signing Requests.
Prerequisites
Procedure
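A sketch of the command whose output is described below; run it from the directory that contains the installation assets:
$ openshift-install --log-level debug wait-for install-complete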
The program outputs the console URL, as well as the administrator’s login information.
If you need to enable external access to node ports, configure ingress cluster traffic by using a
node port.
146
CHAPTER 1. INSTALLING ON OPENSTACK
If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .
Prerequisites
Create a mirror registry on your bastion host and obtain the imageContentSources data for
your version of OpenShift Container Platform.
IMPORTANT
Because the installation media is on the bastion host, use that computer to
complete all installation steps.
Review details about the OpenShift Container Platform installation and update processes .
Verify that OpenShift Container Platform 4.6 is compatible with your RHOSP version by consulting the architecture documentation's list of available platforms. You can also
compare platform support across different versions by viewing the OpenShift Container
Platform on RHOSP support matrix.
Verify that your network configuration does not rely on a provider network. Provider networks
are not supported.
If you choose to perform a restricted network installation on a cloud platform, you still require access to
its cloud APIs. Some cloud functions, like Amazon Web Services' IAM service, require Internet access, so
you might still require Internet access. Depending on your network, you might require less Internet
access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the
OpenShift Container Platform registry and contains the installation media. You can create this registry
on a mirror host, which can access both the Internet and your closed network, or by using other methods
that meet your restrictions.
Clusters in restricted networks have the following additional limitations and restrictions:
By default, you cannot use the contents of the Developer Catalog because you cannot access
the required image stream tags.
Table 1.21. Recommended resources for a default OpenShift Container Platform cluster on RHOSP
Resource Value
Floating IP addresses 3
Ports 15
Routers 1
Subnets 1
RAM 112 GB
vCPUs 28
Instances 7
Security groups 3
A cluster might function with fewer than recommended resources, but its performance is not
guaranteed.
IMPORTANT
If RHOSP object storage (Swift) is available and operated by a user account with the
swiftoperator role, it is used as the default backend for the OpenShift Container
Platform image registry. In this case, the volume storage requirement is 175 GB. Swift
space requirements vary depending on the size of the image registry.
NOTE
By default, your security group and security group rule quotas might be low. If you
encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60
<project> as an administrator to increase them.
An OpenShift Container Platform deployment comprises control plane machines, compute machines,
and a bootstrap machine.
By default, the OpenShift Container Platform installation process stands up three control plane and
three compute machines.
TIP
Compute machines host the applications that you run on OpenShift Container Platform; aim to run as
many as you can.
During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After
the production control plane is ready, the bootstrap machine is deprovisioned.
Once you confirm that your Red Hat OpenShift Cluster Manager inventory is correct, either maintained
automatically by Telemetry or manually using OCM, use subscription watch to track your OpenShift
Container Platform subscriptions at the account or multi-cluster level.
Access the Red Hat OpenShift Cluster Manager page to download the installation program and
perform subscription management. If the cluster has Internet access and you do not disable
Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
IMPORTANT
If your cluster cannot have direct Internet access, you can perform a restricted network
installation on some types of infrastructure that you provision. During that process, you
download the content that is required and use it to populate a mirror registry with the
packages that you need to install a cluster and generate the installation program. With
some installation types, the environment that you install your cluster in will not require
Internet access. Before you update the cluster, you update the content of the mirror
registry.
IMPORTANT
If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as
Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it
is unavailable, the installation program relies on the RHOSP block storage service,
commonly known as Cinder.
If Swift is present and you want to use it, you must enable access to it. If it is not present,
or if you do not want to use it, skip this section.
Prerequisites
Procedure
To enable Swift on RHOSP:
1. As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will
access Swift:
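For example:
$ openstack role add --user <user> --project <project> swiftoperator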
Your RHOSP deployment can now use Swift for the image registry.
Procedure
If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it.
IMPORTANT
Remember to add a password to the auth field. You can also keep secrets in
a separate file from clouds.yaml.
If your RHOSP distribution does not include the Horizon web UI, or you do not want to use
Horizon, create the file yourself. For detailed information about clouds.yaml, see Config
files in the RHOSP documentation.
clouds:
  shiftstack:
    auth:
      auth_url: http://10.10.14.42:5000/v3
      project_name: shiftstack
      username: shiftstack_user
      password: XXX
      user_domain_name: Default
      project_domain_name: Default
  dev-env:
    region_name: RegionOne
    auth:
      username: 'devuser'
      password: XXX
      project_name: 'devonly'
      auth_url: 'https://10.10.14.22:5001/v2.0'
2. If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint
authentication:
d. Add the cacert key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate:
clouds:
  shiftstack:
    ...
    cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem"
TIP
After you run the installer with a custom CA certificate, you can update the certificate by
editing the value of the ca-cert.pem key in the cloud-provider-config config map. On a
command line, run:
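For example:
$ oc edit configmap -n openshift-config cloud-provider-config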
Prerequisites
Obtain the OpenShift Container Platform installation program. For a restricted network
installation, the program is on your bastion host.
Procedure
1. Log in to the Red Hat Customer Portal's Product Downloads page.
2. Under Version, select the most recent release of OpenShift Container Platform 4.6 for RHEL 8.
IMPORTANT
The RHCOS images might not change with every release of OpenShift Container
Platform. You must download images with the highest version that is less than or
equal to the OpenShift Container Platform version that you install. Use the image
versions that match your OpenShift Container Platform version if they are
available.
3. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW).
NOTE
You must decompress the RHOSP image before the cluster can use it. The
name of the downloaded file might not contain a compression extension, like .gz
or .tgz. To find out if or how the file is compressed, in a command line, enter:
$ file <name_of_downloaded_file>
4. Decompress the image.
5. Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. For example:
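A sketch, assuming QCOW2 format; substitute the decompressed file name and your preferred image name:
$ openstack image create --disk-format=qcow2 --container-format=bare --file <decompressed_image_file> rhcos-<version>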
IMPORTANT
Depending on your RHOSP environment, you might be able to upload the image
in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format.
CAUTION
If the installation program finds multiple images with the same name, it chooses one of them at
random. To avoid this behavior, create unique names for resources in RHOSP.
The image is now available for a restricted installation. Note the image name or location for use in
OpenShift Container Platform deployment.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster. For a restricted network installation, these files are on your bastion host.
Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible
location.
Have the imageContentSources values that were generated during mirror registry creation.
Procedure
a. Change to the directory that contains the installation program and run the following
command:
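A sketch of the command:
$ ./openshift-install create install-config --dir=<installation_directory> 1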
1 For <installation_directory>, specify the directory name to store the files that the
installation program creates.
IMPORTANT
Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
b. At the prompts, provide the configuration details for your cloud:
i. Optional: Select an SSH key to use to access your cluster machines.
NOTE
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
ii. Select openstack as the platform to target.
iii. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for
installing the cluster.
iv. Specify the floating IP address to use for external access to the OpenShift API.
v. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane and compute
nodes.
vi. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of
this base and will also include the cluster name.
vii. Enter a name for your cluster. The name must be 14 or fewer characters long.
viii. Paste the pull secret that you obtained from the Pull Secret page on the Red Hat
OpenShift Cluster Manager site.
2. In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example:
platform:
  openstack:
    clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d
3. Edit the install-config.yaml file to provide the additional information that is required for an
installation in a restricted network.
a. Update the pullSecret value to contain the authentication information for your registry:
For <bastion_host_name>, specify the registry domain name that you specified in the
certificate for your mirror registry, and for <credentials>, specify the base64-encoded user
name and password for your mirror registry.
b. Add the additionalTrustBundle parameter and value:
additionalTrustBundle: |
-----BEGIN CERTIFICATE-----
ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
-----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.
c. Add the image content resources, which look like this excerpt:
imageContentSources:
- mirrors:
- <bastion_host_name>:5000/<repo_name>/release
source: quay.example.com/openshift-release-dev/ocp-release
- mirrors:
- <bastion_host_name>:5000/<repo_name>/release
source: registry.example.com/ocp/release
To complete these values, use the imageContentSources that you recorded during mirror
registry creation.
4. Make any other modifications to the install-config.yaml file that you require. You can find more
information about the available parameters in the Installation configuration parameters
section.
5. Back up the install-config.yaml file so that you can use it to install multiple clusters.
IMPORTANT
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe
your account on the cloud platform that hosts your cluster and optionally customize your cluster’s
platform. When you create the install-config.yaml installation configuration file, you provide values for
the required parameters through the command line. If you customize your cluster, you can modify the
install-config.yaml file to provide more details about the platform.
NOTE
After installation, you cannot modify these parameters in the install-config.yaml file.
baseDomain
    The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values in the <metadata.name>.<baseDomain> format.
    Values: A fully-qualified domain or subdomain name, such as example.com.
metadata.name
    The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}}.
    Values: A string of lowercase letters, hyphens (-), and periods (.), such as dev. The string must be 14 or fewer characters long.
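For example, if you set baseDomain to example.com and metadata.name to dev, the full cluster domain is dev.example.com, and cluster DNS records such as api.dev.example.com and *.apps.dev.example.com are created under it.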
compute
    The configuration for the machines that comprise the compute nodes.
    Values: An array of machine-pool objects. For details, see the following "Machine-pool" table.
compute.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
    IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled
compute.platform
    Required if you use compute. Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}
compute.replicas
    The number of compute machines, which are also known as worker machines, to provision.
    Values: A positive integer greater than or equal to 2. The default value is 3.
controlPlane
    The configuration for the machines that comprise the control plane.
    Values: An array of MachinePool objects. For details, see the following "Machine-pool" table.
controlPlane.hyperthreading
    Whether to enable or disable simultaneous multithreading, or hyperthreading, on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
    IMPORTANT: If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance.
    Values: Enabled or Disabled
controlPlane.platform
    Required if you use controlPlane. Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value.
    Values: aws, azure, gcp, openstack, ovirt, vsphere, or {}
controlPlane.replicas
    The number of control plane machines to provision.
    Values: The only supported value is 3, which is the default value.
imageContentSources
    Sources and repositories for the release-image content.
    Values: An array of objects. Each object includes a source and, optionally, mirrors, as described in the following rows of this table.
networking.serviceNetwork
    The IP address pools for services. The default is 172.30.0.0/16.
    Values: An array of IP networks. IP networks are represented as strings in Classless Inter-Domain Routing (CIDR) notation: a traditional IP address or network number, followed by the forward slash (/) character and a decimal value between 0 and 32 that describes the number of significant bits. For example, 10.0.0.0/16 represents IP addresses 10.0.0.0 through 10.0.255.255.
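As a minimal sketch of where this parameter sits, a customized install-config.yaml might contain:
networking:
  serviceNetwork:
  - 172.30.0.0/16
The value is an array; the sample file later in this section uses a single CIDR block.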
sshKey
    The SSH key or keys to authenticate access to your cluster machines.
    NOTE: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
    Values: One or more keys. For example:
    sshKey:
      <key1>
      <key2>
      <key3>
platform.openstack.computeFlavor
    The RHOSP flavor to use for control plane and compute machines.
    Values: A string, for example m1.xlarge.
compute.platform.openstack.additionalNetworkIDs
    Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks.
    Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
compute.platform.openstack.additionalSecurityGroupIDs
    Additional security groups that are associated with compute machines.
    Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.
compute.platform.openstack.zones
    RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.
    Values: A list of strings. For example, ["zone-1", "zone-2"].
controlPlane.platform.openstack.additionalNetworkIDs
    Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks.
    Values: A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
controlPlane.platform.openstack.additionalSecurityGroupIDs
    Additional security groups that are associated with control plane machines.
    Values: A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7.
controlPlane.platform.openstack.zones
    RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installer relies on the default settings for Nova that the RHOSP administrator configured.
    Values: A list of strings. For example, ["zone-1", "zone-2"].
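For example, a sketch that pins control plane machines to two hypothetical availability zones, zone-1 and zone-2, might look like this in install-config.yaml:
controlPlane:
  platform:
    openstack:
      zones: ["zone-1", "zone-2"]
The zone names must match AZs that your RHOSP administrator defined.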
platform.openstack.clusterOSImage
    The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network.
    Values: An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d. The value can also be the name of an existing Glance image, for example my-rhcos.
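If you host the image on your own mirror, you can compute the checksum to append to the URL. A minimal sketch, assuming the compressed image file is in your current directory:
$ sha256sum rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz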
platform.openstack.externalDNS
    IP addresses for external DNS servers that cluster instances use for DNS resolution.
    Values: A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"].
platform.openstack.machinesSubnet
    The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet.
    Values: A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf.
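If you know the subnet by name rather than by UUID, you can look it up with the RHOSP CLI; <subnet_name> is a placeholder:
$ openstack subnet show <subnet_name> -c id -f value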
This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform
(RHOSP) customization options.
IMPORTANT
This sample file is provided for reference only. You must obtain your install-config.yaml
file by using the installation program.
apiVersion: v1
baseDomain: example.com
clusterID: os-test
controlPlane:
  name: master
  platform: {}
  replicas: 3
compute:
- name: worker
  platform:
    openstack:
      type: m1.large
  replicas: 3
metadata:
  name: example
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
  networkType: OpenShiftSDN
platform:
  openstack:
    region: region1
    cloud: mycloud
    externalNetwork: external
    computeFlavor: m1.xlarge
    lbFloatingIP: 128.0.0.1
fips: false
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_registry>/<repo_name>/release
  source: registry.svc.ci.openshift.org/ocp/release
NOTE
You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the
key is added to the core user’s ~/.ssh/authorized_keys list.
NOTE
You must use a local key, not one that you configured with platform-specific approaches
such as AWS key pairs.
Procedure
1. If you do not have an SSH key that is configured for password-less authentication on your
computer, create one. For example, on a computer that uses a Linux operating system, run the
following command:
$ ssh-keygen -t rsa -b 4096 -N '' \
    -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key.
Running this command generates an SSH key that does not require a password in the location
that you specified.
2. Start the ssh-agent process as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
3. Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name> 1
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation
program.
You can configure OpenShift Container Platform API and application access by using floating IP
addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but
the installer will not configure a way to reach the API or applications externally.
Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and
cluster applications.
Procedure
1. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP:
$ openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network>
2. Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP:
$ openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network>
3. Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
NOTE
If you do not control the DNS server, you can add the record to your /etc/hosts
file. This action makes the API accessible to only you, which is not suitable for
production deployment but does allow installation for development and testing.
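For example, hypothetical /etc/hosts entries for a cluster named dev under example.com might look like this:
203.0.113.10 api.dev.example.com
203.0.113.19 console-openshift-console.apps.dev.example.com
Because /etc/hosts does not support wildcard entries, add a line for each application hostname that you need to reach.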
4. Add the FIPs to the install-config.yaml file as the values of the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.lbFloatingIP
If you use these values, you must also enter an external network as the value of the
platform.openstack.externalNetwork parameter in the install-config.yaml file.
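As a sketch, with documentation-range addresses standing in for your real FIPs, the relevant install-config.yaml excerpt might read:
platform:
  openstack:
    externalNetwork: external
    lbFloatingIP: 203.0.113.10
    ingressFloatingIP: 203.0.113.19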
TIP
You can make OpenShift Container Platform resources available outside of the cluster by assigning a
floating IP address and updating your firewall configuration.
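A minimal sketch of that assignment with the RHOSP CLI, where the port UUID and floating IP address are placeholders for your own values:
$ openstack floating ip set --port <port_uuid> <floating_ip_address>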
You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without
providing floating IP addresses. In the install-config.yaml file, do not define the following parameters:
platform.openstack.ingressFloatingIP
platform.openstack.lbFloatingIP
If you cannot provide an external network, you can also leave platform.openstack.externalNetwork
blank. If you do not provide a value for platform.openstack.externalNetwork, a router is not created for
you, and, without additional action, the installer will fail to retrieve an image from Glance. You must
configure external connectivity on your own.
If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP
addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use
a proxy network or run the installer from a system that is on the same network as your machines.
NOTE
You can enable name resolution by creating DNS records for the API and Ingress ports.
For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>
If you do not control the DNS server, you can add the record to your /etc/hosts file. This
action makes the API accessible to only you, which is not suitable for production
deployment but does allow installation for development and testing.
IMPORTANT
You can run the create cluster command of the installation program only once, during
initial installation.
Prerequisites
Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your
cluster.
Procedure
1. Change to the directory that contains the installation program and initialize the cluster
deployment:
$ ./openshift-install create cluster --dir=<installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory>, specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn, debug, or error instead of info.
NOTE
If the cloud provider account that you configured on your host does not have
sufficient permissions to deploy the cluster, the installation process stops, and
the missing permissions are displayed.
When the cluster deployment completes, directions for accessing your cluster, including a link to
its web console and credentials for the kubeadmin user, display in your terminal.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export
KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-
console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-
Wt5AL"
INFO Time elapsed: 36m22s
NOTE
The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
IMPORTANT
The Ignition config files that the installation program generates contain
certificates that expire after 24 hours, which are then renewed at that time. If the
cluster is shut down before renewing the certificates and the cluster is later
restarted after the 24 hours have elapsed, the cluster automatically recovers the
expired certificates. The exception is that you must manually approve the
pending node-bootstrapper certificate signing requests (CSRs) to recover
kubelet certificates. See the documentation for Recovering from expired control
plane certificates for more information.
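A minimal sketch of that manual approval, using standard oc commands; <csr_name> is a placeholder for a pending node-bootstrapper request:
$ oc get csr
$ oc adm certificate approve <csr_name>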
IMPORTANT
You must not delete the installation program or the files that the installation
program creates. Both are required to delete the cluster.
Procedure
1. In a terminal, export the administrator's kubeconfig:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a
client to the correct cluster and API server.
2. View the control plane and compute machines created after a deployment:
$ oc get nodes
3. View your cluster's version:
$ oc get clusterversion
4. View your Operators' detailed status:
$ oc get clusteroperator
5. View all running pods in the cluster:
$ oc get pods -A
Prerequisites
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.
Procedure
1. Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
2. Verify you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
Next steps
If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by
configuring additional trust stores.
If you did not configure RHOSP to accept application traffic over floating IP addresses,
configure RHOSP access with floating IP addresses .
Prerequisites
Have a copy of the installation program that you used to deploy the cluster.
Have the files that the installation program generated when you created your cluster.
Procedure
1. From the directory that contains the installation program on the computer that you used to
install the cluster, run the following command:
$ ./openshift-install destroy cluster --dir=<installation_directory> --log-level=info 1
1 For <installation_directory>, specify the path to the directory that you stored the
installation files in.
NOTE
You must specify the directory that contains the cluster definition files for your
cluster. The installation program requires the metadata.json file in this directory
to delete the cluster.
2. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform
installation program.
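A minimal sketch of that optional cleanup, assuming the installation program binary sits alongside the installation directory:
$ rm -rf <installation_directory>
$ rm ./openshift-install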
NOTE
These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.
Prerequisites
Procedure
1.7.2. Removing a cluster from RHOSP that uses your own infrastructure
You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP)
that uses your own infrastructure. To complete the removal process quickly, run several Ansible
playbooks.
Prerequisites
You have the playbooks that you used to install the cluster.
You modified the playbooks that are prefixed with down- to reflect any changes that you made
to their corresponding installation playbooks. For example, changes to the bootstrap.yaml file
are reflected in the down-bootstrap.yaml file.
Procedure
1. On a command line, run the down- playbooks:
$ ansible-playbook -i inventory.yaml \
down-bootstrap.yaml \
down-control-plane.yaml \
down-compute-nodes.yaml \
down-load-balancers.yaml \
down-network.yaml \
down-security-groups.yaml
2. Remove any DNS record changes you made for the OpenShift Container Platform installation.