Red Hat OpenStack Platform 13 Director Installation and Usage
OpenStack Team
[email protected]
Legal Notice
Copyright © 2020 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This guide explains how to install Red Hat OpenStack Platform 13 in an enterprise environment
using the Red Hat OpenStack Platform director. This includes installing the director, planning your
environment, and creating an OpenStack environment with the director.
Table of Contents
CHAPTER 1. INTRODUCTION
1.1. UNDERCLOUD
1.2. OVERCLOUD
1.3. HIGH AVAILABILITY
1.4. CONTAINERIZATION
1.5. CEPH STORAGE
CHAPTER 2. REQUIREMENTS
2.1. ENVIRONMENT REQUIREMENTS
2.2. UNDERCLOUD REQUIREMENTS
2.2.1. Virtualization Support
2.3. NETWORKING REQUIREMENTS
2.4. OVERCLOUD REQUIREMENTS
2.4.1. Compute Node Requirements
2.4.2. Controller Node Requirements
2.4.2.1. Virtualization Support
2.4.3. Ceph Storage Node Requirements
2.4.4. Object Storage Node Requirements
2.5. REPOSITORY REQUIREMENTS
CHAPTER 3. PLANNING YOUR OVERCLOUD
3.1. PLANNING NODE DEPLOYMENT ROLES
3.2. PLANNING NETWORKS
3.3. PLANNING STORAGE
3.4. PLANNING HIGH AVAILABILITY
CHAPTER 4. INSTALLING THE UNDERCLOUD
4.1. CONFIGURING AN UNDERCLOUD PROXY
4.2. CREATING THE STACK USER
4.3. CREATING DIRECTORIES FOR TEMPLATES AND IMAGES
4.4. SETTING THE UNDERCLOUD HOSTNAME
4.5. REGISTERING AND UPDATING YOUR UNDERCLOUD
4.6. INSTALLING THE DIRECTOR PACKAGES
4.7. INSTALLING CEPH-ANSIBLE
4.8. CONFIGURING THE DIRECTOR
4.9. DIRECTOR CONFIGURATION PARAMETERS
4.10. CONFIGURING HIERADATA ON THE UNDERCLOUD
4.11. INSTALLING THE DIRECTOR
4.12. OBTAINING IMAGES FOR OVERCLOUD NODES
4.12.1. Single CPU architecture overclouds
4.12.2. Multiple CPU architecture overclouds
4.13. SETTING A NAMESERVER FOR THE CONTROL PLANE
4.14. NEXT STEPS
CHAPTER 5. CONFIGURING A CONTAINER IMAGE SOURCE
5.1. REGISTRY METHODS
5.2. CONTAINER IMAGE PREPARATION COMMAND USAGE
5.3. CONTAINER IMAGES FOR ADDITIONAL SERVICES
5.4. USING THE RED HAT REGISTRY AS A REMOTE REGISTRY SOURCE
5.5. USING THE UNDERCLOUD AS A LOCAL REGISTRY
5.6. USING A SATELLITE SERVER AS A REGISTRY
5.7. NEXT STEPS
CHAPTER 6. CONFIGURING A BASIC OVERCLOUD WITH THE CLI TOOLS
6.1. REGISTERING NODES FOR THE OVERCLOUD
6.2. INSPECTING THE HARDWARE OF NODES
6.3. AUTOMATICALLY DISCOVER BARE METAL NODES
6.4. GENERATE ARCHITECTURE SPECIFIC ROLES
6.5. TAGGING NODES INTO PROFILES
6.6. DEFINING THE ROOT DISK
6.7. USING THE OVERCLOUD-MINIMAL IMAGE TO AVOID USING A RED HAT SUBSCRIPTION ENTITLEMENT
6.8. CREATING AN ENVIRONMENT FILE THAT DEFINES NODE COUNTS AND FLAVORS
6.9. CONFIGURE OVERCLOUD NODES TO TRUST THE UNDERCLOUD CA
6.10. CUSTOMIZING THE OVERCLOUD WITH ENVIRONMENT FILES
6.11. CREATING THE OVERCLOUD WITH THE CLI TOOLS
6.12. INCLUDING ENVIRONMENT FILES IN OVERCLOUD CREATION
6.13. MANAGING OVERCLOUD PLANS
6.14. VALIDATING OVERCLOUD TEMPLATES AND PLANS
6.15. MONITORING THE OVERCLOUD CREATION
6.16. VIEWING THE OVERCLOUD DEPLOYMENT OUTPUT
6.17. ACCESSING THE OVERCLOUD
6.18. COMPLETING THE OVERCLOUD CREATION
CHAPTER 7. CONFIGURING A BASIC OVERCLOUD WITH THE WEB UI
7.1. ACCESSING THE WEB UI
7.2. NAVIGATING THE WEB UI
7.3. IMPORTING AN OVERCLOUD PLAN IN THE WEB UI
7.4. REGISTERING NODES IN THE WEB UI
7.5. INSPECTING THE HARDWARE OF NODES IN THE WEB UI
7.6. TAGGING NODES INTO PROFILES IN THE WEB UI
7.7. EDITING OVERCLOUD PLAN PARAMETERS IN THE WEB UI
7.8. ADDING ROLES IN THE WEB UI
7.9. ASSIGNING NODES TO ROLES IN THE WEB UI
7.10. EDITING ROLE PARAMETERS IN THE WEB UI
7.11. STARTING THE OVERCLOUD CREATION IN THE WEB UI
7.12. COMPLETING THE OVERCLOUD CREATION
CHAPTER 8. CONFIGURING A BASIC OVERCLOUD USING PRE-PROVISIONED NODES
8.1. CREATING A USER FOR CONFIGURING NODES
8.2. REGISTERING THE OPERATING SYSTEM FOR NODES
8.3. INSTALLING THE USER AGENT ON NODES
8.4. CONFIGURING SSL/TLS ACCESS TO THE DIRECTOR
8.5. CONFIGURING NETWORKING FOR THE CONTROL PLANE
8.6. USING A SEPARATE NETWORK FOR OVERCLOUD NODES
8.7. CONFIGURING CEPH STORAGE FOR PRE-PROVISIONED NODES
8.8. CREATING THE OVERCLOUD WITH PRE-PROVISIONED NODES
8.9. POLLING THE METADATA SERVER
8.10. MONITORING THE OVERCLOUD CREATION
8.11. ACCESSING THE OVERCLOUD
8.12. SCALING PRE-PROVISIONED NODES
8.13. REMOVING A PRE-PROVISIONED OVERCLOUD
8.14. COMPLETING THE OVERCLOUD CREATION
CHAPTER 9. PERFORMING TASKS AFTER OVERCLOUD CREATION
9.1. MANAGING CONTAINERIZED SERVICES
9.2. CREATING THE OVERCLOUD TENANT NETWORK
CHAPTER 10. CONFIGURING THE OVERCLOUD WITH ANSIBLE
10.1. ANSIBLE-BASED OVERCLOUD CONFIGURATION (CONFIG-DOWNLOAD)
10.2. SWITCHING THE OVERCLOUD CONFIGURATION METHOD TO CONFIG-DOWNLOAD
10.3. ENABLING CONFIG-DOWNLOAD WITH PRE-PROVISIONED NODES
10.4. ENABLING ACCESS TO CONFIG-DOWNLOAD WORKING DIRECTORIES
10.5. CHECKING CONFIG-DOWNLOAD LOGS AND WORKING DIRECTORY
10.6. RUNNING CONFIG-DOWNLOAD MANUALLY
10.7. DISABLING CONFIG-DOWNLOAD
10.8. NEXT STEPS
CHAPTER 11. MIGRATING VIRTUAL MACHINES BETWEEN COMPUTE NODES
11.1. MIGRATION TYPES
11.2. MIGRATION CONSTRAINTS
11.3. PRE-MIGRATION PROCEDURES
11.4. LIVE MIGRATE A VIRTUAL MACHINE
11.5. COLD MIGRATE A VIRTUAL MACHINE
11.6. CHECK MIGRATION STATUS
11.7. POST-MIGRATION PROCEDURES
11.8. TROUBLESHOOTING MIGRATION
CHAPTER 12. CREATING VIRTUALIZED CONTROL PLANES
12.1. VIRTUALIZED CONTROL PLANE ARCHITECTURE
12.2. BENEFITS AND LIMITATIONS OF VIRTUALIZING YOUR RHOSP OVERCLOUD CONTROL PLANE
12.3. PROVISIONING VIRTUALIZED CONTROLLERS USING THE RED HAT VIRTUALIZATION DRIVER
CHAPTER 13. SCALING OVERCLOUD NODES
13.1. ADDING NODES TO THE OVERCLOUD
13.2. INCREASING NODE COUNTS FOR ROLES
13.3. REMOVING COMPUTE NODES
13.4. REPLACING CEPH STORAGE NODES
13.5. REPLACING OBJECT STORAGE NODES
13.6. BLACKLISTING NODES
CHAPTER 14. REPLACING CONTROLLER NODES
14.1. PREPARING FOR CONTROLLER REPLACEMENT
14.2. REMOVING A CEPH MONITOR DAEMON
14.3. PREPARING THE CLUSTER FOR CONTROLLER REPLACEMENT
14.4. REPLACING A CONTROLLER NODE
14.5. TRIGGERING THE CONTROLLER NODE REPLACEMENT
14.6. CLEANING UP AFTER CONTROLLER NODE REPLACEMENT
CHAPTER 15. REBOOTING NODES
15.1. REBOOTING THE UNDERCLOUD NODE
CHAPTER 16. TROUBLESHOOTING DIRECTOR ISSUES
16.1. TROUBLESHOOTING NODE REGISTRATION
16.2. TROUBLESHOOTING HARDWARE INTROSPECTION
16.3. TROUBLESHOOTING WORKFLOWS AND EXECUTIONS
16.4. TROUBLESHOOTING OVERCLOUD CREATION
16.4.1. Accessing deployment command history
16.4.2. Orchestration
16.4.3. Bare Metal Provisioning
16.4.4. Post-Deployment Configuration
16.5. TROUBLESHOOTING IP ADDRESS CONFLICTS ON THE PROVISIONING NETWORK
16.6. TROUBLESHOOTING "NO VALID HOST FOUND" ERRORS
16.7. TROUBLESHOOTING THE OVERCLOUD AFTER CREATION
16.7.1. Overcloud Stack Modifications
16.7.2. Controller Service Failures
16.7.3. Containerized Service Failures
16.7.4. Compute Service Failures
16.7.5. Ceph Storage Service Failures
16.8. TUNING THE UNDERCLOUD
16.9. CREATING AN SOSREPORT
16.10. IMPORTANT LOGS FOR UNDERCLOUD AND OVERCLOUD
APPENDIX A. SSL/TLS CERTIFICATE CONFIGURATION
A.1. INITIALIZING THE SIGNING HOST
A.2. CREATING A CERTIFICATE AUTHORITY
A.3. ADDING THE CERTIFICATE AUTHORITY TO CLIENTS
A.4. CREATING AN SSL/TLS KEY
A.5. CREATING AN SSL/TLS CERTIFICATE SIGNING REQUEST
A.6. CREATING THE SSL/TLS CERTIFICATE
A.7. USING THE CERTIFICATE WITH THE UNDERCLOUD
APPENDIX B. POWER MANAGEMENT DRIVERS
B.1. REDFISH
B.2. DELL REMOTE ACCESS CONTROLLER (DRAC)
B.3. INTEGRATED LIGHTS-OUT (ILO)
B.4. CISCO UNIFIED COMPUTING SYSTEM (UCS)
B.5. FUJITSU INTEGRATED REMOTE MANAGEMENT CONTROLLER (IRMC)
B.6. VIRTUAL BASEBOARD MANAGEMENT CONTROLLER (VBMC)
B.7. RED HAT VIRTUALIZATION
B.8. MANUAL-MANAGEMENT DRIVER
APPENDIX C. WHOLE DISK IMAGES
C.1. DOWNLOADING THE BASE CLOUD IMAGE
C.2. DISK IMAGE ENVIRONMENT VARIABLES
C.3. CUSTOMIZING THE DISK LAYOUT
C.3.1. Modifying the Partitioning Schema
C.3.2. Modifying the Image Size
C.4. CREATING A SECURITY HARDENED WHOLE DISK IMAGE
C.5. UPLOADING A SECURITY HARDENED WHOLE DISK IMAGE
APPENDIX D. ALTERNATIVE BOOT MODES
D.1. STANDARD PXE
D.2. UEFI BOOT MODE
APPENDIX E. AUTOMATIC PROFILE TAGGING
E.1. POLICY FILE SYNTAX
E.2. POLICY FILE EXAMPLE
E.3. IMPORTING POLICY FILES
E.4. AUTOMATIC PROFILE TAGGING PROPERTIES
APPENDIX F. SECURITY ENHANCEMENTS
F.1. CHANGING THE SSL/TLS CIPHER AND RULES FOR HAPROXY
APPENDIX G. RED HAT OPENSTACK PLATFORM FOR POWER
G.1. CEPH STORAGE
G.2. COMPOSABLE SERVICES
CHAPTER 1. INTRODUCTION
The Red Hat OpenStack Platform director is a toolset for installing and managing a complete
OpenStack environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation
for "OpenStack-On-OpenStack". This project takes advantage of OpenStack components to install a
fully operational OpenStack environment. This includes new OpenStack components that provision and
control bare metal systems to use as OpenStack nodes. This provides a simple method for installing a
complete Red Hat OpenStack Platform environment that is both lean and robust.
The Red Hat OpenStack Platform director uses two main concepts: an undercloud and an overcloud.
The undercloud installs and configures the overcloud. The next few sections outline the concept of
each.
1.1. UNDERCLOUD
The undercloud is the main director node. It is a single-system OpenStack installation that includes
components for provisioning and managing the OpenStack nodes that form your OpenStack
environment (the overcloud). The components that form the undercloud provide the following functions:
Environment Planning
The undercloud provides planning functions for users to create and assign certain node roles. The
undercloud includes a default set of node roles, such as Compute, Controller, and various storage roles,
but also provides the ability to use custom roles. In addition, you can select which OpenStack
Platform services to include on each node role, which provides a method to model new node types or
isolate certain components on their own host.
Bare Metal System Control
The undercloud uses the out-of-band management interface, usually the Intelligent Platform Management
Interface (IPMI), of each node for power management control and a PXE-based service to discover
hardware attributes and install OpenStack to each node. This provides a method to provision bare
metal systems as OpenStack nodes. See Appendix B, Power Management Drivers for a full list of
power management drivers.
Orchestration
The undercloud provides a set of YAML templates that acts as a set of plans for your environment.
The undercloud imports these plans and follows their instructions to create the resulting OpenStack
environment. The plans also include hooks that allow you to incorporate your own customizations as
7
Red Hat OpenStack Platform 13 Director Installation and Usage
OpenStack Identity (keystone) - Provides authentication and authorization for the director’s
components.
OpenStack Bare Metal (ironic) and OpenStack Compute (nova) - Manages bare metal
nodes.
OpenStack Networking (neutron) and Open vSwitch - Controls networking for bare metal
nodes.
OpenStack Image Service (glance) - Stores images that are written to bare metal machines.
OpenStack Telemetry (ceilometer) - Performs monitoring and data collection. This also
includes:
OpenStack Telemetry Metrics (gnocchi) - Provides a time series database for metrics.
OpenStack Telemetry Event Storage (panko) - Provides event storage for monitoring.
OpenStack Workflow Service (mistral) - Provides a set of workflows for certain director-
specific actions, such as importing and deploying plans.
OpenStack Messaging Service (zaqar) - Provides a messaging service for the OpenStack
Workflow Service.
OpenStack Object Storage (swift) - Provides object storage for various OpenStack
Platform components.
1.2. OVERCLOUD
The overcloud is the resulting Red Hat OpenStack Platform environment created using the undercloud.
This includes different node roles that you define based on the OpenStack Platform environment you
aim to create. The undercloud includes a default set of overcloud node roles, which include:
Controller
Nodes that provide administration, networking, and high availability for the OpenStack environment.
An ideal OpenStack environment uses three of these nodes together in a high availability cluster.
A default Controller node contains the following components:
MariaDB
Open vSwitch
Compute
These nodes provide computing resources for the OpenStack environment. You can add more
Compute nodes to scale out your environment over time. A default Compute node contains the
following components:
KVM/QEMU
Open vSwitch
Storage
Nodes that provide storage for the OpenStack environment. This includes nodes for:
Ceph Storage nodes - Used to form storage clusters. Each node contains a Ceph Object
Storage Daemon (OSD). In addition, the director installs Ceph Monitor onto the Controller
nodes in situations where it deploys Ceph Storage nodes.
Block storage (cinder) - Used as external block storage for HA Controller nodes. This node
contains the following components:
Open vSwitch.
Object storage (swift) - These nodes provide an external storage layer for OpenStack Swift.
The Controller nodes access these nodes through the Swift proxy. This node contains the
following components:
Open vSwitch.
1.3. HIGH AVAILABILITY
The OpenStack Platform director uses some key pieces of software to manage components on the
Controller node:
Pacemaker - Pacemaker is a cluster resource manager. Pacemaker manages and monitors the
availability of OpenStack components across all nodes in the cluster.
Galera - Replicates the Red Hat OpenStack Platform database across the cluster.
NOTE
Red Hat OpenStack Platform director automatically configures the bulk of high
availability on Controller nodes. However, the nodes require some manual
configuration to enable power management controls. This guide includes these
instructions.
From version 13 and later, you can use the director to deploy High Availability for
Compute Instances (Instance HA). With Instance HA you can automate
evacuating instances from a Compute node when that node fails.
1.4. CONTAINERIZATION
Each OpenStack Platform service on the overcloud runs inside an individual Linux container on its
respective node. This provides a method to isolate services and provide an easy way to maintain and
upgrade OpenStack Platform. Red Hat supports several methods of obtaining container images for your
overcloud, including pulling images from the Red Hat registry, hosting them in a local registry on the undercloud, or hosting them on a Red Hat Satellite server.
This guide provides information on how to configure your registry details and perform basic container
operations. For more information on containerized services, see the Transitioning to Containerized
Services guide.
1.5. CEPH STORAGE
In a large OpenStack deployment, there is a practical requirement to virtualize the storage layer with a solution like Red Hat
Ceph Storage so that you can scale the Red Hat OpenStack Platform storage layer from tens of
terabytes to petabytes (or even exabytes) of storage. Red Hat Ceph Storage provides this storage
virtualization layer with high availability and high performance while running on commodity hardware.
While virtualization might seem like it comes with a performance penalty, Ceph stripes block device
images as objects across the cluster; this means large Ceph Block Device images have better
performance than a standalone disk. Ceph Block devices also support caching, copy-on-write cloning,
and copy-on-read cloning for enhanced performance.
See Red Hat Ceph Storage for additional information about Red Hat Ceph Storage.
CHAPTER 2. REQUIREMENTS
This chapter outlines the main requirements for setting up an environment to provision Red Hat
OpenStack Platform using the director. This includes the requirements for setting up the director,
accessing it, and the hardware requirements for hosts that the director provisions for OpenStack
services.
2.1. ENVIRONMENT REQUIREMENTS
Minimum Requirements:
1 host machine for the Red Hat OpenStack Platform director
1 host machine for a Red Hat OpenStack Platform Compute node
1 host machine for a Red Hat OpenStack Platform Controller node
Recommended Requirements:
1 host machine for the Red Hat OpenStack Platform director
3 host machines for Red Hat OpenStack Platform Compute nodes
3 host machines for Red Hat OpenStack Platform Controller nodes in a cluster
3 host machines for Red Hat Ceph Storage nodes in a cluster
Note the following:
It is recommended to use bare metal systems for all nodes. At minimum, the Compute nodes
require bare metal systems.
All overcloud bare metal systems require an Intelligent Platform Management Interface (IPMI).
This is because the director controls the power management.
Set the internal BIOS clock of each node to UTC. This prevents issues with future-dated file
timestamps when hwclock synchronizes the BIOS clock before applying the timezone offset.
To deploy overcloud Compute nodes on POWER (ppc64le) hardware, read the overview in
Appendix G, Red Hat OpenStack Platform for POWER .
2.2. UNDERCLOUD REQUIREMENTS
The undercloud system hosting the director provides provisioning and management for all nodes in the overcloud.
An 8-core 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
A minimum of 16 GB of RAM.
The ceph-ansible playbook consumes 1 GB resident set size (RSS) per 10 hosts deployed
by the undercloud. If the deployed overcloud will use an existing Ceph cluster, or if it will
deploy a new Ceph cluster, then provision undercloud RAM accordingly.
A minimum of 100 GB of available disk space on the root disk.
The latest version of Red Hat Enterprise Linux 7 is installed as the host operating system.
2.2.1. Virtualization Support
Red Hat supports a virtualized undercloud only on the following platforms:
Kernel-based Virtual Machine (KVM) - Hosted by Red Hat Enterprise Linux 7, as listed on certified hypervisors.
VMware ESX and ESXi - Hosted by versions of ESX and ESXi as listed on the Red Hat Customer Portal Certification Catalogue.
IMPORTANT
Red Hat OpenStack Platform director requires that the latest version of Red Hat
Enterprise Linux 7 is installed as the host operating system. This means your virtualization
platform must also support the underlying Red Hat Enterprise Linux version.
Network Considerations
Note the following network considerations for your virtualized undercloud:
Power Management
The undercloud VM requires access to the overcloud nodes' power management devices. This is the
IP address set for the pm_addr parameter when registering nodes.
Provisioning network
The NIC used for the provisioning (ctlplane) network requires the ability to broadcast and serve
DHCP requests to the NICs of the overcloud’s bare metal nodes. As a recommendation, create a
bridge that connects the VM’s NIC to the same network as the bare metal NICs.
NOTE
A common problem occurs when the hypervisor technology blocks the undercloud from transmitting traffic from an unknown address.
If you use Red Hat Enterprise Virtualization, disable anti-mac-spoofing to prevent this.
If you use VMware ESX or ESXi, allow forged transmits to prevent this.
You must power off and on the director VM after you apply these settings. Rebooting the VM is not sufficient.
Example Architecture
This is just an example of a basic undercloud virtualization architecture using a KVM server. It is intended
as a foundation you can build on depending on your network and resource requirements.
br-ex (eth0)
DHCP server on outside network assigns network configuration to undercloud using the
virtual NIC (eth0)
Provides access for the undercloud to access the power management interfaces for the
bare metal servers
br-ctlplane (eth1)
Undercloud fulfills DHCP and PXE boot requests through virtual NIC (eth1)
Bare metal servers for the overcloud boot through PXE over this network
The following command creates the undercloud virtual machine on the KVM host and creates two virtual NICs that connect to the respective bridges:
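The values shown here are illustrative; adjust the memory, vCPU count, disk size, and installation ISO path for your environment:

$ virt-install --name undercloud \
  --memory 16384 --vcpus 8 \
  --disk size=100 \
  --location /var/lib/libvirt/images/rhel-server-7.6-x86_64-dvd.iso \
  --network bridge=br-ex \
  --network bridge=br-ctlplane \
  --graphics vnc \
  --os-variant rhel7.6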
This starts a libvirt domain. Connect to it with virt-manager and walk through the install process.
Alternatively, you can perform an unattended installation using the following options to include a
kickstart file:
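For example, options similar to the following added to the virt-install command inject a local kickstart file into the installer; the /root/undercloud.ks path is an assumed location for your own kickstart file:

  --initrd-inject /root/undercloud.ks \
  --extra-args "ks=file:/undercloud.ks console=ttyS0" \
  --noautoconsole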
Once installation completes, SSH into the instance as the root user and follow the instructions in
Chapter 4, Installing the undercloud.
Backups
To back up a virtualized undercloud, there are multiple solutions:
Option 1: Follow the instructions in the Back Up and Restore the Director Undercloud Guide.
Option 2: Shut down the undercloud and take a copy of the undercloud virtual machine storage
backing.
Option 3: Take a snapshot of the undercloud VM if your hypervisor supports live or atomic
snapshots.
To restore from a snapshot taken as a QCOW overlay, merge the overlay file into the backing file and switch the undercloud VM back to using the original file:
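For example, with the undercloud VM shut down and assuming an overlay file named undercloud.qcow2.overlay:

$ qemu-img commit /var/lib/libvirt/images/undercloud.qcow2.overlay

After the merge completes, update the virtual machine definition (for example, with virsh edit undercloud) so that its disk points to the original backing file again.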
2.3. NETWORKING REQUIREMENTS
The undercloud host requires at least two networks:
Provisioning network - Provides DHCP and PXE boot functions to help discover bare metal
systems for use in the overcloud. Typically, this network must use a native VLAN on a trunked
interface so that the director serves PXE boot and DHCP requests. Some server hardware
BIOSes support PXE boot from a VLAN, but the BIOS must also support translating that VLAN
into a native VLAN after booting, otherwise the undercloud will not be reachable. Currently, only
a small subset of server hardware fully supports this feature. This is also the network you use to
control power management through Intelligent Platform Management Interface (IPMI) on all
overcloud nodes.
External Network - A separate network for external access to the overcloud and undercloud.
The interface connecting to this network requires a routable IP address, either defined statically,
or dynamically through an external DHCP service.
This represents the minimum number of networks required. However, the director can isolate other Red
Hat OpenStack Platform network traffic into other networks. Red Hat OpenStack Platform supports
both physical interfaces and tagged VLANs for network isolation.
Single NIC configuration - One NIC for the Provisioning network on the native VLAN and
tagged VLANs that use subnets for the different overcloud network types.
Dual NIC configuration - One NIC for the Provisioning network and the other NIC for the
External network.
Dual NIC configuration - One NIC for the Provisioning network on the native VLAN and the
other NIC for tagged VLANs that use subnets for the different overcloud network types.
Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type.
Additional physical NICs can be used for isolating individual networks, creating bonded
interfaces, or for delegating tagged VLAN traffic.
If using VLANs to isolate your network traffic types, use a switch that supports 802.1Q standards
to provide tagged VLANs.
During the overcloud creation, you will refer to NICs using a single name across all overcloud
machines. Ideally, you should use the same NIC on each overcloud node for each respective
network to avoid confusion. For example, use the primary NIC for the Provisioning network and
the secondary NIC for the OpenStack services.
Make sure the Provisioning network NIC is not the same NIC used for remote connectivity on
the director machine. The director installation creates a bridge using the Provisioning NIC, which
drops any remote connections. Use the External NIC for remote connections to the director
system.
The Provisioning network requires an IP range that fits your environment size. Use the following
guidelines to determine the total number of IP addresses to include in this range:
Include at least one IP address per node connected to the Provisioning network.
If planning a high availability configuration, include an extra IP address for the virtual IP of
the cluster.
Include additional IP addresses within the range for scaling the environment.
NOTE
For more information on planning your IP address usage, for example, for
storage, provider, and tenant networks, see the Networking Guide .
Set all overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the
External NIC (and any other NICs on the system). Also ensure that the Provisioning NIC has PXE
boot at the top of the boot order, ahead of hard disks and CD/DVD drives.
All overcloud bare metal systems require a supported power management interface, such as an
Intelligent Platform Management Interface (IPMI). This allows the director to control the power
management of each node.
Make a note of the following details for each overcloud system: the MAC address of the
Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This
information will be useful later when setting up the overcloud nodes; see the example node definition after this list.
If an instance needs to be accessible from the external internet, you can allocate a floating IP
address from a public network and associate it with an instance. The instance still retains its
private IP but network traffic uses NAT to traverse through to the floating IP address. Note that
a floating IP address can be assigned to only a single instance at a time. However, the floating IP
address is reserved only for use by a single tenant, allowing
the tenant to associate or disassociate with a particular instance as required. This configuration
exposes your infrastructure to the external internet. As a result, you might need to check that
you are following suitable security practices.
To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond
may be a member of a given bridge. If you require multiple bonds or interfaces, you can
configure multiple bridges.
It is recommended to use DNS hostname resolution so that your overcloud nodes can connect
to external services, such as the Red Hat Content Delivery Network and network time servers.
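The power management details that you record for each system are typically collected in a JSON node definition file (instackenv.json) when you register nodes with the director. The following is a minimal sketch; the node name, MAC address, IPMI address, and credentials are placeholder values, and the pm_type value assumes the IPMI driver:

{
  "nodes": [
    {
      "name": "overcloud-node01",
      "pm_type": "ipmi",
      "pm_addr": "192.168.24.205",
      "pm_user": "admin",
      "pm_password": "p@55w0rd!",
      "mac": ["aa:bb:cc:dd:ee:ff"]
    }
  ]
}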
2.4. OVERCLOUD REQUIREMENTS
The following sections detail the requirements for individual systems and nodes in the overcloud
installation.
2.4.1. Compute Node Requirements
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and the
AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended this
processor has a minimum of 4 cores.
Memory
A minimum of 6 GB of RAM. Add additional RAM to this requirement based on the amount of
memory that you intend to make available to virtual machine instances.
Disk Space
A minimum of 40 GB of available disk space.
Network Interface Cards
A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two
NICs in a production environment. Use additional network interface cards for bonded interfaces or to
delegate tagged VLAN traffic.
Power Management
Each Compute node requires a supported power management interface, such as an Intelligent
Platform Management Interface (IPMI) functionality, on the server’s motherboard.
2.4.2. Controller Node Requirements
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Minimum amount of memory is 32 GB. However, the amount of recommended memory depends on
the number of vCPUs (which is based on CPU cores multiplied by hyper-threading value). Use the
following calculations as guidance:
As a minimum, use 1.5 GB of memory per vCPU. For example, a machine with 48 vCPUs should have 72 GB of RAM.
For the recommended amount, use 3 GB of memory per vCPU. For example, a machine with 48 vCPUs should have 144 GB of RAM.
For more information on measuring memory requirements, see "Red Hat OpenStack Platform
Hardware Requirements for Highly Available Controllers" on the Red Hat Customer Portal.
Red Hat provides several configuration recommendations for both Telemetry and Object Storage.
See Deployment Recommendations for Specific Red Hat OpenStack Platform Services for details.
2.4.2.1. Virtualization Support
Red Hat only supports virtualized controller nodes on Red Hat Virtualization platforms. See Virtualized
control planes for details.
2.4.3. Ceph Storage Node Requirements
Placement Groups
Ceph uses Placement Groups to facilitate dynamic and efficient object tracking at scale. In the case
of OSD failure or cluster re-balancing, Ceph can move or replicate a placement group and its
contents, which means a Ceph cluster can re-balance and recover efficiently. The default Placement
Group count that Director creates is not always optimal so it is important to calculate the correct
Placement Group count according to your requirements. You can use the Placement Group
calculator to calculate the correct count: Ceph Placement Groups (PGs) per Pool Calculator
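A commonly used rule of thumb, which does not replace the calculator, targets roughly 100 placement groups per OSD divided by the replica count, rounded up to the nearest power of two:

  total PGs per pool = (number of OSDs x 100) / replica count, rounded up to a power of two
  For example: (9 OSDs x 100) / 3 replicas = 300, rounded up to 512 placement groups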
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Red Hat typically recommends a baseline of 16 GB of RAM per OSD host, with an additional 2 GB of
RAM per OSD daemon.
Disk Layout
Sizing is dependent on your storage needs. The recommended Red Hat Ceph Storage node
configuration requires at least three or more disks in a layout similar to the following:
/dev/sda - The root disk. The director copies the main Overcloud image to the disk. This
should be at minimum 40 GB of available disk space.
/dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals. For
example, /dev/sdb1, /dev/sdb2, /dev/sdb3, and onward. The journal disk is usually a solid
state drive (SSD) to aid with system performance.
/dev/sdc and onward - The OSD disks. Use as many disks as necessary for your storage
requirements.
NOTE
Red Hat OpenStack Platform director uses ceph-ansible, which does not
support installing the OSD on the root disk of Ceph Storage nodes. This
means you need at least two or more disks for a supported Ceph Storage
node.
See the Deploying an Overcloud with Containerized Red Hat Ceph guide for more information about
installing an overcloud with a Ceph Storage cluster.
2.4.4. Object Storage Node Requirements
Processor
64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
Memory
Memory requirements depend on the amount of storage space. Ideally, use at minimum 1 GB of
memory per 1 TB of hard disk space. For optimal performance, it is recommended to use 2 GB per 1
TB of hard disk space, especially for small file (less than 100 GB) workloads.
Disk Space
Storage requirements depend on the capacity needed for the workload. It is recommended to use SSD drives to store the account and container data. The capacity ratio of account and container data to objects is about 1 percent. For example, for every 100 TB of hard drive capacity, provide 1 TB of SSD capacity for account and container data.
However, this depends on the type of stored data. If you store mostly small objects, provide more SSD space. For large objects (videos, backups), use less SSD space.
Disk Layout
The recommended node configuration requires a disk layout similar to the following:
/dev/sda - The root disk. The director copies the main overcloud image to the disk.
/dev/sdd and onward - The object server disks. Use as many disks as necessary for your
storage requirements.
2.5. REPOSITORY REQUIREMENTS
Both the undercloud and overcloud require access to Red Hat repositories. The following list shows each repository name, the repository label to enable, and a description of its purpose:
Red Hat Enterprise Linux 7 Server (RPMs) - rhel-7-server-rpms - Base operating system repository for x86_64 systems.
Red Hat Enterprise Linux 7 Server - Extras (RPMs) - rhel-7-server-extras-rpms - Contains Red Hat OpenStack Platform dependencies.
Red Hat Enterprise Linux 7 Server - RH Common (RPMs) - rhel-7-server-rh-common-rpms - Contains tools for deploying and configuring Red Hat OpenStack Platform.
Red Hat Satellite Tools 6.3 (for RHEL 7 Server) (RPMs) x86_64 - rhel-7-server-satellite-tools-6.3-rpms - Tools for managing hosts with Red Hat Satellite Server 6. Note that using later versions of the Satellite Tools repository might cause the undercloud installation to fail.
Red Hat Enterprise Linux High Availability (for RHEL 7 Server) (RPMs) - rhel-ha-for-rhel-7-server-rpms - High availability tools for Red Hat Enterprise Linux. Used for Controller node high availability.
Red Hat Ceph Storage OSD 3 for Red Hat Enterprise Linux 7 Server (RPMs) - rhel-7-server-rhceph-3-osd-rpms - (For Ceph Storage Nodes) Repository for the Ceph Storage Object Storage daemon. Installed on Ceph Storage nodes.
Red Hat Ceph Storage MON 3 for Red Hat Enterprise Linux 7 Server (RPMs) - rhel-7-server-rhceph-3-mon-rpms - (For Ceph Storage Nodes) Repository for the Ceph Storage Monitor daemon. Installed on Controller nodes in OpenStack environments using Ceph Storage nodes.
Red Hat Ceph Storage Tools 3 for Red Hat Enterprise Linux 7 Server (RPMs) - rhel-7-server-rhceph-3-tools-rpms - Provides tools for nodes to communicate with the Ceph Storage cluster. This repository should be enabled for all nodes when deploying an overcloud with a Ceph Storage cluster.
Enterprise Linux for Real Time for NFV (RHEL 7 Server) (RPMs) - rhel-7-server-nfv-rpms - Repository for Real Time KVM (RT-KVM) for NFV. Contains packages to enable the real time kernel. This repository should be enabled for all Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository.
Red Hat Enterprise Linux for IBM Power, little endian - rhel-7-for-power-le-rpms - Base operating system repository for ppc64le systems.
NOTE
To configure repositories for your Red Hat OpenStack Platform environment in an offline
network, see "Configuring Red Hat OpenStack Platform Director in an Offline
Environment" on the Red Hat Customer Portal.
CHAPTER 3. PLANNING YOUR OVERCLOUD
3.1. PLANNING NODE DEPLOYMENT ROLES
The director provides multiple default node types for building your overcloud:
Controller
Provides key services for controlling your environment. This includes the dashboard (horizon),
authentication (keystone), image storage (glance), networking (neutron), orchestration (heat), and
high availability services. A Red Hat OpenStack Platform environment requires three Controller
nodes for a highly available production-level environment.
NOTE
Environments with one node can only be used for testing purposes, not for
production. Environments with two nodes or more than three nodes are not
supported.
Compute
A physical server that acts as a hypervisor, and provides the processing capabilities required for
running virtual machines in the environment. A basic Red Hat OpenStack Platform environment
requires at least one Compute node.
Ceph Storage
A host that provides Red Hat Ceph Storage. Additional Ceph Storage hosts scale into a cluster. This
deployment role is optional.
Swift Storage
A host that provides external object storage for OpenStack’s swift service. This deployment role is
optional.
The following examples show different overcloud scenarios and define the node counts for each:
Small overcloud - 3 Controller, 1 Compute (4 nodes total)
Medium overcloud - 3 Controller, 3 Compute (6 nodes total)
Medium overcloud with additional Object storage - 3 Controller, 3 Compute, 3 Object Storage (9 nodes total)
Medium overcloud with Ceph Storage cluster - 3 Controller, 3 Compute, 3 Ceph Storage (9 nodes total)
In addition, consider whether to split individual services into custom roles. For more information on the
composable roles architecture, see "Composable Services and Custom Roles" in the Advanced
Overcloud Customization guide.
3.2. PLANNING NETWORKS
Red Hat OpenStack Platform maps the different services onto separate network traffic types, which are
assigned to the various subnets in your environments. These network traffic types include:
Provisioning / Control Plane - The director uses this network traffic type to deploy new nodes over PXE boot and orchestrate the installation of OpenStack Platform on the overcloud bare metal servers. This network is predefined before the installation of the undercloud. Used by: all nodes.
Internal API - The Internal API network is used for communication between the OpenStack services using API communication, RPC messages, and database communication. Used by: Controller, Compute, Cinder Storage, and Swift Storage nodes.
Storage Management - OpenStack Object Storage (swift) uses this network to synchronize data objects between participating replica nodes. The proxy service acts as the intermediary interface between user requests and the underlying storage layer. The proxy receives incoming requests and locates the necessary replica to retrieve the requested data. Services that use a Ceph back end connect over the Storage Management network, since they do not interact with Ceph directly but rather use the frontend service. Note that the RBD driver is an exception, as this traffic connects directly to Ceph. Used by: Controller, Ceph Storage, Cinder Storage, and Swift Storage nodes.
In a typical Red Hat OpenStack Platform installation, the number of network types often exceeds the
number of physical network links. In order to connect all the networks to the proper hosts, the overcloud
uses VLAN tagging to deliver more than one network per interface. Most of the networks are isolated
subnets but some require a Layer 3 gateway to provide routing for Internet access or infrastructure
network connectivity.
NOTE
It is recommended that you deploy a project network (tunneled with GRE or VXLAN)
even if you intend to use a neutron VLAN mode (with tunneling disabled) at deployment
time. This requires minor customization at deployment time and leaves the option
available to use tunnel networks as utility networks or virtualization networks in the future.
You still create Tenant networks using VLANs, but you can also create VXLAN tunnels for
special-use networks without consuming tenant VLANs. It is possible to add VXLAN
capability to a deployment with a Tenant VLAN, but it is not possible to add a Tenant
VLAN to an existing overcloud without causing disruption.
The director provides a method for mapping six of these traffic types to certain subnets or VLANs.
These traffic types include:
Internal API
Storage
Storage Management
Tenant Networks
External
Management (optional)
Any unassigned networks are automatically assigned to the same subnet as the Provisioning network.
The diagram below provides an example of a network topology where the networks are isolated on
separate VLANs. Each overcloud node uses two interfaces (nic2 and nic3) in a bond to deliver these
networks over their respective VLANs. Meanwhile, each overcloud node communicates with the
undercloud over the Provisioning network through a native VLAN using nic1.
The following examples show how the network traffic types can map to different network layouts:
Flat network with external access - Network 1 carries the Provisioning network together with the other traffic types; Network 2 - External, Floating IP (mapped after overcloud creation).
Isolated networks - Each traffic type uses its own network. For example: Network 3 - Tenant Networks, Network 4 - Storage, Network 5 - Storage Management, Network 6 - Management (optional), and Network 7 - External, Floating IP (mapped after overcloud creation).
NOTE
You can virtualize the overcloud control plane if you are using Red Hat Virtualization
(RHV). See Creating virtualized control planes for details.
3.3. PLANNING STORAGE
NOTE
Using LVM on a guest instance that uses a back end cinder-volume of any driver or back-
end type results in issues with performance, volume visibility and availability, and data
corruption. These issues can be mitigated using an LVM filter. For more information, refer
to section 2.1 Back Ends in the Storage Guide and KCS article 3213311, "Using LVM on a
cinder volume exposes the data to the compute host."
The director provides different storage options for the overcloud environment. This includes:
Images - Glance manages images for VMs. Images are immutable. OpenStack treats images
as binary blobs and downloads them accordingly. You can use glance to store images in a
Ceph Block Device.
Volumes - Cinder volumes are block devices. OpenStack uses volumes to boot VMs, or to
attach volumes to running VMs. OpenStack manages volumes using cinder services. You can
use cinder to boot a VM using a copy-on-write clone of an image.
File Systems - Manila shares are backed by file systems. OpenStack users manage shares
using manila services. You can use manila to manage shares backed by a CephFS file system
with data on the Ceph Storage Nodes.
Guest Disks - Guest disks are guest operating system disks. By default, when you boot a
virtual machine with nova, its disk appears as a file on the filesystem of the hypervisor
(usually under /var/lib/nova/instances/<uuid>/). Every virtual machine inside Ceph can be
booted without using Cinder, which lets you perform maintenance operations easily with the
live-migration process. Additionally, if your hypervisor dies it is also convenient to trigger
nova evacuate and run the virtual machine elsewhere.
IMPORTANT
For information about supported image formats, see the Image Service
chapter in the Instances and Images Guide .
See Red Hat Ceph Storage Architecture Guide for additional information.
3.4. PLANNING HIGH AVAILABILITY
IMPORTANT
Deploying a highly available overcloud without STONITH is not supported. You must
configure a STONITH device for each node that is a part of the Pacemaker cluster in a
highly available overcloud. For more information on STONITH and Pacemaker, see
Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High
Availability Clusters.
You can also configure high availability for Compute instances with the director (Instance HA). This
mechanism automates evacuation and re-spawning of instances on Compute nodes in case of node
failure. The requirements for Instance HA are the same as the general overcloud requirements, but you
must prepare your environment for the deployment by performing a few additional steps. For
information about how Instance HA works and installation instructions, see the High Availability for
Compute Instances guide.
CHAPTER 4. INSTALLING THE UNDERCLOUD
4.1. CONFIGURING AN UNDERCLOUD PROXY
Procedure
1. Edit the /etc/environment file:
# vi /etc/environment
2. Set the following parameters in the file:
http_proxy
The proxy to use for standard HTTP requests.
https_proxy
The proxy to use for HTTPS requests.
no_proxy
A comma-separated list of IP addresses and domains excluded from proxy communications. Include all IP addresses and domains relevant to the undercloud.
3. For example:
http_proxy=http://10.0.0.1:8080/
https_proxy=https://10.0.0.1:8080/
no_proxy=127.0.0.1,192.168.24.1,192.168.24.2,192.168.24.3
4. Restart your shell session. For example, log out and log back in to the undercloud.
4.4. SETTING THE UNDERCLOUD HOSTNAME
Procedure
1. Check the hostname settings of the undercloud host, for example with the hostname and hostname -f commands.
2. If either of the previous commands does not report the correct fully qualified hostname or reports an error, use hostnamectl to set a hostname:
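For example, using the manager.example.com hostname from the next step:

[stack@director ~]$ sudo hostnamectl set-hostname manager.example.com
[stack@director ~]$ sudo hostnamectl set-hostname --transient manager.example.com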
3. The director also requires an entry for the system’s hostname and base name in /etc/hosts. The
IP address in /etc/hosts must match the address that you plan to use for your undercloud public
API. For example, if the system is named manager.example.com and uses 10.0.0.1 for its IP
address, then /etc/hosts requires an entry like:
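For example, the /etc/hosts entry for that system would look like this:

10.0.0.1  manager.example.com  manager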
4.5. REGISTERING AND UPDATING YOUR UNDERCLOUD
Procedure
1. Register your system with the Content Delivery Network. Enter your Customer Portal user
name and password when prompted:
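For example (subscription-manager prompts for your Customer Portal credentials):

[stack@director ~]$ sudo subscription-manager register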
2. Find the entitlement pool ID for Red Hat OpenStack Platform director. For example:
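A command similar to the following lists the available subscriptions and their pool IDs; the --matches filter string is illustrative:

[stack@director ~]$ sudo subscription-manager list --available --all --matches="Red Hat OpenStack"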
3. Locate the Pool ID value and attach the Red Hat OpenStack Platform 13 entitlement:
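For example, where <pool_id> is the Pool ID value from the previous step:

[stack@director ~]$ sudo subscription-manager attach --pool=<pool_id>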
4. Disable all default repositories, and then enable the required Red Hat Enterprise Linux
repositories that contain packages that the director installation requires:
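A typical sequence uses the repository labels from Section 2.5; the rhel-7-server-openstack-13-rpms label for the core Red Hat OpenStack Platform 13 packages is assumed here and does not appear in that table:

[stack@director ~]$ sudo subscription-manager repos --disable=*
[stack@director ~]$ sudo subscription-manager repos \
  --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-rh-common-rpms \
  --enable=rhel-ha-for-rhel-7-server-rpms \
  --enable=rhel-7-server-openstack-13-rpms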
5. Perform an update on your system to ensure that you have the latest base system packages:
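For example, update the packages and reboot to apply any kernel changes:

[stack@director ~]$ sudo yum update -y
[stack@director ~]$ sudo reboot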
4.6. INSTALLING THE DIRECTOR PACKAGES
Procedure
1. Install the command line tools for director installation and configuration:
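The director command line tools are provided by the python-tripleoclient package:

[stack@director ~]$ sudo yum install -y python-tripleoclient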
4.7. INSTALLING CEPH-ANSIBLE
If you use Red Hat Ceph Storage, or if your deployment uses an external Ceph Storage cluster, install
the ceph-ansible package. If you do not plan to use Ceph Storage, do not install the ceph-ansible
package.
Procedure
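A typical sequence enables the Ceph Tools repository listed in Section 2.5 and installs the package:

[stack@director ~]$ sudo subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
[stack@director ~]$ sudo yum install -y ceph-ansible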
4.8. CONFIGURING THE DIRECTOR
Procedure
1. Red Hat provides a basic template to help determine the required settings for your installation.
Copy this template to the stack user’s home directory:
[stack@director ~]$ cp \
/usr/share/instack-undercloud/undercloud.conf.sample \
~/undercloud.conf
2. Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you
omit or comment out a parameter, the undercloud installation uses the default value.
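A minimal sketch of a [DEFAULT] section follows; the hostname, addresses, and interface name are illustrative values only, and the parameters are described in the next section:

[DEFAULT]
undercloud_hostname = director.example.com
local_ip = 192.168.24.1/24
local_interface = eth1
undercloud_nameservers = 10.0.0.10
undercloud_ntp_servers = 0.rhel.pool.ntp.org
subnets = ctlplane-subnet
local_subnet = ctlplane-subnet
inspection_iprange = 192.168.24.100,192.168.24.120
generate_service_certificate = true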
4.9. DIRECTOR CONFIGURATION PARAMETERS
Defaults
The following parameters are defined in the [DEFAULT] section of the undercloud.conf file:
undercloud_hostname
Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures
all system host name settings. If left unset, the undercloud uses the current host name, but the user
must configure all system host name settings appropriately.
local_ip
The IP address defined for the director’s Provisioning NIC. This is also the IP address the director
uses for its DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you
are using a different subnet for the Provisioning network, for example, if it conflicts with an existing IP
address or subnet in your environment.
undercloud_public_host
The IP address or hostname defined for director Public API endpoints over SSL/TLS. The director
configuration attaches the IP address to the director software bridge as a routed IP address, which
uses the /32 netmask.
undercloud_admin_host
The IP address or hostname defined for director Admin API endpoints over SSL/TLS. The director
configuration attaches the IP address to the director software bridge as a routed IP address, which
uses the /32 netmask.
undercloud_nameservers
A list of DNS nameservers to use for the undercloud hostname resolution.
undercloud_ntp_servers
A list of network time protocol servers to help synchronize the undercloud’s date and time.
overcloud_domain_name
The DNS domain name to use when deploying the overcloud.
NOTE
subnets
List of routed network subnets for provisioning and introspection. See Subnets for more information.
The default value only includes the ctlplane-subnet subnet.
local_subnet
The local subnet to use for PXE boot and DHCP interfaces. The local_ip address should reside in
this subnet. The default is ctlplane-subnet.
undercloud_service_certificate
The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you
obtain this certificate from a trusted certificate authority. Otherwise generate your own self-signed
certificate using the guidelines in Appendix A, SSL/TLS Certificate Configuration . These guidelines
also contain instructions on setting the SELinux context for your certificate, whether self-signed or
from an authority. This option has implications when deploying your overcloud. See Section 6.9,
“Configure overcloud nodes to trust the undercloud CA” for more information.
generate_service_certificate
Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used
for the undercloud_service_certificate parameter. The undercloud installation saves the resulting
certificate /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem. The CA defined in the
certificate_generation_ca parameter signs this certificate. This option has implications when
deploying your overcloud. See Section 6.9, “Configure overcloud nodes to trust the undercloud CA”
for more information.
certificate_generation_ca
The certmonger nickname of the CA that signs the requested certificate. Only use this option if you
have set the generate_service_certificate parameter. If you select the local CA, certmonger
extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds it
to the trust chain.
service_principal
The Kerberos principal for the service using the certificate. Only use this if your CA requires a
Kerberos principal, such as in FreeIPA.
local_interface
The chosen interface for the director’s Provisioning NIC. This is also the device the director uses for
its DHCP and PXE boot services. Change this value to your chosen device. To see which device is
connected, use the ip addr command. For example, this is the result of an ip addr command:
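Output similar to the following illustrates the two interfaces; the addresses shown are placeholders:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic eth0
       valid_lft 3462sec preferred_lft 3462sec
    inet6 fe80::5054:ff:fe75:2409/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff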
In this example, the External NIC uses eth0 and the Provisioning NIC uses eth1, which is currently not
configured. In this case, set the local_interface to eth1. The configuration script attaches this
interface to a custom bridge defined with the inspection_interface parameter.
local_mtu
MTU to use for the local_interface. Do not exceed 1500 for the undercloud.
hieradata_override
Path to hieradata override file that configures Puppet hieradata on the director, providing custom
configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation
copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. See
Section 4.10, “Configuring hieradata on the undercloud” for details on using this feature.
net_config_override
Path to network configuration override template. If set, the undercloud uses a JSON format template
to configure the networking with os-net-config. This ignores the network parameters set in
undercloud.conf. Use this parameter when you want to configure bonding or add an option to the
interface. See /usr/share/instack-undercloud/templates/net-config.json.template for an example.
inspection_interface
The bridge the director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane.
inspection_iprange
A range of IP addresses that the director's introspection service uses during the PXE boot and
provisioning process. Use comma-separated values to define the start and end of this range. For
example, 192.168.24.100,192.168.24.120. Make sure this range contains enough IP addresses for
your nodes and does not conflict with the range for dhcp_start and dhcp_end.
inspection_extras
Defines whether to enable extra hardware collection during the inspection process. Requires
python-hardware or python-hardware-detect package on the introspection image.
inspection_runbench
Runs a set of benchmarks during node introspection. Set to true to enable. This option is necessary if
you intend to perform benchmark analysis when inspecting the hardware of registered nodes. See
Section 6.2, “Inspecting the Hardware of Nodes” for more details.
inspection_enable_uefi
Defines whether to support introspection of nodes with UEFI-only firmware. For more information,
see Appendix D, Alternative Boot Modes .
enable_node_discovery
Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use
the fake_pxe driver as a default but you can set discovery_default_driver to override. You can also
use introspection rules to specify driver information for newly enrolled nodes.
discovery_default_driver
Sets the default driver for automatically enrolled nodes. Requires enable_node_discovery to be enabled, and you must include the driver in the enabled_drivers list. See Appendix B, Power Management
Drivers for a list of supported drivers.
undercloud_debug
Sets the log level of undercloud services to DEBUG. Set this value to true to enable.
undercloud_update_packages
Defines whether to update packages during the undercloud installation.
enable_tempest
Defines whether to install the validation tools. The default is set to false, but you can enable it by setting the value to true.
enable_telemetry
Defines whether to install OpenStack Telemetry services (ceilometer, aodh, panko, gnocchi) in the
undercloud. In Red Hat OpenStack Platform, the metrics backend for telemetry is provided by
gnocchi. Setting the enable_telemetry parameter to true installs and sets up the telemetry services automatically. The default value is false, which disables telemetry on the undercloud. This parameter is required if you use other products that consume metrics data, such as Red Hat CloudForms.
enable_ui
Defines whether to install the director's web UI. This allows you to perform overcloud planning and
deployments through a graphical web interface. For more information, see Chapter 7, Configuring a
Basic Overcloud with the Web UI. Note that the UI is only available with SSL/TLS enabled using either
the undercloud_service_certificate or generate_service_certificate.
enable_validations
Defines whether to install the requirements to run validations.
enable_novajoin
Defines whether to install the novajoin metadata service in the Undercloud.
ipa_otp
Defines the one time password to register the Undercloud node to an IPA server. This is required
when enable_novajoin is enabled.
ipxe_enabled
Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE. Set to false
to set to standard PXE. For more information, see Appendix D, Alternative Boot Modes .
scheduler_max_attempts
Maximum number of times the scheduler attempts to deploy an instance. Keep this greater than or equal to the number of bare metal nodes that you expect to deploy at once to work around potential race conditions when scheduling.
clean_nodes
Defines whether to wipe the hard drive between deployments and after introspection.
enabled_hardware_types
A list of hardware types to enable for the undercloud. See Appendix B, Power Management Drivers
for a list of supported drivers.
additional_architectures
A list of (kernel) architectures that an overcloud will support. Currently this is limited to ppc64le.
NOTE
When you enable support for ppc64le, you must also set ipxe_enabled to False.
Passwords
The following parameters are defined in the [auth] section of the undercloud.conf file:
IMPORTANT
The configuration file examples for these parameters use <None> as a placeholder
value. Setting these values to <None> leads to a deployment error.
Subnets
Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a
subnet called ctlplane-subnet:
[ctlplane-subnet]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
gateway = 192.168.24.1
masquerade = true
You can specify as many provisioning networks as necessary to suit your environment.
gateway
The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the
External network. Leave this as the default 192.168.24.1 unless you are either using a different IP
address for the director or want to directly use an external gateway.
NOTE
The director’s configuration script also automatically enables IP forwarding using the
relevant sysctl kernel parameter.
cidr
The network that the director uses to manage overcloud instances. This is the Provisioning network,
which the undercloud’s neutron service manages. Leave this as the default 192.168.24.0/24 unless
you are using a different subnet for the Provisioning network.
masquerade
Defines whether to masquerade the network defined in the cidr for external access. This provides
the Provisioning network with a degree of network address translation (NAT) so that it has external
access through the director.
dhcp_start; dhcp_end
The start and end of the DHCP allocation range for overcloud nodes. Ensure this range contains
enough IP addresses to allocate your nodes.
Modify the values for these parameters to suit your configuration. When complete, save the file.
Procedure
1. Create a hieradata override file on the undercloud, for example /home/stack/hieradata.yaml.
2. Add the customized hieradata to the file. For example, add the following to modify the
Compute (nova) service parameter force_raw_images from the default value of "True" to
"False":
nova::compute::force_raw_images: False
If there is no Puppet implementation for the parameter you want to set, then use the following
method to configure the parameter:
nova::config::nova_config:
DEFAULT/<parameter_name>:
value: <parameter_value>
For example:
nova::config::nova_config:
DEFAULT/network_allocate_retries:
value: 20
ironic/serial_console_state_timeout:
value: 15
3. Set the hieradata_override parameter to the path of the hieradata file in your
undercloud.conf:
hieradata_override = /home/stack/hieradata.yaml
Procedure
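The installation is started with the following command (illustrative example):
$ openstack undercloud install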
This launches the director’s configuration script. The director installs additional packages and
configures its services to suit the settings in the undercloud.conf. This script takes several
minutes to complete.
stackrc - A set of initialization variables to help you access the director’s command line
tools.
2. The script also starts all OpenStack Platform services automatically. Check the enabled services
using the following command:
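For example, one way to list the services is with systemctl (an illustrative sketch; the unit name patterns are assumptions and might differ on your system):
$ sudo systemctl list-units "openstack-*" "neutron-*"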
3. The script adds the stack user to the docker group to give the stack user access to container management commands. Refresh the stack user's permissions with the following command:
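For example, you can start a new login shell for the stack user so that the new group membership takes effect (illustrative):
$ exec su -l stack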
The command prompts you to log in again. Enter the stack user’s password.
4. To initialize the stack user to use the command line tools, run the following command:
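For example, source the stackrc file that the installation created in the stack user's home directory:
$ source ~/stackrc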
The prompt now indicates that OpenStack commands authenticate and execute against the undercloud:
(undercloud) $
The director installation is complete. You can now use the director’s command line tools.
An introspection kernel and ramdisk - Used for bare metal system introspection over PXE boot.
A deployment kernel and ramdisk - Used for system provisioning and deployment.
An overcloud kernel, ramdisk, and full image - A base overcloud system that is written to the
node’s hard disk.
The following procedure shows how to obtain and install these images.
Procedure
1. Source the stackrc file to enable the director’s command line tools:
3. Extract the images archives to the images directory on the stack user’s home
(/home/stack/images):
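The following is an illustrative sketch of the extraction and upload steps (the archive names are assumptions based on the rhosp-director-images file paths shown later in this chapter; adjust them to the versions installed on your system):
(undercloud) $ cd ~/images
(undercloud) $ for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar ; do tar -xvf $i ; done
(undercloud) $ openstack overcloud image upload --image-path /home/stack/images/
(undercloud) $ openstack image list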
bm-deploy-kernel
bm-deploy-ramdisk
overcloud-full
overcloud-full-initrd
overcloud-full-vmlinuz
The script also installs the introspection images on the director’s PXE server.
This list will not show the introspection PXE images. The director copies these files to /httpboot.
Procedure
1. Source the stackrc file to enable the director’s command line tools:
3. Extract the archives to an architecture specific directory under the images directory on the
stack user’s home (/home/stack/images):
(undercloud) $ cd ~/images
(undercloud) $ for arch in x86_64 ppc64le ; do mkdir -p $arch ; done
(undercloud) $ for arch in x86_64 ppc64le ; do for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0-${arch}.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0-${arch}.tar ; do tar -C $arch -xf $i ; done ; done
bm-deploy-kernel
bm-deploy-ramdisk
overcloud-full
overcloud-full-initrd
overcloud-full-vmlinuz
ppc64le-bm-deploy-kernel
ppc64le-bm-deploy-ramdisk
ppc64le-overcloud-full
The script also installs the introspection images on the director’s PXE server.
This list will not show the introspection PXE images. The director copies these files to /tftpboot.
/tftpboot/ppc64le/:
total 457204
-rwxr-xr-x. 1 root root 19858896 Aug 8 19:34 agent.kernel
-rw-r--r--. 1 root root 448311235 Aug 8 19:34 agent.ramdisk
-rw-r--r--. 1 ironic-inspector ironic-inspector 336 Aug 8 02:06 default
NOTE
The default overcloud-full.qcow2 image is a flat partition image. However, you can also
import and use whole disk images. See Appendix C, Whole Disk Images for more
information.
Procedure
1. Source the stackrc file to enable the director’s command line tools:
IMPORTANT
If you aim to isolate service traffic onto separate networks, the overcloud nodes use the
DnsServers parameter in your network environment files.
CHAPTER 5. CONFIGURING A CONTAINER IMAGE SOURCE
This guide provides several use cases to configure your overcloud to use a registry. See
Section 5.1, “Registry Methods” for an explanation of these methods.
It is recommended to familiarize yourself with how to use the image preparation command. See
Section 5.2, “Container image preparation command usage” for more information.
To get started with the most common method for preparing a container image source, see
Section 5.5, “Using the undercloud as a local registry” .
Remote Registry
The overcloud pulls container images directly from registry.access.redhat.com. This method is the
easiest for generating the initial configuration. However, each overcloud node pulls each image
directly from the Red Hat Container Catalog, which can cause network congestion and slower
deployment. In addition, all overcloud nodes require internet access to the Red Hat Container
Catalog.
Local Registry
The undercloud uses the docker-distribution service to act as a registry. This allows the director to
synchronize the images from registry.access.redhat.com and push them to the docker-
distribution registry. When creating the overcloud, the overcloud pulls the container images from
the undercloud’s docker-distribution registry. This method allows you to store a registry internally,
which can speed up the deployment and decrease network congestion. However, the undercloud
only acts as a basic registry and provides limited life cycle management for container images.
NOTE
The docker-distribution service acts separately from docker. docker is used to pull and
push images to the docker-distribution registry and does not serve the images to the
overcloud. The overcloud pulls the images from the docker-distribution registry.
Satellite Server
Manage the complete application life cycle of your container images and publish them through a Red
Hat Satellite 6 server. The overcloud pulls the images from the Satellite server. This method provides
an enterprise grade solution to store, manage, and deploy Red Hat OpenStack Platform containers.
Select a method from the list and continue configuring your registry details.
NOTE
When building for a multi-architecture cloud, the local registry option is not supported.
This section provides an overview on how to use the openstack overcloud container image prepare
command, including conceptual information on the command’s various options.
--output-env-file
Defines the resulting environment file name.
parameter_defaults:
DockerAodhApiImage: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest
DockerAodhConfigImage: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest
...
The openstack overcloud container image prepare command uses the following options for this
function:
--output-images-file
Defines the resulting file name for the import list.
container_images:
- imagename: registry.access.redhat.com/rhosp13/openstack-aodh-api:latest
- imagename: registry.access.redhat.com/rhosp13/openstack-aodh-evaluator:latest
...
--namespace
Defines the namespace for the container images. This is usually a hostname or IP address with a
directory.
--prefix
Defines the prefix to add before the image names.
As a result, the director generates the image names using the following format:
[NAMESPACE]/[PREFIX][IMAGE NAME]
--tag-from-label
Use the value of the specified container image labels to discover the versioned tag for every image.
--tag
Sets the specific tag for all images. All OpenStack Platform container images use the same tag to
provide version synchronicity. When using in combination with --tag-from-label, the versioned tag is
discovered starting from this tag.
-e
Include environment files to enable additional container images.
The following table provides a sample list of additional services that use container images and their
respective environment file locations within the /usr/share/openstack-tripleo-heat-templates
directory.
Collectd environments/services-docker/collectd.yaml
Congress environments/services-docker/congress.yaml
Fluentd environments/services-docker/fluentd.yaml
Sensu environments/services-docker/sensu-client.yaml
Ceph Storage
If deploying a Red Hat Ceph Storage cluster with your overcloud, you need to include the
/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
environment file. This file enables the composable containerized services in your overcloud and the
director needs to know these services are enabled to prepare their images.
In addition to this environment file, you also need to define the Ceph Storage container location, which is
different from the OpenStack Platform services. Use the --set option to set the following parameters
specific to Ceph Storage:
--set ceph_namespace
Defines the namespace for the Ceph Storage container image. This functions similar to the --
namespace option.
--set ceph_image
Defines the name of the Ceph Storage container image. Usually, this is rhceph-3-rhel7.
--set ceph_tag
Defines the tag to use for the Ceph Storage container image. This functions similar to the --tag
option. When --tag-from-label is specified, the versioned tag is discovered starting from this tag.
The following snippet is an example that includes Ceph Storage in your container image files:
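A sketch of such a snippet follows (illustrative only; the Ceph registry namespace and tag shown are assumptions that you should verify for your environment):
(undercloud) $ openstack overcloud container image prepare \
  ...
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  --set ceph_namespace=registry.access.redhat.com/rhceph \
  --set ceph_image=rhceph-3-rhel7 \
  --set ceph_tag=latest \
  ...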
-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml \
...
environments/manila-isilon-config.yaml
environments/manila-netapp-config.yaml
environments/manila-vmax-config.yaml
environments/manila-cephfsnative-config.yaml
environments/manila-cephfsganesha-config.yaml
environments/manila-unity-config.yaml
environments/manila-vnx-config.yaml
For more information about customizing and deploying environment files, see the following resources:
Deploying the updated environment in CephFS via NFS Back End Guide for the Shared File
System Service
Deploy the Shared File System Service with NetApp Back Ends in NetApp Back End Guide for
the Shared File System Service
Deploy the Shared File System Service with a CephFS Back End in CephFS Back End Guide for
the Shared File System Service
Procedure
Use the -e option to include any environment files for optional services.
If using Ceph Storage, include the additional parameters to define the Ceph Storage
container image location: --set ceph_namespace, --set ceph_image, --set ceph_tag.
The director pushes each image to the docker-distribution registry running on the undercloud.
During the overcloud creation, the nodes pull the relevant images from the undercloud’s
docker-distribution registry.
This keeps network traffic for container images within your internal network, which avoids congesting your external network connection and can speed up the deployment process.
Procedure
1. Find the address of the local undercloud registry. The address will use the following pattern:
<REGISTRY IP ADDRESS>:8787
Use the IP address of your undercloud, which you previously set with the local_ip parameter in
your undercloud.conf file. For the commands below, the address is assumed to be
192.168.24.1:8787.
2. Create a template to upload the images to the local registry, and the environment file to refer to those images:
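For example, a command similar to the following generates both files (an illustrative sketch; the namespace, tag options, and output paths are assumptions to adapt to your environment):
(undercloud) $ openstack overcloud container image prepare \
  --namespace=registry.access.redhat.com/rhosp13 \
  --push-destination=192.168.24.1:8787 \
  --prefix=openstack- \
  --tag-from-label {version}-{release} \
  --output-env-file=/home/stack/templates/overcloud_images.yaml \
  --output-images-file /home/stack/local_registry_images.yaml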
Use the -e option to include any environment files for optional services.
If using Ceph Storage, include the additional parameters to define the Ceph Storage
container image location: --set ceph_namespace, --set ceph_image, --set ceph_tag.
Pulling the required images might take some time depending on the speed of your network and
your undercloud disk.
5. The images are now stored on the undercloud's docker-distribution registry. To view the list of images on the undercloud's docker-distribution registry, use the following command:
To view a list of tags for a specific image, use the skopeo command:
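For example (illustrative; substitute the repository that you want to inspect):
(undercloud) $ skopeo inspect --tls-verify=false docker://192.168.24.1:8787/rhosp13/openstack-keystone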
The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an
example organization called ACME. Substitute this organization for your own Satellite 6 organization.
Procedure
$ source ~/stackrc
(undercloud) $ openstack overcloud container image prepare \
--namespace=rhosp13 \
--prefix=openstack- \
--output-images-file /home/stack/satellite_images \
Use the -e option to include any environment files for optional services.
If using Ceph Storage, include the additional parameters to define the Ceph Storage
container image location: --set ceph_namespace, --set ceph_image, --set ceph_tag.
NOTE
2. This creates a file called satellite_images with your container image information. You will use
this file to synchronize container images to your Satellite 6 server.
3. Remove the YAML-specific information from the satellite_images file and convert it into a flat file containing only the list of images. The following command accomplishes this:
(undercloud) $ awk -F ':' '{if (NR!=1) {gsub("[[:space:]]", ""); print $2}}' ~/satellite_images >
~/satellite_images_names
This provides a list of images that you pull into the Satellite server.
4. Copy the satellite_images_names file to a system that contains the Satellite 6 hammer tool.
Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool to the
undercloud.
5. Run the following hammer command to create a new product (OSP13 Containers) in your Satellite organization:
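For example (illustrative; ACME is the example organization used in this procedure):
$ hammer product create --organization "ACME" --name "OSP13 Containers"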
NOTE
Depending on your configuration, hammer might ask for your Satellite server
username and password. You can configure hammer to log in automatically using
a configuration file. See the "Authentication" section in the Hammer CLI Guide .
9. If your Satellite 6 server uses content views, create a new content view version to incorporate
the images.
11. Return to the undercloud and generate an environment file for the images on your Satellite
server. The following is an example command for generating the environment file:
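A sketch of such a command follows (illustrative; the Satellite hostname, registry port, and prefix follow the conventions described in the notes below and are assumptions for your environment):
(undercloud) $ openstack overcloud container image prepare \
  --namespace=satellite6.example.com:5000 \
  --prefix=acme-osp13_containers- \
  --output-env-file=/home/stack/templates/overcloud_images.yaml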
NOTE
--namespace - The URL and port of the registry on the Satellite server. The default
registry port on Red Hat Satellite is 5000. For example, --
namespace=satellite6.example.com:5000.
If you do not use content views, the structure is [org]-[product]-. For example: acme-
osp13_containers-.
--set ceph_image=acme-osp13_containers-rhceph-3-rhel7
This ensures the overcloud uses the Ceph container image using the Satellite naming
convention.
12. This creates an overcloud_images.yaml environment file, which contains the image locations
on the Satellite server. You include this file with your deployment.
CHAPTER 6. CONFIGURING A BASIC OVERCLOUD WITH THE CLI TOOLS
For the examples in this chapter, all nodes are bare metal systems using IPMI for power management.
For more supported power management types and their options, see Appendix B, Power Management
Drivers.
Workflow
1. Create a node definition template and register blank nodes in the director.
Requirements
A set of bare metal machines for your nodes. The number of nodes required depends on the type
of overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for
information on overcloud roles). These machines also must comply with the requirements set
for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These
nodes do not require an operating system. The director copies a Red Hat Enterprise Linux 7
image to each node.
One network connection for the Provisioning network, which is configured as a native VLAN. All
nodes must connect to this network and comply with the requirements set in Section 2.3,
“Networking Requirements”. The examples in this chapter use 192.168.24.0/24 as the
Provisioning subnet with the following IP address assignments:
All other network types use the Provisioning network for OpenStack services. However, you can
create additional networks for other network traffic types.
A source for container images. See Chapter 5, Configuring a container image source for
instructions on how to generate an environment file containing your container image source.
The director requires a node definition template that describes the hardware and power management details of your nodes. Create this template in JSON format, for example:
{
"nodes":[
{
"mac":[
"bb:bb:bb:bb:bb:bb"
],
"name":"node01",
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.205"
},
{
"mac":[
"cc:cc:cc:cc:cc:cc"
],
"name":"node02",
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.206"
}
]
}
name
The logical name for the node.
pm_type
The power management driver to use. This example uses the IPMI driver (ipmi), which is the
preferred driver for power management.
NOTE
IPMI is the preferred supported power management driver. For more supported power
management types and their options, see Appendix B, Power Management Drivers . If
these power management drivers do not work as expected, use IPMI for your power
management.
pm_user; pm_password
The IPMI username and password. These attributes are optional for IPMI and Redfish, and are
mandatory for iLO and iDRAC.
pm_addr
The IP address of the IPMI device.
pm_port
(Optional) The port to access the specific IPMI device.
mac
(Optional) A list of MAC addresses for the network interfaces on the node. Use only the MAC
address for the Provisioning NIC of each system.
cpu
(Optional) The number of CPUs on the node.
memory
(Optional) The amount of memory in MB.
disk
(Optional) The size of the hard disk in GB.
arch
(Optional) The system architecture.
IMPORTANT
When building a multi-architecture cloud, the arch key is mandatory to distinguish nodes
using x86_64 and ppc64le architectures.
After creating the template, run the following commands to verify the formatting and syntax:
$ source ~/stackrc
(undercloud) $ openstack overcloud node import --validate-only ~/instackenv.json
Save the file to the stack user’s home directory (/home/stack/instackenv.json), then run the following
command to import the template to the director:
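For example (illustrative; this is the same command as the validation check above, without the --validate-only option):
(undercloud) $ openstack overcloud node import ~/instackenv.json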
This imports the template and registers each node from the template into the director.
After the node registration and configuration completes, view a list of these nodes in the CLI:
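For example (illustrative):
(undercloud) $ openstack baremetal node list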
NOTE
You can also create policy files to automatically tag nodes into profiles immediately after
introspection. For more information on creating policy files and including them in the
introspection process, see Appendix E, Automatic Profile Tagging. Alternatively, you can
manually tag nodes into profiles as per the instructions in Section 6.5, “Tagging Nodes
into Profiles”.
Run the following command to inspect the hardware attributes of each node:
The --all-manageable option introspects only nodes in a managed state. In this example, it is all
of them.
The --provide option resets all nodes to an available state after introspection.
Monitor the progress of the introspection using the following command in a separate terminal window:
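One way to follow the introspection progress is with journalctl (an illustrative sketch; the unit names shown are assumptions and might differ on your undercloud):
(undercloud) $ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f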
IMPORTANT
Make sure this process runs to completion. This process usually takes 15 minutes for bare
metal nodes.
To view introspection information about the node, run the following command:
Replace <UUID> with the UUID of the node that you want to retrieve introspection information for.
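For example (illustrative):
(undercloud) $ openstack baremetal introspection data save <UUID>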
(undercloud) $ for node in $(openstack baremetal node list --fields uuid -f value) ; do openstack
baremetal node manage $node ; done
(undercloud) $ openstack overcloud node introspect --all-manageable --provide
For example:
|
| switch_port_description | ge-0/0/2.0
|
| switch_port_id | 507
|
| switch_port_link_aggregation_enabled | False
|
| switch_port_link_aggregation_id |0
|
| switch_port_link_aggregation_support | True
|
| switch_port_management_vlan_id | None
|
| switch_port_mau_type | Unknown
|
| switch_port_mtu | 1514
|
| switch_port_physical_capabilities | [u'1000BASE-T fdx', u'100BASE-TX fdx', u'100BASE-TX hdx',
u'10BASE-T fdx', u'10BASE-T hdx', u'Asym and Sym PAUSE fdx'] |
| switch_port_protocol_vlan_enabled | None
|
| switch_port_protocol_vlan_ids | None
|
| switch_port_protocol_vlan_support | None
|
| switch_port_untagged_vlan_id | 101
|
| switch_port_vlan_ids | [101]
|
| switch_port_vlans | [{u'name': u'RHOS13-PXE', u'id': 101}]
|
| switch_protocol_identities | None
|
| switch_system_name | rhos-compute-node-sw1
|
+--------------------------------------+----------------------------------------------------------------------------------
--------------------------------------+
For example, the numa_topology collector is part of these hardware inspection extras and includes the
following information for each NUMA node:
Use the openstack baremetal introspection data save <UUID> | jq .numa_topology command to retrieve this information, replacing <UUID> with the UUID of the bare-metal node.
The following example shows the retrieved NUMA information for a bare-metal node:
{
"cpus": [
{
"cpu": 1,
"thread_siblings": [
1,
17
],
"numa_node": 0
},
{
"cpu": 2,
"thread_siblings": [
10,
26
],
"numa_node": 1
},
{
"cpu": 0,
"thread_siblings": [
0,
16
],
"numa_node": 0
},
{
"cpu": 5,
"thread_siblings": [
13,
29
],
"numa_node": 1
},
{
"cpu": 7,
"thread_siblings": [
15,
31
],
"numa_node": 1
},
{
"cpu": 7,
"thread_siblings": [
7,
23
],
"numa_node": 0
},
{
"cpu": 1,
"thread_siblings": [
9,
25
],
"numa_node": 1
},
{
"cpu": 6,
"thread_siblings": [
6,
22
],
"numa_node": 0
},
{
"cpu": 3,
"thread_siblings": [
11,
27
],
"numa_node": 1
},
{
"cpu": 5,
"thread_siblings": [
5,
21
],
"numa_node": 0
},
{
"cpu": 4,
"thread_siblings": [
12,
28
],
"numa_node": 1
},
{
"cpu": 4,
"thread_siblings": [
4,
20
],
"numa_node": 0
},
{
"cpu": 0,
"thread_siblings": [
8,
24
],
"numa_node": 1
},
{
"cpu": 6,
"thread_siblings": [
14,
30
],
"numa_node": 1
},
{
"cpu": 3,
"thread_siblings": [
3,
19
],
"numa_node": 0
},
{
"cpu": 2,
"thread_siblings": [
2,
18
],
"numa_node": 0
}
],
"ram": [
{
"size_kb": 66980172,
"numa_node": 0
},
{
"size_kb": 67108864,
"numa_node": 1
}
],
"nics": [
{
"name": "ens3f1",
"numa_node": 1
},
{
"name": "ens3f0",
"numa_node": 1
},
{
"name": "ens2f0",
"numa_node": 0
},
{
"name": "ens2f1",
"numa_node": 0
},
{
"name": "ens1f1",
"numa_node": 0
},
{
"name": "ens1f0",
"numa_node": 0
},
{
"name": "eno4",
"numa_node": 0
},
{
"name": "eno1",
"numa_node": 0
},
{
"name": "eno3",
"numa_node": 0
},
{
"name": "eno2",
"numa_node": 0
}
]
}
Requirements
All overcloud nodes must have their BMCs configured to be accessible to the director through IPMI.
All overcloud nodes must be configured to PXE boot from the NIC connected to the undercloud
control plane network.
Enable Auto-discovery
enable_node_discovery = True
discovery_default_driver = ipmi
enable_node_discovery - When enabled, any node that boots the introspection ramdisk
using PXE will be enrolled in ironic.
discovery_default_driver - Sets the driver to use for discovered nodes. For example, ipmi.
a. Add your IPMI credentials to a file named ipmi-credentials.json. You will need to replace
the username and password values in this example to suit your environment:
[
{
"description": "Set default IPMI credentials",
"conditions": [
67
Red Hat OpenStack Platform 13 Director Installation and Usage
Test Auto-discovery
2. Run openstack baremetal node list. You should see the new nodes listed in an enrolled state:
$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal
node set $NODE --resource-class baremetal ; done
$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal
node manage $NODE ; done
$ openstack overcloud node configure --all-manageable
$ for NODE in `openstack baremetal node list -c UUID -f value` ; do openstack baremetal
node provide $NODE ; done
1. Create a file named dell-drac-rules.json, with the following contents. You will need to replace
the username and password values in this example to suit your environment:
[
{
"description": "Set default IPMI credentials",
"conditions": [
{"op": "eq", "field": "data://auto_discovered", "value": true},
{"op": "ne", "field": "data://inventory.system_vendor.manufacturer",
"value": "Dell Inc."}
],
"actions": [
{"action": "set-attribute", "path": "driver_info/ipmi_username",
"value": "SampleUsername"},
{"action": "set-attribute", "path": "driver_info/ipmi_password",
"value": "RedactedSecurePassword"},
{"action": "set-attribute", "path": "driver_info/ipmi_address",
"value": "{data[inventory][bmc_address]}"}
]
},
{
"description": "Set the vendor driver for Dell hardware",
"conditions": [
{"op": "eq", "field": "data://auto_discovered", "value": true},
{"op": "eq", "field": "data://inventory.system_vendor.manufacturer",
"value": "Dell Inc."}
],
"actions": [
{"action": "set-attribute", "path": "driver", "value": "idrac"},
{"action": "set-attribute", "path": "driver_info/drac_username",
"value": "SampleUsername"},
{"action": "set-attribute", "path": "driver_info/drac_password",
"value": "RedactedSecurePassword"},
{"action": "set-attribute", "path": "driver_info/drac_address",
"value": "{data[inventory][bmc_address]}"}
]
}
]
Default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created
during undercloud installation and are usable without modification in most environments.
NOTE
For a large number of nodes, use automatic profile tagging. See Appendix E, Automatic
Profile Tagging for more details.
To tag a node into a specific profile, add a profile option to the properties/capabilities parameter for
each node. For example, to tag your nodes to use Controller and Compute profiles respectively, use the
following commands:
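For example (illustrative; the node identifiers are placeholders that you replace with the UUIDs or names of your own nodes):
(undercloud) $ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' <compute_node_UUID>
(undercloud) $ openstack baremetal node set --property capabilities='profile:control,boot_option:local' <controller_node_UUID>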
The addition of the profile:compute and profile:control options tags the two nodes into their respective profiles.
These commands also set the boot_option:local parameter, which defines how each node boots.
Depending on your hardware, you might also need to set the boot_mode parameter to uefi so that nodes boot using UEFI instead of the default BIOS mode. For more information, see Section D.2, "UEFI Boot Mode".
After completing node tagging, check the assigned profiles or possible profiles:
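For example (illustrative):
(undercloud) $ openstack overcloud profiles list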
(undercloud) $ openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 networker
(undercloud) $ openstack flavor set --property "cpu_arch"="x86_64" --property
"capabilities:boot_option"="local" --property "capabilities:profile"="networker" networker
There are several properties that you can define to help the director identify the root disk:
wwn_with_extension (String): Unique storage identifier with the vendor extension appended.
rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD).
by_path (String): The unique PCI path of the device. Use this property if you do not want to use
the UUID of the device.
IMPORTANT
Use the name property only for devices with persistent names. Do not use name to set
the root disk for any other device because this value can change when the node boots.
Complete the following steps to specify the root device using its serial number.
Procedure
1. Check the disk information from the hardware introspection of each node. Run the following
command to display the disk information of a node:
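For example, you can filter the saved introspection data with jq (illustrative; the JSON path is an assumption based on the introspection data layout):
(undercloud) $ openstack baremetal introspection data save <node_UUID> | jq ".inventory.disks"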
For example, the data for one node might show three disks:
[
{
"size": 299439751168,
"rotational": true,
"vendor": "DELL",
"name": "/dev/sda",
"wwn_vendor_extension": "0x1ea4dcc412a9632b",
"wwn_with_extension": "0x61866da04f3807001ea4dcc412a9632b",
"model": "PERC H330 Mini",
"wwn": "0x61866da04f380700",
"serial": "61866da04f3807001ea4dcc412a9632b"
},
{
"size": 299439751168,
"rotational": true,
"vendor": "DELL",
"name": "/dev/sdb",
"wwn_vendor_extension": "0x1ea4e13c12e36ad6",
"wwn_with_extension": "0x61866da04f380d001ea4e13c12e36ad6",
"model": "PERC H330 Mini",
"wwn": "0x61866da04f380d00",
"serial": "61866da04f380d001ea4e13c12e36ad6"
},
{
"size": 299439751168,
"rotational": true,
"vendor": "DELL",
"name": "/dev/sdc",
"wwn_vendor_extension": "0x1ea4e31e121cfb45",
"wwn_with_extension": "0x61866da04f37fc001ea4e31e121cfb45",
"model": "PERC H330 Mini",
"wwn": "0x61866da04f37fc00",
"serial": "61866da04f37fc001ea4e31e121cfb45"
}
]
2. Change the root_device parameter for the node definition. The following example shows how to set the root device to disk 2, which has 61866da04f380d001ea4e13c12e36ad6 as the serial number:
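For example (illustrative; replace the node placeholder with the UUID or name of your node):
(undercloud) $ openstack baremetal node set --property root_device='{"serial": "61866da04f380d001ea4e13c12e36ad6"}' <node_UUID>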
NOTE
Ensure that you configure the BIOS of each node to include booting from the
root disk that you choose. Configure the boot order to boot from the network
first, then to boot from the root disk.
The director identifies the specific disk to use as the root disk. When you run the openstack overcloud
deploy command, the director provisions and writes the Overcloud image to the root disk.
Procedure
1. To configure director to use the overcloud-minimal image, create an environment file that
contains the following image definition:
parameter_defaults:
<roleName>Image: overcloud-minimal
2. Replace <roleName> with the name of the role and append Image to the name of the role. The
following example shows an overcloud-minimal image for Ceph storage nodes:
parameter_defaults:
CephStorageImage: overcloud-minimal
NOTE
The overcloud-minimal image supports only standard Linux bridges and not OVS
because OVS is an OpenStack service that requires an OpenStack subscription
entitlement.
You can override the default configuration by specifying different node counts and flavors. For a small scale production environment, consider having at least 3 Controller nodes and 3 Compute nodes, and assign specific flavors to make sure that the nodes are created with the appropriate resource specifications.
This procedure shows how to create an environment file named node-info.yaml that stores the node
counts and flavor assignments.
2. Edit the file to include the node counts and flavors that you need. This example deploys 3 Controller nodes, 3 Compute nodes, and 3 Ceph Storage nodes.
parameter_defaults:
OvercloudControllerFlavor: control
OvercloudComputeFlavor: compute
OvercloudCephStorageFlavor: ceph-storage
ControllerCount: 3
ComputeCount: 3
CephStorageCount: 3
This file is later used in Section 6.12, “Including Environment Files in Overcloud Creation” .
NOTE
For this approach to work, your overcloud nodes need a network route to the
undercloud’s public endpoint. It is likely that deployments that rely on spine-leaf
networking will need to apply this configuration.
User-provided certificates - This definition applies when you have provided your own certificate.
This could be from your own CA, or it might be self-signed. This is passed using the
undercloud_service_certificate option. In this case, you will need to either trust the self-signed
certificate, or the CA (depending on your deployment).
Auto-generated certificates - This definition applies when you use certmonger to generate the
certificate using its own local CA. This is enabled using the generate_service_certificate
option. In this case, there will be a CA certificate (/etc/pki/ca-trust/source/anchors/cm-local-
ca.pem), and there will be a server certificate used by the undercloud’s HAProxy instance. To
present this certificate to OpenStack, you will need to add the CA certificate to the inject-trust-
anchor-hiera.yaml file.
See Section 4.9, “Director configuration parameters” for descriptions and usage of the
undercloud_service_certificate and generate_service_certificate options.
1. Open the certificate file and copy only the certificate portion. Do not include the key:
$ vi /home/stack/ca.crt.pem
The certificate portion you need will look similar to this shortened example:
-----BEGIN CERTIFICATE-----
MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECg
wH
UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
-----END CERTIFICATE-----
parameter_defaults:
CAMap:
overcloud-ca:
content: |
-----BEGIN CERTIFICATE-----
MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECg
wH
UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
-----END CERTIFICATE-----
undercloud-ca:
content: |
-----BEGIN CERTIFICATE-----
MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
BAYTAlVTMQswCQYDVQQIDAJOQzEQMA4GA1UEBwwHUmFsZWlnaDEQMA4GA1UECg
wH
UmVkIEhhdDELMAkGA1UECwwCUUUxFDASBgNVBAMMCzE5Mi4xNjguMC4yMB4XDTE3
-----END CERTIFICATE-----
NOTE
The certificate string must follow the PEM format and use the correct YAML
indentation within the content parameter.
The CA certificate is copied to each overcloud node during the overcloud deployment, causing it to
trust the encryption presented by the undercloud’s SSL endpoints. For more information on including
environment files, see Section 6.12, “Including Environment Files in Overcloud Creation” .
The number of nodes for each role and their flavors. It is vital to include this information for overcloud creation.
The location of the container images for containerized OpenStack services. This is the file
created from one of the options in Chapter 5, Configuring a container image source .
Any network isolation files, starting with the initialization file (environments/network-
isolation.yaml) from the heat template collection, then your custom NIC configuration file, and
finally any additional network configurations.
Any external load balancing environment files if you are using an external load balancer. See
External Load Balancing for the Overcloud for more information.
Any storage environment files such as Ceph Storage, NFS, iSCSI, etc.
NOTE
It is recommended to keep your custom environment files organized in a separate directory, such as the
templates directory.
You can customize advanced features for your overcloud using the Advanced Overcloud Customization
guide.
For more detailed information on Heat templates and environment files, see the Understanding Heat
Templates section of the Advanced Overcloud Customization guide.
IMPORTANT
A basic overcloud uses local LVM storage for block storage, which is not a supported
configuration. It is recommended to use an external storage solution, such as Red Hat
Ceph Storage, for block storage.
The openstack overcloud deploy command supports many options; for example:
-e [EXTRA HEAT TEMPLATE], --extra-template [EXTRA HEAT TEMPLATE]
Extra environment files to pass to the overcloud deployment. Can be specified more than once. Note that the order of environment files passed to the openstack overcloud deploy command is important. For example, parameters from each sequential environment file override the same parameters from earlier environment files.
Some command line parameters are outdated or deprecated in favor of using Heat template parameters, which you include in the parameter_defaults section of an environment file. The following table maps deprecated parameters to their Heat template equivalents.
These parameters are scheduled for removal in a future version of Red Hat OpenStack Platform.
The number of nodes for each role and their flavors. It is vital to include this information for overcloud creation.
The location of the container images for containerized OpenStack services. This is the file
created from one of the options in Chapter 5, Configuring a container image source .
Any network isolation files, starting with the initialization file (environments/network-
isolation.yaml) from the heat template collection, then your custom NIC configuration file, and
finally any additional network configurations.
Any external load balancing environment files if you are using an external load balancer. See
External Load Balancing for the Overcloud for more information.
Any storage environment files such as Ceph Storage, NFS, iSCSI, etc.
NOTE
Any environment files added to the overcloud using the -e option become part of your overcloud’s stack
definition. The following command is an example of how to start the overcloud creation with custom
environment files included:
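The following sketch is illustrative only; it assembles the options described below using the example file paths from this chapter, and you should adjust the list of -e files to match your environment:
(undercloud) $ openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /home/stack/templates/ceph-custom-config.yaml \
  -e /home/stack/inject-trust-anchor-hiera.yaml \
  --ntp-server pool.ntp.org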
--templates
Creates the overcloud using the Heat template collection in /usr/share/openstack-tripleo-heat-
templates as a foundation
-e /home/stack/templates/node-info.yaml
Adds an environment file to define how many nodes and which flavors to use for each role. For
example:
parameter_defaults:
OvercloudControllerFlavor: control
OvercloudComputeFlavor: compute
OvercloudCephStorageFlavor: ceph-storage
ControllerCount: 3
ComputeCount: 3
CephStorageCount: 3
-e /home/stack/templates/overcloud_images.yaml
Adds an environment file containing the container image sources. See Chapter 5, Configuring a
container image source for more information.
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml
Adds an environment file to initialize network isolation in the overcloud deployment.
-e /home/stack/templates/network-environment.yaml
Adds an environment file to customize network isolation.
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml
Adds an environment file to enable Ceph Storage services.
-e /home/stack/templates/ceph-custom-config.yaml
Adds an environment file to customize our Ceph Storage configuration.
-e /home/stack/inject-trust-anchor-hiera.yaml
Adds an environment file to install a custom certificate in the undercloud.
--ntp-server pool.ntp.org
Use an NTP server for time synchronization. This is required for keeping the Controller node cluster
in synchronization.
-r /home/stack/templates/roles_data.yaml
(optional) The generated roles data if using custom roles or enabling a multi architecture cloud. See
Section 6.4, “Generate architecture specific roles” for more information.
The director requires these environment files for re-deployment and post-deployment functions in
Chapter 9, Performing Tasks after Overcloud Creation. Failure to include these files can result in damage
to your overcloud.
2. Run the openstack overcloud deploy command again with the same environment files.
Do not edit the overcloud configuration directly, because such manual configuration gets overridden by the director's configuration when you update the overcloud stack with the director.
(undercloud) $ ls -1 ~/templates
00-node-info.yaml
10-network-isolation.yaml
20-network-environment.yaml
30-storage-environment.yaml
40-rhel-registration.yaml
templates
The core Heat template collection to use. This acts as a substitute for the --templates command line
option.
environments
A list of environment files to include. This acts as a substitute for the --environment-file (-e)
command line option.
templates: /usr/share/openstack-tripleo-heat-templates/
environments:
- ~/templates/00-node-info.yaml
- ~/templates/10-network-isolation.yaml
- ~/templates/20-network-environment.yaml
- ~/templates/30-storage-environment.yaml
- ~/templates/40-rhel-registration.yaml
To create a new plan, run the following command as the stack user:
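For example (illustrative; my-overcloud is the example plan name used in this section):
(undercloud) $ openstack overcloud plan create --templates /usr/share/openstack-tripleo-heat-templates my-overcloud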
This creates a plan from the core Heat template collection in /usr/share/openstack-tripleo-heat-
templates. The director names the plan based on your input. In this example, it is my-overcloud. The
director uses this name as a label for the object storage container, the workflow environment, and
overcloud stack names.
NOTE
The openstack overcloud deploy command essentially uses all of these commands to
remove the existing plan, upload a new plan with environment files, and deploy the plan.
Use the rendered template in ~/overcloud-validation for the validation tests that follow.
NOTE
This command identifies any syntax errors in the template. If the template syntax validates successfully,
the output shows a preview of the resulting overcloud template.
The openstack stack list --nested command shows the current stage of the overcloud creation.
This loads the necessary environment variables to interact with your overcloud from the director host’s
CLI. The command prompt changes to indicate this:
(overcloud) $
To return to interacting with the director’s host, run the following command:
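For example (illustrative):
(overcloud) $ source ~/stackrc
(undercloud) $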
Each node in the overcloud also contains a user called heat-admin. The stack user has SSH access to
this user on each node. To access a node over SSH, find the IP address of the desired node:
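For example (illustrative):
(undercloud) $ openstack server list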
Then connect to the node using the heat-admin user and the node’s IP address:
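For example (illustrative; the IP address is a placeholder from the example Provisioning subnet):
(undercloud) $ ssh [email protected]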
This concludes the creation of the overcloud using the command line tools. For post-creation functions,
see Chapter 9, Performing Tasks after Overcloud Creation.
CHAPTER 7. CONFIGURING A BASIC OVERCLOUD WITH THE WEB UI
For the examples in this chapter, all nodes are bare metal systems using IPMI for power management.
For more supported power management types and their options, see Appendix B, Power Management
Drivers.
Workflow
1. Register blank nodes using a node definition template and manual registration.
Requirements
The director node created in Chapter 4, Installing the undercloud with the UI enabled
A set of bare metal machines for your nodes. The number of nodes required depends on the type
of overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for
information on overcloud roles). These machines also must comply with the requirements set
for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These
nodes do not require an operating system. The director copies a Red Hat Enterprise Linux 7
image to each node.
One network connection for the Provisioning network, which is configured as a native VLAN. All
nodes must connect to this network and comply with the requirements set in Section 2.3,
“Networking Requirements”.
All other network types use the Provisioning network for OpenStack services. However, you can
create additional networks for other network traffic types.
IMPORTANT
When enabling a multi-architecture cloud, the UI workflow is not supported. Please follow
the instructions in Chapter 6, Configuring a Basic Overcloud with the CLI Tools
Username - The administration user for the director. The default is admin.
Password - The password for the administration user. Run sudo hiera admin_password as the
stack user on the undercloud host terminal to find out the password.
When logging in to the UI, the UI accesses the OpenStack Identity Public API and obtains the endpoints
for the other Public API services. These services include:
OpenStack Object Storage (swift) - For storage of the Heat template collection or plan used for the overcloud creation.
Plans
A menu item at the top of the UI. This page acts as the main UI section and allows you to define the
plan to use for your overcloud creation, the nodes to assign to each role, and the status of the
current overcloud. This section also provides a deployment workflow to guide you through each step
of the overcloud creation process, including setting deployment parameters and assigning your
nodes to roles.
Nodes
A menu item at the top of the UI. This page acts as a node configuration section and provides
methods for registering new nodes and introspecting registered nodes. This section also shows
information such as the power state, introspection status, provision state, and hardware information.
Clicking on the overflow menu item (the triple dots) on the right of each node displays the disk
information for the chosen node.
Validations
Clicking on the Validations menu option displays a panel on the right side of the page.
Pre-deployment
Post-deployment
Pre-Introspection
Pre-Upgrade
Post-Upgrade
These validation tasks run automatically at certain points in the deployment. However, you can also run them manually. Click the Play button for a validation task that you want to run, or click the title of a validation task to view more information about it.
3. Configure Roles and Assign Nodes - Assign nodes to roles and modify role-specific parameters.
The undercloud installation and configuration automatically uploads a plan. You can also import multiple
plans in the web UI. Click on the All Plans breadcrumb on the Plan screen. This displays the current
Plans listing. Change between multiple plans by clicking on a card.
Click Import Plan and a window appears asking you for the following information:
Plan Name - A plain text name for the plan. For example overcloud.
Upload Type - Choose whether to upload a Tar Archive (tar.gz) or a full Local Folder (Google
Chrome only).
Plan Files - Click Browse to choose the plan on your local file system.
If you need to copy the director’s Heat template collection to a client machine, archive the files and copy
them:
$ cd /usr/share/openstack-tripleo-heat-templates/
$ tar -cf ~/overcloud.tar *
$ scp ~/overcloud.tar [email protected]:~/.
Once the director UI uploads the plan, the plan appears in the Plans listing and you can now configure it.
Click on the plan card of your choice.
The director requires a list of nodes for registration, which you can supply using one of two methods:
1. Uploading a node definition template - This involves clicking the Upload from File button and
selecting a file. See Section 6.1, “Registering Nodes for the Overcloud” for the syntax of the
node definition template.
2. Manually registering each node - This involves clicking Add New and providing a set of details
for the node.
The details you need to provide for manual registration include the following:
Name
A plain text name for the node. Use only RFC3986 unreserved characters.
Driver
The power management driver to use. This example uses the IPMI driver (ipmi) but other drivers are
available. See Appendix B, Power Management Drivers for available drivers.
IPMI IP Address
The IP address of the IPMI device.
IPMI Port
The port to access the IPMI device.
IPMI Username; IPMI Password
The IPMI username and password.
Architecture
(Optional) The system architecture of the node.
NOTE
The UI also allows for registration of nodes using Dell Remote Access Controller (DRAC)
power management. These nodes use the pxe_drac driver. For more information, see
Section B.2, “Dell Remote Access Controller (DRAC)” .
After entering your node information, click Register Nodes at the bottom of the window.
The director registers the nodes. Once complete, you can use the UI to perform introspection on the
nodes.
NOTE
You can also create policy files to automatically tag nodes into profiles immediately after
introspection. For more information on creating policy files and including them in the
introspection process, see Appendix E, Automatic Profile Tagging. Alternatively, you can
tag nodes into profiles through the UI. See Section 7.9, “Assigning Nodes to Roles in the
Web UI” for details on manually tagging nodes.
IMPORTANT
Make sure this process runs to completion. This process usually takes 15 minutes for bare
metal nodes.
Once the introspection process completes, select all nodes with the Provision State set to
manageable then click the Provide Nodes button. Wait until the Provision State changes to available.
The Nodes screen includes an additional menu toggle that provides extra node management actions,
such as Tag Nodes.
1. Select the nodes you want to tag using the check boxes.
4. Select an existing profile. To create a new profile, select Specify Custom Profile and enter the
name in Custom Profile.
NOTE
If you create a custom profile, you must also assign the profile tag to a new flavor.
See Section 6.5, “Tagging Nodes into Profiles” for more information on creating
new flavors.
Overall Settings
This provides a method to include different features from your overcloud. These features are
defined in the plan’s capabilities-map.yaml file with each feature using a different environment file.
For example, under Storage you can select Storage Environment, which the plan maps to the
environments/storage-environment.yaml file and allows you to configure NFS, iSCSI, or Ceph
settings for your overcloud. The Other tab contains any environment files detected in the plan but
not listed in the capabilities-map.yaml, which is useful for adding custom environment files included
in the plan. Once you have selected the features to include, click Save Changes.
Parameters
This includes various base-level and environment file parameters for your overcloud. Once you have
modified your parameters, click Save Changes.
Clicking this icon displays a selection of cards representing available roles to add to your environment.
To add a role, mark the checkbox in the role’s top-right corner.
To assign nodes to a role, scroll to the 3 Configure Roles and Assign Nodes section on the Plan
screen. Each role uses a spinner widget to assign the number of nodes to a role. The available nodes per
role are based on the tagged nodes in Section 7.6, “Tagging Nodes into Profiles in the Web UI”.
This changes the *Count parameter for each role. For example, if you change the number of nodes in
the Controller role to 3, this sets the ControllerCount parameter to 3. You can also view and edit these
count values in the Parameters tab of the deployment configuration. See Section 7.7, “Editing
Overcloud Plan Parameters in the Web UI” for more information.
Parameters
This includes various role specific parameters. For example, if you are editing the controller role, you
can change the default flavor for the role using the OvercloudControlFlavor parameter. Once you
have modified your role specific parameters, click Save Changes.
Services
This defines the service-specific parameters for the chosen role. The left panel shows a list of
services that you select and modify. For example, to change the time zone, click the
OS::TripleO::Services::Timezone service and change the TimeZone parameter to your desired time
zone. Once you have modified your service-specific parameters, click Save Changes.
Network Configuration
This allows you to define an IP address or subnet range for various networks in your overcloud.
IMPORTANT
Although the role’s service parameters appear in the UI, some services might be disabled
by default. You can enable these services through the instructions in Section 7.7, “Editing
Overcloud Plan Parameters in the Web UI”. See also the Composable Roles section of the
Advanced Overcloud Customization guide for information on enabling these services.
If you have not run or passed all the validations for the undercloud, a warning message appears. Make
sure that your undercloud host satisfies the requirements before running a deployment.
The UI regularly monitors the progress of the overcloud’s creation and displays a progress bar indicating
the current percentage of progress. The View detailed information link displays a log of the current
OpenStack Orchestration stacks in your overcloud.
After the overcloud creation process completes, the 4 Deploy section displays the current overcloud
status and the following details:
Password - The password for the OpenStack admin user on the overcloud.
CHAPTER 8. CONFIGURING A BASIC OVERCLOUD USING PRE-PROVISIONED NODES
You can provision nodes using an external tool and let the director control the overcloud
configuration only.
You can use nodes without relying on the director’s provisioning methods. This is useful if you
want to create an overcloud without power management control, or if you use networks with
DHCP/PXE boot restrictions.
The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or
OpenStack Image (glance) for managing nodes.
This scenario provides basic configuration with no custom features. However, you can add advanced
configuration options to this basic overcloud and customize it to your specifications using the
instructions in the Advanced Overcloud Customization guide.
Requirements
A set of bare metal machines for your nodes. The number of nodes required depends on the
type of overcloud you intend to create (see Section 3.1, “Planning Node Deployment Roles” for
information on overcloud roles). These machines also must comply with the requirements set
for each node type. For these requirements, see Section 2.4, “Overcloud Requirements”. These
nodes require Red Hat Enterprise Linux 7.5 or later installed as the host operating system. Red
Hat recommends using the latest version available.
One network connection for managing the pre-provisioned nodes. This scenario requires
uninterrupted SSH access to the nodes for orchestration agent configuration.
One network connection for the Control Plane network. There are two main scenarios for this
network:
Using the Provisioning Network as the Control Plane, which is the default scenario. This
network is usually a layer-3 (L3) routable network connection from the pre-provisioned
nodes to the director. The examples for this scenario use the following IP address assignments:
Director 192.168.24.1
Controller 0 192.168.24.2
Compute 0 192.168.24.3
Using a separate network. In situations where the director’s Provisioning network is a private
non-routable network, you can define IP addresses for the nodes from any subnet and
communicate with the director over the Public API endpoint. There are certain caveats to
this scenario, which this chapter examines later in Section 8.6, “Using a Separate Network
for Overcloud Nodes”.
All other network types in this example also use the Control Plane network for OpenStack
services. However, you can create additional networks for other network traffic types.
1. On each overcloud node, create the user named stack and set a password on each node. For
example, use the following on the Controller node:
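The command listing is not reproduced in this extract. A minimal sketch, assuming the Controller node and including the passwordless sudo configuration that the following steps imply:
[root@controller-0 ~]# useradd stack
[root@controller-0 ~]# passwd stack  # set a password for the stack user
[root@controller-0 ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@controller-0 ~]# chmod 0440 /etc/sudoers.d/stack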
3. Once you have created and configured the stack user on all pre-provisioned nodes, copy the
stack user’s public SSH key from the director node to each overcloud node. For example, to
copy the director’s public SSH key to the Controller node:
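The copy command is not shown in this extract; a sketch, assuming the stack user on the director and the Controller IP address 192.168.24.2 used in this chapter:
[stack@director ~]$ ssh-copy-id [email protected]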
IMPORTANT
Standalone Ceph nodes are an exception and do not require a Red Hat OpenStack
Platform subscription. For standalone Ceph nodes, the director requires newer Ansible packages.
You must enable the rhel-7-server-openstack-13-deployment-tools-rpms repository on all Ceph
nodes without active Red Hat OpenStack Platform subscriptions to obtain Red Hat OpenStack
Platform-compatible deployment tools.
The following procedure shows how to register each node to the Red Hat Content Delivery Network.
Perform these steps on each node:
1. Run the registration command and enter your Customer Portal user name and password when
prompted:
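The registration command is not reproduced here; a minimal sketch using subscription-manager:
[root@controller-0 ~]# subscription-manager register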
2. Find the entitlement pool for the Red Hat OpenStack Platform 13:
3. Use the pool ID located in the previous step to attach the Red Hat OpenStack Platform 13
entitlements:
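The commands themselves are not shown in this extract; a hedged sketch using subscription-manager, where pool_id is a placeholder for the pool ID found in the previous step:
[root@controller-0 ~]# subscription-manager list --available --all --matches="Red Hat OpenStack"
[root@controller-0 ~]# subscription-manager attach --pool=pool_id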
6. Update your system to ensure you have the latest base system packages:
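The update command is not reproduced here; a minimal sketch:
[root@controller-0 ~]# yum update -y
[root@controller-0 ~]# reboot  # rebooting after the update is an assumption, typically needed for a new kernel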
Each pre-provisioned node uses the OpenStack Orchestration (heat) agent to communicate with the
director. The agent on each node polls the director and obtains metadata tailored to each node. This
metadata allows the agent to configure each node.
Install the initial packages for the orchestration agent on each node:
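The install command is not shown in this extract; a sketch, assuming the python-heat-agent packages provide the agent and its hooks:
[root@controller-0 ~]# yum install python-heat-agent*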
This ensures the overcloud nodes can access the director’s Public API over SSL/TLS.
The director’s Control Plane network, which is the subnet defined with the network_cidr
parameter from your undercloud.conf file. The nodes require either direct access to this subnet or
routable access to it.
The director’s Public API endpoint, specified as the undercloud_public_host parameter from
your undercloud.conf file. This option is available if either you do not have an L3 route to the
Control Plane or you aim to use SSL/TLS communication when polling the director for
metadata. See Section 8.6, “Using a Separate Network for Overcloud Nodes” for additional
steps for configuring your overcloud nodes to use the Public API endpoint.
The director uses a Control Plane network to manage and configure a standard overcloud. For an
overcloud with pre-provisioned nodes, your network configuration might require some modification to
accommodate how the director communicates with the pre-provisioned nodes.
NOTE
If using network isolation, make sure your NIC templates do not include the NIC used for
undercloud access. These templates can reconfigure the NIC, which can lead to
connectivity and configuration problems during deployment.
Assigning IP Addresses
If not using network isolation, you can use a single Control Plane network to manage all services. This
requires manual configuration of the Control Plane NIC on each node to use an IP address within the
Control Plane network range. If using the director’s Provisioning network as the Control Plane, make
sure the chosen overcloud IP addresses fall outside of the DHCP ranges for both provisioning
(dhcp_start and dhcp_end) and introspection (inspection_iprange).
During standard overcloud creation, the director creates OpenStack Networking (neutron) ports and
automatically assigns IP addresses to the overcloud nodes on the Provisioning / Control Plane network.
However, this can cause the director to assign different IP addresses from the ones manually configured
for each node. In this situation, use a predictable IP address strategy to force the director to use the pre-
provisioned IP assignments on the Control Plane.
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

parameter_defaults:
  DeployedServerPortMap:
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.2
      subnets:
        - cidr: 24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.3
      subnets:
        - cidr: 24
1. The name of the assignment, which follows the format <node_hostname>-<network> where
the <node_hostname> value matches the short hostname for the node and <network>
matches the lowercase name of the network. For example: controller-0-ctlplane for controller-
0.example.com and compute-0-ctlplane for compute-0.example.com.
fixed_ips/ip_address - Defines the fixed IP addresses for the control plane. Use multiple
ip_address parameters in a list to define multiple IP addresses.
A later step in this chapter uses the resulting environment file (ctlplane-assignments.yaml) as part of
the openstack overcloud deploy command.
The overcloud nodes must accommodate the basic network configuration from Section 8.5,
“Configuring Networking for the Control Plane”.
You must enable SSL/TLS on the director for Public API endpoint usage. For more information,
see Section 4.9, “Director configuration parameters” and Appendix A, SSL/TLS Certificate
Configuration.
You must define an accessible fully qualified domain name (FQDN) for director. This FQDN
must resolve to a routable IP address for the director. Use the undercloud_public_host
parameter in the undercloud.conf file to set this FQDN.
The examples in this section use IP address assignments that differ from the main scenario:
Controller 0 192.168.100.2
Compute 0 192.168.100.3
The following sections provide additional configuration for situations that require a separate network for
overcloud nodes.
Orchestration Configuration
With SSL/TLS communication enabled on the undercloud, the director provides a Public API endpoint
for most services. However, OpenStack Orchestration (heat) uses the internal endpoint as a default
provider for metadata. This means the undercloud requires some modification so overcloud nodes can
access OpenStack Orchestration on public endpoints. This modification involves changing some Puppet
hieradata on the director.
The hieradata_override parameter in your undercloud.conf file allows you to specify additional Puppet hieradata for
undercloud configuration. Use the following steps to modify hieradata relevant to OpenStack
Orchestration:
1. If you are not using a hieradata_override file already, create a new one. This example uses one
located at /home/stack/hieradata.yaml.
2. Add the following hieradata to the file:
heat_clients_endpoint_type: public
heat::engine::default_deployment_signal_transport: TEMP_URL_SIGNAL
This changes the endpoint type from the default internal to public and changes the signaling
method to use TempURLs from OpenStack Object Storage (swift).
3. In your undercloud.conf, set the hieradata_override parameter to the path of the hieradata
file:
hieradata_override = /home/stack/hieradata.yaml
4. Rerun the openstack undercloud install command to implement the new configuration
options.
This switches the orchestration metadata server to use URLs on the director’s Public API.
IP Address Assignments
The method for IP assignments is similar to Section 8.5, “Configuring Networking for the Control Plane”.
However, since the Control Plane is not routable from the deployed servers, you use the
DeployedServerPortMap parameter to assign IP addresses from your chosen overcloud node subnet,
including the virtual IP address to access the Control Plane. The following is a modified version of the
ctlplane-assignments.yaml environment file from Section 8.5, “Configuring Networking for the
Control Plane” that accommodates this network architecture:
resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::Network::Ports::RedisVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml 1

parameter_defaults:
  NeutronPublicInterface: eth1
  EC2MetadataIp: 192.168.100.1 2
  ControlPlaneDefaultRoute: 192.168.100.1
  DeployedServerPortMap:
    control_virtual_ip:
      fixed_ips:
        - ip_address: 192.168.100.1
      subnets:
        - cidr: 24
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.2
      subnets:
        - cidr: 24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.100.3
      subnets:
        - cidr: 24
2 The EC2MetadataIp and ControlPlaneDefaultRoute parameters are set to the value of the
Control Plane virtual IP address. The default NIC configuration templates require these parameters
and you must set them to use a pingable IP address to pass the validations performed during
deployment. Alternatively, customize the NIC configuration so that it does not require these parameters.
bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh
Using the example export command, set the OVERCLOUD_HOSTS variable to the IP addresses of the
overcloud hosts intended to be used as Ceph clients (such as the Compute, Block Storage, Image, File
System, Telemetry services, and so forth). The enable-ssh-admin.sh script configures a user on the
overcloud nodes that Ansible uses to configure Ceph clients.
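The export command referenced above is not reproduced in this extract; a sketch, assuming the example node IP addresses used in this chapter:
(undercloud) $ export OVERCLOUD_HOSTS="192.168.24.2 192.168.24.3"
(undercloud) $ bash /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh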
--disable-validations - Disables basic CLI validations for services not used with pre-provisioned
infrastructure; otherwise, the deployment fails.
If using your own custom roles file, make sure to include the disable_constraints: True
parameter with each role. For example:
- name: ControllerDeployedServer
  disable_constraints: True
  CountDefault: 1
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephMon
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::CephRgw
    ...
The following is an example overcloud deployment command with the environment files specific to the
pre-provisioned architecture:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy \
[other arguments] \
--disable-validations \
-e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-bootstrap-
environment-rhel.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/deployed-server-pacemaker-
environment.yaml \
-r /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-server-roles-data.yaml
This begins the overcloud configuration. However, the deployment stack pauses when the overcloud
node resources enter the CREATE_IN_PROGRESS stage:
This pause is due to the director waiting for the orchestration agent on the overcloud nodes to poll the
metadata server. The next section shows how to configure nodes to start polling the metadata server.
IMPORTANT
Only use automatic configuration for the initial deployment. Do not use automatic
configuration if scaling up your nodes.
Automatic Configuration
The director’s core Heat template collection contains a script that performs automatic configuration of
the Heat agent on the overcloud nodes. The script requires you to source the stackrc file as the stack
user to authenticate with the director and query the orchestration service:
The script also requires some additional environment variables to define the node roles and
their IP addresses. These environment variables are:
OVERCLOUD_ROLES
A space-separated list of roles to configure. These roles correlate to roles defined in your roles data
file.
[ROLE]_hosts
Each role requires an environment variable with a space-separated list of IP addresses for nodes in
the role.
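The export commands are not shown in this extract; a sketch, assuming the deployed-server role names used earlier in this chapter and the example node IP addresses from the default scenario:
(undercloud) $ export OVERCLOUD_ROLES="ControllerDeployedServer ComputeDeployedServer"
(undercloud) $ export ControllerDeployedServer_hosts="192.168.24.2"
(undercloud) $ export ComputeDeployedServer_hosts="192.168.24.3"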
Run the script to configure the orchestration agent on each overcloud node:
(undercloud) $ /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/get-occ-config.sh
NOTE
The script accesses the pre-provisioned nodes over SSH using the same user executing
the script. In this case, the script authenticates with the stack user.
Queries the director’s orchestration services for the metadata URL for each node.
Accesses the node and configures the agent on each node with its specific metadata URL.
Once the script completes, the overcloud nodes start polling the orchestration service on the director. The
stack deployment continues.
Manual configuration
If you prefer to manually configure the orchestration agent on the pre-provisioned nodes, use the
following command to query the orchestration service on the director for each node’s metadata URL:
This displays the stack name and metadata URL for each node:
ts6lr4tm5p44-deployed-server-td42md2tap4g/43d302fa-d4c2-40df-b3ac-624d6075ef27?
temp_url_sig=58313e577a93de8f8d2367f8ce92dd7be7aac3a1&temp_url_expires=2147483586
1. Remove the existing os-collect-config.conf template. This ensures the agent does not override
our manual changes:
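The removal command is not reproduced here; a sketch, where the template path is an assumption based on the default os-apply-config layout:
[root@controller-0 ~]# /bin/rm -f /usr/libexec/os-apply-config/templates/etc/os-collect-config.conf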
2. Configure the /etc/os-collect-config.conf file to use the corresponding metadata URL. For
example, the Controller node uses the following:
[DEFAULT]
collectors=request
command=os-refresh-config
polling_interval=30
[request]
metadata_url=http://192.168.24.1:8080/v1/AUTH_6fce4e6019264a5b8283e7125f05b764/ov-
edServer-ts6lr4tm5p44-deployed-server-td42md2tap4g/43d302fa-d4c2-40df-b3ac-
624d6075ef27?
temp_url_sig=58313e577a93de8f8d2367f8ce92dd7be7aac3a1&temp_url_expires=214748358
6
After you have configured the file and restarted the os-collect-config service on each node, the orchestration agents poll the director’s orchestration
service for overcloud configuration. The deployment stack continues its creation and the stack for each
node eventually changes to CREATE_COMPLETE.
The heat stack-list --show-nested command shows the current stage of the overcloud creation.
The director generates a script to configure and help authenticate interactions with your overcloud from
the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the
following command to use this file:
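The command is not reproduced in this extract; it sources the generated file:
$ source ~/overcloudrc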
This loads the necessary environment variables to interact with your overcloud from the director host’s
CLI. The command prompt changes to indicate this:
(overcloud) $
To return to interacting with the director’s host, run the following command:
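The command is not reproduced in this extract; it sources the undercloud credentials file again:
(overcloud) $ source ~/stackrc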
The general process for scaling up pre-provisioned nodes includes the following steps:
2. Scale up the nodes. See Chapter 13, Scaling overcloud nodes for these instructions.
3. After executing the deployment command, wait until the director creates the new node
resources. Manually configure the pre-provisioned nodes to poll the director’s orchestration
server metadata URL as per the instructions in Section 8.9, “Polling the Metadata Server” .
In most scaling operations, you must obtain the UUID value of the node to pass to openstack
overcloud node delete. To obtain this UUID, list the resources for the specific role:
Replace <RoleName> in the above command with the actual name of the role that you are scaling down.
For example, for the ComputeDeployedServer role:
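The listing command is not reproduced in this extract. A hedged sketch using openstack stack resource list with a type filter; the nested depth and the resource type name OS::TripleO::ComputeDeployedServerServer are assumptions:
(undercloud) $ openstack stack resource list overcloud -n 5 \
    -c physical_resource_id -c stack_name \
    --filter type=OS::TripleO::ComputeDeployedServerServer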
Use the stack_name column in the command output to identify the UUID associated with each node.
The stack_name includes the integer value of the index of the node in the Heat resource group. For
example, in the following sample output:
+--------------------------------------+-------------------------------------------------------------+
| physical_resource_id                 | stack_name                                                  |
+--------------------------------------+-------------------------------------------------------------+
| 294d4e4d-66a6-4e4e-9a8b-03ec80beda41 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-1-ytfqdeclwvcg |
| d8de016d-8ff9-4f29-bc63-21884619abe5 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-0-p4vb3meacxwn |
| 8c59f7b1-2675-42a9-ae2c-2de4a066f2a9 | overcloud-ComputeDeployedServer-no7yfgnh3z7e-2-mmmaayxqnf3o |
+--------------------------------------+-------------------------------------------------------------+
The indices 0, 1, or 2 in the stack_name column correspond to the node order in the Heat resource
group. Pass the corresponding UUID value from the physical_resource_id column to the openstack
overcloud node delete command.
Once you have removed overcloud nodes from the stack, power off these nodes. Under a standard
deployment, the bare metal services on the director control this function. However, with pre-provisioned
nodes, you should either manually shut down these nodes or use the power management control for
each physical system. If you do not power off the nodes after removing them from the stack, they might
remain operational and reconnect as part of the overcloud environment.
After powering down the removed nodes, reprovision them back to a base operating system
configuration so that they do not unintentionally join the overcloud in the future.
NOTE
Do not attempt to reuse nodes previously removed from the overcloud without first
reprovisioning them with a fresh base operating system. The scale down process only
removes the node from the overcloud stack and does not uninstall any packages.
After removing the overcloud, power off all nodes and reprovision them back to a base operating
system configuration.
NOTE
Do not attempt to reuse nodes previously removed from the overcloud without first
reprovisioning them with a fresh base operating system. The removal process only
deletes the overcloud stack and does not uninstall any packages.
CHAPTER 9. PERFORMING TASKS AFTER OVERCLOUD CREATION
NOTE
Before running these commands, check that you are logged into an overcloud node and
not running these commands on the undercloud.
$ sudo docker ps
To stop a containerized service, use the docker stop command. For example, to stop the keystone
container:
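A minimal sketch of the elided command, assuming the container is named keystone as the text indicates:
$ sudo docker stop keystone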
To start a stopped containerized service, use the docker start command. For example, to start the
keystone container:
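A minimal sketch of the elided command:
$ sudo docker start keystone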
NOTE
Any changes to the service configuration files within the container revert after restarting
the container. This is because the container regenerates the service configuration based
upon files on the node’s local file system in /var/lib/config-data/puppet-generated/. For
example, if you edit /etc/keystone/keystone.conf within the keystone container and
restart the container, the container regenerates the configuration using /var/lib/config-
data/puppet-generated/keystone/etc/keystone/keystone.conf on the node’s local file
system, which overwrites any changes made within the container before the restart.
Monitoring containers
To check the logs for a containerized service, use the docker logs command. For example, to view the
logs for the keystone container:
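A minimal sketch of the elided command:
$ sudo docker logs keystone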
Accessing containers
To enter the shell for a containerized service, use the docker exec command to launch /bin/bash. For
example, to enter the shell for the keystone container:
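A minimal sketch of the elided command:
$ sudo docker exec -it keystone /bin/bash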
To enter the shell for the keystone container as the root user:
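A sketch of the elided command; the --user 0 option switches to the root user inside the container:
$ sudo docker exec --user 0 -it keystone /bin/bash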
To exit the container shell:
# exit
For information about troubleshooting OpenStack Platform containerized services, see Section 16.7.3,
“Containerized Service Failures”.
$ source ~/overcloudrc
(overcloud) $ openstack network create default
(overcloud) $ openstack subnet create default --network default --gateway 172.20.1.1 --subnet-range
172.20.0.0/16
This creates a basic Neutron network called default. The overcloud automatically assigns IP addresses
from this network using an internal DHCP mechanism.
Source the overcloud and create an External network in Neutron. For example:
$ source ~/overcloudrc
(overcloud) $ openstack network create public --external --provider-network-type flat --provider-
physical-network datacentre
(overcloud) $ openstack subnet create public --network public --dhcp --allocation-pool
start=10.1.1.51,end=10.1.1.250 --gateway 10.1.1.1 --subnet-range 10.1.1.0/24
In this example, you create a network with the name public. The overcloud requires this specific name
for the default floating IP pool. This is also important for the validation tests in Section 9.7, “Validating
the Overcloud”.
This command also maps the network to the datacentre physical network. As a default, datacentre
maps to the br-ex bridge. Leave this option as the default unless you have used custom neutron settings
during the overcloud creation.
$ source ~/overcloudrc
(overcloud) $ openstack network create public --external --provider-network-type vlan --provider-
physical-network datacentre --provider-segment 104
(overcloud) $ openstack subnet create public --network public --dhcp --allocation-pool
start=10.1.1.51,end=10.1.1.250 --gateway 10.1.1.1 --subnet-range 10.1.1.0/24
The provider:segmentation_id value defines the VLAN to use. In this case, you can use 104.
You have mapped the additional bridge during deployment. For example, to map a new bridge
called br-floating to the floating physical network, use the following in an environment file:
parameter_defaults:
  NeutronBridgeMappings: "datacentre:br-ex,floating:br-floating"
$ source ~/overcloudrc
(overcloud) $ openstack network create ext-net --external --provider-physical-network floating --
provider-network-type vlan --provider-segment 105
(overcloud) $ openstack subnet create ext-subnet --network ext-net --dhcp --allocation-pool
start=10.1.2.51,end=10.1.2.250 --gateway 10.1.2.1 --subnet-range 10.1.2.0/24
When creating a provider network, you associate it with a physical network, which uses a bridge mapping.
This is similar to floating IP network creation. You add the provider network to both the Controller and
the Compute nodes because the Compute nodes attach VM virtual network interfaces directly to the
attached network interface.
For example, if the desired provider network is a VLAN on the br-ex bridge, use the following command
to add a provider network on VLAN 201:
$ source ~/overcloudrc
(overcloud) $ openstack network create provider_network --provider-physical-network datacentre --
provider-network-type vlan --provider-segment 201 --share
This command creates a shared network. It is also possible to specify a tenant instead of specifying --
share. That network will only be available to the specified tenant. If you mark a provider network as
external, only the operator may create ports on that network.
Add a subnet to a provider network if you want neutron to provide DHCP services to the tenant
instances:
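The command is not shown in this extract; a sketch using openstack subnet create, where the subnet range and allocation pool values are illustrative assumptions:
(overcloud) $ openstack subnet create provider-subnet --network provider_network --dhcp \
    --allocation-pool start=10.9.101.50,end=10.9.101.100 \
    --gateway 10.9.101.254 --subnet-range 10.9.101.0/24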
Other networks might require access externally through the provider network. In this situation, create a
new router so that other networks can route traffic through the provider network:
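The commands are not reproduced here; a sketch that creates a router and sets the provider network as its external gateway (the router name external is an assumption):
(overcloud) $ openstack router create external
(overcloud) $ openstack router set --external-gateway provider_network external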
Attach other networks to this router. For example, if you had a subnet called subnet1, you can attach it
to the router with the following commands:
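A minimal sketch of the elided command, reusing the hypothetical external router from the previous sketch:
(overcloud) $ openstack router add subnet external subnet1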
This adds subnet1 to the routing table and allows traffic using subnet1 to route to the provider
network.
Command options
ram
Use the ram option to define the maximum RAM for the flavor.
disk
Use the disk option to define the hard disk space for the flavor.
vcpus
Use the vcpus option to define the quantity of virtual CPUs for the flavor.
Use $ openstack flavor create --help to learn more about the openstack flavor create command.
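As an illustration of these options, a hedged sketch that creates a small flavor (the flavor name and values are assumptions):
(overcloud) $ openstack flavor create --ram 512 --disk 10 --vcpus 1 m1.tiny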
$ source ~/stackrc
(undercloud) $ sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201
type=internal
(undercloud) $ sudo ip l set dev vlan201 up; sudo ip addr add 172.16.0.201/24 dev vlan201
Before running the OpenStack Integration Test Suite, check that the heat_stack_owner role exists in
your overcloud:
$ source ~/overcloudrc
(overcloud) $ openstack role list
+----------------------------------+------------------+
| ID | Name |
+----------------------------------+------------------+
| 6226a517204846d1a26d15aae1af208f | swiftoperator |
| 7c7eb03955e545dd86bbfeb73692738b | heat_stack_owner |
+----------------------------------+------------------+
$ source ~/stackrc
(undercloud) $ sudo ovs-vsctl del-port vlan201
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
-e ~/templates/node-info.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e ~/templates/network-environment.yaml \
-e ~/templates/storage-environment.yaml \
--ntp-server pool.ntp.org
The director checks the overcloud stack in heat, and then updates each item in the stack with the
environment files and heat templates. It does not recreate the overcloud, but rather changes the
existing overcloud.
IMPORTANT
Removing parameters from custom environment files does not revert the parameter
value to the default configuration. You must identify the default value from the core heat
template collection in /usr/share/openstack-tripleo-heat-templates and set the value in
your custom environment file manually.
If you aim to include a new environment file, add it to the openstack overcloud deploy command with a
-e option. For example:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
-e ~/templates/new-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e ~/templates/network-environment.yaml \
-e ~/templates/storage-environment.yaml \
-e ~/templates/node-info.yaml \
--ntp-server pool.ntp.org
This includes the new parameters and resources from the environment file into the stack.
Procedure
$ source ~/stackrc
(undercloud) $ tripleo-ansible-inventory --list
The --list option provides details on all hosts. This outputs the dynamic inventory in a JSON
format:
2. To execute Ansible playbooks on your environment, run the ansible command and include the
full path of the dynamic inventory tool using the -i option. For example:
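The command is not reproduced in this extract; a sketch, where the inventory path is an assumption based on the default package location:
(undercloud) $ ansible [HOSTS] -i /usr/bin/tripleo-ansible-inventory [OTHER OPTIONS]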
Replace [HOSTS] with the type of hosts to target, for example overcloud for all overcloud child nodes, such as controller and compute.
Exchange [OTHER OPTIONS] for the additional Ansible options. Some useful options
include:
-u [USER] to change the SSH user that executes the Ansible automation. The default
SSH user for the overcloud is automatically defined using the ansible_ssh_user
parameter in the dynamic inventory. The -u option overrides this parameter.
IMPORTANT
Ansible automation on the overcloud falls outside the standard overcloud stack. This
means subsequent execution of the openstack overcloud deploy command might
override Ansible-based configuration for OpenStack Platform services on overcloud
nodes.
Create a new image by taking a snapshot of a running server and download the image.
$ source ~/overcloudrc
(overcloud) $ openstack server image create instance_name --name image_name
(overcloud) $ openstack image save image_name --file exported_vm.qcow2
Upload the exported image into the overcloud and launch a new instance.
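The commands are not shown in this extract; a sketch, where the image name, flavor, and network ID are placeholders:
(overcloud) $ openstack image create imported_image --file exported_vm.qcow2 \
    --disk-format qcow2 --container-format bare
(overcloud) $ openstack server create imported_instance --flavor m1.small \
    --image imported_image --nic net-id=<net_id>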
IMPORTANT
Each VM disk has to be copied from the existing OpenStack environment and into the
new Red Hat OpenStack Platform. Snapshots using QCOW will lose their original layering
system.
{"stacks:delete": "rule:deny_everybody"}
This prevents removal of the overcloud with the heat client. To allow removal of the overcloud, delete
the custom policy and save /etc/heat/policy.json.
$ source ~/stackrc
(undercloud) $ openstack overcloud delete overcloud
Once the removal completes, follow the standard steps in the deployment scenarios to recreate your
overcloud.
For the overcloud, you can adjust the interval using the KeystoneCronToken values. For more
information, see the Overcloud Parameters guide.
CHAPTER 10. CONFIGURING THE OVERCLOUD WITH ANSIBLE
IMPORTANT
This feature is available in this release as a Technology Preview, and therefore is not fully
supported by Red Hat. It should only be used for testing, and should not be deployed in a
production environment. For more information about Technology Preview features, see
Scope of Coverage Details.
It is possible to use Ansible as the main method to apply the overcloud configuration. This chapter
provides steps on enabling this feature on your overcloud.
Although director automatically generates the Ansible playbooks, it is a good idea to familiarize yourself
with Ansible syntax. See https://docs.ansible.com/ for more information about how to use Ansible.
NOTE
Ansible also uses the concept of roles, which are different from OpenStack Platform
director roles.
NOTE
This configuration method does not support deploying Ceph Storage clusters on any
nodes.
Replaces the communication and transport of the configuration deployment data between Heat
and the Heat agent (os-collect-config) on the overcloud nodes
The director uses Heat to create the stack and all descendant resources.
Heat still creates any OpenStack service resources, including bare metal node and network
creation.
Although Heat creates all deployment data from SoftwareDeployment resources to perform the
overcloud installation and configuration, it does not apply any of the configuration. Instead, Heat only
provides the data through its API. Once the stack is created, a Mistral workflow queries the Heat API for
the deployment data and applies the configuration by running ansible-playbook with an Ansible
inventory file and a generated set of playbooks.
Procedure
$ source ~/stackrc
2. Run the overcloud deployment command and include the --config-download option and the
environment file to disable heat-based configuration:
--config-download enables the additional Mistral workflow, which applies the configuration
with ansible-playbook instead of Heat.
-e /usr/share/openstack-tripleo-heat-templates/environments/config-download-
environment.yaml is a required environment file that maps the Heat software deployment
configuration resources to their Ansible-based equivalents. This provides the configuration
data through the Heat API without Heat applying configuration.
--overcloud-ssh-user and --overcloud-ssh-key are used to SSH into each overcloud node,
create an initial tripleo-admin user, and inject an SSH key into /home/tripleo-
admin/.ssh/authorized_keys. To inject the SSH key, the user specifies credentials for the
initial SSH connection with --overcloud-ssh-user (defaults to heat-admin) and --
overcloud-ssh-key (defaults to ~/.ssh/id_rsa). To limit exposure to the private key
specified with --overcloud-ssh-key, the director never passes this key to any API service,
such as Heat or Mistral, and only the director’s openstack overcloud deploy command
uses this key to enable access for the tripleo-admin user.
When running this command, make sure you also include any other files relevant to your
overcloud. For example:
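The deployment command is not reproduced in this extract. A hedged sketch that combines the options described above; the extra environment file is a placeholder:
(undercloud) $ openstack overcloud deploy --templates \
    --config-download \
    -e /usr/share/openstack-tripleo-heat-templates/environments/config-download-environment.yaml \
    --overcloud-ssh-user heat-admin \
    --overcloud-ssh-key ~/.ssh/id_rsa \
    -e /home/stack/templates/node-info.yaml \
    [OTHER OPTIONS]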
3. The overcloud deployment command performs the standard stack operations. However, when
the overcloud stack reaches the configuration stage, the stack switches to the config-
download method for configuring the overcloud:
4. After the Ansible configuration of the overcloud completes, the director provides a report of the
successful and failed tasks and the access URLs for the overcloud:
Ansible passed.
Overcloud configuration completed.
Started Mistral Workflow tripleo.deployment.v1.get_horizon_url. Execution ID: 0e4ca4f6-
9d14-418a-9c46-27692649b584
Overcloud Endpoint: http://10.0.0.1:5000/
Overcloud Horizon Dashboard URL: http://10.0.0.1:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
If using pre-provisioned nodes, you need to perform an additional step to ensure a successful
deployment with config-download.
Procedure
parameter_defaults:
  HostnameMap:
    [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
    [HEAT HOSTNAME]: [ACTUAL HOSTNAME]
The [HEAT HOSTNAME] usually follows the following convention: [STACK NAME]-[ROLE]-
[INDEX]. For example:
parameter_defaults:
  HostnameMap:
    overcloud-controller-0: controller-00-rack01
    overcloud-controller-1: controller-01-rack02
    overcloud-controller-2: controller-02-rack03
    overcloud-novacompute-0: compute-00-rack01
    overcloud-novacompute-1: compute-01-rack01
    overcloud-novacompute-2: compute-02-rack01
3. When running a config-download deployment, include the environment file with the -e option.
For example:
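A sketch of such a command, where the environment file path is an assumption:
(undercloud) $ openstack overcloud deploy --templates \
    -e /home/stack/templates/hostnamemap.yaml \
    [OTHER OPTIONS]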
Before accessing these working directories, you need to set the appropriate permissions for your stack
user.
Procedure
1. The mistral group can read all files under /var/lib/mistral. Grant the interactive stack user on
the undercloud read-only access to these files:
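The commands are not reproduced in this extract; a sketch of the typical approach, which adds the stack user to the mistral group and then refreshes the session:
(undercloud) $ sudo usermod -a -G mistral stack
(undercloud) $ exec su -l stack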
The command prompts you to log in again. Enter the stack user’s password.
$ ls /var/lib/mistral/
Procedure
1. List all executions using the openstack workflow execution list command and find the
workflow ID of the chosen Mistral execution that executed config-download:
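The commands are not shown in this extract; a sketch that lists the executions and then views the Ansible log for the chosen one:
(undercloud) $ openstack workflow execution list
(undercloud) $ less /var/lib/mistral/<execution uuid>/ansible.log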
<execution uuid> is the UUID of the Mistral execution that ran ansible-playbook.
2. Alternatively, look for the most recently modified directory under /var/lib/mistral to quickly find
the log for the most recent deployment:
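A sketch of such a shortcut, assuming the deployment log is written to ansible.log in each execution directory:
(undercloud) $ less /var/lib/mistral/$(ls -t /var/lib/mistral | head -1)/ansible.log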
Procedure
$ cd /var/lib/mistral/<execution uuid>/
<execution uuid> is the UUID of the Mistral execution that ran ansible-playbook.
$ ./ansible-playbook-command.sh
3. You can pass additional Ansible arguments to this script, which in turn are passed unchanged to
the ansible-playbook command. This makes it possible to take further advantage of Ansible
features, such as check mode (--check), limiting hosts (--limit), or overriding variables (-e). For
example:
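A minimal sketch of such an invocation:
$ ./ansible-playbook-command.sh --limit Controller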
4. The working directory contains a playbook called deploy_steps_playbook.yaml, which runs the
overcloud configuration. To view this playbook:
$ less deploy_steps_playbook.yaml
The playbook uses various task files contained within the working directory. Some task files are
common to all OpenStack Platform roles and some are specific to certain OpenStack Platform
roles and servers.
5. The working directory also contains sub-directories that correspond to each role defined in your
overcloud’s roles_data file. For example:
$ ls Controller/
Each OpenStack Platform role directory also contains sub-directories for individual servers of
that role type. The directories use the composable role hostname format. For example:
$ ls Controller/overcloud-controller-0
6. The Ansible tasks are tagged. To see the full list of tags, use the --list-tags CLI argument for
ansible-playbook:
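The command is not reproduced here; a hedged sketch, where the inventory file name in the working directory is an assumption:
$ ansible-playbook -i tripleo-ansible-inventory.yaml --list-tags deploy_steps_playbook.yaml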
Then apply tagged configuration using the --tags, --skip-tags, or --start-at-task options with the
ansible-playbook-command.sh script. For example:
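A minimal sketch of such an invocation:
$ ./ansible-playbook-command.sh --tags overcloud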
Procedure
$ source ~/stackrc
2. Run the overcloud deployment command but do not include the --config-download option or
the config-download-environment.yaml environment file:
When running this command, make sure you also include any other files relevant to your
overcloud. For example:
3. The overcloud deployment command performs the standard stack operations, including
configuration with Heat.
CHAPTER 11. MIGRATING VIRTUAL MACHINES BETWEEN COMPUTE NODES
Compute Node Maintenance: If you must temporarily take a Compute node out of service, you
can temporarily migrate virtual machines running on the Compute node to another Compute
node. Common scenarios include hardware maintenance, hardware repair, kernel upgrades and
software updates.
Failing Compute Node: If a Compute node is about to fail and must be serviced or replaced,
you must migrate virtual machines from the failing Compute node to a healthy Compute node.
For Compute nodes that have already failed, see Evacuating VMs.
Workload Rebalancing: You can consider migrating one or more virtual machines to another
Compute node to rebalance the workload. For example, you can consolidate virtual machines on
a Compute node to conserve power, migrate virtual machines to a Compute node that is
physically closer to other networked resources to reduce latency, or distribute virtual machines
across Compute nodes to avoid hot spots and increase resiliency.
The director configures all Compute nodes to provide secure migration. All Compute nodes also require
a shared SSH key to provide each host’s nova user with access to other Compute nodes during the
migration process. The director creates this key using the OS::TripleO::Services::NovaCompute
composable service. This composable service is one of the main services included on all Compute roles
by default (see Composable Services and Custom Roles in Advanced Overcloud Customization).
Live Migration
Live migration involves spinning up the virtual machine on the destination node and shutting down the
virtual machine on the source node seamlessly while maintaining state consistency.
Live migration handles virtual machine migration with little or no perceptible downtime. In some cases,
virtual machines cannot use live migration. See Migration Constraints for details on migration
constraints.
Cold Migration
Cold migration or non-live migration involves nova shutting down a virtual machine before migrating it
from the source Compute node to the destination Compute node.
Cold migration involves some downtime for the virtual machine. However, cold migration still provides
the migrated virtual machine with access to the same volumes and IP addresses.
IMPORTANT
For source Compute nodes that have already failed, see Evacuation. Migration requires
that both the source and destination Compute nodes are running.
CPU constraints
The source and destination Compute nodes must have the same CPU architecture. For example, Red
Hat does not support migrating a virtual machine from an x86_64 CPU to a ppc64le CPU. In some cases,
the CPU of the source and destination Compute node must match exactly, such as virtual machines that
use CPU host passthrough. In all cases, the CPU features of the destination node must be a superset of
the CPU features on the source node. Using CPU pinning introduces additional constraints. For more
information, see Live Migration Constraints.
Memory constraints
The destination Compute node must have sufficient available RAM. Memory oversubscription can cause
migration to fail. Additionally, virtual machines that use a NUMA topology must have sufficient available
RAM on the same NUMA node on the destination Compute node.
Block migration constraints: Migrating virtual machines that use local disk storage requires copying the disks between the Compute nodes over the control plane network by default. By contrast, volume-backed
instances that use shared storage, such as Red Hat Ceph Storage, do not have to migrate the volumes,
because each Compute node already has access to the shared storage.
NOTE
Network congestion in the control plane network caused by migrating local disks or virtual
machines that consume large amounts of RAM could impact the performance of other
systems that use the control plane network, such as RabbitMQ.
No new operations during migration: To achieve state consistency between the copies of the
virtual machine on the source and destination nodes, Red Hat OpenStack Platform must
prevent new operations during live migration. Otherwise, live migration could take a long time or
potentially never end if writes to memory occur faster than live migration can replicate the state
of the memory.
Non-Uniform Memory Access (NUMA): You can live migrate virtual machines that have a
NUMA topology only when NovaEnableNUMALiveMigration is set to True in the Compute
configuration. This parameter is enabled by default only when the Compute host is configured
for an OVS-DPDK deployment.
CPU Pinning: When a flavor uses CPU pinning, the flavor implicitly introduces a NUMA
topology to the virtual machine and maps its CPUs and memory to specific host CPUs and
memory. The difference between a simple NUMA topology and CPU pinning is that NUMA uses
a range of CPU cores, whereas CPU pinning uses specific CPU cores. For more information, see
Configuring CPU pinning with NUMA. To live migrate virtual machines that use CPU pinning, the
destination host must be empty and must have equivalent hardware.
Data Plane Development Kit (DPDK): When a virtual machine uses DPDK, such as a virtual
machine running Open vSwitch with dpdk-netdev, the virtual machine also uses huge pages
which imposes a NUMA topology such that OpenStack Compute (nova) pins the virtual machine
to a NUMA node.
OpenStack Compute can live migrate a virtual machine that uses NUMA, CPU pinning or DPDK.
However, the destination Compute node must have sufficient capacity on the same NUMA node that
the virtual machine uses on the source Compute node. For example, if a virtual machine uses NUMA 0
on overcloud-compute-0, when migrating the virtual machine to overcloud-compute-1, you must
ensure that overcloud-compute-1 has sufficient capacity on NUMA 0 to support the virtual machine in
order to use live migration.
Single-root Input/Output Virtualization (SR-IOV): You can assign SR-IOV Virtual Functions
(VFs) to virtual machines. However, this prevents live migration. Unlike a regular network device,
an SR-IOV VF network device does not have a permanent unique MAC address. The VF network
device receives a new MAC address each time the Compute node reboots or when nova-
scheduler migrates the virtual machine to a new Compute node. Consequently, nova cannot
live migrate virtual machines that use SR-IOV in OpenStack Platform 13. You must cold migrate
virtual machines that use SR-IOV.
PCI passthrough: QEMU/KVM hypervisors support attaching PCI devices on the Compute
node to a virtual machine. PCI passthrough allows a virtual machine to have exclusive access to
PCI devices, which appear and behave as if they are physically attached to the virtual machine’s
operating system. However, since PCI passthrough involves physical addresses, nova does not
support live migration of virtual machines using PCI passthrough in OpenStack Platform 13.
Procedure
1. From the undercloud, identify the source Compute node host name and the destination
Compute node host name.
$ source ~/overcloudrc
$ openstack compute service list
2. List virtual machines on the source Compute node and locate the ID of the virtual machine or
machines that you want to migrate:
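The listing command is not shown in this extract; a sketch using the openstack client:
(overcloud) $ openstack server list --host [source] --all-projects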
Replace [source] with the host name of the source Compute node.
3. If you are taking the source Compute node out of service, disable it so that the scheduler does not assign new virtual machines to it:
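A sketch of the disable command, mirroring the enable command shown in the post-migration procedure:
(overcloud) $ openstack compute service set [source] nova-compute --disable
Replace [source] with the host name of the source Compute node.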
NOTE
You can live migrate instances that use CPU pinning or huge pages, or that have
a NUMA topology, only when NovaEnableNUMALiveMigration is set to "True" in
the Compute configuration. This parameter is enabled by default only when the
Compute host is configured for an OVS-DPDK deployment.
1. If the destination Compute node for NUMA, CPU-pinned or DPDK virtual machines is not
disabled, disable it to prevent the scheduler from assigning virtual machines to the node.
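A sketch of the disable command:
(overcloud) $ openstack compute service set [dest] nova-compute --disable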
Replace [dest] with the host name of the destination Compute node.
2. Ensure that the destination Compute node has no virtual machines, except for virtual machines
previously migrated from the source Compute node when migrating multiple DPDK or NUMA
virtual machines.
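A sketch of the check, using the openstack client:
(overcloud) $ openstack server list --host [dest] --all-projects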
Replace [dest] with the host name of the destination Compute node.
3. Ensure that the destination Compute node has sufficient resources to run the NUMA, CPU-
pinned or DPDK virtual machine.
Replace overcloud-compute-n with the host name of the destination Compute node.
4. To discover NUMA information about the source or destination Compute nodes, run the
following commands:
$ ssh root@overcloud-compute-n
# lscpu && lscpu | grep NUMA
# virsh nodeinfo
# virsh capabilities
# exit
5. If you are unsure if a virtual machine uses NUMA, check the flavor of the virtual machine.
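The command is not shown in this extract; a sketch using the openstack client:
(overcloud) $ openstack flavor show [flavor]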
Replace [flavor] with the name or ID of the flavor. If the result of the properties field includes
hw:mem_page_size with a value other than any, such as 2MB, 2048, or 1GB, the virtual machine
has a NUMA topology. If the properties field includes
aggregate_instance_extra_specs:pinned='true', the virtual machine uses CPU pinning. If the
properties field includes hw:numa_nodes, the OpenStack Compute (nova) service restricts
the virtual machine to a specific NUMA node.
6. For each virtual machine that uses NUMA, consider retrieving information about the NUMA
topology from the underlying Compute node so that you can verify that the NUMA topology on
the destination Compute node reflects the NUMA topology of the source Compute node after
migration is complete.
$ ssh root@overcloud-compute-n
# virsh vcpuinfo [vm]
# virsh numatune [vm]
# exit
Replace [vm] with the name of the virtual machine. The vcpuinfo command provides details
about NUMA and CPU pinning. The numatune command provides details about which NUMA
node the virtual machine is using.
Procedure
1. To live migrate a virtual machine, specify the virtual machine and the destination Compute
node:
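The migration command is not reproduced here; a sketch using the openstack client:
(overcloud) $ openstack server migrate [vm] --live [dest]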
Replace [vm] with the name or ID of the virtual machine. Replace [dest] with the hostname of
the destination Compute node. Specify the --block-migration flag if migrating a locally stored
volume.
2. Wait for migration to complete. See Check Migration Status to check the status of the
migration.
4. For virtual machines using NUMA, CPU-pinning or DPDK, consider retrieving information about
the NUMA topology from a Compute node to compare it with NUMA topology retrieved during
the pre-migration procedure.
$ ssh root@overcloud-compute-n
# virsh vcpuinfo [vm]
# virsh numatune [vm]
# exit
Replace overcloud-compute-n with the host name of the Compute node. Replace [vm] with
the name of the virtual machine. Comparing the NUMA topologies of the source and
destination Compute nodes helps to ensure that the source and destination Compute nodes
use the same NUMA topology.
5. Repeat this procedure for each additional virtual machine that you intend to migrate.
When you have finished migrating the virtual machines, proceed to the Post-migration Procedures.
Procedure
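The migration command is not shown in this extract; a sketch using the openstack client (the --wait option is an assumption):
(overcloud) $ openstack server migrate <vm> --wait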
Replace <vm> with the ID of the VM to migrate. Specify the --block-migration flag if migrating
a locally stored volume.
2. Wait for migration to complete. See Check Migration Status to check the status of the
migration.
When you have finished migrating virtual machines, proceed to the Post-migration Procedures.
Migration involves numerous state transitions before migration is complete. During a healthy migration,
the migration state typically transitions as follows:
1. Queued: nova accepted the request to migrate a virtual machine and migration is pending.
2. Preparing: nova is preparing to migrate the virtual machine.
3. Running: nova is migrating the virtual machine.
4. Post-migrating: nova has built the virtual machine on the destination Compute node and is
freeing up resources on the source Compute node.
5. Completed: nova has completed migrating the virtual machine and finished freeing up
resources on the source Compute node.
Procedure
Replace [vm] with the virtual machine name or ID. Replace [migration] with the ID of the
migration.
Sometimes virtual machine migration can take a long time or encounter errors. See Section 11.8,
“Troubleshooting Migration” for details.
$ source ~/overcloudrc
$ openstack compute service set [source] nova-compute --enable
Replace [source] with the host name of the source Compute node.
NOTE
You can live migrate instances that use CPU pinning or huge pages, or that have a NUMA
topology, only when NovaEnableNUMALiveMigration is set to "True" in the Compute
configuration. This parameter is enabled by default only when the Compute host is
configured for an OVS-DPDK deployment.
$ source ~/overcloudrc
$ openstack compute service set [dest] nova-compute --enable
Replace [dest] with the host name of the destination Compute node.
When live migration enters a failed state, it is typically followed by an error state. The following common
issues can cause a failed state:
5. The virtual machine on the source Compute node gets deleted before migration to the
destination Compute node is complete.
Replace [vm] with the virtual machine name or ID, and [migration] with the ID of the migration.
Replace [vm] with the virtual machine name or ID. Replace [migration] with the ID of the
migration.
For example, if the source Compute node maps NIC 1 to NUMA node 0 but the destination Compute
node maps NIC 1 to NUMA node 5, after migration the virtual machine might route network traffic from a
first CPU across the bus to a second CPU on NUMA node 5 to reach NIC 1, resulting in expected
behavior but degraded performance. Similarly, if NUMA node 0 on the source
Compute node has sufficient available CPU and RAM, but NUMA node 0 on the destination Compute
node already has virtual machines using some of the resources, the virtual machine might run properly
but suffer performance degradation. See Section 11.2, “Migration constraints” for additional details.
CHAPTER 12. CREATING VIRTUALIZED CONTROL PLANES
This chapter explains how to virtualize your Red Hat OpenStack Platform (RHOSP) control plane for the
overcloud using RHOSP and Red Hat Virtualization.
NOTE
The following architecture diagram illustrates how to deploy a virtualized control plane. You distribute
the overcloud with the Controller nodes running on VMs on Red Hat Virtualization. You run the Compute
and storage nodes on bare metal.
NOTE
The OpenStack Bare Metal Provisioning (ironic) service includes a driver for Red Hat Virtualization VMs,
staging-ovirt. You can use this driver to manage virtual nodes within a Red Hat Virtualization
environment. You can also use it to deploy overcloud controllers as virtual machines within a Red Hat
Virtualization environment.
Benefits
Virtualizing the overcloud control plane has a number of benefits that prevent downtime and improve
performance.
You can allocate resources to the virtualized controllers dynamically, using hot add and hot
remove to scale CPU and memory as required. This prevents downtime and facilitates increased
capacity as the platform grows.
You can deploy additional infrastructure VMs on the same Red Hat Virtualization cluster. This
minimizes the server footprint in the data center and maximizes the efficiency of the physical
nodes.
You can use composable roles to define more complex RHOSP control planes. This allows you
to allocate resources to specific components of the control plane.
You can maintain systems without service interruption by using the VM live migration feature.
You can integrate third-party or custom tools supported by Red Hat Virtualization.
Limitations
Virtualized control planes limit the types of configurations that you can use.
Virtualized Ceph Storage nodes and Compute nodes are not supported.
Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel.
Red Hat Virtualization does not support N_Port ID Virtualization (NPIV). Therefore, Block
Storage (cinder) drivers that need to map LUNs from a storage back end to the controllers,
where cinder-volume runs by default, do not work. You need to create a dedicated role for
cinder-volume instead of including it on the virtualized controllers. For more information, see
Composable Services and Custom Roles.
Prerequisites
You must have a 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.
You must have the following software already installed and configured:
Red Hat Virtualization. For more information, see Red Hat Virtualization Documentation
Suite.
Red Hat OpenStack Platform (RHOSP). For more information, see Director Installation and
Usage.
You must have the virtualized Controller nodes prepared in advance. These requirements are
the same as for bare-metal Controller nodes. For more information, see Controller Node
Requirements.
You must have the bare-metal nodes being used as overcloud Compute nodes, and the storage
nodes, prepared in advance. For hardware specifications, see the Compute Node Requirements
and Ceph Storage Node Requirements . To deploy overcloud Compute nodes on POWER
(ppc64le) hardware, see Red Hat OpenStack Platform for POWER .
You must have the logical networks created, and your cluster or host networks ready to use
network isolation with multiple networks. For more information, see Logical Networks.
You must have the internal BIOS clock of each node set to UTC. This prevents issues with
future-dated file timestamps when hwclock synchronizes the BIOS clock before applying the
timezone offset.
TIP
To avoid performance bottlenecks, use composable roles and keep the data plane services on the bare-
metal Controller nodes.
Procedure
1. Enable the staging-ovirt driver in the director undercloud by adding the driver to
enabled_hardware_types in the undercloud.conf configuration file:
enabled_hardware_types = ipmi,redfish,ilo,idrac,staging-ovirt
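The intermediate steps are not shown here. Typically you rerun the undercloud installation so that the new driver takes effect, and then verify the enabled drivers; a minimal sketch:
(undercloud) $ openstack undercloud install
(undercloud) $ openstack baremetal driver list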
If the undercloud is set up correctly, the command returns the following result:
+---------------------+-----------------------+
| Supported driver(s) | Active host(s) |
+---------------------+-----------------------+
| idrac | localhost.localdomain |
| ilo | localhost.localdomain |
| ipmi | localhost.localdomain |
| pxe_drac | localhost.localdomain |
| pxe_ilo | localhost.localdomain |
| pxe_ipmitool | localhost.localdomain |
| redfish | localhost.localdomain |
| staging-ovirt | localhost.localdomain |
+---------------------+-----------------------+
4. Update the overcloud node definition template, for instance, nodes.json, to register the VMs
hosted on Red Hat Virtualization with director. For more information, see Registering Nodes for
the Overcloud. Use the following key:value pairs to define aspects of the VMs to deploy with
your overcloud:
For example:
{
"nodes": [
{
"name":"osp13-controller-0",
"pm_type":"staging-ovirt",
"mac":[
"00:1a:4a:16:01:56"
],
"cpu":"2",
"memory":"4096",
"disk":"40",
"arch":"x86_64",
"pm_user":"admin@internal",
"pm_password":"password",
"pm_addr":"rhvm.example.com",
"pm_vm_name":"{vernum}-controller-0",
"capabilities": "profile:control,boot_option:local"
}
]
}
5. Configure an affinity group in Red Hat Virtualization with "soft negative affinity" to ensure high
availability is implemented for your controller VMs. For more information, see Affinity Groups.
6. Open the Red Hat Virtualization Manager interface, and use it to map each VLAN to a separate
logical vNIC in the controller VMs. For more information, see Logical Networks.
7. Set no_filter in the vNIC of the director and controller VMs, and restart the VMs, to disable the
MAC spoofing filter on the networks attached to the controller VMs. For more information, see
Virtual Network Interface Cards .
8. Deploy the overcloud to include the new virtualized controller nodes in your environment:
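The deployment command is not shown here. A hedged sketch; include the same environment files and options that you use for the rest of your overcloud configuration:
(undercloud) $ openstack overcloud deploy --templates -e [YOUR ENVIRONMENT FILES]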
CHAPTER 13. SCALING OVERCLOUD NODES
WARNING
Do not use openstack server delete to remove nodes from the overcloud. Read
the procedures defined in this section to properly remove and replace nodes.
There might be situations where you need to add or remove nodes after the creation of the overcloud.
For example, you might need to add more Compute nodes to the overcloud. This situation requires
updating the overcloud.
Use the following table to determine support for scaling each node type:
Compute | Scale up: Y | Scale down: Y
IMPORTANT
Ensure to leave at least 10 GB free space before scaling the overcloud. This free space
accommodates image conversion and caching during the node provisioning process.
Procedure
1. Create a new JSON file (newnodes.json) containing the new node details to register:
{
"nodes":[
{
"mac":[
"dd:dd:dd:dd:dd:dd"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.207"
},
{
"mac":[
"ee:ee:ee:ee:ee:ee"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.208"
}
]
}
$ source ~/stackrc
(undercloud) $ openstack overcloud node import newnodes.json
3. After registering the new nodes, run the following commands to launch the introspection
process for each new node:
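A minimal sketch of these commands, which set each new node to the manageable state, introspect it, and return it to the available state:
(undercloud) $ openstack baremetal node manage [NODE UUID]
(undercloud) $ openstack overcloud node introspect [NODE UUID] --provide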
This process detects and benchmarks the hardware properties of the nodes.
Procedure
1. Tag each new node with the role you want. For example, to tag a node with the Compute role,
run the following command:
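One possible form of the tagging command, assigning the compute profile while keeping the boot option:
(undercloud) $ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' [NODE UUID]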
2. Scaling the overcloud requires that you edit the environment file that contains your node
counts and re-deploy the overcloud. For example, to scale your overcloud to 5 Compute nodes,
edit the ComputeCount parameter:
parameter_defaults:
...
ComputeCount: 5
...
3. Rerun the deployment command with the updated file, which in this example is called node-
info.yaml:
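A hedged sketch of the redeployment; the environment files other than node-info.yaml are placeholders for your own files:
(undercloud) $ openstack overcloud deploy --templates -e node-info.yaml [OTHER ENVIRONMENT FILES AND OPTIONS]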
Ensure you include all environment files and options from your initial overcloud creation. This
includes the same scale parameters for non-Compute nodes.
IMPORTANT
Before removing a Compute node from the overcloud, migrate the workload from the
node to other Compute nodes.
Procedure
$ source ~/overcloudrc
2. Disable the Compute service on the outgoing node on the overcloud to prevent the node from
scheduling new instances:
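A minimal sketch of the disable command; replace [hostname] with the host name of the outgoing Compute node:
(overcloud) $ openstack compute service set [hostname] nova-compute --disable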
4. When you remove overcloud nodes, you must update the overcloud stack in the director using
the local template files. First, identify the UUID of the overcloud stack:
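For example, a stack listing run from the undercloud displays the overcloud stack and its UUID:
$ source ~/stackrc
(undercloud) $ openstack stack list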
6. Run the following command to delete the nodes from the stack and update the plan accordingly:
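A hedged sketch of the delete command; the stack UUID, environment files, and node UUIDs are placeholders:
(undercloud) $ openstack overcloud node delete --stack [STACK_UUID] --templates -e [ENVIRONMENT_FILES] [NODE1_UUID] [NODE2_UUID]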
IMPORTANT
If you passed any extra environment files when you created the overcloud, pass
them here again using the -e or --environment-file option to avoid making
undesired manual changes to the overcloud.
7. Ensure the openstack overcloud node delete command runs to completion before you
continue. Use the openstack stack list command and check the overcloud stack has reached
an UPDATE_COMPLETE status.
You are now free to remove the node from the overcloud and re-provision it for other purposes.
Procedure
1. Increase the Object Storage count using the ObjectStorageCount parameter. This parameter is
usually located in node-info.yaml, which is the environment file containing your node counts:
parameter_defaults:
ObjectStorageCount: 4
The ObjectStorageCount parameter defines the quantity of Object Storage nodes in your
environment. In this situation, we scale from 3 to 4 nodes.
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates -e node-info.yaml
ENVIRONMENT_FILES
3. After the deployment command completes, the overcloud contains an additional Object Storage
node.
4. Replicate data to the new node. Before removing a node (in this case, overcloud-
objectstorage-1), wait for a replication pass to finish on the new node. Check the replication
pass progress in the /var/log/swift/swift.log file. When the pass finishes, the Object Storage
service should log entries similar to the following example:
5. To remove the old node from the ring, reduce the ObjectStorageCount parameter to omit
the old node. In this case, reduce it to 3:
parameter_defaults:
ObjectStorageCount: 3
6. Create a new environment file named remove-object-node.yaml. This file identifies and
removes the specified Object Storage node. The following content specifies the removal of
overcloud-objectstorage-1:
parameter_defaults:
ObjectStorageRemovalPolicies:
[{'resource_list': ['1']}]
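The redeployment that applies this policy is not shown here. A hedged sketch that mirrors the earlier deployment command and adds the removal file last:
(undercloud) $ openstack overcloud deploy --templates -e node-info.yaml ENVIRONMENT_FILES -e remove-object-node.yaml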
The director deletes the Object Storage node from the overcloud and updates the rest of the nodes on
the overcloud to accommodate the node removal.
parameter_defaults:
DeploymentServerBlacklist:
- overcloud-compute-0
- overcloud-compute-1
- overcloud-compute-2
NOTE
The server names in the parameter value are the names according to OpenStack
Orchestration (heat), not the actual server hostnames.
Include this environment file with your openstack overcloud deploy command:
$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
-e server-blacklist.yaml \
[OTHER OPTIONS]
Heat blacklists any servers in the list from receiving updated Heat deployments. After the stack
operation completes, any blacklisted servers remain unchanged. You can also power off or stop the os-
collect-config agents during the operation.
WARNING
Exercise caution when blacklisting nodes. Only use a blacklist if you fully
understand how to apply the requested change with a blacklist in effect. It is
possible to create a hung stack or configure the overcloud incorrectly using
the blacklist feature. For example, if a cluster configuration change applies
to all members of a Pacemaker cluster, blacklisting a Pacemaker cluster
member during this change can cause the cluster to fail.
When adding servers to the blacklist, further changes to those nodes are
not supported until the server is removed from the blacklist. This includes
updates, upgrades, scale up, scale down, and node replacement.
To clear the blacklist for subsequent stack operations, edit the DeploymentServerBlacklist to use an
empty array:
parameter_defaults:
DeploymentServerBlacklist: []
CHAPTER 14. REPLACING CONTROLLER NODES
Complete the steps in this section to replace a Controller node. The Controller node replacement
process involves running the openstack overcloud deploy command to update the overcloud with a
request to replace a Controller node.
IMPORTANT
The following procedure applies only to high availability environments. Do not use this
procedure if using only one Controller node.
Procedure
$ source stackrc
(undercloud) $ openstack stack list --nested
The overcloud stack and its subsequent child stacks should have either a CREATE_COMPLETE or UPDATE_COMPLETE status.
3. Check that your undercloud contains 10 GB free storage to accommodate for image caching
and conversion when provisioning the new node.
4. Check the status of Pacemaker on the running Controller nodes. For example, if 192.168.0.47 is
the IP address of a running Controller node, use the following command to get the Pacemaker
status:
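A minimal sketch of this check, run from the undercloud:
(undercloud) $ ssh [email protected] 'sudo pcs status'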
The output should show all services running on the existing nodes and stopped on the failed
node.
5. Check the following parameters on each node of the overcloud MariaDB cluster:
wsrep_local_state_comment: Synced
wsrep_cluster_size: 2
Use the following command to check these parameters on each running Controller node. In
this example, the Controller node IP addresses are 192.168.0.47 and 192.168.0.46:
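The check itself is not shown here. A hedged sketch for one Controller node, assuming the Galera service runs in a container whose name contains galera-bundle and that the local database client inside the container can authenticate; repeat for each running Controller node:
$ ssh [email protected] "sudo docker exec \$(sudo docker ps -q -f name=galera-bundle) mysql -e \"SHOW STATUS LIKE 'wsrep_%';\""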
6. Check the RabbitMQ status. For example, if 192.168.0.47 is the IP address of a running
Controller node, use the following command to get the status:
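A hedged sketch, assuming the RabbitMQ service runs in a container whose name contains rabbitmq-bundle:
$ ssh [email protected] "sudo docker exec \$(sudo docker ps -q -f name=rabbitmq-bundle) rabbitmqctl cluster_status"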
The running_nodes key should only show the two available nodes and not the failed node.
7. If you are using Open vSwitch (OVS) and replaced Controller nodes in the past without
restarting the OVS agents, then restart the agents on the compute nodes before replacing this
Controller. Restarting the OVS agents ensures that they have a full complement of RabbitMQ
connections.
Run the following command to restart the OVS agent:
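A hedged sketch, run on each Compute node and assuming the agent runs in a container named neutron_ovs_agent as in a default containerized deployment:
$ sudo docker restart neutron_ovs_agent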
8. Disable fencing, if enabled. For example, if 192.168.0.47 is the IP address of a running Controller
node, use the following command to disable fencing:
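A minimal sketch of the fencing change, run from the undercloud:
(undercloud) $ ssh [email protected] "sudo pcs property set stonith-enabled=false"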
NOTE
Adding a new Controller to the cluster also adds a new Ceph monitor daemon
automatically.
Procedure
# ssh [email protected]
# sudo su -
NOTE
If the controller is unreachable, skip steps 1 and 2 and continue the procedure at
step 3 on any working controller node.
For example:
4. On the Ceph monitor node, remove the monitor entry from /etc/ceph/ceph.conf. For example,
if you remove controller-1, then remove the IP and hostname for controller-1.
Before:
After:
NOTE
The director updates the ceph.conf file on the relevant overcloud nodes when
you add the replacement controller node. Normally, director manages this
configuration file exclusively and you should not edit the file manually. However,
you can edit the file manually to ensure consistency in case the other nodes
restart before you add the new node.
6. Optionally, archive the monitor data and save the archive on another server:
# mv /var/lib/ceph/mon/<cluster>-<daemon_id> /var/lib/ceph/mon/removed-<cluster>-
<daemon_id>
Procedure
2. If the old node is still reachable, log in to one of the remaining nodes and stop pacemaker on the
old node. For this example, stop pacemaker on overcloud-controller-1:
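A hedged sketch, run from the undercloud against one of the remaining Controller nodes (192.168.0.47 in this example):
(undercloud) $ ssh [email protected] "sudo pcs cluster stop overcloud-controller-1"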
3. After stopping Pacemaker on the old node (i.e., it is shown as Stopped in pcs status), delete
the old node from the corosync configuration on each node and restart Corosync. For this
example, the following command logs in to overcloud-controller-0 and overcloud-controller-2
and removes the node:
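A hedged sketch that loops over the remaining Controller node IP addresses used in this example:
(undercloud) $ for IP in 192.168.0.47 192.168.0.46 ; do ssh heat-admin@$IP "sudo pcs cluster localnode remove overcloud-controller-1 ; sudo pcs cluster reload corosync" ; done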
4. Log in to one of the remaining nodes and delete the node from the cluster with the crm_node
command:
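One possible form of the removal command:
(undercloud) $ ssh [email protected] "sudo crm_node -R overcloud-controller-1 --force"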
5. The overcloud database must continue to run during the replacement procedure. To ensure
Pacemaker does not stop Galera during this procedure, select a running Controller node and run
the following command on the undercloud using the Controller node’s IP address:
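A hedged sketch, assuming the Galera resource is named galera-bundle as in a default containerized deployment:
(undercloud) $ ssh [email protected] "sudo pcs resource unmanage galera-bundle"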
If the node is a virtual node, identify the node that contains the failed disk and restore the disk
from a backup. Ensure that the MAC address of the NIC used for PXE boot on the failed server
remains the same after disk replacement.
If the node is a bare metal node, replace the disk, prepare the new disk with your overcloud
configuration, and perform a node introspection on the new hardware.
Complete the following example steps to replace the overcloud-controller-1 node with the
overcloud-controller-3 node. The overcloud-controller-3 node has the ID 75b25e9a-948d-424a-9b3b-
f0ef70a6eacf.
IMPORTANT
To replace the node with an existing ironic node, enable maintenance mode on the
outgoing node so that the director does not automatically reprovision the node.
Procedure
$ source ~/stackrc
$ NODE=$(openstack baremetal node list -f csv --quote minimal | grep $INSTANCE | cut -f1
-d,)
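The step that defines $INSTANCE is not shown above; it is assumed to hold the Nova server ID of the outgoing controller, for example from openstack server list --name overcloud-controller-1 -f value -c ID. With $NODE identified, the outgoing node is then typically placed into maintenance mode:
$ openstack baremetal node maintenance set $NODE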
5. If the Controller node is a virtual node, run the following command on the Controller host to
replace the virtual disk from a backup:
$ cp <VIRTUAL_DISK_BACKUP> /var/lib/libvirt/images/<VIRTUAL_DISK>
Replace <VIRTUAL_DISK_BACKUP> with the path to the backup of the failed virtual disk, and
replace <VIRTUAL_DISK> with the name of the virtual disk that you want to replace.
If you do not have a backup of the outgoing node, you must use a new virtualized node.
If the Controller node is a bare metal node, complete the following steps to replace the disk with
a new bare metal disk:
b. Prepare the node with the same configuration as the failed node.
Procedure
parameters:
ControllerRemovalPolicies:
[{'resource_list': ['1']}]
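The redeployment that triggers the replacement is not shown here. A hedged sketch, assuming the policy above is saved in a file such as ~/templates/remove-controller.yaml; include all environment files and options from your original overcloud creation:
(undercloud) $ openstack overcloud deploy --templates -e ~/templates/remove-controller.yaml [OTHER OPTIONS]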
3. The director removes the old node, creates a new one, and updates the overcloud stack. You
can check the status of the overcloud stack with the following command:
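For example, the nested stack listing used earlier in this chapter also works here:
(undercloud) $ openstack stack list --nested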
4. Once the deployment command completes, the director shows the old node replaced with the
new node:
+------------------------+-----------------------+
| overcloud-compute-0 | ctlplane=192.168.0.44 |
| overcloud-controller-0 | ctlplane=192.168.0.47 |
| overcloud-controller-2 | ctlplane=192.168.0.46 |
| overcloud-controller-3 | ctlplane=192.168.0.48 |
+------------------------+-----------------------+
Procedure
2. Enable Pacemaker management of the Galera cluster and start Galera on the new node:
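A hedged sketch, run on one of the Controller nodes and assuming the Galera resource is named galera-bundle:
$ sudo pcs resource manage galera-bundle
$ sudo pcs resource cleanup galera-bundle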
3. Perform a final status check to make sure services are running correctly:
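A minimal check:
$ sudo pcs status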
NOTE
If any services have failed, use the pcs resource refresh command to resolve
and restart the failed services.
5. Source the overcloudrc file so that you can interact with the overcloud:
$ source ~/overcloudrc
8. If necessary, add your hosting router to the L3 agent on the new node. Use the following
example command to add a hosting router r1 to the L3 agent using the UUID 2d1c1dc1-d9d4-
4fa9-b2c8-f29cd1a649d4:
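A hedged sketch; if your openstack client does not support this subcommand, the equivalent neutron l3-agent-router-add command can be used instead:
(overcloud) $ openstack network agent add router --l3 2d1c1dc1-d9d4-4fa9-b2c8-f29cd1a649d4 r1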
9. Compute services for the removed node still exist in the overcloud and require removal. Check
the compute services for the removed node:
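A hedged sketch; the host name is illustrative, and the second command removes each service ID that the first command reports for the removed node:
(overcloud) $ openstack compute service list --host overcloud-controller-1.localdomain
(overcloud) $ openstack compute service delete [SERVICE_ID]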
11. If you are using Open Virtual Switch (OVS), and the IP address for the Controller node has
changed, then you must restart the OVS agent on all compute nodes:
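As before, a hedged sketch run on each Compute node; the container name neutron_ovs_agent is an assumption based on a default containerized deployment:
$ sudo docker restart neutron_ovs_agent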
CHAPTER 15. REBOOTING NODES
If rebooting all nodes in one role, it is advisable to reboot each node individually. This helps
retain services for that role during the reboot.
If rebooting all nodes in your OpenStack Platform environment, use the following list to guide
the reboot order:
Procedure
$ sudo reboot
Procedure
1. Select a node to reboot. Log into it and stop the cluster before rebooting:
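A minimal sketch; pcs cluster stop with no arguments stops the cluster services on the local node only:
$ sudo pcs cluster stop
$ sudo reboot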
a. If the node uses Pacemaker services, check that the node has rejoined the cluster:
b. If the node uses Systemd services, check that all services are enabled:
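Hedged examples of both checks, run on the rebooted node:
$ sudo pcs status
$ sudo systemctl status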
Procedure
$ sudo reboot
3. Wait until the node boots and rejoins the MON cluster.
Procedure
1. Log in to a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing
temporarily:
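A minimal sketch of the rebalancing flags:
$ sudo ceph osd set noout
$ sudo ceph osd set norebalance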
2. Select the first Ceph Storage node to reboot and log into it.
$ sudo reboot
$ sudo ceph -s
6. Log out of the node, reboot the next node, and check its status. Repeat this process until you
have rebooted all Ceph storage nodes.
7. When complete, log into a Ceph MON or Controller node and enable cluster rebalancing again:
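For example:
$ sudo ceph osd unset noout
$ sudo ceph osd unset norebalance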
Select a Compute node to reboot and disable it so that it does not provision new instances
Procedure
$ source ~/stackrc
(undercloud) $ openstack server list --name compute
$ source ~/overcloudrc
(overcloud) $ openstack compute service list
(overcloud) $ openstack compute service set [hostname] nova-compute --disable
NOTE
The nova command might cause some deprecation warnings, which are safe
to ignore.
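The migration commands are not shown here. A hedged sketch that live migrates one instance at a time; omit the target host to let the scheduler choose one:
(overcloud) $ openstack server list --host [hostname] --all-projects
(overcloud) $ openstack server migrate [instance] --live [target-hostname] --wait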
8. Continue migrating instances until none remain on the chosen Compute Node.
$ source ~/overcloudrc
(overcloud) $ openstack compute service set [hostname] nova-compute --enable
CHAPTER 16. TROUBLESHOOTING DIRECTOR ISSUES
The /var/log directory contains logs for many common OpenStack Platform components as well
as logs for standard Red Hat Enterprise Linux applications.
The journald service provides logs for various components. Note that ironic uses two units:
openstack-ironic-api and openstack-ironic-conductor. Likewise, ironic-inspector uses two
units as well: openstack-ironic-inspector and openstack-ironic-inspector-dnsmasq. Use
both units for each respective component. For example:
$ source ~/stackrc
(undercloud) $ sudo journalctl -u openstack-ironic-inspector -u openstack-ironic-inspector-
dnsmasq
$ source ~/stackrc
(undercloud) $ openstack baremetal port list --node [NODE UUID]
Here are some common scenarios where environment misconfiguration occurs and advice on how to
diagnose and resolve them.
Normally the introspection process uses the openstack overcloud node introspect command.
However, if running the introspection directly with ironic-inspector, it might fail to discover nodes in the
AVAILABLE state, which is meant for deployment and not for discovery. Change the node status to the
MANAGEABLE state before discovery:
$ source ~/stackrc
(undercloud) $ openstack baremetal node manage [NODE UUID]
$ source ~/stackrc
(undercloud) $ openstack baremetal introspection abort [NODE UUID]
You can also wait until the process times out. If necessary, change the timeout setting in /etc/ironic-
inspector/inspector.conf to another period in minutes.
1. Provide a temporary password to the openssl passwd -1 command to generate an MD5 hash.
For example:
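A minimal sketch; mytestpassword is a placeholder:
$ openssl passwd -1 mytestpassword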
2. Edit the /httpboot/inspector.ipxe file, find the line starting with kernel, and append the
rootpwd parameter and the MD5 hash. For example:
Alternatively, you can append the sshkey parameter with your public SSH key.
NOTE
Quotation marks are required for both the rootpwd and sshkey parameters.
3. Start the introspection and find the IP address from either the arp command or the DHCP logs:
$ arp
$ sudo journalctl -u openstack-ironic-inspector-dnsmasq
4. SSH as a root user with the temporary password or the SSH key.
$ ssh [email protected]
For example, when running the openstack overcloud deploy command, the OpenStack Workflow
service executes two workflows. The first one uploads the deployment plan:
Workflow Objects
OpenStack Workflow uses the following objects to keep track of the workflow:
Actions
A particular instruction that OpenStack performs once an associated task runs. Examples include
running shell scripts or performing HTTP requests. Some OpenStack components have in-built
actions that OpenStack Workflow uses.
Tasks
Defines the action to run and the result of running the action. These tasks usually have actions or
other workflows associated with them. Once a task completes, the workflow directs to another task,
usually depending on whether the task succeeded or failed.
Workflows
A set of tasks grouped together and executed in a specific order.
Executions
Defines a particular action, task, or workflow running.
$ source ~/stackrc
(undercloud) $ openstack workflow execution list | grep "ERROR"
Get the UUID of the failed workflow execution (for example, dffa96b0-f679-4cd2-a490-
4769a3825262) and view the execution and its output:
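Hedged sketches of these commands, using the example UUID above:
(undercloud) $ openstack workflow execution show dffa96b0-f679-4cd2-a490-4769a3825262
(undercloud) $ openstack workflow execution output show dffa96b0-f679-4cd2-a490-4769a3825262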
This provides information about the failed task in the execution. The openstack workflow execution
show also displays the workflow used for the execution (for example,
tripleo.plan_management.v1.publish_ui_logs_to_swift). You can view the full workflow definition
using the following command:
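For example, using the workflow name shown above:
(undercloud) $ openstack workflow definition show tripleo.plan_management.v1.publish_ui_logs_to_swift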
This is useful for identifying where in the workflow a particular task occurs.
You can also view action executions and their results using a similar command syntax:
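A hedged sketch; the UUID is a placeholder taken from the list output:
(undercloud) $ openstack action execution list
(undercloud) $ openstack action execution show [ACTION_EXECUTION_UUID]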
If an overcloud deployment has failed at any of these levels, use the OpenStack clients and service log
files to diagnose the failed deployment. You can also run the following command to display details of the
failure:
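One possible form of this command:
(undercloud) $ openstack stack failures list overcloud --long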
Understanding historical director deployment commands and arguments can be useful for
troubleshooting and support. You can view this information in /home/stack/.tripleo/history.
16.4.2. Orchestration
In most cases, Heat shows the failed overcloud stack after the overcloud creation fails:
$ source ~/stackrc
(undercloud) $ openstack stack list --nested --property status=FAILED
+-----------------------+------------+--------------------+----------------------+
| id | stack_name | stack_status | creation_time |
+-----------------------+------------+--------------------+----------------------+
| 7e88af95-535c-4a55... | overcloud | CREATE_FAILED | 2015-04-06T17:57:16Z |
+-----------------------+------------+--------------------+----------------------+
If the stack list is empty, this indicates an issue with the initial Heat setup. Check your Heat templates
and configuration options, and check for any error messages that presented after running openstack
overcloud deploy.
$ source ~/stackrc
(undercloud) $ openstack baremetal node list
+----------+------+---------------+-------------+-----------------+-------------+
| UUID | Name | Instance UUID | Power State | Provision State | Maintenance |
+----------+------+---------------+-------------+-----------------+-------------+
| f1e261...| None | None | power off | available | False |
| f0b8c1...| None | None | power off | available | False |
+----------+------+---------------+-------------+-----------------+-------------+
Here are some common issues that arise from the provisioning process.
Review the Provision State and Maintenance columns in the resulting table. Check for the
following:
Provision State is set to manageable. This usually indicates an issue with the registration or
discovery processes. For example, if Maintenance sets itself to True automatically, the
nodes are usually using the wrong power management credentials.
If Provision State is available, then the problem occurred before bare metal deployment has
even started.
If Provision State is active and Power State is power on, the bare metal deployment has
finished successfully. This means that the problem occurred during the post-deployment
configuration step.
If Provision State is wait call-back for a node, the bare metal provisioning process has not yet
finished for this node. Wait until this status changes, otherwise, connect to the virtual console of
the failed node and check the output.
If Provision State is error or deploy failed, then bare metal provisioning has failed for this node.
Check the bare metal node’s details:
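For example:
(undercloud) $ openstack baremetal node show [NODE UUID]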
Look for the last_error field, which contains the error description. If the error message is vague, you can use logs to clarify it:
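A hedged sketch of the log check on the undercloud:
(undercloud) $ sudo journalctl -u openstack-ironic-conductor -u openstack-ironic-api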
If you see wait timeout error and the node Power State is power on, connect to the virtual
console of the failed node and check the output.
List all the resources from the overcloud stack to see which one failed:
$ source ~/stackrc
(undercloud) $ openstack stack resource list overcloud --filter status=FAILED
Check for any information in the resource_status_reason field that can help your diagnosis.
Use the nova command to see the IP addresses of the overcloud nodes.
Log in as the heat-admin user to one of the deployed nodes. For example, if the stack’s resource list
shows the error occurred on a Controller node, log in to a Controller node. The heat-admin user has
sudo access.
Check the os-collect-config log for a possible reason for the failure.
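Hedged sketches of these steps; the node IP address is a placeholder taken from the server listing:
(undercloud) $ openstack server list
$ ssh heat-admin@[NODE IP]
$ sudo journalctl -u os-collect-config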
In some cases, nova fails to deploy the node entirely. This situation is indicated by a failed
OS::Heat::ResourceGroup for one of the overcloud role types. Use nova to see the failure in this case.
The most common error shown will reference the error message No valid host was found. See
Section 16.6, “Troubleshooting "No Valid Host Found" Errors” for details on troubleshooting this error. In
other cases, look at the following log files for further troubleshooting:
/var/log/nova/*
/var/log/heat/*
/var/log/ironic/*
The post-deployment process for Controller nodes uses five main steps for the deployment. This
includes:
Install nmap:
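For example:
$ sudo yum install nmap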
Use nmap to scan the IP address range for active addresses. This example scans the 192.168.24.0/24
range, replace this with the IP subnet of the Provisioning network (using CIDR bitmask notation):
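A minimal sketch using a ping scan:
$ sudo nmap -sn 192.168.24.0/24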
For example, you should see the IP address(es) of the undercloud, and any other hosts that are present
on the subnet. If any of the active IP addresses conflict with the IP ranges in undercloud.conf, you will
need to either change the IP address ranges or free up the IP addresses before introspecting or
deploying the overcloud nodes.
NoValidHost: No valid host was found. There are not enough hosts available.
This means the nova Scheduler could not find a bare metal node suitable for booting the new instance.
This in turn usually means a mismatch between resources that nova expects to find and resources that
ironic advertised to nova. Check the following in this case:
1. Make sure introspection succeeded. Otherwise, check that each node contains the
required ironic node properties. For each node:
$ source ~/stackrc
(undercloud) $ openstack baremetal node show [NODE UUID]
Check the properties JSON field has valid values for keys cpus, cpu_arch, memory_mb and
local_gb.
2. Check that the nova flavor used does not exceed the ironic node properties above for a
required number of nodes:
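For example:
(undercloud) $ openstack flavor show [FLAVOR NAME]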
3. Check that sufficient nodes are in the available state according to openstack baremetal node
list. Nodes in manageable state usually mean a failed introspection.
4. Check the nodes are not in maintenance mode. Use openstack baremetal node list to check. A
node automatically changing to maintenance mode usually means incorrect power credentials.
Check them and then remove maintenance mode:
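For example:
(undercloud) $ openstack baremetal node maintenance unset [NODE UUID]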
5. If you’re using the Automated Health Check (AHC) tools to perform automatic node tagging,
check that you have enough nodes corresponding to each flavor/profile. Check the
capabilities key in properties field for openstack baremetal node show. For example, a node
tagged for the Compute role should contain profile:compute.
6. It takes some time for node information to propagate from ironic to nova after introspection.
The director’s tool usually accounts for it. However, if you performed some steps manually,
there might be a short period of time when nodes are not available to nova. Use the following
command to check the total resources in your system:
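One possible form of this check:
(undercloud) $ openstack hypervisor stats show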
Scaling Nodes
Removing Nodes
Replacing Nodes
Modifying the stack is similar to the process of creating the stack, in that the director checks the
availability of the requested number of nodes, provisions additional or removes existing nodes, and then
applies the Puppet configuration. Here are some guidelines to follow in situations when modifying the
overcloud stack.
As an initial step, follow the advice set in Section 16.4.4, “Post-Deployment Configuration”. These same
steps can help diagnose problems with updating the overcloud heat stack. In particular, use the
following command to help identify problematic resources:
openstack stack resource list overcloud
List all resources in the overcloud stack and their current states. This helps identify which resource is
causing failures in the stack. You can trace this resource failure to its respective parameters and
configuration in the heat template collection and the Puppet modules.
openstack stack event list overcloud
List all events related to the overcloud stack in chronological order. This includes the initiation,
completion, and failure of all resources in the stack. This helps identify points of resource failure.
The next few sections provide advice to diagnose issues on specific node types.
The Controller nodes use Pacemaker to manage the resources and services in the high availability
cluster. The Pacemaker Configuration System (pcs) command is a tool that manages a Pacemaker
cluster. Run this command on a Controller node in the cluster to perform configuration and monitoring
functions. Here are a few commands to help troubleshoot overcloud services on a high availability cluster:
pcs status
Provides a status overview of the entire cluster including enabled resources, failed resources, and
online nodes.
pcs resource show
Shows a list of resources, and their respective nodes.
pcs resource disable [resource]
Stop a particular resource.
pcs resource enable [resource]
Start a particular resource.
pcs cluster standby [node]
Place a node in standby mode. The node is no longer available in the cluster. This is useful for
performing maintenance on a specific node without affecting the cluster.
pcs cluster unstandby [node]
Remove a node from standby mode. The node becomes available in the cluster again.
Use these Pacemaker commands to identify the faulty component and/or node. After identifying the
component, view the respective component log file in /var/log/.
NOTE
Before running these commands, check that you are logged into an overcloud node and
not running these commands on the undercloud.
Each container retains standard output from its main process. This output acts as a log to help
determine what actually occurs during a container run. For example, to view the log for the keystone
container, use the following command:
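A minimal sketch using the docker CLI that Red Hat OpenStack Platform 13 containers run under:
$ sudo docker logs keystone
To examine a container's low-level configuration, inspect it:
$ sudo docker inspect keystone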
This provides a JSON object containing low-level configuration data. You can pipe the output to the jq
command to parse specific data. For example, to view the container mounts for the keystone container,
run the following command:
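A hedged sketch, assuming the jq utility is installed:
$ sudo docker inspect keystone | jq '.[0].Mounts'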
You can also use the --format option to parse data to a single line, which is useful for running commands
against sets of container data. For example, to recreate the options used to run the keystone container,
use the following inspect command with the --format option:
NOTE
Use these options in conjunction with the docker run command to recreate the container for
troubleshooting purposes:
Replace <COMMAND> with your desired command. For example, each container has a health check
script to verify the service connection. You can run the health check script for keystone with the
following command:
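A hedged sketch; the /openstack/healthcheck path is an assumption based on the standard layout of the container images:
$ sudo docker exec -ti keystone /openstack/healthcheck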
To access the container’s shell, run docker exec using /bin/bash as the command:
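For example:
$ sudo docker exec -ti keystone /bin/bash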
Exporting a container
When a container fails, you might need to investigate the full contents of its file system. In this case, you can
export the full file system of a container as a tar archive. For example, to export the keystone
container’s file system, run the following command:
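For example:
$ sudo docker export -o keystone.tar keystone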
This command creates the keystone.tar archive, which you can extract and explore.
If performing maintenance on the Compute node, migrate the existing instances from the host
to an operational Compute node, then disable the node. See Chapter 11, Migrating Virtual
Machines Between Compute Nodes for more information on node migrations.
The Identity Service (keystone) uses a token-based system for access control against the other
OpenStack services. After a certain period, the database will accumulate a large number of
unused tokens; a default cronjob flushes the token table every day. It is recommended that you
monitor your environment and adjust the token flush interval as needed. For the undercloud,
you can adjust the interval using crontab -u keystone -e. Note that this is a temporary change
and that openstack undercloud update will reset this cronjob back to its default.
Heat stores a copy of all template files in its database’s raw_template table each time you run
openstack overcloud deploy. The raw_template table retains all past templates and grows in
size. To remove unused templates in the raw_template table, create a daily cronjob that clears
unused templates that exist in the database for longer than a day:
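A hedged sketch of such a crontab entry; heat-manage purge_deleted removes database entries older than the given age:
0 4 * * * heat-manage purge_deleted -g days 1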
Sometimes the director might not have enough resources to perform concurrent node
provisioning. The default is 10 nodes at the same time. To reduce the number of concurrent
nodes, set the max_concurrent_builds parameter in /etc/nova/nova.conf to a value less than
10 and restart the nova services:
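A hedged sketch; set the value in the [DEFAULT] section of /etc/nova/nova.conf, for example:
max_concurrent_builds = 5
Then restart the nova services; the exact unit names on your undercloud might differ:
$ sudo systemctl restart openstack-nova-compute openstack-nova-scheduler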
max_connections
Number of simultaneous connections to the database. The recommended value is 4096.
innodb_additional_mem_pool_size
The size in bytes of a memory pool the database uses to store data dictionary information
and other internal data structures. The default is usually 8M and an ideal value is 20M for the
undercloud.
innodb_buffer_pool_size
The size in bytes of the buffer pool, the memory area where the database caches table and
index data. The default is usually 128M and an ideal value is 1000M for the undercloud.
innodb_flush_log_at_trx_commit
Controls the balance between strict ACID compliance for commit operations, and higher
performance that is possible when commit-related I/O operations are rearranged and done
in batches. Set to 1.
innodb_lock_wait_timeout
The length of time in seconds a database transaction waits for a row lock before giving up.
Set to 50.
innodb_max_purge_lag
This variable controls how to delay INSERT, UPDATE, and DELETE operations when purge
operations are lagging. Set to 10000.
innodb_thread_concurrency
The limit of concurrent operating system threads. Ideally, provide at least two threads for
each CPU and disk resource. For example, if using a quad-core CPU and a single disk, use 10
threads.
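These values are typically applied in the MariaDB server configuration, for example in /etc/my.cnf.d/server.cnf; a hedged sketch using the values recommended above:
[mysqld]
max_connections = 4096
innodb_additional_mem_pool_size = 20M
innodb_buffer_pool_size = 1000M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_max_purge_lag = 10000
innodb_thread_concurrency = 10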
Ensure that heat has enough workers to perform an overcloud creation. Usually, this depends on
how many CPUs the undercloud has. To manually set the number of workers, edit the
/etc/heat/heat.conf file, set the num_engine_workers parameter to the number of workers you
need (ideally 4), and restart the heat engine:
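A hedged sketch; set the value in the [DEFAULT] section of /etc/heat/heat.conf and then restart the engine:
num_engine_workers = 4
$ sudo systemctl restart openstack-heat-engine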
"How to collect all required logs for Red Hat Support to investigate an OpenStack issue"
Introspection logs: /var/log/ironic-inspector/ironic-inspector.log
APPENDIX A. SSL/TLS CERTIFICATE CONFIGURATION
NOTE
For overcloud SSL/TLS certificate creation, see "Enabling SSL/TLS on Overcloud Public
Endpoints" in the Advanced Overcloud Customization guide.
The /etc/pki/CA/index.txt file stores records of all signed certificates. Check if this file exists. If it does
not exist, create an empty file:
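For example:
$ sudo touch /etc/pki/CA/index.txt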
The /etc/pki/CA/serial file identifies the next serial number to use for the next certificate to sign. Check
if this file exists. If it does not exist, create a new file with a new starting value:
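For example:
$ echo '1000' | sudo tee /etc/pki/CA/serial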
For example, generate a key and certificate pair to act as the certificate authority:
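A hedged sketch; the file names match those used later in this appendix:
$ openssl genrsa -out ca.key.pem 4096
$ openssl req -key ca.key.pem -new -x509 -days 7300 -extensions v3_ca -out ca.crt.pem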
The openssl req command asks for certain details about your authority. Enter these details.
$ cp /etc/pki/tls/openssl.cnf .
Edit the custom openssl.cnf file and set the SSL parameters to use for the director. Examples of the types of parameters to modify include:
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
[req_distinguished_name]
countryName = Country Name (2 letter code)
countryName_default = AU
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = Queensland
localityName = Locality Name (eg, city)
localityName_default = Brisbane
organizationalUnitName = Organizational Unit Name (eg, section)
organizationalUnitName_default = Red Hat
commonName = Common Name
commonName_default = 192.168.0.1
commonName_max = 64
[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.0.1
DNS.1 = instack.localdomain
DNS.2 = vip.localdomain
DNS.3 = 192.168.0.1
If using a fully qualified domain name to access over SSL/TLS, use the domain name instead.
DNS - A list of domain names for clients to access the director over SSL. Also include the Public
API IP address as a DNS entry at the end of the alt_names section.
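The key and certificate signing request commands are not shown here. A hedged sketch using the customized openssl.cnf file and the file names referenced below:
$ openssl genrsa -out server.key.pem 2048
$ openssl req -config openssl.cnf -key server.key.pem -new -out server.csr.pem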
Make sure to include the SSL/TLS key you created in Section A.4, “Creating an SSL/TLS Key” for the -
key option.
Use the server.csr.pem file to create the SSL/TLS certificate in the next section.
$ sudo openssl ca -config openssl.cnf -extensions v3_req -days 3650 -in server.csr.pem -out
server.crt.pem -cert ca.crt.pem -keyfile ca.key.pem
The configuration file specifying the v3 extensions. Include this as the -config option.
The certificate signing request from Section A.5, “Creating an SSL/TLS Certificate Signing
Request” to generate the certificate and sign it through a certificate authority. Include this as
the -in option.
The certificate authority you created in Section A.2, “Creating a Certificate Authority” , which
signs the certificate. Include this as the -cert option.
The certificate authority private key you created in Section A.2, “Creating a Certificate
Authority”. Include this as the -keyfile option.
This results in a certificate named server.crt.pem. Use this certificate in conjunction with the SSL/TLS
key from Section A.4, “Creating an SSL/TLS Key” to enable SSL/TLS.
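A minimal sketch of combining the certificate and key:
$ cat server.crt.pem server.key.pem > undercloud.pem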
This creates an undercloud.pem file. You specify the location of this file for the
undercloud_service_certificate option in your undercloud.conf file. This file also requires a special
SELinux context so that the HAProxy tool can read it. Use the following example as a guide:
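Hedged example commands for placing the file and applying the SELinux context:
$ sudo mkdir -p /etc/pki/instack-certs
$ sudo cp ~/undercloud.pem /etc/pki/instack-certs/.
$ sudo semanage fcontext -a -t etc_t "/etc/pki/instack-certs(/.*)?"
$ sudo restorecon -R /etc/pki/instack-certs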
undercloud_service_certificate = /etc/pki/instack-certs/undercloud.pem
In addition, make sure to add your certificate authority from Section A.2, “Creating a Certificate
Authority” to the undercloud’s list of trusted Certificate Authorities so that different services within the
undercloud have access to the certificate authority:
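A hedged sketch:
$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract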
Continue installing the undercloud as per the instructions in Section 4.8, “Configuring the director” .
APPENDIX B. POWER MANAGEMENT DRIVERS
B.1. REDFISH
A standard RESTful API for IT infrastructure developed by the Distributed Management Task Force
(DMTF)
pm_type
Set this option to redfish.
pm_user; pm_password
The Redfish username and password.
pm_addr
The IP address of the Redfish controller.
pm_system_id
The canonical path to the system resource. This path should include the root service, version, and
the path/unique ID for the system. For example: /redfish/v1/Systems/CX34R87.
redfish_verify_ca
If the Redfish service in your baseboard management controller (BMC) is not configured to use a
valid TLS certificate signed by a recognized certificate authority (CA), the Redfish client in ironic fails
to connect to the BMC. Set the redfish_verify_ca option to false to mute the error. However, be
aware that disabling BMC authentication compromises the access security of your BMC.
pm_type
Set this option to idrac.
pm_user; pm_password
The DRAC username and password.
pm_addr
The IP address of the DRAC host.
pm_type
Set this option to ilo.
pm_user; pm_password
The iLO username and password.
pm_addr
The IP address of the iLO interface.
The director also requires an additional set of utilities for iLo. Install the python-proliantutils
package and restart the openstack-ironic-conductor service:
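For example:
$ sudo yum install python-proliantutils
$ sudo systemctl restart openstack-ironic-conductor.service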
HP nodes must have a minimum iLO firmware version of 1.85 (May 13 2015) for successful
introspection. The director has been successfully tested with nodes using this iLO firmware
version.
pm_type
Set this option to cisco-ucs-managed.
pm_user; pm_password
The UCS username and password.
pm_addr
The IP address of the UCS interface.
pm_service_profile
The UCS service profile to use. Usually takes the format of org-root/ls-[service_profile_name]. For
example:
"pm_service_profile": "org-root/ls-Nova-1"
The director also requires an additional set of utilities for UCS. Install the python-UcsSdk
package and restart the openstack-ironic-conductor service:
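For example:
$ sudo yum install python-UcsSdk
$ sudo systemctl restart openstack-ironic-conductor.service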
Fujitsu’s iRMC is a Baseboard Management Controller (BMC) with integrated LAN connection and
extended functionality. This driver focuses on the power management for bare metal systems
connected to the iRMC.
pm_type
Set this option to irmc.
pm_user; pm_password
The username and password for the iRMC interface.
pm_addr
The IP address of the iRMC interface.
pm_port (Optional)
The port to use for iRMC operations. The default is 443.
pm_auth_method (Optional)
The authentication method for iRMC operations. Use either basic or digest. The default is basic.
pm_client_timeout (Optional)
Timeout (in seconds) for iRMC operations. The default is 60 seconds.
pm_sensor_method (Optional)
Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.
The director also requires an additional set of utilities if you enabled SCCI as the sensor
method. Install the python-scciclient package and restart the openstack-ironic-conductor
service:
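For example:
$ sudo yum install python-scciclient
$ sudo systemctl restart openstack-ironic-conductor.service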
IMPORTANT
This option uses virtual machines instead of bare metal nodes. This means it is available
for testing and evaluation purposes only. It is not recommended for Red Hat OpenStack
Platform enterprise environments.
1. On the KVM host, enable the OpenStack Platform repository and install the python-virtualbmc
package:
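A hedged sketch using the OpenStack Platform 13 repository named elsewhere in this guide:
$ sudo subscription-manager repos --enable=rhel-7-server-openstack-13-rpms
$ sudo yum install python-virtualbmc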
2. Create a virtual baseboard management controller (BMC) for each virtual machine using the
vbmc command. For example, to create a BMC for virtual machines named Node01 and
Node02, define the port to access each BMC and set the authentication details, enter the
following commands:
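A hedged sketch that matches the ports and credentials used in the registration example later in this section:
$ vbmc add Node01 --port 6230 --username admin --password 'p455w0rd!'
$ vbmc add Node02 --port 6231 --username admin --password 'p455w0rd!'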
5. Verify that your changes are applied to the firewall settings and the ports are open:
NOTE
Use a different port for each virtual machine. Port numbers lower than 1025
require root privileges in the system.
6. Start each of the BMCs you have created using the following commands:
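For example:
$ vbmc start Node01
$ vbmc start Node02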
NOTE
You must repeat this step after rebooting the KVM host.
7. To verify that you can manage the nodes using ipmitool, display the power status of a remote
node:
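A hedged sketch using the example BMC for Node01:
$ ipmitool -I lanplus -U admin -P 'p455w0rd!' -H 192.168.0.1 -p 6230 power status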
Registering Nodes
Use the following parameters in your /home/stack/instackenv.json node registration file:
pm_type
Set this option to ipmi.
pm_user; pm_password
Specify the IPMI username and password for the node’s virtual BMC device.
pm_addr
Specify the IP address of the KVM host that contains the node.
pm_port
Specify the port to access the specific node on the KVM host.
mac
Specify a list of MAC addresses for the network interfaces on the node. Use only the MAC address
for the Provisioning NIC of each system.
For example:
{
"nodes": [
{
"pm_type": "ipmi",
"mac": [
"aa:aa:aa:aa:aa:aa"
],
"pm_user": "admin",
"pm_password": "p455w0rd!",
"pm_addr": "192.168.0.1",
"pm_port": "6230",
"name": "Node01"
},
{
"pm_type": "ipmi",
"mac": [
"bb:bb:bb:bb:bb:bb"
],
"pm_user": "admin",
"pm_password": "p455w0rd!",
"pm_addr": "192.168.0.1",
"pm_port": "6231",
"name": "Node02"
}
]
}
pm_type
Set this option to staging-ovirt.
pm_user; pm_password
The username and password for your Red Hat Virtualization environment. The username also
includes the authentication provider. For example: admin@internal.
pm_addr
The IP address of the Red Hat Virtualization REST API.
pm_vm_name
The name of the virtual machine to control.
mac
A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the
Provisioning NIC of each system.
enabled_hardware_types = ipmi,staging-ovirt
IMPORTANT
This option is available for testing and evaluation purposes only. It is not recommended
for Red Hat OpenStack Platform enterprise environments.
pm_type
Set this option to manual-management.
This driver does not use any authentication details because it does not control power
management.
In your instackenv.json node inventory file, set the pm_type to manual-management for
the nodes that you want to manage manually.
When performing introspection on nodes, manually power the nodes after running the
openstack overcloud node introspect command.
When performing overcloud deployment, check the node status with the ironic node-list
command. Wait until the node status changes from deploying to deploy wait-callback and
then manually power the nodes.
After the overcloud provisioning process completes, reboot the nodes. To check the
completion of provisioning, check the node status with the ironic node-list command, wait
until the node status changes to active, then manually reboot all overcloud nodes.
APPENDIX C. WHOLE DISK IMAGES
IMPORTANT
The following process uses the director’s image building feature. Red Hat only supports
images built using the guidelines contained in this section. Custom images built outside of
these specifications are not supported.
A security hardened image includes extra security measures necessary for Red Hat OpenStack Platform
deployments where security is an important feature. Some of the recommendations for a secure image
are as follows:
The /tmp directory is mounted on a separate volume or partition and has the rw, nosuid, nodev,
noexec, and relatime flags
The /var, /var/log and the /var/log/audit directories are mounted on separate volumes or
partitions, with the rw, relatime flags
The /home directory is mounted on a separate partition or volume and has the rw, nodev,
relatime flags
Kernel support for USB is disabled in the boot loader configuration by adding nousb
Blacklist insecure modules (usb-storage, cramfs, freevxfs, jffs2, hfs, hfsplus, squashfs, udf,
vfat) and prevent them from being loaded.
Remove any insecure packages (kdump installed by kexec-tools and telnet) from the image as
they are installed by default
3. Customize the image by modifying the partition schema and the size
NOTE
The image building process temporarily registers the image with a Red Hat subscription
and unregisters the system once the image building process completes.
To build a disk image, set Linux environment variables that suit your environment and requirements:
DIB_LOCAL_IMAGE
Sets the local image to use as your basis.
REG_ACTIVATION_KEY
Use an activation key instead of a username and password as part of the registration process.
REG_AUTO_ATTACH
Defines whether or not to automatically attach the most compatible subscription.
REG_BASE_URL
The base URL of the content delivery server to pull packages. The default Customer Portal
Subscription Management process uses https://cdn.redhat.com. If using a Red Hat Satellite 6
server, this parameter should use the base URL of your Satellite server.
REG_ENVIRONMENT
Registers to an environment within an organization.
REG_METHOD
Sets the method of registration. Use portal to register a system to the Red Hat Customer Portal. Use
satellite to register a system with Red Hat Satellite 6.
REG_ORG
The organization to register the images.
REG_POOL_ID
The pool ID of the product subscription information.
REG_PASSWORD
Gives the password for the user account registering the image.
REG_REPOS
A string of repository names separated with commas (no spaces). Each repository in this string is
enabled through subscription-manager.
Use the following repositories for a security hardened whole disk image:
rhel-7-server-rpms
rhel-7-server-extras-rpms
rhel-ha-for-rhel-7-server-rpms
rhel-7-server-optional-rpms
rhel-7-server-openstack-13-rpms
REG_SAT_URL
The base URL of the Satellite server to register Overcloud nodes. Use the Satellite’s HTTP URL and
not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not
https://satellite.example.com.
REG_SERVER_URL
Gives the hostname of the subscription service to use. The default is for the Red Hat Customer
Portal at subscription.rhn.redhat.com. If using a Red Hat Satellite 6 server, this parameter should
use the hostname of your Satellite server.
REG_USER
Gives the user name for the account registering the image.
The following is an example set of commands to export a set of environment variables to temporarily
register a local QCOW2 image to the Red Hat Customer Portal:
$ export DIB_LOCAL_IMAGE=./rhel-server-7.5-x86_64-kvm.qcow2
$ export REG_METHOD=portal
$ export REG_USER="[your username]"
$ export REG_PASSWORD="[your password]"
$ export REG_REPOS="rhel-7-server-rpms \
rhel-7-server-extras-rpms \
rhel-ha-for-rhel-7-server-rpms \
rhel-7-server-optional-rpms \
rhel-7-server-openstack-13-rpms"
To modify the partitioning layout and disk size, perform the following steps:
Modify the global size of the image by updating the DIB_IMAGE_SIZE environment variable.
$ export DIB_BLOCK_DEVICE_CONFIG='<yaml_schema_with_partitions>'
The following YAML structure represents the modified logical volume partitioning layout to
accommodate enough space to pull overcloud container images:
export DIB_BLOCK_DEVICE_CONFIG='''
- local_loop:
name: image0
- partitioning:
base: image0
label: mbr
partitions:
- name: root
flags: [ boot,primary ]
size: 40G
- lvm:
name: lvm
base: [ root ]
pvs:
- name: pv
base: root
options: [ "--force" ]
vgs:
- name: vg
base: [ "pv" ]
options: [ "--force" ]
lvs:
- name: lv_root
base: vg
extents: 23%VG
- name: lv_tmp
base: vg
extents: 4%VG
- name: lv_var
base: vg
extents: 45%VG
- name: lv_log
base: vg
extents: 23%VG
- name: lv_audit
base: vg
extents: 4%VG
- name: lv_home
base: vg
extents: 1%VG
- mkfs:
name: fs_root
base: lv_root
type: xfs
label: "img-rootfs"
mount:
mount_point: /
fstab:
options: "rw,relatime"
fsck-passno: 1
- mkfs:
name: fs_tmp
base: lv_tmp
type: xfs
mount:
mount_point: /tmp
fstab:
options: "rw,nosuid,nodev,noexec,relatime"
fsck-passno: 2
- mkfs:
name: fs_var
base: lv_var
type: xfs
mount:
mount_point: /var
fstab:
options: "rw,relatime"
fsck-passno: 2
- mkfs:
name: fs_log
base: lv_log
type: xfs
mount:
mount_point: /var/log
fstab:
options: "rw,relatime"
fsck-passno: 3
- mkfs:
name: fs_audit
base: lv_audit
type: xfs
mount:
mount_point: /var/log/audit
fstab:
options: "rw,relatime"
fsck-passno: 4
- mkfs:
name: fs_home
base: lv_home
type: xfs
mount:
mount_point: /home
fstab:
options: "rw,nodev,relatime"
fsck-passno: 2
'''
Use this sample YAML content as a basis for your image’s partition schema. Modify the partition sizes
and layout to suit your needs.
NOTE
Define the correct partition sizes for the image before you build it, because you cannot resize the partitions after deployment.
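For example, to give /var more room you could adjust only the extents values in the lvs list. The following fragment is an illustrative sketch, not a prescribed layout: it grows lv_var to 50%VG and shrinks lv_log to 18%VG so that the extents still total 100% of the volume group, while the rest of the schema stays as shown above:
      - name: lv_var
        base: vg
        extents: 50%VG
      - name: lv_log
        base: vg
        extents: 18%VG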
Copy the default security hardened whole disk image configuration file so that you can customize it:
# cp /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images.yaml \
/home/stack/overcloud-hardened-images-custom.yaml
NOTE
Edit the DIB_IMAGE_SIZE in the configuration file to adjust the values as necessary:
...
environment:
  DIB_PYTHON_VERSION: '2'
  DIB_MODPROBE_BLACKLIST: 'usb-storage cramfs freevxfs jffs2 hfs hfsplus squashfs udf vfat bluetooth'
  DIB_BOOTLOADER_DEFAULT_CMDLINE: 'nofb nomodeset vga=normal console=tty0 console=ttyS0,115200 audit=1 nousb'
  DIB_IMAGE_SIZE: '40'
  COMPRESS_IMAGE: '1'
IMPORTANT
When the director deploys the overcloud, it creates a RAW version of the overcloud
image. This means your undercloud must have the necessary free space to accommodate the
RAW image. For example, if you increase the security hardened image size to 40G, you
must have 40G of space available on the undercloud’s hard disk.
IMPORTANT
When the director eventually writes the image to the physical disk, the director creates a
64MB configuration drive primary partition at the end of the disk. When creating your
whole disk image, ensure it is less than the size of the physical disk to accommodate this
extra partition.
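As a hedged sketch, a build invocation that uses the customized configuration might look like the following; the --image-name option and the overcloud-hardened-images-rhel7.yaml file name are assumptions that you should verify against openstack overcloud image build --help and the files shipped in /usr/share/openstack-tripleo-common/image-yaml/:
$ openstack overcloud image build --image-name overcloud-hardened-full \
  --config-file /home/stack/overcloud-hardened-images-custom.yaml \
  --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images-rhel7.yaml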
The overcloud-hardened-full.qcow2 image that you have created contains all the necessary security
features.
1. Rename the newly generated image and move it to your images directory:
# mv overcloud-hardened-full.qcow2 ~/images/overcloud-full.qcow2
If you want to replace an existing image with the security hardened image, use the --update-existing
flag. This overwrites the original overcloud-full image with the new security hardened image that you
generated.
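A minimal sketch of that upload, assuming the image sits in /home/stack/images/ as in the previous step:
$ openstack overcloud image upload --image-path /home/stack/images/ --update-existing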
To change from iPXE to PXE, edit the undercloud.conf file on the director host and set ipxe_enabled
to False:
ipxe_enabled = False
For more information on this process, see the article "Changing from iPXE to PXE in Red Hat OpenStack
Platform director".
To boot your nodes in UEFI mode, set the following parameters in the undercloud.conf file:
ipxe_enabled = True
inspection_enable_uefi = True
Set the boot mode to uefi for each registered node. For example, to add or replace the existing
boot_mode parameters in the capabilities property:
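A hedged sketch using the Bare Metal (ironic) CLI; the node identifier and the profile and boot_option values are placeholders that you must adapt to the capabilities your node already has:
$ openstack baremetal node set --property capabilities='boot_mode:uefi,profile:compute,boot_option:local' <NODE NAME OR UUID>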
NOTE
Check that you have retained the profile and boot_option capabilities with this
command.
In addition, set the boot mode to uefi for each flavor. For example:
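A hedged sketch, using compute as a placeholder flavor name:
$ openstack flavor set --property capabilities:boot_mode='uefi' compute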
APPENDIX E. AUTOMATIC PROFILE TAGGING
The policies can identify and isolate underperforming or unstable nodes from use in the
overcloud.
The policies can define whether to automatically tag nodes into specific profiles.
Description
This is a plain text description of the rule.
Example:
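A minimal illustration, reusing the wording of one of the example rules later in this appendix:
"description": "Assign profile for object storage"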
Conditions
A condition defines an evaluation using the following key-value pattern:
field
Defines the field to evaluate. For field types, see Section E.4, “Automatic Profile Tagging Properties”
op
Defines the operation to use for the evaluation. This includes the following:
eq - Equal to
ne - Not equal to
lt - Less than
le - Less than or equal to
gt - Greater than
ge - Greater than or equal to
invert
Boolean value to define whether to invert the result of the evaluation.
multiple
Defines the evaluation to use if the field yields multiple results. This includes:
any - (Default) The condition passes if any result matches
all - The condition passes only if all results match
first - The evaluation uses only the first result
value
Defines the value for the evaluation. If the field and operation evaluate to this value, the condition
returns true. If not, the condition returns false.
Example:
"conditions": [
{
"field": "local_gb",
"op": "ge",
"value": 1024
}
],
Actions
An action is performed if the condition returns as true. It uses the action key and additional keys
depending on the value of action:
fail - Fails the introspection. Requires a message parameter for the failure message.
set-attribute - Sets an attribute on an Ironic node. Requires a path field, which is the path to an
Ironic attribute (e.g. /driver_info/ipmi_address), and a value to set.
set-capability - Sets a capability on an Ironic node. Requires name and value fields, which are
the name and the value of the new capability, respectively. The existing value for this
capability is replaced. For example, use this to define node profiles.
extend-attribute - The same as set-attribute but treats the existing value as a list and appends
value to it. If the optional unique parameter is set to True, nothing is added if the given value is
already in a list.
Example:
"actions": [
{
"action": "set-capability",
"name": "profile",
"value": "swift-storage"
}
]
The following example shows a complete policy file that combines these conditions and actions:
[
  {
    "description": "Fail introspection for unexpected nodes",
    "conditions": [
      {
        "op": "lt",
        "field": "memory_mb",
        "value": 4096
      }
    ],
    "actions": [
      {
        "action": "fail",
        "message": "Memory too low, expected at least 4 GiB"
      }
    ]
  },
  {
    "description": "Assign profile for object storage",
    "conditions": [
      {
        "op": "ge",
        "field": "local_gb",
        "value": 1024
      }
    ],
    "actions": [
      {
        "action": "set-capability",
        "name": "profile",
        "value": "swift-storage"
      }
    ]
  },
  {
    "description": "Assign possible profiles for compute and controller",
    "conditions": [
      {
        "op": "lt",
        "field": "local_gb",
        "value": 1024
      },
      {
        "op": "ge",
        "field": "local_gb",
        "value": 40
      }
    ],
    "actions": [
      {
        "action": "set-capability",
        "name": "compute_profile",
        "value": "1"
      },
      {
        "action": "set-capability",
        "name": "control_profile",
        "value": "1"
      },
      {
        "action": "set-capability",
        "name": "profile",
        "value": null
      }
    ]
  }
]
The example contains three rules:
Fail introspection if memory is lower than 4096 MiB. You can apply such rules to exclude nodes
that should not become part of your cloud.
Nodes with a hard drive of 1 TiB or larger are assigned the swift-storage profile
unconditionally.
Nodes with a hard drive smaller than 1 TiB but larger than 40 GiB can be either Compute or
Controller nodes. We assign two capabilities (compute_profile and control_profile) so that the
openstack overcloud profiles match command can make the final choice later. For that to
work, we remove the existing profile capability; otherwise it would take priority.
NOTE
Using introspection rules to assign the profile capability always overrides the existing
value. However, [PROFILE]_profile capabilities are ignored for nodes with an existing
profile capability.
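To apply rules such as the example above, import them before running introspection. This is a sketch that assumes you saved the rules as rules.json:
$ openstack baremetal introspection rule import rules.json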
After introspection completes, check the nodes and their assigned profiles:
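For example, a sketch of the usual check:
$ openstack overcloud profiles list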
If you made a mistake in introspection rules, you can delete them all:
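For example, a sketch using the introspection client:
$ openstack baremetal introspection rule purge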
Automatic Profile Tagging evaluates the following node properties for the field attribute for each
condition:
local_gb
The total storage space of the node’s root disk. See Section 6.6, “Defining the root disk” for more
information about setting the root disk for a node.
Set the following hieradata using the hieradata_override undercloud configuration option:
tripleo::haproxy::ssl_cipher_suite
The cipher suite to use in HAProxy.
tripleo::haproxy::ssl_options
The SSL/TLS rules to use in HAProxy.
For example, you might aim to use the following cipher and rules:
Cipher: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
Rules: no-sslv3 no-tls-tickets
Create a hieradata override file, for example haproxy-hiera-overrides.yaml, that sets these values:
tripleo::haproxy::ssl_cipher_suite: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
tripleo::haproxy::ssl_options: no-sslv3 no-tls-tickets
NOTE
Before you run openstack undercloud install, set the hieradata_override parameter in the
undercloud.conf file to point to the hieradata override file that you created:
[DEFAULT]
...
hieradata_override = haproxy-hiera-overrides.yaml
...
For example:
parameter_defaults:
  CephAnsiblePlaybook: /usr/share/ceph-ansible/site.yml.sample
  CephClientKey: AQDLOh1VgEp6FRAAFzT7Zw+Y9V6JJExQAsRnRQ==
  CephClusterFSID: 4b5c8c0a-ff60-454b-a1b4-9747aa737d19
  CephExternalMonHost: 172.16.1.7, 172.16.1.8
Cinder
Glance
Keystone
Neutron
Swift
For more details, see the documentation on composable services and custom roles. The following is
one way to move the listed services from the Controller node to a dedicated ppc64le node:
    - controller
  networks:
    - External
    - InternalApi
    - Storage
    - StorageMgmt
    - Tenant
  # For systems with both IPv4 and IPv6, you may specify a gateway network for
  # each, such as ['ControlPlane', 'External']
  default_route_networks: ['External']
  HostnameFormatDefault: '%stackname%-controllerppc64le-%index%'
  ImageDefault: ppc64le-overcloud-full
  ServicesDefault:
    - OS::TripleO::Services::Aide
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CephClient
    - OS::TripleO::Services::CephExternal
    - OS::TripleO::Services::CertmongerUser
    - OS::TripleO::Services::CinderApi
    - OS::TripleO::Services::CinderBackendDellPs
    - OS::TripleO::Services::CinderBackendDellSc
    - OS::TripleO::Services::CinderBackendDellEMCUnity
    - OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI
    - OS::TripleO::Services::CinderBackendDellEMCVNX
    - OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI
    - OS::TripleO::Services::CinderBackendNetApp
    - OS::TripleO::Services::CinderBackendScaleIO
    - OS::TripleO::Services::CinderBackendVRTSHyperScale
    - OS::TripleO::Services::CinderBackup
    - OS::TripleO::Services::CinderHPELeftHandISCSI
    - OS::TripleO::Services::CinderScheduler
    - OS::TripleO::Services::CinderVolume
    - OS::TripleO::Services::Collectd
    - OS::TripleO::Services::Docker
    - OS::TripleO::Services::Fluentd
    - OS::TripleO::Services::GlanceApi
    - OS::TripleO::Services::GlanceRegistry
    - OS::TripleO::Services::Ipsec
    - OS::TripleO::Services::Iscsid
    - OS::TripleO::Services::Kernel
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::LoginDefs
    - OS::TripleO::Services::MySQLClient
    - OS::TripleO::Services::NeutronApi
    - OS::TripleO::Services::NeutronBgpVpnApi
    - OS::TripleO::Services::NeutronSfcApi
    - OS::TripleO::Services::NeutronCorePlugin
    - OS::TripleO::Services::NeutronDhcpAgent
    - OS::TripleO::Services::NeutronL2gwAgent
    - OS::TripleO::Services::NeutronL2gwApi
    - OS::TripleO::Services::NeutronL3Agent
    - OS::TripleO::Services::NeutronLbaasv2Agent
    - OS::TripleO::Services::NeutronLbaasv2Api
    - OS::TripleO::Services::NeutronLinuxbridgeAgent
    - OS::TripleO::Services::NeutronMetadataAgent
    - OS::TripleO::Services::NeutronML2FujitsuCfab
    - OS::TripleO::Services::NeutronML2FujitsuFossw
    - OS::TripleO::Services::NeutronOvsAgent
    - OS::TripleO::Services::NeutronVppAgent
    - OS::TripleO::Services::Ntp
    - OS::TripleO::Services::ContainersLogrotateCrond
    - OS::TripleO::Services::OpenDaylightOvs
    - OS::TripleO::Services::Rhsm
    - OS::TripleO::Services::RsyslogSidecar
    - OS::TripleO::Services::Securetty
    - OS::TripleO::Services::SensuClient
    - OS::TripleO::Services::SkydiveAgent
    - OS::TripleO::Services::Snmp
    - OS::TripleO::Services::Sshd
    - OS::TripleO::Services::SwiftProxy
    - OS::TripleO::Services::SwiftDispersion
    - OS::TripleO::Services::SwiftRingBuilder
    - OS::TripleO::Services::SwiftStorage
    - OS::TripleO::Services::Timezone
    - OS::TripleO::Services::TripleoFirewall
    - OS::TripleO::Services::TripleoPackages
    - OS::TripleO::Services::Tuned
    - OS::TripleO::Services::Vpp
    - OS::TripleO::Services::OVNController
    - OS::TripleO::Services::OVNMetadataAgent
    - OS::TripleO::Services::Ptp
EO_TEMPLATE
(undercloud) [stack@director roles]$ sed -i~ -e '/OS::TripleO::Services::\(Cinder\|Glance\|Swift\|Keystone\|Neutron\)/d' Controller.yaml
(undercloud) [stack@director roles]$ cd ../
(undercloud) [stack@director templates]$ openstack overcloud roles generate \
  --roles-path roles -o roles_data.yaml \
  Controller Compute ComputePPC64LE ControllerPPC64LE BlockStorage ObjectStorage CephStorage
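The generated roles_data.yaml is then typically passed to the deployment command with the -r option. A minimal sketch that omits the network and environment files a real deployment also needs; the file path is an assumption based on the working directory shown above:
(undercloud) [stack@director templates]$ openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml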