Red Hat OpenStack Services on OpenShift 18.0
Deploying a Distributed Compute Node (DCN) architecture
Edge and storage configuration for Red Hat OpenStack Services on OpenShift
OpenStack Team
[email protected]
Legal Notice
Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
You can deploy Red Hat OpenStack Services on OpenShift (RHOSO) with a distributed compute
node (DCN) architecture for edge sites at remote locations. Each site can have its own Red Hat
Ceph Storage back end for Image service (glance).
Table of Contents
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
CHAPTER 1. UNDERSTANDING DCN
    1.1. REQUIRED SOFTWARE FOR DCN ARCHITECTURE
    1.2. DCN STORAGE
CHAPTER 2. PLANNING A DISTRIBUTED COMPUTE NODE (DCN) DEPLOYMENT
    2.1. CONSIDERATIONS FOR STORAGE ON DCN ARCHITECTURE
    2.2. CONSIDERATIONS FOR NETWORKING ON DCN ARCHITECTURE
    2.3. IP ADDRESS POOL SIZING FOR THE INTERNALAPI NETWORK
CHAPTER 3. INSTALLING AND PREPARING THE OPERATORS
    3.1. PREREQUISITES
    3.2. INSTALLING THE OPENSTACK OPERATOR
CHAPTER 4. DEPLOYING THE DCN CONTROL PLANE
    4.1. CREATING A SPINE-LEAF NETWORK TOPOLOGY FOR DCN
    4.2. PREPARING DCN NETWORKING
    4.3. CREATING THE DCN CONTROL PLANE
    4.4. CEPH SECRET KEY SITE DISTRIBUTION
CHAPTER 5. DEPLOYING A DCN NODE SET
    5.1. CONFIGURING THE DATA PLANE NODE NETWORKS
    5.2. CONFIGURING AND DEPLOYING HYPERCONVERGED RED HAT CEPH STORAGE
    5.3. CONFIGURING THE DCN DATA PLANE
    5.4. EXAMPLE NODE SET RESOURCE
    5.5. UPDATING THE CONTROL PLANE
    5.6. UPDATING THE CONTROL PLANE AFTER DEPLOYING DCN EDGE LOCATIONS
CHAPTER 6. VALIDATING EDGE STORAGE
    6.1. IMPORTING FROM A LOCAL FILE
    6.2. IMPORTING AN IMAGE FROM A WEB SERVER
    6.3. COPYING AN IMAGE TO A NEW SITE
    6.4. CONFIRMING THAT AN INSTANCE AT AN EDGE SITE CAN BOOT WITH IMAGE BASED VOLUMES
    6.5. CONFIRMING IMAGE SNAPSHOTS CAN BE CREATED AND COPIED BETWEEN SITES
    6.6. TESTING A BACKUP AND RESTORE ACROSS EDGE SITES
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat
Jira account, you can create an account at https://issues.redhat.com.
1. Click the following link to open a Create Issue page: Create Issue
2. Complete the Summary and Description fields. In the Description field, include the
documentation URL, chapter or section number, and a detailed description of the issue. Do not
modify any other fields in the form.
3. Click Create.
CHAPTER 1. UNDERSTANDING DCN
NOTE
An upgrade from Red Hat OpenStack Platform (RHOSP) 17.1 to Red Hat OpenStack
Services on OpenShift (RHOSO) 18.0.3 is not currently supported for Distributed
Compute Node (DCN) deployments.
Distributed compute node (DCN) architecture is for edge use cases that require Compute and storage
nodes to be deployed remotely while sharing a centralized control plane. With DCN architecture, you can
position workloads strategically closer to your operational needs for higher performance.
The central location consists of, at a minimum, the RHOSO control plane installed on a Red Hat
OpenShift Container Platform (RHOCP) cluster. Compute nodes can also be deployed at the central
location. The edge locations consist of Compute and optional storage nodes.
DCN architecture consists of multiple availability zones (AZs) to ensure isolated, per-site scheduling of OpenStack resources.
You configure each site with a unique AZ. In this guide, the central site uses az0, the first edge location
uses az1, and so on. You can use any naming convention to ensure that the AZ names are unique per
site.
The hub is the central site with core routers and a datacenter gateway (DC-GW). The hub hosts
the control plane which manages the geographically dispersed sites.
The spokes are the remote edge sites. Each site is defined by using an
OpenStackDataPlaneNodeSet custom resource. Red Hat Ceph Storage (RHCS) is used as the
storage back end. You can deploy RHCS in either a hyperconverged configuration, or as a
standalone storage back end.
When you launch an instance at an edge site, the required image is copied to the local Image service
(glance) store automatically. You can copy images from the central Image store to edge sites by using
glance multistore to save time during instance launch.
You can deploy each edge site with or without storage. The storage you deploy is dedicated to the site you deploy it on.
DCN architecture uses an Image service (glance) pod and a Block Storage service (cinder) pod for each site, deployed at the central location on the Red Hat OpenShift Container Platform (RHOCP) cluster.
For edge sites deployed without storage, additional tooling is available so that you can cache and store
images in the Compute service (nova) cache. Caching Image service images in the Compute service
provides faster boot times for instances by avoiding the process of downloading images across a WAN
link.
CHAPTER 2. PLANNING A DISTRIBUTED COMPUTE NODE (DCN) DEPLOYMENT
2.1. CONSIDERATIONS FOR STORAGE ON DCN ARCHITECTURE
Copying a volume between edge sites. You can work around this by creating an image from the
volume and using the Image service (glance) to copy the image. After the image is copied, you
can create a volume from it.
Instance migration, live or cold, either between edge sites, or from the central location to edge sites. You can only migrate instances within a site boundary. To move an image between sites, you must snapshot the image and use glance image-import, as shown in the example after this list.
You must upload images to the central location before copying them to edge sites. A copy of
each image must exist in the Image service at the central location.
You must use the RBD storage driver for the Image, Compute and Block Storage services.
For each site, including the central location, assign a unique availability zone.
You can migrate an offline volume from an edge site to the central location, or vice versa. You
cannot migrate volumes directly between edge sites.
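For reference, the following is a minimal sketch of copying an image that already exists in the central Image service store to an edge store by using the image import workflow. The image ID and the target store name (az1) are placeholders:

$ glance image-import <image_id> --import-method copy-image --stores az1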
2.2. CONSIDERATIONS FOR NETWORKING ON DCN ARCHITECTURE
Network latency: Balance the latency as measured in round-trip time (RTT), with the expected number of concurrent API operations to maintain acceptable performance.
Network drop outs: If the edge site temporarily loses connection to the central site, then no
control plane API or CLI operations can be executed at the impacted edge site for the duration
of the outage. For example, Compute nodes at the edge site are consequently unable to create
a snapshot of an instance, issue an auth token, or delete an image. General control plane API
and CLI operations remain functional during this outage, and can continue to serve any other
edge sites that have a working connection.
Image type: You must use raw images when deploying a DCN architecture with Ceph storage.
Image sizing:
Compute images: Compute images are downloaded from the central location. These
images are potentially large files that are transferred across all necessary networks from the
central site to the edge site during provisioning.
Instance images: If there is no block storage at the edge, then the Image service images
traverse the WAN during first use. The images are copied or cached locally to the target
edge nodes for all subsequent use. There is no size limit for images. Transfer times vary with
available bandwidth and network latency.
Provider networks are the most common approach for DCN deployments. Note that the Networking service (neutron) does not validate where you can attach available networks. For example, if you use a provider network called "site-a" only in edge site A, the Networking service does not prevent you from attaching "site-a" to an instance at site B, where it does not work.
Site-specific networks: A limitation in DCN networking arises if you use networks that are
specific to a certain site: When you deploy centralized neutron controllers with Compute nodes,
there are no triggers in the Networking service to identify a certain Compute node as a remote
node. Consequently, the Compute nodes receive a list of other Compute nodes and
automatically form tunnels between each other. The tunnels are formed from edge to edge
through the central site. If you use VXLAN or GENEVE, every Compute node at every site
forms a tunnel with every other Compute node, whether or not they are local or remote. This is
not an issue if you are using the same networks everywhere. When you use VLANs, the
Networking service expects that all Compute nodes have the same bridge mappings, and that
all VLANs are available at every site.
If edge servers are not pre-provisioned, you must configure DHCP relay for introspection and
provisioning on routed segments.
Routing must be configured either on the cloud or within the networking infrastructure that
connects each edge site to the hub. You should implement a networking design that allocates
an L3 subnet for each RHOSO cluster network (external, internal API, and so on), unique to each
site.
2.3. IP ADDRESS POOL SIZING FOR THE INTERNALAPI NETWORK
Each DCN site has its own Image service (glance) and Block Storage service (cinder) endpoints on the internalapi network, so the internalapi address pool must provide enough addresses for all sites. For example, when you update the cinderVolumes field in the OpenStackControlPlane custom resource
(CR), add a field called glance_api_servers under customServiceConfig:
cinderVolumes:
az0:
customServiceConfig: |
[DEFAULT]
enabled_backends = az0
glance_api_servers = https://glance-az0-internal.openstack.svc:9292
The Image service endpoint DNS name maps to a load balancer IP address in the internalapi address
pool as indicated by the internal metadata annotations:
[glance_store]
default_backend = ceph
[ceph]
rbd_store_ceph_conf = /etc/ceph/ceph.conf
store_description = "ceph RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
networkAttachments:
- storage
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
The range of addresses in this address pool should be sized according to the number of DCN sites. For
example, the following shows only 10 available addresses in the internalapi network.
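A MetalLB IPAddressPool with a 10-address range looks similar to the following minimal sketch; the namespace and the exact address range are assumptions and must match your environment:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  addresses:
  - 172.17.0.80-172.17.0.89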
After you update the glance section of the OpenStackControlPlane CR, use commands such as the following to confirm that the Glance Operator has created the service endpoint and route.
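$ oc get svc -n openstack | grep glance
$ oc get route -n openstack | grep glance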
glance-c1ca8-az1-edge-api   ClusterIP   None   <none>   9292/TCP   23h
CHAPTER 3. INSTALLING AND PREPARING THE OPERATORS
3.1. PREREQUISITES
An operational RHOCP cluster, version 4.16. For the RHOCP system requirements, see Red Hat
OpenShift Container Platform cluster requirements in Planning your deployment .
You are logged in to the RHOCP cluster as a user with cluster-admin privileges.
3.2. INSTALLING THE OPENSTACK OPERATOR
Procedure
4. Click the OpenStack Operator tile with the Red Hat source label.
7. Click Install to make the Operator available to the openstack-operators namespace. The
Operators are deployed and ready when the Status of the OpenStack Operator is Succeeded.
CHAPTER 4. DEPLOYING THE DCN CONTROL PLANE
Before you deploy the control plane, you must also configure the RHOCP networks. A DCN deployment of RHOSO requires the management of a larger number of subnets. The subnets that you use are specific to your environment. This document uses the following configuration in each of its examples.
NodeNetworkConfigurationPolicy
Use the NodeNetworkConfigurationPolicy CR to configure the interfaces for each isolated network on each worker node in the RHOCP cluster.
NetworkAttachmentDefinition
Use the NetworkAttachmentDefinition CR to attach service pods to the isolated networks, where
needed.
L2Advertisement
Use the L2Advertisement resource to define how the Virtual IPs (VIPs) are announced. A minimal example follows this list.
IPAddressPool
Use the IPAddressPool resource to configure which IPs can be used as VIPs.
NetConfig
Use the NetConfig CR to specify the subnets for the data plane networks.
OpenStackControlPlane
Use the OpenStackControlPlane CR to define and configure the OpenStack services on OpenShift.
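For reference, a minimal L2Advertisement sketch that announces the VIPs from an IPAddressPool named internalapi follows; the pool name and the namespace are assumptions that must match your MetalLB configuration:

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: internalapi
  namespace: metallb-system
spec:
  ipAddressPools:
  - internalapi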
4.2. PREPARING DCN NETWORKING
Prerequisites
Procedure
2. In each nncp CR file, configure the interfaces for each isolated network. Each service interface
must have its own unique address:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
labels:
osp/nncm-config-type: standard
name: worker-0
namespace: openstack
spec:
desiredState:
dns-resolver:
config:
search: []
server:
- 192.168.122.1
interfaces:
- description: internalapi vlan interface
ipv4:
address:
- ip: 172.17.0.10
prefix-length: "24"
dhcp: false
enabled: true
ipv6:
enabled: false
mtu: 1496
name: internalapi
state: up
type: vlan
vlan:
base-iface: enp7s0
id: "20"
- description: storage vlan interface
ipv4:
address:
- ip: 172.18.0.10
prefix-length: "24"
dhcp: false
enabled: true
ipv6:
enabled: false
mtu: 1496
name: storage
state: up
type: vlan
vlan:
base-iface: enp7s0
id: "21"
- description: tenant vlan interface
ipv4:
address:
- ip: 172.19.0.10
prefix-length: "24"
dhcp: false
enabled: true
ipv6:
enabled: false
mtu: 1496
name: tenant
state: up
type: vlan
vlan:
base-iface: enp7s0
id: "22"
- description: ctlplane interface
mtu: 1500
name: enp7s0
state: up
type: ethernet
- bridge:
options:
stp:
enabled: false
port:
- name: enp7s0
vlan: {}
description: linux-bridge over ctlplane interface
ipv4:
address:
- ip: 192.168.122.10
prefix-length: "24"
dhcp: false
enabled: true
ipv6:
enabled: false
mtu: 1500
name: ospbr
state: up
type: linux-bridge
3. Add the route-rules attribute and the route configuration for the networks in each remote location to each nncp CR file:
route-rules:
config: []
routes:
config:
- destination: 192.168.133.0/24
next-hop-address: 192.168.122.1
next-hop-interface: ospbr
table-id: 254
- destination: 192.168.144.0/24
next-hop-address: 192.168.122.1
next-hop-interface: ospbr
table-id: 254
- destination: 172.17.10.0/24
next-hop-address: 172.17.0.1
next-hop-interface: internalapi
table-id: 254
- destination: 172.18.10.0/24
next-hop-address: 172.18.0.1
next-hop-interface: storage
table-id: 254
- destination: 172.19.10.0/24
next-hop-address: 172.19.0.1
next-hop-interface: tenant
table-id: 254
- destination: 172.17.20.0/24
next-hop-address: 172.17.0.1
next-hop-interface: internalapi
table-id: 254
- destination: 172.18.20.0/24
next-hop-address: 172.18.0.1
next-hop-interface: storage
table-id: 254
- destination: 172.19.20.0/24
next-hop-address: 172.19.0.1
next-hop-interface: tenant
table-id: 254
nodeSelector:
kubernetes.io/hostname: worker-0
node-role.kubernetes.io/worker: ""
NOTE
Each service network routes to the same network at each remote location. For
example, the internalapi network (172.17.0.0/24) has a route to the internalapi
network at each remote location (172.17.10.0/24 and 172.17.20.0/24) through a
local router at 172.17.0.1.
$ oc create -f worker0-nncp.yaml
$ oc create -f worker1-nncp.yaml
$ oc create -f worker2-nncp.yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
labels:
osp/net: internalapi
osp/net-attach-def-type: standard
name: internalapi
namespace: openstack
spec:
config: |
{
"cniVersion": "0.3.1",
"name": "internalapi",
"type": "macvlan",
"master": "internalapi",
"ipam": {
"type": "whereabouts",
"range": "172.17.0.0/24",
"range_start": "172.17.0.30",
"range_end": "172.17.0.70",
"routes": [
{ "dst": "172.17.10.0/24", "gw": "172.17.0.1" },
{ "dst": "172.17.20.0/24", "gw": "172.17.0.1" }
]
}
}
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
labels:
osp/net: ctlplane
osp/net-attach-def-type: standard
name: ctlplane
namespace: openstack
spec:
config: |
{
"cniVersion": "0.3.1",
"name": "ctlplane",
"type": "macvlan",
"master": "ospbr",
"ipam": {
"type": "whereabouts",
"range": "192.168.122.0/24",
"range_start": "192.168.122.30",
"range_end": "192.168.122.70",
"routes": [
{ "dst": "192.168.133.0/24", "gw": "192.168.122.1" },
{ "dst": "192.168.144.0/24", "gw": "192.168.122.1" }
]
}
}
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
labels:
osp/net: storage
osp/net-attach-def-type: standard
name: storage
namespace: openstack
spec:
config: |
{
"cniVersion": "0.3.1",
"name": "storage",
"type": "macvlan",
"master": "storage",
"ipam": {
"type": "whereabouts",
"range": "172.18.0.0/24",
"range_start": "172.18.0.30",
"range_end": "172.18.0.70",
"routes": [
{ "dst": "172.18.10.0/24", "gw": "172.18.0.1" },
{ "dst": "172.18.20.0/24", "gw": "172.18.0.1" }
]
}
}
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
labels:
osp/net: tenant
osp/net-attach-def-type: standard
name: tenant
namespace: openstack
spec:
config: |
{
"cniVersion": "0.3.1",
"name": "tenant",
"type": "macvlan",
"master": "tenant",
"ipam": {
"type": "whereabouts",
"range": "172.19.0.0/24",
"range_start": "172.19.0.30",
"range_end": "172.19.0.70",
"routes": [
{ "dst": "172.19.10.0/24", "gw": "172.19.0.1" },
{ "dst": "172.19.20.0/24", "gw": "172.19.0.1" }
]
}
}
$ oc create -f internalapi-net-attach-def.yaml
$ oc create -f control-net-attach-def.yaml
$ oc create -f storage-net-attach-def.yaml
$ oc create -f tenant-net-attach-def.yaml
7. Create a NetConfig CR definition file to specify the subnets for the data plane networks. Define each network with a dnsDomain field and with allocationRanges for each geographic region. These allocation ranges cannot overlap with the whereabouts IPAM ranges.
a. Create the file with the added allocation ranges for the control plane networking similar to
the following:
apiVersion: network.openstack.org/v1beta1
kind: NetConfig
metadata:
name: netconfig
namespace: openstack
spec:
networks:
- dnsDomain: ctlplane.example.com
mtu: 1500
name: ctlplane
subnets:
- allocationRanges:
- end: 192.168.122.120
start: 192.168.122.100
- end: 192.168.122.170
start: 192.168.122.150
cidr: 192.168.122.0/24
gateway: 192.168.122.1
name: subnet1
routes:
- destination: 192.168.133.0/24
nexthop: 192.168.122.1
- destination: 192.168.144.0/24
nexthop: 192.168.122.1
- allocationRanges:
- end: 192.168.133.120
start: 192.168.133.100
- end: 192.168.133.170
start: 192.168.133.150
cidr: 192.168.133.0/24
gateway: 192.168.133.1
name: subnet2
routes:
- destination: 192.168.122.0/24
nexthop: 192.168.133.1
- destination: 192.168.144.0/24
nexthop: 192.168.133.1
- allocationRanges:
- end: 192.168.144.120
start: 192.168.144.100
- end: 192.168.144.170
start: 192.168.144.150
cidr: 192.168.144.0/24
gateway: 192.168.144.1
name: subnet3
routes:
- destination: 192.168.122.0/24
nexthop: 192.168.144.1
- destination: 192.168.133.0/24
nexthop: 192.168.144.1
- dnsDomain: internalapi.example.com
mtu: 1496
name: internalapi
subnets:
- allocationRanges:
- end: 172.17.0.250
start: 172.17.0.100
cidr: 172.17.0.0/24
name: subnet1
routes:
- destination: 172.17.10.0/24
nexthop: 172.17.0.1
- destination: 172.17.20.0/24
nexthop: 172.17.0.1
vlan: 20
- allocationRanges:
- end: 172.17.10.250
start: 172.17.10.100
cidr: 172.17.10.0/24
name: subnet2
routes:
- destination: 172.17.0.0/24
nexthop: 172.17.10.1
- destination: 172.17.20.0/24
nexthop: 172.17.10.1
vlan: 30
- allocationRanges:
- end: 172.17.20.250
start: 172.17.20.100
cidr: 172.17.20.0/24
name: subnet3
routes:
- destination: 172.17.0.0/24
nexthop: 172.17.20.1
- destination: 172.17.10.0/24
nexthop: 172.17.20.1
vlan: 40
- dnsDomain: external.example.com
mtu: 1500
name: external
subnets:
- allocationRanges:
- end: 10.0.0.250
start: 10.0.0.100
cidr: 10.0.0.0/24
name: subnet1
vlan: 22
- dnsDomain: storage.example.com
mtu: 1496
name: storage
subnets:
- allocationRanges:
- end: 172.18.0.250
start: 172.18.0.100
cidr: 172.18.0.0/24
name: subnet1
routes:
- destination: 172.18.10.0/24
nexthop: 172.18.0.1
- destination: 172.18.20.0/24
nexthop: 172.18.0.1
vlan: 21
- allocationRanges:
- end: 172.18.10.250
start: 172.18.10.100
cidr: 172.18.10.0/24
name: subnet2
routes:
- destination: 172.18.0.0/24
nexthop: 172.18.10.1
- destination: 172.18.20.0/24
nexthop: 172.18.10.1
vlan: 31
- allocationRanges:
- end: 172.18.20.250
start: 172.18.20.100
cidr: 172.18.20.0/24
name: subnet3
routes:
- destination: 172.18.0.0/24
nexthop: 172.18.20.1
- destination: 172.18.10.0/24
nexthop: 172.18.20.1
vlan: 41
- dnsDomain: tenant.example.com
mtu: 1496
name: tenant
subnets:
- allocationRanges:
- end: 172.19.0.250
start: 172.19.0.100
cidr: 172.19.0.0/24
name: subnet1
routes:
- destination: 172.19.10.0/24
nexthop: 172.19.0.1
- destination: 172.19.20.0/24
nexthop: 172.19.0.1
vlan: 22
- allocationRanges:
- end: 172.19.10.250
start: 172.19.10.100
cidr: 172.19.10.0/24
name: subnet2
routes:
- destination: 172.19.0.0/24
nexthop: 172.19.10.1
- destination: 172.19.20.0/24
nexthop: 172.19.10.1
vlan: 32
- allocationRanges:
- end: 172.19.20.250
start: 172.19.20.100
cidr: 172.19.20.0/24
name: subnet3
routes:
- destination: 172.19.0.0/24
nexthop: 172.19.20.1
- destination: 172.19.10.0/24
nexthop: 172.19.20.1
vlan: 42
- dnsDomain: storagemgmt.example.com
mtu: 1500
name: storagemgmt
subnets:
- allocationRanges:
- end: 172.20.0.250
start: 172.20.0.100
cidr: 172.20.0.0/24
name: subnet1
routes:
- destination: 172.20.10.0/24
nexthop: 172.20.0.1
- destination: 172.20.20.0/24
nexthop: 172.20.0.1
vlan: 23
- allocationRanges:
- end: 172.20.10.250
start: 172.20.10.100
cidr: 172.20.10.0/24
name: subnet2
routes:
- destination: 172.20.0.0/24
nexthop: 172.20.10.1
- destination: 172.20.20.0/24
nexthop: 172.20.10.1
vlan: 33
- allocationRanges:
- end: 172.20.20.250
start: 172.20.20.100
cidr: 172.20.20.0/24
name: subnet3
routes:
- destination: 172.20.0.0/24
nexthop: 172.20.20.1
- destination: 172.20.10.0/24
nexthop: 172.20.20.1
vlan: 43
$ oc create -f netconfig.yaml
4.3. CREATING THE DCN CONTROL PLANE
Prerequisites
The RHOCP cluster is not configured with any network policies that prevent communication between the openstack-operators namespace and the control plane namespace (default openstack). Use commands such as the following to check the existing network policies on the cluster:
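$ oc get networkpolicy -n openstack
$ oc get networkpolicy -n openstack-operators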
You are logged on to a workstation that has access to the RHOCP cluster, as a user with
cluster-admin privileges.
Procedure
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack-control-plane
namespace: openstack
2. Use the spec field to specify the Secret CR you create to provide secure access to your pod,
and the storageClass you create for your Red Hat OpenShift Container Platform (RHOCP)
cluster storage back end:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
name: openstack-control-plane
namespace: openstack
spec:
secret: osp-secret
storageClass: <RHOCP_storage_class>
Replace <RHOCP_storage_class> with the storage class you created for your RHOCP
cluster storage back end.
3. Add service configurations. Include service configurations for all required services:
cinder:
uniquePodNames: false
apiOverride:
route: {}
template:
customServiceConfig: |
[DEFAULT]
storage_availability_zone = az0
databaseInstance: openstack
secret: osp-secret
cinderAPI:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
cinderScheduler:
replicas: 1
cinderVolumes:
az0:
networkAttachments:
- storage
replicas: 0
NOTE
Set the replicas field to a value of 0. The replica count is changed and
additional cinderVolume services are added after storage is configured.
nova:
apiOverride:
route: {}
template:
apiServiceTemplate:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
metadataServiceTemplate:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
schedulerServiceTemplate:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
cellTemplates:
cell0:
cellDatabaseAccount: nova-cell0
cellDatabaseInstance: openstack
cellMessageBusInstance: rabbitmq
hasAPIAccess: true
cell1:
cellDatabaseAccount: nova-cell1
cellDatabaseInstance: openstack-cell1
cellMessageBusInstance: rabbitmq-cell1
noVNCProxyServiceTemplate:
enabled: true
networkAttachments:
- ctlplane
hasAPIAccess: true
secret: osp-secret
dns:
template:
options:
- key: server
values:
- 192.168.122.1
- key: server
values:
- 192.168.122.2
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: ctlplane
metallb.universe.tf/allow-shared-ip: ctlplane
metallb.universe.tf/loadBalancerIPs: 192.168.122.80
spec:
type: LoadBalancer
replicas: 2
Galera
galera:
templates:
openstack:
storageRequest: 5000M
secret: osp-secret
replicas: 3
openstack-cell1:
storageRequest: 5000M
secret: osp-secret
replicas: 3
keystone:
apiOverride:
route: {}
template:
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
secret: osp-secret
replicas: 3
glance:
apiOverrides:
default:
route: {}
template:
databaseInstance: openstack
storage:
storageRequest: 10G
secret: osp-secret
keystoneEndpoint: default
glanceAPIs:
default:
replicas: 0
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
networkAttachments:
- storage
NOTE
You must initially set the replicas field to a value of 0. The replica count is
changed and additional glanceAPI services are added after storage is
configured.
barbican:
apiOverride:
route: {}
template:
databaseInstance: openstack
secret: osp-secret
barbicanAPI:
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
barbicanWorker:
replicas: 3
barbicanKeystoneListener:
replicas: 1
Memcached
memcached:
templates:
memcached:
replicas: 3
neutron:
apiOverride:
route: {}
template:
customServiceConfig: |
[DEFAULT]
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler
default_availability_zones = az0
[ml2_type_vlan]
network_vlan_ranges = datacentre:1:1000
[neutron]
physnets = datacentre
replicas: 3
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
secret: osp-secret
networkAttachments:
- internalapi
OVN
ovn:
template:
ovnController:
external-ids"
availability-zones:
- az0
enable-chassis-as-gateway: true
ovn-bridge: br-int
ovn-encap-type: geneve
system-id: random
networkAttachment: tenant
nicMappings:
datacentre: ospbr
ovnDBCluster:
ovndbcluster-nb:
replicas: 3
dbType: NB
storageRequest: 10G
networkAttachment: internalapi
ovndbcluster-sb:
replicas: 3
dbType: SB
storageRequest: 10G
networkAttachment: internalapi
ovnNorthd:
networkAttachment: internalapi
placement:
apiOverride:
route: {}
template:
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
databaseInstance: openstack
replicas: 3
secret: osp-secret
RabbitMQ
rabbitmq:
templates:
rabbitmq:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.85
spec:
type: LoadBalancer
rabbitmq-cell1:
replicas: 3
override:
service:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.86
spec:
type: LoadBalancer
Additional resources
Providing secure access to the Red Hat OpenStack Services on OpenShift services
4.4. CEPH SECRET KEY SITE DISTRIBUTION
Add the key for each Ceph backend to the secret for the default location.
Add the key for the default Ceph backend, as well as the local Ceph backend, to the secret for each additional location.
For three locations, az0, az1, and az2, you must have three secrets. Locations az1 and az2 each have
keys for the local backend as well as the keys for az0. Location az0 contains all Ceph back end keys.
You create the required secrets after Ceph has been deployed at each edge location, and the keyring and configuration file for each has been collected. Alternatively, you can deploy each Ceph backend as needed, and update secrets with each edge deployment.
Procedure
a. If you have already deployed Red Hat Ceph Storage (RHCS) at all edge sites which require
storage, create a secret for az0 which contains all keyrings and conf files:
b. If you have not deployed RHCS at all edge sites, create a secret for az0 which contains the
keyring and conf file for az0:
2. When you deploy RHCS at the edge location at availability zone 1 (az1), create a secret for
location az1 which contains keyrings and conf files for the local backend, and the default
backend:
4. When you deploy RHCS at the edge location at availability zone 2 (az2) create a secret for
location az2 which contains keyrings and conf files for the local backend, and the default
backend:
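# Minimal sketch; the keyring and configuration file names below are assumptions
# based on the az0/az2 naming used in this guide.
$ oc create secret generic ceph-conf-az-2 \
    --from-file=az0.conf \
    --from-file=az0.client.openstack.keyring \
    --from-file=az2.conf \
    --from-file=az2.client.openstack.keyring \
    -n openstack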
6. Optional: When you have finished creating the necessary keys, you can verify that they show up in the openstack namespace with a command such as the following:
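$ oc get secrets -n openstack | grep ceph-conf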
Example output
secret/ceph-conf-az-0
secret/ceph-conf-az-1
secret/ceph-conf-az-2
7. When you create an OpenStackDataPlaneNodeSet, use the appropriate key under the
extraMounts field:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-edpm-dcn-0
namespace: openstack
spec:
...
nodeTemplate:
extraMounts:
- extraVolType: Ceph
volumes:
- name: ceph
secret:
secretName: ceph-conf-az-0
mounts:
- name: ceph
mountPath: "/etc/ceph"
readOnly: true
8. When you create a data plane NodeSet, you must also update the OpenStackControlPlane
custom resource (CR) with the secret name:
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
spec:
extraMounts:
- name: v1
region: r1
extraVol:
- propagation:
- az0
- CinderBackup
extraVolType: Ceph
volumes:
- name: ceph
secret:
name: ceph-conf-az-0
mounts:
- name: ceph
mountPath: "/etc/ceph"
readOnly: true
- propagation:
- az1
extraVolType: Ceph
volumes:
- name: ceph
secret:
name: ceph-conf-az-1
mounts:
- name: ceph
mountPath: "/etc/ceph"
readOnly: true
...
NOTE
If the CinderBackup service is a part of the deployment, then you must include it
in the propagation list because it does not have the availability zone in its pod
name.
9. When you update the glanceAPIs field in the OpenStackControlPlane CR, the Image service (glance) pod names must match the extraMounts propagation instances:
glanceAPIs:
az0:
customServiceConfig: |
...
az1:
customServiceConfig: |
...
10. When you update the cinderVolumes field in the OpenStackControlPlane CR, the Block Storage service (cinder) pod names must also match the extraMounts propagation instances:
kind: OpenStackControlPlane
spec:
<...>
cinder:
<...>
cinderVolumes:
az0:
<...>
az1:
<...>
CHAPTER 5. DEPLOYING A DCN NODE SET
5.1. CONFIGURING THE DATA PLANE NODE NETWORKS
Prerequisites
Control plane deployment is complete but has not yet been modified to use Ceph Storage.
The data plane nodes have been pre-provisioned with an operating system.
The data plane nodes are accessible through an SSH key that Ansible can use.
If you are using HCI, then the data plane nodes have disks available to be used as Ceph OSDs.
There are a minimum of three available data plane nodes. Ceph Storage clusters must have a
minimum of three nodes to ensure redundancy.
Procedure
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: dcn-data-plane-networks
namespace: openstack
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
...
services:
- bootstrap
- configure-network
- validate-network
- install-os
- ceph-hci-pre
- configure-os
- ssh-known-hosts
- run-os
- reboot-os
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
nodeTemplate:
ansible:
ansiblePort: 22
ansibleUser: cloud-admin
ansibleVars:
edpm_enable_chassis_gw: true
edpm_ovn_availability_zones:
- az0
4. Optional: The ceph-hci-pre service prepares data plane nodes to host Red Hat Ceph Storage services after network configuration by using the edpm_ceph_hci_pre edpm-ansible role. By default, the edpm_ceph_hci_pre_enabled_services parameter of this role contains only RBD, RGW, and NFS services. Only RBD services are supported at DCN sites. If you are deploying HCI, disable the RGW and NFS services by adding the edpm_ceph_hci_pre_enabled_services parameter and listing only the Ceph RBD services.
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-edpm
namespace: openstack
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
nodeTemplate:
ansible:
ansiblePort: 22
ansibleUser: cloud-admin
ansibleVars:
edpm_ceph_hci_pre_enabled_services:
- ceph_mon
- ceph_mgr
- ceph_osd
...
NOTE
If other services, such as the Dashboard, are deployed with HCI nodes, they must
be added to the edpm_ceph_hci_pre_enabled_services parameter list. For
more information about this role, see edpm_ceph_hci_pre role.
5. Configure the Red Hat Ceph Storage cluster network for storage management.
The following example has three nodes. It assumes that the storage management network is on VLAN 23:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-edpm
namespace: openstack
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
nodeTemplate:
ansible:
ansiblePort: 22
ansibleUser: cloud-admin
ansibleVars:
edpm_ceph_hci_pre_enabled_services:
- ceph_mon
- ceph_mgr
- ceph_osd
edpm_fips_mode: check
edpm_iscsid_image: {{ registry_url }}/openstack-iscsid:{{ image_tag }}
edpm_logrotate_crond_image: {{ registry_url }}/openstack-cron:{{ image_tag }}
edpm_network_config_hide_sensitive_logs: false
edpm_network_config_os_net_config_mappings:
edpm-compute-0:
nic1: 52:54:00:1e:af:6b
nic2: 52:54:00:d9:cb:f4
edpm-compute-1:
nic1: 52:54:00:f2:bc:af
nic2: 52:54:00:f1:c7:dd
edpm-compute-2:
nic1: 52:54:00:dd:33:14
nic2: 52:54:00:50:fb:c3
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic2
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
edpm_neutron_metadata_agent_image: {{ registry_url }}/openstack-neutron-metadata-agent-ovn:{{ image_tag }}
edpm_nodes_validation_validate_controllers_icmp: false
edpm_nodes_validation_validate_gateway_icmp: false
edpm_selinux_mode: enforcing
edpm_sshd_allowed_ranges:
- 192.168.111.0/24
- 192.168.122.0/24
- 192.168.133.0/24
- 192.168.144.0/24
edpm_sshd_configure_firewall: true
enable_debug: false
gather_facts: false
image_tag: current-podified
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: eth0
service_net_map:
nova_api_network: internalapi
nova_libvirt_network: internalapi
storage_mgmt_cidr: "24"
storage_mgmt_host_routes: []
storage_mgmt_mtu: 9000
storage_mgmt_vlan_id: 23
storage_mtu: 9000
timesync_ntp_servers:
- hostname: pool.ntp.org
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
managementNetwork: ctlplane
networks:
- defaultRoute: true
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
nodes:
edpm-compute-0:
ansible:
host: 192.168.122.100
hostName: compute-0
networks:
- defaultRoute: true
fixedIP: 192.168.122.100
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: storagemgmt
subnetName: subnet1
- name: tenant
subnetName: subnet1
edpm-compute-1:
ansible:
host: 192.168.122.101
hostName: compute-1
networks:
- defaultRoute: true
fixedIP: 192.168.122.101
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: storagemgmt
subnetName: subnet1
- name: tenant
subnetName: subnet1
edpm-compute-2:
ansible:
host: 192.168.122.102
hostName: compute-2
networks:
- defaultRoute: true
fixedIP: 192.168.122.102
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: storagemgmt
subnetName: subnet1
- name: tenant
subnetName: subnet1
preProvisioned: true
services:
- bootstrap
- configure-network
- validate-network
- install-os
- ceph-hci-pre
- configure-os
- ssh-known-hosts
- run-os
- reboot-os
$ oc apply -f <dataplane_cr_file>
5.2. CONFIGURING AND DEPLOYING HYPERCONVERGED RED HAT CEPH STORAGE
NOTE
The following steps are specifically for a hyperconverged configuration of Red Hat Ceph Storage (RHCS), and are not required if you have deployed an external RHCS cluster.
Configure and deploy Red Hat Ceph Storage by editing the configuration file and using the cephadm
utility.
Procedure
2. Add the Storage and Storage Management network ranges. Red Hat Ceph Storage uses the
Storage network as the Red Hat Ceph Storage public_network and the Storage Management
network as the cluster_network.
The following example is for a configuration file entry where the Storage network range is
172.18.0.0/24 and the Storage Management network range is 172.20.0.0/24:
[global]
public_network = 172.18.0.0/24
cluster_network = 172.20.0.0/24
3. Add collocation boundaries between the Compute service and Ceph OSD services. Boundaries
should be set between collocated Compute service and Ceph OSD services to reduce CPU and
memory contention.
The following is an example for a Ceph configuration file entry with these boundaries set:
[osd]
osd_memory_target_autotune = true
osd_numa_auto_affinity = true
[mgr]
mgr/cephadm/autotune_memory_target_ratio = 0.2
In this example, the osd_memory_target_autotune parameter is set to true so that the OSD
daemons adjust memory consumption based on the osd_memory_target option. The
autotune_memory_target_ratio defaults to 0.7. This means 70 percent of the total RAM in the
system is the starting point from which any memory consumed by non-autotuned Ceph
daemons is subtracted. The remaining memory is divided between the OSDs; assuming all OSDs
have osd_memory_target_autotune set to true. For HCI deployments, you can set
mgr/cephadm/autotune_memory_target_ratio to 0.2 so that more memory is available for the
Compute service.
For additional information about service collocation, see Collocating services in a HCI
environment for NUMA nodes.
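As a rough illustration with hypothetical numbers: on an HCI node with 256 GB of RAM and mgr/cephadm/autotune_memory_target_ratio set to 0.2, approximately 0.2 x 256 GB = 51.2 GB becomes the memory budget from which the autotuned OSD targets are derived, leaving the rest of the RAM available to the Compute service and its instances.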
NOTE
If these values need to be adjusted after the deployment, use the ceph config
set osd <key> <value> command.
4. Deploy Ceph Storage with the edited configuration file on a data plane node:
$ cephadm bootstrap --config <config_file> --mon-ip <data_plane_node_ip> --skip-monitoring-stack
Replace <data_plane_node_ip> with the Storage network IP address of the data plane
node on which Red Hat Ceph Storage will be installed.
NOTE
If monitoring services have not been deployed, see the Red Hat Ceph
Storage documentation for information and procedures on enabling
monitoring services.
5. After the Red Hat Ceph Storage cluster is bootstrapped on the first EDPM node, see Red Hat
Ceph Storage installation in the Red Hat Ceph Storage Installation Guide to add the other
EDPM nodes to the Ceph cluster.
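The following is a minimal sketch of adding one additional node with cephadm; the hostname and Storage network IP address are placeholders, and the complete procedure is in the Red Hat Ceph Storage Installation Guide:

$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@compute-1
$ ceph orch host add compute-1 172.18.0.101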
5.3. CONFIGURING THE DCN DATA PLANE
Prerequisites
Procedure
2. To make the cephx key and configuration file available for the Compute service (nova), use the
extraMounts parameter.
The following is an example of using the extraMounts parameter for this purpose:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
...
nodeTemplate:
extraMounts:
- extraVolType: Ceph
volumes:
- name: ceph
secret:
secretName: ceph-conf-files
mounts:
- name: ceph
mountPath: "/etc/ceph"
readOnly: true
3. Create a ConfigMap to add required configuration details to the Compute service (nova). Create a file called ceph-nova-az0.yaml and add contents similar to the following. You must add the Image service (glance) endpoint for the local availability zone, as well as set the cross_az_attach parameter to false:
apiVersion: v1
kind: ConfigMap
metadata:
name: ceph-nova-az0
namespace: openstack
data:
03-ceph-nova.conf: |
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/az0.conf
images_rbd_glance_store_name = az0
images_rbd_glance_copy_poll_interval = 15
images_rbd_glance_copy_timeout = 600
rbd_user = openstack
rbd_secret_uuid = 9cfb3a03-3f91-516a-881e-a675f67c30ea
hw_disk_discard = unmap
volume_use_multipath = False
[glance]
endpoint_override = http://glance-az0-internal.openstack.svc:9292
valid_interfaces = internal
[cinder]
cross_az_attach = False
catalog_info = volumev3:cinderv3:internalURL
oc create -f ceph-nova-az0.yaml
5. Create a custom Compute (nova) service to use the ConfigMap. Create a file called nova-custom-ceph-az0.yaml and add contents similar to the following. You must add the name of the ConfigMap that you just created under the dataSources field:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
name: nova-custom-ceph-az0
spec:
addCertMounts: false
caCerts: combined-ca-bundle
dataSources:
- configMapRef:
name: ceph-nova-az0
- secretRef:
name: nova-cell1-compute-config
- secretRef:
name: nova-migration-ssh-key
edpmServiceType: nova
playbook: osp.edpm.nova
tlsCerts:
default:
contents:
- dnsnames
- ips
edpmRoleServiceName: nova
issuer: osp-rootca-issuer-internal
networks:
- ctlplane
oc create -f nova-custom-ceph-az0.yaml
NOTE
You must create a unique ConfigMap and custom Compute service for each
availability zone. Append the availability zone to the end of these file names as
shown in the previous steps.
8. Edit the services list to restore all of the services removed in Configuring the data plane node
networks. Restoring the full services list allows the remaining jobs to be run that complete the
configuration of the HCI environment.
The following is an example of a full services list with the additional services in bold:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
spec:
...
services:
- bootstrap
- configure-network
- validate-network
- install-os
- ceph-hci-pre
- configure-os
- ssh-known-hosts
- run-os
- reboot-os
- install-certs
- ceph-client
- ovn
- neutron-metadata
- libvirt
- nova-custom-ceph-az0
NOTE
In addition to restoring the default service list, the ceph-client service is added
after the run-os service. The ceph-client service configures EDPM nodes as
clients of a Red Hat Ceph Storage server. This service distributes the files
necessary for the clients to connect to the Red Hat Ceph Storage server. The
ceph-hci-pre service is only needed when you deploy HCI.
9. Optional: You can assign compute nodes to Compute service (nova) cells the same as you can
in any other environment. Replace the nova service in your OpenStackDataPlaneNodeSet CR
with your custom nova service:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-cell2
spec:
services:
- download-cache
- bootstrap
- configure-network
- validate-network
- install-os
- configure-os
- ssh-known-hosts
- run-os
- ovn
- libvirt
- nova-cell-custom
NOTE
If you are using cells, then the neutron-metadata service is unique per cell and
defined separately. For example neutron-metadata-cell1:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
labels:
app.kubernetes.io/instance: neutron-metadata-cell1
app.kubernetes.io/name: openstackdataplaneservice
app.kubernetes.io/part-of: openstack-operator
name: neutron-metadata-cell1
...
The nova-custom-ceph service is unique for each availability zone and defined
separately. For example, nova-custom-ceph-az0:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
labels:
app.kubernetes.io/instance: nova-custom-ceph-az0
app.kubernetes.io/name: openstackdataplaneservice
app.kubernetes.io/part-of: openstack-operator
name: nova-custom-ceph-az0
namespace: openstack
10. Optional: If you are deploying Red Hat Ceph Storage (RHCS) as a hyperconverged solution,
complete the following steps:
apiVersion: v1
kind: ConfigMap
metadata:
name: reserved-memory-nova
data:
04-reserved-memory-nova.conf: |
[DEFAULT]
reserved_host_memory_mb=75000
kind: OpenStackDataPlaneService
<...>
spec:
configMaps:
- ceph-nova
- reserved-memory-nova
$ oc apply -f <dataplane_cr_file>
5.4. EXAMPLE NODE SET RESOURCE
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
name: openstack-edpm
namespace: openstack
spec:
env:
- name: ANSIBLE_FORCE_COLOR
value: "True"
networkAttachments:
- ctlplane
nodeTemplate:
ansible:
ansiblePort: 22
ansibleUser: cloud-admin
ansibleVars:
edpm_ceph_hci_pre_enabled_services:
- ceph_mon
- ceph_mgr
- ceph_osd
edpm_fips_mode: check
edpm_iscsid_image: {{ registry_url }}/openstack-iscsid:{{ image_tag }}
edpm_logrotate_crond_image: {{ registry_url }}/openstack-cron:{{ image_tag }}
edpm_network_config_hide_sensitive_logs: false
edpm_network_config_os_net_config_mappings:
edpm-compute-0:
nic1: 52:54:00:1e:af:6b
nic2: 52:54:00:d9:cb:f4
edpm-compute-1:
nic1: 52:54:00:f2:bc:af
nic2: 52:54:00:f1:c7:dd
edpm-compute-2:
nic1: 52:54:00:dd:33:14
nic2: 52:54:00:50:fb:c3
edpm_network_config_template: |
---
{% set mtu_list = [ctlplane_mtu] %}
{% for network in nodeset_networks %}
{{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
{%- endfor %}
{% set min_viable_mtu = mtu_list | max %}
network_config:
- type: ovs_bridge
name: {{ neutron_physical_bridge_name }}
mtu: {{ min_viable_mtu }}
use_dhcp: false
dns_servers: {{ ctlplane_dns_nameservers }}
domain: {{ dns_search_domains }}
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
routes: {{ ctlplane_host_routes }}
members:
- type: interface
name: nic2
mtu: {{ min_viable_mtu }}
# force the MAC address of the bridge to this interface
primary: true
{% for network in nodeset_networks %}
- type: vlan
mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
addresses:
- ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
{% endfor %}
edpm_neutron_metadata_agent_image: {{ registry_url }}/openstack-neutron-metadata-agent-ovn:{{ image_tag }}
edpm_nodes_validation_validate_controllers_icmp: false
edpm_nodes_validation_validate_gateway_icmp: false
edpm_selinux_mode: enforcing
edpm_sshd_allowed_ranges:
- 192.168.111.0/24
- 192.168.122.0/24
- 192.168.133.0/24
- 192.168.144.0/24
edpm_sshd_configure_firewall: true
enable_debug: false
gather_facts: false
image_tag: current-podified
neutron_physical_bridge_name: br-ex
neutron_public_interface_name: eth0
service_net_map:
nova_api_network: internalapi
nova_libvirt_network: internalapi
storage_mgmt_cidr: "24"
storage_mgmt_host_routes: []
storage_mgmt_mtu: 9000
storage_mgmt_vlan_id: 23
storage_mtu: 9000
timesync_ntp_servers:
- hostname: pool.ntp.org
ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
managementNetwork: ctlplane
networks:
- defaultRoute: true
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: tenant
subnetName: subnet1
nodes:
edpm-compute-0:
ansible:
host: 192.168.122.100
hostName: compute-0
networks:
- defaultRoute: true
fixedIP: 192.168.122.100
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: storagemgmt
subnetName: subnet1
- name: tenant
subnetName: subnet1
edpm-compute-1:
ansible:
host: 192.168.122.101
hostName: compute-1
networks:
- defaultRoute: true
fixedIP: 192.168.122.101
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: storagemgmt
subnetName: subnet1
- name: tenant
subnetName: subnet1
edpm-compute-2:
ansible:
host: 192.168.122.102
hostName: compute-2
networks:
- defaultRoute: true
fixedIP: 192.168.122.102
name: ctlplane
subnetName: subnet1
- name: internalapi
subnetName: subnet1
- name: storage
subnetName: subnet1
- name: storagemgmt
subnetName: subnet1
- name: tenant
subnetName: subnet1
preProvisioned: true
services:
- bootstrap
- configure-network
- validate-network
- install-os
- ceph-hci-pre
- configure-os
- ssh-known-hosts
- run-os
- reboot-os
5.5. UPDATING THE CONTROL PLANE
Prerequisites
You have deployed a node set at the central location by using Red Hat OpenStack Services on OpenShift (RHOSO).
Procedure
cinderBackup:
customServiceConfig: |
[DEFAULT]
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_pool = backups
backup_ceph_user = openstack
For more information about configuring the Block Storage backup service, see Configuring the
Block Storage backup service.
2. Update the Block Storage cinder volume service in your openstack_control_plane.yaml file:
cinderVolumes:
az0:
customServiceConfig: |
[DEFAULT]
enabled_backends = ceph
glance_api_servers = https://glance-az0-internal.openstack.svc:9292
[ceph]
volume_backend_name = ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/az0.conf
rbd_user = openstack
rbd_pool = volumes
rbd_flatten_volume_from_snapshot = False
rbd_secret_uuid = 795dcbca-e715-5ac3-9b7e-a3f5c64eb89f
rbd_cluster_name = az0
backend_availability_zone = az0
For more information about configuring the Block Storage volume service, see Configuring the
volume service.
3. Add the extraMounts field to your openstack_control_plane.yaml file to define the services
that require access to the Red Hat Ceph Storage secret:
extraMounts:
- extraVol:
- extraVolType: Ceph
mounts:
- mountPath: /etc/ceph
name: ceph
readOnly: true
propagation:
- az0
- CinderBackup
volumes:
- name: ceph
projected:
sources:
- secret:
name: ceph-conf-az-0
4. Update the Image service (glance) in your openstack_control_plane.yaml file to configure the
Red Hat Ceph Storage RBD back end:
glanceAPIs:
az0:
customServiceConfig: |
[DEFAULT]
enabled_import_methods = [web-download,copy-image,glance-direct]
enabled_backends = az0:rbd
[glance_store]
default_backend = az0
[az0]
rbd_store_ceph_conf = /etc/ceph/az0.conf
store_description = "az0 RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
5. Apply the updated control plane configuration:
oc apply -f openstack_control_plane.yaml
6. Add the AZ to a host aggregate. This allows OpenStack administrators to schedule workloads to
a geographical location based on image metadata.
# oc rsh openstackclient
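The aggregate commands themselves are not reproduced above. The following is a minimal sketch, run from the openstackclient pod, assuming the availability zone is named az0 and that the hypervisor names match the node set hosts (compute-0, compute-1, compute-2); adjust the names to match the output of openstack hypervisor list:
$ openstack aggregate create --zone az0 az0
$ openstack aggregate add host az0 compute-0
$ openstack aggregate add host az0 compute-1
$ openstack aggregate add host az0 compute-2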
When you deploy a DCN node set with storage, you must update the following fields of the
OpenStackControlPlane CR at the central location:
cinderVolumes
glanceAPIs
Neutron
OVN
NOTE
If you are using cells, you must also configure cells for the new DCN site.
Prerequisites
Procedure
1. In the neutron service configuration, update the customServiceConfig field to add the new
availability zone and network leaf:
customServiceConfig: |
[DEFAULT]
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler
default_availability_zones = az0,az1
[ml2_type_vlan]
network_vlan_ranges = datacentre:1:1000,leaf1:1:1000
[neutron]
physnets = datacentre,leaf1
2. In the ovnController configuration, add the new availability zone to the external-ids field:
ovnController:
external-ids:
availability-zones:
- az0
- az1
enable-chassis-as-gateway: true
ovn-bridge: br-int
ovn-encap-type: geneve
system-id: random
networkAttachment: tenant
nicMappings:
datacentre: ospbr
3. In the cinderVolumes configuration, add a back end for the new availability zone:
cinderVolumes:
az0:
customServiceConfig: |
[DEFAULT]
...
az1:
customServiceConfig: |
[DEFAULT]
enabled_backends = ceph
glance_api_servers = https://glance-az1-internal.openstack.svc:9292
[ceph]
volume_backend_name = az1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/az1.conf
rbd_user = openstack
rbd_pool = volumes
rbd_flatten_volume_from_snapshot = False
rbd_secret_uuid = 19ccdd60-79a0-5f0f-aece-ece700e514f8
rbd_cluster_name = az1
backend_availability_zone = az1
4. Register an Image service (glance) pod to the Identity service (keystone) catalog:
In a DCN deployment, an Image service pod is deployed for each node set, but only one Image
service pod is registered in the Identity service (keystone) catalog at any one time. For this
reason, the top-level Glance CR exposes a keystoneEndpoint parameter. Unless only a single
instance is deployed, you can choose which instance is registered in keystone before you apply
the main OpenStackControlPlane CR. Because the default endpoint is the az0 glance API, the
keystoneEndpoint is set to az0:
spec:
<...>
glance:
enabled: true
keystoneEndpoint: az0
glanceAPIs:
az0:
apiTimeout: 60
glanceAPIs:
az1:
customServiceConfig: |
[DEFAULT]
enabled_import_methods = [web-download,copy-image,glance-direct]
enabled_backends = az0:rbd,az1:rbd
[glance_store]
default_backend = az1
[az1]
rbd_store_ceph_conf = /etc/ceph/az1.conf
store_description = "az1 RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
[az0]
rbd_store_ceph_conf = /etc/ceph/az0.conf
store_description = "az0 RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
networkAttachments:
- storage
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.81
spec:
type: LoadBalancer
replicas: 2
type: edge
az0:
customServiceConfig: |
[DEFAULT]
enabled_import_methods = [web-download,copy-image,glance-direct]
enabled_backends = az0:rbd,az1:rbd
[glance_store]
default_backend = az0
[az0]
rbd_store_ceph_conf = /etc/ceph/az0.conf
store_description = "az0 RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
[az1]
rbd_store_ceph_conf = /etc/ceph/az1.conf
store_description = "az1 RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
networkAttachments:
- storage
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.80
spec:
type: LoadBalancer
replicas: 3
type: split
NOTE
Availability zone az0 is of type split, and all other availability zones are of type
edge.
The split type is intended for cloud users who upload images. The edge type exists so
that, when the Block Storage service (cinder) or the Compute service (nova) interacts
with the Image service (glance), it can be configured to use the glance instance that is
local to it. Use at least 3 replicas for the default split glance pods and 2 replicas for the
edge glance pods, and increase the replica counts proportionally to the workload.
7. Apply the updated control plane configuration:
oc apply -f openstack_control_plane.yaml
8. Continue to update the control plane for each additional edge site that you add. Add the Red Hat
Ceph Storage (RHCS) configuration for each new site to your OpenShift secrets as needed.
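For example, a secret holding the Ceph configuration and keyring for a new az2 site might be created as follows. The secret name ceph-conf-az-2 and the file paths are assumptions; match them to the names that your extraMounts configuration references:
$ oc create secret generic ceph-conf-az-2 \
  --from-file=az2.conf=./az2.conf \
  --from-file=az2.client.openstack.keyring=./az2.client.openstack.keyring \
  -n openstack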
a. In the neutron service configuration, update the customServiceConfig field to add the new
availability zone and network leaf:
customServiceConfig: |
[DEFAULT]
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler
default_availability_zones = az0,az1,az2
[ml2_type_vlan]
network_vlan_ranges = datacentre:1:1000,leaf1:1:1000,leaf2:1:1000
[neutron]
physnets = datacentre,leaf1,leaf2
ovnController:
external-ids:
availability-zones:
- az0
- az1
- az2
enable-chassis-as-gateway: true
ovn-bridge: br-int
ovn-encap-type: geneve
system-id: random
networkAttachment: tenant
nicMappings:
datacentre: ospbr
cinderVolumes:
az0:
customServiceConfig: |
[DEFAULT]
...
az1:
customServiceConfig: |
[DEFAULT]
...
az2:
customServiceConfig: |
[DEFAULT]
enabled_backends = ceph
glance_api_servers = https://glance-az2-internal.openstack.svc:9292
[ceph]
volume_backend_name = ceph
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/az2.conf
rbd_user = openstack
rbd_pool = volumes
rbd_flatten_volume_from_snapshot = False
rbd_secret_uuid = 5c0c7a8e-55b1-5fa8-bc5c-9756b7862d2f
rbd_cluster_name = az2
backend_availability_zone = az2
glanceAPIs:
az0:
customServiceConfig: |
[DEFAULT]
enabled_import_methods = [web-download,copy-image,glance-direct]
enabled_backends = az0:rbd,az1:rbd,az2:rbd
[glance_store]
default_backend = az0
[az0]
rbd_store_ceph_conf = /etc/ceph/az0.conf
store_description = "az0 RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
[az1]
rbd_store_ceph_conf = /etc/ceph/az1.conf
...
type: edge
az2:
customServiceConfig: |
[DEFAULT]
enabled_import_methods = [web-download,copy-image,glance-direct]
enabled_backends = az0:rbd,az2:rbd
[glance_store]
default_backend = az2
[az2]
rbd_store_ceph_conf = /etc/ceph/az2.conf
store_description = "az2 RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
[az0]
rbd_store_ceph_conf = /etc/ceph/az0.conf
store_description = "az0 RBD backend"
rbd_store_pool = images
rbd_store_user = openstack
rbd_thin_provisioning = True
networkAttachments:
- storage
override:
service:
internal:
metadata:
annotations:
metallb.universe.tf/address-pool: internalapi
metallb.universe.tf/allow-shared-ip: internalapi
metallb.universe.tf/loadBalancerIPs: 172.17.0.82
spec:
type: LoadBalancer
replicas: 2
type: edge
9. Apply the updated control plane configuration:
oc apply -f openstack_control_plane.yaml
10. Add the AZ to a host aggregate. This allows OpenStack administrators to schedule workloads to
a geographical location by passing the --availability-zone argument:
# oc rsh openstackclient
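For example, after mapping the az2 hosts to an aggregate in the az2 availability zone, a cloud user can target that site when launching an instance. This is a minimal sketch; the host, image, flavor, and network names are placeholders:
$ openstack aggregate create --zone az2 az2
$ openstack aggregate add host az2 dcn2-compute-0
$ openstack server create --image cirros --flavor m1.small \
  --network private --availability-zone az2 test-instance-az2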
CHAPTER 6. VALIDATING EDGE STORAGE
You can import images into the Image service (glance) from the local file system or from a web server.
NOTE
Always store an image copy in the central site, even if there are no instances using the
image at the central location.
Prerequisites
1. Check the stores that are available through the Image service by using the glance stores-info
command. In the following example, three stores are available: az0, az1, and az2. These
correspond to the glance stores at the central location and the edge sites, respectively:
$ glance stores-info
+----------+----------------------------------------------------------------------------------+
| Property | Value |
+----------+----------------------------------------------------------------------------------+
| stores | [{"default": "true", "id": "az0", "description": "central rbd glance |
| | store"}, {"id": "az1", "description": "z1 rbd glance store"}, |
| | {"id": "az2", "description": "az2 rbd glance store"}] |
+----------+----------------------------------------------------------------------------------+
1. Ensure that your image file is in RAW format. If the image is not in raw format, you must convert
the image before importing it into the Image service:
$ file cirros-0.5.1-x86_64-disk.img
cirros-0.5.1-x86_64-disk.img: QEMU QCOW2 Image (v3), 117440512 bytes
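If you prefer to convert the image manually before importing it, for example when the image conversion plugin is not enabled, you can use qemu-img. This is a minimal sketch that reuses the cirros file names from the example above:
$ qemu-img convert -f qcow2 -O raw cirros-0.5.1-x86_64-disk.img cirros-0.5.1-x86_64-disk.raw
$ file cirros-0.5.1-x86_64-disk.raw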
Import the image into the default back end at the central site:
This procedure assumes that the default Image Conversion plugin is enabled in the Image service
(glance). This feature automatically converts QCOW2 images into RAW format, which is optimal
for Ceph RBD. You can confirm that a glance image is in RAW format by running the glance
image-show <image_id> | grep disk_format command.
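For example, a quick check of the disk format might look like the following; the <image_id> placeholder and the output line are illustrative:
$ glance image-show <image_id> | grep disk_format
| disk_format      | raw |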
Procedure
1. Use the glance image-create-via-import command to import an image from a web server, and
use the --stores parameter to specify the target stores.
# glance image-create-via-import \
--disk-format qcow2 \
--container-format bare \
--name cirros \
--uri http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \
--import-method web-download \
--stores az0,az1
In this example, the qcow2 cirros image is downloaded from the official Cirros site, converted to
RAW by glance, and imported into the central site and edge site 1 as specified by the --stores
parameter.
Alternatively, you can replace --stores with --all-stores True to upload the image to all of the stores.
1. Use the UUID of the glance image for the copy operation:
NOTE
In this example, the --stores option specifies that the cirros image will be copied
from the central site, az0, to edge sites az1 and az2. Alternatively, you can use
the --all-stores True option, which uploads the image to all the stores that don’t
currently have the image.
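The copy command itself is not reproduced in this extract. A minimal sketch, assuming the image UUID is in <image_id> and that the copy-image import method is enabled:
$ glance image-import <image_id> --stores az1,az2 --import-method copy-image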
2. Confirm that a copy of the image is in each store. Note that the stores key, which is the last item in
the properties map, is set to az0,az1,az2:
NOTE
Always store an image copy at the central site, even if there is no instance using it at that site.
Procedure
1. Identify the ID of the image to create as a volume, and pass that ID to the openstack volume
create command:
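A minimal sketch of this step, assuming the image UUID is in <image_id> and using an illustrative volume name and size in the az1 zone:
$ openstack volume create --image <image_id> --availability-zone az1 --size 8 pet-volume-az1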
2. Identify the volume ID of the newly created volume and pass it to the openstack server create
command:
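A minimal sketch of booting from that volume; the flavor and network names are placeholders:
$ openstack server create --volume <volume_id> --flavor m1.small \
  --network private --availability-zone az1 pet-server-az1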
3. You can verify that the volume is based on the image by running the rbd command within a
ceph-mon container at the az1 edge site to list the volumes pool.
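For example, from a Ceph administration container on the az1 cluster, where the cluster name az1 assumes the configuration file is /etc/ceph/az1.conf:
# rbd --cluster az1 -p volumes ls -l
In the output, the PARENT column typically shows the glance image that the volume was cloned from.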
4. Confirm that you can create a cinder snapshot of the root volume of the instance. Ensure that
the server is stopped to quiesce data to create a clean snapshot. Use the --force option,
because the volume status remains in-use when the instance is off.
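A minimal sketch of this step, using the illustrative server, volume, and snapshot names from the previous examples:
$ openstack server stop pet-server-az1
$ openstack volume snapshot create --volume <volume_id> --force pet-volume-az1-snap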
5. List the contents of the volumes pool on the az1 Ceph cluster to show the newly created
snapshot.
2. Copy the image from the az1 edge site back to the central location, which is the default
backend for glance:
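A minimal sketch of the copy back to the central store, assuming the image UUID is in <image_id>:
$ glance image-import <image_id> --stores az0 --import-method copy-image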
Prerequisites
The Block Storage backup service is deployed in the central AZ. For more information, see
Updating the control plane.
Procedure
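The backup command itself is not shown in this extract. A minimal sketch, assuming that your client and the Block Storage API support availability-zone-aware backups (volume API microversion 3.51 or later):
$ openstack --os-volume-api-version 3.51 volume backup create \
  --name <volume_backup> --availability-zone <az0> --force <edge_volume>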
Replace <az0> with the name of the central availability zone that hosts the cinder-backup
service.
Replace <edge_volume> with the name of the volume that you want to back up.
NOTE
If you experience issues with Ceph keyrings, you might need to restart the
cinder-backup container so that the keyrings copy from the host to the
container successfully.
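The restore command is likewise not shown. A minimal sketch that restores the backup into a new volume in the target zone, with an illustrative volume name and size:
$ openstack volume create --backup <volume_backup> --availability-zone <az_2> \
  --size 8 restored-volume-az2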
Replace <az_2> with the name of the availability zone where you want to restore the
backup.
Replace <volume_backup> with the name of the volume backup that you created in the
previous step.