VMware Integrated OpenStack Administration Guide
VMware Integrated OpenStack 4.1
Update 2
Modified on 13 NOV 2018
You can find the most up-to-date technical documentation on the VMware website at:
https://docs.vmware.com/
VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com
© 2015-2018 VMware, Inc. All rights reserved. Copyright and trademark information.
Contents
Updated Information
2 Deployment Configuration
Add Capacity to an OpenStack Deployment
Add Compute Clusters to Your Deployment
Add Storage to a Compute Node
Add Storage to the Image Service
Modify Management and API Access Networks
Update Component Credentials
Add Certificates to Your Deployment
Configure Public API Rate Limiting
Create a Custom Theme for the VMware Integrated OpenStack Dashboard
Profile OpenStack Services
Create a Tenant Virtual Data Center
Domain Management
Configure LDAP Authentication
Configure VMware Identity Manager Federation
6 OpenStack Instances
Import Virtual Machines into VMware Integrated OpenStack
Migrate an Instance
Enable Live Resize
Use Affinity to Control OpenStack Instance Placement
Use DRS to Control OpenStack Instance Placement
Define VM and Host Groups for Placing OpenStack Instances
Create a DRS Rule for OpenStack Instance Placement
Apply VM Group Settings to Image Metadata
Configure QoS Resource Allocation for Instances
Use Storage Policy-Based Management with OpenStack Instances
Configure Virtual CPU Pinning
Configure OpenStack Instances for NUMA
Configuring Passthrough Devices on OpenStack Instances
Configure Passthrough for Networking Devices
Configure Passthrough for Non-Networking Devices
7 OpenStack Flavors
Default Flavor Configurations
Create a Flavor
Delete a Flavor
Modify Flavor Metadata
Supported Flavor Extra Specs
9 Glance Images
Importing Images to the Image Service
Import Images Using the GUI
Import Images in Supported Formats Using the CLI
Import Images in Unsupported Formats
Import a Virtual Machine Template as an Image
Migrate an Image
Modify the Default Behavior for Nova Snapshots
Modify the Default Cinder Upload-to-Image Behavior
Supported Image Metadata
1 VMware Integrated OpenStack Administration Guide
The VMware Integrated OpenStack Administration Guide shows you how to perform administrative tasks
in VMware Integrated OpenStack, including how to create and manage projects, users, accounts, flavors,
images, and networks.
Intended Audience
This guide is for cloud administrators who want to create and manage resources with an OpenStack deployment that is fully integrated with VMware vSphere®. To do so successfully, you should be familiar with the OpenStack components and functions.
Terminology
For definitions of terms as they are used in this document, see the VMware Glossary at https://www.vmware.com/topics/glossary and the OpenStack Glossary at https://docs.openstack.org/doc-contrib-guide/common/glossary.html.
Updated Information
The VMware Integrated OpenStack Administration Guide is updated with each release of the product or
when necessary.
This table provides the update history of the VMware Integrated OpenStack Administration Guide.
Update 2 (13 NOV 2018):
- Added documents about creating availability zones.
- Added a document about creating provider security groups.
- Various corrections and improvements.
13 JUL 2018:
- Updated Import Virtual Machines into VMware Integrated OpenStack to add a link to DCLI Cannot Connect to Server.
2 Deployment Configuration
You can modify the configuration of your VMware Integrated OpenStack deployment to add capacity,
enable profiling, update credentials, and change or customize various settings.
Prerequisites
In vSphere, create the cluster that you want to add to your deployment.
If you want to add compute clusters from a separate compute vCenter Server instance, the following
restrictions apply:
- You must deploy VMware Integrated OpenStack in HA mode with NSX-T Data Center networking. Other deployment and networking modes do not support adding compute clusters from separate vCenter Server instances.
- You cannot add compute clusters from separate compute vCenter Server instances in the same availability zone.
Procedure
1 In the vSphere Web Client, select Home > VMware Integrated OpenStack.
3 If you want to add compute clusters from a separate vCenter Server instance, first add the instance to
your deployment.
b Click the Add (plus sign) icon at the top left of the pane.
c Enter the FQDN of the vCenter Server instance and administrator credentials and click OK.
5 Click the Add (plus sign) icon at the top left of the pane.
6 Select the vCenter Server instance and availability zone for the compute cluster that you want to add
and click Next.
8 Select one or more datastores for the compute cluster to consume and click Next.
9 Select the management virtual machine and desired datastore and click Next.
The processing capacity of your deployment increases accordingly with the size of the additional compute
cluster.
Adding a datastore causes the compute service to restart and might temporarily interrupt OpenStack
services.
Procedure
1 In the vSphere Web Client, select Home > VMware Integrated OpenStack.
3 Open the Nova Storage tab and click the Add (plus sign) icon at the top left of the pane.
4 Select the cluster to which you want to add a datastore and click Next.
5 Select one or more datastores to add to the cluster and click Next.
The storage capacity for the selected compute node increases accordingly with the size of the additional
datastore.
Adding a datastore causes the image service to restart and might temporarily interrupt OpenStack
services.
Procedure
1 In the vSphere Web Client, select Home > VMware Integrated OpenStack.
3 Open the Glance Storage tab and click the Add (plus sign) icon at the top left of the pane.
The storage capacity for the image service increases accordingly with the size of the additional datastore.
Procedure
1 In the vSphere Web Client, select Home > VMware Integrated OpenStack.
3 On the Networks tab, right-click the network that you want to modify.
- To add an IP address range, select Add IP Range and enter the IP address range that you want to add to the network. You can click Add IP Range to add multiple IP address ranges at once.
Important The management and API access networks cannot include more than 100 IP addresses each.
- To change the DNS server, select Change DNS and enter the DNS servers that you want to add to the network. Modifying the DNS settings will briefly interrupt the network connection.
Important If you want to change the NSX-T Data Center password, perform the following steps:
1 Log in to a controller node and run the systemctl stop neutron-server command to stop the
Neutron server service.
3 Change the password in VMware Integrated OpenStack as described in the following section.
The Neutron server service will restart after you change the password in VMware Integrated OpenStack.
Procedure
1 In the vSphere Web Client, select Home > VMware Integrated OpenStack.
The Change Passwords panel contains text boxes for updating the current LDAP server, NSX
Manager, and vCenter Server credentials.
To retain the original settings for a component, leave the text boxes blank.
The certificates that you add must be signed by a certificate authority (CA) and created from a certificate
signing request (CSR) generated by VMware Integrated OpenStack. Using wildcard certificates is not
supported.
Procedure
1 In the vSphere Web Client, select Home > VMware Integrated OpenStack.
4 If you require a new CA-signed certificate, enter the information for the CSR and click Generate.
5 After you have obtained the certificate from the CA, click Import and select the certificate file.
If a client exceeds the rate limit, it receives an HTTP 429 Too Many Requests response. The Retry-After header in the response indicates how long the client must wait before making further calls.
You can enable rate limiting by service. For example, you might want to throttle Nova API service calls
more tightly than Neutron API service calls.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
4 Uncomment the haproxy_throttle_period parameter and set it to the number of seconds that
clients must wait if a rate limit is exceeded.
5 If you want to configure rate limits for specific APIs, uncomment the max_requests and
request_period parameters for those services and configure them as desired.
Each API that can be rate limited has its own max_requests and request_period parameters in custom.yml. For example, the following settings require throttled clients to wait 60 seconds and limit the Neutron API to 50 requests every 10 seconds:
haproxy_throttle_period: 60
haproxy_neutron_max_requests: 50
haproxy_neutron_request_period: 10
You can upload a custom logo, style sheet, or bookmark icon to the OpenStack Management Server and
configure it to display as your theme.
Prerequisites
- Custom logos should be 216 pixels wide by 35 pixels high. Graphics with different dimensions might not be displayed properly.
Procedure
2 Create the /opt/vmware/vio/custom/horizon directory and transfer the desired theme files to this
directory.
3 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
- To configure a bookmark icon, uncomment the horizon_favicon parameter and set its value to the path of your icon file.
- To configure a dashboard logo (displayed in the top-left corner of each page), uncomment the horizon_logo parameter and set its value to the path of your dashboard logo file.
- To configure a login logo, uncomment the horizon_logo_splash parameter and set its value to the path of your login logo file.
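Putting the settings above together, the relevant custom.yml lines might look like the following sketch. The file names are placeholders for the theme files that you copied to the custom directory:

```yaml
# Hypothetical theme settings in custom.yml; replace each path with
# a file you transferred to /opt/vmware/vio/custom/horizon.
horizon_favicon: "/opt/vmware/vio/custom/horizon/favicon.ico"
horizon_logo: "/opt/vmware/vio/custom/horizon/logo.png"
horizon_logo_splash: "/opt/vmware/vio/custom/horizon/logo-splash.png"
```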
The VMware Integrated OpenStack dashboard loads your custom theme files. After the service becomes
available, you can log in and choose the VMware theme to display your customizations.
Note If you edit or disable the custom theme at a later time, clear the browser cache so that the updated
theme can be displayed.
VMware Integrated OpenStack supports the profiling of Cinder, Glance, Heat, Neutron, and Nova
commands. You can store profiler trace data with Ceilometer or vRealize Log Insight.
Prerequisites
- If you want to use Ceilometer to store trace data, enable Ceilometer. See "Enable the Ceilometer Component" in the VMware Integrated OpenStack Installation and Configuration Guide.
- If you want to use vRealize Log Insight to store trace data, deploy and configure vRealize Log Insight. See the Getting Started document for vRealize Log Insight.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
6 If you are using vRealize Log Insight, uncomment the os_profiler_connection_string parameter
and set its value to the location of your vRealize Log Insight server.
Enter the vRealize Log Insight server address in the following format: loginsight://username:password@loginsight-ip
Specify the user name and password of a user with the USER role on your vRealize Log Insight
deployment.
8 If you are using vRealize Log Insight, log in to the controller node and set the
OSPROFILER_CONNECTION_STRING environment variable to the vRealize Log Insight server address that
you specified in the custom.yml file.
export OSPROFILER_CONNECTION_STRING="loginsight://username:password@loginsight-ip"
You can now enable profiling on OpenStack commands. Run the desired command with the --profile
parameter and specify your OSProfiler password. The command outputs a profiling trace UUID. Run
OSProfiler with that UUID to generate a report. The following example profiles the cinder list
command:
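The following is a sketch of such a session. The OSProfiler HMAC password and the trace UUID are placeholders, and the exact report options may vary with your OSProfiler version:

```shell
# Profile the "cinder list" call; --profile takes the OSProfiler password.
cinder --profile my-osprofiler-password list
# The client prints a profiling trace UUID; use it to generate a report.
osprofiler trace show --html --out trace-report.html <trace-uuid>
```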
Important This feature is offered in VMware Integrated OpenStack Carrier Edition only. For more
information, see "VMware Integrated OpenStack Licensing" in the VMware Integrated OpenStack
Installation and Configuration Guide.
Project quotas limit OpenStack resources across multiple compute nodes or availability zones, but they
do not guarantee resource availability. By creating a tenant virtual data center to allocate CPU and
memory for an OpenStack project on a compute node, you provide a resource guarantee for tenants and
avoid noisy neighbor scenarios in a multi-tenant environment.
The tenant virtual data center allocates resources at the compute node level. You can also allocate
resources on the virtual network function (VNF) level using the same flavor. For instructions, see
Configure QoS Resource Allocation for Instances.
Tenant virtual data centers are managed using the viocli utility. For more information, see viocli
inventory-admin Command.
Procedure
4 Select the admin project from the drop-down menu in the title bar.
c Select Update Metadata next to the flavor that you want to use.
d In the Available Metadata pane, expand VMware Policies and click the Add (plus sign) icon
next to Tenant Virtual Datacenter.
e Set the value of vmware:tenant_vdc to the UUID of the tenant virtual data center and click
Save.
You can run the viocli inventory-admin list-tenant-vdcs command on the OpenStack Management Server to find the UUIDs of all tenant virtual data centers.
The tenant virtual data center is created. You can now launch instances in the tenant virtual data center
by configuring them with the flavor that you modified in this procedure.
What to do next
You can display the resource pools in a tenant virtual data center by running the viocli inventory-admin show-tenant-vdc --id tvdc-uuid command. Each resource pool is listed with its provider ID, project ID, status, minimum and maximum CPU, minimum and maximum memory, and compute node information. If a tenant virtual data center includes multiple resource pools, the first row displays aggregate information for all pools.
You can update your tenant virtual data centers by running the viocli inventory-admin update-tenant-vdc command. For specific parameters, see viocli inventory-admin Command.
You can delete an unneeded tenant virtual data center by running the viocli inventory-admin delete-tenant-vdc --id tvdc-uuid command.
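The commands named above can be combined into a short workflow on the OpenStack Management Server; the UUID is a placeholder:

```shell
# List all tenant virtual data centers and their UUIDs.
viocli inventory-admin list-tenant-vdcs
# Show the resource pools in one tenant virtual data center.
viocli inventory-admin show-tenant-vdc --id <tvdc-uuid>
# Delete a tenant virtual data center that is no longer needed.
viocli inventory-admin delete-tenant-vdc --id <tvdc-uuid>
```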
3 Neutron Network Configuration
You can create provider and external networks for your VMware Integrated OpenStack deployment,
configure availability zones, and perform other advanced networking tasks.
- Create a Neutron Availability Zone with NSX Data Center for vSphere
- Specify Tenant Router Types for NSX Data Center for vSphere
- Configure Dynamic Routing for Neutron Networks with NSX Data Center for vSphere
A provider network can be dedicated to one project or shared among multiple projects. Tenants can
create virtual machines in provider networks or connect their tenant networks to a provider network
through a Neutron router.
The specific configuration for creating a provider network depends on the networking mode of your
VMware Integrated OpenStack deployment.
Prerequisites
- Define a VLAN for the provider network and record its ID.
- To use DHCP with VM form-factor NSX Edge nodes, enable forged transmit and promiscuous mode on the port group containing the edge nodes. For instructions, see "Configure the Security Policy for a Distributed Port Group or Distributed Port" in the vSphere Networking document.
Procedure
2 Select the admin project from the drop-down menu in the title bar.
6 If you want multiple projects to use the provider network, select Shared.
Network Address: Enter the IP address range for the subnet in CIDR format (for example, 192.0.2.0/24).
Gateway IP: Enter the gateway IP address. If you do not enter a value, the first IP address in the subnet is used. If you do not want a gateway on the subnet, select Disable Gateway.
a Under Allocation Pools, enter IP address pools from which to allocate the IP addresses of virtual
machines created on the network. Enter pools as two IP addresses separated by a comma (for
example, 192.0.2.10,192.0.2.15). If you do not specify any IP address pools, the entire
subnet is available for allocation.
b Under DNS Name Servers, enter the IP address of one or more DNS servers to use on the
subnet.
c Under Host Routes, enter additional routes to advertise to the hosts on the subnet. Enter routes as the destination IP address in CIDR format and the next hop separated by a comma (for example, 192.0.2.0/24,198.51.100.1).
9 Click Create.
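The same result can be sketched with the OpenStack CLI. The network and subnet names, physical network label, VLAN ID, and addresses below are examples only, not values from your deployment:

```shell
# Create a shared VLAN-backed provider network (values are examples).
openstack network create --share --provider-network-type vlan \
  --provider-physical-network dvs --provider-segment 120 provider-net
# Add a subnet with an allocation pool and a DNS server.
openstack subnet create --network provider-net --subnet-range 192.0.2.0/24 \
  --allocation-pool start=192.0.2.10,end=192.0.2.15 \
  --dns-nameserver 192.0.2.53 provider-subnet
```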
Prerequisites
- If you want to create a VLAN-based network, define a VLAN for the provider network and record its ID.
- To use DHCP on a VLAN-based network with VM form-factor NSX Edge nodes, you must enable forged transmit and promiscuous mode on the port group containing the edge nodes. For instructions, see "Configure the Security Policy for a Distributed Port Group or Distributed Port" in the vSphere Networking document.
- If you want to create a port group-based network, create a port group for the provider network and record its managed object identifier (MOID).
Procedure
2 Select the admin project from the drop-down menu in the title bar.
Provider Network Type: Select Flat, VLAN, Port Group, or VXLAN from the drop-down menu.
Physical Network: If you selected Flat or VLAN for the network type, enter the MOID of the distributed switch for the provider network. If you selected Port Group for the network type, enter the MOID of the port group for the provider network. If you selected VXLAN for the network type, this value is determined automatically.
Segmentation ID: If you selected VLAN for the network type, enter the VLAN ID defined for the provider network.
6 If you want multiple projects to use the provider network, select Shared.
Network Address: Enter the IP address range for the subnet in CIDR format (for example, 192.0.2.0/24).
Gateway IP: Enter the gateway IP address. If you do not enter a value, the first IP address in the subnet is used. If you do not want a gateway on the subnet, select Disable Gateway.
a Under Allocation Pools, enter IP address pools from which to allocate the IP addresses of virtual
machines created on the network. Enter pools as two IP addresses separated by a comma (for
example, 192.0.2.10,192.0.2.15). If you do not specify any IP address pools, the entire
subnet is available for allocation.
b Under DNS Name Servers, enter the IP address of one or more DNS servers to use on the
subnet.
c Under Host Routes, enter additional routes to advertise to the hosts on the subnet. Enter routes as the destination IP address in CIDR format and the next hop separated by a comma (for example, 192.0.2.0/24,198.51.100.1).
9 Click Create.
Prerequisites
Define a VLAN for the provider network and record its ID.
Procedure
2 Select the admin project from the drop-down menu in the title bar.
6 If you want multiple projects to use the provider network, select Shared.
Network Address: Enter the IP address range for the subnet in CIDR format (for example, 192.0.2.0/24).
Gateway IP: Enter the gateway IP address. If you do not enter a value, the first IP address in the subnet is used. If you do not want a gateway on the subnet, select Disable Gateway.
a Under Allocation Pools, enter IP address pools from which to allocate the IP addresses of virtual
machines created on the network. Enter pools as two IP addresses separated by a comma (for
example, 192.0.2.10,192.0.2.15). If you do not specify any IP address pools, the entire
subnet is available for allocation.
b Under DNS Name Servers, enter the IP address of one or more DNS servers to use on the
subnet.
c Under Host Routes, enter additional routes to advertise to the hosts on the subnet. Enter routes as the destination IP address in CIDR format and the next hop separated by a comma (for example, 192.0.2.0/24,198.51.100.1).
9 Click Create.
An external network can be dedicated to one project or shared among multiple projects. Tenants cannot
create virtual machines in external networks.
The specific configuration for creating an external network depends on the networking mode of your
VMware Integrated OpenStack deployment.
Procedure
2 Select the admin project from the drop-down menu in the title bar.
Provider Network Type: Select Local to connect tenant logical routers to the default tier-0 router or External to connect tenant logical routers to another tier-0 router.
Physical Network: If you selected External as the provider network type, enter the UUID of the tier-0 router to which you want to connect future tenant logical routers.
6 If you want multiple projects to use the external network, select Shared.
Network Address: Enter the IP address range for the subnet in CIDR format (for example, 192.0.2.0/24).
Gateway IP: Enter the gateway IP address. If you do not enter a value, the first IP address in the subnet is used. If you do not want a gateway on the subnet, select Disable Gateway.
a Under Allocation Pools, enter IP address pools from which to allocate the floating IP addresses
of tenant logical routers. Enter pools as two IP addresses separated by a comma (for example,
192.0.2.10,192.0.2.15). If you do not specify any IP address pools, the entire subnet is
available for allocation.
b Under DNS Name Servers, enter the IP address of one or more DNS servers to use on the
subnet.
c Under Host Routes, enter additional routes to advertise to the hosts on the subnet. Enter routes as the destination IP address in CIDR format and the next hop separated by a comma (for example, 192.0.2.0/24,198.51.100.1).
10 Click Create.
Prerequisites
- If you want to create a VLAN-based network, define a VLAN for the external network and record its ID.
- If you want to create a port group-based network, create a port group for the external network and record its managed object identifier (MOID).
Procedure
2 Select the admin project from the drop-down menu in the title bar.
Provider Network Type: Select Flat, VLAN, Port Group, or VXLAN from the drop-down menu.
Physical Network: If you selected Flat or VLAN for the network type, enter the MOID of the distributed switch for the provider network. If you selected Port Group for the network type, enter the MOID of the port group for the provider network. If you selected VXLAN for the network type, this value is determined automatically.
Segmentation ID: If you selected VLAN for the network type, enter the VLAN ID defined for the provider network.
6 If you want multiple projects to use the provider network, select Shared.
Network Address: Enter the IP address range for the subnet in CIDR format (for example, 192.0.2.0/24).
Gateway IP: Enter the gateway IP address. If you do not enter a value, the first IP address in the subnet is used. If you do not want a gateway on the subnet, select Disable Gateway.
a Under Allocation Pools, enter IP address pools from which to allocate the floating IP addresses
of tenant logical routers. Enter pools as two IP addresses separated by a comma (for example,
192.0.2.10,192.0.2.15). If you do not specify any IP address pools, the entire subnet is
available for allocation.
b Under DNS Name Servers, enter the IP address of one or more DNS servers to use on the
subnet.
c Under Host Routes, enter additional routes to advertise to the hosts on the subnet. Enter routes as the destination IP address in CIDR format and the next hop separated by a comma (for example, 192.0.2.0/24,198.51.100.1).
10 Click Create.
Prerequisites
In NSX-T Data Center, create a bridge cluster that includes two dedicated ESXi hosts. See "Create a
Bridge Cluster" in the NSX-T Administration Guide.
Procedure
3 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
4 Create a logical Layer 2 gateway, specifying the UUID of the NSX-T Data Center bridge cluster as the
device name.
The interface name value is ignored, and the name is automatically assigned.
5 Create the logical Layer 2 gateway connection using the gateway created in the previous step.
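Steps 4 and 5 use the networking-l2gw extension of the Neutron CLI. The following sketch assumes that extension; the gateway name is an example, and the bridge cluster UUID, network ID, and VLAN ID are placeholders:

```shell
# Create the L2 gateway; the device name is the NSX-T bridge cluster UUID
# and the interface name is ignored (a name is assigned automatically).
neutron l2-gateway-create \
  --device name=<bridge-cluster-uuid>,interface_names=ignored l2gw1
# Connect an overlay network to the VLAN through the gateway.
neutron l2-gateway-connection-create l2gw1 <network-id> \
  --default-segmentation-id <vlan-id>
```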
Compute nodes on the overlay network can now access the specified VLAN.
Prerequisites
Create a port group and tag it with the ID of the VLAN to which you want to connect your compute nodes.
Procedure
3 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
4 Create a logical Layer 2 gateway, specifying the managed object identifier (MOID) of the port group
as the interface name.
NSX Data Center for vSphere creates a dedicated distributed logical router (DLR) from the backup
edge pool. The device name value is ignored, and the object is automatically assigned a name in the
format "L2 bridging-gateway-id".
5 Create the logical Layer 2 gateway connection using the gateway created in the previous step.
Note For VDS deployments, only provider networks can be transparent. For NSX Data Center for
vSphere and NSX-T Data Center deployments, only tenant networks can be transparent.
To enable VLAN transparency on a network, include the --transparent-vlan parameter and disable
port security when you create the network. For example:
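A sketch of such a command, assuming the Neutron CLI; the network name is an example:

```shell
# Create a VLAN-transparent network with port security disabled.
neutron net-create transparent-net --transparent-vlan True \
  --port-security-enabled=False
```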
MAC learning in VMware Integrated OpenStack is implemented differently for NSX-T Data Center and
NSX Data Center for vSphere deployments.
- For NSX-T Data Center deployments, MAC learning in VMware Integrated OpenStack is provided by NSX-T Data Center MAC learning. For more information, see "Understanding MAC Management Switching Profile" in the NSX-T Administration Guide.
- For NSX Data Center for vSphere deployments, MAC learning in VMware Integrated OpenStack is implemented by enabling forged transmit and promiscuous mode. The guest must request promiscuous mode.
- For NSX Data Center for vSphere deployments, performance will be affected because vNICs that request promiscuous mode receive a copy of every packet.
- For NSX Data Center for vSphere deployments, no RARP requests are generated for the multiple MAC addresses behind a single vNIC when a virtual machine is migrated with vMotion. This can result in a loss of connectivity.
Procedure
2 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
3 Disable port security and security groups on the port where you want to configure MAC learning.
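This step can be performed with the Neutron CLI; the port UUID is a placeholder:

```shell
# Remove all security groups and disable port security on the port.
neutron port-update <port-uuid> --no-security-groups \
  --port-security-enabled=false
```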
Prerequisites
- Configure the new edge cluster to use the appropriate distributed switch. You can create a new distributed switch for the zone if desired.
- In NSX Data Center for vSphere, create a transport zone that includes the new edge cluster.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
4 Uncomment the nsxv_availability_zones parameter and set its value to the name of the
availability zone that you want to create.
The value of this parameter can include multiple availability zones. Separate multiple names with
commas (,).
zone_name: Enter the name of the availability zone that you want to configure.
resource_pool_id: Enter the managed object identifier (MOID) of the resource pool that you created for the new availability zone.
datastore_id: Enter the MOID of the datastore that you want to use for the new availability zone.
edge_ha: Enter True to enable high availability for edge nodes or False to disable it.
ha_datastore_id: Enter the MOID of the datastore that you want to use for high availability for edge nodes. If you set edge_ha to False, do not specify a value for the ha_datastore_id parameter.
external_network: Enter the MOID of the external network port group on the distributed switch for the new availability zone.
vdn_scope_id: Enter the MOID of the transport zone that you created for the new availability zone.
mgt_net_id: Enter the MOID of the management network for your deployment.
mgt_net_proxy_ips: Enter the IP addresses of the metadata proxy server for your deployment.
dvs_id: Enter the MOID of the distributed switch for the new availability zone.
Ensure that there is one copy of the preceding parameters for each availability zone configured.
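Putting the parameters together, an availability-zone section of custom.yml might look like the following sketch. The zone name and all MOID values are placeholders, and the surrounding layout is an assumption; follow the commented template in your custom.yml for the exact structure:

```yaml
# Hypothetical availability-zone definition; all identifiers are placeholders.
nsxv_availability_zones: az2
zone_name: az2
resource_pool_id: resgroup-42
datastore_id: datastore-17
edge_ha: True
ha_datastore_id: datastore-18
external_network: dvportgroup-55
vdn_scope_id: vdnscope-2
mgt_net_id: dvportgroup-12
mgt_net_proxy_ips: 192.0.2.100
dvs_id: dvs-21
```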
What to do next
Prerequisites
- Verify that your edge cluster has at least two hosts. If not, you might receive an anti-affinity error.
- If you want to specify edge host groups, create and configure the host groups in vSphere.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
5 If you want to use edge host groups, uncomment the nsxv_edge_host_groups parameter and set its
value to the two edge host groups that you created, separated by a comma (,).
9 If your environment already includes NSX Edge nodes, enable HA on those nodes and migrate them
to the specified host groups.
To find the ID of an NSX Edge node, you can run the sudo -u neutron nsxadmin -r edges -o nsx-list command.
If you want to migrate only specific edge nodes, you can use the following command:
Edge HA is enabled for the desired nodes. If you specified edge host groups, current and future edge
nodes are created in those groups.
What to do next
You can update the edge host groups in custom.yml after the original configuration. After deploying
custom.yml, run the following commands to update the environment:
Then perform Step 9 again to migrate edge nodes to the new host groups.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
4 Uncomment the nsxv_tenant_router_types parameter and specify the router types that you want
to make available to tenants.
You can enter exclusive, shared, distributed, or any combination separated by commas (,).
The values of the nsxv_tenant_router_types parameter are used in order as the default router
types.
Tenants can create routers only of the types listed. If a tenant creates a router without specifying a type,
the first available type is used by default.
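For example, to offer shared and distributed routers, with shared used by default:

```yaml
# custom.yml: router types available to tenants, in default-preference order.
nsxv_tenant_router_types: shared,distributed
```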
After you enable BGP, the logical subnets created by your tenants are advertised outside of your environment without requiring source NAT or floating IP addresses. You must first create a VXLAN-based external network that you later use as the internal interface for your gateway edges.
Procedure
3 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
4 Create an IPv4 address scope for future tenant subnets and the subnet of your external VXLAN
network.
Option Description
--pool-prefix Enter the network address of the subnet pool in CIDR format (for example, 192.0.2.0/24). Subnets will be allocated from this network.
--default-prefixlen Enter the network prefix length (in bits) to use for new subnets that are created without specifying a prefix length.
--address-scope Enter the name of the IPv4 address scope that you created in Step 4.
Note OpenStack will advertise this subnet pool to the physical fabric. Specify a prefix that is not currently in use.
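The address scope and subnet pool creation can be sketched with the neutron client of this release; all names and the prefix below are illustrative, and the exact client syntax should be verified against your environment:

```shell
# Step 4: create a shared IPv4 address scope (hypothetical name "bgp-scope").
neutron address-scope-create --shared bgp-scope 4

# Step 5: create the tenant subnet pool in that scope. The prefix must
# not already be in use, because BGP advertises it to the physical fabric.
neutron subnetpool-create tenant-pool --pool-prefix 192.0.2.0/24 --default-prefixlen 28 --address-scope bgp-scope
```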
Option Description
--pool-prefix Enter the network address of the subnet pool in CIDR format (for example, 198.51.100.0/24). Subnets will be allocated from this network.
--default-prefixlen Enter the network prefix length (in bits) to use for new subnets that are created without specifying a prefix length.
--address-scope Enter the name of the IPv4 address scope that you created in Step 4.
This command creates a new logical switch in NSX Data Center for vSphere.
Option Description
external-network Enter the name of the VXLAN-based external network that you created in Step 7.
external-subnet-address Enter the network address for the subnet in CIDR format (for example, 198.51.100.0/28).
--allocation-pool Enter the first and last IP addresses of the range that you want to allocate from this subnet.
--subnetpool Enter the subnet pool that you created in Step 5 for the external network.
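A subnet on the external network with these options might be created as follows; the network, pool, and address values are hypothetical, and the range shown uses the documentation prefix:

```shell
# Create the external subnet on the VXLAN-based external network,
# drawing addresses from the external subnet pool created earlier.
openstack subnet create ext-subnet \
    --network vxlan-ext-net \
    --subnet-pool ext-subnet-pool \
    --subnet-range 198.51.100.0/28 \
    --allocation-pool start=198.51.100.2,end=198.51.100.14
```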
Option Description
local-as Enter the local AS number for the edge node. The edges and physical routers cannot be in the same AS.
external-iface Enter the managed object identifier (MOID) of the port group associated with the VLAN that connects the edge nodes to the physical routers. After the colon, enter the IP address of the edge node on the management network.
internal-iface Enter the virtual wire identifier of the VXLAN-based external network. After the colon, enter the IP address of the edge node on the physical network. To find the virtual wire identifier, run the openstack network show external-network-name command and locate the value of the provider:physical_network parameter.
For the gw-edge-ids parameter, use the edge identifier (for example, edge-4) instead of the name.
You can run the sudo -u neutron nsxadmin -r bgp-gw-edge -o view command to display the
identifier of each BGP edge node.
Option Description
remote-as Enter the AS number of the physical routers connected to the edge nodes.
a Ensure that the AS of the physical routers is the remote AS of the edge nodes.
Tenant users can create their own BGP routers. A tenant user must have the admin role to configure a router without SNAT.
a Create two logical switches for a tenant and subnet pools for them.
BGP works with all OpenStack logical router form factors: shared, distributed, and exclusive.
BGP dynamic routing is now configured on the provider side and tenants can also use it.
Standard security groups are created and managed by tenants, whereas provider security groups are
created and managed by the cloud administrator. Provider security groups take precedence over
standard security groups and are enforced on all virtual machines in a project.
For instructions about standard security groups, see "Working with Security Groups" in the VMware
Integrated OpenStack User's Guide.
Procedure
Note Provider security group rules block the specified traffic, whereas standard security rules allow
the specified traffic.
Option Description
--direction Specify ingress to block incoming traffic or egress to block outgoing traffic. If you do not include this parameter, ingress is used by default.
--protocol Specify the protocol to block. Enter an integer from 0 to 255 or one of the following values:
n icmp
n icmpv6
n tcp
n udp
To block all protocols, do not include this parameter.
--remote-ip-prefix Enter the source network of traffic to block (for example, 10.10.0.0/24). This parameter cannot be used together with the --remote-group-id parameter.
--remote-group-id Enter the name or ID of the source security group of traffic to block. This parameter cannot be used together with the --remote-ip-prefix parameter.
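A rule using these options might be added to an existing provider security group as follows; the group name and source network are hypothetical:

```shell
# Block inbound TCP traffic from 10.10.0.0/24 for every VM in the
# project that the provider security group "provider-sg-1" covers.
openstack security group rule create provider-sg-1 \
    --direction ingress \
    --protocol tcp \
    --remote-ip-prefix 10.10.0.0/24
```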
The provider security group rules are enforced on all newly created ports on virtual machines in the
specified project and cannot be overridden by tenant-defined security groups.
What to do next
You can enforce one or more provider security groups on existing ports by running the following
command:
Provider and standard security groups can both consume NSX Data Center for vSphere security policies.
Rule-based provider and standard security groups can also be used together with security policy-based
security groups. However, a security group associated with a security policy cannot also contain rules.
Security policies take precedence over all security group rules. If more than one security policy is
enforced on a port, the order in which the policies are enforced is determined by NSX Data Center for
vSphere. You can change the order in the vSphere Web Client on the Security > Firewall page under
Networking and Security.
Prerequisites
Create the desired security policies in NSX Data Center for vSphere. See "Create a Security Policy" in the
NSX Administration Guide.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
Option Description
nsxv_default_policy_id Enter the ID of the NSX Data Center for vSphere security policy that you want to associate with the default security group for new projects. If you do not want to use a security policy by default, you can leave this parameter commented out. To find the ID of a security policy, select Home > Networking & Security and click Service Composer. Open the Security Policies tab and click the Show Columns icon at the bottom left of the table. Select Object Id and click OK. The ID of each security policy is displayed in the table.
nsxv_allow_tenant_rules_with_policy Enter true to allow tenants to create security groups and rules or false to prevent tenants from creating security groups or rules.
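Set in custom.yml, the two parameters might look like the following excerpt; the policy ID is illustrative:

```yaml
# /opt/vmware/vio/custom/custom.yml (excerpt)
nsxv_default_policy_id: policy-11
nsxv_allow_tenant_rules_with_policy: false
```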
7 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
8 If you want to use additional security groups with security policies, you can perform the following
steps:
n To associate an NSX Data Center for vSphere security policy with a new security group, create
the group and update it with the desired policy:
n To migrate an existing security group to a security policy-based group, run the following
command:
Note This command removes all rules from the specified security group. Ensure that the target
policy is configured such that the network connection will not be interrupted.
9 Configure Neutron to prioritize NSX Data Center for vSphere security policies over security groups.
Authentication and Identity 4
In VMware Integrated OpenStack, authentication and identity management are provided by the Keystone
component. In addition to SQL-backed OpenStack users, you can also configure authentication through
LDAP or through identity federation with VMware Identity Manager.
n Domain Management
Domain Management
You can create domains to manage users and tenants.
All VMware Integrated OpenStack deployments contain the local and Default domains.
n The local domain includes service users and is backed by a local SQL database.
n The Default domain contains standard OpenStack users. If you configure LDAP during VMware
Integrated OpenStack installation (single domain), the Default domain is backed by LDAP and also
contains LDAP users. Otherwise, the Default is backed by the local SQL database.
You can create and manage additional domains as needed. For example, you can create a separate
domain for federated users. To manage domains, select Identity > Domains on the VMware Integrated
OpenStack dashboard.
VMware Integrated OpenStack supports SQL plus one or more domains as an identity source, up to a
maximum of 10 domains.
Prerequisites
Contact your LDAP administrator or use tools such as ldapsearch or Apache Directory Studio to obtain
the correct values for LDAP settings.
Procedure
1 In the vSphere Web Client, select Home > VMware Integrated OpenStack.
4 Click the Add (plus sign) icon to configure a new LDAP source or the Edit (pencil) icon to modify an
existing configuration.
Option Description
Active Directory domain name Specify the full Active Directory domain name.
Keystone domain name Enter the Keystone domain name for the LDAP source. Do not use default or local as the Keystone domain.
Bind user Enter the user name to bind to Active Directory for LDAP requests.
Bind password Enter the password to allow the LDAP client access to the LDAP server.
Domain controllers (Optional) Enter the IP addresses of one or more domain controllers, separated with commas (,). If you do not specify a domain controller, VMware Integrated OpenStack will automatically choose an existing Active Directory domain controller.
Site (Optional) Enter a specific deployment site within your organization to limit LDAP searching to that site.
User Tree DN (Optional) Enter the search base for users (for example, DC=vmware, DC=com). In most Active Directory deployments, the top of the user tree is used by default.
Important If your directory contains more than 1,000 objects (users and groups), you must apply a filter to ensure that fewer than 1,000 objects are returned. For more information about filters, see "Search Filter Syntax" in the Microsoft documentation at https://docs.microsoft.com/en-us/windows/win32/adsi/search-filter-syntax.
Group tree DN (Optional) Enter the search base for groups. The LDAP suffix is used by default.
Option Description
LDAP admin user Enter an LDAP user to act as an administrator for the domain. If you specify an LDAP admin user, the admin project will be created in the Keystone domain for LDAP, and this user will be assigned the admin role in that project. This user can then log in to Horizon and perform other operations in the Keystone domain for LDAP.
If you do not specify an LDAP admin user, you must use the OpenStack command-line interface to add a project to the Keystone domain for LDAP and assign the admin role to an LDAP user in that project.
You can select the Advanced settings check box to display additional LDAP configuration fields.
Option Description
User objectclass (Optional) Enter the LDAP object class for users.
User ID attribute (Optional) Enter the LDAP attribute mapped to the user ID. This value cannot be a multi-valued attribute.
User name attribute (Optional) Enter the LDAP attribute mapped to the user name.
User mail attribute (Optional) Enter the LDAP attribute mapped to the user email.
User password attribute (Optional) Enter the LDAP attribute mapped to the password.
Group objectclass (Optional) Enter the LDAP object class for groups.
Group ID attribute (Optional) Enter the LDAP attribute mapped to the group ID.
Group name attribute (Optional) Enter the LDAP attribute mapped to the group name.
Group member attribute (Optional) Enter the LDAP attribute mapped to the group member name.
Group description attribute (Optional) Enter the LDAP attribute mapped to the group description.
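For the filter requirement noted under User Tree DN, a typical Active Directory search filter that restricts results to members of a single group might look like the following; the group DN is hypothetical:

```
(&(objectClass=user)(memberOf=CN=OpenStackUsers,OU=Groups,DC=vmware,DC=com))
```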
7 Click OK.
8 If you did not specify an LDAP admin user, configure a project and administrator for the Keystone
domain for LDAP.
b Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
e In the Keystone domain for LDAP, assign the admin role to the LDAP user.
openstack role add admin --user ldap-username --user-domain ldap-domain --domain ldap-domain
f In the new project, assign the admin role to the LDAP user.
openstack role add admin --user ldap-username --user-domain ldap-domain --project new-project --project-domain ldap-domain
LDAP authentication is configured on your VMware Integrated OpenStack deployment. You can log in to
the VMware Integrated OpenStack dashboard as the LDAP admin user that you specified during
configuration.
Users can authenticate with VMware Identity Manager over the Security Assertion Markup Language (SAML) 2.0 protocol. Federated users must authenticate using the VMware Integrated OpenStack dashboard. The OpenStack command-line interface is not supported.
Prerequisites
n Ensure that your VMware Identity Manager instance can communicate with the VMware Integrated
OpenStack management network.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
Option Description
federation_idp_id Enter a name for the identity provider. This name is used in OpenStack Management Server command-line operations and cannot include special characters or spaces.
federation_idp_name Enter a display name for the identity provider. This name is shown to users under Authenticate using when they log in to the VMware Integrated OpenStack dashboard.
vidm_address Enter the FQDN of your VMware Identity Manager instance (for example, https://vxlan-vm-2-10.network.example.com).
vidm_password Enter the password for the VMware Identity Manager administrator.
vidm_insecure Enter false to verify TLS certificates or true to disable certificate verification.
vidm_group Enter the user group in VMware Identity Manager to use for federation.
b Select the admin project from the drop-down menu in the title bar.
f Click Save.
VMware Integrated OpenStack is integrated with VMware Identity Manager, and federated users and
groups are imported into OpenStack. When you access the VMware Integrated OpenStack dashboard,
you can choose the VMware Identity Manager identity provider to log in as a federated user.
OpenStack Projects and Users 5
In VMware Integrated OpenStack, cloud administrators manage permissions through user, group, and
project definitions. Projects in OpenStack equate to tenants in vCloud Suite. You can control network
security on the project level through provider security groups or NSX Data Center for vSphere security
policies.
Procedure
2 Select the admin project from the drop-down menu in the title bar.
4 On the Project Information tab, enter a name and description and select whether to enable the
project.
6 (Optional) On the Project Groups tab, add user groups to the project.
The VMware Integrated OpenStack dashboard assigns an ID to the new project, and the project is listed
on the Projects page.
Note The generated project ID is 32 characters long. However, when filtering by project ID in the security group sections of Neutron server logs or in vRealize Log Insight, use only the first 22 characters.
What to do next
In the Actions column to the right of each project, you can modify project settings, including adding and
removing users and groups, modifying project quotas, and changing the name or enabled status of the
project.
If you disable a project, it is no longer accessible to its members, but its instances continue to run, and
project data is retained. Users that are assigned only to disabled projects cannot log in to the VMware
Integrated OpenStack dashboard.
You can select one or more projects and click Delete Projects to remove them permanently. Deleted
projects cannot be restored.
Prerequisites
Create and enable at least one OpenStack project. See Create an OpenStack Project.
Procedure
2 Select the admin project from the drop-down menu in the title bar.
Option Description
Password/Confirm Password Enter a preliminary password for the user. The password can be changed after the user logs in for the first time.
Primary Project Select the project to which the user is assigned. A user account must be assigned to at least one project.
Role Select a role for the user. The user inherits the permissions assigned to the specified role.
Enable Select Enable to allow the user to log in and perform OpenStack operations.
What to do next
In the Actions column to the right of each user, you can modify user settings, change the user password,
and enable or disable the user.
If you want to assign a single user to multiple projects, select Identity > Projects and click Manage
Members next to the desired project.
You can create a group containing multiple users for simpler administration. See Create a User Group.
You can select one or more users and click Delete Users to remove them permanently. Deleted users
cannot be restored.
Prerequisites
Procedure
2 Select the admin project from the drop-down menu in the title bar.
5 In the Actions column to the right of the new group, click Manage Members.
What to do next
You can add the user group when you create or modify a project. All users in the group inherit the roles
specified in the project for the group.
OpenStack Instances 6
Instances are virtual machines that run in the cloud.
You can manage instances for users in various projects. You can view, terminate, edit, perform a soft or
hard reboot, create a snapshot from, and migrate instances. You can also view the logs for instances or
start a VNC console for an instance.
n Migrate an Instance
n If a virtual machine has multiple disks, the disks are imported as Cinder volumes.
n Existing networks are imported as provider networks of type portgroup with access restricted to the
given tenant.
n After a virtual machine with a specific network backing is imported, the same network cannot be
imported to a different project.
n Neutron ports are automatically created based on the IP and MAC address of the network interface
card on the virtual machine.
Note If the DHCP server cannot maintain the same IP address during lease renewal, the instance
information in OpenStack will show the incorrect IP address. To avoid this problem, use static DHCP
bindings on existing DHCP servers and do not run new OpenStack instances on imported networks.
You import VMs using the Data Center Command-Line Interface (DCLI) on the OpenStack Management
Server.
Prerequisites
n Deploy VMware Integrated OpenStack with NSX Data Center for vSphere or VDS networking.
Importing virtual machines is not supported for NSX-T Data Center deployments.
n Verify that the virtual machines that you want to import are in the same vCenter Server instance.
Procedure
1 In vSphere, add the clusters containing the desired virtual machines as compute clusters in your
VMware Integrated OpenStack deployment. For instructions, see Add Compute Clusters to Your
Deployment.
3 If you want to prevent imported virtual machines from being relocated or renamed, update your
deployment configuration.
a If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
If you cannot connect to the server, see DCLI Cannot Connect to Server.
Note When you execute a command, DCLI prompts you to enter the administrator credentials for
your vCenter Server instance. You can save these credentials to avoid entering your username and
password every time.
Option Description
--cluster Enter the compute cluster that contains the virtual machines that you want to import.
--tenant-mapping {FOLDER | RESOURCE_POOL} Specify whether to map imported virtual machines to OpenStack projects based on their location in folders or resource pools. If you do not include this parameter, all imported VMs will become instances in the import_service project by default.
--root-folder ROOT_FOLDER If you specified FOLDER for the --tenant-mapping parameter, you can provide the name of the root folder containing the virtual machines to be imported. All virtual machines in the specified folder or any of its subfolders are imported as instances into an OpenStack project with the same name as the folder in which they are located.
Note If you specify --tenant-mapping FOLDER but do not specify --root-folder, the name of the top-level folder in the cluster is used by default.
--root-resource-pool ROOT_RESOURCE_POOL If you specified RESOURCE_POOL for the --tenant-mapping parameter, you can provide the name of the root resource pool containing the virtual machines to be imported. All virtual machines in the specified resource pool or any of its child resource pools are imported as instances into an OpenStack project with the same name as the resource pool in which they are located.
com vmware vio vm unmanaged importvm --vm vm-id [--tenant project-name] [--nic-mac-address
nic-mac --nic-ipv4-address nic-ip] [--root-disk root-disk-path] [--nics specifications]
Option Description
--vm Enter the identifier of the virtual machine that you want to import. You can view the ID values of all unmanaged virtual machines by running the com vmware vio vm unmanaged list command.
--tenant Specify the OpenStack project into which you want to import the virtual machine. If you do not include this parameter, the import_service project is used by default.
--nic-mac-address Enter the MAC address of the network interface card on the virtual machine. If you do not include this parameter, the import process attempts to discover the MAC and IP addresses automatically.
Note If you include this parameter, you must also include the --nic-ipv4-address parameter.
Option Description
--nic-ipv4-address Enter the IP address and prefix for the network interface card on the virtual machine. Enter the value in CIDR notation (for example, 10.10.1.1/24). This parameter must be used together with the --nic-mac-address parameter.
--root-disk For a virtual machine with multiple disks, specify the root disk datastore path in the following format: --root-disk '[datastore1] foo/foo_1.vmdk'
--nics For a virtual machine with multiple NICs, specify the MAC and IP addresses of each NIC in JSON format. Use the following key-value pairs:
n mac_address: MAC address of the NIC in standard format
n ipv4_address: IPv4 address in CIDR notation
For example:
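Using the two keys described above, an invocation for a two-NIC virtual machine might look like the following sketch; the VM ID, project name, and all MAC and IP addresses are hypothetical:

```shell
# Import an unmanaged VM with two NICs, supplying the MAC and IP
# address of each NIC as a JSON list for the --nics parameter.
com vmware vio vm unmanaged importvm --vm vm-17 --tenant project-a --nics '[{"mac_address": "00:50:56:aa:bb:01", "ipv4_address": "10.10.1.10/24"}, {"mac_address": "00:50:56:aa:bb:02", "ipv4_address": "10.10.2.10/24"}]'
```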
Migrate an Instance
You can live-migrate an OpenStack instance to a different compute node.
Note Instances managed by VMware Integrated OpenStack must be migrated by using OpenStack
commands. Do not use vCenter Server or other methods to migrate OpenStack instances.
Prerequisites
n The source and target compute nodes must both be located within the same vCenter Server instance.
n The source and target compute nodes must have at least one distributed switch in common. If two
distributed switches are attached to the source compute node but only one distributed switch is
attached to the target compute node, live migration will succeed but the OpenStack instance will be
connected only to the port group of the distributed switch common to both compute nodes.
Procedure
3 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
Specify the name of the vSphere data center and datastore that contain the volume attached to the
instance.
n To find the name of a compute node, run the openstack host list command and view the
Host Name column.
n To find the UUID of the instance, run the openstack server list command and view the ID
column.
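Assuming the standard OpenStack client syntax of this release, a live migration invocation might look like the following sketch; the compute node name and instance UUID are placeholders taken from the two lookup commands above:

```shell
# Live-migrate the instance to the target compute node reported by
# `openstack host list`. Not the verified VIO-specific command form.
openstack server migrate --live target-compute-node instance-uuid
```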
What to do next
You can run the openstack server show instance-uuid command to confirm that the instance has been
migrated to the desired compute node.
Prerequisites
n Do not create live resize-enabled instances using SR-IOV-enabled ports. Live resize is not compatible
with SR-IOV.
n Do not use live resize-enabled instances in tenant virtual data centers. Live resize is not compatible
with tenant virtual data centers.
In addition, the following conditions apply for live resizing of disk size:
n Use a SCSI virtual disk adapter type for the image. IDE adapter types are not supported.
n Deploy virtual machines from the image as full clones. Linked clones cannot be live-resized.
Procedure
2 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
openstack image create image-name --disk-format {vmdk | iso} --container-format bare --file image-file {--public | --private} [--property vmware_adaptertype="vmdk-adapter-type"] [--property vmware_disktype="{sparse | preallocated | streamOptimized}"] --property vmware_ostype="operating-system" --property img_linked_clone="false" --property os_live_resize="{vcpu | memory | disk}"
Option Description
--container-format Enter bare. The container format argument is not currently used by Glance.
{--public | --private} Include --public to make the image available to all users or --private to make the image available only to the current user.
--property vmware_adaptertype Specify the adapter type of the VMDK disk. For disk live resize, you must specify a SCSI adapter. If you do not include this parameter, the adapter type is determined by introspection.
--property os_live_resize Specify vcpu, memory, disk, or any combination separated by commas (for example, vcpu,memory,disk).
When you create virtual machines using the image that you defined in this procedure, those virtual
machines can be resized without needing to be powered off.
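A concrete invocation of the command above might look like the following; the image name, file, and property values are illustrative:

```shell
# Create a full-clone image that supports live resize of CPU, memory,
# and disk. A SCSI adapter type is required for disk live resize.
openstack image create ubuntu-live-resize \
    --disk-format vmdk --container-format bare \
    --file ./ubuntu.vmdk --private \
    --property vmware_adaptertype="lsiLogicsas" \
    --property vmware_disktype="streamOptimized" \
    --property vmware_ostype="ubuntu64Guest" \
    --property img_linked_clone="false" \
    --property os_live_resize="vcpu,memory,disk"
```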
Affinity and anti-affinity policies cannot determine the specific ESXi host on which instances are placed.
These policies only control whether instances are placed on the same hosts as other instances in a
server group. To place instances on specific hosts, see Use DRS to Control OpenStack Instance
Placement.
Prerequisites
Verify that the intended filter configuration does not conflict with any existing administrative configuration,
such as DRS rules that manage instance placement on hosts.
Procedure
2 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
Option Description
--policy Enter affinity to place instances on the same host or anti-affinity to prevent instances from being placed on the same host.
4 When you launch an instance, pass the server group as a scheduler hint to implement affinity or anti-
affinity.
openstack server create instance-name --image image-uuid --flavor flavor-name --nic net-id=network-uuid --hint group=servergroup-uuid
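A worked sketch of both steps, with hypothetical names and placeholder UUIDs:

```shell
# Create an anti-affinity server group, then launch an instance into
# it by passing the group UUID as a scheduler hint.
openstack server group create --policy anti-affinity web-group
openstack server create web-01 \
    --image image-uuid --flavor m1.small \
    --nic net-id=network-uuid \
    --hint group=servergroup-uuid
```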
What to do next
Confirm that the affinity rules and instances are configured correctly. In vCenter Server, select the
compute cluster, open the Configure tab, and click VM/Host Rules.
Procedure
Prerequisites
n Ensure that the compute cluster contains at least one virtual machine. If the compute cluster does not
contain any virtual machines, create a dummy virtual machine for this procedure.
n On the compute cluster, enable DRS and set DRS Automation to Partially automated or Fully
automated.
Procedure
1 In the vSphere Web Client, select the compute cluster and click Configure.
3 Create a VM group.
a Click Add.
b Enter a name and select VM Group from the Type drop-down menu.
c Click Add.
e Click OK.
a Click Add.
b Enter a name and select Host Group from the Type drop-down menu.
c Click Add.
e Click OK.
What to do next
Create a rule that determines how OpenStack instances assigned to the VM group are distributed on the
hosts in the host group.
Prerequisites
n Define at least one VM group and at least one host group. See Define VM and Host Groups for
Placing OpenStack Instances.
n On the compute cluster, enable DRS and set DRS Automation to Partially automated or Fully
automated.
Procedure
1 In the vSphere Web Client, click the compute cluster and select Configure.
4 Enter a name for the rule and select the Enable rule option.
6 In the VM Group drop-down menu, select the VM group that contains the OpenStack instances you
want to place.
Option Description
Must run on hosts in group OpenStack instances in the specified VM group must run on hosts in the specified host group.
Should run on hosts in group OpenStack instances in the specified VM group should, but are not required to, run on hosts in the specified host group.
Must not run on hosts in group OpenStack instances in the specified VM group must never run on hosts in the specified host group.
Should not run on hosts in group OpenStack instances in the specified VM group should not, but may, run on hosts in the specified host group.
8 In the Host Group drop-down menu, select the host group that contains the hosts on which the
OpenStack instances will be placed and click OK.
What to do next
In the VMware Integrated OpenStack dashboard, you can modify the metadata for a specific image to
ensure that all instances generated from that image are automatically included in the VM group and
therefore subject to the DRS rule.
Prerequisites
Procedure
1 Log in to the VMware Integrated OpenStack dashboard as a cloud administrator and select the admin
project from the drop-down menu in the title bar.
4 Click the down arrow next to the flavor that you want to use and select Update Metadata.
5 In the Available Metadata pane, expand VMware Driver Options and click the Add (plus sign) icon
next to DRS VM group.
6 Enter the desired VM group name as the value of the vmware_vm_group parameter and click Save.
All OpenStack instances generated from this source image will be automatically assigned to the specified
VM group and governed by its DRS rules.
Note Configuring virtual interface quotas is not supported in NSX-T Data Center. The VIF limit,
reservation, and shares settings cannot be used with NSX-T Data Center deployments.
QoS resource allocation can also be specified by flavor extra specs or image metadata. If flavor and
image settings conflict, the image metadata configuration takes precedence.
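The dashboard quotas described below can also be set from the CLI as flavor extra specs. The key names here follow the Nova VMware driver convention (quota:*) and should be verified for your release:

```shell
# Assumed extra-spec keys: cap CPU at 2 GHz and disk I/O at 1000 IOPS
# for any instance launched with this flavor.
openstack flavor set m1.medium \
    --property quota:cpu_limit=2000 \
    --property quota:disk_io_limit=1000
```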
Procedure
2 Select the admin project from the drop-down menu in the title bar.
n To use flavor extra specs for QoS configuration, perform the following steps:
c Select Update Metadata next to the flavor that you want to use.
n To use image metadata for QoS configuration, perform the following steps:
c Click the down arrow next to the image that you want to use and select Update Metadata.
5 Click the Add (plus sign) icon next to the item that you want to use.
Quota: CPU Limit Specify the maximum CPU allocation in megahertz. The value 0 indicates that CPU usage is not limited.
Quota: CPU Shares Level Specify the level of CPU shares allocated. You can enter custom and add the CPU Shares Value quota to provide a custom value.
Quota: CPU Shares Value Specify the number of CPU shares allocated. If the CPU Shares Level quota is not set to custom, this value is ignored.
Quota: Disk IO Limit Specify the maximum disk transaction allocation in IOPS. The value 0 indicates that disk transactions are not limited.
Quota: Disk IO Reservation Specify the guaranteed disk transaction allocation in IOPS.
Quota: Disk IO Shares Level Specify the level of disk transaction shares allocated. You can enter custom and add the Disk IO Shares Value quota to provide a custom value.
Quota: Disk IO Shares Value Specify the number of disk transaction shares allocated. If the Disk IO Shares Level quota is not set to custom, this value is ignored.
Quota: Memory Limit Specify the maximum memory allocation in megabytes. The value 0 indicates that memory usage is not limited.
Quota: Memory Shares Level Specify the level of memory shares allocated. You can enter custom and add the Memory Shares Value quota to provide a custom value.
Quota: Memory Shares Value Specify the number of memory shares allocated. If the Memory Shares Level quota is not set to custom, this value is ignored.
Quota: VIF Limit Specify the maximum virtual interface bandwidth allocation in Mbps. The value 0 indicates that virtual interface bandwidth is not limited.
Quota: VIF Reservation Specify the guaranteed virtual interface bandwidth allocation in Mbps.
Quota: VIF Shares Level Specify the level of virtual interface bandwidth shares allocated. You can enter custom and add the VIF Shares Value quota to provide a custom value.
Quota: VIF Shares Value Specify the number of virtual interface bandwidth shares allocated. If the VIF Shares Level quota is not set to custom, this value is ignored.
6 Click Save.
You can now deploy QoS-enabled instances by configuring them with the flavor or image that you
modified in this procedure.
To apply QoS settings to an existing instance, resize the instance and select the flavor with the desired
QoS settings. The specified settings take effect after the resize process is complete.
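The quota settings above can also be applied from the command line as flavor extra specs. A hedged sketch, assuming a flavor named qos.medium (hypothetical) and the quota keys described in the table:

```shell
# Limit CPU to 2000 MHz and allocate a custom number of CPU shares.
openstack flavor set \
  --property quota:cpu_limit=2000 \
  --property quota:cpu_shares_level=custom \
  --property quota:cpu_shares_share=2048 \
  qos.medium
```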
Prerequisites
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
5 Uncomment the nova_pbm_default_policy parameter and set its value to the name of the storage
policy to use by default when an instance is created with a flavor that is not associated with a storage
policy.
10 Select the admin project from the drop-down menu in the title bar.
14 In the Available Metadata pane, expand VMware Policies and click the Add (plus sign) icon next to
Storage Policy.
15 Enter the desired storage policy name as the value of the vmware:storage_policy parameter and
click Save.
The specified vSphere storage policy is applied to all new OpenStack instances that are created from the
flavor. The default storage policy is applied to all new instances that are created from a flavor not
associated with a storage policy.
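As a sketch, the storage policy association described above can also be made with the CLI, assuming a flavor named m1.small and a vSphere storage policy named gold-policy (hypothetical name):

```shell
# Associate the vSphere storage policy with the flavor; new
# instances created from this flavor then use that policy.
openstack flavor set --property vmware:storage_policy=gold-policy m1.small
```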
Important This feature is offered in VMware Integrated OpenStack Carrier Edition only. For more
information, see "VMware Integrated OpenStack Licensing" in the VMware Integrated OpenStack
Installation and Configuration Guide.
Virtual CPU pinning enables high latency sensitivity and ensures that all memory and an entire physical
core are reserved for the virtual CPU of an OpenStack instance. You configure virtual CPU pinning on a
flavor and then create instances with that flavor.
Procedure
2 Select the admin project from the drop-down menu in the title bar.
4 Create a new flavor or choose an existing flavor to use for virtual CPU pinning.
5 Select Update Metadata next to the flavor that you want to use.
6 In the Available Metadata pane, select and configure the required metadata.
a Expand CPU Pinning and click the Add (plus sign) icon next to CPU Pinning policy.
c Expand VMware Policies and click the Add (plus sign) icon next to VM latency sensitivity.
e Expand VMware Quota and click the Add (plus sign) icon next to CPU Reservation in
Percentage and Memory Reservation in Percentage.
7 Click Save.
What to do next
You can now enable virtual CPU pinning on an instance by configuring it with the flavor that you modified
in this procedure.
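As a hedged sketch of the metadata that steps 6a through 6e configure, the following commands set the equivalent extra specs on a flavor from the CLI. The key names shown (hw:cpu_policy, vmware:latency_sensitivity_level, quota:cpu_reservation_percent, quota:memory_reservation_percent) are assumptions inferred from the dashboard labels above; verify them against your deployment before use:

```shell
# Pin vCPUs, raise latency sensitivity, and reserve
# 100% of CPU and memory for the instance.
openstack flavor set \
  --property hw:cpu_policy=dedicated \
  --property vmware:latency_sensitivity_level=high \
  --property quota:cpu_reservation_percent=100 \
  --property quota:memory_reservation_percent=100 \
  pinned.flavor
```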
Important This feature is offered in VMware Integrated OpenStack Carrier Edition only. For more
information, see "VMware Integrated OpenStack Licensing" in the VMware Integrated OpenStack
Installation and Configuration Guide.
NUMA links small, cost-effective nodes using a high-performance connection to provide low latency and
high throughput. This performance is often required for virtual network functions (VNFs) in
telecommunications environments. For information about NUMA in vSphere, see "Using NUMA Instances
with ESXi" in vSphere Resource Management.
To obtain information about your current NUMA configuration, run the following command on your ESXi
hosts:
Prerequisites
n Ensure that vCPUs, memory, and physical NICs intended for virtual machine traffic are placed on the same NUMA node.
n In vSphere, create a teaming policy that includes all physical NICs on the NUMA node. See "Teaming
and Failover Policy" in vSphere Networking.
Procedure
2 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
3 Create a Neutron network on which all physical NICs are located on a single NUMA node.
5 Launch an OpenStack instance using the flavor and network created in this procedure.
Important This feature is offered in VMware Integrated OpenStack Carrier Edition only. For more
information, see "VMware Integrated OpenStack Licensing" in the VMware Integrated OpenStack
Installation and Configuration Guide.
Passthrough associates a physical device with a virtual machine, reducing the latency caused by
virtualization. The following table shows how passthrough is implemented in VMware Integrated
OpenStack.
Nova compute
n Collects the list of SR-IOV devices and updates the list of PCI device specifications.
n Embeds the host object ID in device specifications.
Nova PCI manager
n Creates and maintains a device pool with address, vendor ID, product ID, and host ID.
n Allocates and deallocates PCI devices to instances based on PCI requests.
Nova scheduler
n Schedules instance placement on hosts that match the PCI requests.
Important This feature is offered in VMware Integrated OpenStack Carrier Edition only. For more
information, see "VMware Integrated OpenStack Licensing" in the VMware Integrated OpenStack
Installation and Configuration Guide.
This procedure uses OpenStack Neutron to enable passthrough for networking devices. For non-networking devices, see Configure Passthrough for Non-Networking Devices.
Prerequisites
n Verify that your OpenStack deployment is using VDS or NSX Data Center for vSphere networking.
Deployments with NSX-T Data Center do not support passthrough.
n To enable SR-IOV, see "Enable SR-IOV on a Host Physical Adapter" in vSphere Networking.
n To enable DirectPath I/O, see "Enable Passthrough for a Network Device on a Host" in vSphere
Networking.
n Create a dedicated compute cluster for SR-IOV devices. DRS rules do not apply to these devices.
n To persist the MAC address of a physical device, add its cluster as a compute node before enabling
direct passthrough on the device. If direct passthrough has already been enabled, you can disable it,
restart the cluster, and enable direct passthrough again.
Procedure
2 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
--tenant-id - Specify the UUID of the project for which to create the port. You can find the UUID of a project by running the openstack project list command.
--provider:physical_network - For a VLAN network, specify the managed object identifier (MOID) of the distributed switch. For a port group network, specify the MOID of the port group.
--provider:segmentation_id - If you want to create a VLAN-based network, enter the VLAN ID.
network-id - Specify the UUID of the network on which to create the port. You can find the UUID of a network by running the openstack network list command.
--tenant-id - Specify the UUID of the project for which to create the port. You can find the UUID of a project by running the openstack project list command.
Note Port security is not supported for direct and direct-physical ports and is automatically disabled for the port created.
You can now deploy passthrough-enabled virtual machines by configuring them with the port that you
created during this procedure.
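A sketch of the network and port creation that the option tables above describe, using the neutron CLI. The MOID, VLAN ID, and object names are hypothetical placeholders:

```shell
# Create a VLAN provider network backed by the distributed switch
# with MOID dvs-55 (placeholder value).
neutron net-create sriov-net \
  --tenant-id $(openstack project show demo -f value -c id) \
  --provider:network_type vlan \
  --provider:physical_network dvs-55 \
  --provider:segmentation_id 100

# Create a passthrough port on that network. Port security is
# disabled automatically for direct and direct-physical ports.
neutron port-create sriov-net --name sriov-port --vnic-type direct
```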
Important This feature is offered in VMware Integrated OpenStack Carrier Edition only. For more
information, see "VMware Integrated OpenStack Licensing" in the VMware Integrated OpenStack
Installation and Configuration Guide.
This procedure uses OpenStack Nova to enable passthrough for non-networking devices. For networking devices, see Configure Passthrough for Networking Devices.
Prerequisites
n Verify that your OpenStack deployment is using VDS or NSX Data Center for vSphere networking.
Deployments with NSX-T Data Center do not support passthrough.
n To enable SR-IOV, see "Enable SR-IOV on a Host Physical Adapter" in vSphere Networking.
n To enable DirectPath I/O, see "Enable Passthrough for a Network Device on a Host" in vSphere
Networking.
n Create a dedicated compute cluster for SR-IOV devices. DRS rules do not apply to these devices.
n To persist the MAC address of a physical device, add its cluster as a compute node before enabling
direct passthrough on the device. If direct passthrough has already been enabled, you can disable it,
restart the cluster, and enable direct passthrough again.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
4 Uncomment the nova_pci_alias parameter and modify its value to match your device.
7 Select the admin project from the drop-down menu in the title bar.
c Select Update Metadata next to the flavor that you want to use.
d In the Available Metadata pane, expand VMware Driver Options for Flavors and click the Add
(plus sign) icon next to PCI Passthrough alias.
virtual-device-name - The virtual device name that you specified in Step 4 of this procedure.
device-count - The number of virtual functions that can be called in one request. This value can range from 1 to 10.
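A sketch of the flavor extra spec that step 7d configures, assuming a device alias named my-pf was defined in nova_pci_alias (hypothetical name) and one device per request:

```shell
# Request one passthrough device matching the alias defined in nova_pci_alias.
openstack flavor set --property "pci_passthrough:alias"="my-pf:1" passthrough.flavor
```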
c Click the down arrow next to the flavor that you want to use and select Update Metadata.
d In the Available Metadata pane, expand VMware Driver Options and click the Add (plus sign)
icon next to Virtual Network Interface.
e Select your device from the drop-down list next to the hw_vif_model parameter and click Save.
You can now deploy passthrough-enabled virtual machines by configuring them with the flavor and image
that you modified during this procedure.
7 OpenStack Flavors
In OpenStack, a flavor is a preset configuration that defines the compute, memory, and storage capacity
of an instance. When you create an instance, you configure the server by selecting a flavor.
Administrative users can create, edit, and delete flavors.
n Create a Flavor
n Delete a Flavor
Flavor     VCPUs   RAM (MB)   Root Disk (GB)
m1.tiny    1       512        1
m1.small   1       2048       20
m1.medium  2       4096       40
m1.large   4       8192       80
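Flavors like those above can also be created or extended from the command line. A minimal sketch:

```shell
# Create a flavor equivalent to m1.medium:
# 2 vCPUs, 4096 MB RAM, 40 GB root disk.
openstack flavor create --vcpus 2 --ram 4096 --disk 40 m1.custom

# List flavors to confirm.
openstack flavor list
```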
Create a Flavor
Administrative users can create custom flavors.
Prerequisites
Verify that you are logged in to the VMware Integrated OpenStack dashboard as a cloud administrator.
Procedure
1 On the VMware Integrated OpenStack dashboard, select the admin project from the drop-down menu
in the title bar.
VCPUs - Number of virtual CPUs that an instance made from this flavor will use.
RAM MB - Megabytes of RAM for virtual machines made from this flavor.
Root Disk GB - Gigabytes of disk used for the root (/) partition in instances made from this flavor.
Ephemeral Disk GB - Gigabytes of disk space to use for the ephemeral partition. If unspecified, the value is 0 by default. Ephemeral disks offer machine-local disk storage linked to the life cycle of a VM instance. When a VM is terminated, all data on the ephemeral disk is lost. Ephemeral disks are not included in snapshots.
5 Click Create Flavor at the bottom of the dialog box to complete the process.
6 (Optional) Specify which projects can access instances created from this flavor.
a On the Flavors page, click Edit Flavor in the Actions column of the flavor.
b In the Edit Flavor dialog box, click the Flavor Access tab.
c Use the toggle controls to select the projects that can access the flavor.
d Click Save.
a On the Flavors page, click Edit Flavor in the Actions column of the flavor.
b In the Edit Flavor dialog box, modify the settings in either the Flavor Info or Flavor Access tab.
c Click Save.
Delete a Flavor
You can manage the number and variety of flavors by deleting those that no longer meet users' needs, that duplicate other flavors, or that are otherwise unnecessary.
Note You cannot undo the deletion of a flavor. Do not delete default flavors.
Prerequisites
You must be logged in to the VMware Integrated OpenStack dashboard as a cloud administrator to
perform this task.
Procedure
1 In the VMware Integrated OpenStack dashboard, select the admin project from the drop-down menu
in the title bar.
You can also use image metadata to specify many flavor metadata settings. If a conflict occurs, the image
metadata configuration overrules the flavor metadata configuration.
Prerequisites
n Verify that you are logged in to the VMware Integrated OpenStack dashboard as a cloud
administrator.
Procedure
2 Select the admin project from the drop-down menu in the title bar.
4 (Optional) Create a flavor specific to the intended use of the metadata application.
Create a custom flavor to contain the specific configuration. The custom flavor leaves the original
flavor configuration intact and available for other instance creation.
6 In the Actions column of the image listing, click the down arrow and select Update Metadata.
7 Click the plus sign (+) next to the metadata properties to add.
In the column under Existing Metadata, the newly added metadata properties appear.
For example, you might have to select an option from a drop-down list or enter a string value.
9 Click Save.
The newly added flavor metadata properties are now configured. This configuration is applied to all future
OpenStack instances that are created from this flavor.
Note Configuring virtual interface quotas is not supported in NSX-T Data Center. The following extra
specs cannot be used with NSX-T Data Center deployments:
n quota:vif_limit
n quota:vif_reservation
n quota:vif_shares_level
n quota:vif_shares_share
If image metadata conflicts with a flavor extra spec, the image metadata takes precedence over the flavor extra spec.
hw:vifs_multi_thread - Specify true to provide each virtual interface with its own transmit thread.
quota:cpu_shares_level - Specify the level of CPU shares allocated. You can enter custom and add the cpu_shares_share parameter to provide a custom value.
quota:disk_io_shares_level - Specify the level of disk transaction shares allocated. You can enter custom and add the disk_io_shares_share parameter to provide a custom value.
quota:memory_shares_level - Specify the level of memory shares allocated. You can enter custom and add the memory_shares_share parameter to provide a custom value.
vmware:tenant_vdc - Specify the UUID of the tenant virtual data center in which to place instances.
8 Cinder Volumes and Volume Types
Volumes are block storage devices that you attach to instances to enable persistent storage.
As a cloud administrator, you can manage volumes and volume types for users in various projects. You
can create and delete volume types, and you can view and delete volumes.
Cloud users can attach a volume to a running instance or detach a volume and attach it to another
instance at any time. For information about cloud user operations, see "Working with Volumes" in the
VMware Integrated OpenStack User Guide.
Procedure
2 Select the admin project from the drop-down menu in the title bar.
3 Select Admin > System > Volume Types and click Create Volume Type.
5 If you want to make the volume type available to certain projects only, deselect Public.
7 If you want to associate a vSphere storage profile with the volume type, perform the following steps:
b Click Create.
d Enter the name of the vSphere storage profile in the Value text box.
e Click Create.
8 If you want to set a default adapter for the volume type, perform the following steps:
b Click Create.
The following values are supported: lsiLogic, busLogic, lsiLogicsas, paraVirtual, and
ide.
e Click Create.
9 If your volume type is not public, select Edit Access in the Actions column and specify the projects
that can use the volume type.
If you do not specify any projects, the volume type is visible only to cloud administrators.
Tenants can select a volume type when creating a volume or modifying an existing volume. The settings
defined by the specified volume type are then applied to the new volume.
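The volume type and its extra specs can also be created with the CLI. A hedged sketch, assuming a volume type named gold and a vSphere storage profile named "Gold Profile" (both hypothetical names):

```shell
# Create the volume type, then associate a vSphere storage profile
# and a default adapter with it via extra specs.
openstack volume type create gold
openstack volume type set \
  --property vmware:storage_profile="Gold Profile" \
  --property vmware:adapter_type=paraVirtual \
  gold
```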
What to do next
If you want to change the name or description of a volume type, click Edit Volume Type in the Actions
column and make the desired changes. To delete unneeded volume types, select them in the Volume
Types table and click Delete Volume Types.
Note When a volume is created from an image, the value of the vmware_adaptertype metadata in the
image is used to determine the adapter type for the volume.
n IDE: ide
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
4 Uncomment the cinder_volume_default_adapter_type parameter and set its value to the desired
adapter type.
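As a sketch, the resulting custom.yml entry might look like the following; lsiLogicsas is chosen here only as an example value:

```
cinder_volume_default_adapter_type: lsiLogicsas
```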
Note You cannot migrate a volume that has snapshots attached. You must detach all snapshots before
migrating a volume.
Prerequisites
n On the datastore cluster, enable Storage DRS and set it to No Automation (Manual Mode).
Procedure
dc-name - Enter the name of the data center that contains the desired datastore.
See "Place a Datastore in Maintenance Mode" in the vSphere Resource Management document.
When you place the datastore in maintenance mode, the datastore is evacuated and the volumes
automatically migrate to other datastores in the same datastore cluster.
Prerequisites
Detach all snapshots from the volumes that you want to migrate.
Procedure
--source-dc - Enter the data center containing the volumes that you want to migrate. This parameter must be used together with the --source-ds parameter. If you want to migrate specified volumes only, do not include this parameter.
--source-ds - Enter the datastore containing the volumes that you want to migrate. This parameter must be used together with the --source-dc parameter. If you want to migrate specified volumes only, do not include this parameter.
--volume-ids - Enter the UUID of the volume that you want to migrate. You can include multiple UUIDs separated by commas (,). If you want to migrate all volumes from a datastore, use the --source-dc and --source-ds parameters instead of this parameter.
dest-dc-name - Enter the name of the data center that contains the datastore to which you want to migrate volumes.
dest-ds-name - Enter the name of the datastore to which you want to migrate volumes.
--ignore-storage-policy - Include this parameter to migrate volumes to the target datastore even if the datastore does not comply with the storage policy of the volume.
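As a hedged sketch of the migration command these options belong to, assuming the viocli volume-migrate subcommand and hypothetical data center and datastore names (verify the exact syntax with viocli volume-migrate --help on your deployment):

```shell
# Migrate all volumes from datastore DS1 in data center DC1
# to datastore DS2 in data center DC2.
sudo viocli volume-migrate --source-dc DC1 --source-ds DS1 DC2 DS2
```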
Note After an attached volume is migrated, the corresponding shadow virtual machine remains on the
original datastore but has no disk. When you detach the volume, the disk will be re-attached to the
shadow virtual machine.
Prerequisites
Detach all snapshots from the volumes that you want to migrate.
Procedure
This step prepares all volumes on the specified datastore for migration.
dc-name - Enter the data center that contains the desired volume.
n To find the name of a compute node, run the openstack host list command and view the
Host Name column.
n To find the UUID of the instance, run the openstack server list command and view the ID
column.
4 In the vSphere Web Client, migrate the shadow virtual machine for the volume.
For information, see "Migrate a Virtual Machine to New Storage in the vSphere Web Client" in the
vCenter Server and Host Management document.
5 If you want to migrate the shadow virtual machine to a cluster in a different availability zone, update
the availability zone for the volume.
The Cinder volume and the disk of the corresponding shadow virtual machine are migrated to the new
datastore.
vmware:clone_type - Specify the clone type. You can specify the following types:
n Full clone: full
n Linked clone: linked
vmware:storage_profile - Enter the name of the storage policy to use for new volumes.
vmware:adapter_type - Specify the adapter type used to attach the volume. You can specify the following types:
n IDE: ide
n LSI Logic: lsiLogic
n LSI Logic SAS: lsiLogicsas
n BusLogic Parallel: busLogic
n VMware Paravirtual SCSI: paraVirtual
9 Glance Images
In the OpenStack context, an image is a file that contains a virtual disk from which you can install an
operating system on a virtual machine. You create an instance in your OpenStack cloud by using one of
the images available. The VMware Integrated OpenStack image service component natively supports
images that are packaged in the ISO, OVA, and VMDK formats.
If you have images in vSphere that you want to use in OpenStack, you can export them in one of the
supported formats and upload them to the image service. You can also use the glance-import tool to
convert RAW, QCOW2, VDI, and VHD images to the VMDK format and use them in OpenStack.
n Migrate an Image
Before importing an image, verify that it is in one of the natively supported formats (ISO, OVA, VMDK) or in a format that can be converted to VMDK during the import process (RAW, QCOW2, VDI, VHD).
n VMDK
n ISO
n OVA
Procedure
2 Select the admin project from the drop-down menu in the title bar.
3 Select Admin > System > Images and click Create Image.
Source Type - Select File to select a local file or URL to specify a remote file.
Disk Adapter Type - For VMDK images, select the adapter type.
Minimum Disk (GB) - Specify the minimum disk size for the image in gigabytes.
Minimum RAM (MB) - Specify the minimum RAM for the image in megabytes.
Visibility - Select Public to make the image available to all projects or Private to make the image available only to the current project.
What to do next
Tenants can launch OpenStack instances using the imported image. For instructions, see "Start an
OpenStack Instance from an Image" in the VMware Integrated OpenStack User's Guide.
In the Actions column next to an image, you can edit the image, update its metadata, delete the image,
or create a volume from the image.
n VMDK
n ISO
n OVA
Procedure
2 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
--name - Enter a name for the image file in the image service.
--disk_format - Enter the disk format of the source image. You can specify iso or vmdk. For images in OVA format, use vmdk as the disk format.
--container_format - Enter bare. The container format argument is not currently used by Glance.
--visibility - Enter public to make the image available to all users or private to make the image available only to the current user.
Note
n For disks using paravirtual adapters, include this parameter and set it to paraVirtual.
n For disks using LSI Logic SAS adapters, include this parameter and set it to lsiLogicsas.
What to do next
You can run the glance image-list command to see the name and status of the images in your
deployment.
Tenants can launch OpenStack instances using the imported image. For instructions, see "Start an
OpenStack Instance from an Image" in the VMware Integrated OpenStack User's Guide.
You can also use this procedure to import images in the supported OVA and VMDK formats if desired.
Procedure
3 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
--name - Enter a name for the image file in the image service.
--image-format - Specify the format of the source image file. Non-VMDK images are converted automatically to the VMDK format. You can use the following formats:
n VMDK
n OVA
n RAW
n QCOW2
n VDI
n VHD
The task information and status are displayed. Large images might take some time to import. You can run the following command to check the status of the import task:
What to do next
You can run the glance image-list command to see the name and status of the images in your
deployment.
Tenants can launch OpenStack instances using the imported image. For instructions, see "Start an
OpenStack Instance from an Image" in the VMware Integrated OpenStack User's Guide.
Prerequisites
n Verify that the virtual machine template is located in the same vCenter Server instance as your
VMware Integrated OpenStack deployment.
n Verify that the virtual machine template does not have multiple disks, a CD-ROM drive, or a floppy
drive.
Procedure
2 Switch to the root user and load the cloud administrator credentials file.
sudo su -
source ~/cloudadmin.rc
You can check the VM and Templates View in the vSphere Client to confirm the location of the
template.
The specified virtual machine template is imported as an image. You can launch OpenStack instances
from the image or configure additional settings, such as image metadata.
Migrate an Image
You can migrate an image to another datastore while preserving its UUID and metadata.
Prerequisites
Determine the UUID of the image that you want to migrate and of the project containing the image. You
can use the openstack image list command to display the UUID of each image and the openstack
image show command to display the UUID of the project that contains a specified image.
Procedure
1 In the vSphere Web Client, open the VMs and Templates view and locate the image that you want to
migrate.
The image is located in the folder for the project that contains it.
6 Click Finish.
The image is moved to the new datastore. You can continue to launch instances from it normally.
Before VMware Integrated OpenStack 2.5, the default behavior was to store Nova snapshots as stream-optimized VMDK disks. This procedure enables you to restore the pre-2.5 default.
Procedure
##############################
# Glance Template Store
# options that affect the use of glance template store
##############################
#glance_default_store: vi
nova_snapshot_format: streamOptimized
#cinder_image_format: template
Before VMware Integrated OpenStack 2.5, the default behavior was to store the Glance images as
streamOptimized VMDK disks. This procedure enables you to restore the pre-2.5 default.
Procedure
##############################
# Glance Template Store
# options that affect the use of glance template store
##############################
#glance_default_store: vi
#nova_snapshot_format: template
cinder_image_format: streamOptimized
Note Configuring virtual interface quotas is not supported in NSX-T Data Center. The following
metadata cannot be used with NSX-T Data Center deployments:
n quota_vif_limit
n quota_vif_reservation
n quota_vif_shares_level
n quota_vif_shares_share
If image metadata conflicts with a flavor extra spec, the image metadata takes precedence over the flavor extra spec.
quota_cpu_shares_level - Specify the level of CPU shares allocated. You can enter custom and add the cpu_shares_share parameter to provide a custom value.
quota_disk_io_shares_level - Specify the level of disk transaction shares allocated. You can enter custom and add the disk_io_shares_share parameter to provide a custom value.
quota_memory_shares_level - Specify the level of memory shares allocated. You can enter custom and add the memory_shares_share parameter to provide a custom value.
vmware_latency_sensitivity_level - Specify the latency sensitivity level for virtual machines. Setting this key will adjust certain settings on virtual machines.
vmware_tenant_vdc - Specify the UUID of the tenant virtual data center in which to place instances.
10 Backup and Recovery
You can back up your VMware Integrated OpenStack installation to ensure that you can recover from
errors that may occur.
For information about backing up Cinder, see Configure the Backup Service for Cinder.
Prerequisites
Procedure
What to do next
If an error occurs on your deployment, you can recover individual nodes or the entire deployment. To
recover individual nodes, see Recover OpenStack Nodes. To restore your deployment, see Restore Your
Deployment from a Backup.
Prerequisites
n Verify that the owner of the NFS share folder has the same UID as Cinder on the controller nodes.
The default Cinder UID is 107.
Procedure
2 If your deployment is not using a custom.yml file, copy the template custom.yml file to the /opt/vmware/vio/custom directory.
5 Uncomment the cinder_backup_share parameter and set its value to the location of the shared
NFS directory.
You can now use the cinder backup-create command to back up your Cinder volumes.
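A sketch of the backup command, using a placeholder for the volume UUID:

```shell
# Back up a Cinder volume to the configured NFS share.
cinder backup-create --name nightly-backup VOLUME_UUID

# List backups to confirm that the backup completed.
cinder backup-list
```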
Prerequisites
Verify that you have a backup of the management server and database available. See Back Up Your
Deployment.
Procedure
backup-folder - Enter the name of the backup folder for OpenStack Management Server data. These folders are in the format vio_ms_yyyymmddhhmmss.
nfs-host-ip - Specify the IP address of the NFS host where the backup folder is located.
backup-folder - Enter the name of the backup folder for the OpenStack database. These folders are in the format vio_os_db_yyyymmddhhmmss.
nfs-host-ip - Specify the IP address of the NFS host where the backup folder is located.
The OpenStack Management Server and OpenStack database are restored to the state of the backups.
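As a hedged sketch, assuming the restore subcommands take the backup folder and NFS host IP in the order listed in the tables above (the folder names and IP address here are hypothetical; verify the syntax with viocli restore --help):

```shell
# Restore the OpenStack Management Server from a backup folder.
sudo viocli restore mgmt_server vio_ms_20181101120000 192.0.2.10

# Restore the OpenStack database from a backup folder.
sudo viocli restore openstack_db vio_os_db_20181101120000 192.0.2.10
```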
When you recover a VMware Integrated OpenStack node, it returns to the state of a newly deployed
node.
Prerequisites
n If you want to recover all database nodes, you must have a backup of the OpenStack database. See
Back Up Your Deployment.
n Ensure that the datastore has sufficient free space to hold the original and the recovered node at the same time. The recovery process deletes the original node, but space for both nodes is required temporarily. To reduce the space required, you can power off and delete the existing node before recovering it.
Procedure
To display the nodes in your deployment, use the viocli show command. The values shown in the
VM Name and Role columns can be used to recover nodes.
a To recover a non-database node, run the following command:
Option Description
-r Enter the names of the roles to recover. All nodes assigned to the specified role
will be recovered. You can specify -n in addition to this parameter to recover
single nodes outside of the specified role.
sudo viocli recover {-n node1... | -r role} -dn backup-name -nfs nfs-host:/backup-folder
Option Description
-n Enter the names of the database nodes to recover. You can specify DB nodes
for HA deployments or the ControlPlane node for compact or tiny deployments.
-nfs Specify the NFS host and directory where the backup is located in the format
remote-host:/remote-dir.
The recovery process may take several minutes. You can check the status of your node by viewing your
OpenStack deployment in the vSphere Web Client.
11 Troubleshooting VMware Integrated OpenStack
If errors occur, you can perform troubleshooting actions to restore your OpenStack deployment to
operating status.
/var/log/apache2/error.log Logs access errors for the VMware Integrated OpenStack Manager.
Controller Logs
Name and Location Description
Database Logs
Name and Location Description
The parameters in the following table are located in the custom.yml file. You must run the viocli
deployment configure command before your changes can take effect.
InvalidAllocationCapacityExceeded: Unable to create allocation for 'MEMORY_MB' on resource provider
InvalidAllocationCapacityExceeded: Unable to create allocation for 'VCPU' on resource provider
nova_disk_allocation_ratio    0.0    Allocation ratio of virtual disk space to physical disk space for disk filters. Increase the value to address the following error in nova-placement-api.log:
InvalidAllocationCapacityExceeded: Unable to create allocation for 'DISK_GB' on resource provider
keystone_token_expiration_time    7200    Time in seconds that a token remains valid. Increase the value to address the following error in various log files:
WARNING keystoneclient.middleware.auth_token [-] Authorization failed for token
haproxy_nova_compute_client_timeout    1200s    Time in seconds that the load balancer waits for a response from Nova acting as a client. Increase these values to address the following error in nova-compute.log:
haproxy_cinder_client_lb_timeout    300s    Time in seconds that the load balancer waits for a response from Cinder acting as a client. Increase these values to address the following error in cinder-volume.log:
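The allocation-ratio arithmetic behind the InvalidAllocationCapacityExceeded errors can be sketched as follows. This is a simplified model of the placement calculation, not VIO code, and the numbers are hypothetical.

```python
# Simplified sketch of how the placement service derives usable capacity.
# Not VIO's actual implementation; values are hypothetical.

def usable_capacity(total, reserved, allocation_ratio):
    """Capacity (in the resource's own units) that placement will allocate."""
    return (total - reserved) * allocation_ratio

# A host with 64 GB of RAM (65536 MB), 4 GB reserved, and a ratio of 1.0
# can place 61440 MB of instance memory.
assert usable_capacity(65536, 4096, 1.0) == 61440.0

# Raising the ratio to 1.5 oversubscribes the resource, so more allocations
# succeed before InvalidAllocationCapacityExceeded is raised.
assert usable_capacity(65536, 4096, 1.5) == 92160.0
```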
Problem
VMware Integrated OpenStack installed successfully, but the vApp is not displayed in vSphere.
Solution
a Click Fix.
3 If the problem persists, confirm that the OpenStack Management Server can connect to the vCenter
Server instance.
4 Log in to the OpenStack Management Server and check the logs in the /var/log/oms folder to
confirm that the OpenStack Management Server service initiated properly.
7 If the problem persists, log in to the vCenter Server virtual machine and restart the vSphere Web
Client service.
8 Log out of the vSphere Web Client and log back in.
If you use the command-line interface to rename availability zones, you might see different names in the
vSphere Web Client and the VMware Integrated OpenStack dashboard. In the Availability Zones column
on the Manage > Nova Compute tab for your deployment, desynchronized availability zones are
displayed in red. You can resynchronize the availability zones to fix the issue.
Procedure
1 Log in to the OpenStack Management Server and list the availability zones in your OpenStack
deployment.
Problem
Cause
Solution
For example,
cinder_backup_file_size: 52428800
Problem
Attempting to verify the Cinder backup configuration results in a permission error when creating the initial
backup.
Cause
VMware Integrated OpenStack does not have the correct permissions to write to the NFS share.
Solution
cd /var/lib/cinder/backup_mount/
chown -R cinder:cinder *
This corrects the configuration and gives the Cinder component permission to access the NFS share.
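As a quick pre-check before retrying the backup, you could verify the ownership of the mount point programmatically. This helper is illustrative and not part of VIO; the default UID of 107 comes from the backup prerequisites.

```python
import os

# Illustrative pre-check (not part of VIO): confirm that a path is owned by
# the Cinder user before running cinder backup-create. The default Cinder
# UID is 107, as noted in the backup prerequisites.
def owned_by_cinder(path, cinder_uid=107):
    return os.stat(path).st_uid == cinder_uid
```

You would run this against /var/lib/cinder/backup_mount/ on a controller node after the chown shown above.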
Problem
Cause
DCLI cannot connect to the vAPI endpoint because the service is not running.
Solution
What to do next
12 Using the OpenStack Management Server APIs
VMware Integrated OpenStack includes RESTful APIs that you can use to deploy and manage
OpenStack.
Before using the APIs, you must authenticate with the OpenStack Management Server API endpoint
using the administrator credentials for your vCenter Server instance. To authenticate, make a POST
request to https://mgmt-server-ip:8443/v1/j_spring_security_check and include
j_username=vcenter-user&j_password=vcenter-password in the request body.
After authentication, you are granted access to the APIs until the session expires. If using a web browser,
you must accept the server certificate to establish a secure channel between the browser and the
OpenStack Management Server before you can submit an API request.
For more information about APIs, see the VMware Integrated OpenStack API reference at https://code.vmware.com/apis/252. If you have installed VMware Integrated OpenStack, you can also view the API specifications at https://mgmt-server-ip:8443/swagger-ui.html.
For NSX deployments, the nsxadmin utility is also provided to perform certain network-related operations. For more information, see the nsxadmin documentation at https://opendev.org/x/vmware-nsx/src/branch/master/doc/source/admin_util.rst.
The parameters supported by viocli and viopatch are described as follows. You can also run viocli
-h or viopatch -h to display the supported parameters.
Parameter    Mandatory or Optional    Description
NFS-VOLUME Mandatory Name or IP address of the target NFS volume and directory in the format
remote-host:remote-dir.
For example: 192.168.1.77:/backups
You can also run viocli backup -h or viocli backup --help to display the parameters for the
command.
The backup file of the management server is labeled with a timestamp in vio_ms_yyyymmddhhmmss
format. The backup file of the OpenStack database is labeled with a timestamp in
vio_os_db_yyyymmddhhmmss format.
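Because the folder names embed their creation time, the most recent backup can be selected by parsing the timestamp. The following is a sketch; the folder names are examples.

```python
# Sketch: pick the most recent management-server backup folder by parsing
# the vio_ms_yyyymmddhhmmss timestamp from its name. Names are examples.
from datetime import datetime

def backup_time(folder, prefix="vio_ms_"):
    return datetime.strptime(folder[len(prefix):], "%Y%m%d%H%M%S")

folders = ["vio_ms_20180830215406", "vio_ms_20181113010203"]
latest = max(folders, key=backup_time)  # the November 2018 backup
```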
Note To generate a certificate signing request (CSR) or update an existing certificate, see viocli
deployment Command.
The viocli certificate command supports a variety of actions to perform different tasks. The
following parameters apply to all actions.
You can run viocli certificate -h or viocli certificate --help to display the actions and
parameters for the command. You can also use the -h or --help option on any action to display
parameters for the action. For example, viocli certificate add -h will show parameters for the add
action.
--cert CERT-FILE Mandatory Certificate to add. The certificate must be in PEM format.
You can also run viocli dbverify -h or viocli dbverify --help to display the parameters for the
command.
The viocli deployment command supports a variety of actions to perform different tasks. The following
parameters apply to all actions.
You can run viocli deployment -h or viocli deployment --help to display the parameters for the
command. You can also use the -h or --help option on any action to display parameters for the action.
For example, viocli deployment configure -h will show parameters for the configure action.
--limit {controller | compute | db | memcache}    Optional    Updates the configuration for only the specified component.
--tags TAGS Optional Runs only those configuration tasks that are marked with
the specified tags.
Parameter    Mandatory or Optional    Description
-c COUNTRY or --country_name COUNTRY Optional Two-letter ISO country code in which the organization applying
for the certificate is located.
If you do not include this option in the command, you will be
prompted to enter a value.
-f CERT-PATH or --file CERT-PATH Optional Absolute path to the desired certificate file. The certificate must
be in PEM format.
Parameter    Mandatory or Optional    Description
--node NODE Optional Obtains log files for the specified nodes only. The following values are
supported:
n ceilometer
n compute
n controller
n db
n dhcp
n lb
n local
n memcache
n mongodb
n mq
n storage
-nrl or --non-rollover-log-only    Optional    Collects only those logs that have not been archived.
--recent-logs Optional Collects only the log file to which the service process is currently writing.
n Missing processes
Parameter    Mandatory or Optional    Description
--period SECONDS Optional Uses data from the specified period (in seconds) only. For example, --period
300 will assess the status of the deployment in the last 5 minutes.
--format {text | json}    Optional    Outputs the status report in the specified format. If you do not enter a value, text is used by default.
You can also run viocli ds-migrate-prep -h or viocli ds-migrate-prep --help to display the
parameters for the command.
End Point Operations Management is a component of VMware vRealize Operations Manager. For more
information, see the vRealize Operations Manager Help document for your version.
The viocli epops command supports a variety of actions to perform different tasks. The following
parameters apply to all actions.
You can run viocli epops -h or viocli epops --help to display the parameters for the command.
You can also use the -h or --help option on any action to display parameters for the action. For
example, viocli epops install -h will show parameters for the install action.
-s TGZ-FILE or --source TGZ-FILE Mandatory Local path or URL to the agent installer package.
-c PROP-FILE or --config PROP-FILE Mandatory Local path to the agent configuration file.
-c PROP-FILE or --config PROP-FILE Mandatory Local path to the agent configuration file.
The viocli identity command supports a variety of actions to perform different tasks. The following
parameters apply to all actions.
You can run viocli identity -h or viocli identity --help to display the parameters for the
command. You can also use the -h or --help option on any action to display parameters for the action.
For example, viocli identity add -h will show parameters for the add action.
Parameter    Mandatory or Optional    Description
--id DOMAIN Mandatory Identifier of an identity source. The local domain is represented by 0 and the default
domain by 1.
--id DOMAIN Mandatory Identifier of an identity source. The local domain is represented by 0 and the default
domain by 1.
Parameter    Mandatory or Optional    Description
--id DOMAIN Mandatory Identifier of an identity source. The local domain is represented by 0 and the default
domain by 1.
n Orphaned Nova instances are those for which a corresponding virtual machine does not exist in
vSphere.
n Orphaned virtual machines are those for which a corresponding instance does not exist in the
OpenStack database.
n Orphaned shadow virtual machines are those for which a corresponding Cinder volume does not
exist in the OpenStack database.
The viocli inventory-admin command collects vCenter Server and OpenStack credentials from
internal inventories. This command requires that you authenticate as an OpenStack administrator. The
domain and user name of this account are set in /root/cloudadmin.rc as the OS_PROJECT_DOMAIN_NAME,
OS_USERNAME, and OS_USER_DOMAIN_NAME variables. You can also set the password for this account as the
OS_PASSWORD environment variable to avoid entering this password every time you run the command.
The viocli inventory-admin command supports a variety of actions to perform different tasks. The
following parameters apply to all actions.
Parameter    Mandatory or Optional    Description
--no-grace-period Optional Ignores the grace period when determining whether objects are orphaned.
Objects modified in the past 30 minutes are included in the results only when
this parameter is set.
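The grace-period rule can be sketched as follows. This is an illustrative model of the documented behavior, not the inventory-admin implementation.

```python
# Illustrative model of the 30-minute grace period: an object is reported as
# orphaned only if it was last modified more than 30 minutes ago, unless the
# grace period is ignored (as with --no-grace-period).
from datetime import datetime, timedelta

GRACE = timedelta(minutes=30)

def is_reported(modified_at, now, no_grace_period=False):
    if no_grace_period:
        return True
    return now - modified_at > GRACE

now = datetime(2018, 11, 13, 12, 0)
assert is_reported(datetime(2018, 11, 13, 11, 0), now) is True       # 60 min old
assert is_reported(datetime(2018, 11, 13, 11, 45), now) is False     # 15 min old
assert is_reported(datetime(2018, 11, 13, 11, 45), now, True) is True
```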
You can run viocli inventory-admin -h or viocli inventory-admin --help to display the
parameters for the command. You can also use the -h or --help option on any action to display
parameters for the action. For example, viocli inventory-admin show-instances -h will show
parameters for the show-instances action.
--filename ZONE-MAP Optional Path to the file containing the availability zone map. The file must be in JSON
format.
--cpu-reserve CPU-MIN Optional CPU cycles in MHz to reserve for the VDC.
If you do not enter a value, 0 is used by default.
--cpu-limit CPU-MAX Optional Maximum limit for CPU usage on the VDC (in MHz).
If you do not enter a value, CPU usage is not limited.
--mem-limit MEMORY-MAX Optional Maximum limit for memory consumption on the VDC (in megabytes).
If you do not enter a value, memory consumption is not limited.
To enable LBaaS through the command line interface, see "Configure LBaaS Using the CLI" in the
VMware Integrated OpenStack User's Guide.
Because most OpenStack nodes are stateless, you can recover them without a backup file. However, a
backup file is necessary to recover OpenStack database nodes.
viocli recover [-d NAME] {-n NODE1... | -r ROLE1... [-n NODE1...]} [-dn BACKUP -nfs NFS-VOLUME]
Parameter    Mandatory or Optional    Description
-n, --node NODE    Mandatory unless -r is used.    Recovers one or more nodes. You can specify multiple nodes separated with commas. To display the nodes in your deployment, use the viocli show command. The values shown in the VM Name column can be used as arguments for this parameter.
For example, the following command recovers two nodes from the specified NFS backup file:
viocli recover -n VIO-DB-0 VIO-DB-1 -dn vio_os_db_20150830215406 -nfs 10.146.29.123:/backups
-r ROLE or --role ROLE    Mandatory unless -n is used.    Recovers all nodes assigned to the specified role. You can specify multiple roles separated with commas. You can also specify -n or --node in the same command to recover additional nodes that are not assigned to that role. To display the roles in your deployment, use the viocli show command. The values shown in the Role column can be used as arguments for this parameter.
For example, the following command recovers the nodes assigned to the DB role from the specified NFS backup file:
viocli recover -r DB -dn vio_os_db_20150830215406 -nfs 10.146.29.123:/backups
-dn BACKUP or --dir-name BACKUP    Mandatory for full OpenStack database recovery.    Folder containing the OpenStack database backup files. OpenStack database backup folders are in vio_os_db_yyyymmddhhmmss format. This parameter is mandatory when recovering the following items:
n For an HA deployment: the DB role or all three database nodes (VIO-DB-0, VIO-DB-1, and VIO-DB-2)
n For a compact or tiny deployment: the ControlPlane role or the VIO-ControlPlane-0 node
-nfs NFS-VOLUME    Mandatory for full OpenStack database recovery.    Name or IP address of the target NFS volume and directory in the format remote-host:/remote-dir. For example: 192.168.1.77:/backups. This parameter is mandatory when recovering the following items:
n For an HA deployment: the DB role or all three database nodes (VIO-DB-0, VIO-DB-1, and VIO-DB-2)
n For a compact or tiny deployment: the ControlPlane role or the VIO-ControlPlane-0 node
You can also run viocli recover -h or viocli recover --help to display the parameters for the
command.
Parameter    Mandatory or Optional    Description
NFS-VOLUME Mandatory Name or IP address of the target NFS volume and directory in the
format remote-host:remote-dir.
For example: 192.168.1.77:/backups
You can also run viocli restore -h or viocli restore --help to display the parameters for the
command.
The backup file of the VMware Integrated OpenStack management server is labeled with a timestamp in
vio_ms_yyyymmddhhmmss format. The backup file of the VMware Integrated OpenStack database is
labeled with a timestamp in vio_os_db_yyyymmddhhmmss format.
To roll back a recent patch, see "Roll Back a VMware Integrated OpenStack Patch" in the VMware
Integrated OpenStack Installation and Configuration Guide.
To revert from a recent upgrade, see "Revert to a Previous VMware Integrated OpenStack Deployment"
in the VMware Integrated OpenStack Installation and Configuration Guide.
The viocli services stop command stops only the services running in your deployment. To stop the
entire cluster, including virtual machines, run the viocli deployment stop command instead.
You can also run viocli services -h or viocli services --help to display the parameters for the
command.
Parameter    Mandatory or Optional    Description
-i or --inventory Optional Displays the contents of the inventory file for the current deployment.
-p or --inventory-path Optional Displays the path to the inventory file for the current deployment.
You can also run viocli show -h or viocli show --help to display the parameters for the command.
The viocli upgrade command supports a variety of actions to perform different tasks. The following
parameters apply to all actions.
You can run viocli upgrade -h or viocli upgrade --help to display the parameters for the
command. You can also use the -h or --help option on any action to display parameters for the action.
For example, viocli upgrade prepare -h will show parameters for the prepare action.
NFS-VOLUME    Mandatory    Name or IP address of the target NFS volume and directory in the format remote-host:remote-dir. For example: 192.168.1.77:/backups
Parameter    Mandatory or Optional    Description
NFS-DIR-NAME Mandatory Local mount point to attach the target NFS volume.
BLUE-VIOUSER-PASSWORD Optional Password of the viouser account on the old OpenStack Management
Server.
If you do not include this option in the command, you will be prompted to
enter the password.
Note If possible, use the vSphere Web Client to upgrade your deployment instead of this command.
Note To migrate attached volumes, you must migrate the entire instance.
To migrate volumes for shadow virtual machines, use the viocli ds-migrate-prep command and then
complete the migration using the vSphere Web Client.
Parameter    Mandatory or Optional    Description
--volume-ids UUID1    Mandatory unless --source-dc and --source-ds are used.    Migrates one or more volumes specified by UUID. To specify multiple volumes, separate the UUIDs with commas.
For example, the following command migrates two volumes to datastore DS-01 in data center DC-01:
viocli volume-migrate --volume-ids 25e121d9-1153-4d15-92f8-c92c10b4987f,4f1120e1-9ed4-421a-b65b-908ab1c6bc50 DC-01 DS-01
You can also run viocli volume-migrate -h or viocli volume-migrate --help to display the
parameters for the command.
viocli vros enable [-d NAME] -vt VRA-TENANT -vh VRA-HOST -va VRA-ADMIN -vrh VROS-HOST
Parameter    Mandatory or Optional    Description
-vt VRA-TENANT or --vra_tenant VRA-TENANT Mandatory Tenant to which the vRealize Automation system
administrator belongs.
-va VRA-ADMIN or --vra_admin VRA-ADMIN Mandatory Username of the vRealize Automation system
administrator.
-vrh VROS-HOST or --vros_host VROS-HOST Mandatory IP or host name for the vRealize Orchestrator
OpenStack Plug-In service.
You can also run viocli vros -h or viocli vros --help to display the parameters for the command.
You can also run viopatch add -h or viopatch add --help to display the parameters for the
command.
You must use the viopatch add command to add patches before you can install them.
You can also run viopatch install -h or viopatch install --help to display the parameters for
the command.
You can also run viopatch list -h or viopatch list --help to display the parameters for the
command.
Important The viopatch snapshot take command stops OpenStack services. Services will be
started again when the patch is installed. If you decide not to install a patch after taking a snapshot, you
can manually start OpenStack services by running the viocli services start command.
You can also run viopatch snapshot -h or viopatch snapshot --help to display the parameters for
the command.
To roll back a recent patch, see "Roll Back a VMware Integrated OpenStack Patch" in the VMware
Integrated OpenStack Installation and Configuration Guide.
You can also run viopatch version -h or viopatch version --help to display the parameters for
the command.