AHV-Admin-Guide-v5_20
AHV 5.20
September 7, 2023
Contents
AHV Overview........................................................................................ 4
Storage Overview....................................................................................................................... 4
AHV Turbo........................................................................................................................ 5
Acropolis Dynamic Scheduling in AHV....................................................................................... 6
Disabling Acropolis Dynamic Scheduling.......................................................................... 7
Enabling Acropolis Dynamic Scheduling...........................................................................7
Virtualization Management Web Console Interface.....................................................................8
Viewing the AHV Version on Prism Element......................................................................8
Viewing the AHV Version on Prism Central....................................................................... 9
AHV Cluster Power Outage Handling......................................................................................... 9
Node Management................................................................................11
Nonconfigurable AHV Components.......................................................................................... 11
Nutanix Software............................................................................................................ 11
AHV Settings.................................................................................................................. 11
Controller VM Access...............................................................................................................12
Admin User Access to Controller VM..............................................................................12
Nutanix User Access to Controller VM............................................................................14
Controller VM Password Complexity Requirements.........................................................15
AHV Host Access..................................................................................................................... 16
Initial Configuration........................................................................................................ 17
Accessing the AHV Host Using the Admin Account........................................................ 18
Changing the Root User Password................................................................................. 18
Changing Nutanix User Password...................................................................................19
AHV Host Password Complexity Requirements............................................................... 19
Verifying the Cluster Health..................................................................................................... 20
Putting a Node into Maintenance Mode.................................................................................... 22
Exiting a Node from the Maintenance Mode............................................................................. 24
Shutting Down a Node in a Cluster (AHV)................................................................................ 25
Starting a Node in a Cluster (AHV)...........................................................................................26
Shutting Down an AHV Cluster.................................................................................................27
Rebooting an AHV Node in a Nutanix Cluster........................................................................... 30
Changing CVM Memory Configuration (AHV)............................................................................31
Changing the AHV Hostname................................................................................................... 31
Changing the Name of the CVM Displayed in the Prism Web Console....................................... 32
Compute-Only Node Configuration (AHV Only)......................................................................... 33
Adding a Compute-Only Node to an AHV Cluster............................................................35
Enabling LACP and LAG (AHV Only)............................................................................... 63
VLAN Configuration........................................................................................................ 66
Enabling RSS Virtio-Net Multi-Queue by Increasing the Number of VNIC Queues...................... 69
Changing the IP Address of an AHV Host.................................................................................72
OVAs...................................................................................................152
OVA Restrictions.................................................................................................................... 152
Copyright............................................................................................ 153
AHV OVERVIEW
As the default option for Nutanix HCI, the native Nutanix hypervisor, AHV, represents a unique approach
to virtualization that offers the powerful virtualization capabilities needed to deploy and manage enterprise
applications. AHV complements the HCI value by integrating native virtualization along with networking,
infrastructure, and operations management with a single intuitive interface - Nutanix Prism.
Virtualization teams find AHV easy to learn and transition to from legacy virtualization solutions with
familiar workflows for VM operations, live migration, VM high availability, and virtual network management.
AHV includes resiliency features, including high availability and dynamic scheduling without the need for
additional licensing, and security is integral to every aspect of the system from the ground up. AHV also
incorporates the optional Flow Security and Networking, allowing easy access to hypervisor-based network
microsegmentation and advanced software-defined networking.
See the Field Installation Guide for information about how to deploy and create a cluster. Once you create
the cluster by using Foundation, you can use this guide to perform day-to-day management tasks.
Limitations
For information about AHV configuration limitations, see Nutanix Configuration Maximums webpage.
Nested Virtualization
Nutanix does not support nested virtualization (nested VMs) in an AHV cluster.
Storage Overview
AHV uses a Distributed Storage Fabric to deliver data services such as storage provisioning, snapshots,
clones, and data protection to VMs directly.
In AHV clusters, AOS passes all disks to the VMs as raw SCSI block devices, which keeps the I/O path
lightweight and optimized. Each AHV host runs an iSCSI redirector, which establishes a highly resilient
storage path from each VM to storage across the Nutanix cluster.
QEMU is configured with the iSCSI redirector as the iSCSI target portal. Upon a login request, the
redirector performs an iSCSI login redirect to a healthy Stargate (preferably the local one).
AHV Turbo
AHV Turbo represents significant advances to the data path in AHV. It provides an I/O path that
bypasses QEMU when servicing storage I/O requests, which lowers CPU usage and increases the amount of
storage I/O available to VMs.
With QEMU alone, all I/O travels through a single queue, which can limit system performance. AHV
Turbo instead uses a multi-queue approach to bypass QEMU, allowing data to flow from a VM to storage
more efficiently. The result is much higher I/O capacity and lower CPU usage. The storage queues
automatically scale out to match the number of vCPUs configured for a given VM, which yields higher
performance as the workload scales up.
AHV Turbo is transparent to VMs and is enabled by default on VMs that run in AHV clusters. For
maximum VM performance, ensure that the following conditions are met:
• The latest Nutanix VirtIO package is installed for Windows VMs. For information on how to download
and install the latest VirtIO package, see Installing or Upgrading Nutanix VirtIO for Windows.
Note: Multi-queue is enabled by default in current Linux distributions. For details, refer to the
documentation for your Linux distribution.
In addition to the multi-queue approach for storage I/O, you can also achieve maximum network I/O
performance by using the multi-queue approach for the vNICs in the system.
Note: Ensure that the guest operating system fully supports multi-queue before you enable it. For details,
refer to the documentation for your Linux distribution.
• ADS improves the initial placement of the VMs depending on the VM configuration.
• Nutanix Volumes uses ADS for balancing sessions of the externally available iSCSI targets.
Note: ADS honors all the configured host affinities, VM-host affinities, VM-VM antiaffinity policies, and HA
policies.
By default, ADS is enabled and Nutanix recommends you keep this feature enabled. However, see
Disabling Acropolis Dynamic Scheduling on page 7 for information about how to disable the ADS
feature. See Enabling Acropolis Dynamic Scheduling on page 7 for information about how to enable
the ADS feature if you previously disabled the feature.
ADS monitors the following resources:
Note:
• During migration, a VM consumes resources on both the source and destination hosts as the
High Availability (HA) reservation algorithm must protect the VM on both hosts. If a migration
fails due to lack of free resources, turn off some VMs so that migration is possible.
• If a problem is detected and ADS cannot solve the issue (for example, because of limited
CPU or storage resources), the migration plan might fail. In these cases, an alert is generated.
Monitor these alerts from the Alerts dashboard of the Prism Element web console and take
necessary remedial actions.
Note: For a storage hotspot, ADS looks at the last 40 minutes of data and applies a smoothing algorithm
that favors the most recent data. For a CPU hotspot, ADS looks at the last 10 minutes of data only, that is,
the average CPU usage over the last 10 minutes.
The following are possible reasons why VMs do not migrate even when there is an obvious hotspot:
• A huge VM (16 vCPUs) runs at 100% usage and accounts for 75% of the usage of its AHV host
(which is also at 100% usage), while the other hosts are loaded at approximately 40% usage.
In this situation, the other hosts cannot accommodate the large VM without causing contention there
as well. Lazan (the ADS component) does not prioritize one host or VM over others for contention, so it
leaves the VM where it is hosted.
• Number of all-flash nodes in the cluster is less than the replication factor.
If the cluster has an RF2 configuration, the cluster must have a minimum of two all-flash nodes for
successful migration of VMs on all the all-flash nodes.
Migrations Audit
Prism Central displays the list of all the VM migration operations generated by ADS. In Prism Central, go
to Menu -> Activity -> Audits to display the VM migrations list. You can filter the migrations by clicking
Filters and selecting Migrate in the Operation Type tab. The list displays all the VM migration tasks
created by ADS with details such as the source and target host, VM name, and time of migration.
Procedure
2. Disable ADS.
nutanix@cvm$ acli ads.update enable=false
After you disable the ADS feature, ADS takes no action to resolve contentions. You must take remedial
actions manually, or enable the feature again.
2. Enable ADS.
nutanix@cvm$ acli ads.update enable=true
Procedure
2. The Hypervisor Summary widget on the top left side of the Home page displays the AHV
version.
Procedure
3. Click the host you want to see the hypervisor version for.
4. The Host detail view page displays the Properties widget that lists the Hypervisor Version.
Nutanix Software
Modifying any of the following Nutanix software settings may inadvertently constrain performance of your
Nutanix cluster or render the Nutanix cluster inoperable.
AHV Settings
Nutanix AHV is a cluster-optimized hypervisor appliance.
Alteration of the hypervisor appliance (unless advised by Nutanix Technical Support) is unsupported and
may result in the hypervisor or VMs functioning incorrectly.
Unsupported alterations include (but are not limited to):
Controller VM Access
Although each host in a Nutanix cluster runs a hypervisor independent of other hosts in the cluster, some
operations affect the entire cluster.
Most administrative functions of a Nutanix cluster can be performed through the web console (Prism),
however, there are some management tasks that require access to the Controller VM (CVM) over SSH.
Nutanix recommends restricting CVM SSH access with password or key authentication.
This topic provides information about how to access the Controller VM as an admin user and nutanix user.
admin User Access
Use the admin user access for all tasks and operations that you must perform on the Controller VM.
As an admin user with default credentials, you cannot access nCLI. You must change the default
password before you can use nCLI. Nutanix recommends that you do not create additional CVM
user accounts. Use the default accounts (admin or nutanix), or use sudo to elevate to the root
account.
For more information about admin user access, see Admin User Access to Controller VM on
page 12.
nutanix User Access
Nutanix strongly recommends that you do not use the nutanix user access unless the procedure
(as provided in a Nutanix Knowledge Base article or user guide) specifically requires the use of the
nutanix user access.
For more information about nutanix user access, see Nutanix User Access to Controller VM on
page 14.
You can perform most administrative functions of a Nutanix cluster through the Prism web consoles or
REST API. Nutanix recommends using these interfaces whenever possible and disabling Controller
VM SSH access with password or key authentication. Some functions, however, require logging on to a
Controller VM with SSH. Exercise caution whenever connecting directly to a Controller VM as it increases
the risk of causing cluster issues.
Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does not import or
change any locale settings. The Nutanix software is not localized, and running the commands with any locale
other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables
are set to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not
import or change any locale settings.
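To make this check repeatable, the locale output can be scanned with a small helper. The following is a minimal sketch (the check_locale name is illustrative); it warns about any locale variable set to something other than en_US.UTF-8:

```shell
# Warn about any locale variable whose value is not en_US.UTF-8.
# Reads `locale`-style lines (VAR=value or VAR="value") on stdin.
check_locale() {
  while IFS= read -r line; do
    value=${line#*=}
    value=${value%\"}
    value=${value#\"}
    if [ -n "$value" ] && [ "$value" != "en_US.UTF-8" ]; then
      echo "WARNING: ${line%%=*} is set to '$value'"
    fi
  done
}
```

In an SSH session on the Controller VM, run locale | check_locale, and reconnect with a clean SSH configuration if any warnings appear.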
Note:
• As an admin user, you cannot access nCLI by using the default credentials. If you are logging
in as the admin user for the first time, you must log on through the Prism web console or SSH
to the Controller VM. Also, you cannot change the default password of the admin user through
nCLI. To change the default password of the admin user, you must log on through the Prism
web console or SSH to the Controller VM.
• When you make an attempt to log in to the Prism web console for the first time after you
upgrade to AOS 5.1 from an earlier AOS version, you can use your existing admin user
password to log in and then change the existing password (you are prompted) to adhere to the
password complexity requirements. However, if you are logging in to the Controller VM with
SSH for the first time after the upgrade as the admin user, you must use the default admin user
password (Nutanix/4u) and then change the default password (you are prompted) to adhere to
the Controller VM Password Complexity Requirements.
• You cannot delete the admin user account.
• The default password expiration age for the admin user is 60 days. You can configure the
minimum and maximum password expiration days based on your security requirement.
When you change the admin user password, you must update any applications and scripts using the admin
user credentials for authentication. Nutanix recommends that you create a user assigned with the admin
role instead of using the admin user for authentication. The Prism Element Web Console Guide describes
authentication and roles.
Following are the default credentials to access a Controller VM.
1. Log on to the Controller VM with SSH by using the management IP address of the Controller VM and
the following credentials.
2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:
New password:
Retype new password:
Password changed.
See the requirements listed in Controller VM Password Complexity Requirements to set a secure
password.
For information about logging on to a Controller VM by using the admin user account through the Prism
web console, see Logging Into The Web Console in the Prism Element Web Console Guide.
Note:
• As a nutanix user, you cannot access nCLI by using the default credentials. If you are logging
in as the nutanix user for the first time, you must log on through the Prism web console or
SSH to the Controller VM. Also, you cannot change the default password of the nutanix user
through nCLI. To change the default password of the nutanix user, you must log on through
the Prism web console or SSH to the Controller VM.
• When you make an attempt to log in to the Prism web console for the first time after you
upgrade the AOS from an earlier AOS version, you can use your existing nutanix user
password to log in and then change the existing password (you are prompted) to adhere to the
password complexity requirements. However, if you are logging in to the Controller VM with
SSH for the first time after the upgrade as the nutanix user, you must use the default nutanix
user password (nutanix/4u) and then change the default password (you are prompted) to
adhere to the Controller VM Password Complexity Requirements on page 15.
• You cannot delete the nutanix user account.
When you change the nutanix user password, you must update any applications and scripts using the
nutanix user credentials for authentication. Nutanix recommends that you create a user assigned with the
nutanix role instead of using the nutanix user for authentication. The Prism Element Web Console Guide
describes authentication and roles.
Following are the default credentials to access a Controller VM.
Procedure
1. Log on to the Controller VM with SSH by using the management IP address of the Controller VM and
the following credentials.
2. Respond to the prompts, providing the current and new nutanix user password.
Changing password for nutanix.
Old Password:
New password:
Retype new password:
Password changed.
See Controller VM Password Complexity Requirements on page 15 to set a secure password.
For information about logging on to a Controller VM by using the nutanix user account through the
Prism web console, see Logging Into The Web Console in the Prism Element Web Console Guide.
Note: Ensure that the following conditions are met for special character usage in the CVM
password:
• Use special characters carefully when setting the CVM password. In some cases, for
example when you use ! followed by a number, the sequence has a special meaning to the
shell, and the system may replace it with a command from the bash history. In this case,
you may generate a password string different from the password that you intend to set.
• Use only ASCII printable characters as special characters in the CVM password. For
information about ASCII printable characters, refer to the ASCII printable characters
(character codes 32-127) article on the ASCII code website.
Note: From AOS 5.15.5 with AHV 20190916.410 onwards, AHV has two new user accounts—admin and
nutanix.
• root—It is used internally by the AOS. The root user is used for the initial access and configuration of
the AHV host.
• admin—It is used to log on to an AHV host. The admin user is recommended for accessing the AHV
host.
• nutanix—It is used internally by the AOS and must not be used for interactive logon.
Exercise caution whenever connecting directly to an AHV host as it increases the risk of causing cluster
issues.
Following are the default credentials to access an AHV host:
User name: nutanix, Password: nutanix/4u
Initial Configuration
Procedure
1. Use SSH and log on to the AHV host using the root account.
$ ssh root@<AHV Host IP Address>
Nutanix AHV
root@<AHV Host IP Address> password: # default password nutanix/4u
Procedure
1. Log on to the AHV host with SSH using the admin account.
$ ssh admin@<AHV Host IP Address>
Nutanix AHV
2. Enter the admin user password configured in the Initial Configuration on page 17.
admin@<AHV Host IP Address> password:
Procedure
1. Log on to the AHV host using the admin account with SSH.
2. Enter the admin user password configured in the Initial Configuration on page 17.
See AHV Host Password Complexity Requirements on page 19 to set a secure password.
Procedure
1. Log on to the AHV host using the admin account with SSH.
4. Respond to the prompts and provide the current and new root password.
Changing password for root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
See AHV Host Password Complexity Requirements on page 19 to set a secure password.
Procedure
1. Log on to the AHV host using the admin account with SSH.
4. Respond to the prompts and provide the current and new nutanix password.
Changing password for nutanix.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
See AHV Host Password Complexity Requirements on page 19 to set a secure password.
• At least 15 characters.
• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
• At least eight characters different from the previous password.
• At most three consecutive occurrences of any given character.
• At most four consecutive occurrences of any given character class.
The password cannot be the same as the last five passwords.
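The length and character-class rules above can be expressed as a small validation sketch (the check_password name and messages are illustrative; the history, repetition, and class-run rules are intentionally omitted):

```shell
# Partial password check: length and character-class rules only.
# The history, repetition, and consecutive-class rules are not covered here.
check_password() {
  pw=$1
  [ ${#pw} -ge 15 ] || { echo "too short"; return 1; }
  case $pw in *[A-Z]*) ;; *) echo "missing upper case letter"; return 1 ;; esac
  case $pw in *[a-z]*) ;; *) echo "missing lower case letter"; return 1 ;; esac
  case $pw in *[0-9]*) ;; *) echo "missing digit"; return 1 ;; esac
  case $pw in *[!a-zA-Z0-9]*) ;; *) echo "missing special character"; return 1 ;; esac
  echo "ok"
}
```

A candidate password can be tested locally before setting it, for example check_password 'YourNewPassword...'.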
Note: If you see any critical alerts, resolve the issues by referring to the indicated KB articles. If you are
unable to resolve any issues, contact Nutanix Support.
Note: If you receive alerts indicating expired encryption certificates or a key manager is not reachable,
resolve these issues before you shut down the cluster. If you do not resolve these issues, data loss of the
cluster might occur.
2. Verify if the cluster can tolerate a single-node failure. Do one of the following:
» In the Prism Element web console, in the Home page, check the status of the Data Resiliency
Status dashboard.
Verify that the status is OK. If the status is anything other than OK, resolve the indicated issues
before you perform any maintenance activity.
» Log on to a Controller VM (CVM) with SSH and check the fault tolerance status of the cluster.
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node
An output similar to the following is displayed:
Important:
Domain Type : NODE
Component Type : STATIC_CONFIGURATION
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 14:22:09 GMT+05:00 2015
The value of the Current Fault Tolerance column must be at least 1 for all the nodes in the cluster.
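This check can also be scripted. The following sketch (the check_ft name is illustrative; it assumes the key : value output format shown above) prints a warning and exits nonzero if any Current Fault Tolerance value is below 1:

```shell
# Fail if any "Current Fault Tolerance" value in the ncli output is below 1.
check_ft() {
  awk -F': *' '/Current Fault Tolerance/ {
    if ($2 + 0 < 1) { print "NOT SAFE: fault tolerance is " $2; bad = 1 }
  } END { exit bad }'
}
```

Usage: nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node | check_ft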
Caution: Verify the data resiliency status of your cluster. If the cluster only has replication factor 2 (RF2),
you can only shut down one node for each cluster. If an RF2 cluster would have more than one node shut
down, shut down the entire cluster.
Procedure
2. Determine the IP address of the node that you want to put into maintenance mode.
nutanix@cvm$ acli host.list
Note the value of Hypervisor IP for the node that you want to put in maintenance mode.
Note: Never put Controller VM and AHV hosts into maintenance mode on single-node clusters. It is
recommended to shut down guest VMs before proceeding with disruptive changes.
nutanix@cvm$ acli host.enter_maintenance_mode host-IP-address
Replace host-IP-address with either the IP address or host name of the AHV host that you want to
shut down.
The following are optional parameters for running the acli host.enter_maintenance_mode command:
• wait: Set the wait parameter to true to wait for the host evacuation attempt to finish.
• non_migratable_vm_action: By default the non_migratable_vm_action parameter is set to block,
which means VMs with GPU, CPU passthrough, PCI passthrough, and host affinity policies are not
migrated or shut down when you put a node into maintenance mode.
If you want to automatically shut down such VMs, set the non_migratable_vm_action parameter to
acpi_shutdown.
5. See Verifying the Cluster Health on page 20 to once again check if the cluster can tolerate a single-
node failure.
Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.2
In this example, the host ID is 1234.
Wait for a few minutes until the CVM is put into the maintenance mode.
What to do next
Perform the maintenance activity. Once the maintenance activity is complete, remove the node from the
maintenance mode. See Exiting a Node from the Maintenance Mode on page 24 for more information.
Procedure
Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.2
In this example, the host ID is 1234.
a. From any other CVM in the cluster, run the following command to exit the CVM from the
maintenance mode.
nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=false
Replace host-ID with the ID of the host.
Note: The command fails if you run the command from the CVM that is in the maintenance mode.
Do not continue if the CVM has failed to exit the maintenance mode.
a. From any CVM in the cluster, run the following command to exit the AHV host from the maintenance
mode.
nutanix@cvm$ acli host.exit_maintenance_mode host-ip
Replace host-ip with the IP address of the host.
This command migrates (live migration) all the VMs that were previously running on the host back to
the host.
b. Verify if the host has exited the maintenance mode.
nutanix@cvm$ acli host.get host-ip
In the output that is displayed, ensure that node_state equals kAcropolisNormal or
AcropolisNormal and that schedulable equals True.
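These two fields can be checked mechanically as well. A sketch, assuming the acli host.get output is first saved to a file and contains node_state and schedulable lines as described above (the host_ready name and the /tmp path in the usage line are illustrative):

```shell
# Return success only if the saved `acli host.get` output shows the host
# back to a normal state and schedulable again.
host_ready() {
  grep -q 'node_state.*AcropolisNormal' "$1" && grep -q 'schedulable.*True' "$1"
}
```

Usage: nutanix@cvm$ acli host.get host-ip > /tmp/host.txt && host_ready /tmp/host.txt && echo ready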
Contact Nutanix Support if any of the steps described in this document produce unexpected results.
• Caution: Verify the data resiliency status of your cluster. If the cluster only has replication factor 2 (RF2),
you can only shut down one node for each cluster. If an RF2 cluster would have more than one node shut
down, shut down the entire cluster.
See Verifying the Cluster Health on page 20 to check if the cluster can tolerate a single-node failure.
Do not proceed if the cluster cannot tolerate a single-node failure.
• Put the node you want to shut down into maintenance mode.
See Putting a Node into Maintenance Mode on page 22 for instructions about how to put a node into
maintenance mode.
You can list all the hosts in the cluster by running the acli host.list command, and note
the value of Hypervisor IP for the node you want to shut down.
Procedure
1. Using SSH, log on to the Controller VM on the host you want to shut down.
Note: Once the cvm_shutdown command is issued, it might take a few minutes before CVM is powered
off completely. After the cvm_shutdown command is completed successfully, Nutanix recommends that
you wait up to 4 minutes before shutting down the AHV host.
What to do next
See Starting a Node in a Cluster (AHV) on page 26 for instructions about how to start a node, including
how to start a CVM and how to exit a node from maintenance mode.
Procedure
1. On the hardware appliance, power on the node. The CVM starts automatically when you reboot the
node.
2. If the node is in maintenance mode, log on (SSH) to the Controller VM and remove the node from
maintenance mode.
See Exiting a Node from the Maintenance Mode on page 24 for more information.
4. Verify that the status of all services on all the CVMs is Up.
nutanix@cvm$ cluster status
If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the
Nutanix cluster.
CVM: <host IP-Address> Up
Zeus UP [9935, 9980, 9981, 9994, 10015,
10037]
Scavenger UP [25880, 26061, 26062]
Xmount UP [21170, 21208]
SysStatCollector UP [22272, 22330, 22331]
IkatProxy UP [23213, 23262]
IkatControlPlane UP [23487, 23565]
SSLTerminator UP [23490, 23620]
SecureFileSync UP [23496, 23645, 23646]
Medusa UP [23912, 23944, 23945, 23946, 24176]
DynamicRingChanger UP [24314, 24404, 24405, 24558]
Pithos UP [24317, 24555, 24556, 24593]
InsightsDB UP [24322, 24472, 24473, 24583]
Athena UP [24329, 24504, 24505]
Mercury UP [24338, 24515, 24516, 24614]
Mantle UP [24344, 24572, 24573, 24634]
VipMonitor UP [18387, 18464, 18465, 18466, 18474]
Stargate UP [24993, 25032]
InsightsDataTransfer UP [25258, 25348, 25349, 25388, 25391,
25393, 25396]
Ergon UP [25263, 25414, 25415]
Cerebro UP [25272, 25462, 25464, 25581]
Chronos UP [25281, 25488, 25489, 25547]
Curator UP [25294, 25528, 25529, 25585]
Prism UP [25718, 25801, 25802, 25899, 25901,
25906, 25941, 25942]
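When the service list is long, a quick filter helps. The following sketch assumes the two-column service/state layout shown above (the find_down_services name is illustrative):

```shell
# Print the name of every service that is reported DOWN.
# An empty result means all listed services are Up.
find_down_services() {
  awk '$2 == "DOWN" {print $1}'
}
```

Usage: nutanix@cvm$ cluster status | find_down_services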
Warning: If you receive alerts indicating expired encryption certificates or a key manager is not
reachable, resolve these issues before you shut down the cluster. If you do not resolve these issues, data
loss of the cluster might occur.
Procedure
1. Shut down the services or VMs associated with AOS features or Nutanix products. For example, shut
down all the Nutanix file server VMs (FSVMs). See the documentation of those features or products for
more information.
» Shut down the guest VMs from within the guest OS.
» Shut down the guest VMs by using the Prism Element web console.
» If you are running many VMs, shut down the VMs by using aCLI:
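The aCLI loop can be built safely as a dry run first. The following sketch mirrors the startup loop shown later in this guide (the list_vm_off_cmds name is illustrative, and the acli vm.list column layout is an assumption); it prints the acli vm.off commands instead of running them so that the list can be reviewed:

```shell
# Dry run: build `acli vm.off` commands from `acli vm.list` output,
# skipping the header row and any NTNX service VMs. Review the printed
# commands, then run them (or pipe them to sh) on the CVM.
list_vm_off_cmds() {
  grep -v NTNX | awk 'NR!=1 {print "acli vm.off " $NF}'
}
```

Usage: nutanix@cvm$ acli vm.list power_state=on | list_vm_off_cmds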
d. If any VMs are on, consider powering off the VMs from within the guest OS. To force shut down
through AHV, run the following command:
nutanix@cvm$ acli vm.off vm-name
Replace vm-name with the name of the VM you want to shut down.
The output displays the message The state of the cluster: stop, which confirms that the
cluster has stopped.
Note: The following system services continue to run even after the cluster has stopped successfully:
• Zeus
• Scavenger
• Xmount
• VIPMonitor
You can observe the status of these system services in the output logs:
The state of the cluster: stop
Lockdown mode: Disabled
CVM: 10.xx.x.xxx Up
Zeus UP [13130, 13326, 13327, 13347]
Scavenger UP [15015, 15141, 15142, 15143]
Xmount UP [15012, 15121, 15122, 15147]
SysStatCollector DOWN []
IkatProxy DOWN []
IkatControlPlane DOWN []
SSLTerminator DOWN []
SecureFileSync DOWN []
Medusa DOWN []
DynamicRingChanger DOWN []
4. Shut down all the CVMs in the cluster. Log on to each CVM in the cluster with SSH and shut down that
CVM.
nutanix@cvm$ sudo shutdown -P now
5. Shut down each node in the cluster. Perform the following steps for each node in the cluster.
Note: The navigation path, tabs, and UI layout of the IPMI web console can vary based on the hardware
used at your site.
c. Ping each host to verify that all AHV hosts are shut down.
6. Power on each node in the cluster. Perform the following steps for each node.
a. Press the power button on the front of the block for each node.
b. Log on to the IPMI web console of each node.
c. On the System tab, check the Power Control status to verify that the node is powered on.
7. Start the cluster.
a. Wait for approximately 5 minutes after you start the last node to allow the cluster services to start.
All CVMs start automatically after you start all the nodes.
b. Log on to any CVM in the cluster with SSH.
c. Start the cluster.
nutanix@cvm$ cluster start
e. Start the guest VMs from within the guest OS or use the Prism Element web console.
If you are running many VMs, start the VMs by using aCLI:
nutanix@cvm$ for i in $(acli vm.list power_state=off | grep -v NTNX | awk 'NR!=1
{print $NF}');do acli vm.on $i ; done
f. Start the services or VMs associated with AOS features or Nutanix products. For example, start all
the FSVMs. See the documentation of those features or products for more information.
g. Verify if all guest VMs are powered on by using the Prism Element web console.
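The filter pipeline in the bulk power-on loop above can be sanity-checked offline. The sketch below runs the same grep and awk stages against a sample of acli vm.list output (the VM names and UUIDs are illustrative): it skips the header row (NR!=1), drops CVMs (lines containing NTNX), and keeps the last field (the VM UUID) of each remaining row.

```shell
# Sample output in the format produced by `acli vm.list` (values are made up).
sample='VM name      VM UUID
ExampleVM1   a91a683a-4440-45d9-8dbe-111111111111
NTNX-A-CVM   00000000-0000-0000-0000-000000000000
ExampleVM2   fda89db5-4695-4055-a3d4-222222222222'

# Same stages as the power-on loop: drop CVMs, skip the header, keep UUIDs.
uuids=$(echo "$sample" | grep -v NTNX | awk 'NR!=1 {print $NF}')
echo "$uuids"
```

Each resulting UUID is what the loop then passes to acli vm.on.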
Note: Rebooting a host is a graceful restart workflow. All the user VMs are migrated to another host when
you perform a reboot operation for a host, so the reboot operation has no impact on the user workload.
• Ensure that the Cluster Resiliency status is OK in the Prism web console before any restart activities.
• For successful automated restarts of hosts, ensure that the cluster has sufficient HA and resource capacity.
• Ensure that the guest VMs can migrate between hosts as the hosts are placed in maintenance mode. If
they cannot, manual intervention may be required.
Procedure
2. Click the gear icon in the main menu and then select Reboot in the Settings page.
3. In the Request Reboot window, select the nodes you want to restart, and click Reboot.
A progress bar is displayed that indicates the progress of the restart of each node.
Procedure
Note: The system prompts you to enter the admin user password if you run the change_ahv_hostname
command with sudo.
Replace host-IP-address with the IP address of the host whose name you want to change and new-
host-name with the new hostname for the AHV host.
If you want to update the hostname of multiple hosts in the cluster, run the script for one host at a time
(sequentially).
Note: The Prism Element web console displays the new hostname after a few minutes.
Changing the Name of the CVM Displayed in the Prism Web Console
You can change the CVM name that is displayed in the Prism web console. The procedure described
in this document does not change the CVM name that is displayed in the terminal or console of an SSH
session.
The script performs the following tasks:
1. Checks that the new name starts with NTNX- and ends with -CVM. The CVM name must contain only
letters, numbers, and dashes (-).
2. Checks if the CVM has received a shutdown token.
3. Powers off the CVM. The script does not put the CVM or host into maintenance mode. Therefore,
the VMs are not migrated from the host and continue to run with the I/O operations redirected to
another CVM while the current CVM is in a powered off state.
4. Changes the CVM name, enables autostart, and powers on the CVM.
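The name check in step 1 can be expressed as a simple pattern test. The following is a minimal sketch (the helper name and implementation are illustrative, not the actual script's code):

```shell
# Succeeds only if the name starts with NTNX-, ends with -CVM, and
# contains only letters, numbers, and dashes -- the rule in step 1.
valid_cvm_name() {
  case "$1" in
    NTNX-*-CVM) printf '%s\n' "$1" | grep -Eq '^[A-Za-z0-9-]+$' ;;
    *) return 1 ;;
  esac
}

valid_cvm_name "NTNX-Block1-CVM" && echo "valid"
valid_cvm_name "Block1-CVM" || echo "invalid: missing NTNX- prefix"
```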
Perform the following to change the CVM name displayed in the Prism web console.
Procedure
1. Use SSH to log on to a CVM other than the CVM whose name you want to change.
Note: Do not run this command from the CVM whose name you want to change, because the script
powers off the CVM. In this case, when the CVM is powered off, you lose connectivity to the CVM from
the SSH console and the script abruptly ends.
Note: Clusters that have compute-only nodes do not support virtual switches. Instead, use bridge
configurations for network connections. For more information, see Virtual Switch Limitations on
page 60.
You can use a supported server or an existing hyperconverged (HC) node as a CO node. To use a node
as CO, image the node as CO by using Foundation and then add that node to the cluster by using the
Prism Element web console. For more information about how to image a node as a CO node, see the Field
Installation Guide.
Note: If you want an existing HC node that is already a part of the cluster to work as a CO node, remove
that node from the cluster, image that node as CO by using Foundation, and add that node back to the
cluster. For more information about how to remove a node, see Modifying a Cluster.
• The Nutanix cluster must be at least a three-node cluster before you add a compute-only node.
However, Nutanix recommends that the cluster has four nodes before you add a compute-only node.
• The ratio of compute-only to hyperconverged nodes in a cluster must not exceed the following:
1 compute-only : 2 hyperconverged
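The 1 CO : 2 HC limit above is simple arithmetic; a sketch with a hypothetical helper:

```shell
# Maximum number of compute-only nodes permitted for a given number of
# hyperconverged nodes, per the 1 CO : 2 HC ratio limit described above.
max_co_nodes() { echo $(( $1 / 2 )); }

max_co_nodes 4   # a 4-HC-node cluster supports at most 2 CO nodes
```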
Restrictions
Nutanix does not support the following features or tasks on a CO node in this release:
1. Host boot disk replacement
2. Network segmentation
3. Virtual Switch configuration: Use bridge configurations instead.
Note: If a cluster has storage-only AHV nodes and compute-only nodes running ESXi or Hyper-V,
deployment of the default virtual switch vs0 fails. In such cases, the Prism Element, Prism Central, and CLI
workflows for virtual switch management are unavailable to manage the bridges and bonds. Use the
manage_ovs command options to manage the bridges and bonds instead.
Note: Run the manage_ovs commands for a CO node from any CVM running on a hyperconverged node.
Perform the networking tasks for each CO node in the cluster individually.
For more information about networking configuration of the AHV hosts, see Host Network Management in
the AHV Administration Guide.
• Observe the requirements and restrictions listed in Compute-Only Node Configuration (AHV Only).
• Log on to CVM using SSH, and disable all the virtual switches including the default virtual switch (vs0),
using the following command:
nutanix@cvm:~$ acli net.disable_virtual_switch virtual_switch=<virtual-switch-name>
In the above command, replace <virtual-switch-name> with the actual virtual switch name in your
network.
Procedure
» Click the gear icon in the main menu and select Expand Cluster on the Settings page.
» Go to the hardware dashboard (see Hardware Dashboard) and click Expand Cluster.
The system displays the Expand Cluster window:
Note: To expand a cluster with a CO node, do not select Prepare Now and Expand Later. This option
only prepares the nodes so that you can expand the cluster at a later point in time, and node preparation
is not supported for CO nodes. If you proceed with the Prepare Now and Expand Later option, the
system displays an error in the Configure Host tab.
6. Under Host or CVM IP, type the IP address of the AHV host and click Save.
Note: The CO node does not have a Controller VM and you must therefore provide the IP address of
the AHV host.
The system prompts you to run checks and expand the cluster with the selected CO node.
• Run Checks - Runs only the pre-checks required for cluster expansion. Once all pre-checks are
successful, you can click Expand Cluster to add the CO node to the cluster.
• Expand Cluster - Runs both the pre-checks required for cluster expansion and the expand cluster
operation together.
The add-node process begins, and Prism Element performs a set of checks before the node is added
to the cluster. Once all checks are completed and the node is added successfully, the system displays
the following result:
Note:
Check the progress of the operation in the Tasks menu of the Prism Element web console. The
operation takes approximately five to seven minutes to complete.
Important: Virtual switch configuration is not supported when there are CO nodes in the cluster. The
system displays an error message if you attempt to reconfigure the virtual switch for the cluster by using
the following command:
nutanix@cvm:~$ acli net.migrate_br_to_virtual_switch br0 vs_name=vs0
• Configuring Layer 2 switching through virtual switches and Open vSwitch bridges. When configuring a
virtual switch, you configure bridges, bonds, and VLANs.
• Optionally changing the IP address, netmask, and default gateway that were specified for the hosts
during the imaging process.
Virtual Switch: Do not modify the OpenFlow tables of any bridges configured in any VS
configurations on the AHV hosts.
Do not delete or rename OVS bridge br0.
Do not modify the native Linux bridge virbr0.
Switch Hops: Nutanix nodes send storage replication traffic to each other in a distributed
fashion over the top-of-rack network. One Nutanix node can, therefore,
send replication traffic to any other Nutanix node in the cluster. The
network should provide low and predictable latency for this traffic. Ensure
that there are no more than three switches between any two Nutanix nodes
in the same cluster.
Switch Fabric: A switch fabric is a single leaf-spine topology or all switches connected to
the same switch aggregation layer. The Nutanix VLAN shares a common
broadcast domain within the fabric. Connect all Nutanix nodes that form
a cluster to the same switch fabric. Do not stretch a single Nutanix cluster
across multiple, disconnected switch fabrics.
Every Nutanix node in a cluster should therefore be in the same L2
broadcast domain and share the same IP subnet.
WAN Links: A WAN (wide area network) or metro link connects different physical sites
over a distance. As an extension of the switch fabric requirement, do not
place Nutanix nodes in the same cluster if they are separated by a WAN.
VLANs: Add the Controller VM and the AHV host to the same VLAN. Place all
CVMs and AHV hosts in a cluster in the same VLAN. By default the CVM
and AHV host are untagged, shown as VLAN 0, which effectively places
them on the native VLAN configured on the upstream physical switch.
Note: Do not add any other device (including guest VMs) to the VLAN to
which the CVM and hypervisor host are assigned. Isolate guest VMs on
one or more separate VLANs.
Default VS bonded port (br0-up): Aggregate the fastest links of the same speed on the physical host to a VS
bond on the default vs0 and provision VLAN trunking for these interfaces
on the physical switch.
By default, interfaces in the bond in the virtual switch operate in the
recommended active-backup mode.
Note: The mixing of bond modes across AHV hosts in the same cluster
is not recommended and not supported.
1 GbE and 10 GbE interfaces (physical host): Ensure you do not form a NIC combination or mix NICs in any of the
following ways in the same bond:
• If you plan to use 1 GbE uplinks, do not include them in the same bond
as the 10 GbE interfaces.
• If you choose to configure only 1 GbE uplinks, then when migration of
memory-intensive VMs becomes necessary, power off the VMs and power
them on again on a new host instead of using live migration. In this context,
memory-intensive VMs are VMs whose memory changes at a rate that exceeds
the bandwidth offered by the 1 GbE uplinks.
Nutanix recommends the manual procedure for memory-intensive
VMs because live migration, which you initiate either manually or by
placing the host in maintenance mode, might appear prolonged or
unresponsive and might eventually fail.
Use the aCLI on any CVM in the cluster to start the VMs on another
AHV host:
nutanix@cvm$ acli vm.on vm_list host=host
Replace vm_list with a comma-delimited list of VM names and replace
host with the IP address or UUID of the target host.
• If you must use only 1 GbE uplinks, add them to a bond to increase
bandwidth and use the balance-tcp (LACP) or balance-slb bond
mode.
IPMI port on the hypervisor host: Do not use VLAN trunking on switch ports that connect to the IPMI
interface. Configure the switch ports as access ports for management
simplicity.
Upstream physical switch: Nutanix does not recommend the use of Fabric Extenders (FEX)
or similar technologies for production use cases. While initial, low-
load implementations might run smoothly with such technologies,
poor performance, VM lockups, and other issues might occur as
implementations scale upward (see Knowledge Base article KB1612).
Nutanix recommends the use of 10Gbps, line-rate, non-blocking switches
with larger buffers for production workloads.
Cut-through versus store-and-forward selection depends on network
design. In designs with no oversubscription and no speed mismatches you
can use low-latency cut-through switches. If you have any oversubscription
or any speed mismatch in the network design, then use a switch with larger
buffers. Port-to-port latency should be no higher than 2 microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch
ports that are connected to the hypervisor host.
Physical Network Layout: Use redundant top-of-rack switches in a traditional leaf-spine architecture.
This simple, flat network design is well suited for a highly distributed,
shared-nothing compute and storage architecture.
Add all the nodes that belong to a given cluster to the same Layer-2
network segment.
Other network layouts are supported as long as all other Nutanix
recommendations are followed.
Jumbo Frames: The Nutanix CVM uses the standard Ethernet MTU (maximum
transmission unit) of 1,500 bytes for all the network interfaces by default.
The standard 1,500 byte MTU delivers excellent performance and stability.
Nutanix does not support configuring the MTU on network interfaces of a
CVM to higher values.
You can enable jumbo frames (MTU of 9,000 bytes) on the physical
network interfaces of AHV hosts and guest VMs if the applications on your
guest VMs require them. If you choose to use jumbo frames on hypervisor
hosts, be sure to enable them end to end in the desired network and
consider both the physical and virtual network infrastructure impacted by
the change.
If you try to configure an MTU value outside the range of 1,500 through
9,000 bytes on the default virtual switch vs0, Prism displays an error
and fails to apply the configuration.
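The MTU bounds that Prism enforces on vs0 can be mirrored in a pre-flight check; a minimal sketch (the helper function is illustrative, not a Nutanix tool):

```shell
# Valid vs0 MTU values fall within 1500-9000 bytes, per the text above.
valid_vs0_mtu() { [ "$1" -ge 1500 ] && [ "$1" -le 9000 ]; }

valid_vs0_mtu 9000 && echo "ok: jumbo frames"
valid_vs0_mtu 1400 || echo "error: MTU out of range"
```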
Controller VM: Do not remove the Controller VM from either the OVS bridge br0 or the
native Linux bridge virbr0.
Rack Awareness and Block Awareness: Block awareness and rack awareness provide smart placement of
Nutanix cluster services, metadata, and VM data to help maintain data
availability, even when you lose an entire block or rack. The same network
requirements for low latency and high throughput between servers in the
same cluster still apply when using block and rack awareness.
The following diagrams show sample network configurations using Open vSwitch and Virtual Switch.
IP Address Management
IP Address Management (IPAM) is a feature of AHV that allows it to assign IP addresses automatically to
VMs by using DHCP. You can configure each virtual network with a specific IP address subnet, associated
domain settings, and IP address pools available for assignment to VMs.
An AHV network is defined as a managed network or an unmanaged network based on the IPAM setting.
Managed Network
Managed network refers to an AHV network in which IPAM is enabled.
Note: You can enable IPAM only when you are creating a virtual network. You cannot enable or disable
IPAM for an existing virtual network.
IPAM enabled or disabled status has implications. For example, when you want to reconfigure the IP
address of a Prism Central VM, the procedure to do so may involve additional steps for managed networks
(that is, networks with IPAM enabled) where the new IP address belongs to an IP address range different
from the previous IP address range. See Reconfiguring the IP Address and Gateway of Prism Central VMs
in Prism Central Guide.
Uplink configuration uses bonds to improve traffic management. The bond types are defined for the
aggregated OVS bridges. A new bond type, No uplink bond, provides a no-bonding option. A virtual switch
configured with the No uplink bond type has zero or one uplink.
When you configure a virtual switch with any other bond type, you must select at least two uplink ports on
every node.
If you change the uplink configuration of vs0, AOS applies the updated settings to all the nodes in the
cluster one after the other (a rolling update). When the applied configuration method is Standard, AOS
performs the following tasks to update the settings in a cluster:
1. Puts the node in maintenance mode (migrates VMs out of the node)
2. Applies the updated settings
3. Checks connectivity with the default gateway
4. Exits maintenance mode
5. Proceeds to apply the updated settings to the next node
AOS does not put the nodes in maintenance mode when the Quick configuration method is applied.
• Speed—Fast (1s)
• Mode—Active with fallback to active-backup
• Priority—Default. This is not configurable.
• Virtual switches are not enabled in a cluster that has one or more compute-only nodes. See Virtual
Switch Limitations on page 60 and Virtual Switch Requirements on page 60.
• If you select the Active-Active policy, you must manually enable LAG and LACP on the corresponding
ToR switch for each node in the cluster.
• If you reimage a cluster with the Active-Active policy enabled, the default virtual switch (vs0) on the
reimaged cluster reverts to the Active-Backup policy. The other virtual switches are removed
during reimage.
Note:
If you are modifying an existing bond, AHV removes the bond and then re-creates the bond with
the specified interfaces.
Ensure that the interfaces you want to include in the bond are physically connected to the Nutanix
appliance before you run the command described in this topic. If the interfaces are not physically
connected to the Nutanix appliance, the interfaces are not added to the bond.
Ensure that the pre-checks listed in the LCM Prechecks section of the Life Cycle Manager Guide and
the Always and Host Disruptive Upgrades types of pre-checks listed in KB-4584 pass for Virtual
Switch deployments.
Bridge Migration
After upgrading to a compatible version of AOS, you can migrate bridges other than br0 that existed on the
nodes. When you migrate the bridges, the system converts the bridges to virtual switches.
See Virtual Switch Migration Requirements in Virtual Switch Requirements on page 60.
Note: You can migrate only those bridges that are present on every compute node in the cluster. See the
Migrating Bridges after Upgrade topic in the Prism Element Web Console Guide.
Note: If a host already included in a cluster is removed and then added back, it is treated as a new
host.
• The system validates the default bridge br0 and uplink bond br0-up to check if it conforms to the
default virtual switch vs0 already present on the cluster.
If br0 and br0-up conform, the system includes the new host and its uplinks in vs0.
If br0 and br0-up do not conform, then the system generates an NCC alert.
• The system does not automatically add any other bridge configured on the new host to any other
virtual switch in the cluster.
It generates NCC alerts for all the other non-default virtual switches.
VS Management
You can manage virtual switches from Prism Central or Prism Web Console. You can also use aCLI or
REST APIs to manage them. See the Acropolis API Reference and Command Reference guides for more
information.
You can also use the appropriate aCLI commands for virtual switches from the following list:
• net.create_virtual_switch
• net.list_virtual_switch
• net.get_virtual_switch
• net.update_virtual_switch
• net.delete_virtual_switch
• net.migrate_br_to_virtual_switch
• net.disable_virtual_switch
• An internal port with the same name as the default bridge; that is, an internal port named br0. This is the
access port for the hypervisor host.
• A bonded port named br0-up. The bonded port aggregates all the physical interfaces available on the
node. For example, if the node has two 10 GbE interfaces and two 1 GbE interfaces, all four interfaces
are aggregated on br0-up. This configuration is necessary for Foundation to successfully image the
node regardless of which interfaces are connected to the network.
Note:
Before you begin configuring a virtual network on a node, you must disassociate the 1 GbE
interfaces from the br0-up port. This disassociation occurs when you modify the default virtual
switch (vs0) and create new virtual switches. Nutanix recommends that you aggregate only the
10 GbE or faster interfaces on br0-up and use the 1 GbE interfaces on a separate OVS bridge
deployed in a separate virtual switch.
See Virtual Switch Management on page 61 for information about virtual switch
management.
The following diagram illustrates the default factory configuration of OVS on an AHV node:
• Before migrating to Virtual Switch, all bridge br0 bond interfaces should have the same bond type on all
hosts in the cluster. For example, all hosts should use the Active-backup bond type or balance-tcp. If
some hosts use Active-backup and other hosts use balance-tcp, virtual switch migration fails.
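The bond-type consistency requirement above can be checked in one pass. A sketch, assuming you have collected the br0 bond type reported by each host (for example, from manage_ovs show_uplinks) into a list:

```shell
# Hypothetical collected bond types, one line per host. Mixed values, as
# in this sample, would block virtual switch migration.
bond_types='active-backup
active-backup
balance-tcp'

distinct=$(echo "$bond_types" | sort -u | wc -l)
if [ "$distinct" -eq 1 ]; then
  echo "consistent: migration precondition met"
else
  echo "mixed bond types: make them uniform before migrating"
fi
```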
• Before migrating to Virtual Switch, if using LACP:
• Confirm that all bridge br0 lacp-fallback parameters on all hosts are set to the case-sensitive value
True by using manage_ovs show_uplinks | grep lacp-fallback:. Any host with lowercase true
causes virtual switch migration failure.
• Confirm that the LACP speed on the physical switch is set to fast or 1 second. Also ensure that the
switch ports are ready to fallback to individual mode if LACP negotiation fails due to a configuration
such as no lacp suspend-individual.
• Before migrating to the Virtual Switch, confirm that the upstream physical switch is set to spanning-
tree portfast or spanning-tree port type edge trunk. Failure to do so may lead to a 30-second
network timeout, and the virtual switch migration may fail because it uses a 20-second non-modifiable
timer.
• Ensure that the pre-checks listed in the LCM Prechecks section of the Life Cycle Manager Guide and
the Always and Host Disruptive Upgrades types of pre-checks listed in KB-4584 pass for Virtual Switch
deployments.
• For the default virtual switch vs0:
• All configured uplink ports must be available for connecting to the network. In the Active-Backup bond
type, the active port is selected from any configured uplink port that is linked. Therefore, the virtual
switch vs0 can use all the linked ports for communication with other CVMs and hosts.
• All the host IP addresses in the virtual switch vs0 must be resolvable to the configured gateway
using ARP.
Important: Use this procedure only when you need to modify inconsistent bonds in a migrated bridge
across hosts in a cluster that are preventing AOS from deploying the virtual switch for the migrated
bridge.
Do not use ovs-vsctl commands to make bridge-level changes. Use the manage_ovs commands
instead.
The manage_ovs command allows you to update the cluster configuration. The changes are applied
and retained across host restarts. The ovs-vsctl command allows you to update the live running host
configuration but does not update the AOS cluster configuration and the changes are lost at host restart.
This behavior of ovs-vsctl introduces connectivity issues during maintenance, such as upgrades or
hardware replacements.
ovs-vsctl is usually used during a break/fix situation where a host may be isolated on the network and
requires a workaround to gain connectivity before the cluster configuration can actually be updated using
manage_ovs.
Note: Disable the virtual switch before you attempt to change the bonds or bridge.
If you hit an issue where the virtual switch is automatically re-created after it is disabled (with AOS
versions 5.20.0 or 5.20.1), follow steps 1 and 2 below to disable such an automatically re-created
virtual switch again before migrating the bridges. For more information, see KB-3263.
Be cautious when using the disable_virtual_switch command because it deletes all the
configurations from the IDF, not only for the default virtual switch vs0, but also for any virtual
switches that you may have created (such as vs1 or vs2). Therefore, before you use the
disable_virtual_switch command, ensure that you check the list of existing virtual switches,
which you can get by using the acli net.get_virtual_switch command.
Complete this procedure on each host Controller VM that is sharing the bridge that needs to be migrated to
a virtual switch.
Procedure
Note: You can use the nutanix@cvm$ acli net.delete_virtual_switch vs_name command
to delete a specific VS and re-create it with the appropriate bond type.
• bridge-name: Provide the name of the bridge, such as br0 for the virtual switch on which you want
to set the uplink bond mode.
• bond-name: Provide the name of the uplink port such as br0-up for which you want to set the bond
mode.
• bond-type: Provide the bond mode that you require to be used uniformly across the hosts on the
named bridge.
Use the manage_ovs --help command for help on this command.
Note: To disable LACP, change the bond type from LACP Active-Active (balance-tcp) to Active-Backup
(active-backup) or Active-Active with MAC pinning (balance-slb) by setting the bond_mode in this
command to active-backup or balance-slb.
Ensure that you turn off LACP on the connected ToR switch port as well. To avoid blocking
of the bond uplinks during the bond type change on the host, ensure that you follow the ToR
switch best practices to enable LACP fallback or passive mode.
To enable LACP, configure bond-type as balance-tcp (Active-Active) with additional
variables --lacp_mode fast and --lacp_fallback true.
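Putting these placeholders together, an enable-LACP invocation might look like the following sketch. The flag spellings (--bridge_name, --bond_name) and the update_uplinks action are assumptions based on common manage_ovs usage; verify the exact syntax with manage_ovs --help on your cluster before running it.

```shell
# Sketch only -- run from a CVM on a hyperconverged node; br0 and br0-up
# are example names. Flag names are assumptions; check manage_ovs --help.
nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name br0-up \
    --bond_mode balance-tcp --lacp_mode fast --lacp_fallback true \
    update_uplinks
```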
4. (If migrating to an AOS version earlier than 5.20.2) Check for the issue described in the note above,
and disable the virtual switch.
What to do next
After making the bonds consistent across all the hosts configured in the bridge, migrate the bridge or
enable the virtual switch. For more information, see:
• Configuring a Virtual Network for Guest VM Interfaces in the Prism Element Web Console Guide.
• Network Connections in the Prism Central Guide.
To check whether LACP is enabled or disabled, use the following command.
nutanix@cvm$ manage_ovs show_uplinks
1. Log in to Prism Element and navigate to Settings > Network Configuration > Virtual Switch.
You can also log in to Prism Central and navigate to Network & Security > Subnets > Network
Configuration > Virtual Switch.
The system displays the Virtual Switch tab.
2. Click the Edit icon for the target virtual switch on which you want to configure LACP and LAG.
The system displays the Edit Virtual Switch window:
3. In the General tab, choose Standard (Recommended) option in the Select Configuration Method
field, and click Next.
Note: The Standard configuration method puts each node in maintenance mode before applying the
updated settings. After applying the updated settings, the node exits from maintenance mode. For more
information, see Virtual Switch Workflow.
4. In the Uplink Configuration tab, select Active-Active in the Bond Type field, and click Save.
Note: The Active-Active bond type configures all AHV hosts with the fast setting for LACP speed,
causing the AHV host to request LACP control packets at the rate of one per second from the physical
switch. In addition, the Active-Active bond type configuration sets LACP fallback to Active-Backup.
This completes the LAG and LACP configuration on the cluster. At this stage, the cluster starts the rolling
reboot operation for all the AHV hosts. Wait for the reboot operation to complete before you put the
node and CVM in maintenance mode and change the switch ports.
For more information about how to manually perform the rolling reboot operation for an AHV host, see
Rebooting an AHV Node in a Nutanix Cluster.
Note: Before you put a node in maintenance mode, see Verifying the Cluster Health and carry out the
necessary checks.
Step 6 in the Putting a Node into Maintenance Mode using Web Console section puts the Controller
VM in maintenance mode.
6. Change the settings for the interface on the switch that is directly connected to the Nutanix node to
match the LACP and LAG settings made in the Edit Virtual Switch window above.
For more information about how to change the LACP settings of the switch that is directly connected to
the node, refer to the vendor-specific documentation of the deployed switch.
Nutanix recommends that you perform the following configurations for LACP settings on the switch:
• bond-name with the actual name of the uplink port such as br0-up in the above commands.
• [AHV host IP] with the actual AHV host IP at your site.
8. Remove the node and Controller VM from maintenance mode. For more information, see Exiting a
Node from the Maintenance Mode using Web Console.
The Controller VM exits maintenance mode during the same process.
What to do next
Do the following after completing the procedure to enable LAG and LACP on all the AHV nodes and the
connected ToR switches:
• Verify that the status of all services on all the CVMs is Up. Run the following command and check that
the status of the services is displayed as Up in the output:
nutanix@cvm$ cluster status
• Log on to the Prism Element of the node and check that the Data Resiliency Status widget displays OK.
VLAN Configuration
You can set up a VLAN-based segmented virtual network on an AHV node by assigning the ports on virtual
bridges managed by virtual switches to different VLANs. VLAN port assignments are configured from the
Controller VM that runs on each node.
For best practices associated with VLAN assignments, see AHV Networking Recommendations on
page 44. For information about assigning guest VMs to a virtual switch and VLAN, see Network
Connections in the Prism Central Guide.
Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the cluster.
Once the process begins, hosts and CVMs partially lose network access to each other and VM data or
storage containers become unavailable until the process completes.
To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:
Procedure
3. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host to
be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag
Replace host_vlan_tag with the VLAN tag for hosts.
6. Verify connectivity to the IP address of the AHV host by performing a ping test.
7. Exit the AHV host and the CVM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 24 for more information.
Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the cluster.
Once the process begins, hosts and CVMs partially lose network access to each other and VM data or
storage containers become unavailable until the process completes.
Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are logged on
to the Controller VM through its public interface. To change the VLAN ID, log on to the internal interface that
has IP address 192.168.5.254.
Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the
following:
Procedure
root@host# logout
9. Verify connectivity to the Controller VM's external IP address by performing a ping test from the same
subnet. For example, perform a ping from another Controller VM or directly from the host itself.
10. Exit the AHV host and the Controller VM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 24 for more information.
Note: You must shut down the guest VM to change the number of VNIC queues. Therefore, make this
change during a planned maintenance window. The VNIC status might change from Up->Down->Up or a
restart of the guest OS might be required to finalize the settings depending on the guest OS implementation
requirements.
Procedure
1. Determine the exact name of the guest VM for which you want to change the number of VNIC queues
using the following command:
nutanix@cvm$ acli vm.list
An output similar to the following is displayed:
nutanix@cvm$ acli vm.list
VM name VM UUID
ExampleVM1 a91a683a-4440-45d9-8dbe-xxxxxxxxxxxx
ExampleVM2 fda89db5-4695-4055-a3d4-xxxxxxxxxxxx
...
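When scripting against this output, the UUID column can be pulled out with standard text tools. The following is a minimal sketch (not a Nutanix utility) that operates on the sample output shown above, pasted here as a literal string:

```shell
# Hypothetical parsing sketch: given 'acli vm.list' style output, extract
# the UUID for a named VM with awk. The sample text mirrors the example
# output above; in practice you would pipe the live command output instead.
list_output='VM name VM UUID
ExampleVM1 a91a683a-4440-45d9-8dbe-xxxxxxxxxxxx
ExampleVM2 fda89db5-4695-4055-a3d4-xxxxxxxxxxxx'

# Match on the first column (VM name) and print the second (UUID).
uuid=$(printf '%s\n' "$list_output" | awk '$1 == "ExampleVM1" { print $2 }')
echo "$uuid"
```

On a live cluster the same awk filter can be applied directly to `acli vm.list` output.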
2. Determine the MAC address of the VNIC and confirm the current number of VNIC queues using the
following command:
nutanix@cvm$ acli vm.nic_get VM-name
Replace VM-name with the name of the VM.
An output similar to the following is displayed:
nutanix@cvm$ acli vm.nic_get VM-name
...
mac_addr: "50:6b:8d:2f:zz:zz"
...
(queues: 2) <- If there is no output of 'queues', the setting is the default (1 queue).
Note: AOS defines queues as the maximum number of Tx/Rx queue pairs (default is 1).
3. Determine the total count of vCPUs assigned to the VM using the following command:
nutanix@cvm$ acli vm.get VM-name | grep num.*vcpu
Replace VM-name with the name of the VM.
An output similar to the following is displayed:
num_cores_per_vcpu: 4
Note: N must be less than or equal to the total count of vCPUs assigned to the guest VM.
7. Confirm in the guest OS documentation if any additional steps are required to enable multi-queue in
VirtIO-net.
For example, for RHEL and CentOS VMs, perform the following steps:
Note: It is active by default on CentOS VMs. You might have to start it on RHEL VMs.
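On the guest side, multi-queue channel counts on Linux are typically applied with ethtool. The helper below is a hypothetical dry-run sketch (not a Nutanix tool): it enforces the constraint from the note above (the queue count must not exceed the vCPU count) and prints the command an administrator would then run inside a RHEL or CentOS guest.

```shell
# Hypothetical dry-run helper: validate the requested queue count against
# the guest's vCPU count, then print the guest-side ethtool command.
build_queue_cmd() {
  iface=$1; queues=$2; vcpus=$3
  if [ "$queues" -gt "$vcpus" ]; then
    echo "error: queues ($queues) must not exceed vCPUs ($vcpus)" >&2
    return 1
  fi
  # Inside the guest, 'ethtool -L <iface> combined <N>' applies the
  # setting and 'ethtool -l <iface>' confirms the current channel counts.
  echo "ethtool -L $iface combined $queues"
}

build_queue_cmd eth0 4 8
```

Running the printed command requires root privileges inside the guest; `ethtool -l <interface>` shows the current and maximum supported channel counts.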
Caution: All Controller VMs and hypervisor hosts must be on the same subnet.
Warning: Ensure that you perform the steps in the exact order as indicated in this document.
1. Verify the cluster health by following the instructions in Verifying the Cluster Health.
Do not proceed if the cluster cannot tolerate failure of at least one node.
2. Put the node into the maintenance mode. This requires putting both the AHV host and the Controller
VM into maintenance mode. See Putting a Node into Maintenance Mode on page 22 for instructions
about how to put a node into maintenance mode.
Procedure
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
f. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an
AHV Host to a VLAN on page 67.
g. Verify network connectivity by pinging the gateway, other CVMs, and AHV hosts.
2. Log on to the Controller VM that is running on the AHV host whose IP address you changed and restart
genesis.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed:
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
See Controller VM Access on page 12 for information about how to log on to a Controller VM.
Genesis takes a few minutes to restart.
3. Verify if the IP address of the hypervisor host has changed. Run the following nCLI command from any
CVM other than the one in the maintenance mode.
nutanix@cvm$ ncli host list
An output similar to the following is displayed:
nutanix@cvm$ ncli host list
Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.4 <- New IP Address
...
Note: You cannot manage your guest VMs after the Acropolis service is stopped.
b. Verify if the Acropolis service is DOWN on all the CVMs, except the one in the maintenance mode.
nutanix@cvm$ cluster status | grep -v UP
An output similar to the following is displayed:
6. Verify if all processes on all the CVMs, except the one in the maintenance mode, are in the UP state.
nutanix@cvm$ cluster status | grep -v UP
7. Exit the AHV host and the Controller VM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 24 for more information.
Creating a VM (AHV)
In AHV clusters, you can create a new virtual machine (VM) through the Prism Element web console.
Procedure
Note: This option does not appear in clusters that do not support this feature.
Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are creating a
Linux VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows VM with
the hardware clock pointing to the desired timezone.
d. Use this VM as an agent VM: Select this option to make this VM an agent VM.
You can use this option for the VMs that must be powered on before the rest of the VMs (for
example, to provide network functions before the rest of the VMs are powered on on the host) and
must be powered off after the rest of the VMs are powered off (for example, during maintenance
mode operations). Agent VMs are never migrated to any other host in the cluster. If an HA event
occurs or the host enters maintenance mode, agent VMs are powered off and then powered on again on
the same host when it returns to a normal state.
4. (For GPU-enabled AHV clusters only) To configure GPU access, click Add GPU in the Graphics
section, and then do the following in the Add GPU dialog box:
For more information, see GPU and vGPU Support on page 128.
a. To configure GPU pass-through, in GPU Mode, click Passthrough, select the GPU that you want
to allocate, and then click Add.
If you want to allocate additional GPUs to the VM, repeat the procedure as many times as you
need to. Make sure that all the allocated pass-through GPUs are on the same host. If all specified
GPUs of the type that you want to allocate are in use, you can proceed to allocate the GPU to the
VM.
Note: This option is available only if you have installed the GRID host driver on the GPU hosts in
the cluster.
For more information about the NVIDIA GRID host driver installation instructions, see the
NVIDIA Grid Host Driver for Nutanix AHV Installation Guide.
You can assign multiple virtual GPUs to a VM. A vGPU is assigned to the VM only if a vGPU is
available when the VM starts up.
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support on page 133 and
Restrictions for Multiple vGPU Support on page 133.
Note:
Multiple vGPUs are supported on the same VM only if you select the highest vGPU profile
type.
After you add the first vGPU, to add multiple vGPUs, see Adding Multiple vGPUs to the Same VM
on page 136.
a. Type: Select the type of storage device, DISK or CD-ROM, from the pull-down list.
The following fields and options vary depending on whether you choose DISK or CD-ROM.
b. Operation: Specify the device contents from the pull-down list.
• Select Clone from ADSF file to copy any file from the cluster that can be used as an image
onto the disk.
• Select Empty CD-ROM to create a blank CD-ROM device. (This option appears only when CD-
ROM is selected in the previous field.) A CD-ROM device is needed when you intend to provide
a system image from CD-ROM.
• Select Allocate on Storage Container to allocate space without specifying an image. (This
option appears only when DISK is selected in the previous field.) Selecting this option means
• For device Disk, select the SCSI, SATA, PCI, or IDE bus type.
• For device CD-ROM, you can select either the IDE or SATA bus type.
Note: SCSI is the preferred bus type and is used in most cases. Ensure that you have installed
the VirtIO drivers in the guest OS. For more information about VirtIO drivers, see Nutanix VirtIO for
Windows in the AHV Administration Guide.
Caution: Use SATA, PCI, or IDE for compatibility purposes when the guest OS does not have VirtIO
drivers to support SCSI devices. This may have performance implications. For more information
about VirtIO drivers, see Nutanix VirtIO for Windows in the AHV Administration Guide.
Note: For AHV 5.16 and later, you cannot use an IDE device if Secure Boot is enabled for the UEFI
Mode boot configuration.
» Legacy BIOS: Select Legacy BIOS to boot the VM with legacy BIOS firmware.
» UEFI: Select UEFI to boot the VM with UEFI firmware. UEFI firmware supports larger hard drives and
faster boot times, and provides more security features. For more information about UEFI firmware,
see UEFI Support for VM on page 116.
» Secure Boot: Secure Boot is supported with AOS 5.16. Current support for Secure Boot is limited to aCLI.
For more information about Secure Boot, see Secure Boot Support for VMs on page 122. To
enable Secure Boot, do the following:
• Select UEFI.
• Power off the VM.
• Log on to aCLI and update the VM to enable Secure Boot. For more information, see Updating
a VM to Enable Secure Boot in the AHV Administration Guide.
a. Subnet Name: Select the target virtual LAN from the drop-down list.
The list includes all defined networks (see Network Configuration for VM Interfaces).
Note: Selecting an IPAM-enabled subnet from the drop-down list displays the Private IP Assignment
information, which shows the number of free IP addresses available in the subnet and in the IP
pool.
b. Network Connection State: Select the state in which you want the network to operate after
VM creation. The options are Connected or Disconnected.
c. Private IP Assignment: This is a read-only field and displays the following:
Note: The Acropolis leader generates the MAC address for VMs on AHV. The first 24 bits of the MAC
address are set to 50-6b-8d (0101 0000 0110 1011 1000 1101) and are reserved by Nutanix, the
25th bit is set to 1 (reserved by the Acropolis leader), and the 26th through 48th bits are auto-generated
random numbers.
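The bit layout described in the note can be illustrated with a short sketch. The `gen_ahv_mac` function below is hypothetical (it is not the Acropolis implementation); it simply applies the same rules: the fixed 50:6b:8d prefix, the 25th bit forced to 1, and the remaining 23 bits randomized.

```shell
# Illustrative sketch of the scheme in the note above (not the actual
# Acropolis code): OUI 50:6b:8d, bit 25 forced to 1, rest random.
gen_ahv_mac() {
  # Build 24 random bits from two 15-bit $RANDOM draws, then force the
  # most significant bit of the fourth octet (the 25th bit of the MAC).
  r=$(( ((RANDOM << 9) ^ RANDOM) & 0xFFFFFF ))
  r=$(( r | 0x800000 ))
  printf '50:6b:8d:%02x:%02x:%02x\n' \
    $(( (r >> 16) & 0xFF )) $(( (r >> 8) & 0xFF )) $(( r & 0xFF ))
}

gen_ahv_mac
```

Every address produced this way starts with 50:6b:8d and has a fourth octet of 0x80 or higher, which matches the reserved-bit rule in the note.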
a. Select the host or hosts on which you want to configure the affinity for this VM.
b. Click Save.
The selected host or hosts are listed. This configuration is permanent: the VM is not moved
from this host or hosts even during an HA event. The configuration takes effect when the VM starts.
9. To customize the VM by using Cloud-init (for Linux VMs) or Sysprep (for Windows VMs), select the
Custom Script check box.
Fields required for configuring Cloud-init and Sysprep, such as options for specifying a configuration
script or answer file and text boxes for specifying paths to required files, appear below the check box.
» If you uploaded the file to a storage container on the cluster, click ADSF path, and then enter the
path to the file.
Enter the ADSF prefix (adsf://) followed by the absolute path to the file. For example, if the user
data is in /home/my_dir/cloud.cfg, enter adsf:///home/my_dir/cloud.cfg. Note the use of
three slashes.
» If the file is available on your local computer, click Upload a file, click Choose File, and then
upload the file.
» If you want to create or paste the contents of the file, click Type or paste script, and then use the
text box that is provided.
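The three-slash form described above follows mechanically from joining the `adsf://` prefix with an absolute path. The helper below is a hypothetical illustration of that rule, not a Nutanix utility:

```shell
# Hypothetical helper (illustration only): prefix an absolute in-cluster
# path with the ADSF scheme. The path's own leading slash produces the
# three-slash form noted above.
to_adsf_path() {
  case "$1" in
    /*) printf 'adsf://%s\n' "$1" ;;
    *)  echo "error: path must be absolute" >&2; return 1 ;;
  esac
}

to_adsf_path /home/my_dir/cloud.cfg
```

For the example path from the text, this prints adsf:///home/my_dir/cloud.cfg.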
11. To copy one or more files to a location on the VM (Linux VMs) or to a location in the ISO file (Windows
VMs) during initialization, do the following:
a. In Source File ADSF Path, enter the absolute path to the file.
b. In Destination Path in VM, enter the absolute path to the target directory and the file name.
For example, if the source file entry is /home/my_dir/myfile.txt, then the entry for the
Destination Path in VM should be /<directory_name>/<file_name>, for example, /mnt/
myfile.txt.
c. To add another file or directory, click the button beside the destination path field. In the new row
that appears, specify the source and target details.
12. When all the field entries are correct, click the Save button to create the VM and close the Create VM
dialog box.
The new VM appears in the VM table view.
Managing a VM (AHV)
You can use the web console to manage virtual machines (VMs) in Acropolis managed clusters.
Note: Your available options depend on the VM status, type, and permissions. Unavailable options are
grayed out.
Procedure
a. Select Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
b. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
Ensure that the VM has at least one empty IDE CD-ROM slot to attach the ISO.
c. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR)
check box.
The Self-Service Restore feature is enabled on the VM. The guest VM administrator can restore the
desired file or files from the VM. For more information about the self-service restore feature, see Self-
Service Restore in the Data Protection and Recovery with Prism Element guide.
d. After you select the Enable Nutanix Guest Tools check box, the VSS snapshot feature is enabled by
default.
After this feature is enabled, Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent
takes snapshots for VMs that support VSS.
Note:
The AHV VM snapshots are not application consistent. The AHV snapshots are taken
from the VM entity menu by selecting a VM and clicking Take Snapshot.
The application consistent snapshots feature is available with Protection Domain based
snapshots and Recovery Points in Prism Central. For more information, see Conditions
for Application-consistent Snapshots in the Data Protection and Recovery with Prism
Element guide.
e. Click Submit.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual
machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
Note:
• If you clone a VM, NGT is not enabled on the cloned VM by default. If the cloned VM is
powered off, enable NGT from the UI and power on the VM. If the cloned VM is powered
on, enable NGT from the UI and restart the Nutanix Guest Agent service.
• If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and
Mounting the NGT Installer Simultaneously on Multiple Cloned VMs.
If you eject the CD, you can mount the CD back again by logging into the Controller VM and
running the following nCLI command.
nutanix@cvm$ ncli ngt mount vm-id=virtual_machine_id
For example, to mount the NGT on the VM with
VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987,
type the following command.
nutanix@cvm$ ncli ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
• Clicking the Mount ISO button displays a window that allows you to mount an ISO
image to the VM. To mount an image, select the desired image and CD-ROM drive from the pull-
down lists, and then click the Mount button.
• Clicking the C-A-D icon button sends a Ctrl-Alt-Del command to the VM.
• Clicking the camera icon button takes a screenshot of the console window.
• Clicking the power icon button allows you to power on/off the VM. These are the same options that
you can access from the Power On Actions or Power Off Actions action link below the VM table
(see next step).
6. To start or shut down the VM, click the Power on (or Power off) action link.
Power on begins immediately. If you want to power off the VMs, you are prompted to select one of the
following options:
• Power Off. Hypervisor performs a hard power off action on the VM.
• Power Cycle. Hypervisor performs a hard restart action on the VM.
• Reset. Hypervisor performs an ACPI reset action through the BIOS on the VM.
• Guest Shutdown. Operating system of the VM performs a graceful shutdown.
• Guest Reboot. Operating system of the VM performs a graceful restart.
Note: If you perform power operations such as Guest Reboot or Guest Shutdown by using the Prism
Element web console or API on Windows VMs, these operations might silently fail without any error
message if a screen saver is running in the Windows VM at that time. Perform the same power
operations again immediately so that they succeed.
7. To make a snapshot of the VM, click the Take Snapshot action link.
For more information, see Virtual Machine Snapshots.
Note: Nutanix recommends live migrating VMs while they are under light load. If they are migrated
while heavily utilized, the migration might fail because of limited bandwidth.
• Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and Restrictions for
Multiple vGPU Support in the AHV Administration Guide.
• Multiple vGPUs are supported on the same VM only if you select the highest vGPU profile type.
• For more information on vGPU profile selection, see:
• Virtual GPU Types for Supported GPUs in the NVIDIA Virtual GPU Software User Guide in the
NVIDIA's Virtual GPU Software Documentation webpage, and
• GPU and vGPU Support in the AHV Administration Guide.
Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space
associated with that vDisk is not reclaimed unless you also delete the VM snapshots.
To increase the memory allocation and the number of vCPUs on your VMs while the VMs are powered
on (hot-pluggable), do the following:
a. In the vCPUs field, you can increase the number of vCPUs on your VMs while the VMs are
powered on.
b. In the Number of Cores Per vCPU field, you can change the number of cores per vCPU only if
the VMs are powered off.
c. In the Memory field, you can increase the memory allocation on your VMs while the VMs are
powered on.
For more information about hot-pluggable vCPUs and memory, see Virtual Machine Memory and CPU
Hot-Plug Configurations in the AHV Administration Guide.
To attach a volume group to the VM, do the following:
a. In the Volume Groups section, click Add volume group, and then do one of the following:
a. To enable flash mode on the VM, click the Enable Flash Mode check box.
» After you enable this feature on the VM, the status is updated in the VM table view. To view the
status of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in
the VM table view.
» You can disable the flash mode feature for individual virtual disks. To update the flash mode for
individual virtual disks, click the update disk icon in the Disks pane and deselect the Enable
Flash Mode check box.
11. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete
the VM.
The deleted VM disappears from the list of VMs in the table.
vNIC Detach (hot-unplug): perform the detach by using any of the following methods:
• Using Prism Central: See the Managing a VM (AHV) topic in the Prism Central Guide.
• Using the Prism Element web console: See Managing a VM (AHV) on page 84.
• Using acli: Log on to the CVM with SSH and run the following command:
nutanix@cvm$ acli vm.nic_delete <vm_name> <nic mac address>
or,
nutanix@cvm$ acli vm.nic_update <vm_name> <nic mac address> connected=false
Replace the following attributes in the above commands: <vm_name> with the name of the VM, and
<nic mac address> with the MAC address of the vNIC.
The result depends on whether the ACPI mechanism in the guest succeeds:
• ACPI mechanism successful (Yes): The vNIC detach is successful. No action is needed.
• ACPI mechanism unsuccessful (No): The logs report "Device detached successfully", but the vNIC
detach is not successful. Power cycle the VM for a successful vNIC detach.
Note: In most cases, it is observed that the ACPI mechanism failure occurs when no guest OS is installed
on the VM.
• The state including the power state (for example, powered-on, powered-off, suspended) of the VMs.
• The data includes all the files that make up the VM. This data also includes the data from disks,
configurations, and devices, such as virtual network interface cards.
For more information about creating VM snapshots, see Creating a VM Snapshot Manually section in the
Prism Web Console Guide.
You can schedule and generate snapshots as a part of the disaster recovery process using Nutanix DR
solutions. AOS generates snapshots when you protect a VM with a protection domain using the Data
Protection dashboard in Prism Web Console (see the Data Protection and Recovery with Prism Element
guide). Similarly, AOS generates Recovery Points (snapshots are called Recovery Points in Prism Central)
when you protect a VM with a protection policy using the Data Protection dashboard in Prism Central
(see the Leap Administration Guide).
For example, in the Data Protection dashboard in Prism Web Console, you can create schedules to
generate snapshots using various RPO schemes, such as asynchronous replication with frequency
intervals of 60 minutes or more, or NearSync replication with frequency intervals of as little as 20 seconds
and up to 15 minutes. These schemes also create snapshots in addition to the ones generated by the
schedules. For example, asynchronous replication schedules generate snapshots according to the
configured schedule and, in addition, an extra snapshot every 6 hours. Similarly, NearSync generates
snapshots according to the configured schedule and also generates one extra snapshot every hour.
Similarly, you can use the options in the Data Protection section of Prism Central to generate Recovery
Points using the same RPO schemes.
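As a back-of-envelope illustration of these cadences (assuming a 24-hour window and the extra-snapshot intervals quoted above), a 60-minute asynchronous schedule yields:

```shell
# Back-of-envelope sketch using the cadences quoted above: a 60-minute
# asynchronous schedule over a 24-hour window, plus the extra snapshot
# that asynchronous replication adds every 6 hours.
window_hours=24
scheduled=$(( window_hours * 60 / 60 ))  # one snapshot per 60-minute interval
extra=$(( window_hours / 6 ))            # one extra snapshot every 6 hours
echo $(( scheduled + extra ))
```

That is, 24 scheduled snapshots plus 4 extras, or 28 snapshots per day for this particular schedule; other intervals scale the same way.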
Windows VM Provisioning
Nutanix VirtIO for Windows
Nutanix VirtIO is a collection of drivers for para-virtual devices that enhance the stability and performance
of virtual machines on AHV.
Nutanix VirtIO is available in two formats:
• VirtIO ISO file - Use it when the VM does not yet have a Windows OS installed. For more information, see
Installing or Upgrading Nutanix VirtIO for Windows on page 97.
Prior to AOS 6.6: NGT includes the VM Mobility package, which is a re-packaging of VirtIO. The
repackaging is done with additional changes to enable a built-in driver that is pre-installed in Windows
but not enabled by default. This driver is used to enable the SCSI controller required by some Windows
editions for seamless mobility between different types of hypervisors. In this case, the VM Mobility
package uses the same version numbering as VirtIO.
AOS 6.6 and above: NGT contains both a VirtIO and a VM Mobility package. The Nutanix VirtIO
package contains all the VirtIO drivers, and the VM Mobility package is no longer re-packaged with the
VirtIO drivers and contains only the change to enable the SCSI controller.
The NGT release is aligned with the AOS release and the bundled VirtIO package is updated in the next
NGT release. Nutanix does not back-port the NGT releases to the previous AOS releases.
Caution: If you install an older version of NGT, the latest VirtIO version, even if installed, might be replaced
by the older VirtIO version.
The VirtIO release is not aligned with the AOS release. To ensure that you have the latest VirtIO drivers,
either install the latest NGT version or update the drivers using the latest VirtIO package available on the
Nutanix Support portal. For more information, see Installing or Upgrading Nutanix VirtIO for Windows on
page 97.
VirtIO Requirements
Requirements for Nutanix VirtIO for Windows.
VirtIO supports the following operating systems:
Note: On Windows 7 and Windows Server 2008 R2, install Microsoft KB3033929 or update the operating
system with the latest Windows Update to enable support for SHA2 certificates.
Caution: The VirtIO installation or upgrade may fail if multiple Windows VSS snapshots are present in the
guest VM. The installation or upgrade failure is due to the timeout that occurs during installation of Nutanix
VirtIO SCSI pass-through controller driver.
It is recommended to clean up the VSS snapshots or temporarily disconnect the drive that
contains the snapshots. Ensure that you delete only the snapshots that are no longer needed.
Procedure
1. Go to the Nutanix Support portal and select Downloads > AHV and click VirtIO.
» If you are creating a new Windows VM, download the ISO file. The installer is available on the ISO if
your VM does not have Internet access.
» If you are updating drivers in a Windows VM, download the MSI installer file.
» For the ISO: Upload the ISO to the cluster, as described in the Configuring Images topic in Prism
Element Web Console Guide.
» For the MSI: Open the downloaded file to run the MSI installer.
The Nutanix VirtIO setup wizard shows a status bar and completes the installation.
Note: To automatically install Nutanix VirtIO, see Installing or Upgrading Nutanix VirtIO for Windows on
page 97.
If you have already installed Nutanix VirtIO, use the following procedure to upgrade VirtIO to the latest
version. If you have not yet installed Nutanix VirtIO, use the following procedure to install it.
Procedure
1. Go to the Nutanix Support portal, select Downloads > AHV, and click VirtIO.
» Extract the VirtIO ISO into the same VM where you load Nutanix VirtIO, for easier installation.
If you choose this option, proceed directly to step 7.
» Download the VirtIO ISO for Windows to your local machine.
If you choose this option, proceed to step 3.
3. Upload the ISO to the cluster, as described in the Configuring Images topic of Prism Element Web
Console Guide.
4. Locate the VM where you want to install the Nutanix VirtIO ISO and update the VM.
• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: IDE
• IMAGE: Select the Nutanix VirtIO ISO
6. Click Add.
Open the devices and select the specific Nutanix drivers for download. For each device, right-click,
select Update Driver Software, and browse to the drive containing the VirtIO ISO. For each device,
follow the wizard instructions until you receive installation confirmation.
• Upload the Windows installer ISO to your cluster as described in the Configuring Images topic in Web
Console Guide.
• Upload the Nutanix VirtIO ISO to your cluster as described in the Configuring Images topic in Web
Console Guide.
Procedure
Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are creating a
Linux VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows VM with
the hardware clock pointing to the desired timezone.
d. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
e. MEMORY: Enter the amount of memory for the VM (in GiB).
5. If you are creating a Windows VM, add a Windows CD-ROM to the VM.
a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated fields.
• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: SATA
• IMAGE: Select the Windows installer ISO.
b. Click Add.
• TYPE: DISK
• OPERATION: ALLOCATE ON STORAGE CONTAINER
• BUS TYPE: SATA
• STORAGE CONTAINER: Select the appropriate storage container.
• SIZE: Enter the number for the size of the hard drive (in GiB).
b. Click Add to add the disk.
• TYPE: DISK
• OPERATION: CLONE FROM IMAGE
• BUS TYPE: SATA
• CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image you
created previously.
b. Click Add to add the disk.
9. Optionally, after you have migrated or created a VM, add a network interface card (NIC).
What to do next
Install Windows by following Installing Windows on a VM on page 105.
Installing Windows on a VM
Install a Windows virtual machine.
Procedure
6. Select the desired language, time and currency format, and keyboard information.
10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.
The amd64 folder contains drivers for 64-bit operating systems. The x86 folder contains drivers for 32-
bit operating systems.
Note: From Nutanix VirtIO driver version 1.1.5, the driver package contains Windows Hardware Quality
Labs (WHQL) certified drivers for Windows.
13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress, which can take several minutes.
14. Enter your user name and password information and click Finish.
Installation can take several minutes.
Once you complete the logon information, Windows setup completes installation.
15. Follow the instructions in Installing or Upgrading Nutanix VirtIO for Windows on page 97 to install
other drivers which are part of Nutanix VirtIO package.
Windows Defender Credential Guard uses Microsoft virtualization-based security to isolate user credentials
in the virtualization-based security (VBS) module in AHV. When you enable Windows Defender Credential
Guard on an AHV guest VM, the guest VM runs both the Windows OS and VBS on top of AHV. Each
Windows OS guest VM that has Credential Guard enabled has its own VBS to securely store
credentials.
Limitations
• Windows Defender Credential Guard is not supported on hosts with AMD CPUs.
Note: vTPM is supported with AOS 6.5.1 or later and AHV 20220304.242 or later release versions
only.
Caution: Use of Windows Defender Credential Guard in your AHV clusters impacts VM performance.
If you enable Windows Defender Credential Guard on AHV guest VMs, VM density drops by ~15–20%.
This expected performance impact is due to nested virtualization overhead added as a result of enabling
credential guard.
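To put the quoted drop in concrete terms, here is a rough sizing sketch; the 40-VM baseline is an arbitrary illustrative figure, not a Nutanix sizing guideline:

```shell
# Illustration of the ~15-20% density drop quoted above, applied to an
# arbitrary baseline of 40 VMs per host.
base_vms=40
after_20pct=$(( base_vms * 80 / 100 ))  # worst case: 20% drop
after_15pct=$(( base_vms * 85 / 100 ))  # best case: 15% drop
echo "expect roughly ${after_20pct}-${after_15pct} VMs per host"
```

In this example, a host that previously ran 40 guest VMs would be expected to run roughly 32 to 34 VMs with Credential Guard enabled.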
Procedure
1. Enable Windows Defender Credential Guard when you are either creating a VM or updating a VM. Do
one of the following:
See UEFI Support for VM on page 116 and Secure Boot Support for VMs on page 122 for more
information about these features.
e. Proceed to configure other attributes for your Windows VM.
See Creating a Windows VM on AHV with Nutanix VirtIO on page 102 for more information.
f. Click Save.
g. Turn on the VM.
Note:
If the VM is configured to use BIOS, install the guest OS again.
If the VM is already configured to use UEFI, skip the step to select Secure Boot.
See UEFI Support for VM on page 116 and Secure Boot Support for VMs on page 122 for more
information about these features.
d. Click Save.
e. Turn on the VM.
5. Open command prompt in the Windows VM and apply the Group Policy settings:
> gpupdate /force
If you have not enabled Windows Defender Credential Guard (step 4) and perform this step (step 5), a
warning similar to the following is displayed:
Updating policy...
For more detailed information, review the event log or run GPRESULT /H GPReport.html
from the command line to access information about Group Policy results.
Event Viewer displays a warning for the group policy with an error message that indicates Secure Boot
is not enabled on the VM.
To view the warning message in Event Viewer, do the following:
Note: Ensure that you follow the steps in the order that is stated in this document to successfully enable
Windows Defender Credential Guard.
a. In the Windows VM, open System Information by typing msinfo32 in the search field next to the
Start menu.
b. Verify if the values of the parameters are as indicated in the following screen shot:
Note:
• If you choose to apply the VM-host affinity policy, it limits Acropolis HA and Acropolis Dynamic
Scheduling (ADS) in such a way that a virtual machine cannot be restarted or migrated to a
host that does not conform to the requirements of the affinity policy as this policy is enforced
mandatorily.
• The VM-host anti-affinity policy is not supported.
• VMs configured with host affinity settings retain these settings if the VM is migrated to a new
cluster. Remove the VM-host affinity policies applied to a VM that you want to migrate to
another cluster, because the VM retains the UUID of the host, which prevents the VM from
restarting on the destination cluster. Attempts to protect such VMs succeed. However, some
disaster recovery operations, such as migration, fail, and attempts to power on these VMs
also fail.
• VMs with host affinity policies can only be migrated to the hosts specified in the affinity policy. If
only one host is specified, the VM cannot be migrated or started on another host during an HA
event. For more information, see Non-Migratable Hosts on page 114.
You can define the VM-host affinity policies by using Prism Element during the VM create or update
operation. For more information, see Creating a VM (AHV).
Note:
• Currently, you can only define VM-VM anti-affinity policy by using aCLI. For more information,
see Configuring VM-VM Anti-Affinity Policy on page 113.
• The VM-VM affinity policy is not supported.
Note: If a VM is cloned that has the affinity policies configured, then the policies are not automatically
applied to the cloned VM. However, if a VM is restored from a DR snapshot, the policies are automatically
applied to the VM.
Procedure
2. Create a group.
nutanix@cvm$ acli vm_group.create group_name
Replace group_name with the name of the group.
3. Add the VMs on which you want to define anti-affinity to the group.
nutanix@cvm$ acli vm_group.add_vms group_name vm_list=vm_name
Replace group_name with the name of the group. Replace vm_name with the name of the VMs that you
want to define anti-affinity on. In case of multiple VMs, you can specify comma-separated list of VM
names.
Procedure
Non-Migratable Hosts
VMs with GPU, CPU passthrough, PCI passthrough, and host affinity policies are not migrated to other
hosts in the cluster. Such VMs are treated in a different manner in scenarios where VMs are required to
migrate to other hosts in the cluster.
Scenario Behavior
One-click upgrade VM is powered off.
Life-cycle management (LCM) Pre-check for LCM fails and the VMs are not migrated.
Note: You can also perform these power operations by using the V3 API calls. For more information, see
developer.nutanix.com.
Procedure
• Boot faster
• Avoid legacy option ROM address constraints
• Include robust reliability and fault management
• Use UEFI drivers
Note:
• Nutanix supports the starting of VMs with UEFI firmware in an AHV cluster. However, if a
VM is added to a protection domain and later restored on a different cluster, the VM loses
boot configuration. To restore the lost boot configuration, see Setting up Boot Device on
page 119.
• Nutanix also provides limited support for VMs migrated from a Hyper-V cluster.
You can create or update VMs with UEFI firmware by using acli commands, Prism Element web console,
or Prism Central web console. For more information about creating a VM by using the Prism Element web
console or Prism Central web console, see Creating a VM (AHV) on page 75. For information about
creating a VM by using aCLI, see Creating UEFI VMs by Using aCLI on page 116.
Note: If you are creating a VM by using aCLI commands, you can define the location of the storage
container for UEFI firmware and variables. The Prism Element and Prism Central web consoles do not
provide the option to define the storage container that stores UEFI firmware and variables.
For more information about the supported OSes for the guest VMs, see the AHV Guest OS section in
the Compatibility and Interoperability Matrix document.
Procedure
Note: To update the location of the storage container, clear the existing UEFI configuration and then set
nvram_container to a container of your choice.
What to do next
Go to the UEFI BIOS menu and configure the UEFI firmware settings. For more information about
accessing and setting the UEFI firmware, see Getting Familiar with UEFI Firmware Menu on page 117.
Tip: To enter the UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box,
and immediately press F2 when the VM starts to boot.
Important: Resetting the VM causes a downtime. We suggest that you reset the VM only during off-
production hours or during a maintenance period.
4. Use the up or down arrow key to go to Device Manager and press Enter.
The Device Manager page appears.
6. In the OVMF Settings page, use the up or down arrow key to go to the Change Preferred field and
use the right or left arrow key to increase or decrease the boot resolution.
The default boot resolution is 1280x1024.
8. Select Reset and click Submit in the Power off/Reset dialog box to restart the VM.
After you restart the VM, the OS displays the changed resolution.
• VM is in powered on state.
• The system behavior associated with the following VM conditions is noted:
VM is installed with UEFI and the EFI boot partition exists: Any change made to the boot order
persists; the changes are saved in the nvVars file in the EFI partition.
No guest OS is installed on the VM but the EFI boot partition exists: Any change made to the boot
order persists; the changes are saved in the nvVars file in the EFI partition.
VM with no EFI boot partition: Any change made to the boot order persists only while the VM is on (or
rebooted); a power off/on action reverts the boot order to the previous setting.
Procedure
To set up the boot device for a UEFI VM, perform the following steps:
Tip: To enter the UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box,
and immediately press F2 when the VM starts to boot.
Important: Resetting the VM causes a downtime. We suggest that you reset the VM only during off-
production hours or during a maintenance period.
4. Use the up or down arrow key to go to Boot Manager and press Enter.
The Boot Manager screen displays the list of available boot devices in the cluster.
Procedure
Tip: To enter the UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box,
and immediately press F2 when the VM starts to boot.
Important: Resetting the VM causes a downtime. We suggest that you reset the VM only during off-
production hours or during a maintenance period.
4. Use the up or down arrow key to go to Boot Maintenance Manager and press Enter.
5. In the Boot Maintenance Manager screen, use the up or down arrow key to go to the Auto Boot
Time-out field.
The default boot-time value is 0 seconds.
The boot-time value is changed. The VM starts after the defined boot-time value.
Limitations
Secure Boot for guest VMs has the following limitations:
• Nutanix does not support converting a VM that uses IDE disks or legacy BIOS to VMs that use Secure
Boot.
• The minimum supported version of the Nutanix VirtIO package for Secure boot-enabled VMs is 1.1.6.
• Secure boot VMs do not permit CPU, memory, or PCI disk hot plug.
Requirements
Following are the requirements for Secure Boot:
Procedure
Note: Specifying the machine type is required to enable the secure boot feature. UEFI is enabled by
default when the Secure Boot feature is enabled.
Procedure
Note:
• If you disable the Secure Boot flag alone, the machine type remains q35 unless you also explicitly
change the machine type flag.
• UEFI is enabled by default when the Secure Boot feature is enabled. Disabling Secure Boot
does not revert the UEFI flags.
Procedure
a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create <vm_name> network=network [vlan_mode={kAccess |
kTrunked}] [trunked_networks=networks]
Specify appropriate values for the following parameters:
• network. Name of the virtual network to which you want to connect the virtual NIC.
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The parameter
is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set to kAccess.
To include the default VLAN, VLAN 0, include it in the list of trunked networks. To trunk all VLANs,
set vlan_mode to kTrunked and skip this parameter.
• vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for
access mode and to kTrunked for trunk mode. Default: kAccess.
b. Configure an existing virtual NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_update <vm_name> mac_addr update_vlan_trunk_info=true
[vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]
Specify appropriate values for the following parameters:
• mac_addr. MAC address of the virtual NIC to update (the MAC address is used to identify the
virtual NIC). Required to update a virtual NIC.
• update_vlan_trunk_info. Update the VLAN type and list of trunked VLANs. Set
update_vlan_trunk_info=true to enable trunked mode. If not specified, the parameter defaults to
false and the vlan_mode and trunked_networks parameters are ignored.
Note: You must set update_vlan_trunk_info to true. If you do not set this parameter to true,
trunked_networks is not changed.
• vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for access
mode and to kTrunked for trunk mode.
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The
parameter is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set to
kAccess. To include the default VLAN, VLAN 0, include it in the list of trunked networks. To trunk
all VLANs, set vlan_mode to kTrunked and skip this parameter.
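As an illustration of the parameters described above, the following sketch creates a trunked NIC that carries VLANs 0, 10, and 20. The VM name app-vm and the network name vlan0 are hypothetical placeholders:

```shell
# Create a trunked virtual NIC on VM "app-vm" attached to network "vlan0".
# VLAN 0 (the default VLAN) must be listed explicitly to be trunked.
acli vm.nic_create app-vm network=vlan0 vlan_mode=kTrunked trunked_networks=0,10,20
```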
Note: You cannot decrease the memory allocation and the number of CPUs on your VMs while the VMs are
powered on.
You can change the memory and CPU configuration of your VMs by using the Acropolis CLI (aCLI) (see
Managing a VM (AHV) in the Prism Element Web Console Guide or see Managing a VM (AHV) and
Managing a VM (Self Service) in the Prism Central Guide).
Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not make the hot-plugged memory online. If the
memory is not online, you cannot use the new memory. Perform the following procedure to make the
memory online.
1. Identify the memory blocks that are offline.
Display the state of a specific memory block.
$ cat /sys/devices/system/memory/memoryXXX/state
Display the state of all memory blocks.
$ grep line /sys/devices/system/memory/*/state
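Once you identify a block whose state is offline, you can bring it online from inside the guest. A minimal sketch (run as root; memoryXXX is the block identified in the previous step):

```shell
# Bring an offline memory block online so the guest can use the
# hot-plugged memory.
echo online > /sys/devices/system/memory/memoryXXX/state
```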
2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB of memory, hot-plugging more memory
so that the final memory is greater than 3 GB results in a memory-overflow condition. To resolve the
issue, restart the guest OS (CentOS 7.2) with the following kernel command-line parameter:
swiotlb=force
CPU OS Limitation
On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you might
have to bring the CPUs online. For each hot-plugged CPU, run the following command to bring the CPU
online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online
Replace <n> with the number of the hot plugged CPU.
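If several CPUs were hot-plugged, the per-CPU command above can be wrapped in a loop (a sketch; run as root inside the guest):

```shell
# Bring every hot-pluggable CPU online. cpu0 has no "online" file and is
# skipped by the glob; writing 1 to an already-online CPU is harmless.
for cpu in /sys/devices/system/cpu/cpu[0-9]*/online; do
  echo 1 > "$cpu"
done
```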
Procedure
Note: After you upgrade from a version that does not support hot-plug to a version that does, you must
power cycle any VM that was instantiated and powered on before the upgrade so that it becomes
compatible with the memory and CPU hot-plug feature. This power cycle is required only once after the
upgrade. New VMs created on the supported version are hot-plug compatible by default.
Procedure
2. Check how many NUMA nodes are available on each AHV host in the cluster.
nutanix@cvm$ hostssh "numactl --hardware"
The console displays an output similar to the following:
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 128837 MB
node 0 free: 862 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 129021 MB
node 1 free: 352 MB
node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128859 MB
node 0 free: 1076 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129000 MB
node 1 free: 436 MB
Replace <vm_name> with the name of the VM on which you want to enable vNUMA or vUMA. Replace x
with the values for the following indicated parameters:
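The command this step refers to is elided here. On recent AOS releases, enabling vNUMA on a VM through aCLI typically takes a form similar to the following sketch (the parameter name num_vnuma_nodes is an assumption; verify it against the documentation for your AOS release):

```shell
# Hypothetical sketch: configure 2 vNUMA nodes on the VM.
# Replace <vm_name> with the name of the VM.
acli vm.update <vm_name> num_vnuma_nodes=2
```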
Note: You can configure either pass-through or a vGPU for a guest VM but not both.
This guide describes the concepts related to the GPU and vGPU support in AHV. For the configuration
procedures, see the Prism Element Web Console Guide.
For driver installation instructions, see the NVIDIA Grid Host Driver for Nutanix AHV Installation Guide.
Note: VMs with GPU are not migrated to other hosts in the cluster. For more information, see Non-
Migratable Hosts on page 114.
Supported GPUs
The following GPUs are supported:
Note: These GPUs are supported only by the AHV version that is bundled with the AOS release.
Limitations
GPU pass-through support has the following limitations:
• Live migration of VMs with a GPU configuration is not supported. Live migration of VMs is necessary
when the BIOS, BMC, and the hypervisor on the host are being upgraded. During these upgrades, VMs
that have a GPU configuration are powered off and then powered on automatically when the node is
back up.
• VM pause and resume are not supported.
• You cannot hot add VM memory if the VM is using a GPU.
• Hot add and hot remove support is not available for GPUs.
• You can change the GPU configuration of a VM only when the VM is turned off.
Note: NVIDIA does not support Windows Guest VMs on the C-series NVIDIA vGPU types. See the NVIDIA
documentation on Virtual GPU software for more information.
Note: If the specified license is not available on the licensing server, the VM starts up and functions
normally, but the vGPU runs with reduced capability.
• Memory is not reserved for the VM on the failover host by the HA process. When the VM fails over, if
sufficient memory is not available, the VM cannot power on.
• vGPU resources are not reserved on the failover host. When the VM fails over, if the required vGPU
resources are not available on the failover host, the VM cannot power on.
• If both site A and site B have the same GPU boards (and the same assignable vGPU profiles), failovers
work seamlessly. With protection domains, no additional steps are required: GPU profiles are restored
correctly and vGPU console settings persist after recovery. With Leap DR, vGPU console settings do
not persist after recovery.
• If site A and site B have different GPU boards and vGPU profiles, you must manually remove the vGPU
profile before you power on the VM in site B.
The vGPU console settings are persistent after recovery and all failovers are supported for the following:
For information about the behavior, see the Recovery of vGPU-enabled VMs topic in the Data Protection
and Recovery with Prism Element guide.
See Enabling or Disabling Console Support for vGPU VMs on page 141 for more information about
configuring the support.
For SRA and SRM support, see the Nutanix SRA documentation.
Note: ADS support requires that live migration of VMs with vGPUs be operational in the cluster. See Live
Migration of VMs with vGPUs above for the minimum NVIDIA and AOS versions that support live migration
of VMs with vGPUs.
When a number of VMs with vGPUs are running on a host and you enable ADS support for the cluster, the
Lazan manager invokes VM migration tasks to resolve resource hotspots or fragmentation in the cluster to
power on incoming vGPU VMs. The Lazan manager can migrate vGPU-enabled VMs to other hosts in the
cluster only if:
• The other hosts support compatible or identical vGPU resources as the source host (hosting the vGPU-
enabled VMs).
• The host affinity is not set for the vGPU-enabled VM.
For more information about limitations, see Live Migration of VMs with Virtual GPUs on page 140 and
Limitations of Live Migration Support on page 141.
For more information about ADS, see Acropolis Dynamic Scheduling in AHV on page 6.
Note: Multiple vGPUs on the same VM are supported on NVIDIA Virtual GPU software version 10.1
(440.53) or later.
You can deploy virtual GPUs of different types. A single physical GPU can be divided into a number of
vGPUs, depending on the type of vGPU profile that is used on the physical GPU. Each physical GPU
on a GPU board supports more than one type of vGPU profile. For example, a Tesla® M60 GPU device
provides different types of vGPU profiles, such as M60-0Q, M60-1Q, M60-2Q, M60-4Q, and M60-8Q.
You can only add multiple vGPUs of the same type of vGPU profile to a single VM. For example, consider
that you configure a VM on a node that has one NVIDIA Tesla® M60 GPU board. Tesla® M60 provides two
physical GPUs, each supporting one M60-8Q (profile) vGPU, thus supporting a total of two M60-8Q vGPUs
for the entire host.
For restrictions on configuring multiple vGPUs on the same VM, see Restrictions for Multiple vGPU
Support on page 133.
For steps to add multiple vGPUs to the same VM, see Creating a VM (AHV) and Adding Multiple vGPUs to
a VM information in Prism Element Web Console Guide or Creating a VM through Prism Central (AHV) and
Adding Multiple vGPUs to a VM in Prism Central Guide.
• All the vGPUs that you assign to one VM must be of the same type. In the aforesaid example, with the
Tesla® M60 GPU device, you can assign multiple M60-8Q vGPU profiles. You cannot assign one vGPU
of the M60-1Q type and another vGPU of the M60-8Q type.
Note: You can configure any number of vGPUs of the same type on a VM. However, the cluster
calculates a maximum number of vGPUs of the same type per VM. This number is defined as
max_instances_per_vm. This number is variable and changes based on the GPU resources available
in the cluster and the number of VMs deployed. If the number of vGPUs of a specific type that you
configured on a VM exceeds the max_instances_per_vm number, then the VM fails to power on and the
following error message is displayed:
Operation failed: NoHostResources: No host has enough available GPU for VM <name of
VM>(UUID of VM).
• Configure multiple vGPUs only of the highest type using Prism. The highest type of vGPU profile is
based on the driver deployed in the cluster. In the aforesaid example, on a Tesla® M60 device, you can
only configure multiple vGPUs of the M60-8Q type. Prism prevents you from configuring multiple vGPUs
of any other type such as M60-2Q.
Note:
You can use the CLI (aCLI) to configure multiple vGPUs of other available types. See Acropolis
Command-Line Interface (aCLI) for more information. Use the vm.gpu_assign <vm.name>
gpu=<gpu-type> command multiple times, once for each vGPU, to configure multiple vGPUs
of other available types.
See the GPU board and software documentation for more information.
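For example, assigning two vGPUs of a lower profile through aCLI might look like the following sketch. The VM name vm1 and the M60-2Q profile are hypothetical placeholders; the profile must be available on the node:

```shell
# Run vm.gpu_assign once per vGPU; all vGPUs on the VM must use the
# same profile type.
acli vm.gpu_assign vm1 gpu=M60-2Q
acli vm.gpu_assign vm1 gpu=M60-2Q
```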
• Configure either a passthrough GPU or vGPUs on the same VM. You cannot configure both
passthrough GPU and vGPUs. Prism automatically disallows such configurations after the first GPU is
configured.
• The VM powers on only if the requested type and number of vGPUs are available in the node.
In the aforesaid example, the VM, which is configured with two M60-8Q vGPUs, fails to power on if
another VM sharing the same GPU board is already using one M60-8Q vGPU. This is because the
Tesla® M60 GPU board allows only two M60-8Q vGPUs. Of these, one is already used by another VM.
Important:
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support on page 133 and
Restrictions for Multiple vGPU Support on page 133.
After you add the first vGPU, do the following on the Create VM or Update VM dialog box (the main dialog
box) to add more vGPUs:
Procedure
4. Repeat the steps for each vGPU addition you want to make.
• Destination node is equipped with the required resources for the VM.
• The VM GPU drivers are compatible with the AHV host GPU drivers.
If the destination node is not equipped with enough resources, or there is a compatibility issue
between the VM GPU drivers and the AHV host GPU drivers, LCM forcibly shuts down the non-HA-
protected VMs.
Note: Live migration of VMs with vGPUs is supported for vGPUs created with minimum NVIDIA Virtual GPU
software version 10.1 (440.53).
Important: In an HA event involving any GPU node, the node locality of the affected vGPU VMs is not
restored after GPU node recovery. The affected vGPU VMs are not migrated back to their original GPU host
intentionally to avoid extended VM stun time expected while migrating vGPU frame buffer. If vGPU VM node
locality is required, migrate the affected vGPU VMs to the desired host manually. For information about the
steps to migrate a live VM with vGPUs, see Migrating Live a VM with Virtual GPUs in the Prism Central
Guide and the Prism Web Console Guide.
Note:
Important frame buffer and VM stun time considerations are:
• The GPU board (for example, NVIDIA Tesla M60) vendor provides the information for
maximum frame buffer size of vGPU types (for example, M60-8Q type) that can be configured
on VMs. However, the actual frame buffer usage may be lower than the maximum sizes.
• The VM stun time depends on the number of vGPUs configured on the VM being migrated.
Stun time may be longer in case of multiple vGPUs operating on the VM.
The stun time also depends on network factors, such as the bandwidth available for use during the
migration.
• To another host within the same cluster. Both Prism Web Console (Prism Element) and Prism Central
allow you to live migrate a vGPU-enabled VM to another host within the same cluster.
• To a host outside the cluster, that is, a host in another cluster. Only Prism Central allows you to migrate
a vGPU-enabled VM to a host outside the cluster.
For information about the steps to live migrate a vGPU-enabled VM, see the following:
• Migrating Live a vGPU-enabled VM Within the Cluster in the Prism Web Console Guide.
• Migrating Within the Cluster in the Prism Central Infrastructure Guide.
• Migrating Outside the Cluster in the Prism Central Infrastructure Guide.
• Live migration is supported for VMs configured with single or multiple virtual GPUs. It is not supported
for VMs configured with passthrough GPUs.
• The target cluster for the migration must have adequate and available GPU resources, with the same
vGPU types as configured for the VMs to be migrated, to support the vGPUs on the VMs that need to
be migrated.
See Restrictions for Multiple vGPU Support on page 133 for more details.
• The VMs with vGPUs that need to be migrated live cannot be protected with high availability.
• Ensure that the VM is not powered off.
• Ensure that you have the right GPU software license that supports live migration of vGPUs. The source
and target clusters must have the same license type. You require an appropriate license of NVIDIA
GRID software version. See Live Migration of VMs with Virtual GPUs on page 140 for minimum
license requirements.
1. Run the following aCLI command to check if console support is enabled or disabled for the VM with
vGPUs.
acli> vm.get vm-name
Where vm-name is the name of the VM for which you want to check the console support status.
The step result includes the following parameter for the specified VM:
gpu_console=False
Where False indicates that console support is not enabled for the VM. This parameter is displayed as
True when you enable console support for the VM. The default value for gpu_console= is False since
console support is disabled by default.
Note: The console may not display the gpu_console parameter in the output of the vm.get
command if the gpu_console parameter was not previously enabled.
2. Run the following aCLI command to enable or disable console support for the VM with vGPU:
vm.update vm-name gpu_console=true | false
Where:
• true—indicates that you are enabling console support for the VM with vGPU.
• false—indicates that you are disabling console support for the VM with vGPU.
3. Run the vm.get command to verify that the gpu_console value is true (console support enabled) or
false (console support disabled), as you configured it.
If the value in the vm.get command output is not what you expect, perform a guest shutdown
of the VM with vGPU. Next, run the vm.on vm-name aCLI command to turn the VM on again.
Then run the vm.get command and check the gpu_console value.
4. Click a VM name in the VM table view to open the VM details page. Click Launch Console.
The Console opens but only a black screen is displayed.
5. Click the console screen. Then press one of the following key combinations, depending on the
operating system from which you are accessing the cluster.
Procedure
1. Log on to the Prism web console, click the gear icon, and then click Network Configuration in the
menu.
2. On Network Configuration > Subnets tab, click the Edit action link of the network for which you want
to configure a PXE environment.
The VMs that require the PXE boot information must be on this network.
a. Select the Enable IP address management check box and complete the following configurations:
• In the Network IP Prefix field, enter the network IP address, with prefix, of the subnet that you
are updating.
• In the Gateway IP Address field, enter the gateway IP address of the subnet that you are
updating.
• To provide DHCP settings for the VM, select the DHCP Settings check box and provide the
following information.
Domain Search: Enter the VLAN domain name. Use only the domain name format.
Example: nutanix.com
Boot File Name: The name of the boot file that the VMs need to download from the TFTP
host server.
Example: boot_ahv202010
4. Under IP Address Pools, click Create Pool to add IP address pools for the subnet.
(Mandatory for Overlay type subnets) This section provides the Network IP Prefix and Gateway IP
fields for the subnet.
(Optional for VLAN type subnet) Check this box to display the Network IP Prefix and Gateway IP
fields and configure the IP address details.
5. (Optional, and for VLAN networks only) Select the Override DHCP Server check box and enter an IP
address in the DHCP Server IP Address field.
You can configure a DHCP server using the Override DHCP Server option only for VLAN
networks.
The DHCP Server IP address (reserved IP address for the Acropolis DHCP server) is visible only to
VMs on this network and responds only to DHCP requests. If this box is not checked, the DHCP Server
IP Address field is not displayed and the DHCP server IP address is generated automatically. The
automatically generated address is network_IP_address_subnet.254, or if the default gateway is
using that address, network_IP_address_subnet.253.
Usually, the default DHCP server IP is configured as the last usable IP in the subnet (for example,
10.0.0.254 for the 10.0.0.0/24 subnet). If you want to use a different IP address in the subnet as the DHCP
server IP, use the override option.
6. Click Close.
Procedure
2. Create a VM.
5. Update the boot device setting so that the VM boots over the network.
nutanix@cvm$ acli vm.update_boot_device vm mac_addr=mac_addr
Replace vm with the name of the VM and mac_addr with the MAC address of the virtual interface that
the VM must use to boot over the network.
For example, update the boot device setting of the VM named nw-boot-vm so that the VM uses the
virtual interface with MAC address 00-00-5E-00-53-FF.
nutanix@cvm$ acli vm.update_boot_device nw-boot-vm mac_addr=00-00-5E-00-53-FF
Procedure
1. Use WinSCP, with SFTP selected, to connect to a Controller VM through port 2222 and start browsing
the DSF datastore.
Note: The root directory displays storage containers and you cannot change it. You can only upload
files to one of the storage containers and not directly to the root directory. To create or delete storage
containers, you can use the Prism user interface.
2. Authenticate by using your Prism username and password or, for advanced users, the public key that is
managed through the Prism cluster lockdown user interface.
Note:
• vDisk load balancing is disabled by default for volume groups that are directly attached to VMs.
However, vDisk load balancing is enabled by default for volume groups that are attached to
VMs by using a data services IP address.
• If you use the web console to clone a volume group that has load balancing enabled, the volume
group clone does not have load balancing enabled by default. To enable load balancing
on the volume group clone, set the load_balance_vm_attachments parameter to
true by using aCLI or the REST API.
• You can attach a maximum of 10 load-balanced volume groups per guest VM.
• For Linux VMs, ensure that the SCSI device timeout is 60 seconds. For information
about how to check and modify the SCSI device timeout, see the Red Hat documentation
at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/
online_storage_reconfiguration_guide/task_controlling-scsi-command-timer-onlining-devices.
Perform the following procedure to enable load balancing of vDisks by using aCLI.
Procedure
Note: To modify an existing volume group, you must first detach all the VMs that are attached to that
volume group before you enable vDisk load balancing.
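A minimal aCLI sketch for enabling vDisk load balancing on an existing volume group follows. The volume group name vg1 is a placeholder; detach all attached VMs first, per the note above:

```shell
# Enable vDisk load balancing on the volume group "vg1".
acli vg.update vg1 load_balance_vm_attachments=true
```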
Procedure
• Events page:
1. Navigate to Activity > Events from the entities menu to access the Events page in Prism
Central.
Navigate to Alerts > Events from the main menu to access the Events page in the Prism web
console.
2. Locate or search for the following string, and hover over or click the string:
VMs restarted due to HA failover
The system displays the list of restarted VMs in the Summary page and as a hover text for the
selected event.
For example:
VMs restarted due to HA failover: <VM_Name1>, <VM_Name2>, <VM_Name3>,
<VM_Name4>. VMs were running on host X.X.X.1 prior to HA.
General Considerations
You cannot migrate images or volume groups.
You cannot perform the following operations during an ongoing vDisk migration:
• Clone the VM
• Resize the VM
• Take a snapshot
Note: During vDisk migration, the logical usage of a vDisk can exceed the total capacity of the vDisk. This
occurs because the logical usage of the vDisk includes the space occupied in both the source and
destination containers. Once the migration is complete, the logical usage of the vDisk returns to its normal
value.
• Migration of vDisks stalls if sufficient storage space is not available in the target storage container.
Ensure that the target container has sufficient storage space before you begin migration.
• You cannot migrate vDisks of a VM that is protected by a protection domain or protection policy. When
you start the migration, ensure that the VM is not protected by a protection domain or protection policy.
If you want to migrate vDisks of such a VM, do the following:
Procedure
» Migrate specific vDisks by using either the UUID of the vDisk or address of the vDisk.
Migrate specific vDisks by using the UUID of the vDisk.
nutanix@cvm$ acli vm.update_container vm-name device_uuid_list=device_uuid
container=target-container wait=false
Replace vm-name with the name of VM, device_uuid with the device UUID of the vDisk, and
target-container with the name of the target storage container.
Run nutanix@cvm$ acli vm.get <vm-name> to determine the device UUID of the vDisk.
You can migrate multiple vDisks at a time by specifying a comma-separated list of device UUIDs of
the vDisks.
Alternatively, you can migrate vDisks by using the address of the vDisk.
nutanix@cvm$ acli vm.update_container vm-name disk_addr_list=disk-address
container=target-container wait=false
Replace vm-name with the name of VM, disk-address with the address of the disk, and target-
container with the name of the target storage container.
Run nutanix@cvm$ acli vm.get <vm-name> to determine the address of the vDisk.
Following is the format of the vDisk address:
bus.index
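Putting the pieces together, a hypothetical migration of a single vDisk by address might look like the following sketch. The VM name db-vm, the disk address scsi.0, and the container gold-tier are placeholder names:

```shell
# Migrate the scsi.0 vDisk of VM "db-vm" to the "gold-tier" container.
acli vm.update_container db-vm disk_addr_list=scsi.0 container=gold-tier wait=false
```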
3. Check the status of the migration in the Tasks menu of the Prism Element web console.
• If you cancel an ongoing migration, AOS retains the vDisks that have not yet been migrated
in the source container. AOS does not migrate vDisks that have already been migrated to
the target container back to the source container.
• If sufficient storage space is not available in the original storage container, migration of
vDisks back to the original container stalls. To resolve the issue, ensure that the source
container has sufficient storage space.
OVA Restrictions
You can perform the OVA operations subject to the following restrictions:
• QCOW2: Default disk format auto-selected in the Export as OVA dialog box.
• VMDK: Deselect QCOW2 and select VMDK, if required, before you submit the VM export request
when you export a VM.
• When you export a VM or upload an OVA and the VM or OVA does not have any disks, the disk
format is irrelevant.
• Upload an OVA to multiple clusters using a URL as the source for the OVA. You can upload an OVA
only to a single cluster when you use the local OVA File source.
• Perform the OVA operations only with appropriate permissions. You can run the OVA operations that
you have permissions for, based on your assigned user role.
• The OVA that results from exporting a VM on AHV is compatible with any AHV version 5.18 or later.
• The minimum supported versions for performing OVA operations are AOS 5.18, Prism Central 2020.8,
and AHV-20190916.253.