
AHV Administration Guide

AHV 5.20
September 7, 2023
Contents

AHV Overview........................................................................................ 4
Storage Overview....................................................................................................................... 4
AHV Turbo........................................................................................................................ 5
Acropolis Dynamic Scheduling in AHV....................................................................................... 6
Disabling Acropolis Dynamic Scheduling.......................................................................... 7
Enabling Acropolis Dynamic Scheduling...........................................................................7
Virtualization Management Web Console Interface.....................................................................8
Viewing the AHV Version on Prism Element......................................................................8
Viewing the AHV Version on Prism Central....................................................................... 9
AHV Cluster Power Outage Handling......................................................................................... 9

Node Management................................................................................11
Nonconfigurable AHV Components.......................................................................................... 11
Nutanix Software............................................................................................................ 11
AHV Settings.................................................................................................................. 11
Controller VM Access...............................................................................................................12
Admin User Access to Controller VM..............................................................................12
Nutanix User Access to Controller VM............................................................................14
Controller VM Password Complexity Requirements.........................................................15
AHV Host Access..................................................................................................................... 16
Initial Configuration........................................................................................................ 17
Accessing the AHV Host Using the Admin Account........................................................ 18
Changing the Root User Password................................................................................. 18
Changing Nutanix User Password...................................................................................19
AHV Host Password Complexity Requirements............................................................... 19
Verifying the Cluster Health..................................................................................................... 20
Putting a Node into Maintenance Mode.................................................................................... 22
Exiting a Node from the Maintenance Mode............................................................................. 24
Shutting Down a Node in a Cluster (AHV)................................................................................ 25
Starting a Node in a Cluster (AHV)...........................................................................................26
Shutting Down an AHV Cluster.................................................................................................27
Rebooting an AHV Node in a Nutanix Cluster........................................................................... 30
Changing CVM Memory Configuration (AHV)............................................................................31
Changing the AHV Hostname................................................................................................... 31
Changing the Name of the CVM Displayed in the Prism Web Console....................................... 32
Compute-Only Node Configuration (AHV Only)......................................................................... 33
Adding a Compute-Only Node to an AHV Cluster............................................................35

Host Network Management...................................................................43


Prerequisites for Configuring Networking................................................................................ 44
AHV Networking Recommendations......................................................................................... 44
IP Address Management................................................................................................. 50
Layer 2 Network Management.................................................................................................. 51
About Virtual Switch....................................................................................................... 51
Virtual Switch Requirements...........................................................................................60
Virtual Switch Limitations............................................................................................... 60
Virtual Switch Management............................................................................................ 61

Enabling LACP and LAG (AHV Only)............................................................................... 63
VLAN Configuration........................................................................................................ 66
Enabling RSS Virtio-Net Multi-Queue by Increasing the Number of VNIC Queues...................... 69
Changing the IP Address of an AHV Host.................................................................................72

Virtual Machine Management................................................................75


Supported Guest VM Types for AHV........................................................................................ 75
Creating a VM (AHV)................................................................................................................ 75
Managing a VM (AHV).............................................................................................................. 84
Limitation for vNIC Hot-Unplugging................................................................................ 93
Virtual Machine Snapshots....................................................................................................... 94
Windows VM Provisioning.........................................................................................................95
Nutanix VirtIO for Windows.............................................................................................95
Installing Windows on a VM.......................................................................................... 105
Windows Defender Credential Guard Support in AHV................................................... 107
Affinity Policies for AHV......................................................................................................... 112
Configuring VM-VM Anti-Affinity Policy......................................................................... 113
Removing VM-VM Anti-Affinity Policy............................................................................ 114
Non-Migratable Hosts....................................................................................................114
Performing Power Operations on VMs by Using Nutanix Guest Tools (aCLI)............................ 115
UEFI Support for VM.............................................................................................................. 116
Creating UEFI VMs by Using aCLI.................................................................................116
Getting Familiar with UEFI Firmware Menu................................................................... 117
Secure Boot Support for VMs....................................................................................... 122
Secure Boot Considerations......................................................................................... 122
Creating/Updating a VM with Secure Boot Enabled.......................................................122
Virtual Machine Network Management................................................................................... 123
Configuring a Virtual NIC to Operate in Access or Trunk Mode..................................... 123
Virtual Machine Memory and CPU Hot-Plug Configurations.................................................... 124
Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)....................................125
Virtual Machine Memory Management (vNUMA)..................................................................... 126
Enabling vNUMA on Virtual Machines........................................................................... 126
GPU and vGPU Support......................................................................................................... 128
Supported GPUs........................................................................................................... 128
GPU Pass-Through for Guest VMs................................................................................ 129
NVIDIA GRID Virtual GPU Support on AHV................................................................... 130
PXE Configuration for AHV VMs............................................................................................. 142
Configuring the PXE Environment for AHV VMs............................................................ 143
Configuring a VM to Boot over a Network..................................................................... 144
Uploading Files to DSF for Microsoft Windows Users............................................................. 145
Enabling Load Balancing of vDisks in a Volume Group........................................................... 146
Viewing list of restarted VMs after an HA event......................................................................147

Live vDisk Migration Across Storage Containers................................. 149


Migrating a vDisk to Another Container..................................................................................150

OVAs...................................................................................................152
OVA Restrictions.................................................................................................................... 152

Copyright............................................................................................ 153

AHV OVERVIEW
As the default option for Nutanix HCI, the native Nutanix hypervisor, AHV, represents a unique approach
to virtualization that offers the powerful virtualization capabilities needed to deploy and manage enterprise
applications. AHV complements the HCI value by integrating native virtualization with networking,
infrastructure, and operations management in a single intuitive interface: Nutanix Prism.
Virtualization teams find AHV easy to learn and transition to from legacy virtualization solutions with
familiar workflows for VM operations, live migration, VM high availability, and virtual network management.
AHV includes resiliency features, including high availability and dynamic scheduling without the need for
additional licensing, and security is integral to every aspect of the system from the ground up. AHV also
incorporates the optional Flow Security and Networking, allowing easy access to hypervisor-based network
microsegmentation and advanced software-defined networking.
See the Field Installation Guide for information about how to deploy and create a cluster. Once you create
the cluster by using Foundation, you can use this guide to perform day-to-day management tasks.

AOS and AHV Compatibility


For information about the AOS and AHV compatibility with this release, see the Compatibility and
Interoperability Matrix.

Limitations
For information about AHV configuration limitations, see the Nutanix Configuration Maximums webpage.

Nested Virtualization
Nutanix does not support nested virtualization (nested VMs) in an AHV cluster.

Storage Overview
AHV uses a Distributed Storage Fabric to deliver data services such as storage provisioning, snapshots,
clones, and data protection to VMs directly.
In AHV clusters, AOS passes all disks to the VMs as raw SCSI block devices, which keeps the I/O path
lightweight and optimized. Each AHV host runs an iSCSI redirector, which establishes a highly resilient
storage path from each VM to storage across the Nutanix cluster.
QEMU is configured with the iSCSI redirector as the iSCSI target portal. Upon a login request, the
redirector performs an iSCSI login redirect to a healthy Stargate (preferably the local one).



Figure 1: AHV Storage

AHV Turbo
AHV Turbo represents significant advances to the data path in AHV. It provides an I/O path that
bypasses QEMU when servicing storage I/O requests, which lowers CPU usage and increases the amount of
storage I/O available to VMs.
When all I/O travels through QEMU, it passes through a single queue that can limit system performance.
AHV Turbo instead provides an I/O path that uses a multi-queue approach to bypass QEMU, allowing data
to flow from a VM to storage more efficiently. This results in much higher I/O capacity and lower CPU
usage. The storage queues automatically scale out to match the number of vCPUs configured for a given
VM, resulting in higher performance as the workload scales up.
AHV Turbo is transparent to VMs and is enabled by default on VMs that run in AHV clusters. For
maximum VM performance, ensure that the following conditions are met:

• The latest Nutanix VirtIO package is installed for Windows VMs. For information on how to download
and install the latest VirtIO package, see Installing or Upgrading Nutanix VirtIO for Windows.

Note: No additional configuration is required at this stage.

• The VM has more than one vCPU.


• The workloads are multi-threaded.

Note: Multi-queue is enabled by default in current Linux distributions. For details, refer to the
documentation for your Linux distribution.

In addition to the multi-queue approach for storage I/O, you can also achieve maximum network I/O
performance by using the multi-queue approach for any vNICs in the system. For information about how to
enable multi-queue and set an optimum number of queues, see Enabling RSS Virtio-Net Multi-Queue by
Increasing the Number of VNIC Queues.

Note: Ensure that the guest operating system fully supports multi-queue before you enable it. For details,
refer to the documentation for your Linux distribution.
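
To confirm that multi-queue is in effect inside a Linux guest, you can inspect the virtio block and network
queues from within the guest. This is a quick check only; the device names (sda, eth0) below are examples
and may differ in your VM:
user@uvm$ ls /sys/block/sda/mq/ | wc -l    # number of block I/O queues for the disk
user@uvm$ ethtool -l eth0                  # current and maximum combined queue counts for the vNIC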

Acropolis Dynamic Scheduling in AHV


Acropolis Dynamic Scheduling (ADS) proactively monitors your cluster for any compute and storage I/O
contentions or hotspots over a period of time. If ADS detects a problem, ADS creates a migration plan that
eliminates hotspots in the cluster by migrating VMs from one host to another.
You can monitor VM migration tasks from the Task dashboard of the Prism Element web console.
Following are the advantages of ADS:

• ADS improves the initial placement of the VMs depending on the VM configuration.
• Nutanix Volumes uses ADS for balancing sessions of the externally available iSCSI targets.

Note: ADS honors all the configured host affinities, VM-host affinities, VM-VM anti-affinity policies, and HA
policies.

By default, ADS is enabled and Nutanix recommends you keep this feature enabled. However, see
Disabling Acropolis Dynamic Scheduling on page 7 for information about how to disable the ADS
feature. See Enabling Acropolis Dynamic Scheduling on page 7 for information about how to enable
the ADS feature if you previously disabled the feature.
ADS monitors the following resources:

• VM CPU Utilization: Total CPU usage of each guest VM.


• Storage CPU Utilization: Storage controller (Stargate) CPU usage per VM or iSCSI target.
ADS does not monitor memory and networking usage.

How Acropolis Dynamic Scheduling Works


Lazan is the ADS service in an AHV cluster. AOS selects a Lazan manager and Lazan solver among the
hosts in the cluster to effectively manage ADS operations.
ADS performs the following tasks to resolve compute and storage I/O contentions or hotspots:

• The Lazan manager gathers statistics from the components it monitors.


• The Lazan solver (runner) checks the statistics for potential anomalies and determines how to resolve
them, if possible.
• The Lazan manager invokes the tasks (for example, VM migrations) to resolve the situation.

Note:

• During migration, a VM consumes resources on both the source and destination hosts as the
High Availability (HA) reservation algorithm must protect the VM on both hosts. If a migration
fails due to lack of free resources, turn off some VMs so that migration is possible.
• If a problem is detected and ADS cannot solve the issue (for example, because of limited
CPU or storage resources), the migration plan might fail. In these cases, an alert is generated.
Monitor these alerts from the Alerts dashboard of the Prism Element web console and take
necessary remedial actions.



• If the host, firmware, or AOS upgrade is in progress and if any resource contention occurs
during the upgrade period, ADS does not perform any resource contention rebalancing.

When Is a Hotspot Detected?


Lazan runs every 15 minutes and analyzes the resource usage for at least that period of time. If the
resource utilization of an AHV host remains >85% for the span of 15 minutes, Lazan triggers migration
tasks to remove the hotspot.

Note: For a storage hotspot, ADS looks at the last 40 minutes of data and uses a smoothing algorithm
to use the most recent data. For a CPU hotspot, ADS looks at the last 10 minutes of data only, that is, the
average CPU usage over the last 10 minutes.

Following are the possible reasons if there is an obvious hotspot, but the VMs did not migrate:

• Lazan cannot resolve a hotspot. For example:

• There is a large VM (for example, 16 vCPUs) at 100% usage that accounts for 75% of the AHV host
usage (and the host itself is also at 100% usage).
• The other hosts are loaded at approximately 40% usage.
In these situations, the other hosts cannot accommodate the large VM without causing contention there
as well. Lazan does not prioritize one host or VM over others for contention, so it leaves the VM where it
is hosted.
• Number of all-flash nodes in the cluster is less than the replication factor.
If the cluster has an RF2 configuration, the cluster must have a minimum of two all-flash nodes for
successful migration of VMs on all the all-flash nodes.

Migrations Audit
Prism Central displays the list of all the VM migration operations generated by ADS. In Prism Central, go
to Menu -> Activity -> Audits to display the VM migrations list. You can filter the migrations by clicking
Filters and selecting Migrate in the Operation Type tab. The list displays all the VM migration tasks
created by ADS with details such as the source and target host, VM name, and time of migration.

Disabling Acropolis Dynamic Scheduling


Perform the procedure described in this topic to disable ADS. Nutanix recommends you keep ADS
enabled.

Procedure

1. Log on to a Controller VM in your cluster with SSH.

2. Disable ADS.
nutanix@cvm$ acli ads.update enable=false
After you disable ADS, ADS takes no action to resolve contentions. You must take the remedial actions
manually or re-enable the feature.

Enabling Acropolis Dynamic Scheduling


If you have disabled the ADS feature and want to enable the feature, perform the following procedure.



Procedure

1. Log on to a Controller VM in your cluster with SSH.

2. Enable ADS.
nutanix@cvm$ acli ads.update enable=true
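
To confirm the current state of the feature after enabling or disabling it, you can query the ADS
configuration with aCLI. The ads.get subcommand shown here is an assumption based on the ads.update
command above; if it is not available in your AOS version, check the ADS setting in the Prism web console
instead.
nutanix@cvm$ acli ads.get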

Virtualization Management Web Console Interface


You can manage the virtualization management features by using the Prism GUI (Prism Element and
Prism Central web consoles).
You can do the following by using the Prism web consoles:

• Configure network connections


• Create virtual machines
• Manage virtual machines (launch console, start/shut down, take snapshots, migrate, clone, update, and
delete)
• Monitor virtual machines
• Enable VM high availability
See the Prism Element Web Console Guide and the Prism Central Infrastructure Guide.

Viewing the AHV Version on Prism Element


You can see the AHV version installed in the Prism Element web console.

About this task


To view the AHV version installed on the host, do the following.

Procedure

1. Log in to the Prism web console.

2. The Hypervisor Summary widget on the top left side of the Home page displays the AHV
version.

Figure 2: LCM Page Displays AHV Version
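
If you prefer the command line, you can also read the AHV version directly from the hosts. This sketch
assumes the hostssh utility on the CVM and the /etc/nutanix-release file on the AHV host; the exact
output format can vary between AHV releases.
nutanix@cvm$ hostssh "cat /etc/nutanix-release"    # prints the AHV release string for each host in the cluster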



Viewing the AHV Version on Prism Central
You can see the AHV version installed in the Prism Central console.

About this task


To view the AHV version installed on any host in the clusters managed by Prism Central, do the
following.

Procedure

1. Log on to Prism Central.

2. In the sidebar, select Hardware > Hosts > Summary tab.

3. Click the host you want to see the hypervisor version for.

4. The Host detail view page displays the Properties widget that lists the Hypervisor Version.

Figure 3: Hypervisor Version in Host Detail View

AHV Cluster Power Outage Handling


When a power outage occurs, the cluster goes down. After the power is restored, the Nutanix/AHV cluster
is recovered by default in the following order of events:
1. Nodes have power restored, which could occur automatically or with manual intervention, depending
on the BIOS settings.
2. The AHV host automatically starts the CVM.
3. The CVM connects to the host. The CVM can be the local CVM or another CVM in the cluster.
4. The CVM performs a series of checks to identify the VMs that were running on the hosts and applies
the appropriate power-on actions.
5. The agent VMs which were previously running on the host are powered on.
6. The remaining user VMs (UVMs) which were previously running on the host are then inspected. If the
UVMs were not protected by HA, then they are started. If the UVMs were protected by HA and were
started on a different host, then a live migration is triggered to migrate the UVMs back to the recovered
host.

Note:



• The power-on operations across hosts in case of a cluster outage may not follow any order,
and the checks from the CVM are also unordered. Each host is handled in parallel.
• There is no option to affect the VM start-up order or keep the VMs in a powered-down state.



NODE MANAGEMENT
Nonconfigurable AHV Components
The components listed here are configured by the Nutanix manufacturing and installation processes. Do
not modify any of these components except under the direction of Nutanix Support.

Nutanix Software
Modifying any of the following Nutanix software settings may inadvertently constrain performance of your
Nutanix cluster or render the Nutanix cluster inoperable.

• Local datastore name.


• Configuration and contents of any CVM (except memory configuration to enable certain features).

Important: Note the following important considerations about CVMs.

• Do not delete the Nutanix CVM.


• Do not take a snapshot of the CVM for backup.
• Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
• Do not create additional CVM user accounts.
Use the default accounts (admin or nutanix), or use sudo to elevate to the root account.
• Do not decrease CVM memory below recommended minimum amounts required for cluster
and add-in features.
Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process
detect and monitor CVM memory.
• Nutanix does not support the use of third-party storage on hosts that are part of Nutanix clusters.
Normal cluster operations might be affected if there are connectivity issues with the third-party
storage you attach to the hosts in a Nutanix cluster.
• Do not run any commands on a CVM that are not in the Nutanix documentation.

AHV Settings
Nutanix AHV is a cluster-optimized hypervisor appliance.
Alteration of the hypervisor appliance (unless advised by Nutanix Technical Support) is unsupported and
may result in the hypervisor or VMs functioning incorrectly.
Unsupported alterations include (but are not limited to):

• Hypervisor configuration, including installed packages


• Controller VM virtual hardware configuration file (.xml file). Each AOS version and upgrade includes
a specific Controller VM virtual hardware configuration. Therefore, do not edit or otherwise modify the
Controller VM virtual hardware configuration file.
• iSCSI settings
• Open vSwitch settings

• Installation of third-party software not approved by Nutanix



• Installation or upgrade of software packages from non-Nutanix sources (using yum, rpm, or similar)
• Taking snapshots of the Controller VM
• Creating user accounts on AHV hosts
• Changing the timezone of the AHV hosts. By default, the timezone of an AHV host is set to UTC.

• Joining AHV hosts to Active Directory or OpenLDAP domains

Controller VM Access
Although each host in a Nutanix cluster runs a hypervisor independent of other hosts in the cluster, some
operations affect the entire cluster.
Most administrative functions of a Nutanix cluster can be performed through the web console (Prism);
however, some management tasks require access to the Controller VM (CVM) over SSH.
Nutanix recommends restricting CVM SSH access with password or key authentication.
This topic provides information about how to access the Controller VM as an admin user and nutanix user.
admin User Access
Use the admin user access for all tasks and operations that you must perform on the Controller VM.
As an admin user with default credentials, you cannot access nCLI. You must change the default
password before you can use nCLI. Nutanix recommends that you do not create additional CVM
user accounts. Use the default accounts (admin or nutanix), or use sudo to elevate to the root
account.
For more information about admin user access, see Admin User Access to Controller VM on
page 12.
nutanix User Access
Nutanix strongly recommends that you do not use the nutanix user access unless the procedure
(as provided in a Nutanix Knowledge Base article or user guide) specifically requires the use of the
nutanix user access.

For more information about nutanix user access, see Nutanix User Access to Controller VM on
page 14.
You can perform most administrative functions of a Nutanix cluster through the Prism web consoles or
REST API. Nutanix recommends using these interfaces whenever possible and disabling Controller
VM SSH access with password or key authentication. Some functions, however, require logging on to a
Controller VM with SSH. Exercise caution whenever connecting directly to a Controller VM as it increases
the risk of causing cluster issues.

Warning: When you connect to a Controller VM with SSH, ensure that the SSH client does not import or
change any locale settings. The Nutanix software is not localized, and running the commands with any locale
other than en_US.UTF-8 can cause severe cluster issues.
To check the locale used in an SSH session, run /usr/bin/locale. If any environment variables
are set to anything other than en_US.UTF-8, reconnect with an SSH configuration that does not
import or change any locale settings.
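
For example, a minimal check from within the SSH session, followed by a session-level reset (the exact
variables reported by locale can vary by client; the preferred fix remains reconnecting with an SSH
configuration that does not forward locale settings):
nutanix@cvm$ /usr/bin/locale                              # verify that LANG and LC_* report en_US.UTF-8
nutanix@cvm$ export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8   # reset the current session if needed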

Admin User Access to Controller VM


You can access the Controller VM as the admin user (admin user name and password) with SSH.
For security reasons, the password of the admin user must meet Controller VM Password Complexity
Requirements. When you log on to the Controller VM as the admin user for the first time, you are prompted
to change the default password.



See Controller VM Password Complexity Requirements to set a secure password.
After you have successfully changed the password, the new password is synchronized across all Controller
VMs and interfaces (Prism web console, nCLI, and SSH).

Note:

• As an admin user, you cannot access nCLI by using the default credentials. If you are logging
in as the admin user for the first time, you must log on through the Prism web console or SSH
to the Controller VM. Also, you cannot change the default password of the admin user through
nCLI. To change the default password of the admin user, you must log on through the Prism
web console or SSH to the Controller VM.
• When you attempt to log in to the Prism web console for the first time after you
upgrade to AOS 5.1 from an earlier AOS version, you can use your existing admin user
password to log in and then change the existing password (you are prompted) to adhere to the
password complexity requirements. However, if you are logging in to the Controller VM with
SSH for the first time after the upgrade as the admin user, you must use the default admin user
password (Nutanix/4u) and then change the default password (you are prompted) to adhere to
the Controller VM Password Complexity Requirements.
• You cannot delete the admin user account.
• The default password expiration age for the admin user is 60 days. You can configure the
minimum and maximum password expiration days based on your security requirement.

• nutanix@cvm$ sudo chage -M MAX-DAYS admin

• nutanix@cvm$ sudo chage -m MIN-DAYS admin
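
For example, to set a 90-day maximum password age for the admin user and then verify the configured
aging settings (90 is an example value only):
nutanix@cvm$ sudo chage -M 90 admin    # set the maximum password age to 90 days
nutanix@cvm$ sudo chage -l admin       # list the current password aging settings for admin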

When you change the admin user password, you must update any applications and scripts using the admin
user credentials for authentication. Nutanix recommends that you create a user assigned with the admin
role instead of using the admin user for authentication. The Prism Element Web Console Guide describes
authentication and roles.
Following are the default credentials to access a Controller VM.

Table 1: Controller VM Credentials

Interface             Target                    User Name    Password
SSH client            Nutanix Controller VM     admin        Nutanix/4u
                      Nutanix Controller VM     nutanix      nutanix/4u
Prism web console     Nutanix Controller VM     admin        Nutanix/4u

Accessing the Controller VM Using the Admin User Account

About this task


Perform the following procedure to log on to the Controller VM by using the admin user with SSH for the
first time.



Procedure

1. Log on to the Controller VM with SSH by using the management IP address of the Controller VM and
the following credentials.

• User name: admin


• Password: Nutanix/4u
You are now prompted to change the default password.

2. Respond to the prompts, providing the current and new admin user password.
Changing password for admin.
Old Password:
New password:
Retype new password:
Password changed.
See the requirements listed in Controller VM Password Complexity Requirements to set a secure
password.
For information about logging on to a Controller VM by using the admin user account through the Prism
web console, see Logging Into The Web Console in the Prism Element Web Console Guide.

Nutanix User Access to Controller VM


You can access the Controller VM as the nutanix user (nutanix user name and password) with SSH. For
security reasons, the password of the nutanix user must meet the Controller VM Password Complexity
Requirements on page 15. When you log on to the Controller VM as the nutanix user for the first time,
you are prompted to change the default password.
See Controller VM Password Complexity Requirements on page 15 to set a secure password.
After you have successfully changed the password, the new password is synchronized across all Controller
VMs and interfaces (Prism web console, nCLI, and SSH).

Note:

• As a nutanix user, you cannot access nCLI by using the default credentials. If you are logging
in as the nutanix user for the first time, you must log on through the Prism web console or
SSH to the Controller VM. Also, you cannot change the default password of the nutanix user
through nCLI. To change the default password of the nutanix user, you must log on through
the Prism web console or SSH to the Controller VM.
• When you attempt to log in to the Prism web console for the first time after you
upgrade the AOS from an earlier AOS version, you can use your existing nutanix user
password to log in and then change the existing password (you are prompted) to adhere to the
password complexity requirements. However, if you are logging in to the Controller VM with
SSH for the first time after the upgrade as the nutanix user, you must use the default nutanix
user password (nutanix/4u) and then change the default password (you are prompted) to
adhere to the Controller VM Password Complexity Requirements on page 15.
• You cannot delete the nutanix user account.



• You can configure the minimum and maximum password expiration days based on your
security requirement.

• nutanix@cvm$ sudo chage -M MAX-DAYS nutanix

• nutanix@cvm$ sudo chage -m MIN-DAYS nutanix

When you change the nutanix user password, you must update any applications and scripts using the
nutanix user credentials for authentication. Nutanix recommends that you create a user assigned with the
nutanix role instead of using the nutanix user for authentication. The Prism Element Web Console Guide
describes authentication and roles.
Following are the default credentials to access a Controller VM.

Table 2: Controller VM Credentials

Interface             Target                    User Name    Password
SSH client            Nutanix Controller VM     admin        Nutanix/4u
                      Nutanix Controller VM     nutanix      nutanix/4u
Prism web console     Nutanix Controller VM     admin        Nutanix/4u

Accessing the Controller VM Using the Nutanix User Account

About this task


Perform the following procedure to log on to the Controller VM by using the nutanix user with SSH for the
first time.

Procedure

1. Log on to the Controller VM with SSH by using the management IP address of the Controller VM and
the following credentials.

• User name: nutanix


• Password: nutanix/4u
You are now prompted to change the default password.

2. Respond to the prompts, providing the current and new nutanix user password.
Changing password for nutanix.
Old Password:
New password:
Retype new password:
Password changed.
See Controller VM Password Complexity Requirements on page 15 to set a secure password.
For information about logging on to a Controller VM by using the nutanix user account through the
Prism web console, see Logging Into The Web Console in the Prism Element Web Console Guide.

Controller VM Password Complexity Requirements


The password must meet the following complexity requirements:



• At least eight characters long.
• At least one lowercase letter.
• At least one uppercase letter.
• At least one number.
• At least one special character.

Note: Ensure that the following conditions are met for the special characters usage in the CVM
password:

• Use special characters carefully when setting up the CVM password. In some cases, for
example when you use ! followed by a number in the CVM password, the shell interprets it
as a history expansion and may replace it with a command from the bash history. In this
case, you may end up setting a password string different from the password that you
intend to set.
• Use only ASCII printable characters as special characters in the CVM password. For
information about ASCII printable characters, refer to the ASCII printable characters
(character code 32-127) article on the ASCII code website.

• At least four characters difference from the old password.


• Must not be among the last 5 passwords.
• Must not have more than 2 consecutive occurrences of a character.
• Must not be longer than 199 characters.
If a password for an account (CVM account) is entered five times unsuccessfully within a 15-minute period,
the account is locked for 15 minutes.

AHV Host Access


You can perform most of the administrative functions of a Nutanix cluster using the Prism web consoles
or REST API. Nutanix recommends using these interfaces whenever possible. Some functions, however,
require logging on to an AHV host with SSH.

Note: From AOS 5.15.5 with AHV 20190916.410 onwards, AHV has two new user accounts—admin and
nutanix.

Nutanix provides the following users to access the AHV host:

• root—It is used internally by the AOS. The root user is used for the initial access and configuration of
the AHV host.
• admin—It is used to log on to an AHV host. The admin user is recommended for accessing the AHV
host.
• nutanix—It is used internally by the AOS and must not be used for interactive logon.

Exercise caution whenever connecting directly to an AHV host as it increases the risk of causing cluster
issues.
Following are the default credentials to access an AHV host:



Table 3: AHV Host Credentials

Interface          Target      User Name    Password
SSH client         AHV Host    root         nutanix/4u
                   AHV Host    admin        There is no default password for admin. You must set it
                                            during the initial configuration.
                   AHV Host    nutanix      nutanix/4u

Initial Configuration

About this task


The AHV host is shipped with the default password for the root and nutanix users, which must be
changed using SSH when you log on to the AHV host for the first time. After changing the default
passwords and the admin password, all subsequent logins to the AHV host must be with the admin user.
Perform the following procedure to change the admin user account password for the first time:

Note: Perform this initial configuration on all the AHV hosts.

Procedure

1. Use SSH and log on to the AHV host using the root account.
$ ssh root@<AHV Host IP Address>
Nutanix AHV
root@<AHV Host IP Address> password: # default password nutanix/4u

2. Change the default root user password.


root@ahv# passwd root
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

3. Change the default nutanix user password.


root@ahv# passwd nutanix
Changing password for user nutanix.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

4. Change the admin user password.


root@ahv# passwd admin
Changing password for user admin.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.



Accessing the AHV Host Using the Admin Account

About this task


After setting the admin password in the Initial Configuration on page 17, use the admin user for all
subsequent logins.
Perform the following procedure to access the AHV host using the admin account.

Procedure

1. Log on to the AHV host with SSH using the admin account.
$ ssh admin@<AHV Host IP Address>
Nutanix AHV

2. Enter the admin user password configured in the Initial Configuration on page 17.
admin@<AHV Host IP Address> password:

Changing Admin User Password

About this task


Perform these steps to change the admin password on every AHV host in the cluster:

Procedure

1. Log on to the AHV host using the admin account with SSH.

2. Enter the admin user password configured in the Initial Configuration on page 17.

3. Run the sudo command to change the admin user password.


$ sudo passwd admin

4. Respond to the prompts and provide the new password.


[sudo] password for admin:
Changing password for user admin.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Note: Repeat this step for each AHV host.

See AHV Host Password Complexity Requirements on page 19 to set a secure password.

Changing the Root User Password

About this task


Perform these steps to change the root password on every AHV host in the cluster:

Procedure

1. Log on to the AHV host using the admin account with SSH.

2. Run the sudo command to change to the root user.



3. Change the root password.
root@ahv# passwd root

4. Respond to the prompts and provide the current and new root password.
Changing password for root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Note: Repeat this step for each AHV host.

See AHV Host Password Complexity Requirements on page 19 to set a secure password.

Changing Nutanix User Password

About this task


Perform these steps to change the nutanix password on every AHV host in the cluster:

Procedure

1. Log on to the AHV host using the admin account with SSH.

2. Run the sudo command to change to the root user.

3. Change the nutanix password.


root@ahv# passwd nutanix

4. Respond to the prompts and provide the current and new nutanix password.
Changing password for nutanix.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Note: Repeat this step for each AHV host.

See AHV Host Password Complexity Requirements on page 19 to set a secure password.

AHV Host Password Complexity Requirements


The password you choose must meet the following complexity requirements:



• In configurations with high-security requirements, the password must contain:

• At least 15 characters.
• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
• At least eight characters different from the previous password.
• At most three consecutive occurrences of any given character.
• At most four consecutive occurrences of any given class.
The password cannot be the same as the last five passwords.

• In configurations without high-security requirements, the password must contain:

• At least eight characters.


• At least one upper case letter (A–Z).
• At least one lower case letter (a–z).
• At least one digit (0–9).
• At least one printable ASCII special (non-alphanumeric) character. For example, a tilde (~),
exclamation point (!), at sign (@), number sign (#), or dollar sign ($).
• At least three characters different from the previous password.
• At most three consecutive occurrences of any given character.
The password cannot be the same as the last five passwords.
In both types of configuration, if a password for an account is entered five times unsuccessfully within a 15-
minute period, the account is locked for 15 minutes.

Verifying the Cluster Health


Before you perform operations such as restarting a CVM or AHV host and putting an AHV host into
maintenance mode, check if the cluster can tolerate a single-node failure.

Before you begin


Ensure that you are running the most recent version of NCC.

About this task

Note: If you see any critical alerts, resolve the issues by referring to the indicated KB articles. If you are
unable to resolve any issues, contact Nutanix Support.

Perform the following steps to avoid unexpected downtime or performance issues.



Procedure

1. Review and resolve any critical alerts. Do one of the following:

» In the Prism Element web console, go to the Alerts page.


» Log on to a Controller VM (CVM) with SSH and display the alerts.
nutanix@cvm$ ncli alert ls

Note: If you receive alerts indicating expired encryption certificates or a key manager is not reachable,
resolve these issues before you shut down the cluster. If you do not resolve these issues, data loss of the
cluster might occur.

2. Verify if the cluster can tolerate a single-node failure. Do one of the following:

» In the Prism Element web console, in the Home page, check the status of the Data Resiliency
Status dashboard.
Verify that the status is OK. If the status is anything other than OK, resolve the indicated issues
before you perform any maintenance activity.
» Log on to a Controller VM (CVM) with SSH and check the fault tolerance status of the cluster.
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node
An output similar to the following is displayed:

Important:
Domain Type : NODE
Component Type : STATIC_CONFIGURATION
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 14:22:09 GMT+05:00 2015

Domain Type : NODE


Component Type : ERASURE_CODE_STRIP_SIZE
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 13:19:58 GMT+05:00 2015

Domain Type : NODE


Component Type : METADATA
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Mon Sep 28 14:35:25 GMT+05:00 2015

Domain Type : NODE


Component Type : ZOOKEEPER
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Thu Sep 17 11:09:39 GMT+05:00 2015

Domain Type : NODE


Component Type : EXTENT_GROUPS
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 13:19:58 GMT+05:00 2015

Domain Type : NODE


Component Type : OPLOG
Current Fault Tolerance : 1



Fault Tolerance Details :
Last Update Time : Wed Nov 18 13:19:58 GMT+05:00 2015

Domain Type : NODE


Component Type : FREE_SPACE
Current Fault Tolerance : 1
Fault Tolerance Details :
Last Update Time : Wed Nov 18 14:20:57 GMT+05:00 2015

The value of the Current Fault Tolerance column must be at least 1 for all the nodes in the cluster.
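
To quickly confirm the tolerance values without reading through the full output, you can filter the
command output, for example:
nutanix@cvm$ ncli cluster get-domain-fault-tolerance-status type=node | grep "Current Fault Tolerance"
Every line returned must show a value of at least 1.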

Putting a Node into Maintenance Mode


You may be required to put a node into maintenance mode in certain situations, such as making changes to
the network configuration of a node or performing manual firmware upgrades.

Before you begin

Caution: Verify the data resiliency status of your cluster. If the cluster only has replication factor 2 (RF2),
you can only shut down one node for each cluster. If an RF2 cluster would have more than one node shut
down, shut down the entire cluster.

About this task


When a host is in maintenance mode, AOS marks the host as unschedulable so that no new VM instances
are created on it. Next, an attempt is made to evacuate VMs from the host.
If the evacuation attempt fails, the host remains in the "entering maintenance mode" state, where it is
marked unschedulable, waiting for user remediation. You can shut down VMs on the host or move them to
other nodes. Once the host has no more running VMs, it is in maintenance mode.
When a host is in maintenance mode, VMs are moved from that host to other hosts in the cluster. After
exiting maintenance mode, those VMs are automatically returned to the original host, eliminating the need
to manually move them.
VMs with GPU, CPU passthrough, PCI passthrough, and host affinity policies are not migrated to other
hosts in the cluster. You can choose to shut down such VMs while putting the node into maintenance
mode.
Agent VMs are always shut down if you put a node in maintenance mode and are powered on again after
exiting maintenance mode.
Perform the following steps to put the node into maintenance mode.

Procedure

1. Use SSH to log on to a Controller VM in the cluster.

2. Determine the IP address of the node that you want to put into maintenance mode.
nutanix@cvm$ acli host.list
Note the value of Hypervisor IP for the node that you want to put in maintenance mode.



3. Put the node into maintenance mode.
nutanix@cvm$ acli host.enter_maintenance_mode host-IP-address [wait="{ true |
false }" ] [non_migratable_vm_action="{ acpi_shutdown | block }" ]

Note: Never put Controller VM and AHV hosts into maintenance mode on single-node clusters. It is
recommended to shut down guest VMs before proceeding with disruptive changes.

Replace host-IP-address with either the IP address or host name of the AHV host that you want to
put into maintenance mode.
The following are optional parameters for running the acli host.enter_maintenance_mode command:

• wait: Set the wait parameter to true to wait for the host evacuation attempt to finish.
• non_migratable_vm_action: By default the non_migratable_vm_action parameter is set to block,
which means VMs with GPU, CPU passthrough, PCI passthrough, and host affinity policies are not
migrated or shut down when you put a node into maintenance mode.
If you want to automatically shut down such VMs, set the non_migratable_vm_action parameter to
acpi_shutdown.
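
For example, to put the host 10.10.10.11 (a placeholder IP address) into maintenance mode, wait for the
evacuation attempt to finish, and automatically shut down the VMs that cannot be migrated:
nutanix@cvm$ acli host.enter_maintenance_mode 10.10.10.11 wait=true non_migratable_vm_action=acpi_shutdown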

4. Verify if the host is in the maintenance mode.


nutanix@cvm$ acli host.get host-ip
In the output that is displayed, ensure that node_state equals EnteredMaintenanceMode and
schedulable equals False.
Do not continue if the host has failed to enter the maintenance mode.

5. See Verifying the Cluster Health on page 20 to once again check if the cluster can tolerate a single-
node failure.

6. Put the CVM into the maintenance mode.


nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=true
Replace host-ID with the ID of the host, which you can determine as described in the next step.
This step prevents the CVM services from being affected by any connectivity issues.

7. Determine the ID of the host.


nutanix@cvm$ ncli host list
An output similar to the following is displayed:

Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.2
In this example, the host ID is 1234.
Wait for a few minutes until the CVM is put into the maintenance mode.



8. Verify if the CVM is in the maintenance mode.
Run the following command on the CVM that you put in the maintenance mode.
nutanix@cvm$ genesis status | grep -v "\[\]"
An output similar to the following is displayed:

nutanix@cvm$ genesis status | grep -v "\[\]"


2021-09-24 05:28:03.827628: Services running on this node:
genesis: [11189, 11390, 11414, 11415, 15671, 15672, 15673, 15676]
scavenger: [27241, 27525, 27526, 27527]
xmount: [25915, 26055, 26056, 26074]
zookeeper: [13053, 13101, 13102, 13103, 13113, 13130]
nutanix@cvm$
Only the Genesis, Scavenger, Xmount, and Zookeeper processes must be running (process ID is
displayed next to the process name).
Do not continue if the CVM has failed to enter the maintenance mode, because it can cause a service
interruption.

What to do next
Perform the maintenance activity. Once the maintenance activity is complete, remove the node from the
maintenance mode. See Exiting a Node from the Maintenance Mode on page 24 for more information.

Exiting a Node from the Maintenance Mode


After you perform any maintenance activity, exit the node from the maintenance mode.

About this task


Perform the following to exit the host from the maintenance mode.

Procedure

1. Remove the CVM from the maintenance mode.

a. Determine the ID of the host.


nutanix@cvm$ ncli host list
An output similar to the following is displayed:

Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.2
In this example, the host ID is 1234.

b. From any other CVM in the cluster, run the following command to exit the CVM from the
maintenance mode.
nutanix@cvm$ ncli host edit id=host-ID enable-maintenance-mode=false
Replace host-ID with the ID of the host.

Note: The command fails if you run the command from the CVM that is in the maintenance mode.



c. Verify if all processes on all the CVMs are in the UP state.
nutanix@cvm$ cluster status | grep -v UP

Do not continue if the CVM has failed to exit the maintenance mode.

2. Remove the AHV host from the maintenance mode.

a. From any CVM in the cluster, run the following command to exit the AHV host from the maintenance
mode.
nutanix@cvm$ acli host.exit_maintenance_mode host-ip
Replace host-ip with the IP address of the host.
This command migrates (live migration) all the VMs that were previously running on the host back to
the host.
b. Verify if the host has exited the maintenance mode.
nutanix@cvm$ acli host.get host-ip
In the output that is displayed, ensure that node_state equals kAcropolisNormal or
AcropolisNormal and schedulable equals True.
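
For example, to exit the host 10.10.10.11 (a placeholder IP address) from maintenance mode and verify
its state:
nutanix@cvm$ acli host.exit_maintenance_mode 10.10.10.11
nutanix@cvm$ acli host.get 10.10.10.11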
Contact Nutanix Support if any of the steps described in this document produce unexpected results.

Shutting Down a Node in a Cluster (AHV)


Before you begin

• Caution: Verify the data resiliency status of your cluster. If the cluster only has replication factor 2 (RF2),
you can only shut down one node for each cluster. If an RF2 cluster would have more than one node shut
down, shut down the entire cluster.

See Verifying the Cluster Health on page 20 to check if the cluster can tolerate a single-node failure.
Do not proceed if the cluster cannot tolerate a single-node failure.
• Put the node you want to shut down into maintenance mode.
See Putting a Node into Maintenance Mode on page 22 for instructions about how to put a node into
maintenance mode.
You can list all the hosts in the cluster by running the nutanix@cvm$ acli host.list command, and note
the value of Hypervisor IP for the node you want to shut down.

About this task


Perform the following procedure to shut down a node.

Procedure

1. Using SSH, log on to the Controller VM on the host you want to shut down.

2. Shut down the Controller VM.


nutanix@cvm$ cvm_shutdown -P now

Note: Once the cvm_shutdown command is issued, it might take a few minutes before CVM is powered
off completely. After the cvm_shutdown command is completed successfully, Nutanix recommends that
you wait up to 4 minutes before shutting down the AHV host.



3. Log on to the AHV host with SSH.

4. Shut down the host.


root@ahv# shutdown -h now

What to do next
See Starting a Node in a Cluster (AHV) on page 26 for instructions about how to start a node, including
how to start a CVM and how to exit a node from maintenance mode.

Starting a Node in a Cluster (AHV)


About this task

Procedure

1. On the hardware appliance, power on the node. The CVM starts automatically when you reboot the
node.

2. If the node is in maintenance mode, log on (SSH) to the Controller VM and remove the node from
maintenance mode.
See Exiting a Node from the Maintenance Mode on page 24 for more information.

3. Log on to another CVM in the Nutanix cluster with SSH.

4. Verify that the status of all services on all the CVMs is Up.
nutanix@cvm$ cluster status
If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the
Nutanix cluster.
CVM: <host IP-Address> Up
Zeus UP [9935, 9980, 9981, 9994, 10015,
10037]
Scavenger UP [25880, 26061, 26062]
Xmount UP [21170, 21208]
SysStatCollector UP [22272, 22330, 22331]
IkatProxy UP [23213, 23262]
IkatControlPlane UP [23487, 23565]
SSLTerminator UP [23490, 23620]
SecureFileSync UP [23496, 23645, 23646]
Medusa UP [23912, 23944, 23945, 23946, 24176]
DynamicRingChanger UP [24314, 24404, 24405, 24558]
Pithos UP [24317, 24555, 24556, 24593]
InsightsDB UP [24322, 24472, 24473, 24583]
Athena UP [24329, 24504, 24505]
Mercury UP [24338, 24515, 24516, 24614]
Mantle UP [24344, 24572, 24573, 24634]
VipMonitor UP [18387, 18464, 18465, 18466, 18474]
Stargate UP [24993, 25032]
InsightsDataTransfer UP [25258, 25348, 25349, 25388, 25391,
25393, 25396]
Ergon UP [25263, 25414, 25415]
Cerebro UP [25272, 25462, 25464, 25581]
Chronos UP [25281, 25488, 25489, 25547]
Curator UP [25294, 25528, 25529, 25585]
Prism UP [25718, 25801, 25802, 25899, 25901,
25906, 25941, 25942]



CIM UP [25721, 25829, 25830, 25856]
AlertManager UP [25727, 25862, 25863, 25990]
Arithmos UP [25737, 25896, 25897, 26040]
Catalog UP [25749, 25989, 25991]
Acropolis UP [26011, 26118, 26119]
Uhura UP [26037, 26165, 26166]
Snmp UP [26057, 26214, 26215]
NutanixGuestTools UP [26105, 26282, 26283, 26299]
MinervaCVM UP [27343, 27465, 27466, 27730]
ClusterConfig UP [27358, 27509, 27510]
Aequitas UP [27368, 27567, 27568, 27600]
APLOSEngine UP [27399, 27580, 27581]
APLOS UP [27853, 27946, 27947]
Lazan UP [27865, 27997, 27999]
Delphi UP [27880, 28058, 28060]
Flow UP [27896, 28121, 28124]
Anduril UP [27913, 28143, 28145]
XTrim UP [27956, 28171, 28172]
ClusterHealth UP [7102, 7103, 27995, 28209,28495,
28496, 28503, 28510,
28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645,
28646, 28648, 28792,
28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135,
29142, 29146, 29150,
29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]

Shutting Down an AHV Cluster


You might need to shut down an AHV cluster to perform a maintenance activity or tasks such as relocating
the hardware.

Before you begin


Ensure the following before you shut down the cluster.
1. Upgrade to the most recent version of NCC.
2. Log on to a Controller VM (CVM) with SSH and run the complete NCC health check.
nutanix@cvm$ ncc health_checks run_all
If you receive any failure or error messages, resolve those issues by referring to the KB articles
indicated in the output of the NCC check results. If you are unable to resolve these issues, contact
Nutanix Support.

Warning: If you receive alerts indicating expired encryption certificates or a key manager is not
reachable, resolve these issues before you shut down the cluster. If you do not resolve these issues, data
loss of the cluster might occur.

About this task


Shut down an AHV cluster in the following sequence.

Procedure

1. Shut down the services or VMs associated with AOS features or Nutanix products. For example, shut
down all the Nutanix file server VMs (FSVMs). See the documentation of those features or products for
more information.



2. Shut down all the guest VMs in the cluster in one of the following ways.

» Shut down the guest VMs from within the guest OS.
» Shut down the guest VMs by using the Prism Element web console.
» If you are running many VMs, shut down the VMs by using aCLI:

a. Log on to a CVM in the cluster with SSH.


b. Shut down all the guest VMs in the cluster.
nutanix@cvm$ for i in $(acli vm.list power_state=on | grep -v NTNX | awk 'NR!=1
{print $NF}');do acli vm.shutdown $i ; done

c. Verify if all the guest VMs are shut down.


nutanix@CVM$ acli vm.list power_state=on

d. If any VMs are on, consider powering off the VMs from within the guest OS. To force shut down
through AHV, run the following command:
nutanix@cvm$ acli vm.off vm-name
Replace vm-name with the name of the VM you want to shut down.

3. Stop the Nutanix cluster.

a. Log on to any CVM in the cluster with SSH.


b. Stop the cluster.
nutanix@cvm$ cluster stop

c. Verify if the cluster services have stopped.


nutanix@CVM$ cluster status

The output displays the message The state of the cluster: stop, which confirms that the
cluster has stopped.

Note: The following system services continue to run even after the cluster has stopped successfully:

• Zeus
• Scavenger
• Xmount
• VIPMonitor
You can observe the status of these system services in the output logs:
The state of the cluster: stop
Lockdown mode: Disabled
CVM: 10.xx.x.xxx Up
Zeus UP [13130, 13326, 13327, 13347]
Scavenger UP [15015, 15141, 15142, 15143]
Xmount UP [15012, 15121, 15122, 15147]
SysStatCollector DOWN []
IkatProxy DOWN []
IkatControlPlane DOWN []
SSLTerminator DOWN []
SecureFileSync DOWN []
Medusa DOWN []
DynamicRingChanger DOWN []



Pithos DOWN []
InsightsDB DOWN []
Athena DOWN []
Mercury DOWN []
Mantle DOWN []
VipMonitor UP [25898, 25899, 25900, 25901, 25904]
Stargate DOWN []
InsightsDataTransfer DOWN []
Ergon DOWN []
GoErgon DOWN []
Cerebro DOWN []
Chronos DOWN []
Curator DOWN []
Prism DOWN []
Hera DOWN []
CIM DOWN []
AlertManager DOWN []
Arithmos DOWN []
Catalog DOWN []
Acropolis DOWN []
Uhura DOWN []
NutanixGuestTools DOWN []
MinervaCVM DOWN []
ClusterConfig DOWN []
APLOSEngine DOWN []
APLOS DOWN []
PlacementSolver DOWN []
Lazan DOWN []
Polaris DOWN []
Delphi DOWN []
Security DOWN []
Flow DOWN []
Anduril DOWN []
XTrim DOWN []
ClusterHealth DOWN []

4. Shut down all the CVMs in the cluster. Log on to each CVM in the cluster with SSH and shut down that
CVM.
nutanix@cvm$ sudo shutdown -P now

5. Shut down each node in the cluster. Perform the following steps for each node in the cluster.

a. Log on to the IPMI web console of each node.


b. Under Remote Control > Power Control, select Power Off Server - Orderly Shutdown to
gracefully shut down the node.

Note: The navigation path, tabs, and UI layout in IPMI web console can vary based on the hardware
used at your site.

c. Ping each host to verify that all AHV hosts are shut down.
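Because the CVMs and cluster services are already stopped at this point, run the ping from a machine outside the cluster. A minimal sketch, where ahv-host-ip is a placeholder for each AHV host IP address; no replies indicate that the host is powered off:
$ ping -c 3 ahv-host-ip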

6. Complete the maintenance activity or any other tasks.



7. Start all the nodes in the cluster.

a. Press the power button on the front of the block for each node.
b. Log on to the IPMI web console of each node.
c. On the System tab, check the Power Control status to verify if the node is powered on.

Note: The navigation path, tabs, and UI layout in IPMI web console can vary based on the hardware
used at your site.

8. Start the cluster.

a. Wait for approximately 5 minutes after you start the last node to allow the cluster services to start.
All CVMs start automatically after you start all the nodes.
b. Log on to any CVM in the cluster with SSH.
c. Start the cluster.
nutanix@cvm$ cluster start

d. Verify that all the cluster services are in the UP state.


nutanix@cvm$ cluster status

e. Start the guest VMs from within the guest OS or use the Prism Element web console.
If you are running many VMs, start the VMs by using aCLI:
nutanix@cvm$ for i in $(acli vm.list power_state=off | grep -v NTNX | awk 'NR!=1
{print $NF}');do acli vm.on $i ; done

f. Start the services or VMs associated with AOS features or Nutanix products. For example, start all
the FSVMs. See the documentation of those features or products for more information.
g. Verify if all guest VMs are powered on by using the Prism Element web console.

Rebooting an AHV Node in a Nutanix Cluster


About this task
The Request Reboot operation in the Prism web console gracefully restarts the selected nodes, including
each local CVM one after the other.

Note: Reboot host is a graceful restart workflow. All the user VMs are migrated to another host when you
perform a reboot operation for a host. There is no impact on the user workload due to the reboot operation.

Before you begin

• Ensure the Cluster Resiliency is OK on the Prism web console prior to any restart activities.
• For successful automated restarts of hosts, ensure that the cluster has sufficient HA and resource capacity to accommodate the VMs migrated from the restarting host.
• Ensure that the guest VMs can migrate between hosts as the hosts are placed in maintenance mode. If
not, manual intervention may be required.

Procedure

To reboot the nodes in the cluster, perform the following steps:



1. Log on to the Prism Element web console.

2. Click the gear icon in the main menu and then select Reboot in the Settings page.

3. In the Request Reboot window, select the nodes you want to restart, and click Reboot.

Figure 4: Request Reboot of AHV Node

A progress bar is displayed that indicates the progress of the restart of each node.

Changing CVM Memory Configuration (AHV)


About this task
You can increase the memory reserved for each Controller VM in your cluster by using the 1-click
Controller VM Memory Upgrade available from the Prism Element web console. Increase memory size
depending on the workload type or to enable certain AOS features. See the Increasing the Controller VM
Memory Size topic in the Prism Element Web Console Guide for CVM memory sizing recommendations
and instructions about how to increase the CVM memory.

Changing the AHV Hostname


To change the name of an AHV host, log on to any Controller VM (CVM) in the cluster as admin or nutanix
user and run the change_ahv_hostname script.

About this task


Perform the following procedure to change the name of an AHV host:

Procedure

1. Log on to any CVM in the cluster with SSH.



2. Change the hostname of the AHV host.

• If you are logged in as nutanix user, run the following command:


nutanix@cvm$ change_ahv_hostname --host_ip=host-IP-address --host_name=new-host-name

• If you are logged in as admin user, run the following command:


admin@cvm$ sudo change_ahv_hostname --host_ip=host-IP-address --host_name=new-host-name

Note: The system prompts you to enter the admin user password if you run the change_ahv_hostname
command with sudo.

Replace host-IP-address with the IP address of the host whose name you want to change and new-
host-name with the new hostname for the AHV host.

Note: The hostname must conform to the following naming conventions:

• The maximum length is 63 characters.


• Allowed characters are uppercase and lowercase letters (A-Z and a-z), decimal digits (0-9),
dots (.), and hyphens (-).
• The entity name must start and end with a number or letter.

If you want to update the hostname of multiple hosts in the cluster, run the script for one host at a time
(sequentially).

Note: The Prism Element web console displays the new hostname after a few minutes.
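For example, the following hypothetical invocation renames the host with IP address 10.1.1.11 to AHV-Node-01 (both values are placeholders for illustration only):
nutanix@cvm$ change_ahv_hostname --host_ip=10.1.1.11 --host_name=AHV-Node-01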

Changing the Name of the CVM Displayed in the Prism Web Console
You can change the CVM name that is displayed in the Prism web console. The procedure described
in this document does not change the CVM name that is displayed in the terminal or console of an SSH
session.

About this task


You can change the CVM name by using the change_cvm_display_name script. Run this script from a
CVM other than the CVM whose name you want to change. When you run the change_cvm_display_name
script, AOS performs the following steps:

1. Checks if the new name starts with NTNX- and ends with -CVM. The CVM name must have only letters, numbers, and dashes (-).
2. Checks if the CVM has received a shutdown token.
3. Powers off the CVM. The script does not put the CVM or host into maintenance mode. Therefore, the VMs are not migrated from the host and continue to run, with the I/O operations redirected to another CVM while the current CVM is in a powered-off state.
4. Changes the CVM name, enables autostart, and powers on the CVM.
Perform the following to change the CVM name displayed in the Prism web console.

Procedure

1. Use SSH to log on to a CVM other than the CVM whose name you want to change.



2. Change the name of the CVM.
nutanix@cvm$ change_cvm_display_name --cvm_ip=CVM-IP --cvm_name=new-name
Replace CVM-IP with the IP address of the CVM whose name you want to change and new-name with
the new name for the CVM.
The CVM name must have only letters, numbers, and dashes (-), and must start with NTNX- and end
with -CVM.
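For example, the following hypothetical invocation renames the CVM with IP address 10.1.1.21 (the IP address and name are placeholders; note that the new name follows the required NTNX-...-CVM pattern):
nutanix@cvm$ change_cvm_display_name --cvm_ip=10.1.1.21 --cvm_name=NTNX-Node-01-CVM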

Note: Do not run this command from the CVM whose name you want to change, because the script
powers off the CVM. In this case, when the CVM is powered off, you lose connectivity to the CVM from
the SSH console and the script abruptly ends.

Compute-Only Node Configuration (AHV Only)


A compute-only (CO) node allows you to seamlessly and efficiently expand the computing capacity (CPU
and memory) of your AHV cluster. The Nutanix cluster uses the resources (CPUs and memory) of a CO
node exclusively for computing purposes.

Note: Clusters that have compute-only nodes do not support virtual switches. Instead, use bridge
configurations for network connections. For more information, see Virtual Switch Limitations on
page 60.

You can use a supported server or an existing hyperconverged (HC) node as a CO node. To use a node
as CO, image the node as CO by using Foundation and then add that node to the cluster by using the
Prism Element web console. For more information about how to image a node as a CO node, see the Field
Installation Guide.

Note: If you want an existing HC node that is already a part of the cluster to work as a CO node, remove
that node from the cluster, image that node as CO by using Foundation, and add that node back to the
cluster. For more information about how to remove a node, see Modifying a Cluster.

Key Features of Compute-Only Node


Following are the key features of CO nodes.

• CO nodes do not have a Controller VM (CVM) and local storage.


• AOS sources the storage for vDisks associated with VMs running on CO nodes from the
hyperconverged (HC) nodes in the cluster.
• You can seamlessly manage your VMs (CRUD operations, ADS, and HA) by using the Prism Element
web console.
• AHV runs on the local storage media of the CO node.
• To update AHV on a cluster that contains a compute-only node, use the Life Cycle Manager. For more
information, see the LCM Updates topic in the Life Cycle Manager Guide.

Use Case of Compute-Only Node


CO nodes enable you to achieve more control and value from restrictive licenses such as Oracle. A
CO node is part of a Nutanix HC cluster, and there is no CVM running on the CO node (VMs use CVMs
running on the HC nodes to access disks). As a result, licensed cores on the CO node are used only for
the application VMs.
Applications or databases that are licensed on a per CPU core basis require the entire node to be licensed
and that also includes the cores on which the CVM runs. With CO nodes, you get a much higher ROI on



the purchase of your database licenses (such as Oracle and Microsoft SQL Server) since the CVM does
not consume any compute resources.

Minimum Cluster Requirements


Following are the minimum cluster requirements for compute-only nodes.

• The Nutanix cluster must be at least a three-node cluster before you add a compute-only node.
However, Nutanix recommends that the cluster has four nodes before you add a compute-only node.
• The ratio of compute-only to hyperconverged nodes in a cluster must not exceed the following:
1 compute-only : 2 hyperconverged

Note: A combination of AHV compute-only (CO) and AHV storage-only (SO) nodes in a cluster is not supported.

• All the hyperconverged nodes in the cluster must be all-flash nodes.


• The number of vCPUs assigned to CVMs on the hyperconverged nodes must be greater than or equal
to the total number of available cores on all the compute-only nodes in the cluster. The CVM requires
a minimum of 12 vCPUs. For more information about how Foundation allocates memory and vCPUs to
your platform model, see CVM vCPU and vRAM Allocation in the Field Installation Guide.
• The total amount of NIC bandwidth allocated to all the hyperconverged nodes must be twice the amount
of the total NIC bandwidth allocated to all the compute-only nodes in the cluster.
Nutanix recommends you use dual 25 GbE on CO nodes and quad 25 GbE on an HC node serving
storage to a CO node.
• The AHV version of the compute-only node must be the same as the other nodes in the cluster.
When you are adding a CO node to the cluster, AOS checks if the AHV version of the node matches
with the AHV version of the existing nodes in the cluster. If there is a mismatch, the add node operation
fails.
For general requirements about adding a node to a Nutanix cluster, see Expanding a Cluster.

Restrictions
Nutanix does not support the following features or tasks on a CO node in this release:
1. Host boot disk replacement
2. Network segmentation
3. Virtual Switch configuration: Use bridge configurations instead.

Supported AOS Versions


Nutanix supports compute-only nodes on AOS releases 5.11 or later.

Supported Hardware Platforms


Compute-only nodes are supported on the following hardware platforms.

• All the NX series hardware


• Dell XC Core
• Cisco UCS
• HPE DX



Networking Configuration
To perform network tasks on a compute-only node such as creating or modifying bridges or uplink
bonds or uplink load balancing, you must use the manage_ovs commands and add the --host flag to the
manage_ovs commands as shown in the following example:

Note: If a cluster contains AHV storage-only nodes and its compute-only nodes are ESXi or Hyper-V
nodes, deployment of the default virtual switch vs0 fails. In such cases, the Prism Element, Prism Central, and CLI
workflows for virtual switch management are unavailable to manage the bridges and bonds. Use the
manage_ovs command options to manage the bridges and bonds.

nutanix@cvm$ manage_ovs --host IP_address_of_co_node --bridge_name bridge_name create_single_bridge
Replace IP_address_of_co_node with the IP address of the CO node and bridge_name with the name of
the bridge you want to create.

Note: Run the manage_ovs commands for a CO node from any CVM running on a hyperconverged node.

Perform the networking tasks for each CO node in the cluster individually.
For more information about networking configuration of the AHV hosts, see Host Network Management in
the AHV Administration Guide.
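As a hedged example, you can view the current uplink configuration of a CO node from a CVM on a hyperconverged node before making changes (IP_address_of_co_node is a placeholder):
nutanix@cvm$ manage_ovs --host IP_address_of_co_node show_uplinks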

Adding a Compute-Only Node to an AHV Cluster

About this task


Perform the following procedure to add a Compute-Only (CO) node to an AHV cluster.

Before you begin


Ensure that the following prerequisites are met before you add a compute-only node to an AHV cluster:

• Observe the requirements and restrictions listed in Compute-Only Node Configuration (AHV Only).
• Log on to CVM using SSH, and disable all the virtual switches including the default virtual switch (vs0),
using the following command:
nutanix@cvm:~$ acli net.disable_virtual_switch virtual_switch=<virtual-switch-name>
In the above command, replace <virtual-switch-name> with the actual virtual switch name in your
network.

Procedure

To add a CO node to an AHV cluster, perform the following steps:

1. Log on to the Prism Element web console.



2. Do one of the following:

» Click the gear icon in the main menu and select Expand Cluster on the Settings page.
» Go to the hardware dashboard (see Hardware Dashboard) and click Expand Cluster.
The system displays the Expand Cluster window:

Figure 5: Expand Cluster - Operation Selection

3. Select Expand Cluster to expand the cluster with the CO node.

Note: To expand a cluster with a CO node, do not select Prepare Now and Expand Later. That option
only prepares the nodes so that the cluster can be expanded later, and node preparation is not
supported for CO nodes.
The system displays the following error in the Configure Host tab if you proceed with the
Prepare Now and Expand Later option:

Figure 6: Expand Cluster - Node Preparation not allowed for CO Node



4. In the Select Host tab, scroll down and, under Manual Host Discovery, click Discover Hosts
Manually.

Figure 7: Discover Hosts Manually

5. Click Add Host.

Figure 8: Add Host

6. Under Host or CVM IP, type the IP address of the AHV host and click Save.

Note: The CO node does not have a Controller VM and you must therefore provide the IP address of
the AHV host.

7. Click Discover and Add Hosts.


Prism Element discovers the CO node and the CO node appears in the list of nodes in the Select
Host tab.



8. Select the CO node to view the node details, and click Next.

Figure 9: CO Node - Details

The system displays the Choose Node Type tab.

9. Click Next in the Choose Node Type tab.


The system prompts you to skip host networking.



10. Click Skip Host Networking.

Figure 10: Skip Host Networking

The system prompts you to run checks and expand the cluster with the selected CO node.



11. Click either of the following options in the Configure Host tab:

• Run Checks - Runs only the pre-checks required for cluster expansion. After all pre-checks pass,
you can click Expand Cluster to add the CO node to the cluster.
• Expand Cluster - Runs the pre-checks required for cluster expansion and then performs the expand
cluster operation in a single workflow.

Figure 11: Expand Cluster - Configure Host Tab

The add-node process begins, and Prism Element performs a set of checks before the node is added
to the cluster. Once all checks are completed and the node is added successfully, the system displays
the following result:

Figure 12: Expand Cluster - Successful

Note:



• You can check the progress of the operation in the Tasks menu of the Prism Element web
console. The operation takes approximately five to seven minutes to complete.
• If you have not disabled the virtual switch as specified in Prerequisites, the system
displays the following error for cluster expansion:

Figure 13: Expand Cluster - Failed




12. Check the Hardware Diagram view to verify that the CO node is added to the cluster.
You can identify a node as a CO node if the Prism Element web console does not display an IP
address for the CVM.

Figure 14: Hardware - Host View

Important: Virtual switch configuration is not supported when there are CO nodes in the cluster. If you
attempt to reconfigure the virtual switch for the cluster by using the following command, the system
displays an error message:
nutanix@cvm:~$ acli net.migrate_br_to_virtual_switch br0 vs_name=vs0

Virtual switch configuration is not supported when there are Compute-Only nodes in the cluster.



HOST NETWORK MANAGEMENT
Network management in an AHV cluster consists of the following tasks:

• Configuring Layer 2 switching through virtual switches and Open vSwitch bridges. When configuring
a virtual switch, you configure bridges, bonds, and VLANs.
• Optionally changing the IP address, netmask, and default gateway that were specified for the hosts
during the imaging process.

Virtual Networks (Layer 2)


Each VM network interface is bound to a virtual network. Each virtual network is bound to a single VLAN;
trunking VLANs to a virtual network is not supported. Networks are designated by the L2 type (vlan) and
the VLAN number.
By default, each virtual network maps to virtual switch br0. However, you can change this setting to map
a virtual network to a custom virtual switch. The user is responsible for ensuring that the specified virtual
switch exists on all hosts, and that the physical switch ports for the virtual switch uplinks are properly
configured to receive VLAN-tagged traffic.
A VM NIC must be associated with a virtual network. You can change the virtual network of a vNIC without
deleting and recreating the vNIC.
You can configure VM NICs in trunk mode to support applications that use trunk mode. For information
about configuring virtual NICs in trunk mode, see Configuring a Virtual NIC to Operate in Access or Trunk
Mode on page 123.
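For example, a VLAN-backed virtual network can be created from the aCLI; this is a minimal, hedged sketch in which the network name vlan.100 and the VLAN ID 100 are placeholder values:
nutanix@cvm$ acli net.create vlan.100 vlan=100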

Managed Networks (Layer 3)


A virtual network can have an IPv4 configuration, but it is not required. A virtual network with an IPv4
configuration is a managed network; one without an IPv4 configuration is an unmanaged network. A VLAN
can have at most one managed network defined. If a virtual network is managed, every NIC is assigned an
IPv4 address at creation time.
A managed network can optionally have one or more non-overlapping DHCP pools. Each pool must be
entirely contained within the network's managed subnet.
If the managed network has a DHCP pool, the NIC automatically gets assigned an IPv4 address from one
of the pools at creation time, provided at least one address is available. Addresses in the DHCP pool are
not reserved. That is, you can manually specify an address belonging to the pool when creating a virtual
adapter. If the network has no DHCP pool, you must specify the IPv4 address manually.
All DHCP traffic on the network is rerouted to an internal DHCP server, which allocates IPv4 addresses.
DHCP traffic on the virtual network (that is, between the guest VMs and the Controller VM) does not reach
the physical network, and vice versa.
A network must be configured as managed or unmanaged when it is created. It is not possible to convert
one to the other.
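As a hedged illustration, the following aCLI commands create a managed network and add a DHCP pool to it; the network name, VLAN ID, gateway/prefix, and pool boundaries are placeholder values:
nutanix@cvm$ acli net.create vlan.200 vlan=200 ip_config=10.10.20.1/24
nutanix@cvm$ acli net.add_dhcp_pool vlan.200 start=10.10.20.50 end=10.10.20.100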



Figure 15: AHV Networking Architecture

Prerequisites for Configuring Networking


Change the configuration from the factory default to the recommended configuration. See AHV Networking
Recommendations on page 44.

AHV Networking Recommendations


Nutanix recommends that you perform the following OVS configuration tasks from the Controller VM, as
described in this documentation:

• Viewing the network configuration


• Configuring uplink bonds with desired interfaces using the Virtual Switch (VS) configurations.
• Assigning the Controller VM to a VLAN
For performing other network configuration tasks such as adding an interface to a bridge and configuring
LACP for the interfaces in a bond, follow the procedures described in the AHV Networking best practices
documentation.
Nutanix recommends that you configure the network as follows:

Table 4: Recommended Network Configuration

Network Component Best Practice

Virtual Switch Do not modify the OpenFlow tables of any bridges configured in any VS
configurations in the AHV hosts.
Do not delete or rename OVS bridge br0.
Do not modify the native Linux bridge virbr0.



Network Component Best Practice

Switch Hops Nutanix nodes send storage replication traffic to each other in a distributed
fashion over the top-of-rack network. One Nutanix node can, therefore,
send replication traffic to any other Nutanix node in the cluster. The
network should provide low and predictable latency for this traffic. Ensure
that there are no more than three switches between any two Nutanix nodes
in the same cluster.

Switch Fabric A switch fabric is a single leaf-spine topology or all switches connected to
the same switch aggregation layer. The Nutanix VLAN shares a common
broadcast domain within the fabric. Connect all Nutanix nodes that form
a cluster to the same switch fabric. Do not stretch a single Nutanix cluster
across multiple, disconnected switch fabrics.
Every Nutanix node in a cluster should therefore be in the same L2
broadcast domain and share the same IP subnet.

WAN Links A WAN (wide area network) or metro link connects different physical sites
over a distance. As an extension of the switch fabric requirement, do not
place Nutanix nodes in the same cluster if they are separated by a WAN.

VLANs Add the Controller VM and the AHV host to the same VLAN. Place all
CVMs and AHV hosts in a cluster in the same VLAN. By default the CVM
and AHV host are untagged, shown as VLAN 0, which effectively places
them on the native VLAN configured on the upstream physical switch.

Note: Do not add any other device (including guest VMs) to the VLAN to
which the CVM and hypervisor host are assigned. Isolate guest VMs on
one or more separate VLANs.

Nutanix recommends configuring the CVM and hypervisor host VLAN as
the native, or untagged, VLAN on the connected switch ports. This native
VLAN configuration allows for easy node addition and cluster expansion.
By default, new Nutanix nodes send and receive untagged traffic. If you
use a tagged VLAN for the CVM and hypervisor hosts instead, you must
configure that VLAN while provisioning the new node, before adding that
node to the Nutanix cluster.
Use tagged VLANs for all guest VM traffic and add the required guest
VM VLANs to all connected switch ports for hosts in the Nutanix cluster.
Limit guest VLANs for guest VM traffic to the smallest number of physical
switches and switch ports possible to reduce broadcast network traffic
load. If a VLAN is no longer needed, remove it.

Default VS bonded port (br0-up) Aggregate the fastest links of the same speed on the physical host to a VS
bond on the default vs0 and provision VLAN trunking for these interfaces
on the physical switch.
By default, interfaces in the bond in the virtual switch operate in the
recommended active-backup mode.

Note: The mixing of bond modes across AHV hosts in the same cluster
is not recommended and not supported.



Network Component Best Practice

1 GbE and 10 GbE interfaces (physical host) Ensure you do not form a NIC combination or mix NICs in any of the
following ways in the same bond:

• NIC models from different vendors.


• NICs operating at different speeds.
• NICs with different drivers. To verify the NIC driver in use, run the
following command on the AHV host for each NIC:
root@ahv# ethtool -i <nic-name>
In the above command, replace <nic-name> with the NIC name
available at your site.
If 10 GbE or faster uplinks are available, Nutanix recommends that you use
them instead of 1 GbE uplinks.
Recommendations for 1 GbE uplinks are as follows:

• If you plan to use 1 GbE uplinks, do not include them in the same bond
as the 10 GbE interfaces.
• If you choose to configure only 1 GbE uplinks, when migration of
memory-intensive VMs becomes necessary, power off and power on
in a new host instead of using live migration. In this context, memory-
intensive VMs are VMs whose memory changes at a rate that exceeds
the bandwidth offered by the 1 GbE uplinks.
Nutanix recommends the manual procedure for memory-intensive
VMs because live migration, which you initiate either manually or by
placing the host in maintenance mode, might appear prolonged or
unresponsive and might eventually fail.
Use the aCLI on any CVM in the cluster to start the VMs on another
AHV host:
nutanix@cvm$ acli vm.on vm_list host=host
Replace vm_list with a comma-delimited list of VM names and replace
host with the IP address or UUID of the target host.

• If you must use only 1GbE uplinks, add them into a bond to increase
bandwidth and use the balance-TCP (LACP) or balance-SLB bond
mode.

IPMI port on the hypervisor host Do not use VLAN trunking on switch ports that connect to the IPMI
interface. Configure the switch ports as access ports for management
simplicity.



Network Component Best Practice

Upstream physical switch Nutanix does not recommend the use of Fabric Extenders (FEX)
or similar technologies for production use cases. While initial, low-
load implementations might run smoothly with such technologies,
poor performance, VM lockups, and other issues might occur as
implementations scale upward (see Knowledge Base article KB1612).
Nutanix recommends the use of 10Gbps, line-rate, non-blocking switches
with larger buffers for production workloads.
Cut-through versus store-and-forward selection depends on network
design. In designs with no oversubscription and no speed mismatches you
can use low-latency cut-through switches. If you have any oversubscription
or any speed mismatch in the network design, then use a switch with larger
buffers. Port-to-port latency should be no higher than 2 microseconds.
Use fast-convergence technologies (such as Cisco PortFast) on switch
ports that are connected to the hypervisor host.

Physical Network Layout Use redundant top-of-rack switches in a traditional leaf-spine architecture.
This simple, flat network design is well suited for a highly distributed,
shared-nothing compute and storage architecture.
Add all the nodes that belong to a given cluster to the same Layer-2
network segment.
Other network layouts are supported as long as all other Nutanix
recommendations are followed.

Jumbo Frames The Nutanix CVM uses the standard Ethernet MTU (maximum
transmission unit) of 1,500 bytes for all the network interfaces by default.
The standard 1,500 byte MTU delivers excellent performance and stability.
Nutanix does not support configuring the MTU on network interfaces of a
CVM to higher values.
You can enable jumbo frames (MTU of 9,000 bytes) on the physical
network interfaces of AHV hosts and guest VMs if the applications on your
guest VMs require them. If you choose to use jumbo frames on hypervisor
hosts, be sure to enable them end to end in the desired network and
consider both the physical and virtual network infrastructure impacted by
the change.
If you try to configure, on the default virtual switch vs0, an MTU value that
does not fall within the range of 1500 ~ 9000 bytes, Prism displays an error
and fails to apply the configuration.

Controller VM Do not remove the Controller VM from either the OVS bridge br0 or the
native Linux bridge virbr0.

Rack Awareness and Block Awareness Block awareness and rack awareness provide smart placement of
Nutanix cluster services, metadata, and VM data to help maintain data
availability, even when you lose an entire block or rack. The same network
requirements for low latency and high throughput between servers in the
same cluster still apply when using block and rack awareness.

Note: Do not use features like block or rack awareness to stretch a
Nutanix cluster between different physical sites.



Network Component Best Practice

Oversubscription Oversubscription occurs when an intermediate network device or link
does not have enough capacity to allow line rate communication between
the systems connected to it. For example, if a 10 Gbps link connects two
switches and four hosts connect to each switch at 10 Gbps, the connecting
link is oversubscribed. Oversubscription is often expressed as a ratio—
in this case 4:1, as the environment could potentially attempt to transmit
40 Gbps between the switches with only 10 Gbps available. Achieving a
ratio of 1:1 is not always feasible. However, you should keep the ratio as
small as possible based on budget and available capacity. If there is any
oversubscription, choose a switch with larger buffers.
In a typical deployment where Nutanix nodes connect to redundant top-of-
rack switches, storage replication traffic between CVMs traverses multiple
devices. To avoid packet loss due to link oversubscription, ensure that the
switch uplinks consist of multiple interfaces operating at a faster speed
than the Nutanix host interfaces. For example, for nodes connected at 10
Gbps, the inter-switch connection should consist of multiple 10 Gbps or 40
Gbps links.

The following diagrams show sample network configurations using Open vSwitch and Virtual Switch.

Figure 16: Virtual Switch



Figure 17: AHV Bridge Chain



Figure 18: Default factory configuration of Open vSwitch in AHV

Figure 19: Open vSwitch Configuration

IP Address Management
IP Address Management (IPAM) is a feature of AHV that allows it to assign IP addresses automatically to
VMs by using DHCP. You can configure each virtual network with a specific IP address subnet, associated
domain settings, and IP address pools available for assignment to VMs.
An AHV network is defined as a managed network or an unmanaged network based on the IPAM setting.
Managed Network
Managed network refers to an AHV network in which IPAM is enabled.



Unmanaged Network
Unmanaged network refers to an AHV network in which IPAM is not enabled or is disabled.
IPAM is enabled, or not, in the Create Network dialog box when you create a virtual network for Guest
VMs. See Configuring a Virtual Network for Guest VM Interfaces topic in the Prism Element Web Console
Guide.

Note: You can enable IPAM only when you are creating a virtual network. You cannot enable or disable
IPAM for an existing virtual network.

IPAM enabled or disabled status has implications. For example, when you want to reconfigure the IP
address of a Prism Central VM, the procedure to do so may involve additional steps for managed networks
(that is, networks with IPAM enabled) where the new IP address belongs to an IP address range different
from the previous IP address range. See Reconfiguring the IP Address and Gateway of Prism Central VMs
in Prism Central Guide.
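To check whether IPAM is configured on an existing network, you can inspect it from the aCLI (a hedged sketch; vlan.200 is a placeholder network name). A managed network shows an IP configuration and any DHCP pools in the output, while an unmanaged network does not:
nutanix@cvm$ acli net.get vlan.200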

Layer 2 Network Management


AHV uses virtual switch (VS) to connect the Controller VM, the hypervisor, and the guest VMs to each
other and to the physical network. Virtual switch is configured by default on each AHV node and the VS
services start automatically when you start a node.
To configure virtual networking in an AHV cluster, you need to be familiar with virtual switch. This
documentation gives you a brief overview of virtual switch and the networking components that you need
to configure to enable the hypervisor, Controller VM, and guest VMs to connect to each other and to the
physical network.

About Virtual Switch


A virtual switch (VS) is used to manage multiple bridges and uplinks.
The VS configuration is designed to provide flexibility in configuring virtual bridge connections. A virtual
switch defines a collection of AHV nodes and the uplink ports on each node. It is an aggregation of
the same OVS bridge on all the compute nodes in a cluster. For example, vs0, the default virtual switch, is
an aggregation of the br0 bridge and br0-up uplinks of all the nodes.
After you configure a VS, you can use the VS as a reference for physical network management instead of
using the bridge names as references.
For overview about Virtual Switch, see Virtual Switch Considerations on page 54.
For information about OVS, see About Open vSwitch on page 58.

Virtual Switch Workflow


A virtual switch (VS) defines a collection of AHV compute nodes and the uplink ports on each node. It is an
aggregation of the same OVS bridge on all the compute nodes in a cluster. For example, vs0, the default
virtual switch, is an aggregation of the br0 bridge of all the nodes.
The system creates the default virtual switch vs0 connecting the default bridge br0 on all the hosts in the
cluster during installation of or upgrade to the compatible versions of AOS and AHV. Default virtual switch
vs0 has the following characteristics:

• The default virtual switch cannot be deleted.


• The default bridges br0 on all the nodes in the cluster map to vs0. Thus, vs0 is not empty; it has at least
one uplink configured.
• The default management connectivity to a node is mapped to default bridge br0 that is mapped to vs0.



• The default parameter values of vs0 - Name, Description, MTU and Bond Type - can be modified
subject to aforesaid characteristics.
• The default virtual switch is configured with the Active-Backup uplink bond type.
For more information about bond types, see the Bond Type table.
The virtual switch aggregates the same bridges on all nodes in the cluster. The bridges (for example,
br1) connect to the physical port such as eth3 (Ethernet port) via the corresponding uplink (for example,
br1-up). The uplink ports of the bridges are connected to the same physical network. For example, the
following illustration shows that vs0 is mapped to the br0 bridge, in turn connected via uplink br0-up to
various (physical) Ethernet ports on different nodes.

Figure 20: Virtual Switch

Uplink configuration uses bonds to improve traffic management. The bond types are defined for the
aggregated OVS bridges. A bond type named No uplink bond provides a no-bonding option. A virtual switch
configured with the No uplink bond type has 0 or 1 uplinks. When you configure a virtual switch with any
other bond type, you must select at least two uplink ports on every node.
If you change the uplink configuration of vs0, AOS applies the updated settings to all the nodes in the
cluster one after the other (the rolling update process). To update the settings in a cluster, AOS performs
the following tasks when configuration method applied is Standard:
1. Puts the node in maintenance mode (migrates VMs out of the node)
2. Applies the updated settings
3. Checks connectivity with the default gateway
4. Exits maintenance mode
5. Proceeds to apply the updated settings to the next node
AOS does not put the nodes in maintenance mode when the Quick configuration method is applied.



Table 5: Bond Types

Active-Backup
Use case: Recommended. Default configuration, which transmits all traffic over a single active adapter.
Maximum VM NIC throughput: 10 Gb. Maximum host throughput: 10 Gb.

Active-Active with MAC pinning (also known as balance-slb)
Use case: Works with caveats for multicast traffic. Increases host bandwidth utilization beyond a single
10 Gb adapter. Places each VM NIC on a single adapter at a time. Do not use this bond type with link
aggregation protocols such as LACP.
Maximum VM NIC throughput: 10 Gb. Maximum host throughput: 20 Gb.

Active-Active (also known as LACP with balance-tcp)
Use case: LACP and link aggregation required. Increases host and VM bandwidth utilization beyond a
single 10 Gb adapter by balancing VM NIC TCP and UDP sessions among adapters. Also used when
network switches require LACP negotiation.
The default LACP settings are:
• Speed—Fast (1s)
• Mode—Active fallback-active-backup
• Priority—Default. This is not configurable.
Maximum VM NIC throughput: 20 Gb. Maximum host throughput: 20 Gb.

No Uplink Bond
Use case: No uplink or a single uplink on each host. A virtual switch configured with the No uplink bond
type has 0 or 1 uplinks. When you configure a virtual switch with any other bond type, you must select
at least two uplink ports on every node.
Maximum VM NIC throughput: Not applicable. Maximum host throughput: Not applicable.

Note the following points about the uplink configuration.

• Virtual switches are not enabled in a cluster that has one or more compute-only nodes. See Virtual
Switch Limitations on page 60 and Virtual Switch Requirements on page 60.
• If you select the Active-Active policy, you must manually enable LAG and LACP on the corresponding
ToR switch for each node in the cluster.
• If you reimage a cluster with the Active-Active policy enabled, the default virtual switch (vs0) on the
reimaged cluster reverts to the Active-Backup policy. The other virtual switches are removed
during the reimage.



• Nutanix recommends configuring LACP with fallback to active-backup or individual mode on the ToR
switches. The configuration and behavior varies based on the switch vendor. Use a switch configuration
that allows both switch interfaces to pass traffic after LACP negotiation fails.

Virtual Switch Considerations

Virtual Switch Deployment


A VS configuration is deployed using a rolling update of the cluster. After the VS configuration (creation
or update) is received and execution starts, every node is first put into maintenance mode before the VS
configuration is made or modified on the node. This is the Standard configuration method, which is the
recommended default.
You can also select the Quick configuration method, in which the rolling update does not put the nodes
in maintenance mode. The VS configuration task is marked as successful when the configuration is
successful on the first node. Any configuration failure on successive nodes triggers corresponding NCC
alerts. There is no change to the task status.

Note:
If you are modifying an existing bond, AHV removes the bond and then re-creates the bond with
the specified interfaces.
Ensure that the interfaces you want to include in the bond are physically connected to the Nutanix
appliance before you run the command described in this topic. If the interfaces are not physically
connected to the Nutanix appliance, the interfaces are not added to the bond.

Ensure that the pre-checks listed in the LCM Prechecks section of the Life Cycle Manager Guide and
the Always and Host Disruptive Upgrades types of pre-checks listed in KB-4584 pass for virtual
switch deployments.

The VS configuration is stored and re-applied at system reboot.


The VM NIC configuration also displays the VS details. When you Update VM configuration or Create NIC
for a VM, the NIC details show the virtual switches that can be associated. This view allows you to change
a virtual network and the associated virtual switch.
To change the virtual network, select the virtual network in the Subnet Name dropdown list in the Create
NIC or Update NIC dialog box.



Figure 21: Create VM - VS Details



Figure 22: VM NIC - VS Details

Impact of Installation of or Upgrade to Compatible AOS and AHV Versions


See Virtual Switch Requirements on page 60 for information about minimum and compatible AOS and
AHV versions.
When you upgrade the AOS to a compatible version from an older version, the upgrade process:



• Triggers the creation of the default virtual switch vs0, which is mapped to bridge br0 on all the nodes.
• Validates bridge br0 and its uplinks for consistency in terms of MTU and bond-type on every node.
If valid, it adds the bridge br0 of each node to the virtual switch vs0.
If the br0 configuration is not consistent, the system generates an NCC alert that provides the failure
reason and the necessary details.
The system migrates only the bridge br0 on each node to the default virtual switch vs0 because the
connectivity of bridge br0 is guaranteed.
• Does not migrate any other bridges to any other virtual switches during upgrade. You need to manually
migrate the other bridges after install or upgrade is complete.

Bridge Migration
After upgrading to a compatible version of AOS, you can migrate bridges other than br0 that existed on the
nodes. When you migrate the bridges, the system converts the bridges to virtual switches.
See Virtual Switch Migration Requirements in Virtual Switch Requirements on page 60.

Note: You can migrate only those bridges that are present on every compute node in the cluster. See
Migrating Bridges after Upgrade topic in Prism Element Web Console Guide.

Cluster Scaling Impact


VS management for cluster scaling (addition or removal of nodes) is seamless.
Node Removal
When you remove a node, the system detects the removal and automatically removes the node
from all the VS configurations that include the node and generates an internal system update.
For example, a node has two virtual switches, vs1 and vs2, configured apart from the default vs0.
When you remove the node from the cluster, the system removes the node for the vs1 and vs2
configurations automatically with internal system update.
Node Addition
When you add a new node or host to a cluster, the bridges or virtual switches on the new node are
treated in the following manner:

Note: If a host already included in a cluster is removed and then added back, it is treated as a new
host.

• The system validates the default bridge br0 and uplink bond br0-up to check if it conforms to the
default virtual switch vs0 already present on the cluster.
If br0 and br0-up conform, the system includes the new host and its uplinks in vs0.
If br0 and br0-up do not conform, the system generates an NCC alert.
• The system does not automatically add any other bridge configured on the new host to any other
virtual switch in the cluster.
It generates NCC alerts for all the other non-default virtual switches.



• You can manually include the host in the required non-default virtual switches. Update a non-
default virtual switch to include the host.
For information about updating a virtual switch in Prism Element Web Console, see the
Configuring a Virtual Network for Guest VM Interfaces section in Prism Element Web Console
Guide.
For information about updating a virtual switch in Prism Central, see the Network Connections
section in the Prism Central Guide.

VS Management
You can manage virtual switches from Prism Central or Prism Web Console. You can also use aCLI or
REST APIs to manage them. See the Acropolis API Reference and Command Reference guides for more
information.
You can also use the appropriate aCLI commands for virtual switches from the following list:

• net.create_virtual_switch
• net.list_virtual_switch
• net.get_virtual_switch
• net.update_virtual_switch
• net.delete_virtual_switch
• net.migrate_br_to_virtual_switch
• net.disable_virtual_switch
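For example, you can list the virtual switches and view the configuration of the default virtual switch with the following commands (a minimal sketch; vs0 is the default virtual switch name used throughout this guide):
nutanix@cvm$ acli net.list_virtual_switch
nutanix@cvm$ acli net.get_virtual_switch vs0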

About Open vSwitch


Open vSwitch (OVS) is an open-source software switch implemented in the Linux kernel and designed to
work in a multiserver virtualization environment. By default, OVS behaves like a Layer 2 learning switch
that maintains a MAC address learning table. The hypervisor host and VMs connect to virtual ports on the
switch.
Each hypervisor hosts an OVS instance, and all OVS instances combine to form a single switch. As an
example, the following diagram shows OVS instances running on two hypervisor hosts.

Figure 23: Open vSwitch



Default Factory Configuration
The factory configuration of an AHV host includes a default OVS bridge named br0 (configured with the
default virtual switch vs0) and a native linux bridge called virbr0.
Bridge br0 includes the following ports by default:

• An internal port with the same name as the default bridge; that is, an internal port named br0. This is the
access port for the hypervisor host.
• A bonded port named br0-up. The bonded port aggregates all the physical interfaces available on the
node. For example, if the node has two 10 GbE interfaces and two 1 GbE interfaces, all four interfaces
are aggregated on br0-up. This configuration is necessary for Foundation to successfully image the
node regardless of which interfaces are connected to the network.

Note:
Before you begin configuring a virtual network on a node, you must disassociate the 1 GbE
interfaces from the br0-up port. This disassociation occurs when you modify the default virtual
switch (vs0) and create new virtual switches. Nutanix recommends that you aggregate only the
10 GbE or faster interfaces on br0-up and use the 1 GbE interfaces on a separate OVS bridge
deployed in a separate virtual switch.
See Virtual Switch Management on page 61 for information about virtual switch
management.

The following diagram illustrates the default factory configuration of OVS on an AHV node:

Figure 24: Default factory configuration of Open vSwitch in AHV



The Controller VM has two network interfaces by default. As shown in the diagram, one network interface
connects to bridge br0. The other network interface connects to a port on virbr0. The Controller VM uses
this bridge to communicate with the hypervisor host.
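To inspect this default factory configuration on a node, you can view the physical interfaces and uplink bond from the CVM (a hedged sketch; the output format varies by AOS version):
nutanix@cvm$ manage_ovs show_interfaces
nutanix@cvm$ manage_ovs show_uplinks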

Virtual Switch Requirements


The requirements to deploy virtual switches are as follows:
1. Virtual switches are supported on AOS 5.19 or later with AHV 20201105.12 or later. Therefore you must
install or upgrade to AOS 5.19 or later, with AHV 20201105.12 or later, to use virtual switches in your
deployments.
2. Virtual bridges used for a VS on all the nodes must have the same specification such as name, MTU
and uplink bond type. For example, if vs1 is mapped to br1 (virtual or OVS bridge 1) on a node, it must
be mapped to br1 on all the other nodes of the same cluster.

Virtual Switch Migration Requirements


The AOS upgrade process initiates the virtual switch migration. The virtual switch migration is successful
only when the following requirements are fulfilled:

• Before migrating to Virtual Switch, all bridge br0 bond interfaces should have the same bond type on all
hosts in the cluster. For example, all hosts should use the Active-backup bond type or balance-tcp. If
some hosts use Active-backup and other hosts use balance-tcp, virtual switch migration fails.
• Before migrating to Virtual Switch, if using LACP:

• Confirm that all bridge br0 lacp-fallback parameters on all hosts are set to the case-sensitive value
True by running manage_ovs show_uplinks | grep lacp-fallback:. Any host with the lowercase value true
causes virtual switch migration failure.
• Confirm that the LACP speed on the physical switch is set to fast or 1 second. Also ensure that the
switch ports are ready to fallback to individual mode if LACP negotiation fails due to a configuration
such as no lacp suspend-individual.
• Before migrating to the Virtual Switch, confirm that the upstream physical switch is set to spanning-
tree portfast or spanning-tree port type edge trunk. Failure to do so may lead to a 30-second
network timeout, and the virtual switch migration may fail because it uses a non-modifiable 20-second
timer.
• Ensure that the pre-checks listed in the LCM Prechecks section of the Life Cycle Manager Guide and
the Always and Host Disruptive Upgrades types of pre-checks listed in KB-4584 pass for virtual switch
deployments.
• For the default virtual switch vs0,

• All configured uplink ports must be available for connecting the network. In Active-Backup bond type,
the active port is selected from any configured uplink port that is linked. Therefore, the virtual switch
vs0 can use all the linked ports for communication with other CVMs/hosts.
• All the host IP addresses in the virtual switch vs0 must be resolvable to the configured gateway
using ARP.

Virtual Switch Limitations


MTU Restriction
The Nutanix Controller VM uses the standard Ethernet MTU (maximum transmission unit) of 1,500
bytes for all the network interfaces by default. The standard 1,500-byte MTU delivers excellent
performance and stability. Nutanix does not support configuring higher values of MTU on the
network interfaces of a Controller VM.



You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of AHV,
ESXi, or Hyper-V hosts and guest VMs if the applications on your guest VMs require such higher
MTU values. If you choose to use jumbo frames on the hypervisor hosts, enable the jumbo frames
end to end in the specified network, considering both the physical and virtual network infrastructure
impacted by the change.
Single-node and two-node cluster configuration.
A virtual switch cannot be deployed if your single-node or two-node cluster has any instantiated
user VMs. The virtual switch creation or update process involves a rolling restart, which checks for
maintenance mode and whether the VMs can be migrated. On a single-node or two-node cluster,
instantiated user VMs cannot be migrated, and the virtual switch operation fails.
Therefore, power down all user VMs before performing virtual switch operations in a single-node or
two-node cluster.
Compute-only node is not supported.
Virtual switch is not compatible with Compute-only (CO) nodes. If a CO node is present in the
cluster, then the virtual switches are not deployed (including the default virtual switch). You need to
use the net.disable_virtual_switch aCLI command to disable the virtual switch workflow if you
want to expand a cluster which has virtual switches and includes a CO node.
The net.disable_virtual_switch aCLI command cleans up all the virtual switch entries from the
IDF. All the bridges mapped to the virtual switch or switches are retained as they are.
See Compute-Only Node Configuration (AHV Only) on page 33.
Including a storage-only node in a VS is not necessary.
Virtual switch is compatible with Storage-only (SO) nodes but you do not need to include an SO
node in any virtual switch, including the default virtual switch.
Mixed-mode Clusters with AHV Storage-only Nodes
Consider that you have deployed a mixed-node cluster where the compute-only nodes are ESXi
or Hyper-V nodes and the storage-only nodes are AHV nodes. In such a case, the default virtual
switch deployment fails.
Without the default VS, the Prism Element, Prism Central and CLI workflows for virtual switch
required to manage the bridges and bonds are not available. You need to use the manage_ovs
command options to update the bridge and bond configurations on the AHV hosts.

Virtual Switch Management


A virtual switch can be viewed, created, updated, or deleted from both the Prism web console and Prism Central.

Virtual Switch Views and Visualization


For information on the virtual switch network visualization in Prism Element Web Console, see the Network
Visualization topic in the Prism Web Console Guide.

Virtual Switch Create, Update and Delete Operations


For information about the procedures to create, update and delete a virtual switch in Prism Element Web
Console, see the Configuring a Virtual Network for Guest VM Interfaces information in the Prism Element
Web Console Guide.
For information about the procedures to create, update and delete a virtual switch in Prism Central, see the
Network Connections section in the Prism Central Guide.

Re-Configuring Bonds Across Hosts Manually
If you are upgrading AOS to 5.20, 6.0, or later, you need to migrate the existing bridges to virtual switches. If the bond configurations are inconsistent across hosts before the bridges are migrated, the virtual switches might not be deployed properly after the migration. To resolve such issues, you must manually configure the bonds to make them consistent.

About this task

Important: Use this procedure only when you need to modify inconsistent bonds in a migrated bridge across hosts in a cluster, when the inconsistency is preventing Acropolis (AOS) from deploying the virtual switch for the migrated bridge.

Do not use ovs-vsctl commands to make the bridge level changes. Use the manage_ovs commands,
instead.
The manage_ovs command allows you to update the cluster configuration. The changes are applied
and retained across host restarts. The ovs-vsctl command allows you to update the live running host
configuration but does not update the AOS cluster configuration and the changes are lost at host restart.
This behavior of ovs-vsctl introduces connectivity issues during maintenance, such as upgrades or
hardware replacements.
ovs-vsctl is usually used during a break/fix situation where a host may be isolated on the network and
requires a workaround to gain connectivity before the cluster configuration can actually be updated using
manage_ovs.

Note: Disable the virtual switch before you attempt to change the bonds or bridge.
If you hit an issue where the virtual switch is automatically re-created after it is disabled (with AOS
versions 5.20.0 or 5.20.1), follow steps 1 and 2 below to disable such an automatically re-created
virtual switch again before migrating the bridges. For more information, see KB-3263.
Be cautious when using the disable_virtual_switch command because it deletes all the configurations from the IDF, not only for the default virtual switch vs0, but also for any virtual switches that you may have created (such as vs1 or vs2). Therefore, before you use the disable_virtual_switch command, ensure that you check the list of existing virtual switches, which you can get using the acli net.get_virtual_switch command.

Complete this procedure on each host Controller VM that is sharing the bridge that needs to be migrated to
a virtual switch.

Procedure

1. To list the virtual switches, use the following command.


nutanix@cvm$ acli net.list_virtual_switch

2. Disable all the virtual switches.


nutanix@cvm$ acli net.disable_virtual_switch
This disables all the virtual switches.

Note: You can use the nutanix@cvm$ acli net.delete_virtual_switch vs_name command
to delete a specific VS and re-create it with the appropriate bond type.

3. Change the bond type to align with the same bond type on all the hosts for the specified virtual switch.
nutanix@cvm$ manage_ovs --bridge_name bridge-name --bond_name bond-name --bond_mode bond-type update_uplinks
Where:

• bridge-name: Provide the name of the bridge, such as br0 for the virtual switch on which you want
to set the uplink bond mode.
• bond-name: Provide the name of the uplink port such as br0-up for which you want to set the bond
mode.
• bond-type: Provide the bond mode that you require to be used uniformly across the hosts on the
named bridge.
Use the manage_ovs --help command for help on this command.

Note: To disable LACP, change the bond type from LACP Active-Active (balance-tcp) to Active-Backup or Active-Active with MAC pinning (balance-slb) by setting bond_mode in this command to active-backup or balance-slb.

Ensure that you turn off LACP on the connected ToR switch port as well. To avoid blocking of the bond uplinks during the bond type change on the host, ensure that you follow the ToR switch best practices to enable LACP fallback or passive mode.
To enable LACP, configure bond-type as balance-tcp (Active-Active) with the additional options --lacp_mode fast and --lacp_fallback true.
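For example, the commands might take the following form, assuming the bridge br0 and the uplink bond br0-up (substitute the names used in your environment):
To disable LACP:
nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode active-backup update_uplinks
To enable LACP:
nutanix@cvm$ manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode balance-tcp --lacp_mode fast --lacp_fallback true update_uplinks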

4. (If migrating to an AOS version earlier than 5.20.2) Check whether the issue described in the note above occurs and, if it does, disable the virtual switch again.

What to do next
After making the bonds consistent across all the hosts configured in the bridge, migrate the bridge or
enable the virtual switch. For more information, see:

• Configuring a Virtual Network for Guest VM Interfaces in the Prism Element Web Console Guide.
• Network Connections in the Prism Central Guide.
To check whether LACP is enabled or disabled, use the following command.
nutanix@cvm$ manage_ovs show_uplinks

Enabling LACP and LAG (AHV Only)


This section describes the procedure to enable LAG and LACP in AHV nodes and the Top-of-Rack (ToR)
switch or any switch that is directly connected to the Nutanix node.

About this task


If you select the Active-Active bond type, you must enable LACP and LAG on the Top-of-Rack (ToR) switch or any switch that is directly connected to each Nutanix node in the cluster, one node after the other.

Before you begin


Ensure that the bond changes are made on the Nutanix node before you make the changes on the switch that is directly connected to the Nutanix node. This helps reduce the chance of the node being isolated for an extended time. For information about how to make the bond configuration changes, see Reconfiguring Bonds Across Hosts Manually.

Procedure

To enable LACP and LAG, perform the following steps:

1. Log in to Prism Element and navigate to Settings > Network Configuration > Virtual Switch.
You can also log in to Prism Central and navigate to Network & Security > Subnets > Network Configuration > Virtual Switch.
The system displays the Virtual Switch tab.

2. Click the Edit icon for the target virtual switch on which you want to configure LACP and LAG.
The system displays the Edit Virtual Switch window:

Figure 25: Edit Virtual Switch - General Tab

3. In the General tab, choose Standard (Recommended) option in the Select Configuration Method
field, and click Next.

Note: The Standard configuration method puts each node in maintenance mode before applying the
updated settings. After applying the updated settings, the node exits from maintenance mode. For more
information, see Virtual Switch Workflow.

4. In the Uplink Configuration tab, select Active-Active in the Bond Type field, and click Save.

Note: The Active-Active bond type configures all AHV hosts with the fast setting for LACP speed,
causing the AHV host to request LACP control packets at the rate of one per second from the physical
switch. In addition, the Active-Active bond type configuration sets LACP fallback to Active-Backup

on all AHV hosts. You cannot modify these default settings after you have configured them in Prism, even
by using the CLI.

Figure 26: Edit Virtual Switch - Uplink Configuration Tab

This completes the LAG and LACP configuration on the cluster. At this stage, the cluster starts a rolling reboot of all the AHV hosts. Wait for the reboot operation to complete before you put the node and CVM in maintenance mode and change the switch ports.
For more information about how to manually perform the rolling reboot operation for an AHV host, see
Rebooting an AHV Node in a Nutanix Cluster.

Perform the following steps on each node, one at a time:

5. Put the node and the Controller VM into maintenance mode.

Note: Before you put a node in maintenance mode, see Verifying the Cluster Health and carry out the
necessary checks.

Step 6 in the Putting a Node into Maintenance Mode using Web Console section puts the Controller VM in maintenance mode.

6. Change the settings for the interface on the switch that is directly connected to the Nutanix node to
match the LACP and LAG settings made in the Edit Virtual Switch window above.
For more information about how to change the LACP settings of the switch that is directly connected to
the node, refer to the vendor-specific documentation of the deployed switch.
Nutanix recommends that you perform the following configurations for LACP settings on the switch:

• Enable LACP fallback.


• Consider the LACP time options (slow and fast). If the switch has a fast configuration, set the LACP time to fast. This prevents an outage due to a mismatch between the LACP speeds of the cluster and the ToR switch. Note that the Active-Active bond type configuration sets the LACP speed of the cluster to fast.

7. Verify that the LACP negotiation status is Negotiated.
SSH to the CVM as the nutanix user and run the following commands:
nutanix@CVM$ ssh root@[AHV host IP] "ovs-appctl bond/show bond-name"

nutanix@CVM$ ssh root@[AHV host IP] "ovs-appctl lacp/show bond-name"

• Replace the following attributes in the above commands:

• bond-name with the actual name of the uplink port, such as br0-up.

• [AHV host IP] with the actual AHV host IP at your site.

• Search for the string negotiated in the status lines.
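For example, assuming the uplink bond is br0-up and the AHV host IP address is 192.0.2.10 (a placeholder address used only for illustration), the commands would look like the following:
nutanix@CVM$ ssh root@192.0.2.10 "ovs-appctl bond/show br0-up"
nutanix@CVM$ ssh root@192.0.2.10 "ovs-appctl lacp/show br0-up"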

8. Remove the node and Controller VM from maintenance mode. For more information, see Exiting a
Node from the Maintenance Mode using Web Console.
The Controller VM exits maintenance mode during the same process.

What to do next
Do the following after completing the procedure to enable LAG and LACP on all the AHV nodes and the connected ToR switches:

• Verify that the status of all services on all the CVMs is Up. Run the following command and check if the status of the services is displayed as Up in the output:
nutanix@cvm$ cluster status

• Log on to the Prism Element of the node and check that the Data Resiliency Status widget displays OK.

Figure 27: Data Resiliency Status

VLAN Configuration
You can set up a VLAN-based segmented virtual network on an AHV node by assigning the ports on virtual
bridges managed by virtual switches to different VLANs. VLAN port assignments are configured from the
Controller VM that runs on each node.
For best practices associated with VLAN assignments, see AHV Networking Recommendations on
page 44. For information about assigning guest VMs to a virtual switch and VLAN, see Network
Connections in the Prism Central Guide.

Assigning an AHV Host to a VLAN

About this task

Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the cluster.
Once the process begins, hosts and CVMs partially lose network access to each other and VM data or
storage containers become unavailable until the process completes.

To assign an AHV host to a VLAN, do the following on every AHV host in the cluster:

Procedure

1. Log on to the AHV host with SSH.

2. Put the AHV host and the CVM in maintenance mode.


See Putting a Node into Maintenance Mode on page 22 for instructions about how to put a node into
maintenance mode.

3. Assign port br0 (the internal port on the default OVS bridge, br0) to the VLAN that you want the host to be on.
root@ahv# ovs-vsctl set port br0 tag=host_vlan_tag
Replace host_vlan_tag with the VLAN tag for hosts.
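For example, to place the host on VLAN 10 (a hypothetical tag; use the VLAN tag for your environment):
root@ahv# ovs-vsctl set port br0 tag=10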

4. Confirm VLAN tagging on port br0.


root@ahv# ovs-vsctl list port br0

5. Check the value of the tag parameter that is shown.

6. Verify connectivity to the IP address of the AHV host by performing a ping test.

7. Exit the AHV host and the CVM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 24 for more information.

Assigning the Controller VM to a VLAN


By default, the public interface of a Controller VM is assigned to VLAN 0. To assign the Controller VM to
a different VLAN, change the VLAN ID of its public interface. After the change, you can access the public
interface from a device that is on the new VLAN.

About this task

Note: Perform the following procedure during a scheduled downtime. Before you begin, stop the cluster.
Once the process begins, hosts and CVMs partially lose network access to each other and VM data or
storage containers become unavailable until the process completes.

Note: To avoid losing connectivity to the Controller VM, do not change the VLAN ID when you are logged on
to the Controller VM through its public interface. To change the VLAN ID, log on to the internal interface that
has IP address 192.168.5.254.

Perform these steps on every Controller VM in the cluster. To assign the Controller VM to a VLAN, do the
following:

Procedure

1. Log on to the AHV host with SSH.

2. Put the AHV host and the Controller VM in maintenance mode.
See Putting a Node into Maintenance Mode on page 22 for instructions about how to put a node into
maintenance mode.

3. Check the Controller VM status on the host.


root@host# virsh list
An output similar to the following is displayed:
root@host# virsh list
Id Name State
----------------------------------------------------
1 NTNX-CLUSTER_NAME-3-CVM running
3 3197bf4a-5e9c-4d87-915e-59d4aff3096a running
4 c624da77-945e-41fd-a6be-80abf06527b9 running

root@host# logout

4. Log on to the Controller VM.


root@host# ssh nutanix@192.168.5.254
Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

5. Assign the public interface of the Controller VM to a VLAN.


nutanix@cvm$ change_cvm_vlan vlan_id
Replace vlan_id with the ID of the VLAN to which you want to assign the Controller VM.
For example, add the Controller VM to VLAN 201.
nutanix@cvm$ change_cvm_vlan 201

6. Confirm VLAN tagging on the Controller VM.


root@host# virsh dumpxml cvm_name
Replace cvm_name with the CVM name or CVM ID to view the VLAN tagging information.

Note: Refer to step 3 for Controller VM name and Controller VM ID.

An output similar to the following is displayed:


root@host# virsh dumpxml 1 | grep "tag id" -C10 --color
<target dev='vnet2'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
<alias name='net2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='50:6b:8d:b9:0a:18'/>
<source bridge='br0'/>
<vlan>
<tag id='201'/>
</vlan>
<virtualport type='openvswitch'>
<parameters interfaceid='c46374e4-c5b3-4e6b-86c6-bfd6408178b5'/>
</virtualport>
<target dev='vnet0'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
<alias name='net3'/>

<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
root@host#

7. Check the value of the tag parameter that is shown.

8. Restart the network service.


nutanix@cvm$ sudo service network restart

9. Verify connectivity to the Controller VM's external IP address by performing a ping test from the same subnet. For example, perform a ping from another Controller VM or directly from the host itself.

10. Exit the AHV host and the Controller VM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 24 for more information.

Enabling RSS Virtio-Net Multi-Queue by Increasing the Number of VNIC Queues

Multi-Queue in VirtIO-net enables you to improve network performance for network I/O-intensive guest VMs or applications running on AHV hosts.

About this task


You can enable VirtIO-net multi-queue by increasing the number of VNIC queues. If an application uses many distinct streams of traffic, Receive Side Scaling (RSS) can distribute the streams across multiple VNIC DMA rings. This increases the amount of RX buffer space by the number of VNIC queues (N). Also, most guest operating systems pin each ring to a particular vCPU, handling the interrupts and ring-walking on that vCPU, thereby achieving N-way parallelism in RX processing. However, if you increase the number of queues beyond the number of vCPUs, you cannot achieve extra parallelism.
The following workloads benefit the most from VirtIO-net multi-queue:

• VMs where traffic packets are relatively large


• VMs with many concurrent connections
• VMs with network traffic moving:

• Among VMs on the same host


• Among VMs across hosts
• From VMs to the hosts
• From VMs to an external system
• VMs with high VNIC RX packet drop rate if CPU contention is not the cause
You can increase the number of queues of the AHV VM VNIC to allow the guest OS to use multi-queue
VirtIO-net on guest VMs with intensive network I/O. Multi-Queue VirtIO-net scales the network performance
by transferring packets through more than one Tx/Rx queue pair at a time as the number of vCPUs
increases.
Nutanix recommends that you be conservative when increasing the number of queues. Do not set the number of queues larger than the total number of vCPUs assigned to a VM. Packet reordering and TCP retransmissions increase if the number of queues is larger than the number of vCPUs assigned to a VM. For this reason, start by increasing the queue size to 2. The default queue size is 1. After making this change, monitor the guest VM and network performance. Before you increase the queue size further, verify that the vCPU usage has not dramatically or unreasonably increased.

Perform the following steps to make more VNIC queues available to a guest VM. See your guest OS
documentation to verify if you must perform extra steps on the guest OS to apply the additional VNIC
queues.

Note: You must shut down the guest VM to change the number of VNIC queues. Therefore, make this
change during a planned maintenance window. The VNIC status might change from Up->Down->Up or a
restart of the guest OS might be required to finalize the settings depending on the guest OS implementation
requirements.

Before you begin


(Optional) Nutanix recommends that you perform the following checks before you enable RSS Virtio-Net
Multi-Queue by increasing the number of VNIC queues:

• AHV and AOS are running the latest version.


• AHV guest VMs are running the latest version of the Nutanix VirtIO driver package.
For RSS support, ensure you are running Nutanix VirtIO 1.1.6 or later. See Nutanix VirtIO for Windows
on page 95 for more information about Nutanix VirtIO.

Procedure

To set up a multi-queue virtio-net connection, perform the following steps:

1. Determine the exact name of the guest VM for which you want to change the number of VNIC queues
using the following command:
nutanix@cvm$ acli vm.list
An output similar to the following is displayed:
nutanix@cvm$ acli vm.list
VM name VM UUID
ExampleVM1 a91a683a-4440-45d9-8dbe-xxxxxxxxxxxx
ExampleVM2 fda89db5-4695-4055-a3d4-xxxxxxxxxxxx
...

2. Determine the MAC address of the VNIC and confirm the current number of VNIC queues using the
following command:
nutanix@cvm$ acli vm.nic_get VM-name
Replace VM-name with the name of the VM.
An output similar to the following is displayed:
nutanix@cvm$ acli vm.nic_get VM-name
...
mac_addr: "50:6b:8d:2f:zz:zz"
...
(queues: 2) <- If there is no output of 'queues', the setting is default (1
queue).

Note: AOS defines queues as the maximum number of Tx/Rx queue pairs (default is 1).

3. Determine the total count of vCPUs assigned to the VM using the following command:
nutanix@cvm$ acli vm.get VM-name | grep num.*vcpu
Replace VM-name with the name of the VM.
An output similar to the following is displayed:
num_cores_per_vcpu: 4

num_vcpus: 1
The total count of vCPUs assigned to the VM is calculated as - num_vcpus * num_cores_per_vcpu. In
the above output, the total count of vCPUs assigned to the VM is 1 * 4 = 4.

4. Shut down the guest VM using the following command:


nutanix@cvm$ acli vm.shutdown VM-name
Replace VM-name with the name of the VM.

5. Increase the number of VNIC queues.


nutanix@cvm$ acli vm.nic_update VM-name vNIC-MAC-address queues=N
Replace VM-name with the name of the guest VM, vNIC-MAC-address with the MAC address of the
VNIC, and N with the number of queues.

Note: N must be less than or equal to the total count of vCPUs assigned to the guest VM.
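For example, using the hypothetical VM name and the partially redacted MAC address from the sample outputs above, the following command sets two queues:
nutanix@cvm$ acli vm.nic_update ExampleVM1 50:6b:8d:2f:zz:zz queues=2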

6. Start the guest VM using the following command:


nutanix@cvm$ acli vm.on VM-name
Replace VM-name with the name of the VM.

7. Confirm in the guest OS documentation if any additional steps are required to enable multi-queue in
VirtIO-net.

Note: Microsoft Windows has RSS enabled by default.

For example, for RHEL and CentOS VMs, perform the following steps:

a. Log on to the guest VM.


b. Confirm if irqbalance.service is active or not using the following command:
uservm# systemctl status irqbalance.service
An output similar to the following is displayed:
irqbalance.service - irqbalance daemon
Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; vendor
preset: enabled)
Active: active (running) since Tue 2020-04-07 10:28:29 AEST; Ns ago

c. Start irqbalance.service if it is not active using the following command:

Note: It is active by default on CentOS VMs. You might have to start it on RHEL VMs.

uservm# systemctl start irqbalance.service

d. Run the following command:


uservm# ethtool -L ethX combined M
Replace ethX with the name of the network interface in the guest VM and M with the number of VNIC queues.
Note the following caveat from the RHEL 7 Virtualization Tuning and Optimization Guide: 5.4.
NETWORK TUNING TECHNIQUES document:
Currently, setting up a multi-queue virtio-net connection can have a negative effect on the performance
of outgoing traffic. Specifically, this might occur while sending packets under 1,500 bytes over the
Transmission Control Protocol (TCP) stream.
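For example, on a guest where the interface is named eth0 (interface names vary by guest OS) and two VNIC queues were configured, the command would be:
uservm# ethtool -L eth0 combined 2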

8. Monitor the VM performance to make sure that the expected network performance increase is observed and that the guest VM vCPU usage has not increased so dramatically that it impacts the application on the guest VM.
For assistance with the steps described in this document, or if these steps do not resolve your guest VM
network performance issues, contact Nutanix Support.

Changing the IP Address of an AHV Host


Change the IP address, network mask, or gateway of an AHV host.

Before you begin


Perform the following tasks before you change the IP address, network mask, or gateway of an AHV host:

Caution: All Controller VMs and hypervisor hosts must be on the same subnet.

Warning: Ensure that you perform the steps in the exact order as indicated in this document.

1. Verify the cluster health by following the instructions in Verifying the Cluster Health.
Do not proceed if the cluster cannot tolerate failure of at least one node.
2. Put the node into the maintenance mode. This requires putting both the AHV host and the Controller
VM into maintenance mode. See Putting a Node into Maintenance Mode on page 22 for instructions
about how to put a node into maintenance mode.

About this task


Perform the following procedure to change the IP address, netmask, or gateway of an AHV host.

Procedure

1. Edit the settings of port br0, which is the internal port on the default bridge br0.

a. Log on to the host console as root.


You can access the hypervisor host console either through IPMI or by attaching a keyboard and
monitor to the node.
b. Open the network interface configuration file for port br0 in a text editor.
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0

c. Update entries for host IP address, netmask, and gateway.


The block of configuration information that includes these entries is similar to the following:
ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"

• Replace host_ip_addr with the IP address for the hypervisor host.


• Replace subnet_mask with the subnet mask for host_ip_addr.
• Replace gateway_ip_addr with the gateway address for host_ip_addr.
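For example, a filled-in configuration might look like the following (the addresses shown are placeholders; use the values for your environment):
ONBOOT="yes"
NM_CONTROLLED="no"
PERSISTENT_DHCLIENT=1
NETMASK="255.255.255.0"
IPADDR="10.10.10.11"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="10.10.10.1"
BOOTPROTO="none"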

d. Save your changes.
e. Restart network services.

systemctl restart network.service

f. Assign the host to a VLAN. For information about how to add a host to a VLAN, see Assigning an
AHV Host to a VLAN on page 67.
g. Verify network connectivity by pinging the gateway, other CVMs, and AHV hosts.

2. Log on to the Controller VM that is running on the AHV host whose IP address you changed and restart
genesis.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed:
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
See Controller VM Access on page 12 for information about how to log on to a Controller VM.
Genesis takes a few minutes to restart.

3. Verify if the IP address of the hypervisor host has changed. Run the following nCLI command from any
CVM other than the one in the maintenance mode.
nutanix@cvm$ ncli host list
An output similar to the following is displayed:
nutanix@cvm$ ncli host list
Id : aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee::1234
Uuid : ffffffff-gggg-hhhh-iiii-jjjjjjjjjjj
Name : XXXXXXXXXXX-X
IPMI Address : X.X.Z.3
Controller VM Address : X.X.X.1
Hypervisor Address : X.X.Y.4 <- New IP Address
...

4. Stop the Acropolis service on all the CVMs.

a. Stop the Acropolis service on all the CVMs in the cluster.


nutanix@cvm$ allssh genesis stop acropolis

Note: You cannot manage your guest VMs after the Acropolis service is stopped.

b. Verify if the Acropolis service is DOWN on all the CVMs, except the one in the maintenance mode.
nutanix@cvm$ cluster status | grep -v UP
An output similar to the following is displayed:

nutanix@cvm$ cluster status | grep -v UP
2019-09-04 14:43:18 INFO zookeeper_session.py:143 cluster is attempting to connect to Zookeeper
2019-09-04 14:43:18 INFO cluster:2774 Executing action status on SVMs X.X.X.1, X.X.X.2, X.X.X.3
The state of the cluster: start

Lockdown mode: Disabled
CVM: X.X.X.1 Up
Acropolis DOWN []
CVM: X.X.X.2 Up, ZeusLeader
Acropolis DOWN []
CVM: X.X.X.3 Maintenance

5. From any CVM in the cluster, start the Acropolis service.


nutanix@cvm$ cluster start

6. Verify if all processes on all the CVMs, except the one in the maintenance mode, are in the UP state.
nutanix@cvm$ cluster status | grep -v UP

7. Exit the AHV host and the Controller VM from the maintenance mode.
See Exiting a Node from the Maintenance Mode on page 24 for more information.

VIRTUAL MACHINE MANAGEMENT
The following topics describe various aspects of virtual machine management in an AHV cluster.

Supported Guest VM Types for AHV


The compatibility matrix available on the Nutanix Support portal includes the latest supported AHV guest
VM OSes.

AHV Configuration Maximums


The Nutanix configuration maximums available on the Nutanix Support portal include all the latest configuration limits applicable to AHV. Select the appropriate AHV version to view version-specific information.

Creating a VM (AHV)
In AHV clusters, you can create a new virtual machine (VM) through the Prism Element web console.

About this task


When creating a VM, you can configure all of its components, such as number of vCPUs and memory,
but you cannot attach a volume group to the VM. Attaching a volume group is possible only when you are
modifying a VM.
To create a VM, do the following:

Procedure

1. Log in to Prism Element web console.

2. In the VM dashboard, click the Create VM button.

Note: This option does not appear in clusters that do not support this feature.

The Create VM dialog box appears.

Figure 28: Create VM Dialog Box

3. Do the following in the indicated fields:

a. Name: Enter a name for the VM.


b. Description (optional): Enter a description for the VM.
c. Timezone: Select the timezone that you want the VM to use. If you are creating a Linux VM, select
(UTC) UTC.

Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are creating a
Linux VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows VM with
the hardware clock pointing to the desired timezone.

d. Use this VM as an agent VM: Select this option to make this VM an agent VM.
You can use this option for the VMs that must be powered on before the rest of the VMs (for
example, to provide network functions before the rest of the VMs are powered on on the host) and
must be powered off after the rest of the VMs are powered off (for example, during maintenance
mode operations). Agent VMs are never migrated to any other host in the cluster. If an HA event

occurs or the host is put in maintenance mode, agent VMs are powered off and are powered on on
the same host once that host comes back to a normal state.
If an agent VM is powered off, you can manually start that agent VM on another host and the agent
VM now permanently resides on the new host. The agent VM is never migrated back to the original
host. Note that you cannot migrate an agent VM to another host while the agent VM is powered on.
e. vCPU(s): Enter the number of virtual CPUs to allocate to this VM.
f. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
g. Memory: Enter the amount of memory (in GiB) to allocate to this VM.

4. (For GPU-enabled AHV clusters only) To configure GPU access, click Add GPU in the Graphics
section, and then do the following in the Add GPU dialog box:

Figure 29: Add GPU Dialog Box

For more information, see GPU and vGPU Support on page 128.

a. To configure GPU pass-through, in GPU Mode, click Passthrough, select the GPU that you want
to allocate, and then click Add.
If you want to allocate additional GPUs to the VM, repeat the procedure as many times as you
need to. Make sure that all the allocated pass-through GPUs are on the same host. If all specified
GPUs of the type that you want to allocate are in use, you can proceed to allocate the GPU to the

VM, but you cannot power on the VM until a VM that is using the specified GPU type is powered
off.
For more information, see GPU and vGPU Support on page 128.
b. To configure virtual GPU access, in GPU Mode, click virtual GPU, select a GRID license, and
then select a virtual GPU profile from the list.

Note: This option is available only if you have installed the GRID host driver on the GPU hosts in
the cluster.
For more information about the NVIDIA GRID host driver installation instructions, see the
NVIDIA Grid Host Driver for Nutanix AHV Installation Guide.

You can assign multiple virtual GPUs to a VM. A vGPU is assigned to the VM only if a vGPU is available when the VM is starting up.
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support on page 133 and
Restrictions for Multiple vGPU Support on page 133.

Note:
Multiple vGPUs are supported on the same VM only if you select the highest vGPU profile
type.

After you add the first vGPU, to add multiple vGPUs, see Adding Multiple vGPUs to the Same VM
on page 136.

5. To attach a disk to the VM, click the Add New Disk button.
The Add Disks dialog box appears.

Figure 30: Add Disk Dialog Box

Do the following in the indicated fields:

a. Type: Select the type of storage device, DISK or CD-ROM, from the pull-down list.
The following fields and options vary depending on whether you choose DISK or CD-ROM.
b. Operation: Specify the device contents from the pull-down list.

• Select Clone from ADSF file to copy any file from the cluster that can be used as an image
onto the disk.
• Select Empty CD-ROM to create a blank CD-ROM device. (This option appears only when CD-
ROM is selected in the previous field.) A CD-ROM device is needed when you intend to provide
a system image from CD-ROM.
• Select Allocate on Storage Container to allocate space without specifying an image. (This
option appears only when DISK is selected in the previous field.) Selecting this option means

you are allocating space only. You have to provide a system image later from a CD-ROM or
other source.
• Select Clone from Image Service to copy an image that you have imported by using image
service feature onto the disk. For more information about the Image Service feature, see
Configuring Images and Image Management in the Prism Self Service Administration Guide.
c. Bus Type: Select the bus type from the pull-down list.
The options displayed in the Bus Type drop-down list vary based on the storage device Type selected in Step a.

• For device DISK, select from the SCSI, SATA, PCI, or IDE bus types.
• For device CD-ROM, you can select either the IDE or SATA bus type.

Note: SCSI bus is the preferred bus type and it is used in most cases. Ensure you have installed
the VirtIO drivers in the guest OS. For more information about VirtIO drivers, see Nutanix VirtIO for
Windows in AHV Administration Guide.

Caution: Use SATA, PCI, or IDE for compatibility purposes when the guest OS does not have VirtIO drivers to support SCSI devices. This may have performance implications. For more information about VirtIO drivers, see Nutanix VirtIO for Windows in the AHV Administration Guide.

Note: For AHV 5.16 and later, you cannot use an IDE device if Secured Boot is enabled for UEFI
Mode boot configuration.

d. ADSF Path: Enter the path to the desired system image.


This field appears only when Clone from ADSF file is selected. It specifies the image to copy.
Enter the path name as /storage_container_name/iso_name.iso. For example to clone an
image from myos.iso in a storage container named crt1, enter /crt1/myos.iso. When a user
types the storage container name (/storage_container_name/), a list appears of the ISO files in
that storage container (assuming one or more ISO files had previously been copied to that storage
container).
e. Image: Select the image that you have created by using the image service feature.
This field appears only when Clone from Image Service is selected. It specifies the image to
copy.
f. Storage Container: Select the storage container to use from the pull-down list.
This field appears only when Allocate on Storage Container is selected. The list includes all
storage containers created for this cluster.
g. Size: Enter the disk size in GiBs.
h. When all the field entries are correct, click the Add button to attach the disk to the VM and return to
the Create VM dialog box.
i. Repeat this step to attach additional devices to the VM.

6. Select one of the following firmware to boot the VM.

» Legacy BIOS: Select legacy BIOS to boot the VM with legacy BIOS firmware.
» UEFI: Select UEFI to boot the VM with UEFI firmware. UEFI firmware supports larger hard drives and faster boot times, and provides more security features. For more information about UEFI firmware, see UEFI Support for VM on page 116.
» Secure Boot is supported with AOS 5.16. The current support for Secure Boot is limited to the aCLI. For more information about Secure Boot, see Secure Boot Support for VMs on page 122. To enable Secure Boot, do the following:

• Select UEFI.
• Power-off the VM.
• Log on to the aCLI and update the VM to enable Secure Boot. For more information, see Updating
a VM to Enable Secure Boot in the AHV Administration Guide.

7. To create a network interface for the VM, click the Add New NIC button.
The Create NIC dialog box appears.

Figure 31: Create NIC Dialog Box

Do the following in the indicated fields:

a. Subnet Name: Select the target virtual LAN from the drop-down list.
The list includes all defined networks (see Network Configuration For VM Interfaces.).

Note: Selecting IPAM enabled subnet from the drop-down list displays the Private IP Assignment
information that provides information about the number of free IP addresses available in the subnet
and in the IP pool.

b. Network Connection State: Select the state for the network that you want it to operate in after
VM creation. The options are Connected or Disconnected.
c. Private IP Assignment: This is a read-only field and displays the following:

• Network Address/Prefix: The network IP address and prefix.


• Free IPs (Subnet): The number of free IP addresses in the subnet.
• Free IPs (Pool): The number of free IP addresses available in the IP pools for the subnet.
d. Assignment Type: This applies to IPAM-enabled networks. Select Assign with DHCP to assign an IP address automatically to the VM using DHCP. For more information, see IP Address Management on page 50.
e. When all the field entries are correct, click the Add button to create a network interface for the VM
and return to the Create VM dialog box.
f. Repeat this step to create additional network interfaces for the VM.

Note: Nutanix guarantees a unique VM MAC address within a cluster. You can come across scenarios where two VMs in different clusters have the same MAC address.

Note: The Acropolis leader generates the MAC address for the VM on AHV. The first 24 bits of the MAC address are set to 50-6b-8d (0101 0000 0110 1101 1000 1101) and are reserved by Nutanix, the 25th bit is set to 1 (reserved by the Acropolis leader), and bits 26 through 48 are auto-generated random numbers.

8. To configure an affinity policy for this VM, click Set Affinity.

a. Select the host or hosts on which you want to configure affinity for this VM.
b. Click Save.
The selected host or hosts are listed. This configuration is permanent and takes effect once the VM starts; the VM is not moved from this host or hosts even in the case of an HA event.

9. To customize the VM by using Cloud-init (for Linux VMs) or Sysprep (for Windows VMs), select the
Custom Script check box.
Fields required for configuring Cloud-init and Sysprep, such as options for specifying a configuration
script or answer file and text boxes for specifying paths to required files, appear below the check box.

Figure 32: Create VM Dialog Box (custom script fields)

10. To specify a user data file (Linux VMs) or answer file (Windows VMs) for unattended provisioning, do
one of the following:

» If you uploaded the file to a storage container on the cluster, click ADSF path, and then enter the
path to the file.
Enter the ADSF prefix (adsf://) followed by the absolute path to the file. For example, if the user
data is in /home/my_dir/cloud.cfg, enter adsf:///home/my_dir/cloud.cfg. Note the use of
three slashes.
» If the file is available on your local computer, click Upload a file, click Choose File, and then
upload the file.
» If you want to create or paste the contents of the file, click Type or paste script, and then use the
text box that is provided.

11. To copy one or more files to a location on the VM (Linux VMs) or to a location in the ISO file (Windows
VMs) during initialization, do the following:

a. In Source File ADSF Path, enter the absolute path to the file.
b. In Destination Path in VM, enter the absolute path to the target directory and the file name.
For example, if the source file entry is /home/my_dir/myfile.txt, then the entry for the Destination Path in VM should be the target directory followed by the file name, for example, /mnt/myfile.txt.
c. To add another file or directory, click the button beside the destination path field. In the new row
that appears, specify the source and target details.

12. When all the field entries are correct, click the Save button to create the VM and close the Create VM
dialog box.
The new VM appears in the VM table view.

Managing a VM (AHV)
You can use the web console to manage virtual machines (VMs) in Acropolis managed clusters.

About this task


After creating a VM, you can use the web console to start or shut down the VM, launch a console window,
update the VM configuration, take a snapshot, attach a volume group, migrate the VM, clone the VM, or
delete the VM.

Note: Your available options depend on the VM status, type, and permissions. Unavailable options are
grayed out.

To accomplish one or more of these tasks, do the following:

Procedure

1. Log in to Prism Element web console.

2. In the VM dashboard, click the Table view.

3. Select the target VM in the table (top section of screen).
The Summary line (middle of screen) displays the VM name with a set of relevant action links on the
right. You can also right-click on a VM to select a relevant action.
The possible actions are Manage Guest Tools, Launch Console, Power on (or Power off), Take
Snapshot, Migrate, Clone, Update, and Delete.

Note: VM pause and resume feature is not supported on AHV.

The following steps describe how to perform each action.

Figure 33: VM Action Links

4. To manage guest tools, click Manage Guest Tools.
You can also enable NGT applications (self-service restore, Volume Snapshot Service, and application-consistent snapshots) as part of managing guest tools.

a. Select the Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
b. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
Ensure that the VM has at least one empty IDE CD-ROM slot to attach the ISO.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual
machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.
c. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR) check box.
The Self-Service Restore feature is enabled on the VM. The guest VM administrator can restore the
desired file or files from the VM. For more information about self-service restore feature, see Self-
Service Restore in the Data Protection and Recovery with Prism Element guide.

d. After you select the Enable Nutanix Guest Tools check box, the VSS snapshot feature is enabled by default.
After this feature is enabled, Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent
takes snapshots for VMs that support VSS.

Note:
The AHV VM snapshots are not application consistent. The AHV snapshots are taken
from the VM entity menu by selecting a VM and clicking Take Snapshot.
The application consistent snapshots feature is available with Protection Domain based
snapshots and Recovery Points in Prism Central. For more information, see Conditions
for Application-consistent Snapshots in the Data Protection and Recovery with Prism
Element guide.

e. Click Submit.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual
machine. A CD with volume label NUTANIX_TOOLS gets attached to the VM.

Note:

• If you clone a VM, NGT is not enabled on the cloned VM by default. If the cloned VM is powered off, enable NGT from the UI and power on the VM. If the cloned VM is powered on, enable NGT from the UI and restart the Nutanix Guest Agent service.
• If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and
Mounting the NGT Installer Simultaneously on Multiple Cloned VMs.

If you eject the CD, you can mount the CD back again by logging into the Controller VM and
running the following nCLI command.
nutanix@cvm$ ncli ngt mount vm-id=virtual_machine_id
For example, to mount the NGT on the VM with
VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987,
type the following command.
nutanix@cvm$ ncli ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987

5. To launch a console window, click the Launch Console action link.
This opens a Virtual Network Computing (VNC) client and displays the console in a new tab or
window. This option is available only when the VM is powered on. The console window includes four
menu options (top right):

• Clicking the Mount ISO button displays the following window that allows you to mount an ISO
image to the VM. To mount an image, select the desired image and CD-ROM drive from the pull-
down lists and then click the Mount button.

Figure 34: Mount Disk Image Window

Note: For information about how to select CD-ROM as the storage device when you intend to provide a system image from CD-ROM, see Add New Disk in Creating a VM (AHV) on page 75.

• Clicking the C-A-D icon button sends a Ctrl+Alt+Del command to the VM.
• Clicking the camera icon button takes a screenshot of the console window.
• Clicking the power icon button allows you to power on/off the VM. These are the same options that
you can access from the Power On Actions or Power Off Actions action link below the VM table
(see next step).

Figure 35: Virtual Network Computing (VNC) Window

6. To start or shut down the VM, click the Power on (or Power off) action link.
Power on begins immediately. If you want to power off the VMs, you are prompted to select one of the
following options:

• Power Off. Hypervisor performs a hard power off action on the VM.
• Power Cycle. Hypervisor performs a hard restart action on the VM.
• Reset. Hypervisor performs an ACPI reset action through the BIOS on the VM.
• Guest Shutdown. Operating system of the VM performs a graceful shutdown.
• Guest Reboot. Operating system of the VM performs a graceful restart.

Note: If you perform power operations such as Guest Reboot or Guest Shutdown by using the Prism
Element web console or API on Windows VMs, these operations might silently fail without any error
messages if at that time a screen saver is running in the Windows VM. Perform the same power
operations again immediately, so that they succeed.

7. To make a snapshot of the VM, click the Take Snapshot action link.
For more information, see Virtual Machine Snapshots.

8. To migrate the VM to another host, click the Migrate action link.
This displays the Migrate VM dialog box. Select the target host from the pull-down list (or select the
System will automatically select a host option to let the system choose the host) and then click the
Migrate button to start the migration.

Figure 36: Migrate VM Dialog Box

Note: Nutanix recommends live migrating VMs when they are under light load. If they are migrated while heavily utilized, the migration may fail because of limited bandwidth.

9. To clone the VM, click the Clone action link.


This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box, with all fields (except the name and the number of clones) filled in with the current VM settings. Enter a name for the clone and the number of clones required, and then click the Save button to create the clones. You can create a modified clone by changing some of the settings. You can also customize the VM during initialization by providing a custom script and specifying files needed during the customization process.

Figure 37: Clone VM Dialog Box

10. To modify the VM configuration, click the Update action link.
The Update VM dialog box appears, which includes the same fields as the Create VM dialog
box. Modify the configuration as needed, and then save the configuration. In addition to modifying
the configuration, you can attach a volume group to the VM and enable flash mode on the VM.
If you attach a volume group to a VM that is part of a protection domain, the VM is not protected
automatically. Add the VM to the same Consistency Group manually.
(For GPU-enabled AHV clusters only) You can add pass-through GPUs if a VM is already using GPU
pass-through. You can also change the GPU configuration from pass-through to vGPU or vGPU to
pass-through, change the vGPU profile, add more vGPUs, and change the specified vGPU license.
However, you need to power off the VM before you perform these operations.

• Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support and Restrictions for
Multiple vGPU Support in the AHV Administration Guide.
• Multiple vGPUs are supported on the same VM only if you select the highest vGPU profile type.
• For more information on vGPU profile selection, see:

• Virtual GPU Types for Supported GPUs in the NVIDIA Virtual GPU Software User Guide in the
NVIDIA's Virtual GPU Software Documentation webpage, and
• GPU and vGPU Support in the AHV Administration Guide.

• After you add the first vGPU, to add multiple vGPUs, see Adding Multiple vGPUs to the Same VM
in the AHV Administration Guide.
You can add new network adapters or NICs using the Add New NIC option. You can also modify the
network used by an existing NIC. See Limitation for vNIC Hot-Unplugging on page 93 and Creating
a VM (AHV) on page 75 before you modify the NIC network or create a new NIC for a VM.

Figure 38: VM Update Dialog Box

Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space
associated with that vDisk is not reclaimed unless you also delete the VM snapshots.

To increase the memory allocation and the number of vCPUs on your VMs while the VMs are powered
on (hot-pluggable), do the following:

a. In the vCPUs field, you can increase the number of vCPUs on your VMs while the VMs are
powered on.
b. In the Number of Cores Per vCPU field, you can change the number of cores per vCPU only if
the VMs are powered off.

Note: This is not a hot-pluggable feature.

c. In the Memory field, you can increase the memory allocation on your VMs while the VMs are
powered on.
For more information about hot-pluggable vCPUs and memory, see Virtual Machine Memory and CPU
Hot-Plug Configurations in the AHV Administration Guide.
To attach a volume group to the VM, do the following:

a. In the Volume Groups section, click Add volume group, and then do one of the following:

» From the Available Volume Groups list, select the volume group that you want to attach to the
VM.
» Click Create new volume group, and then, in the Create Volume Group dialog box, create a
volume group (see Creating a Volume Group). After you create a volume group, select it from
the Available Volume Groups list.
Repeat these steps until you have added all the volume groups that you want to attach to the VM.
b. Click Add.

a. To enable flash mode on the VM, click the Enable Flash Mode check box.

» After you enable this feature on the VM, the status is updated in the VM table view. To view the status of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in the VM table view.
» You can disable the flash mode feature for individual virtual disks. To update the flash mode for
individual virtual disks, click the update disk icon in the Disks pane and deselect the Enable
Flash Mode check box.

11. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete
the VM.
The deleted VM disappears from the list of VMs in the table.

Limitation for vNIC Hot-Unplugging


If you detach (hot-unplug) a vNIC from a VM with a guest OS installed on it, AOS displays the detach result as successful, but the actual detach success depends on the status of the ACPI mechanism in the guest OS.
The following table describes the vNIC detach observations and the applicable workaround based on the guest OS response to the ACPI request:

Table 6: vNIC Detach - Observations and Workaround

Detach procedure followed (applies to both cases below):

• Using Prism Central: See the Managing a VM (AHV) topic in the Prism Central Guide.
• Using the Prism Element web console: See Managing a VM (AHV) on page 84.
• Using acli: Log on to the CVM with SSH and run one of the following commands:
nutanix@cvm$ acli vm.nic_delete <vm_name> <nic mac address>
or,
nutanix@cvm$ acli vm.nic_update <vm_name> <nic mac address> connected=false
Replace the following attributes in the above commands:

• <vm_name> with the name of the guest VM for which the vNIC is to be detached.
• <nic mac address> with the vNIC MAC address that needs to be detached.

Guest OS responds to ACPI request: Yes
AOS behavior: vNIC detach is successful.
Actual detach result: vNIC detach is successful.
Workaround: No action needed.

Guest OS responds to ACPI request: No
AOS behavior: The following log message is observed: Device detached successfully
Actual detach result: vNIC detach is not successful.
Workaround: Power cycle the VM for a successful vNIC detach.

Note: In most cases, it is observed that the ACPI mechanism failure occurs when no guest OS is installed
on the VM.

Virtual Machine Snapshots


You can generate snapshots of virtual machines (VMs), either manually or automatically. Some of the purposes that VM snapshots serve are as follows:

• Disaster recovery
• Testing - as a safe restoration point in case something went wrong during testing.
• Migrate VMs
• Create multiple instances of a VM.
A snapshot is a point-in-time state of entities such as VMs and volume groups, and is used for restoration and replication of data. You can generate snapshots and store them locally or remotely. Snapshots are a mechanism to capture the delta changes that have occurred over time. Snapshots are primarily used for data protection and disaster recovery. Snapshots are not autonomous like backups, in the sense that they depend on the underlying VM infrastructure and other snapshots to restore the VM. Snapshots consume fewer resources compared to a full autonomous backup. Typically, a VM snapshot captures the following:

• The state including the power state (for example, powered-on, powered-off, suspended) of the VMs.
• The data includes all the files that make up the VM. This data also includes the data from disks,
configurations, and devices, such as virtual network interface cards.
For more information about creating VM snapshots, see Creating a VM Snapshot Manually section in the
Prism Web Console Guide.

VM Snapshots and Snapshots for Disaster Recovery


The VM Dashboard only allows you to generate VM snapshots manually. You cannot select VMs and
schedule snapshots of the VMs using the VM dashboard. The snapshots generated manually have very
limited utility.

Note: These snapshots (stored locally) cannot be replicated to other sites.

You can schedule and generate snapshots as a part of the disaster recovery process using Nutanix DR
solutions. AOS generates snapshots when you protect a VM with a protection domain using the Data
Protection dashboard in Prism Web Console (see the Data Protection and Recovery with Prism Element
guide). Similarly, AOS generates Recovery Points (snapshots are called Recovery Points in Prism Central) when you protect a VM with a protection policy using the Data Protection dashboard in Prism Central (see the Leap Administration Guide).
For example, in the Data Protection dashboard in Prism Web Console, you can create schedules to
generate snapshots using various RPO schemes such as asynchronous replication with frequency
intervals of 60 minutes or more, or NearSync replication with frequency intervals of as less as 20 seconds
up to 15 minutes. These schemes create snapshots in addition to the ones generated by the schedules, for
example, asynchronous replication schedules generate snapshots according to the configured schedule
and, in addition, an extra snapshot every 6 hours. Similarly, NearSync generates snapshots according to
the configured schedule and also generates one extra snapshot every hour.
Similarly, you can use the options in the Data Protection section of Prism Central to generate Recovery
Points using the same RPO schemes.

Windows VM Provisioning
Nutanix VirtIO for Windows
Nutanix VirtIO is a collection of drivers for para-virtual devices that enhance the stability and performance
of virtual machines on AHV.
Nutanix VirtIO is available in two formats:

• VirtIO ISO file - Use it when the VM does not yet have a Windows OS installed. For more information, see
Installing or Upgrading Nutanix VirtIO for Windows on page 97.

• VirtIO MSI installer file - Use it to install or upgrade VirtIO when a Windows OS is installed and running on the VM or the VM already has VirtIO installed.
The VirtIO MSI installer file is also bundled with Nutanix Guest Tools (NGT). Install NGT on a VM to install the Nutanix VirtIO package on the VM. For information about how to install NGT, see NGT Installation in
the Prism Web Console Guide.
The following table describes the NGT bundling behavior based on AOS release:

Table 7: NGT Bundling Behavior - AOS Release

AOS Release NGT Bundling Behavior

Prior to AOS 6.6 NGT includes the VM Mobility package which is a re-packaging of VirtIO. The
repackaging is done with additional changes to enable a built-in driver that
is pre-installed in Windows but not enabled by default. This driver is used to
enable the SCSI controller required by some Windows editions for seamless
mobility between different types of hypervisors.
In this case, the VM Mobility package uses the same version numbering as
VirtIO.

AOS 6.6 and above NGT contains both a VirtIO and a VM Mobility package. The Nutanix VirtIO
package contains all the VirtIO drivers, and the VM Mobility package is no
longer re-packaged with the VirtIO drivers and contains only the change to
enable the SCSI controller.

The NGT release is aligned with the AOS release and the bundled VirtIO package is updated in the next
NGT release. Nutanix does not back-port the NGT releases to the previous AOS releases.

Caution: If you install an older version of NGT, the latest VirtIO version, even if installed, might be replaced
by the older VirtIO version.

The VirtIO release is not aligned with the AOS release. To ensure that you have the latest VirtIO drivers,
either install the latest NGT version or update the drivers using the latest VirtIO package available on the
Nutanix Support portal. For more information, see Installing or Upgrading Nutanix VirtIO for Windows on
page 97.

VirtIO Requirements
Requirements for Nutanix VirtIO for Windows.
VirtIO supports the following operating systems:

• Microsoft Windows server version: Windows 2008 R2 or later


• Microsoft Windows client version: Windows 7 or later

Note: On Windows 7 and Windows Server 2008 R2, install Microsoft KB3033929 or update the operating
system with the latest Windows Update to enable support for SHA2 certificates.

Caution: The VirtIO installation or upgrade may fail if multiple Windows VSS snapshots are present in the
guest VM. The installation or upgrade failure is due to the timeout that occurs during installation of Nutanix
VirtIO SCSI pass-through controller driver.
It is recommended to clean up the VSS snapshots or temporarily disconnect the drive that
contains the snapshots. Ensure that you delete only the snapshots that are no longer needed. For
more information about how to observe the VirtIO installation or upgrade failure that occurs due to
the presence of multiple Windows VSS snapshots, see KB-12374.

Installing or Upgrading Nutanix VirtIO for Windows


Download Nutanix VirtIO and the Nutanix VirtIO Microsoft installer (MSI). The MSI installs and upgrades
the Nutanix VirtIO drivers.

Before you begin


Make sure that your system meets the VirtIO requirements described in VirtIO Requirements on
page 96.

About this task


If you have already installed Nutanix VirtIO, use the following procedure to upgrade VirtIO to the latest
version. If you have not yet installed Nutanix VirtIO, use the following procedure to install it.

Procedure

1. Go to the Nutanix Support portal and select Downloads > AHV and click VirtIO.

2. Select the appropriate VirtIO package.

» If you are creating a new Windows VM, download the ISO file. The installer is available on the ISO if
your VM does not have Internet access.
» If you are updating drivers in a Windows VM, download the MSI installer file.

Figure 39: Search filter and VirtIO options

3. Run the selected package.

» For the ISO: Upload the ISO to the cluster, as described in the Configuring Images topic in Prism
Element Web Console Guide.
» For the MSI: Open the downloaded file to run the MSI.

4. Read and accept the Nutanix VirtIO license agreement. Click Install.

Figure 40: Nutanix VirtIO Windows Setup Wizard

The Nutanix VirtIO setup wizard shows a status bar and completes installation.

Manually Installing or Upgrading Nutanix VirtIO


Manually install or upgrade Nutanix VirtIO.

Before you begin


Make sure that your system meets the VirtIO requirements described in VirtIO Requirements on
page 96.

About this task

Note: To automatically install Nutanix VirtIO, see Installing or Upgrading Nutanix VirtIO for Windows on
page 97.

If you have already installed Nutanix VirtIO, use the following procedure to upgrade VirtIO to the latest
version. If you have not yet installed Nutanix VirtIO, use the following procedure to install it.

Procedure

1. Go to the Nutanix Support portal, select Downloads > AHV, and click VirtIO.

2. Do one of the following:

» Extract the VirtIO ISO into the same VM where you load Nutanix VirtIO, for easier installation.
If you choose this option, proceed directly to step 7.
» Download the VirtIO ISO for Windows to your local machine.
If you choose this option, proceed to step 3.

3. Upload the ISO to the cluster, as described in the Configuring Images topic of Prism Element Web
Console Guide.

4. Locate the VM where you want to install the Nutanix VirtIO ISO and update the VM.

5. Add the Nutanix VirtIO ISO by clicking Add New Disk and complete the indicated fields.

• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: IDE
• IMAGE: Select the Nutanix VirtIO ISO

6. Click Add.

7. Log on to the VM and browse to Control Panel > Device Manager.

8. Note: Select the x86 subdirectory for 32-bit Windows, or the amd64 subdirectory for 64-bit Windows.

Expand the device categories and locate the Nutanix devices. For each device, right-click it, select
Update Driver Software, and browse to the drive containing the VirtIO ISO. Follow the wizard
instructions until you receive installation confirmation.

a. System Devices > Nutanix VirtIO Balloon Drivers


b. Network Adapter > Nutanix VirtIO Ethernet Adapter.
c. Storage Controllers > Nutanix VirtIO SCSI pass-through Controller
The Nutanix VirtIO SCSI pass-through controller prompts you to restart your system. Restart at any
time to install the controller.

Figure 41: List of Nutanix VirtIO downloads

Creating a Windows VM on AHV with Nutanix VirtIO
Create a Windows VM in AHV, or migrate a Windows VM from a non-Nutanix source to AHV, with the
Nutanix VirtIO drivers.

Before you begin

• Upload the Windows installer ISO to your cluster as described in the Configuring Images topic in Web
Console Guide.
• Upload the Nutanix VirtIO ISO to your cluster as described in the Configuring Images topic in Web
Console Guide.

About this task


To install a new or migrated Windows VM with Nutanix VirtIO, complete the following.

Procedure

1. Log on to the Prism web console using your Nutanix credentials.

2. At the top-left corner, click Home > VM.


The VM page appears.

3. Click + Create VM in the corner of the page.
The Create VM dialog box appears.

Figure 42: Create VM dialog box

4. Complete the indicated fields.

a. NAME: Enter a name for the VM.


b. Description (optional): Enter a description for the VM.
c. Timezone: Select the timezone that you want the VM to use. If you are creating a Linux VM, select
(UTC) UTC.

Note:
The RTC of Linux VMs must be in UTC, so select the UTC timezone if you are creating a
Linux VM.
Windows VMs preserve the RTC in the local timezone, so set up the Windows VM with
the hardware clock pointing to the desired timezone.

d. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
e. MEMORY: Enter the amount of memory for the VM (in GiB).

5. If you are creating a Windows VM, add a Windows CD-ROM to the VM.

a. Click the pencil icon next to the CD-ROM that is already present and fill out the indicated fields.

• OPERATION: CLONE FROM IMAGE SERVICE


• BUS TYPE: SATA
• IMAGE: Select the Windows OS install ISO.
b. Click Update.
The current CD-ROM opens in a new window.

6. Add the Nutanix VirtIO ISO.

a. Click Add New Disk and complete the indicated fields.

• TYPE: CD-ROM
• OPERATION: CLONE FROM IMAGE SERVICE
• BUS TYPE: SATA
• IMAGE: Select the Nutanix VirtIO ISO.
b. Click Add.

7. Add a new disk for the hard drive.

a. Click Add New Disk and complete the indicated fields.

• TYPE: DISK
• OPERATION: ALLOCATE ON STORAGE CONTAINER
• BUS TYPE: SATA
• STORAGE CONTAINER: Select the appropriate storage container.
• SIZE: Enter the number for the size of the hard drive (in GiB).
b. Click Add to add the disk.

8. If you are migrating a VM, create a disk from the disk image.

a. Click Add New Disk and complete the indicated fields.

• TYPE: DISK
• OPERATION: CLONE FROM IMAGE
• BUS TYPE: SATA
• CLONE FROM IMAGE SERVICE: Click the drop-down menu and choose the image you
created previously.
b. Click Add to add the disk.

9. Optionally, after you have migrated or created a VM, add a network interface card (NIC).

a. Click Add New NIC.


b. In the VLAN ID field, choose the VLAN ID according to network requirements and enter the IP
address, if necessary.
c. Click Add.

10. Click Save.

What to do next
Install Windows by following Installing Windows on a VM on page 105.

Installing Windows on a VM
Install a Windows virtual machine.

Before you begin


Create a Windows VM.

Procedure

1. Log on to the web console.

2. Click Home > VM to open the VM dashboard.

3. Select the Windows VM.

4. In the center of the VM page, click Power On.

5. Click Launch Console.


The Windows console opens in a new window.

6. Select the desired language, time and currency format, and keyboard information.

7. Click Next > Install Now.


The Windows setup dialog box shows the operating systems to install.

8. Select the Windows OS you want to install.

9. Click Next and accept the license terms.

10. Click Next > Custom: Install Windows only (advanced) > Load Driver > OK > Browse.

11. Choose the Nutanix VirtIO driver.

a. Select the Nutanix VirtIO CD drive.


b. Expand the Windows OS folder and click OK.

Figure 43: Select the Nutanix VirtIO drivers for your OS

The Select the driver to install window appears.

12. Select the VirtIO SCSI driver (vioscsi.inf) and click Next.

Figure 44: Select the Driver for Installing Windows on a VM

The amd64 folder contains drivers for 64-bit operating systems. The x86 folder contains drivers for 32-
bit operating systems.

Note: From Nutanix VirtIO driver version 1.1.5, the driver package contains Windows Hardware Quality
Lab (WHQL) certified driver for Windows.

13. Select the allocated disk space for the VM and click Next.
Windows shows the installation progress, which can take several minutes.

14. Enter your user name and password information and click Finish.
Installation can take several minutes.
Once you complete the logon information, Windows setup completes installation.

15. Follow the instructions in Installing or Upgrading Nutanix VirtIO for Windows on page 97 to install
other drivers which are part of Nutanix VirtIO package.

Windows Defender Credential Guard Support in AHV


AHV enables you to use the Windows Defender Credential Guard security feature on Windows guest VMs.

The Windows Defender Credential Guard feature of Microsoft Windows operating systems allows you to
securely isolate user credentials from the rest of the operating system. This isolation protects guest VMs
from credential theft attacks such as Pass-the-Hash or Pass-the-Ticket.
See the Microsoft documentation for more information about the Windows Defender Credential Guard
security feature.

Windows Defender Credential Guard Architecture in AHV

Figure 45: Architecture

Windows Defender Credential Guard uses Microsoft virtualization-based security (VBS) to isolate user
credentials in a VBS module that runs on AHV. When you enable Windows Defender Credential Guard
on an AHV guest VM, the guest VM runs on top of AHV with both the Windows OS and VBS. Each
Windows guest VM that has Credential Guard enabled has its own VBS instance to securely store
credentials.

Windows Defender Credential Guard Requirements


Ensure the following to enable Windows Defender Credential Guard:
1. AOS, AHV, and Windows Server versions support Windows Defender Credential Guard:

• AOS version must be 5.19 or later


• AHV version must be AHV 20201007.1 or later
• Windows version must be Windows Server 2016 or later, or Windows 10 Enterprise or later
2. UEFI, Secure Boot, and machine type q35 are enabled in the Windows VM from AOS.
The Prism Element workflow to enable Windows Defender Credential Guard includes the workflow to
enable these features.

Limitations

• Windows Defender Credential Guard is not supported on hosts with AMD CPUs.

• If you enable Windows Defender Credential Guard for your AHV guest VMs, the following optional
configurations are not supported:

• Virtual GPU configurations.


• vTPM (Virtual Trusted Platform Modules) to store MS policies.

Note: vTPM is supported with AOS 6.5.1 or later and AHV 20220304.242 or later release versions
only.

• DMA protection (vIOMMU).


• Nutanix Live Migration.
• Cross hypervisor DR of Credential Guard VMs.

Caution: Use of Windows Defender Credential Guard in your AHV clusters impacts VM performance.
If you enable Windows Defender Credential Guard on AHV guest VMs, VM density drops by ~15–20%.
This expected performance impact is due to nested virtualization overhead added as a result of enabling
credential guard.

Enabling Windows Defender Credential Guard Support in AHV Guest VMs


You can enable Windows Defender Credential Guard when you are either creating a VM or updating a VM.

About this task


Perform the following procedure to enable Windows Defender Credential Guard:

Procedure

1. Enable Windows Defender Credential Guard when you are either creating a VM or updating a VM. Do
one of the following:

» If you are creating a VM, see step 2.


» If you are updating a VM, see step 3.

2. If you are creating a Windows VM, do the following:

a. Log on to the Prism Element web console.


b. In the VM dashboard, click Create VM.
c. Fill in the mandatory fields to configure a VM.
d. Under Boot Configuration, select UEFI, and then select the Secure Boot and Windows
Defender Credential Guard options.

Figure 46: Enable Windows Defender Credential Guard

See UEFI Support for VM on page 116 and Secure Boot Support for VMs on page 122 for more
information about these features.
e. Proceed to configure other attributes for your Windows VM.
See Creating a Windows VM on AHV with Nutanix VirtIO on page 102 for more information.
f. Click Save.
g. Turn on the VM.

3. If you are updating an existing VM, do the following:

a. Log on to the Prism Element web console.


b. In the VM dashboard, click the Table view, select the VM, and click Update.
c. Under Boot Configuration, select UEFI, and then select the Secure Boot and Windows
Defender Credential Guard options.

Note:
If the VM is configured to use BIOS, install the guest OS again.
If the VM is already configured to use UEFI, skip the step to select Secure Boot.

See UEFI Support for VM on page 116 and Secure Boot Support for VMs on page 122 for more
information about these features.
d. Click Save.
e. Turn on the VM.

4. Enable Windows Defender Credential Guard in the Windows VM by using group policy.
See the Enable Windows Defender Credential Guard by using the Group Policy procedure of the
Manage Windows Defender Credential Guard topic in the Microsoft documentation to enable VBS,
Secure Boot, and Windows Defender Credential Guard for the Windows VM.

5. Open command prompt in the Windows VM and apply the Group Policy settings:
> gpupdate /force
If you have not enabled Windows Defender Credential Guard (step 4) and perform this step (step 5), a
warning similar to the following is displayed:
Updating policy...

Computer Policy update has completed successfully.

The following warnings were encountered during computer policy processing:

Windows failed to apply the {F312195E-3D9D-447A-A3F5-08DFFA24735E} settings.


{F312195E-3D9D-447A-A3F5-08DFFA24735E} settings might have its own log file. Please
click on the "More information" link.
User Policy update has completed successfully.

For more detailed information, review the event log or run GPRESULT /H GPReport.html
from the command line to access information about Group Policy results.
Event Viewer displays a warning for the group policy with an error message that indicates Secure Boot
is not enabled on the VM.
To view the warning message in Event Viewer, do the following:

• In the Windows VM, open Event Viewer.


• Go to Windows Logs -> System and click the warning with the Source as GroupPolicy
(Microsoft-Windows-GroupPolicy) and Event ID as 1085.

Figure 47: Warning in Event Viewer

Note: Ensure that you follow the steps in the order that is stated in this document to successfully enable
Windows Defender Credential Guard.

6. Restart the VM.

7. Verify if Windows Defender Credential Guard is enabled in your Windows VM.

a. Start a Windows PowerShell terminal.


b. Run the following command.
PS > Get-CimInstance -ClassName Win32_DeviceGuard -Namespace 'root\Microsoft\Windows\DeviceGuard'
An output similar to the following is displayed.
PS > Get-CimInstance -ClassName Win32_DeviceGuard -Namespace 'root\Microsoft\Windows\DeviceGuard'
AvailableSecurityProperties : {1, 2, 3, 5}
CodeIntegrityPolicyEnforcementStatus : 0
InstanceIdentifier : 4ff40742-2649-41b8-bdd1-e80fad1cce80
RequiredSecurityProperties : {1, 2}
SecurityServicesConfigured : {1}
SecurityServicesRunning : {1}
UsermodeCodeIntegrityPolicyEnforcementStatus : 0
Version : 1.0
VirtualizationBasedSecurityStatus : 2
PSComputerName
Confirm that both SecurityServicesConfigured and SecurityServicesRunning have the value
{ 1 }.
Alternatively, you can verify if Windows Defender Credential Guard is enabled by using System
Information (msinfo32):

a. In the Windows VM, open System Information by typing msinfo32 in the search field next to the
Start menu.
b. Verify if the values of the parameters are as indicated in the following screen shot:

Figure 48: Verify Windows Defender Credential Guard

Affinity Policies for AHV


As an administrator of an AHV cluster, you can specify scheduling policies for virtual machines on an AHV
cluster. By defining these policies, you can control the placement of the virtual machines on the hosts
within a cluster.

You can define two types of affinity policies.

VM-Host Affinity Policy


The VM-host affinity policy controls the placement of the VMs. You can use this policy to specify that a
selected VM can only run on the members of the affinity host list. This policy checks and enforces where a
VM can be hosted when you restart or migrate the VM.

Note:

• If you choose to apply the VM-host affinity policy, it limits Acropolis HA and Acropolis Dynamic
Scheduling (ADS) in such a way that a virtual machine cannot be restarted or migrated to a
host that does not conform to the requirements of the affinity policy as this policy is enforced
mandatorily.
• The VM-host anti-affinity policy is not supported.
• VMs configured with host affinity settings retain these settings if the VM is migrated to a new
cluster. Remove the VM-host affinity policies applied to a VM before you migrate it to another
cluster, because the VM retains the UUID of the original host, which prevents the VM from
restarting on the destination cluster. Protecting such VMs succeeds, but some disaster recovery
operations, such as migration, fail, and attempts to power on these VMs also fail.
• VMs with host affinity policies can only be migrated to the hosts specified in the affinity policy. If
only one host is specified, the VM cannot be migrated or started on another host during an HA
event. For more information, see Non-Migratable Hosts on page 114.

You can define the VM-host affinity policies by using Prism Element during the VM create or update
operation. For more information, see Creating a VM (AHV).

VM-VM Anti-Affinity Policy


You can use this policy to specify anti-affinity between virtual machines. The VM-VM anti-affinity policy
keeps the specified virtual machines apart so that when a problem occurs with one host, you do not lose
both virtual machines. However, this is a preferential policy. It does not prevent the Acropolis Dynamic
Scheduling (ADS) feature from taking necessary action in case of resource constraints.

Note:

• Currently, you can only define VM-VM anti-affinity policy by using aCLI. For more information,
see Configuring VM-VM Anti-Affinity Policy on page 113.
• The VM-VM affinity policy is not supported.

Note: If a VM that has affinity policies configured is cloned, the policies are not automatically applied to
the cloned VM. However, if a VM is restored from a DR snapshot, the policies are automatically applied
to the VM.

Limitations of Affinity Rules


Even if a host is removed from a cluster, the host UUID is not removed from the host-affinity list of
a VM.

Configuring VM-VM Anti-Affinity Policy


To configure VM-VM anti-affinity policies, you must first define a group and then add all the VMs on which
you want to define VM-VM anti-affinity policy.

About this task

Note: Currently, the VM-VM affinity policy is not supported.

Perform the following procedure to configure the VM-VM anti-affinity policy.

Procedure

1. Log on to the Controller VM with SSH session.

2. Create a group.
nutanix@cvm$ acli vm_group.create group_name
Replace group_name with the name of the group.

3. Add the VMs on which you want to define anti-affinity to the group.
nutanix@cvm$ acli vm_group.add_vms group_name vm_list=vm_name
Replace group_name with the name of the group. Replace vm_name with the name of the VMs that you
want to define anti-affinity on. In case of multiple VMs, you can specify comma-separated list of VM
names.

4. Configure VM-VM anti-affinity policy.


nutanix@cvm$ acli vm_group.antiaffinity_set group_name
Replace group_name with the name of the group.
After you configure the group, the new anti-affinity rule is applied the next time ADS runs. ADS runs
every 15 minutes.
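For example, a minimal sketch of the complete workflow, assuming a hypothetical group named sql-anti-affinity
and hypothetical VMs named sql-vm1, sql-vm2, and sql-vm3, might look like the following:
nutanix@cvm$ acli vm_group.create sql-anti-affinity
nutanix@cvm$ acli vm_group.add_vms sql-anti-affinity vm_list=sql-vm1,sql-vm2,sql-vm3
nutanix@cvm$ acli vm_group.antiaffinity_set sql-anti-affinity
The next time ADS runs, it attempts to place sql-vm1, sql-vm2, and sql-vm3 on different hosts if resources permit.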

Removing VM-VM Anti-Affinity Policy


Perform the following procedure to remove the VM-VM anti-affinity policy.

Procedure

1. Log on to the Controller VM with SSH session.

2. Remove the VM-VM anti-affinity policy.


nutanix@cvm$ acli vm_group.antiaffinity_unset group_name
Replace group_name with the name of the group.
The VM-VM anti-affinity policy is removed for the VMs that are present in the group, and they can start
on any host during the next power on operation (as necessitated by the ADS feature).

Non-Migratable Hosts
VMs with GPU, CPU passthrough, PCI passthrough, or host affinity policies are not migrated to other
hosts in the cluster. Such VMs are handled differently in scenarios that require VMs to migrate to other
hosts in the cluster.

Table 8: Scenarios Where VMs Are Required to Migrate to Other Hosts

Scenario Behavior
One-click upgrade VM is powered off.
Life-cycle management (LCM) Pre-check for LCM fails and the VMs are not migrated.

Rolling restart VM is powered off.
AHV host maintenance mode Use the tunable option to shut down the VMs while putting the node
in maintenance mode. For more information, see Putting a Node
into Maintenance Mode on page 22.

Performing Power Operations on VMs by Using Nutanix Guest Tools


(aCLI)
You can initiate safe and graceful power operations such as soft shutdown and restart of the VMs running
on the AHV hosts by using the aCLI. Nutanix Guest Tools (NGT) initiates and performs the soft shutdown
and restart operations within the VM. This workflow ensures a safe and graceful shutdown or restart of the
VM. You can create a pre-shutdown script that you can choose to run before a shutdown or restart of the
VM. In the pre-shutdown script, include any tasks or checks that you want to run before a VM is shut down
or restarted. You can choose to cancel the power operation if the pre-shutdown script fails. If the script
fails, an alert (guest_agent_alert) is generated in the Prism web console.

Before you begin


Ensure that you have met the following prerequisites before you initiate the power operations:
1. NGT is enabled on the VM. All operating systems that NGT supports are supported for this feature.
2. NGT version running on the Controller VM and guest VM is the same.
3. (Optional) If you want to run a pre-shutdown script, place the script in the following locations depending
on your VMs:

• Windows VMs: installed_dir\scripts\power_off.bat


The file name of the script must be power_off.bat.
• Linux VMs: installed_dir/scripts/power_off
The file name of the script must be power_off.
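The following is a minimal sketch of a Linux pre-shutdown script. It assumes a systemd-based guest, a
hypothetical service named myapp.service, and that NGT treats a non-zero exit status as a script failure;
adapt the checks to your own application.
#!/bin/sh
# Hypothetical pre-shutdown check: stop the application service cleanly
# and report failure (non-zero exit) if it cannot be stopped.
systemctl stop myapp.service || exit 1
exit 0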

About this task

Note: You can also perform these power operations by using the V3 API calls. For more information, see
developer.nutanix.com.

Perform the following steps to initiate the power operations:

Procedure

1. Log on to a Controller VM with SSH.

2. Do one of the following:

» Soft shut down the VM.


nutanix@cvm$ acli vm.guest_shutdown vm_name enable_script_exec=[true or false]
fail_on_script_failure=[true or false]
Replace vm_name with the name of the VM.
» Restart the VM.
nutanix@cvm$ acli vm.guest_reboot vm_name enable_script_exec=[true or false]
fail_on_script_failure=[true or false]
Replace vm_name with the name of the VM.
Set the value of enable_script_exec to true to run your pre-shutdown script and set the value of
fail_on_script_failure to true to cancel the power operation if the pre-shutdown script fails.
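For example, a hypothetical invocation that soft shuts down a VM named app-vm01, runs the pre-shutdown
script, and cancels the shutdown if the script fails might look like this (the VM name is illustrative):
nutanix@cvm$ acli vm.guest_shutdown app-vm01 enable_script_exec=true fail_on_script_failure=true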

UEFI Support for VM


UEFI firmware is a successor to legacy BIOS firmware that supports larger hard drives, faster boot time,
and provides more security features.
VMs with UEFI firmware have the following advantages:

• Boot faster
• Avoid legacy option ROM address constraints
• Include robust reliability and fault management
• Use UEFI drivers

Note:

• Nutanix supports starting VMs with UEFI firmware in an AHV cluster. However, if a
VM is added to a protection domain and later restored on a different cluster, the VM loses
its boot configuration. To restore the lost boot configuration, see Setting up Boot Device on
page 119.
• Nutanix also provides limited support for VMs migrated from a Hyper-V cluster.

You can create or update VMs with UEFI firmware by using aCLI commands, the Prism Element web console,
or Prism Central web console. For more information about creating a VM by using the Prism Element web
console or Prism Central web console, see Creating a VM (AHV) on page 75. For information about
creating a VM by using aCLI, see Creating UEFI VMs by Using aCLI on page 116.

Note: If you are creating a VM by using aCLI commands, you can define the location of the storage
container for UEFI firmware and variables. Prism Element web console or Prism Central web console does
not provide the option to define the storage container to store UEFI firmware and variables.

For more information about the supported OSes for the guest VMs, see the AHV Guest OS section in
the Compatibility and Interoperability Matrix document.

Creating UEFI VMs by Using aCLI


In AHV clusters, you can create a virtual machine (VM) to start with UEFI firmware by using Acropolis CLI
(aCLI). This topic describes the procedure to create a VM by using aCLI. See the "Creating a VM (AHV)"
topic for information about how to create a VM by using the Prism Element web console.

Before you begin
Ensure that the VM has an empty vDisk.

About this task


Perform the following procedure to create a UEFI VM by using aCLI:

Procedure

1. Log on to any Controller VM in the cluster with SSH.

2. Create a UEFI VM.

nutanix@cvm$ acli vm.create vm-name uefi_boot=true


A VM is created with UEFI firmware. Replace vm-name with a name of your choice for the VM. By
default, the UEFI firmware and variables are stored in an NVRAM container. If you would like to specify
a location of the NVRAM storage container to store the UEFI firmware and variables, do so by running
the following command.
nutanix@cvm$ acli vm.create vm-name uefi_boot=true
nvram_container=NutanixManagementShare
Replace NutanixManagementShare with a storage container in which you want to store the UEFI
variables.
By default, the UEFI variables are stored in an NVRAM container. Nutanix recommends that you choose a
storage container with at least an RF2 storage policy to ensure VM high availability in node failure
scenarios. For more information about the RF2 storage policy, see Failure and Recovery Scenarios in the
Prism Element Web Console Guide.

Note: When you update the location of the storage container, clear the UEFI configuration and update
the location of nvram_container to a container of your choice.

What to do next
Go to the UEFI BIOS menu and configure the UEFI firmware settings. For more information about
accessing and setting the UEFI firmware, see Getting Familiar with UEFI Firmware Menu on page 117.

Getting Familiar with UEFI Firmware Menu


After you launch a VM console from the Prism Element web console, the UEFI firmware menu allows you
to do the following tasks for the VM.

• Changing default boot resolution


• Setting up boot device
• Changing boot-time value

Changing Boot Resolution


You can change the default boot resolution of your Windows VM from the UEFI firmware menu.

Before you begin


Ensure that the VM is in the powered-on state.

About this task


Perform the following procedure to change the default boot resolution of your Windows VM by using the
UEFI firmware menu.

Procedure

1. Log on to the Prism Element web console.

2. Launch the console for the VM.


For more details about launching console for the VM, see Managing a VM (AHV) section in Prism
Element Web Console Guide.

3. To go to the UEFI firmware menu, press the F2 key on your keyboard.

Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box,
and immediately press F2 when the VM starts to boot.

Important: Resetting the VM causes downtime. We suggest that you reset the VM only during off-
production hours or during a maintenance period.

Figure 49: UEFI Firmware Menu

4. Use the up or down arrow key to go to Device Manager and press Enter.
The Device Manager page appears.

5. In the Device Manager screen, use the up or down arrow key to go to OVMF Platform Configuration
and press Enter.

Figure 50: OVMF Settings

The OVMF Settings page appears.

6. In the OVMF Settings page, use the up or down arrow key to go to the Change Preferred field and
use the right or left arrow key to increase or decrease the boot resolution.
The default boot resolution is 1280x1024.

7. Do one of the following.

» To save the changed resolution, press the F10 key.


» To go back to the previous screen, press the Esc key.

8. Select Reset and click Submit in the Power off/Reset dialog box to restart the VM.
After you restart the VM, the OS displays the changed resolution.

Setting up Boot Device


This section describes how to set up the boot device for a UEFI VM.

About this task


You cannot set the boot order for UEFI VMs by using the aCLI, Prism Central web console, or Prism
Element web console. You can change the boot device for a UEFI VM by using the UEFI firmware menu
only.

Before you begin


Ensure that the following prerequisites are met before you set up or change the boot order for the VM:

• The VM is in the powered-on state.
• The system behavior associated with the following VM conditions is noted:

Table 9: System Behavior based on VM condition

Condition System Behavior

VM is installed with UEFI and Any change made to the boot order persists and the changes are saved
the EFI boot partition exists. in the nvVars file in the EFI partition.

No guest OS is installed on the Any change made to the boot order persists and the changes are saved
VM but the EFI boot partition in the nvVars file in the EFI partition.
exists.

VM with no EFI boot partition. Any change made to the boot order persists only while the VM is on (or
rebooted), but a power off/on action reverts the boot order change to the
previous setting.

Procedure

To set up the boot device for a UEFI VM, perform the following steps:

1. Log on to the Prism Element web console.

2. Launch the console for the VM.


For more details about launching console for the VM, see Managing a VM (AHV) section in Prism
Element Web Console Guide.

3. To go to the UEFI firmware menu, press the F2 key on your keyboard.

Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box,
and immediately press F2 when the VM starts to boot.

Important: Resetting the VM causes downtime. We suggest that you reset the VM only during off-
production hours or during a maintenance period.

4. Use the up or down arrow key to go to Boot Manager and press Enter.
The Boot Manager screen displays the list of available boot devices in the cluster.

Figure 51: Boot Manager

5. In the Boot Manager screen, use the up or down arrow key to select the boot device and press Enter.
The boot device is saved. After you select and save the boot device, the VM boots up with the new boot
device.

6. To go back to the previous screen, press Esc.

Changing Boot Time-Out Value


The boot time-out value determines how long the boot menu is displayed (in seconds) before the default
boot entry is loaded to the VM. This topic describes the procedure to change the default boot-time value of
0 seconds.

About this task


Ensure that the VM is in the powered-on state.

Procedure

1. Log on to the Prism Element web console.

2. Launch the console for the VM.


For more details about launching console for the VM, see Managing a VM (AHV) section in Prism
Element Web Console Guide.

3. To go to the UEFI firmware menu, press the F2 key on your keyboard.

Tip: To enter UEFI menu, open the VM console, select Reset in the Power off/Reset VM dialog box,
and immediately press F2 when the VM starts to boot.

Important: Resetting the VM causes downtime. We suggest that you reset the VM only during off-
production hours or during a maintenance period.

4. Use the up or down arrow key to go to Boot Maintenance Manager and press Enter.

Figure 52: Boot Maintenance Manager

5. In the Boot Maintenance Manager screen, use the up or down arrow key to go to the Auto Boot
Time-out field.
The default boot-time value is 0 seconds.

6. In the Auto Boot Time-out field, enter the boot-time value and press Enter.

Note: The valid boot-time value ranges from 1 second to 9 seconds.

The boot-time value is changed. The VM starts after the defined boot-time value.

7. To go back to the previous screen, press Esc.

Secure Boot Support for VMs


The pre-operating system environment is vulnerable to attacks by malicious loaders. UEFI Secure Boot
addresses this vulnerability by using policies present in the firmware, along with certificates, to ensure
that only properly signed and authenticated components are allowed to execute.

Supported Operating Systems


For more information about the supported OSes for the guest VMs, see the AHV Guest OS section in the
Compatibility and Interoperability Matrix document.

Secure Boot Considerations


This section provides the limitations and requirements to use Secure Boot.

Limitations
Secure Boot for guest VMs has the following limitations:

• Nutanix does not support converting a VM that uses IDE disks or legacy BIOS to VMs that use Secure
Boot.

• The minimum supported version of the Nutanix VirtIO package for Secure boot-enabled VMs is 1.1.6.
• Secure boot VMs do not permit CPU, memory, or PCI disk hot plug.

Requirements
Following are the requirements for Secure Boot:

• Secure Boot is supported only on the Q35 machine type.

Creating/Updating a VM with Secure Boot Enabled


You can enable Secure Boot with UEFI firmware, either while creating a VM or while updating a VM by
using aCLI commands or Prism Element web console.
See Creating a VM (AHV) on page 75 for instructions about how to enable Secure Boot by using the
Prism Element web console.

Creating a VM with Secure Boot Enabled

About this task


To create a VM with Secure Boot enabled:

Procedure

1. Log on to any Controller VM in the cluster with SSH.

2. To create a VM with Secure Boot enabled:
nutanix@cvm$ acli vm.create <vm_name> secure_boot=true machine_type=q35

Note: Specifying the machine type is required to enable the secure boot feature. UEFI is enabled by
default when the Secure Boot feature is enabled.

Updating a VM to Enable Secure Boot

About this task


To update a VM to enable Secure Boot:

Procedure

1. Log on to any Controller VM in the cluster with SSH.

2. To update a VM to enable Secure Boot, ensure that the VM is powered off.


nutanix@cvm$ acli vm.update <vm_name> secure_boot=true machine_type=q35

Note:

• If you disable only the Secure Boot flag, the machine type remains q35 unless you explicitly
change the machine type.
• UEFI is enabled by default when the Secure Boot feature is enabled. Disabling Secure Boot
does not revert the UEFI flags.
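As an illustration, the following sketch enables Secure Boot on an existing, powered-off VM and then inspects
the resulting configuration. The VM name is hypothetical, it assumes the vm.get subcommand is available in
your aCLI version, and the exact field names in the vm.get output can vary by AOS release.
nutanix@cvm$ acli vm.update sb-test-vm secure_boot=true machine_type=q35
nutanix@cvm$ acli vm.get sb-test-vm | grep -iE "secure_boot|machine_type"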

Virtual Machine Network Management


Virtual machine network management involves configuring connectivity for guest VMs through virtual
switches and VPCs.
For information about creating or updating a virtual switch and other VM network options, see Network
and Security Management in the Prism Central Guide. Virtual switch creation and updates are also covered in
Network Management in the Prism Web Console Guide.

Configuring a Virtual NIC to Operate in Access or Trunk Mode


By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual NIC can
send and receive traffic only over its own VLAN, which is the VLAN of the virtual network to which it is
connected. If restricted to using access mode interfaces, a VM running an application on multiple VLANs
(such as a firewall application) must use multiple virtual NICs—one for each VLAN. Instead of configuring
multiple virtual NICs in access mode, you can configure a single virtual NIC on the VM to operate in trunk
mode. A virtual NIC in trunk mode can send and receive traffic over any number of VLANs in addition to its
own VLAN. You can trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from the
trunk mode to the access mode, in which case the virtual NIC reverts to sending and receiving traffic only
over its own VLAN.

About this task


To configure a virtual NIC as an access port or trunk port, do the following:

Procedure

1. Log on to the CVM with SSH.

2. Do one of the following:

a. Create a virtual NIC on the VM and configure the NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_create <vm_name> network=network [vlan_mode={kAccess |
kTrunked}] [trunked_networks=networks]
Specify appropriate values for the following parameters:

• <vm_name>. Name of the VM.

• network. Name of the virtual network to which you want to connect the virtual NIC.
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The parameter
is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set to kAccess.
To include the default VLAN, VLAN 0, include it in the list of trunked networks. To trunk all VLANs,
set vlan_mode to kTrunked and skip this parameter.
• vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for
access mode and to kTrunked for trunk mode. Default: kAccess.
b. Configure an existing virtual NIC to operate in the required mode.
nutanix@cvm$ acli vm.nic_update <vm_name> mac_addr update_vlan_trunk_info=true
[vlan_mode={kAccess | kTrunked}] [trunked_networks=networks]
Specify appropriate values for the following parameters:

• <vm_name>. Name of the VM.

• mac_addr. MAC address of the virtual NIC to update (the MAC address is used to identify the
virtual NIC). Required to update a virtual NIC.
• update_vlan_trunk_info. Update the VLAN type and list of trunked VLANs. Set
update_vlan_trunk_info=true to enable trunked mode. If not specified, the parameter defaults to
false and the vlan_mode and trunked_networks parameters are ignored.

Note: You must set the update_vlan_trunk_info to true. If you do not set this parameter to true,
"trunked_networks" are not changed.

• vlan_mode. Mode in which the virtual NIC must operate. Set the parameter to kAccess for access
mode and to kTrunked for trunk mode.
• trunked_networks. Comma-separated list of the VLAN IDs that you want to trunk. The
parameter is processed only if vlan_mode is set to kTrunked and is ignored if vlan_mode is set to
kAccess. To include the default VLAN, VLAN 0, include it in the list of trunked networks. To trunk
all VLANs, set vlan_mode to kTrunked and skip this parameter.
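For example, assuming a hypothetical firewall VM named fw-vm attached to a network named vlan.10, the
following commands first create a trunked NIC that carries VLANs 20, 30, and 40 in addition to its own VLAN,
and then convert that NIC back to access mode (the MAC address is a placeholder for the NIC you want to update):
nutanix@cvm$ acli vm.nic_create fw-vm network=vlan.10 vlan_mode=kTrunked trunked_networks=20,30,40
nutanix@cvm$ acli vm.nic_update fw-vm 50:6b:8d:xx:xx:xx update_vlan_trunk_info=true vlan_mode=kAccess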

Virtual Machine Memory and CPU Hot-Plug Configurations


Memory and CPUs are hot-pluggable on guest VMs running on AHV. You can increase the memory
allocation and the number of CPUs on your VMs while the VMs are powered on. You can change the
number of vCPUs (sockets) while the VMs are powered on. However, you cannot change the number of
cores per socket while the VMs are powered on.

Note: You cannot decrease the memory allocation and the number of CPUs on your VMs while the VMs are
powered on.

You can change the memory and CPU configuration of your VMs by using the Acropolis CLI (aCLI) (see
Managing a VM (AHV) in the Prism Element Web Console Guide or see Managing a VM (AHV) and
Managing a VM (Self Service) in the Prism Central Guide).

See the AHV Guest OS Compatibility Matrix for information about operating systems on which you can hot
plug memory and CPUs.

Memory OS Limitations
1. On Linux operating systems, the Linux kernel might not bring the hot-plugged memory online. If the
memory is not online, you cannot use the new memory. Perform the following procedure to bring the
memory online.
1. Identify the memory block that is offline.
Display the status of all of the memory.
$ grep line /sys/devices/system/memory/*/state
Display the state of a specific memory block.
$ cat /sys/devices/system/memory/memoryXXX/state

2. Bring the memory online.


$ echo online > /sys/devices/system/memory/memoryXXX/state
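If several memory blocks are offline, a short loop such as the following (a sketch; run it as root inside the
guest) brings all offline blocks online in one pass:
$ for m in /sys/devices/system/memory/memory*/state; do grep -q offline "$m" && echo online > "$m"; done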

2. If your VM has CentOS 7.2 as the guest OS and less than 3 GB memory, hot plugging more memory
to that VM so that the final memory is greater than 3 GB results in a memory-overflow condition. To
resolve the issue, restart the guest OS (CentOS 7.2) with the following setting:
swiotlb=force

CPU OS Limitation
On CentOS operating systems, if the hot-plugged CPUs are not displayed in /proc/cpuinfo, you might
have to bring the CPUs online. For each hot-plugged CPU, run the following command to bring the CPU
online.
$ echo 1 > /sys/devices/system/cpu/cpu<n>/online
Replace <n> with the number of the hot plugged CPU.
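If multiple hot-plugged CPUs must be brought online, the following sketch (run as root inside the guest)
writes 1 to every CPU that exposes an online control file; cpu0 typically has no such file and is skipped by
the glob, and writing 1 to an already-online CPU is harmless:
$ for c in /sys/devices/system/cpu/cpu[0-9]*/online; do echo 1 > "$c"; done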

Hot-Plugging the Memory and CPUs on Virtual Machines (AHV)

About this task


Perform the following procedure to hot plug the memory and CPUs on the AHV VMs.

Procedure

1. Log on the Controller VM with SSH.

2. Update the memory allocation for the VM.


nutanix@cvm$ acli vm.update vm-name memory=new_memory_size
Replace vm-name with the name of the VM and new_memory_size with the memory size.

3. Update the number of CPUs on the VM.


nutanix@cvm$ acli vm.update vm-name num_vcpus=n
Replace vm-name with the name of the VM and n with the number of CPUs.

Note: After you upgrade from a version that does not support hot plug to a version that does, you must
power cycle any VM that was instantiated and powered on before the upgrade, so that it becomes
compatible with the memory and CPU hot-plug feature. This power cycle is required only once after
the upgrade. New VMs created on the supported version have hot-plug compatibility by default.
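For example, assuming a hypothetical VM named app-vm01 that currently has 8 GB of memory and 4 vCPUs,
the following commands hot plug additional memory and vCPUs while the VM remains powered on:
nutanix@cvm$ acli vm.update app-vm01 memory=16G
nutanix@cvm$ acli vm.update app-vm01 num_vcpus=8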

Virtual Machine Memory Management (vNUMA)
AHV hosts support Virtual Non-uniform Memory Access (vNUMA) on virtual machines. You can enable
vNUMA on VMs when you create or modify the VMs to optimize memory performance.

Non-uniform Memory Access (NUMA)


In a NUMA topology, the memory access times of a VM depend on the memory location relative to a
processor. A VM accesses memory local to a processor faster than the non-local memory. If the VM uses
both CPU and memory from the same physical NUMA node, you can achieve optimal resource utilization.
If the vCPUs run on one NUMA node (for example, node 0) and the VM accesses memory from another
node (node 1), memory latency occurs. Ensure that the virtual topology of VMs matches the physical
hardware topology to achieve minimum memory latency.

Virtual Non-uniform Memory Access (vNUMA)


vNUMA optimizes the memory performance of virtual machines that require more vCPUs or memory than
the capacity of a single physical NUMA node. In a vNUMA topology, you can create multiple vNUMA nodes
where each vNUMA node includes vCPUs and virtual RAM. When you assign a vNUMA node to a physical
NUMA node, the vCPUs can intelligently determine the memory latency (high or low). Low memory latency
within a vNUMA node results in low latency in the physical NUMA node as well.

Enabling vNUMA on Virtual Machines

Before you begin


Before you enable vNUMA, see the AHV Best Practices Guide under Solutions Documentation.

About this task


Perform the following procedure to enable vNUMA on your VMs running on the AHV hosts.

Procedure

1. Log on to a Controller VM with SSH.

2. Check how many NUMA nodes are available on each AHV host in the cluster.
nutanix@cvm$ hostssh "numactl --hardware"
The console displays an output similar to the following:
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 128837 MB
node 0 free: 862 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 129021 MB
node 1 free: 352 MB
node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128859 MB
node 0 free: 1076 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129000 MB
node 1 free: 436 MB

node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128859 MB
node 0 free: 701 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129000 MB
node 1 free: 357 MB
node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128838 MB
node 0 free: 1274 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129021 MB
node 1 free: 424 MB
node distances:
node 0 1
0: 10 21
1: 21 10
============= 10.x.x.x ============
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 128837 MB
node 0 free: 577 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 129021 MB
node 1 free: 612 MB
node distances:
node 0 1
0: 10 21
1: 21 10
The example output shows that each AHV host has two NUMA nodes.

3. Do one of the following:

» Enable vNUMA if you are creating a VM.


nutanix@cvm$ acli vm.create <vm_name> num_vcpus=x \
num_cores_per_vcpu=x memory=xG \
num_vnuma_nodes=x

» Enable vNUMA if you are modifying an existing VM.


nutanix@cvm$ acli vm.update <vm_name> \
num_vnuma_nodes=x

Replace <vm_name> with the name of the VM on which you want to enable vNUMA. Replace x
with the values for the following indicated parameters:

• num_vcpus: Type the number of vCPUs for the VM.


• num_cores_per_vcpu: Type the number of cores per vCPU.

• memory: Type the memory in GB for the VM.
• num_vnuma_nodes: Type the number of vNUMA nodes for the VM.
For example:
nutanix@cvm$ acli vm.create test_vm num_vcpus=20 memory=150G num_vnuma_nodes=2
This command creates a VM with 2 vNUMA nodes, 10 vCPUs and 75 GB memory for each vNUMA
node.

GPU and vGPU Support


AHV supports GPU-accelerated computing for guest VMs. You can configure either GPU pass-through or a
virtual GPU.

Note: You can configure either pass-through or a vGPU for a guest VM but not both.

This guide describes the concepts related to the GPU and vGPU support in AHV. For the configuration
procedures, see the Prism Element Web Console Guide.
For driver installation instructions, see the NVIDIA Grid Host Driver for Nutanix AHV Installation Guide.

Note: VMs with GPU are not migrated to other hosts in the cluster. For more information, see Non-
Migratable Hosts on page 114.

Supported GPUs
The following GPUs are supported:

Note: These GPUs are supported only by the AHV version that is bundled with the AOS release.

• NVIDIA® Ampere® A16


• NVIDIA® Ampere® A30
• NVIDIA® Ampere® A40
• NVIDIA® Ampere® A100
• NVIDIA® Quadro® RTX 6000
• NVIDIA® Quadro® RTX 8000
• NVIDIA® Tesla® M10
• NVIDIA® Tesla® M60
• NVIDIA® Tesla® P4
• NVIDIA® Tesla® P40
• NVIDIA® Tesla® P100
• NVIDIA® Tesla® T4 16 GB
• NVIDIA® Tesla® V100 16 GB
• NVIDIA® Tesla® V100 32 GB
• NVIDIA® Tesla® V100S 32 GB

GPU Pass-Through for Guest VMs
AHV hosts support GPU pass-through for guest VMs, allowing applications on VMs direct access to GPU
resources. The Nutanix user interfaces provide a cluster-wide view of GPUs, allowing you to allocate
any available GPU to a VM. You can also allocate multiple GPUs to a VM. However, in a pass-through
configuration, only one VM can use a GPU at any given time.

Host Selection Criteria for VMs with GPU Pass-Through


When you power on a VM with GPU pass-through, the VM is started on the host that has the specified
GPU, provided that the Acropolis Dynamic Scheduler determines that the host has sufficient resources
to run the VM. If the specified GPU is available on more than one host, the Acropolis Dynamic Scheduler
ensures that a host with sufficient resources is selected. If sufficient resources are not available on any
host with the specified GPU, the VM is not powered on.
If you allocate multiple GPUs to a VM, the VM is started on a host if, in addition to satisfying Acropolis
Dynamic Scheduler requirements, the host has all of the GPUs that are specified for the VM.
If you want a VM to always use a GPU on a specific host, configure host affinity for the VM.

Support for Graphics and Compute Modes


AHV supports running GPU cards in either graphics mode or compute mode. If a GPU is running in
compute mode, Nutanix user interfaces indicate the mode by appending the string compute to the model
name. No string is appended if a GPU is running in the default graphics mode.

Switching Between Graphics and Compute Modes


If you want to change the mode of the firmware on a GPU, put the host in maintenance mode, and
then flash the GPU manually by logging on to the AHV host and performing standard procedures as
documented for Linux VMs by the vendor of the GPU card.
Typically, you restart the host immediately after you flash the GPU. After restarting the host, redo the GPU
configuration on the affected VM, and then start the VM. For example, consider that you want to re-flash an
NVIDIA Tesla® M60 GPU that is running in graphics mode. The Prism web console identifies the card as
an NVIDIA Tesla M60 GPU. After you re-flash the GPU to run in compute mode and restart the host, redo
the GPU configuration on the affected VMs by adding back the GPU, which is now identified as an NVIDIA
Tesla M60.compute GPU, and then start the VM.

Supported GPU Cards


For a list of supported GPUs, see Supported GPUs on page 128.

Limitations
GPU pass-through support has the following limitations:

• Live migration of VMs with a GPU configuration is not supported. Live migration of VMs is necessary
when the BIOS, BMC, and the hypervisor on the host are being upgraded. During these upgrades, VMs
that have a GPU configuration are powered off and then powered on automatically when the node is
back up.
• VM pause and resume are not supported.
• You cannot hot add VM memory if the VM is using a GPU.
• Hot add and hot remove support is not available for GPUs.
• You can change the GPU configuration of a VM only when the VM is turned off.

• The Prism web console does not support console access for VMs that are configured with GPU pass-
through. Before you configure GPU pass-through for a VM, set up an alternative means to access the
VM. For example, enable remote access over RDP.
Removing GPU pass-through from a VM restores console access to the VM through the Prism web
console.

Configuring GPU Pass-Through


For information about configuring GPU pass-through for guest VMs, see Creating a VM (AHV) in the
"Virtual Machine Management" chapter of the Prism Element Web Console Guide.

NVIDIA GRID Virtual GPU Support on AHV


AHV supports NVIDIA GRID technology, which enables multiple guest VMs to use the same physical GPU
concurrently. Concurrent use is possible by dividing a physical GPU into discrete virtual GPUs (vGPUs)
and allocating those vGPUs to guest VMs. Each vGPU has a fixed range of frame buffer and uses all the
GPU processing cores in a time-sliced manner.
Virtual GPUs are of different types (vGPU types are also called vGPU profiles). vGPUs differ by the
amount of physical GPU resources allocated to them and the class of workload that they target. The
number of vGPUs into which a single physical GPU can be divided therefore depends on the vGPU profile
that is used on a physical GPU.
Each physical GPU supports more than one vGPU profile, but a physical GPU cannot run multiple vGPU
profiles concurrently. After a vGPU of a given profile is created on a physical GPU (that is, after a vGPU
is allocated to a VM that is powered on), the GPU is restricted to that vGPU profile until it is freed up
completely. To understand this behavior, consider that you configure a VM to use an M60-1Q vGPU. When
the VM is powering on, it is allocated an M60-1Q vGPU instance only if a physical GPU that supports
M60-1Q is either unused or already running the M60-1Q profile and can accommodate the requested
vGPU.
If an entire physical GPU that supports M60-1Q is free at the time the VM is powering on, an M60-1Q
vGPU instance is created for the VM on the GPU, and that profile is locked on the GPU. In other words,
until the physical GPU is completely freed up again, only M60-1Q vGPU instances can be created on that
physical GPU (that is, only VMs configured with M60-1Q vGPUs can use that physical GPU).

Note: NVIDIA does not support Windows Guest VMs on the C-series NVIDIA vGPU types. See the NVIDIA
documentation on Virtual GPU software for more information.

NVIDIA Grid Host Drivers and License Installation


To enable guest VMs to use vGPUs on AHV, you must install NVIDIA drivers on the guest VMs, install the
NVIDIA GRID host driver on the hypervisor, and set up an NVIDIA GRID License Server.
See the NVIDIA Grid Host Driver for Nutanix AHV Installation Guide for details about the workflow to
enable guest VMs to use vGPUs on AHV and the NVIDIA GRID host driver installation instructions.

vGPU Profile Licensing


vGPU profiles are licensed through an NVIDIA GRID license server. The choice of license depends on the
type of vGPU that the applications running on the VM require. Licenses are available in various editions,
and the vGPU profile that you want might be supported by more than one license edition.

Note: If the specified license is not available on the licensing server, the VM starts up and functions
normally, but the vGPU runs with reduced capability.

You must determine the vGPU profile that the VM requires, install an appropriate license on the licensing
server, and configure the VM to use that license and vGPU type. For information about licensing for
different vGPU types, see the NVIDIA GRID licensing documentation.
Guest VMs check out a license from the licensing server over the network when they start up and return the license when they shut down. When a license is checked back in, the vGPU is returned to the vGPU resource pool.
When powered on, guest VMs use a vGPU in the same way that they use a physical GPU that is passed
through.

Supported GPU Cards


For a list of supported GPUs, see Supported GPUs on page 128.

High Availability Support for VMs with vGPUs


Nutanix conditionally supports high availability (HA) of VMs that have NVIDIA GRID vGPUs configured. The cluster does not reserve any specific resources to guarantee HA for VMs with vGPUs; such VMs are restarted on a best-effort basis in the event of a node failure. A VM with vGPUs can restart on another (failover) host only if that host has compatible or identical vGPU resources available. The vGPU profile available on the failover host must be identical to the vGPU profile configured on the VM that needs HA. The system attempts to restart the VM after the event; if the failover host has insufficient memory or vGPU resources for the VM, the VM fails to start after failover.
The following conditions are applicable to HA of VMs with vGPUs:

• Memory is not reserved for the VM on the failover host by the HA process. When the VM fails over, if
sufficient memory is not available, the VM cannot power on.
• vGPU resources are not reserved on the failover host. When the VM fails over, if the required vGPU
resources are not available on the failover host, the VM cannot power on.

Limitations for vGPU Support


vGPU support on AHV has the following limitations:

• You cannot hot-add memory to VMs that have a vGPU.


• The Prism web console does not support console access for a VM that is configured with multiple
vGPUs. The Prism web console supports console access for a VM that is configured with a single
vGPU only.
Before you add multiple vGPUs to a VM, set up an alternative means to access the VM. For example,
enable remote access over RDP. For Linux VMs, instead of RDP, use Virtual Network Computing (VNC)
or equivalent.

Console Support for VMs with vGPU


Like other VMs, you can access a VM that has a single vGPU by using the console. You can enable or disable console support only for a VM with one vGPU configured; enabling console support for a VM with multiple vGPUs is not supported. By default, console support for a vGPU VM is disabled.
See Enabling or Disabling Console Support for vGPU VMs on page 141 for more information about
configuring the support.

Recovery of vGPU Console-enabled VMs


With AHV, you can recover vGPU console-enabled guest VMs efficiently. When you perform DR of vGPU console-enabled guest VMs, the VMs recover with the vGPU console enabled. The guest VMs fail to recover when you perform cross-hypervisor disaster recovery (CHDR).

For AHV with minimum AOS versions 6.1, 6.0.2.4, and 5.20:

• vGPU-enabled VMs can be recovered when protected by protection domains in PD-based DR or by protection policies in Leap-based solutions using asynchronous, NearSync, or Synchronous (Leap only) replication.

Note: GPU Passthrough is not supported.

• If both site A and site B have the same GPU boards (and the same assignable vGPU profiles), failovers work seamlessly. With protection domains, no additional steps are required: GPU profiles are restored correctly and vGPU console settings persist after recovery. With Leap DR, vGPU console settings do not persist after recovery.
• If site A and site B have different GPU boards and vGPU profiles, you must manually remove the vGPU
profile before you power on the VM in site B.
The vGPU console settings are persistent after recovery and all failovers are supported for the following:

Table 10: Persistent vGPU Console Settings with Failover Support

Recovery using                   For vGPU-enabled AHV VMs
Protection domain based DR       Yes
VMware SRM with Nutanix SRA      Not applicable

For information about this behavior, see the Recovery of vGPU-enabled VMs topic in the Data Protection and Recovery with Prism Element guide.
See Enabling or Disabling Console Support for vGPU VMs on page 141 for more information about configuring console support.
For SRA and SRM support, see the Nutanix SRA documentation.

ADS support for VMs with vGPUs


AHV supports Acropolis Dynamic Scheduling (ADS) for VMs with vGPUs.

Note: ADS support requires that live migration of VMs with vGPUs be operational in the cluster. See Live Migration of VMs with Virtual GPUs on page 140 for the minimum NVIDIA and AOS versions that support live migration of VMs with vGPUs.

When a number of VMs with vGPUs are running on a host and you enable ADS support for the cluster, the
Lazan manager invokes VM migration tasks to resolve resource hotspots or fragmentation in the cluster to
power on incoming vGPU VMs. The Lazan manager can migrate vGPU-enabled VMs to other hosts in the
cluster only if:

• The other hosts support compatible or identical vGPU resources as the source host (hosting the vGPU-
enabled VMs).
• The host affinity is not set for the vGPU-enabled VM.
For more information about limitations, see Live Migration of VMs with Virtual GPUs on page 140 and
Limitations of Live Migration Support on page 141.
For more information about ADS, see Acropolis Dynamic Scheduling in AHV on page 6.

Multiple Virtual GPU Support
You can deploy VMs with multiple virtual GPU instances by using Prism Central and the Prism Element web console. This capability builds on NVIDIA GRID virtual GPU (vGPU) support for multiple vGPU instances on a single VM.

Note: Multiple vGPUs on the same VM are supported on NVIDIA Virtual GPU software version 10.1
(440.53) or later.

You can deploy virtual GPUs of different types. The number of vGPUs into which a single physical GPU can be divided depends on the vGPU profile that is used on the physical GPU. Each physical GPU on a GPU board supports more than one type of vGPU profile. For example, a Tesla® M60 GPU device provides vGPU profiles such as M60-0Q, M60-1Q, M60-2Q, M60-4Q, and M60-8Q.
You can add multiple vGPUs of only the same vGPU profile to a single VM. For example, consider a VM on a node that has one NVIDIA Tesla® M60 GPU board. The Tesla® M60 board provides two physical GPUs, each supporting one M60-8Q vGPU, for a total of two M60-8Q vGPUs for the entire host.
For restrictions on configuring multiple vGPUs on the same VM, see Restrictions for Multiple vGPU
Support on page 133.
For steps to add multiple vGPUs to the same VM, see Creating a VM (AHV) and Adding Multiple vGPUs to
a VM information in Prism Element Web Console Guide or Creating a VM through Prism Central (AHV) and
Adding Multiple vGPUs to a VM in Prism Central Guide.

Restrictions for Multiple vGPU Support

You can configure multiple vGPUs subject to the following restrictions:

• All the vGPUs that you assign to one VM must be of the same type. In the preceding example, with the Tesla® M60 GPU device, you can assign multiple M60-8Q vGPUs. You cannot assign one vGPU of the M60-1Q type and another vGPU of the M60-8Q type.

Note: You can configure any number of vGPUs of the same type on a VM. However, the cluster
calculates a maximum number of vGPUs of the same type per VM. This number is defined as
max_instances_per_vm. This number is variable and changes based on the GPU resources available
in the cluster and the number of VMs deployed. If the number of vGPUs of a specific type that you
configured on a VM exceeds the max_instances_per_vm number, then the VM fails to power on and the
following error message is displayed:

Operation failed: NoHostResources: No host has enough available GPU for VM <name of
VM>(UUID of VM).

You could try reducing the GPU allotment...
When you configure multiple vGPUs on a VM, after you select the appropriate vGPU type for the first
vGPU assignment, Prism (Prism Central and Prism Element Web Console) automatically restricts the
selection of vGPU type for subsequent vGPU assignments to the same VM.

Figure 53: vGPU Type Restriction Message

Note:
You can use the CLI (aCLI) to configure multiple vGPUs of multiple types on the same VM. See Acropolis Command-Line Interface (aCLI) for information about aCLI. Run the vm.gpu_assign <vm.name> gpu=<gpu-type> command once for each vGPU that you want to add (see the example after this list).
See the GPU board and software documentation for information about the combinations of the number and types of vGPU profiles supported by the GPU resources installed in the cluster. For example, see the NVIDIA Virtual GPU Software Documentation for the vGPU type and number combinations on the Tesla® M60 board.

• Using Prism, you can configure multiple vGPUs only of the highest vGPU type. The highest vGPU profile type is based on the driver deployed in the cluster. In the preceding example, on a Tesla® M60 device, you can configure multiple vGPUs of only the M60-8Q type. Prism prevents you from configuring multiple vGPUs of any other type, such as M60-2Q.

Figure 54: vGPU Type Restriction Message

Note:
You can use the CLI (aCLI) to configure multiple vGPUs of other available types. See Acropolis Command-Line Interface (aCLI) for information about aCLI. Run the vm.gpu_assign <vm.name> gpu=<gpu-type> command once for each vGPU to configure multiple vGPUs of other available types.
See the GPU board and software documentation for more information.

• Configure either a passthrough GPU or vGPUs on the same VM. You cannot configure both
passthrough GPU and vGPUs. Prism automatically disallows such configurations after the first GPU is
configured.
• The VM powers on only if the requested type and number of vGPUs are available on the node.
In the preceding example, the VM that is configured with two M60-8Q vGPUs fails to power on if another VM sharing the same GPU board is already using one M60-8Q vGPU. This is because the Tesla® M60 GPU board allows only two M60-8Q vGPUs, one of which is already in use by the other VM. Thus, the VM configured with two M60-8Q vGPUs fails to power on due to the unavailability of the required vGPUs.
• Multiple vGPUs on the same VM are supported on NVIDIA Virtual GPU software version 10.1 (440.53)
or later. Ensure that the relevant GRID version license is installed and select it when you configure
multiple vGPUs.
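
As noted in the list above, the following is a minimal aCLI sketch of adding two vGPUs to the same VM from a CVM shell. The VM name vm1 and the profile string are placeholders; confirm the exact profile names and supported combinations against the GPU board and driver documentation for your cluster:
nutanix@cvm$ acli vm.gpu_assign vm1 gpu=M60-8Q
nutanix@cvm$ acli vm.gpu_assign vm1 gpu=M60-8Q
Running the command once per vGPU adds multiple vGPUs of the same type. Passing a different gpu= value in the second command would add a vGPU of another type, which Prism does not allow but aCLI does.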

Adding Multiple vGPUs to the Same VM

About this task


You can add multiple vGPUs of the same vGPU type to:

• A new VM when you create it.


• An existing VM when you update it.

Important:
Before you add multiple vGPUs to the VM, see Multiple Virtual GPU Support on page 133 and
Restrictions for Multiple vGPU Support on page 133.

After you add the first vGPU, do the following on the Create VM or Update VM dialog box (the main dialog
box) to add more vGPUs:

Procedure

1. Click Add GPU.

2. In the Add GPU dialog box, click Add.
The License field is grayed out because you cannot select a different license when you add a vGPU to the same VM.
The vGPU Profile is also auto-selected because you can add only an additional vGPU of the same vGPU type, as indicated by the message at the top of the dialog box.

Figure 55: Add GPU for multiple vGPUs

3. In the main dialog box, you see the newly added vGPU.

Figure 56: New vGPUs Added

4. Repeat the steps for each vGPU addition you want to make.

Live Migration of VMs with Virtual GPUs
You can perform live migration of VMs enabled with virtual GPUs (vGPU-enabled VMs) only on a best-effort basis, and only if the destination node can provide enough resources to the vGPU-enabled VMs. If the destination node does not have enough resources, the vGPU-enabled VMs are shut down and you might experience downtime.
In a successful migration, the vGPUs continue to run while the VMs that use them are seamlessly migrated in the background.
When you perform an LCM update, the vGPU-enabled VMs are listed as non-HA-protected VMs. LCM also migrates the non-HA-protected VMs on a best-effort basis to the destination node if the following requirements are met:

• The destination node has the resources that the VM requires.
• The VM GPU drivers are compatible with the AHV host GPU drivers.
If the destination node does not have enough resources, or if there is a compatibility issue between the VM GPU drivers and the AHV host GPU drivers, LCM forcibly shuts down the non-HA-protected VMs.

Note: Live migration of VMs with vGPUs is supported for vGPUs created with minimum NVIDIA Virtual GPU
software version 10.1 (440.53).

Table 11: Minimum Versions

Component    Supports                                   With Minimum Version
AOS          Live migration within the same cluster     5.18.1
AHV          Live migration within the same cluster     20190916.294
AOS          Live migration across clusters             6.1
AHV          Live migration across clusters             20201105.30142

Important: In an HA event involving any GPU node, the node locality of the affected vGPU VMs is not restored after the GPU node recovers. The affected vGPU VMs are intentionally not migrated back to their original GPU host, to avoid the extended VM stun time expected while migrating the vGPU frame buffer. If vGPU VM node locality is required, migrate the affected vGPU VMs to the desired host manually. For information about the steps to live migrate a VM with vGPUs, see Migrating Live a VM with Virtual GPUs in the Prism Central Guide and the Prism Web Console Guide.

Note:
Important frame buffer and VM stun time considerations are:

• The GPU board vendor (for example, for the NVIDIA Tesla M60) provides information about the maximum frame buffer size of the vGPU types (for example, the M60-8Q type) that can be configured on VMs. However, the actual frame buffer usage may be lower than the maximum sizes.
• The VM stun time depends on the number of vGPUs configured on the VM being migrated. Stun time may be longer when multiple vGPUs are operating on the VM.
The stun time also depends on network factors, such as the bandwidth available for use during the migration.

Live Migration Workflows
You can live migrate a vGPU-enabled VM to the following destinations:

• To another host within the same cluster. Both the Prism web console (Prism Element) and Prism Central allow you to live migrate a vGPU-enabled VM to another host within the same cluster.
• To a host outside the cluster, that is, a host in another cluster. Only Prism Central allows you to migrate a vGPU-enabled VM to a host outside the cluster.
For information about the steps to live migrate a vGPU-enabled VM, see the following:

• Migrating Live a vGPU-enabled VM Within the Cluster in the Prism Web Console Guide.
• Migrating Within the Cluster in the Prism Central Infrastructure Guide.
• Migrating Outside the Cluster in the Prism Central Infrastructure Guide.

Limitations of Live Migration Support

• Live migration is supported for VMs configured with single or multiple virtual GPUs. It is not supported
for VMs configured with passthrough GPUs.
• The target cluster for the migration must have adequate and available GPU resources, with the same
vGPU types as configured for the VMs to be migrated, to support the vGPUs on the VMs that need to
be migrated.
See Restrictions for Multiple vGPU Support on page 133 for more details.
• The VMs with vGPUs that need to be migrated live cannot be protected with high availability.
• Ensure that the VM is not powered off.
• Ensure that you have a GPU software license that supports live migration of vGPUs. The source and target clusters must have the same license type. See Live Migration of VMs with Virtual GPUs on page 140 for the minimum NVIDIA GRID software version and license requirements.

Enabling or Disabling Console Support for vGPU VMs

About this task


You can enable or disable console support only for a VM with one vGPU configured; enabling console support for a VM with multiple vGPUs is not supported. By default, console support for a vGPU VM is disabled.
To enable or disable console support for a VM with a vGPU, do the following:

Procedure

1. Run the following aCLI command to check if console support is enabled or disabled for the VM with
vGPUs.
acli> vm.get vm-name
Where vm-name is the name of the VM for which you want to check the console support status.
The step result includes the following parameter for the specified VM:
gpu_console=False

Where False indicates that console support is not enabled for the VM. This parameter is displayed as
True when you enable console support for the VM. The default value for gpu_console= is False since
console support is disabled by default.

Note: The output of the vm.get command might not include the gpu_console parameter if console support was never previously enabled for the VM.

2. Run the following aCLI command to enable or disable console support for the VM with vGPU:
vm.update vm-name gpu_console=true | false
Where:

• true—indicates that you are enabling console support for the VM with vGPU.
• false—indicates that you are disabling console support for the VM with vGPU.

3. Run the vm.get command again to verify that the gpu_console value is true (console support enabled) or false (console support disabled), as you configured it.
If the value in the vm.get output is not what you expect, perform a guest shutdown of the VM with vGPU, run the vm.on vm-name aCLI command to turn the VM on again, and then run the vm.get command and check the gpu_console= value.

4. Click a VM name in the VM table view to open the VM details page. Click Launch Console.
The Console opens but only a black screen is displayed.

5. Click the console screen, and then press one of the following key combinations, based on the operating system from which you are accessing the cluster.

» For Apple Mac OS: Control+Command+2

» For MS Windows: Ctrl+Alt+2


The console is fully enabled and displays the content.
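
For example, the following aCLI sequence (a minimal sketch; vm-name is a placeholder for your VM name) checks the current setting, enables console support, and verifies the change from a CVM shell:
nutanix@cvm$ acli vm.get vm-name
nutanix@cvm$ acli vm.update vm-name gpu_console=true
nutanix@cvm$ acli vm.get vm-name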

PXE Configuration for AHV VMs


You can configure a VM to boot over the network in a Preboot eXecution Environment (PXE). Booting over
the network is called PXE booting and does not require the use of installation media. When starting up, a
PXE-enabled VM communicates with a DHCP server to obtain information about the boot file it requires.
Configuring PXE boot for an AHV VM involves performing the following steps:

• Configuring the VM to boot over the network.


• Configuring the PXE environment.
The procedure for configuring a VM to boot over the network is the same for managed and unmanaged
networks. The procedure for configuring the PXE environment differs for the two network types, as follows:

• An unmanaged network does not perform IPAM functions and gives VMs direct access to an external
Ethernet network. Therefore, the procedure for configuring the PXE environment for AHV VMs is the
same as for a physical machine or a VM that is running on any other hypervisor. VMs obtain boot file
information from the DHCP or PXE server on the external network.
• A managed network intercepts DHCP requests from AHV VMs and performs IP address management
(IPAM) functions for the VMs. Therefore, you must add a TFTP server and the required boot file
information to the configuration of the managed network. VMs obtain boot file information from this
configuration.
A VM that is configured to use PXE boot boots over the network on subsequent restarts until the boot order
of the VM is changed.

Configuring the PXE Environment for AHV VMs


The procedure for configuring the PXE environment for a VM on an unmanaged network is similar to
the procedure for configuring a PXE environment for a physical machine on the external network and is
beyond the scope of this document. This procedure configures a PXE environment for a VM in a managed
network on an AHV host.

About this task


To configure a PXE environment for a VM on a managed network on an AHV host, do the following:

Procedure

1. Log on to the Prism web console, click the gear icon, and then click Network Configuration in the
menu.

2. On Network Configuration > Subnets tab, click the Edit action link of the network for which you want
to configure a PXE environment.
The VMs that require the PXE boot information must be on this network.

3. In the Update Subnet dialog box:

a. Select the Enable IP address management check box and complete the following configurations:

• In the Network IP Prefix field, enter the network IP address, with prefix, of the subnet that you
are updating.
• In the Gateway IP Address field, enter the gateway IP address of the subnet that you are
updating.
• To provide DHCP settings for the VM, select the DHCP Settings check box and provide the
following information.

Fields                  Description and Values

Domain Name Servers     Provide a comma-separated list of DNS IP addresses.
                        Example: 8.8.8.8, 9.9.9.9

Domain Search           Enter the VLAN domain name. Use only the domain name format.
                        Example: nutanix.com

TFTP Server Name        Enter a valid host name of the TFTP server that hosts the boot file. The IP address of the TFTP server must be accessible to the virtual machines so that they can download the boot file.
                        Example: tftp_vlan103

Boot File Name          The name of the boot file that the VMs need to download from the TFTP server.
                        Example: boot_ahv202010

4. Under IP Address Pools, click Create Pool to add IP address pools for the subnet.
For Overlay type subnets, the Network IP Prefix and Gateway IP fields are mandatory. For VLAN type subnets, these fields are optional; they are displayed when IP address management is enabled for the subnet, and you can use them to configure the IP address details.

5. (Optional, and for VLAN networks only) Select the Override DHCP Server check box and enter an IP address in the DHCP Server IP Address field.
You can configure a DHCP server by using the Override DHCP Server option only for VLAN networks.
The DHCP Server IP address (a reserved IP address for the Acropolis DHCP server) is visible only to VMs on this network and responds only to DHCP requests. If this check box is not selected, the DHCP Server IP Address field is not displayed and the DHCP server IP address is generated automatically. The automatically generated address is network_IP_address_subnet.254, or, if the default gateway is using that address, network_IP_address_subnet.253.
Usually the default DHCP server IP is the last usable IP in the subnet (for example, 10.0.0.254 for the 10.0.0.0/24 subnet). If you want to use a different IP address in the subnet as the DHCP server IP, use the override option.

6. Click Close.

Configuring a VM to Boot over a Network


To enable a VM to boot over the network, update the VM's boot device setting. Currently, the only user
interface that enables you to perform this task is the Acropolis CLI (aCLI).

About this task


To configure a VM to boot from the network, do the following:

Procedure

1. Log on to any CVM in the cluster with SSH.

2. Create a VM.

nutanix@cvm$ acli vm.create vm num_vcpus=num_vcpus memory=memory


Replace vm with a name for the VM, and replace num_vcpus and memory with the number of vCPUs and
amount of memory that you want to assign to the VM, respectively.
For example, create a VM named nw-boot-vm.
nutanix@cvm$ acli vm.create nw-boot-vm num_vcpus=1 memory=512

3. Create a virtual interface for the VM and place it on a network.
nutanix@cvm$ acli vm.nic_create vm network=network
Replace vm with the name of the VM and replace network with the name of the network. If the network
is an unmanaged network, make sure that a DHCP server and the boot file that the VM requires are
available on the network. If the network is a managed network, configure the DHCP server to provide
TFTP server and boot file information to the VM. See Configuring the PXE Environment for AHV VMs
on page 143.
For example, create a virtual interface for VM nw-boot-vm and place it on a network named network1.
nutanix@cvm$ acli vm.nic_create nw-boot-vm network=network1

4. Obtain the MAC address of the virtual interface.


nutanix@cvm$ acli vm.nic_list vm
Replace vm with the name of the VM.
For example, obtain the MAC address of VM nw-boot-vm.
nutanix@cvm$ acli vm.nic_list nw-boot-vm
00-00-5E-00-53-FF

5. Update the boot device setting so that the VM boots over the network.
nutanix@cvm$ acli vm.update_boot_device vm mac_addr=mac_addr
Replace vm with the name of the VM and mac_addr with the MAC address of the virtual interface that
the VM must use to boot over the network.
For example, update the boot device setting of the VM named nw-boot-vm so that the VM uses the
virtual interface with MAC address 00-00-5E-00-53-FF.
nutanix@cvm$ acli vm.update_boot_device nw-boot-vm mac_addr=00-00-5E-00-53-FF

6. Power on the VM.


nutanix@cvm$ acli vm.on vm_list [host="host"]
Replace vm_list with the name of the VM. Replace host with the name of the host on which you want
to start the VM.
For example, start the VM named nw-boot-vm on a host named host-1.
nutanix@cvm$ acli vm.on nw-boot-vm host="host-1"

Uploading Files to DSF for Microsoft Windows Users


If you are a Microsoft Windows user, you can securely upload files to DSF by using the following
procedure.

Procedure

1. Use WinSCP, with SFTP selected, to connect to a Controller VM through port 2222 and start browsing the DSF datastore.

Note: The root directory displays storage containers and you cannot change it. You can only upload
files to one of the storage containers and not directly to the root directory. To create or delete storage
containers, you can use the Prism user interface.

2. Authenticate by using Prism username and password or, for advanced users, use the public key that is
managed through the Prism cluster lockdown user interface.
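
If you prefer a command-line client to WinSCP, a standard SFTP client can be used in the same way. The following is a sketch; replace admin, the IP address, the storage container name, and the file name with values from your environment:
C:\> sftp -P 2222 admin@10.10.10.11
sftp> ls
sftp> cd my-container
sftp> put C:\images\disk0.qcow2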

Enabling Load Balancing of vDisks in a Volume Group
AHV hosts support load balancing of vDisks in a volume group for guest VMs. Load balancing of vDisks
in a volume group enables IO-intensive VMs to use the storage capabilities of multiple Controller VMs
(CVMs).

About this task


If you enable load balancing on a volume group, the guest VM communicates directly with each CVM
hosting a vDisk. Each vDisk is served by a single CVM. Therefore, to use the storage capabilities of
multiple CVMs, create more than one vDisk for a file system and use the OS-level striped volumes to
spread the workload. This configuration improves performance and prevents storage bottlenecks.

Note:

• vDisk load balancing is disabled by default for volume groups that are directly attached to VMs.
However, vDisk load balancing is enabled by default for volume groups that are attached to
VMs by using a data services IP address.
• If you use the web console to clone a volume group that has load balancing enabled, the volume group clone does not have load balancing enabled by default. To enable load balancing on the volume group clone, set the load_balance_vm_attachments parameter to true by using aCLI or the REST API.
• You can attach a maximum of 10 load-balanced volume groups per guest VM.
• For Linux VMs, ensure that the SCSI device timeout is 60 seconds (a command sketch follows this note). For information about how to check and modify the SCSI device timeout, see the Red Hat documentation at https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/task_controlling-scsi-command-timer-onlining-devices.
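
For example, on a Linux guest you can check and change the timeout through sysfs. This is a sketch; sda is a placeholder for the actual SCSI device, and the change does not persist across reboots unless you also add a udev rule:
# check the current SCSI command timeout (in seconds) for /dev/sda
cat /sys/block/sda/device/timeout
# set the timeout to 60 seconds
echo 60 > /sys/block/sda/device/timeout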

Perform the following procedure to enable load balancing of vDisks by using aCLI.

Procedure

1. Log on to a Controller VM with SSH.

2. Do one of the following:

» Enable vDisk load balancing if you are creating a volume group.


nutanix@cvm$ acli vg.create vg_name load_balance_vm_attachments=true
Replace vg_name with the name of the volume group.
» Enable vDisk load balancing if you are updating an existing volume group.
nutanix@cvm$ acli vg.update vg_name load_balance_vm_attachments=true
Replace vg_name with the name of the volume group.

Note: To modify an existing volume group, you must first detach all the VMs that are attached to that
volume group before you enable vDisk load balancing.

3. Verify if vDisk load balancing is enabled.


nutanix@cvm$ acli vg.get vg_name
An output similar to the following is displayed:
nutanix@cvm$ acli vg.get ERA_DB_VG_xxxxxxxx
ERA_DB_VG_xxxxxxxx {

attachment_list {
vm_uuid: "xxxxx"
.
.
.
.
iscsi_target_name: "xxxxx"
load_balance_vm_attachments: True
logical_timestamp: 4
name: "ERA_DB_VG_xxxxxxxx"
shared: True
uuid: "xxxxxx"
}
If vDisk load balancing is enabled on a volume group, load_balance_vm_attachments: True is displayed
in the output. The output does not display the load_balance_vm_attachments: parameter at all if vDisk
load balancing is disabled.
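
To check only this flag, you can filter the output, for example (vg_name is a placeholder for the volume group name):
nutanix@cvm$ acli vg.get vg_name | grep load_balance_vm_attachments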

4. (Optional) Disable vDisk load balancing.


nutanix@cvm$ acli vg.update vg_name load_balance_vm_attachments=false
Replace vg_name with the name of the volume group.

Viewing list of restarted VMs after an HA event


This section describes how to view the list of VMs that are restarted after an HA event in the AHV cluster.

About this task


If an AHV host becomes inaccessible or fails due to an unplanned event, AOS restarts the VMs across the remaining hosts in the cluster.
To view the list of restarted VMs after an HA event:

Procedure

1. Log in to Prism Central or Prism web console.

2. View the list of restarted VMs on either of the following pages:

• Events page:
1. Navigate to Activity > Events from the entities menu to access the Events page in Prism
Central.
Navigate to Alerts > Events from the main menu to access the Events page in the Prism web
console.
2. Locate or search for the following string, and hover over or click the string:
VMs restarted due to HA failover
The system displays the list of restarted VMs in the Summary page and as a hover text for the
selected event.
For example:
VMs restarted due to HA failover: <VM_Name1>, <VM_Name2>, <VM_Name3>,
<VM_Name4>. VMs were running on host X.X.X.1 prior to HA.

Here, <VM_Name1>, <VM_Name2>, <VM_Name3>, and <VM_Name4> represent the actual VM names in your cluster.
• Tasks page:
1. Navigate to Activity > Tasks from the entities menu to access the Tasks page in Prism Central.
Navigate to Tasks from the main menu to access the Tasks page in the Prism web console.
2. Locate or search for the following task, and click Details:
HA failover
The system displays a list of related tasks for the HA failover event.
3. Locate or search for the following related task, and click Details:
Host restart all VMs
The system displays Restart VM group task for the HA failover event.
4. In the Entity Affected column, click Details, or hover over the VMs text for the Restart VM group task.
The system displays the list of restarted VMs:

Figure 57: List of restarted VMs

LIVE VDISK MIGRATION ACROSS STORAGE CONTAINERS
vDisk migration allows you to change the container of a vDisk. You can migrate vDisks across storage
containers while they are attached to guest VMs without the need to shut down or delete VMs (live
migration). You can either migrate all vDisks attached to a VM or migrate specific vDisks to another
container.
In a Nutanix solution, you group vDisks into storage containers and attach vDisks to guest VMs. AOS
applies storage policies such as replication factor, encryption, compression, deduplication, and erasure
coding at the storage container level. If you apply a storage policy to a storage container, AOS enables that
policy on all the vDisks of the container. If you want to change the policies of the vDisks (for example, from RF2 to RF3), create another container with a different policy and move the vDisks to that container. With live migration of vDisks across containers, you can migrate vDisks even if they are attached to a running VM. Thus, live migration of vDisks across storage containers enables you to efficiently manage storage policies for guest VMs.

General Considerations
You cannot migrate images or volume groups.
You cannot perform the following operations during an ongoing vDisk migration:

• Clone the VM
• Resize the VM
• Take a snapshot

Note: During vDisk migration, the logical usage of a vDisk is more than the total capacity of the vDisk. This occurs because the logical usage of the vDisk includes the space occupied in both the source and destination containers. After the migration is complete, the logical usage of the vDisk returns to its normal value.

Migration of vDisks stalls if sufficient storage space is not available in the target storage container. Ensure that the target container has sufficient storage space before you begin migration.

Disaster Recovery Considerations


Consider the following points if you have a disaster recovery and backup setup:

• You cannot migrate vDisks of a VM that is protected by a protection domain or protection policy. When
you start the migration, ensure that the VM is not protected by a protection domain or protection policy.
If you want to migrate vDisks of such a VM, do the following:

• Remove the VM from the protection domain or protection policy.


• Migrate the vDisks to the target container.
• Add the VM back to the protection domain or protection policy.
• Configure the remote site with the details of the new container.
vDisk migration fails if the VM is protected by a protection domain or protection policy.
• If you are using a third-party backup solution, AOS temporarily blocks snapshot operations for a VM if
vDisk migration is in progress for that VM.

Migrating a vDisk to Another Container
You can either migrate all vDisks attached to a VM or migrate specific vDisks to another container.

About this task


Perform the following procedure to migrate vDisks across storage containers:

Procedure

1. Log on to a CVM in the cluster with SSH.

2. Do one of the following:

» Migrate all vDisks of a VM to the target storage container.


nutanix@cvm$ acli vm.update_container vm-name container=target-container
wait=false
Replace vm-name with the name of the VM whose vDisks you want to migrate and target-
container with the name of the target container.

» Migrate specific vDisks by using either the UUID of the vDisk or address of the vDisk.
Migrate specific vDisks by using the UUID of the vDisk.
nutanix@cvm$ acli vm.update_container vm-name device_uuid_list=device_uuid
container=target-container wait=false
Replace vm-name with the name of VM, device_uuid with the device UUID of the vDisk, and
target-container with the name of the target storage container.

Run nutanix@cvm$ acli vm.get <vm-name> to determine the device UUID of the vDisk.
You can migrate multiple vDisks at a time by specifying a comma-separated list of device UUIDs of
the vDisks.
Alternatively, you can migrate vDisks by using the address of the vDisk.
nutanix@cvm$ acli vm.update_container vm-name disk_addr_list=disk-address
container=target-container wait=false
Replace vm-name with the name of VM, disk-address with the address of the disk, and target-
container with the name of the target storage container.

Run nutanix@cvm$ acli vm.get <vm-name> to determine the address of the vDisk.
Following is the format of the vDisk address:
bus.index

Following is a section of the output of the acli vm.get vm-name command:


disk_list {
addr {
bus: "scsi"
index: 0
}
Combine the values of bus and index as shown in the following example:
nutanix@cvm$ acli vm.update_container TestUVM_1 disk_addr_list=scsi.0
container=test-container-17475
You can migrate multiple vDisks at a time by specifying a comma-separated list of vDisk addresses, as shown in the example after this step.
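
For example, the following sketch migrates two vDisks (scsi.0 and a hypothetical scsi.1) of the VM from the earlier example in a single command; the VM name, disk addresses, and container name must match your environment:
nutanix@cvm$ acli vm.update_container TestUVM_1 disk_addr_list=scsi.0,scsi.1 container=test-container-17475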

3. Check the status of the migration in the Tasks menu of the Prism Element web console.

4. (Optional) Cancel the migration if you no longer want to proceed with it.
nutanix@cvm$ ecli task.cancel task_list=task-ID
Replace task-ID with the ID of the migration task.
Determine the task ID as follows:
nutanix@cvm$ ecli task.list
In the Type column of the tasks list, look for VmChangeDiskContainer.
VmChangeDiskContainer indicates that it is a vDisk migration task. Note the ID of such a task.

Note: Note the following points about canceling migration:

• If you cancel an ongoing migration, AOS retains the vDisks that have not yet been migrated
in the source container. AOS does not migrate vDisks that have already been migrated to
the target container back to the source container.
• If sufficient storage space is not available in the original storage container, migration of
vDisks back to the original container stalls. To resolve the issue, ensure that the source
container has sufficient storage space.

OVAS
An Open Virtual Appliance (OVA) file is a tar archive file created by converting a virtual machine (VM) into
an Open Virtualization Format (OVF) package for easy distribution and deployment. OVA helps you to
quickly create, move or deploy VMs on different hypervisors.
Prism Central helps you perform the following operations with OVAs:

• Export an AHV VM as an OVA file.


• Upload OVAs of VMs or virtual appliances (vApps). You can import (upload) an OVA file with the
QCOW2 or VMDK disk formats from a URL or the local machine.
• Deploy an OVA file as a VM.
• Download an OVA file to your local machine.
• Rename an OVA file.
• Delete an OVA file.
• Track or monitor the tasks associated with OVA operations in Tasks.
The access to OVA operations is based on your role. See Role Details View in the Prism Central Guide to
check if your role allows you to perform the OVA operations.
For information about:

• Restrictions applicable to OVA operations, see OVA Restrictions on page 152.


• The OVAs dashboard, see OVAs View in the Prism Central Guide.
• Exporting a VM as an OVA, see Exporting a VM as an OVA in the Prism Central Guide.
• Other OVA operations, see OVA Management in the Prism Central Guide.

OVA Restrictions
You can perform the OVA operations subject to the following restrictions:

• Export to or upload OVAs with one of the following disk formats:

• QCOW2: Default disk format auto-selected in the Export as OVA dialog box.
• VMDK: Deselect QCOW2 and select VMDK, if required, before you submit the VM export request
when you export a VM.
• When you export a VM or upload an OVA and the VM or OVA does not have any disks, the disk
format is irrelevant.
• You can upload an OVA to multiple clusters by using a URL as the source for the OVA. When you use a local OVA file as the source, you can upload the OVA to only a single cluster.
• Perform the OVA operations only with appropriate permissions. You can run the OVA operations that
you have permissions for, based on your assigned user role.
• The OVA that results from exporting a VM on AHV is compatible with any AHV version 5.18 or later.
• The minimum supported versions for performing OVA operations are AOS 5.18, Prism Central 2020.8,
and AHV-20190916.253.

• OVAs are not supported for credential guard.

COPYRIGHT
Copyright 2023 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or
other jurisdictions. All other brand and product names mentioned herein are for identification purposes only
and may be trademarks of their respective holders.
