
vSphere Administration

Guide for Acropolis


AOS 6.6
September 5, 2023
Contents

Overview.............................................................................................................4
Hardware Configuration...............................................................................................................................4
Nutanix Software Configuration.................................................................................................................. 4

vSphere Networking..........................................................................................7
VMware NSX Support................................................................................................................................. 8
NSX-T Support on Nutanix Platform................................................................................................ 8
Creating Segment for NVDS............................................................................................................ 9
Creating NVDS Switch on the Host by Using NSX-T Manager..................................................... 10
Registering NSX-T Manager with Nutanix..................................................................................... 14
Networking Components........................................................................................................................... 17
Configuring Host Networking (Management Network)..............................................................................19
Changing a Host IP Address.................................................................................................................... 21
Reconnecting a Host to vCenter...............................................................................................................22
Selecting a Management Interface........................................................................................................... 22
Selecting a New Management Interface........................................................................................ 23
Updating Network Settings........................................................................................................................24
Network Teaming Policy........................................................................................................................... 24
Migrate from a Standard Switch to a Distributed Switch.......................................................................... 25
Standard Switch Configuration....................................................................................................... 25
Planning the Migration....................................................................................................................25
Unassigning Physical Uplink of the Host for Distributed Switch.................................................... 26
Migrating to a New Distributed Switch without LACP/LAG............................................................ 27
Migrating to a New Distributed Switch with LACP/LAG................................................................. 34

vCenter Configuration.....................................................................................39
Registering a Cluster to vCenter Server...................................................................................................39
Unregistering a Cluster from the vCenter Server..................................................................................... 41
Creating a Nutanix Cluster in vCenter......................................................................................................41
Adding a Nutanix Node to vCenter...........................................................................................................42
Nutanix Cluster Settings............................................................................................................................43
vSphere General Settings.............................................................................................................. 43
vSphere HA Settings...................................................................................................................... 44
vSphere DRS Settings................................................................................................................... 50
vSphere EVC Settings....................................................................................................................52
VM Override Settings..................................................................................................................... 54
Migrating a Nutanix Cluster from One vCenter Server to Another........................................................... 55
Storage I/O Control (SIOC).......................................................................................................................56
Disabling Storage I/O Control (SIOC) on a Container................................................................... 56

Node Management.......................................................................................... 58
Node Maintenance (ESXi).........................................................................................................................58
Putting a Node into Maintenance Mode (vSphere)........................................................................59
Viewing a Node that is in Maintenance Mode............................................................................... 62
Exiting a Node from the Maintenance Mode (vSphere).................................................................64
Guest VM Status when Node is in Maintenance Mode................................................................. 67

Nonconfigurable ESXi Components..........................................................................................................68
Nutanix Software............................................................................................................................ 68
ESXi................................................................................................................................................ 69
Putting the CVM and ESXi Host in Maintenance Mode Using vCenter....................................................70
Shutting Down an ESXi Node in a Nutanix Cluster..................................................................................70
Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line)........................................ 71
Starting an ESXi Node in a Nutanix Cluster.............................................................................................72
Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line)................................................... 74
Restarting an ESXi Node using CLI......................................................................................................... 75
Rebooting an ESXi Node in a Nutanix Cluster........................................................................................ 76
Changing an ESXi Node Name................................................................................................................ 77
Changing an ESXi Node Password..........................................................................................................77
Changing the CVM Memory Configuration (ESXi)................................................................................... 77

VM Management.............................................................................................. 78
VM Management Using Prism Central..................................................................................................... 78
Creating a VM through Prism Central (ESXi)................................................................................ 78
Managing a VM through Prism Central (ESXi).............................................................................. 83
VM Management using Prism Element.................................................................................................... 83
Creating a VM (ESXi).....................................................................................................................83
Managing a VM (ESXi).................................................................................................................. 86
vDisk Provisioning Types in VMware with Nutanix Storage..................................................................... 94
VM Migration............................................................................................................................................. 95
Migrating a VM to Another Nutanix Cluster................................................................................... 95
Cloning a VM............................................................................................................................................ 96
vStorage APIs for Array Integration..........................................................................................................96

vSphere ESXi Hardening Settings.................................................................98

ESXi Host Upgrade......................................................................................... 99


ESXi Upgrade............................................................................................................................................99
VMware ESXi Hypervisor Upgrade Recommendations and Limitations........................................ 99
Upgrading ESXi Hosts by Uploading Binary and Metadata Files................................................ 100
ESXi Host Manual Upgrade.................................................................................................................... 104
ESXi Host Upgrade Process........................................................................................................ 104

vSphere Cluster Settings Checklist............................................................ 106

Copyright........................................................................................................107

OVERVIEW
Nutanix Enterprise Cloud delivers a resilient, web-scale hyperconverged infrastructure (HCI) solution built for
supporting your virtual and hybrid cloud environments. The Nutanix architecture runs a storage controller called
the Nutanix Controller VM (CVM) on every Nutanix node in a cluster to form a highly distributed, shared-nothing
infrastructure.
All CVMs work together to aggregate storage resources into a single global pool that guest VMs running on the
Nutanix nodes can consume. The Nutanix Distributed Storage Fabric manages storage resources to preserve data and
system integrity in the event of a node, disk, application, or hypervisor software failure in a cluster. Nutanix storage also
provides data protection and high availability features that keep critical data and guest VMs protected.
This guide describes the procedures and settings required to deploy a Nutanix cluster running in the VMware vSphere
environment. For more information about the VMware terms used in this document, see the VMware Documentation.

Hardware Configuration
See the Field Installation Guide for information about how to deploy and create a Nutanix cluster running ESXi for
your hardware. After you create the Nutanix cluster by using Foundation, use this guide to perform the management
tasks.

Limitations
For information about ESXi configuration limitations, see the Nutanix Configuration Maximums webpage.

Nutanix Software Configuration


The Nutanix Distributed Storage Fabric aggregates local SSD and HDD storage resources into a single
global unit called a storage pool. In this storage pool, you can create several storage containers, which the
system presents to the hypervisor and uses to host VMs. You can apply a different set of compression,
deduplication, and replication factor policies to each storage container.

Storage Pools
A storage pool on Nutanix is a group of physical disks from one or more tiers. Nutanix recommends configuring only
one storage pool for each Nutanix cluster.
Replication factor
Nutanix supports a replication factor of 2 or 3. Setting the replication factor to 3 instead of 2 adds
an extra data protection layer at the cost of more storage space for the copy. For use cases where
applications provide their own data protection or high availability, you can set a replication factor of
1 on a storage container.
Containers
The Nutanix storage fabric presents usable storage to the vSphere environment as an NFS
datastore. The replication factor of a storage container determines its usable capacity. For example,
replication factor 2 tolerates one component failure and replication factor 3 tolerates two component
failures. When you create a Nutanix cluster, three storage containers are created by default.
Nutanix recommends that you do not delete these storage containers. You can rename the storage
container named default - xxx and use it as the main storage container for hosting VM data.

Note: The available capacity and the vSphere maximum of 2,048 VMs limit the number of VMs a datastore
can host.
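To see the storage pool and storage containers on a running cluster, you can list them from any CVM with nCLI. A minimal sketch, assuming the common nCLI aliases sp (storagepool) and ctr (container) are available on your AOS release; the container listing includes the replication factor and advertised capacity of each container.
nutanix@cvm$ ncli sp ls     # list storage pools
nutanix@cvm$ ncli ctr ls    # list storage containers with their replication factor and advertised capacity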

Capacity Optimization

• Nutanix recommends enabling inline compression unless otherwise advised.

• Nutanix recommends disabling deduplication for all workloads except VDI.
For mixed-workload Nutanix clusters, create a separate storage container for VDI workloads and enable
deduplication on that storage container.

Nutanix CVM Settings


CPU
Keep the default settings as configured by the Foundation during the hardware configuration.
Change the CPU settings only if Nutanix Support recommends it.
Memory
Most workloads use less than 32 GB of RAM per CVM. However, for mission-critical
workloads with large working sets, Nutanix recommends more than 32 GB of CVM RAM.

Tip: You can increase CVM RAM up to 64 GB using the Prism one-click memory upgrade
procedure. For more information, see Increasing the Controller VM Memory Size in the Prism Web
Console Guide.

Networking
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1500 bytes for
all the network interfaces by default. The standard 1500-byte MTU delivers excellent
performance and stability. Nutanix does not support configuring the MTU on a network interface of
CVMs to higher values.

Caution: Do not use jumbo frames for the Nutanix CVM.

Caution: Do not change the vSwitchNutanix or the internal vmk (VMkernel) interface.
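To confirm the MTU that is currently in effect, you can check the standard switches on the ESXi host and the CVM interface directly; both values should remain at 1500. A minimal sketch using standard commands (output formats vary by version).
root@esxi# esxcli network vswitch standard list    # shows the MTU configured on vSwitch0 and vSwitchNutanix
nutanix@cvm$ ip link show eth0                     # the MTU reported for the CVM interface should be 1500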

Nutanix Cluster Settings


Nutanix recommends that you do the following.

• Map a Nutanix cluster to only one vCenter Server.


Due to the way the Nutanix architecture distributes data, there is limited support for mapping a Nutanix cluster to
multiple vCenter Servers. Some Nutanix products (Move, Nutanix Database Service (NDB), Calm, Files, Prism
Central) and features (such as the disaster recovery solution) are unstable when a Nutanix cluster maps to multiple
vCenter Servers.
• Configure a Nutanix cluster with replication factor 2 or replication factor 3.

Tip: Nutanix recommends using replication factor 3 for clusters with more than 16 nodes. Replication factor 3
requires at least five nodes so that the data remains online even if two nodes fail concurrently.

• Use the advertised capacity feature to ensure that the resiliency capacity is equivalent to one node of usable
storage for replication factor 2 or two nodes for replication factor 3.
The advertised capacity of a storage container must equal the total usable cluster space minus the capacity of
either one or two nodes. For example, in a 4-node cluster with 20 TB of usable space per node and replication
factor 2, the advertised capacity of the storage container must be 60 TB. That spares 20 TB of capacity to sustain and
rebuild one node for self-healing. Similarly, in a 5-node cluster with 20 TB of usable space per node and replication
factor 3, the advertised capacity of the storage container must be 60 TB. That spares 40 TB of capacity to sustain and
rebuild two nodes for self-healing. (A general formula is sketched after this list.)
• Use the default storage container and mount it on all the ESXi hosts in the Nutanix cluster.
You can also create a single new storage container instead. If you create multiple storage containers, ensure that all the
storage containers follow the advertised capacity recommendation.

• Configure the vSphere cluster according to settings listed in vSphere Cluster Settings Checklist on
page 106.
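The advertised capacity recommendation above reduces to a simple formula: advertised capacity = (number of nodes - reserved nodes) x usable capacity per node, where one node is reserved for replication factor 2 and two nodes for replication factor 3. A minimal shell sketch reproducing the worked examples above (the node counts and 20 TB figure are illustrative only):
nutanix@cvm$ echo $(( (4 - 1) * 20 ))   # replication factor 2: 4 nodes x 20 TB usable, reserve 1 node
60
nutanix@cvm$ echo $(( (5 - 2) * 20 ))   # replication factor 3: 5 nodes x 20 TB usable, reserve 2 nodes
60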

Software Acceptance Level


The Foundation sets the software acceptance level of an ESXi image to CommunitySupported by default. If you
need to raise the software acceptance level, run the following command to set it to the maximum supported level of
PartnerSupported.
root@esxi# esxcli software acceptance set --level=PartnerSupported
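You can confirm the acceptance level before and after the change; the command prints the level that is currently active on the host.
root@esxi# esxcli software acceptance get
PartnerSupported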

Scratch Partition Settings


ESXi uses the scratch partition (/scratch) to dump the logs when it encounters a purple screen of death (PSOD) or a
kernel dump. The Foundation install automatically creates this partition on the SATA DOM or M.2 device with the
ESXi installation. Moving the scratch partition to any location other than the SATA DOM or M.2 device can cause
issues with LCM, 1-click hypervisor updates, and the general stability of the Nutanix node.
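To confirm where the scratch partition currently points, you can query the ESXi advanced setting for the scratch location. A minimal sketch, assuming the standard /ScratchConfig advanced option path; on Nutanix nodes the value should resolve to a path on the SATA DOM or M.2 boot device.
root@esxi# esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation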

VSPHERE NETWORKING
vSphere on the Nutanix platform enables you to dynamically configure, balance, or share logical networking
components across various traffic types. To ensure availability, scalability, performance, management, and security of
your infrastructure, configure virtual networking when designing a network solution for Nutanix clusters.
You can configure networks according to your requirements. For detailed information about vSphere virtual
networking and different networking strategies, refer to the Nutanix vSphere Storage Solution Document and the
VMware Documentation. This chapter describes the configuration elements required to run VMware vSphere on the
Nutanix Enterprise infrastructure.

Virtual Networking Configuration Options


vSphere on Nutanix supports the following types of virtual switches.
vSphere Standard Switch (vSwitch)
vSphere Standard Switch (vSS) with Nutanix vSwitch is the default configuration for Nutanix
deployments and suits most use cases. A vSwitch detects which VMs are connected to each virtual
port and uses that information to forward traffic to the correct VMs. You can connect a vSwitch to
physical switches by using physical Ethernet adapters (also referred to as uplink adapters) to join
virtual networks with physical networks. This type of connection is similar to connecting physical
switches together to create a larger network.

Tip: A vSwitch works like a physical Ethernet switch.

vSphere Distributed Switch (vDS)


Nutanix recommends vSphere Distributed Switch (vDS) coupled with network I/O control (NIOC version
2) and load-based teaming. This combination provides simplicity, ensures traffic prioritization if there is
contention, and reduces operational management overhead. A vDS acts as a single virtual switch across
all associated hosts on a datacenter. It enables VMs to maintain consistent network configuration as they
migrate across multiple hosts. For more information about vDS, see NSX-T Support on Nutanix Platform
on page 8.
Nutanix recommends setting all vNICs as active on the port group and dvPortGroup unless otherwise specified. The
following table lists the naming convention, port groups, and the corresponding VLAN Nutanix uses for various
traffic types.

Table 1: Port Groups and Corresponding VLAN

Port group          VLAN    Description
MGMT_10             10      VMkernel interface for host management traffic
VMOT_20             20      VMkernel interface for vMotion traffic
FT_30               30      Fault tolerance traffic
VM_40               40      VM traffic
VM_50               50      VM traffic
NTNX_10             10      Nutanix CVM to CVM cluster communication traffic (public interface)
Svm-iscsi-pg        N/A     Nutanix CVM to internal host traffic
VMK-svm-iscsi-pg    N/A     VMkernel port for CVM to hypervisor communication (internal)

All Nutanix configurations use an internal-only vSwitch for the NFS communication between the ESXi host and
the Nutanix CVM. This vSwitch remains unmodified regardless of the virtual networking configuration for ESXi
management, VM traffic, vMotion, and so on.

Caution: Do not modify the internal-only vSwitch (vSwitchNutanix). vSwitchNutanix facilitates internal communication
between the CVM and the hypervisor.
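To see both switches on a host, list the standard virtual switches from the ESXi shell. The output shows vSwitch0 with its physical uplinks and vSwitchNutanix with no uplinks; exact fields vary by ESXi version.
root@esxi# esxcli network vswitch standard list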

VMware NSX Support


Running VMware NSX on Nutanix infrastructure ensures that VMs always have access to fast local storage and
compute, as well as consistent network addressing and security, without the burden of physical infrastructure constraints. The
supported scenario connects the Nutanix CVM to a traditional VLAN network, with guest VMs inside NSX virtual
networks. For more information, see the Nutanix vSphere Storage Solution Document.

NSX-T Support on Nutanix Platform


The Nutanix platform relies on communication with vCenter to work with networks backed by a vSphere standard switch
(vSS) or vSphere Distributed Switch (vDS). With the introduction of a new management plane that enables network
management independent of the compute manager (vCenter), network configuration information is instead available through
the NSX-T Manager. The Nutanix infrastructure workflows (AOS upgrades, LCM upgrades, and so on) must therefore be
able to collect the network configuration information from the NSX-T Manager.

Figure 1: Nutanix and the NSX-T Workflow Overview

The Nutanix platform supports the following in the NSX-T configuration.

• ESXi hypervisor only.


• vSS and vDS virtual switch configurations.



• Nutanix CVM connection to VLAN backed NSX-T segments only.
• The NSX-T Manager credentials registration using the CLI.
The Nutanix platform does not support the following in the NSX-T configuration.

• NSX VDS switch.

• Network segmentation with N-VDS.


• Nutanix CVM connection to overlay NSX-T segments.
• Link Aggregation/LACP for the uplinks backing the NVDS host switch connecting Nutanix CVMs.
• The NSX-T Manager credentials registration through Prism.

NSX-T Segments
Nutanix supports NSX-T logical segments coexisting on Nutanix clusters running the ESXi hypervisor. All
infrastructure workflows, including Foundation, 1-click upgrades, and AOS upgrades, are validated to
work in NSX-T configurations where the CVM is backed by an NSX-T VLAN logical segment.
NSX-T has the following types of segments.
VLAN backed
VLAN backed segments operate similarly to a standard port group in a vSphere switch. A port
group is created on the NVDS, and VMs that are connected to the port group have their network
packets tagged with the configured VLAN ID.
Overlay backed
Overlay backed segments use the Geneve overlay to create a logical L2 network over an L3 network.
Encapsulation occurs at the transport layer (which is the NVDS module on the host).

Multicast Filtering
Enabling multicast snooping on a vDS with a Nutanix CVM attached affects the ability of the CVM to discover and add
new nodes in the Nutanix cluster (the cluster expand option in Prism and the Nutanix CLI).

Creating Segment for NVDS


This procedure provides details about creating a segment for nVDS.

About this task


To check the vSwitch configuration of the host and verify that the NSX-T network supports the CVM port group, perform
the following steps.

Procedure

1. Log on to vCenter server and go to the NSX-T Manager.

2. Click Networking, and go to Connectivity > Segments in the left pane.



3. Click ADD SEGMENT under the SEGMENTS tab on the right pane and specify the following information.

Figure 2: Create New Segment

a. Segment Name: Enter a name for the segment.


b. Transport Zone: Select the VLAN-based transport zone.
You select this transport zone again when configuring the NSX switch on the host.
c. VLAN: Enter 0 for the native VLAN.

4. Click Save to create a segment for NVDS.

5. Click Yes when the system prompts to continue with configuring the segment.
The newly created segment appears below the prompt.

Figure 3: New Segment Created

Creating NVDS Switch on the Host by Using NSX-T Manager


This procedure provides instructions to create an NVDS switch on the ESXi host. The management interface of
the host and the external interface of the CVM are migrated to the NVDS switch.



About this task
To create an NVDS switch and configure the NSX-T Manager, do the following.

Procedure

1. Log on to NSX-T Manager.

2. Click System, and go to Configuration > Fabric > Nodes in the left pane.

Figure 4: Add New Node



3. Click ADD HOST NODE under the HOST TRANSPORT NODES in the right pane.

a. Specify the following information in the Host Details dialog box.

Figure 5: Add Host Details

• 1. Name: Enter an identifiable ESXi host name.


2. Host IP: Enter the IP address of the ESXi host.
3. Username: Enter the username used to log on to the ESXi host.
4. Password: Enter the password used to log on to the ESXi host.
5. Click Next to move to the NSX configuration.



b. Specify the following information in the Configure NSX dialog box.

Figure 6: Configure NSX

• 1. Mode: Select Standard option.


Nutanix recommends the Standard mode only.
2. Name: Displays the default name of the virtual switch that appears on the host. You can edit the
default name and provide an identifiable name as per your configuration requirements.
3. Transport Zone: Select the transport zone that you selected in Creating Segment for NVDS on
page 9.
These segments operate in a way similar to a standard port group in a vSphere switch. A port group
is created on the NVDS, and VMs that are connected to the port group have their network packets
tagged with the configured VLAN ID.
4. Uplink Profile: Select an uplink profile for the new nVDS switch.
This selected uplink profile represents the NICs connected to the host. For more information about
uplink profiles, see the VMware Documentation.
5. LLDP Profile: Select the LLDP profile for the new nVDS switch.
For more information about LLDP profiles, see the VMware Documentation.
6. Teaming Policy Uplink Mapping: Map the uplinks with the physical NICs of the ESXi host.

Note: To verify the active physical NICs on the host, select ESXi host > Configure > Networking
> Physical Adapters.

Click Edit icon and enter the name of the active physical NIC in the ESXi host selected for migration
to the NVDS.



Note: Always migrate one physical NIC at a time to avoid connectivity failure with the ESXi host.

7. PNIC only Migration: Turn on the switch to Yes if there are no VMkernel Adapters (vmks)
associated with the PNIC selected for migration from vSS switch to the nVDS switch.
8. Network Mapping for Install: Click Add Mapping to migrate the VMkernel adapters (vmks) to the NVDS
switch.
9. Network Mapping for Uninstall: Specify mappings to revert the migration of the VMkernel adapters, if required.

4. Click Finish to add the ESXi host to the NVDS switch.


The newly created nVDS switch appears on the ESXi host.

Figure 7: NVDS Switch Created

Registering NSX-T Manager with Nutanix


After migrating the external interface of the host and the CVM to the NVDS switch, it is mandatory to inform
Genesis about the registration of the cluster with the NSX-T Manager. This registration helps Genesis
communicate with the NSX-T Manager and avoid failures during LCM, 1-click, and AOS upgrades.

About this task


This procedure demonstrates an AOS upgrade error encountered when the NSX-T Manager is not registered with Nutanix,
and shows how to register the NSX-T Manager with Nutanix and resolve the issue.
To register the NSX-T Manager with Nutanix, do the following.

Procedure

1. Log on to the Prism Element web console.



2. Select VM > Settings > Upgrade Software > Upgrade > Pre-upgrade to upgrade AOS on the host.

Figure 8: Upgrade AOS



3. The upgrade process throws an error if the NSX-T Manager is not registered with Nutanix.

Figure 9: AOS Upgrade Error for Unregistered NSX-T

The AOS upgrade determines whether NSX-T networks back the CVM and its VLAN, and then attempts to get the
VLAN information of those networks. To get the VLAN information for the CVM, the NSX-T Manager information
must be configured in the Nutanix cluster.

4. To fix this upgrade issue, log on to a CVM in the cluster through SSH.

5. Access the cluster directory.


nutanix@cvm$ cd ~/cluster/bin



6. Verify whether the NSX-T Manager was registered with the CVM earlier.
nutanix@cvm:~/cluster/bin$ ./nsx_t_manager -l
If the NSX-T Manager was not registered earlier, you get the following message.
No NSX-T manager configured in the cluster

7. Register the NSX-T Manager with the CVM if it was not registered earlier. Specify the credentials of the NSX-T
Manager to the CVM.
nutanix@cvm:~/cluster/bin$ ./nsx_t_manager -a
IP address: 10.10.10.10
Username: admin
Password:
/usr/local/nutanix/cluster/lib/py/requests-2.12.0-py2.7.egg/requests/packages/
urllib3/connectionpool.py:843:
InsecureRequestWarning: Unverified HTTPS request is made. Adding certificate
verification is strongly advised.
See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
Successfully persisted NSX-T manager information

8. Verify the registration of NSX-T Manager with the CVM.


nutanix@cvm:~/cluster/bin$ ./nsx_t_manager -l
If there are no errors, the system displays a similar output.
IP address: 10.10.10.10
Username: admin

9. In the Prism Element web console, click Pre-upgrade to continue the AOS upgrade procedure.
The AOS upgrade completes successfully.

Networking Components
IP Addresses
All CVMs and ESXi hosts have two network interfaces.

Note: An empty interface eth2 is created on CVM during deployment by Foundation. The eth2 interface is used for
backplane when backplane traffic isolation (Network Segmentation) is enabled in the cluster. For more information
about backplane interface and traffic segmentation, see Security Guide.

Interface         IP address      vSwitch
ESXi host vmk0    User-defined    vSwitch0
CVM eth0          User-defined    vSwitch0
ESXi host vmk1    192.168.5.1     vSwitchNutanix
CVM eth1          192.168.5.2     vSwitchNutanix
CVM eth1:1        192.168.5.254   vSwitchNutanix
CVM eth2          User-defined    vSwitch0

Note: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets that overlap with subnet
192.168.5.0/24.
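To view these interfaces on a running node, you can list the VMkernel interfaces from the ESXi shell and the CVM interfaces from the CVM. A minimal sketch; output fields differ slightly between versions.
root@esxi# esxcfg-vmknic -l       # lists vmk0 and vmk1 with their IP addresses and port groups
nutanix@cvm$ ip addr show eth0    # repeat with eth1 and eth2 to see the internal and backplane interfaces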



vSwitches
A Nutanix node is configured with the following two vSwitches.

• vSwitchNutanix
Local communications between the CVM and the ESXi host use vSwitchNutanix. vSwitchNutanix has no uplinks.

Caution: To manage network traffic between VMs with greater control, create more port groups on vSwitch0. Do
not modify vSwitchNutanix.

Figure 10: vSwitchNutanix Configuration


• vSwitch0
All other external communications, such as CVM traffic to a different host (in the case of HA redirection), use vSwitch0, which
has uplinks to the physical network interfaces. Since network segmentation is disabled by default, the backplane
traffic uses vSwitch0.
vSwitch0 has the following two networks.

• Management Network
HA, vMotion, and vCenter communications use the Management Network.
• VM Network
All VMs use the VM Network.

Caution:

• The Nutanix CVM uses the standard Ethernet maximum transmission unit (MTU) of 1,500 bytes for
all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance
and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to
higher values.
• You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of ESXi
hosts and guest VMs if the applications on your guest VMs require them. If you choose to use jumbo
frames on hypervisor hosts, ensure that you enable them end-to-end in the desired network and consider
both the physical and virtual network infrastructure impacted by the change.

Figure 11: vSwitch0 Configuration
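If you do enable jumbo frames on the hosts, you can verify end-to-end support by sending a large, non-fragmented ICMP packet from a VMkernel interface. A minimal sketch; 10.1.1.1 is a hypothetical gateway or peer address, and 8972 bytes allows for the ICMP and IP headers within a 9,000-byte MTU.
root@esxi# vmkping -d -s 8972 10.1.1.1    # -d sets the do-not-fragment bit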

Configuring Host Networking (Management Network)


After you create the Nutanix cluster by using Foundation, configure networking for your ESXi hosts.

About this task

Figure 12: Configure Management Network

Procedure

1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.

2. Press the down arrow key until Configure Management Network highlights and then press Enter.

3. Select Network Adapters and then press Enter.



4. Ensure that the connected network adapters are selected.
If they are not selected, press Space key to select them and press Enter key to return to the previous screen.

Figure 13: Network Adapters

5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter.
In the dialog box, provide the VLAN ID and press Enter.

Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host
are assigned. Isolate guest VMs on one or more separate VLANs.

6. Select IP Configuration and press Enter.

Figure 14: Configure Management Network

7. If necessary, highlight the Set static IP address and network configuration option and press Space to
update the setting.

8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your
environment and then press Enter.



9. Select DNS Configuration and press Enter.

10. If necessary, highlight the Use the following DNS server addresses and hostname option and press
Space to update the setting.

11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment
and then press Enter.

12. Press Esc and then Y to apply all changes and restart the management network.

13. Select Test Management Network and press Enter.

14. Press Enter to start the network ping test.

15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier
in the procedure and then press Enter.
Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are
configured.

Figure 15: Test Management Network

Press Enter to close the test window.

16. Press Esc to log off.

Changing a Host IP Address


About this task
To change a host IP address, perform the following steps once for each hypervisor host in the Nutanix
cluster. Complete the entire procedure on one host before proceeding to the next host.

Caution: The cluster cannot tolerate duplicate host IP addresses. For example, when swapping IP addresses between
two hosts, temporarily change one host IP address to an interim unused IP address. Changing this IP address avoids
having two hosts with identical IP addresses on the cluster. Then complete the address change or swap on each host
using the following steps.

Note: All CVMs and hypervisor hosts must be on the same subnet. The hypervisor can be multihomed provided that
one interface is on the same subnet as the CVM.



Procedure

1. Configure networking on the Nutanix node. For more information, see Configuring Host Networking
(Management Network) on page 19.

2. Update the host IP addresses in vCenter. For more information, see Reconnecting a Host to vCenter on
page 22.

3. Log on to every CVM in the Nutanix cluster and restart Genesis service.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed.
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
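After Genesis restarts on all CVMs, you can optionally confirm that the cluster services are healthy before moving on to the next host. A minimal check from any CVM:
nutanix@cvm$ cluster status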

Reconnecting a Host to vCenter


About this task
If you modify the IP address of a host, you must reconnect the host to vCenter. To reconnect the host
to vCenter, perform the following procedure.

Procedure

1. Log on to vCenter with the web client.

2. Right-click the host with the changed IP address and select Disconnect.

3. Right-click the host again and select Remove from Inventory.

4. Right-click the Nutanix cluster and then click Add Hosts....

a. Enter the IP address or fully qualified domain name (FQDN) of the host you want to reconnect in the IP
address or FQDN under New hosts.
b. Enter the host logon credentials in the User name and Password fields, and click Next.
If a security or duplicate management alert appears, click Yes.
c. Review the Host Summary and click Next.
d. Click Finish.
You can see the host with the updated IP address in the left pane of vCenter.

Selecting a Management Interface


Nutanix tracks the management IP address for each host and uses that IP address to open an SSH session into the
host to perform management activities. If the selected vmk interface is not accessible through SSH from the CVMs,
activities that require interaction with the hypervisor fail.
If multiple vmk interfaces are present on a host, Nutanix uses the following rules to select a management interface.



1. Assigns weight to each vmk interface.

• If vmk is configured for the management traffic under network settings of ESXi, then the weight assigned is 4.
Otherwise, the weight assigned is 0.
• If the IP address of the vmk belongs to the same IP subnet as the eth0 interface of the CVMs, then 2 is added to its
weight.
• If the IP address of the vmk belongs to the same IP subnet as the eth2 interface of the CVMs, then 1 is added to its
weight.
2. The vmk interface that has the highest weight is selected as the management interface.

Example of Selection of Management Network


Consider an ESXi host with following configuration.

• vmk0 IP address and mask: 2.3.62.204, 255.255.255.0


• vmk1 IP address and mask: 192.168.5.1, 255.255.255.0
• vmk2 IP address and mask: 2.3.63.24, 255.255.255.0
Consider a CVM with following configuration.

• eth0 inet address and mask: 2.3.63.31, 255.255.255.0


• eth2 inet address and mask: 2.3.62.12, 255.255.255.0
According to the rules, the following weights are assigned to the vmk interfaces.

• vmk0 = 4 + 0 + 1 = 5
• vmk1 = 0 + 0 + 0 = 0
• vmk2 = 0 + 2 + 0 = 2
Since vmk0 has the highest weight assigned, the vmk0 interface is used as the management interface for the ESXi host.
To verify that the vmk0 interface is selected as the management interface, use the following command.
root@esx# esxcli network ip interface tag get -i vmk0
You see the following output.
Tags: Management, VMotion
For the other two interfaces, no tags are displayed.
If you want any other interface to act as the management IP address, enable management traffic on that interface by
following the procedure described in Selecting a New Management Interface on page 23.
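To gather the inputs for these rules yourself, list the IP address and netmask of every VMkernel interface and check the tags on each one. A minimal sketch using standard esxcli commands:
root@esx# esxcli network ip interface ipv4 get           # IP address and netmask of every vmk interface
root@esx# esxcli network ip interface tag get -i vmk1    # repeat for each vmk interface (vmk0, vmk1, vmk2, and so on)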

Selecting a New Management Interface


You can mark a vmk interface as the management interface on an ESXi host by using the
following method.

Procedure

1. Log on to vCenter with the web client.



2. Do the following on the ESXi host.

a. Go to Configure > Networking > VMkernel adapters.


b. Select the interface on which you want to enable the management traffic.
c. Click Edit settings of the port group to which the vmk belongs.
d. Select the Management check box under Enabled services to enable management traffic on the
vmk interface.

3. Open an SSH session to the ESXi host and enable the management traffic on the vmk interface.
root@esx# esxcli network ip interface tag add -i vmkN --tagname=Management
Replace vmkN with the vmk interface where you want to enable the management traffic.
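You can optionally confirm that the tag was applied; the interface should now report the Management tag.
root@esx# esxcli network ip interface tag get -i vmkN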

Updating Network Settings


After you configure networking of your vSphere deployments on Nutanix Enterprise Cloud, you may want
to update the network settings.

• To know about the best practice of ESXi network teaming policy, see Network Teaming Policy on page 24.

• To migrate an ESXi host networking from a vSphere Standard Switch (vSwitch) to a vSphere Distributed Switch
(vDS) with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG on
page 34.

• To migrate an ESXi host networking from a vSphere standard switch (vSwitch) to a vSphere Distributed Switch
(vDS) without LACP, see Migrating to a New Distributed Switch without LACP/LAG on page 27.

Network Teaming Policy


On an ESXi host, a NIC teaming policy allows you to bundle two or more physical NICs into a single logical link to
provide network bandwidth aggregation and link redundancy to a vSwitch. The NIC teaming policies in the
ESXi networking configuration for a vSwitch consist of the following.

• Route based on originating virtual port.


• Route based on IP hash.
• Route based on source MAC hash.
• Explicit failover order.
In addition to the NIC teaming policies mentioned earlier, vDS offers an extra teaming policy: Route
based on physical NIC load.
When Foundation or Phoenix imaging is performed on a Nutanix cluster, the following two standard virtual switches
are created on ESXi hosts:

• vSwitch0
• vSwitchNutanix
On vSwitch0, the Nutanix best practice guide (see Nutanix vSphere Networking Solution Document) provides the
following recommendations for NIC teaming:

• vSwitch. Route based on originating virtual port



• vDS. Route based on physical NIC load
On vSwitchNutanix, there are no uplinks to the virtual switch, so there is no NIC teaming configuration required.
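To review the teaming and failover policy currently applied to vSwitch0 on a host, you can query it from the ESXi shell. A minimal sketch; the output lists the load balancing policy and the active and standby uplinks.
root@esxi# esxcli network vswitch standard policy failover get -v vSwitch0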

Migrate from a Standard Switch to a Distributed Switch


This topic provides detailed information about how to migrate from a vSphere Standard Switch (vSS) to a vSphere
Distributed Switch (vDS).
The following are the two types of virtual switches (vSwitch) in vSphere.

• vSphere standard switch (vSwitch) (see vSphere Standard Switch (vSwitch) in vSphere Networking on
page 7).
• vSphere Distributed Switch (vDS) (see vSphere Distributed Switch (vDS) in vSphere Networking on
page 7).

Tip: For more information about vSwitches and the associated network concepts, see the VMware Documentation.

For migrating from a vSS to a vDS with LACP/LAG configuration, see Migrating to a New Distributed Switch
with LACP/LAG on page 34.
For migrating from a vSS to a vDS without LACP/LAG configuration, see Migrating to a New Distributed Switch
without LACP/LAG on page 27.

Standard Switch Configuration


The standard switch configuration consists of the following.
vSwitchNutanix
vSwitchNutanix handles internal communication between the CVM and the ESXi host. There are
no uplink adapters associated with this vSwitch. The only members of its port group must be the
CVM and its host. Do not modify this virtual switch or its port groups, because doing so can disrupt
communication between the host and the CVM.
vSwitch0
vSwitch0 consists of the vmk (VMkernel) management interface, vMotion interface, and VM port
groups. This virtual switch connects to uplink network adapters that are plugged into a physical
switch.

Planning the Migration


It is important to plan and understand the migration process. An incorrect configuration can disrupt communication,
which can require downtime to resolve.
Consider the following while or before planning the migration.

• Read the Nutanix Best Practice Guide for VMware vSphere Networking.

• Understand the various teaming and load-balancing algorithms on vSphere.


For more information, see the VMware Documentation.
• Confirm communication on the network through all the connected uplinks.
• Confirm access to the host using IPMI when there are network connectivity issues during migration.
Access the host to troubleshoot the network issue or move the management network back to the standard switch
depending on the issue.



• Confirm that the hypervisor external management IP address and the CVM IP address are in the same public
subnet for the data path redundancy functionality to work.
• When performing migration to the distributed switch, migrate one host at a time and verify that networking is
working as desired.
• Do not migrate the port groups and vmk (VMkernel) interfaces that are on vSwitchNutanix to the distributed
switch (dvSwitch).

Unassigning Physical Uplink of the Host for Distributed Switch


All the physical adapters connect to vSwitch0 on the host. A live distributed switch must have a physical
uplink connected to it to work. To assign a physical adapter of the host to the distributed switch, first unassign
the physical adapter from vSwitch0 and then assign it to the new distributed switch.

About this task


To unassign the physical uplink of the host, do the following.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.

3. Click Configure, and go to Networking > Virtual Switches.

4. Click MANAGE PHYSICAL ADAPTERS tab and select the active adapters from the Assigned adapters
that you want to unassign from the list of physical adapters of the host.

Figure 16: Managing Physical Adapters



5. Click X on the top.
The selected adapter is unassigned from the list of physical adapters of the host.

Tip: Ping the host to check and confirm if you are able to communicate with the active physical adapter of the host.
If you lose network connectivity to the ESXi host during this test, review your network configuration.
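After unassigning an uplink, you can confirm that the host is still reachable and see which physical NICs remain connected. A minimal sketch; 10.1.1.20 is a hypothetical ESXi management IP address.
nutanix@cvm$ ping -c 4 10.1.1.20       # confirm that the host still responds over the remaining uplink
root@esxi# esxcli network nic list     # shows the link status of every physical NIC on the host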

Migrating to a New Distributed Switch without LACP/LAG


Migrating to a new distributed switch without LACP/LAG consists of the following workflow.
1. Creating a Distributed Switch on page 27
2. Creating Port Groups on the Distributed Switch on page 28
3. Configuring Port Group Policies on page 30

Creating a Distributed Switch


Connect to vCenter and create a distributed switch.

About this task


To create a distributed switch, do the following.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Networking view and select the host from the left pane.

Figure 17: Distributed Switch Creation



3. Right-click the host, select Distributed Switch > New Distributed Switch, and specify the following
information in the New Distributed Switch dialog box.

a. Name and Location: Enter name for the distributed switch.


b. Select Version: Select a distributed switch version that is compatible with all your hosts in that datacenter.
c. Configure Settings: Select the number of uplinks you want to connect to the distributed switch.
Select Create a default port group checkbox to create a port group. To configure a port group later, see
Creating Port Groups on the Distributed Switch on page 28.
d. Ready to complete: Review the configuration and click Finish.
A new distributed switch is created with the default uplink port group. The uplink port group is the port group to
which the uplinks connect. This uplink is different from the vmk (VMkernel) or the VM port groups.

Figure 18: New Distributed Switch Created in the Host

Creating Port Groups on the Distributed Switch


Create one or more vmk (VMkernel) port groups and VM port groups depending on the vSphere features
you plan to use and the physical network layout. The best practice is to have the vmk Management
interface, vmk vMotion interface, and vmk iSCSI interface on separate port groups.

About this task


To create port groups on the distributed switch, do the following.

Procedure

1. Log on to vCenter with the web client.



2. Go to the Networking view and select the host from the left pane.

Figure 19: Creating Distributed Port Groups

3. Right-click the host, select Distributed Switch > Distributed Port Group > New Distributed Port
Group, and follow the wizard to create the remaining distributed port group (vMotion interface and VM port
groups).
You need the following port groups because you are migrating from the standard switch to the
distributed switch.

• VMkernel Management interface. Use this port group to connect to the host for all management operations.
• VMNetwork. Use this port group to connect to the new VMs.
• vMotion. This port group is an internal interface and the host will use this port during failover for vMotion
traffic.

Note: Nutanix recommends using static port binding instead of ephemeral port binding when you create a port
group.

Figure 20: Distributed Port Groups Created

Note: The port group for vmk management interface is created during the distributed switch creation. See
Creating a Distributed Switch on page 27 for more information.



Configuring Port Group Policies
To configure port groups, you must configure VLANs, Teaming and failover, and other distributed port groups
policies at the port group layer or at the distributed switch layer. Refer to the following topics to configure the port
group policies.
1. Configuring Policies on the Port Group Layer on page 30
2. Configuring Policies on the Distributed Switch Layer on page 31
3. Adding ESXi Host to the Distributed Switch on page 31

Configuring Policies on the Port Group Layer

Ensure that the distributed switch port groups have VLANs tagged if the physical adapters of the host
have a VLAN tagged to them. Update the policies for the port group, VLANs, and teaming algorithms to
configure the physical network switch. Configure the load balancing policy as per the network configuration
requirements on the physical switch.

About this task


To configure the port group policies, do the following.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Networking view and select the host from the left pane.

Figure 21: Configure Port Group Policies on the Distributed Switch

3. Right-click the host, select Distributed Switch > Distributed Port Group > Edit Settings, and follow the
wizard to configure the VLAN, Teaming and failover, and other options.

Note: For more information about configuring port group policies, see the VMware Documentation.

4. Click OK to complete the configuration.

5. Repeat steps 2–4 to configure the other port groups.



Configuring Policies on the Distributed Switch Layer

You can configure the same policy for all the port groups simultaneously.

About this task


To configure the same policy for all the port groups, do the following.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Networking view and select the host from the left pane.

Figure 22: Manage Distributed Port Groups

3. Right-click the host, select Distributed Switch > Distributed Port Group > Manage Distributed Port
Groups, and specify the following information in Manage Distributed Port Group dialog box.

a. In the Select port group policies tab, select the port group policies that you want to configure and click
Next.

Note: For more information about configuring port group policies, see the VMware Documentation.

b. In the Select port groups tab, select the distributed port groups on which you want to configure the policy
and click Next.
c. In the Teaming and failover tab, configure the Load balancing policy, Active uplinks, and click Next.
d. In the Ready to complete window, review the configuration and click Finish.

Adding ESXi Host to the Distributed Switch

Migrate the management interface and CVM of the host to the distributed switch.

About this task


To migrate the Management interface and CVM of the ESXi host, do the following.



Procedure

1. Log on to vCenter with the web client.

2. Go to the Networking view and select the host from the left pane.

Figure 23: Add ESXi Host to Distributed Switch



3. Right-click the host, select Distributed Switch > Add and Manage Hosts, and specify the following
information in Add and Manage Hosts dialog box.

a. In the Select task tab, select Add hosts to add new host to the distributed switch and click Next.
b. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.

Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the
distributed switch.

c. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.

Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same
uplink on the distributed switch.

• 1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink.

Figure 24: Select Physical Adapter for Uplinking

Important: If you select physical NICs connected to other switches, those physical NICs migrate to the
current distributed switch.

2. Select the Uplink in the distributed switch to which you want to assign the PNIC of the host and click
OK.
3. Click Next.
d. In the Manage VMkernel adapters tab, configure the vmk adapters.

• 1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port
group.



2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host
and click OK.

Figure 25: Select a Port Group


3. Click Next.
e. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect
all the network adapters of a VM to a distributed port group.

• 1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an
individual network adapter to connect with the distributed port group.
2. Click Assign port group and select the distributed port group to which you want to migrate the VM
or network adapter and click OK.
3. Click Next.
f. In the Ready to complete tab, review the configuration and click Finish.

4. Go to the Hosts and Clusters view in the vCenter web client and select the host, and then Configure, to review the
network configuration for the host.

Note: Run a ping test to confirm that the networking on the host works as expected (see the example after this procedure).

5. Repeat steps 2–4 to add the remaining hosts to the distributed switch and migrate the adapters.
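A minimal connectivity check after migrating each host; 10.1.1.1 and 10.1.1.20 are hypothetical gateway and host management addresses.
root@esxi# vmkping -I vmk0 10.1.1.1     # verify that the migrated management VMkernel interface can reach the gateway
nutanix@cvm$ ping -c 4 10.1.1.20        # verify that the CVM can still reach the host management IP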

Migrating to a New Distributed Switch with LACP/LAG


Migrating to a new distributed switch with LACP/LAG consists of the following workflow.
1. Creating a Distributed Switch on page 27
2. Creating Port Groups on the Distributed Switch on page 28
3. Creating Link Aggregation Group on Distributed Switch on page 35



4. Creating Port Groups to use the LAG
5. Adding ESXi Host to the Distributed Switch

Creating Link Aggregation Group on Distributed Switch


Using a Link Aggregation Group (LAG) on a distributed switch, you can connect the ESXi host to physical
switches by using dynamic link aggregation. You can create multiple link aggregation groups (LAGs) on a
distributed switch to aggregate the bandwidth of physical NICs on ESXi hosts that are connected to LACP
port channels.

About this task


To create a LAG, do the following.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Networking view and select the host from the left pane.

3. Right-click the host, select Distributed Switch > Configure > LACP.

Figure 26: Create LAG on Distributed Switch

4. Click New and enter the following details in the New Link Aggregation Group dialog box.

a. Name: Enter a name for the LAG.


b. Number of Ports: Enter the number of ports.
The number of ports must match the number of physical ports per host in the LACP port channel. For example, if
the Number of Ports is two, you can attach two physical ports per ESXi host to the LAG.
c. Mode: Specify the LACP negotiation mode.
Based on the configuration of the physical switch, set the mode to Active or Passive.
d. Load balancing mode: Specify the load balancing mode for the LAG.
For more information about the various load balancing options, see the VMware Documentation.
e. VLAN trunk range: Specify the VLANs if you have VLANs configured in your environment.

5. Click OK.
LAG is created on the distributed switch.
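
After you attach the host uplinks to the LAG (see Adding ESXi Host to the Distributed Switch), you can optionally
verify LACP negotiation from the ESXi shell of each host. This is a minimal sketch; the exact output format depends
on the ESXi version.
root@esx# esxcli network vswitch dvs vmware lacp config get
root@esx# esxcli network vswitch dvs vmware lacp status get
The status output lists the LAG, its member vmnics, and their LACP negotiation state. If the ports do not bundle,
verify the LACP (port channel) configuration on the physical switch.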

Creating Port Groups to Use LAG
To use LAG as the uplink you have to edit the settings of the port group created on the distributed switch.

About this task


To edit the settings on the port group to use LAG, do the following.

Procedure

1. Log on to vCenter with the web client.

2. On the Configure tab, expand Networking and select Virtual Switches.

3. Select the required Virtual switch from the list and click EDIT.

4. Go to the Teaming and failover tab in the Edit Settings dialog box and specify the following information.

Figure 27: Configure the Management Port Group

a. Load Balancing: Select Route based on IP hash.


b. Active uplinks: Move the LAG from the Unused uplinks section to the Active uplinks section.
c. Unused uplinks: Move the standalone physical uplinks (for example, Uplink 1 and Uplink 2) to the Unused
uplinks section.

5. Repeat steps 2–4 to configure the other port groups.

Adding ESXi Host to the Distributed Switch


Add the ESXi host to the distributed switch and migrate the network from the standard switch to the
distributed switch. Migrate the management interface and CVM of the ESXi host to the distributed switch.

About this task
To migrate the management interface and CVM of the ESXi host, do the following.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Networking view and select the distributed switch in the left pane.

Figure 28: Add ESXi Host to Distributed Switch

3. Right-click the distributed switch, select Add and Manage Hosts, and specify the following
information in the Add and Manage Hosts dialog box.

a. In the Select task tab, select Add hosts to add a new host to the distributed switch and click Next.
b. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.

Note: Add one host at a time to the distributed switch, and migrate that host's CVM and network adapters to the
distributed switch before adding the next host.

c. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.

Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same
uplink on the distributed switch.

• 1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink.

Important: If you select physical NICs connected to other switches, those physical NICs migrate to the
current distributed switch.

2. Select the LAG Uplink in the distributed switch to which you want to assign the PNIC of the host and
click OK.
3. Click Next.
d. In the Manage VMkernel adapters tab, configure the vmk adapters.
Select the VMkernel adapter that is associated with vSwitch0 as your management VMkernel adapter. Migrate
this adapter to the corresponding port group on the distributed switch.

Note: Do not migrate the VMkernel adapter associated with vSwitchNutanix.

Note: If there are any VLANs associated with the port group on the standard switch, ensure that the
corresponding distributed port group also has the correct VLAN. Verify the physical network configuration to
ensure it is configured as required (see the verification example after this procedure).

• 1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port
group.
2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host
and click OK.
3. Click Next.
e. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect
all the network adapters of a VM to a distributed port group.

• 1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an
individual network adapter to connect with the distributed port group.
2. Click Assign port group and select the distributed port group to which you want to migrate the VM
or network adapter and click OK.
3. Click Next.
f. In the Ready to complete tab, review the configuration and click Finish.
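
To double-check the migration from the ESXi shell, the following sketch lists the VLAN IDs configured on the
standard switch port groups and the distributed switch that now carries the host uplinks; adapt the commands as
needed for your environment.
root@esx# esxcli network vswitch standard portgroup list
root@esx# esxcli network vswitch dvs vmware list
Compare the VLAN IDs shown for the standard switch port groups with the VLAN settings of the corresponding
distributed port groups, and confirm that the distributed switch lists the expected uplinks (LAG members).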

VCENTER CONFIGURATION
VMware vCenter enables the centralized management of multiple ESXi hosts. You can either create a vCenter Server
or use an existing vCenter Server. To create a vCenter Server, refer to the VMware Documentation.
This section assumes that you already have a vCenter Server and therefore describes the operations you can perform
on an existing vCenter Server. To deploy vSphere clusters running Nutanix Enterprise Cloud, perform the following
steps in vCenter.

Tip: For single-window management of all your ESXi nodes, you can also integrate the vCenter Server with Prism
Central. For more information, see Registering a Cluster to vCenter Server on page 39.

1. Create a cluster entity within the existing vCenter inventory and configure its settings according to Nutanix best
practices. For more information, see Creating a Nutanix Cluster in vCenter on page 41.
2. Configure HA. For more information, see vSphere HA Settings on page 44.
3. Configure DRS. For more information, see vSphere DRS Settings on page 50.
4. Configure EVC. For more information, see vSphere EVC Settings on page 52.
5. Configure override. For more information, see VM Override Settings on page 54.
6. Add the Nutanix hosts to the new cluster. For more information, see Adding a Nutanix Node to vCenter on
page 42.

Registering a Cluster to vCenter Server


To perform core VM management operations directly from Prism without switching to vCenter Server, you
need to register your cluster with the vCenter Server.

Before you begin


Ensure that you have vCenter Server Extension privileges as these privileges provide permissions to
perform vCenter registration for the Nutanix cluster.

About this task


Following are some of the important points about registering vCenter Server.

• Nutanix does not store vCenter Server credentials.


• Whenever a new node is added to Nutanix cluster, vCenter Server registration for the new node is automatically
performed.
• Nutanix supports vCenter Enhanced Linked Mode.
When registering a Nutanix cluster to a vCenter Enhanced Linked Mode (ELM) enabled ESXi environment,
ensure that Prism is registered to the vCenter containing the vSphere Cluster and Nutanix nodes (often the local
vCenter). For more information about vCenter Enhanced Linked Mode, see vCenter Enhanced Linked Mode in the
vCenter Server Installation and Setup documentation.

Procedure

1. Log into the Prism Element web console.

2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
The vCenter Server that is managing the hosts in the cluster is auto-discovered and displayed.

3. Click the Register link.
The IP address is auto-populated in the Address field. The port number field is also auto-populated with 443. Do
not change the port number. For the complete list of required ports, see Port Reference.

4. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin
Password fields.

Figure 29: vCenter Registration Figure 1

5. Click Register.
During the registration process, a certificate is generated to communicate with the vCenter Server. If the
registration is successful, a relevant message is displayed in the Tasks dashboard. The Host Connection field
displays Connected, which indicates that all the hosts are managed by the registered vCenter Server.

Figure 30: vCenter Registration Figure 2
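
If the registration fails, it can help to first confirm that the CVMs can reach the vCenter Server on port 443. A
minimal check from any CVM, where vcenter_ip is a placeholder for the vCenter Server IP address or FQDN:
nutanix@cvm$ curl -k -s -o /dev/null -w "%{http_code}\n" https://vcenter_ip:443
Any HTTP status code in the output indicates that the port is reachable; a timeout or connection error points to a
network or firewall issue between the CVMs and the vCenter Server.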

Unregistering a Cluster from the vCenter Server
To unregister the vCenter Server from your cluster, perform the following procedure.

About this task

• Ensure that you unregister the vCenter Server from the cluster before changing the IP address of the vCenter
Server. After you change the IP address of the vCenter Server, you should register the vCenter Server again with
the new IP address with the cluster.
• The vCenter Server Registration page displays the registered vCenter Server. If for some reason the Host
Connection field changes to Not Connected, it implies that the hosts are being managed by a different vCenter
Server. In this case, there will be a new vCenter Server entry with the host connection status Connected, and you
need to register to this vCenter Server.

Procedure

1. Log into the Prism web console.

2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
A message that the cluster is already registered to the vCenter Server is displayed.

3. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin
Password fields.

4. Click Unregister.
If the credentials are correct, the vCenter Server is unregistered from the cluster and a relevant message is
displayed in the Tasks dashboard.

Creating a Nutanix Cluster in vCenter


Before you begin
Nutanix recommends creating a storage container in Prism Element running on the host or using the
default container to mount an NFS datastore on all ESXi hosts.

About this task


To enable the vCenter to discover the Nutanix clusters, perform the following steps in the vCenter.

Procedure

1. Log on to vCenter with the web client.

2. Do one of the following.

» If you want the Nutanix cluster to be in an existing datacenter, proceed to step 3.


» If you want the Nutanix cluster to be in a new datacenter or if there is no datacenter, perform the following
steps to create a datacenter.

Note: Nutanix clusters must be in a datacenter.

a. Go to the Hosts and Clusters view and right-click the IP address of the vCenter Server in the left pane.
b. Click New Datacenter.
c. Enter a meaningful name for the datacenter (for example, NTNX-DC) and click OK.

3. Right-click the datacenter node and click New Cluster.

a. Enter a meaningful name for the cluster in the Name field (for example, NTNX-Cluster).
b. Turn on the vSphere DRS switch.
c. Turn on the vSphere HA switch.
d. Uncheck Manage all hosts in the cluster with a single image.
The Nutanix cluster (NTNX-Cluster) is created with the default settings for vSphere HA and vSphere DRS.

What to do next
Add all the Nutanix nodes to the Nutanix cluster inventory in vCenter. For more information, see Adding a
Nutanix Node to vCenter on page 42.

Adding a Nutanix Node to vCenter


Before you begin
Configure the Nutanix cluster according to Nutanix specifications given in Creating a Nutanix Cluster in
vCenter on page 41 and vSphere Cluster Settings Checklist on page 106.

About this task

Tip: Refer to KB-1661 for the default credentials of all cluster components.

Procedure

1. Log on to vCenter with the web client.

2. Right-click the Nutanix cluster and then click Add Hosts....

a. Enter the IP address or fully qualified domain name (FQDN) of the host that you want to add in the IP
address or FQDN field under New hosts.
b. Enter the host logon credentials in the User name and Password fields, and click Next.
If a security or duplicate management alert appears, click Yes.
c. Review the Host Summary and click Next.
d. Click Finish.

3. Select the host under the Nutanix cluster from the left pane and go to Configure > System > Security Profile.
Ensure that Lockdown Mode is Disabled. If there are any security requirements to enable the lockdown mode,
follow the steps mentioned in KB-3702.

4. Configure DNS servers.

a. Go to Configure > Networking > TCP/IP configuration.


b. Click Default under TCP/IP stack and go to TCP/IP.
c. Click the pencil icon to configure DNS servers and perform the following.

• 1. Select Enter settings manually.


2. Type the domain name in the Domain field.
3. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and
click OK.

5. Configure NTP servers.

a. Go to Configure > System > Time Configuration.


b. Click Edit.
c. Select the Use Network Time Protocol (Enable NTP client).
d. Type the NTP server address in the NTP Servers text box.
e. In the NTP Service Startup Policy, select Start and stop with host from the drop-down list.
Add multiple NTP servers if necessary.
f. Click OK.

6. Click Configure > Storage and ensure that NFS datastores are mounted.

Note: Nutanix recommends creating a storage container in Prism Element running on the host.

7. If HA is not enabled, set the CVM to start automatically when the ESXi host starts.

Note: Automatic VM start and stop is disabled in clusters where HA is enabled.

a. Go to Configure > Virtual Machines > VM Startup/Shutdown.


b. Click Edit.
c. Ensure that Automatically start and stop the virtual machines with the system is checked.
d. If the CVM is listed in Manual Startup, click the up arrow to move the CVM into the Automatic Startup
section.
e. Click OK.
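
You can optionally confirm the DNS, NTP, and datastore configuration from the ESXi shell of the new node. This is a
minimal sketch; the esxcli system ntp namespace is available on ESXi 7.0 and later, and the NFS list shows the
Nutanix storage containers mounted as datastores.
root@esx# esxcli network ip dns server list
root@esx# esxcli system ntp get
root@esx# esxcli storage nfs list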

What to do next
Configure HA and DRS settings. For more information, see vSphere HA Settings on page 44 and
vSphere DRS Settings on page 50.

Nutanix Cluster Settings


To ensure the optimal performance of your vSphere deployment running on a Nutanix cluster, configure the following
settings from vCenter.

vSphere General Settings

About this task


Configure the following general settings from vCenter.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.

3. Click Configure, and go to Configuration > General.

a. Under General, set the Swap file location to Virtual machine directory.
Setting the swap file location to the VM directory stores the VM swap files in the same directory as the VM.
b. Under Default VM Compatibility, set the compatibility to Use datacenter setting and host version.
Do not change the compatibility unless the cluster has to support previous versions of ESXi VMs.

Figure 31: General Cluster Settings

vSphere HA Settings
If there is a node failure, vSphere HA (High Availability) settings ensure that there are sufficient compute
resources available to restart all VMs that were running on the failed node.

About this task


Configure the following HA settings from vCenter.

Note: Nutanix recommends that you configure vSphere HA and DRS even if you do not use the features. The vSphere
cluster configuration preserves the settings, so if you later decide to enable the features, the settings are in place and
conform to Nutanix best practices.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.

3. Click Configure, and go to Services > vSphere Availability.

4. Click Edit next to the text showing vSphere HA status.

Figure 32: vSphere Availability Settings: Failures and Responses

a. Turn on the vSphere HA and Enable Host Monitoring switches.


b. Specify the following information under the Failures and Responses tab.

• 1. Host Failure Response: Select Restart VMs from the drop-down list.
This option configures the cluster-wide response when a host fails.
2. Response for Host Isolation: Select Power off and restart VMs from the drop-down list.
3. Datastore with PDL: Select Disabled from the drop-down list.
4. Datastore with APD: Select Disabled from the drop-down list.

Note: To enable the VM component protection in vCenter, refer to the VMware Documentation.

5. VM Monitoring: Select Disabled from the drop-down list.


c. Specify the following information under the Admission Control tab.

Note: If you are using replication factor 2 with cluster sizes up to 16 nodes, configure HA admission control
settings to tolerate one node failure. For cluster sizes larger than 16 nodes, configure HA admission control to
sustain two node failures and use replication factor 3. vSphere 6.7 and newer versions automatically calculate
the percentage of resources required for admission control.

Figure 33: vSphere Availability Settings: Admission Control

• 1. Host failures cluster tolerates: Enter 1 or 2 based on the number of nodes in the Nutanix cluster
and the replication factor.
2. Define host failover capacity by: Select Cluster resource Percentage from the drop-down list.

Note: If you set one ESXi host as a dedicated failover host in vSphere HA configuration, then CVM
cannot boot up after shutdown.

3. Performance degradation VMs tolerate: Set the percentage to 100.
For more information about the percentage of cluster resources reserved as failover spare
capacity, see vSphere HA Admission Control Settings for Nutanix Environment on page 48.
d. Specify the following information under the Heartbeat Datastores tab.

Note: vSphere HA uses datastore heartbeating to distinguish between hosts that have failed and hosts that
reside on a network partition. With datastore heartbeating, vSphere HA can monitor hosts when a management
network partition occurs while continuing to respond to failures.

Figure 34: vSphere Availability Settings: Heartbeat Datastores

• 1. Select Use datastores only from the specified list.


2. Select the named storage container mounted as the NFS datastore (Nutanix datastore).
If you have more than one named storage container, select all that are applicable.
3. If the cluster has only one datastore, click the Advanced Options tab and add
das.ignoreInsufficientHbDatastore with a value of true.
e. Click OK.

vSphere HA Admission Control Settings for Nutanix Environment

Overview
If you are using redundancy factor 2 with cluster sizes of up to 16 nodes, you must configure HA admission control
settings with the appropriate percentage of CPU/RAM to achieve at least N+1 availability. For cluster sizes larger
than 16 nodes, you must configure HA admission control with the appropriate percentage of CPU/RAM to achieve at
least N+2 availability.
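
As a rough rule of thumb, the reservation percentage is the number of tolerated node failures divided by the number
of nodes, rounded up. For example:

• N+1 on an 8-node cluster: 1 / 8 = 12.5 percent, rounded up to the 13 percent shown in the table below.
• N+2 on a 20-node cluster: 2 / 20 = 10 percent.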

N+2 Availability Configuration


The N+2 availability configuration can be achieved in the following two ways.

• Redundancy factor 2 and N+2 vSphere HA admission control setting configured.


Because Nutanix distributed file system recovers in the event of a node failure, it is possible to have a second node
failure without data being unavailable if the Nutanix cluster has fully recovered before the subsequent failure.
In this case, a N+2 vSphere HA admission control setting is required to ensure sufficient compute resources are
available to restart all the VMs.
• Redundancy factor 3 and N+2 vSphere HA admission control setting configured.
If you want two concurrent node failures to be tolerated and the cluster has insufficient blocks to use block
awareness, redundancy factor 3 in a cluster of five or more nodes is required. In either of these two options, the
Nutanix storage pool must have sufficient free capacity to restore the configured redundancy factor (2 or 3). The
percentage of free space required is the same as the required HA admission control percentage setting. In this case,
redundancy factor 3 must be configured at the storage container layer. An N+2 vSphere HA admission control
setting is also required to ensure sufficient compute resources are available to restart all the VMs.

Note: For redundancy factor 3, a minimum of five nodes is required, which provides the ability that two concurrent
nodes can fail while ensuring data remains online. In this case, the same N+2 level of availability is required for the
vSphere cluster to enable the VMs to restart following a failure.

Table 2: Minimum Reservation Percentage for vSphere HA Admission Control Setting

For redundancy factor 2 deployments, the recommended minimum HA admission control setting
percentage is marked with a single asterisk (*) in the following table. For redundancy factor 2 or
redundancy factor 3 deployments configured to tolerate multiple non-concurrent node failures,
the minimum required HA admission control setting percentage is marked with two asterisks (**) in the
following table.

Nodes        Availability Level
             N+1        N+2        N+3        N+4

1            N/A        N/A        N/A        N/A
2            N/A        N/A        N/A        N/A
3            33*        N/A        N/A        N/A
4            25*        50         75         N/A
5            20*        40**       60         80
6            18*        33**       50         66
7            15*        29**       43         56
8            13*        25**       38         50
9            11*        23**       33         46
10           10*        20**       30         40
11           9*         18**       27         36
12           8*         17**       25         34
13           8*         15**       23         30
14           7*         14**       21         28
15           7*         13**       20         26
16           6*         13**       19         25
17           6          12*        18**       24
18           6          11*        17**       22
19           5          11*        16**       22
20           5          10*        15**       20
21           5          10*        14**       20
22           4          9*         14**       18
23           4          9*         13**       18
24           4          8*         13**       16
25           4          8*         12**       16
26           4          8*         12**       16
27           4          7*         11**       14
28           4          7*         11**       14
29           3          7*         10**       14
30           3          7*         10**       14
31           3          6*         10**       12
32           3          6*         9**        12

The table also represents the percentage of the Nutanix storage pool that should remain free to ensure that the
cluster can fully restore the redundancy factor after the failure of one or more nodes, or even of an entire block
(where three or more blocks exist within a cluster).

Block Awareness
For deployments of at least three blocks, block awareness automatically ensures data availability even when an
entire block of up to four nodes configured with redundancy factor 2 becomes unavailable.

If block awareness levels of availability are required, the vSphere HA admission control setting must ensure sufficient
compute resources are available to restart all virtual machines. In addition, the Nutanix storage pool must have
sufficient space to restore redundancy factor 2 to all data.
The vSphere HA minimum availability level must be equal to the number of nodes per block.

Note: For block awareness, each block must be populated with a uniform number of nodes. In the event of a failure, a
non-uniform node count might compromise block awareness or the ability to restore the redundancy factor, or both.

Rack Awareness
Rack fault tolerance is the ability to provide a rack-level availability domain. With rack fault tolerance, data is
replicated to nodes that are not in the same rack. Rack failure can occur in the following situations.

• All power supplies in a rack fail.


• Top-of-rack (TOR) switch fails.
• Network partition occurs: one of the racks becomes inaccessible from the other racks.
With rack fault tolerance enabled, the cluster has rack awareness and guest VMs can continue to run even during the
failure of one rack (with replication factor 2) or two racks (with replication factor 3). The redundant copies of guest
VM data and metadata persist on other racks when one rack fails.

Table 3: Rack Awareness Minimum Requirements

Replication factor   Minimum number   Minimum number   Minimum number   Data resiliency
                     of nodes         of blocks        of racks

2                    3                3                3                Failure of 1 node, block, or rack

3                    5                5                5                Failure of 2 nodes, blocks, or racks

vSphere DRS Settings

About this task


Configure the following DRS settings from vCenter.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.

3. Click Configure, and go to Services > vSphere DRS.

4. Click Edit next to the text showing vSphere DRS status.

Figure 35: vSphere DRS Settings: Automation

a. Turn on the vSphere DRS switch.


b. Specify the following information under the Automation tab.

• 1. Automation Level: Select Fully Automated from the drop-down list.

2. Migration Threshold: Set the bar between conservative and aggressive (value=3).
A migration threshold of 3 provides optimal resource utilization while avoiding DRS migrations that offer
little benefit. Data locality is managed automatically: whenever a VM moves, new writes always place one
replica locally to maximize subsequent read performance. Nutanix recommends setting the migration
threshold to 3 in a fully automated configuration.
3. Predictive DRS: Leave the option disabled.
The value of predictive DRS depends on whether you use other VMware products such as vRealize
Operations. Unless you use vRealize Operations, Nutanix recommends disabling predictive DRS.
4. Virtual Machine Automation: Enable VM automation.
c. Specifying anything under the Additional Options tab is optional.
d. Specify the following information under the Power Management tab.

Figure 36: vSphere DRS Settings: Power Management

• 1. DPM: Leave the option disabled.


Enabling DPM causes nodes in the Nutanix cluster to go offline, affecting cluster resources.
e. Click OK.

vSphere EVC Settings


vSphere Enhanced vMotion Compatibility (EVC) ensures that workloads can live migrate, using vMotion,
between ESXi hosts in a Nutanix cluster that are running different CPU generations. The general
recommendation is to enable EVC because it helps you later, when you scale your Nutanix clusters with new
hosts that might contain newer CPU models.

About this task
Enabling EVC in a brownfield scenario can be challenging. Configure the following EVC settings from
vCenter.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.

3. Shut down the guest VMs and Controller VMs on the hosts with feature sets greater than the EVC mode.
Ensure that the Nutanix cluster contains hosts with CPUs from only one vendor, either Intel or AMD.

4. Click Configure, and go to Configuration > VMware EVC.

5. Click Edit next to the text showing VMware EVC.

6. Enable EVC for the CPU vendor and feature set appropriate for the hosts in the Nutanix cluster, and click OK.
If the Nutanix cluster contains nodes with different processor classes, enable EVC with the lower feature set as the
baseline.

Tip: To know the processor class of a node, perform the following steps.

• 1. Log on to Prism Element running on the Nutanix cluster.


2. Click Hardware from the menu and go to Diagram or Table view.

3. Click the node and look for the Block Serial field in Host Details.

Figure 37: VMware EVC

7. Start the VMs in the Nutanix cluster to apply the EVC.


If you try to enable EVC on a Nutanix cluster with mismatching host feature sets (mixed processor clusters), the
lowest common feature set (lowest processor class) is selected. Hence, if VMs are already running on the new host
and if you want to enable EVC on the host, you must first shut down the guest VMs and Controller VMs, and then
enable EVC.

Note: Do not shut down more than one CVM at the same time.

VM Override Settings
You must exclude Nutanix CVMs from vSphere availability and resource scheduling, and therefore configure
the following VM override settings.

Procedure

1. Log on to vCenter with the web client.

2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.

3. Click Configure, and go to Configuration > VM Overrides.

4. Select all the CVMs and click Next.
If you do not have the CVMs listed, click Add to ensure that the CVMs are added to the VM Overrides dialog
box.

Figure 38: VM Override

5. In the VM override section, configure override for the following parameters.

• DRS Automation Level: Disabled


• VM HA Restart Priority: Disabled
• VM Monitoring: Disabled

6. Click Finish.

Migrating a Nutanix Cluster from One vCenter Server to Another


About this task
Perform the following steps to migrate a Nutanix cluster from one vCenter Server to another vCenter Server.

Note: The following steps are to migrate a Nutanix cluster with vSphere Standard Switch (vSwitch). To migrate a
Nutanix cluster with vSphere Distributed Switch (vDS), see the VMware Documentation.

Procedure

1. Create a vSphere cluster in the vCenter Server where you want to migrate the Nutanix cluster. See Creating a
Nutanix Cluster in vCenter on page 41.

2. Configure HA, DRS, and EVC on the created vSphere cluster. See Nutanix Cluster Settings on page 43.

3. Unregister the Nutanix cluster from the source vCenter Server. See Unregistering a Cluster from the vCenter
Server on page 41.

4. Move the nodes from the source vCenter Server to the new vCenter Server.
See the VMware Documentation to know the process.

5. Register the Nutanix cluster to the new vCenter Server. See Registering a Cluster to vCenter Server on
page 39.

Storage I/O Control (SIOC)


SIOC controls the I/O usage of a virtual machine and gradually enforces the predefined I/O share levels. Nutanix
converged storage architecture does not require SIOC. Therefore, while mounting a storage container on an ESXi
host, the system disables SIOC in the statistics mode automatically.

Caution: While mounting a storage container on ESXi hosts running older versions (6.5 or below), the system enables
SIOC in the statistics mode by default. Nutanix recommends disabling SIOC because an enabled SIOC can cause the
following issues.

• The storage can become unavailable because the hosts repeatedly create and delete the .lck-XXXXXXXX
access files under the .iorm.sf subdirectory, located in the root directory of the storage container.
• Site Recovery Manager (SRM) failover and failback does not run efficiently.
• If you are using Metro Availability disaster recovery feature, activate and restore operations do not
work.

Note: For using Metro Availability disaster recovery feature, Nutanix recommends using an empty
storage container. Disable SIOC and delete all the files from the storage container that are related to
SIOC. For more information, see KB-3501.

Run the NCC health check (see KB-3358) to verify if SIOC and SIOC in statistics mode are disabled on storage
containers. If SIOC and SIOC in statistics mode are enabled on storage containers, disable them by performing the
procedure described in Disabling Storage I/O Control (SIOC) on a Container on page 56.
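
To run the NCC checks from any CVM, you can use the full health check suite, which includes the SIOC-related check
referenced in KB-3358 (the exact check name is listed in that article):
nutanix@cvm$ ncc health_checks run_all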

Disabling Storage I/O Control (SIOC) on a Container

About this task


Perform the following procedure to disable storage I/O statistics collection.

Procedure

1. Log on to vCenter with the web client.

2. Click the Storage view in the left pane.

3. Right-click the storage container under the Nutanix cluster and select Configure Storage I/O Control.
The properties for the storage container are displayed. By default, Storage I/O statistics collection is enabled.

a. Select the Disable Storage I/O Control and statistics collection option to disable SIOC.
b. Uncheck the Include I/O statistics for SDRS option.
c. Click OK.

NODE MANAGEMENT
This chapter describes the management tasks you can do on a Nutanix node.

Node Maintenance (ESXi)


Gracefully place a node into maintenance mode (a non-operational state) when you need to make changes to the
network configuration of a node, perform manual firmware upgrades or replacements, perform CVM maintenance,
or perform any other maintenance operations.

Entering and Exiting Maintenance Mode


With a minimum AOS release of 6.1.2, 6.5.1, or 6.6, you can place only one node at a time in maintenance mode
for each cluster. When a host is placed in maintenance mode, the CVM is placed in maintenance mode as part of the
node maintenance operation. The cluster marks the host as unschedulable so that no new VM instances are created
on it. When a node is placed in maintenance mode from the Prism web console, an attempt is made to evacuate VMs
from the host. If the evacuation attempt fails, the host remains in the "entering maintenance mode" state, where it
is marked unschedulable, waiting for user remediation.
Non-migratable VMs (for example, pinned or RF1 VMs that have affinity towards a specific node) are powered off,
while live-migratable or high availability (HA) VMs are moved from the original host to other hosts in the cluster.
After the node exits maintenance mode, all non-migratable guest VMs are powered on again and the live-migrated
VMs are automatically restored on the original host.

Note: VMs with CPU passthrough or PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not
migrated to other hosts in the cluster when a node undergoes maintenance. Click View these VMs link to view the list
of VMs that cannot be live-migrated.

See Putting a Node into Maintenance Mode (vSphere) on page 59 to place a node under maintenance.
You can also enter or exit a host under maintenance through the vCenter web client. See Putting the CVM and ESXi
Host in Maintenance Mode Using vCenter on page 70.

Exiting a Node from Maintenance Mode


See Exiting a Node from the Maintenance Mode (vSphere) on page 64 to remove a node from the
maintenance mode.

Viewing a Node under Maintenance Mode


See Viewing a Node that is in Maintenance Mode on page 62 to view the node under maintenance
mode.

Guest VM Status when Node under Maintenance Mode


See Guest VM Status when Node is in Maintenance Mode on page 67 to view the status of guest VMs
when a node is undergoing maintenance operations.

Best Practices and Recommendations


Nutanix strongly recommends using the Enter Maintenance Mode option to place a node under maintenance.

Known Issues and Limitations (ESXi)

• Maintenance operations initiated from the Prism web console (enter and exit node maintenance) are currently
supported on ESXi.

• Entering or exiting a node under maintenance using the vCenter for ESXi is not equivalent to entering or exiting
the node under maintenance from the Prism Element web console.
• You cannot exit the node from maintenance mode from Prism Element web console if the node is placed under
maintenance mode using vCenter (ESXi node). However, you can enter the node maintenance through the Prism
Element web console and exit the node maintenance using the vCenter (ESXi node).

Putting a Node into Maintenance Mode (vSphere)

Before you begin


Check the cluster status and resiliency before putting a node under maintenance. You can also verify the
status of the guest VMs. See Guest VM Status when Node is in Maintenance Mode on page 67 for
more information.

About this task


As the node enters the maintenance mode, the following high-level tasks are performed internally.

• The host initiates entering the maintenance mode.


• The HA VMs are live migrated.
• The pinned and RF1 VMs are powered-off.
• The CVM enters the maintenance mode.
• The CVM is shutdown.
• The host completes entering the maintenance mode.
For more information, see Guest VM Status when Node is in Maintenance Mode on page 67 to view the
status of the guest VMs.

Procedure

1. Log into the Prism Element web console.

2. On the home page, select Hardware from the drop-down menu.

3. Go to the Table > Host view.

4. Select the node that you want to put under maintenance.

5. Click the Enter Maintenance Mode option.

Figure 39: Enter Maintenance Mode Option

6. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next.

Figure 40: Host Maintenance Window (vCenter Credentials)

7. On the Host Maintenance window, select the Power off VMs that cannot migrate check box to enable the
Enter Maintenance Mode button.

Figure 41: Host Maintenance Window (Enter Maintenance Mode)

Note: VMs with CPU passthrough, PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not
migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the
list of VMs that cannot be live-migrated.

8. Click the Enter Maintenance Mode button.

• A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates
that the host is entering the maintenance mode.
• The revolving icon disappears and the Exit Maintenance Mode option is enabled after the node completely
enters the maintenance mode.

Figure 42: Enter Node Maintenance (On-going)


• You can also monitor the progress of the node maintenance operation through the newly created Host enter
maintenance and Enter maintenance mode tasks which appear in the task tray.

Note: In case of a node maintenance failure, certain roll-back operations are performed. For example, the CVM is
rebooted. But the live-migrated VMs are not restored to the original host.

What to do next
Once the maintenance activity is complete, you can perform any of the following.

• View the nodes under maintenance, see Viewing a Node that is in Maintenance Mode on page 62.
• View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode on page 67.
• Remove the node from the maintenance mode, see Exiting a Node from the Maintenance Mode (vSphere) on
page 64.

Viewing a Node that is in Maintenance Mode

About this task

Note: This procedure is the same for AHV and ESXi nodes.

Perform the following steps to view a node under maintenance.

Procedure

1. Log into the Prism Element web console.

2. On the home page, select Hardware from the drop-down menu.

3. Go to the Table > Host view.

4. Observe the icon along with a tool tip that appears beside the node which is under maintenance. You can also
view this icon in the host details view.

Figure 43: Example: Node under Maintenance (Table and Host Details View) in AHV

5. Alternatively, view the node under maintenance from the Hardware > Diagram view.

Figure 44: Example: Node under Maintenance (Diagram and Host Details View) in AHV

What to do next
You can:

• View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode on page 67.
• Remove the node from the maintenance mode, see Exiting a Node from the Maintenance Mode (vSphere) on
page 64.

Exiting a Node from the Maintenance Mode (vSphere)
After you perform any maintenance activity, exit the node from the maintenance mode.

About this task


As the node exits the maintenance mode, the following high-level tasks are performed internally.

• The host is taken out of maintenance.


• The CVM is powered on.
• The CVM is taken out of maintenance.
After the host exits the maintenance mode, the RF1 VMs continue to be powered on and the VMs migrate
to restore host locality.
For more information, see Guest VM Status when Node is in Maintenance Mode on page 67 to view the
status of the guest VMs.

Procedure

1. On the Prism web console home page, select Hardware from the drop-down menu.

2. Go to the Table > Host view.

3. Select the node which you intend to remove from the maintenance mode.

4. Click the Exit Maintenance Mode option.

Figure 45: Exit Maintenance Mode Option - Table View

Figure 46: Exit Maintenance Mode Option - Diagram View

5. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next.

Figure 47: Host Maintenance Window (vCenter Credentials)

6. On the Host Maintenance window, click the Exit Maintenance Mode button.

Figure 48: Host Maintenance Window (Enter Maintenance Mode)

• A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This
indicates that the host is exiting the maintenance mode.
• The revolving icon disappears and the Enter Maintenance Mode option is enabled after the node
completely exits the maintenance mode.
• You can also monitor the progress of the exit node maintenance operation through the newly created Host exit
maintenance and Exit maintenance mode tasks which appear in the task tray.

What to do next
You can:

• View the status of node under maintenance, see Viewing a Node that is in Maintenance Mode on page 62.
• View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode on page 67.

Guest VM Status when Node is in Maintenance Mode


The following scenarios demonstrate the behavior of three guest VM types - high availability (HA) VMs,
pinned VMs, and RF1 VMs - when a node enters and exits a maintenance operation. The HA VMs are VMs
that can live migrate across nodes if the host server goes down or reboots. The pinned VMs have the host
affinity set to a specific node. The RF1 VMs have affinity towards a specific node or a CVM. To view the
status of the guest VMs, go to VM > Table.

Note: The following scenarios are the same for AHV and ESXi nodes.

Scenario 1: Guest VMs before Node Entering Maintenance Mode


In this example, you can observe the status of the guest VMs on the node prior to the node entering the maintenance
mode. All the guest VMs are powered-on and reside on the same host.

Figure 49: Example: Original State of VM and Hosts in AHV

Scenario 2: Guest VMs during Node Maintenance Mode

• As the node enters the maintenance mode, the following high-level tasks are performed internally.
1. The host initiates entering the maintenance mode.
2. The HA VMs are live migrated.
3. The pinned and RF1 VMs are powered off.
4. The CVM enters the maintenance mode.
5. The CVM is shut down.
6. The host completes entering the maintenance mode.

Figure 50: Example: VM and Hosts before Entering Maintenance Mode

Scenario 3: Guest VMs after Node Exiting Maintenance Mode

• As the node exits the maintenance mode, the following high-level tasks are performed internally.
1. The CVM is powered on.
2. The CVM is taken out of maintenance.
3. The host is taken out of maintenance.
After the host exits the maintenance mode, the RF1 VMs continue to be powered on and the VMs migrate to
restore host locality.

Figure 51: Example: Original State of VM and Hosts in AHV

Nonconfigurable ESXi Components


The Nutanix manufacturing and installation processes, performed by running Foundation on the Nutanix nodes,
configure the following components. Do not modify any of these components except under the direction of
Nutanix Support.

Nutanix Software
Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix
cluster or render the Nutanix cluster inoperable.

• Local datastore name.
• Configuration and contents of any CVM (except memory configuration to enable certain features).

Important: Note the following important considerations about CVMs.

• Do not delete the Nutanix CVM.


• Do not take a snapshot of the CVM for backup.
• Do not rename, modify, or delete the admin and nutanix user accounts of the CVM.
• Do not create additional CVM user accounts.
Use the default accounts (admin or nutanix), or use sudo to elevate to the root account.
• Do not decrease CVM memory below recommended minimum amounts required for cluster and add-in
features.
Nutanix Cluster Checks (NCC), preupgrade cluster checks, and the AOS upgrade process detect and
monitor CVM memory.
• Nutanix does not support the usage of third-party storage on hosts that are part of Nutanix clusters.
Normal cluster operations might be affected if there are connectivity issues with the third-party storage
you attach to the hosts in a Nutanix cluster.
• Do not run any commands on a CVM that are not in the Nutanix documentation.

ESXi
Modifying any of the following ESXi settings can inadvertently constrain performance of your Nutanix cluster or
render the Nutanix cluster inoperable.

• NFS datastore settings


• VM swapfile location
• VM startup/shutdown order
• CVM name
• CVM virtual hardware configuration file (.vmx file)
• iSCSI software adapter settings
• Hardware settings, including passthrough HBA settings.

• vSwitchNutanix standard virtual switch


• vmk0 interface in Management Network port group
• SSH

Note: An SSH connection is necessary for various scenarios. For example, to establish connectivity with the ESXi
server through a control plane that does not depend on additional management systems or processes. The SSH
connection is also required to modify the networking and control paths in the case of a host failure to maintain
High Availability. For example, the CVM autopathing (Ha.py) requires an SSH connection. In case a local CVM
becomes unavailable, another CVM in the cluster performs the I/O operations over the 10GbE interface.

• Open host firewall ports

• CPU resource settings such as CPU reservation, limit, and shares of the CVM.

Caution: Do not use the Reset System Configuration option.

• ProductLocker symlink setting to point at the default datastore.


Do not change the /productLocker symlink to point at a non-local datastore.
Do not change the ProductLockerLocation advanced setting.

Putting the CVM and ESXi Host in Maintenance Mode Using vCenter
About this task
Nutanix recommends placing the CVM and ESXi host into maintenance mode while the Nutanix cluster
undergoes maintenance or patch installations.

Caution: Verify the data resiliency status of your Nutanix cluster. Ensure that the replication factor (RF) supports
putting the node in maintenance mode.

Procedure

1. Log on to vCenter with the web client.

2. If vSphere DRS is enabled on the Nutanix cluster, skip this step. If vSphere DRS is disabled, perform one of the
following.

» Manually migrate all the VMs except the CVM to another host in the Nutanix cluster.
» Shut down VMs other than the CVM that you do not want to migrate to another host.

3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.

4. In the Enter Maintenance Mode dialog box, check Move powered-off and suspended virtual
machines to other hosts in the cluster and click OK.

Note:
In certain rare conditions, even when DRS is enabled, some VMs do not automatically migrate due to
user-defined affinity rules or VM configuration settings. The VMs that do not migrate appear under
cluster DRS > Faults when a maintenance mode task is in progress. To address the faults, either
manually shut down those VMs or ensure the VMs can be migrated.

Caution: When you put the host in maintenance mode, the maintenance mode process powers down or migrates all
the VMs that are running on the host.

The host gets ready to go into maintenance mode, which prevents VMs from running on this host. DRS
automatically attempts to migrate all the VMs to another host in the Nutanix cluster.
The host enters maintenance mode after its CVM is shut down.
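
To confirm the state from the ESXi shell, a minimal check is the following; the command returns Enabled only after
the host has fully entered maintenance mode.
root@esx# esxcli system maintenanceMode get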

Shutting Down an ESXi Node in a Nutanix Cluster


Before you begin
Verify the data resiliency status of your Nutanix cluster. If the Nutanix cluster only has replication factor 2
(RF2), you can shut down only one node for each cluster. If an RF2 cluster has more than one node shut
down, shut down the entire cluster.

About this task
You can put the ESXi host into maintenance mode and shut it down either from the web client or from the command
line. For more information about shutting down a node from the command line, see Shutting Down an ESXi Node
in a Nutanix Cluster (vSphere Command Line) on page 71.

Procedure

1. Log on to vCenter with the web client.

2. Put the Nutanix node in the maintenance mode. For more information, see Putting the CVM and ESXi Host in
Maintenance Mode Using vCenter on page 70.

Note: If DRS is not enabled, manually migrate or shut down all the VMs excluding the CVM. Even when DRS is
enabled, some VMs might not migrate automatically because of a VM configuration option that is not available on
the target host.

3. Right-click the host and select Shut Down.


Wait until the vCenter displays that the host is not responding, which may take several minutes. If you are logged
on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line)

Before you begin
Verify the data resiliency status of your Nutanix cluster. If the Nutanix cluster only has replication factor 2
(RF2), you can shut down only one node for each cluster. If an RF2 cluster has more than one node shut
down, shut down the entire cluster.

About this task

Procedure

1. Log on to the CVM with SSH and shut down the CVM.
nutanix@cvm$ cvm_shutdown -P now

2. Log on to another CVM in the Nutanix cluster with SSH.

3. Shut down the host.
nutanix@cvm$ ~/serviceability/bin/esx-enter-maintenance-mode -s cvm_ip_addr
Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.
If successful, this command returns no output. If it fails with a message like the following, VMs are probably still
running on the host.
CRITICAL esx-enter-maintenance-mode:42 Command vim-cmd hostsvc/maintenance_mode_enter
failed with ret=-1
Ensure that all VMs are shut down or moved to another host and try again before proceeding.
nutanix@cvm$ ~/serviceability/bin/esx-shutdown -s cvm_ip_addr
Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.
Alternatively, you can put the ESXi host into maintenance mode and shut it down using the vSphere web client.
For more information, see Shutting Down an ESXi Node in a Nutanix Cluster on page 70.
If the host shuts down, a message like the following is displayed.
INFO esx-shutdown:67 Please verify if ESX was successfully shut down using
ping hypervisor_ip_addr
hypervisor_ip_addr is the IP address of the ESXi host.

4. Confirm that the ESXi host has shut down.


nutanix@cvm$ ping hypervisor_ip_addr
Replace hypervisor_ip_addr with the IP address of the ESXi host.
If no ping packets are answered, the ESXi host has shut down.

Starting an ESXi Node in a Nutanix Cluster


About this task
You can start an ESXi host either from the web client or from the command line. For more information about starting
a node from the command line, see Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line) on
page 74.

Procedure

1. If the node is off, turn it on by pressing the power button on the front. Otherwise, proceed to the next step.

2. Log on to vCenter (or to the node if vCenter is not running) with the web client.

3. Right-click the ESXi host and select Exit Maintenance Mode.

4. Right-click the CVM and select Power > Power on.


Wait approximately 5 minutes for all services to start on the CVM.

5. Log on to another CVM in the Nutanix cluster with SSH.

6. Confirm that the Nutanix cluster services are running on the CVM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up

... ...
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.

7. Right-click the ESXi host in the web client and select Rescan for Datastores. Confirm that all Nutanix
datastores are available.

8. Verify that the status of all services on all the CVMs are Up.
nutanix@cvm$ cluster status
If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix
cluster.
CVM:host IP-Address Up
Zeus UP [9935, 9980, 9981, 9994, 10015,
10037]
Scavenger UP [25880, 26061, 26062]
Xmount UP [21170, 21208]
SysStatCollector UP [22272, 22330, 22331]
IkatProxy UP [23213, 23262]
IkatControlPlane UP [23487, 23565]
SSLTerminator UP [23490, 23620]
SecureFileSync UP [23496, 23645, 23646]
Medusa UP [23912, 23944, 23945, 23946, 24176]
DynamicRingChanger UP [24314, 24404, 24405, 24558]
Pithos UP [24317, 24555, 24556, 24593]
InsightsDB UP [24322, 24472, 24473, 24583]
Athena UP [24329, 24504, 24505]
Mercury UP [24338, 24515, 24516, 24614]
Mantle UP [24344, 24572, 24573, 24634]
VipMonitor UP [18387, 18464, 18465, 18466, 18474]
Stargate UP [24993, 25032]
InsightsDataTransfer UP [25258, 25348, 25349, 25388, 25391,
25393, 25396]
Ergon UP [25263, 25414, 25415]
Cerebro UP [25272, 25462, 25464, 25581]
Chronos UP [25281, 25488, 25489, 25547]
Curator UP [25294, 25528, 25529, 25585]
Prism UP [25718, 25801, 25802, 25899, 25901,
25906, 25941, 25942]
CIM UP [25721, 25829, 25830, 25856]
AlertManager UP [25727, 25862, 25863, 25990]
Arithmos UP [25737, 25896, 25897, 26040]
Catalog UP [25749, 25989, 25991]
Acropolis UP [26011, 26118, 26119]
Uhura UP [26037, 26165, 26166]
Snmp UP [26057, 26214, 26215]
NutanixGuestTools UP [26105, 26282, 26283, 26299]
MinervaCVM UP [27343, 27465, 27466, 27730]
ClusterConfig UP [27358, 27509, 27510]
Aequitas UP [27368, 27567, 27568, 27600]
APLOSEngine UP [27399, 27580, 27581]
APLOS UP [27853, 27946, 27947]
Lazan UP [27865, 27997, 27999]
Delphi UP [27880, 28058, 28060]
Flow UP [27896, 28121, 28124]
Anduril UP [27913, 28143, 28145]
XTrim UP [27956, 28171, 28172]
ClusterHealth UP [7102, 7103, 27995, 28209,28495,
28496, 28503, 28510,

28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645,
28646, 28648, 28792,
28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135,
29142, 29146, 29150,
29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]

Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line)


About this task
You can start an ESXi host either from the command line or from the web client. For more information about starting
a node from the web client, see Starting an ESXi Node in a Nutanix Cluster on page 72.

Procedure

1. Log on to a running CVM in the Nutanix cluster with SSH.

2. Start the CVM.


nutanix@cvm$ ~/serviceability/bin/esx-exit-maintenance-mode -s cvm_ip_addr
Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.
If successful, this command produces no output. If it fails, wait 5 minutes and try again.
nutanix@cvm$ ~/serviceability/bin/esx-start-cvm -s cvm_ip_addr
Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.
If the CVM starts, a message like the following is displayed.
INFO esx-start-cvm:67 CVM started successfully. Please verify using ping cvm_ip_addr
cvm_ip_addr is the IP address of the CVM on the ESXi host.

After starting, the CVM restarts once. Wait three to four minutes before you ping the CVM.
Alternatively, you can take the ESXi host out of maintenance mode and start the CVM using the web client. For
more information, see Starting an ESXi Node in a Nutanix Cluster on page 72

3. Verify that the status of all services on all the CVMs are Up.
nutanix@cvm$ cluster status
If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix
cluster.
CVM:host IP-Address Up
Zeus UP [9935, 9980, 9981, 9994, 10015,
10037]
Scavenger UP [25880, 26061, 26062]
Xmount UP [21170, 21208]
SysStatCollector UP [22272, 22330, 22331]
IkatProxy UP [23213, 23262]
IkatControlPlane UP [23487, 23565]
SSLTerminator UP [23490, 23620]
SecureFileSync UP [23496, 23645, 23646]
Medusa UP [23912, 23944, 23945, 23946, 24176]
DynamicRingChanger UP [24314, 24404, 24405, 24558]
Pithos UP [24317, 24555, 24556, 24593]
InsightsDB UP [24322, 24472, 24473, 24583]
Athena UP [24329, 24504, 24505]
Mercury UP [24338, 24515, 24516, 24614]
Mantle UP [24344, 24572, 24573, 24634]

VipMonitor UP [18387, 18464, 18465, 18466, 18474]
Stargate UP [24993, 25032]
InsightsDataTransfer UP [25258, 25348, 25349, 25388, 25391,
25393, 25396]
Ergon UP [25263, 25414, 25415]
Cerebro UP [25272, 25462, 25464, 25581]
Chronos UP [25281, 25488, 25489, 25547]
Curator UP [25294, 25528, 25529, 25585]
Prism UP [25718, 25801, 25802, 25899, 25901,
25906, 25941, 25942]
CIM UP [25721, 25829, 25830, 25856]
AlertManager UP [25727, 25862, 25863, 25990]
Arithmos UP [25737, 25896, 25897, 26040]
Catalog UP [25749, 25989, 25991]
Acropolis UP [26011, 26118, 26119]
Uhura UP [26037, 26165, 26166]
Snmp UP [26057, 26214, 26215]
NutanixGuestTools UP [26105, 26282, 26283, 26299]
MinervaCVM UP [27343, 27465, 27466, 27730]
ClusterConfig UP [27358, 27509, 27510]
Aequitas UP [27368, 27567, 27568, 27600]
APLOSEngine UP [27399, 27580, 27581]
APLOS UP [27853, 27946, 27947]
Lazan UP [27865, 27997, 27999]
Delphi UP [27880, 28058, 28060]
Flow UP [27896, 28121, 28124]
Anduril UP [27913, 28143, 28145]
XTrim UP [27956, 28171, 28172]
ClusterHealth UP [7102, 7103, 27995, 28209,28495,
28496, 28503, 28510,
28573, 28574, 28577, 28594, 28595, 28597, 28598, 28602, 28603, 28604, 28607, 28645,
28646, 28648, 28792,
28793, 28837, 28838, 28840, 28841, 28858, 28859, 29123, 29124, 29127, 29133, 29135,
29142, 29146, 29150,
29161, 29162, 29163, 29179, 29187, 29219, 29268, 29273]

4. Verify the storage.

a. Log on to the ESXi host with SSH.


b. Rescan for datastores.
root@esx# esxcli storage core adapter rescan --all

c. Confirm that cluster VMFS datastores, if any, are available.


root@esx# esxcfg-scsidevs -m | awk '{print $5}'
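The esxcfg-scsidevs command lists VMFS datastores only. Because Nutanix storage containers are presented to ESXi as NFS datastores, you can optionally confirm that they are mounted as well; this check is a sketch using the standard esxcli NFS namespace:
root@esx# esxcli storage nfs list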

Restarting an ESXi Node using CLI


Before you begin
Shut down the guest VMs, including vCenter, that are running on the node, or move them to other nodes in
the Nutanix cluster.

About this task
This procedure places the host in maintenance mode, shuts down the CVM, and restarts the host from the vSphere web client.
Procedure

1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the web client.



2. Right-click the host and select Maintenance mode > Enter Maintenance Mode.
In the Confirm Maintenance Mode dialog box, click OK.
The host is placed in maintenance mode, which prevents VMs from running on the host.

Note: The host does not enter maintenance mode until the CVM is shut down.

3. Log on to the CVM with SSH and shut down the CVM.
nutanix@cvm$ cvm_shutdown -P now

Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, which ensures that
the cluster is aware that the CVM is unavailable.

4. Right-click the node and select Power > Reboot.


Wait until vCenter shows that the host is not responding and then is responding again, which takes several
minutes.
If you are logged on to the ESXi host rather than to vCenter, the web client disconnects when the host shuts down.

5. Right-click the ESXi host and select Exit Maintenance Mode.

6. Right-click the CVM and select Power > Power on.


Wait approximately 5 minutes for all services to start on the CVM.

7. Log on to the CVM with SSH.

8. Confirm that the Nutanix cluster services are running on the CVM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
... ...
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.
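As an optional cluster-wide spot check, you can filter the full service listing for anything that is not reported as UP. The exact output format can vary between AOS releases, so treat this as a convenience rather than a definitive health check:
nutanix@cvm$ cluster status | grep -v UP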

9. Right-click the ESXi host in the web client and select Rescan for Datastores. Confirm that all Nutanix
datastores are available.

Rebooting an ESXi Node in a Nutanix Cluster


About this task
The Request Reboot operation in the Prism web console gracefully restarts the selected nodes one after the other.

Note: The host reboot is a graceful restart workflow. All user VMs are migrated to another host when you perform a
reboot operation for a host, so the reboot operation has no impact on user workloads.

Procedure

To reboot the nodes in the cluster, perform the following steps:

1. Log on to the Prism Element web console.



2. Click the gear icon in the main menu and then select Reboot in the Settings page.

3. In the Request Reboot window, select the nodes you want to restart, and click Reboot.

Figure 52: Request Reboot of ESXi Node

A progress bar is displayed that indicates the progress of the restart of each node.
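If you want to confirm from the command line that each host actually restarted, and if your AOS release includes the hostssh utility on the CVM, you can run a command against every host in the cluster as an optional convenience check:
nutanix@cvm$ hostssh uptime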

Changing an ESXi Node Name


After running a bare-metal Foundation, you can change the host (node) name from the command line or by
using the vSphere web client.
To change the hostname, see the VMware documentation.
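For reference only, and assuming a recent ESXi release, the hostname can typically also be set from the host shell as shown below; new_host_name is a placeholder. Follow the VMware documentation for your specific ESXi version.
root@esx# esxcli system hostname set --host=new_host_name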

Changing an ESXi Node Password


Although it is not required for the root user to have the same password on all hosts (nodes), doing so
makes cluster management and support much easier. If you do select a different password for one or more
hosts, make sure to note the password for each host.
To change the host password, see the VMware documentation.
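For reference only, and again deferring to the VMware documentation for your ESXi version, the root password can usually be changed from the host shell:
root@esx# passwd root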

Changing the CVM Memory Configuration (ESXi)

About this task


You can increase the memory reserved for each CVM in your Nutanix cluster by using the 1-click CVM Memory
Upgrade option available from the Prism Element web console.
Increase memory size depending on the workload type or to enable certain AOS features. For more information about
CVM memory sizing recommendations and instructions about how to increase the CVM memory, see Increasing the
Controller VM Memory Size in the Prism Web Console Guide.



VM MANAGEMENT
For the list of supported VMs, see Compatibility and Interoperability Matrix.

VM Management Using Prism Central


You can create and manage a VM on your ESXi cluster from Prism Central. For more information, see Creating a
VM through Prism Central (ESXi) on page 78 and Managing a VM through Prism Central (ESXi) on
page 83.

Creating a VM through Prism Central (ESXi)


You can create virtual machines (VMs) in ESXi clusters through Prism Central.

Before you begin


Ensure that the following prerequisites are met before you create a VM in an ESXi cluster:

• All the requirements, rules, and guidelines are considered, and the limitations are observed. For details, see
vCenter Server Integration information in the Prism Central Infrastructure Guide.
• The vCenter Server is registered with your cluster. For more information about how to register a vCenter Server,
see vCenter Server Integration in the Prism Central Infrastructure Guide.

Procedure

To create a VM in an ESXi cluster, perform the following steps:

1. Log in to Prism Central.

2. Select the Infrastructure application from the Application Switcher, and navigate to Compute & Storage
> VMs from the Navigation Bar. For information about the Navigation Bar, see Application-Specific
Navigation Bar.
The List tab is displayed by default with all the VMs across the registered clusters in the Nutanix environment.
For information about how to access the list of non-Nutanix VMs managed by an external vCenter, see the VMs
Summary View information in the Prism Central Infrastructure Guide.

3. Click Create VM, and enter the following information in the Configuration step:

a. Name: Enter a name for the VM.


b. Description (optional): Enter a description for the VM.
c. Cluster: From the dropdown list, select the target ESXi cluster on which you intend to create the VM.
d. Number of VMs: Enter the number of VMs you intend to create. The created VM names are suffixed
sequentially.
e. vCPU(s): Enter the number of virtual CPUs to allocate to this VM.
f. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
g. Memory: Enter the amount of memory (in GiBs) to allocate to this VM.
The following figure shows the Configuration step:

Figure 53: Create VM - Configuration

4. In the Resources step, perform the following actions to attach a Disk to the VM:
Disks: Click Attach Disk, and enter the following information:
The following figure shows the Attach Disk window:

Figure 54: Attach Disk Window

a. Type: Select the type of storage device, Disk or CD-ROM, from the dropdown list.
b. Operation: Specify the device contents from the dropdown list.

• Select Clone from NDSF file to copy any file from the cluster that can be used as an image onto the disk.

• Select Empty CD-ROM to create a blank CD-ROM device. A CD-ROM device is needed when you
intend to provide a system image from CD-ROM.

Note: The Empty CD-ROM option is available only when CD-ROM is selected as the storage device in
the Type field.

• Select Allocate on Storage Container to allocate space without specifying an image. Selecting this
option means you are allocating space only. You have to provide a system image later from a CD-ROM or
other source.

Note: The Allocate on Storage Container option is available only when Disk is selected as the storage
device in the Type field.

• Select Clone from Image to copy an image that you have imported by using the image service feature onto
the disk.
c. If you select:

• Allocate on Storage Container in the Operation field, the system prompts you to specify the
Storage Container.
• Clone from Image in the Operation field, the system prompts you to specify the Image.
d. Enter one of the following, based on your selection in the Operation field:

• Storage Container - Select the appropriate storage container.


• Image - Select the image you created using the image service feature. For information about Image
management, see Image Management information in the Prism Central Infrastructure Guide.

Note: If the image you created does not appear in the list, see KB-4892. The image transfer can trigger
image bandwidth throttling if a bandwidth throttling policy is associated with the image. For more
information, see Bandwidth Throttling Policies information in the Prism Central Infrastructure Guide.

e. Bus Type: Select the bus type from the dropdown list.
The options displayed in the dropdown list vary based on the storage device selected in the Type field.
If the storage device type is:

• Disk - The available choices are SCSI, SATA, or PCI.

• CD-ROM - The available choices are IDE or SATA.
f. Path: Enter the path to the desired system image.
g. Capacity: Enter the disk size in GiB.
h. When all the field entries are correct, click Save to attach the disk to the VM and return to the Create VM
window.
Repeat this step to attach additional devices to the VM.
The following figure shows the Resources step:

Figure 55: Create VM Window (Resources)

5. In the Resources step, perform the following actions to create a network interface for the VM:
Networks: Click Attach to Subnet. The Attach to Subnet window appears.

Figure 56: Attach to Subnet Window

a. Subnet: Select the target subnet from the dropdown list.


The list includes all the defined networks. For more information about how to perform the network
configuration, see Creating VLAN Connections information in Prism Central Infrastructure Guide.
b. Network Adapter Type: Select the network adapter type from the dropdown list.
For information about the list of supported adapter types, see External vCenter Server Integration
information in the Prism Central Infrastructure Guide.
c. Network Connection State: Select the state for the network after VM creation:

• Connected - If the VM needs to be connected to the network to operate.


• Disconnected - If the VM needs to be in disconnected state after creation.
d. Click Save to create a network interface for the VM, and return to the Create VM window.
Repeat this step to create additional network interfaces for the VM.

6. In the Management step, perform the following actions to define categories and the guest OS:

a. Categories: Search for the category to be assigned to the VM. The policies associated with the category
value are assigned to the VM.
b. Guest OS: Type and select the guest operating system.
The guest operating system that you select affects the supported devices and number of virtual CPUs available
for the virtual machine. The Create VM wizard does not install the guest operating system. For information
about the list of supported operating systems, see External vCenter Server Integration information in the
Prism Central Infrastructure Guide.

Figure 57: Create VM Window (Management)

7. In the Review step, when all the field entries are correct, click Create VM to create the VM, and close the
Create VM window.
The new VM appears in the VMs Summary page and List page.

Managing a VM through Prism Central (ESXi)


This section describes how to manage VMs in an ESXi cluster through Prism Central in a Nutanix
environment.

About this task


The procedure to manage VMs in an ESXi cluster in a Nutanix environment is the same as for an AHV cluster;
however, the fields can vary when you create or manage a VM in an ESXi cluster. For information about the
fields, see the Creating a VM through Prism Central (ESXi) information in the Prism Central Infrastructure Guide.

Note: To manage a non-Nutanix VM on an external vCenter, you can use playbooks. For more information, see the VMs
Summary View information in the Prism Central Infrastructure Guide.

Procedure

To manage a VM in an ESXi cluster in a Nutanix environment:

• Perform the procedure and actions as described in Managing a VM through Prism Central (AHV) in Prism
Central Infrastructure Guide.

Note: You can perform only those operations for which you have permissions from the admin.

VM Management using Prism Element


You can create and manage a VM on your ESXi cluster from Prism Element. For more information, see Creating a VM
(ESXi) on page 83 and Managing a VM (ESXi) on page 86.

Creating a VM (ESXi)
In ESXi clusters, you can create a new virtual machine (VM) through the web console.

Before you begin

• See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism
Web Console Guide before proceeding.
• Register the vCenter Server with your cluster. For more information, see Registering a Cluster to vCenter
Server on page 39.

About this task


When creating a VM, you can configure all of its components, such as number of vCPUs and memory, but
you cannot attach a volume group to the VM.
To create a VM, do the following:

Procedure

1. Log in to Prism Element web console.

2. In the VM dashboard (see VM Dashboard), click the Create VM button.


The Create VM dialog box appears.

3. Do the following in the indicated fields:

a. Name: Enter a name for the VM.


b. Description (optional): Enter a description for the VM.
c. Guest OS: Type and select the guest operating system.
The guest operating system that you select affects the supported devices and number of virtual CPUs available
for the virtual machine. The Create VM wizard does not install the guest operating system. For information
about the list of supported operating systems, see VM Management through Prism Element (ESXi) in the
Prism Web Console Guide.
d. vCPU(s): Enter the number of virtual CPUs to allocate to this VM.
e. Number of Cores per vCPU: Enter the number of cores assigned to each virtual CPU.
f. Memory: Enter the amount of memory (in GiBs) to allocate to this VM.

4. To attach a disk to the VM, click the Add New Disk button.
The Add Disks dialog box appears.

Figure 58: Add Disk Dialog Box

Do the following in the indicated fields:

a. Type: Select the type of storage device, DISK or CD-ROM, from the pull-down list.
The following fields and options vary depending on whether you choose DISK or CD-ROM.
b. Operation: Specify the device contents from the pull-down list.

• Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the disk.
• Select Allocate on Storage Container to allocate space without specifying an image. (This option
appears only when DISK is selected in the previous field.) Selecting this option means you are allocating
space only. You have to provide a system image later from a CD-ROM or other source.
c. Bus Type: Select the bus type from the pull-down list. The choices are IDE or SCSI.
d. ADSF Path: Enter the path to the desired system image.
This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the
path name as /storage_container_name/vmdk_name.vmdk. For example, to clone an image from myvm-
flat.vmdk in a storage container named crt1, enter /crt1/myvm-flat.vmdk. When you type the storage
container name (/storage_container_name/), a list of the VMDK files in that storage container appears
(assuming one or more VMDK files have previously been copied to that storage container).

Note: Make sure you are copying from a flat file.

e. Storage Container: Select the storage container to use from the pull-down list.
This field appears only when Allocate on Storage Container is selected. The list includes all storage
containers created for this cluster.
f. Size: Enter the disk size in GiBs.
g. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the
Create VM dialog box.
h. Repeat this step to attach more devices to the VM.

5. To create a network interface for the VM, click the Add New NIC button.
The Create NIC dialog box appears. Do the following in the indicated fields:

a. VLAN Name: Select the target virtual LAN from the pull-down list.
The list includes all defined networks. For more information, see Network Configuration for VM Interfaces
in the Prism Web Console Guide.
b. Network Adapter Type: Select the network adapter type from the pull-down list.
For information about the list of supported adapter types, see VM Management through Prism Element
(ESXi) in the Prism Element Web Console Guide.
c. Network UUID: This is a read-only field that displays the network UUID.
d. Network Address/Prefix: This is a read-only field that displays the network IP address and prefix.
e. When all the field entries are correct, click the Add button to create a network interface for the VM and return
to the Create VM dialog box.
f. Repeat this step to create more network interfaces for the VM.

6. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog
box.
The new VM appears in the VM table view. For more information, see VM Table View in the Prism Element
Web Console Guide.

Managing a VM (ESXi)
You can use the web console to manage virtual machines (VMs) in the ESXi clusters.

Before you begin

• See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism
Web Console Guide before proceeding.
• Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering a
Cluster to vCenter Server on page 39.

About this task


After creating a VM, you can use the web console to manage guest tools, power operations, suspend, launch a VM
console window, update the VM configuration, clone the VM, or delete the VM. To accomplish one or more of these
tasks, do the following:

Note: Your available options depend on the VM status, type, and permissions. Unavailable options are grayed out.

Procedure

1. Log in to Prism Element web console.

2. In the VM dashboard (see VM Dashboard), click the Table view.

3. Select the target VM in the table (top section of screen).
The summary line (middle of screen) displays the VM name with a set of relevant action links on the right. You
can also right-click on a VM to select a relevant action.
The possible actions are Manage Guest Tools, Launch Console, Power on (or Power off actions),
Suspend (or Resume), Clone, Update, and Delete. The following steps describe how to perform each
action.

Figure 59: VM Action Links

4. To manage guest tools, click Manage Guest Tools.
You can also enable NGT applications (self-service restore, volume snapshot service, and application-consistent
snapshots) as part of managing guest tools.

a. Select the Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
b. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
Ensure that the VM has at least one empty IDE CD-ROM or SATA slot to attach the ISO.
c. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR) check box.
The self-service restore feature is enabled on the VM. The guest VM administrator can restore the desired file
or files from the VM. For more information about the self-service restore feature, see Self-Service Restore
in the Data Protection and Recovery with Prism Element guide.

d. After you select the Enable Nutanix Guest Tools check box, the VSS and application-consistent snapshot
feature is enabled by default.
After this feature is enabled, the Nutanix native in-guest VmQuiesced Snapshot Service (VSS) agent is used to
take application-consistent snapshots for all the VMs that support VSS. This mechanism takes application-
consistent snapshots without any VM stuns (temporarily unresponsive VMs) and also enables third-party
backup providers like Commvault and Rubrik to take application-consistent snapshots on the Nutanix platform
in a hypervisor-agnostic manner. For more information, see Conditions for Application-consistent
Snapshots in the Data Protection and Recovery with Prism Element guide.

e. To mount VMware guest tools, select the Mount VMware Guest Tools check box.
The VMware guest tools are mounted on the VM.

Note: You can mount both VMware guest tools and Nutanix Guest Tools at the same time on a particular
VM provided the VM has sufficient empty CD-ROM slots.

f. Click Submit.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A
CD with volume label NUTANIX_TOOLS gets attached to the VM.

Note:

• If you clone a VM, by default NGT is not enabled on the cloned VM. If the cloned VM is
powered off, enable NGT from the UI and start the VM. If the cloned VM is powered on, enable
NGT from the UI and restart the Nutanix Guest Agent service.
• If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and
Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide.

If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the
following nCLI command.
ncli> ngt mount vm-id=virtual_machine_id
For example, to mount the NGT on the VM with
VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the
following command.
ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
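If you do not know the VM ID, one way to look it up, assuming your nCLI version exposes the ngt namespace as shown here, is to list the NGT entities and note the VM Id value for the VM in question:
ncli> ngt list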

Caution: In AOS 4.6, for powered-on Linux VMs on AHV, ensure that the NGT ISO is ejected or
unmounted within the guest VM before disabling NGT by using the web console. This issue is specific to AOS 4.6
and does not occur in AOS 4.6.x or later releases.

Note: If you created the NGT ISO CD-ROMs prior to AOS 4.6, the NGT functionality does not work even
if you upgrade your cluster, because the REST APIs have been disabled. You must unmount the ISO, remount
the ISO, install the NGT software again, and then upgrade to AOS 4.6 or a later version.

5. To launch a VM console window, click the Launch Console action link.


This opens a virtual network computing (VNC) client and displays the console in a new tab or window. This
option is available only when the VM is powered on. The VM power options that you access from the Power
Off Actions action link below the VM table can also be accessed from the VNC console window. To access the
VM power options, click the Power button at the top-right corner of the console window.

Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser
is Google Chrome. (Firefox typically works best.)

6. To start (or shut down) the VM, click the Power on (or Power off) action link.
Power on begins immediately. If you want to shut down the VMs, you are prompted to select one of the
following options:

• Power Off. Hypervisor performs a hard shut down action on the VM.
• Reset. Hypervisor performs an ACPI reset action through the BIOS on the VM.
• Guest Shutdown. Operating system of the VM performs a graceful shutdown.
• Guest Reboot. Operating system of the VM performs a graceful restart.

Note: The Guest Shutdown and Guest Reboot options are available only when VMware guest tools are
installed.

7. To pause (or resume) the VM, click the Suspend (or Resume) action link. This option is available only when
the VM is powered on.

8. To clone the VM, click the Clone action link.


This displays the Clone VM dialog box, which includes the same fields as the Create VM dialog box. A cloned
VM inherits most of the configuration (except the name) of the source VM. Enter a name for the clone and
then click the Save button to create the clone. You can optionally override some of the configurations before
clicking the Save button. For example, you can override the number of vCPUs, memory size, boot priority,
NICs, or the guest customization.

Note:

• You can clone up to 250 VMs at a time.


• In the Clone window, you cannot update the disks.

9. To modify the VM configuration, click the Update action link.
The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Modify the
configuration as needed (see Creating a VM (ESXi) on page 83), and in addition you can enable Flash
Mode for the VM.

Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space associated with
that vDisk is not reclaimed unless you also delete the VM snapshots.

a. Click the Enable Flash Mode check box.

» After you enable this feature on the VM, the status is updated in the VM table view. To view the status
of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in the VM table
view.
» You can disable the Flash Mode feature for individual virtual disks. To update the Flash Mode for
individual virtual disks, click the update disk icon in the Disks pane and deselect the Enable Flash
Mode check box.

Figure 62: Update VM Resources - VM Disk Flash Mode

10. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the
VM.
The deleted VM disappears from the list of VMs in the table. You can also delete a VM that is already powered
on.

vDisk Provisioning Types in VMware with Nutanix Storage


You can specify the vDisk provisioning policy when you perform certain VM management operations like creating a
VM, migrating a VM, or cloning a VM.

Traditionally, a vDisk is provisioned either with all of its space allocated up front (thick provisioning) or with space
allocated on an as-needed basis (thin provisioning). Thick disks are formatted using either the lazy-zeroed or the
eager-zeroed method.
On traditional storage systems, thick eager-zeroed disks provide the best performance of the three provisioning types,
thick lazy-zeroed disks provide the second best, and thin disks provide the least. However, this does not apply to the
modern storage architecture used in Nutanix systems.
Nutanix uses a thick Virtual Machine Disk (VMDK) only to reserve storage space, using the vStorage APIs for Array
Integration (VAAI) reserve space API.
On a Nutanix system, there is no performance difference between thin and thick disks, so a thick eager-zeroed virtual
disk has no performance benefit over a thin virtual disk. Whether you configure a thin or a thick disk, the resulting
disk behaves the same (despite the configuration differences).

Note: A thick-disk reservation only reserves disk space; there is no performance reason to provision a thick VMDK on
Nutanix. Even when a thick disk is provisioned on a Nutanix storage container, no disk space is allocated to write
zeros, so there is generally no need to provision a thick disk.

When using the up-to-date VAAI for cloning operations, the following behavior is expected:

• When cloning any type of disk format (thin, thick lazy zeroed or thick eager zeroed) to the same Nutanix
datastore, the resulting VM will have a thin disk regardless of the explicit choice of a disk format in the vSphere
client.
Nutanix uses a thin-provisioned disk because a thin disk performs the same as a thick disk in the system, and thin
provisioning prevents disk space from being wasted. In the cloning scenario, Nutanix does not carry the reservation
property from the source to the destination when creating a fast clone on the same datastore. This prevents space
wastage due to unnecessary reservations.
• When cloning a VM to a different datastore, the destination VM will have the disk format that you specified in the
vSphere client.

Important: A thick disk is shown as thick in ESXi, while within NDFS (Nutanix Distributed File System) it is
stored as a thin disk with an extra configuration field.

Nutanix recommends using thin disks over any other disk type.
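For illustration only, the following sketch shows how a thin-provisioned VMDK could be created manually from the ESXi shell; the datastore, directory, and file names are placeholders, and in practice the vSphere client or Prism handles disk creation for you:
root@esx# vmkfstools -c 100G -d thin /vmfs/volumes/datastore_name/vm_name/vm_name.vmdk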

VM Migration
You can migrate a VM to an ESXi host in a Nutanix cluster. Migration is usually done in the following cases:

• Migrating VMs from an existing storage platform to Nutanix.

• Keeping VMs running during a disruptive upgrade or other downtime of a Nutanix cluster.

When migrating VMs between Nutanix clusters running vSphere, the source host and NFS datastore are the ones
presently running the VM, and the target host and NFS datastore are the ones where the VM runs after migration. The
target ESXi host and datastore must be part of a Nutanix cluster.
To accomplish this migration, you have to mount the NFS datastores from the target on the source. After the
migration is complete, you must unmount the datastores and block access.
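As a sketch of the mount and unmount steps, where container_name and target_cluster_virtual_ip are placeholders for your environment and the source host IP addresses are assumed to have been added to the target cluster filesystem allowlist, the commands on the source ESXi host might look like the following:
root@esx# esxcli storage nfs add -H target_cluster_virtual_ip -s /container_name -v container_name
root@esx# esxcli storage nfs remove -v container_name
Run the remove command only after the migration is complete and the datastore is no longer needed on the source host.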

Migrating a VM to Another Nutanix Cluster

Before you begin


Before migrating a VM to another Nutanix cluster running vSphere, verify that you have provisioned the
target Nutanix environment.

About this task
The shared storage feature in vSphere allows you to move both compute and storage resources from the
source legacy environment to the target Nutanix environment at the same time without disruption. This
feature also removes the need to configure any filesystem allowlists on Nutanix.
You can use the shared storage feature through the migration wizard in the web client.

Procedure

1. Log on to vCenter with the web client.

2. Select the VM that you want to migrate.

3. Right-click the VM and select Migrate.

4. Under Select Migration Type, select Change both compute resource and storage.

5. Select Compute Resource and then Storage and click Next.


If necessary, change the disk format to the one that you want to use during the migration process.

6. Select a destination network for all VM network adapters and click Next.

7. Click Finish.
Wait for the migration process to complete. The process performs the storage vMotion first, and then creates a
temporary storage network over vmk0 for the period when the disk files are on Nutanix.

Cloning a VM
About this task
To clone a VM, you must enable the Nutanix VAAI plug-in. For steps to enable and verify the Nutanix VAAI plug-in,
see KB-1868.
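As a quick sketch of the verification step (KB-1868 has the authoritative procedure, and the exact VIB name can vary by release), you can list the installed VIBs on a host and filter for the VAAI plug-in:
root@esx# esxcli software vib list | grep -i vaai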

Procedure

1. Log on to vCenter with the web client.

2. Right-click the VM and select Clone.

3. Follow the wizard to enter a name for the clone, select a cluster, and select a host.

4. Select the datastore that contains source VM and click Next.

Note: If you choose a datastore other than the one that contains the source VM, the clone operation uses the
VMware implementation and not the Nutanix VAAI plug-in.

5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.

6. Click Finish.

vStorage APIs for Array Integration


To improve the vSphere cloning process, Nutanix provides a vStorage API for array integration (VAAI) plug-in. This
plug-in is installed by default during the Nutanix factory process.
Without the Nutanix VAAI plug-in, the process of creating a full clone takes a significant amount of time because all
the data that comprises a VM is duplicated. This duplication also results in an increase in storage consumption.

The Nutanix VAAI plug-in efficiently makes full clones without reserving space for the clone. Read requests for
blocks shared between parent and clone are sent to the original vDisk that was created for the parent VM. As the
clone VM writes new blocks, the Nutanix file system allocates storage for those blocks. This data management occurs
completely at the storage layer, so the ESXi host sees a single file with the full capacity that was allocated when the
clone was created.

VSPHERE ESXI HARDENING SETTINGS
Configure the following settings in /etc/ssh/sshd_config to harden an ESXi hypervisor in a Nutanix cluster.

Caution: When hardening ESXi security, some settings may impact operations of a Nutanix cluster.

HostbasedAuthentication no
PermitTunnel no
AcceptEnv
GatewayPorts no
Compression no
StrictModes yes
KerberosAuthentication no
GSSAPIAuthentication no
PermitUserEnvironment no
PermitEmptyPasswords no
PermitRootLogin no

Match Address x.x.x.11,x.x.x.12,x.x.x.13,x.x.x.14,192.168.5.0/24
    PermitRootLogin yes
    PasswordAuthentication yes
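The Match Address block is intended to re-allow root logins only from the listed addresses, with the x.x.x.* entries as placeholders for your CVM IP addresses and 192.168.5.0/24 covering the internal CVM-to-host network. After editing sshd_config, restart the SSH service on the host for the changes to take effect; on most ESXi versions this can be done as follows:
root@esx# /etc/init.d/SSH restart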



ESXI HOST UPGRADE
You can upgrade your host either automatically through Prism Element (1-click upgrade) or manually. For more
information about automatic and manual upgrades, see ESXi Upgrade on page 99 and ESXi Host Manual
Upgrade on page 104 respectively.
This section describes the Nutanix hypervisor support policy for vSphere and Hyper-V hypervisor releases.
Nutanix provides hypervisor compatibility and support statements that should be reviewed before planning an
upgrade to a new release or applying a hypervisor update or patch:

• Compatibility and Interoperability Matrix


• Hypervisor Support Policy- See Support Policies and FAQs for the supported Acropolis hypervisors.
Review the Nutanix Field Advisory page also for critical issues that Nutanix may have uncovered with the
hypervisor release being considered.

Note: You may need to log in to the Support Portal to view the links above.

The Acropolis Upgrade Guide provides steps that can be used to upgrade the hypervisor hosts. However, as
noted in the documentation, the customer is responsible for reviewing the guidance from VMware or Microsoft,
respectively, on other component compatibility and upgrade order (e.g. vCenter), which needs to be planned first.

ESXi Upgrade
These topics describe how to upgrade your ESXi hypervisor host through the Prism Element web console
Upgrade Software feature (also known as 1-click upgrade). To install or upgrade VMware vCenter server
or other third-party software, see your vendor documentation for this information.
AOS supports ESXi hypervisor upgrades that you can apply through the web console Upgrade Software feature
(also known as 1-click upgrade).
You can view the available upgrade options, start an upgrade, and monitor upgrade progress through the web console.
In the main menu, click the gear icon, and then select Upgrade Software in the Settings panel. You can see the
current status of your software versions and start an upgrade.

VMware ESXi Hypervisor Upgrade Recommendations and Limitations

• To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.
• Always consult the VMware web site for any vCenter and hypervisor installation dependencies. For example, a
hypervisor version might require that you upgrade vCenter first.
• If you have not enabled fully automated DRS in your environment and want to upgrade the ESXi host, you need
to upgrade the ESXi host manually. For LCM upgrades on the ESXi cluster, it is recommended to have a fully
automated DRS, so that VM migrations can be done automatically. For more information on fully automated
DRS, see the topic, Set a Custom Automation Level for a Virtual Machine in the VMware vSphere Documentation.
For information about upgrading ESXi hosts manually, see ESXi Host Manual Upgrade in the vSphere
Administration Guide.
• Disable Admission Control before upgrading ESXi on AOS; if it is enabled, the upgrade process fails. Re-enable it
for normal cluster operation after the upgrade completes.
Nutanix Support for ESXi Upgrades
Nutanix qualifies specific VMware ESXi hypervisor updates and provides a related JSON metadata
upgrade file on the Nutanix Support Portal for one-click upgrade through the Prism web console
Software Upgrade feature.



Nutanix does not provide ESXi binary files, only related JSON metadata upgrade files. Obtain ESXi offline
bundles (not ISOs) from the VMware web site.
Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after
the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor
support statement in our Support FAQ. For updates that are made available by VMware that do not have a
Nutanix-provided JSON metadata upgrade file, obtain the offline bundle and md5sum checksum available
from VMware, then use the web console Software Upgrade feature to upgrade ESXi.
Mixing nodes with different processor (CPU) types in the same cluster
If you are mixing nodes with different processor (CPU) types in the same cluster, you must enable
enhanced vMotion compatibility (EVC) to allow vMotion/live migration of VMs during the hypervisor
upgrade. For example, if your cluster includes a node with a Haswell CPU and other nodes with
Broadwell CPUs, open vCenter, enable the VMware enhanced vMotion compatibility (EVC) setting,
and specifically enable EVC for Intel hosts.
CPU Level for Enhanced vMotion Compatibility (EVC)
AOS Controller VMs and Prism Central VMs require a minimum CPU micro-architecture version of Intel
Sandy Bridge. For AOS clusters with ESXi hosts, or when deploying Prism Central VMs on any ESXi
cluster: if you have set the vSphere cluster enhanced vMotion compatibility (EVC) level, the minimum level
must be L4 - Sandy Bridge.
vCenter Requirements and Limitations

Note: ENG-358564 You might be unable to log in to vCenter Server as the /storage/seat partition for vCenter
Server version 7.0 and later might become full due to a large number of SSH-related events. See KB 10830 at
the Nutanix Support portal for symptoms and solutions to this issue.

• If your cluster is running the ESXi hypervisor and is also managed by VMware vCenter, you must provide
vCenter administrator credentials and vCenter IP address as an extra step before upgrading. Ensure that
ports 80 / 443 are open between your cluster and your vCenter instance to successfully upgrade.
• If you have just registered your cluster in vCenter, do not perform any cluster upgrades (AOS, Controller
VM memory, hypervisor, and so on). Wait at least 1 hour before performing upgrades to allow the cluster
settings to be updated. Also, do not register the cluster in vCenter and perform any upgrades at the same time.
• Cluster mapped to two vCenters. Upgrading software through the web console (1-click upgrade) does not
support configurations where a cluster is mapped to two vCenters or where it includes host-affinity must
rules for VMs.
Ensure that enough cluster resources are available for live migration to occur and to allow hosts to enter
maintenance mode.
Mixing Different Hypervisor Versions
For ESXi hosts, mixing different hypervisor versions in the same cluster is temporarily allowed for
deferring a hypervisor upgrade as part of an add-node/expand cluster operation, reimaging a node
as part of a break-fix procedure, planned migrations, and similar temporary operations.

Upgrading ESXi Hosts by Uploading Binary and Metadata Files

Before you begin

• See General Hypervisor Upgrade Recommendations, VMware ESXi Hypervisor Upgrade


Recommendations and Limitations, and Recommended Upgrade Order.
• To install or upgrade VMware vCenter server or other third-party software, see your vendor documentation.



About this task
Do the following steps to download Nutanix-qualified ESXi metadata .JSON files and upgrade the ESXi
hosts through Upgrade Software in the Prism Element web console. Nutanix does not provide ESXi binary
files, only related JSON metadata upgrade files.

Procedure

1. Before performing any upgrade procedure, make sure you are running the latest version of the Nutanix Cluster
Check (NCC) health checks and upgrade NCC if necessary.

2. Run NCC as described in Run NCC Checks.
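From any CVM, the full NCC check set can be run with the same command that is shown in the manual-upgrade prerequisites later in this chapter:
nutanix@cvm$ ncc health_checks run_all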

3. Log on to the Nutanix support portal and navigate to the Hypervisors Support page from the Downloads
menu, then download the Nutanix-qualified ESXi metadata .JSON files to your local machine or media.

a. The default view is All. From the drop-down menu, select Nutanix - VMware ESXi, which shows all
available JSON versions.
b. From the release drop-down menu, select the available ESXi version. For example, 7.0.0 u2a.
c. Click Download to download the Nutanix-qualified ESXi metadata .JSON file.

Figure 63: Downloads Page for ESXi Metadata JSON

4. Log on to the Prism Element web console for any node in the cluster.

5. Click the gear icon in the main menu, select Upgrade Software in the Settings page, and then click the
Hypervisor tab.

6. Click the upload the Hypervisor binary link.

7. Click Choose File for the metadata JSON (obtained from Nutanix) and binary files (offline bundle zip file for
upgrades obtained from VMware), respectively, browse to the file locations, select the file, and click Upload
Now.

8. When the file upload is completed, click Upgrade > Upgrade Now, then click Yes to confirm.
[Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on
without upgrading, click Upgrade > Pre-upgrade. These checks also run as part of the upgrade procedure.



9. Type your vCenter IP address and credentials, then click Upgrade.
Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or
username@domain.

Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the
Upgrade option is not displayed, because the software is already installed.

The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation
checks and uploads, through the Progress Monitor.

10. On the LCM page, click Inventory > Perform Inventory to enable LCM to check, update and display the
inventory information.
See Performing Inventory With LCM in the Acropolis Upgrade Guide.

Upgrading ESXi by Uploading An Offline Bundle File and Checksum

About this task

• Do the following steps to download a non-Nutanix-qualified (patch) ESXi upgrade offline bundle from VMware,
then upgrade ESXi through Upgrade Software in the Prism Element web console.
• Typically, you perform this procedure to apply an ESXi patch version that Nutanix has not yet officially qualified.
Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after the
Nutanix qualified version, but Nutanix might not have qualified those releases.

Procedure

1. From the VMware web site, download the offline bundle (for example, update-from-esxi6.0-6.0_update02.zip)
and copy the associated MD5 checksum. Ensure that this checksum is obtained from the VMware web site, not
manually generated from the bundle by you.

2. Save the files to your local machine or media, such as a USB drive or other portable media.
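Before uploading, you can optionally confirm that the downloaded bundle matches the checksum published by VMware. On a Linux workstation this might look like the following, using the example bundle name from step 1; compare the output against the MD5 value copied from the VMware web site:
$ md5sum update-from-esxi6.0-6.0_update02.zip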

3. Log on to the Prism Element web console for any node in the cluster.

4. Click the gear icon in the main menu of the Prism Element web console, select Upgrade Software in the
Settings page, and then click the Hypervisor tab.

5. Click the upload the Hypervisor binary link.

6. Click enter md5 checksum and copy the MD5 checksum into the Hypervisor MD5 Checksum field.



7. Scroll down and click Choose File for the binary file, browse to the offline bundle file location, select the file,
and click Upload Now.

Figure 64: ESXi 1-Click Upgrade, Unqualified Bundle

8. When the file upload is completed, click Upgrade > Upgrade Now, then click Yes to confirm.
[Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without
upgrading, click Upgrade > Pre-upgrade. These checks also run as part of the upgrade procedure.



9. Type your vCenter IP address and credentials, then click Upgrade.
Ensure that you are using your Active Directory or LDAP credentials in the form of domain\username or
username@domain.

Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the
Upgrade option is not displayed, because the software is already installed.

The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation
checks and uploads, through the Progress Monitor.

ESXi Host Manual Upgrade


If you have not enabled DRS in your environment and want to upgrade the ESXi host, you must upgrade
the ESXi host manually. This topic describes all the requirements that you must meet before manually
upgrading the ESXi host.

Tip: If you have enabled DRS and want to upgrade the ESXi host, use the one-click upgrade procedure from the Prism
web console. For more information on the one-click upgrade procedure, see the ESXi Upgrade on page 99.

Nutanix supports the ability to patch upgrade the ESXi hosts with the versions that are greater than or released after
the Nutanix qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor
support statement in our Support FAQ.
Because ESXi hosts with different versions can co-exist in a single Nutanix cluster, upgrading ESXi does not require
cluster downtime.

• If you want to avoid cluster interruption, you must complete upgrading a host and ensure that the CVM is
running before upgrading any other host. When two hosts in a cluster are down at the same time, all the data is
unavailable.
• If you want to minimize the duration of the upgrade activities and cluster downtime is acceptable, you can stop the
cluster and upgrade all hosts at the same time.

Warning: By default, Nutanix clusters have redundancy factor 2, which means they can tolerate the failure of a single
node or drive. Nutanix clusters with a configured option of redundancy factor 3 allow the Nutanix cluster to withstand
the failure of two nodes or drives in different blocks.

• Never shut down or restart multiple Controller VMs or hosts simultaneously.


• Always run the cluster status command to verify that all Controller VMs are up before performing a
Controller VM or host shutdown or restart.

ESXi Host Upgrade Process


Perform the following process to upgrade ESXi hosts in your environment.

Prerequisites and Requirements

Note: Use the following process only if you do not have DRS enabled in your Nutanix cluster.

• If you are upgrading all nodes in the cluster at once, shut down all guest VMs and stop the cluster with the cluster
stop command.

Caution: There is downtime if you upgrade all the nodes in the Nutanix cluster at once. If you do not want
downtime in your environment, you must ensure that only one CVM is shut down at a time in a redundancy factor 2
configuration.



• If you are upgrading the nodes while keeping the cluster running, ensure that all nodes are up by logging on to a
CVM and running the cluster status command. If any nodes are not running, start them before proceeding with the
upgrade. Shut down all guest VMs on the node or migrate them to other nodes in the Nutanix cluster.
• Disable email alerts in the web console under Email Alert Services or with the nCLI command.
ncli> alerts update-alert-config enable=false

• Run the complete NCC health check by using the health check command.
nutanix@cvm$ ncc health_checks run_all

• Run the cluster status command to verify that all Controller VMs are up and running, before performing a
Controller VM or host shutdown or restart.
nutanix@cvm$ cluster status

• Place the host in the maintenance mode by using the web client.
• Log on to the CVM with SSH and shut down the CVM.
nutanix@cvm$ cvm_shutdown -P now

Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, which ensures that the
cluster is aware that the CVM is unavailable.

• Start the upgrade by using the vSphere Upgrade Guide or vCenter Update Manager (VUM).

Upgrading ESXi Host

• See the VMware Documentation for information about the standard ESXi upgrade procedures. If any problem
occurs with the upgrade process, an alert is raised in the Alert dashboard.

Post Upgrade
Run the complete NCC health check by using the following command.
nutanix@cvm$ ncc health_checks run_all
Enable email alerts in the web console under Email Alert Services or with the nCLI command.
ncli> alerts update-alert-config enable=true



VSPHERE CLUSTER SETTINGS
CHECKLIST
Review the following checklist of the settings that you have to configure to successfully deploy vSphere
virtual environment running Nutanix Enterprise cloud.

vSphere Availability Settings

• Enable host monitoring.


• Enable admission control and use the percentage-based policy with a value based on the number of nodes in the
cluster.
For more information about setting the percentage of cluster resources reserved as failover spare capacity, see
vSphere HA Admission Control Settings for Nutanix Environment on page 48.
• Set the VM Restart Priority of all CVMs to Disabled.
• Set the Host Isolation Response of the cluster to Power Off & Restart VMs.
• Set the VM Monitoring for all CVMs to Disabled.
• Enable datastore heartbeats by clicking Use datastores only from the specified list and choosing the
Nutanix NFS datastore.
If the cluster has only one datastore, click Advanced Options tab and add das.ignoreInsufficientHbDatastore
with Value of true.

vSphere DRS Settings

• Set the Automation Level on all CVMs to Disabled.


• Select Automation Level to accept level 3 recommendations.
• Leave power management disabled.

Other Cluster Settings

• Configure advertised capacity for the Nutanix storage container (total usable capacity minus the capacity of one
node for replication factor 2 or two nodes for replication factor 3). For example, in a four-node cluster with 20 TB
of usable capacity per node (80 TB total), set the advertised capacity to 60 TB for replication factor 2.
• Store VM swapfiles in the same directory as the VM.
• Enable enhanced vMotion compatibility (EVC) in the cluster. For more information, see vSphere EVC Settings
on page 52.
• Configure Nutanix CVMs with the appropriate VM overrides. For more information, see VM Override Settings
on page 54.
• Check Nonconfigurable ESXi Components on page 68. Modifying the nonconfigurable components may
inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.



COPYRIGHT
Copyright 2023 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the United States and/or other
jurisdictions. All other brand and product names mentioned herein are for identification purposes only and may be
trademarks of their respective holders.

