vSphere Admin AOS v6.6
Overview.............................................................................................................4
Hardware Configuration...............................................................................................................................4
Nutanix Software Configuration.................................................................................................................. 4
vSphere Networking..........................................................................................7
VMware NSX Support................................................................................................................................. 8
NSX-T Support on Nutanix Platform................................................................................................ 8
Creating Segment for NVDS............................................................................................................ 9
Creating NVDS Switch on the Host by Using NSX-T Manager..................................................... 10
Registering NSX-T Manager with Nutanix..................................................................................... 14
Networking Components........................................................................................................................... 17
Configuring Host Networking (Management Network)..............................................................................19
Changing a Host IP Address.................................................................................................................... 21
Reconnecting a Host to vCenter...............................................................................................................22
Selecting a Management Interface........................................................................................................... 22
Selecting a New Management Interface........................................................................................ 23
Updating Network Settings........................................................................................................................24
Network Teaming Policy........................................................................................................................... 24
Migrate from a Standard Switch to a Distributed Switch.......................................................................... 25
Standard Switch Configuration....................................................................................................... 25
Planning the Migration....................................................................................................................25
Unassigning Physical Uplink of the Host for Distributed Switch.................................................... 26
Migrating to a New Distributed Switch without LACP/LAG............................................................ 27
Migrating to a New Distributed Switch with LACP/LAG................................................................. 34
vCenter Configuration.....................................................................................39
Registering a Cluster to vCenter Server...................................................................................................39
Unregistering a Cluster from the vCenter Server..................................................................................... 41
Creating a Nutanix Cluster in vCenter......................................................................................................41
Adding a Nutanix Node to vCenter...........................................................................................................42
Nutanix Cluster Settings............................................................................................................................43
vSphere General Settings.............................................................................................................. 43
vSphere HA Settings...................................................................................................................... 44
vSphere DRS Settings................................................................................................................... 50
vSphere EVC Settings....................................................................................................................52
VM Override Settings..................................................................................................................... 54
Migrating a Nutanix Cluster from One vCenter Server to Another........................................................... 55
Storage I/O Control (SIOC).......................................................................................................................56
Disabling Storage I/O Control (SIOC) on a Container................................................................... 56
Node Management.......................................................................................... 58
Node Maintenance (ESXi).........................................................................................................................58
Putting a Node into Maintenance Mode (vSphere)........................................................................59
Viewing a Node that is in Maintenance Mode............................................................................... 62
Exiting a Node from the Maintenance Mode (vSphere).................................................................64
Guest VM Status when Node is in Maintenance Mode................................................................. 67
Nonconfigurable ESXi Components..........................................................................................................68
Nutanix Software............................................................................................................................ 68
ESXi................................................................................................................................................ 69
Putting the CVM and ESXi Host in Maintenance Mode Using vCenter....................................................70
Shutting Down an ESXi Node in a Nutanix Cluster..................................................................................70
Shutting Down an ESXi Node in a Nutanix Cluster (vSphere Command Line)........................................ 71
Starting an ESXi Node in a Nutanix Cluster.............................................................................................72
Starting an ESXi Node in a Nutanix Cluster (vSphere Command Line)................................................... 74
Restarting an ESXi Node using CLI......................................................................................................... 75
Rebooting an ESXi Node in a Nutanix Cluster........................................................................................ 76
Changing an ESXi Node Name................................................................................................................ 77
Changing an ESXi Node Password..........................................................................................................77
Changing the CVM Memory Configuration (ESXi)................................................................................... 77
VM Management.............................................................................................. 78
VM Management Using Prism Central..................................................................................................... 78
Creating a VM through Prism Central (ESXi)................................................................................ 78
Managing a VM through Prism Central (ESXi).............................................................................. 83
VM Management using Prism Element.................................................................................................... 83
Creating a VM (ESXi).....................................................................................................................83
Managing a VM (ESXi).................................................................................................................. 86
vDisk Provisioning Types in VMware with Nutanix Storage..................................................................... 94
VM Migration............................................................................................................................................. 95
Migrating a VM to Another Nutanix Cluster................................................................................... 95
Cloning a VM............................................................................................................................................ 96
vStorage APIs for Array Integration..........................................................................................................96
Copyright........................................................................................................107
OVERVIEW
Nutanix Enterprise Cloud delivers a resilient, web-scale hyperconverged infrastructure (HCI) solution built for
supporting your virtual and hybrid cloud environments. The Nutanix architecture runs a storage controller called
the Nutanix Controller VM (CVM) on every Nutanix node in a cluster to form a highly distributed, shared-nothing
infrastructure.
All CVMs work together to aggregate storage resources into a single global pool that guest VMs running on the
Nutanix nodes can consume. The Nutanix Distributed Storage Fabric manages storage resources to preserve data and
system integrity if there is node, disk, application, or hypervisor software failure in a cluster. Nutanix storage also
enables data protection and High Availability that keep critical data and guest VMs protected.
This guide describes the procedures and settings required to deploy a Nutanix cluster running in the VMware vSphere
environment. To know more about the VMware terms referred to in this document, see the VMware Documentation.
Hardware Configuration
See the Field Installation Guide for information about how to deploy and create a Nutanix cluster running ESXi for
your hardware. After you create the Nutanix cluster by using Foundation, use this guide to perform the management
tasks.
Limitations
For information about ESXi configuration limitations, see Nutanix Configuration Maximums webpage.
Storage Pools
A storage pool on Nutanix is a group of physical disks from one or more tiers. Nutanix recommends configuring only
one storage pool for each Nutanix cluster.
Replication factor
Nutanix supports a replication factor of 2 or 3. Setting the replication factor to 3 instead of 2 adds
an extra data protection layer at the cost of more storage space for the copy. For use cases where
applications provide their own data protection or high availability, you can set a replication factor of
1 on a storage container.
Containers
The Nutanix storage fabric presents usable storage to the vSphere environment as an NFS
datastore. The replication factor of a storage container determines its usable capacity and resiliency. For example,
replication factor 2 tolerates one component failure and replication factor 3 tolerates two component
failures. When you create a Nutanix cluster, three storage containers are created by default.
Nutanix recommends that you do not delete these storage containers. You can rename the storage
container named default-xxx and use it as the main storage container for hosting VM data.
Note: The available capacity and the vSphere maximum of 2,048 VMs limit the number of VMs a datastore
can host.
Capacity Optimization
• Nutanix recommends disabling deduplication for all workloads except VDI.
For mixed-workload Nutanix clusters, create a separate storage container for VDI workloads and enable
deduplication on that storage container.
Tip: You can increase the CVM memory up to 64 GB using the Prism one-click memory upgrade
procedure. For more information, see Increasing the Controller VM Memory Size in the Prism Web
Console Guide.
Networking
The Nutanix CVM uses the standard Ethernet MTU (maximum transmission unit) of 1,500 bytes for
all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance
and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to
higher values.
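As a quick check (a minimal sketch; the interface name eth0 and the exact output format depend on your CVM version), you can confirm the MTU of a CVM interface from the CVM shell:
nutanix@cvm$ ip link show eth0
The mtu value reported in the output should be 1500.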
Caution: Do not change the vSwitchNutanix or the internal vmk (VMkernel) interface.
Tip: Nutanix recommends using replication factor 3 for clusters with more than 16 nodes. Replication factor 3
requires at least five nodes so that the data remains online even if two nodes fail concurrently.
• Use the advertised capacity feature to ensure that the resiliency capacity is equivalent to one node of usable
storage for replication factor 2 or two nodes for replication factor 3 (see the example command after this list).
The advertised capacity of a storage container must equal the total usable cluster space minus the capacity of
either one or two nodes. For example, in a 4-node cluster with 20 TB usable space per node with replication
factor 2, the advertised capacity of the storage container must be 60 TB. That spares 20 TB capacity to sustain and
rebuild one node for self-healing. Similarly, in a 5-node cluster with 20 TB usable space per node with replication
factor 3, advertised capacity of the storage container must be 60 TB. That spares 40 TB capacity to sustain and
rebuild two nodes for self-healing.
• Use the default storage container and mount it on all the ESXi hosts in the Nutanix cluster.
You can also create a single storage container. If you are creating multiple storage containers, ensure that all the
storage containers follow the advertised capacity recommendation.
• Configure the vSphere cluster according to settings listed in vSphere Cluster Settings Checklist on
page 106.
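As referenced in the advertised capacity recommendation above, you can set the advertised capacity of a storage container from any CVM with nCLI. This is a hedged sketch: the container name is an example and the parameter takes the capacity in GiB, so confirm the exact syntax in the nCLI reference for your AOS version.
nutanix@cvm$ ncli ctr edit name=default-container-1 advertised-capacity=61440
This example sets an advertised capacity of 60 TiB (61,440 GiB) on a container named default-container-1.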
VSPHERE NETWORKING
vSphere on the Nutanix platform enables you to dynamically configure, balance, or share logical networking
components across various traffic types. To ensure availability, scalability, performance, management, and security of
your infrastructure, configure virtual networking when designing a network solution for Nutanix clusters.
You can configure networks according to your requirements. For detailed information about vSphere virtual
networking and different networking strategies, refer to the Nutanix vSphere Storage Solution Document and the
VMware Documentation. This chapter describes the configuration elements required to run VMware vSphere on the
Nutanix Enterprise infrastructure.
Port group    VLAN ID    Traffic type
VM_40    40    VM traffic
VM_50    50    VM traffic
All Nutanix configurations use an internal-only vSwitch for the NFS communication between the ESXi host and
the Nutanix CVM. This vSwitch remains unmodified regardless of the virtual networking configuration for ESXi
management, VM traffic, vMotion, and so on.
Caution: Do not modify the internal-only vSwitch (vSwitch-Nutanix). vSwitch-Nutanix facilitates communication
between the CVM and the internal hypervisor.
NSX-T Segments
Nutanix supports the co-existence of NSX-T logical segments on Nutanix clusters running the ESXi hypervisor. All
infrastructure workflows, including Foundation, 1-click upgrades, and AOS upgrades, are validated to
work in NSX-T configurations where the CVM is backed by an NSX-T VLAN logical segment.
NSX-T has the following types of segments.
VLAN backed
VLAN backed segments operate similarly to a standard port group in a vSphere switch. A port
group is created on the NVDS, and VMs that are connected to the port group have their network
packets tagged with the configured VLAN ID.
Overlay backed
Overlay backed segments use the Geneve overlay to create a logical L2 network over an L3 network.
Encapsulation occurs at the transport layer (which is the NVDS module on the host).
Multicast Filtering
Enabling multicast snooping on a vDS with a Nutanix CVM attached affects the ability of the CVM to discover and
add new nodes to the Nutanix cluster (the cluster expand option in Prism and the Nutanix CLI).
Procedure
5. Click Yes when the system prompts to continue with configuring the segment.
The newly created segment appears below the prompt.
Procedure
2. Click System, and go to Configuration > Fabric > Nodes in the left pane.
Note: To verify the active physical NICs on the host, select ESXi host > Configure > Networking
> Physical Adapters.
Click the Edit icon and enter the name of the active physical NIC on the ESXi host selected for migration
to the NVDS.
7. PNIC only Migration: Set the switch to Yes if no VMkernel adapters (vmks) are
associated with the PNIC selected for migration from the vSS switch to the NVDS switch.
8. Network Mapping for Install: Click Add Mapping to migrate the VMkernel adapters (vmks) to the NVDS
switch.
9. Network Mapping for Uninstall: Use this option to revert the migration of the VMkernel adapters.
Procedure
The AOS upgrade determines whether NSX-T networks back the CVM and its VLAN, and then attempts to get the
VLAN information of those networks. To get the VLAN information for the CVM, the NSX-T Manager information
must be configured in the Nutanix cluster.
4. To fix this upgrade issue, log on to a Controller VM in the cluster using SSH.
7. Register the NSX-T Manager with the CVM if it was not registered earlier. Specify the credentials of the NSX-T
Manager to the CVM.
nutanix@cvm:~/cluster/bin$ ./nsx_t_manager -a
IP address: 10.10.10.10
Username: admin
Password:
/usr/local/nutanix/cluster/lib/py/requests-2.12.0-py2.7.egg/requests/packages/
urllib3/conectionpool.py:843:
InsecureRequestWarning: Unverified HTTPS request is made. Adding certificate
verification is strongly advised.
See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
Successfully persisted NSX-T manager information
9. In the Prism Element web console, click Pre-upgrade to continue the AOS upgrade procedure.
The AOS upgrade completes successfully.
Networking Components
IP Addresses
All CVMs and ESXi hosts have two network interfaces.
Note: An empty interface eth2 is created on the CVM during deployment by Foundation. The eth2 interface is used for
the backplane when backplane traffic isolation (network segmentation) is enabled in the cluster. For more information
about the backplane interface and traffic segmentation, see the Security Guide.
Note: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets that overlap with subnet
192.168.5.0/24.
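For reference, the internal adapter of the CVM (typically eth1) carries an address on this 192.168.5.0/24 network. A quick, hedged check to confirm which CVM interface holds the internal address:
nutanix@cvm$ ip addr show | grep 192.168.5.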
• vSwitchNutanix
Local communications between the CVM and the ESXi host use vSwitchNutanix. vSwitchNutanix has no uplinks.
Caution: To manage network traffic between VMs with greater control, create more port groups on vSwitch0. Do
not modify vSwitchNutanix.
• Management Network
HA, vMotion, and vCenter communications use the Management Network.
• VM Network
All VMs use the VM Network.
Caution:
• The Nutanix CVM uses the standard Ethernet maximum transmission unit (MTU) of 1,500 bytes for
all the network interfaces by default. The standard 1,500-byte MTU delivers excellent performance
and stability. Nutanix does not support configuring the MTU on a network interface of CVMs to
higher values.
• You can enable jumbo frames (MTU of 9,000 bytes) on the physical network interfaces of ESXi
hosts and guest VMs if the applications on your guest VMs require them. If you choose to use jumbo
frames, ensure that they are enabled end to end on every switch and interface in the network path.
Procedure
1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
2. Press the down arrow key until Configure Management Network highlights and then press Enter.
5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter.
In the dialog box, provide the VLAN ID and press Enter.
Note: Do not add any other device (including guest VMs) to the VLAN to which the CVM and hypervisor host
are assigned. Isolate guest VMs on one or more separate VLANs.
7. If necessary, highlight the Set static IP address and network configuration option and press Space to
update the setting.
8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your
environment and then press Enter.
10. If necessary, highlight the Use the following DNS server addresses and hostname option and press
Space to update the setting.
11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment
and then press Enter.
12. Press Esc and then Y to apply all changes and restart the management network.
15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier
in the procedure and then press Enter.
Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are
configured.
Caution: The cluster cannot tolerate duplicate host IP addresses. For example, when swapping IP addresses between
two hosts, temporarily change one host IP address to an interim unused IP address. Using an interim address avoids
having two hosts with identical IP addresses in the cluster. Then complete the address change or swap on each host
using the following steps.
Note: All CVMs and hypervisor hosts must be on the same subnet. The hypervisor can be multihomed provided that
one interface is on the same subnet as the CVM.
1. Configure networking on the Nutanix node. For more information, see Configuring Host Networking
(Management Network) on page 19.
2. Update the host IP addresses in vCenter. For more information, see Reconnecting a Host to vCenter on
page 22.
3. Log on to every CVM in the Nutanix cluster and restart Genesis service.
nutanix@cvm$ genesis restart
If the restart is successful, output similar to the following is displayed.
Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]
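To restart Genesis on every CVM from a single session instead of logging on to each CVM, you can use the allssh helper available on the CVMs (a convenience sketch; the per-CVM output is the same as shown above):
nutanix@cvm$ allssh genesis restart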
Procedure
2. Right-click the host with the changed IP address and select Disconnect.
a. Enter the IP address or fully qualified domain name (FQDN) of the host that you want to reconnect in the IP
address or FQDN field under New hosts.
b. Enter the host logon credentials in the User name and Password fields, and click Next.
If a security or duplicate management alert appears, click Yes.
c. Review the Host Summary and click Next.
d. Click Finish.
You can see the host with the updated IP address in the left pane of vCenter.
• If a vmk is configured for management traffic under the network settings of ESXi, it is assigned a weight of 4.
Otherwise, it is assigned a weight of 0.
• If the IP address of the vmk belongs to the same IP subnet as the eth0 interface of the CVM, 2 is added to its
weight.
• If the IP address of the vmk belongs to the same IP subnet as the eth2 interface of the CVM, 1 is added to its
weight.
2. The vmk interface that has the highest weight is selected as the management interface.
• vmk0 = 4 + 0 + 1 = 5
• vmk1 = 0 + 0 + 0 = 0
• vmk2 = 0 + 2 + 0 = 2
Since vmk0 has the highest weight assigned, vmk0 interface is used as a management IP address for the ESXi host.
To verify that vmk0 interface is selected for management IP address, use the following command.
root@esx# esxcli network ip interface tag get -i vmk0
You see the following output.
Tags: Management, VMotion
For the other two interfaces, no tags are displayed.
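To review the tags on every VMkernel interface in one pass, you can loop over the interfaces with the same esxcli command (a sketch that assumes the interfaces are named vmk0 through vmk2; adjust the list for your host):
root@esx# for vmk in vmk0 vmk1 vmk2; do echo "${vmk}:"; esxcli network ip interface tag get -i ${vmk}; done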
If you want any other interface to act as the management IP address, enable management traffic on that interface by
following the procedure described in Selecting a New Management Interface on page 23.
Procedure
3. Open an SSH session to the ESXi host and enable the management traffic on the vmk interface.
root@esx# esxcli network ip interface tag add -i vmkN --tagname=Management
Replace vmkN with the vmk interface where you want to enable the management traffic.
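If you are moving the Management tag from one interface to another, you can also remove the tag from the previous management interface. This is a hedged example that assumes vmk0 was the earlier management interface:
root@esx# esxcli network ip interface tag remove -i vmk0 --tagname=Management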
• To know about the best practice of ESXi network teaming policy, see Network Teaming Policy on page 24.
• To migrate an ESXi host networking from a vSphere Standard Switch (vSwitch) to a vSphere Distributed Switch
(vDS) with LACP/LAG configuration, see Migrating to a New Distributed Switch with LACP/LAG on
page 34.
• To migrate an ESXi host networking from a vSphere standard switch (vSwitch) to a vSphere Distributed Switch
(vDS) without LACP, see Migrating to a New Distributed Switch without LACP/LAG on page 27.
• vSwitch0
• vSwitchNutanix
On vSwitch0, the Nutanix best practice guide (see Nutanix vSphere Networking Solution Document) provides the
following recommendations for NIC teaming:
• vSphere standard switch (vSwitch) (see vSphere Standard Switch (vSwitch) in vSphere Networking on
page 7).
• vSphere Distributed Switch (vDS) (see vSphere Distributed Switch (vDS) in vSphere Networking on
page 7).
Tip: For more information about vSwitches and the associated network concepts, see the VMware Documentation.
For migrating from a vSS to a vDS with LACP/LAG configuration, see Migrating to a New Distributed Switch
with LACP/LAG on page 34.
For migrating from a vSS to a vDS without LACP/LAG configuration, see Migrating to a New Distributed Switch
without LACP/LAG on page 27.
• Read Nutanix Best Practice Guide for VMware vSphere Networking available here.
Procedure
2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
4. Click the MANAGE PHYSICAL ADAPTERS tab and, from the Assigned adapters list, select the active
adapters that you want to unassign from the host.
Tip: Ping the host to confirm that you can still communicate with the active physical adapter of the host.
If you lose network connectivity to the ESXi host during this test, review your network configuration.
Procedure
2. Go to the Networking view and select the host from the left pane.
Procedure
3. Right-click the host, select Distributed Switch > Distributed Port Group > New Distributed Port
Group, and follow the wizard to create the remaining distributed port groups (the vMotion interface and VM port
groups).
You need the following port groups when migrating from the standard switch to the
distributed switch.
• VMkernel Management interface. Use this port group to connect to the host for all management operations.
• VMNetwork. Use this port group to connect the new VMs.
• vMotion. This port group is for the internal vMotion interface; the host uses this port group for vMotion
traffic during failover.
Note: Nutanix recommends using static port binding instead of ephemeral port binding when you create a port
group.
Note: The port group for the vmk management interface is created during the distributed switch creation. For more
information, see Creating a Distributed Switch on page 27.
Ensure that the distributed switch port groups have VLANs tagged if the physical adapters of the host
have a VLAN tagged to them. Update the port group policies, VLANs, and teaming algorithms in
alignment with the physical network switch configuration. Configure the load balancing policy according
to the network configuration requirements on the physical switch.
Procedure
2. Go to the Networking view and select the host from the left pane.
3. Right-click the host, select Distributed Switch > Distributed Port Group > Edit Settings, and follow the
wizard to configure the VLAN, Teaming and failover, and other options.
Note: For more information about configuring port group policies, see the VMware Documentation.
You can configure the same policy for all the port groups simultaneously.
Procedure
2. Go to the Networking view and select the host from the left pane.
3. Right-click the host, select Distributed Switch > Distributed Port Group > Manage Distributed Port
Groups, and specify the following information in Manage Distributed Port Group dialog box.
a. In the Select port group policies tab, select the port group policies that you want to configure and click
Next.
Note: For more information about configuring port group policies, see the VMware Documentation.
b. In the Select port groups tab, select the distributed port groups on which you want to configure the policy
and click Next.
c. In the Teaming and failover tab, configure the Load balancing policy, Active uplinks, and click Next.
d. In the Ready to complete window, review the configuration and click Finish.
Migrate the management interface and CVM of the host to the distributed switch.
2. Go to the Networking view and select the host from the left pane.
a. In the Select task tab, select Add hosts to add a new host to the distributed switch and click Next.
b. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the
distributed switch.
c. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same
uplink on the distributed switch.
• 1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink.
Important: If you select physical NICs connected to other switches, those physical NICs migrate to the
current distributed switch.
2. Select the Uplink in the distributed switch to which you want to assign the PNIC of the host and click
OK.
3. Click Next.
d. In the Manage VMkernel adapters tab, configure the vmk adapters.
• 1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port
group.
2. Select the port group in the distributed switch to which you want to assign the VMkernel adapter of the
host and click OK.
3. Click Next.
e. (Optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect
the network adapters of the VMs to distributed port groups.
• 1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an
individual network adapter to connect with the distributed port group.
2. Click Assign port group and select the distributed port group to which you want to migrate the VM
or network adapter and click OK.
3. Click Next.
f. In the Ready to complete tab, review the configuration and click Finish.
4. Go to the Hosts and Clusters view in the vCenter web client and go to Hosts > Configure to review the
network configuration for the host.
Note: Run a ping test to confirm that the networking on the host works as expected.
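For example, a basic reachability check from a CVM (or any machine on the management network) might look like the following; replace the placeholder with the management IP address of the host that you migrated:
nutanix@cvm$ ping -c 4 esxi_host_ip_addr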
5. Follow the steps 2–4 to add the remaining hosts to the distributed switch and migrate the adapters.
Procedure
2. Go to the Networking view and select the host from the left pane.
3. Right-click the host, select Distributed Switch > Configure > LACP.
4. Click New and enter the following details in the New Link Aggregation Group dialog box.
5. Click OK.
LAG is created on the distributed switch.
Procedure
3. Select the required Virtual switch from the list and click EDIT.
4. Go to the Teaming and failover tab in the Edit Settings dialog box and specify the following information.
Procedure
2. Go to the Networking view and select the host from the left pane.
a. In the Select task tab, select Add hosts to add a new host to the distributed switch and click Next.
b. In the Select hosts tab, click New hosts to select the ESXi host and add it to the distributed switch.
Note: Add one host at a time to the distributed switch and then migrate all the CVMs from the host to the
distributed switch.
c. In the Manage physical adapters tab, configure the physical NICs (PNICs) on the distributed switch.
Tip: For consistent network configuration, you can connect the same physical NIC on every host to the same
uplink on the distributed switch.
• 1. Select a PNIC from the On other switches/unclaimed section and click Assign uplink.
Important: If you select physical NICs connected to other switches, those physical NICs migrate to the
current distributed switch.
2. Select the LAG Uplink in the distributed switch to which you want to assign the PNIC of the host and
click OK.
3. Click Next.
d. In the Manage VMkernel adapters tab, configure the vmk adapters.
Select the VMkernel adapter that is associated with vSwitch0 as your management VMkernel adapter. Migrate
this adapter to the corresponding port group on the distributed switch.
Note: If there are any VLANs associated with the port group on the standard switch, ensure that the
corresponding distributed port group also has the correct VLAN. Verify that the physical network is
configured as required.
• 1. Select a VMkernel adapter from the On other switches/unclaimed section and click Assign port
group.
2. Select the port group in the distributed switch to which you want to assign the VMkernel of the host
and click OK.
3. Click Next.
e. (optional) In the Migrate VM networking tab, select Migrate virtual machine networking to connect
all the network adapters of a VM to a distributed port group.
• 1. Select the VM to connect all the network adapters of the VM to a distributed port group, or select an
individual network adapter to connect with the distributed port group.
2. Click Assign port group and select the distributed port group to which you want to migrate the VM
or network adapter and click OK.
3. Click Next.
f. In the Ready to complete tab, review the configuration and click Finish.
Tip: For single-window management of all your ESXi nodes, you can also register the vCenter Server with Prism
Central. For more information, see Registering a Cluster to vCenter Server on page 39.
1. Create a cluster entity within the existing vCenter inventory and configure its settings according to Nutanix best
practices. For more information, see Creating a Nutanix Cluster in vCenter on page 41.
2. Configure HA. For more information, see vSphere HA Settings on page 44.
3. Configure DRS. For more information, see vSphere DRS Settings on page 50.
4. Configure EVC. For more information, see vSphere EVC Settings on page 52.
5. Configure override. For more information, see VM Override Settings on page 54.
6. Add the Nutanix hosts to the new cluster. For more information, see Adding a Nutanix Node to vCenter on
page 42.
Procedure
2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
The vCenter Server that is managing the hosts in the cluster is auto-discovered and displayed.
4. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin
Password fields.
5. Click Register.
During the registration process, a certificate is generated to communicate with the vCenter Server. If the
registration is successful, a relevant message is displayed in the Tasks dashboard. The Host Connection field
displays Connected, which means that all the hosts are being managed by the registered vCenter Server.
• Ensure that you unregister the vCenter Server from the cluster before changing the IP address of the vCenter
Server. After you change the IP address of the vCenter Server, register the vCenter Server with the cluster again
using the new IP address.
• The vCenter Server Registration page displays the registered vCenter Server. If the Host
Connection field changes to Not Connected, the hosts are being managed by a different vCenter
Server. In this case, a new vCenter entry appears with the host connection status Connected, and you need to
register to that vCenter Server.
Procedure
2. Click the gear icon in the main menu and then select vCenter Registration in the Settings page.
A message that the cluster is already registered to the vCenter Server is displayed.
3. Type the administrator user name and password of the vCenter Server in the Admin Username and Admin
Password fields.
4. Click Unregister.
If the credentials are correct, the vCenter Server is unregistered from the cluster and a relevant message is
displayed in the Tasks dashboard.
Procedure
a. Go to the Hosts and Clusters view and right-click the IP address of the vCenter Server in the left pane.
b. Click New Datacenter.
c. Enter a meaningful name for the datacenter (for example, NTNX-DC) and click OK.
a. Enter a meaningful name for the cluster in the Name field (for example, NTNX-Cluster).
b. Turn on the vSphere DRS switch.
c. Turn on the Turn on vSphere HA switch.
d. Uncheck Manage all hosts in the cluster with a single image.
The Nutanix cluster (NTNX-Cluster) is created with the default settings for vSphere HA and vSphere DRS.
What to do next
Add all the Nutanix nodes to the Nutanix cluster inventory in vCenter. For more information, see Adding a
Nutanix Node to vCenter on page 42.
Tip: Refer to KB-1661 for the default credentials of all cluster components.
Procedure
a. Enter the IP address or fully qualified domain name (FQDN) of the host that you want to add in the IP
address or FQDN field under New hosts.
b. Enter the host logon credentials in the User name and Password fields, and click Next.
If a security or duplicate management alert appears, click Yes.
c. Review the Host Summary and click Next.
d. Click Finish.
3. Select the host under the Nutanix cluster from the left pane and go to Configure > System > Security Profile.
Ensure that Lockdown Mode is Disabled. If there are any security requirements to enable the lockdown mode,
follow the steps mentioned in KB-3702.
6. Click Configure > Storage and ensure that NFS datastores are mounted.
Note: Nutanix recommends creating a storage container in Prism Element running on the host.
7. If HA is not enabled, set the CVM to start automatically when the ESXi host starts.
What to do next
Configure HA and DRS settings. For more information, see vSphere HA Settings on page 44 and
vSphere DRS Settings on page 50.
Procedure
2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
a. Under General, set the Swap file location to Virtual machine directory.
Setting the swap file location to the VM directory stores the VM swap files in the same directory as the VM.
b. Under Default VM Compatibility, set the compatibility to Use datacenter setting and host version.
Do not change the compatibility unless the cluster has to support previous versions of ESXi VMs.
vSphere HA Settings
If there is a node failure, vSphere HA (High Availability) settings ensure that there are sufficient compute
resources available to restart all VMs that were running on the failed node.
Note: Nutanix recommends that you configure vSphere HA and DRS even if you do not use the features. The vSphere
cluster configuration preserves the settings, so if you later decide to enable the features, the settings are in place and
conform to Nutanix best practices.
Procedure
2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
• 1. Host Failure Response: Select Restart VMs from the drop-down list.
This option configures the cluster-wide host isolation response settings.
2. Response for Host Isolation: Select Power off and restart VMs from the drop-down list.
3. Datastore with PDL: Select Disabled from the drop-down list.
4. Datastore with APD: Select Disabled from the drop-down list.
Note: To enable the VM component protection in vCenter, refer to the VMware Documentation.
• 1. Host failures cluster tolerates: Enter 1 or 2 based on the number of nodes in the Nutanix cluster
and the replication factor.
2. Define host failover capacity by: Select Cluster resource Percentage from the drop-down list.
Note: If you set one ESXi host as a dedicated failover host in the vSphere HA configuration, the CVM
cannot boot up after a shutdown.
Note: vSphere HA uses datastore heartbeating to distinguish between hosts that have failed and hosts that
reside on a network partition. With datastore heartbeating, vSphere HA can monitor hosts when a management
network partition occurs while continuing to respond to failures.
Overview
If you are using redundancy factor 2 with cluster sizes of up to 16 nodes, you must configure HA admission control
settings with the appropriate percentage of CPU/RAM to achieve at least N+1 availability. For cluster sizes larger
than 16 nodes, you must configure HA admission control with the appropriate percentage of CPU/RAM to achieve at
least N+2 availability.
Note: For redundancy factor 3, a minimum of five nodes is required, which allows two nodes to fail
concurrently while the data remains online. In this case, the same N+2 level of availability is required for the
vSphere cluster to enable the VMs to restart following a failure.
For redundancy factor 2 deployments, the recommended minimum HA admission control setting
percentage is marked with a single asterisk (*) in the following table. For redundancy factor 2 or
redundancy factor 3 deployments configured to tolerate multiple non-concurrent node failures,
the minimum required HA admission control setting percentage is marked with two asterisks (**) in the
following table.
Nodes in cluster    1 node failure (%)    2 node failures (%)    3 node failures (%)    4 node failures (%)
4    25*    50    75    N/A
5    20*    40**    60    80
6    18*    33**    50    66
7    15*    29**    43    56
8    13*    25**    38    50
9    11*    23**    33    46
10    10*    20**    30    40
11    9*    18**    27    36
12    8*    17**    25    34
13    8*    15**    23    30
14    7*    14**    21    28
15    7*    13**    20    26
16    6*    13**    19    25
17    6    12*    18**    24
18    6    11*    17**    22
19    5    11*    16**    22
20    5    10*    15**    20
21    5    10*    14**    20
22    4    9*    14**    18
23    4    9*    13**    18
24    4    8*    13**    16
25    4    8*    12**    16
26    4    8*    12**    16
27    4    7*    11**    14
28    4    7*    11**    14
29    3    7*    10**    14
30    3    7*    10**    14
31    3    6*    10**    12
32    3    6*    9**    12
The table also represents the percentage of the Nutanix storage pool that should remain free to ensure that the
cluster can fully restore the redundancy factor after the failure of one or more nodes, or even of a block (where three
or more blocks exist within a cluster).
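As a rule of thumb that is consistent with the table above, the percentage to reserve is approximately the number of node failures to tolerate divided by the number of nodes in the cluster, expressed as a percentage (the table applies small rounding adjustments). For example, for an 8-node cluster that must tolerate two concurrent node failures: 2 ÷ 8 = 0.25, so reserve 25% of the cluster CPU and memory resources.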
Block Awareness
For deployments of at least three blocks, block awareness automatically ensures data availability even when an
entire block of up to four nodes configured with redundancy factor 2 becomes unavailable.
Note: For block awareness, each block must be populated with a uniform number of nodes. In the event of a failure, a
non-uniform node count might compromise block awareness or the ability to restore the redundancy factor, or both.
Rack Awareness
Rack fault tolerance is the ability to provide a rack-level availability domain. With rack fault tolerance, data is
replicated to nodes that are not in the same rack. Rack failure can occur in the following situations.
Table 3: Rack awareness minimum requirements
Replication factor    Minimum number of nodes    Minimum number of blocks    Minimum number of racks    Data resiliency
2    3    3    3    Failure of 1 node, block, or rack
3    5    5    5    Failure of 2 nodes, blocks, or racks
Procedure
2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
Procedure
2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
3. Shut down the guest VMs and Controller VMs on the hosts with feature sets greater than the EVC mode.
Ensure that the Nutanix cluster contains hosts with CPUs from only one vendor, either Intel or AMD.
6. Enable EVC for the CPU vendor and feature set appropriate for the hosts in the Nutanix cluster, and click OK.
If the Nutanix cluster contains nodes with different processor classes, enable EVC with the lower feature set as the
baseline.
Tip: To know the processor class of a node, perform the following steps.
Note: Do not shut down more than one CVM at the same time.
VM Override Settings
You must exclude Nutanix CVMs from vSphere availability and resource scheduling, and therefore configure
the following VM override settings.
Procedure
2. Go to the Hosts and Clusters view and select the Nutanix cluster from the left pane.
6. Click Finish.
Note: The following steps migrate a Nutanix cluster that uses a vSphere Standard Switch (vSwitch). To migrate a
Nutanix cluster that uses a vSphere Distributed Switch (vDS), see the VMware Documentation.
Procedure
1. Create a vSphere cluster in the vCenter Server where you want to migrate the Nutanix cluster. See Creating a
Nutanix Cluster in vCenter on page 41.
2. Configure HA, DRS, and EVC on the created vSphere cluster. See Nutanix Cluster Settings on page 43.
4. Move the nodes from the source vCenter Server to the new vCenter Server.
See the VMware Documentation to know the process.
5. Register the Nutanix cluster to the new vCenter Server. See Registering a Cluster to vCenter Server on
page 39.
Caution: When you mount a storage container on ESXi hosts running older versions (6.5 or earlier), the system enables
SIOC in statistics mode by default. Nutanix recommends disabling SIOC because an enabled SIOC can cause the
following issues.
• The storage can become unavailable because the hosts repeatedly create and delete the .lck-
XXXXXXXX access files under the .iorm.sf subdirectory, located in the root directory of the storage container.
• Site Recovery Manager (SRM) failover and failback does not run efficiently.
• If you are using the Metro Availability disaster recovery feature, activate and restore operations do not
work.
Note: When using the Metro Availability disaster recovery feature, Nutanix recommends using an empty
storage container. Disable SIOC and delete all SIOC-related files from the storage
container. For more information, see KB-3501.
Run the NCC health check (see KB-3358) to verify whether SIOC and SIOC in statistics mode are disabled on storage
containers. If SIOC or SIOC in statistics mode is enabled on a storage container, disable it by performing the
procedure described in Disabling Storage I/O Control (SIOC) on a Container on page 56.
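As a hedged example, you can run the complete NCC health check suite from any CVM; the SIOC-related checks referenced in KB-3358 are included in this run:
nutanix@cvm$ ncc health_checks run_all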
Procedure
Note: VMs with CPU passthrough or PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not
migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the
list of VMs that cannot be live-migrated.
See Putting a Node into Maintenance Mode (vSphere) on page 59 to place a node under maintenance.
You can also enter or exit a host under maintenance through the vCenter web client. See Putting the CVM and ESXi
Host in Maintenance Mode Using vCenter on page 70.
• The maintenance operations enabled in the Prism web console (entering and exiting node maintenance) are currently
supported on ESXi.
Procedure
6. On the Host Maintenance window, provide the vCenter credentials for the ESXi host and click Next.
Note: VMs with CPU passthrough, PCI passthrough, pinned VMs (with host affinity policies), and RF1 VMs are not
migrated to other hosts in the cluster when a node undergoes maintenance. Click the View these VMs link to view the
list of VMs that cannot be live-migrated.
• A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This indicates
that the host is entering the maintenance mode.
• The revolving icon disappears and the Exit Maintenance Mode option is enabled after the node completely
enters the maintenance mode.
Note: If node maintenance fails, certain rollback operations are performed; for example, the CVM is
rebooted. However, the live-migrated VMs are not restored to the original host.
What to do next
Once the maintenance activity is complete, you can perform any of the following.
• View the nodes under maintenance, see Viewing a Node that is in Maintenance Mode on page 62
• View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode on page 67
• Remove the node from the maintenance mode (see Exiting a Node from the Maintenance Mode (vSphere) on
page 64)
Note: This procedure is the same for AHV and ESXi nodes.
Procedure
4. Observe the icon and tool tip that appear beside the node that is under maintenance. You can also
view this icon in the host details view.
Figure 43: Example: Node under Maintenance (Table and Host Details View) in AHV
5. Alternatively, view the node under maintenance from the Hardware > Diagram view.
Figure 44: Example: Node under Maintenance (Diagram and Host Details View) in AHV
What to do next
You can:
• View the status of the guest VMs, see Guest VM Status when Node is in Maintenance Mode on page 67.
• Remove the node from the maintenance mode (see Exiting a Node from the Maintenance Mode (vSphere) on
page 64).
Procedure
1. On the Prism web console home page, select Hardware from the drop-down menu.
3. Select the node which you intend to remove from the maintenance mode.
6. On the Host Maintenance window, click the Exit Maintenance Mode button.
• A revolving icon appears as a tool tip beside the selected node and also in the Host Details view. This
indicates that the host is exiting the maintenance mode.
• The revolving icon disappears and the Enter Maintenance Mode option is enabled after the node
completely exits the maintenance mode.
• You can also monitor the progress of the exit node maintenance operation through the newly created Host exit
maintenance and Exit maintenance mode tasks which appear in the task tray.
What to do next
You can:
Note: The following scenarios are the same for AHV and ESXi nodes.
• As the node enters the maintenance mode, the following high-level tasks are performed internally.
1. The host initiates entering the maintenance mode.
2. The HA VMs are live migrated.
3. The pinned and RF1 VMs are powered off.
4. The host completes entering the maintenance mode.
5. The CVM enters the maintenance mode.
6. The CVM is shut down.
• As the node exits the maintenance mode, the following high-level tasks are performed internally.
1. The CVM is powered on.
2. The CVM is taken out of maintenance.
3. The host is taken out of maintenance.
After the host exits the maintenance mode, the RF1 VMs are powered on again and the VMs migrate back to
restore host locality.
Nutanix Software
Modifying any of the following Nutanix software settings may inadvertently constrain performance of your Nutanix
cluster or render the Nutanix cluster inoperable.
ESXi
Modifying any of the following ESXi settings can inadvertently constrain performance of your Nutanix cluster or
render the Nutanix cluster inoperable.
Note: An SSH connection is necessary for various scenarios. For example, it establishes connectivity with the ESXi
server through a control plane that does not depend on additional management systems or processes. The SSH
connection is also required to modify the networking and control paths in the case of a host failure to maintain
high availability. For example, CVM autopathing (ha.py) requires an SSH connection. If a local CVM
becomes unavailable, another CVM in the cluster performs the I/O operations over the 10 GbE interface.
Putting the CVM and ESXi Host in Maintenance Mode Using vCenter
About this task
Nutanix recommends placing the CVM and ESXi host into maintenance mode while the Nutanix cluster
undergoes maintenance or patch installations.
Caution: Verify the data resiliency status of your Nutanix cluster. Ensure that the replication factor (RF) supports
putting the node in maintenance mode.
Procedure
2. If vSphere DRS is enabled on the Nutanix cluster, skip this step. If vSphere DRS is disabled, perform one of the
following.
» Manually migrate all the VMs except the CVM to another host in the Nutanix cluster.
» Shut down VMs other than the CVM that you do not want to migrate to another host.
3. Right-click the host and select Maintenance Mode > Enter Maintenance Mode.
4. In the Enter Maintenance Mode dialog box, check Move powered-off and suspended virtual
machines to other hosts in the cluster and click OK.
Note:
In certain rare conditions, even when DRS is enabled, some VMs do not automatically migrate due to
user-defined affinity rules or VM configuration settings. The VMs that do not migrate appear under
cluster DRS > Faults when a maintenance mode task is in progress. To address the faults, either
manually shut down those VMs or ensure the VMs can be migrated.
Caution: When you put the host in maintenance mode, the maintenance mode process powers down or migrates all
the VMs that are running on the host.
The host gets ready to go into maintenance mode, which prevents VMs from running on this host. DRS
automatically attempts to migrate all the VMs to another host in the Nutanix cluster.
The host enters maintenance mode after its CVM is shut down.
Procedure
2. Put the Nutanix node in the maintenance mode. For more information, see Putting the CVM and ESXi Host in
Maintenance Mode Using vCenter on page 70.
Note: If DRS is not enabled, manually migrate or shut down all the VMs excluding the CVM. Some VMs might not be
migrated automatically even when DRS is enabled because of a VM configuration option that
is not available on the target host.
Procedure
1. Log on to the CVM with SSH and shut down the CVM.
nutanix@cvm$ cvm_shutdown -P now
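After the CVM is down, the host is typically placed into maintenance mode and then powered off. The following is a minimal command-line sketch, assuming the standard vim-cmd and esxcli utilities on the host and that all guest VMs have already been migrated or shut down:
root@esx# vim-cmd hostsvc/maintenance_mode_enter
root@esx# esxcli system shutdown poweroff --reason "Planned Nutanix node shutdown"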
Procedure
1. If the node is off, turn it on by pressing the power button on the front. Otherwise, proceed to the next step.
2. Log on to vCenter (or to the node if vCenter is not running) with the web client.
6. Confirm that the Nutanix cluster services are running on the CVM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
7. Right-click the ESXi host in the web client and select Rescan for Datastores. Confirm that all Nutanix
datastores are available.
8. Verify that the status of all services on all the CVMs is Up.
nutanix@cvm$ cluster status
If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix
cluster.
CVM:host IP-Address Up
Zeus UP [9935, 9980, 9981, 9994, 10015,
10037]
Scavenger UP [25880, 26061, 26062]
Xmount UP [21170, 21208]
SysStatCollector UP [22272, 22330, 22331]
IkatProxy UP [23213, 23262]
IkatControlPlane UP [23487, 23565]
SSLTerminator UP [23490, 23620]
SecureFileSync UP [23496, 23645, 23646]
Medusa UP [23912, 23944, 23945, 23946, 24176]
DynamicRingChanger UP [24314, 24404, 24405, 24558]
Pithos UP [24317, 24555, 24556, 24593]
InsightsDB UP [24322, 24472, 24473, 24583]
Athena UP [24329, 24504, 24505]
Mercury UP [24338, 24515, 24516, 24614]
Mantle UP [24344, 24572, 24573, 24634]
VipMonitor UP [18387, 18464, 18465, 18466, 18474]
Stargate UP [24993, 25032]
InsightsDataTransfer UP [25258, 25348, 25349, 25388, 25391,
25393, 25396]
Ergon UP [25263, 25414, 25415]
Cerebro UP [25272, 25462, 25464, 25581]
Chronos UP [25281, 25488, 25489, 25547]
Curator UP [25294, 25528, 25529, 25585]
Prism UP [25718, 25801, 25802, 25899, 25901,
25906, 25941, 25942]
CIM UP [25721, 25829, 25830, 25856]
AlertManager UP [25727, 25862, 25863, 25990]
Arithmos UP [25737, 25896, 25897, 26040]
Catalog UP [25749, 25989, 25991]
Acropolis UP [26011, 26118, 26119]
Uhura UP [26037, 26165, 26166]
Snmp UP [26057, 26214, 26215]
NutanixGuestTools UP [26105, 26282, 26283, 26299]
MinervaCVM UP [27343, 27465, 27466, 27730]
ClusterConfig UP [27358, 27509, 27510]
Aequitas UP [27368, 27567, 27568, 27600]
APLOSEngine UP [27399, 27580, 27581]
APLOS UP [27853, 27946, 27947]
Lazan UP [27865, 27997, 27999]
Delphi UP [27880, 28058, 28060]
Flow UP [27896, 28121, 28124]
Anduril UP [27913, 28143, 28145]
XTrim UP [27956, 28171, 28172]
ClusterHealth UP [7102, 7103, 27995, 28209,28495,
28496, 28503, 28510,
Procedure
After starting, the CVM restarts once. Wait three to four minutes before you ping the CVM.
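If you are working entirely from the ESXi command line, a minimal sketch of the equivalent steps is shown below; it assumes the standard vim-cmd utilities and that the CVM name contains "CVM", so adjust the VM ID for your environment:
root@esx# vim-cmd hostsvc/maintenance_mode_exit
root@esx# vim-cmd vmsvc/getallvms | grep -i cvm
root@esx# vim-cmd vmsvc/power.on vmid
Replace vmid with the VM ID of the CVM reported by the getallvms command.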
Alternatively, you can take the ESXi host out of maintenance mode and start the CVM using the web client. For
more information, see Starting an ESXi Node in a Nutanix Cluster on page 72.
3. Verify that the status of all services on all the CVMs is Up.
nutanix@cvm$ cluster status
If the Nutanix cluster is running properly, output similar to the following is displayed for each node in the Nutanix
cluster.
CVM:host IP-Address Up
Zeus UP [9935, 9980, 9981, 9994, 10015,
10037]
Scavenger UP [25880, 26061, 26062]
Xmount UP [21170, 21208]
SysStatCollector UP [22272, 22330, 22331]
IkatProxy UP [23213, 23262]
IkatControlPlane UP [23487, 23565]
SSLTerminator UP [23490, 23620]
SecureFileSync UP [23496, 23645, 23646]
Medusa UP [23912, 23944, 23945, 23946, 24176]
DynamicRingChanger UP [24314, 24404, 24405, 24558]
Pithos UP [24317, 24555, 24556, 24593]
InsightsDB UP [24322, 24472, 24473, 24583]
Athena UP [24329, 24504, 24505]
Mercury UP [24338, 24515, 24516, 24614]
Mantle UP [24344, 24572, 24573, 24634]
Procedure
1. Log on to vCenter (or to the ESXi host if the node is running the vCenter VM) with the web client.
Note: The host does not enter the maintenance mode until after the CVM is shut down.
3. Log on to the CVM with SSH and shut down the CVM.
nutanix@cvm$ cvm_shutdown -P now
Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, so that the
cluster is aware that the CVM is unavailable.
8. Confirm that the Nutanix cluster services are running on the CVM.
nutanix@cvm$ ncli cluster status | grep -A 15 cvm_ip_addr
Replace cvm_ip_addr with the IP address of the CVM on the ESXi host.
Output similar to the following is displayed.
Name : 10.1.56.197
Status : Up
... ...
StatsAggregator : up
SysStatCollector : up
Every service listed should be up.
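For example, using the CVM IP address shown in the sample output above:
nutanix@cvm$ ncli cluster status | grep -A 15 10.1.56.197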
9. Right-click the ESXi host in the web client and select Rescan for Datastores. Confirm that all Nutanix
datastores are available.
Note: Rebooting a host is a graceful restart workflow. All the user VMs are migrated to another host when you perform a
reboot operation for a host, so the reboot operation has no impact on the user workload.
Procedure
3. In the Request Reboot window, select the nodes you want to restart, and click Reboot.
A progress bar is displayed that indicates the progress of the restart of each node.
• All the requirements, rules, and guidelines are considered, and the limitations are observed. For details, see
vCenter Server Integration information in the Prism Central Infrastructure Guide.
• The vCenter Server is registered with your cluster. For more information about how to register a vCenter Server,
see vCenter Server Integration in the Prism Central Infrastructure Guide.
Procedure
2. Select the Infrastructure application from the Application Switcher function, and navigate to Compute & Storage
> VMs from the Navigation Bar. For information about the Navigation Bar, see Application-Specific
Navigation Bar.
The system displays the List tab by default with all the VMs across registered clusters in the Nutanix environment.
For information about how to access the list of non-Nutanix VMs managed by an external vCenter, see the VMs
Summary View information in the Prism Central Infrastructure Guide.
3. Click Create VM, and enter the following information in the Configuration step:
4. In the Resources step, perform the following actions to attach a Disk to the VM:
Disks: Click Attach Disk, and enter the following information:
The following figure shows the Attach Disk window:
a. Type: Select the type of storage device, Disk or CD-ROM, from the dropdown list.
b. Operation: Specify the device contents from the dropdown list.
• Select Clone from NDSF file to copy any file from the cluster that can be used as an image onto the disk.
• Select Empty CD-ROM to create a blank CD-ROM device. A CD-ROM device is needed when you
intend to provide a system image from CD-ROM.
Note: The Empty CD-ROM option is available only when CD-ROM is selected as the storage device in
the Type field.
• Select Allocate on Storage Container to allocate space without specifying an image. Selecting this
option means you are allocating space only. You have to provide a system image later from a CD-ROM or
other source.
Note: The Allocate on Storage Container option is available only when Disk is selected as the storage
device in the Type field.
• Select Clone from Image to copy an image that you have imported by using the image service feature onto
the disk.
c. If you select:
• Allocate on Storage Container in the Operation field, the system prompts you to specify the
Storage Container.
• Clone from Image in the Operation field, the system prompts you to specify the Image.
d. Enter one of the following, based on your selection in the Operation field:
Note: If the image you created does not appear in the list, see KB-4892. The image transfer can trigger
image bandwidth throttling if a bandwidth throttling policy is associated with the image. For more
information, see Bandwidth Throttling Policies information in the Prism Central Infrastructure Guide.
e. Bus Type: Select the bus type from the dropdown list.
The options displayed in the dropdown list vary based on the storage device type selected in the Type field.
If the storage device Type is:
5. In the Resources step, perform the following actions to create a network interface for the VM:
Networks: Click Attach to Subnet. The Attach to Subnet window appears.
6. In the Management step, perform the following actions to define categories and timezone:
a. Categories: Search for the category to be assigned to the VM. The policies associated with the category
value are assigned to the VM.
b. Guest OS: Type and select the guest operating system.
The guest operating system that you select affects the supported devices and number of virtual CPUs available
for the virtual machine. The Create VM wizard does not install the guest operating system. For information
about the list of supported operating systems, see External vCenter Server Integration information in the
Prism Central Infrastructure Guide.
7. In the Review step, when all the field entries are correct, click Create VM to create the VM, and close the
Create VM window.
The new VM appears in the VMs Summary page and List page.
Note: To manage a non-Nutanix VM on an external vCenter, you can use playbooks. For more information, see the VMs
Summary View information in the Prism Central Infrastructure Guide.
Procedure
• Perform the procedure and actions as described in Managing a VM through Prism Central (AHV) in the Prism
Central Infrastructure Guide.
Note: You can perform only those operations for which you have permissions from the admin.
Creating a VM (ESXi)
In ESXi clusters, you can create a new virtual machine (VM) through the web console.
Before you begin
• See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism
Web Console Guide before proceeding.
• Register the vCenter Server with your cluster. For more information, see Registering a Cluster to vCenter
Server on page 39.
Procedure
4. To attach a disk to the VM, click the Add New Disk button.
The Add Disks dialog box appears.
a. Type: Select the type of storage device, DISK or CD-ROM, from the pull-down list.
The following fields and options vary depending on whether you choose DISK or CD-ROM.
b. Operation: Specify the device contents from the pull-down list.
• Select Clone from ADSF file to copy any file from the cluster that can be used as an image onto the disk.
• Select Allocate on Storage Container to allocate space without specifying an image. (This option
appears only when DISK is selected in the previous field.) Selecting this option means you are allocating
space only. You have to provide a system image later from a CD-ROM or other source.
c. Bus Type: Select the bus type from the pull-down list. The choices are IDE or SCSI.
d. ADSF Path: Enter the path to the desired system image.
This field appears only when Clone from ADSF file is selected. It specifies the image to copy. Enter the
path name as /storage_container_name/vmdk_name.vmdk. For example, to clone an image from myvm-
flat.vmdk in a storage container named crt1, enter /crt1/myvm-flat.vmdk. When a user types the storage
container name (/storage_container_name/), a list of the VMDK files in that storage container appears
(assuming one or more VMDK files were previously copied to that storage container). See the example after
this step for one way to confirm the VMDK file name.
e. Storage Container: Select the storage container to use from the pull-down list.
This field appears only when Allocate on Storage Container is selected. The list includes all storage
containers created for this cluster.
f. Size: Enter the disk size in GiBs.
g. When all the field entries are correct, click the Add button to attach the disk to the VM and return to the
Create VM dialog box.
h. Repeat this step to attach more devices to the VM.
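The following is a minimal sketch, run from the ESXi host shell, of one way to confirm the VMDK file name before you enter the ADSF path. It assumes the storage container crt1 from the example above is mounted on the host as an NFS datastore (Nutanix containers appear under /vmfs/volumes/ on each host).
root@esxi# ls /vmfs/volumes/crt1/*.vmdk
Output similar to the following is displayed.
/vmfs/volumes/crt1/myvm-flat.vmdk  /vmfs/volumes/crt1/myvm.vmdk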
5. To create a network interface for the VM, click the Add New NIC button.
The Create NIC dialog box appears. Do the following in the indicated fields:
a. VLAN Name: Select the target virtual LAN from the pull-down list.
The list includes all defined networks. For more information, see Network Configuration for VM Interfaces
in the Prism Web Console Guide.
b. Network Adapter Type: Select the network adapter type from the pull-down list.
For information about the list of supported adapter types, see VM Management through Prism Element
(ESXi) in the Prism Element Web Console Guide.
c. Network UUID: This is a read-only field that displays the network UUID.
d. Network Address/Prefix: This is a read-only field that displays the network IP address and prefix.
e. When all the field entries are correct, click the Add button to create a network interface for the VM and return
to the Create VM dialog box.
f. Repeat this step to create more network interfaces for the VM.
6. When all the field entries are correct, click the Save button to create the VM and close the Create VM dialog
box.
The new VM appears in the VM table view. For more information, see VM Table View in the Prism Element
Web Console Guide.
Managing a VM (ESXi)
You can use the web console to manage virtual machines (VMs) in the ESXi clusters.
• See the requirements and limitations section in VM Management through Prism Element (ESXi) in the Prism
Web Console Guide before proceeding.
• Ensure that you have registered the vCenter Server with your cluster. For more information, see Registering a
Cluster to vCenter Server on page 39.
Note: Your available options depend on the VM status, type, and your permissions; options that do not apply are unavailable.
Procedure
3. Select the target VM in the table (top section of screen).
The summary line (middle of screen) displays the VM name with a set of relevant action links on the right. You
can also right-click on a VM to select a relevant action.
The possible actions are Manage Guest Tools, Launch Console, Power on (or Power off),
Suspend (or Resume), Clone, Update, and Delete. The following steps describe how to perform each
action.
a. Select the Enable Nutanix Guest Tools check box to enable NGT on the selected VM.
b. Select Mount Nutanix Guest Tools to mount NGT on the selected VM.
Ensure that the VM has at least one empty IDE CD-ROM or SATA slot to attach the ISO.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A
CD with volume label NUTANIX_TOOLS gets attached to the VM.
c. To enable the self-service restore feature for Windows VMs, select the Self Service Restore (SSR) check box.
The self-service restore feature is enabled on the VM. The guest VM administrator can restore the desired file
or files from the VM. For more information about the self-service restore feature, see Self-Service Restore
in the Data Protection and Recovery with Prism Element guide.
d. After you select the Enable Nutanix Guest Tools check box, the VSS and application-consistent snapshot
feature is enabled by default.
After this feature is enabled, Nutanix native in-guest VmQuiesced snapshot service (VSS) agent is used to
take application-consistent snapshots for all the VMs that support VSS. This mechanism takes application-
consistent snapshots without any VM stuns (temporary unresponsive VMs) and also enables third-party
backup providers like Commvault and Rubrik to take application-consistent snapshots on the Nutanix platform
in a hypervisor-agnostic manner. For more information, see Conditions for Application-consistent
Snapshots in the Data Protection and Recovery with Prism Element guide.
e. To mount VMware guest tools, select the Mount VMware Guest Tools check box.
The VMware guest tools are mounted on the VM.
Note: You can mount both VMware guest tools and Nutanix Guest Tools at the same time on a particular
VM provided the VM has sufficient empty CD-ROM slots.
f. Click Submit.
The VM is registered with the NGT service. NGT is enabled and mounted on the selected virtual machine. A
CD with volume label NUTANIX_TOOLS gets attached to the VM.
Note:
• If you clone a VM, by default NGT is not enabled on the cloned VM. If the cloned VM is
powered off, enable NGT from the UI and start the VM. If the cloned VM is powered on, enable
NGT from the UI and restart the Nutanix guest agent service.
• If you want to enable NGT on multiple VMs simultaneously, see Enabling NGT and
Mounting the NGT Installer on Cloned VMs in the Prism Web Console Guide.
If you eject the CD, you can mount the CD back again by logging into the Controller VM and running the
following nCLI command.
ncli> ngt mount vm-id=virtual_machine_id
For example, to mount the NGT on the VM with
VM_ID=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987, type the
following command.
ncli> ngt mount vm-id=00051a34-066f-72ed-0000-000000005400::38dc7bf2-a345-4e52-9af6-c1601e759987
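If you do not know the VM ID, one way to look it up, assuming your AOS release includes the ngt list nCLI operation, is to run the following command and copy the VM Id value from the output for the VM you want.
ncli> ngt list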
Caution: In AOS 4.6, for the powered-on Linux VMs on AHV, ensure that the NGT ISO is ejected or
unmounted within the guest VM before disabling NGT by using the web console. This issue is specific to AOS 4.6
and does not occur in AOS 4.6.x or later releases.
Note: If you created the NGT ISO CD-ROMs prior to AOS 4.6, the NGT functionality does not work
even if you upgrade your cluster because the REST APIs have been disabled. You must unmount
the ISO, remount the ISO, install the NGT software again, and then upgrade to 4.6 or a later version.
Note: A VNC client may not function properly on all browsers. Some keys are not recognized when the browser
is Google Chrome. (Firefox typically works best.)
6. To start (or shut down) the VM, click the Power on (or Power off) action link.
Power on begins immediately. If you want to shut down the VM, you are prompted to select one of the
following options:
• Power Off. Hypervisor performs a hard shut down action on the VM.
• Reset. Hypervisor performs an ACPI reset action through the BIOS on the VM.
• Guest Shutdown. Operating system of the VM performs a graceful shutdown.
• Guest Reboot. Operating system of the VM performs a graceful restart.
Note: The Guest Shutdown and Guest Reboot options are available only when VMware guest tools are
installed.
7. To pause (or resume) the VM, click the Suspend (or Resume) action link. This option is available only when
the VM is powered on.
clicking the Save button. For example, you can override the number of vCPUs, memory size, boot priority,
NICs, or the guest customization.
Note:
9. To modify the VM configuration, click the Update action link.
The Update VM dialog box appears, which includes the same fields as the Create VM dialog box. Modify the
configuration as needed (see Creating a VM (ESXi) on page 83), and in addition you can enable Flash
Mode for the VM.
Note: If you delete a vDisk attached to a VM and snapshots associated with this VM exist, space associated with
that vDisk is not reclaimed unless you also delete the VM snapshots.
» After you enable this feature on the VM, the status is updated in the VM table view. To view the status
of individual virtual disks (disks that are flashed to the SSD), go to the Virtual Disks tab in the VM table
view.
» You can disable the Flash Mode feature for individual virtual disks. To update the Flash Mode for
individual virtual disks, click the update disk icon in the Disks pane and deselect the Enable Flash
Mode check box.
Figure 62: Update VM Resources - VM Disk Flash Mode
10. To delete the VM, click the Delete action link. A window prompt appears; click the OK button to delete the
VM.
The deleted VM disappears from the list of VMs in the table. You can also delete a VM that is already powered
on.
Traditionally, a vDisk is provisioned either with all of its space allocated up front (a thick disk) or with space
allocated on an as-needed basis (a thin disk). A thick disk provisions its space using either the lazy zeroed or the
eager zeroed disk formatting method.
For traditional storage systems, thick eager zeroed disks provide the best performance of the three types of disk
provisioning, thick lazy zeroed disks provide the second-best performance, and thin disks provide the least.
However, this does not apply to the modern storage systems found in Nutanix systems.
Nutanix uses a thick Virtual Machine Disk (VMDK) to reserve the storage space using the vStorage APIs for Array
Integration (VAAI) reserve space API.
On a Nutanix system, there is no performance difference between thin and thick disks. This means that a thick eager
zeroed virtual disk has no performance benefits over a thin virtual disk.
On Nutanix, the resulting disk behaves the same whether it is configured as a thin or a thick disk (VMDK), despite
the configuration differences.
Note: A thick-disk reservation only reserves the disk space; a Nutanix VMDK has no performance requirement that
calls for provisioning a thick disk. For a single Nutanix storage container, even when a thick disk is provisioned, no
disk space is allocated to write zeroes. So, there is no requirement for provisioning a thick disk.
When using the up-to-date VAAI for cloning operations, the following behavior is expected:
• When cloning any type of disk format (thin, thick lazy zeroed or thick eager zeroed) to the same Nutanix
datastore, the resulting VM will have a thin disk regardless of the explicit choice of a disk format in the vSphere
client.
Nutanix uses a thin provisioned disk because a thin disk performs the same as a thick disk in the system. The
thin disk prevents disk space from being wasted. In the cloning scenario, Nutanix does not propagate the reservation
property from the source to the destination when creating a fast clone on the same datastore. This prevents space
wastage due to unnecessary reservation.
• When cloning a VM to a different datastore, the destination VM will have the disk format that you specified in the
vSphere client.
Important: A thick disk is shown as thick in ESXi, but within NDFS (Nutanix Distributed File System) it is
shown as a thin disk with an extra configuration field.
Nutanix recommends using thin disks over any other disk type.
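As an illustration (not a Nutanix-documented procedure), the following sketch creates a thin-provisioned VMDK from the ESXi shell on a Nutanix container mounted as an NFS datastore, then compares the logical size with the space actually consumed. The container name ctr1 and the testvm directory are placeholders.
root@esxi# mkdir /vmfs/volumes/ctr1/testvm
root@esxi# vmkfstools -c 20G -d thin /vmfs/volumes/ctr1/testvm/testvm.vmdk
root@esxi# ls -lh /vmfs/volumes/ctr1/testvm/testvm-flat.vmdk
root@esxi# du -h /vmfs/volumes/ctr1/testvm/testvm-flat.vmdk
The ls command reports the full 20 GiB logical size, while du reports only the space actually written, which illustrates why thin provisioning wastes no capacity on Nutanix.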
VM Migration
You can migrate a VM to an ESXi host in a Nutanix cluster. Usually the migration is done in the following cases.
About this task
The shared storage feature in vSphere allows you to move both compute and storage resources from the
source legacy environment to the target Nutanix environment at the same time without disruption. This
feature also removes the need to configure any file system allow lists on Nutanix.
You can use the shared storage feature through the migration wizard in the web client.
Procedure
4. Under Select Migration Type, select Change both compute resource and storage.
6. Select a destination network for all VM network adapters and click Next.
7. Click Finish.
Wait for the migration process to complete. The process performs the storage vMotion first, and then creates a
temporary storage network over vmk0 for the period during which the disk files are on Nutanix.
Cloning a VM
About this task
To clone a VM, you must enable the Nutanix VAAI plug-in. For steps to enable and verify the Nutanix VAAI plug-in,
see KB-1868.
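A quick way to confirm that the plug-in is installed on an ESXi host is to list the installed VIBs, for example as follows; the exact VIB name can vary by release, so treat KB-1868 as the authoritative reference.
root@esxi# esxcli software vib list | grep -i vaai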
Procedure
3. Follow the wizard to enter a name for the clone, select a cluster, and select a host.
Note: If you choose a datastore other than the one that contains the source VM, the clone operation uses the
VMware implementation and not the Nutanix VAAI plug-in.
5. If desired, set the guest customization parameters. Otherwise, proceed to the next step.
6. Click Finish.
The Nutanix VAAI plug-in efficiently makes full clones without reserving space for the clone. Read requests for
blocks shared between parent and clone are sent to the original vDisk that was created for the parent VM. As the
clone VM writes new blocks, the Nutanix file system allocates storage for those blocks. This data management occurs
completely at the storage layer, so the ESXi host sees a single file with the full capacity that was allocated when the
clone was created.
VSPHERE ESXI HARDENING SETTINGS
Configure the following settings in /etc/ssh/sshd_config to harden an ESXi hypervisor in a Nutanix cluster.
Caution: When hardening ESXi security, some settings may impact operations of a Nutanix cluster.
HostbasedAuthentication no
PermitTunnel no
AcceptEnv
GatewayPorts no
Compression no
StrictModes yes
KerberosAuthentication no
GSSAPIAuthentication no
PermitUserEnvironment no
PermitEmptyPasswords no
PermitRootLogin no
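After editing /etc/ssh/sshd_config, you can verify the values and restart the SSH service from the ESXi shell. This is a minimal sketch; the init script path shown is the usual one on ESXi but may differ between releases.
root@esxi# grep -iE '^(HostbasedAuthentication|PermitTunnel|AcceptEnv|GatewayPorts|Compression|StrictModes|KerberosAuthentication|GSSAPIAuthentication|PermitUserEnvironment|PermitEmptyPasswords|PermitRootLogin)' /etc/ssh/sshd_config
root@esxi# /etc/init.d/SSH restart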
Note: You may need to log in to the Support Portal to view the links above.
The Acropolis Upgrade Guide provides steps that you can use to upgrade the hypervisor hosts. However, as
noted in the documentation, the customer is responsible for reviewing the guidance from VMware or Microsoft
on other component compatibility and upgrade order (for example, vCenter), which must be planned first.
ESXi Upgrade
These topics describe how to upgrade your ESXi hypervisor host through the Prism Element web console
Upgrade Software feature (also known as 1-click upgrade). To install or upgrade VMware vCenter Server
or other third-party software, see your vendor documentation.
AOS supports ESXi hypervisor upgrades that you can apply through the web console Upgrade Software feature
(also known as 1-click upgrade).
You can view the available upgrade options, start an upgrade, and monitor upgrade progress through the web console.
In the main menu, click the gear icon, and then select Upgrade Software in the Settings panel. You can see the
current status of your software versions and start an upgrade.
• To install or upgrade VMware vCenter Server or other third-party software, see your vendor documentation.
• Always consult the VMware web site for any vCenter and hypervisor installation dependencies. For example, a
hypervisor version might require that you upgrade vCenter first.
• If you have not enabled fully automated DRS in your environment and want to upgrade the ESXi host, you need
to upgrade the ESXi host manually. For LCM upgrades on the ESXi cluster, it is recommended to have fully
automated DRS so that VM migrations can be done automatically. For more information on fully automated
DRS, see Set a Custom Automation Level for a Virtual Machine in the VMware vSphere Documentation.
For information about upgrading ESXi hosts manually, see ESXi Host Manual Upgrade in the vSphere
Administration Guide.
• Disable Admission Control before you upgrade ESXi on AOS; if it is enabled, the upgrade process fails. You can
re-enable it for normal cluster operation after the upgrade completes.
Nutanix Support for ESXi Upgrades
Nutanix qualifies specific VMware ESXi hypervisor updates and provides a related JSON metadata
upgrade file on the Nutanix Support Portal for one-click upgrade through the Prism web console
Software Upgrade feature.
Note: (ENG-358564) You might be unable to log in to vCenter Server because the /storage/seat partition for vCenter
Server version 7.0 and later might become full due to a large number of SSH-related events. See KB 10830 on
the Nutanix Support portal for symptoms and solutions to this issue.
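If you suspect this condition, one quick check, assuming you have shell access to the vCenter Server Appliance, is to look at the partition usage; follow KB 10830 for the supported cleanup steps.
root@vcsa# df -h /storage/seat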
• If your cluster is running the ESXi hypervisor and is also managed by VMware vCenter, you must provide
vCenter administrator credentials and vCenter IP address as an extra step before upgrading. Ensure that
ports 80 and 443 are open between your cluster and your vCenter instance so that the upgrade can succeed.
• Newly registered cluster. Do not perform any cluster upgrades (AOS, Controller
VM memory, hypervisor, and so on) if you have just registered your cluster in vCenter. Wait at least one
hour before performing upgrades to allow the cluster settings to update. Also, do not register the
cluster in vCenter and perform any upgrades at the same time.
• Cluster mapped to two vCenters. Upgrading software through the web console (1-click upgrade) does not
support configurations where a cluster is mapped to two vCenters or where it includes host-affinity must
rules for VMs.
Ensure that enough cluster resources are available for live migration to occur and to allow hosts to enter
maintenance mode.
Mixing Different Hypervisor Versions
For ESXi hosts, mixing different hypervisor versions in the same cluster is temporarily allowed for
deferring a hypervisor upgrade as part of an add-node/expand cluster operation, reimaging a node
as part of a break-fix procedure, planned migrations, and similar temporary operations.
Procedure
1. Before performing any upgrade procedure, make sure you are running the latest version of the Nutanix Cluster
Check (NCC) health checks and upgrade NCC if necessary.
3. Log on to the Nutanix support portal and navigate to the Hypervisors Support page from the Downloads
menu, then download the Nutanix-qualified ESXi metadata .JSON files to your local machine or media.
a. The default view is All. From the drop-down menu, select Nutanix - VMware ESXi, which shows all
available JSON versions.
b. From the release drop-down menu, select the available ESXi version. For example, 7.0.0 u2a.
c. Click Download to download the Nutanix-qualified ESXi metadata .JSON file.
4. Log on to the Prism Element web console for any node in the cluster.
5. Click the gear icon in the main menu, select Upgrade Software in the Settings page, and then click the
Hypervisor tab.
7. Click Choose File for the metadata JSON (obtained from Nutanix) and binary files (offline bundle zip file for
upgrades obtained from VMware), respectively, browse to the file locations, select the file, and click Upload
Now.
8. When the file upload is completed, click Upgrade > Upgrade Now, then click Yes to confirm.
[Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on
without upgrading, click Upgrade > Pre-upgrade. These checks also run as part of the upgrade procedure.
Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the
Upgrade option is not displayed, because the software is already installed.
The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation
checks and uploads, through the Progress Monitor.
10. On the LCM page, click Inventory > Perform Inventory to enable LCM to check, update and display the
inventory information.
See Performing Inventory With LCM in the Acropolis Upgrade Guide.
• Do the following steps to download a non-Nutanix-qualified (patch) ESXi upgrade offline bundle from VMware,
then upgrade ESXi through Upgrade Software in the Prism Element web console.
• Typically, you perform this procedure to apply an ESXi patch version that Nutanix has not yet officially
qualified. Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater
than or released after the Nutanix-qualified version, but Nutanix might not have qualified those releases.
Procedure
1. From the VMware web site, download the offline bundle (for example, update-from-esxi6.0-6.0_update02.zip)
and copy the associated MD5 checksum. Ensure that this checksum is obtained from the VMware web site, not
manually generated from the bundle by you.
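Optionally, to confirm that the local download is intact, you can compute the digest of the bundle on a Linux machine and compare it with the value published by VMware; enter the published value, not a locally generated one, in the Prism checksum field. The file name below is the example from step 1.
$ md5sum update-from-esxi6.0-6.0_update02.zip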
2. Save the files to your local machine or media, such as a USB drive or other portable media.
3. Log on to the Prism Element web console for any node in the cluster.
4. Click the gear icon in the main menu of the Prism Element web console, select Upgrade Software in the
Settings page, and then click the Hypervisor tab.
6. Click enter md5 checksum and copy the MD5 checksum into the Hypervisor MD5 Checksum field.
8. When the file upload is completed, click Upgrade > Upgrade Now, then click Yes to confirm.
[Optional] To run the pre-upgrade installation checks only on the Controller VM where you are logged on without
upgrading, click Upgrade > Pre-upgrade. These checks also run as part of the upgrade procedure.
Note: AOS can detect if you have uploaded software that is already installed or upgraded. In this case, the
Upgrade option is not displayed, because the software is already installed.
The Upgrade Software dialog box shows the progress of your selection, including status of pre-installation
checks and uploads, through the Progress Monitor.
Tip: If you have enabled DRS and want to upgrade the ESXi host, use the one-click upgrade procedure from the Prism
web console. For more information on the one-click upgrade procedure, see ESXi Upgrade on page 99.
Nutanix supports the ability to patch upgrade ESXi hosts with versions that are greater than or released after
the Nutanix-qualified version, but Nutanix might not have qualified those releases. See the Nutanix hypervisor
support statement in our Support FAQ.
Because ESXi hosts with different versions can co-exist in a single Nutanix cluster, upgrading ESXi does not require
cluster downtime.
• If you want to avoid cluster interruption, you must complete upgrading a host and ensure that the CVM is
running before upgrading any other host. When two hosts in a cluster are down at the same time, data can become
unavailable.
• If you want to minimize the duration of the upgrade activities and cluster downtime is acceptable, you can stop the
cluster and upgrade all hosts at the same time, as shown in the example below.
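For example, run the cluster commands from any CVM; upgrade all hosts after the cluster has stopped, then power on the CVMs and start the cluster again.
nutanix@cvm$ cluster stop
nutanix@cvm$ cluster start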
Warning: By default, Nutanix clusters have redundancy factor 2, which means they can tolerate the failure of a single
node or drive. Nutanix clusters with a configured option of redundancy factor 3 allow the Nutanix cluster to withstand
the failure of two nodes or drives in different blocks.
Note: Use the following process only if you do not have DRS enabled in your Nutanix cluster.
• If you are upgrading all nodes in the cluster at once, shut down all guest VMs and stop the cluster with the cluster
stop command.
Caution: There is downtime if you upgrade all the nodes in the Nutanix cluster at once. If you do not want
downtime in your environment, you must ensure that only one CVM is shut down at a time in a redundancy factor 2
configuration.
• Run the complete NCC health check by using the health check command.
nutanix@cvm$ ncc health_checks run_all
• Run the cluster status command to verify that all Controller VMs are up and running, before performing a
Controller VM or host shutdown or restart.
nutanix@cvm$ cluster status
• Place the host in maintenance mode by using the web client.
• Log on to the CVM with SSH and shut down the CVM.
nutanix@cvm$ cvm_shutdown -P now
Note: Do not reset or shut down the CVM in any way other than with the cvm_shutdown command, to ensure that the
cluster is aware that the CVM is unavailable.
• Start the upgrade by following the vSphere Upgrade Guide or by using vCenter Update Manager (VUM).
• See the VMware Documentation for information about the standard ESXi upgrade procedures. If any problem
occurs with the upgrade process, an alert is raised in the Alert dashboard.
Post Upgrade
Run the complete NCC health check by using the following command.
nutanix@cvm$ ncc health_checks run_all
Enable email alerts in the web console under Email Alert Services or with the nCLI command.
ncli> alerts update-alert-config enable=true
• Configure advertised capacity for the Nutanix storage container (total usable capacity minus the capacity of one
node for replication factor 2 or two nodes for replication factor 3).
• Store VM swapfiles in the same directory as the VM.
• Enable enhanced vMotion compatibility (EVC) in the cluster. For more information, see vSphere EVC Settings
on page 52.
• Configure Nutanix CVMs with the appropriate VM overrides. For more information, see VM Override Settings
on page 54.
• Check Nonconfigurable ESXi Components on page 68. Modifying the nonconfigurable components may
inadvertently constrain performance of your Nutanix cluster or render the Nutanix cluster inoperable.