CCH With Nutanix Intersight Managed X-Series Field Guide v1.0
Cisco Compute Hyperconverged with Nutanix
Intersight Managed Mode for Cisco UCS X-Series Installation Field Guide
Document Information
Access the latest version of this document at Cisco Communities: https://community.cisco.com/t5/unified-computing-system-knowledge-base/cisco-compute-hyperconverged-with-nutanix-on-x-series-field/ta-p/5219852
Revision History
Version: 1.0
Date: November 2024
Prism Central version: 2022.6, 2023.3, or 2024.2
Foundation Central version: 1.7
AOS LTS version: 6.10
AOS STS or eSTS version: 6.8.1
LCM version: 3.1
Notes: Initial release for Intersight-based deployments with M7 generation X-Series servers.
• Nutanix Installation
• Guest VM Networking
Software Prerequisites:
1. Nutanix Prism Central with Foundation Central added from the marketplace
2. Cisco Intersight SaaS account, or the connected or private virtual appliance with sufficient licenses
3. An anonymous web server for hosting installation files, such as the Cisco IMM toolkit VM (optional)
4. NTP sync and DNS name resolution for Cisco Intersight or the Intersight appliance, and Prism Central
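Before starting, it can save time to confirm DNS resolution and NTP reachability from a machine on the management network. The sketch below is a minimal check; the Prism Central and NTP hostnames are examples for your environment, and svc.intersight.com applies to the SaaS service (use your appliance FQDN instead if one is deployed).

# Confirm DNS resolution for Intersight (SaaS shown) and Prism Central
nslookup svc.intersight.com
nslookup prism-central.example.local

# Confirm the NTP server answers time queries (ntpdate shown; any NTP client works)
ntpdate -q ntp.example.local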
Required software and firmware versions:
• Anonymous HTTP server: Cisco IMM Toolkit 4.2.2. Any anonymous HTTP server can be used; the IMM Toolkit is an easy and free VM.
• Nutanix AOS: 6.8.1 or later. Intel 5th generation CPUs require AOS 6.8.1 or later. AOS 6.10 is the current long-term support version.
• Nutanix AHV: AHV-20230302.101026 or AHV-20230302.102001. Use AHV-20230302.101026 with AOS 6.8.1, and AHV-20230302.102001 with AOS 6.10.
• VMware ESXi: 7.0 U3o, 8.0 U1a, or 8.0 U2. Use the Cisco custom installation ISO images available for download from Broadcom.
• Cisco UCS X210c M7 blade firmware: 5.2(2.240074). Only necessary to download if you are using a local Cisco Intersight Private Virtual Appliance; otherwise the images will be downloaded automatically.
• Nutanix Phoenix OS (HCI Bootstrap): 5.7. Only necessary to download if you are using a local Cisco Intersight Private Virtual Appliance; otherwise the images will be downloaded automatically.
Cisco Fabric Interconnect Physical Installation
[Cabling diagram: Fabric Interconnect A and Fabric Interconnect B (UCSX-S9108-100G A and B modules in the UCSX 9508 chassis) connected by port-channels to a pair of upstream Ethernet switches]
Note: Connect 2-8 cables per UCS X-Series Direct module directly to the upstream Ethernet switches. The recommended configuration is at least one cable per module to a pair of upstream switches running vPC, with the uplinks configured as port-channels.
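For reference, the upstream side of those uplinks on a pair of Cisco Nexus switches running vPC might look like the sketch below. This is a minimal example, not a validated configuration: the port-channel and vPC numbers, interface IDs, and VLAN list are placeholders, and an existing vPC domain and peer link are assumed. FI-B's uplinks would land in a second port-channel and vPC of their own.

! Per upstream Nexus switch: bundle the links from Fabric Interconnect A
! into a vPC port-channel trunking the Nutanix management and guest VLANs
interface port-channel10
  description Uplinks from UCS Fabric Interconnect A
  switchport mode trunk
  switchport trunk allowed vlan 16,101-110
  spanning-tree port type edge trunk
  vpc 10

interface Ethernet1/1
  description FI-A uplink 1
  switchport mode trunk
  switchport trunk allowed vlan 16,101-110
  channel-group 10 mode active
  no shutdown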
• Configure FI-A via the CLI with the values for your UCS domain

---- Basic System Configuration Dialog ----

This setup utility will guide you through the basic configuration of
the system. Only minimal configuration including IP connectivity to
the Fabric interconnect and its clustering mode is performed through these steps.

Type Ctrl-C at any time to abort configuration and reboot system.
To back track or make modifications to already entered values,
complete input till end of section and answer no when prompted
to apply configuration.

first-setup: Warning: is EMPTY. using switch as name

Starting GUI for initial setup.

Switch can now be configured from GUI. Use https://10.1.16.170 and click
on 'Express Setup' link. If you want to cancel the configuration from GUI and go back,
press the 'ctrl+c' key and choose 'X'. Press any other key to see the installation
progress from GUI

^C
Type 'reboot' to abort configuration and reboot system
or Type 'X' to cancel GUI configuratuion and go back to console or Press any other
key to see the installation progress from GUI (reboot/bash/X) ? X

Enter the configuration method. (console/gui) ? console
Enter the management mode. (ucsm/intersight)? intersight
Enter the system name: deadpool
Physical Switch Mgmt0 IP address : 10.1.16.7
Physical Switch Mgmt0 IPv4 netmask : 255.255.255.0
IPv4 address of the default gateway : 10.1.16.1
DNS IP address : 10.1.16.10
Configure the default domain name? (yes/no) [n]: yes
Default domain name : deadpool.local

Following configurations will be applied:

Management Mode=intersight
Switch Fabric=A
System Name=deadpool
Enforced Strong Password=no
Physical Switch Mgmt0 IP Address=10.1.16.7
Physical Switch Mgmt0 IP Netmask=255.255.255.0
Default Gateway=10.1.16.1
DNS Server=10.1.16.10
Domain Name=deadpool.local

Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
Configuration file - Ok

• Configure FI-B via the CLI with the values for your UCS domain
• Verify you can log in to both FIs as admin using the password you provided

Enter the admin password of the peer Fabric interconnect:
Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect management mode : intersight
Peer Fabric interconnect Mgmt0 IPv4 Address: 10.1.16.7
Peer Fabric interconnect Mgmt0 IPv4 Netmask: 255.255.255.0
Peer FI is IPv4 Cluster enabled. Please Provide Local Fabric Interconnect Mgmt0 IPv4 Address

Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.
Configuration file - Ok
XML interface to system may become unavailable since ssh is disabled
Claim Fabric Interconnects in Cisco Intersight
Log in to one Fabric Interconnect’s web console
via HTTPS with a web browser using the IP
address you set. Retrieve the Device ID and the
Claim Code from the console page by clicking on
Device Connector at the top.
Select Cisco UCS Domain (Intersight Managed) and click Start, then enter the Device ID and Claim Code, then click
Claim.
Note: When using the Cisco Intersight Virtual Appliance, one Fabric Interconnect’s IP address plus the username
and password is used to claim the servers instead of a Device ID and Claim Code.
Upgrade Fabric Interconnect Firmware If Necessary
Create and Deploy a UCS Domain Profile
A Domain Profile must be created and deployed to the Fabric Interconnects. The Domain Profile defines the roles of the ports on the Fabric Interconnects, the VLANs used on the network, and several other domain-wide policy settings such as QoS. After the Domain Profile is deployed, the blades will be discovered and can then be onboarded in Foundation Central and targeted for a Nutanix cluster deployment.
Select the desired Organization in the Intersight account and give the Domain Profile a name. Select the Fabric Interconnect pair the profile will be applied to.
Select the VLAN policy for Fabric Interconnect A, then select to create a new VLAN policy.
Enter a name for the VLAN policy for Fabric Interconnects, click Next, then select to add a VLAN.
Enter a prefix and the VLAN ID for the new VLAN, then click to select the Multicast policy. Select to create a new
Multicast policy. Enter a name for the new Multicast policy.
Leave the default settings for the Multicast policy and click Create. Click Add to create the new VLAN in the VLAN
policy.
If any additional VLANs need to communicate via the uplinks from this Fabric Interconnect pair, click Add VLANs and create them as was done for the first VLAN. Click Create to finish creating the VLAN policy. Select the VLAN policy for Fabric Interconnect B.
Select the VLAN policy that was just created so that both Fabric Interconnects have the same VLAN configuration,
then click Next.
Click to select the Port policy for Fabric Interconnect A then click Create New.
Enter a name for the Port policy and select the model of Fabric Interconnect matching the hardware in use, then click Next. If Fibre Channel ports are required then configure the unified ports, otherwise click Next to continue.
If Breakout ports are required then select them and click the Configure button, otherwise click Next to continue.
Select the ports which will be server ports, i.e. ports connected to the blade chassis, then click the Configure button.
Select Server as the role for server ports then click Save. After returning to the previous screen select the ports
which will be the Ethernet uplinks from the Fabric Interconnects and click the Configure button. Select Ethernet
Uplink as the role then click Save.
If necessary, click on Port Channels and add the multiple Ethernet uplink ports to a port channel. After all the server
and Ethernet uplink ports and their optional port channels are configured click Save. Click to select a port policy for
Fabric Interconnect B, then select the policy just created so the port configurations match.
Select to create a System QoS policy. Give the policy a name then click Next.
Set the Best Effort QoS class to MTU 9216 then click Create. After returning to the previous screen click to select a
Network Control Policy. Enter a name for the policy then click Next.
Set the preferred primary DNS server address then click Create. Click Next to move to the Domain profile summary.
Click Deploy and watch the domain profile progress through Validation and Configuration until the status is OK.
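Because the Best Effort class is raised to MTU 9216, it is worth confirming that jumbo frames pass end to end once the hosts are online. The quick check below assumes the relevant host interfaces are also configured for jumbo MTU; the vmkernel interface and target IP are examples.

# From an ESXi host shell: 8972-byte payload with don't-fragment set
vmkping -I vmk0 -d -s 8972 10.1.16.21

# From a CVM or other Linux host
ping -M do -s 8972 -c 3 10.1.16.21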
Generate an API key using schema version 3 for use by Foundation Central. Be sure to copy the API Key ID and save the Secret Key file; the Secret Key is shown only once.
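The Secret Key downloads as a PEM file. As a sanity check that the saved copy is intact, you can parse it with OpenSSL; this sketch assumes, as is typical for Intersight version 3 keys, that the secret is an EC private key, and the filename is whatever you saved it as.

$ openssl ec -in SecretKey.txt -noout -text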
Ensure your Cisco ID is granted access to download software from CCO. If not, click the Activate link and enter your CCO
login credentials. This step is not required for an air-gapped Cisco Intersight Private Virtual Appliance (PVA).
If an air-gapped Cisco Intersight Private Virtual Appliance (PVA) is used, the Cisco UCS X-series firmware Intersight
Bundle and the Nutanix Phoenix HCI bundle files must be uploaded to the Intersight PVA.
The X-series blade server Intersight Bundle and the Nutanix Phoenix HCI bundle can be downloaded from your PVA
software repository downloads page at https://www.intersight.com. If you have not created a software download page
for your PVA, create one at https://www.intersight.com/pvapp. Use HCI bundle version 5.7 and Intersight Bundle
version 5.2(2.240074) for the Cisco UCS X210c M7 blade servers.
Note: Software Repository will not appear in the PVA until you have claimed a target server.
Two pools must be created before the Nutanix cluster is created: a MAC Address Pool and an IP Address Pool for the blades' IMC access. An optional UUID Pool can also be created to assign unique identifiers to each blade.
Create MAC Address Pool
Prism Central Installation on a Nutanix Cluster
If not already done, deploy Prism Central version 2023.3 or 2024.2 on a Nutanix ESXi or AHV cluster. If not possible, Cisco
recommends deploying Prism Central version 2022.6 on a non-Nutanix ESXi host or cluster.
https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v6_5:upg-vm-install-wc-t.html
# Static IP settings for the VM, typically in /etc/sysconfig/network-scripts/ifcfg-eth0
# (placeholder values shown; replace with your addressing)
NETMASK=xxx.xxx.xxx.xxx
IPADDR=xxx.xxx.xxx.xxx
BOOTPROTO=none
GATEWAY=xxx.xxx.xxx.xxx
Edit the /etc/hosts file to remove all lines containing any entry like “127.0.0.1 NTNX-10-3-190-99-A-CVM” then save
the changes and reboot:
$ sudo vi /etc/hosts
$ sudo reboot
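If you prefer a non-interactive edit over vi, a sed one-liner such as the following removes those entries in one step, assuming they follow the NTNX-...-CVM naming shown above; back up the file first.

$ sudo cp /etc/hosts /etc/hosts.bak
$ sudo sed -i '/127\.0\.0\.1.*NTNX-.*-CVM/d' /etc/hosts
$ sudo reboot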
For the full details on the installation process using Prism Central 2024.2 reference the guide here:
https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-install-prism-central-non-nutanix-c.html
Alert: If Foundation Central was installed before configuring or changing the DNS, NTP or Proxy server addresses, the
Prism Central VM must be rebooted before attempting to install a cluster.
Note: You must register the Nutanix cluster that hosts the Prism Central 2023.3+ VM with Prism Central before you can
successfully enable the marketplace. The required version of Foundation Central is 1.7 or higher.
Note: You must register the Nutanix cluster that hosts the Prism Central VM with Prism Central before you can successfully
run LCM. You may need to run an inventory task once to update LCM, then run an inventory again to scan the system for
available updates. The required version of Foundation Central is 1.7 or higher.
Upgrade Foundation Central via CLI
When using Prism Central running on a non-Nutanix ESXi cluster, Foundation Central will not be upgradeable via LCM and must be upgraded via the CLI, because the cluster running the Prism Central VM cannot be registered with Prism Central. For more information, refer to the following page: https://portal.nutanix.com/page/documents/details?targetId=Field-Installation-Guide-Cisco-HCI-ISM:v1-upgrade-fc-cli-t.html
1. Download the Foundation Central 1.7+ LCM bundle and upload it to the Prism Central VM in the /home/nutanix folder.
2. Log on to the CLI of the Prism Central VM as user nutanix and extract the compressed file contents:
$ mkdir /home/nutanix/fc_installer
$ tar -xf lcm_foundation-central_1.7.tar.gz -C /home/nutanix/fc_installer/
3. Stop Foundation Central:
$ genesis stop foundation_central
4. Remove the existing Foundation Central files:
$ sudo rm -rf /home/docker/foundation_central/*
5. Extract the new Foundation Central files to the correct folder:
$ sudo tar -xJf /home/nutanix/fc_installer/builds/foundation-central-builds/1.7/foundation-central-installer.tar.xz -C /home/docker/foundation_central/
6. Set the directory ownership and permissions:
$ sudo chown -R nutanix:nutanix /home/docker/foundation_central/*
7. Start the Foundation Central service:
$ cluster start
8. In some cases, it may be necessary to reboot the Prism Central server after the manual upgrade.
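Once the service starts, a quick way to confirm Foundation Central is running again is to check the Prism Central VM's service list; exact output varies by release.

$ genesis status | grep -i foundation_central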
Deploy Cisco IMM Transition Toolkit (optional)
Download a supported Nutanix AOS STS or LTS image and the AHV installer here:
https://portal.nutanix.com/page/downloads/list
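Whichever web server hosts them, the AOS bundle and AHV installer must be downloadable anonymously over plain HTTP. A quick check from any machine on the management network is shown below; the server address, path, and filenames are placeholders for your environment. Both requests should return HTTP 200 without prompting for credentials.

$ curl -I http://<webserver-ip>/<path>/<aos_bundle>.tar.gz
$ curl -I http://<webserver-ip>/<path>/<ahv_installer>.iso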
Connect Foundation Central to Cisco Intersight
Select the onboarded nodes to be used in the new cluster, then click Create Cluster.
Click the link to open Prism Element when the installation is complete.
Witness VM Installation and Configuration
Witness VM Use Cases and Requirements
• A Witness VM is highly recommended for 2-node clusters or clusters configured for Metro Availability.
• The witness VM makes failover decisions during network outages or site availability interruptions to avoid split-
brain scenarios.
• The witness VM must reside in a different failure domain from the clusters it is monitoring, meaning it has its own
separate power and independent network communication to both monitored sites.
• The configuration of a witness VM is the same for 2-node clusters and Metro Availability clusters, and a single witness VM can act as witness for up to 50 clusters.
• The witness VM runs only on AHV or ESXi clusters; it cannot be backed up or restored via snapshots and cannot be migrated between vCenter servers.
• Network latency between the two sites and the witness VM must be less than 200ms.
• TCP port 9440 is used and must bypass any proxy servers in the network.
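Once the witness VM is deployed, a simple reachability check can be run from each monitored site toward the witness address, making sure the request bypasses any proxy; the IP address below is an example.

$ curl -k --noproxy '*' -I https://10.2.16.50:9440/
$ nc -zv 10.2.16.50 9440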
Create a new VM with minimum 2 vCPU and 6GB vRAM, add the three disk images as SCSI disks cloned from the
image service, and add a NIC in the appropriate VLAN, then click Save and boot the VM.
NETMASK="xxx.xxx.xxx.xxx"
IPADDR="xxx.xxx.xxx.xxx"
BOOTPROTO="none"
GATEWAY="xxx.xxx.xxx.xxx”
$ sudo reboot
Create the witness VM cluster:
$ cluster -s vm_ip_address --cluster_function_list=witness_vm create
Note: the witness VM command prompt will say "-cvm" in the hostname; make sure you are in fact on the witness VM console and not on an actual cluster Controller VM.
Configure the witness during the first login to Prism Element.
Initial Nutanix Cluster Config for ESXi
Note: It may take a few minutes after adding the nodes for the vCenter to be discovered and allow you to register it.
This is an additional clustered IP address for enabling iSCSI Data Services, which is
required to install Prism Central.
Go to Health
Configure Guest VM Networking for ESXi
Assuming additional VLAN IDs were configured during the installation for guest
VM traffic, additional port groups can be added for the guest VMs with the
appropriate VLAN IDs. Add a new port group to the default vSwitch0 for the
guest VMs, using VLAN ID tags. Repeat for each VLAN required and repeat for all
the hosts in the vCenter cluster so their configuration matches.
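As an example, the same port group could be created from the ESXi Shell on each host; the port group name and VLAN ID below are placeholders, and the vSphere Client or PowerCLI can of course be used instead.

# Add a guest VM port group to vSwitch0 and tag it with VLAN 101 (example values)
esxcli network vswitch standard portgroup add --portgroup-name "GuestVMs-VLAN101" --vswitch-name vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name "GuestVMs-VLAN101" --vlan-id 101

# Confirm the port group and VLAN assignment
esxcli network vswitch standard portgroup list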
Register Cluster with Prism Central
The recommended method for licensing is to use Seamless Licensing via Prism Central, which requires internet access. Clicking on "Manage All Licenses" will prompt you to log in to the Nutanix support portal. Ensure you log in with a valid My Nutanix account that has administrative rights and is entitled to valid licenses. Licenses can be selected and applied to the clusters in the subsequent screens. For more information on licensing, refer to this page:
https://portal.nutanix.com/page/documents/details?targetId=License-Manager:License-Manager
Cluster Expansion Status
Expansion of Cisco HCI with Nutanix clusters using X-Series blades is available with Foundation Central version 1.7 or later, along with AOS 6.8.1 or later, AHV 20230302.101026 or later, and Prism Central 2023.3 or later. Foundation Central can be upgraded via Prism Central's Lifecycle Manager (LCM). If the Intersight Private Virtual Appliance (PVA) is used, it must be upgraded to version 1.1.1-0 or later.
Only clusters with 3+ nodes can be expanded; single-node and 2-node clusters cannot be expanded. Up to 8 nodes can be added to an existing cluster at one time.
A cluster being expanded must be registered with and managed by the Prism Central server that runs Foundation
Central. The existing cluster nodes must have associated server profiles in Cisco Intersight.
The new node(s) to be added to the cluster must be inserted into the Cisco UCS X9508 chassis as with all the other
HCI converged nodes, and the servers must show as fully discovered in Cisco Intersight. Foundation Central is used to
onboard the new node(s) and perform an initial node preparation, which installs the hypervisor and AOS software.
The appropriate matching versions of AHV and AOS must be downloaded and hosted on an anonymous HTTP server, as with the initial installation process. Afterwards, Prism Central is used to expand the cluster to consume the newly prepared node(s).
Select the cluster to be expanded and the node(s) to prepare for the expansion.
Enter the networking information and IP address configuration for the new node(s).
Prepare Additional Node(s) Continued
Select the API key and submit the node preparation job.
Nutanix Lifecycle Manager Status
LCM version 3.1 is required to support Cisco server firmware updates for UCS X-series blades and requires the cluster
to run AOS version 6.8+.
A cluster running a supported version of AOS 6.8 or 6.10 will run LCM 3.0 out of the box. To upgrade LCM 3.0 to 3.1,
run an inventory from the LCM GUI, and select the “Standalone CIMC” management option.
LCM will update itself to version 3.1 during an Inventory task. The first inventory job will show available software
updates, and you will see that LCM is now running version 3.1, but no inventory will be done against the Cisco X-series
blade servers. Run another inventory job, choosing the “Intersight” management option, and filling in the relevant
information including the Intersight API key ID and secret key. This inventory job will complete and show all available
software updates along with any available Cisco server firmware updates.
LCM can now upgrade Cisco server firmware alongside software components as part of an upgrade plan.