
Cisco Compute Hyperconverged with Nutanix
Intersight Managed Mode for Cisco UCS X-Series Installation Field Guide
Document Information
Access the latest version of this document at Cisco Communities: https://community.cisco.com/t5/unified-computing-system-knowledge-base/cisco-compute-hyperconverged-with-nutanix-on-x-series-field/ta-p/5219852
Revision History

Version 1.0 (November 2024): Initial release for Intersight based deployments with M7 generation X-Series servers.
• Prism Central version: 2022.6, 2023.3 or 2024.2
• Foundation Central version: 1.7
• AOS LTS version: 6.10
• AOS STS or eSTS version: 6.8.1
• LCM version: 3.1

© 2024 Cisco and/or its affiliates. All rights reserved.


Contents
• Hardware Configuration

• Cisco Intersight Configuration

• Software Prerequisites Configuration

• Nutanix Installation

• Witness VM Installation and Configuration

• Initial Nutanix Configurations

• Guest VM Networking

• Prism Central Configuration

• Nutanix Cluster Expansion

• Nutanix Lifecycle Manager



Installation Overview
This field guide covers the installation of Nutanix clusters on Cisco UCS X-series servers in Intersight Managed Mode (IMM), i.e. X-series blades in chassis that are connected to Cisco UCS Fabric Interconnects and managed by Cisco Intersight.

Software Prerequisites:
1. Nutanix Prism Central with Foundation Central added from the marketplace
2. Cisco Intersight SaaS account, or the connected or private virtual appliance with sufficient licenses
3. An anonymous web server for hosting installation files, such as the Cisco IMM toolkit VM (optional)
4. NTP sync and DNS name resolution for Cisco Intersight or the Intersight appliance, and Prism Central
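
A quick way to validate the NTP and DNS prerequisites (item 4) is sketched below, assuming a Linux host on the management network; svc.intersight.com is the Intersight SaaS device connector endpoint, and pc.example.local stands in for your Prism Central FQDN (substitute your appliance FQDN if applicable):

# Verify DNS resolution for Cisco Intersight and Prism Central
$ nslookup svc.intersight.com
$ nslookup pc.example.local
# Verify the local clock is synchronized to NTP (command depends on the NTP client in use)
$ chronyc tracking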

Installation workflow:
1. Rack and cable the servers and FIs
2. Configure Fabric Interconnects
3. Claim FIs in Cisco Intersight
4. Configure Domain Profile and Pools
5. Generate Cisco Intersight API keys
6. Install Prism Central and Foundation Central
7. Authenticate Foundation Central to Cisco Intersight
8. Onboard servers in Foundation Central
9. Run Cluster Installation wizard
10. Register Cluster with Prism Central


Software Requirements
• Prism Central: 2022.6, 2023.3 or 2024.2. Prism Central 2022.6 or 2023.3 can be used to install a cluster, but Prism Central 2024.1 is required to claim a cluster running AOS 6.8.1, or 2024.2 for AOS 6.10.

• Foundation Central: 1.7

• Anonymous HTTP server: Cisco IMM Toolkit 4.2.2. Any anonymous HTTP server can be used; the IMM Toolkit is an easy and free VM.

• Nutanix AOS: 6.8.1 or later. Intel 5th generation CPUs require AOS 6.8.1 or later. AOS 6.10 is the current long-term support version.

• Nutanix AHV: AHV-20230302.101026 or AHV-20230302.102001. Use AHV-20230302.101026 along with AOS 6.8.1, and AHV-20230302.102001 along with AOS 6.10.

• Nutanix Lifecycle Manager (LCM): 3.1

• VMware ESXi: 7.0 U3o, 8.0 U1a or 8.0 U2. Use the Cisco custom installation ISO images available for download from Broadcom.

• Cisco Fabric Interconnect Firmware: 4.3(4.240066)

• Cisco Intersight Virtual Appliance: 1.1.1-0 or later

• Cisco UCS X210c-M7 blade Firmware: 5.2(2.240074). Only necessary to download if you are using a local Cisco Intersight Private Virtual Appliance; otherwise the images will be downloaded automatically.

• Nutanix Phoenix OS (HCI Bootstrap): 5.7. Only necessary to download if you are using a local Cisco Intersight Private Virtual Appliance; otherwise the images will be downloaded automatically.


Cisco UCS Hardware Configuration

Cisco Fabric Interconnect Physical Installation

(Diagram: Fabric Interconnects A and B, each connected to the management Ethernet switch and to a serial console router or laptop.)

Note: L1 connects to L1, and L2 connects to L2


Cisco Fabric Interconnect and Chassis Cabling
FI uplinks require 10Gb Ethernet minimum end-to-end.

(Diagram: Fabric Interconnects A and B uplinked to the upstream switch(es), with Intelligent Fabric Modules 1 and 2 of the UCSX 9508 chassis cabled to the FIs.)

Note: Connect 2-8 cables per Intelligent Fabric Module (IFM) to a single Fabric Interconnect (FI), i.e. all cables from IFM 1 to FI A. Do not mix cables between the upper and lower IFMs. Use 25 GbE or 100 GbE cables as appropriate for the model of IFM in the chassis. Start with the left-most ports on the IFMs; there is no need to divide them left to right.
Cisco X-Series Direct Cabling
(Diagram: UCSX-S9108-100G modules A and B in the UCSX 9508 chassis, connected to the uplink switch(es), the management Ethernet switch, and a serial console router or laptop. All 8 QSFP ports can be uplink ports; the left-most pair are unified ports that can also be configured as FC uplinks.)

Note: Connect 2-8 cables per UCS X-series Direct module directly to the upstream Ethernet switches. The recommended configuration is at least one cable per module to a pair of upstream switches running vPC, and to configure the uplinks as port-channels.


Fabric Interconnect Uplink Options
The following uplink topologies are supported; Dual Switch with vPC is the only recommended option:
• Single Switch (not recommended)
• Single Switch with Port Channels (not recommended)
• Dual Switch with/without Port Channels with no vPC (not recommended)
• Dual Switch with vPC

Note: Configure ports on the uplink switches to support jumbo frames
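For reference, a minimal uplink configuration sketch for a pair of Cisco Nexus 9000 switches in the recommended Dual Switch with vPC topology is shown below. The interface, port-channel ID and vPC numbers are hypothetical, and per-interface MTU support varies by switch platform, so adapt this to your environment:

feature lacp
feature vpc

interface Ethernet1/1
  description Uplink to UCS FI-A
  switchport mode trunk
  mtu 9216
  channel-group 10 mode active

interface port-channel10
  description Port-channel to UCS FI-A
  switchport mode trunk
  mtu 9216
  vpc 10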


UCS X-series Direct Uplink Options
The same topology options apply to UCS X-series Direct; Dual Switch with vPC is the only recommended option:
• Single Switch (not recommended)
• Single Switch with Port Channels (not recommended)
• Dual Switch with/without Port Channels with no vPC (not recommended)
• Dual Switch with vPC

Note: Configure ports on the uplink switches to support jumbo frames


Initial UCS Configuration – Fabric Interconnect A
• Connect to FI-A via a serial console router or directly with a serial cable/adapter
• Configure FI-A via the CLI with the values for your UCS domain
• Choose Intersight for the management mode
• 2 IP addresses are required, one per FI
• Enter a valid DNS server to reach the internet

---- Basic System Configuration Dialog ----

This setup utility will guide you through the basic configuration of
the system. Only minimal configuration including IP connectivity to
the Fabric interconnect and its clustering mode is performed through these steps.

Type Ctrl-C at any time to abort configuration and reboot system.
To back track or make modifications to already entered values,
complete input till end of section and answer no when prompted
to apply configuration.

first-setup: Warning: is EMPTY. using switch as name

Starting GUI for initial setup.

Switch can now be configured from GUI. Use https://10.1.16.170 and click
on 'Express Setup' link. If you want to cancel the configuration from GUI and go back,
press the 'ctrl+c' key and choose 'X'. Press any other key to see the installation
progress from GUI

^C
Type 'reboot' to abort configuration and reboot system
or Type 'X' to cancel GUI configuration and go back to console or Press any other
key to see the installation progress from GUI (reboot/bash/X) ? X

Enter the configuration method. (console/gui) ? console

Enter the management mode. (ucsm/intersight)? intersight

The Fabric interconnect will be configured in the intersight managed mode. Choose (y/n) to proceed: y

Enforce strong password? (y/n) [y]: y

Enter the password for "admin":
Confirm the password for "admin":

Enter the switch fabric (A/B) []: A

Enter the system name: deadpool

Physical Switch Mgmt0 IP address : 10.1.16.7
Physical Switch Mgmt0 IPv4 netmask : 255.255.255.0
IPv4 address of the default gateway : 10.1.16.1
DNS IP address : 10.1.16.10

Configure the default domain name? (yes/no) [n]: yes
Default domain name : deadpool.local

Following configurations will be applied:

Management Mode=intersight
Switch Fabric=A
System Name=deadpool
Enforced Strong Password=no
Physical Switch Mgmt0 IP Address=10.1.16.7
Physical Switch Mgmt0 IP Netmask=255.255.255.0
Default Gateway=10.1.16.1
DNS Server=10.1.16.10
Domain Name=deadpool.local

Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.

Configuration file - Ok

Cisco UCS 6500 Series Fabric Interconnect

deadpool-A login:
Initial UCS Configuration – Fabric Interconnect B
• Connect to FI-B via a serial console router or directly with a serial cable/adapter
• Configure FI-B via the CLI with the values for your UCS domain
• Verify you can log in to both FIs as admin using the password you provided

Enter the configuration method. (console/gui) ? console

Installer has detected the presence of a peer Fabric interconnect. This Fabric
interconnect will be added to the cluster. Continue (y/n) ? y

Enter the admin password of the peer Fabric interconnect:
Connecting to peer Fabric interconnect... done
Retrieving config from peer Fabric interconnect... done
Peer Fabric interconnect management mode : intersight
Peer Fabric interconnect Mgmt0 IPv4 Address: 10.1.16.7
Peer Fabric interconnect Mgmt0 IPv4 Netmask: 255.255.255.0

Peer FI is IPv4 Cluster enabled. Please Provide Local Fabric Interconnect Mgmt0 IPv4 Address

Physical Switch Mgmt0 IP address : 10.1.16.8

Local fabric interconnect model(UCS-FI-6536)

Peer fabric interconnect is compatible with the local fabric interconnect. Continuing with the installer...

Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
Applying configuration. Please wait.

Configuration file - Ok
XML interface to system may become unavailable since ssh is disabled

Completing basic configuration setup

Cisco UCS 6500 Series Fabric Interconnect

deadpool-B login:


Cisco Intersight Configuration

Claim Fabric Interconnects in Cisco Intersight
Log in to one Fabric Interconnect’s web console
via HTTPS with a web browser using the IP
address you set. Retrieve the Device ID and the
Claim Code from the console page by clicking on
Device Connector at the top.

In Cisco Intersight, go to the System area, click on Targets, then click Claim a New Target.


Claim Fabric Interconnects in Cisco Intersight Continued

Select Cisco UCS Domain (Intersight Managed) and click Start, then enter the Device ID and Claim Code, then click
Claim.

Note: When using the Cisco Intersight Virtual Appliance, one Fabric Interconnect’s IP address plus the username
and password is used to claim the servers instead of a Device ID and Claim Code.
Upgrade Fabric Interconnect Firmware If Necessary

Minimum version required: 4.3(4.240066)



Create Domain Profile in Cisco Intersight

A Domain Profile must be created and deployed to the Fabric Interconnects. The Domain Profile defines the roles of the ports on the Fabric Interconnects, the VLANs used on the network, and several other domain-wide policy settings such as QoS. After the Domain Profile is deployed, the blades will be discovered and can then be onboarded in Foundation Central and targeted for a Nutanix cluster deployment.


Create Domain Profile Continued

Select the desired Organization in the Intersight account and give the profile a name. Select the Fabric Interconnect pair the profile will be applied to.


Create Domain Profile Continued

Select the VLAN policy for Fabric Interconnect A, then select to create a new VLAN policy.



Create Domain Profile Continued

Enter a name for the VLAN policy for Fabric Interconnects, click Next, then select to add a VLAN.



Create Domain Profile Continued

Enter a prefix and the VLAN ID for the new VLAN, then click to select the Multicast policy. Select to create a new
Multicast policy. Enter a name for the new Multicast policy.



Create Domain Profile Continued

Leave the default settings for the Multicast policy and click Create. Click Add to create the new VLAN in the VLAN
policy.



Create Domain Profile Continued

If any additional VLANs need to communicate via the uplinks from this Fabric Interconnect pair, click Add VLANs and create them as was done for the first VLAN. Click Create to finish creating the VLAN policy. Then select the VLAN policy for Fabric Interconnect B.


Create Domain Profile Continued

Select the VLAN policy that was just created so that both Fabric Interconnects have the same VLAN configuration,
then click Next.



Create Domain Profile Continued

Click to select the Port policy for Fabric Interconnect A then click Create New.



Create Domain Profile Continued

Enter a name for the Port policy and select the model of Fabric Interconnect matching the hardware in use, then click Next. If Fibre Channel ports are required then configure the unified ports, otherwise click Next to continue.


Create Domain Profile Continued

If Breakout ports are required then select them and click the Configure button, otherwise click Next to continue.
Select the ports which will be server ports, i.e. ports connected to the blade chassis, then click the Configure button.



Create Domain Profile Continued

Select Server as the role for server ports then click Save. After returning to the previous screen select the ports
which will be the Ethernet uplinks from the Fabric Interconnects and click the Configure button. Select Ethernet
Uplink as the role then click Save.



Create Domain Profile Continued

If necessary, click on Port Channels and add the multiple Ethernet uplink ports to a port channel. After all the server
and Ethernet uplink ports and their optional port channels are configured click Save. Click to select a port policy for
Fabric Interconnect B, then select the policy just created so the port configurations match.



Create Domain Profile Continued

Select to create a System QoS policy. Give the policy a name then click Next.



Create Domain Profile Continued

Set the Best Effort QoS class to MTU 9216 then click Create. After returning to the previous screen click to select a
Network Control Policy. Enter a name for the policy then click Next.



Create Domain Profile Continued

Set the preferred primary DNS server address then click Create. Click Next to move to the Domain profile summary.



Create Domain Profile Continued

Click Deploy and watch the domain profile progress through Validation and Configuration until the status is OK.



Domain Profile Completed

After the Domain profile is deployed, the chassis will be discovered, and then the blades will be discovered. Once the chassis and blades are discovered, the next steps can be completed.


Generate API Keys

Generate an API key using schema version 3 for use by Foundation Central. Be sure to copy the API Key ID, and copy and save the Secret Key file; it will only be shown once.
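
The Secret Key file downloads as a PEM-encoded private key. A quick sanity check that the file saved intact, assuming a schema version 3 key (which is ECDSA-based) saved as SecretKey.txt on a host with OpenSSL:

$ openssl ec -in SecretKey.txt -noout -check
read EC key
EC Key valid.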



Cisco Intersight Software Download Permissions

Ensure your Cisco ID is granted access to download software from CCO. If not, click the Activate link and enter your CCO
login credentials. This step is not required for an air-gapped Cisco Intersight Private Virtual Appliance (PVA).



Upload Software to Intersight Private Virtual Appliance

If an air-gapped Cisco Intersight Private Virtual Appliance (PVA) is used, the Cisco UCS X-series firmware Intersight Bundle and the Nutanix Phoenix HCI bundle files must be uploaded to the Intersight PVA.
The X-series blade server Intersight Bundle and the Nutanix Phoenix HCI bundle can be downloaded from your PVA software repository downloads page at https://www.intersight.com. If you have not created a software download page for your PVA, create one at https://www.intersight.com/pvapp. Use HCI bundle version 5.7 and Intersight Bundle version 5.2(2.240074) for the Cisco UCS X210c M7 blade servers.

Note: Software Repository will not appear in the PVA until you have claimed a target server.



Create Prerequisite Pools

Two pools must be created before the Nutanix cluster is created: a MAC Address Pool and an IP Address Pool for the blades' IMC access, plus an optional UUID Pool to assign unique identifiers to each blade.
Create MAC Address Pool

Create a MAC Address Pool with a unique starting block address, and with enough addresses to assign at least 2 to each server in the cluster, plus extras for future growth.


Create IP Address Pool

Create an IP Address Pool with valid DNS, and a block with enough addresses to assign 1 to each server in the cluster, plus extras for future growth.

The IP addresses must be in the same layer 2 subnet as the Fabric Interconnects' mgmt0 interfaces, which is known as Out-of-Band IMC access. Alternatively, In-Band IMC access uses a VLAN via the Fabric Interconnects' Ethernet uplinks. The VLAN must be defined in the Domain Profile and VLAN policy, and the IP addresses defined in this pool must be from the layer 2 subnet carried by that VLAN.
Create UUID Pool (Optional)

Create a UUID Pool with a unique prefix and a block with a unique starting address, and with enough addresses to assign 1 to each server in the cluster, plus extras for future growth. If a pool is not used, the UUID of the blade servers' profiles will be based on the hardware UUID, which could limit the ability to move the profile to a different blade in the future.


Software Prerequisites Configuration

Prism Central Installation on a Nutanix Cluster

If not already done, deploy Prism Central version 2023.3 or 2024.2 on a Nutanix ESXi or AHV cluster. If not possible, Cisco
recommends deploying Prism Central version 2022.6 on a non-Nutanix ESXi host or cluster.

Prism Central binaries are available here: https://portal.nutanix.com/page/downloads?product=prism

Additional upgrade path and compatibility information is available here:
https://portal.nutanix.com/page/documents/upgrade-paths and here:
https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix/interoperability
Prism Central Installation on Nutanix continued

Warning: You must provide valid DNS servers in order for the connection to Cisco Intersight to work properly.

Note: Deployment can take 30+ minutes.


Prism Central 2022.6 Installation on non-Nutanix ESXi infra
Cisco recommends deploying Prism Central
2022.6.0.12 for non-Nutanix ESXi only as a temporary
single VM for Foundation Central to install the initial
cluster. Long term, Prism Central deployed on Nutanix
is preferred.

Refer to the following documentation for the installation of Prism Central on ESXi:

https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v6_5:upg-vm-install-wc-t.html


Prism Central 2022.6 Installation on ESXi continued
Power on the VM then open the local vSphere console. Let the VM run through its initial configuration steps for
roughly 15 minutes, the VM will reboot multiple times.
Log on as user: nutanix password: nutanix/4u
Edit the network interface with a static IP address:
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
Add or edit the NETMASK, IPADDR and GATEWAY lines, change BOOTPROTO to none, then save the changes:

NETMASK=xxx.xxx.xxx.xxx
IPADDR=xxx.xxx.xxx.xxx
BOOTPROTO=none
GATEWAY=xxx.xxx.xxx.xxx
Edit the /etc/hosts file to remove all lines containing any entry like “127.0.0.1 NTNX-10-3-190-99-A-CVM” then save
the changes and reboot:
$ sudo vi /etc/hosts
$ sudo reboot
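
If you prefer to make the /etc/hosts edit non-interactively, a one-line sketch, assuming the unwanted entries all begin with 127.0.0.1 followed by an NTNX hostname:

$ sudo sed -i '/^127\.0\.0\.1 NTNX/d' /etc/hosts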



Prism Central 2022.6 Installation on ESXi continued
Log in to the Prism Central VM via SSH as user: nutanix password: nutanix/4u
Run the command to create the Prism Central cluster:
$ cluster --cluster_function_list "multicluster" -s <static_ip_address> --dns_servers "<DNS 1 IP>,<DNS 2 IP>" --ntp_servers "<NTP 1 IP>,<NTP 2 IP>" create
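
For example, with hypothetical addresses filled in, the command looks like this:

$ cluster --cluster_function_list "multicluster" -s 10.1.16.50 --dns_servers "10.1.16.10,10.1.16.11" --ntp_servers "10.1.16.5,10.1.16.6" create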
Log in to the Prism Central VM GUI with a web browser at https://<static_ip_address>:9440 as user: admin password:
nutanix/4u



Prism Central 2024.2 Installation on non-Nutanix ESXi infra
If desired, Prism Central version 2024.2 can be deployed as an OVA on a non-Nutanix ESXi infrastructure platform instead of Prism Central version
2022.6, however the process to do so has additional requirements and is a more difficult installation process. For completeness, the information is
provided here in case Prism Central 2024.2 on a non-Nutanix ESXi platform is the long-term goal. Cisco recommends that an installation of Prism Central
on a non-Nutanix ESXi platform is only used temporarily to install the first Nutanix clusters, and long-term the best solution is to deploy Prism Central on
a Nutanix cluster. The additional details of deploying version 2024.2 versus 2022.6 are as follows:
1. Using Prism Central 2024.2 requires ESXi hosts managed by vCenter, while version 2022.6 does not.
2. A tag, a category and a storage profile must all be created in vCenter, linked and associated with the datastore where the Prism Central VM is
running.
3. Deploying Prism Central 2024.2 and enabling the Microservices Platform requires additional downloads and several additional CLI scripts to be run
before Foundation Central can be enabled and can take an additional 30 minutes to complete.
4. Even after the Microservices Platform is enabled, the Prism Central Marketplace cannot be enabled because the Prism Central VM runs on a cluster
that cannot be registered with Prism Central, therefore Foundation Central cannot be added to the system in the normal manner.
5. Foundation Central can only be enabled via directly accessing a specific URL at https://<pc_ip>:9440/console/#page/explore/foundation_central
after the Microservices Platform has been enabled.
6. Foundation Central version 1.6 will be enabled by default, and version 1.7 is required for installation using Cisco X-series blades. Upgrading Foundation Central via Lifecycle Manager (LCM) is not possible because the Prism Central VM runs on a cluster that cannot be registered with Prism Central. Therefore, Foundation Central must be upgraded via the CLI the same as for Prism Central 2022.6, as described later in this document.

For the full details on the installation process using Prism Central 2024.2, reference the guide here:
https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-install-prism-central-non-nutanix-c.html



Configure DNS in Prism Central

(Screenshots show the procedure for version 2022.6.x and version 2023.3+.)

Alert: If Foundation Central was installed before configuring or changing the DNS, NTP or Proxy server addresses, the
Prism Central VM must be rebooted before attempting to install a cluster.



Configure NTP in Prism Central

(Screenshots show the procedure for version 2022.6.x and version 2023.3+.)

Alert: If Foundation Central was installed before configuring or changing the DNS, NTP or Proxy server addresses, the
Prism Central VM must be rebooted before attempting to install a cluster.



Configure Proxy in Prism Central (If Necessary)

(Screenshots show the procedure for version 2022.6.x and version 2023.3+.)

Alert: If Foundation Central was installed before configuring or changing the DNS, NTP or Proxy server addresses, the
Prism Central VM must be rebooted before attempting to install a cluster.



Install Foundation Central in Prism Central 2022.6.x



Install Foundation Central in Prism Central 2023.3+

Note: You must register the Nutanix cluster that hosts the Prism Central 2023.3+ VM with Prism Central before you can
successfully enable the marketplace. The required version of Foundation Central is 1.7 or higher.



Use LCM to upgrade Foundation Central

Note: You must register the Nutanix cluster that hosts the Prism Central VM with Prism Central before you can successfully
run LCM. You may need to run an inventory task once to update LCM, then run an inventory again to scan the system for
available updates. The required version of Foundation Central is 1.7 or higher.
Upgrade Foundation Central via CLI
When using Prism Central running on a non-Nutanix ESXi cluster, Foundation Central will not be upgradeable via LCM and must be upgraded via the CLI, because the cluster running the Prism Central VM cannot be registered with Prism Central. For more information, refer to the following page: https://portal.nutanix.com/page/documents/details?targetId=Field-Installation-Guide-Cisco-HCI-ISM:v1-upgrade-fc-cli-t.html

1. Download the Foundation Central 1.7+ LCM bundle and upload it to the Prism Central VM in the /home/nutanix folder.
2. Log on to the CLI of the Prism Central VM as user nutanix and extract the compressed file contents:
$ mkdir /home/nutanix/fc_installer
$ tar -xf lcm_foundation-central_1.7.tar.gz -C /home/nutanix/fc_installer/
3. Stop Foundation Central:
$ genesis stop foundation_central
4. Remove the existing Foundation Central files:
$ sudo rm -rf /home/docker/foundation_central/*
5. Extract the new Foundation Central files to the correct folder:
$ sudo tar -xJf /home/nutanix/fc_installer/builds/foundation-central-builds/1.7/foundation-central-installer.tar.xz -C /home/docker/foundation_central/
6. Set the directory ownership and permissions:
$ sudo chown -R nutanix:nutanix /home/docker/foundation_central/*
7. Start the Foundation Central service:
$ cluster start
8. In some cases, it may be necessary to reboot the Prism Central server after the manual upgrade.
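
To confirm the service came back up after starting the cluster services (or after the reboot), a minimal check from the same CLI:

$ genesis status | grep foundation_central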
Deploy Cisco IMM Transition Toolkit (optional)

During installation, the factory installed software can be used or the servers can optionally be re-imaged. If so, the Cisco IMM Toolkit provides an easy HTTP server which can host the AOS, AHV and ESXi installation files. Any anonymous HTTP server can be used. Download the latest IMM Transition Toolkit OVA from here:
https://ucstools.cloudapps.cisco.com/#/downloadApp


Download AOS, AOS Metadata and AHV Software
Consult the Nutanix Compatibility and Interoperability matrix here:
https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix

Download a supported Nutanix AOS STS or LTS image and the AHV installer here:
https://portal.nutanix.com/page/downloads/list

Note: Installation of AOS on Cisco X-series requires AOS 6.8.1 or 6.10 or later, and a corresponding compatible version of AHV or ESXi.


Download VMware Software
Download the supported and compatible Cisco custom ESXi ISOs here:
https://support.broadcom.com/group/ecx/productfiles?subFamily=VMware%20vSphere&displayGroup=VMware%20vSphere%20-%20Standard&release=8.0&os=&servicePk=202631&language=EN


Upload Files to IMM Transition Toolkit

Log in via a web browser. Create a folder for storing the Nutanix installation files if desired. Click on File Upload, then drag-and-drop the AOS, AOS metadata and AHV or ESXi installation files you will use for the cluster installations.
Nutanix Installation

Connect Foundation Central to Cisco Intersight

Note: Only one connection to Cisco Intersight is allowed at a time.



Generate an API Key for Foundation Central




Onboard Servers in Foundation Central

Select only the nodes which will be in the Nutanix clusters to be installed by Foundation Central, choosing Intersight Managed Mode for the X-series blade servers. Only servers without service profiles associated will be shown.
Begin Cluster Creation Wizard

Select the onboarded nodes to be used in the new cluster, then click Create Cluster.



Cluster Creation Continued



Retrieve AOS, metadata and Hypervisor file URLs (optional)



Retrieve Hypervisor file SHA256 checksums (optional)
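
If you are calculating the checksums yourself rather than copying them from the web server, a sketch with a hypothetical AHV ISO filename, on Linux and Windows respectively:

$ sha256sum AHV-20230302.102001.iso
C:\> certutil -hashfile AHV-20230302.102001.iso SHA256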



Cluster Creation Continued

Choose whether to use the factory installed hypervisor and AOS software, or to re-image the servers. The AOS Download URL and AOS Metadata file URL must be provided, then select ESX or AHV as the hypervisor. Provide the Download URL for the Cisco custom ESXi installation ISO or the AHV installation ISO. You must also provide the SHA256 checksum for the hypervisor installation file being used.

Note: AOS version 6.8 and later no longer include the AHV installation files in the AOS image, therefore you must download the AHV installation ISO file and supply its location when imaging the servers.
Cluster Creation Continued

Enter the VLAN ID for the hypervisor host and Nutanix controller VM traffic. This VLAN must have been created in the Domain Profile and VLAN Policy configured earlier.


Cluster Creation Continued
Enter any additional VLAN IDs which should be carried by the vNICs to the blade servers, for instance additional VLANs for the guest VMs. These VLANs must have been created in the Domain Profile and VLAN Policy configured earlier.

LACP is not supported when deploying X-series blade servers or C-series rack-mount servers connected to Fabric Interconnects.

Select the MAC Address Pool created earlier. Most installations will use the default "Out-of-Band" IMC access (i.e. connectivity to the blades' IMC is via the Fabric Interconnect mgmt0 ports). For Out-of-Band, select the IP Address Pool created earlier. For In-Band IMC access (i.e. connectivity to the blades' IMC is via the Fabric Interconnects' Ethernet uplinks) you must choose an IP Address Pool and the VLAN ID for the IMC traffic.


Cluster Creation Continued



Cluster Creation Continued



Cluster Creation Continued



Install Progress



Install Complete

Click the link to open Prism Element when the installation is complete.
Witness VM Installation and Configuration

Witness VM Use Cases and Requirements
• A Witness VM is highly recommended for 2-node clusters or clusters configured for Metro Availability.

• The witness VM makes failover decisions during network outages or site availability interruptions to avoid split-brain scenarios.
• The witness VM must reside in a different failure domain from the clusters it is monitoring, meaning it has its own
separate power and independent network communication to both monitored sites.
• The configuration of a witness VM is the same for 2-node clusters or metro availability clusters, and one witness VM can act as witness for up to 50 clusters.
• The witness VM only runs on AHV or ESXi clusters; it cannot be backed up or restored via snapshots and cannot be migrated between vCenter servers.
• Network latency between the two sites and the witness VM must be less than 200ms.

• TCP port 9440 is used and must bypass any proxy servers in the network (a quick reachability check is sketched after this list).

• For detailed information refer to the following Nutanix document:
https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v6_8:sto-cluster-witness-option-wc-c.html
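
A quick way to verify the port and latency requirements from each monitored site, assuming a hypothetical witness VM address of 10.2.0.50:

# Confirm TCP port 9440 is reachable (and not intercepted by a proxy)
$ nc -zv -w 5 10.2.0.50 9440
# Confirm round-trip latency is well under 200ms
$ ping -c 10 10.2.0.50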



Download Witness VM disk images
Download the witness VM disk images here: https://portal.nutanix.com/page/downloads?product=witnessvm

Download the 3 disk images for deployment on AHV, or the single OVA for deployment on ESXi.


Upload Witness VM images to Prism

Upload the three disk images for deployment on AHV: the boot image, the data image and the home image.


Deploy Witness VM on Nutanix

Create a new VM with minimum 2 vCPU and 6GB vRAM, add the three disk images as SCSI disks cloned from the
image service, and add a NIC in the appropriate VLAN, then click Save and boot the VM.



Deploy Witness VM on ESXi



Configure Witness VM on Nutanix or ESXi
Open the console of the VM and log in as user admin, password Nutanix/4u; you will be prompted to change the password. Edit the network interface with a static IP address:
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
Add the NETMASK, IPADDR and GATEWAY lines, change BOOTPROTO to none, then save the changes and reboot:

NETMASK="xxx.xxx.xxx.xxx"
IPADDR="xxx.xxx.xxx.xxx"
BOOTPROTO="none"
GATEWAY="xxx.xxx.xxx.xxx"

$ sudo reboot
Create the witness VM cluster:
$ cluster -s vm_ip_address --cluster_function_list=witness_vm create
Note: the witness VM command prompt will say "-cvm" in the hostname; make sure you are in fact on the witness VM console and not an actual cluster controller VM.



Configure Witness VM on 2-node cluster
Configure the witness during the first login to Prism Element, or configure it later in Prism Element Settings.


Initial Cluster Configurations
• Initial Configuration for ESXi
• Initial Configuration for AHV

Initial Nutanix Cluster Config for ESXi



Access Prism Element
• Access Prism Element (the built-
in version of Prism) at the cluster
IP address or an individual
controller VM IP address, using
HTTPS at port 9440
• Default username: admin
• Default password (case
sensitive): Nutanix/4u
• Password must be changed on
first login



Accept EULA and Enable Pulse



Prism Element Home



Add Hosts to vCenter Server

In the vSphere Web Client, create a Datacenter and a Cluster, then add the hosts. You will have to move the hosts into the cluster after adding them.

Refer here for the recommended vSphere, DRS and HA settings:
https://portal.nutanix.com/page/documents/details?targetId=vSphere-Admin6-AOS-v6_5:vsp-cluster-settings-vcenter-vsphere-c.html


Prism Element to vCenter Server Registration

Note: It may take a few minutes after adding the nodes for the vCenter server to be discovered and become available to register.



Configure vCenter Server Authentication



Create Storage Containers (Datastores)
Note: After creating the containers, you should manually select them as the HA datastores in the vCenter Cluster Availability settings, when using ESXi.


Set Rebuild Capacity Reservation

• Without this setting enabled, the cluster will accept incoming writes even if all blocks cannot completely heal during failures.

• After enabling, the cluster will refuse new writes if they cannot be fully protected during failures.
Set iSCSI Data Services IP Address

This is an additional clustered IP address for enabling iSCSI Data Services, which is
required to install Prism Central.



Modify Default Passwords on ESXi and CVMs
Follow the instructions here to reset the default administrative passwords on the ESXi hypervisors and the Nutanix controller VMs:
https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000LKXcCAO

Log on to a CVM via SSH, username: nutanix password: nutanix/4u, and change the nutanix user password:

nutanix@NTNX-WMP27210026-A-CVM:10.1.50.21:~$ sudo passwd nutanix
Changing password for user nutanix.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Change the root password on all ESXi hosts with the following script from the CVM:

nutanix@NTNX-WMP27210026-A-CVM:10.1.50.21:~$ echo -e "CHANGING ALL ESXi HOST PASSWORDS. Note - This script cannot be used for passwords that contain special characters ( $ \ { } ^ &)\nPlease input new password: "; read -s password1; echo "Confirm new password: "; read -s password2; if [ "$password1" == "$password2" ] && [[ ! "$password1" =~ [\\\{\$\^\}\&] ]]; then hostssh "echo -e \"${password1}\" | passwd root --stdin"; else echo "The passwords do not match or contain invalid characters (\ $ { } ^ &)"; fi
CHANGING ALL ESXi HOST PASSWORDS. Note - This script cannot be used for passwords that contain special characters ( $ \ { } ^ &)
Please input new password:
Confirm new password:
============= 10.1.50.14 ============
Changing password for root
passwd: password updated successfully
============= 10.1.50.18 ============
Changing password for root
passwd: password updated successfully
(output repeats for each ESXi host in the cluster)

Re-run the NCC password health check after changing the passwords:

nutanix@NTNX-WMP27210026-A-CVM:10.1.50.21:~$ ncc health_checks system_checks default_password_check
Enable NTP on ESXi hosts

Repeat for each ESXi hypervisor host



Configure DNS on ESXi hosts

Repeat for each ESXi hypervisor host



Remediate all NCC Failures and Warnings
Go to Health, run NCC checks, resolve all active alerts, and remediate until all Alerts, Failures and Warnings are gone.


Initial Nutanix Cluster Config for AHV



Access Prism Element
• Access Prism Element (the built-
in version of Prism) at the cluster
IP address or an individual
controller VM IP address, using
HTTPS at port 9440
• Default username: admin
• Default password (case
sensitive): Nutanix/4u
• Password must be changed on
first login



Accept EULA and Enable Pulse



Prism Element Home



Create Storage Containers (Datastores)



Set Rebuild Capacity Reservation

• Without this setting enabled, the cluster will accept incoming writes even if all blocks cannot completely heal during failures.

• After enabling, the cluster will refuse new writes if they cannot be fully protected during failures.
Set iSCSI Data Services IP Address

This is an additional clustered IP address for enabling iSCSI Data Services, which is
required to install Prism Central.



Enable VM High Availability Reservation



Modify Default Passwords on AHV and CVMs
Follow the instructions here to reset the default administrative passwords on the AHV hypervisors, and the Nutanix controller VMs: https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000LKXcCAO
Three accounts on AHV must have their passwords reset: root, admin and nutanix

Log on to a CVM via SSH, username: nutanix password: nutanix/4u
nutanix@NTNX-WMP27210026-A-CVM:10.1.50.21:~$ sudo passwd nutanix
Changing password for user nutanix.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Re-run NCC password health check after changing the passwords
nutanix@NTNX-WMP27210026-A-CVM:10.1.50.21:~$ ncc health_checks
system_checks default_password_check



Remediate all NCC Failures and Warnings
Go to Health, run NCC checks, resolve all active alerts, and remediate until all Alerts, Failures and Warnings are gone.


Guest VM Networking
• Guest VM Networking for ESXi
• Guest VM Networking for AHV

Configure Guest VM Networking for ESXi



Create New Port Groups in vCenter

Assuming additional VLAN IDs were configured during the installation for guest
VM traffic, additional port groups can be added for the guest VMs with the
appropriate VLAN IDs. Add a new port group to the default vSwitch0 for the
guest VMs, using VLAN ID tags. Repeat for each VLAN required and repeat for all
the hosts in the vCenter cluster so their configuration matches.
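
If you prefer the CLI over the vSphere Web Client, an equivalent sketch using esxcli on each host, with a hypothetical port group name and VLAN ID 101:

$ esxcli network vswitch standard portgroup add --portgroup-name GuestVMs-101 --vswitch-name vSwitch0
$ esxcli network vswitch standard portgroup set --portgroup-name GuestVMs-101 --vlan-id 101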



Configure Guest VM Networking for AHV



Create VM Subnet(s)

Assuming additional VLAN IDs were configured during the installation for guest VM traffic, additional subnets can be added for the guest VMs with the appropriate VLAN IDs.

Note: Do not modify the default virtual switch bond type to Active-Active.
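The same subnet can also be created from any CVM with aCLI; a sketch with a hypothetical network name and VLAN ID 101:

$ acli net.create GuestVMs-101 vlan=101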


Prism Central Configuration

Register Cluster with Prism Central

These instructions assume the Prism Central instance or cluster used to deploy the Nutanix cluster will also be the one used long-term for management. If not, deploy a new Prism Central instance or cluster on the new Nutanix cluster, then register that cluster with the Prism Central instance running on itself.
Access Prism Central
• Access Prism Central at the VM
or cluster IP address, using HTTPS
at port 9440
• Default username: admin
• Default password: Nutanix/4u
• Password must be changed on
first login



Prism Central Dashboard



Verify DNS and NTP in Prism Central

Prism Central cannot be upgraded without DNS and NTP configured



Configure Licensing

The recommended method for licensing is to use Seamless Licensing via Prism Central, which requires internet access. Clicking on "Manage All Licenses" will prompt you to log in to the Nutanix support portal. Ensure you log in with a valid My Nutanix account that has administrative rights and is entitled with valid licenses. Licenses can be selected and applied to the clusters in the subsequent screens. For more information on licensing, refer to this page:
https://portal.nutanix.com/page/documents/details?targetId=License-Manager:License-Manager


Cluster Expansion

Cluster Expansion Status
Expansion of Cisco HCI with Nutanix clusters using X-series blades is available with Foundation Central version 1.7 or later, along with AOS 6.8.1 or later, AHV 20230302.101026 or later, and Prism Central 2023.3 or later. Foundation Central can be upgraded via Prism Central's Lifecycle Manager (LCM). If the Intersight Private Virtual Appliance (PVA) is used, it must be upgraded to version 1.1.1-0 or later.

Only clusters with 3+ nodes can be expanded; single node and 2 node clusters cannot be expanded. Up to 8 nodes can
be added to an existing cluster at one time.

A cluster being expanded must be registered with and managed by the Prism Central server that runs Foundation
Central. The existing cluster nodes must have associated server profiles in Cisco Intersight.

The new node(s) to be added to the cluster must be inserted into the Cisco UCS X9508 chassis as with all the other
HCI converged nodes, and the servers must show as fully discovered in Cisco Intersight. Foundation Central is used to
onboard the new node(s) and perform an initial node preparation, which installs the hypervisor and AOS software.
The appropriate matching versions of AHV and AOS must be downloaded and hosted on an anonymous HTTP server, as with the initial installation process. Afterwards, Prism Central is used to expand the cluster to consume the newly
prepared node(s).



Upgrade Foundation Central



Onboard Additional Node(s)



Prepare Additional Node(s)



Prepare Additional Node(s) Continued

Select the cluster to be expanded and the node(s) to prepare for the expansion.



Prepare Additional Node(s) Continued
Enter the download HTTP URLs for the AOS and AHV installer files which match the installed versions on the existing cluster, as listed at the top of the page. The AOS metadata file URL must also be provided; the metadata JSON file can be downloaded from the Nutanix portal in the same location as the installer file. Also, the SHA256 checksum for the AHV installer file must be provided. A local checksum utility must be used to calculate the SHA256 checksum, as the Nutanix portal only provides the MD5 checksum online. The Cisco IMM Transition Toolkit can calculate the checksums if it is being used to host the installation files.
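
If the installer file is already hosted on the HTTP server, one way to calculate the SHA256 checksum is to stream the file through a checksum utility; a sketch with a hypothetical URL:

$ curl -s http://webserver.example.local/nutanix/AHV-20230302.102001.iso | sha256sum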



Prepare Additional Node(s) Continued

Enter the networking information and IP address configuration for the new node(s).
Prepare Additional Node(s) Continued

Select the API key and submit the node preparation job.



Prepare Additional Node(s) Completed



Expand Cluster



Expand Cluster Continued



Expand Cluster Continued

Leave the IPMI IPv4 address as 0.0.0.0; do not enter the IP address assigned to the blade from the Intersight IP address pool.

The X-series blades will only be configured with two vNIC interfaces; therefore, choose eth0 as the active uplink and eth1 as the backup uplink.


Expand Cluster Continued



Expand Cluster Completed



Nutanix Lifecycle Manager

Nutanix Lifecycle Manager Status
LCM version 3.1 is required to support Cisco server firmware updates for UCS X-series blades and requires the cluster
to run AOS version 6.8+.

A cluster running a supported version of AOS 6.8 or 6.10 will run LCM 3.0 out of the box. To upgrade LCM 3.0 to 3.1,
run an inventory from the LCM GUI, and select the “Standalone CIMC” management option.

LCM will update itself to version 3.1 during an Inventory task. The first inventory job will show available software updates, and you will see that LCM is now running version 3.1, but no inventory will be done against the Cisco X-series blade servers. Run another inventory job, choosing the "Intersight" management option, and filling in the relevant information including the Intersight API key ID and secret key. This inventory job will complete and show all available software updates along with any available Cisco server firmware updates.



Run LCM Inventory to See Available Updates

Once LCM is upgraded to version 3.1, choose the "Intersight" management option, select Intersight SaaS or enter your internal Intersight virtual appliance name, and enter the API Key ID and Secret Key to authenticate against Cisco Intersight for firmware upgrades.


View LCM Inventory



Use LCM to Upgrade Cisco Server Firmware

LCM can now upgrade Cisco server firmware alongside software components as part of an upgrade plan.

