Field Installation Guide v4.3
9. Network Requirements.......................................................................................35
10. Hyper-V Installation Requirements.............................................................. 37
11. Set IPMI Static IP Address................................................................................ 41
12. Troubleshooting................................................................................................... 43
Fix IPMI Configuration Problems...................................................................................................................... 43
Fix Imaging Problems........................................................................................................................................... 44
Frequently Asked Questions (FAQ)................................................................................................................ 45
Copyright...................................................................................................................53
License......................................................................................................................................................................... 53
Conventions............................................................................................................................................................... 53
Default Cluster Credentials................................................................................................................................. 53
Version......................................................................................................................................................................... 54
1
FIELD INSTALLATION OVERVIEW
Nutanix installs AHV and the Nutanix Controller VM at the factory before shipping a node to
a customer. To use a different hypervisor (ESXi or Hyper-V) on factory nodes or to use any
hypervisor on bare metal nodes, the nodes must be imaged in the field. This guide provides
step-by-step instructions on how to use the Foundation tool to do a field installation, which
consists of installing a hypervisor and the Nutanix Controller VM on each node and then
creating a cluster. You can also use Foundation to create just a cluster from nodes that are
already imaged or to image nodes without creating a cluster.
Use the procedures described in this document with Nutanix nodes and nodes from OEM
partners. These nodes are pre-imaged with a hypervisor and AOS at the factory, and you
can either create a cluster from these nodes or re-image them with the desired AOS and
hypervisor and then create a cluster. Nodes from other vendors do not have a hypervisor and
AOS installed on them, and you must perform the bare-metal imaging procedures specifically
adapted for those nodes. For those procedures, see the vendor-specific field installation guides,
a complete listing of which is available on the Nutanix support portal. On the portal, click the
hamburger menu icon, go to Documentation > Hardware Compatibility Lists, and then select
the vendor from the Platform filter available on the page.
Note: Use Foundation to image factory-prepared (or bare metal) nodes and create a new cluster
from those nodes. Use the Prism web console (in clusters running AOS 4.5 or later) to image
factory-prepared nodes and then add them to an existing cluster. See the "Expanding a Cluster"
section in the Web Console Guide for this procedure.
A field installation can be performed for either factory-prepared nodes or bare metal nodes.
Note: Foundation supports imaging an ESXi, Hyper-V, or AHV hypervisor on nearly all Nutanix
hardware models, with some restrictions. For a list of supported configurations, log into the
Nutanix support portal and select Documentation > Compatibility Matrix from the main menu.
To check a particular configuration, go to the Filter By fields and select the
desired model, AOS version, and hypervisor in the first three fields. In addition, check the notes at
the bottom of the table.
2
PREPARE FACTORY-IMAGED NODES FOR FOUNDATION
This procedure describes how to install a selected hypervisor and the Nutanix Controller
VM on discovered nodes and how to configure the nodes into a cluster. "Discovered nodes"
are factory-prepared nodes on the same subnet that are not currently part of a cluster. This
procedure runs the Foundation tool through the Nutanix Controller VM (Controller VM–based
Foundation).
• Make sure that the nodes that you want to image are factory-prepared nodes that have not
been configured in any way and are not part of a cluster.
• Physically install the Nutanix nodes at your site. For general installation instructions, see
"Mounting the Block" in the Getting Started Guide. For installation instructions specific to
your model type, see "Rack Mounting" in the NX and SX Series Hardware Administration
Guide.
• Your workstation must be connected to the network on the same subnet as the nodes you
want to image. Foundation does not require an IPMI connection or any special network
port configuration to image discovered nodes. See Network Requirements for general
information about the network topology and port access required for a cluster.
• Determine the appropriate network (gateway and DNS server IP addresses), cluster (name,
virtual IP address), and node (Controller VM, hypervisor, and IPMI IP address ranges)
parameter values needed for installation.
Note: The use of a DHCP server is not supported for Controller VMs, so make sure to assign
static IP addresses to Controller VMs.
Note: Nutanix uses an internal virtual switch to manage network communications between
the Controller VM and the hypervisor host. This switch is associated with a private network
on the default VLAN and uses the 192.168.5.0/24 address space. For the hypervisor, IPMI
interface, and other devices on the network (including the guest VMs that you create on
the cluster), do not use a subnet that overlaps with the 192.168.5.0/24 subnet on the default
VLAN. If you want to use an overlapping subnet for such devices, make sure that you use a
different VLAN.
Note: This method can image discovered nodes or create a single cluster from discovered nodes
or both. This method is limited to factory prepared nodes running AOS 4.5 or later. If you want to
image factory prepared nodes running an earlier AOS (NOS) version, or image bare metal nodes,
see Prepare Bare Metal Nodes for Foundation on page 11.
Procedure
1. Run discovery and launch Foundation (see Discover Nodes and Launch Foundation on
page 6).
2. (Optional) Update Foundation to the latest version (see Upgrade CVM Foundation by Using
the Foundation Java Applet on page 10).
3. Extract the contents of the downloaded ZIP file onto the workstation that you want to use for
imaging, and then double-click nutanix_foundation_applet.jnlp.
The discovery process begins and a window appears with a list of discovered nodes.
Note: A security warning message may appear indicating that the application is from an
unknown source. Click the accept and run buttons to run the application.
4. After the cluster is created successfully, begin configuring the cluster (see Configure a New
Cluster in Prism on page 31).
Note: Use the network configuration tool only on factory-prepared nodes that are not part of
a cluster. Use of the tool on a node that is part of a cluster makes the node inaccessible to the
other nodes in the cluster, and the only way to resolve the issue is to reconfigure the node to the
previous IP addresses by using the network configuration tool again.
Procedure
1. Connect a console to one of the nodes and log on to the Acropolis host with root
credentials.
a. Review the network card details to ascertain interface properties and identify connected
interfaces.
b. Use the arrow keys to shift focus to the interface that you want to configure, and then use
the Spacebar key to select the interface.
Repeat this step for each interface that you want to configure.
c. Use the arrow keys to navigate through the user interface and specify values for the
following parameters:
Launch Foundation
How you launch Foundation depends on whether you used the Foundation Applet to discover
nodes in the same broadcast domain or the crash cart user interface to discover nodes in a
VLAN-segmented network.
• If you used the Foundation Applet to discover nodes in the same broadcast domain, do the
following:
a. In the list of discovered nodes, select the node that has the highest Foundation version.
Note: A warning message may appear stating this is not the highest available version of
Foundation found in the discovered nodes. If you select a node using an earlier Foundation
version (one that does not recognize one or more of the node models), installation may
fail when Foundation attempts to image a node of an unknown model. Therefore, select
the node with the highest Foundation version among the nodes to be imaged. (You can
ignore the warning and proceed if you do not intend to select any of the nodes that have
the higher Foundation version.)
b. (Optional but recommended) Upgrade Foundation on the selected node to the latest
version. See Upgrade CVM Foundation by Using the Foundation Java Applet on
page 10.
c. With the node having the latest Foundation version selected, click the Launch Foundation
button.
Foundation searches the network subnet for unconfigured Nutanix nodes (factory
prepared nodes that are not part of a cluster) and then displays information about
the discovered blocks and nodes in the Discovered Nodes screen. (It does not display
information about nodes that are powered off or in a different subnet.) The discovery
process normally takes just a few seconds.
Note: If you want Foundation to image nodes from an existing cluster, you must first either
remove the target nodes from the cluster or destroy the cluster.
• If you used the crash cart user interface to discover nodes in a VLAN-segmented network, in
a browser on your workstation, enter the following URL: http://CVM_IP_address:8000
Replace CVM_IP_address with the IP address that you assigned to the Controller VM when
using the network configuration tool.
Procedure
1. Obtain the MD5 checksum of the ISO that you want to use.
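For example, on a Linux workstation or the Foundation VM you can compute the checksum with
the standard md5sum utility (the ISO filename here is hypothetical):
$ md5sum VMware-VMvisor-Installer-6.0.0.iso
478e2c6f7a875dd3dacaaeb2b0b38228  VMware-VMvisor-Installer-6.0.0.iso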
What to do next
If the MD5 checksum is listed in the whitelist file that you downloaded from the support portal,
save that file to the workstation that hosts the Foundation VM. If the whitelist file on the
Foundation VM does not contain the MD5 checksum, you can replace that file with the
downloaded file before you begin installation.
Procedure
1. In the Foundation Java applet, select the node on which you want to upgrade Foundation,
and then click Upgrade Foundation.
2. Browse to the folder to which you downloaded the Foundation tarball and double-click the
tarball.
The upgrade process begins. After the upgrade completes, Genesis is restarted on the node,
and that in turn restarts the Foundation service. After the Foundation service becomes
available, the upgrade process reports success.
What to do next
Run Foundation on page 27
3
PREPARE BARE METAL NODES FOR FOUNDATION
This procedure describes how to install a selected hypervisor and the Nutanix Controller VM on
bare metal nodes and optionally configure the nodes into one or more clusters. "Bare metal"
nodes are those that are not factory prepared or cannot be detected through discovery. You
can also use this method to image factory prepared nodes that you do not want to configure
into a cluster.
Note: Imaging or configuring bare metal nodes should be performed only by Nutanix sales
engineers and partners. For any assistance with this procedure, sales engineers and partners
should contact Nutanix support.
• Physically install the nodes at your site. For installing Nutanix hardware platforms, see the
NX and SX Series Hardware Administration and Reference for your model type. For installing
hardware from any other manufacturer, see that manufacturer's documentation.
• Set up the installation environment (see Prepare the Installation Environment on page 12).
Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you
will get a Foundation timeout error if you do not change the boot order back to virtual CD-
ROM in the BIOS.
Note: If STP (spanning tree protocol) is enabled on the ports that are connected to the
Nutanix host, Foundation might time out during the imaging process. Therefore, you must
disable STP by using PortFast or an equivalent feature on the ports that are connected to the
Nutanix host before starting Foundation.
Note: Avoid connecting any device (that is, plugging it into a USB port on a node) that
presents virtual media, such as CDROM. This could conflict with the foundation installation
when it tries to mount the virtual CDROM hosting the install ISO.
• Have ready the appropriate global, node, and cluster parameter values needed for
installation. The use of a DHCP server is not supported for Controller VMs, so make sure to
assign static IP addresses to Controller VMs.
Note: If the Foundation VM IP address set previously was configured in one (typically
public) network environment and you are imaging the cluster on a different (typically
private) network in which the current address is no longer correct, repeat step 13 in Prepare
Workstation on page 12 to configure a new static IP address for the Foundation VM.
• If the nodes contain self-encrypting drives (SEDs), disable encryption on the SEDs before
imaging the nodes. If the nodes contain only SEDs, you can enable encryption after you
create the cluster.
Prepare Workstation
A workstation is needed to host the Foundation VM during imaging. You can perform these
steps either before going to the installation site (if you use a portable laptop) or at the site (if
you can connect to the web).
• Get a workstation (laptop or desktop computer) that you can use for the installation. The
workstation must have at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of
disk space (preferably SSD), and a physical (wired) network adapter.
• Go to the Nutanix support portal and download the following files to a temporary directory
on the workstation.
• Foundation_VM-version#-disk1.vmdk.
This is the Foundation VM VMDK file.
• If you intend to install a hypervisor other than AHV, you must provide the ISO image (see
Hypervisor ISO Images on page 33). Make sure that the hypervisor ISO image is available
on the workstation.
• This procedure describes how to use Oracle VM VirtualBox, a free open source tool used to
create a virtualized environment on the workstation. Download the installer for Oracle VM
VirtualBox and install it with the default options. See the Oracle VM VirtualBox User Manual
for installation and start up instructions (https://www.virtualbox.org/wiki/Documentation).
Note: You can also use a tool such as VMware vSphere instead of Oracle VM VirtualBox.
Procedure
2. Go to the location to which you downloaded the Foundation tar file and extract its contents.
$ tar -xf Foundation_VM_OVF-version#.tar
Note: If the tar utility is not available, use the corresponding utility for your environment.
3. Copy the extracted files to the VirtualBox VMs folder that you created.
2. Click the File option of the main menu and then select Import Appliance from the pull-down
list.
3. Find and select the Foundation_VM-version#.ovf file, and then click Next.
5. In the left column of the main screen, select Foundation_VM-version# and click Start.
The Foundation VM console launches and the VM operating system boots.
6. At the login screen, log in as the nutanix user with the password nutanix/4u.
The Foundation VM desktop appears (after it loads).
a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions
CD Image... from the menu.
A VBOXADDITIONS CD entry appears on the Foundation VM desktop.
b. Click OK when prompted to Open Autorun Prompt and then click Run.
c. Enter the root password (nutanix/4u) and then click Authenticate.
d. After the installation is complete, press the return key to close the VirtualBox Guest
Additions installation window.
e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.
f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux
GUI.
g. After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the
menu on the VirtualBox window for the Foundation VM.
8. Open a terminal session and run the ifconfig command to determine if the Foundation VM
was able to get an IP address from the DHCP server.
If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a
static IP as follows:
Note: Normally, the Foundation VM needs to be on a public network in order to copy selected
ISO files to the Foundation VM in the next two steps. This might require setting a static IP
address.
Note: Selections in the terminal window can be made using the indicated keys only.
(Mouse clicks do not work.)
Procedure
2. If you are installing hypervisors other than AHV, copy the ISO files to the corresponding
directory on the Foundation VM.
Note: You do not have to provide an AHV image. Foundation includes an AHV tar file in
/home/nutanix/foundation/isos/hypervisor/kvm. However, if you want to install a different
version of AHV, download the AHV tar file from the Nutanix support portal and copy it to the
directory.
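If the Foundation VM has network access to the machine holding the ISOs, you can copy them
with scp; a minimal sketch (the ESXi ISO filename and the esx subdirectory are assumptions,
inferred from the kvm directory layout noted above):
$ scp VMware-VMvisor-Installer-6.0.0.iso nutanix@<foundation-vm-ip>:/home/nutanix/foundation/isos/hypervisor/esx/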
3. If you downloaded diagnostics files for one or more hypervisors to your workstation, copy
them to the appropriate directory on the Foundation VM. The directories for the diagnostic
files are as follows:
Procedure
1. Obtain the MD5 checksum of the ISO that you want to use.
What to do next
If the MD5 checksum is listed in the whitelist file that you downloaded from the support portal,
save that file to the workstation that hosts the Foundation VM. If the whitelist file on the
Foundation VM does not contain the MD5 checksum, you can replace that file with the
downloaded file before you begin installation.
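To confirm quickly whether a checksum appears in the whitelist on the Foundation VM, you can
search the file directly; a sketch, assuming the whitelist lives at
~/foundation/config/iso_whitelist.json (location assumed):
$ grep 478e2c6f7a875dd3dacaaeb2b0b38228 ~/foundation/config/iso_whitelist.json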
Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing
tables). A flat switch is often recommended to protect against configuration errors that could
affect the production environment. Foundation includes a multi-homing feature that allows you
to image the nodes using production IP addresses despite being connected to a flat switch. See
Network Requirements on page 35 for general information about the network topology and
port access required for a cluster.
Procedure
1. Make sure that IPv6 is enabled on the network to which the nodes are connected and IPv6
multicast is supported.
2. (Nutanix NX Series) Connect the dedicated IPMI port and any one of the data ports to the
switch. We highly recommend that you use the dedicated IPMI port and a 10G data port. You
may use a 1G port instead of a 10G port at the cost of increased imaging time or imaging
failure. If you use SFP+ 10G NICs and a 1G RJ45 switch for imaging, connect the 10G port to
the switch using one of our approved GBICs. You may also use the shared IPMI/1G port in
place of the dedicated port, as long as the BMC is configured to use it, but it is less reliable
than the dedicated port. Regardless of the ports you choose, physically disconnect all
other ports. The IPMI LAN interfaces of the nodes must be in failover mode (factory default
setting).
Use the following guideline when connecting the IPMI port on G4 and later platforms: if
you use the shared IPMI port, make sure that the connected switch can auto-negotiate to
100 Mbps. This auto-negotiation capability is required because the shared IPMI port can
support 1 Gbps throughput only when the host is online. If the switch cannot auto-negotiate
to 100 Mbps when the host goes offline, make sure to use the dedicated IPMI port instead
Note:
The exact location of the port depends on the model type. See the hardware documentation
for your model to determine the port location. The following figure illustrates the location of
the network ports on the back of an NX-3050 (middle RJ-45 interface).
3. (Lenovo Converged HX Series) Lenovo HX-series systems require that you connect both the
system management (IMM) port and one of the 10 GbE ports. The following figure illustrates
the location of the network ports on the back of the HX3500 and HX5500.
4. (Dell XC series) Connect the iDRAC port and one of the data ports. While some Dell XC
Series systems, such as the Dell XC430-4, support imaging over a 1 GbE network connection,
a 10 GbE connection is recommended for faster and more reliable imaging.
5. (IBM POWER Servers) Connect the dedicated IPMI port and a data port to the network to
which the Foundation VM is connected.
6. Connect the installation workstation (see Prepare Workstation on page 12) to the same
switch as the nodes.
Procedure
2. In the gear icon menu at the top-right corner of the Foundation user interface, click Update
Foundation.
» (Not to be used with Lenovo platforms) To perform a one-click, over-the-air update, click
Update.
The dialog box displays the version to which you can update Foundation.
» (For Lenovo platforms; optional for other platforms) To update Foundation by using an
installer that you downloaded to the workstation, click Browse, browse to and select the
tarball, and then click Install.
Requirements
The Foundation app should be installed on a Mac or Windows computer that meets the
following recommended requirements:
Limitations
• Only Nutanix nodes can be configured with the Foundation app.
• The app is currently in a beta phase. Your experience may not be optimal.
Procedure
2. Download the latest version of the 64-bit Java JDK from http://www.oracle.com/ and install it.
5. Drag the Foundation app to the Applications folder. This installs the Foundation app.
6. Double-click the Foundation app in the Applications folder. This launches the Foundation UI in the
default browser, or you can manually visit http://localhost:8000 in your preferred browser.
Note: To upgrade the app, download and install a higher version of the app from the Nutanix portal.
8. To close the app, right-click the Foundation icon in the Launcher and click Force Quit.
Note: The installation kills any running Foundation process. If you have initiated Foundation with
a previously installed app, ensure that it is complete before launching the installation.
Procedure
1. Download the latest version of the 64-bit Java JDK from http://www.oracle.com/ and install it.
3. Ensure that IPv6 is enabled on the network interface that connects the Windows PC to the
switch.
6. Double-click the Foundation icon on the Desktop or in the Start Menu. This launches the
Foundation UI in the default browser, or you can manually visit
http://localhost:8000/gui/index.html in your preferred browser.
Note: To upgrade the app, download a higher version of the app from the Nutanix portal and perform
a fresh installation. The fresh installation kills any running Foundation operation and updates
the older version to the higher version. If you have initiated Foundation with the older app,
ensure that it is complete before doing a fresh installation of the higher version.
Procedure
1. If the Foundation app is running, right-click the Foundation icon in the Launcher and click Force Quit.
Note: Uninstallation does not remove the log and configuration files that are created by the
Foundation app. For a clean installation, it is recommended that you delete these files manually.
Procedure
• You can upgrade CVM Foundation from version 3.12 or later to 4.3.x by using the
Foundation Java applet or the Prism web console. For information about upgrading
Foundation by using the Java applet, see Upgrade CVM Foundation by Using the
Foundation Java Applet on page 10.
• For information about upgrading Foundation by using the Prism web console, see
Cluster Management > Software and Firmware Upgrades > Upgrading Foundation in
the Prism Web Console Guide.
Note: The over-the-air update functionality for CVM Foundation is available in the
Prism web console only from AOS release 5.0 or higher. If you are running an earlier
AOS release, upload a Foundation tarball to the cluster and perform a one-click
upgrade from the Prism web console.
• To upgrade CVM or standalone Foundation from the Foundation UI, click the version
link in the Foundation UI.
• To upgrade the Foundation app, download a higher version of app from Nutanix
portal and perform a fresh installation. The fresh installation kills any running
Foundation operation and updates the older version to the higher version. If you have
initiated Foundation with the older app, ensure that it is complete before doing a fresh
installation of the higher version.
Upgrade from the Command-Line Interface
• To upgrade Foundation from version 3.1 or later to version 4.3.x, by using the
command line, do the following:
1. Download the Foundation upgrade bundle (foundation-version#.tar.gz) from the
support portal to the /home/nutanix/ directory.
2. Change your working directory to /home/nutanix/.
3. Upgrade Foundation.
$ ./foundation/bin/foundation_upgrade -t foundation-version#.tar.gz
• Serves as a reusable baseline file that saves you from manually re-entering configuration
details in repeat or similar Foundation operations.
• Plan the configuration details in advance and keep it ready in the file. When you run
Foundation later, import this configuration file.
• Invite others to review and edit your planned configuration.
• Import NX nodes from a Salesforce order to avoid manually adding NX nodes.
Note: The configuration file stores only configuration settings, not AOS or hypervisor images.
You can upload images only in the Foundation UI.
Note:
• During this procedure, you assign IP addresses to the hypervisor host, the Controller
VMs, and the IPMI interfaces. Do not assign IP addresses from a subnet that overlaps
with the 192.168.5.0/24 address space on the default VLAN. Nutanix uses an internal
virtual switch to manage network communications between the Controller VM and
the hypervisor host. This switch is associated with a private network on the default
VLAN and uses the 192.168.5.0/24 address space. If you want to use an overlapping
subnet, make sure that you use a different VLAN.
• Mixed-vendor clusters are not supported. For details about restrictions on mixing Nutanix
node models in a cluster, see the section Product Mixing Restrictions in the NX and SX Series
Hardware Administration Guide.
• For a single imaged node that you need to re-image or to form a 1-node cluster
using CVM Foundation, ensure that you launch the CVM Foundation from another node.
Procedure
2. In a web browser inside the Foundation VM, visit http://localhost:8000/gui/index.html.
3. After you have assigned an IP address to the Foundation VM, visit
http://<foundation-vm-ip-address>:8000/gui/index.html from a web browser outside the VM.
For CVM Foundation:
1. In the Foundation Java applet, select a node and click the Launch Foundation button.
2. If you used the crash cart user interface to discover nodes in a VLAN-segmented network,
visit http://<foundation-cvm-ip-address>:8000 from a browser on a workstation
connected to the node's network.
For Foundation app:
1. Double-click the Foundation executable file. For installation process and launch details,
see the sections Install Foundation App on macOS on page 22 or Install Foundation App
on Windows on page 23.
• If you are manually adding multiple blocks in a single instance, all added blocks get
the same number of nodes. To add blocks with different numbers of nodes, add
multiple blocks with the highest number of nodes and then delete nodes from each block, as
applicable. Alternatively, you can repeat the add process to separately add blocks
with different numbers of nodes.
• From the Tools menu, reorder the blocks to match the order of IP addresses
and hypervisor hostnames that you want to assign.
• To remove a node from the Foundation process, un-select a node row and click Remove
Unselected Rows on the Tools menu.
• Individually assign the IP addresses and hypervisor hostname for each node or click
Range Autofill in the Tools menu to bulk-assign these details using the autofill row.
Note: Unlike CVM Foundation, standalone Foundation does not validate these IP
addresses by checking for uniqueness. Hence, manually cross-check and ensure that the IP
addresses are unique and valid.
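Because of this, a quick preflight from the Foundation VM can catch addresses that are
already in use; a minimal sketch, assuming the planned addresses are listed one per line in a
hypothetical ips.txt file:
$ while read ip; do
    ping -c 2 -W 1 "$ip" >/dev/null 2>&1 && echo "WARNING: $ip already responds"
  done < ips.txt
Any address that responds before imaging is likely a conflict and should be reassigned.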
4. The Cluster page lets you provide cluster details and configure cluster formation, or just
image the nodes without forming a cluster. You can also enable network segmentation to
separate CVM network traffic from user VM and hypervisor network traffic.
Note:
• The Cluster Virtual IP field is required for Hyper-V clusters but optional for ESXi
and AHV clusters.
• To provide multiple DNS or NTP servers, enter a comma-separated list of IP
addresses.
• For best practices in configuring NTP servers, see the section Recommendations
for Time Synchronization in Prism Web Console Guide.
5. On the AOS page, you can specify and upload AOS images and also view the version of the
AOS image currently installed on the nodes. You can skip updating CVMs with AOS if the
CVMs of all discovered nodes already have the same AOS version that you want to use.
Note:
• You can select one or more nodes to be Storage-Only nodes, which host AHV
only. You need to image the rest of the nodes with another hypervisor and form a
multi-hypervisor cluster.
• For discovered nodes, if you skip updating CVMs with AOS, you can still re-image
the hypervisors. Hypervisor-only imaging is supported. However, imaging CVMs
with AOS without imaging hypervisors is not supported.
• [Hyper-V only] If you choose Hyper-V, from the Choose Hyper-V SKU list that is
displayed, select the SKU that you want to use.
Four Hyper-V SKUs are supported: Standard, Datacenter, Standard with GUI, and
Datacenter with GUI.
7. Standalone Foundation needs IPMI remote access to the nodes. The standalone Foundation UI
has the additional IPMI page, where you need to provide the IPMI access credentials for each
node.
To provide credentials for all nodes at once, click the Tools menu to either use the autofill
row or assign a vendor's default IPMI credentials to all nodes.
8. The Installation in Progress page displays the progress status and lets you view the
individual Log for in-progress or completed operations of all the nodes. You can click Back
to Configuration to have a read-only view of the configuration details while the installation is
in progress.
Note: You can abort an ongoing installation in standalone Foundation but not in CVM
Foundation.
Results
After all operations complete successfully, the Installation finished page is displayed.
Note: If you have missed something and want to reconfigure and redo the installation, you can
click Reset to go back to the Start page and redo the Foundation process.
7
POST INSTALLATION STEPS
Configure a New Cluster in Prism
About this task
After creating the cluster, you can configure it through the Prism web console. A storage pool
and a container are created automatically when the cluster is created, but many other setup
options require user action. The following are common cluster setup steps typically done
soon after creating a cluster. (All the sections cited in the following steps are in the Prism Web
Console Guide.)
Procedure
1. Verify the cluster has passed the latest Nutanix Cluster Check (NCC) tests.
a. Check the installed NCC version and update it if a later version is available (see the
"Software and Firmware Upgrades" section).
b. Run NCC if you downloaded a newer version or did not run it as part of the install.
Running NCC must be done from a command line. Open a command window, log on to
any Controller VM in the cluster with SSH, and then run the following command:
nutanix@cvm$ ncc health_checks run_all
If the check reports a status other than PASS, resolve the reported issues before
proceeding. If you are unable to resolve the issues, contact Nutanix support for assistance.
c. Configure NCC so that the cluster checks are run and emailed according to your desired
frequency.
nutanix@cvm$ ncc --set_email_frequency=num_hrs
where num_hrs is a positive integer of at least 4 to specify how frequently NCC is run and
results are emailed. For example, to run NCC and email results every 12 hours, specify 12;
or every 24 hours, specify 24, and so on. For other commands related to automatically
emailing NCC results, see "Automatically Emailing NCC Results" in the Nutanix Cluster
Check (NCC) Guide for your version of NCC.
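3. Set the cluster timezone through the nCLI. A minimal sketch, run from any Controller VM
(assuming the standard ncli cluster set-timezone subcommand):
nutanix@cvm$ ncli cluster set-timezone timezone=cluster_timezone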
Replace cluster_timezone with the timezone of the cluster (for example, America/Los_Angeles,
Europe/London, or Asia/Tokyo). Restart all Controller VMs in the cluster after changing the
timezone. Because a cluster can tolerate only a single Controller VM unavailable at any one
time, restart the Controller VMs in a series, waiting until one has finished starting before
proceeding to the next. See the Command Reference for more information about using the
nCLI.
4. If the site security policy allows Nutanix customer support to access the cluster, enable the
remote support tunnel (see the "Controlling Remote Connections" section).
CAUTION: Failing to enable remote support prevents Nutanix support from directly
addressing cluster issues. Nutanix recommends that all customers allow email alerts at
minimum because it allows proactive support of customer issues.
5. If the site security policy allows Nutanix support to collect cluster status information, enable
the Pulse feature (see the "Configuring Pulse" section).
This information is used by Nutanix support to diagnose potential problems and provide
more informed and proactive help.
6. Add a list of alert email recipients, or if the security policy does not allow it, disable alert
emails (see the "Configuring Email Alerts" section).
You also have the option to specify email recipients for specific alerts (see the "Configuring
Alert Policies" section).
7. If the site security policy allows automatic downloads to update AOS and other upgradeable
cluster elements, enable that feature (see the "Software and Firmware Upgrades" section).
Note: Allow access to the following through your firewall to ensure that automatic download
of updates can function:
• *.compute-*.amazonaws.com:80
• release-api.nutanix.com:80
9. For ESXi and Hyper-V clusters, add the hosts to the appropriate management interface.
8
HYPERVISOR ISO IMAGES
Note: For the Lenovo Converged HX Series platform, use the custom ISOs that are available on
the VMware website (www.vmware.com) at Downloads > Product Downloads > vSphere >
Custom ISOs.
Make sure that the MD5 checksum of the hypervisor ISO image is listed in the ISO whitelist file
used by Foundation. See Verify Hypervisor Support on page 9.
The iso_whitelist.json file includes several fields that describe each ISO image.
The following are sample entries from the whitelist for an ESX and an AHV image.
"iso_whitelist": {
"478e2c6f7a875dd3dacaaeb2b0b38228": {
"min_foundation": "2.1",
"hypervisor": "esx",
"min_nos": null,
"friendly_name": "ESX 6.0",
"version": "6.0",
"unsupported_hardware": [],
"compatible_versions": {
"esx": ["^6\\.0.*"]
},
"a2a97a6af6a3e397b43e3a4c7a86ee37": {
"min_foundation": "3.0",
"hypervisor": "kvm",
"min_nos": null,
"friendly_name": "20160127",
"compatible_versions": {
"kvm": [
"^el6.nutanix.20160127$"
]
},
"version": "20160127",
"deprecated": "3.1",
"unsupported_hardware": []
},
9
NETWORK REQUIREMENTS
When configuring a Nutanix block, you will need to ask for the IP addresses of components that
should already exist in the customer network, as well as IP addresses that can be assigned to
the Nutanix cluster. You will also need to make sure to open the software ports that are used
to manage cluster components and to enable communication between components such as
the Controller VM, Web console, Prism Central, hypervisor, and the Nutanix hardware. Nutanix
recommends that you specify information such as a DNS server and NTP server even if the
cluster is not connected to the Internet or not running in a production environment.
Existing IP Addresses
• Default gateway
• Network mask
• DNS server
• NTP server
You should also check whether a proxy server is in place in the network. If so, you will need the
IP address and port number of that server when enabling Nutanix support on the cluster.
New IP Addresses
Each node in a Nutanix cluster requires three IP addresses, one for each of the following
components:
• IPMI interface
• Hypervisor host
• Nutanix Controller VM
All Controller VMs and hypervisor hosts must be on the same subnet. No systems other than
the Controller VMs and hypervisor hosts can be on this network, which must be isolated and
protected.
10
HYPER-V INSTALLATION REQUIREMENTS
Note: If you have a Volume Shadow Copy Service (VSS) based backup tool (for example,
Veeam), the functional level of Active Directory must be 2008 or higher.
• Active Directory Web Services (ADWS) must be installed and running. By default,
connections are made over TCP port 9389, and firewall policies must enable an exception on
this port for ADWS.
To test that ADWS is installed and running on a domain controller, log on by using a domain
administrator account to a Windows host (other than the domain controller) that is joined
to the same domain and has the RSAT-AD-PowerShell feature installed, and run the following
PowerShell command. If the command prints the name of the domain controller,
then ADWS is installed and the port is open.
> (Get-ADDomainController).Name
Note: If any of the above requirements are not met, you need to manually create an Active
Directory computer object for the Nutanix storage in the Active Directory, and add a DNS
entry for the name.
• Ensure that the Active Directory domain is configured correctly for consistent time
synchronization.
Accounts and Privileges:
• An Active Directory account with permission to create new Active Directory computer
objects for either a storage container or Organizational Unit (OU) where Nutanix nodes are
placed. The credentials of this account are not stored anywhere.
• An account that has sufficient privileges to join a Windows host to a domain. The credentials
of this account are not stored anywhere. These credentials are only used to join the hosts to
the domain.
Additional Information Required:
Note: The primary domain controller IP address is set as the primary DNS server on all the
Nutanix hosts. It is also set as the NTP server in the Nutanix storage cluster to keep the
Controller VM, host, and Active Directory time synchronized.
• The fully qualified domain name to which the Nutanix hosts and the storage cluster is going
to be joined.
SCVMM
Requirements:
• The SCVMM version must be at least 2012 R2, and it must be installed on Windows Server
2012 or a newer version.
• The SCVMM server must allow PowerShell remoting.
To test this scenario, log on by using the SCVMM administrator account and run the following
PowerShell command on a Windows host other than the SCVMM host (for example, run the
command from the domain controller). If it prints the name of the SCVMM server, then
PowerShell remoting to the SCVMM server is not blocked.
> Invoke-Command -ComputerName scvmm_server -ScriptBlock {hostname} -Credential MYDOMAIN\username
Replace scvmm_server with the SCVMM host name and MYDOMAIN with Active Directory domain
name.
Note: If the SCVMM server does not allow PowerShell remoting, you can perform the SCVMM
setup manually by using the SCVMM user interface.
• The ipconfig command must run in a PowerShell window on the SCVMM server. To verify,
run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {ipconfig} -Credential MYDOMAIN\username
Replace scvmm_server_name with the SCVMM host name and MYDOMAIN with Active Directory
domain name.
• The SMB client configuration in the SCVMM server should have RequireSecuritySignature set
to False. To verify, run the following command.
> Invoke-Command -ComputerName scvmm_server_name -ScriptBlock {Get-SMBClientConfiguration | FL RequireSecuritySignature}
If you are changing it from True to False, it is important to confirm that the policies that
are on the SCVMM host have the correct value. On the SCVMM host, run rsop.msc to review
the resultant set of policy details, and verify the value by navigating to Servername >
Note: If security signing is mandatory, then you need to enable Kerberos in the Nutanix
cluster. In this case, it is important to ensure that the time remains synchronized between the
Active Directory server, the Nutanix hosts, and the Nutanix Controller VMs. The Nutanix hosts
and the Controller VMs set their NTP server as the Active Directory server, so it should be
sufficient to ensure that Active Directory domain is configured correctly for consistent time
synchronization.
• When adding a host or a cluster to the SCVMM, the run-as account you are specifying for
managing the host or cluster must be different from the service account that was used to
install SCVMM.
• The run-as account must be a domain account and must have local administrator privileges on
the Nutanix hosts. This can be a domain administrator account. When the Nutanix hosts are
joined to the domain, domain administrator accounts automatically take administrator
privileges on the host. If the domain account used as the run-as account in SCVMM is not a
domain administrator account, you need to manually add it to the list of local administrators
on each host by running sconfig.
• SCVMM domain account with administrator privileges on SCVMM and PowerShell remote
execution privileges.
• If you want to install SCVMM server, a service account with local administrator privileges on
the SCVMM server.
IP Addresses
Note: For N nodes, (3*N + 2) IP addresses are required. All IP addresses must be in the same
subnet.
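For example, a four-node cluster requires 3 × 4 + 2 = 14 IP addresses: one IPMI, one hypervisor
host, and one Controller VM address per node, plus two cluster-wide addresses (presumably the
Nutanix cluster virtual IP and the Hyper-V failover cluster IP).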
DNS Requirements
• Each Nutanix host must be assigned a name of 15 characters or less, which gets
automatically added to the DNS server during domain joining.
• The Nutanix storage cluster needs to be assigned a name of 15 characters or less, which
must be added to the DNS server when the storage cluster is joined to the domain.
• Virtual machine and virtual disk paths must always refer to the Nutanix storage cluster by
name, not the external IP address. If you use the IP address, it directs all the I/O to a single
node in the cluster and thereby compromises performance and scalability.
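For example, a virtual disk path of the form \\NTNX-CLUSTER\container1\vm1\disk1.vhdx (names
hypothetical) lets I/O be distributed across the cluster, whereas \\10.1.1.10\container1\vm1\disk1.vhdx
pins all I/O to the single node that owns that address.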
Note: For an external non-Nutanix host that needs to access Nutanix SMB shares, see Nutanix
SMB Shares Connection Requirements from Outside the Cluster.
• When applying Windows updates to the Nutanix hosts, the hosts should be restarted one at
a time, ensuring that the Nutanix services come up fully in the Controller VM of the restarted
host before updating the next host. This can be accomplished by using Cluster Aware
Updating and a Nutanix-provided script, which can be plugged into the Cluster Aware
Update Manager as a pre-update script. This pre-update script ensures that the Nutanix
services go down on only one host at a time, ensuring availability of storage throughout
the update procedure. For more information about cluster-aware updating, see Installing
Windows Updates with Cluster-Aware Updating.
Note: Ensure that automatic Windows updates are not enabled for the Nutanix hosts in the
domain policies.
11
SET IPMI STATIC IP ADDRESS
You can assign a static IP address for an IPMI port by resetting the BIOS configuration.
Procedure
3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.
The BIOS Setup Utility screen appears.
6. Select Update IPMI LAN Configuration, press Enter, and then select Yes in the pop-up
window.
8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on
that node in the pop-up window.
9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the
pop-up window.
10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's
network gateway in the pop-up window.
11. When all the field entries are correct, press the F4 key to save the settings and exit the
BIOS setup mode.
12
TROUBLESHOOTING
This section provides guidance for fixing problems that might occur during a Foundation
installation.
• For help with IPMI configuration problems in a bare metal workflow, see Fix IPMI
Configuration Problems on page 43.
• For help with imaging problems, see Fix Imaging Problems on page 44.
• For answers to other common questions, see Frequently Asked Questions (FAQ) on
page 45.
Fix IPMI Configuration Problems
An IPMI configuration failure can occur for one or more of the following reasons:
• One or more IPMI MAC addresses are invalid, or there are conflicting IP addresses. Go to the
IPMI screen and correct the IPMI MAC and IP addresses as needed.
• There is a user name/password mismatch. Go to the IPMI page and correct the IPMI
username and password fields as needed.
• One or more nodes are connected to the switch through the wrong network interface. Go
to the back of the nodes and verify that the first 1GbE network interface of each node is
connected to the switch (see Set Up the Network on page 19).
• The Foundation VM is not in the same broadcast domain as the Controller VMs for
discovered nodes or the IPMI interface for added (bare metal or undiscovered) nodes. This
problem typically occurs because (a) you are not using a flat switch, (b) some node IP
addresses are not in the same subnet as the Foundation VM, and (c) multi-homing was not
configured.
• If all the nodes are in the Foundation VM subnet, go to the Node page and correct the IP
addresses as needed.
• If the nodes are in multiple subnets, go to the Cluster page and configure multi-homing.
• The IPMI interface is not set to failover. You can check for this through the BIOS.
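If a host is already booted, you can also inspect the BMC LAN settings in-band; a sketch,
assuming ipmitool is available on the host (channel 1 is typical but varies by platform):
$ ipmitool lan print 1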
To identify and resolve IPMI port configuration problems, do the following:
Procedure
1. Go to the Block & Node Config screen and review the problem IP address for the failed
nodes (nodes with a red X next to the IPMI address field).
Hovering the cursor over the address displays a pop-up message with troubleshooting
information. This can help you diagnose the problem. See the service.log file (in
/home/nutanix/foundation/log) and the individual node log files for more detailed information.
2. When you have corrected all the problems and are ready to try again, click the Configure
IPMI button at the top of the screen.
3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.
4. When all nodes have green check marks in the IPMI address column, click the Image Nodes
button at the top of the screen to begin the imaging step.
If you cannot fix the IPMI configuration problem for one or more of the nodes, you can
bypass those nodes and continue to the imaging step for the other nodes by clicking the
Proceed button. In this case you must configure the IPMI port address manually for each
bypassed node (see Set IPMI Static IP Address on page 41).
Fix Imaging Problems
Imaging can fail for one or more of the following reasons:
• A type failure was detected. Check connectivity to the IPMI (bare metal workflow).
• There were network connectivity issues, such as the following:
• The Foundation VM is low on free disk space. Move files elsewhere (if
necessary) and delete them. Foundation needs about 9 GB of free space for Hyper-V and about 3
GB for ESXi or AHV.
• The host boots but complains it cannot reach the Foundation VM. The message varies per
hypervisor. For example, on ESXi you might see a "ks.cfg:line 12: "/.pre" script returned
with an error" error message. Make sure you have assigned the host an IP address on the
same subnet as the Foundation VM or you have configured multi-homing. Also check for IP
address conflicts.
To identify and resolve imaging problems, do the following:
Procedure
1. See the individual log file for any failed nodes for information about the problem.
2. When you have corrected the problems and are ready to try again, click the Image Nodes
(bare metal workflow) button.
3. Repeat the preceding steps as necessary to fix all the imaging errors.
If you cannot fix the imaging problem for one or more of the nodes, you can image those
nodes one at a time (Contact Support for help).
Installation Issues
• Where can I find the Foundation logs?
Logs for the current installation are stored in /home/nutanix/foundation/log, including logs for
certain requests made to the Foundation API. Logs from past installations are stored in
/home/nutanix/foundation/log/archive. In addition, the state of the current install process
is stored in /home/nutanix/foundation/persisted_config.json. You can download
the entire log archive from the following URL: http://foundation_ip:8000/foundation/log_archive_tar
• My installation hangs, and the service log complains about type detection.
Verify that all of your IPMI IPs are reachable through Foundation. (On rare occasion the IPMI
IP assignment will take some time.) If you get a complaint about authentication, double-
check your password. If the problem persists, try resetting the BMC.
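One common way to reset a BMC, assuming ipmitool is available on a machine that can reach
the IPMI address (the IP and credentials are placeholders):
$ ipmitool -I lanplus -H <ipmi_ip> -U <username> -P <password> mc reset cold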
• Installation fails with an error where Foundation cannot ping the configured IPMI IP
addresses.
Verify that the LAN interface is set to failover mode in the IPMI settings for each node.
You can find this setting by logging into IPMI and going to Configuration > Network > Lan
Interface. Verify that the setting is Failover (not Dedicate).
• The diagnostic box was checked to run after installation, but that test (diagnostics.py) does
not complete (hangs, fails, times out).
Running this test can result in timeouts or low IOPS if you are using 1G cables. Such cables
might not provide the performance necessary to run this test at a reasonable speed.
• Foundation seems to be preparing the ISOs properly, but the nodes boot into <previous
hypervisor> and the install hangs.
The boot order for one or more nodes might be set incorrectly to select the USB over SATA
DOM as the first boot device instead of the CDROM. To fix this, boot the nodes into BIOS
mode and either select "restore optimized defaults" (F3 as of BIOS version 3.0.2) or give the
CDROM boot priority. Reboot the nodes and retry the installation.
• I have misconfigured the IP addresses in the Foundation configuration page. How long is the
timeout for the call back function, and is there a way I can avoid the wait?
The call back timeout is 60 minutes. To stop the Foundation process and restart it, open up
the terminal in the Foundation VM and enter the following commands:
$ sudo /etc/init.d/foundation_service stop
$ cd ~/foundation/
$ mv persisted_config.json persisted_config.json.bak
$ sudo /etc/init.d/foundation_service start
Refresh the Foundation web page. If the nodes are still stuck, reboot them.
• I need to reset a block to the default state.
Using the bare metal imaging workflow, download the desired Phoenix ISO image for
AHV from the support portal (see https://portal.nutanix.com/#/page/phoenix/list). Boot
each node in the block to that ISO and follow the prompts until the re-imaging process is
complete. You should then be able to use Foundation as usual.
• The cluster create step is not working.
If you are installing NOS 3.5 or later, check the service.log file for messages about the
problem. Next, check the relevant cluster log (cluster_X.log) for cluster-specific messages.
The cluster create step in Foundation is not supported for earlier releases and will fail if you
are using Foundation to image a pre-3.5 NOS release. You must create the cluster manually
(after imaging) for earlier NOS releases.
• I want to re-image nodes that are part of an existing cluster.
Do a cluster destroy prior to discovery. (Nodes in an existing cluster are ignored during
discovery.)
• My Foundation VM is complaining that it is out of disk space. What can I delete to make
room?
Unmount any temporarily-mounted file systems using the following commands:
$ sudo fusermount -u /home/nutanix/foundation/tmp/fuse
$ sudo umount /tmp/tmp*
$ sudo rm -rf /tmp/tmp*
If more space is needed, delete some of the Phoenix ISO images from the Foundation VM.
• I keep seeing the message "tar: Exiting with failure status due to previous errors.
'tar rf /home/nutanix/foundation/log/archive/log-archive-20140604-131859.tar -C /home/nutanix/foundation ./persisted_config.json' failed; error ignored."
This is a benign message. Foundation archives your persisted configuration file
(persisted_config.json) alongside the logs. Occasionally, there is no configuration file to back
up. This is expected, and you may ignore this message with no ill consequences.
• Imaging fails after changing the language pack.
Do not change the language pack. Only the default English language pack is supported.
Changing the language pack can cause some scripts to fail during Foundation imaging. Even
after imaging, character set changes can cause problems for NOS.
• [ESXi] Foundation is booting into pre-install Phoenix, but not the ESXi installer.
Check the BIOS version and verify it is supported. If it is not a supported version, upgrade it.
• I get "This Kernel requires an x86-64 CPU, but only detected an i686 CPU" when trying to
boot the VM on VirtualBox.
The VM needs to be configured to expose a 64-bit CPU. For more information, see
https://forums.virtualbox.org/viewtopic.php?f=8&t=58767.
• I am running the network setup script, but I do not see eth0 when I run ifconfig.
This can happen when you make changes to your VirtualBox network adapters. VirtualBox
typically creates a new interface (eth1, then eth2, and so on) to accommodate your new
settings. To fix this, run the following commands:
$ sudo rm /etc/udev/rules.d/70-persistent-net.rules
$ sudo shutdown -r now
This should reboot your machine and reset your adapter to eth0.
• I have plugged in the Ethernet cables according to the directions and I can reach the IPMI
interface, but discovery is not finding the nodes to image.
Your Foundation VM must be in the same broadcast domain as the Controller VMs to receive
their IPv6 link-local traffic. If you are installing on a flat 1G switch, ensure that the 10G cables
are not plugged in. (If they are, the Controller VMs might choose to direct their traffic over
that interface and never reach your Foundation VM.) If you are installing on a 10G switch,
ensure that only the IPMI 10/100 port and the 10G ports are connected.
• The switch is dropping my IPMI connections in the middle of imaging.
If your network connection seems to be dropping out in the middle of imaging, try using an
unmanaged switch with spanning tree protocol disabled.
• Foundation is stalled on the ping home phase.
The ping test will wait up to two minutes per NIC to receive a response, so a long delay
in the ping phase indicates a network connection issue. Check that your 10G cables are
unplugged and your 1G connection can reach Foundation.
• How do I install on a 10/100 switch?
A 10/100 switch is not recommended, but it can be used for a few nodes. However, you may
see timeouts. It is highly recommended that you use a 1G or 10G switch if one is available.
Informational Topics
• How can I determine whether a node was imaged with Foundation or standalone Phoenix?
• A node imaged using standalone Phoenix will have the file /etc/nutanix/
foundation_version in it, but the contents will be “unknown” instead of a valid foundation
version.
• A node imaged using Foundation will have the file /etc/nutanix/foundation_version in it
with a valid foundation version.
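To check, read the file on the node:
$ cat /etc/nutanix/foundation_version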
• Does first boot work when run more than once?
First boot creates a failure marker file whenever it fails and a success marker file
whenever it succeeds. If the first boot script needs to be executed again, delete these marker
files and manually execute the script.
• Do the first boot marker files contain anything?
They are just empty files.
• Why might first boot fail?
Possible reasons include the following:
• First boot may take more time than expected, in which case Foundation might time out.
• NIC team creation fails.
• The Controller VM has a kernel panic when it boots.
• Hostd does not come up in time.
• What is the timeout for first boot?
The timeout is 90 minutes. A node may restart several times (requirements from certain
driver installations) during the execution of the first boot script, and this can increase the
overall first boot time.
• How does the Foundation process differ on a Dell system?
Foundation uses a different tool called racadm to talk to the IPMI interface of a Dell system,
and the files which have the hardware layout details are different. However, the overall
Foundation work flow (series of steps) remains the same.
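For reference, a typical remote racadm invocation looks like the following (the subcommand shown is just an example):
$ racadm -r ipmi_ip -u ipmi_username -p ipmi_password getsysinfo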
• How does the Foundation service start in the Controller VM-based and standalone versions?
• Standalone: Manually start the Foundation service using foundation_service start (in the
~/foundation/bin directory).
• Controller VM-based: The Genesis service starts the Foundation service. If the
Foundation service is not already running, use the genesis restart command to start it.
If the Foundation service is already running, genesis restart will not restart it; you
must manually kill the running Foundation service before executing genesis restart. The
genesis status command lists the currently running services along with their PIDs.
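For example, to restart Foundation when it is already running (the PID comes from the genesis status output):
$ genesis status | grep foundation
$ kill <foundation_pid>
$ genesis restart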
• Why doesn’t the genesis restart command stop Foundation?
Genesis restarts only the services required for a cluster to be up and running. Stopping
Foundation could cause current imaging sessions to fail. For example, when expanding a
cluster, Foundation may be in the process of imaging a node, and that process should not
be disrupted by restarting Genesis.
• How is the installer VM created?
The QEMU package is included in Phoenix. The qemu command takes a hypervisor ISO and disk
details as input and is executed on Phoenix to launch the installer VM.
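As a rough sketch only (the actual flags Phoenix passes are internal and will differ):
$ qemu-system-x86_64 -m 4096 -cdrom hypervisor.iso -drive file=/dev/sda,format=raw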
• How do you validate that installation is complete and the node is ready with regard to
first boot?
This can be validated by checking the presence of a first boot success marker file. The
marker file varies per hypervisor:
• ESXi: /bootbank/first_boot.log
• AHV: /root/.firstboot_success
• Hyper-V: D:\markers\firstboot_success
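A quick check, using the shell conventions from this guide:
root@host# ls /bootbank/first_boot.log # ESXi
root@host# ls /root/.firstboot_success # AHV
> dir D:\markers\firstboot_success # Hyper-V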
• Does Repair CVM re-create partitions?
Repair CVM images AOS alone and recreates the partitions on the SSD. It does not touch the
SATADOM, which contains the hypervisor.
• Can I use older Phoenix ISOs for manual imaging?
Use a Phoenix ISO that contains both the AOS installation bundle and the hypervisor ISO.
The Makefile has a separate target for building such a standalone Phoenix ISO.
• What are the pre-checks run when a node is added?
• The hypervisor type and version should match between the existing cluster and the new
node.
• The AOS version should match between the existing cluster and the new node.
• Can I get a map of percent completion to step?
No. Percent completion does not map one-to-one to steps; it depends on the tasks that are
actually executed during imaging.
• Do the log folders contain past imaging session logs?
Yes. All the previous imaging session logs are compressed (on a session basis) and archived
in the folder ~/foundation/log/archive.
• If I have two clusters in my lab, can I use one to do bare-metal imaging on the other?
No. The tools and packages required for bare-metal imaging are typically not present in
the Controller VM.
• How do you add a new node that needs to be imaged to an existing cluster?
If the cluster is running AOS 4.5 or later and the node also has 4.5 or later, you can use the
"Expand Cluster" option in the Prism web console. This option employs Foundation to image
the new node (if required) and then adds it to the existing cluster. You can also add the
node through the nCLI: ncli cluster add-node node-uuid=<uuid>. The UUID value can be
found in the factory_config.json file on the node.
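For example, assuming the UUID is stored under a node_uuid key in that file (the key name is an assumption):
$ grep node_uuid /etc/nutanix/factory_config.json
$ ncli cluster add-node node-uuid=<uuid>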
• Is it required to supply IPMI details when using the Controller VM-based Foundation?
It is optional to provide IPMI details in Controller VM-based Foundation. If IPMI information is
provided, Foundation will try to configure the IPMI interface as well.
• Is it valid to use a share to hold AOS installation bundles and hypervisor ISO files?
AOS installation bundles and hypervisor ISO files can reside anywhere, but there must be
a link (as appropriate) in ~/foundation/nos or ~/foundation/isos/hypervisor/
[esx|kvm|hyperv]/ that points to the share location, because Foundation picks up files
only from these standard locations. As long as a file is accessible from these locations
through a link, Foundation will pick it up.
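For example, to link files from a mounted share (paths and file names are illustrative):
$ ln -s /mnt/share/nutanix_installer_package.tar.gz ~/foundation/nos/
$ ln -s /mnt/share/esxi_installer.iso ~/foundation/isos/hypervisor/esx/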
• Where is Foundation located in the Controller VM?
/home/nutanix/foundation
• How can I determine if a particular (standalone) Foundation VM can image a given cluster?
Execute the following command on the Foundation VM and see whether it returns
successfully (exit status 0):
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password fru
If this command is successful, the Foundation VM can be used to image the node. This is the
command used by Foundation to get hardware details from the IPMI interface of the node.
The exact tool used for talking to the SMC IPMI interface is the following:
java -jar SMCIPMITool.jar ipmi_ip username password shell
If this command is able to open a shell, imaging will not fail because of an IPMI issue.
Any other errors, such as violating minimum requirements, are shown only after Foundation
starts imaging the node.
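To check the exit status explicitly:
$ ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password fru
$ echo $?
0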
• How do I determine whether a particular hypervisor ISO will work?
The MD5 hashes of all qualified hypervisor ISO images are listed in the iso_whitelist.json
file, which is located in ~/foundation/config/. The latest version of the
iso_whitelist.json file is available from the Nutanix support portal (see Hypervisor ISO
Images on page 33).
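For example, to check whether an ISO is whitelisted (the file name is illustrative):
$ md5sum esxi_installer.iso
$ grep <md5_hash_from_previous_command> ~/foundation/config/iso_whitelist.json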
• How does Foundation mount an ISO over IPMI?
The java command starts a shell with access to the remote IPMI interface. The vmwa
command mounts the ISO file virtually over IPMI. Foundation then opens another terminal
and uses the following commands to set the first boot device to CD-ROM and restart the
node.
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis bootdev cdrom
ipmitool -H ipmi_ip -U ipmi_username -P ipmi_password chassis power reset
• What is Pynfs?
It is a Python implementation of an NFS share that was used in the early days of
Foundation. It is still used on platforms with a 16 GB DOM.
• Is there a reason for using port 8000?
No specific reason.
COPYRIGHT
Copyright 2019 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws. Nutanix and the Nutanix logo are registered trademarks of Nutanix, Inc. in the
United States and/or other jurisdictions. All other brand and product names mentioned herein
are for identification purposes only and may be trademarks of their respective holders.
License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.
Conventions
Convention Description
root@host# command The commands are executed as the root user in the vSphere or Acropolis host shell.
> command The commands are executed in the Hyper-V host shell.
Default Cluster Credentials
Interface Target Username Password
Version
Last modified: July 12, 2019 (2019-07-12T15:24:32+05:30)