DCPG002 PDF
System Management
Study Guide
XClarity Administrator 33
XClarity Controller 46
IMM 73
CMM 86
Figure 1-1 shows a broad overview of the capabilities of Lenovo XClarity Administrator.
Lenovo XClarity is available in two editions: Lenovo XClarity Administrator and the Lenovo XClarity Pro bundle.
XClarity Administrator is the foundational element in the XClarity solution and can be used by itself to simplify hardware
management across multiple systems. XClarity Pro includes Administrator and XClarity Integrators for Microsoft
System Center and VMware vCenter, which enables managing Lenovo hardware from those external tools and
provides advanced automation in clustered environments.
Table 1-2 Comparing Lenovo XClarity Administrator with other management products
Configuring the NTP server: When you decide where to configure the NTP server, consider using the host on which
XClarity Administrator is installed. If you do so, ensure that the host is accessible over the management network
(typically the Eth0 interface).
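As a quick sanity check that the chosen NTP server is reachable over the management network, the following sketch sends a minimal SNTP query using only the Python standard library. This is a generic reachability check, not an XClarity Administrator tool.

```python
import socket
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def ntp_to_unix(ntp_seconds):
    """Convert an NTP timestamp (seconds since 1900) to a Unix timestamp."""
    return ntp_seconds - NTP_EPOCH_OFFSET

def query_ntp(host, port=123, timeout=5.0):
    """Send a minimal SNTP request (version 3, client mode) and return the
    server's transmit time as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\x00"  # first byte: LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (host, port))
        data, _ = s.recvfrom(512)
    fields = struct.unpack("!12I", data[:48])  # 48-byte SNTP header as 12 words
    return ntp_to_unix(fields[10])  # index 10: transmit timestamp, seconds part
```

For example, `query_ntp("ntp.example.local")` (a placeholder host name, not from this guide) returns the server time as a Unix timestamp, which confirms that UDP port 123 is reachable from the management network.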
Host prerequisites
The XClarity Administrator appliance runs as a VM on the host system. The following hypervisors are supported for
installing the XClarity Administrator appliance:
• VMware ESXi 5.1 update 1 or Version 5.5 or later
• Microsoft Windows Server 2012 or higher with Hyper-V installed
The host system that is running the XClarity Administrator appliance VM has the following
minimum requirements:
• Two 4-core Intel Xeon processors.
• 6 GB of memory
• A minimum of 64 GB of storage for use by XClarity Administrator
When firmware updates are applied via XClarity Administrator, the managed system must be restarted.
Therefore, if XClarity Administrator is hosted on a managed Flex System compute node, you cannot use
XClarity Administrator to apply firmware updates to every server in that chassis, because the updates would
include the compute node that hosts XClarity Administrator. Restarting that compute node (the host system)
also restarts XClarity Administrator, which makes XClarity Administrator unavailable to complete the updates.
Therefore, you must exclude this Flex compute node when updates are applied to the Flex chassis.
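The restriction above can be folded into update planning: when building the list of chassis nodes to update, exclude the node that hosts the XClarity Administrator VM and handle it out of band. A minimal sketch (node names are hypothetical):

```python
def plan_chassis_updates(nodes, lxca_host_node):
    """Split chassis nodes into those that XClarity Administrator can update
    directly and those that must be updated out of band.

    nodes: list of compute-node names in the chassis (hypothetical names).
    lxca_host_node: name of the compute node that hosts the XClarity
        Administrator VM, or None if it is hosted outside the chassis.
    """
    updatable = [n for n in nodes if n != lxca_host_node]
    deferred = [n for n in nodes if n == lxca_host_node]
    return updatable, deferred

# Example: the node in bay 3 hosts the XClarity Administrator appliance.
updatable, deferred = plan_chassis_updates(
    ["bay01", "bay02", "bay03", "bay04"], "bay03")
```

The deferred node would then be updated by another means (for example, from a second XClarity Administrator instance or the management module directly) after its workloads, including XClarity Administrator, are moved or shut down.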
Supported hardware
Lenovo XClarity Administrator supports several System x servers and Flex System compute nodes and other devices.
Requirements for IMMv2 Advanced FoD key on System x rack servers: Operating System Deployment and
Remote Control features of Lenovo XClarity Administrator require that the IMMv2 Advanced Feature on
Demand (FoD) key is installed on System x servers.
However, XClarity Administrator can still manage the server if the IMMv2 Advanced FoD key is not
enabled. In the case of Remote Control, if the FoD key is not detected on a server, the remote-control
session displays the “Missing activation key” message for that server when the list of available servers is
shown.
At the time of this writing, the latest version of XClarity Administrator was v1.0.3 build 2. Table 2-1 and Table 2-3 on
page 8 list the System x servers and Flex System endpoints that can be managed by XClarity Administrator. For the
latest information about supported systems, see the support site for Lenovo XClarity Administrator at the following
URL: http://pic.dhe.ibm.com/infocenter/flexsys/information/topic/com.lenovo.lxca.doc/plan_supportedhw.html
Support with limitations: Some endpoints are supported with limitations, as listed in
Table 2-1 and Table 2-3 on page 8. For more information about these limitations, see 2.1.3,
“Restrictions on supported hardware” on page 10.
Table 2-3 lists the Flex Chassis and components and the level of support for Lenovo XClarity
Administrator. For more information about limitations, see 2.1.3, “Restrictions on supported
hardware” on page 10.
Table 2-3 Flex Systems: Supported compute nodes, chassis, and switches
As an example of the difference between supported systems and systems with limited support for XClarity
Administrator, the CPU Subsystem-Level Utilization history for an x3550 M5 (a supported system) is shown in
Figure 2-1.
Figure 2-1 Example of CPU Subsystem-Level Utilization for a supported system (x3550 M5)
The Memory Subsystem-Level Utilization history for the same server is similar to the graph that is shown in Figure 2-2
on page 11.
Historical memory data: Under some circumstances, historical memory usage is not available. At the time
of this writing, this issue is a known defect that is being investigated.
Figure 2-2 Memory Subsystem-Level Utilization history for a supported system (x3550 M5)
Figure 2-4 Option to Apply settings to Chassis switch internal ports, where applicable
At the time of this writing, the switch-related settings can apply only to the following types of
switches:
• Flex System Fabric CN4093 10Gb Converged Scalable Switch
• Flex System Fabric EN4093R 10Gb Scalable Switch
• Flex System Fabric SI4093 System Interconnect Module
• Flex System Fabric SI4091 System Interconnect Module
Power Systems compute nodes
Flex System compute nodes with POWER processors are not supported; however, these
systems are still displayed in the chassis views, and you can view the properties and status
of these compute nodes.
Chassis that feature CMMs are supported with limitations. The following functions are not
available:
• Aggregated event and audit logs from I/O Modules
• Network configuration (configuring the Flex Network Switches internal ports via configuration patterns for the I/O
adapters, as shown in Figure 2-4)
Incompatibility with CMM and CMM2: A CMM and a Lenovo CMM2 cannot be installed in the same chassis
at the same time. The firmware on a CMM cannot be upgraded to change it to a Lenovo CMM2 because
they contain different hardware.
Lenovo Flex switches are supported with Lenovo CMMs only.
If a switch is in stacked or protected mode, you cannot update its firmware by using XClarity Administrator.
For more information about I/O adapter support, see this Lenovo XClarity Supported Devices website:
https://ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-XCLACOM
Table 2-1 on page 7 lists the Rack Server Models and the level of support for Lenovo XClarity Administrator.
Using the latest versions: Browser support is limited to the web browser versions that were tested. If you
have a newer browser version than the versions that are listed here, the browser might not be officially
supported. However, this lack of official support does not mean that the browser does not work.
We recommend the use of the documented supported web browsers.
Port requirements
There are several ports that must be available, depending on how firewalls are implemented in your environment. If
these ports are blocked or used by another process, some Lenovo XClarity Administrator functions might not work.
Review the following to determine which ports need to be opened based on your environment:
• Access to the Lenovo XClarity Administrator server
If the Lenovo XClarity Administrator server and all managed endpoints are behind a firewall and you intend
to access those devices from a browser that is outside of the firewall, you must ensure that the Lenovo
XClarity Administrator ports are open. If SNMP and SMTP are used for event management, you might also
need to ensure that the ports that are used by the Lenovo XClarity Administrator server for event forwarding
are open.
The Lenovo XClarity Administrator server listens on (and responds through) the ports that are listed in Table 2-5.
If you intend to install operating systems on managed endpoints through the Lenovo XClarity Administrator server,
make sure that you review the list of ports in Table 2-8 on page 18.
Table 2-7 Ports that must be open between Lenovo XClarity Administrator and managed endpoints
For more information about ports that must be available for deployed operating systems, see the following Information
Center page:
http://pic.dhe.ibm.com/infocenter/flexsys/information/topic/com.lenovo.lxca.doc/operating_system_firewall_rules_for_deployment.html
If you are deploying Microsoft Windows, the ports that are listed in Table 2-9 must be available for Windows profiles.
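The actual port numbers live in Tables 2-5 through 2-9, which are not reproduced in this chunk. Once you have them, a firewall audit can be scripted with a simple TCP reachability check; the port numbers in the `REQUIRED` map below are purely illustrative placeholders, not the real table contents.

```python
import socket

def dedupe_ports(port_map):
    """Flatten a {service: [ports]} map into a sorted, de-duplicated port list."""
    return sorted({p for ports in port_map.values() for p in ports})

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative only -- substitute the actual ports from Tables 2-5 to 2-9.
REQUIRED = {"https-ui": [443], "snmp-traps": [162], "smtp-forwarding": [25]}
```

A usage loop such as `for p in dedupe_ports(REQUIRED): print(p, tcp_port_open("lxca.example.local", p))` (placeholder host) would report which required ports are reachable from a given vantage point. Note that UDP-based services (such as SNMP traps) need a different check than a TCP connect.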
Management considerations
There are several alternatives to choose from when managing endpoints. Depending on the endpoints that are
managed, you might need multiple management solutions that are running at the same time.
Lenovo XClarity Administrator provides hardware management for System x rack servers and Flex System devices,
including the CMM, compute nodes, and Flex switches.
Note: Lenovo offers the Lenovo XClarity Integrator for Microsoft System Center and the Lenovo XClarity
Integrator for VMware vCenter to enable more System x and Flex monitoring and management from within
the Microsoft and VMware interfaces.
For more information about these solutions, see the following websites:
Lenovo XClarity Integrator for VMware vCenter:
http://ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-VMWARE
Lenovo XClarity Integrator for Microsoft System Center:
http://ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-MANAGE
• The Flex System compute nodes with POWER processors and Flex System V7000 Storage Node cannot be
managed by XClarity Administrator.
These devices appear in the graphical interface for XClarity Administrator and you can view device properties and
status. You also can power on and off a storage node, virtually reseat the canisters, and start the management module.
However, you must use other management alternatives to take any actions on the devices, such as updating or
configuring the devices. Consider the following points:
– Use a Hardware Management Console to manage POWER-based compute nodes.
You can use the Power Systems Hardware Management Console to manage these devices even if you also
are managing that chassis by using XClarity Administrator.
– Use the web interface or the CLI that is provided with the Flex System V7000 Storage Node to manage that
device.
Depending on your configuration, transitioning from management by Flex System Manager to management by
XClarity Administrator might be disruptive to your running workloads. Therefore, consider performing the transition
during a maintenance window to minimize downtime for running workloads.
Complete the following steps to ensure that XClarity Administrator can manage a chassis that
was managed by Flex System Manager:
1. Prepare the chassis to be removed from management by Flex System Manager by completing the following
steps:
a. If you use IBM Fabric Manager (FM), which is part of the Flex System Manager, to virtualize addresses,
modify Fabric Manager to use push mode to distribute virtual addresses through the CMM. If you are using
Fabric Manager in pull mode and Flex System Manager is powered off, the virtual addresses are unavailable
after the next restart of the compute node.
Differences between IBM FM and XClarity Administrator: Fabric Manager supports the concept of a standby
node. If there is a hardware failure, Fabric Manager assigns the virtual address of the failed compute node
to the standby node so that it can automatically take over the workload from the failed node.
XClarity Administrator does not support the concept of a standby node. With XClarity Administrator, if a
server fails after you deploy a server pattern, you can recover the server by unassigning the profile from
the failed server and then reassigning that profile to a standby server.
If virtual addresses are changed, you also must adjust infrastructure services, as described in the following examples:
• If the worldwide port name (WWPN) is changed for a compute node, you also must adjust SAN zoning and LUN
mapping.
• If the MAC address for a port is changed, you must adjust the MAC-to-IP address binding in the DHCP server or
clustering software.
• IBM FM can configure a virtual start target WWN. If you do not migrate properly, you can lose the ability to start your
operating system.
b. Remove the chassis from management by using the Flex System Manager.
c. Manage the chassis from XClarity Administrator. For more information about managing a chassis, see
Chapter 6, “Discovery, inventory, and other systems information” on page 121.
d. Remove any agents that were installed on devices that are managed by the Flex System Manager. The
XClarity Administrator implements an agentless management approach. Therefore, you do not need to
install agents on managed compute nodes. Although the installed agents have no effect on XClarity
Administrator management functions, you can choose to remove those agents and reclaim the space on the
compute node.
If you intend to use management software other than XClarity Administrator to monitor your managed endpoints and
if that management software uses SNMPv3 communication, you must prepare your environment by completing the
following steps before you manage the chassis by using the XClarity Administrator. If the chassis are already
managed by XClarity Administrator, you must first unmanage the chassis (for more information, see 6.5, “Unmanaging
a system” on page 155):
1. Create a user account on the CMM.
2. Configure the SNMPv3 properties for the user account, including passwords, authorization privileges, encryption,
and trap address.
3. Configure the user account to provision the SNMP user account to the IMM.
4. Enable node account management on the CMM.
5. Repeat steps 1 - 3 for up to 12 SNMP user accounts that are supported by CMM.
6. For each new SNMP user account, log on and change the first-time password.
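Steps 1 through 6 above repeat per account, up to the CMM's limit of 12 SNMPv3 user accounts. A small planning helper can generate the per-account checklist and enforce that limit; the account names in the example are hypothetical.

```python
MAX_SNMPV3_ACCOUNTS = 12  # CMM limit stated in step 5 above

PROVISIONING_STEPS = [
    "Create the user account on the CMM",
    "Configure SNMPv3 properties (passwords, privileges, encryption, trap address)",
    "Provision the SNMP user account to the IMM",
]

def provisioning_plan(account_names):
    """Return an ordered checklist of (account, step) pairs, enforcing the
    CMM's limit of 12 SNMPv3 user accounts."""
    if len(account_names) > MAX_SNMPV3_ACCOUNTS:
        raise ValueError(
            "CMM supports at most {} SNMPv3 accounts".format(MAX_SNMPV3_ACCOUNTS))
    return [(name, step) for name in account_names for step in PROVISIONING_STEPS]

# Two hypothetical monitoring accounts yield a six-item checklist.
plan = provisioning_plan(["monitor1", "monitor2"])
```

The actual CMM commands behind each step are performed through the CMM interface as described above; this sketch only tracks the work, it does not talk to the CMM.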
SNMPv3 passwords: The passwords for the SNMPv3 user accounts expire after 90 days. You must
change the passwords before they expire to avoid account disruption. To change a password, you must
first unmanage the chassis from XClarity Administrator, change the password, and then manage the
chassis again.
To avoid having to change the password for the SNMP user accounts on the first logon and again every 90 days, you
can set the security policy and the global login setting on the CMM to “Legacy” (this practice is not recommended). You
can change these settings before or after you manage the chassis.
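Because each password change requires unmanaging and re-managing the chassis, it is worth scheduling changes ahead of the 90-day expiry. A minimal reminder calculation (the 14-day warning window is an arbitrary choice, not from this guide):

```python
from datetime import date, timedelta

EXPIRY_DAYS = 90  # SNMPv3 passwords on the CMM expire after 90 days

def next_password_deadline(last_changed, warn_days=14):
    """Return (expiry_date, warn_date) for an SNMPv3 account password.

    last_changed: date the password was last set.
    warn_days: how far ahead of expiry to raise a reminder (assumption).
    """
    expiry = last_changed + timedelta(days=EXPIRY_DAYS)
    return expiry, expiry - timedelta(days=warn_days)

# Example: a password set on 1 January 2015 expires on 1 April 2015.
expiry, warn = next_password_deadline(date(2015, 1, 1))
```

Scheduling the chassis unmanage/re-manage cycle inside the warning window avoids the account disruption described in the note above.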
Network considerations
When the Lenovo XClarity Administrator installation is planned, consider the network topology that is implemented in
your environment and how the XClarity Administrator fits into that topology.
Network types
Most environments implement the following types of networks. Based on your requirements, you might implement only
one of these networks or you might implement all three:
• Management network
The management network often is reserved for communications between XClarity Administrator and the
management processors for managed endpoints. For example, the management network might be configured
to include XClarity Administrator, the CMMs for each managed chassis, and the Integrated Management
Modules (IMMs) of each server that is managed by XClarity Administrator.
• Data network
The data network often is used for communications between the operating systems that are installed on the
servers and the company intranet, the Internet, or both.
• Operating system deployment network
In some cases, an operating system deployment network is set up to separate out the communications that
are required to deploy operating systems on servers. If implemented, this network often includes XClarity
Administrator and all server hosts.
Instead of implementing a separate operating system deployment network, you might choose to combine this
functionality in the management network or the data network.
Consider the following points when one network interface (Eth0) is present:
• The interface must be configured to support the discovery and management of hardware. It must communicate with
the CMM in each managed chassis, with the IMM of each managed compute node and rack server, and with the Flex
switches in each managed chassis.
• If you intend to acquire firmware updates from the Fix Central website (the electronic fix-distribution
website at https://ibm.com/support/fixcentral/), this interface must also have connectivity to the Internet (typically
through a firewall). Otherwise, you must manually import firmware updates into the management-server updates
repository.
• If you intend to deploy operating system images to managed servers, the network interface must have IP network
connectivity to the server network interface that is used to access the host operating system and must be configured
with an IPv4 address.
Consider the following points when two network interfaces (Eth0 and Eth1) are present:
• The Eth0 interface often is connected to the management network and is used to discover and manage hardware. It
must communicate with the CMM of each managed chassis, with the IMM of each managed server, and with the Flex
switches that are installed in each managed chassis.
If you intend to acquire firmware updates from the Fix Central website (see https://ibm.com/support/fixcentral/), the
Eth0 interface must also have connectivity to the Internet (often through a firewall). Otherwise, you must import
firmware updates into the management server updates repository.
• The Eth1 interface often is connected to the data network (an internal data network, a public data network, or both)
and used to manage host operating systems.
• The network interface that you choose to use to deploy operating system images to the managed servers must have
IP-network connectivity to the server network interface that is used to access the host operating system and must be
configured with an IPv4 address.
• If you implemented a separate network for deploying operating systems, you can configure Eth1 to connect to that
network instead of the data network. However, if the operating system deployment network does not have access to
the data network, you must define another I/O interface on that server so that the server host has access to the
data network when you install the operating system on a server.
If the XClarity Administrator management interface obtains its IP address through DHCP, a change of
address can disrupt management connectivity. To avoid this issue, change the management interface to a
static IP address, or ensure that the DHCP server configuration is set so that the DHCP address is based
on a MAC address or that the DHCP lease does not expire.
Network configurations
Table 2-11 lists possible configurations for the XClarity Administrator network interfaces that are based on the type of
network topology that was implemented in your environment. Use this table to determine how to define Eth0 and Eth1.
Table 2-11 Role of Eth0 and Eth1 based on network topology
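Because Table 2-11 itself is not reproduced in this chunk, the mapping below is an illustrative reconstruction of the Eth0/Eth1 roles described in the surrounding text. Treat the topology names and role strings as assumptions and defer to the actual table when planning an installation.

```python
# Illustrative reconstruction of the Eth0/Eth1 roles discussed in the text;
# consult Table 2-11 for the authoritative mapping.
INTERFACE_ROLES = {
    "single-data-and-management": {
        "Eth0": "management + data (+ optional OS deployment)",
        "Eth1": "optional redundancy on the same network",
    },
    "physically-separate": {
        "Eth0": "management (+ optional OS deployment)",
        "Eth1": "data (or a separate OS deployment network)",
    },
    "virtually-separate": {
        "Eth0": "management (VLAN-separated from data)",
        "Eth1": "data (VLAN-separated from management)",
    },
    "management-only": {
        "Eth0": "management (+ optional OS deployment)",
        "Eth1": "optional redundancy",
    },
}

def role_of(topology, interface):
    """Look up the role an interface plays under a given topology."""
    return INTERFACE_ROLES[topology][interface]
```

In every topology, Eth0 carries hardware discovery and management traffic to the CMMs, IMMs, and Flex switches, which matches the considerations listed below the table caption.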
When you install XClarity Administrator, define network settings by using the following considerations:
• Eth0 must be configured to support the discovery and management of hardware. It must communicate with the CMM
of each managed chassis, with the IMM of each managed server, and with the Flex switches that are installed in each
managed chassis.
• Optionally, if you intend to acquire firmware updates from the Fix Central website
(http://ibm.com/support/fixcentral), Eth0 must also have connectivity to the Internet (typically through a firewall).
Otherwise, you must import firmware updates into the firmware-updates repository.
• Optionally, if you intend to deploy operating system images to managed servers, Eth0 must have IP network
connectivity to the server network interface that is used to access the host operating system and it must be configured
with an IPv4 address.
• You can set up the XClarity Administrator host on any system that meets the requirements for the XClarity
Administrator, including a managed server (compute node or rack server) only when you implement a single data and
management network topology or a virtually separate data and management network topology; however, you cannot
use the XClarity Administrator to apply firmware updates to that managed server.
Even then, only some of the firmware is applied with immediate activation: XClarity Administrator forces the target
server to restart, which also restarts XClarity Administrator itself. When updates are applied with deferred activation,
only some firmware is applied when the XClarity Administrator host is restarted.
You can also configure Eth1 to connect to the same network from XClarity Administrator to support redundancy.
Figure 2-6 shows an example implementation for a Single Data and Management network or converged network
topology.
When you install XClarity Administrator, define network settings by using the following considerations:
• Eth0 is configured to support the discovery and management of hardware. It must communicate with the CMM of each
managed chassis, with the IMM of each managed server, and with the Flex switches that are installed in each
managed chassis. If you intend to acquire firmware updates from the Fix Central website
(http://ibm.com/support/fixcentral), this interface must also have connectivity to the Internet (typically through a
firewall). Otherwise, you must import firmware updates into the management server updates repository.
• Eth1 often is configured to communicate with an internal data network, a public data network, or both.
Note: If you implement a separate operating system deployment network, Eth1 might be configured to
connect to that network instead of the data network. If the operating system deployment network does not
have access to the data network, you must define an extra I/O interface on that server when you install the
operating system so that the server host has access to the data network.
Figure 2-7 shows an example implementation of separate management and data networks in which the operating
system deployment network is configured as part of the data network.
Figure 2-8 shows another example implementation of separate management and data networks in which the
operating system deployment network is configured as part of the management network. In this implementation,
XClarity Administrator does not need connectivity to the data network.
Note: If the operating system deployment network does not have access to the data network, configure an
extra interface on the servers to provide connectivity from the host operating system on the server to the
data network, if needed.
Note: If XClarity Administrator is installed on a host that is running on a managed compute node in a
chassis, you cannot use XClarity Administrator to apply firmware updates to that entire chassis,
because doing so would also update the managed compute node. When firmware updates are applied, the
host system must be restarted.
When you install the XClarity Administrator, define network settings by using the following
considerations:
• Eth0 is configured to support the discovery and management of hardware. It must communicate with the CMM of
each managed chassis, with the IMM of each managed server, and with the Flex switches that are installed in each
managed chassis. If you intend to acquire firmware updates from the Fix Central website, this interface must also have
connectivity to the Internet (often through a firewall). Otherwise, you must import firmware updates into the
management server updates repository.
• Eth1 often is configured to communicate with an internal data network, a public data network, or both.
• If you intend to deploy operating system images to managed servers, the Eth0 interface or the Eth1 interface must
have IP network connectivity to the server network interface that is used to access the host operating system and it
must be configured with an IPv4 address.
Note: If you implement a separate operating system deployment network, Eth1 might be configured to
connect to that network instead of the data network. If the operating system deployment network does not
have access to the data network, you must define an extra I/O interface on that server when you install the
operating system so that the server host has access to the data network.
If XClarity Administrator is hosted on a managed server, however, you cannot use XClarity Administrator to apply
firmware updates to that server. Even then, only some of the firmware is applied with immediate activation: XClarity
Administrator forces the target server to restart, which also restarts XClarity Administrator itself. When applied with
deferred activation, only some firmware is applied when the XClarity Administrator host is restarted.
Figure 2-9 shows an example implementation of virtually separate management and data networks in which the
operating system deployment network is configured as part of the data network. In this example, the XClarity
Administrator is installed on a managed compute node.
When you install XClarity Administrator and define network settings, Eth0 must be configured to support the discovery
and management of hardware. It must communicate with the CMM of each managed chassis, with the IMM of each
managed server, and with the Flex switches that are installed in each managed chassis.
If you intend to acquire firmware updates from Fix Central, Eth0 must also have connectivity to the Internet (often
through a firewall). Otherwise, you must import firmware updates into the management server updates repository.
If you intend to deploy operating system images to managed servers, Eth0 must have IP network connectivity to the
server network interface that is used to access the host operating system and it must be configured with an IPv4
address.
You also can configure Eth1 to connect to the same network from XClarity Administrator to
support redundancy.
Figure 2-11 shows an example implementation of a management-only network with support for operating system
deployment.
For more information about minimum requirements of the host system, see 2.1.1, “Virtual appliance prerequisites” on
page 6.
VMware ESX and vCenter compatibility: Ensure that you install a version of VMware vCenter that is
compatible with the versions of ESX or ESXi that are installed on the hosts to be used in the cluster to
avoid any version support issues.
VMware vCenter can be installed on one of the hosts that is used in the cluster. However, if that host is powered off or
not usable, you also lose access to the VMware vCenter interface.
• Shared storage (data stores) that can be accessed by all hosts in the cluster. You can use any type of shared storage
that is supported by VMware. The data store is used by VMware to determine whether a VM should fail over to a
different host (heart-beating).
For more information about setting up a VMware HA cluster (VMware 5.0), see the page “Create a vSphere HA
Cluster” in the vSphere 5 Documentation Center:
https://pubs.vmware.com/vsphere-50/index.jsp#com.vmware.vsphere.avail.doc_50/GUIDB53060B9-2704-4EE2-B97A-AE6FEBCE3356.html
Note: You need the second data store for the heartbeat.
Microsoft Hyper-V
To implement High Availability (HA) for XClarity Administrator in a Microsoft Hyper-V environment, use the HA
functionality that is provided for the Hyper-V environment.
Licensing
Lenovo XClarity Administrator is a licensed product and is available in the following editions:
• XClarity Administrator (Stand-alone option)
• XClarity Pro (bundled with the XClarity Integrators for VMware vCenter and Microsoft System Center)
Both editions are available with a 1, 3, or 5-year software subscription and support. The editions also are available on a
per-server or per-chassis basis.
If you have a fully populated chassis with 14 nodes, the per-chassis licensing is more cost-effective; however, you might
want to purchase per-server licensing for Flex System compute nodes if, for example, the chassis is not fully
populated with nodes.
The one-time charge for the product includes the license, software subscription, and support. It is delivered as a
software virtual appliance for VMware or Microsoft Hyper-V via the Passport Advantage online licensing.
Many clients might have a service and support agreement running with IBM Flex System Manager, IBM Systems
Director Standard Edition, or IBM Fabric Manager (Stand-alone). If you have such an agreement in place, you are
entitled to XClarity Pro at no extra cost for the remainder of the current service and support agreement. This is all
administered via IBM Passport Advantage online licensing program.
Licensing example
If you have two years remaining of a 3-year IBM Flex System Manager service and support agreement, you can
transition free of cost to XClarity Pro for the remaining two years. The XClarity Administrator and XClarity Integrator
licenses show up under the Flex System Manager entitlement.
When it is time to renew this agreement, you renew the service and support agreement for XClarity Pro. The entitlement
then is under XClarity Pro and not under IBM Flex System Manager.
Part numbers
The part numbers for geographical regions are listed in Table 2-13, Table 2-14 on page 34, and Table 2-15 on page 35.
XClarity Pro includes XClarity Integrator for Microsoft System Center and XClarity Integrator for VMware vCenter.
Table 2-13 Part numbers: Per Managed Server (EMEA and Latin America)
Table 2-14 Part numbers: Per managed chassis (North America, Asia Pacific, and Japan)
Lenovo XClarity Administrator provides agent-free hardware management for Lenovo servers, storage, network
switches, and HX Series appliances.
Features
The administration dashboard is an HTML5-based web interface that allows fast location of resources so tasks can be
run quickly. Because Lenovo XClarity Administrator does not install any agent software on the managed endpoints,
no CPU cycles are spent on agent execution and no memory is consumed by agents, which saves up to 1 GB of RAM
and 1 - 2% of CPU usage compared to a typical managed system that requires an agent.
Fast time to value is realized through automatic discovery of existing or new Lenovo System x rack servers
and Flex System infrastructure. Inventory of the discovered endpoints is gathered, so the managed hardware inventory
and its status can be viewed at a glance.
A centralized view of events and alerts that are generated from managed endpoints is available. When an issue is
detected by a managed endpoint, an event is passed to Lenovo XClarity Administrator. Alerts and events are visible via
the XClarity Administrator Dashboard, the Status bar, and the Alerts and Events detail for the specific system.
Firmware management
Firmware management is simplified by assigning firmware-compliance policies to supported managed endpoints to
ensure that firmware on those endpoints remains compliant. You can also create and edit firmware-compliance policies
when validated firmware levels do not match the suggested predefined policies. Additionally, you can apply and
activate firmware that is later than the currently installed firmware on a single managed endpoint or group of endpoints
without using compliance policies.
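Conceptually, a compliance policy is a mapping of endpoints to minimum firmware levels, and a compliance check compares installed levels against it. The sketch below illustrates that idea in miniature; the endpoint names, version strings, and three-state report are illustrative, not XClarity Administrator's internal model.

```python
def parse_version(v):
    """Split a dotted firmware version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def compliance_report(installed, policy):
    """Compare installed firmware levels against a compliance policy.

    installed, policy: {endpoint_name: version_string}
    Returns {endpoint_name: "compliant" | "update-needed" | "no-policy"}.
    """
    report = {}
    for endpoint, version in installed.items():
        if endpoint not in policy:
            report[endpoint] = "no-policy"
        elif parse_version(version) >= parse_version(policy[endpoint]):
            report[endpoint] = "compliant"
        else:
            report[endpoint] = "update-needed"
    return report

# Hypothetical endpoints: node2 lags its policy level, node3 has no policy.
report = compliance_report(
    {"node1": "1.4.2", "node2": "1.3.0", "node3": "2.0.0"},
    {"node1": "1.4.2", "node2": "1.4.0"})
```

Endpoints flagged "update-needed" correspond to the non-compliant state that the policy mechanism surfaces in the XClarity Administrator interface.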
Configuration management
Configuration management uses pattern-based configurations to quickly provision and re-provision a
single server or multiple servers and compute nodes, all with a single set of configuration settings. Address
pools can be configured to assist with deployments. Category patterns are used to create configuration
patterns, which are deployed to servers as server profiles.
OS Provisioning
OS Provisioning enables bare-metal operating system deployment. VMware ESXi, Windows Server, SUSE Linux
Enterprise Server (SLES), and Red Hat Enterprise Linux images can be imported and held in an image repository.
Up to 28 OS images can be deployed concurrently.
Security
If you must be compliant with NIST SP 800-131A or FIPS 140-2, Lenovo XClarity Administrator can help you meet that
compliance. Lenovo XClarity Administrator supports self-signed SSL certificates (issued by an internal certificate
authority) or external SSL certificates (signed by a private or commercial CA). Lenovo XClarity Administrator includes
an audit log that provides a historical record of user actions, such as logging on, creating users, or changing user
passwords.
Integration
Lenovo XClarity can be integrated into external, higher level management, automation, and orchestration platforms
through open REST application programming interfaces (APIs). This means Lenovo XClarity can easily integrate with
your existing management infrastructure.
Lenovo XClarity Integrator for VMware vRealize Log Insight (free download)
https://marketplace.vmware.com/vsx/solutions/lenovo-networking-content-pack-for-vmwarevrealize-log-insight
Lenovo XClarity Integrator for Microsoft System Center (free download, support requires an XClarity Pro license)
https://datacentersupport.lenovo.com/documents/LNVO-MANAGE
Ordering information for the integrators that require a license is described in the Download and ordering information
section.
Support for the Lenovo XClarity Integrators for VMware vCenter and Microsoft System Center is included in the Lenovo
XClarity Pro offering, which is described in the next section.
Note: The Lenovo XClarity Integrator for Zenoss has been withdrawn from marketing.
This download provides Lenovo XClarity Administrator base functionality plus a 90-day trial evaluation license for the
XClarity Administrator features Configuration Patterns and Operating System Deployment.
Note: Service and Support is only available with an XClarity Pro purchase.
The following figure shows the Inventory screen of the mobile app.
Figure 2. Lenovo XClarity mobile app
The mobile app is available for download from these app stores:
• Google Play
• Apple iTunes
• Lenovo Store (China)
• Baidu Store (China)
Management tasks
By using Lenovo XClarity, users can perform the tasks that are described in this section.
User Management
Lenovo XClarity Administrator provides a centralized authentication server to create and manage all user accounts
and to manage and authenticate user credentials. The authentication server is created automatically when the
management server first starts. The user accounts that are used to log on and manage the Lenovo XClarity
Administrator are also used for all chassis and servers that are managed by the Lenovo XClarity Administrator. When
you create a user account, you control the level of access, such as whether the account has read/write authority or
read-only authority, by using predefined role groups.
Configuration management
Configuration patterns provide a way to ensure that you have consistent configurations applied to managed servers.
Server patterns are used to provision or pre-provision a managed server by configuring local storage, I/O adapters,
boot settings, firmware, ports, and IMM and UEFI settings. Server patterns also integrate support for virtualizing I/O addresses so
you can virtualize Flex System fabric connections or repurpose servers without disruption to the fabric.
Firmware updates
Within Lenovo XClarity, you can manage the firmware-updates repository and apply and activate firmware
updates for all managed endpoints. Compliance policies can be defined to flag managed endpoints that do
not comply with the defined firmware rules. Refreshing the repository and downloading updates requires
an Internet connection. If Lenovo XClarity has no Internet connection, you can manually import updates to
the repository. The firmware apply and activate interface is shown in the following figure.
The latest release of XClarity Administrator adds the PyLXCA toolkit, which provides a Python-based
library of commands and APIs to automate provisioning and resource management from external automation
environments such as OpenStack, Ansible, or Puppet.
The PyLXCA toolkit provides an interface to Lenovo XClarity Administrator REST APIs to automate functions such as:
• Logging in to Lenovo XClarity Administrator
• Managing and unmanaging chassis, servers, storage systems, and top-of-rack switches (endpoints)
• Viewing inventory data for endpoints and components
• Deploying an operating-system image to one or more servers
• Configuring servers through the use of Configuration Patterns
• Applying firmware updates to endpoints
The free download also includes a 90-day evaluation license for Configuration Patterns and Operating System
Deployment to allow you to evaluate these licensed components.
Note: The free downloads do not include any entitlement for technical support.
Lenovo XClarity Integrators for Microsoft System Center (MSSC) are also available to download for free from the
following link (does not include any entitlement for technical support):
https://datacentersupport.lenovo.com/documents/lnvo-manage
Lenovo XClarity integrator for VMware vCenter is also available to download for free from the following link (does not
include any entitlement for technical support):
https://datacentersupport.lenovo.com/documents/lnvo-vmware
To gain entitlement for technical support, purchase a license for Lenovo XClarity Pro to add these features and
support:
• Lenovo XClarity Administrator Configuration Patterns
• Lenovo XClarity Administrator Operating System (OS) Deployment
• Technical support for Lenovo XClarity Administrator
• Technical support for Lenovo XClarity Integrators for MSSC
• Technical support for Lenovo XClarity Integrators for VMware vCenter
Lenovo XClarity Pro editions are available with a 1-year, 3-year, or 5-year software subscription and support. Lenovo
XClarity Pro is available on a per-managed-server or per-managed-chassis basis. The per-chassis licenses offer
a more cost-effective way of purchasing licenses for a Flex System environment.
When you purchase XClarity Pro, the order is fulfilled via electronic software delivery (ESD) using the Lenovo Key
Management System (LKMS). The order is placed on LKMS using an email address for the end user who ordered
the code. The Activation Code is sent to this email address in PDF format, and the same address is used to log in to
LKMS for administration and inventory management. When the Activation Code is redeemed on LKMS, the electronic
proof of entitlement is sent along with a welcome letter that explains how to obtain the code from the ESD portal,
also known as Flexnet.
Lenovo XClarity Pro includes Lenovo XClarity Integrator for Microsoft System Center and Lenovo XClarity
Integrator for VMware vCenter.
For VMware, the virtual machine is available as an OVF template. For Hyper-V and Nutanix AHV, the virtual machine is
a virtual-disk image (VHD). For KVM, the virtual machine is available in qcow2 format.
For details about support, including any limitations, see the following support pages:
• Flex System
• ThinkSystem, Converged HX Series, NeXtScale, and System x
• RackSwitch
• Storage
• ThinkServer
Related links
For more information, see the following resources:
Free XClarity Administrator download (includes 90-day trial license for Configuration Patterns and OS Deployment)
http://www.lenovo.com/xclarity
Lenovo Key Management System user guide, Using Lenovo Features on Demand
https://lenovopress.com/redp4895
LENOVO PROVIDES THIS PUBLICATION ”AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. Lenovo may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
The products described in this document are not intended for use in implantation or other life support applications
where malfunction may result in injury or death to persons. The information contained in this document does not
affect or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express
or implied license or indemnity under the intellectual property rights of Lenovo or third parties. All information
contained in this document was obtained in specific environments and is presented as an illustration. The result
obtained in other operating environments may vary. Lenovo may use or distribute any of the information you supply
in any way it believes appropriate without incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials
for this Lenovo product, and use of those Web sites is at your own risk. Any performance data contained herein was
determined in a controlled environment. Therefore, the result obtained in other operating environments may vary
significantly. Some measurements may have been made on development-level systems and there is no guarantee
that these measurements will be the same on generally available systems. Furthermore, some measurements may
have been estimated through extrapolation. Actual results may vary. Users of this document should verify the
applicable data for their specific environment.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
• Flex System
• Lenovo XClarity
• Lenovo®
• NeXtScale
• RackSwitch
• System x®
• ThinkServer®
• ThinkSystem
Linux® is a trademark of Linus Torvalds in the United States, other countries, or both.
Hyper-V®, Microsoft®, PowerShell, Windows Server®, and Windows® are trademarks of Microsoft
Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Note: The XClarity Controller currently supports the Redfish Scalable Platforms Management API Specification 1.0.2
and schema 2016.2.
Notes:
• A dedicated systems-management network port may not be available on some ThinkSystem servers; for these
servers, access to the XClarity Controller is available only through a network port that is shared with the server
operating system.
• For Flex servers, the Chassis Management Module (CMM) is the primary management module for systems-
management functions. Access to the XClarity Controller is available through the network port on the CMM.
This document explains how to use the functions of the XClarity Controller in a ThinkSystem server. The XClarity
Controller works with the XClarity Provisioning Manager and UEFI to provide systems-management capability for
ThinkSystem servers.
Note: The first time you access the Support Portal, you must choose the product category, product family, and model
numbers for your server. The next time you access the Support Portal, the products you selected initially are
preloaded by the website, and only the links for your products are displayed. To change or add to your product list,
click the Manage my product lists link. Changes are made periodically to the website. Procedures for locating
firmware and documentation might vary slightly from what is described in this document.
1. Go to http://datacentersupport.lenovo.com.
2. Under Support, select Data Center.
3. When the content is loaded, select Servers.
4. Under Select Series, first select the particular server hardware series, then under Select SubSeries, select the
particular server product subseries, and finally, under Select Machine Type select the particular machine type.
Event Logs
• IPMI SEL
• Human Readable Log
• Audit Log
Environmental Monitoring
• Agent Free Monitoring
• Sensor Monitoring
• Fan Control
LED Control
• Chipset Errors (CATERR, IERR, etc.)
• System Health Indication
• OOB Performance Monitoring for I/O adapters
• Inventory Display and Export
RAS
• Virtual NMI
• Automatic Firmware Recovery
• Automated promotion of backup firmware
• POST Watchdog
• OS Loader Watchdog
• Blue Screen Capture (OS Failure)
• Embedded Diagnostic Tools
Network Configuration
• IPv4
• IPv6
• IP Address, Subnet Mask, Gateway
• IP Address Assignment Modes
• Host name
• Programmable MAC address
• Dual MAC Selection (if supported by server hardware)
• Network Port Reassignments
• VLAN Tagging
Alerts
• PET Traps
• CIM Indication
• SNMP TRAPs
• E-mail
Serial Redirection
• IPMI SOL
• Serial port configuration
Security
• XClarity Controller Core Root of Trust for Measurement (CRTM)
• Digitally signed firmware updates
• Role Based Access Control (RBAC)
• Local User Accounts
• LDAP/AD User Accounts
• Secure Rollback of Firmware
• Chassis intrusion detection (only available on some server models)
• XCC remote assertion of UEFI TPM Physical Presence
• Audit logging of configuration changes and server actions
• Public-key (PK) Authentication
• System Retire/Repurpose
Power Management
• Real time Power Meter
License Management
• Activation Key Validation and Repository
Firmware Updates
• Agent Free Update
• Remote Update
Alerts
• Syslog
Serial Redirection
• Serial Redirection via SSH
Security
• Security Key Lifecycle Manager (SKLM)
• IP address blocking
Power Management
• Real time Power Graphics
• Historical Power Counters
• Temperature Graphics
All of the XClarity Controller Standard and Advanced Level features plus:
RAS
• Boot Capture
Remote Presence
• Mounting of local client ISO/IMG files
• Quality/Bandwidth Control
• Virtual Console Collaboration (six users)
• Virtual Console Chat
• Virtual Media mounting of remote ISO/IMG files via HTTPS, SFTP, CIFS, and NFS
Power Management
• Power Capping
• OOB Performance Monitoring - System Performance metrics
Note: Support for the remote console feature is not available through the browser on mobile device operating systems.
Depending upon the version of the firmware in the XClarity Controller, web browser support can vary from the
browsers listed in this section. To see the list of supported browsers for the firmware that is currently on the XClarity
Controller, click the Supported Browsers menu list from the XClarity Controller login page.
For increased security, only high-strength ciphers are now supported when using HTTPS. When using HTTPS, the
combination of your client operating system and browser must support one of the following cipher suites:
• ECDHE-ECDSA-AES256-GCM-SHA384
• ECDHE-ECDSA-AES256-SHA384
• ECDHE-ECDSA-AES256-SHA
• ECDHE-ECDSA-AES128-GCM-SHA256
• ECDHE-ECDSA-AES128-SHA256
• ECDHE-ECDSA-AES128-SHA
• ECDHE-RSA-AES256-GCM-SHA384
• ECDHE-RSA-AES256-SHA384
• ECDHE-RSA-AES128-GCM-SHA256
• ECDHE-RSA-AES128-SHA256
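To check whether your client's TLS stack can offer suites from this list, you can ask the local OpenSSL build (via Python's ssl module) which ciphers it enables; OpenSSL uses the same cipher names shown above. This sketch only checks the client side, not the XClarity Controller itself, and restricts the check to two of the listed RSA/GCM suites for brevity.

```python
import ssl

# Build a client context restricted to two of the cipher suites from
# the list above; set_ciphers() raises SSLError if none are available.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256")

# get_ciphers() lists what this OpenSSL build actually enabled
# (TLS 1.3 suites may also appear; they are always enabled).
supported = {c["name"] for c in ctx.get_ciphers()}
```

If the HTTPS connection to the XClarity Controller fails with a handshake error, comparing this set against the list above is a quick first diagnostic.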
Your internet browser’s cache stores information about web pages that you visit so that they will load more quickly in
the future. After a flash update of the XClarity Controller firmware, your browser may continue to use information from
its cache instead of retrieving it from the XClarity Controller. After updating the XClarity Controller firmware, it is
recommended that you clear the browser cache to ensure that web pages served by the XClarity Controller are
displayed correctly.
By default, the chosen language for the XClarity Controller web interface is English. The interface can display
multiple languages, including the following:
• French
• German
• Italian
• Japanese
• Korean
• Portuguese (Brazil)
• Simplified Chinese
• Spanish (international)
• Traditional Chinese
To choose your preferred language, click the arrow beside the currently selected language. A drop-down menu
appears with the available languages.
Text strings that are generated by the XClarity Controller firmware are displayed in the language dictated by the
browser. If the browser specifies a language other than one of the translated languages listed above, the text is
displayed in English. In addition, any text string that is displayed by the XClarity Controller firmware but is not
generated by the XClarity Controller (for example, messages generated by UEFI, PCIe adapters, etc.) is displayed in English.
The input of language-specific text other than English, such as the Trespass message, is currently not supported;
only text typed in English is supported.
The XClarity Controller combines service processor functions, a video controller, and remote presence function in a
single chip. To access the XClarity Controller remotely by using the XClarity Controller web interface, you must first log
in. This chapter describes the login procedures and the actions that you can perform from the XClarity Controller web
interface.
The XClarity Controller supports static and Dynamic Host Configuration Protocol (DHCP) IPv4 addressing. The default
static IPv4 address assigned to the XClarity Controller is 192.168.70.125. The XClarity Controller is initially configured
to attempt to obtain an address from a DHCP server, and if it cannot, it uses the static IPv4 address.
The XClarity Controller also supports IPv6, but it does not have a fixed static IPv6 address by default. For initial
access to the XClarity Controller in an IPv6 environment, you can use either the IPv4 address or the IPv6 link-local
address. The XClarity Controller generates a unique link-local IPv6 address from the IEEE 802 MAC address by
inserting two octets, with hexadecimal values of 0xFF and 0xFE, in the middle of the 48-bit MAC, as described in
RFC 4291, and by inverting the universal/local (7th) bit of the MAC address. For example, if the MAC address is
08-94-ef-2f-28-af, the link-local address would be as follows:
fe80::0a94:efff:fe2f:28af
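The derivation above can be sketched in Python. The formatting deliberately matches the example in the text; a fully canonical RFC 5952 rendering would additionally drop the leading zero from each group.

```python
def mac_to_link_local(mac: str) -> str:
    """Derive the IPv6 link-local address from a 48-bit MAC using the
    modified EUI-64 procedure described above: insert 0xFF, 0xFE in the
    middle of the MAC and invert the universal/local (7th) bit."""
    octets = [int(b, 16) for b in mac.replace("-", ":").split(":")]
    octets[0] ^= 0x02                      # invert the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = ["%02x%02x" % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

# Reproduces the document's example:
# mac_to_link_local("08-94-ef-2f-28-af") -> "fe80::0a94:efff:fe2f:28af"
```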
When you access the XClarity Controller, the following IPv6 conditions are set as default:
• Automatic IPv6 address configuration is enabled.
• IPv6 static IP address configuration is disabled.
• DHCPv6 is enabled.
• Stateless auto-configuration is enabled.
The XClarity Controller provides the choice of using a dedicated systems-management network connection (if
applicable) or one that is shared with the server. The default connection for rack-mounted and tower servers is to use
the dedicated systems-management network connector.
The dedicated systems-management network connection on most servers is provided using a separate 1 Gb network
interface controller. However, on some systems the dedicated systems-management network connection may be
provided using the Network Controller Sideband Interface (NCSI) to one of the ports of a multi-port network
interface controller. In this case, the dedicated systems-management network connection is limited to the 10/100 Mbps
speed of the sideband interface. For information and any limitations on the implementation of the management port on your
system, see your system documentation.
Note: A dedicated systems-management network port might not be available on your server. If your hardware does not
have a dedicated network port, the shared setting is the only XClarity Controller setting available.
After you start the server, you can use the XClarity Provisioning Manager to configure the XClarity Controller network
connection. The server with the XClarity Controller must be connected to a DHCP server, or the server network must
be configured to use the XClarity Controller static IP address. To set up the XClarity Controller network connection
through the Setup utility, complete the following steps:
Step 1. Turn on the server. The ThinkSystem welcome screen is displayed.
Note: It may take up to 40 seconds after the server is connected to AC power for the power-control button to
become active.
Step 2. When the prompt <F1> System Setup is displayed, press F1. If you have set both a power-on password and
an administrator password, you must type the administrator password to access the XClarity Provisioning
Manager.
Step 3. From the XClarity Provisioning Manager main menu, select UEFI Setup.
Step 4. On the next screen, select BMC Settings.
Step 5. There are three XClarity Controller network connection choices in the DHCP Control field:
• Static IP
• DHCP Enabled
• DHCP with Fallback
Notes:
• You must wait approximately 1 minute for changes to take effect before the server firmware is functional again.
• You can also configure the XClarity Controller network connection through the XClarity Controller web interface or
command-line interface (CLI). In the XClarity Controller web interface, network connections can be configured by
clicking BMC Configuration from the left navigation panel, and then selecting Network. In the XClarity Controller CLI,
network connections are configured using several commands that depend on the configuration of your installation.
Important: The XClarity Controller is set initially with a user name of USERID and password of PASSW0RD (with a
zero, not the letter O). This default user setting has Supervisor access. Change this user name and password during
your initial configuration for enhanced security.
Note: In a Flex System, the XClarity Controller user accounts can be managed by a Flex System Chassis
Management Module (CMM) and might be different from the USERID/PASSW0RD combination described above.
To access the XClarity Controller through the XClarity Controller web interface, complete the following steps:
Step 1. Open a web browser. In the address or URL field, type the IP address or host name of the XClarity Controller
to which you want to connect.
Step 2. Select the desired language from the language drop-down list.
Step 3. Type your user name and password in the XClarity Controller Login window. If you are using the XClarity
Controller for the first time, you can obtain your user name and password from your system administrator. All
login attempts are documented in the event log. Depending on how your system administrator configured the
user ID, you might need to enter a new password after logging in.
Step 4. Click Log In to start the session. The browser opens the XClarity Controller home page, as shown in the
following illustration. The home page displays information about the system that the XClarity Controller
manages plus icons indicating how many critical errors and how many warnings are currently present in the
system.
The second section is the graphical information provided to the right of the navigation panel. The modular format gives
you a quick view of the server status and some quick actions that can be performed.
Note: When navigating the web interface, you can also click the question mark icon for online help.
A three-column table (not reproduced here) describes the actions that you can perform from the XClarity Controller
web interface.
When configuring the XClarity Controller, the following key options are available:
• Backup and Restore
• License
• Network
• Security
• User/LDAP
Click User/LDAP under BMC Configuration to create, modify, and view user accounts, and to configure LDAP
settings.
The Local User tab shows the user accounts that are configured in the XClarity Controller, and which are currently
logged in to the XClarity Controller.
The LDAP tab shows the LDAP configuration for accessing user accounts that are kept on an LDAP server.
Create user
Click Create to create a new user account.
Complete the following fields: User name, Password, Confirm Password, and Authority Level. For further details on the
authority level, see the following section.
Supervisor
The Supervisor user authority level has no restrictions.
Read only
The Read only user authority level has read-only access and cannot perform actions such as file transfers, power and
restart actions, or remote presence functions.
Custom
The Custom user authority level allows a more customized profile for user authority with settings for the actions that the
user is allowed to perform.
SNMPv3 Settings
To enable SNMPv3 access for a user, select the check box next to SNMPv3 Settings. The following user access
options are available:
Access type
Only GET operations are supported. The XClarity Controller does not support SNMPv3 SET operations, so SNMPv3
can be used only for query operations.
Authentication protocol
Only HMAC-SHA is supported as the authentication protocol. This algorithm is used by the SNMPv3 security model for
authentication.
Privacy protocol
The data transfer between the SNMP client and the agent can be protected using encryption. The supported methods
are CBC-DES and AES.
SSH Key
The XClarity Controller supports SSH public key authentication. To add an SSH key to the local user account, select
the check box next to SSH Key. The following two options are provided:
Note: Some of Lenovo’s tools may create a temporary user account for accessing the XClarity Controller when the tool
is run on the server operating system. This temporary account is not viewable and does not use any of the 12 local
user account positions. The account is created with a random user name (for example, “20luN4SB”) and password.
The account can only be used to access the XClarity Controller on the internal Ethernet over USB interface, and only
for the CIM-XML and SFTP interfaces. The creation and removal of this temporary account is recorded in the audit log
as well as any actions performed by the tool with these credentials.
To delete a local user account, click the trash can icon on the row of the account that you wish to remove. If you are
authorized, you can remove your own account or the account of other users, even if they are currently logged in,
unless it is the only account remaining with User Account Management privileges. Sessions that are already in
progress when user accounts are deleted will not be automatically terminated.
In the Web inactivity session timeout field, you can specify how long, in minutes, the XClarity Controller waits before
it disconnects an inactive web session. The maximum wait time is 1,440 minutes. If set to 0, the web session never
expires.
The XClarity Controller firmware supports up to six simultaneous web sessions. To free up sessions for use by others,
it is recommended that you log out of the web session when you are finished rather than relying on the inactivity
timeout to automatically close your session.
Note: If you leave the browser open on an XClarity Controller web page that automatically refreshes, your web session
will not automatically close due to inactivity.
Configuring LDAP
Use the information in this topic to view or change XClarity Controller LDAP settings.
Click the LDAP tab to view or modify XClarity Controller LDAP settings.
The XClarity Controller can remotely authenticate a user’s access through a central LDAP server instead of, or in
addition to, the local user accounts that are stored in the XClarity Controller itself. Privileges can be designated for
each user account using the IBMRBSPermissions string. You can also use the LDAP server to assign users to groups
and perform group authentication, in addition to the normal user (password check) authentication. For example, an
XClarity Controller can be associated with one or more groups; a user passes group authentication only if the user
belongs to at least one group that is associated with the XClarity Controller.
• Use Pre-Configured Servers: You can configure up to four LDAP servers by entering each server’s IP
address or host name if DNS is enabled. The port number for each server is optional. If this field is left
blank, the default value of 389 is used for non-secured LDAP connections. For secured connections, the
default port value is 636. You must configure at least one LDAP server.
• Use DNS to Find Servers: You can choose to discover the LDAP server(s) dynamically. The mechanisms
described in RFC2782 (A DNS RR for specifying the location of services) are used to locate the LDAP
server(s). This is known as DNS SRV. You need to specify a fully qualified domain name (FQDN) to be used
as the domain name in the DNS SRV request.
If you wish to enable secure LDAP, click the Enable Secure LDAP check box. In order to support secure
LDAP, a valid SSL certificate must be in place and at least one SSL client trusted certificate must be
imported into the XClarity Controller. Your LDAP server must support Transport Layer Security (TLS) version
1.2 to be compatible with the XClarity Controller secure LDAP client. For more information about certificate
handling, see “SSL certificate handling” on page 32.
2. Fill in information under Additional Parameters. Below are explanations of the parameters.
Binding method
Before you can search or query the LDAP server, you must send a bind request. This field controls how this
initial bind to the LDAP server is performed. The following bind methods are available:
• No Credentials Required
Use this method to bind without a Distinguished Name (DN) or password. This method is strongly
discouraged because most servers are configured to not allow search requests on specific user records.
• Use Configured Credentials
Use this method to bind with the configured client DN and password.
If the initial bind is successful, a search is performed to find an entry on the LDAP server that belongs to the user who
is logging in. If necessary, a second attempt to bind is made, this time with the DN that is retrieved from the user’s
LDAP record and the password that was entered during the login process. If the second attempt to bind fails, the user
is denied access. The second bind is performed only when the No Credentials Required or Use Configured
Credentials binding methods are used.
Group Filter
The Group Filter field is used for group authentication. Group authentication is attempted after the user’s credentials
are successfully verified. If group authentication fails, the user’s attempt to log on is denied. When the group filter is
configured, it is used to specify the groups to which the XClarity Controller belongs. This means that to succeed, the
user must belong to at least one of the groups that are configured for group authentication. If the Group Filter field is
left blank, group authentication automatically succeeds. If the group filter is configured, an attempt is made to match
at least one group in the list to a group to which the user belongs. If there is no match, the user fails authentication
and is denied access. If there is at least one match, group authentication is successful.
The comparisons are case sensitive. The filter is limited to 511 characters and can consist of one or more group
names. The colon (:) character must be used to delimit multiple group names. Leading and trailing spaces are ignored,
but any other space is treated as part of the group name.
Note: The wildcard character (*) is no longer treated as a wildcard. The wildcard concept has been discontinued to
prevent security exposures. A group name can be specified as a full DN or by using only the cn portion. For example,
a group with a DN of cn=adminGroup,dc=mycompany,dc=com can be specified using the actual DN or with
adminGroup.
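The matching rules above (colon-delimited names, case-sensitive comparison, leading/trailing spaces ignored, empty filter always succeeds, full-DN or cn-only entries) can be summarized in a short sketch. This is illustrative only, not the firmware's actual implementation, and it assumes the user's group memberships are supplied as full DNs.

```python
def group_auth_succeeds(group_filter: str, user_groups) -> bool:
    """Illustrative model of the group-filter rules described above.

    group_filter: colon-delimited group names (full DN or cn portion).
    user_groups:  full DNs of the groups the user belongs to.
    """
    if not group_filter.strip():
        return True                            # empty filter: auto-success
    # Split on ':' and ignore leading/trailing spaces around each name.
    wanted = [g.strip() for g in group_filter.split(":") if g.strip()]
    for dn in user_groups:
        # Extract the cn portion, e.g. "cn=adminGroup,dc=..." -> "adminGroup"
        cn = dn.split(",")[0].split("=", 1)[-1]
        for name in wanted:
            if name == dn or name == cn:       # case-sensitive comparison
                return True
    return False
```

For example, the filter "adminGroup" matches a user in cn=adminGroup,dc=mycompany,dc=com, while "ADMINGROUP" does not, because the comparison is case sensitive.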
Nested group membership is supported only in Active Directory environments. For example, if a user is a member of
GroupA and GroupB, and GroupA is also a member of GroupC, the user is said to be a member of GroupC also.
Nested searches stop if 128 groups have been searched. Groups in one level are searched before groups in a lower
level. Loops are not detected.
Note: If you give a user the ability to modify basic, networking, and/or security related adapter configuration parameters, you should consider giving this same user the ability to restart the XClarity Controller (bit position 10). Otherwise, without this ability, a user might be able to change parameters (for example, the IP address of the adapter) but will not be able to make them take effect.
3. Choose whether or not to Enable enhanced role-based security for Active Directory Users under Active Directory
Settings (if Use LDAP server for Authentication and Authorization mode is used), or configure the Groups for Local
Authorization (if Use LDAP server for Authentication only (with local authorization) mode is used).
• Enable enhanced role-based security for Active Directory Users
If the enhanced role-based security setting is enabled, a free-formatted server name must be configured to act as the target name for this particular XClarity Controller. The target name can be associated with one
or more roles on the Active Directory server through a Role Based Security (RBS) Snap-In. This is
accomplished by creating managed targets, giving them specific names, and then associating them to the
appropriate roles. If a name is configured in this field, it provides the ability to define specific roles for users
and XClarity Controllers (targets) that are members of the same role. When a user logs in to the XClarity
Controller and is authenticated via Active Directory, the roles that the user is a member of are retrieved from
the directory. The permissions that are assigned to the user are extracted from the roles that also have as a
member a target that matches the server name that is configured here, or a target that matches any XClarity
Controller. Multiple XClarity Controllers can share the same target name. This could be used to group multiple
XClarity Controllers together and assign them to the same role (or roles) by using a single managed target.
Conversely, each XClarity Controller can be given a unique name.
The XClarity Controller uses two network controllers. One network controller is connected to the dedicated management port and the other is connected to the shared port. Each network controller is assigned its own burned-in MAC address. If DHCP is being used to assign an IP address to the XClarity Controller, a different IP address may be assigned by the DHCP server when a user switches between network ports or when a failover from the dedicated network port to the shared network port occurs. For this reason, when using DHCP, it is recommended that users access the XClarity Controller by host name rather than relying on an IP address. Even
if the XClarity Controller network ports are not changed, the DHCP server could possibly assign a different IP address
to the XClarity Controller when the DHCP lease expires, or when the XClarity Controller reboots. If a user needs to
access the XClarity Controller using an IP address that will not change, the XClarity Controller should be configured for
a static IP address rather than DHCP.
Click Network under BMC Configuration to modify XClarity Controller Ethernet settings.
To enable Virtual LAN (VLAN) tagging, select the Enable VLAN check box. When VLAN is enabled and a VLAN ID is configured, the XClarity Controller accepts only packets with the specified VLAN ID. VLAN IDs can be configured with numeric values between 1 and 4094.
From the MAC selection list, choose one of the following:
• Use burned in MAC address
The Burned-in MAC address option is a unique physical address that is assigned to this XClarity Controller by the
manufacturer. The address is a read-only field.
• Use custom MAC address
If a value is specified, the locally administered address overrides the burned-in MAC address. The locally administered address must be a hexadecimal value from 000000000000 through FFFFFFFFFFFF, in the form xx:xx:xx:xx:xx:xx where each x is a hexadecimal digit from 0 to 9 or a through f. The XClarity Controller does not support the use of a multicast address. The first byte of a multicast address is an odd number (the least significant bit is set to 1); therefore, the first byte must be an even number.
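The constraints above (twelve hexadecimal digits in colon-separated byte form, with an even first byte so the address is not multicast) can be checked with a short sketch; the function name is illustrative and not part of any XClarity tool:

```python
import re

# xx:xx:xx:xx:xx:xx where each x is a hexadecimal digit.
MAC_PATTERN = re.compile(r"^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$")

def is_valid_custom_mac(mac):
    # Must be in the colon-separated form described above.
    if not MAC_PATTERN.match(mac):
        return False
    first_byte = int(mac.split(":")[0], 16)
    # Multicast addresses have the least significant bit of the first
    # byte set to 1; the controller rejects those, so require it even.
    return first_byte % 2 == 0
```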
In the Maximum transmission unit field, specify the maximum transmission unit of a packet (in bytes) for your network interface.
The maximum transmission unit range is from 60 to 1500. The default value for this field is 1500.
Configuring DNS
Use the information in this topic to view or change XClarity Controller Domain Name System (DNS) settings.
Note: In a Flex System, DNS settings cannot be modified on the XClarity Controller. DNS settings are managed by the
CMM.
Click Network under BMC Configuration to view or modify XClarity Controller DNS settings.
If you select the Use additional DNS address servers check box, specify the IP addresses of up to three Domain Name System servers on your network. Each IP address must contain integers from 0 to 255, separated by periods. These DNS server addresses are added to the top of the search list, so a host name lookup is done on these servers before those that are automatically assigned by a DHCP server.
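The address format and the resulting lookup order can be sketched as follows; both function names are illustrative, not part of the controller firmware:

```python
def is_valid_ipv4(address):
    # Each IP address must contain integers from 0 to 255,
    # separated by periods.
    parts = address.split(".")
    return len(parts) == 4 and all(p.isdigit() and int(p) <= 255 for p in parts)

def dns_search_order(additional, dhcp_assigned):
    # Up to three additional servers go to the top of the search list,
    # so a host name lookup tries them before the DHCP-assigned servers.
    assert len(additional) <= 3 and all(is_valid_ipv4(a) for a in additional)
    return list(additional) + list(dhcp_assigned)
```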
Configuring DDNS
Use the information in this topic to enable or disable Dynamic Domain Name System (DDNS) protocol on the XClarity
Controller.
Click Network under BMC Configuration to view or modify XClarity Controller DDNS settings.
Click the Enable DDNS check box to enable DDNS. When DDNS is enabled, the XClarity Controller notifies a domain name server to change, in real time, the active domain name server configuration of its configured host names, addresses, or other information stored in the domain name server.
Choose an option from the item list to decide how you want the domain name of the XClarity Controller to be selected.
• Use custom domain name: You can specify the domain name to which the XClarity Controller belongs.
• Use domain name obtained from the DHCP server: The domain name to which the XClarity Controller belongs is
specified by the DHCP server.
Click Network under BMC Configuration to view or modify the XClarity Controller Ethernet over USB settings.
Ethernet over USB is used for in-band communications to the XClarity Controller. Click the check box to enable or disable the Ethernet over USB interface.
Important: If you disable Ethernet over USB, you cannot perform an in-band update of the XClarity Controller firmware or server firmware using the Linux or Windows flash utilities.
Mapping of external Ethernet port numbers to Ethernet over USB port numbers is controlled by selecting the Enable external Ethernet to Ethernet over USB port forwarding check box and completing the mapping information for the ports you wish to have forwarded from the management network interface to the server.
Configuring SNMPv3
Use the information in this topic to configure SNMP agents.
Complete the following steps to configure the XClarity Controller SNMPv3 alert settings.
1. Click Network under BMC Configuration.
2. Check the corresponding check box to enable the SNMPv3 agent or SNMP Traps.
3. If enabling SNMP Traps, select the event types for which you wish to be alerted:
• Critical
• Attention
• System
4. If enabling the SNMPv3 agent, complete the following fields:
a. In the BMC Contact field, enter the name of the contact person.
b. In the Location field, enter the site (geographic location).
Click Network under BMC Configuration to view or modify XClarity Controller IPMI settings. Complete the following
fields to view or modify IPMI settings:
Important: If you are not using any tools or applications that access the XClarity Controller through the network using
the IPMI protocol, it is highly recommended that you disable IPMI network access for improved security.
Because each BMC network setting is configured using separate IPMI requests and in no particular order, the BMC
does not have the complete view of all of the network settings until the BMC is restarted to apply the pending network
changes. The request to change a network setting may succeed at the time that the request is made, but later be determined to be invalid when additional changes are requested. If the pending network settings are incompatible when the
BMC is restarted, the new settings will not be applied. After restarting the BMC, you should attempt to access the BMC
using the new settings to ensure that they have been applied as expected.
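As a hypothetical illustration of that sequence (the channel number, addresses, and the use of ipmitool are assumptions, not values from this guide), the per-setting requests, restart, and verification might look like:

```shell
# Each setting is a separate IPMI request; none is fully validated
# until the BMC restarts and applies the pending changes together.
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 192.168.70.125
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.70.1
ipmitool mc reset cold    # restart the BMC to apply pending settings
ipmitool lan print 1      # after the restart, verify the settings took effect
```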
Click Network under BMC Configuration to view or modify XClarity Controller port assignments. Complete the following
fields to view or modify port assignments:
Web
The port number is 80. This field is not user-configurable.
Remote Presence
In this field specify the port number for Remote Presence. The default value is 3900.
SLP
The port number that is used for SLP is 427. This field is not user-configurable.
SSDP
The port number is 1900. This field is not user-configurable.
SSH
In this field specify the port number that is configured to access the command line interface through the SSH protocol.
The default value is 22.
SNMP Agent
In this field specify the port number for the SNMP agent that runs on the XClarity Controller. The default value is 161.
Valid port number values are from 1 to 65535.
SNMP Traps
In this field specify the port number that is used for SNMP traps. The default value is 162. Valid port number values are
from 1 to 65535.
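The defaults listed above can be restated in one structure for reference; this sketch simply collects the values from this section and is not an XClarity API:

```python
# Default XClarity Controller port assignments; entries marked
# "fixed" are not user-configurable.
DEFAULT_PORTS = {
    "web": 80,                # fixed
    "remote_presence": 3900,
    "slp": 427,               # fixed
    "ssdp": 1900,             # fixed
    "ssh": 22,
    "snmp_agent": 161,
    "snmp_traps": 162,
}

def is_valid_port(port):
    # Configurable ports accept values from 1 to 65535.
    return 1 <= port <= 65535
```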
Click Network under BMC Configuration to view or modify XClarity Controller access control settings.
On some servers the front panel USB port can be switched to attach either to the server or to the XClarity Controller.
Connection to the XClarity Controller is primarily intended for use with a mobile device running the Lenovo XClarity
Mobile app. When a USB cable is connected between the mobile device and the server’s front panel, an Ethernet over
USB connection will be established between the mobile app running on the device and the XClarity Controller.
Click Network under BMC Configuration to view or modify XClarity Controller front panel USB port to management
settings.
There are four types of settings that you can choose from:
Host Only Mode
The front panel USB port is always connected only to the server.
For additional information about the Mobile app, see the following site:
http://sysmgt.lenovofiles.com/help/topic/com.lenovo.lxca.doc/lxca_usemobileapp.html
Notes:
• If the front panel USB port is configured for Shared Mode, the port is connected to the XClarity Controller when there
is no power, and is connected to the server when there is power. When there is power, the control of the front panel
USB port can be switched back and forth between the server and the XClarity Controller. In shared mode, the port can
also be switched between the host and the XClarity Controller by pressing and holding the front panel Identification
button (for compute nodes it may be the USB management button) for more than 3 seconds.
• When configured in Shared Mode and the USB port is currently connected to the server, the XClarity Controller can
support a request to switch the front panel USB port back to the XClarity Controller. When this request is executed,
the front panel USB port will remain connected to the XClarity Controller until there is no USB activity to the XClarity
Controller for the period specified by the inactivity timeout.
Note: The default minimum TLS version setting is TLS 1.2, but you can configure the XClarity Controller to use other
TLS versions if needed by your browser or management applications. For more information, see “tls command” on
page 154.
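A management client connecting over HTTPS can pin its own floor to match. This sketch uses Python's standard ssl module and is simply an illustration of enforcing a TLS 1.2 minimum on the client side, not a Lenovo-provided tool:

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# matching the XClarity Controller's default minimum TLS version.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
```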
Click Security under BMC Configuration to access and configure security properties, status, and settings for your
XClarity Controller.
SSL is a security protocol that provides communication privacy. SSL enables client/server applications to communicate in a way that prevents eavesdropping, tampering, and message forgery. You can configure the XClarity Controller to use SSL support for different types of connections, such as the secure web server (HTTPS), secure LDAP connections (LDAPS), CIM over HTTPS, and the SSH server, and to manage the certificates that are required for SSL.
You can use SSL with a self-signed certificate or with a certificate that is signed by a third-party certificate authority.
Using a self-signed certificate is the simplest method for using SSL, but it does create a small security risk. The risk arises because the SSL client has no way of validating the identity of the SSL server for the first connection that is attempted between the client and server. For example, it is possible that a third party might impersonate the XClarity Controller web server and intercept data that is flowing between the actual XClarity Controller web server and the user's web browser. If, at the time of the initial connection between the browser and the XClarity Controller, the self-signed certificate is imported into the certificate store of the browser, all future communications will be secure for that browser (assuming that the initial connection was not compromised by an attack).
For more complete security, you can use a certificate that is signed by a certificate authority (CA). To obtain a signed certificate, select Generate Certificate Signing Request (CSR), then select Download Certificate Signing Request (CSR) and send the CSR to a CA. When the signed certificate is received, select Import Signed Certificate to import it into the XClarity Controller.
The function of the CA is to verify the identity of the XClarity Controller. A certificate contains digital signatures for the
CA and the XClarity Controller. If a well-known CA issues the certificate or if the certificate of the CA has already been
imported into the web browser, the browser can validate the certificate and positively identify the XClarity Controller
web server.
The XClarity Controller requires a certificate for use with HTTPS Server, CIM over HTTPS, and the secure LDAP client.
In addition, the secure LDAP client requires one or more trusted certificates to be imported. The trusted certificate
is used by the secure LDAP client to positively identify the LDAP server. The trusted certificate is the certificate of the
CA that signed the certificate of the LDAP server. If the LDAP server uses self-signed certificates, the trusted certificate
can be the certificate of the LDAP server itself. Additional trusted certificates must be imported if more than one LDAP
server is used in your configuration.
Click Security under BMC Configuration to configure the SSL certificate management.
When managing XClarity Controller certificates, you are presented with the following actions:
Download Signed Certificate
Use this link to download a copy of the currently installed certificate. The certificate can be downloaded in either PEM
or DER format. The contents of the certificate can be viewed using a third-party tool such as OpenSSL (www.openssl.
org). An example of the command line for viewing the contents of the certificate using OpenSSL would look something
like the following:
openssl x509 -in cert.der -inform DER -text
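Because the certificate can be downloaded in either PEM or DER format, converting between the two is sometimes needed. Python's standard ssl module can perform the re-encoding; the byte string below is a placeholder stand-in, not a real certificate:

```python
import ssl

# DER is the raw binary encoding; PEM wraps the same bytes in base64
# between BEGIN/END CERTIFICATE markers. ssl re-encodes without parsing.
der_bytes = bytes.fromhex("3003020101")   # placeholder DER payload
pem_text = ssl.DER_cert_to_PEM_cert(der_bytes)
assert pem_text.startswith("-----BEGIN CERTIFICATE-----")
# Round-tripping back to DER reproduces the original bytes.
assert ssl.PEM_cert_to_DER_cert(pem_text) == der_bytes
```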
Click Network under BMC Configuration to configure the Secure Shell server.
Before the SSH server can be enabled, an SSH server key must first be generated.
Notes:
• No certificate management is required to use this option.
• The XClarity Controller will initially create an SSH server key. If you wish to generate a new SSH server key, click Network under BMC Configuration; then, click Regenerate key.
• After you complete the action, you must restart the XClarity Controller for your changes to take effect.
The XClarity Controller provides an IPMI interface via the KCS channel that does not require authentication.
Click Security under BMC Configuration to enable or disable IPMI over KCS access.
Note: After you change the settings, you must restart the XClarity Controller for your changes to take effect.
Important: If you are not running any tools or applications on the server that access the XClarity Controller through the
IPMI protocol, it is highly recommended that you disable the IPMI KCS access for improved security. XClarity Essentials does use the IPMI over KCS interface to the XClarity Controller. If you disabled the IPMI over KCS interface, re-enable it prior to running XClarity Essentials on the server, and disable the interface again after you have finished.
This feature allows you to decide whether or not to allow the system firmware to return to an older firmware level.
To enable or disable this feature, click Network under BMC Configuration. Any changes that are made take effect immediately, without requiring a restart of the XClarity Controller.
This feature is only available if the Physical Presence Policy is enabled through UEFI. Once enabled, you can access
the physical presence feature by clicking Security under BMC Configuration.
The Security Key Lifecycle Manager (SKLM) is a software product for creating and managing security keys. The
SKLM for ThinkSystem Self Encrypting Drives (SED) - Features on Demand (FoD) option is a ThinkSystem FoD
option that enables centralized management of encryption keys. The encryption keys are used to gain access to data
stored on SEDs in a ThinkSystem server.
A centralized SKLM (key repository) server provides the encryption keys to unlock the SEDs in the ThinkSystem
server. The FoD option requires that a FoD Activation key be installed in the XClarity Controller FoD key repository.
The Activation key for the FoD option is a unique identifier composed of the machine type and serial number. To use the storage key/drive access functionality, the FoD key ThinkSystem TKLM Activation for Secure Drive Encryption (Type 32796 or 801C) must be installed in the XClarity Controller FoD key repository. See Chapter 7 “License Management” on page 71 for information pertaining to installing an activation key.
The SKLM FoD option is limited to ThinkSystem XClarity Controller-based servers. To increase security, the XClarity
Controller can be placed in a separate management network. The XClarity Controller uses the network to retrieve
encryption keys from the SKLM server; therefore, the SKLM server must be accessible to the XClarity Controller
through this network. The XClarity Controller provides the communication channel between the SKLM server and the
requesting ThinkSystem server. The XClarity Controller firmware attempts to connect with each configured SKLM
server, stopping when a successful connection is established.
The XClarity Controller establishes communication with the SKLM server if the following conditions are met:
• A valid FoD activation key is installed in the XClarity Controller.
• One or more SKLM server host name/IP addresses are configured in the XClarity Controller.
• Two certificates (client and server) for communication with the SKLM server are installed in the XClarity Controller.
Note: Configure at least two (a primary and a secondary) SKLM servers for your device. If the primary SKLM server does not respond to the connection attempt from the XClarity Controller, connection attempts are initiated with the additional SKLM servers until a successful connection is established.
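The failover behavior described above amounts to a try-in-order loop; this sketch (the function name and timeout are illustrative assumptions, not the firmware's actual logic) shows the idea:

```python
import socket

def connect_first_available(servers, timeout=2.0):
    """Try each configured SKLM server in order; return the first
    TCP connection that succeeds (primary first, then secondaries)."""
    for host, port in servers:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # this server did not respond; try the next one
    raise ConnectionError("no configured SKLM server could be reached")
```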
A Transport Layer Security (TLS) connection must be established between the XClarity Controller and the SKLM server. The XClarity Controller authenticates the SKLM server by comparing the server certificate submitted by the SKLM
server, with the SKLM server certificate previously imported into the XClarity Controller’s trust store. The SKLM server
authenticates each XClarity Controller that communicates with it and checks to verify that the XClarity Controller is
permitted to access the SKLM server. This authentication is accomplished by comparing the client certificate that the
XClarity Controller submits, with a list of trusted certificates that are stored on the SKLM server.
At least one SKLM server (key repository server) must be configured; the device group is optional. The SKLM server certificate must be imported, and a client certificate must be specified. By default, the HTTPS certificate is used; if you wish to replace it, you can generate a new one.
Port
Type the port number for the SKLM server in this field. If this field is left blank, the default value of 5695 is used. Valid
port number values are 1 to 65535.
A device group allows users to manage the self-encrypting drive (SED) keys on multiple servers as a group. A device
group with the same name must also be created on the SKLM server.
Client and server certificates are used to authenticate the communication between the SKLM server and the XClarity Controller located in the ThinkSystem server. Client and server certificate management is discussed in this section.
A client certificate is required for communication with the SKLM server. The client certificate contains digital signatures
for the CA and the XClarity Controller.
Notes:
• Certificates are preserved across firmware updates.
• If a client certificate is not created for communication with the SKLM server, the XClarity Controller HTTPS server
certificate is used.
• The function of the CA is to verify the identity of the XClarity Controller.
To create a client certificate, click the plus icon and select one of the following items:
• Generate a New Key and a Self-Signed Certificate
• Generate a New Key and a Certificate Signing Request (CSR)
The Generate a New Key and a Self-Signed Certificate action item generates a new encryption key and a self-signed certificate. In the Generate New Key and Self-Signed Certificate window, type or select the information in the required fields and any optional fields that apply to your configuration (see the following table). Click OK to generate your encryption key and certificate. A progress window is displayed while the self-signed certificate is being generated, and a confirmation window is displayed when the certificate is successfully installed.
Note: The new encryption key and certificate replace any existing key and certificate.
After the client certificate has been generated, you can download the certificate from the XClarity Controller to store on your system by selecting the Download Certificate action item.
The Generate a New Key and a Certificate Signing Request (CSR) action item generates a new encryption key and
a CSR. In the Generate a New Key and a Certificate Signing Request window, type or select the information in the required fields and any optional fields that apply to your configuration (see the following table). Click OK to generate your new encryption key and CSR.
A progress window is displayed while the CSR is being generated and a confirmation window is displayed upon successful completion. After generation of the CSR, you must send the CSR to a CA for digital signing. Select the Download
Certificate Signing Request (CSR) action item and click OK to save the CSR to your server. You can then submit the
CSR to your CA for signing.
The CSR is digitally signed by the CA using the user’s certificate processing tool, such as the OpenSSL or Certutil
command line tool. All client certificates that are signed using the user’s certificate processing tool have the same
base certificate. This base certificate must also be imported to the SKLM server so that all servers digitally signed by
the user are accepted by the SKLM server.
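As a hedged sketch of that flow with the OpenSSL command line tool (file names, subject fields, and the throwaway CA are placeholders, and the BMC itself performs the key/CSR generation internally):

```shell
# Generate a new key and CSR (the equivalent of the BMC's
# "Generate a New Key and a Certificate Signing Request" action):
openssl req -new -newkey rsa:2048 -nodes \
    -keyout client.key -out client.csr -subj "/O=Example/CN=xcc-client"
# The CA digitally signs the CSR (a throwaway self-signed CA stands in here):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca.key -out ca.crt -subj "/O=Example/CN=example-ca"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out client.crt -days 365
# client.crt is then imported with "Import a Signed Certificate"; the CA
# certificate (ca.crt) is the base certificate imported to the SKLM server.
```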
After the certificate has been signed by the CA you must import it into the BMC. Select the Import a Signed Certificate
action item and select the file to upload as the client certificate; then, click OK. A progress window displays while the
CA-signed certificate is being uploaded. A Certificate Upload window is displayed if the upload process is successful.
A Certificate Upload Error window is displayed if the upload process is not successful.
After a CA-signed certificate is imported into the BMC, select the Download Certificate action item. When you select
this action item, the CA-signed certificate is downloaded from the XClarity Controller to store on your system.
The server certificate is generated in the SKLM server and must be imported into the XClarity Controller before the
secure drive access functionality will work. To import the certificate that authenticates the SKLM server to the BMC,
click Import a Certificate from the Server Certificate Status section of the Drive Access page. A progress indicator is
displayed as the file is transferred to storage on the XClarity Controller.
After the server certificate is successfully transferred to the XClarity Controller, the Server Certificate Status area displays the following content: A server certificate is installed.
If you want to remove a trusted certificate, click the corresponding Remove button.
Select Backup and Restore under BMC Configuration to perform the following actions:
• View management controller configuration summary
• Backup or restore the management controller configuration
• View backup or restore status
• Reset the management controller configuration to its factory default settings
• Access the management controller initial setup wizard
Select Backup and Restore under BMC Configuration. At the very top is the Backup BMC configuration section.
If a backup was previously made, you will see the details in the Last backup field.
To back up the current BMC configuration, follow the steps shown below:
1. Specify the password for the BMC backup file.
2. Select whether you wish to encrypt the whole file or only sensitive data.
3. Begin the backup process by clicking Start Backup. During the process, you are not allowed to perform any restore/reset actions.
4. When the process is completed, a button will appear to let you download and save the file.
Select Backup and Restore under BMC Configuration. Located below Backup BMC Configuration is the Restore BMC
from Configuration File section.
To restore the BMC to a previously saved configuration, follow the steps shown below:
1. Browse to select the backup file and input the password when prompted.
2. Verify the file by clicking View content to view the details.
3. After verifying the content, click Start Restore.
To reset the BMC to factory defaults, follow the steps shown below:
1. Click Start to Reset BMC to Factory Defaults.
Notes:
• Only users with Supervisor user authority level can perform this action.
• The Ethernet connection is temporarily disconnected. You must log in to the XClarity Controller web interface again after the reset operation is completed.
• Once you click Start to Reset BMC to Factory Defaults, all previous configuration changes will be lost. If you wish to enable LDAP when restoring the BMC configuration, you will need to first import a trusted security certificate.
• After the process is completed, the XClarity Controller will be restarted. If this is a local server, your TCP/IP
connection will be lost and you may need to reconfigure the network interface to restore connectivity.
For details on how to restart the XClarity Controller, see “Power actions” on page 54.
The IMM and IMM2 consolidate service processor functionality previously provided by the combination of the
Baseboard Management Controller (BMC) and the Remote Supervisor Adapter II in System x and BladeCenter
products.
Upgrading to Standard or Advanced is performed by using a Lenovo Features on Demand (FoD) software license key.
Note: IMM2 Basic does not include web browser or remote presence capabilities.
IMM2 Standard (as standard in some servers or as enabled using the Features on Demand software license key in
other servers) has the following features:
• Secure web server user interface
• Remote power control
• Access to server vital product data (VPD)
• Advanced Predictive Failure Analysis (PFA) support
• Power Management
• Automatic notification and alerts
• Continuous health monitoring and control
• Choice of a dedicated or shared Ethernet connection
• Domain Name System (DNS) server support
• Dynamic Host Configuration Protocol (DHCP) support
• E-mail alerts
• Syslog logging support
• Embedded Dynamic System Analysis (DSA)
• Enhanced user authority levels
• LAN over USB for in-band communications to the IMM
• Event logs that are time stamped, saved on the IMM, and that can be attached to e-mail alerts
• Support for Industry-standard interfaces and protocols: IPMI V2.0, CIM, and SNMP
• OS watchdogs
• Serial over LAN
IMM2 Advanced (as enabled using the Features on Demand software license key) has the following features:
• Remote presence, including remote control of server via a Java or ActiveX client
• Supports up to four concurrent remote users
• Operating system failure screen capture and display through the web interface
• Video recorder and playback function
• Virtual media allowing the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server.
For servers with an SD Media adapter installed, you can configure volumes on those SD Cards for use by the IMM.
Note: For servers where only IMM2 Basic is installed (for example, x3100 M4, x3250 M4, nx360 M4), the use of IMM2
Advanced requires IMM2 Standard also be purchased and enabled.
Note: The IMM2 Advanced upgrade requires the IMM2 Standard upgrade.
Lenovo offers two levels of IMM, Standard and Premium. If the server has IMM Standard functionality, it can be
upgraded to IMM Premium by purchasing and installing a Virtual Media Key (either part number 46C7526 or 46C7527,
depending on the server) on the server system board. This key is a physical component (Figure 1). However, no new
firmware is required. IMM Premium provides Remote Presence and Virtual Media capabilities. Figure 1 shows where
the Virtual Media Key is installed in one of the supported servers (x3620 M3).
IMM Premium (as enabled using the Virtual Media Key) adds the following features in addition to the features of IMM
Standard:
• Remote presence, including remote control of server
• Operating system failure screen capture and display through the web interface
• Virtual media allowing the attachment of a diskette drive, CD/DVD drive, USB flash drive, or disk image to a server
The following table lists the available Virtual Media Keys and their part numbers. Table 3 lists the key used in each
System x server. Note that three different part numbers exist. The parts are keyed to prevent insertion into the wrong
system.
Withdrawn: All three part numbers in the table are withdrawn from marketing.
* For systems with only IMM2 Basic standard, the IMM2 Advanced upgrade requires IMM2 Standard (90Y3900) also
be purchased and enabled.
* The x3755 M3 includes an Aspeed AST-2050 Baseboard Management Controller (BMC). This BMC is different from
the BMCs in earlier servers and includes the IMM Premium feature set.
Each BladeCenter chassis, with the exception of the BladeCenter S, supports a redundant pair of management
modules. The two management modules used in a chassis must be identical.
The MM/AMM is used to monitor, manage, configure, report logs, and update firmware from BladeCenter chassis
blades and I/O modules. Although the IMM is now included in some blade servers, the AMM remains the management
module for systems-management functions for BladeCenter and blade servers. There is no external network access
to the IMM on blade servers. The AMM must be used for remote management of blade servers. The IMM replaces the
functionality of the BMC and the Concurrent Keyboard, Video, and Mouse (cKVM) option card in past blade server
products:
• The Advanced Management Module for BladeCenter S, BladeCenter E, BladeCenter H, and BladeCenter HT is part number 25R5778.
• The Advanced Management Module for BladeCenter T is part number 32R0835.
• The original MMs have been withdrawn but were only supported in BladeCenter E (BC-E, part number 48P7055) and BladeCenter T (BC-T, part number 90P3741).
All BladeCenter chassis models with the original MM installed can be upgraded to an AMM. In fact, most current
servers require that the chassis have AMMs installed.
The following table lists the service processors that are standard and optional in each BladeCenter chassis.
Useful links
These web pages provide additional information about the service processors in System x and BladeCenter servers:
Lenovo Press paper Using System x Features on Demand
http://lenovopress.com/redp4895
ServerProven
http://www.lenovo.com/us/en/serverproven/xseries/upgrades/smmatrix.shtml
Lenovo Press document Service Processors Supported in System x Servers, covering Netfinity and xSeries as well as
the first generation of System x servers
http://lenovopress.com/tips0146
LENOVO PROVIDES THIS PUBLICATION ”AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. Lenovo may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without
notice.
The products described in this document are not intended for use in implantation or other life support applications
where malfunction may result in injury or death to persons. The information contained in this document does not affect
or change Lenovo product specifications or warranties. Nothing in this document shall operate as an express or
implied license or indemnity under the intellectual property rights of Lenovo or third parties. All information contained
in this document was obtained in specific environments and is presented as an illustration. The result obtained in other
operating environments may vary. Lenovo may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Any references in this publication to non-Lenovo Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this Lenovo product, and use of those Web sites is at your own risk. Any performance data contained herein was determined in a controlled environment. Therefore, the result obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.
The following terms are trademarks of Lenovo in the United States, other countries, or both:
Advanced Settings Utility
BladeCenter®
Dynamic System Analysis
Flex System
Lenovo®
NeXtScale
Netfinity®
System x®
X5
iDataPlex®
xSeries®
Access® and ActiveX® are trademarks of Microsoft Corporation in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
IT is a constant part of business and of general life. The expertise of these innovators in delivering IT solutions has helped make the planet more efficient. As organizational leaders seek to extract more real value from their data, business processes, and other key investments, IT is moving to the strategic center of business.
To meet these business demands, new categories of systems emerged. These systems combine the flexibility of
general-purpose systems, the elasticity of cloud computing, and the simplicity of an appliance that is tuned to the
workload. These systems represent the collective knowledge of thousands of deployments, established guidelines,
innovative thinking, IT leadership, and distilled expertise.
These systems are optimized for performance and virtualized for efficiency. These systems offer a no-compromise design with system-level upgradeability. The built-for-cloud capability provides built-in flexibility and simplicity.
Lenovo Flex System combined with Lenovo XClarity Administrator is a converged infrastructure system with built-in expertise that deeply integrates with the complex IT elements of an infrastructure.
Lenovo and its business partners can deliver comprehensive infrastructure solutions that combine servers, storage,
networking, virtualization, and management in a single structure. Our solutions are delivered with built-in expertise that
enables organizations to manage and flexibly deploy integrated patterns of virtual and hardware resources through
unified management.
This section introduces the major components of the Flex System infrastructure.
Lenovo XClarity Administrator is a virtual appliance that is quickly imported into a virtualized environment, which gives
easy deployment and portability. It can be up and running incredibly quickly, discovering a Lenovo IT environment and
managing systems, without the need for any agents to be installed.
The chassis is designed to support multiple generations of technology and offers independently scalable resource
pools for higher usage and lower cost per workload.
With the ability to handle up to 14 standard-width nodes that deliver independent two-socket Intel Xeon E5 servers (or even accommodate an eight-socket single Intel Xeon E7 node), the Enterprise Chassis provides flexibility and tremendous compute capacity in its 10U package.
Additionally, the rear of the chassis accommodates four high-speed I/O bays that can accommodate up to 40 GbE
high-speed networking, 16 Gb Fibre Channel, or 56 Gb InfiniBand. With interconnecting compute nodes, networking,
and storage that uses a high-performance and scalable mid-plane, the Enterprise Chassis can support the latest
high-speed networking technologies.
The ground-up design of the Enterprise Chassis reaches new levels of energy efficiency through innovations in power,
cooling, and air flow. By using simpler controls and futuristic designs, the Enterprise Chassis can break free of “one
size fits all” energy schemes.
The ability to support the demands of tomorrow's workloads is built in with a new I/O architecture, which
provides choice and flexibility in fabric and speed. With the ability to use Ethernet, InfiniBand, Fibre Channel (FC),
Fibre Channel over Ethernet (FCoE), and iSCSI, the Enterprise Chassis is uniquely positioned to meet the growing
and future I/O needs of large and small businesses.
The chassis is the same width and depth as the Enterprise Chassis and identical in its node, I/O module, CMM, and
Fan modules. At 11U, the Carrier-Grade Chassis is 1U higher than the Enterprise Chassis to allow for extra airflow. This added 1U of air ducting allows for elevated-temperature operation at ASHRAE 4 levels and for temporary elevated-temperature excursions of up to 55 °C.
This Carrier-Grade Chassis is designed to NEBS level 3 and ETSI certification levels. It is designed for operation
within earthquake zone 4 areas. The chassis supports -48 V DC power operation, as required for many Central Office
Telco environments.
Compute nodes
Lenovo offers compute nodes that vary in architecture, dimension, and capabilities.
Optimized for efficiency, density, performance, reliability, and security, the portfolio comprises Intel Xeon
processor-based nodes that are designed to make full use of the capabilities of these processors, all of which can be
mixed within the same Enterprise Chassis.
Intel-based compute nodes are available in the following models, which range from the two-socket to the eight-socket Intel processor family:
• Intel Xeon E5-2600 v3 and E5-2600 v4 product families
• Intel Xeon E5-4600 v2 product family
• Intel Xeon E7-8800 v3, E7-4800 v3, and E7-2800 v2 product families
Up to 14 two-socket Intel Xeon E5-2600 servers can be deployed in a single Enterprise Chassis where high-density cloud, virtual desktop, or server virtualization is needed.
The nodes are complemented with leadership I/O capabilities of up to 16 channels of high-speed I/O lanes per
standard wide node bay and 32 lanes per full wide node bay. Various I/O adapters and matching I/O Modules are
available.
Expansion nodes
Expansion nodes can be attached to the x240 M5, allowing expansion of the node's capabilities with locally attached storage or more PCIe adapters.
The Storage Expansion Node provides locally attached disk expansion. Hot-plug SAS and SATA disks are supported, as well as SSDs.
With the attachment of the PCIe Expansion Node, the node can have up to four PCIe adapters attached. High-performance GPUs can also be installed within the PCIe Expansion Node, providing virtual desktop acceleration or high-performance compute capabilities.
Storage
Flex System can be connected to various external storage systems from Lenovo as well as many other storage
vendors. The Lenovo Storwize V3700 storage enclosure is one such disk system that supports attachment to Flex
System.
There are various storage solutions that are available from third-party vendors. These vendors publish support
statements for end-to-end connectivity between their storage and the Flex System Chassis components.
I/O modules
By using the range of available modules and switches that support key network protocols, you can configure Flex System to fit in your infrastructure without sacrificing readiness for the future. The networking resources in Flex System are standards-based, flexible, and fully integrated into the system. This
combination gives you no-compromise networking for your solution. Network resources are virtualized and managed
by workload. These capabilities are automated and optimized to make your network more reliable and simpler to
manage.
The Flex System Fabric EN4093R 10Gb Scalable Switch is shown in Figure 1-5.
This book
This book describes the Flex System products that are available from Lenovo, including all of the chassis and
chassis options, the full range of Intel nodes, expansion nodes and associated options. We also describe converged
system offerings that are available from Lenovo.
We cover the configuration tools that are used to configure (and price) a Lenovo Flex System and Lenovo
Converged System for Infrastructure. This book contains machine type model numbers, option part numbers, and
feature codes.
We cover the technology and features of the chassis, compute nodes, management features, connectivity, and
options, starting with a description of the systems management features of the Flex System product portfolio.
Systems management
Lenovo XClarity Administrator is designed to help you get the most out of your Flex System installation. By using this
highly capable management tool, you also can automate repetitive tasks. The management interface can significantly
reduce the number of manual navigational steps for typical management tasks. Benefit from simplified system setup
procedures, by using configuration patterns and built-in expertise to consolidate monitoring for physical and
virtual resources.
Chassis types: The management architecture of the Flex System Enterprise Chassis is identical to the
Flex System Carrier-Grade Chassis. Where the term Enterprise Chassis is used, it applies equally to both
chassis.
This chapter includes the following topics:
2.1, “Management network” on page 10
2.2, “Chassis Management Module” on page 12
2.3, “Security” on page 16
2.4, “Compute node management” on page 17
2.5, “Lenovo XClarity Administrator” on page 19
Management network
In the Flex System chassis, there are separate management and data networks. The management network is a private
and secure Gigabit Ethernet network. It is used to complete management-related functions throughout the chassis,
including management tasks that are related to the compute nodes, switches, storage, and the chassis.
92 Tech Sales Certification - System Management Study Guide
The internal management network is shown in Figure 2-1 as the blue lines. It connects the Chassis Management
Module (CMM) to the compute nodes and the switches in the I/O bays. The management networks in multiple chassis
deployments can be connected through the external ports of the CMMs in each chassis, via a GbE top-of-rack switch.
Figure 2-1 Flex System management network with internal Lenovo XClarity Administrator
The data network is shown in Figure 2-1 as yellow lines. One of the key functions that the data network supports is
the discovery of operating systems running on the various network endpoints by Lenovo XClarity Administrator.
Lenovo XClarity Administrator is downloaded as a Virtual Machine image and can be installed onto a virtual machine
running either inside the chassis or outside. Depending on internet connection speed, the management system can be
installed and up and running, discovering manageable Lenovo systems in less than 30 minutes, offering impressive
time to value.
Lenovo XClarity Administrator not only manages Flex System chassis based products, it can also manage a mixed
environment of many differing Lenovo systems.
Lenovo XClarity Administrator can discover chassis in your environment by probing for manageable systems that are on the same IP subnet as Lenovo XClarity Administrator, by using a specified IP address or range of IP addresses, or by importing information from a spreadsheet.
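As a minimal illustration of the address-range option, the following sketch expands a user-supplied inclusive range into the individual candidate addresses a discovery probe would try. The function name and behavior are invented for this example; they do not reflect XClarity Administrator's actual implementation.

```python
from ipaddress import IPv4Address

def expand_range(start: str, end: str) -> list[str]:
    """Expand an inclusive IPv4 range into individual candidate addresses.

    Illustrative only -- not XClarity Administrator's internal code.
    """
    lo, hi = IPv4Address(start), IPv4Address(end)
    if hi < lo:
        raise ValueError("range end precedes range start")
    return [str(IPv4Address(int(lo) + i)) for i in range(int(hi) - int(lo) + 1)]

# Example: the factory-default compute node addresses for slots 1-3
candidates = expand_range("192.168.70.101", "192.168.70.103")
```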
Figure 2-1 on page 10 shows Lenovo XClarity Administrator deployed within a Flex System environment. Here, the
VM that contains Lenovo XClarity Administrator is installed within the Chassis on a node that is running a supported
hypervisor. In this example, there is a single network (management and data). All communications between Lenovo XClarity Administrator and the network occur over one (eth0) network interface on the host.
Figure 2-2 Flex System Management network with external Lenovo XClarity Administrator
The Lenovo XClarity management network requires one or two network connections.
When only one network interface is present, the following conditions must be met:
• The interface must be configured to support the discovery and management of hardware.
It must communicate with the CMM in each managed chassis, the Integrated Management Module 2 (IMM2) of each managed compute node and rack server, and the Flex switches in each managed chassis.
• If you intend to acquire firmware updates from Lenovo’s electronic fix-distribution website, this interface must also
have connectivity to the Internet (typically through a firewall).
Otherwise, you must manually import firmware updates into the management-sever updates repository.
• If you intend to deploy operating system images to managed servers, the network interface must have IP network
connectivity to the server network interface that is used to access the host operating system and must be configured
with an IPv4 address.
When two network interfaces (Eth0 and Eth1) are present (as shown in Figure 2-2 on page 11), the following conditions must be met:
• The Eth0 interface often is connected to the management network and used to discover and manage hardware. It
must communicate with the CMM of each managed chassis, the IMM2 of each managed server, and the Flex
switches that are installed in each managed chassis.
• If you intend to acquire firmware updates from the Fix Central website, the Eth0 interface must also have connectivity
to the Internet (typically through a firewall). Otherwise, you must import firmware updates into the management server
updates repository.
• If you intend to deploy operating system images to managed servers, the network interface must have IP network
connectivity to the server network interface that is used to access the host operating system and must be configured
with an IPv4 address.
For more information about the Lenovo XClarity Administrator features and functions, see
2.5, “Lenovo XClarity Administrator” on page 19.
CMM2 is the Chassis Management Module that is currently available from Lenovo. The original CMM is now
withdrawn from marketing.
The next section describes the usage models of the CMM and its features.
For more information about the CMM see 3.6, “Chassis Management Module” on page 62.
Mixing of CMM versions: If two CMMs are installed in a Flex System chassis, they should be of the same type. If a primary CMM2 is installed, the secondary must also be a CMM2.
Through an embedded firmware stack, the CMM implements functions to monitor, control, and provide external user interfaces to manage all chassis resources. You can use the CMM to perform the following functions:
• Define login IDs and passwords.
• Configure security settings, such as data encryption and user account security. The CMM
contains an LDAP client that can be configured to provide user authentication through one or
more LDAP servers. The LDAP server (or servers) to be used for authentication can be
discovered dynamically or manually pre-configured.
• Select recipients for alert notification of specific events.
• Monitor the status of the compute nodes and other components.
• Find chassis component information.
• Discover other chassis in the network and enable access to them.
• Control the chassis, compute nodes, and other components.
• Access the I/O modules to configure them.
• Change the start sequence in a compute node.
• Set the date and time.
• Use a remote console for the compute nodes.
• Enable multi-chassis monitoring.
• Set power policies and view power consumption history for chassis components.
Interfaces
The CMM supports a web-based graphical user interface (GUI) that provides a way to perform chassis management
functions within a supported web browser. You can also perform management functions through the CMM
command-line interface (CLI). The web-based and CLI interfaces are accessible through the single RJ45 Ethernet connector on the CMM, or from any system that is connected to the same network.
The CMM does not have a fixed static IPv6 IP address by default. Initial access to the CMM in an IPv6 environment
can be done by using the IPv4 IP address or the IPv6 link-local address. The IPv6 link-local address is automatically
generated based on the MAC address of the CMM. By default, the CMM is configured to respond to DHCP first before
it uses its static IPv4 address. If you do not want this operation to occur, connect locally to the CMM and change
the default IP settings. For example, you can connect locally by using a notebook. The web-based GUI brings together
all of the functionality that is needed to manage the chassis elements in an easy-to-use fashion consistently across all
System x IMM2 based platforms.
Security
Today’s world of computing demands tighter security standards and native integration with computing platforms. For example, the push towards virtualization has increased the need for more security. This increase comes as more mission-critical workloads are consolidated onto fewer and more powerful servers. The Flex System Enterprise Chassis
takes a new approach to security with a ground-up chassis management design to meet new security standards.
The following security enhancements and features are provided in the chassis:
• Single sign-on (central user management)
• End-to-end audit logs
• Secure boot: Trusted Platform Module (TPM) and CRTM
• Intel TXT technology (Intel Xeon based compute nodes)
• Signed firmware updates to ensure authenticity
• Secure communications
• Certificate authority and management
• Chassis and compute node detection and provisioning
• Role-based access control
• Security policy management
• Same management protocols that are supported on BladeCenter AMM for compatibility with earlier versions
• Insecure protocols are disabled by default in the CMM, with lock settings to prevent users from inadvertently or maliciously enabling them
• Supports up to 84 local CMM user accounts
• Supports up to 32 simultaneous sessions
• CMM supports LDAP authentication
The Enterprise Chassis ships in the Secure state and supports the following security policy settings:
Secure: Default setting to ensure a secure chassis infrastructure and includes the following features:
– Strong password policies with automatic validation and verification checks
– Updated passwords that replace the manufacturing default passwords after the initial setup
– Only secure communication protocols, such as Secure Shell (SSH) and Secure Sockets Layer (SSL)
The centralized security policy makes Enterprise Chassis easy to configure. All components
run with the same security policy that is provided by the CMM. This consistency ensures that
all I/O modules run with a hardened attack surface.
The CMM and Lenovo XClarity Administrator each have their own independent security
policies that control, audit, and enforce the security settings. The security settings include the
network settings and protocols, password and firmware update controls, and trusted
computing properties.
The management controllers for the various Enterprise Chassis components have the following default IPv4
addresses:
• CMM: 192.168.70.100
• Compute nodes: 192.168.70.101-114 (corresponding to the slots 1 - 14 in the chassis)
• I/O Modules: 192.168.70.120-123 (sequentially corresponding to chassis bay numbering)
In addition to the IPv4 address, all I/O modules support link-local IPv6 addresses and configurable external IPv6 addresses.
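The slot-to-address mapping above is simple to compute. The following sketch captures it; the helper names are invented for this example, but the address values come directly from the defaults listed above.

```python
CMM_DEFAULT_IP = "192.168.70.100"

def default_node_ip(slot: int) -> str:
    """Default management address for a compute node in chassis slots 1-14
    (.101 through .114)."""
    if not 1 <= slot <= 14:
        raise ValueError("Enterprise Chassis node slots are numbered 1-14")
    return f"192.168.70.{100 + slot}"

def default_io_module_ip(bay: int) -> str:
    """Default management address for an I/O module in bays 1-4
    (.120 through .123, sequentially by bay)."""
    if not 1 <= bay <= 4:
        raise ValueError("I/O module bays are numbered 1-4")
    return f"192.168.70.{119 + bay}"
```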
In addition to the new interface, the following other major enhancements over the previous IMMv1 are included:
• Faster processor and more memory
• IMM2 manageable “northbound” from outside the chassis, which enables consistent management and scripting with
System x rack servers
• Remote presence:
– Increased color depth and resolution for more detailed server video
– ActiveX client in addition to Java client
– Increased memory capacity (~50 MB) provides convenience for remote software installations
• No IMM2 reset is required on configuration changes because they become effective immediately without restart
• Hardware management of non-volatile storage
• Faster Ethernet over USB
• 1 Gb Ethernet management capability
• Improved system power-on and boot time
• More detailed information for UEFI detected events enables easier problem determination and fault isolation
• User interface meets accessibility standards (CI-162 compliant)
• Separate audit and event logs
• “Trusted” IMM with significant security enhancements (CRTM/TPM, signed updates, authentication policies, and so
on)
• Simplified update and flashing mechanism
• Syslog alerting mechanism provides an alternative to email and SNMP traps
• Support for Features on Demand (FoD) enablement of server functions, option card features, and System x solutions
and applications
• First Failure Data Capture: One button web press starts data collection and download
For more information about IMM2 as implemented in Flex System compute nodes, see Chapter 5, “Compute nodes” on
page 213.
For more information, see Integrated Management Module II User’s Guide available from:
https://download.lenovo.com/servers_pdf/nn1jz_book.pdf
I/O modules
The I/O modules include the following base functions:
• Initialization
• Configuration
• Diagnostic tests (power-on and concurrent)
• Status reporting
The following set of protocols and software features also are supported on the I/O modules:
• A configuration method over the Ethernet management port.
• A scriptable SSH CLI, a web server with SSL support, a Simple Network Management Protocol v3 (SNMPv3) agent with alerts, and an sFTP client.
• Server ports that are used for Telnet, HTTP, SNMPv1 agents, TFTP, FTP, and other insecure protocols are disabled by default.
• LDAP authentication protocol support for user authentication.
• For Ethernet I/O modules, 802.1x enabled with policy enforcement point (PEP) capability to allow support of TNC
(Trusted Network Connect).
• The ability to capture and apply a switch configuration file and the ability to capture a first failure data capture (FFDC)
data file.
• Ability to transfer files by using URL update methods (HTTP, HTTPS, FTP, TFTP, and sFTP).
• Various methods for firmware updates, including FTP, sFTP, and TFTP. In addition, firmware updates by using a URL
that includes protocol support for HTTP, HTTPs, FTP, sFTP, and TFTP.
• SLP discovery and SNMPv3.
• Ability to detect firmware and hardware hangs and to pull a “crash-failure memory dump” file to an FTP (sFTP) server.
• Selectable primary and backup firmware banks as the current operational firmware.
• Ability to send events, SNMP traps, and event logs to the CMM, including security audit logs.
• IPv4 and IPv6 on by default.
• The CMM management port supports IPv4 and IPv6 (IPv6 support includes the use of link-local addresses).
• Port mirroring capabilities:
– Port mirroring of CMM ports to internal and external ports.
– For security reasons, the ability to mirror the CMM traffic is hidden and is available to development and
service personnel only.
• Management virtual local area network (VLAN) for Ethernet switches: A configurable management 802.1q tagged VLAN in the standard VLAN range of 1 - 4094. It includes the CMM’s internal management ports and the I/O modules’ internal ports that are connected to the nodes.
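As a quick client-side illustration of the URL-based transfer methods listed above, a check that a firmware URL uses one of the supported schemes could look like the following. The function is an invented helper for this example, not I/O-module firmware code; the scheme list comes from the protocols named above.

```python
from urllib.parse import urlparse

# Schemes the text lists for URL-based file transfer and firmware updates
SUPPORTED_SCHEMES = {"http", "https", "ftp", "tftp", "sftp"}

def firmware_url_scheme_ok(url: str) -> bool:
    """Return True if the URL uses one of the supported transfer schemes."""
    return urlparse(url).scheme.lower() in SUPPORTED_SCHEMES
```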
Lenovo XClarity Administrator is a virtual appliance that is quickly imported into a virtualized environment, which gives
easy deployment and portability. This virtualized appliance design is a key advantage because there is no need to
dedicate a node bay. The VM can be hosted on a physical server located either inside or outside of the Chassis.
Managed endpoints do not need special software agents or drivers to be installed or maintained to be managed by
Lenovo XClarity Administrator. Being agentless also means that Lenovo XClarity Administrator removes operating
system dependency and is one less component to certify in the workload stack, which results in management simplicity.
Because Lenovo XClarity Administrator is a virtual appliance, it can use VMware High Availability and Hyper-V
clustering for failover capability.
The HTML5-based administration dashboard allows fast location of resources so that tasks can be run quickly. Because
Lenovo XClarity Administrator does not have any agent software that is installed on the managed endpoints, there are
no CPU cycles that are spent on agent execution and no memory used. Therefore, up to 1 GB of RAM and 1 - 2% of CPU usage are saved, compared to a typical managed system in which an agent is required.
With a simplified administration dashboard, the following functions are easily achieved:
• Discovery
• Inventory
• Monitoring
• Firmware updates
• Firmware compliance
• Configuration management
• Deployment of operating systems and hypervisors to bare metal servers
Fast time to value is realized through automatic discovery of existing or new Lenovo System x rack servers and Flex
System infrastructure. Inventory of the discovered endpoints is gathered, so an at-a-glance view of the managed
hardware inventory and its status is possible.
A centralized view of events and alerts that are generated from managed endpoints, such as Flex System chassis,
System x servers, and Flex System switches, is available. When an issue is detected by a managed endpoint, an event
is passed to Lenovo XClarity Administrator. Alerts and events are visible via the XClarity Administrator Dashboard, the
Status bar, and via the Alerts and Events detail for the specific system.
Firmware management is simplified by assigning compliance policies to managed endpoints. After a compliance policy is created, XClarity Administrator monitors changes to the hardware inventory and flags any non-compliant systems.
Configuration management uses pattern-based configurations to quickly provision and reprovision a single server or multiple servers and compute nodes, all with a single set of configuration settings. Address pools can be configured to assist with deployments. Category patterns are used to create configuration patterns that can be deployed to server profiles.
Provisioning enables firmware management, configuration, and bare metal deployment. VMware ESXi, Windows Server, and Red Hat Linux images can be imported and held in an image repository. Up to 28 OS images can be deployed concurrently.
If you must be compliant to NIST SP 800-131A or FIPS 140-2, Lenovo XClarity Administrator can help you achieve a
fully compliant environment. Lenovo XClarity Administrator supports self-signed SSL certificates (which are issued by an
internal certificate authority) or external SSL certificates (private or commercial CA). Lenovo XClarity includes an audit
log that provides a historical record of user actions, such as logging on, new users, or changing user passwords.
Lenovo XClarity can be integrated into external, higher-level management, automation, and orchestration platforms
through open REST application programming interfaces (APIs). This ability means Lenovo XClarity can easily
integrate with your management infrastructure.
The user accounts that are used to log on and manage the Lenovo XClarity Administrator are also used for all
chassis and servers that are managed by the Lenovo XClarity Administrator. When you create a user account,
you control the level of access (such as whether the account has read/write authority or read-only authority)
by using predefined role groups.
• Hardware monitoring
Lenovo XClarity Administrator provides a centralized view of events and alerts that are generated from
managed endpoints, such as chassis, servers, and Flex System switches. When an issue is detected by the
CMM or device that is installed in the chassis, an event is passed to the Lenovo XClarity Administrator. That
event is displayed in the alerts list that is available within the user interface. A status bar also is available that
provides overall status information about the main XClarity Administrator interface.
• Hardware management
There are various management tasks for each supported endpoint, including viewing status and properties, configuring system information and network settings, starting the CMM/IMM web interface, and remotely controlling the System x or Flex System node.
• Configuration management
Configuration patterns provide a way to ensure that you have a consistent configuration that is applied to
managed servers. Server patterns are used to provision or pre-provision a managed server by configuring
local storage, I/O adapters, boot setting, firmware, ports, IMM, and UEFI settings. Server patterns also
integrate support for virtualizing I/O addresses, so you can virtualize Flex System fabric connections or
repurpose servers without disruption to the fabric.
The differences between each version of Lenovo XClarity are shown in Table 2-2.
As can be seen in Table 2-2 on page 21, Lenovo XClarity Administrator is available for
download and operation at no charge; however, in this form it has no service or support as
standard and comes with a limited-time 90-day evaluation of bare-metal deployment and
configuration patterns.
Lenovo XClarity Administrator can be downloaded at no charge from the following website:
http://shop.lenovo.com/us/en/systems/software/systems-management/xclarity/
XClarity Pro is available with a 1-year, 3-year, or 5-year software subscription and support and
comes with full function, including configuration patterns and bare-metal deployment.
Lenovo XClarity Pro is available on either a per-managed-server or per-managed-chassis basis. The
per-managed-chassis option offers an advantageous licensing cost model because the entire chassis, including all
the nodes that are installed within it, can be licensed for management. In addition, XClarity Pro includes support
and service for the XClarity Integrators.
The one-time charge for the product includes the license, software subscription, and support.
It is delivered as proof of entitlement via the Lenovo Electronic Software Delivery (ESD) process.
This process provides electronic proof of entitlement (ePOE), which the client receives via
e-mail at the e-mail address that was entered at the time of order. The ePOE contains the
customer name, contact, customer number, and order reference number. It also details the
software subscription and support part number, description, and coverage dates. It is vital that
the correct client e-mail address is entered when an order is placed with Lenovo.
The client also receives an ESD welcome letter via e-mail, which is issued approximately 2-3 days after the proof of
entitlement is sent. The letter contains instructions on how to log in to the ESD portal and gain access to the software via
four secure download options.
For assistance with ePOE for Lenovo XClarity Pro, refer to the following website:
https://lenovoesd.flexnetoperations.com/control/lnvo/manualsupport
The part numbers for both per-managed-chassis and per-managed-server licensing are listed in Table 2-3 and
Table 2-4.
Table 2-3 Lenovo XClarity Pro per managed chassis
The host system that is running the Lenovo XClarity virtual machine features the following minimum hardware
requirements:
• Two virtual microprocessors
• 6 GB of memory
• A minimum of 64 GB of storage for use by Lenovo XClarity
For VMware, the virtual machine is available as an OVF template. For Hyper-V, the virtual machine is a virtual disk
image.
NUMA and Hyper-V: For Hyper-V environments that run on Linux guests with a 2.6 kernel base and that
use large amounts of memory for the virtual appliance, you must disable the use of non-uniform memory
access (NUMA) on the Hyper-V Settings Panel from Hyper-V Manager. Changing this setting requires you
to restart the Hyper-V service, which also restarts all running virtual machines. If this setting is not disabled,
Lenovo XClarity Administrator virtual appliance might experience problems during initial startup.
Where support with some limited functions is listed in Table 2-5 on page 25, the following functions are restricted:
• Servers and compute nodes: Servers with IBM signed firmware are supported as listed in
Table 2-5 on page 25; however, the following functions are not available:
– Processor and memory usage data
– RAID-link configuration (configuration management by using patterns)
• I/O modules: I/O modules with IBM signed firmware are supported as listed in Table 2-5 on page 25; however, the
following functions are not available:
– Aggregated event and audit logs
– Network configuration (port configuration via configuration management by using patterns)
CMM: A CMM with IBM-signed firmware and a CMM2 with Lenovo-signed firmware cannot be
installed in the same chassis at the same time. The firmware on a CMM cannot be upgraded to turn it into a CMM2
because the two modules contain different hardware.
Minimum firmware levels are required for each managed endpoint. During installation and discovery,
Lenovo XClarity prompts the user when firmware can be updated to allow management of the CMM, nodes, and I/O modules.
All endpoints in a Flex System chassis must be at the same software level.
For more information about support, see the following Flex System Information Center website:
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp?topic=/com.lenovo.lxca.doc/plan_supportedhw.html
The Lenovo Flex System Enterprise Chassis (machine type 8721) is a 10U next-generation
server platform with integrated chassis management. It is a compact, high-density,
high-performance, rack-mount, scalable platform system.
The Carrier-Grade Chassis (machine type 7385) is also available for use in harsher Telecommunications environments
where NEBS Level 3 or ETSI certification is required. This chassis is based on the Flex System Enterprise Chassis,
and incorporates extra cooling capability for elevated temperature operation. The Carrier-Grade chassis is 11U in
height.
Both chassis support up to 14 standard compute nodes. The compute nodes share common resources, such as power,
cooling, management, and I/O resources within a single chassis.
Enterprise Chassis
The Enterprise Chassis is shown in Figure 3-1 as seen from the front. The front of the chassis includes 14 horizontal
bays with removable dividers with which nodes and expansion nodes can be installed within the chassis. Nodes can be
Compute or Expansion type. The nodes can be installed when the chassis is powered.
The chassis uses a die-cast mechanical bezel for rigidity so that the chassis can be shipped with nodes installed. This
chassis construction features tight tolerances between nodes, shelves, and the chassis bezel. These tolerances ensure
accurate location and mating of connectors to the midplane.
More Console Breakout Cables can be ordered, if required. The Console Breakout Cable connects to the front of a
node and allows Keyboard, Video, USB, and Serial connections to be attached locally to that node. For more
information about alternative methods, see 3.8.5, “Console planning” on page 77. The CMM includes built-in console
redirection via the CMM Ethernet port.
The ordering part number and feature code for the breakout cable are listed in Table 3-1.
The component parts of the chassis with the shuttle removed are shown in Figure 3-2. The shuttle forms the rear of
the chassis where the I/O Modules, power supplies, fan modules, and CMMs are installed. The Shuttle is removed only
to gain access to the midplane or fan distribution cards in the rare event of a service action.
Within the chassis, a personality card holds vital product data (VPD) and other information that is relevant to the
particular chassis. This card can be replaced only under service action and is not normally accessible. The personality
card is attached to the midplane, as shown in Figure 3-34 on page 67.
Models
The components that comprise each model of the Enterprise Chassis are listed in Table 3-2.
Comprehensive information about previously released chassis models, some of which contained the CMM, together with
the compatibility of Flex System nodes and options, can be found in the Flex System Interoperability Guide (FSIG). This
guide is an excellent resource to assist with upgrading existing systems that are already in production. The FSIG can be
found at the following website: http://www.lenovopress.com/fsig
The Enterprise Chassis provides several LEDs on the front information panel that can be used to obtain the status of
the chassis. The Identify, Check log, and Fault LED are also on the rear of the chassis for ease of use.
Specifications
The specifications of the Enterprise Chassis MT 8721 are listed in Table 3-3.
The Flex System Enterprise Chassis is rated to a maximum operating temperature of 40 °C.
Carrier-Grade Chassis
The Flex System Carrier-Grade Chassis is based on the leading-edge design of the Flex System Enterprise Chassis.
It has an extra 1U air inlet to provide more cooling for operation at elevated temperatures, so it is 11U high in total.
The Carrier-Grade chassis and supported nodes, I/O Modules, and options were tested to NEBS Level 3 and ETSI
standards for operation in the harsher conditions that are found in remote Central Office Telecommunications
environments. The Carrier-Grade chassis is rated to a maximum operating temperature of 45 °C and temporary
excursions to 55 °C for up to four days of operation are permitted.
The Carrier-Grade chassis is designed and tested for operation in harsh environments, such as Central Offices (COs)
that are commonly found in the Telecommunications industry.
A CO generally is used to house the equipment that is needed for the processing and routing of telephone and data
traffic. COs also are commonly known as telephone exchanges, telephone switching centers, or wire centers. They
often are a windowless building that is built of concrete or brick, in some cases raised above the ground level to
prevent flooding. The buildings often are designed with a resilience to earthquake damage. The ability to withstand
extreme climatic conditions (such as tornados, earthquakes, and flooding) is often designed into the buildings
construction.
High security also is wanted to prevent unauthorized access and protect the security of data that is being switched.
Equipment and systems that are housed within the CO are often resilient to loss of power, building air conditioning, and
outbreak of fire. The demand for packet-switching is increasing, so the need for more compute servers and
higher-bandwidth connections to provide enhanced services to Telecommunications provider clients is driving an
adoption of computing systems for these environments. The Carrier-Grade Chassis is designed to operate in such
environments.
The chassis is ASHRAE 4 compliant. This compliance allows normal operation of the chassis to a maximum operating
temperature of 45 °C with temporary elevated temperature excursions of up to 55 °C for 96 hours. This ability can be
advantageous for COs that are in remote areas. If the air conditioning systems fail on a Friday, the chassis can be
operated at temperatures above 45 °C during the weekend, and repairs can be made on the following Monday morning to
return the temperature to normal.
The testing that takes place as part of Network Equipment-Building System (NEBS) and European
Telecommunications Standards Institute (ETSI) compliance includes items such as temperature, humidity, vibration,
electromagnetic compatibility, electromagnetic interference, ESD range, and flame spread.
The components of the Carrier-Grade chassis are shown in Figure 3-6 and often are identical to the Enterprise
Chassis.
Models
The components of the standard model are listed in Table 3-4.
As with the Enterprise Chassis, the Carrier-Grade Chassis provides several LEDs on the front
information panel that can be used to obtain the status of the chassis. The Identify, Check log,
and Fault LED also are on the rear of the chassis for ease of use.
The following components can be installed into the rear of the chassis:
• Up to two CMMs.
• Up to six 2500 W -48 V DC power supply modules.
• Up to six fan modules that consist of four 80 mm fan modules and two 40 mm fan modules. The two 40 mm fan
modules are included within the chassis as standard. More 80 mm fan modules can be installed for a total of 10
modules.
• Up to four I/O modules.
• Unique to the Carrier-Grade Chassis, two earth ground studs and an ESD wrist strap attachment point can be
seen in the lower 1U section of the chassis.
The Chassis has the same rack mounting rail kit as the Flex System Enterprise Chassis, which can be installed
quickly into four post racks with circular or square holes.
Air filters
To support NEBS and ETSI compliance, the Carrier-Grade Chassis includes two airborne contaminate filters that are
fitted to the front of the chassis. The main filter assembly covers the compute nodes and a secondary filter assembly
covers the 1U air-inlet at the bottom of the chassis.
Each filter assembly includes 6 mm polyurethane filter media that must be removed, inspected, and replaced regularly.
The filter media pieces are consumable parts and are not covered under the terms of the warranty. Lenovo
recommends the service intervals that are listed in Table 3-7.
Table 3-8 lists the part number to order replacement filter media. The part number includes the following components:
• Four of the main filter media
• Four of the secondary 1U filter media
Table 3-8 Flex System Enterprise Chassis airborne contaminant filter ordering information
Fan modules
The Enterprise Chassis and Carrier-Grade Chassis support up to 10 hot pluggable fan modules that consist of two 40
mm fan modules and eight 80 mm fan modules.
A chassis can operate with a minimum of six hot-swap fan modules that are installed, which consist of four 80 mm fan
modules and two 40 mm fan modules.
The fan modules plug into the chassis and connect to the fan distribution cards. More 80 mm fan modules can be
added as required to support chassis cooling requirements.
The two 40 mm fan modules in fan bays 5 and 10 distribute airflow to the I/O modules and CMMs. These modules ship
preinstalled in the chassis. Each 40 mm fan module contains two 40 mm counter rotating fan pairs, side-by-side.
The 80 mm fan modules distribute airflow to the compute nodes through the chassis from front to rear. Each 80 mm fan
module contains two 80 mm counter-rotating fans, back-to-back within the module.
Both fan modules have an EMC mesh screen on the rear internal face of the module. This design also provides a
laminar flow through the screen. Laminar flow is a smooth flow of air, which is sometimes referred to as streamline flow.
This flow reduces turbulence of the exhaust air and improves the efficiency of the overall fan assembly.
Both fan modules have two LED indicators that consist of a green power-on indicator and an amber fault indicator. The
power indicator lights when the fan module has power and flashes when the module is in the power save state.
Fan quantities: When the modules are ordered as an option, they are supplied as a pair. If you order the
modules as a feature code, they are supplied as single units.
The specifications of the 80 mm fan module pair option are listed in Table 3-9.
When you install more nodes, install the nodes, fan modules, and power supplies from the bottom upwards.
The minimum configuration of 80 mm fan modules is four, which provides cooling for a maximum of four nodes. This
base configuration is shown in Figure 3-14.
Installing six 80 mm fan modules allows another four nodes to be supported within the chassis. Therefore, the
maximum is eight, as shown in Figure 3-15.
Figure 3-15 Six 80 mm fan modules allow for a maximum of eight nodes
To cool more than eight nodes, all fan modules must be installed, as shown in Figure 3-16.
If there are insufficient fan modules for the number of nodes that are installed, the nodes might be throttled.
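The population rules above (four 80 mm modules cool up to four nodes, six modules cool up to eight, and the full complement is needed beyond that) can be expressed as a small lookup. The function name and shape are my own; this sketch assumes a homogeneous, non-throttled configuration as the figures describe.

```python
def fan_modules_required(node_count: int) -> int:
    """80 mm fan modules needed to cool the given number of nodes, per the
    population rules in the text: 4 modules for up to 4 nodes, 6 for up to
    8 nodes, and all 8 for more than 8 nodes. The two 40 mm modules for the
    I/O modules and CMMs are always present and not counted here."""
    if not 0 <= node_count <= 14:
        raise ValueError("an Enterprise Chassis holds at most 14 standard nodes")
    if node_count <= 4:
        return 4       # minimum configuration (Figure 3-14)
    if node_count <= 8:
        return 6       # six modules support up to eight nodes (Figure 3-15)
    return 8           # all fan modules installed (Figure 3-16)
```

Note that the minimum of four 80 mm modules applies even for a single node.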
There are two fan logic modules included within the chassis, as shown in Figure 3-17.
Fan logic modules are multiplexers for the internal I2C bus, which is used for communication between hardware
components within the chassis. Each fan pack is accessed through a dedicated I2C bus, which is switched by the Fan
Mux card from each CMM.
The fan logic module switches the I2C bus to each individual fan pack. This module can be used by the CMM to
determine multiple parameters, such as fan RPM. There is a fan logic module for the left and right sides of the chassis.
The left fan logic module accesses the left fan modules, and the right fan logic module accesses the right fan modules.
Fan presence indication for each fan pack is read by the fan logic module. Power and fault LEDs are also controlled by
the fan logic module.
A fan logic module and its LEDs are shown in Figure 3-18.
As shown in Figure 3-18, there are two LEDs on the fan logic module. The power-on LED is green when the fan logic
module is powered. The amber fault LED flashes to indicate a faulty fan logic module. Fan logic modules are hot
swappable.
For more information about airflow and cooling, see 3.4, “Cooling”.
The cooling process can be scaled up as required, based on which node bays are populated. For more information
about the number of fan modules that are required for nodes, see 3.3.1, “Fan module population” on page 45.
When a node is removed from a bay, an airflow damper closes in the midplane. Therefore, no air is drawn in through
an unpopulated bay. When a node is inserted into a bay, the damper is opened by the node insertion, which allows for
cooling of the node in that bay.
The Carrier-Grade Chassis has an extra 1U cooling aperture at the base of the chassis.
The upper and lower cooling apertures for the Enterprise Chassis and Carrier-Grade Chassis are shown in Figure
3-19.
Various fan modules are included in the chassis to assist with efficient cooling. Fan modules consist of 40 mm and 80
mm types and are contained within hot pluggable fan modules. The power supplies also have two integrated,
independently powered 40 mm fan modules.
The cooling path for the nodes begins when air is drawn in from the front of the chassis. The airflow intensity is
controlled by the 80 mm fan modules in the rear. Air passes from the front of the chassis, through the node, through
openings in the Midplane, and then into a plenum chamber. Each plenum is isolated from the other, which provides
separate left and right cooling zones. The 80 mm fan packs on each zone then move the warm air from the plenum
to the rear of the chassis.
In a two-bay wide node, the air flow within the node is not segregated because it spans both airflow zones.
The 40 mm fan module on the right side cools the right switches; the left 40 mm fan module cools the left pair of
switches. Each 40 mm fan module features a pair of counter-rotating fans for redundancy.
Cool air flows in from the lower inlet apertures at the front of the chassis. It is drawn into the lower openings in the
CMM and I/O Modules where it provides cooling for these components. It passes through and is drawn out the top of
the CMM and I/O modules. The warm air is expelled to the rear of the chassis by the 40 mm fan assembly. This
expulsion is indicated by the red airflow arrows that are shown in Figure 3-22.
The removal of the 40 mm fan pack exposes an opening in the bay that leads to the 80 mm fan packs. A back flow
damper within the fan bay then closes. The backflow damper prevents hot air from reentering the system from the rear
of the chassis. The 80 mm fan packs cool the switch modules and the CMM while the fan pack is being replaced.
In the Carrier-Grade Chassis, there are extra airflow inlet apertures at the front of the system that allow air to be drawn
into the chassis and cool the I/O modules and CMMs. This aperture routes the air through the base of the chassis.
Figure 3-23 on page 52 shows the outlets that are under the I/O modules and CMMs. As shown in Figure 3-23 on
page 52, the chassis is viewed from the rear with the shuttle removed, which shows the midplane. It also shows the
air dampers in their closed positions that are within the midplane.
The x240 M5 node has an ambient temperature sensor. When the node is installed within a Flex System Enterprise
Chassis, the ambient temperature is monitored by the IMM2 in the node against a number of built-in thresholds that
first raise an alert and then, in extreme temperature events, shut the node down:
• Warning (upper non-critical threshold): 43 °C
• Soft shutdown (upper critical threshold): 46 °C
• Hard shutdown (upper non-recoverable threshold): 50 °C
When a NEBS/ETSI-supported node is installed in the Carrier-Grade Chassis, these warning and shutdown thresholds
are elevated to allow operation within the extended temperature envelope of the Carrier-Grade Chassis.
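The threshold behavior can be summarized as a simple classifier. This is a minimal sketch using the Enterprise Chassis values listed above; the elevated Carrier-Grade values differ, and the function name is illustrative.

```python
def thermal_action(ambient_c: float) -> str:
    """Map an IMM2 ambient reading to the documented action, using the
    Enterprise Chassis thresholds (43/46/50 deg C). Carrier-Grade nodes use
    elevated thresholds not modeled here."""
    if ambient_c >= 50:
        return "hard shutdown"   # upper non-recoverable threshold
    if ambient_c >= 46:
        return "soft shutdown"   # upper critical threshold
    if ambient_c >= 43:
        return "warning"         # upper non-critical threshold
    return "normal"
```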
The carefully designed cooling subsystem of the chassis results in lower airflow volume, which is measured in cubic feet
per minute (CFM) and lower cooling energy that is spent at a chassis level. This system also maximizes the temperature
difference across the chassis (which is often known as the Delta T) for more efficient room integration. Monitored
Chassis level airflow usage is displayed to enable airflow planning and monitoring for hot air recirculation.
Five Acoustic Optimization states can be selected. Use the one that best balances performance requirements with the
noise level of the fans.
Chassis level CFM usage is available to you for planning purposes. In addition, ambient health awareness can detect
potential hot air recirculation to the chassis.
The standard chassis models ship either with two or six 2500 W modules, or with two 2500 W -48 V DC power
supplies, depending on the model chosen. The Carrier-Grade chassis ships with two 2500 W -48 V DC power
supplies.
For more information about populating the 2500 W power supplies, see 3.5.1, “Power supply selection” on page 56,
which also provides planning information for the nodes that are being installed.
A maximum of six power supplies can be installed within the Enterprise Chassis.
Support of power supplies: Mixing of different power supply types is not supported in the same chassis.
The 2500 W AC supplies are rated at 2500 W output at 200 - 208 V AC (nominal) and 2750 W at 220 - 240 V AC
(nominal). The power supply has an oversubscription rating of up to 3538 W output at 200 V AC. The power supply
operating range is 200 - 240 V AC.
The power supplies also contain two independently powered 40 mm cooling fans that are not powered by the
power supply in which they are installed. Instead, they draw power from the chassis midplane. The fans are variable
speed and controlled by the chassis fan logic.
The 2500 W -48 V DC power supply operates over a typical telecommunications range of -60 V to -48 V DC.
For power supply population, Table 3-12 on page 57 lists the supported compute nodes that are based on type and
number of power supplies that are installed in the chassis and the power policy enabled (N+N or N+1).
The 2500 W AC power supplies are 80 PLUS Platinum certified. The 80 PLUS certification is a performance
specification for power supplies that are used within servers and computers. The standard has several ratings, such
as Bronze, Silver, Gold, and Platinum. To meet the 80 PLUS Platinum standard, the power supply must have a power
factor (PF) of 0.95 or greater at 50% rated load and efficiency equal to or greater than the following values:
• 90% at 20% of rated load
• 94% at 50% of rated load
• 91% at 100% of rated load
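These certification criteria amount to a few threshold comparisons. The sketch below encodes only the values quoted above; the function and parameter names are my own choosing, not part of the 80 PLUS specification.

```python
# Platinum thresholds quoted in the text: efficiency (as a fraction) at
# 20%, 50%, and 100% of rated load, plus a power factor of 0.95 at 50% load.
PLATINUM_EFFICIENCY = {20: 0.90, 50: 0.94, 100: 0.91}

def meets_platinum(efficiency_by_load: dict[int, float], pf_at_50: float) -> bool:
    """Check measured efficiencies (keyed by % of rated load) and power
    factor against the 80 PLUS Platinum requirements listed above."""
    if pf_at_50 < 0.95:
        return False
    return all(efficiency_by_load.get(load, 0.0) >= required
               for load, required in PLATINUM_EFFICIENCY.items())
```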
The efficiency of the 2500 W Enterprise Chassis power supplies at various percentage loads at different input voltages
is listed in Table 3-11.
Table 3-11 2500 W AC power supply efficiency at different loads for 200 - 208 VAC and 220 - 240 VAC
The location of the power supplies within the enterprise chassis where two power supplies are installed into bay 4 and
bay 1 is shown in Figure 3-24. Four power supply bays are shown with fillers that must be removed to install power
supplies into the bays.
Similar to the fan bay fillers, there are blue touch point and finger hold apertures (circular) that are below the blue touch
points to make the filler removal process easy and intuitive.
With 2500 W power supplies installed (AC or DC), the chassis allows power configurations with N+N redundancy for
most node types. Table 3-12 on page 57 shows the support matrix. Alternatively, a chassis can operate in N+1, where
N can equal 3, 4, or 5.
All power supplies are combined into a single 12.2 V DC power domain within the chassis. This combination
distributes power to each of the compute nodes, I/O modules, and ancillary components through the Enterprise
Chassis midplane.
The midplane is a highly reliable design with no active components. Each power supply is designed to provide fault
isolation and is hot swappable.
Power monitoring of the DC and AC signals allows the CMM to accurately monitor the power supplies.
The integral power supply fans are not dependent upon the power supply being functional because they are
powered independently from the chassis midplane.
Power supplies are added as required to meet the load requirements of the Enterprise Chassis configuration. There is
no need to over provision a chassis and power supplies can be added as the nodes are installed. For more
information about power-supply unit planning, see Table 3-12 on page 57.
The rear view of an AC power supply and highlighted LEDs are shown in Figure 3-25. There is a handle for removal
and insertion of the power supply and a removal latch that is operated by thumb, so the PSU can easily be unlatched
and removed with one hand.
The Power Supply options that are listed in Table 3-10 on page 54 ship with a 2.5 m intra-rack power cable (C19 to
C20).
Before you remove any power supplies, ensure that the remaining power supplies have sufficient capacity to power the
Enterprise Chassis. Power usage information can be found in the CMM web interface.
DC and AC power supplies are available. For more information about all of the power supplies, see the following
sections:
• 3.5.1, “Power supply selection” on page 56
• 3.8.2, “AC power planning” on page 69
• 3.8.3, “DC power planning” on page 74
Table 3-12 on page 57 shows the number of compute nodes that can be installed based on the following factors:
• Model of compute node that is installed
• Power policy that is enabled (N+N or N+1)
• Number of power supplies that are installed (4, 5, or 6)
• The thermal design power (TDP) rating of the processors
• No throttling
For power policies, N+N means a fully redundant configuration where there are duplicate power supplies for each
supply that is needed for full operation. N+1 means there is only one redundant power supply and all other supplies are
needed for full operation.
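The arithmetic behind the two policies can be sketched as a hypothetical helper. The actual supported combinations come from Table 3-12 and the Power Configurator, and the chassis physically holds at most six supplies.

```python
def total_supplies(needed_for_load: int, policy: str) -> int:
    """Total installed power supplies implied by a redundancy policy:
    N+N duplicates every supply needed for full operation, while N+1 adds
    a single spare. Illustrative only; Table 3-12 governs real support."""
    if policy == "N+N":
        total = 2 * needed_for_load
    elif policy == "N+1":
        total = needed_for_load + 1
    else:
        raise ValueError("policy must be 'N+N' or 'N+1'")
    if total > 6:
        raise ValueError("the Enterprise Chassis holds at most six supplies")
    return total
```

For example, a load that needs three supplies requires six installed under N+N but only four under N+1.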
Support of power supplies: Mixing of different power supply types is not supported in the same chassis.
A full complement of any compute nodes at all TDP ratings are supported if all six power
supplies are installed and an N+1 power policy is selected.
Table 3-12 Specific number of compute nodes supported based on installed power supplies
Tip: For more information about exact configuration support, see the Power Configurator
that is available at this website:
https://support.lenovo.com/us/en/documents/LNVO-PWRCONF
The 2100 W power supply, part number 47C7633, is withdrawn from marketing. Information about compute node
support when using the 2100 W power supplies can be found in the product guide for the Flex System Enterprise
Chassis, at the following location:
https://lenovopress.com/tips0863-flex-system-enterprise-chassis
The chassis is run by using one of the following power capping policies:
• No Power Capping
Maximum input power is determined by the active power redundancy policy.
• Static Capping
This policy sets an overall chassis limit on the maximum input power. In a situation where powering on a
component can cause the limit to be exceeded, the component is prevented from powering on.
For example, if eight x240 M5 nodes with 135W processors are required to be installed with N+1 redundancy, a
minimum of three power supplies are required for support, according to Table 3-13 on page 60.
Table 3-13 and Table on page 60 show the highest TDP rating of processors for each node type. In some
configurations, the power supplies cannot power the quantity of nodes, which is highlighted in the tables as “NS” (not
sufficient).
It is impossible to physically install more than seven full-wide compute nodes in a chassis, as shown in Figure 3-13 on
page 44.
In Table 3-13 and Table on page 60, assume that the same type of node is being configured and that throttling is
enabled. Refer to the power configurator for mixed configurations of different node types within a chassis.
a. Number of power supplies is based on x86 compute nodes with processors of the highest TDP
rating.
b. Not supported. The number of nodes exceeds the capacity of the power supplies.
Tip: For more information about the exact configuration, see the Power configurator that is available at this
website:
https://support.lenovo.com/us/en/documents/LNVO-PWRCONF
Figure 3-28 Eight x240 M5 135W processor based nodes with three power supplies in N+1 configuration
When 14 x240 M5 nodes with 145W processors are required with N+1 power configuration, then five power supplies
are required to achieve this, as shown in Table 3-13 on page 60.
The chassis can accommodate one or two CMMs. The first is installed in CMM Bay 1 and the second in CMM bay 2.
Installing two provides CMM redundancy.
The ordering information for the second CMM is listed in Table 3-14.
a. The first feature code is for the primary CMM and the second feature code is for the second redundant CMM.
CMM1 information: This section describes the CMM included with chassis currently
shipping, CMM2. For information about the older CMM (68Y7030), consult the Lenovo Flex
System Interoperability Guide (FSIG), available from:
http://www.lenovopress.com/fsig
The CMM also incorporates a reset button, which features the following functions (depending upon how long the button
is pressed):
• When pressed for less than 5 seconds, the CMM restarts.
• When pressed for more than 5 seconds, the CMM configuration is reset to manufacturing
defaults and then restarts.
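The press-duration behavior can be summarized as a two-way branch. The text does not specify what happens at exactly 5 seconds, so this sketch arbitrarily groups that boundary with the reset case.

```python
def cmm_reset_action(hold_seconds: float) -> str:
    """Summarize the CMM reset-button behavior described above: a press
    under 5 seconds restarts the CMM; a longer hold resets the configuration
    to manufacturing defaults and then restarts. The exact 5-second boundary
    is an assumption, as the text only describes 'less than' and 'more than'."""
    if hold_seconds < 5:
        return "restart"
    return "reset to defaults, then restart"
```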
For more information about how the CMM integrates into the Systems Management
architecture, see 2.2, “Chassis Management Module” on page 12.
The LEDs that are on the rear of the chassis are shown in Figure 3-33.
Figure 3-33 Chassis LEDs on the rear of the Enterprise Chassis shown
Midplane
The midplane is the circuit board that connects to the compute nodes from the front of the chassis. It also connects to
I/O modules, fan modules, and power supplies from the rear of the chassis. The midplane is within the chassis and can
be accessed by removing the Shuttle assembly. Removing the midplane is rare and necessary only in case of service
action.
The midplane is passive, which means that there are no electronic components on it. The midplane includes apertures
through which air can pass. When no node is installed in a standard node bay, the Air Damper is closed for that bay,
which provides highly efficient scale up cooling.
The midplane also includes reliable industry standard connectors on both sides for power supplies, fan distribution
cards, switches, I/O modules, and nodes. The chassis design allows for highly accurate placement and connector
matings from the nodes, I/O modules, and Power supplies to the midplane, as shown in Figure 3-34 on page 67.
The midplane uses a single power domain within the design. This overall solution is cost-effective and optimizes the
design for the preferred 10U height.
Within the midplane, there are five separate power and ground planes for distribution of the main 12.2-Volt power
domain through the chassis.
The midplane also distributes I2C management signals and 3.3 V power for management circuits. The power
supplies source their fan power from the midplane.
Hot Swap components have orange touch points. Orange tabs are found on fan modules, fan logic modules, power
supplies, and I/O Module handles. The orange designates that the items are hot swap and can be removed and
replaced while the chassis is powered. The components that are hot swap and those components that are hot plug are
listed in Table 3-16.
Nodes can be plugged into the chassis while the chassis is powered. The node can then be powered on. Power the
node off before removal.
For more information about planning your Flex System power infrastructure, see Flex System Enterprise Chassis
Power Guide, which is available at this website:
http://ibm.com/support/entry/portal/docdisplay?lndocid=LNVO-POWINF
AC power planning
The Enterprise Chassis can have a maximum of six power supplies installed; therefore, you must consider how to
provide the best power optimized source. N+N and N+1 configurations are supported for maximum flexibility in power
redundancy. A configuration of balanced 3-phase power input into a single chassis or group of chassis is possible.
Consideration must also be given to the nodes that are installed within the chassis to ensure that sufficient power
supplies are installed to deliver the required redundancy. For more information, see 3.5.1, “Power supply selection” on
page 56.
Each AC power supply in the chassis has a 16 A C20 3-pin socket and can be fed by a C19 power cable from a
suitable supply. (The DC power supplies have unique connectors, as described in 3.8.3, “DC power planning”
on page 74.)
The chassis power system is designed for efficiency by using data center power that consists of 3-phase, 60 A Delta
200 VAC (North America), or 3-phase 32 A wye 380-415 VAC (international). The chassis can also be fed from single
phase 200 - 240 VAC supplies, if required.
The power is scaled as required; therefore, as more nodes are added, the power and cooling increases. For power
planning, Table 3-12 on page 57 shows the number of power supplies that are needed for N+N or N+1, which is node
type dependent.
This section describes single phase and 3-phase example configurations for North America and worldwide, starting
with 3-phase. It is assumed that your configuration has sufficient power budget to deliver N+N or N+1 for your
particular node configuration.
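The N+N and N+1 sizing logic described above can be sketched as simple arithmetic. This is a minimal illustration only: the 2500 W per-supply rating and the 7 kW example load are assumptions for the sketch, and real planning must use Table 3-12 on page 57 and the Power Configurator tool.

```python
import math

PSU_CAPACITY_W = 2500   # assumed per-supply rating, for illustration only
CHASSIS_MAX_PSUS = 6    # the Enterprise Chassis holds up to six supplies

def supplies_needed(total_load_w: float, redundancy: str) -> int:
    """Return the number of power supplies for a given chassis load.

    redundancy: "N+N" doubles the supplies that carry the load;
                "N+1" adds a single spare supply.
    """
    # N = supplies required to carry the load with no redundancy
    n = math.ceil(total_load_w / PSU_CAPACITY_W)
    total = 2 * n if redundancy == "N+N" else n + 1
    if total > CHASSIS_MAX_PSUS:
        raise ValueError("load cannot be supplied redundantly by six supplies")
    return total

# Hypothetical 7 kW chassis load: 3 supplies carry it,
# so N+1 needs 4 supplies and N+N needs all 6 bays.
print(supplies_needed(7000, "N+1"))  # -> 4
print(supplies_needed(7000, "N+N"))  # -> 6
```

The sketch also shows why node choice matters: once the load needs more than three supplies, an N+N configuration no longer fits in the six available bays.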
Figure 3-36 shows a typical N+N configuration for a 32 A 3-phase wye supply at 380 - 415 VAC (often termed “WW” or
“International”). Ensure that the node deployment meets the requirements that are shown in Table 3-12 on page 57.
The maximum number of Enterprise Chassis that can be installed within a 42U rack is four. Therefore, these chassis
require a total of four 32 A, 3-phase wye feeds to provide for a redundant N+N configuration.
DC power planning
The Flex System Enterprise Chassis type 8721-DLx ships with two -48 V DC power supply modules included as
standard, as does the Carrier-Grade chassis model 7385-DCx. Four more -48 V DC power supplies can be added into
a chassis, for a total of six 2500 W -48 V supplies.
The DC power supply can also be ordered as an option for a chassis and as an “upgrade” for the AC chassis types;
however, power supply types cannot be mixed within the same chassis. The part number and feature code for the DC
power supply are listed in Table 3-18.
The power supply is designed to operate at -48 V DC with a rated current of 56 A. It has a 2500 W rating.
Input connectors are provided on the rear of this power supply for the -48 V and Return (RTN) line. There also are
protective earth connections.
The -48 V and Return connections are presented in the form of a single Amphenol connector type 618470001. The
protective earth connections are made with two M6 studs.
The lower rear view of the power supply with the Amphenol connector on the left side and the two earth studs on the
right side is shown in Figure 3-40.
A 2 m DC power cable is supplied with each power supply for connection into the data center. Figure 3-41 shows this
cable attached to the power supply for illustration purposes only; the power supply normally is installed within a Flex
System chassis before connection. The other end of the cable has two tin-covered copper power lugs for attachment
to the data center’s -48 V power bus bar and connections.
UPS planning
The chassis can be powered by using single or multiple UPS units (depending on load), which provide protection if
there is a power failure or interruption. For typical chassis deployments, the 8 kVA or 11 kVA units can provide
sufficient capacity and runtimes, with the possibility of extending runtimes with extended battery modules.
Single-phase or 3-phase UPS units that are available from Lenovo can be used to supply power to a chassis.
The 11,000 VA UPS that is shown in Figure 3-42 is ideal for powering an entire chassis in most, if not all, configurations
and features 4x IEC320 C19 outlets and a hard-wired outlet.
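A quick way to sanity-check whether a single UPS covers an estimated chassis load is to compare the UPS apparent-power rating (kVA) against the real-power load (kW). The 0.9 power factor and the example loads below are assumptions for the sketch; actual ratings and runtimes come from the UPS documentation and the Power Configurator tool.

```python
def ups_covers_load(ups_kva: float, load_kw: float,
                    power_factor: float = 0.9) -> bool:
    """True if the UPS real-power capacity meets the estimated load.

    power_factor is an assumed value for illustration; check the
    actual rating of the UPS model before planning a deployment.
    """
    return ups_kva * power_factor >= load_kw

# An 11 kVA UPS at pf 0.9 delivers ~9.9 kW of real power:
print(ups_covers_load(11.0, 7.5))   # a typical mid-range chassis load fits
print(ups_covers_load(11.0, 12.9))  # the theoretical maximum load does not
```

This matches the statement above: a single 11 kVA unit suffices for most, but not every possible, chassis configuration.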
A diagram showing how each power feed can be connected to one of the four 20 A outlets on the rear of the UPS is
shown in Figure 3-43 on page 77. This UPS requires hard wiring to a suitable supply by a qualified electrician. In N+N
and N+1 (where N=3) environments, a single UPS might be sufficient to provide redundancy for the entire chassis load
because it has 3x C19 outlets available. Having two UPS units means that a single point of failure (a UPS) can
be eliminated.
For more information, including an overview of all the UPS offerings available from Lenovo, see the document UPS
Technical Reference Guide, which is available at this website:
https://support.lenovo.com/documents/LNVO-POWINF
Console planning
The Enterprise Chassis is a “lights out” system and can be managed remotely with ease. However, the following
methods can be used to access an individual node’s console:
• Each node can be connected to individually by plugging a console breakout cable into the front of the
node. (One console breakout cable is supplied with each chassis, and additional cables can be ordered.) This cable
presents a 15-pin video connector, two USB sockets, and a serial connection at the front. Connecting a portable
screen and USB keyboard and mouse near the front of the chassis enables quick connection into the console breakout
cable and direct access into the node. This configuration is often called crash cart management capability.
• Connect each node, via its Flex System Console Breakout Cable and a Serial Conversion Option (SCO), Virtual
Media Conversion Option (VCO2), or USB Conversion Option (UCO), to a Global or Local Console Switch by using a
local console cable. Although supported, this method is not particularly elegant because a significant number of
cables must be routed from the front of the chassis.
• Connecting by browser to the XClarity Administrator instance that is managing the chassis allows remote presence
to each node within the chassis.
• Connecting remotely by browser to the Ethernet management port of the CMM allows remote presence to each node
within the chassis.
• Connect remotely to the IMM2 on each node and start a remote console session to that node through the IMM2.
This connection is made via the network connection to the CMM and the internal management network that is
described in 2.1, “Management network” on page 10.
The ordering part number and feature code are listed in Table 3-19 on page 78.
The Carrier-Grade Chassis is designed to operate in ASHRAE class A4 operating environments, which means
temperatures of up to 45 °C (104 °F) for altitudes up to 3,000 m (10,000 ft).
The airflow requirements for the Enterprise Chassis and the Carrier-Grade Chassis are from
270 CFM (cubic feet per minute) to a maximum of 1020 CFM.
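For planners who work in metric units, the CFM airflow range above converts directly to cubic metres per hour (1 CFM ≈ 1.699 m³/h). A minimal sketch of the conversion:

```python
M3H_PER_CFM = 1.699  # 1 cubic foot per minute is about 1.699 cubic metres per hour

def cfm_to_m3h(cfm: float) -> float:
    """Convert an airflow figure from CFM to cubic metres per hour."""
    return cfm * M3H_PER_CFM

# The chassis airflow range quoted above, in metric units:
print(round(cfm_to_m3h(270)))   # -> 459
print(round(cfm_to_m3h(1020)))  # -> 1733
```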
The 12.9 kW heat output figure is a potential maximum only, in which the most power-hungry configuration is chosen
and all power envelopes are at maximum. For a more realistic figure, use the Power Configurator tool, which is
available from the Lenovo Enterprise Systems Data Center Planning Portal, to establish specific power requirements
for a configuration:
https://support.lenovo.com/us/en/documents/LNVO-PWRCONF
Data centers that operate at environmental temperatures above 35 °C are often free air cooling environments, in
which outside air is filtered and then used to ventilate the data center. This configuration is the definition of ASHRAE
class A3 (and class A4, which raises the upper limit to 45 °C). A conventional data center does not normally run its
computer room air conditioning (CRAC) units up to 40 °C because a failure of a CRAC unit, or of power to the CRAC
units, leaves limited time for shutdown before over-temperature events occur.
The Flex System Enterprise Chassis is suitable for installation in an ASHRAE class A3 environment in both operating
and non-operating modes. However, the Carrier-Grade chassis can operate at higher temperatures than the
Enterprise Chassis. The Carrier-Grade chassis is rated for ASHRAE class A4, so it can operate at up to 45 °C. It can
also withstand short-term temperature excursions to 55 °C for 96 hours.
For more information about ASHRAE 2011 thermal guidelines, data center classes, and white papers, see the following
American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) website:
http://www.ashrae.org
The chassis can be installed within Lenovo or non-Lenovo racks. However, for North America, the 42U 1100 mm
Enterprise V2 Dynamic Rack offers a convenient footprint of a single standard floor tile in width and two floor tiles in
depth.
If the chassis is installed within a non-Lenovo rack, the vertical rails must have clearances to EIA-310-D. There must
be sufficient room in front of the front vertical rack-mounted rail to provide a minimum bezel clearance of 70 mm
(2.76 inches) in depth. The rack must be strong enough to support the weight of the chassis, cables, power supplies,
and other items that are installed within it. There must be sufficient room behind the rear rack rails to provide for cable
management and routing. Ensure the stability of any non-Lenovo rack by using stabilization feet or baying kits so that
it does not become unstable when it is fully populated.
Finally, ensure that sufficient airflow is available to the chassis. Racks with glass fronts do not normally allow sufficient
airflow into the chassis, unless they are specialized racks that are designed for forced air cooling.
For more information about airflow in CFM to assist with planning, see the Power
Configurator tool that is available at this website:
https://support.lenovo.com/documents/LNVO-PWRCONF
Chassis-rack cabinet compatibility
Lenovo offers an extensive range of industry-standard, EIA-compatible rack enclosures and expansion units. The
flexible rack solutions help you consolidate servers and save space, while allowing easy access to crucial components
and cable management.
Support for the Flex System Enterprise Chassis in each rack cabinet is listed in Table 3-20. Not all of the racks that
are shown are available from Lenovo, but they are included because a client might already have one of these racks
on site.
Carrier-Grade Chassis: None of the racks that are listed in Table 3-20 are NEBS compliant; therefore,
none are supported by the Carrier-Grade Chassis.
a. This rack cabinet is optimized for Flex System Enterprise Chassis, including dedicated front-to-back cable
raceways. For more information, see 3.9, “42U 1100mm Enterprise V2 Dynamic Rack” on page 81.
b. This rack cabinet is optimized for Flex System Enterprise Chassis, including dedicated front-to-back cable
raceways, and includes a unique PureFlex door. This rack is no longer sold by Lenovo.
c. This rack cabinet is optimized for Flex System Enterprise Chassis, including dedicated front-to-back cable
raceways, and includes the original square blue design of the unique PureFlex-logoed door, which was shipped Q2
- Q4, 2012. This rack is no longer sold by Lenovo.
Racks that have glass-fronted doors, such as the older Netfinity racks, do not allow sufficient airflow for the Enterprise
Chassis. In some cases with the Netfinity racks, the chassis depth is such that the Enterprise Chassis cannot be
accommodated within the dimensions of the rack.
This 42U rack conforms to the EIA-310-D industry standard for a 19-inch, type A rack cabinet.
The external rack dimensions are listed in Table 3-22.
The rack features outriggers (stabilizers) that allow for movement and transportation while populated. These stabilizers
are removed after the rack is installed. The rack that is shown in Figure 3-44 on page 82 is the 9363-4PX rack, with
the Lenovo logo on the door and the outriggers removed.
Lenovo is a leading vendor of racks with specific ship-loadable designs. These kinds of racks are called dynamic
racks. The 42U 1100mm Enterprise V2 Dynamic Rack and 42U 1100mm Enterprise V2 Dynamic Expansion Rack are
dynamic racks.
Dynamic racks feature extra heavy-duty construction and sturdy packaging that can be reused for shipping a fully
loaded rack. They also have outrigger casters for secure movement and tilt stability. Dynamic racks also include a
heavy-duty shipping pallet with a ramp for easy “on and off” maneuvering. Dynamic racks undergo more shock and
vibration testing, and all System x racks are of welded rather than the less robust bolted construction.
The rear view of the 42U 1100 mm Flex System Dynamic Rack is shown in Figure 3-45.
Figure 3-45 42U 1100 mm Flex System Dynamic Rack rear view, with doors and side panels removed
Figure 3-47 shows a cable raceway as viewed from inside the rack, looking down. Cables can enter the side bays of
the rack from the raceway or pass from one side bay to the other, passing vertically through the raceway. These
openings are at the front and rear of each raceway.
The 1U rack PDUs can also be accommodated in the side bays. In these bays, the PDU is mounted vertically in the
rear of the side bay and presents its outlets to the rear of the rack. Four 0U PDUs can also be vertically mounted in the
rear of the rack.
Rear vertical aperture that is blocked by a PDU: When a PDU is installed in a rear side
pocket bay, it is not possible to use the cable raceway vertical apertures at the rear.
The rack features square mounting holes that are common in the industry onto which the Enterprise Chassis and other
server and storage products can be mounted.
For implementations where the front anti-tip plate is not required, an air baffle/air recirculation prevention plate is
supplied with the rack. You might not want to use the plate when an airflow tile must be positioned directly in front of
the rack.
As shown in Figure 3-49, this air baffle can be installed to the lower front of the rack. It helps prevent warm air from
the rear of the rack from circulating underneath the rack to the front, which improves the cooling efficiency of the entire
rack solution.
The rear door heat exchanger can be used to improve cooling and reduce cooling costs in a high-density Enterprise
Chassis environment. It provides effective cooling for the warm air exhausts of equipment that is mounted within the
rack. The heat exchanger has no moving parts to fail, and no power is required.
The physical design of the door is slightly different from that of the soon-to-be-withdrawn Rear Door Heat eXchanger
(32R0712) that was marketed by Lenovo for attachment to Enterprise Racks. The Rear Door Heat eXchanger V2 has a
wider rear aperture and slightly different heat profile, as shown in Figure 3-50.
Attaching a Rear Door Heat eXchanger to the rear of a rack allows up to 100,000 BTU/hr, or
30 kW, of heat to be removed at the rack level.
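The two figures quoted are the same quantity in different units: 1 kW of heat corresponds to roughly 3412 BTU/hr, so the conversion can be checked directly.

```python
BTU_PER_HR_PER_KW = 3412.142  # 1 kW of heat is about 3412 BTU/hr

def kw_to_btu_hr(kw: float) -> float:
    """Convert a heat load from kilowatts to BTU per hour."""
    return kw * BTU_PER_HR_PER_KW

# 30 kW removed at rack level is roughly the quoted 100,000 BTU/hr figure:
print(round(kw_to_btu_hr(30)))  # -> 102364
```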
As the warm air passes through the heat exchanger, it is cooled with water and exits the rear
of the rack cabinet into the data center. The door is designed to provide an overall air
temperature drop of up to 25 °C, as measured between air that enters the exchanger and
exits the rear.
The internal workings of the Rear Door Heat eXchanger V2 are shown in Figure 3-51.
The percentage of heat that is removed from a 30 kW heat load as a function of water temperature and water flow rate
is shown in Figure 3-52. With 18 °C water at 10 gallons per minute (gpm), 90% of the 30 kW heat load is removed by
the door.
For efficient cooling, water pressure and water temperature must be delivered in accordance with the specifications
that are listed in Table 3-24. The water temperature must be maintained above the dew point to prevent condensation
from forming.
The installation and planning guide provides lists of suppliers that can provide coolant distribution unit solutions,
flexible hose assemblies, and water treatment that meet the suggested water quality requirements.
It takes three people to install the Rear Door Heat eXchanger, and a non-conductive step ladder is required for
attachment of the upper hinge assembly. Consult the Installation and Maintenance Guide before proceeding:
https://support.lenovo.com/docs/UM103398