HX240c M5 Spec Sheet
Cisco HyperFlex
HX240c M5 Node
(Hybrid)
OVERVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
EXTERNAL INTERFACE VIEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Chassis Front View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Chassis Rear View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
BASE NODE STANDARD CAPABILITIES and FEATURES . . . . . . . . . . . . . . . . . . 6
CONFIGURING the HyperFlex HX240C M5 Node . . . . . . . . . . . . . . . . . . . . . . 9
STEP 1 VERIFY SERVER SKU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
STEP 2 SELECT RISER CARDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
STEP 3 SELECT CPU(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
STEP 4 SELECT MEMORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
STEP 5 SELECT RAID CONTROLLER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
SAS HBA (internal HDD/SSD/JBOD support) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
STEP 6 SELECT DRIVES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
STEP 7 SELECT PCIe OPTION CARD(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
STEP 8 ORDER GPU CARDS (OPTIONAL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
STEP 9 SELECT ACCESSORIES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
STEP 10 ORDER SECURITY DEVICES (OPTIONAL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
STEP 11 ORDER POWER SUPPLY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
STEP 12 SELECT POWER CORD(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
STEP 13 ORDER TOOL-LESS RAIL KIT AND OPTIONAL REVERSIBLE CABLE MANAGEMENT ARM . 34
STEP 14 SELECT HYPERVISOR / HOST OPERATING SYSTEM . . . . . . . . . . . . . . . . . . . . . . 35
STEP 15 SELECT HX DATA PLATFORM SOFTWARE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
STEP 16 SELECT INSTALLATION SERVICE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
STEP 17 SELECT SERVICE and SUPPORT LEVEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
OPTIONAL STEP - ORDER RACK(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
OPTIONAL STEP - ORDER PDU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
SUPPLEMENTAL MATERIAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Hyperconverged Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
CHASSIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Riser Card Configuration and Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Serial Port Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Upgrade and Servicing-Related Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
RACKS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
PDUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
KVM CABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
DISCONTINUED EOL PRODUCTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
TECHNICAL SPECIFICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Dimensions and Weight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Power Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Environmental Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Extended Operating Temperature Hardware Configuration Limits . . . . . . . . . . . . . . . . . . 64
Compliance Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
OVERVIEW
Cisco HyperFlex™ Systems unlock the full potential of hyperconvergence. The systems are based on an
end-to-end software-defined infrastructure, combining software-defined computing in the form of Cisco
Unified Computing System (Cisco UCS) servers; software-defined storage with the powerful Cisco HX Data
Platform; and software-defined networking with the Cisco UCS fabric, which integrates smoothly with Cisco
Application Centric Infrastructure (Cisco ACI™). Together with a single point of connectivity and hardware
management, these technologies deliver a preintegrated and adaptable cluster that is ready to provide a
unified pool of resources to power applications as your business needs dictate.
The HX240c M5 servers extend the capabilities of Cisco's HyperFlex portfolio in a 2U form factor with the
addition of the Intel® Xeon® Processor Scalable Family, 24 DIMM slots with configuration options ranging
from 128 GB up to 3 TB of DRAM, and a hybrid complement of cache and capacity drives for highly available,
high-performance storage.
Rear View
(Figure: chassis rear view with numbered callouts showing PCIe slots 1-3 on riser 1, slots 4-6 on riser 2, power supplies PSU 01/02, rear drive bays 01/02, and the mLOM slot. The accompanying front view shows the 24 front drive bays, numbered Slot 1 through Slot 24.)
2 PCIe riser 2 (slots 4, 5, 6)
■ Riser 2B
• Slot 4 (x8, CPU2 controlled): future use
• Slot 5 (x16, CPU2 controlled): for GPU
• Slot 6 (x8, CPU2 controlled): future use
NOTE: Use of PCIe riser 2 requires a dual-CPU configuration.
5 Screw holes for dual-hole grounding lug
8 Dual 1/10 GE ports (LAN1, LAN2). LAN1 is the left connector; LAN2 is the right connector.
11 Serial port (RJ-45 connector)
Capability/Feature Description
CPU One or two Intel® Xeon® scalable family CPUs or one or two 2nd Generation
Intel® Xeon® scalable family CPUs
Video The Cisco Integrated Management Controller (CIMC) provides video using the
ASPEED Pilot 4 video/graphics controller:
■ Integrated 2D graphics core with hardware acceleration
■ DDR2/3 memory interface supports up to 16 MB directly accessible from host
and entire DDR memory indirectly accessible from host processor.
■ Supports all display resolutions up to 1920 x 1200 x 32bpp resolution at 60Hz
■ High speed Integrated 24-bit RAMDAC
■ Single lane PCI-Express Gen2 host interface
■ eSPI processor to BMC support
Front Panel A front panel controller provides status indications and control buttons.
ACPI This server supports the advanced configuration and power interface (ACPI) 4.0
standard.
Expansion slots ■ Dedicated RAID/JBOD controller slot (see Figure 6 on page 47)
• An internal slot is reserved for the Cisco 12G SAS HBA.
■ Dedicated slots for Riser 1 and Riser 2
• For more details on riser 1 and riser 2 see the Riser options section below
Internal storage devices: Up to 24 drives are installed into front-panel drive bays that provide
hot-swappable access for SAS/SATA drives. The 24 drives are used as follows:
• Up to 23 SAS HDDs or up to 23 SED SAS HDDs (for capacity)
• One SATA SSD (system drive for HXDP operations)
One rear drive slot for caching drives:
• One SATA SSD or one SED SAS SSD (for caching)
One socket for a micro-SD card on PCIe riser 1, used as follows:
• The micro-SD card serves as a dedicated local resource for utilities such as
the Host Upgrade Utility (HUU). Images can be pulled from a file share (NFS/CIFS) and uploaded to
the card for future use.
I/O Interfaces:
■ One slot for a micro-SD card on PCIe riser 1 (options 1 and 1B).
• The micro-SD card serves as a dedicated local resource for utilities such as
the Host Upgrade Utility (HUU). Images can be pulled from a file share
(NFS/CIFS) and uploaded to the card for future use. Cisco Intersight
leverages this card for advanced server management.
■ Rear panel
• One 1Gbase-T RJ-45 management port (Marvell 88E6176)
• Two 10Gbase-T LOM ports (Intel X550 controller embedded on the
motherboard)
• One RS-232 serial port (RJ-45 connector)
• One DB15 VGA connector
• Two USB 3.0 port connectors
• One flexible modular LAN on motherboard (mLOM) slot that can
accommodate various interface cards
■ Front panel
• One KVM console connector (supplies two USB 2.0 connectors, one VGA
DB15 video connector, and one RS-232 serial port)
mLOM Slot: The mLOM slot on the motherboard can flexibly accommodate the following cards:
NOTE:
■ The VIC 1387 natively supports 6300 series FIs.
■ To support 6200 series FIs with the VIC 1387, 10G QSAs compatible with the 1387 are
available for purchase.
■ Breakout cables are not supported with the VIC 1387.
■ 10GbE operation is not supported with 6300 series FIs.
Additional NICs (optional): PCIe slot 1 and PCIe slot 2 on the motherboard can flexibly accommodate the
following cards:
UCSM Unified Computing System Manager (UCSM) runs in the Fabric Interconnect and
automatically discovers and provisions some of the server components.
HX-M5S-HXDP This major line bundle (MLB) consists of the Server Nodes (HX220C-M5SX and
HX240C-M5SX) with HXDP software spare PIDs.
HX240C-M5SX1 HX240C M5 Node, with two CPUs, memory, up to 23 HDDs for data storage, one
SSD for system/HXDP logs, one SSD for caching, two power supplies, one M.2
SATA SSD, one micro-SD card, one VIC 1387 mLOM card, no PCIe cards, and no
rail kit
HX2X0C-M5S This major line bundle (MLB) consists of the Server Nodes (HX220C-M5SX and
HX240c-M5SX), Fabric Interconnects (HX-FI-6248UP, HX-FI-6296UP, HX-FI-6332,
HX-FI-6332-16UP) and HXDP software spare PIDs.
Notes:
1. This product may not be purchased outside of the approved bundles (must be ordered under the MLB).
• Requires configuration of one or two power supplies, one or two CPUs, recommended
memory sizes, 1 SSD for Caching, 1 SSD for system logs, up to 23 data HDDs, 1 VIC mLOM
card, 1 M.2 SATA SSD and 1 micro-SD card.
• Provides the option to choose 10G QSAs to connect with HX-FI-6248UP and HX-FI-6296UP.
• Provides the option to choose rail kits.
NOTE: Use the steps on the following pages to configure the node with
the components that you want to include.
HX-PCI-1-C240M5 Riser 1. Includes 3 PCIe slots (x8, x16, x8). Slots 1 and 2 controlled with CPU1;
slot 3 controlled with CPU2. x16 slot supports GPU.
HX-RIS-1-240M5 Riser 1. Includes 3 PCIe slots (x8, x16, x8); slot 3 requires CPU2. For NVIDIA T4 GPUs.
HX-PCI-2B-240M5 Riser 2B. Includes 3 PCIe slots (x8, x16, x8) plus 1 NVMe connector (controls rear
SFF NVMe drives). x16 slot supports GPU.
For additional details, see Riser Card Configuration and Options, page 50
■ Intel® Xeon® processor scalable family CPUs and 2nd Generation Intel®Xeon® scalable family
CPUs
■ From 8 cores up to 28 cores per CPU
■ Intel C620 series chipset
■ Cache size of up to 38.5 MB
Select CPUs
Supported Configurations.
2-CPU Configuration:
■ Select two identical CPUs from any one of the rows of Table 4 on page 12.
■ DIMMs
(Figure: memory organization. Each CPU has 6 memory channels with up to 2 DIMMs per channel:
CPU 1 serves channels A-F, slots A1/A2 through F1/F2; CPU 2 serves channels G-M, slots G1/G2
through M1/M2. 24 DIMM slots total; 3072 GB maximum memory with 128 GB DIMMs.)
Select DIMMs
NOTE: The memory mirroring feature is not supported with HyperFlex nodes.
Product ID (PID) | PID Description | Voltage | Ranks/DIMM
Approved Configurations
■ Select 8, 12, 16, or 24 identical DIMMs. The DIMMs will be placed by the factory as
shown in the following table.
Total DIMMs | CPU 1 | CPU 2
8 | (A1, B1); (D1, E1) | (G1, H1); (K1, L1)
12 | (A1, B1, C1); (D1, E1, F1) | (G1, H1, J1); (K1, L1, M1)
16 | (A1, A2, B1, B2); (D1, D2, E1, E2) | (G1, G2, H1, H2); (K1, K2, L1, L2)
24 | (A1, A2, B1, B2, C1, C2); (D1, D2, E1, E2, F1, F2) | (G1, G2, H1, H2, J1, J2); (K1, K2, L1, L2, M1, M2)
NOTE: System performance is optimized when the DIMM type and quantity are equal
for both CPUs, and when all channels are filled equally across the CPUs in the server.
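The factory DIMM placement rules above can be expressed as a small lookup. The sketch below is illustrative only; the helper names `dimm_slots` and `max_memory_gb` are hypothetical, not Cisco tooling, and the slot names are taken directly from the table.

```python
# Factory DIMM placement for approved configurations (per the table above).
PLACEMENT = {
    8:  {"CPU1": ["A1", "B1", "D1", "E1"],
         "CPU2": ["G1", "H1", "K1", "L1"]},
    12: {"CPU1": ["A1", "B1", "C1", "D1", "E1", "F1"],
         "CPU2": ["G1", "H1", "J1", "K1", "L1", "M1"]},
    16: {"CPU1": ["A1", "A2", "B1", "B2", "D1", "D2", "E1", "E2"],
         "CPU2": ["G1", "G2", "H1", "H2", "K1", "K2", "L1", "L2"]},
    24: {"CPU1": ["A1", "A2", "B1", "B2", "C1", "C2",
                  "D1", "D2", "E1", "E2", "F1", "F2"],
         "CPU2": ["G1", "G2", "H1", "H2", "J1", "J2",
                  "K1", "K2", "L1", "L2", "M1", "M2"]},
}

def dimm_slots(total_dimms: int) -> dict:
    """Return the factory slot placement for an approved total DIMM count."""
    if total_dimms not in PLACEMENT:
        raise ValueError(f"{total_dimms} DIMMs is not an approved configuration")
    return PLACEMENT[total_dimms]

def max_memory_gb(total_dimms: int, dimm_size_gb: int) -> int:
    """Total memory for identical DIMMs, e.g. 24 x 128 GB = 3072 GB."""
    return total_dimms * dimm_size_gb
```

Note that every approved count splits evenly across the two CPUs, matching the NOTE above about filling channels equally.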
Table 6 2933-MHz DIMM Memory Speeds with Different 2nd Generation Intel® Xeon® Scalable Processors
Columns: DIMM and CPU Frequencies (MHz) | DPC | RDIMM (2Rx4) 64 GB (MHz) | RDIMM (2Rx4) 32 GB (MHz) | RDIMM (1Rx4) 16 GB (MHz) | LRDIMM (4Rx4) 128 GB (MHz) | LRDIMM (4Rx4) 64 GB (MHz)
Table 7 2666-MHz DIMM Memory Speeds with Different Intel® Xeon® Scalable Processors
Columns: DIMM and CPU Frequencies (MHz) | DPC | TSV-RDIMM (8Rx4) 128 GB (MHz) | LRDIMM (4Rx4) 64 GB (MHz) | TSV-RDIMM (4Rx4) 64 GB (MHz) | RDIMM (2Rx4) 32 GB (MHz) | LRDIMM (2Rx4) 32 GB (MHz)
■ The Cisco 12G SAS HBA, which plugs into a dedicated RAID controller slot.
Approved Configurations
The Cisco 12 Gbps Modular SAS HBA supports up to 26 internal drives with non-RAID support.
Select Drives
Product ID (PID) | PID Description | Drive Type | Capacity
Capacity Drives
HX-HD12TB10K12N 1.2TB 2.5 inch 12G SAS 10K RPM HDD SAS 1.2 TB
HX-HD12T10NK9** 1.2TB 2.5 inch 12G SAS 10K RPM HDD SED SAS 1.2 TB
HX-HD18TB10K4KN 1.8 TB 12G SAS 10K RPM SFF HDD SAS 1.8 TB
HX-HD24TB10K4KN 2.4 TB 12G SAS 10K RPM SFF HDD (4K) (HyperFlex Release 4.0(1a) and later) SAS 2.4 TB
Caching Drives
HX-SD16T123X-EP 1.6TB 2.5 inch Enterprise Performance 12G SAS SSD (3X endurance) SAS 1.6 TB
HX-SD16TBENK9** 1.6TB 2.5 inch Enterprise Performance 12G SAS SSD (10X endur) SED SAS 1.6 TB
System / Log Drives
HX-SD240GM1X-EV 240GB 2.5 inch Enterprise Value 6G SATA SSD (HyperFlex Release 3.5(1a) onwards) SATA 240 GB
HX-SD480G6I1X-EV 480GB 2.5 inch Enterprise Value 6G SATA SSD (HyperFlex Release 4.0(2a) and later) SATA 480 GB
HX-SD480GM1X-EV 480GB 2.5 inch Enterprise Value 6G SATA SSD (HyperFlex Release 4.0(2a) and later) SATA 480 GB
Boot Drives
HX-M2-240GB 240GB SATA M.2 SSD SATA 240 GB
HX-M2-960GB 960GB SATA M.2 (HyperFlex Release 4.0(2a) and later) SATA 960 GB
NOTE:
■ Cisco uses solid state drives (SSDs) from a number of vendors. All solid state drives (SSDs) are subject
to physical write limits and have varying maximum usage limitation specifications set by the
manufacturer. Cisco will not replace any solid state drives (SSDs) that have exceeded any maximum
usage specifications set by Cisco or the manufacturer, as determined solely by Cisco.
■ ** SED drive components are not supported with Microsoft Hyper-V
Approved Configurations
■ 6 to 23 capacity drives:
— 1.2 TB 12G SAS 10K RPM SFF HDD (HX-HD12TB10K12N) OR
— 1.8 TB 12G SAS 10K RPM SFF HDD (HX-HD18TB10K4KN) OR
— 1.2 TB 12G SAS 10K RPM SFF HDD SED (HX-HD12T10NK9) OR
— 2.4 TB 12G SAS 10K RPM SFF HDD (4K) (HX-HD24TB10K4KN)
NOTE: If you select 'SED capacity' drives, you must choose 'SED cache' drives below
NOTE: 'SED cache' drive can only be selected if you have selected 'SED capacity'
drives
SED drives are not supported with Microsoft Hyper-V.
Caveats
You must choose 6 to 23 HDD data drives, one caching drive, one system drive, and one boot
drive.
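The drive-selection rules above amount to a handful of countable constraints. A minimal sketch of a configuration check follows, assuming only the counts stated in this section; the `validate_drive_config` helper is hypothetical, not a Cisco ordering tool.

```python
# Illustrative check of the drive-selection rules in this section.
def validate_drive_config(capacity_hdds: int, caching_ssds: int,
                          system_ssds: int, boot_ssds: int,
                          sed_capacity: bool = False,
                          sed_cache: bool = False) -> list:
    """Return a list of rule violations; an empty list means the mix is valid."""
    errors = []
    if not 6 <= capacity_hdds <= 23:
        errors.append("choose 6 to 23 capacity HDDs")
    if caching_ssds != 1:
        errors.append("exactly one caching drive is required")
    if system_ssds != 1:
        errors.append("exactly one system/log drive is required")
    if boot_ssds != 1:
        errors.append("exactly one M.2 boot drive is required")
    # Per the NOTEs above, SED capacity and SED cache drives go together.
    if sed_capacity != sed_cache:
        errors.append("SED capacity drives require SED cache drives, and vice versa")
    return errors
```

For example, 23 capacity HDDs plus one each of caching, system, and boot drives passes; 5 capacity HDDs, or SED capacity drives without SED cache drives, does not.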
Product ID (PID) | PID Description | Card Height
Notes:
1. The mLOM card does not plug into any of the riser 1 or riser 2 card slots; instead, it plugs into a connector
inside the chassis.
Caveats
— Breakout cables cannot be used to connect to 6200 series fabric interconnects; use
a QSA instead. Use of 10GbE is not permitted with 6300 series FIs.
— The Cisco QSA Module is available as an option under 'Accessories -> SFP'. The PID for the QSA is
CVR-QSFP-SFP10G.
— Order two of the above QSA modules when connectivity with 6200 series FIs is desired.
CAUTION: When using GPU cards, the maximum allowable operating
temperature for the NVIDIA P40 GPU is 32°C (89°F).
NOTE:
■ The AMD GPU 7150X2 can only be ordered as a spare PID at this time. Refer to the
Installation Guide for installation steps.
■ All GPU cards must be procured from Cisco, as there is a unique SBIOS ID
required by CIMC and UCSM.
■ All GPU cards require two CPUs and a minimum of two power supplies in the
server. 1600 W power supplies are recommended. Use the power calculator at
the following link to determine the needed power based on the options chosen
(CPUs, drives, memory, and so on): http://ucspowercalc.cisco.com
■ HX-GPU-P4 requires two riser cards that are new to HX for a full configuration of 6 GPUs.
Caveats
— NVIDIA M10 GPUs can support less than 1 TB of total memory in the server. Do
not install more than fourteen 64-GB DIMMs when using an NVIDIA GPU card in this
server.
— GPUs cannot be mixed.
— Slot 5 on riser card 2 is the required slot for the first GPU.
— Slot 2 on riser card 1 is the secondary slot for a second GPU.
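The fourteen-DIMM ceiling above follows directly from the 1 TB (1024 GB) limit; a quick arithmetic check (hypothetical helper, for illustration only):

```python
# Fourteen 64 GB DIMMs stay under the 1 TB M10 limit; sixteen would not.
def total_memory_gb(dimm_count: int, dimm_size_gb: int = 64) -> int:
    """Total server memory for a count of identical DIMMs."""
    return dimm_count * dimm_size_gb

print(total_memory_gb(14))  # 896 GB: below the 1024 GB ceiling
print(total_memory_gb(16))  # 1024 GB: not below 1 TB, so disallowed
```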
A chassis intrusion switch gives a notification of any unauthorized mechanical access into the
server.
NOTE:
■ The TPM module used in this system conforms to TPM v1.2 and 2.0, as defined
by the Trusted Computing Group (TCG). It is also SPI-based.
■ TPM installation is supported after-factory. However, a TPM installs with a
one-way screw and cannot be replaced, upgraded, or moved to another server. If
a server with a TPM is returned, the replacement server must be ordered with a
new TPM.
http://ucspowercalc.cisco.com
Notes:
1. PSU supported on C220/C240/HX
NOTE: In a server with two power supplies, both power supplies must be identical.
(Power cord plug/connector illustrations omitted. Recoverable entries:)
CAB-AC-L620-C13 AC Power Cord, NEMA L6-20 to C13, 2 m (6.5 ft)
CAB-C13-C14-AC Jumper power cord, IEC60320 C14 to IEC60320 C13, 3.0 m
CAB-9K10A-AU Power Cord, 250VAC 10A 3112 Plug, Australia; cordset rating 10 A, 250/500 V max; length 2500 mm; connector EN 60320/C15
CAB-250V-10A-ID Power Cord, 250V, 10A, India; cordset rating 16 A, 250 V; length 2500 mm; plug EL 208; connector EL 701
CAB-250V-10A-IS Power Cord, SFS, 250V, 10A, Israel; cordset rating 16 A, 250 V; plug EL 212 (SI-32); connector EL 701B (IEC60320/C13)
CAB-250V-10A-BR Power Cord, 250V, 10A, Brazil
(The table also shows a UK cord with a BS 1363A 13 A fused plug and EN 60320/C15 connector, and a North America cord with a NEMA 5-15P plug and IEC60320/C15 connector.)
The reversible cable management arm mounts on either the right or left slide rails at the rear of
the server and is used for cable management. Use Table 16 to order a cable management arm.
For more information about the tool-less rail kit and cable management arm, see the Cisco UCS
C240 M5 Installation and Service Guide at this URL:
https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c/hw/C240M5/install/C240M
5.html
NOTE: If you plan to rackmount your HyperFlex HX240C Node, you must order a
tool-less rail kit. The same rail kits and CMAs are used for M4 and M5 servers.
VMware1
HX-VSP-6-5-EPL-D Factory Installed - VMware vSphere 6.5 Ent Plus SW+Lic (2 CPU)
HX-VSP-6-5-STD-D Factory Installed - VMware vSphere 6.5 Std SW and Lic (2 CPU)
HX-VSP-6-7-EPL-D Factory Installed - VMware vSphere 6.7 Ent Plus SW+Lic 2-CPU
HX-VSP-6-7-STD-D Factory Installed - VMware vSphere 6.7 Std SW and Lic (2CPU)
HX-VSP-EPL-1A VMware vSphere 6 Ent Plus (1 CPU), 1-yr, Support Required Cisco
HX-VSP-EPL-3A VMware vSphere 6 Ent Plus (1 CPU), 3-yr, Support Required Cisco
HX-VSP-EPL-5A VMware vSphere 6 Ent Plus (1 CPU), 5-yr, Support Required Cisco
Microsoft Hyper-V3,4
HX-19-ST16C-NS Windows Server 2019 Standard (16 Cores/2 VMs) - No Cisco SVC
Notes:
1. Although VMware 6.0 is installed at the factory, VMware 6.5 is also supported.
2. Choose quantity of two when choosing PAC licensing for dual CPU systems.
3. Microsoft Windows Server with Hyper-V will NOT be installed in the Cisco factory. Customers must bring their own
Windows Server ISO image, which is installed at the deployment site.
4. To ensure the best possible Day 0 Installation experience, mandatory Installation Services are required with all
Hyper-V orders. Details on PIDs can be found in HyperFlex Ordering Guide.
For support of the entire Unified Computing System, Cisco offers the Cisco Smart Net Total Care
for UCS Service. This service provides expert software and hardware support to help sustain
performance and high availability of the unified computing environment. Access to Cisco
Technical Assistance Center (TAC) is provided around the clock, from anywhere in the world.
For systems that include Unified Computing System Manager, the support service includes
downloads of UCSM upgrades. The Cisco Smart Net Total Care for UCS Service includes flexible
hardware replacement options, including replacement in as little as two hours. There is also
access to Cisco's extensive online technical resources to help maintain optimal efficiency and
uptime of the unified computing environment. For more information please refer to the following
url: http://www.cisco.com/c/en/us/services/technical/smart-net-total-care.html?stickynav=1
An enhanced offer over traditional Smart Net Total Care, this service provides onsite troubleshooting
expertise to aid in the diagnosis and isolation of hardware issues within our customers' Cisco
Hyper-Converged environment. It is delivered by a Cisco Certified field engineer (FE) in
collaboration with a remote TAC engineer and a Virtual Internetworking Support Engineer (VISE).
You can choose a desired service listed in Table 21.
**Includes Local Language Support (see below for full description) – Only available in China and Japan
***Includes Local Language Support and Drive Retention – Only available in China and Japan
Solution Support
Solution Support includes both Cisco product support and solution-level support, resolving
complex issues in multivendor environments, on average, 43% more quickly than product
support alone. Solution Support is a critical element in data center administration, to help
rapidly resolve any issue encountered, while maintaining performance, reliability, and return on
investment.
This service centralizes support across your multivendor Cisco environment for both our
products and solution partner products you've deployed in your ecosystem. Whether there is an
issue with a Cisco or solution partner product, just call us. Our experts are the primary point of
contact and own the case from first call to resolution. For more information please refer to the
following url:
http://www.cisco.com/c/en/us/services/technical/solution-support.html?stickynav=1
You can choose a desired service listed in Table 22.
Cisco Partner Support Service (PSS) is a Cisco Collaborative Services service offering that is
designed for partners to deliver their own branded support and managed services to enterprise
customers. Cisco PSS provides partners with access to Cisco's support infrastructure and assets
to help them:
■ Expand their service portfolios to support the most complex network environments
■ Lower delivery costs
■ Deliver services that increase customer loyalty
PSS options enable eligible Cisco partners to develop and consistently deliver high-value
technical support that capitalizes on Cisco intellectual assets. This helps partners to realize
higher margins and expand their practice.
PSS provides hardware and software support, including triage support for third party software,
backed by Cisco technical resources and level three support. You can choose a desired service
listed in Table 23.
Combined Services makes it easier to purchase and manage required services under one
contract. The more benefits you realize from the Cisco HyperFlex System, the more important
the technology becomes to your business. These services allow you to:
With the Cisco Drive Retention Service, you can obtain a new disk drive in exchange for a faulty
drive without returning the faulty drive.
Sophisticated data recovery techniques have made classified, proprietary, and confidential
information vulnerable, even on malfunctioning disk drives. The Drive Retention service enables
you to retain your drives and ensures that the sensitive data on those drives is not compromised,
which reduces the risk of any potential liabilities. This service also enables you to comply with
regulatory, local, and federal requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you
might want to consider one of the Drive Retention Services listed in the above tables (where
available).
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
Where available, and subject to an additional fee, local language support for calls on all assigned
severity levels may be available for specific product(s) – see tables above.
For a complete listing of available services for Cisco HyperFlex System, see the following URL:
https://www.cisco.com/c/en/us/services/technical.html?stickynav=1
For more information about the R42612 rack, see RACKS, page 54.
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/r-series-
racks/rack-pdu-specsheet.pdf
SUPPLEMENTAL MATERIAL
Hyperconverged Systems
Cisco HyperFlex Systems let you unlock the full potential of hyperconvergence and adapt IT to the needs of
your workloads. The systems use an end-to-end software-defined infrastructure approach, combining
software-defined computing in the form of Cisco HyperFlex HX-Series nodes; software-defined storage with
the powerful Cisco HX Data Platform; and software-defined networking with the Cisco UCS fabric that will
integrate smoothly with Cisco Application Centric Infrastructure (Cisco ACI). Together with a single point of
connectivity and management, these technologies deliver a preintegrated and adaptable cluster with a
unified pool of resources that you can quickly deploy, adapt, scale, and manage to efficiently power your
applications and your business.
CHASSIS
An internal view of the HX240C M5 Node chassis with the top cover removed is shown in Figure 6.
(Figure 6: internal view with numbered callouts; visible components include fans 01-06, CPUs 01/02, PCIe risers 01/02, PSUs 01/02, and rear bays 01/02.)
1 Front-facing drive bays. All drive bays support SAS/SATA HDDs/SSDs.
3 DIMM sockets on motherboard (up to 12 per CPU; 24 total). Not visible under air baffle in this view.
4 CPUs and heatsinks (one or two). Not visible under air baffle in this view.
6 USB 3.0 slot on motherboard.
8 PCIe cable connectors for NVMe SSDs, with PCIe riser 2: one connector for rear SFF NVMe SSDs.
10 Rear-drive backplane assembly.
11 Power supplies (hot-swappable, redundant as 1+1).
13 Trusted platform module (TPM) socket on motherboard (not visible in this view).
14 PCIe riser 2 (PCIe slots 4, 5, 6):
■ 2B: with slots 4 (x8), 5 (x16), and 6 (x8); includes one PCIe cable
connector for rear NVMe SSDs.
16 PCIe riser 1 (PCIe slots 1, 2, 3), with the following options:
■ 1A: slots 1 (x8), 2 (x16), 3 (x8); slot 2 requires CPU2.
18 Cisco modular RAID controller PCIe slot (dedicated slot).
20 Securing clips for GPU cards on air baffle.
Block Diagram
Figure 7 HX240c M5 Block Diagram
(Figure: system block diagram. The Intel Lewisburg PCH provides USB 3.0/2.0, VGA, serial, and SATA connectivity, the embedded 2-port 10G-Base-T Intel X550 NIC, a 1G-Base-T management port via a GbE switch, and the mini-storage (M.2) module. CPU 1 (Intel Xeon, Skylake-EP) attaches the mLOM module via PCIe x16 Gen3, PCIe riser 1, and the front-panel connector (two USB 2.0, one VGA, one serial, KVM). CPU 2 attaches PCIe riser 2 (slots 4 x8, 5 x16, 6 x8 Gen3) and the rear NVMe connector. The two CPUs are linked by UPI (3x 10.4 GT/s). Cables from the SAS HBA card reach the drive backplanes: in a 24-drive system, six x4 SAS connectors serve the 24 front SAS/SATA drives (2.5") and one serves the two rear drives (SAS/SATA SSDs or PCIe NVMe drives).)
Figure 8 Riser Card 1 (slots 1, 2, and 3) and Riser Card 2 (slots 4, 5, and 6)
(Figure: rear-panel view showing PCIe slots 1-6, PSU 01/02, rear bays 01/02, and the mLOM slot.)
Slot # | Height | Length | Electrical | NCSI | Physical
Riser Card 1 (option 1A, PID UCSC-PCI-1-C240M5)
Riser Card 2 (slots 4, 5, and 6)
Notes:
1. GPU capable slot
Pin Signal
1 RTS (Request to Send)
2 DTR (Data Terminal Ready)
3 TxD (Transmit Data)
4 GND (Signal Ground)
5 GND (Signal Ground)
6 RxD (Receive Data)
7 DSR (Data Set Ready)
8 CTS (Clear to Send)
Notes:
1. A drive blanking panel must be installed if you remove a disk drive from a UCS server. These panels are
required to maintain system temperatures at safe operating levels, and to keep dust away from system
components.
RACKS
The Cisco R42612 rack is certified for Cisco UCS installation at customer sites and is suitable for the
following equipment:
The rack is compatible with hardware designed for EIA-standard 19-inch racks. For more details on the
Cisco R42612 Rack, see the Cisco RP-Series Rack and Rack PDU specification at
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/r-series-racks/rack-
pdu-specsheet.pdf
PDUs
Cisco RP Series Power Distribution Units (PDUs) offer power distribution with branch circuit protection.
Cisco RP Series PDU models distribute power to up to 42 outlets. The architecture organizes power
distribution, simplifies cable management, and enables you to move, add, and change rack equipment
without an electrician.
With a Cisco RP Series PDU in the rack, you can replace up to two dozen input power cords with just one.
The fixed input cord connects to the power source from overhead or under-floor distribution. Your IT
equipment is then powered by PDU outlets in the rack using short, easy-to-manage power cords.
The C-Series servers accept the zero-rack-unit (0RU) or horizontal PDU. See Cisco RP-Series Rack and Rack
PDU specification for more details at
http://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/r-series-racks/rack-
pdu-specsheet.pdf
KVM CABLE
The KVM cable provides a connection into the server, providing a DB9 serial connector, a VGA connector for
a monitor, and dual USB 2.0 ports for a keyboard and mouse. With this cable, you can create a direct
connection to the operating system and the BIOS running on the server.
2 DB-9 serial connector
4 Two-port USB 2.0 connector (for a mouse and keyboard)
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Parameter                                                             Value
Weight, maximum (Note 1)
  24 HDD model (24 HDDs, 2 CPUs, 24 DIMMs, 2 1600 W power supplies)   57.5 lbs (26.1 kg)
  8 HDD model (8 HDDs, 2 CPUs, 24 DIMMs, 2 1600 W power supplies)     45.5 lbs (20.4 kg)
Weight, minimum
  24 HDD model (1 HDD, 1 CPU, 1 DIMM, 1 770 W power supply)           37.0 lbs (16.8 kg)
  8 HDD model (1 HDD, 1 CPU, 1 DIMM, 1 770 W power supply)            41.5 lbs (18.8 kg)
Weight, bare
  24 HDD model (0 HDDs, 0 CPUs, 0 DIMMs, 1 770 W power supply)        35.5 lbs (16.1 kg)
  8 HDD model (0 HDDs, 0 CPUs, 0 DIMMs, 1 770 W power supply)         40.0 lbs (18.1 kg)
Notes:
1. Weight includes inner rail, which is attached to the server. Weight does not include outer rail, which is
attached to the rack.
Power Specifications
The server is available with the following types of power supplies:
Parameter                                          Specification
Input Connector                                    IEC320 C14
Input Voltage Range (V rms)                        100 to 240
Maximum Allowable Input Voltage Range (V rms)      90 to 264
Frequency Range (Hz)                               50 to 60
Maximum Allowable Frequency Range (Hz)             47 to 63
Maximum Rated Output (W) (Note 1)                  800 (low-line input) / 1050 (high-line input)
Maximum Rated Standby Output (W)                   36
Nominal Input Voltage (V rms)                      100 / 120 / 208 / 230
Nominal Input Current (A rms)                      9.2 / 7.6 / 5.8 / 5.2
Maximum Input at Nominal Input Voltage (W)         889 / 889 / 1167 / 1154
Maximum Input at Nominal Input Voltage (VA)        916 / 916 / 1203 / 1190
Minimum Rated Efficiency (%) (Note 2)              90 / 90 / 90 / 91
Minimum Rated Power Factor (Note 2)                0.97 / 0.97 / 0.97 / 0.97
Maximum Inrush Current (A peak)                    15
Maximum Inrush Current (ms)                        0.2
Minimum Ride-Through Time (ms) (Note 3)            12
Notes:
1. Maximum rated output is limited to 800 W when operating at low-line input voltage (100-127 V).
2. This is the minimum rating required to achieve 80 PLUS Platinum certification; see test reports published at http://www.80plus.org/ for certified values.
3. Time that the output voltage remains within regulation limits at 100% load during input voltage dropout.
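As a quick sanity check of the AC input figures above, apparent power relates to real power through the power factor (VA = W / PF), and the nominal rms input current is approximately the apparent power divided by the nominal input voltage (A = VA / V). The sketch below (plain Python, with values copied from the table) verifies that the listed columns are mutually consistent; small residuals come from rounding in the published figures.

```python
# (nominal V rms, max input W, max input VA, nominal A rms) per table column
ROWS = [
    (100, 889, 916, 9.2),
    (120, 889, 916, 7.6),
    (208, 1167, 1203, 5.8),
    (230, 1154, 1190, 5.2),
]
MIN_POWER_FACTOR = 0.97  # minimum rated power factor from the table

for volts, watts, va, amps in ROWS:
    implied_pf = watts / va    # should sit at or near the 0.97 rating
    implied_amps = va / volts  # should match the listed nominal current
    assert abs(implied_pf - MIN_POWER_FACTOR) < 0.01
    assert abs(implied_amps - amps) < 0.1
    print(f"{volts} V: PF = {implied_pf:.3f}, I = {implied_amps:.2f} A rms")
```

For example, at 230 V the 1190 VA maximum input implies 1190 / 230 ≈ 5.17 A, which rounds to the listed 5.2 A rms.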
Parameter                                          Specification
Input Connector                                    Molex 42820
Nominal Input Voltage (V DC)                       -48
Maximum Allowable Input Voltage Range (V DC)       -40 to -72
Frequency Range (Hz)                               Not applicable (DC input)
Maximum Allowable Frequency Range (Hz)             Not applicable (DC input)
Maximum Rated Output (W)                           1050
2. Time output voltage remains within regulation limits at 100% load, during input voltage dropout
For configuration-specific power specifications, use the Cisco UCS Power Calculator at this URL:
http://ucspowercalc.cisco.com
Environmental Specifications
The environmental specifications for the HX240c M5 server are listed in Table 34.
Parameter                          Specification
Operating Temperature              10°C to 35°C (50°F to 95°F) with no direct sunlight.
                                   Maximum allowable operating temperature de-rated 1°C/300 m (1°F/547 ft) above 950 m (3117 ft).
Extended Operating Temperature     5°C to 40°C (41°F to 104°F) with no direct sunlight.
                                   Maximum allowable operating temperature de-rated 1°C/175 m (1°F/319 ft) above 950 m (3117 ft).
                                   5°C to 45°C (41°F to 113°F) with no direct sunlight.
                                   Maximum allowable operating temperature de-rated 1°C/125 m (1°F/228 ft) above 950 m (3117 ft).
                                   System performance may be impacted when operating in the extended operating temperature range.
                                   Operation above 40°C is limited to less than 1% of annual operating hours.
                                   Hardware configuration limits apply to the extended operating temperature range.
Non-Operating Temperature          -40°C to 65°C (-40°F to 149°F).
                                   Maximum rate of change (operating and non-operating): 20°C/hr (36°F/hr).
Operating Relative Humidity        8% to 90% and 24°C (75°F) maximum dew-point temperature, non-condensing environment
Non-Operating Relative Humidity    5% to 95% and 33°C (91°F) maximum dew-point temperature, non-condensing environment
Operating Altitude                 0 m to 3050 m (10,000 ft)
Non-Operating Altitude             0 m to 12,000 m (39,370 ft)
Sound Power Level                  5.8
  A-weighted per ISO 7779, LWAd (Bels), operation at 23°C (73°F)
Sound Pressure Level               43
  A-weighted per ISO 7779, LpAm (dBA), operation at 23°C (73°F)
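The altitude de-rating rule in the table above (the maximum allowable operating temperature drops by a fixed step per metre of altitude above 950 m) can be sketched as a small helper. The function name and defaults are illustrative, not part of the spec sheet; the defaults model the standard operating range (35°C base limit, 1°C per 300 m above 950 m).

```python
def max_allowable_temp_c(altitude_m: float,
                         base_limit_c: float = 35.0,
                         metres_per_degree: float = 300.0,
                         threshold_m: float = 950.0) -> float:
    """De-rated operating temperature limit in degrees C.

    Standard range: 1 C per 300 m above 950 m. The extended ranges
    use 1 C/175 m and 1 C/125 m with 40 C and 45 C base limits.
    """
    excess_m = max(0.0, altitude_m - threshold_m)
    return base_limit_c - excess_m / metres_per_degree

# At the 3050 m maximum operating altitude, the standard 35 C limit
# de-rates by (3050 - 950) / 300 = 7 C, down to 28 C.
print(max_allowable_temp_c(3050))  # -> 28.0
```

The same helper with `base_limit_c=40.0, metres_per_degree=175.0` (or `45.0` and `125.0`) models the two extended ranges.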
Compliance Requirements
The regulatory compliance requirements for C-Series servers are listed in Table 36.
Parameter Description