Cisco UCS B260 M4 Blade Server
Spec Sheet
OVERVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
DETAILED VIEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Chassis Front View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
BASE SERVER STANDARD CAPABILITIES and FEATURES . . . . . . . . . . . . . . . . . 5
CONFIGURING the SERVER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
STEP 1 VERIFY BASE SKU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
STEP 2 CHOOSE CPU(S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
STEP 3 CHOOSE MEMORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
STEP 4 CHOOSE HARD DISK DRIVES (HDDs) or SOLID STATE DRIVES (SSDs) . . . . . . . . . . . . 17
STEP 5 CHOOSE RAID CONFIGURATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
STEP 6 CHOOSE ADAPTERS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
STEP 7 ORDER A TRUSTED PLATFORM MODULE (OPTIONAL) . . . . . . . . . . . . . . . . . . . . . . 27
STEP 8 ORDER OPTIONAL KVM CABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
STEP 9 ORDER CISCO FLEXIBLE FLASH SECURE DIGITAL CARDS . . . . . . . . . . . . . . . . . . . . 29
STEP 10 ORDER OPTIONAL INTERNAL USB 2.0 DRIVE . . . . . . . . . . . . . . . . . . . . . . . . . . 30
STEP 11 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE . . . . . . . . . . . . . . . 31
STEP 12 CHOOSE OPERATING SYSTEM MEDIA KIT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
STEP 13 CHOOSE SERVICE and SUPPORT LEVEL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
SUPPLEMENTAL MATERIAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Motherboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
DIMM and CPU Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Memory Population Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Memory Mixing Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Upgrade and Servicing-Related Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Drive and Blade Server Blanking Panels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Replacing a CPU (with CPU heat sink) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Motherboard Lithium Battery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
CPU Removal and Installation (“pick n place”) Tool Set . . . . . . . . . . . . . . . . . . . . . 47
Thermal Grease (with syringe applicator) for CPU to Heatsink Seal . . . . . . . . . . . . . . 47
CPU Heat Sink Cleaning Kit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Network Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
VIC 1340/1240 and Port Expander . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Connectivity Using the Cisco UCS 2208XP/2204XP Fabric Extender . . . . . . . . . . . . . . 52
Connectivity using the Cisco UCS 2104XP Fabric Extender . . . . . . . . . . . . . . . . . . . . 58
TECHNICAL SPECIFICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Dimensions and Weight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Power Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Discontinued EOL Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
OVERVIEW
The Cisco® UCS B260 M4 High-Performance Blade Server (Figure 1) is a two-socket, full-width blade server
supporting the Intel® Xeon® E7-8800 v2, E7-4800 v2, and E7-2800 v2 series processor family CPUs. It provides
up to 3 terabytes (TB) of double-data-rate 3 (DDR3) memory in 48 slots, up to two small form factor (SFF),
hot-swappable drive bays for hard disk drives (HDDs) or solid state drives (SSDs), and two dual-port and one
quad-port mezzanine slots. These slots leverage the Cisco UCS virtual interface card (VIC) technology for up
to 160 Gbps aggregate I/O bandwidth. The Cisco UCS B260 M4 server is designed to power the most demanding
enterprise applications.
As shown in Figure 1, the B260 M4 server consists of one Scalable M4 Blade Module and a Scalability
Terminator.
Figure 1 Front View
DETAILED VIEWS
Chassis Front View
Figure 2 shows the front of the Cisco UCS B260 M4 Blade Server.
Notes . . .
1. See SUPPLEMENTAL MATERIAL on page 40 for more information about the KVM cable that plugs into the
console port.
Capability/Feature Description
Chassis The B260 M4 Blade Server mounts in a Cisco UCS 5100 series chassis and
occupies two chassis slots (each chassis slot is a half-width slot).
CPU Two Intel® Xeon® E7-8800 v2, E7-4800 v2, or E7-2800 v2 series processor
family CPUs.
Chipset Intel® C602J chipset
Memory 48 slots for registered DIMMs. Maximum memory capacity is 3 TB (see note 1).
This is accomplished with 48 DIMMs, consisting of 24 DIMM kits (two matched
64 GB DIMMs per kit).
Expansion slots Two dual-port slots and one quad-port mezzanine slot are provided that can
accommodate PCIe-compatible adapters.
One of the dual-port slots is dedicated to the VIC 1240 or VIC 1340 adapter,
which provides Ethernet and Fibre Channel over Ethernet (FCoE) connectivity.
NOTE: The Cisco VIC 1200 Series (1240 and 1280) is compatible
in UCS domains that implement both 6100 and 6200 Series Fabric
Interconnects. However, the Cisco VIC 1300 Series (1340 and 1380)
is compatible only with the 6200 Series Fabric Interconnects.
The other dual-port slot and the quad-port slot are used for various types of
Cisco adapters and Cisco UCS Storage Accelerator adapters. The VIC 1280
and VIC 1380 can only be plugged into the quad-port slot.
Storage controller LSI SAS3008 12G SAS RAID controller, providing 12 Gbps SAS connectivity.
Provides RAID 0, 1, and JBOD capability.
Internal storage devices ■ Up to two optional front-accessible, hot-swappable hard disk drives
(HDDs) or solid state drives (SSDs).
■ One optional USB flash drive, mounted inside the chassis
■ Dual sockets for optional Flexible Flash cards on the front left side of
the server
Video The Cisco Integrated Management Controller (CIMC) provides video using the
Matrox G200e video/graphics controller.
Notes . . .
1. A maximum of 3 TB memory is available using 64 GB DIMMs.
■ The server does not include CPUs, memory DIMMs, SSDs, HDDs, or mezzanine cards.
CONFIGURING the SERVER

NOTE: Use the steps on the following pages to configure the server with
the components that you want to include.
STEP 1 VERIFY BASE SKU

NOTE: The B260 M4 server consists of a Scalable M4 Blade Module and a Scalability
Terminator that plugs into the front of the blade module.
To upgrade from a B260 M4 server to a B460 M4 server:

■ Your current B260 M4 server must be configured with two identical Intel® Xeon® E7-8800 v2
or two identical E7-4800 v2 series processor family CPUs. A B260 M4 with E7-2800 v2 CPUs
cannot be upgraded.
■ Order the upgrade kit (PID UCSB-EX-M4-1E-U), which consists of the following:
— One Scalable M4 Blade Module
— One Scalability Connector
■ Configure the new Scalable M4 Blade Module with two Intel Xeon E7-8800 v2 or E7-4800 v2
series processor family CPUs that are identical to the two processors in the B260 M4 server to
be upgraded.
NOTE: The two CPUs in the original B260 M4 server and the two CPUs in the Scalable
M4 Blade Module from the upgrade kit must be identical.
■ Remove the Scalability Terminator from your original B260 M4. Install the new Scalable M4
Blade Module from the upgrade kit in the chassis slot above or below. Then install the new
Scalability Connector into the front of both blade modules, connecting them together. You
now have a B460 M4 server, consisting of two Scalable M4 Blade Modules ganged together by
the Scalability Connector.
STEP 2 CHOOSE CPU(S)

The standard CPU features are:

■ Intel Xeon E7-8800 v2, E7-4800 v2, or E7-2800 v2 series processor family CPUs
■ Intel C602J chipset
■ Cache size of up to 37.5 MB
Choose CPUs
Approved Configurations
Caveats
NOTE: The B260 M4 server consists of a Scalable M4 Blade Module and a Scalability
Terminator that plugs into the front of the blade module.
You can upgrade a B260 M4 blade server later to a B460 M4 server; however, each
B260 M4 server must be configured with two identical Intel Xeon E7-8800 v2 or
E7-4800 v2 series processor family CPUs. A B260 M4 with E7-2800 v2 CPUs cannot be
upgraded. See “To upgrade from a B260 M4 server to a B460 M4 server” on page 8
for details on upgrading.
STEP 3 CHOOSE MEMORY

The standard memory features are:

■ DIMMs
— Clock speed: 1600, 1333, or 1066 MHz
— Ranks per DIMM: 4 (for 32 GB DIMMs), 2 (for 8 or 16 GB DIMMs),
or 8 (for 64 GB DIMMs)
— Operational voltage: 1.5 V or 1.35 V
— Registered DIMM (RDIMM) or load-reduced DIMM (LRDIMM)
■ Each CPU controls four serial memory interface 2 (SMI-2) channels. Memory buffers convert
each SMI-2 channel into two DDR3 subchannels. Memory is organized as 3 DIMMs per
subchannel, totaling 6 DIMMs per SMI-2 channel. See Figure 3. (The slot arithmetic is
checked in the sketch after the note below.)
NOTE: Memory mirroring is supported and settable using the UCSM Service Profile
“Memory RAS Configuration” setting.
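As a quick check of this organization, the following minimal Python sketch (illustrative only, not
part of the spec) reproduces the slot count and the 3 TB maximum from the figures above:

    # Slot arithmetic from the memory description above (values from this spec sheet).
    CPUS = 2
    SMI2_CHANNELS_PER_CPU = 4      # serial memory interface 2 (SMI-2) channels
    SUBCHANNELS_PER_CHANNEL = 2    # each memory buffer splits an SMI-2 channel in two
    DIMMS_PER_SUBCHANNEL = 3

    dimms_per_cpu = SMI2_CHANNELS_PER_CPU * SUBCHANNELS_PER_CHANNEL * DIMMS_PER_SUBCHANNEL
    total_slots = CPUS * dimms_per_cpu    # 24 per CPU, 48 total
    max_memory_gb = total_slots * 64      # with 64 GB DIMMs (see note 1)

    print(dimms_per_cpu, total_slots, max_memory_gb)   # 24 48 3072 (= 3 TB)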
Select DIMMs
DIMMs are available as two-DIMM kits. Each of the product IDs in Table 4 specifies two DIMMs.
Table 4 Available DIMM Kits

Product ID (PID)          PID Description          Voltage     Ranks/DIMM
Approved Configurations

Table 5 Supported DIMM Configurations

DIMMs
per CPU   CPU 1 DIMMs                                          CPU 2 DIMMs

4         (A1, B1) (C1, D1) - blue slots                       (O1, P1) (L1, K1) - blue slots

6         (A1, B1) (C1, D1) (E1, F1) - blue slots              (O1, P1) (L1, K1) (M1, N1) - blue slots

8         (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots

10        (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots
          (A2, B2) - black slots                               (O2, P2) - black slots

12        (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots
          (A2, B2) (C2, D2) - black slots                      (O2, P2) (L2, K2) - black slots

14        (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots
          (A2, B2) (C2, D2) (E2, F2) - black slots             (O2, P2) (L2, K2) (M2, N2) - black slots

16        (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots
          (A2, B2) (C2, D2) (E2, F2) (G2, H2) - black slots    (O2, P2) (L2, K2) (M2, N2) (J2, I2) - black slots

18        (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots
          (A2, B2) (C2, D2) (E2, F2) (G2, H2) - black slots    (O2, P2) (L2, K2) (M2, N2) (J2, I2) - black slots
          (A3, B3) - white or ivory slots                      (O3, P3) - white or ivory slots

20        (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots
          (A2, B2) (C2, D2) (E2, F2) (G2, H2) - black slots    (O2, P2) (L2, K2) (M2, N2) (J2, I2) - black slots
          (A3, B3) (C3, D3) - white or ivory slots             (O3, P3) (L3, K3) - white or ivory slots

22        (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots
          (A2, B2) (C2, D2) (E2, F2) (G2, H2) - black slots    (O2, P2) (L2, K2) (M2, N2) (J2, I2) - black slots
          (A3, B3) (C3, D3) (E3, F3) - white or ivory slots    (O3, P3) (L3, K3) (M3, N3) - white or ivory slots

24        (A1, B1) (C1, D1) (E1, F1) (G1, H1) - blue slots     (O1, P1) (L1, K1) (M1, N1) (J1, I1) - blue slots
          (A2, B2) (C2, D2) (E2, F2) (G2, H2) - black slots    (O2, P2) (L2, K2) (M2, N2) (J2, I2) - black slots
          (A3, B3) (C3, D3) (E3, F3) (G3, H3) - white or       (O3, P3) (L3, K3) (M3, N3) (J3, I3) - white or
          ivory slots                                          ivory slots
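The population order in Table 5 is regular enough to express programmatically. The Python sketch
below is illustrative only (the helper name is hypothetical); the slot names and blue/black/white
fill order are taken directly from the table:

    # DIMM kit fill order per CPU, from Table 5.
    CPU1_KITS = [("A1","B1"), ("C1","D1"), ("E1","F1"), ("G1","H1"),   # blue slots
                 ("A2","B2"), ("C2","D2"), ("E2","F2"), ("G2","H2"),   # black slots
                 ("A3","B3"), ("C3","D3"), ("E3","F3"), ("G3","H3")]   # white/ivory slots
    CPU2_KITS = [("O1","P1"), ("L1","K1"), ("M1","N1"), ("J1","I1"),   # blue slots
                 ("O2","P2"), ("L2","K2"), ("M2","N2"), ("J2","I2"),   # black slots
                 ("O3","P3"), ("L3","K3"), ("M3","N3"), ("J3","I3")]   # white/ivory slots

    def slots_for(dimms_per_cpu):
        """Return the (CPU 1, CPU 2) slot pairs for a DIMM count listed in Table 5."""
        if dimms_per_cpu % 2 or not 4 <= dimms_per_cpu <= 24:
            raise ValueError("Table 5 lists even DIMM counts from 4 to 24 per CPU")
        kits = dimms_per_cpu // 2
        return CPU1_KITS[:kits], CPU2_KITS[:kits]

    cpu1, cpu2 = slots_for(8)
    # cpu1 -> (A1,B1)(C1,D1)(E1,F1)(G1,H1); cpu2 -> (O1,P1)(L1,K1)(M1,N1)(J1,I1)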
Caveats
■ Memory Mode. System speed is dependent on how many DIMMs are populated per channel,
the CPU DIMM speed support, and the BIOS memory mode. The BIOS default memory mode is
performance mode. However, the BIOS can be changed to support lockstep mode.
— Memory Performance Mode. In this mode, the main memory channel from the CPU
to the memory buffer runs at double the clock rate of each of the two memory
subchannels from the buffer to the DIMMs, and each DIMM subchannel is accessed
sequentially. For example, if the CPU channel clock speed is 2667 MHz, each of the
DIMM subchannels operates at 1333 MHz. For this reason, performance mode is
referred to as 2:1. Performance mode does not provide data protection, but can
yield up to 1.5 times the performance of lockstep mode and is the best choice for
high throughput requirements.
— Memory Lockstep Mode. In this mode, the main memory channel from the CPU to
the memory buffer runs at the same clock rate as each of the two memory
subchannels from the buffer to the DIMMs, and both DIMM subchannels are accessed
simultaneously for a double-width access. For example, if the CPU channel clock
speed is 1600 MHz, each of the DIMM subchannels operates at 1600 MHz. For this
reason, lockstep mode is referred to as 1:1. Memory lockstep mode provides
protection against both single-bit and multi-bit errors. Memory lockstep lets two
memory channels work as a single channel, moving a data word two channels wide
and providing eight bits of memory correction.
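The two modes differ only in the clock ratio between the CPU channel and the DIMM subchannels. A
minimal sketch of that relationship (illustrative only, using the example speeds given above):

    # 2:1 (performance) vs 1:1 (lockstep) clock ratio, as described above.
    def dimm_subchannel_clock(cpu_channel_mhz, mode):
        if mode == "performance":   # 2:1 - subchannels run at half the CPU channel clock
            return cpu_channel_mhz / 2
        if mode == "lockstep":      # 1:1 - subchannels run at the CPU channel clock
            return cpu_channel_mhz
        raise ValueError(mode)

    print(dimm_subchannel_clock(2667, "performance"))   # ~1333 MHz, the example above
    print(dimm_subchannel_clock(1600, "lockstep"))      # 1600 MHz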
Notes . . .
1. CPU examples: E7-2890/2880/2870 v2, E7-4890/4880/4870/4860 v2, E7-8893/8891/8857 v2
2. CPU examples: E7-2850 v2, E7-4850/4830/4820 v2
3. CPU example: E7-4809 v2
DIMMs run at various clock speeds depending on the DIMM voltage and number of DIMMs
per channel. See Table 7 and Table 8.
Table 7 DIMM Speeds

8 GB/2R/RDIMM     1333 MHz   1333 MHz   1066 MHz   1333 MHz   1333 MHz   1066 MHz
16 GB/2R/RDIMM    1333 MHz   1333 MHz   1066 MHz   1333 MHz   1333 MHz   1066 MHz
32 GB/4R/LRDIMM   1333 MHz   1333 MHz   1333 MHz   1600 MHz   1600 MHz   1333 MHz
64 GB/8R/LRDIMM   1066 MHz   1066 MHz   1066 MHz   1066 MHz   1066 MHz   1066 MHz

Table 8 DIMM Speeds

8 GB/2R/RDIMM     1333 MHz   1066 MHz   N/A        1333 MHz   1066 MHz   N/A
16 GB/2R/RDIMM    1333 MHz   1066 MHz   N/A        1333 MHz   1066 MHz   N/A
32 GB/4R/LRDIMM   1333 MHz   1333 MHz   N/A        1333 MHz   1333 MHz   N/A
■ The only supported DIMM configurations are shown in Table 5 on page 13. Although the
DIMMs are sold in matched pairs, exact pairing is not required. For best results, follow the
DIMM population rules.
■ The B260 M4 server needs at least one two-DIMM kit installed for each CPU.
■ Memory DIMMs must be installed evenly across the installed CPUs.
■ Do not mix RDIMMs and LRDIMMs.
■ Your selected CPU(s) can have some effect on performance. The CPUs must be of the same
type.
■ For DIMM size mixing rules, see Table 26 on page 45.
STEP 4 CHOOSE HARD DISK DRIVES (HDDs) or SOLID STATE DRIVES (SSDs)
The standard disk drive features are:
Choose Drives
NOTE: 4K format drives are supported and qualified as bootable with Cisco UCS
Manager Release 3.1(2b) and later versions.
Product ID (PID)      PID Description                                           Drive Type   Capacity

HDDs
12 Gbps Drives
UCS-HD600G15K12G      600 GB 12G SAS 15K RPM SFF HDD                            SAS          600 GB
UCS-HD450G15K12G      450 GB 12G SAS 15K RPM SFF HDD                            SAS          450 GB
UCS-HD300G15K12G      300 GB 12G SAS 15K RPM SFF HDD                            SAS          300 GB
UCS-HD12TB10K12G      1.2 TB 12G SAS 10K RPM SFF HDD                            SAS          1.2 TB
UCS-HD900G10K12G      900 GB 12G SAS 10K RPM SFF HDD                            SAS          900 GB
UCS-HD600G10K12G      600 GB 12G SAS 10K RPM SFF HDD                            SAS          600 GB
UCS-HD300G10K12G      300 GB 12G SAS 10K RPM SFF HDD                            SAS          300 GB
6 Gbps Drives
UCS-HD12T10KS2-E      1.2 TB 6G SAS 10K RPM HDD                                 SAS          1.2 TB

SSDs
12 Gbps Drives
UCS-SD16TB12S4-EP     1.6 TB 2.5 inch Enterprise Performance 12G SAS SSD        SAS          1.6 TB
                      (10X endurance)
UCS-SD800G12S4-EP     800 GB 2.5 inch Enterprise Performance 12G SAS SSD        SAS          800 GB
                      (10X endurance)
UCS-SD400G12S4-EP     400 GB 2.5 inch Enterprise Performance 12G SAS SSD        SAS          400 GB
                      (10X endurance)
6 Gbps Drives
UCS-SD19TBKSS-EV      1.9 TB 2.5 inch Enterprise Value 6G SATA SSD              SATA         1.9 TB
                      (1 FWPD) (PM863)
UCS-SD960GBKS4-EV     960 GB 2.5 inch Enterprise Value 6G SATA SSD              SATA         960 GB
UCS-SD800G0KS2-EP     800 GB Enterprise Performance 6G SAS SSD (Samsung 1625)   SAS          800 GB
UCS-SD480GBKSS-EV     480 GB 2.5 inch Enterprise Value 6G SATA SSD              SATA         480 GB
                      (1 FWPD) (PM863)
UCS-SD400G0KS2-EP     400 GB Enterprise Performance 6G SAS SSD (Samsung 1625)   SAS          400 GB
UCS-SD240GBKS4-EV     240 GB 2.5 inch Enterprise Value 6G SATA SSD              SATA         240 GB
Approved Configurations
Caveats
■ HDDs and SSDs cannot be mixed in the same server. SAS SSDs and SATA SSDs can be mixed in
the same server. See STEP 5 CHOOSE RAID CONFIGURATION, page 19 for available RAID
configurations.
STEP 5 CHOOSE RAID CONFIGURATION

Caveats

■ RAID configuration is possible if you have two identical drives. Otherwise, a JBOD
configuration is supported. (These rules are sketched below.)
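A minimal Python sketch of these drive caveats (the function and drive encoding are illustrative
only, not a Cisco tool):

    def supported_configs(drives):
        """Return the supported configurations for a list of drives."""
        kinds = {d["kind"] for d in drives}          # "HDD" or "SSD"
        if kinds == {"HDD", "SSD"}:
            raise ValueError("HDDs and SSDs cannot be mixed in the same server")
        if len(drives) == 2 and drives[0]["pid"] == drives[1]["pid"]:
            return ["RAID 0", "RAID 1", "JBOD"]      # two identical drives
        return ["JBOD"]                              # otherwise JBOD only

    print(supported_configs([{"pid": "UCS-HD600G15K12G", "kind": "HDD"}] * 2))
    # -> ['RAID 0', 'RAID 1', 'JBOD']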
STEP 6 CHOOSE ADAPTERS

■ Cisco Virtual Interface Cards (VICs)

Cisco developed the 1200 Series and 1300 Series Virtual Interface Cards (VICs) to provide
flexibility to create multiple NIC and HBA devices. The VICs also support adapter Fabric
Extender and Virtual Machine Fabric Extender technologies. The VIC features are listed here:
— 1200 Series VICs enable advanced networking features, including NetFlow for
network statistics and DPDK and usNIC for low-latency computing applications.
— 1300 Series VICs include all of the 1200 Series features plus additional
enhancements, including network overlay offload support for NVGRE and VXLAN, and
RoCE services.
— In addition, 1300 Series VICs support PCIe Gen 3.0 for greater bandwidth than 1200
Series VICs.
— Two Converged Network Adapter (CNA) ports, supporting both Ethernet and FCoE
— Delivers 80 Gbps total I/O throughput to the server
• VIC 1240 supports dual 4 x 10 Gbps Unified I/O ports
• VIC 1340 supports dual 4 x 10 Gbps Unified I/O ports or 2 x 40 Gbps (native)
Unified I/O ports
— Creates up to 256 fully functional unique and independent PCIe adapters and
interfaces (NICs or HBAs) without requiring single-root I/O virtualization (SR-IOV)
support from operating systems or hypervisors
— Provides virtual machine visibility from the physical network and a consistent
network operations model for physical and virtual servers
— Supports customer requirements for a wide range of operating systems and
hypervisors
■ Cisco UCS Storage Accelerator Adapters
Cisco UCS Storage Accelerator adapters are designed specifically for the Cisco UCS B-series M4
blade servers and integrate seamlessly to improve performance and relieve I/O bottlenecks.
NOTE: For environments with 6100 Series Fabric Interconnects, you must configure
only the VIC 1240/1280 adapters (1200 Series) and not 1340/1380 (1300 Series).
From an I/O connectivity standpoint, configure only the VIC 1200 Series with the
6100 Series Fabric Interconnects.
NOTE: There are three slots on the server. One is a dedicated slot for the VIC
1340/1240 adapter only and the other two accommodate Cisco or Cisco Storage
Accelerator adapters as well as other options. Table 10 shows which adapters plug
into each of the three slots. Only the VIC 1340 or 1240 adapter plugs into the VIC
1340/1240 adapter slot. All other adapters plug into the other two mezzanine
adapter slots.
NOTE: You must have a B260 M4 configured with 2 CPUs to support cards that plug
into either of the two mezzanine connectors. The VIC 1340 and 1240 adapters are
supported on both 1- and 2-CPU configured systems.
To help ensure that your operating system is compatible with the cards you have selected,
please check the Hardware Compatibility List at this URL:
http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html
Choose an Adapter
The supported mezzanine adapters in the UCS B260 M4 are listed in Table 10.
“Adapter 1,” “Adapter 2,” and “Adapter 3” refer to the UCSM naming convention for the
adapter slots (this document uses the UCSM naming convention). In the server BIOS and on the
motherboard, the corresponding slots are labeled as “mLOM,” “Mezz 1,” and “Mezz 2,”
respectively. See Table 11.
Server BIOS and Motherboard Slot Naming UCSM Slot Naming Available Bandwidth
mLOM (VIC 1240 or VIC 1340 only) Adapter 1 20 Gbps per Fabric Extender
Mezz1 Adapter 2 20 Gbps per Fabric Extender
Mezz2 Adapter 3 40 Gbps per Fabric Extender
Supported Configurations
Table 12 and Table 13 show the supported adapter combinations. The configuration rules are
summarized as follows:
■ Adapter slot 1 is dedicated for the VIC 1240 or VIC 1340 only. No other mezzanine card can
fit in Adapter Slot 1.
■ The Port Expander Card can only be selected if the VIC 1240 or VIC 1340 is also selected for
the server.
■ You must select at least one VIC or CNA. You may select up to two VICs or CNAs. However,
you cannot mix a VIC and a CNA in the same server.
■ You cannot select more than one VIC 1240 or VIC 1340. You cannot select more than one VIC
1280 or VIC 1380. A VIC 1240 and a VIC 1280 can be mixed in the same server. A VIC 1340 and
a VIC 1380 can also be mixed.
■ You can select up to two Storage Acceleration adapters. A Fusion-io adapter cannot be
mixed with an LSI WarpDrive adapter in the same server. (These rules are sketched in code
after the note below.)
NOTE: CPU 1 controls adapter slot 1, and CPU 2 controls adapter slots 2 and 3.
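The slot rules above can be expressed as a small validator. The following Python sketch is
illustrative only (card names as plain strings, simplified to VICs and the Port Expander; CNAs and
storage accelerators are omitted):

    # Which VIC series each card belongs to (from the rules above).
    GENERATION = {"VIC 1240": "1200", "VIC 1280": "1200",
                  "VIC 1340": "1300", "VIC 1380": "1300"}

    def validate(slot1, slot2, slot3, cpus=2):
        """Check an adapter combination against the rules above; return a reason or 'OK'."""
        if cpus < 2 and (slot2 or slot3):
            return "slots 2 and 3 require a 2-CPU configuration"
        if slot1 not in (None, "VIC 1240", "VIC 1340"):
            return "adapter slot 1 is dedicated to the VIC 1240 or VIC 1340"
        if "Port Expander" in (slot2, slot3) and slot1 is None:
            return "the Port Expander requires a VIC 1240 or VIC 1340"
        if slot2 in ("VIC 1280", "VIC 1380"):
            return "the VIC 1280/1380 fits only the quad-port slot (slot 3)"
        vics = [c for c in (slot1, slot2, slot3) if c in GENERATION]
        if not vics:
            return "select at least one VIC (or CNA)"
        if len({GENERATION[v] for v in vics}) > 1:
            return "do not mix 1200 Series and 1300 Series VICs"
        return "OK"

    print(validate("VIC 1340", "Port Expander", "VIC 1380"))   # OK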
Table 12 Supported 1340 and 1380 Adapter Combinations (for each B260 M4 blade)

Table 13 Supported 1240 and 1280 Adapter Combinations (for each B260 M4 blade)
To check that your operating system is compatible with the adapter you have selected, please check the
Hardware Compatibility List at this URL:
http://www.cisco.com/en/US/products/ps10477/prod_technical_reference_list.html
STEP 7 ORDER A TRUSTED PLATFORM MODULE (OPTIONAL)

NOTE: The module used in this server conforms to TPM v1.2/1.3, as defined by the
Trusted Computing Group (TCG).
STEP 8 ORDER OPTIONAL KVM CABLE

The KVM cable provides a DB-9 serial connector and a two-port USB 2.0 connector (for a mouse
and keyboard).
STEP 9 ORDER CISCO FLEXIBLE FLASH SECURE DIGITAL CARDS

NOTE: Dual card support (mirroring) is supported with UCS Manager 2.2.x and later.

The SDHC card ordering information is listed in Table 16. Order one or two SD cards.
STEP 10 ORDER OPTIONAL INTERNAL USB 2.0 DRIVE

NOTE: A clearance of 0.950 inches (24.1 mm) is required for the USB device to be
inserted and removed (see the following figure).

NOTE: When the Cisco USB key is purchased with a server, it is pre-installed into the
internal USB port and held firmly in place with a clip to protect it from shock and
vibration during shipment and transportation. The clip also protects the USB key from
shock and vibration during normal customer operation.
STEP 12 CHOOSE OPERATING SYSTEM MEDIA KIT

Table 19 OS Media
STEP 13 CHOOSE SERVICE and SUPPORT LEVEL

If you have noncritical implementations and choose to have no service contract, the following
coverage is supplied:
For support of the entire Unified Computing System, Cisco offers the Cisco SMARTnet for UCS
Service. This service provides expert software and hardware support to help sustain
performance and high availability of the unified computing environment. Access to Cisco
Technical Assistance Center (TAC) is provided around the clock, from anywhere in the world.
For UCS blade servers, there is Smart Call Home, which provides proactive, embedded
diagnostics and real-time alerts. For systems that include Unified Computing System Manager,
the support service includes downloads of UCSM upgrades. The Cisco SMARTnet for UCS Service
includes flexible hardware replacement options, including replacement in as little as two hours.
There is also access to Cisco's extensive online technical resources to help maintain optimal
efficiency and uptime of the unified computing environment. You can choose a desired service
listed in Table 20.
For faster parts replacement than is provided with the standard Cisco Unified Computing System
warranty, Cisco offers the Cisco SMARTnet for UCS Hardware Only Service. You can choose from
two levels of advanced onsite parts replacement coverage in as little as four hours. SMARTnet
for UCS Hardware Only Service provides remote access any time to Cisco support professionals
who can determine if a return materials authorization (RMA) is required. You can choose a
service listed in Table 21.
Service SKU     Service Level GSP     On Site?     Description
Cisco Partner Support Service (PSS) is a Cisco Collaborative Services service offering that is
designed for partners to deliver their own branded support and managed services to enterprise
customers. Cisco PSS provides partners with access to Cisco's support infrastructure and assets
to help them:
■ Expand their service portfolios to support the most complex network environments
■ Lower delivery costs
■ Deliver services that increase customer loyalty
Partner Unified Computing Support Options enable eligible Cisco partners to develop and
consistently deliver high-value technical support that capitalizes on Cisco intellectual assets.
This helps partners to realize higher margins and expand their practice.
Partner Unified Computing Support Options are available to Cisco PSS partners. For additional
information, see the following URL:
www.cisco.com/go/partnerucssupport
Partner Support Service for UCS provides hardware and software support, including triage
support for third party software, backed by Cisco technical resources and level three support.
See Table 22.
Service SKU     Service Level GSP     On Site?     Description
Partner Support Service for UCS Hardware Only provides customers with replacement parts in as
little as two hours. See Table 23.
Service SKU     Service Level GSP     On Site?     Description
Combined Services makes it easier to purchase and manage required services under one
contract. SMARTnet services for UCS help increase the availability of your vital data center
infrastructure and realize the most value from your unified computing investment. The more
benefits you realize from the Cisco Unified Computing System (Cisco UCS), the more important
the technology becomes to your business. These services allow you to:
With the Cisco Unified Computing Drive Retention (UCDR) Service, you can obtain a new disk
drive in exchange for a faulty drive without returning the faulty drive. In exchange for a Cisco
replacement drive, you provide a signed Certificate of Destruction (CoD) confirming that the
drive has been removed from the system listed, is no longer in service, and has been destroyed.
Sophisticated data recovery techniques have made classified, proprietary, and confidential
information vulnerable, even on malfunctioning disk drives. The UCDR service enables you to
retain your drives and ensures that the sensitive data on those drives is not compromised, which
reduces the risk of any potential liabilities. This service also enables you to comply with
regulatory, local, and federal requirements.
If your company has a need to control confidential, classified, sensitive, or proprietary data, you
might want to consider one of the Drive Retention Services listed in Table 24, Table 25, or
Table 26.
NOTE: Cisco does not offer a certified drive destruction service as part of this
service.
Service Program Name     Service Description     Service Level GSP     Service Level     Product ID (PID)

Service Description     Service Level GSP     Service Level     Product ID (PID)

Table 26 Drive Retention Service Options for Partner Support Service (Hardware Only)

Service Description     Service Level GSP     Service Level     Product ID (PID)
For more service and support information, see the following URL:
http://www.cisco.com/en/US/services/ps2961/ps10312/Unified_Computing_Services_Overview.pdf
For a complete listing of available services for Cisco Unified Computing System, see this URL:
http://www.cisco.com/en/US/products/ps10312/serv_group_home.html
SUPPLEMENTAL MATERIAL
Motherboard
A photo of the top view of the B260 M4 server with the cover removed is shown in Figure 5.
A drawing of the top view of the B260 M4 server with the cover removed is shown in Figure 6.
8 Memory buffer for subchannels A and B
9 DIMM slots A1-A3 and B1-B3
14 DIMM slots G1-G3 and H1-H3
22 DIMM slots O1-O3 and P1-P3
23 Memory buffer for subchannels O and P
28 Flexible Flash card locations (2)
DIMM and CPU Layout

Figure 7 DIMM and CPU Layout
Each CPU controls four memory channels, and each memory channel controls two subchannels
through individual memory buffers placed around the motherboard (shown as black rectangles in
Figure 7). Each subchannel controls 3 DIMMs as follows (refer also to Figure 3 on page 12):
Memory Population Recommendations

When considering the memory configuration of your server, you should observe the following:
■ Your selected CPU(s) can have some effect on performance. All CPUs in the server must be
of the same type.
■ Performance degradation can result from unevenly populating DIMMs between CPUs.
Qty of 2x8GB DIMM kits   Qty of 2x16GB DIMM kits   Qty of 2x32GB DIMM kits   Qty of 2x64GB DIMM kits   Total Memory   Identical or
(UCS-MR-2X082RY-E)       (UCS-MR-2X162RY-E)        (UCS-ML-2X324RY-E)        (UCS-ML-2X648RY-E)        Capacity       Mixed DIMMs
Upgrade and Servicing-Related Parts

UCSB-HS-01-EX=     CPU Heat Sink for UCS B260 M4 and B460 M4 (see note 1)
UCS-CPU-LPCVR=     CPU load plate dust cover (for unpopulated CPU sockets)
UCS-CPU-EP-PNP=    Pick n place CPU tools for M3/EP and M4/EX CPUs (see note 3)
UCSX-HSCK=         UCS Processor Heat Sink Cleaning Kit (when replacing a CPU) (see note 3)
UCSB-MRAID-SC=     Supercap for FlexStorage 12G SAS RAID controller w/1GB FBWC
Notes . . .
1. This part is included/configured with your UCS server (in some cases, as determined by the configuration of
your server).
2. This part is included/configured with the UCS 5108 blade server chassis.
3. This part is included with the purchase of each optional or spare CPU processor kit.
4. Only half the capacity of the 32 GB SD card is available with this server.
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B260M4.html
CPU Removal and Installation (“pick n place”) Tool Set

Instructions for using this tool set are found at the following link:
http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B260M4.html
NOTE: When you purchase a spare CPU, the Pick n Place Toolkit is included.
Thermal Grease (with syringe applicator) for CPU to Heatsink Seal

http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B260M4.html
CAUTION:
DO NOT use thermal grease available for purchase at any commercial
electronics store. If these instructions are not followed, the CPU may
overheat and be destroyed.
NOTE: When you purchase a spare CPU, the thermal grease with syringe applicator
is included.
CPU Heat Sink Cleaning Kit

http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/hw/blade-servers/B260M4.html
NOTE: When you purchase a spare CPU, the CPU cleaning kit is included.
Network Connectivity
This section shows how the supported adapter card configurations for the B260 M4 connect to the Fabric
Extender modules in the 5108 blade server chassis.
There are three configurable adapter slots on the B260 M4. One slot supports only the VIC 1340/1240
adapter, and two additional slots accommodate Cisco adapters, as well as Cisco UCS Storage Accelerator
adapters. Table 12 on page 24 shows supported adapter configurations. You must install at least one VIC or
CNA adapter in one of the three adapter slots.
“Adapter 1,” “Adapter 2,” and “Adapter 3” refer to the UCSM naming convention for the adapter slots (this
document uses the UCSM naming convention). In the server BIOS and on the motherboard, the
corresponding slots are labeled as “mLOM,” “Mezz 1,” and “Mezz 2,” respectively. See Table 29.
Server BIOS and Motherboard Slot Naming UCSM Slot Naming Available Bandwidth
mLOM (VIC 1340/1240 only) Adapter 1 20 Gbps per Fabric Extender
Mezz1 Adapter 2 20 Gbps per Fabric Extender
Mezz2 Adapter 3 40 Gbps per Fabric Extender
Total bandwidth is a function of the Fabric Extender, the adapter, and the adapter slot, as shown in
Table 30 and Table 31.
Table 30 Maximum Bandwidth by Fabric Extender

2208XP           160 Gb
2204XP           80 Gb
2104XP           40 Gb

Table 31 Maximum Bandwidth by Adapter Slot

Adapter 1 Slot   40 Gb
Adapter 2 Slot   40 Gb
Adapter 3 Slot   80 Gb
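Because every link in the figures that follow is a 10 Gb KR lane split evenly across the two
Fabric Extenders, the bandwidth entries reduce to lane counting. A small sketch of the arithmetic
(illustrative only):

    # Total bandwidth = 2 Fabric Extenders x lanes per Fabric Extender x 10 Gb.
    def aggregate_gb(lanes_per_fabric_extender):
        return 2 * lanes_per_fabric_extender * 10

    # Maximum configuration (VIC 1340 + Port Expander + VIC 1380 with 2208XP FEs):
    # 4 lanes from slots 1+2 and 4 lanes from slot 3 reach each Fabric Extender.
    print(aggregate_gb(8))   # 160 Gb, matching Table 12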
Figure 8 shows the configuration for maximum bandwidth, where the following ports are routed to Fabric
Extender Modules A and B inside the 5108 blade server chassis:
■ Two ports of the first group and two ports of the second group are wired through the UCS
5108 Blade Server chassis to Fabric Extender A and Fabric Extender B.
■ The other two ports of each group are wired to adapter slot 2. The VIC 1340/1240 adapter
senses the type of adapter installed in adapter slot 2. If a Port Expander is installed in
adapter slot 2, the four 10G KR ports between the adapters are used for port expansion;
otherwise they are unused.
With the Port Expander installed, there are up to eight (depending on the Fabric Extender installed) 10 Gb
network interfaces, as represented in Figure 9.
Adapter Slot 1    Adapter Slot 2    Adapter Slot 3    Total Available Bandwidth (2 x 2208XP / 2 x 2204XP)
VIC 1340/1240 Port Expander Card VIC 1380/1280 160/80 Gb
Figure 10 on page 53
VIC 1340/1240 Cisco UCS Storage Accelerator VIC 1380/1280 120/60 Gb
Figure 11 on page 53
VIC 1340/1240 Not populated VIC 1380/1280 120/60 Gb
Figure 12 on page 54
Not populated Cisco UCS Storage Accelerator VIC 1380/1280 80/40 Gb
Figure 13 on page 54
Not populated Not populated VIC 1380/1280 80/40 Gb
Figure 14 on page 55
VIC 1340/1240 Port Expander Card Cisco UCS Storage Accelerator 80/40 Gb
Figure 15 on page 55
VIC 1340/1240 Port Expander Card Not populated 80/40 Gb
Figure 16 on page 56
Note: For the following configuration, do not mix a Cisco Storage Accelerator with an LSI WarpDrive.
Slots 2 and 3 must have identical types of storage cards.
VIC 1340/1240 Cisco UCS Storage Accelerator Cisco UCS Storage Accelerator 40/20 Gb
Figure 17 on page 56
VIC 1340/1240 Cisco UCS Storage Accelerator Not populated 40/20 Gb
Figure 18 on page 57
VIC 1340/1240 Not populated Not populated 40/20 Gb
Figure 19 on page 57
Notes . . .
1. In the server BIOS and on the motherboard, “Adapter 1,” “Adapter 2,” and “Adapter 3” are labeled as “mLOM,”
“Mezz 1,” and “Mezz 2,” respectively.
In Figure 10, two ports from the VIC 1340/1240 are channeled to Fabric Extender A and two are channeled
to Fabric Extender B. The Port Expander Card for the VIC 1340/1240 installed in adapter slot 2 acts as a
pass-through device, channeling two ports to each of the Fabric Extenders. In addition, the VIC 1380/1280
channels four ports to each Fabric Extender. The result is 80 Gb of bandwidth to each Fabric Extender.
Figure 10 VIC 1340/1240, Port Expander in adapter slot 2, and VIC 1380/1280 in adapter slot 3
In Figure 11, two ports from the VIC 1340/1240 are channeled to Fabric Extender A and two are channeled
to Fabric Extender B. A Cisco UCS Storage Accelerator adapter is installed in slot 2, but provides no network
connectivity. The VIC 1380/1280 installed in adapter slot 3 channels four ports to each of the Fabric
Extenders. The result is 60 Gb of bandwidth to each Fabric Extender.
Figure 11 VIC 1340/1240, Cisco UCS SA adapter slot 2, and VIC 1380/1280 adapter slot 3
In Figure 12, two ports from the VIC 1340/1240 are channeled to Fabric Extender A and two are channeled
to Fabric Extender B. Adapter slot 2 is empty. The VIC 1380/1280 installed in adapter slot 3 channels four
ports to each of the Fabric Extenders. The result is 60 Gb of bandwidth to each Fabric Extender.
In Figure 13, no VIC 1340/1240 is installed. A Cisco UCS Storage Accelerator adapter is installed in slot 2,
but provides no network connectivity. The VIC 1380/1280 installed in adapter slot 3 channels four ports to
each of the Fabric Extenders. The result is 40 Gb of bandwidth to each Fabric Extender.
Figure 13 No VIC 1340/1240 installed, UCS Storage Accelerator in slot 2 and VIC 1380/1280 in slot 3
In Figure 14, no VIC 1340/1240 is installed. Adapter 2 slot is also not occupied. The VIC 1380/1280 installed
in adapter slot 3 channels four ports to each of the Fabric Extenders. The result is 40 Gb of bandwidth to
each Fabric Extender.
Figure 14 No VIC 1340/1240 installed, no adapter installed in slot 2, and VIC 1380/1280 in slot 3
In Figure 15, two ports from the VIC 1340/1240 are channeled to Fabric Extender A and two are channeled
to Fabric Extender B. The Port Expander Card installed in adapter slot 2 acts as a pass-through device,
channeling two ports to each of the Fabric Extenders. A Cisco UCS storage accelerator is installed in slot 3,
but provides no network connectivity. The result is 40 Gb of bandwidth to each Fabric Extender.
Figure 15 VIC 1340/1240 and Port Expander in Adapter Slot 2 with UCS storage accelerator in slot 3
In Figure 16, two ports from the VIC 1340/1240 are channeled to Fabric Extender A and two are channeled
to Fabric Extender B. The Port Expander Card installed in adapter slot 2 acts as a pass-through device,
channeling two ports to each of the Fabric Extenders. Adapter slot 3 is empty. The result is 40 Gb of
bandwidth to each Fabric Extender.
Figure 16 VIC 1340/1240 and Port Expander in Adapter Slot 2 (adapter slot 3 empty)
In Figure 17, two ports from the VIC 1340/1240 adapter are channeled to Fabric Extender A and two are
channeled to Fabric Extender B. UCS storage accelerators are installed in adapter slots 2 and 3, but provide
no network connectivity. The result is 20 Gb of bandwidth to each Fabric Extender.
Figure 17 VIC 1340/1240 with UCS storage accelerators installed in adapter slots 2 and 3
In Figure 18, two ports from the VIC 1340/1240 adapter are channeled to Fabric Extender A and two are
channeled to Fabric Extender B. A UCS storage accelerator is installed in adapter slot 2 but provides no
network connectivity and slot 3 is empty. The result is 20 Gb of bandwidth to each Fabric Extender.
Figure 18 VIC 1340/1240 with UCS storage accelerator installed in adapter slot 2 and slot 3 empty
In Figure 19, two ports from the VIC 1340/1240 adapter are channeled to Fabric Extender A and two are
channeled to Fabric Extender B. Adapter slots 2 and 3 are empty. The result is 20 Gb of bandwidth to each
Fabric Extender.
Adapter Slot 1    Adapter Slot 2    Adapter Slot 3    Total Available Bandwidth (2 x 2104XP)
VIC 1340/1240 Port Expander Card VIC 1380/1280 40 Gb
Figure 20 on page 59
VIC 1340/1240 Cisco UCS Storage Accelerator VIC 1380/1280 40 Gb
Figure 21 on page 59
VIC 1340/1240 Not populated VIC 1380/1280 40 Gb
Figure 22 on page 60
Not populated Cisco UCS Storage Accelerator VIC 1380/1280 20 Gb
Figure 23 on page 60
Not populated Not populated VIC 1380/1280 20 Gb
Figure 24 on page 61
VIC 1340/1240 Port Expander Card Cisco UCS Storage Accelerator 20 Gb
Figure 25 on page 61
VIC 1340/1240 Port Expander Card Not populated 20 Gb
Figure 26 on page 62
Note: For the following configuration, do not mix a Fusion-io adapter with an LSI WarpDrive. Slots 2 and
3 must have identical types of storage cards.
VIC 1340/1240 Cisco UCS Storage Accelerator Cisco UCS Storage Accelerator 20 Gb
Figure 27 on page 62
VIC 1340/1240 Cisco UCS Storage Accelerator Not populated 20 Gb
Figure 28 on page 63
VIC 1340/1240 Not populated Not populated 20 Gb
Figure 29 on page 63
Notes . . .
1. In the server BIOS and on the motherboard, “Adapter 1,” “Adapter 2,” and “Adapter 3” are labeled as “mLOM,”
“Mezz 1,” and “Mezz 2,” respectively.
In Figure 20, one port from the VIC 1340/1240 is connected to Fabric Extender A and one is connected to
Fabric Extender B. The Port Expander Card for the VIC 1340/1240 installed in adapter slot 2 has no role in
this case. In addition, the VIC 1380/1280 channels one port to each Fabric Extender. The result is 20 Gb of
bandwidth to each Fabric Extender.
Figure 20 VIC 1340/1240, Port Expander in adapter slot 2, and VIC 1380/1280 in adapter slot 3
In Figure 21, two ports from the VIC 1340/1240 are connected, one to each Fabric Extender. A Cisco UCS
Storage Accelerator adapter is installed in slot 2, but provides no network connectivity. The VIC 1380/1280
installed in adapter slot 3 connects two ports, one to each of the Fabric Extenders. The result is 20 Gb of
bandwidth to each Fabric Extender.
Figure 21 VIC 1340/1240, Cisco UCS SA adapter slot 2, and VIC 1380/1280 in adapter slot 3
In Figure 22, two ports from the VIC 1340/1240 are connected, one to each Fabric Extender. Adapter slot 2
is empty. The VIC 1380/1280 installed in adapter slot 3 connects two ports, one to each of the Fabric
Extenders. The result is 20 Gb of bandwidth to each Fabric Extender.
In Figure 23, no VIC 1340/1240 is installed. A Cisco UCS Storage Accelerator adapter is installed in slot 2,
but provides no network connectivity. The VIC 1380/1280 installed in adapter slot 3 connects two ports, one
to each of the Fabric Extenders. The result is 10 Gb of bandwidth to each Fabric Extender.
Figure 23 No VIC 1340/1240 installed, UCS Storage Accelerator in slot 2 and VIC 1380/1280 in slot 3
In Figure 24, no VIC 1340/1240 is installed. Adapter slot 2 is also not occupied. The VIC 1380/1280 installed
in adapter slot 3 connects two ports, one to each Fabric Extender. The result is 10 Gb of bandwidth to each
Fabric Extender.
Figure 24 No VIC 1340/1240 installed, no adapter installed in slot 2, and VIC 1380/1280 in slot 3
In Figure 25, one port from the VIC 1340/1240 is connected to Fabric Extender A and one is connected to
Fabric Extender B. The Port Expander Card installed in adapter slot 2 has no role in this case. A Cisco UCS
storage accelerator is installed in slot 3, but provides no network connectivity. The result is 10 Gb of
bandwidth to each Fabric Extender.
Figure 25 VIC 1340/1240 and Port Expander in Adapter Slot 2 with UCS storage accelerator in slot 3
In Figure 26, one port from the VIC 1340/1240 is connected to Fabric Extender A and one is connected to
Fabric Extender B. The Port Expander Card installed in adapter slot 2 has no role in this case. Adapter slot
3 is empty. The result is 10 Gb of bandwidth to each Fabric Extender.
Figure 26 VIC 1340/1240 and Port Expander in Adapter Slot 2 (adapter 3 empty)
In Figure 27, two ports from the VIC 1340/1240 adapter are connected, one to each Fabric Extender. UCS
storage accelerators are installed in adapter slots 2 and 3, but provide no network connectivity. The result
is 10 Gb of bandwidth to each Fabric Extender.
Figure 27 VIC 1340/1240 with UCS storage accelerators installed in adapter slots 2 and 3
In Figure 28, two ports from the VIC 1340/1240 adapter are connected, one to each Fabric Extender. A UCS
storage accelerator is installed in adapter slot 2 but provides no network connectivity and slot 3 is empty.
The result is 10 Gb of bandwidth to each Fabric Extender.
Figure 28 VIC 1340/1240 with UCS storage accelerator installed in adapter slot 2 and slot 3 empty
In Figure 29, two ports from the VIC 1340/1240 adapter are connected, one to each Fabric Extender.
Adapter slots 2 and 3 are empty. The result is 10 Gb of bandwidth to each Fabric Extender.
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Parameter Value
Notes . . .
1. The system weight given here is an estimate for a fully configured system and will vary depending on the
number of CPUs, memory DIMMs, and other optional items.
Power Specifications
For configuration-specific power specifications, use the Cisco UCS Power Calculator at:
http://ucspowercalc.cisco.com.