Hardware Overview
Reporting Problems
To send comments or report errors regarding this document,
please email: [email protected].
For issues not related to this document, contact your service provider.
Refer to Document ID:
1416422235015
Content creation date: November 19, 2014
This guide provides an overview of the architecture, features, and components of the EMC VNX5400 platform. The specific aspects of the VNX5400 platform and its major components covered here include the front and rear connectors and LED indicators on the 3U, 25 (2.5-inch) disk processor enclosure (DPE), the 1U Control Station, the 2U Data Mover enclosure, and the 2U, 25 (2.5-inch), 3U, 15 (2.5- or 3.5-inch), and 4U, 60 (2.5- or 3.5-inch) disk-array enclosures (DAEs).
This guide is available online at https://mydocs.emc.com/VNX/. Go to the About VNX
section, and then select Learn about VNX hardware. Next, follow the steps in the wizard.
Only trained and qualified personnel should be allowed to install, replace, or service this
equipment.
Revision history
The following table presents the revision history of this document:
Table 1 Revision history
Revision   Date   Description
04
03
02
01
Note: This document was accurate at publication time. New versions of this document
might be released on the EMC online support website. Check the EMC online support
website to ensure that you are using the latest version of this document.
Table 2 Organization

Title / Description
Overview on page 5
System component description on page 12
General on page 14
Related documentation
Disk-array enclosure on page 66: Describes the types of DAE cabling available for the Block and File/Unified VNX5400 platform. The cabling can be either stacked or interleaved depending on your specific requirements.
Related documentation
EMC provides the ability to create step-by-step planning, installation, and maintenance
instructions tailored to your environment. To create VNX customized documentation, go
to: https://mydocs.emc.com/VNX/.
To download a PDF copy of the desired publication, go to the following sections:
For hardware-related guides, go to About VNX, then select Learn about VNX hardware.
Next, follow the steps in the wizard.
For installation, adding, or replacing tasks, go to the VNX tasks section, then select the
appropriate heading. For example, to download a PDF copy of the VNX5400 Block
Installation Guide, go to Install VNX and follow the steps in the wizard.
For server-related tasks, go to the VNX Server tasks section, then select the
appropriate heading. For example, to download a PDF copy of adding or replacing
server hardware, go to VNX Server tasks, and select Add or replace server hardware.
Next, follow the steps in the wizard.
Safety warnings
Safety warnings appear throughout this publication in procedures that, if performed
incorrectly, might harm you or damage the equipment. A caution or warning symbol
precedes each safety statement. The safety warnings provide safety guidelines that you
should follow when working with any equipment that connects to electrical power or
telephone wiring.
Overview
Overview
The EMC VNX series implements a modular architecture that integrates hardware
components for Block, File, and Object with concurrent support for native NAS, iSCSI,
Fibre Channel, and Fibre Channel over Ethernet (FCoE) protocols. The VNX series is based
on Intel Xeon processors with PCI Express 3.0 and delivers File (NAS) functionality via
two to eight Data Movers and Block (iSCSI, FCoE, and FC) storage via dual storage
processors using a full 6-Gb/s SAS disk drive topology. The VNX series is targeted at
entry-level to high-end/large-capacity storage environments that require advanced
features, flexibility, and configurability. The VNX series provides significant advancements
in efficiency, simplicity, and performance.
Benefits include:
Support for File (CIFS and NFS), Block (FC, iSCSI & FCoE) and Object
Simple conversions, starting with a Block-only VNX series platform and adding File
services, or starting with File only and adding Block services
Support for both block and file auto-tiering with Fully Automated Storage Tiering
(FAST) for Virtual Pools (VP - FAST VP)
Unified replication with RecoverPoint support for both file and block data
Updated unified management with Unisphere now delivering a more cohesive unified
user experience
Offering Block and File services, Block services only, or File services only, the VNX5400
platform is an entry-level storage platform. For a quick look at the VNX5400 platform
hardware features, see Table 3, Block and File VNX5400 platform hardware feature quick
reference, on page 9.
In a Block services configuration, the VNX5400 platform supports a 3U DPE and three
types of DAEs. The 3U DPE supported is a 25 drive 2.5-inch disk 3U enclosure (or DPE9).
The DAEs supported are a 25 drive 2.5-inch disk 2U enclosure (or DAE5S), a 15 drive 2.5- or 3.5-inch disk 3U enclosure (or DAE6S), and a 60 drive 2.5- or 3.5-inch disk 4U
enclosure (or DAE7S). Expansion of up to 9, 2U DAEs (a maximum of 225, 2.5-inch disk
drives), up to 15, 3U DAEs (a maximum of 225, 2.5- or 3.5-inch disk drives), or up to 3, 4U DAEs (a maximum of 180,
2.5- or 3.5-inch disk drives) is supported.
Note: When the 4U DAEs are implemented in the VNX5400 platform, the 40U Dense rack is
required because of the depth of the 4U DAE.
IMPORTANT
When calculating the number of disk drives for your Block, File, and Unified services
VNX5400 platform, the DPE is included in the total drive slot quantity of 250 drives. If the
total drive slot quantity exceeds 250, you will not be able to add another DAE. Refer to the
Disk-array enclosure section on page 66 for more information about the available
expansion DAEs for the VNX5400 platform.
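To make this drive-slot arithmetic concrete, the following minimal Python sketch (hypothetical names and helper; not part of any VNX software) checks whether adding one more DAE stays within the 250-drive-slot limit, counting the 25 slots of the DPE.

# Illustrative sketch only: drive-slot accounting for a VNX5400 as described above.
# Slot counts per enclosure type come from this guide; names are hypothetical.
DPE_SLOTS = 25                      # 3U, 25 (2.5-inch) DPE
DAE_SLOTS = {"2U": 25, "3U": 15, "4U": 60}
MAX_SLOTS = 250                     # total drive-slot limit, DPE included

def can_add_dae(existing_daes, new_dae_type):
    """Return True if one more DAE of new_dae_type fits within the 250-slot limit."""
    used = DPE_SLOTS + sum(DAE_SLOTS[t] for t in existing_daes)
    return used + DAE_SLOTS[new_dae_type] <= MAX_SLOTS

# Example: a DPE plus three 4U DAEs uses 25 + 180 = 205 slots, so a 2U DAE
# (25 more slots, 230 total) still fits, but another 4U DAE (265 total) does not.
print(can_add_dae(["4U", "4U", "4U"], "2U"))   # True
print(can_add_dae(["4U", "4U", "4U"], "4U"))   # False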
In a File services or a Unified services configuration (Figure 1 on page 7), the VNX5400
platform supports a 3U DPE, from one to two 1U Control Stations (CS0 and CS1), one to
two 2U Data Mover enclosures having one to four Data Movers (see note 1), and three types of DAEs.
The 3U DPE supported is a 25 drive 2.5-inch disk 3U enclosure (or DPE9). The DAEs
supported are a 25 drive 2.5-inch disk 2U enclosure (or DAE5S), a 15 drive 2.5- or 3.5-inch
disk 3U enclosure (or DAE6S), and a 60 drive 2.5- or 3.5-inch disk 4U enclosure (or
DAE7S). Expansion of up to 9, 2U DAEs (a maximum of 225, 2.5-inch disk drives), up to
15, 3U DAEs (a maximum of 225, 2.5- or 3.5-inch disk drives), or up to 3, 4U DAEs (a
maximum of 180, 2.5- or 3.5-inch disk drives) is supported.
Note: The Block, File, or Unified services configuration of the VNX5400 platform can
have a mix of DAE types to conform to your specific requirements. In other words, you can
have a mix of 2U DAEs, 3U DAEs, and 4U DAEs in the same environment as long as the
VNX5400 platform does not exceed the supported maximum of 250 disk drives.
1. The term Data Mover is used throughout this guide. A Data Mover is also referred to as a
blade; the two terms are interchangeable.
Front view
Figure 1 shows an example of the front view of a File/Unified VNX5400 platform having a
3U, 25 (2.5-inch) disk drive DPE, two 2U Data Mover enclosures with four Data Movers,
and two 1U Control Stations (one optional).
Note: The example shown in Figure 1 does not show any DAEs.
Figure 1  Front view of a File/Unified VNX5400 platform: Control Station 1 (optional), Control Station 0, Data Mover enclosure 1, Data Mover enclosure 0, and the 3U, 25 (2.5-inch) disk processor enclosure (VNX-000940)
Rear view
Figure 2 shows an example of the rear view of a File/Unified VNX5400 platform having a
3U DPE showing two storage processors (SP A and B), two 2U Data Mover enclosures with
four Data Movers, and two 1U Control Stations (one optional).
Note: The example shown in Figure 2 does not show any DAEs.
Figure 2  Rear view of a File/Unified VNX5400 platform: disk processor enclosure (SP A and SP B), Data Mover enclosure 0 (Data Movers 2 and 3), Data Mover enclosure 1 (Data Movers 4 and 5), Control Station 0, and Control Station 1 (optional) (VNX-000941)
Note: Figure 1 on page 7 and Figure 2 are examples of a File/Unified VNX5400 platform
(front and rear views) without any DAEs. These figures are for illustrative purposes only.
Hardware features
Contained in a 9U architecture, the File/Unified VNX5400 platform weighs approximately
182.82 lb (82.925 kg) fully loaded (see note 2) without I/O modules and DAEs. With the 3U DPE
having the deepest dimension within the cabinet, the File/Unified VNX5400 without DAEs
measures (9U) 12.25 inches high x 17.62 inches wide x 24.77 inches deep (31.11 cm x
44.76 cm x 69.92 cm). Between the front and rear of the enclosure, a midplane distributes
power and signals to all the enclosure components. On the front of the VNX5400 DPE, the
CPU modules, cooling fan modules, and disk drives plug directly into the midplane
connections. On the rear of the VNX5400 DPE, the battery backup unit (BBU) modules, the
base modules, power supply modules, management modules, and I/O modules plug
directly into the midplane connections.

2. A fully loaded VNX5400 (without any DAEs) includes two 1U Control Stations, one 3U DPE (with
two SPs), and two 2U Data Mover enclosures with four Data Movers. In this fully loaded File/Unified
VNX5400 platform, the 3U DPE (with two SPs) can have 25 (2.5-inch) disk drives. Separately, the
25 (2.5-inch) drives weigh 13.5 lb (6.13 kg).
Note: The previously mentioned dimensions are approximate and do not include any I/O
modules, DAEs, or the cabinet enclosure.
For more information about the weight and dimensions of a VNX5400 platform, go to
https://mydocs.emc.com/VNX/ and go to the About VNX section, and then select View
technical specifications. Next, follow the steps in the wizard.
Table 3 Block and File VNX5400 platform hardware feature quick reference

Block
  Minimum form factor: 3U (without optional CS, DMEs, and DAEs)
  Maximum number of drives: 250
  Configurable I/O slots per SP: 4 (see note 1)
  Drive types: 6-Gb/s 2.5- or 3.5-inch SAS and 2.5- or 3.5-inch Flash
  Built-in I/O ports per SP: two 4x lane BE (see note 2) 6-Gb/s SAS ports
  SPs: 2
  System memory per SP: 16 GB
  Protocols: FC, iSCSI, and FCoE

File
  Configurable I/O slots per DM
  DMs: 1 to 4 (see note 3)
  System memory per DM: 6 GB
  Protocols: NFS, CIFS, and pNFS (see note 4)

1. For the type and number of Ultraflex I/O modules supported in the SP, refer to SP I/O module types on page 46.
2. BE = back end
3. For the type and number of Ultraflex I/O modules supported in the DM, refer to Data Mover I/O module types on page 57.
4. pNFS = parallel NFS
Configured for AC-input power, the VNX5400 platform includes the following hardware
features:
Note: A DC-powered VNX5400 model is also available.
One 3U DPE:
On the front of the VNX5400, the 3U DPE (Figure 2 on page 8) has two SPs (SP A
and B). Each SP consists of:
One disk drive carrier type; the 3U, 25 (2.5-inch) disk drive carrier (Figure 3 on
page 13). Two types of disk drives are supported in this carrier: Serial
attached-SCSI (SAS) and Flash
Four dual cooling fan packs (eight fans total); see Storage processor (SP) dual fan
pack (cooling module) on page 15 for more information
Note: The dual cooling fan pack is secured with push-tabs on the left and right
sides of each pack.
One CPU module with an Intel Xeon 4-core 1.8-GHz processor facilitating
Simultaneous Multi-Threading (SMT).
Eight Double Data Rate Three (DDR3) synchronous dynamic RAM (SDRAM) slots
supporting up to 16 GB of SDRAM per CPU module or SP using 4 or 8 GB DIMMs
Three LEDs: power on, fault, and unsafe to remove.
Note: Each CPU is secured with a push tab/pop out latch.
On the rear of the VNX5400, the 3U DPE (Figure 2 on page 8) has two SPs (SP A and
B). Each SP consists of:
One battery backup unit (BBU) providing back-up power for the SP and the disk
drives, allowing the cached data to be flushed to the vault drives whenever an
AC or DC input power loss to the storage system occurs (see Battery backup
unit on page 26 for more information).
Note: The BBU module is secured with a push-pull type of latch.
One base module featuring two integrated 4x lane 6-Gb/s mini-SAS HD
(encryption capable) back-end ports (labeled 1 and 0, respectively).
Note: The base module is secured with a screw-type latch (see Base
module on page 26 for more information).
One management module (see SP management module on page 32 for
more information) featuring:
a.) One RS-232/EIA 232 serial (up to 115 K baud) service laptop (micro DB-9)
port
b.) One RS-232/EIA 232 serial SPS management (micro DB-9) port
c.) One 10/100/1000 LAN network management (RJ-45) port
d.) One 10/100/1000 LAN service (RJ-45) port
Note: The management module is secured with a latch handle (labeled MGMT).
One power supply module (hot-swappable); see SP power supply module on
page 31 for more information. It features:
Note: Three types of power supplies are supported in the VNX5400 storage
system: two AC types and one DC type. For more information on the power
supply types, see SP power supply module on page 31 and the VNX5400 Parts
Location Guide.
a.) One recessed power plug
b.) Three LEDs (labeled with ! for fault, DC and AC); the labels on the LEDs are
printed upside down.
Note: The power supply is secured with a pull latch and handle.
Four PCI Gen 3, 8x lane I/O module slots (A1 through A4 and B1 through B4) are available for
use, supporting:
Note: The maximum number of I/O modules for the VNX5400 is four per SP; any
combination of the following I/O modules, up to four per SP, is supported.
b.) One or two of the following network I/O modules in any combination:
Two-port 10-Gb/s optical or active Twinax; labeled 10 GbE v3 on the
latch handle
Four-port 1-Gb/s copper; labeled 1 GbE on the latch handle
Two-port 10-Gb/s RJ45 Base-T iSCSI/IP; labeled 10 GbE Base-T on the
latch handle
Any required cables, including LAN cables, modem cables, and serial DB-9 cables.
Caution: Array Software on drives 0-3. Removing or relocating them
Figure: DPE front and rear detail showing the disk drive status LEDs: green (on), blue/amber (fault), and white (unsafe to remove) (VNX-000520)
General
On the front of the VNX5400 platform, the DPE comprises the following components:
Drive carrier
Disk drives
Midplane
EMI shielding
Drive carrier
The disk drive carriers are metal and plastic assemblies that provide smooth, reliable
contact with the enclosure slot guides and midplane connectors. Each carrier has a
handle with a latch and spring clips. The latch holds the disk drive in place to ensure
proper connection with the midplane. Disk drive activity/fault LEDs are integrated into the
carrier.
Disk drives
Each disk drive module consists of one disk drive in a carrier. You can visually distinguish
between disk drive types by their different latch and handle mechanisms and by type,
capacity, and speed labels on each disk drive. You can add or remove a disk drive while
the 3U DPE is powered up, but you should exercise special care when removing modules
while they are in use. Disk drives are extremely sensitive electronic components.
IMPORTANT
When calculating the number of drives for your VNX5400 platform, the DPE is included in
the total drive slot quantity of 250 drives. If the total drive slot quantity exceeds 250, you
will not be able to add another DAE. Refer to the Disk-array enclosure section on
page 66 for more information about the available expansion DAEs for the VNX5400
platform.
Midplane
A midplane separates the front-facing disk drives from the rear-facing SPs. It distributes
power and signals to all components in the enclosure. SPs and disk drives plug directly
into the midplane.
EMI shielding
EMI compliance requires a properly installed electromagnetic interference (EMI) shield in
front of the DPE disk drives. When installed in cabinets that include a front door, the DPE
includes a simple EMI shield. Other installations require a front bezel that has a locking
latch and integrated EMI shield. You must remove the bezel/shield to remove and install
the disk drives.
Table 4 describes the VNX5400 platform 3U, 25 DPE, SP, and disk drive status LEDs.
Table 4 VNX5400 platform 3U, 25 DPE, SP, and disk drive status LEDs
LED
Color
State
Description
Amber
On
Fan fault
Off
Amber
On
DPE faulted
Off
Green
On
Off
Powered down
Amber
On
Off
Blue
On
Blinking
15
Table 4 VNX5400 platform 3U, 25 DPE, SP, and disk drive status LEDs (continued)
LED
Color
State
Description
Amber
On (steady)
SP fault
Executing BIOS
Executing Post
Off
Amber
Executing BIOS
Executing Post
On
Degraded mode
Blue
Blue
Off
Powered down
Amber
On
Amber
Blinks at 1, 3, 3,
and 1 times a
second
Memory problem
On
Blue
16
Table 4 VNX5400 platform 3U, 25 DPE, SP, and disk drive status LEDs (continued)
LED
Color
State
Description
White
On
Off
Green
On
SP is powered up normally
Off
SP is powered off
CL5184
17
You have two minutes to remove the faulted fan pack (cooling module) and install a
replacement before the SP shuts down. For more information, refer to the Replacing a
Storage Processor Fan module procedure for the correct steps to take before and during
removal of a fan pack. This procedure is available online at:
https://mydocs.emc.com/VNX/ and go to VNX tasks, select Replace VNX Hardware. Next,
follow the steps in the wizard.
Control panel
VNX-000521
Hard drives
18
ID
10
VNX-000549
Color
State
Description
Green
On
Blinking
Off
Idle
Green
Blinking
Off
Green
On
Blinking
Sleep mode
Off
Power off
19
Color
State
Description
Green
On
Blinking
On
Blinking
Off
Amber
20
Green
On
Blinking
Off
Idle
Not used
Green
On
Powered on
Table 6 describes the 2U Data Mover enclosure status (power and fault) LEDs.
Color
State
Description
Power
Blue
On
Off
Amber
On
Fault
Note: When the enclosure fault LED is amber, look for the
replaceable component within the enclosure that is causing
the fault. Refer to the other status LED definitions in this
section to determine which replaceable component failed.
Off
21
CPU
The CPU modules in the DME contain the power, fault, and unsafe-to-remove LEDs.
Figure 8 shows the CPU LEDs.
Figure 8  CPU LEDs: CPU power LED, CPU fault LED, and CPU unsafe-to-remove LED (CNS-001669)
Color
State
Description
Power
Green
On
Off
Amber
On
Blinking
Fault
Blinking
Unsafe to
remove
Off
White
On
Off
Note: The fault LED changes color from amber to blue when the operating system is loading; see
step 4 in the fault LED description above.
22
Color
State
Description
Power/Fault
Green
On
Amber
Blinking
Amber
On
No power
23
SP B and A
Four PCI Gen 3 x8 I/O module slots (A1 through A4 and B1 through B4) featuring the following
SP I/O module types:
Four-port 8-Gb/s FC optical (running at 2, 4, or 8 Gb/s); labeled 8 GbE Fibre on
the latch handle
Four-port 1-Gb/s Base-T iSCSI I/O module; labeled 1 GbE iSCSI/TOE on the
latch handle
Two-port 10-Gb/s optical or active Twinax; labeled 10 GbE v3 on the latch
handle
Two-port 10-Gb/s RJ45 Base-T iSCSI/IP; labeled 10 GbE Base-T on the latch
handle
Note: This I/O module is not supported when the VNX5400 storage system is
using the low powered (800 W, 100-240 V) power supply (see SP power
supply module on page 31 and the VNX5400 Parts Location Guide for more
information).
Two-port 10-Gb/s Fibre Channel over Ethernet (FCoE); labeled 10 GbE/FCoE on
the latch handle
Two management modules (one per SP) featuring:
Two (RJ-45) LAN connectors (labeled with a network management symbol and a
wrench symbol)
Two (micro DB-9) RS-232/EIA connectors (labeled with a battery symbol and a
wrench symbol)
One USB port (not used)
Figure: SP rear view showing numbered callouts for the SAS ports (X4), power supply (DC and AC), and other SP components (VNX-000522)
Figure 11  Battery backup unit (BBU) showing the push/pull latch and handle (VNX-000523)
The BBU is designed to provide slightly under 12 V DC so that it does not supply power until the power
supply output drops. The power provided is enough to keep one CPU module, one base module,
and four disk drives running long enough to complete two cache vault operations.
Table 9 describes the BBU status LED. See Figure 11 for location of the BBU status LED.
Color
State
Description
Status
Green
On
Off
Amber
On
Faulted or marker
Amber
Blinking
Marked
Base module
Each base module provides two 6-Gb/s PCI-e Gen 3 SAS ports (from left to right labeled 1
and 0, respectively). These ports (see the following illustration) provide an interface for
SAS and NL-SAS drives on the DAE. This port is a 36-pin mini-SAS HD small form-factor
8644 (SFF-8644) specification connector (socket or receptacle) using an SFF-8644
specification mini-SAS HD cable (plug) with a pull (release) tab.
26
Note: The first DAE connection comes from these 6-Gb/s mini-SAS HD ports. This
connection uses a cable with a 36-pin mini-SAS HD small form-factor 8644 (SFF-8644)
specification connector (plug) with a pull (release) tab on one end (see Figure 13 on
page 29) and a 26-pin mini-SAS small form-factor 8088 (SFF-8088) specification
connector (plug) with a pull tab on the other end.
The following illustration shows an example of the 6-Gb/s mini-SAS HD connector (socket)
and pinout.
Figure: 6-Gb/s mini-SAS HD connector (socket) and pinout, rows A1-A9, B1-B9, C1-C9, and D1-D9 (VNXe-000510)
The following tables list the 6-Gb/s mini-SAS HD port pin signals used on the connector
and define the connection requirements of the signal.
Table 10 6-Gb/s mini-SAS HD port connector pinout

Pin  Signal        Pin  Signal
A1   Reserved      C1   SCL
A2   IntL          C2   SDA
A3   Signal GND    C3   Signal GND
A4   Rx 1-         C4   Tx 1+
A5   Rx 1+         C5   Tx 1-
A6   Signal GND    C6   Signal GND
A7   Rx 3+         C7   Tx 3+
A8   Rx 3-         C8   Tx 3-
A9   Signal GND    C9   Signal GND
B1   Vact          D1   Vact
B2   ModPrsL       D2   Vman
B3   Signal GND    D3   Signal GND
B4   Rx 0+         D4   Tx 0+
B5   Rx 0-         D5   Tx 0-
B6   Signal GND    D6   Signal GND
B7   Rx 2+         D7   Tx 2+
B8   Rx 2-         D8   Tx 2-
B9   Signal GND    D9   Signal GND
Signal
Connection requirements
Intl
Active Low Module Interrupt: The cable assembly asserts this pin to indicate
an interrupt bit has been set to one in the management interface memory
map. This pin is connected to Vman on the receptacle side of the management
interface. The source of the interrupt may be identified using the 2-wire serial
management interface. If a cable assembly does not support interrupts, then
all interrupt bits in the cable management interface memory map are set to
zero and the cable assembly negates this pin (e.g., all interrupt bits of a
passive cable assembly may be programmed to a clear state and the IntL pin
not connected on the cable plug side of the management interface).
ModPrsL
Active Low Module Present: On the cable plug side of the management
interface, ModPrsL is connected directly to the signal ground pins specified in
Table 10 on page 27. ModPrsL is connected to Vman on the receptacle side of
the management interface to negate this signal when the plug is not fully
mated to the receptacle.
Reserved
This pin is not connected on the receptacle side and cable plug side of the
management interface.
SCL
SDA
Vact
Vman
Figure 13 shows an example of a mini-SAS HD cable connector (plug) with pull tab and
pinout.
Figure 13  Mini-SAS HD cable connector (plug) with white (release) pull tab, rows A1-A9, B1-B9, C1-C9, and D1-D9 (VNXe-000509)
IMPORTANT
When connecting the mini-SAS HD cable connector (plug) into the Base module ports
(sockets) 0 and 1, be careful of the orientation of the cable end with the port. On the Base
module, the ports have nubs (or keys), while the cable end has a notch. This notch aligns
with the nub (or key) in the port. On the other side of the cable end is a white release tab
opposite the cable notch.
To connect, align the notch with the nub (or key) in the port, and then gently slide the
cable into the port until you hear a small click.
Do not force the cable into the port.
A video describing how to properly connect mini-SAS HD cables and mini-SAS cables to a
DPE and a DAE, respectively in a VNX product is available online at:
https://edutube.emc.com/, in the Search box, type in Mini-SAS HD Cable Connectivity.
The video will start immediately.
Below each port, a blue SAS link LED (labeled x4) is provided. The base module plugs directly
into the midplane of the enclosure. The module cannot be removed safely while the SP is
running; the unsafe to remove LED (a white hand with a right diagonal line through it)
lights, and removing the module causes the SP to reboot immediately. To the left of the
unsafe to remove LED is the power/fault LED (bi-colored green/amber). The push/pull knob
releases and seats the base module in the SP enclosure (turn left and pull to release the
base module from the enclosure; push in and turn right to seat the base module into the enclosure).
Figure: Base module showing the 6-Gb/s SAS ports, x4 link LEDs, and push/pull knob (VNX-000524)
Color
State
Description
Blue
On
Link
Blinking
once every
second
Port is marked
Off
No link
Green
On
Operating normally
Amber
On
Faulted
Off
Not powered
White
On
Power/fault
Unsafe to
remove
30
Off
DC Power supply
Refer to the VNX5400 Parts Location Guide for the correct part numbers.
IMPORTANT
For VNX5400 systems with the 200-240 V AC power supply (models
VNX54DPxxx/VNXB54DPxxx), at least two 200-240 V AC circuits are required for higher
availability. For VNX systems with the 100-240 V AC power supply (model VNX54VPxxx), at
least two 100-240 V AC circuits are required for higher availability. For full power
specifications, go to https://mydocs.emc.com and select View technical specifications
under the About VNX section.
For VNX models with the DC power supply, see the requirements in the DC-Powered VNX
Series Enclosures Installation and Operation Guide. For full power specifications, go to
https://mydocs.emc.com and select View technical specifications under the About VNX
section.
Do not remove the SP power supply module while the SP is plugged in. Power supply
module removal for more than a few minutes can cause the SP to shut down due to lack of
cooling. Refer to the Replacing a Power Supply (PS) in a DPE procedure for the correct
steps to take before and during removal of an SP power supply module assembly from
the base module enclosure in a DPE. This procedure is available online at
https://mydocs.emc.com/VNX/ and go to VNX tasks, then select Replace VNX hardware.
Next, follow the steps in the wizard.
Figure 15 SP latch, AC power supply (power in) recessed connector (plug), and status LEDs
Color
State
Description
Fault
Amber
On
Blinking
Off
Green
On
DC Power on
Off
Green
On
AC Power on
Off
DC power
AC power
SP management module
The SP management module provides the management connections via one
10/100/1000 Ethernet (RJ-45) port. Another RJ-45 port is available to support a service
laptop connection. The SP management module includes two RS-232/EIA 232 (DB-9)
serial socket connectors (one for service laptop connection and the other for an SPS
connection), a USB port (not used), and several LEDs (Figure 16 on page 33).
Color
State
Description
Power/Fault
Green
On
Amber
On
34
Off
Link (each
port has
one)
Green
On
Network connection
Off
No network connection
Activity
(each port
has one)
Amber
Blinking
Transmit/receive activity
Off
No network activity
Figure: SP management module micro DB-9 serial connectors, pins 1 through 9 (VNX-000582)
Table 15 lists the SP management module serial (micro DB-9) pin signals used on the
connectors.
Signal  Description
CD      Carrier detect
TXD     Transmitted data
RXD     Received data
DTR     Data terminal ready
GND     Ground
DSR     Data set ready
RTS     Request to send
CTS     Clear to send
RI      Ring indicator
SP null modem (micro DB-9 to DB-9 serial) cable The cable connecting the SP
management module to the PC or service laptop is a micro DB-9 cable (plug) to serial DB-9
(socket). It has a micro DB-9 plug (SP side) on one end and a serial DB-9 socket (PC or
service laptop side) on the other end. Figure 19 shows an example of an SP management
module to PC (service laptop) cable.
Figure 19  SP management module (micro DB-9) to PC or service laptop (DB-9) serial cable (VNX-000093)
AC power in connector
Five (RJ-45) connectors (labeled A, CS, B, and two [one not used] MGMT)
Note: The RJ-45 connectors (labeled CS and A, respectively) are integrated into the
rear of the 1U Control Station while the RJ-45 connectors (labeled B and MGMT,
respectively) are on a PCI-e card in the expansion slot on the rear of the Control
Station.
Figure 20  Control Station (rear view) connectors (VNX-000525)
1. The CS port uses an IPMI (Intelligent Platform Management Interface) cable to connect to a standby (optional)
Control Station (CS1).
36
Five Ethernet (RJ-45) ports (one not used [labeled MGMT], see location 2 in Figure 20
on page 36)
To avoid electric shock, do not connect safety extra-low voltage (SELV) circuits to
telephone-network voltage (TNV) circuits. LAN ports contain SELV circuits, and WAN ports
contain TNV circuits. Some LAN and WAN ports both use RJ-45 connectors. Use caution
when connecting cables.
The Control Station Ethernet (RJ-45) ports support 10BASE-T, 100BASE-TX, and 1000BASE-T connections.
Figure: Control Station Ethernet (RJ-45) connector, pins 8 through 1 (CNS-001749)
Table 17 lists the Control Station Ethernet (RJ-45) pin signals used on the connector.
Signal   Description
BI_DA+   Bidirectional pair A, +
BI_DA-   Bidirectional pair A, -
BI_DB+   Bidirectional pair B, +
BI_DC+   Bidirectional pair C, +
BI_DC-   Bidirectional pair C, -
BI_DB-   Bidirectional pair B, -
BI_DD+   Bidirectional pair D, +
BI_DD-   Bidirectional pair D, -
Table 18 describes the link/activity and connection speed associated with the Control
Station (RJ-45) port LEDs.
Table 18 Control Station RJ-45 port LEDs

Left, link/activity (see location 1):
  Green, On: Network/link connection
  Green, Blinking: Transmit/receive activity
  Off: No network/link connection
Right, link speed (see location 2):
  Green, On: 100-Mb/s connection
  Amber, On
  Off
Ethernet cable extensions for the Control Station B and MGMT ports
Each File/Unified VNX5400 platform 1U Control Station comes with two modular Ethernet
cable extensions (or patch cords) for the RJ-45 ports (labeled on the CS as B and MGMT,
respectively). These cables (Figure 23) allow you to extend the length of the Ethernet
cables from the CS 0, port B to Data Mover enclosure 0, management module B, port 1
and CS 0, MGMT port to the public LAN.
If your File/Unified VNX5400 platform includes a second optional 1U Control Station
(CS 1), another set of Ethernet cable extensions for the RJ-45 ports is provided. These
cables allow you to extend the length of the Ethernet cables from the CS 1, port B to Data
Mover enclosure 0, management module B, port 2 and CS 1, MGMT port to the public LAN.
Each cable includes a corresponding label clip to assist you during system cabling.
Note: If you received the File/Unified VNX5400 platform already installed in a cabinet rack
with all of the File/Unified VNX5400 platform components, all the cabling has already
been installed.
Figure 23  Ethernet cable extension (patch cord) with label clip (VNX-000564)
Figure: Control Station serial (DB-9) connector, pins 1 through 9 (VNX-000526)
Table 19 lists the 1U Control Station serial (DB-9) pin signals used on the connector.
Table 19 Control Station (DB-9) plug connector pinout
Signal  Description
CD      Carrier detect
RXD     Received data
TXD     Transmitted data
DTR     Data terminal ready
GND     Ground
DSR     Data set ready
RTS     Request to send
CTS     Clear to send
RI      Ring indicator
Table 20 lists the 1U Control Station modem (DB-9) pin signals used on the connector.
Table 20 Control Station modem (DB-9) plug connector pinout
Signal  Description
CD      Carrier detect
RXD     Received data
TXD     Transmitted data
DTR     Data terminal ready
GND     Ground
DSR     Data set ready
RTS     Request to send
CTS     Clear to send
RI      Ring indicator
The File/Unified VNX5400 Data Mover management module contains LAN ports. LAN
ports contain safety extra-low voltage (SELV) circuits, and WAN ports contain
telephone-network voltage (TNV) circuits. To avoid electric shock, do not connect TNV
circuits to SELV circuits. Some LAN and WAN ports both use RJ-45 connectors. Use caution
when connecting cables.
To access the Ethernet ports, connect a Category 3, 4, 5, 5E, or 6 unshielded twisted-pair
(UTP) cable to the RJ-45 connector on the back of the management module (Table 16 on
page 37).
Since the Control Station and the management module have the same type of RJ-45 ports,
Control Station Ethernet (RJ-45) ports on page 37 provides detailed information about
the management module ports, connector, and adapter.
Figure: Data Mover management module showing the numeric display (Data Mover enclosure ID) and status LEDs (CNS-001671)
Color
State
Description
Power/Fault
Green
On
Amber
On
Off
Link (each
port has one)
Green
On
Network connection
Off
No network connection
Activity (each
port has one)
Amber
Blinking
Transmit/receive activity
Off
No network activity
Numeric
(7-segment)
display for
enclosure ID
On
43
Figure 29 Data Mover management module serial console (DB-9) socket connector
Table 22 lists the Data Mover management module serial console (DB-9) pin signals used on
the connector.
Signal  Description
CD      Carrier detect
TXD     Transmitted data
RXD     Received data
DTR     Data terminal ready
GND     Ground
DSR     Data set ready
RTS     Request to send
CTS     Clear to send
RI      Ring indicator
I/O modules
Several types of I/O modules are supported in the Block, File, and Unified VNX5400. The
SP supports five types of I/O modules (see SP I/O module types on page 46) and the
Data Mover supports four types (see Data Mover I/O module types on page 57). In this
section, each I/O module description includes the type of port (copper or optical) as well
as a description of the LEDs.
LC type interface
The LC type interface was developed by Lucent Technologies (hence, Lucent Connector). It
uses a push-pull mechanism. LC connectors are normally held together in a multimode
duplex configuration with a plastic clip. These cables are usually colored orange for OM2
multimode optical fiber type cables and aqua for OM3 multimode optical fiber type
cables. These cables have the duplex connectors encased in a gray plastic covering. To
determine the send or transmit (TX) and receive (RX) ferrules (connector ends), these
cables will show a letter and numeral (for example A1 and A2 for the TX and RX,
respectively) or a white and yellow rubber gasket (jacket) for the send or transmit (TX) and
receive (RX) ends (Figure 31 on page 46).
Figure 31  Example of an orange LC-type duplex optical cable showing the TX and RX connector ends (CNS-001102)
Two-port 10-Gb/s optical or active Twinax Fibre Channel over Ethernet (FCoE) I/O
module on page 51
Figure: Four-port 8-Gb/s FC optical I/O module showing the power/fault LED (CNS-001752)
47
I/O modules
The four-port 8-Gb/s FC I/O module uses SFP+ transceiver modules to connect to LC-type
optical fibre cables (LC type interface on page 45). These SFP+ transceiver modules are
input/output (I/O) devices that plug into the FC port of the FC I/O modules. For more
information about these SFP+ transceiver modules, see the I/O modules section on
page 44.
Figure: Four-port 8-Gb/s FC optical I/O module link/activity LEDs (one per port) (CNS-001670)
Color
State
Description
Power/Fault
Green
On
Amber
On
Off
Green
On
Blue
On
Green or
Blue
Blinking
Off
No network connection
Link/Activity
(each port
has one LED)
1. Refer to the VNX5400 Parts Location Guide for the correct SFP+ part number.
LED
Color
State
Description
Power/Fault
Green
On
Amber
On
Off
Link (each
port has
one)
Green
On
Network connection
Off
No network connection
Activity
(each port
has one)
Amber
Blinking
Transmit/receive activity
Off
No activity
Two-port 10-Gb/s optical or active Twinax Fibre Channel over Ethernet (FCoE) I/O module
The two-port 10-Gb/s optical or active Twinax (see note 6) FCoE I/O module (labeled 10 GbE/FCoE on
the latch handle) comes with two FCoE ports, one power/fault LED, and a link and activity
LED for each port (Figure 36). The ports on this I/O module can interface at speeds up to
10 Gb/s for Fibre Channel over Ethernet networks. The two-port 10-Gb/s FCoE I/O module
uses the SFP+ transceiver module. For part number label location, see the VNX5400 Parts
Location Guide available online at https://mydocs.emc.com/VNX/: go to Additional
VNX documentation, select the related documentation software for the model
desired, go to VNX Hardware Parts, and then select the VNX5400 Parts Guide.
6. The FCoE I/O module can also use active twinaxial (Twinax) cables. Twinax is a type of cable
similar to coax, but with two inner conductors instead of one. These cables will be supplied in lieu
of the SFP+ transceiver module when so ordered.
Color
State
Description
Power/Fault
Green
On
Amber
On
Off
Green
On
Network connection
Off
No network connection
Amber
Blinking
Transmit/receive activity
Off
No activity
Link
Activity
State
Description
Power/Fault Green
On
Amber
On
Off
Green
On
Network connection
Off
No network connection
Amber
Blinking
Transmit/receive activity
Off
No activity
Link
Activity
54
Color
State
Description
Power/Fault Green
On
Amber
On
Off
Green
On
Network connection
Off
No network connection
Amber
Blinking
Transmit/receive activity
Off
No activity
Link
Activity
56
Color
Be careful when replacing or swapping out SFP+ modules; your Data Mover will lose
access to the SP or tape drive to which it is connected.
This means that you can install and remove an SFP+ module while the File/Unified
VNX5400 platform is operating.
Color
State
Description
Power/Fault
Green
On
Amber
On
Off
Green
On
Blue
On
Green or
Blue
Blinking
Off
No network connection
Link/Activity
(each port
has one
LED)
1. Refer to the VNX5400 Parts Location Guide for the correct SFP+ part number.
Color
State
Description
Power/Fault
Green
On
Amber
On
Off
Link (each
port has
one)
Green
On
Network connection
Off
No network connection
Activity
(each port
has one)
Amber
Blinking
Transmit/receive activity
Off
No activity
State
Description
Power/Fault Green
On
Amber
On
Off
Green
On
Network connection
Off
No network connection
Amber
Blinking
Transmit/receive activity
Off
No activity
Link
Activity
Color
State
Description
Power/Fault Green
On
Amber
On
Off
Green
On
Network connection
Off
No network connection
Amber
Blinking
Transmit/receive activity
Off
No activity
Link
Activity
Color
Disk-array enclosure
Lifting the DAE and installing it into or removing it from a rack is a two- or three-person
job. If needed, use an appropriate lifting device. A fully loaded 2U DAE, 3U DAE, or 4U DAE
weighs approximately 45 lb (20.41 kg), 68 lb (30.84 kg), or 213 lb (96.62 kg),
respectively.
For more information about the weight and dimensions of a VNX5400 platform DAEs, go
to https://mydocs.emc.com/VNX/ and go to the About VNX section, then select View
technical specifications. Next, follow the steps in the wizard.
The VNX5400 platform supports expansion with three types of disk-array enclosures
(DAEs) across a 6-Gb/s SAS bus.
The VNX5400 platform supports up to nine 2U, 25 (2.5-inch) DAEs (for a total of 225,
2.5-inch disk drives), up to fifteen 3U, 15 (2.5- or 3.5-inch) DAEs (for a total of 225, 2.5- or
3.5-inch disk drives), or up to three 4U, 60 (2.5- or 3.5-inch) DAEs (for a total of 180, 2.5- or 3.5-inch disk drives).
Configurations with mixtures of 4U and 3U or 2U DAEs are also possible, depending on the
drive slot count. However, if the 4U DAE is used as part of a mixture of DAE types, the
Dense rack is always required because of the depth of the 4U DAE, and the storage
system is not customer installable.
IMPORTANT
As described in the previous paragraph, you cannot build an environment beyond the
supported software and hardware requirements for that VNX5400 platform. Do not try to
add more disk drives than the software can support.
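As a sketch of the slot counting behind these limits (hypothetical helper names; not part of any VNX software), the following Python fragment totals the drive slots of a mixed DAE configuration and flags it if it exceeds the 250-slot limit (DPE included) or the per-type DAE maximums stated above.

# Illustrative sketch only: validates a mixed DAE configuration against the limits
# stated in this section. Names and structure are hypothetical.
DPE_SLOTS = 25
DAE_SLOTS = {"2U": 25, "3U": 15, "4U": 60}
MAX_DAES  = {"2U": 9,  "3U": 15, "4U": 3}
MAX_SLOTS = 250

def validate_config(dae_counts):
    """dae_counts maps a DAE type ('2U', '3U', '4U') to how many are installed."""
    for dae_type, count in dae_counts.items():
        if count > MAX_DAES[dae_type]:
            return False, f"too many {dae_type} DAEs ({count} > {MAX_DAES[dae_type]})"
    total = DPE_SLOTS + sum(DAE_SLOTS[t] * n for t, n in dae_counts.items())
    if total > MAX_SLOTS:
        return False, f"{total} drive slots exceeds the {MAX_SLOTS}-slot limit"
    return True, f"{total} drive slots used"

# Example: two 4U DAEs plus four 2U DAEs gives 25 + 120 + 100 = 245 slots (valid).
print(validate_config({"4U": 2, "2U": 4}))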
General
Each VNX5400 platform DAE typically consists of the following components:
Drive carrier
Disk drive
Midplane
EMI shielding
Drive carrier
In a 2U and 3U DAE, the disk drive carriers are metal and plastic assemblies that provide
smooth, reliable contact with the enclosure slot guides and midplane connectors. Each
carrier has a handle with a latch and spring clips. The latch holds the disk drive in place to
ensure proper connection with the midplane. Disk drive activity/fault LEDs are integrated
into the carrier (Figure 57 on page 76 and Figure 50 on page 69).
For more information about the drive carrier in a 4U DAE, see the 4U, 60 (2.5- or 3.5-inch)
DAE section on page 83.
Disk drives
Each disk drive module consists of one disk drive in a carrier. You can visually distinguish
between disk drive types by their different latch and handle mechanisms and by type,
capacity, and speed labels on each disk drive. You can add or remove a disk drive while
the DAE is powered up, but you should exercise special care when removing disk drives
while they are in use. Disk drives are extremely sensitive electronic components.
IMPORTANT
The 4U DAE cannot use disk drives from a 2U or 3U DAE. The 4U DAE employs different
types of SAS or Flash disk drives.
Midplane
In a 2U or 3U DAE, a midplane separates the front-facing disk drives from the rear-facing
LCCs and power supply/cooling modules. It distributes power and signals to all
components in the enclosure. LCCs, power supply/cooling modules, and disk drives plug
directly into the midplane.
9. The 4U, 60 disk drive DAE includes Inter Connect Modules (ICMs). 4U, 60 (2.5- or 3.5-inch) DAE
on page 83 provides more information about the 4U, 60 disk drive DAE.
10. The 4U, 60 disk drive DAE has separate power supplies and cooling modules (fans).
LCCs
In a 2U or 3U DAE, an LCC supports, controls, and monitors the DAE, and is the primary
interconnect management element. Each LCC includes connectors for input and
expansion to downstream devices. An enclosure address (EA) indicator is located on each
LCC (Figure 63 on page 83 and Figure 56 on page 75) (see note 11). Each LCC includes a bus (loop)
identification indicator (Figure 63 on page 83 and Figure 56 on page 75).
In a 4U DAE, the primary functionality of an LCC is to be a SAS expander as well as to
provide enclosure services to all the disk drives (60 in all). In other words, the LCC in a 4U
DAE (Figure 70 on page 89) implements a version of the Common Disk Enclosure
Sub-system (CDES) architecture. CDES consists of the PMC-Sierra PM8005 SXP 6G SAS
expander, the Common Disk Enclosure FPGA (CDEF) and supporting logic.
In the 4U DAE LCC, two SAS expanders are available. As previously described, the SAS
expanders are PMC-Sierra SXP36 6G (PM8005, rev C) components. Each expander
functions or operates separately. That is, each expander has its own CDEF and supporting
logic to support 30 drives each. A 4-lane SAS wide port connecting each expander to the
Inter Connect Module (ICM) expander on the same side (A or B) of the 4U DAE is available.
Each expander manages the drives it is connected to. The only shared resources are the
LCC LED and the expander I2C (inter-integrated circuit) bus.
Power supply
In a 2U or 3U DAE, the power supply/cooling module integrates independent power
supply and blower cooling assemblies into a single module.
Each power supply is an auto-ranging power-factor-corrected, multi-output, off-line
converter with its own line cord. The drives and LCC have individual soft-start switches
that protect the disk drives and LCC if you install them while the disk enclosure is
powered up. A disk or blower with power-related faults will not affect the operation of any
other device.
In a 2U or 3U DAE, each power/cooling module has three status LEDs (Figure 59 on
page 79 and Figure 52 on page 71).
In a 4U DAE, the power supplies (Figure 79 on page 99) and cooling modules (Figure 72 on
page 91) are separated and located at opposite ends of the 4U DAE. The power supplies
are located on the rear of the 4U DAE while the cooling modules or fans are located toward the
front of the 4U DAE. The power supplies can be installed/removed from the rear of the DAE
while the cooling modules or fans can only be installed/removed by sliding the DAE
forward, then sliding the DAE cover to the rear. You access the cooling modules or fans
from inside the DAE (see the Access to disk drives, LCCs, and cooling modules section
on page 84 for more information).
Cooling modules
In a 2U or 3U DAE, the enclosure cooling system consists of dual-blower modules in each
power supply/cooling module.
In a 4U DAE, the cooling modules are separate from the power supply modules.
11. The EA is sometimes referred to as an enclosure ID.
EMI shielding
EMI compliance requires a properly installed electromagnetic interference (EMI) shield in
front of the DAE disk drives. When installed in cabinets that include a front door, the DAE
includes a simple EMI shield. Other installations require a front bezel that has a locking
latch and integrated EMI shield. You must remove the bezel/shield to remove and install
the disk drive modules.
Status LEDs
Figure 50  Example of a 2U, 25 (2.5-inch) disk drive DAE (front view) (VNX-000276)
Table 32 describes the 2U, 25 (2.5-inch) DAE and disk drive status LEDs.
Color
State
Description
Blue
On
Amber
On
Blue
On
Off
Powered down
Amber
On
Off
Blue
On
Blinking
Figure: Example of a 2U, 25 (2.5-inch) DAE (rear view) showing the two LCCs, 6-Gb/s SAS ports (X4), and the LCC B bus ID (VNX-000280)
Figure 52 Example of a 2U, 25 (2.5-inch) DAE AC power supply/cooling module power in (recessed)
connector (plug) and status LEDs
Table 33 describes the 2U, 25 (2.5-inch) DAE power supply/cooling module LEDs.
Color
State
Description
Power fault
Amber
On
Fault
Blinking
Off
Green
On
Power on
Off
Power off
Power on
The power supply/cooling modules are located to the left and right of the LCCs. The units
integrate independent power supply and dual-blower cooling assemblies into a single
module.
Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline
converter with its own line cord. Each supply supports a fully configured DAE and shares
load currents with the other supply. The drives and LCCs have individual soft-start
switches that protect the disk drives and LCCs if they are installed while the disk
enclosure is powered up. The enclosure cooling system includes two dual-blower
modules.
Figure 53 shows an example of the port connector (socket) and cable connector (plug)
with pull tab.
Figure 53  6-Gb/s SAS port connector (socket) and cable connector (plug) with pull tab, pins A1-A13 and B1-B13 (VNX-000094)
Table 34 lists the 2U, DAE 6-Gb/s SAS port pin signals used on the connector.
Pin  Signal  Pin  Signal
A1   GND     B1   GND
A2   Rx 0+   B2   Tx 0+
A3   Rx 0-   B3   Tx 0-
A4   GND     B4   GND
A5   Rx 1+   B5   Tx 1+
A6   Rx 1-   B6   Tx 1-
A7   GND     B7   GND
A8   Rx 2+   B8   Tx 2+
A9   Rx 2-   B9   Tx 2-
A10  GND     B10  GND
A11  Rx 3+   B11  Tx 3+
A12  Rx 3-   B12  Tx 3-
A13  GND     B13  GND
Figure: 2U DAE LCC showing the 6-Gb/s SAS ports (X4), latch handle, and link LED (VNX-000274)
Color
State
Description
Link/activity
Blue
On
Green
On
Alternating Blinking
Blue/Green
Not connected
Off
The LCC (RJ-12) port is a LAN port, not a WAN port. LAN ports contain safety extra-low
voltage (SELV) circuits, and WAN ports contain telephone-network voltage (TNV) circuits.
An RJ-45 (or TNV-type) connector looks the same as the RJ-12 except for two very important
differences. An RJ-45 is an 8-wire modular jack; the RJ-12 is a six-wire modular jack. RJ-45
plugs and jacks are wider than their RJ-12 counterparts (7/16 inch versus 3/8 inch). An RJ-45
plug will not fit into an RJ-12 jack, but an RJ-12 plug will fit into an RJ-45 jack. Use caution
when connecting cables. To avoid electric shock, do not attempt to connect TNV circuits
to SELV circuits.
Figure: 2U DAE LCC enclosure ID and status LEDs (VNX-000106, VNX-000277)
Color
State
Description
Power on
Green
On
Power on
Off
Power off
Amber
On
Fault detected
Off
Power fault
Status LEDs
Figure 57 Example of a 3U, 15 (2.5- or 3.5-inch) disk drive DAE (front view)
Table 37 describes the VNX5400 platform DAE and the 3.5-inch disk drive status LEDs.
LED
Color
State
Description
Amber
On
Green
On
Blue
On
Off
Powered down
Disk-array enclosure
Color
State
Description
Amber
On
Off
Green
On
Blinking, mostly
on
Blinking at
constant rate
Blinking, mostly
off
Off
77
Figure 58 shows an example of the rear view of a 3U, 15 (3.5-inch) disk drive DAE.
Figure 58 Example of a 3U, 15 (3.5-inch) disk drive DAE with two LCCs and two power
supply/cooling modules (rear view)
As shown in Figure 58, an enclosure ID indicator is located on each LCC. Each LCC also
includes a bus (back-end port) identification indicator. The SP initializes the bus ID when
the operating system is loaded.
Note: An LCC might be in either the A slot, as shown, or the B slot above it, depending on
the DAE placement within a system. For example, the front DAE in some systems is in slot
A; the rear enclosure LCC is inverted, and in slot B.
Figure 59 Example of a 3U, 15 (3.5-inch) DAE AC power supply/cooling module power in (recessed)
connector (plug) and status LEDs
Table 38 describes the 3U, 15 (3.5-inch) DAE power supply/cooling module LEDs.
Table 38 3U, 15 (3.5-inch) disk drive DAE AC power supply/cooling module LEDs
Led
Color
State
Description
Power on
Green
On
Power on
Off
Power off
Amber
On
Fault
Blinking
Off
Amber
On
Off
Power fault
Fan fault
The power supply/cooling modules are located above and below the LCCs. The units
integrate independent power supply and dual-blower cooling assemblies into a single
module.
Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline
converter with its own line cord. Each supply supports a fully configured DAE and shares
load currents with the other supply. The drives and LCCs have individual soft-start
switches that protect the disk drives and LCCs if they are installed while the disk
enclosure is powered up.
The enclosure cooling system includes two dual-blower modules.
Figure: 3U DAE LCC 6-Gb/s SAS port connector (socket) and cable connector (plug) with pull tab, pins A1-A13 and B1-B13 (VNX-000094)
A video describing how to properly connect mini-SAS HD cables and mini-SAS cables to a
DPE and a DAE, respectively, in a VNX product is available online at:
https://edutube.emc.com/, in the Search box, type in Mini-SAS HD Cable Connectivity.
The video will start immediately.
Note: The first half of the video shows an example of how to connect a mini-SAS HD cable
to a mini-SAS HD port while the second half shows how to connect a mini-SAS cable to a
DAE LCC port.
Table 39 lists the 3U DAE LCC 6-Gb/s SAS port pin signals used on the connector.
Pin  Signal  Pin  Signal
A1   GND     B1   GND
A2   Rx 0+   B2   Tx 0+
A3   Rx 0-   B3   Tx 0-
A4   GND     B4   GND
A5   Rx 1+   B5   Tx 1+
A6   Rx 1-   B6   Tx 1-
A7   GND     B7   GND
A8   Rx 2+   B8   Tx 2+
A9   Rx 2-   B9   Tx 2-
A10  GND     B10  GND
A11  Rx 3+   B11  Tx 3+
A12  Rx 3-   B12  Tx 3-
A13  GND     B13  GND
Figure: 3U DAE LCC 6-Gb/s SAS ports (X4) and link/activity LEDs (VNX-000101)
Color
State
Description
Link/activity
Blue
On
Green
On
Alternating Blinking
Blue/Green
Not connected
Off
The LCC (RJ-12) port is a LAN port, not a WAN port. LAN ports contain safety extra-low
voltage (SELV) circuits, and WAN ports contain telephone-network voltage (TNV) circuits.
An RJ-45 (or TNV-type) connector looks the same as the RJ-12 except for two very important
differences. An RJ-45 is an 8-wire modular jack; the RJ-12 is a six-wire modular jack. RJ-45
plugs and jacks are wider than their RJ-12 counterparts (7/16 inch versus 3/8 inch). An RJ-45
plug will not fit into an RJ-12 jack, but an RJ-12 plug will fit into an RJ-45 jack. Use caution
when connecting cables. To avoid electric shock, do not attempt to connect TNV circuits
to SELV circuits.
Figure 63 Example of an LCC B enclosure ID, bus ID, and LCC status LEDs
Color
State
Description
Power fault
Amber
On
Fault
Off
Green
On
Power on
Off
Power off
Power on
Access to internal components in a 4U, 60 DAE mounted 31U (4.5 feet or 1.38 meters) or
more above the floor requires special equipment and is restricted to authorized service
personnel only. Attempts to service disks, fans, or LCCs mounted 31U or higher without
appropriate tools and personnel might result in serious personal injury.
The 4U, 60 (2.5- or 3.5-inch) DAE (DAE7S) includes up to 60, 2.5- or 3.5-inch disk drives.
Supporting 6-Gb/s data transfer speeds, this DAE has the following hardware
components: three fans (or cooling modules), 60 disks (30 per side), two Link Control
Cards (LCCs), two Inter Connect Modules (ICMs), and two power supplies.
To replace or add any of these components, refer to their respective Customer
Replaceable Unit (CRU) procedure for the 4U, 60 DAE. For example, to replace a disk drive,
refer to the Replacing a disk in a 60-disk enclosure document available online at
https://mydocs.emc.com/VNX/ and go to VNX Tasks, then select Replace VNX hardware.
Next, follow the steps in the wizard.
Figure 64 4U, 60 (2.5- or 3.5-inch) DAE (unlocking top, front ring pull latch mechanism and bottom
slide extension release levers)
Note: If the 4U, 60 DAE does not slide out of the rack, verify that all the other DAEs are
completely seated in the rack by pushing firmly on them.
Figure 65 shows an example of a 4U DAE with the top cover closed.
Figure 66 shows an example of a 4U DAE with the top cover open showing the disk drives,
LCCs, and the cooling modules or fans.
The ICMs and power supplies shown in Figure 67 are accessed from the rear of the 4U
DAE. Rear view on page 92 provides more information.
Disk drives
The disk drives for the 4U DAE are encased in cartridge-style enclosures. This enclosure is
used so that varied types and sizes of disk drives can be supported. Each cartridge has an
easy-to-pull and push latch. The latch allows you to quickly and efficiently snap-out a disk
drive for removal and snap-in for installation.
Two drive sizes are supported in the 4U DAE: 2.5-inch and 3.5-inch.
You can add or remove a disk drive while the DAE is powered up, but you should exercise
special care when removing modules while they are in use. Drive modules are extremely
sensitive electronic components.
Figure 68 shows a top-down cut-away interior view of 4U, 60 DAE showing the location of
the disk drives, fans (cooling modules), and LCC A.
Figure 68 4U, 60 (2.5- or 3.5-inch) top-down cut-away of disk drives, fans (cooling modules), and
LCC A (interior view)
Figure 69 4U, 60 DAE disk drive layout and notation (top-down interior view)
Note: The labels for the banks, slots, and LCCA shown in Figure 69 are the physical labels
in the 4U DAE.
LCC
Each 4U, 60 DAE includes two LCCs. The primary function of each LCC is to be a SAS
expander providing services to 30 drive slots per LCC in the 4U, 60 DAE.
The LCC implements Common Disk Enclosure Subsystem (CDES). CDES consists of a
6-Gb/s SAS expander, Common Disk Enclosure FPGA (CDEF), and supporting logic.
The primary components on the LCC are the two SAS expanders. A four-lane SAS wide port
connecting each expander to the ICM expander on the same side (A or B) of the 4U, 60
DAE is available. Each LCC independently monitors the environmental status of the entire
enclosure, using a microcomputer-controlled monitor program. The monitor
communicates the status to the storage processor, which polls disk enclosure status.
Figure 70 shows the location of the status LEDs on the 4U, 60 DAE LCC.
Figure 70 Example of the status LEDs on the 4U, 60 DAE LCC (LCC A)
LED     Color   State   Description
Power   Green   On      Power on
                Off     Power off
Fault   Amber   On      Power fault
                Off     No fault
Figure 71 Example of a 4U, 60 DAE fan control module showing the fan fault LED
LED         Color   State   Description
Fan fault   Amber   On      Fan fault detected
                    Off     No fault detected
Front view
On the front, viewing from left to right, the 4U, 60 DAE includes three fans (or cooling modules) and two status LEDs.
Figure 72 shows the location of the fans (cooling modules) and the 4U, 60 DAE status LEDs.
Figure 72 Example of the 4U, 60 DAE front view showing the fans (cooling modules) and status LEDs
LED         Color   State   Description
DAE power   Blue    On      Powered up
                    Off     Powered down
Fault       Amber   On      Fault detected
                    Off     No fault detected
Rear view
On the rear, viewing from left to right, a 4U, 60 (2.5- or 3.5-inch) DAE includes two 6 Gb/s
SAS ICMs (A and B) and two power supply modules (A and B) as shown in Figure 73.
Figure 73 Example of a 4U, 60 DAE with two ICMs and two power supply/cooling modules (rear view)
ICM
The 4U, 60 DAE external interfaces are provided through the ICM. The ICM is the primary interconnect management element (Figure 74).
The ICM is a plug-in module that includes a USB connector, an RJ-12 management connector (not used), a bus ID indicator, an enclosure ID indicator, and two input SAS connectors and two output SAS connectors with corresponding LEDs that indicate the link and activity of each SAS connector.
The ICM is hot-swappable. It has a built-in thumbscrew for ease of installation and removal.
Figure 74 Example of the 4U, 60 DAE ICM showing its connectors, ID indicators, and thumbscrew
As described previously, the ICMs in a 4U, 60 DAE connect to the DPE and other DAEs with
6-Gb/s SAS cables. The cables connect the ICMs in a system in a daisy-chain topology.
As shown in Figure 74 on page 93, an enclosure ID indicator (the enclosure ID is sometimes referred to as the enclosure address, or EA) is located on each ICM. Each ICM also includes a bus (back-end port) identification indicator. The SP initializes the bus ID when the operating system is loaded.
Table 45 describes the ICM status LEDs.
Table 45 ICM status LEDs

LED           Color   State   Description
Power on      Green   On      Power on
                      Off     Power off
Power fault   Amber   On      Fault
                      Off     No fault
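Read as a lookup, Table 45 collapses to a small mapping from LED color and state to meaning. The following Python fragment is a purely illustrative sketch (the names ICM_STATUS_LEDS and describe_icm_led are hypothetical and not part of any EMC tooling); it simply encodes the table so an observed color/state combination can be checked programmatically.

```python
# Illustrative only: encodes Table 45 (ICM status LEDs). Names are hypothetical.
ICM_STATUS_LEDS = {
    ("green", "on"):  "Power on",
    ("green", "off"): "Power off",
    ("amber", "on"):  "Fault",
    ("amber", "off"): "No fault",
}

def describe_icm_led(color: str, state: str) -> str:
    """Return the Table 45 description for an observed LED color and state."""
    return ICM_STATUS_LEDS.get((color.lower(), state.lower()), "Unknown combination")

print(describe_icm_led("Amber", "On"))  # Fault
```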
A video describing how to properly connect mini-SAS HD cables and mini-SAS cables to a DPE and a DAE, respectively, in a VNX product is available online at https://edutube.emc.com/. In the Search box, type Mini-SAS HD Cable Connectivity. The video starts immediately.
Note: The first half of the video shows an example of how to connect a mini-SAS HD cable
to a mini-SAS HD port while the second half shows how to connect a mini-SAS cable to a
DAE LCC port.
Table 46 lists the 4U, 60 DAE ICM 6-Gb/s SAS port pin signals used on the connector.
Table 46 4U, 60 DAE ICM 6-Gb/s SAS port pin signals

Pin   Signal   Pin   Signal
A1    GND      B1    GND
A2    Rx 0+    B2    Tx 0+
A3    Rx 0-    B3    Tx 0-
A4    GND      B4    GND
A5    Rx 1+    B5    Tx 1+
A6    Rx 1-    B6    Tx 1-
A7    GND      B7    GND
A8    Rx 2+    B8    Tx 2+
A9    Rx 2-    B9    Tx 2-
A10   GND      B10   GND
A11   Rx 3+    B11   Tx 3+
A12   Rx 3-    B12   Tx 3-
A13   GND      B13   GND
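Because the pin assignments in Table 46 follow a regular pattern (four differential receive/transmit lane pairs, corresponding to the x4 wide port, with ground pins between them), the whole connector can be generated from a short loop. The Python fragment below is a purely illustrative sketch; the variable names are assumptions, and only the pin/signal values come from Table 46.

```python
# Illustrative sketch of the Table 46 pin assignments for the ICM 6-Gb/s SAS port.
# Variable names are hypothetical; only the pin/signal values come from the table.
SAS_PORT_PINS = {}
for lane in range(4):                        # the x4 wide port carries lanes 0-3
    base = 3 * lane + 2                      # lane pairs start at A2/B2, A5/B5, A8/B8, A11/B11
    SAS_PORT_PINS[f"A{base}"] = f"Rx {lane}+"
    SAS_PORT_PINS[f"A{base + 1}"] = f"Rx {lane}-"
    SAS_PORT_PINS[f"B{base}"] = f"Tx {lane}+"
    SAS_PORT_PINS[f"B{base + 1}"] = f"Tx {lane}-"
for pin in (1, 4, 7, 10, 13):                # the remaining pins are grounds
    SAS_PORT_PINS[f"A{pin}"] = SAS_PORT_PINS[f"B{pin}"] = "GND"

# Spot-check two entries against Table 46.
assert SAS_PORT_PINS["A5"] == "Rx 1+" and SAS_PORT_PINS["B12"] == "Tx 3-"
```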
6-Gb/s SAS port LEDs and port direction (input or output)
Figure 76 on page 96 shows the 6-Gb/s SAS port LED, a bi-color (blue/green) LED next to the connector (either left or right) that indicates the link/activity of the SAS port.
Figure 76 Example of the ICM 6-Gb/s SAS input and output ports (x4) and their bi-color (blue/green) link/activity LEDs
LED             Color   State   Description
Link/activity   Blue    On      Link up/activity
                Green   On      Link up/activity
                        Off     Not connected
The ICM (RJ-12) port is a LAN port, not a WAN port. LAN ports contain safety extra-low voltage (SELV) circuits, and WAN ports contain telephone-network voltage (TNV) circuits. An RJ-45 (or TNV-type) connector looks the same as the RJ-12 except for two very important differences: an RJ-45 is an eight-wire modular jack, while the RJ-12 is a six-wire modular jack, and RJ-45 plugs and jacks are wider than their RJ-12 counterparts (7/16 inch versus 3/8 inch). An RJ-45 plug will not fit into an RJ-12 jack, but an RJ-12 plug will fit into an RJ-45 jack. Use caution when connecting cables. To avoid electric shock, do not attempt to connect TNV circuits to SELV circuits.
USB connector
The USB connector provides a power connection to the front console.
4U, 60 DAE ICM enclosure ID (enclosure address) and bus ID
On the rear of each ICM (A and B), an ICM enclosure ID indicator is provided. This indicator is a seven-segment LED display that shows a decimal number. The same enclosure ID number appears on both ICMs (A and B). The enclosure ID is set at installation (Figure 78 on page 98).
Each ICM includes a bus (loop) identification indicator. This indicator includes two
seven-segment LED displays for displaying decimal numbers. The SP initializes the bus ID
when the operating system is loaded (Figure 78 on page 98).
Note: Figure 78 on page 98 shows both the bus ID indicator and enclosure ID indicator
when viewed from the horizontal side of the ICM. Normally, you would have to turn your
head to view these indicators.
Figure 78 Example of an ICM enclosure ID indicator, bus (loop) ID indicator, and the ICM status LEDs (fault on the left, power on the right)
LED           Color   State   Description
Power on      Green   On      Power on
                      Off     Power off
Power fault   Amber   On      Fault
                      Off     No fault
Power supply
In the 4U DAE, the power supplies provide four independent power zones. Each of the hot-swappable power supplies can deliver 1300 W at 12 V in its load-sharing, highly available configuration. Control and status are implemented through the I2C interface.
Figure 79 shows an example of the 4U, 60 DAE AC power supply with two power in
recessed connectors (or plugs) and status LEDs.
Figure 79 Example of a 4U, 60 DAE AC power supply showing the (power in) recessed connector
(plugs) and status LEDs
Table 49 describes the 4U, 60 (2.5- or 3.5-inch) DAE power supply LEDs.
Table 49 4U, 60 DAE power supply LEDs

LED                          Color   State   Description
AC 0 power on (12 V power)   Green   On      Power on
                                     Off     Power off
AC 1 power on (12 V power)   Green   On      Power on
                                     Off     Power off
Power fault                  Amber   On      Power fault
                                     Off     No fault
Cabling
This section describes examples of the types of cabling you will need to connect the DAEs
to your VNX series platform. The descriptions are presented in illustrations and text. Each
illustration shows an example of the cable connection points (ports) located on the
specific hardware components for the VNX5400 platform.
IMPORTANT
The following sections only discuss the DAE cabling of the VNX5400 platform with either
the 3U, 15 disk drive DAE or the 2U, 25 disk drive DAE.
For all other cabling of your VNX5400 platform, the VNX5400 Installation Guide provides
information about the DPE power cabling, DAE power cabling, PDU power cabling, LAN
cabling, and so on.
Figure 80 Example of the cable label wraps (SP A SAS 0) affixed to the SP-to-DAE cables
The first DAE connected to Storage Processor SAS output port 1 is designated Enclosure 0 (EA0). Each DAE connected after the first DAE increments the enclosure number by one. All enclosures connected to SAS port 0 show a bus ID of 0, but their enclosure addresses also increment.
Figure 81 on page 103 shows the first example: a VNX5400 platform with two DAEs (one 3U, 15 disk drive DAE and one 2U, 25 disk drive DAE), that is, a VNX5400 platform with a total of 65 disk drives (the DPE itself is a 3U, 25 disk drive device).
The SAS ports on the VNX5400 platform DPE are labeled 0 and 1. SAS 0 is connected
internally to the SAS expander that connects the internal DPE disks. Since SAS 0 is
already connected internally to the DPE disks, the first DAE is connected to SAS 1 to
balance the load on the SAS ports. The second DAE is connected to SAS 0, the third DAE is
connected to SAS 1, and so on.
In Figure 81 on page 103, notice that each DAE device supports two completely redundant
buses (LCC A and LCC B).
The rule of load or bus balancing is applied to all DAEs. That is, the first enclosure on each bus is Enclosure Address 0 (EA0): Bus 0 has an EA0, Bus 1 has an EA0, and so on. In the case of the VNX5400 platform, Bus 0 EA0 is the DPE (SP A and SP B). So, to balance the load, the first DAE (LCC A and LCC B) in the cabinet becomes Bus 1 EA0, the next DAE (LCC A and LCC B) becomes Bus 0 EA1, and so on. If you have several DAEs in your VNX5400 platform, you can daisy-chain them within a bus. However, it is recommended that you balance each bus. In other words, always optimize your environment by using every available bus and spreading the number of enclosures as evenly as possible across the buses.
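For a VNX5400 with several DAEs, the balancing rule amounts to alternating new enclosures between the two buses while each bus keeps its own enclosure-address counter. The following Python fragment is a purely illustrative sketch (the function name and output format are assumptions, not EMC tooling); it reproduces the assignment described above, where the DPE occupies Bus 0 EA0 and the first DAE becomes Bus 1 EA0.

```python
# Illustrative sketch of the bus-balancing rule described above.
# The DPE occupies Bus 0 / EA0, so the first DAE goes on Bus 1, and each
# additional DAE alternates buses; the enclosure address (EA) increments
# independently on each bus. Names are hypothetical.
def assign_addresses(num_daes: int):
    next_ea = {0: 1, 1: 0}            # Bus 0 EA0 is the DPE, so Bus 0 starts at EA1
    layout = [("DPE", 0, 0)]
    for n in range(num_daes):
        bus = 1 if n % 2 == 0 else 0  # first DAE on Bus 1, then alternate
        layout.append((f"DAE {n + 1}", bus, next_ea[bus]))
        next_ea[bus] += 1
    return layout

for name, bus, ea in assign_addresses(4):
    print(f"{name}: Bus {bus} / EA{ea}")
# DPE: Bus 0 / EA0
# DAE 1: Bus 1 / EA0
# DAE 2: Bus 0 / EA1
# DAE 3: Bus 1 / EA1
# DAE 4: Bus 0 / EA2
```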
Note: On the DPE and DAE, each cable connector includes a symbol that denotes the direction in which the cable connects. The cable connector with the double circle symbol is the input to the device; the cable connector with the double diamond symbol is the output from the device.
IMPORTANT
Notice the description of the cable labels affixed to the SP to DAE cables.
Note: If your VNX5400 platform was not cabled at the factory, refer to the cable wrap guide
(Cable label wraps on page 100) that came with your VNX5400 platform for the correct
cable labels.
Figure 81 Example of the VNX5400 Block platform with two DAEs (3U, 15 disks and 2U, 25 disks)
cabling
Note: Each cable end includes a symbol that denotes the direction in which the cable connects. The cable end with the single circle symbol is the input end, while the cable end with the single diamond symbol is the output end.
So, the cabling for Bus 1 is interleaved and daisy-chained through the remaining DAEs, starting with EA1/Bus 1, while the cabling for Bus 0 is interleaved and daisy-chained through the remaining DAEs, starting with EA2/Bus 0.
Figure 82 Example of the VNX5400 platform with nine interleaved DAEs (2U, 25 disks)
(Enclosures from top to bottom: EA 4/Bus 1, EA 4/Bus 0, EA 3/Bus 1, EA 3/Bus 0, EA 2/Bus 1, EA 2/Bus 0, EA 1/Bus 1, EA 1/Bus 0, EA 0/Bus 1, DPE)
So, the cabling for Bus 1 is stacked and daisy-chained through the remaining DAEs, starting with EA1/Bus 1, while the cabling for Bus 0 is stacked and daisy-chained through the remaining DAEs, starting with EA2/Bus 0.
Note: In this example, Bus 0 is indicated with the orange DAEs and Bus 1 is indicated with
the blue DAEs.
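Interleaved and stacked cabling use the same bus and enclosure-address assignments; they differ only in how the enclosures are ordered in the rack, as Figures 82 and 83 illustrate. The Python fragment below is an illustrative sketch of that difference for nine 2U, 25-disk DAEs (the list names and ordering logic are assumptions based on the two figures, not EMC tooling).

```python
# Illustrative sketch: the same nine DAEs arranged interleaved vs. stacked.
# Labels follow the "EA n/Bus n" notation used in Figures 82 and 83.
bus1 = [f"EA {ea}/Bus 1" for ea in range(5)]      # EA 0-4 on Bus 1 (first DAE is EA 0/Bus 1)
bus0 = [f"EA {ea}/Bus 0" for ea in range(1, 5)]   # EA 1-4 on Bus 0 (EA 0/Bus 0 is the DPE)

# Interleaved (Figure 82): going up the rack from the DPE, Bus 1 and Bus 0
# enclosures alternate.
interleaved = []
for i in range(5):
    interleaved.append(bus1[i])
    if i < len(bus0):
        interleaved.append(bus0[i])

# Stacked (Figure 83): each bus's enclosures are grouped together, so each
# bus's daisy chain runs straight up its own stack.
stacked = bus1 + bus0

print(interleaved)  # ['EA 0/Bus 1', 'EA 1/Bus 0', 'EA 1/Bus 1', 'EA 2/Bus 0', ...]
print(stacked)      # ['EA 0/Bus 1', ..., 'EA 4/Bus 1', 'EA 1/Bus 0', ..., 'EA 4/Bus 0']
```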
Figure 83 Example of the VNX5400 Block platform with nine stacked DAEs (2U, 25 disks)
(Enclosures from top to bottom: EA 4/Bus 0, EA 3/Bus 0, EA 2/Bus 0, EA 1/Bus 0, EA 4/Bus 1, EA 3/Bus 1, EA 2/Bus 1, EA 1/Bus 1, EA 0/Bus 1, DPE)
Figure 84 Example of the VNX5400 File platform with two DAEs (2U, 25 disks and 3U, 15 disks)
cabling
Note: Figure 84 shows a VNX5400 File platform with a DPE (two SPs), a CS (with an optional second CS available), two DMEs (with four DMs), a 3U, 15 DAE, and a 2U, 25 DAE.
Copyright 2014 EMC Corporation. All rights reserved. Published in the USA.
Published July 21, 2014
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any
kind with respect to the information in this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this
publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries.
All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories
section on EMC Online Support.