
DELL EMC

Dell EMC XtremIO Storage Array


X2 Cluster Type
XMS Versions 6.3.0, 6.3.1, 6.3.2 and 6.3.3
XIOS Versions 6.0.0, 6.0.1, 6.0.2, 6.1.0, 6.2.0, 6.2.1, 6.2.2, 6.3.0, 6.3.1, 6.3.2 and 6.3.3

Site Preparation Guide


P/N 302-005-938
REV 05

April 13, 2021

Topics include:
 Overview................................................................................................................... 2
 Preparing the Site for Installing the Cluster within a "Dell EMC Titan-D" Rack........... 3
 Preparing the Site for Installing the Cluster into a Non-"Dell EMC Titan-D" Rack ....... 17
 Hardware Requirements.......................................................................................... 28
 Physical XMS Requirements.................................................................................... 29
 Virtual XMS Requirements....................................................................................... 29
 Temperature Requirements ..................................................................................... 31
 Shipping and Storage Requirements ....................................................................... 32
 Security Requirements ............................................................................................ 34
 Connecting the Cluster to Host................................................................................ 35
 Remote Support Requirements ............................................................................... 54
 Ports and Protocols................................................................................................. 57
 Provisioning WWNs and IQNs.................................................................................. 58

Note: This document was accurate at publication time. Go to Dell EMC Online Support
(https://support.emc.com) to ensure that you are using the latest version of this
document.


This document provides specific information on XtremIO clusters that are managed by
XMS versions 6.3.0 to 6.3.3. For XtremIO clusters that are managed by other versions,
refer to the appropriate XtremIO product documents provided for the corresponding
versions.
Overview
Note: The customer should verify that all site preparation requirements in this guide
(including, but not limited to, adequate HVAC, power, floor space, and security) are
completely met before the XtremIO cluster is installed, and that the specified operating
environment is maintained for optimal system operation.

A Dell EMC® XtremIO Storage Array requires a properly equipped computer room with
controlled temperature and humidity, proper airflow and ventilation, proper power and
grounding, cluster cable routing facilities, and fire equipment.
To verify that the computer room meets the requirements for the XtremIO Storage Array,
confirm that:
 Any customer concerns have been addressed through planning sessions between Dell
EMC and the customer.
 The site meets the requirements described in this document.
 The site contains LAN connections for remote service operation.

Note: Splitting an existing XtremIO cluster into multiple clusters, or merging multiple
existing XtremIO clusters into a single cluster, is not supported. This is due to the very
high logistical overhead required to perform such non-standard procedures.


Preparing the Site for Installing the Cluster within a "Dell EMC
Titan-D" Rack
Rack Clearance
The Dell EMC® rack ventilates from front to back. You must provide adequate clearance at
the rear to service and cool the XtremIO Storage Array components. Depending on
component-specific connections within the rack, the available power cord length may be
somewhat shorter than the 15-foot standard. Figure 1 shows the required clearance for
the Dell EMC rack.

 Width: 24 in. (61 cm)
 Height: 75 in. (190.5 cm)
 Depth: 44 in. (111.76 cm)
 Power cord length: 15 ft. (4.57 m)
 Front access: 48 in. (121.92 cm)
 Rear access: 39 in. (99.1 cm)

Figure 1 Dell EMC Rack Dimensions and Access Clearances


Package Dimension and Clearance


Confirm that the doorways and elevators are wide and tall enough to accommodate the
shipping pallet and rack. Figure 2 shows the dimensions of the boxed Dell EMC rack.

The boxed rack measures approximately 81.00 in. (2.06 m) x 52.00 in. (1.32 m) x
42.00 in. (1.07 m).
Figure 2 Dell EMC Rack Shipping Crate Dimensions

Make sure to leave approximately 96 inches (2.43 meters) of clearance at the rear of the
rack in order to unload the unit and roll it off the pallet, as shown in Figure 3.


Figure 3 Dell EMC Rack Shipping Crate Unload Clearance Dimensions


Rack Stabilizing
If you intend to secure the optional anti-tip bracket to your site floor, prepare the location
for the mounting bolts. The anti-tip bracket provides an extra measure of anti-tip security.
One or two kits may be used. For racks with components that slide, we recommend that
you use two kits.

Note: Measurements in Figure 4 are shown in inches.

Figure 4 shows the Anti-tip bracket.

Figure 4 Anti-Tip Bracket


Site Floor Load-Bearing Requirements


Table 1 shows the approximate floor load of each XtremIO Storage Array configuration.

Table 1 XtremIO Storage Array Cluster Weights in Dell EMC Rack

Configuration Weight without XMS Weight with XMS

Single X-Brick cluster 644.7 lb (292.5 kg) 677.7 lb (307.4 kg)

Two X-Brick cluster 879.4 lb (399 kg) 912.4 lb (413.9 kg)

Three X-Brick cluster 1076.1 lb (488.2 kg) 1109.1 lb (503.1 kg)

Four X-Brick cluster 1272.8 lb (577.3 kg) 1305.8 lb (592.3 kg)

Note: The values shown in Table 1 are for X2-R configurations, and do not include the rack
weight. For X2-S configurations (two, three, and four X-Brick clusters), subtract 18.7 lb
(8.5 kg) from the listed weights. For X2-T configurations (single X-Brick), subtract 18 lb
(8.16 kg).
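For planning purposes, the Table 1 values and the adjustments in the note above can be combined into a small lookup. This is an illustrative sketch only; the names are hypothetical, and all weights come from Table 1 and its note.

```python
# Illustrative lookup based on Table 1 and its note (all values in lb).
# Baseline weights are X2-R configurations without XMS.
X2R_WEIGHT_LB = {1: 644.7, 2: 879.4, 3: 1076.1, 4: 1272.8}
XMS_WEIGHT_LB = 33.0  # "with XMS" column minus "without XMS" column

def cluster_weight_lb(x_bricks, variant="X2-R", with_xms=False):
    weight = X2R_WEIGHT_LB[x_bricks]
    if variant == "X2-S" and x_bricks >= 2:
        weight -= 18.7   # X2-S adjustment (two to four X-Bricks)
    elif variant == "X2-T" and x_bricks == 1:
        weight -= 18.0   # X2-T adjustment (single X-Brick)
    if with_xms:
        weight += XMS_WEIGHT_LB
    return round(weight, 1)
```

Remember that these figures exclude the rack itself when comparing against the floor-loading requirement below.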

Install the rack in raised or non-raised floor environments that are capable of supporting
at least 1,180kg (2,600 lbs.) per rack. Your system may weigh less than this, but requires
extra floor support margin to accommodate equipment upgrades and/or reconfiguration.
In a raised floor environment:
 Dell EMC recommends 24 x 24 inch (60 x 60 cm) heavy-duty, concrete-filled steel
floor tiles.
 Use only floor tiles and stringers rated to withstand:
• Concentrated loads of two casters or leveling feet, each weighing up to 1,000 lb
(454 kg).
• Minimum static ultimate load of 3,000 lb (1,361 kg).
• Rolling loads of 1,000 lb (454 kg). On floor tiles that do not meet the 1,000 lb
rolling load rating, use coverings such as plywood to protect floors during system
roll.
 Position adjacent racks with no more than two casters or leveling feet on a single floor
tile.

Note: Cutting tiles per specifications as shown in Figure 5 on page 7 ensures the
proper caster wheel placement.


Cable Routing
Dell EMC recommends installing the equipment into a room with a raised floor, for
accommodation of the cabling under the floor.
If you are installing the rack onto a raised floor, cut a cable-access hole in one tile as
shown in Figure 5.

[Figure content: a 24 in. (61 cm) square floor tile with a centered cutout, 8 in.
(20.3 cm) wide by 6 in. (15.2 cm) deep, positioned 9 in. (22.9 cm) from the front
and rear edges and 8 in. (20.3 cm) from the sides.]
Figure 5 Floor Tile Cutout

Cutouts in 24 x 24 in. tiles must be no more than 8 inches (20.3 cm) wide by 6 inches
(15.2 cm) deep, and centered on the tiles, 9 inches (22.9 cm) from the front and rear and
8 inches (20.3 cm) from the sides. Since cutouts weaken the tile, you can minimize
deflection by adding pedestal mounts adjacent to the cutout. The number and placement
of additional pedestal mounts relative to a cutout must be in accordance with the floor tile
manufacturer recommendations.
When positioning the rack, take care to avoid moving a caster into a floor tile cutout.
Make sure that the combined weight of any other objects in the data center does not
compromise the structural integrity of the raised floor and/or the sub-floor (non-raised
floor).
Dell EMC recommends that a certified data center design consultant inspect your site to
ensure that the floor is capable of supporting the system and surrounding weight. Note
that the actual rack weight depends on your specific product configuration. You can
calculate your total using the tools available at:
https://powercalculator.emc.com/Main.aspx

Casters and Leveling Feet


The Dell EMC rack includes four caster wheels, as shown in Figure 6 on page 8. The front
wheels are fixed. The two rear casters swivel in a 1.75-inch diameter. The swivel position of
the caster wheels determines the load-bearing points on your site floor, but does not affect
the rack footprint. Once you have positioned, leveled, and stabilized the rack, the four
leveling feet determine the final load-bearing points on your site floor.


[Figure content: bottom view of the rack showing the four caster wheels and leveling
feet. Dimensions are in inches; the rear casters have a 1.750 in. swivel diameter.
Some items in the views are removed for clarity.]
Figure 6 Caster Wheels on Dell EMC Rack Bottom


The customer is ultimately responsible for ensuring that the data center floor on which the
Dell EMC system is to be configured is capable of supporting the system weight, whether
the system is configured directly on the data center floor, or on a raised floor supported by
the data center floor. Failure to comply with these floor-loading requirements could result
in severe damage to the Dell EMC system, the raised floor, subfloor, site floor and the
surrounding infrastructure. Notwithstanding anything to the contrary in any agreement
between Dell EMC and customer, Dell EMC fully disclaims any and all liability for any
damage or injury resulting from customer's failure to ensure that the raised floor, subfloor
and/or site floor are capable of supporting the system weight as specified in this guide.
The customer assumes all risk and liability associated with such failure.


Power Requirements
Depending on the rack configuration and input AC power source (single-phase or
three-phase, as listed in Table 2 and Table 4 on page 11), the rack requires two to eight
independent power sources. To determine your site requirements, use the published
technical specifications and device rating labels for all non-Dell EMC equipment to
establish the current draw of the devices in each rack. The total current draw for each
rack can then be calculated. For Dell EMC products, refer to the "Dell EMC Power
Calculator" located at http://powercalculator.emc.com/XtremIO.aspx and select the
calculator for the XtremIO hardware currently in use.

Note: If the data center uses low line input (100/110 Volts), an RPQ should be submitted.

Table 2 Single-Phase Power Connection Requirements

Specification | North American 3-wire connection (2 L and 1 G)1 | International and Australian 3-wire connection (1 L, 1 N, and 1 G)1
Input nominal voltage | 200 - 240V AC +/- 10% L-L nom | 220 - 240V AC +/- 10% L-L nom
Frequency | 50 - 60 Hz | 50 - 60 Hz
Circuit breakers | 30 A | 32 A
Power zones | Two | Two
Power requirements at site (minimum to maximum):
• One to four 30 A, single-phase drops per zone.
• Each rack requires a minimum of two drops (one per side) for every pair of X-Bricks.

Note: The options for the single-phase Power Distribution Unit (PDU)
interface connector are listed in Table 3 on page 10.

1. L = line phase, N = neutral, G = ground


Power Cables and Connectors


Power cables and connectors depend on the type ordered with the XtremIO Storage Array
equipment, and must match the supply receptacles at the site. Table 3 shows the
connector types.

Table 3 AC Power Cable Connectors

Region | Single-Phase Rack Connector Options | Customer AC Source Interface Site Receptacle
North America and Japan | NEMA L6-30P | NEMA L6-30R
North America and Japan | Russellstoll 3750DP | Russellstoll 9C33U0
International | IEC-309 332P6 | IEC-309 332C6
Australia | CLIPSAL 56PA332 | CLIPSAL 56CSC332


Table 4 shows the three-phase power connection requirements.

Table 4 Three-Phase Power Connection Requirements

Specification | North American (Delta) 4-wire connection (3 L and 1 G)1 | International and Australian (Wye) 5-wire connection (3 L, 1 N, and 1 G)1
Input nominal voltage | 200 - 240V AC +/- 10% L-L nom | 220 - 240V AC +/- 10% L-L nom
Frequency | 50 - 60 Hz | 50 - 60 Hz
Circuit breakers | 50 A | 32 A
Power zones | Two | Two
Power requirements at site (minimum to maximum):
North America (Delta):
• One 50 A, three-phase drop per zone.
• Each rack requires a minimum of two drops to a maximum of four drops,
determined by the system configuration and its power needs.
International (Wye):
• One to two 32 A, three-phase drops per zone.
• Each rack requires a minimum of two drops to a maximum of four drops,
determined by the system configuration and its power needs.

Note: The interface connector options for the Delta and Wye three-phase
PDUs are listed in Table 5.

1. L = line phase, N = neutral, G = ground

Table 5 shows the three-phase Delta-type AC power input connector options.

Table 5 Three-Phase Delta-Type AC Power Input Connector Options

Region | Three-Phase Delta Rack Connector Options | Customer AC Source Interface Site Receptacle
North America and International | Russellstoll 9P54U2 | Russellstoll 9C54U2
North America | Hubbell CS-8365C | Hubbell CS-8364C


Table 6 shows the three-phase Wye-type AC power input connector options.

Table 6 Three-Phase Wye-Type AC Power Input Connector Options

Region | Three-Phase Wye Rack Connector Options | Customer AC Source Interface Site Receptacle
International | Sursum S52S30A or Hubbell C530P6S | Sursum K52S30A or Hubbell C530C6S
International | FLY lead, 15 ft +/- 6 in (a CE can add the appropriate plug based on the customer receptacle) | Customer receptacle
North America | Hubbell L22-30P | Hubbell L22-30R

Table 7 shows the three-phase Wye-type AC power input connector details.

Table 7 Three-Phase Wye Connector Key

Part Number | WYE Description | Input Connector | Color
100-564-786-00 | 0U, 3 Phase WYE | - | -
999-997-693 | Functional spec, 3P WYE 0U Titan-D PDU | - | -
038-004-778 | 3 Phase WYE International BLK | Sursum K52S30A or Hubbell C530C6S | Black
038-002-499 | 3 Phase WYE International BLK | FLY Lead | Black
038-004-479 | 3 Phase WYE N. America BLK | Hubbell L22-30P | Black


Power Consumption
Table 8 and Table 9 show the cluster power consumption and heat dissipation.
Calculations in these tables are intended to provide typical and maximum power and heat
dissipations. Ensure that the installation site meets these typical and worst-case
requirements.

Note: For specific environmental conditions, refer to the "Dell EMC Power Calculator"
located at http://powercalculator.emc.com/XtremIO.aspx and select the calculator for the
XtremIO hardware currently in use.

Note: The figures refer to cluster configurations not including a physical XMS. The figures
for an XMS are detailed separately.

Note: A cluster's DAE can have varying SSD configurations. The figures refer to cluster
configurations with fully-populated DAEs.

Table 8 Power Consumption and Heat Dissipation for Typical Operation

Configuration | Total Power Consumption (VA), X2-R / X2-S / X2-T | Heat Dissipation (Btu/Hr), X2-R / X2-S / X2-T
Single X-Brick cluster | 2,132 / 2,004 / 1,916 | 7,165 / 6,736 / 6,439
Two X-Brick cluster | 4,406 / 4,150 / N/A | 14,740 / 13,881 / N/A
Three X-Brick cluster | 6,538 / 6,154 / N/A | 21,906 / 20,616 / N/A
Four X-Brick cluster | 8,670 / 8,158 / N/A | 29,071 / 27,352 / N/A
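The heat-dissipation column in Table 8 tracks the power column via the standard 3.412 Btu/hr-per-watt conversion; the implied effective power factor of roughly 0.98 is an inference from the table values, not a figure stated in this guide. A rough cross-check:

```python
# Cross-check of Table 8 (an inference, not stated in the guide):
# Btu/hr is approximately VA x power_factor x 3.412, with an effective
# power factor of about 0.98 implied by the published figures.
BTU_PER_WATT_HOUR = 3.412

def heat_btu_per_hr(va, power_factor=0.98):
    return va * power_factor * BTU_PER_WATT_HOUR
```

This reproduces the Table 8 heat figures to within about one percent; for sizing HVAC, always use the published table values or the Dell EMC Power Calculator rather than this approximation.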


PDU Configuration
Factory-assembled racks are shipped in a “four PDU” configuration.
Table 9 describes the number of line drops that are required per zone, according to the
number of X-Bricks in the cluster.

Table 9 Required Line Drops Per Zone

Cluster Configuration | Single Phase | 3-Phase - Delta | 3-Phase - WYE
Single X-Brick | 1 | 1 | 1
Two X-Bricks | 1 | 1 | 1
Three X-Bricks | 2 | 1 | 1
Four X-Bricks | 2 | 1 | 1
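The single-phase pattern in Table 9 matches the Table 2 rule of one drop per zone for every pair of X-Bricks, while three-phase configurations need one drop per zone. A minimal sketch (hypothetical helper name):

```python
import math

# Line drops required per power zone, per Table 9: single phase needs one
# drop per zone for every pair of X-Bricks; three-phase (Delta or Wye)
# needs one drop per zone for one- to four-X-Brick clusters.
def drops_per_zone(x_bricks, phase="single"):
    if phase == "single":
        return math.ceil(x_bricks / 2)
    return 1  # 3-phase Delta or Wye
```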

Note: The PDU configurations do not include a power On/Off switch. Make sure the (four)
circuit breaker switches on each PDU are UP, in the OFF (0) position until you are ready to
supply AC power. Make sure that the power is OFF before disconnecting power from a PDU.

Figure 7 shows the circuit breaker switch labels for the single-phase and 3-phase
(Wye and Delta) four-PDU configurations.

Figure 7 Four PDU Circuit Breaker Switch Labels


PDU = 100-563-477 (Titan-D, Single Phase)

Callouts in Figure 8:
• The upper PDP is energized for 1-4 X-Brick clusters.
• The upper PDU is NOT energized for clusters with 1-4 X-Bricks.
• An XMS (if present) is connected to the console outlet at the rear of the PDU.
• Black PDU jumpers are only used on the lower PDU when only one or two X-Bricks
are in the rack.
• PDU outlet groups serve X-Bricks 1 and 2, 3 and 4, 5 and 6, and 7 and 8.

Figure 8 Single Phase Power Connection


[Figure content: Delta and Wye 3-phase PDU wiring, with 16 A circuit breakers.
The first line cord feeds X-Bricks 1-4; the second line cord feeds X-Bricks 5-8.]
Figure 9 3-Phase Power Connection


Preparing the Site for Installing the Cluster into a Non-"Dell EMC
Titan-D" Rack
When the XtremIO Storage Array is to be installed into a customer rack, the cluster is
delivered in non-standard packaging (also known as a mini-rack). The package comes in
a single size of 12U. Each mini-rack can contain one or two X-Bricks; single and two
X-Brick clusters therefore arrive in a single mini-rack, and three and four X-Brick
clusters arrive in two mini-racks. If the customer has ordered a physical XMS, the XMS
unit is added in a separate box on top of the mini-rack, as shown in Figure 10.

The packaged mini-rack measures approximately 40.75 in. (103.5 cm) x 33 in. (83.8 cm) x
42.5 in. (108 cm).

Figure 10 Dimensions of 12U Mini-Rack with 1U XMS for Field Installation


Rack Requirements
Non-”Dell EMC Titan-D” racks must meet the following requirements, depending on PDU
orientation:
Figure 11 shows a non-"Dell EMC Titan-D" rack with rear-facing PDUs.

[Figure content: top view of the rack, showing the front door, front and rear NEMA
rails, rack posts, rear-facing PDUs, rear door, and dimension labels "a" through "j"
(defined in Table 10).]
Figure 11 Rear of Rack with Rear-Facing PDUs Service Clearance (Top View)


Non-”Dell EMC Titan-D” racks with rear-facing PDUs must meet the requirements shown in
Table 10.

Table 10 Non-"Dell EMC Titan-D" Racks Requirements (Rear-Facing PDUs)

Dim Label Description

a Distance between front surface of the rack and the front NEMA rail

b Distance between NEMA rails, min = 24"(609.6mm), max = 32"(812.8mm)

c Distance between rear surface of the chassis to rear surface of the rack, min = 2.5"
(63.5mm)

d If a front door exists, distance between inner front surface of the front door and the
front NEMA rail, min=2.5” (63.5mm)

e Distance between the inside surface of the rear post and the rear vertical edge of
the chassis and rails; min 2.5" (63.5mm) is recommended.
Note: If no rack post, minimum recommended distance is measured to inside
surface of the rack.

f Width of the rear rack post

g Minimum = distance between NEMA rails 19" (482.6mm) + 2 x "e"; Min = 24" (609.6mm)

h Minimum rack width = distance between NEMA rails + 2 x "e" + 2 x "f" = 19" +
2 x 2.5" + 2 x "f" = 24" + 2 x "f"

i Deepest Component: DAE Chassis and CMD = 36.5" (927.1mm)

j Minimum rack depth = "i" + "c" = 36.5" + 2.5" = 39" (990mm)


Note: A rear door need not be present. Regardless, all hardware must be within the
boundaries of the rack.

If all of the requirements (described in Table 10) for a non-"Dell EMC Titan-D" rack are not
met, and the customer wishes to continue with a non-compliant rack, an RPQ process
must be initiated.
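The minimum-dimension formulas in Table 10 can be sketched as a small helper. This is illustrative only; the names are hypothetical, the fixed values come from Table 10, and "e" and "f" are measured from the customer's rack:

```python
# Minimum rack dimensions per Table 10 (rear-facing PDUs), in inches.
# "e" is the post-to-chassis side clearance; "f" is the rear post width.
# The default "f" below is a placeholder, not a value from the guide.
NEMA_WIDTH_IN = 19.0
DEEPEST_COMPONENT_IN = 36.5   # dim "i": DAE chassis and CMD
MIN_REAR_CLEARANCE_IN = 2.5   # dim "c"

def min_rack_depth_in():
    # dim "j" = "i" + "c" = 36.5" + 2.5" = 39"
    return DEEPEST_COMPONENT_IN + MIN_REAR_CLEARANCE_IN

def min_rack_width_in(e=2.5, f=1.0):
    # dim "h": NEMA rails plus clearance "e" and post width "f" on each side
    return NEMA_WIDTH_IN + 2 * e + 2 * f
```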


Figure 12 shows a non-"Dell EMC Titan-D" rack with center-facing PDUs.

[Figure content: top view of the rack, showing the front door, front and rear NEMA
rails, rack posts, center-facing PDUs, rear door, and dimension labels (defined in
Table 11).]
Figure 12 Rear of Rack with Center-Facing PDUs Service Clearance (Top View)


Non-”Dell EMC Titan-D” racks with center-facing PDUs must meet the requirements shown
in Table 11.

Table 11 Non-”Dell EMC Titan-D” Racks Requirements (Center-Facing PDUs)

Dim Label Description

a Distance between front surface of the rack and the front NEMA rail

b Distance between NEMA rails, min = 24"(609.6mm), max = 32"(812.8mm)

c Distance between rear surface of the chassis to rear surface of the rack, min = 2.5"
(63.5mm)

d If a front door exists, distance between inner front surface of the front door and the
front NEMA rail, min=2.5” (63.5mm).

e Distance between the inside surface of the rear post and the rear vertical edge of the
chassis and rails; min 2.5" (63.5mm) is recommended.
Note: If no rack post, minimum recommended distance is measured to inside surface
of the rack.

f Width of the rear rack post

g PDU depth + 3" (76.2mm) AC cable bend clearance. Racks equipped with
center-facing PDUs that fail to meet this requirement are permitted, as long as the
NEMA rail/product area is not compromised. Therefore some outlets may NOT be
accessible. If those outlets are required, an RPQ request for in-rack PDUs should be
submitted. In all cases, the DAE's PSU accesses must remain unblocked.

h Rack width. Min = 19" NEMA + (2 x "e") + (2 x "f") or 19" NEMA + (2 x "g"), whichever is greater.


Note: "e" + "f" < "g" + 0.5"

i Deepest Component: DAE Chassis and CMD = 36.5" (927.1mm)

j Minimum rack depth = "i" + "c" = 36.5" (927.1mm) + 2.5" (63.5mm) = 39" (990mm)

Note: A rear door need not be present. Regardless, all hardware must be within the
boundaries of the rack.

If all of the requirements (described in Table 11) for a non-"Dell EMC Titan-D" rack are not
met, and the customer wishes to continue with a non-compliant rack, an RPQ process
must be initiated.
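The Table 11 note's constraint for center-facing PDUs ("e" + "f" < "g" + 0.5", where "g" is the PDU depth plus 3 in. of AC cable bend clearance) can be checked directly. A minimal sketch with a hypothetical helper name:

```python
# Check of the Table 11 center-facing PDU constraint: "e" + "f" < "g" + 0.5,
# where "g" = PDU depth + 3 in. of AC cable bend clearance (all inches).
def pdu_clearance_ok(e, f, pdu_depth):
    g = pdu_depth + 3.0
    return e + f < g + 0.5
```

If the check fails and the required outlets would be blocked, an RPQ request for in-rack PDUs should be submitted, as the note describes.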
Essential Requirements:
 The rack space requirements of the different XtremIO Storage Array configurations are
as follows:
• A single X-Brick cluster requires 5U of contiguous rack space.
• A two X-Brick cluster requires 11U of contiguous rack space.
• A three X-Brick cluster requires 16U of contiguous rack space.
• A four X-Brick cluster requires 20U contiguous rack space.

Note: An optional physical XMS may occupy the upper-most U in the rack.

 The NEMA rail mounting holes must be one of the following:


• Round, 0.281 in. diameter, non-threaded.
• Square, 0.375 in.


 AC power:
• 200-240 VAC +/- 10% single phase or three-phase 50-60 Hz, power connection.
Table 4 on page 11 shows the three-phase power connection requirements.
• Redundant power zones, one on each side of the rack. Each power zone should
have capacity for the maximum power load (refer to Table 14 on page 27).
• The provided power cables suit AC outlets located within 24 inches of each
component receptacle (this does not include the X2-R InfiniBand Switches, whose
AC inlets are located in the front panel and are therefore provided with long power
cables).

Note: If you are using longer power cables, make sure that they are high-quality cables
that meet the required ratings.
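As a planning aid, the rack-space requirements listed in the bullets above can be captured in a small lookup (an illustrative sketch; the names are hypothetical, and the values come from the bullets and the XMS note):

```python
# Contiguous rack space (in U) per configuration, from the Essential
# Requirements list; an optional physical XMS occupies one additional U.
RACK_UNITS = {1: 5, 2: 11, 3: 16, 4: 20}

def required_rack_units(x_bricks, physical_xms=False):
    return RACK_UNITS[x_bricks] + (1 if physical_xms else 0)
```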

Cluster Weight
Table 12 shows the approximate weights of XtremIO X2-R, X2-S and X2-T configurations,
when fully populated.

Table 12 Approximate Weight by Configuration

Configuration X2-R Approx. Weight X2-S Approx. Weight X2-T Approx. Weight

Single X-Brick cluster 190 lb (86kg) 190 lb (86kg) 176 lb (79.8 kg)

Two X-Brick cluster 468.1 lb (212.7 kg) 431.4 lb (195.7 kg) N/A

Three X-Brick cluster 664.8 lb (301.9 kg) 629 lb (284.9 kg) N/A

Four X-Brick cluster 861.2 lb (391 kg) 824.5 lb (374 kg) N/A

Note: The values shown in Table 12 do not include the rack weight. For configurations that
include a physical XMS, add 33 lb (15 kg).


Component Dimensions
Table 13 shows the dimensions, weight and rack space of each major component in the
XtremIO Storage Array.

Table 13 XtremIO Storage Array Components Physical Data

Component | Dimensions | Rack Space | Weight

Storage Controller | Height: 43.2 mm (1.7 in); Width: 438 mm (17.25 in); Depth: 709 mm (27.9 in) | 1U | 34.65 lb (15.72 kg)

Disk Array Enclosure (DAE) | Height: 88.9 mm (3.5 in); Width: 438 mm (17.25 in); Depth: 927.1 mm (36.5 in) | 2U | 97.0 lb (44 kg)1

X2-R InfiniBand Switch | Height: 43.7 mm (1.72 in); Width: 428 mm (16.84 in); Depth: 686 mm (27 in) | 1U | 25 lb (11.34 kg)

X2-S InfiniBand Switch | Height: 43.7 mm (1.72 in); Width: 200 mm (7.9 in); Depth: 399 mm (15.7 in) | 1U | 7.1 lb (3.2 kg)

XMS (optional) | Height: 43.2 mm (1.7 in); Width: 438 mm (17.25 in); Depth: 709 mm (27.9 in) | 1U | 33 lb (14.96 kg)

Cable Management Duct | Height: 43.2 mm (1.7 in); Width: 438 mm (17.25 in); Depth: 914.4 mm (36 in) | 1U (shares 1U with the air baffle at the rack's front end) | 1.75 lb (0.78 kg); Air Baffle: 0.95 lb (0.44 kg)

1. Fully loaded (72 SSDs). SSDs are added in increments of six SSDs per X-Brick expansion, with
each increment weighing approximately 2.4 lb (1.08 kg). Lighter-weight "air seals" occupy DAE
slots not occupied by SSDs.
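The Table 13 footnote implies a simple estimate of DAE weight as a function of SSD population. This sketch uses only the footnote's figures and ignores the weight of the air-seal fillers (an assumption):

```python
# Approximate DAE weight versus SSD population, per the Table 13 footnote:
# 97.0 lb fully loaded (72 SSDs); each six-SSD increment weighs ~2.4 lb.
# Air-seal filler weight is ignored in this sketch.
FULL_DAE_LB = 97.0
FULL_SSD_COUNT = 72
LB_PER_SIX_SSDS = 2.4

def dae_weight_lb(ssd_count):
    assert 0 <= ssd_count <= FULL_SSD_COUNT and ssd_count % 6 == 0
    missing_increments = (FULL_SSD_COUNT - ssd_count) // 6
    return round(FULL_DAE_LB - missing_increments * LB_PER_SIX_SSDS, 1)
```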


Components Stacking
The following figures show the component stacking order according to the purchased
configuration.
A single X-Brick cluster requires 5U of contiguous rack space.
Figure 13 describes the stacking of a single X-Brick cluster.

XMS (Optional)

Cable Management Duct


X1-SC2
X1-SC1

X1-DAE

Place Holder

Figure 13 Single X-Brick Cluster Stacking Order

A two X-Brick cluster requires 11U of contiguous rack space.


Figure 14 describes the stacking of a two X-Brick cluster.

XMS (Optional)

IBSW-2
IBSW-1

X2-DAE

X2-SC2
X2-SC1
Cable Management Duct
X1-SC2
X1-SC1

X1-DAE

Place Holder

Figure 14 Two X-Brick Cluster Stacking Order


A three X-Brick cluster requires 16U of contiguous rack space.


Figure 15 describes the stacking of a three X-Brick cluster.

XMS (Optional)

Cable Management Duct


X3-SC2
X3-SC1

X3-DAE

IBSW-2
IBSW-1

X2-DAE

X2-SC2
X2-SC1
Cable Management Duct
X1-SC2
X1-SC1

X1-DAE

Place Holder

Figure 15 Three X-Brick Cluster Stacking Order


A four X-Brick cluster requires 20U of contiguous rack space.


Figure 16 describes the stacking of a four X-Brick cluster.

XMS (Optional)

X4-DAE

X4-SC2
X4-SC1
Cable Management Duct
X3-SC2
X3-SC1

X3-DAE

IBSW-2
IBSW-1

X2-DAE

X2-SC2
X2-SC1
Cable Management Duct
X1-SC2
X1-SC1

X1-DAE

Place Holder

Figure 16 Four X-Brick Cluster Stacking Order


Power Requirements
Table 14 details the power requirements for each configuration, as well as the number of
IEC 320-C13 outlets required on each power zone.

Note: For specific environmental conditions, refer to the "Dell EMC Power Calculator"
located at http://powercalculator.emc.com/XtremIO.aspx and select the calculator for the
XtremIO hardware currently in use.

Note: The figures refer to cluster configurations, not including a physical XMS. The
requirements for an XMS are detailed separately.

Table 14 XtremIO Storage Array Power Requirements

Configuration | Total Power Consumption (VA) | Number of IEC 320-C13 Outlets, Zone-A | Number of IEC 320-C13 Outlets, Zone-B

Single X-Brick cluster 2,132 3 3

Two X-Brick cluster 4,406 8 8

Three X-Brick cluster 6,538 11 11

Four X-Brick cluster 8,670 14 14

XMS 200 1 1

Table 15 details the power consumption and socket-related data for the cluster
components.

Table 15 Components Power Consumption and Socket Data

Component                    Power Socket     Power Consumption (VA)
                             Number/Type      X2-R    X2-S    X2-T

Storage Controller           2 x IEC C14      683     619     683
InfiniBand Switch            2 x IEC C14      71      71      N/A
Disk Array Enclosure (DAE)   2 x IEC C14      766     766     550
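The totals in Table 14 follow directly from the per-component figures in Table 15. For X2-R hardware, the arithmetic can be sketched as follows (the function and dictionary names are illustrative, not part of any Dell EMC tool):

```python
# Per-component power consumption (VA) for X2-R hardware, from Table 15.
X2R_VA = {"storage_controller": 683, "dae": 766, "infiniband_switch": 71}

def cluster_power_va(n_xbricks: int) -> int:
    """Total VA for an X2-R cluster of n X-Bricks, excluding a physical XMS.

    Each X-Brick contributes two Storage Controllers and one DAE; clusters
    of two or more X-Bricks also include a pair of InfiniBand Switches.
    """
    total = n_xbricks * (2 * X2R_VA["storage_controller"] + X2R_VA["dae"])
    if n_xbricks > 1:
        total += 2 * X2R_VA["infiniband_switch"]
    return total
```

For a single X-Brick this yields 2 x 683 + 766 = 2,132 VA, and a two X-Brick cluster adds the two InfiniBand Switches for 4,406 VA, matching Table 14.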


Hardware Requirements
Table 16 shows the hardware requirements for each XtremIO Storage Array configuration.

Table 16 Hardware Requirements

                  2 FC + 2 iSCSI Configuration                  iSCSI-Only Configuration (No FC)          ETH Ports on     ETH Ports on
Configuration     Fibre Channel        iSCSI                    Fibre Channel   iSCSI                     Switch, RJ45     Switch, RJ45
                  Connectivity         Connectivity             Connectivity    Connectivity              1GbE/10GbE(1)    10GbE(2)

Single X-Brick    4 x 16Gb FC ports    4 x 10GbE LC optical     N/A             8 x 10GbE LC optical      2 (+1)           2
cluster           on the switch        ports on the switch                      ports on the switch

Two X-Brick       8 x 16Gb FC ports    8 x 10GbE LC optical     N/A             16 x 10GbE LC optical     2 (+1)           2-4
cluster           on the switch        ports on the switch                      ports on the switch

Three X-Brick     12 x 16Gb FC ports   12 x 10GbE LC optical    N/A             24 x 10GbE LC optical     2 (+1)           2-6
cluster           on the switch        ports on the switch                      ports on the switch

Four X-Brick      16 x 16Gb FC ports   16 x 10GbE LC optical    N/A             32 x 10GbE LC optical     2 (+1)           2-8
cluster           on the switch        ports on the switch                      ports on the switch

1. An additional port is required if a physical XMS is implemented.
2. For Native Replication; not required when using Optics.


Physical XMS Requirements


The system operation is controlled via a stand-alone dedicated Linux-based server, called
the XtremIO Management Server (XMS). Each XtremIO cluster requires connectivity to a
single XMS host, which can be either a Dell EMC-supplied physical server or a virtual
server deployed on a customer-supplied server running VMware ESX. A single cluster
cannot be managed by multiple XMSs.
This section describes the XtremIO physical XMS pre-deployment requirements (when the
cluster configuration is to include a physical XMS).

Note: For XtremIO virtual XMS pre-deployment requirements, refer to “Virtual XMS
Requirements”.

Physical XMS Specifications


This section describes the physical XMS specifications, according to the physical XMS
model, when installing an (optional) physical XMS for an X2 cluster type.
The physical XMS specification is dependent on the total number of Volumes expected to
be configured on the X2 clusters connected to this physical XMS. The following physical
XMS specification alternatives are available according to the physical XMS model:
 X1 physical XMS (P/N# 100-586-008-00 and 100-586-008-01) – supports up to 8K
Volumes
 X2 physical XMS (P/N# 100-586-049-xx and 100-586-149-xx) – supports up to 32K
Volumes

Virtual XMS Requirements


The system operation is controlled via a stand-alone dedicated Linux-based server, called
the XtremIO Management Server (XMS). Each XtremIO cluster requires connectivity to a
single XMS host, which can be either a Dell EMC-supplied physical server or a virtual
server deployed on a customer-supplied server running VMware ESX. A single cluster
cannot be managed by multiple XMSs.
This section describes the XtremIO virtual XMS pre-deployment requirements (when the
cluster configuration is to include a virtual XMS).

Note: For XtremIO physical XMS pre-deployment requirements refer to “Physical XMS
Requirements”.


Virtual XMS Specifications


 Virtual machine configuration: To run the virtual XMS, a single virtual machine is
needed (or two if virtual XMS high-availability is required).
The configuration of this virtual machine is dependent on the total number of clusters
expected to be connected to the XMS, and on the total number of volumes expected to
be configured on the clusters connected to this virtual XMS. The following virtual XMS
configuration alternatives are available:
• Regular - when the number of connected clusters is three or fewer and the
total number of Volumes is expected to be up to 8,000
• Expanded - when the number of connected clusters is four or greater, or the
total number of Volumes is expected to be up to 32,000
Table 17 shows the required virtual XMS VM configurations.

Table 17 Virtual XMS VM Configurations per the Expected Total Number of Volumes

Parameter         Regular       Expanded

RAM               8GB           10GB - 16GB(1)
CPU               2 x vCPU      4 x vCPU
NIC               1 x vNIC      1 x vNIC
Virtual Disk(2)   900GB thin    900GB thin

1. 16GB for clusters reaching the maximum XMS scalability numbers. Refer to the XMS release
notes document for details.
2. The virtual XMS VM should have a single 900GB disk (thin provisioned). 200 GB of disk
capacity is preallocated following the cluster initialization. Therefore, at least 200 GB of free
capacity should be available on the data store used to deploy the XMS.

Note: It is possible to initially configure the virtual XMS per the Regular
configuration, and at a later stage, adjust the virtual XMS to the Expanded
configuration. For details on expanding the virtual XMS configuration, refer to the
XtremIO Storage Array User Guide.
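The Regular/Expanded selection rule above can be summarized in a short sketch (the function name and the error raised for out-of-range inputs are illustrative, not part of the product):

```python
def xms_vm_profile(num_clusters: int, num_volumes: int) -> str:
    """Select the virtual XMS VM configuration per the sizing rules above."""
    if num_clusters <= 3 and num_volumes <= 8000:
        return "Regular"   # 8GB RAM, 2 vCPU
    if num_clusters >= 4 or num_volumes <= 32000:
        return "Expanded"  # 10GB-16GB RAM, 4 vCPU
    raise ValueError("Exceeds XMS scalability limits; refer to the release notes")
```

For example, two clusters with 20,000 Volumes call for the Expanded configuration, even though the cluster count alone would fit the Regular profile.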

 Underlying storage: The virtual XMS VM should be provisioned on RAID-protected
storage (or VMware vSphere HCL-approved shared storage, if virtual XMS
high-availability is required).
In addition, the underlying storage should provide a consistent latency of no more than 8 ms.

Note: Shared storage used in this case should not originate from the XtremIO cluster
managed by the Virtual XMS.

 Network connectivity: The Virtual XMS should be located in the same Local Area
Network (LAN) as the XtremIO cluster.


 Host: The virtual XMS VM should be deployed on a single host running ESX 5.x or 6.x
(or on multiple hosts, if virtual XMS high-availability is required).

Note: XtremIO Storage Array supports both ESX and ESXi. For simplification, all
references to ESX server/host apply to both ESX and ESXi, unless stated otherwise.

The host should be on VMware vSphere HCL approved hardware and meet the
following configuration requirements:
• Single-socket, dual-core CPU
• One 1GbE NIC
• Redundant power supply

Backing Up the Virtual XMS


Only the critical information from the XMS database is regularly saved and backed up on
the cluster.

Other Specifications
 The OVA package from which the virtual XMS is deployed contains VMware tools.
Therefore, no VMware tools upgrade is required following virtual XMS deployment.
 The deployed virtual XMS Shares memory resource allocation is set to High. Therefore,
the virtual XMS is given high priority on memory allocation when required.

Note: If a non-standard Shares memory resource allocation is used, the virtual
XMS Shares memory resource allocation should be adjusted post-deployment.

For information pertaining to managing the virtual XMS, refer to the XtremIO Storage Array
User Guide.

Temperature Requirements
Table 18 shows the XtremIO Storage Array environmental operating range requirements.

Table 18 XtremIO Storage Array Environmental Data

Requirement             Normal Environmental Conditions   Improbable Environmental Conditions

Operating Temperature   5°C to 35°C                       35°C to 40°C
Duration                Normal; 100%                      Improbable; 10% (876 hrs/yr)

Note: The environmental data shown in Table 18 complies with ASHRAE A3 standards.

Note: Operating a cluster at temperatures exceeding the recommended operating range
may cause data to become unavailable. Continuous operation of a DAE at a high
temperature may lead to SSD failure.


Shipping and Storage Requirements


Table 19 shows the shipping and storage requirements for XtremIO Storage Array
equipment.

Table 19 Shipping and Storage Environmental Requirements

Condition              Setting

Ambient Temperature    -40°F to 149°F (-40°C to 65°C)
Temperature Gradient   43.2°F/hr (24°C/hr)
Relative Humidity      10% to 90% non-condensing
Maximum Altitude       25,000 ft (7,619.7 m)

Air Quality Requirements


Dell EMC products are designed to be consistent with the requirements of the American
Society of Heating, Refrigeration and Air Conditioning Engineers (ASHRAE) Environmental
Standard Handbook and the most current revision of Thermal Guidelines for Data
Processing Environments, Second Edition, ASHRAE 2009b.
XtremIO Storage Array is best suited for Class 1 datacom environments, which consist of
tightly-controlled environmental parameters, including temperature, dew point, relative
humidity and air quality. These facilities house mission-critical equipment and are
typically fault-tolerant, including the air conditioners.
The data center should maintain a cleanliness level as identified in ISO 14644-1, Class 8,
for particulate dust and pollution control. The air entering the data center should be
filtered with a MERV 11 filter or better. The air within the data center should be
continuously filtered with a MERV 8 or better filtration system. In addition, efforts should
be maintained to prevent conductive particles, such as zinc whiskers, from entering the
facility.
For data centers with gaseous contamination, such as high sulfur content, lower
temperatures and humidity are recommended to minimize the risk of hardware corrosion
and degradation. In general, the humidity fluctuations within the data center should be
minimized. It is also recommended that the data center be positively pressured and have
air curtains on entry ways to prevent outside air contaminants and humidity from entering
the facility.
For facilities below 40% relative humidity, Dell EMC recommends using grounding straps
when contacting the equipment to avoid the risk of electrostatic discharge (ESD), which
can harm electronic equipment.
As part of an ongoing monitoring process for the corrosiveness of the environment, it is
recommended to place copper and silver coupons (per ISA 71.04-1985, Section 6.1
Reactivity) in air streams representative of those in the data center. The monthly reactivity
rate of the coupons should be less than 300 Angstroms. When the monitored reactivity rate
is exceeded, the coupons should be analyzed for material species and a corrective
mitigation process put in place.


Shock and Vibration


Dell EMC products are tested to withstand the shock and random vibration levels shown
in Table 20. The levels apply to all three axes, and should be measured with an
accelerometer on the equipment enclosures within the rack.

Table 20 Maximum Shock and Vibration Levels

Platform Condition                 Response Measurement Level

Non-operational shock              10 G, 7 ms duration
Operational shock                  3 G, 11 ms duration
Non-operational random vibration   0.40 Grms, 5-500 Hz, 30 minutes
Operational random vibration       0.21 Grms, 5-500 Hz, 10 minutes

Systems shipped in an approved Dell EMC package undergo transportation testing to
withstand shock and vibration in the vertical direction only. Table 21 shows the maximum
shock and vibration levels for transportation.

Table 21 Transportation Testing

Packaged System Condition         Response Measurement Level

Transportation shock              10 G, 12 ms duration
Transportation random vibration   1.15 Grms, 1 hour, frequency range 1-200 Hz


Security Requirements
This section describes the security requirements in the data center.

Firewall Settings
 Set the firewall rules prior to installation.
 If the XMS is on a different subnet than the X-Bricks, open TCP, UDP and ICMP firewall
ports in both directions. Refer to Table 24 on page 57.
 Open TCP ports between the XMS and the managing desktop running the XMS GUI.
Refer to Table 24 on page 57.
 Open the services you want to enable from the XMS to the relevant target systems.
Refer to Table 24 on page 57.
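As an illustration of the kind of rules to prepare, the sketch below generates iptables commands for XMS-to-Storage-Controller management traffic, using ports taken from Table 24 (the helper name and IP addresses are hypothetical; adapt the rules to your firewall product):

```python
# Management ports used between the XMS and Storage Controllers (Table 24).
MGMT_TCP_PORTS = (22, 443, 11111, 11112)  # SSH, HTTPS, XMLRPC, IPv6 management

def iptables_rules(xms_ip: str, sc_subnet: str) -> list:
    """Emit ACCEPT rules for XMS -> Storage Controller traffic (illustrative)."""
    rules = [
        f"iptables -A FORWARD -s {xms_ip} -d {sc_subnet} -p tcp --dport {port} -j ACCEPT"
        for port in MGMT_TCP_PORTS
    ]
    # ICMP between the XMS and Storage Controllers is used for diagnostics.
    rules.append(f"iptables -A FORWARD -s {xms_ip} -d {sc_subnet} -p icmp -j ACCEPT")
    return rules
```

The generated commands would be reviewed and applied by the site's network administrator; the full port list in Table 24 remains authoritative.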

Management and IPMI IP Settings


 Allocate at least three, and up to four, IP addresses: one (or two) for the XMS and two
for X-Brick #1’s Storage Controllers. Make sure that both Storage Controller IP
addresses are on the same network subnet, and that the XMS IP address(es) can
access these addresses.

Note: The XMS can manage clusters of the same IP version type only; either IPv4 or IPv6
(but not both types). The XMS’s primary IP address must be of the same version type
as the clusters’ IP addresses. A secondary IP address can be added to the XMS to serve
user connections (for GUI, RESTful API, etc.). The secondary IP address cannot be of the
same IP version type as the primary IP address.

 It is recommended to protect all XtremIO IP addresses against external access.


 If the XMS IP address is to be exposed only to a specific host, you can use a firewall
and open ports to designated hosts, as described in “Firewall Settings”.
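The IP version rule above can be sanity-checked before installation with the standard ipaddress module (the function name is illustrative):

```python
import ipaddress

def same_ip_version(xms_primary: str, cluster_ips) -> bool:
    """True if the XMS primary address and all cluster addresses share one
    IP version (all IPv4 or all IPv6), as required above."""
    version = ipaddress.ip_address(xms_primary).version
    return all(ipaddress.ip_address(ip).version == version for ip in cluster_ips)
```

For example, an IPv4 XMS primary address cannot be paired with IPv6 Storage Controller addresses.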


Connecting the Cluster to Host


The XtremIO cluster can be connected to the host SAN via Fibre Channel (FC) ports,
iSCSI ports, or both.

Connecting the Cluster to the Host Switch via FC

To connect the cluster to the host switch via FC:


1. Connect the external FC cables to Ports 3 and 4 of all Storage Controllers in the cluster,
as shown in Figure 17.


Figure 17 Storage Controller FC Ports

2. Make sure that the other ends of the external FC cables are connected to the customer’s
switch.

Note: For connection via FC, at least one FC port of each Storage Controller in the cluster
must be connected to the host switch. However, it is highly recommended to connect both
FC ports of all Storage Controllers to two separate switches, so that each FC port of each
Storage Controller is connected to a different switch.

Connecting the Cluster to the Host Switch via iSCSI

To connect the cluster to the host switch via iSCSI:


1. Connect the external iSCSI cables to Ports 1 and 2 of all Storage Controllers in the
cluster, as shown in Figure 18. If Ports 3 and 4 are to be configured as iSCSI, you can
connect an external iSCSI cable to each of these ports as well.


Figure 18 Storage Controller iSCSI Ports


2. Make sure that the other ends of the external iSCSI cables are connected to the
customer’s switch.
Note:
 For connection via iSCSI, at least one iSCSI port of each Storage Controller in the cluster must be
connected to the host switch. However, it is highly recommended to connect both iSCSI ports of
all Storage Controllers to two separate switches, so that each iSCSI port of each Storage
Controller will be connected to a different switch.
 Ports 3 and 4 can be configured during the create cluster procedure, to act as 10Gb Ethernet or
16Gb Fibre Channel. For more information, refer to the XtremIO Storage Array Software
Installation and Upgrade Guide.
 If a Storage Controller is configured with four iSCSI ports and only two ports are used, it is
recommended to use Ports 3 and 4 for iSCSI connectivity.

Connecting Storage Controllers for Native Replication


This section provides detailed information for configuring links on Storage Controllers and
IPs for replication from source clusters to destination clusters.

General Native Replication Guidelines


Connect the source and destination clusters according to the following guidelines:
 A single X-Brick cluster can be connected to a maximum of two clusters.
 A multiple X-Brick cluster can be connected to a maximum of four clusters.
 Set the replication ports and IP addresses for replication.
 Dedicated ports must be configured for replication.
 The number of ports that need to be used for replication depends on the replicated
throughput.
 One IP-link can be configured per port (one IP address and one IP-link per
port).
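The cluster-count limits above lend themselves to a simple pre-deployment check (the function names are illustrative):

```python
def max_peer_clusters(num_xbricks: int) -> int:
    """Maximum clusters a cluster may be connected to for replication:
    two for a single X-Brick cluster, four for a multiple X-Brick cluster."""
    return 2 if num_xbricks == 1 else 4

def validate_replication_peers(num_xbricks: int, num_peers: int) -> None:
    """Raise if the planned number of peer clusters exceeds the limit."""
    limit = max_peer_clusters(num_xbricks)
    if num_peers > limit:
        raise ValueError(f"{num_peers} peer clusters exceeds the limit of {limit}")
```

For example, a planned fan-out from a single X-Brick cluster to three destinations would fail this check.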


Choosing the Connectivity


A minimum of two ports per cluster, on different Storage Controllers, is required. You cannot
use both the on-board ports and the optic ports on the same cluster for replication.
For Native Replication, you can choose one of the following configurations:
 On-Board RJ45 Port
 Optic 10GbE Ports (Ports 1 and 2)

With the optic configuration, it is first necessary to modify the ports’ types from iSCSI to
Replication. This is done by using the modify-target command to change the type
to Replication, and providing the IP address and subnet for the replication port.
If the configuration is four iSCSI ports, the preferred ports for Native Replication
are Ports 1 and 2.

Note: For instructions on setting the replication ports, and setting IPs for replication
and modifying the port types from iSCSI to Replication, refer to XtremIO Storage Array
User Guide.

Table 22 and Table 23 show the recommended port configurations that can be defined per
cluster.

Note: Other configurations may be used.

Table 22 One-to-One Configurations (no Fan-In/Fan-Out configurations)

Source/Destination   Single X-Brick   Two X-Brick    Three X-Brick       Four X-Brick

Single X-Brick       2 or 4*          2 or 4*        2, 3* or 4*         2 or 4*
Two X-Brick          2 or 4           2, 4 or 8*     2, 3, 4, 6* or 8*   2, 4 or 8*
Three X-Brick        2, 3 or 4**      2, 3, 4 or 6   2, 3, 6 or 12*      2, 3, 6 or 12*
Four X-Brick         2 or 4**         2, 4 or 8**    2, 3, 6 or 12*      2, 4, 8 or 16*

* Using the optic connectivity at the source cluster is required for this configuration.
** Using the optic connectivity at the destination cluster is required for this configuration.


Table 23 Fan-In/Fan-Out Configurations

Source Cluster    Destination Cluster   Number of Ports   Number of Ports
Configuration     Configuration         at Source         at Destination

Single X-Brick    2 X Single X-Brick    4*                2 per target cluster

Two X-Brick       2 X Single X-Brick    4                 2 per target cluster
                                        8*                4* per target cluster
                  3 X Single X-Brick    6*                2 per target cluster
                  4 X Single X-Brick    8*                2 per target cluster
                  2 X Dual X-Brick      4                 2 per target cluster
                                        8*                4 per target cluster
                  3 X Dual X-Brick      6*                2 per target cluster
                  4 X Dual X-Brick      8*                2 per target cluster

Three X-Brick     2 X Single X-Brick    4                 2 per target cluster
                                        8*                4* per target cluster
                  3 X Single X-Brick    6                 2 per target cluster
                                        12*               4* per target cluster
                  4 X Single X-Brick    8*                2 per target cluster
                  2 X Dual X-Brick      4                 2 per target cluster
                                        6                 3 per target cluster
                                        8*                4* per target cluster
                  3 X Dual X-Brick      6                 2 per target cluster
                                        12*               4* per target cluster
                  4 X Dual X-Brick      8*                2 per target cluster
                                        12*               3 per target cluster
                  2 X Three X-Brick     4                 2 per target cluster
                                        6                 3 per target cluster
                                        12*               6* per target cluster

Four X-Brick      2 X Single X-Brick    4                 2 per target cluster
                                        8*                4* per target cluster
                  3 X Single X-Brick    6                 2 per target cluster
                                        12*               4* per target cluster
                  4 X Single X-Brick    8                 2 per target cluster
                                        16*               4* per target cluster
                  2 X Dual X-Brick      4                 2 per target cluster
                                        8                 4 per target cluster
                  3 X Dual X-Brick      6                 2 per target cluster
                  4 X Dual X-Brick      8                 2 per target cluster
                                        16*               4 per target cluster
                  2 X Three X-Brick     4                 2 per target cluster
                                        6                 3 per target cluster
                                        12*               6* per target cluster

* Using the optic connectivity at the source cluster is required for this configuration.


Recommended Port to Port Configurations:

Single X-Brick to Single X-Brick Configuration

Single X-Brick to Two Single X-Bricks Configuration

Dual X-Brick to Dual X-Brick Configuration Options

Dual X-Brick to Two Single X-Bricks Configuration Options


Dual X-Brick to Three Single X-Bricks Configuration

Dual X-Brick to Four Single X-Bricks Configuration

Dual X-Brick to Two Dual X-Bricks Configuration Options


Dual X-Brick to Three Dual X-Bricks Configuration

Dual X-Brick to Four Dual X-Bricks Configuration

Three X-Brick to Three X-Bricks Configuration Options


Three X-Brick to Two Single X-Bricks Configuration Options

Three X-Brick to Three Single X-Bricks Configuration Options

Three X-Brick to Four Single X-Bricks Configuration


Three X-Brick to Two Dual X-Bricks Configuration Options


Three X-Bricks to Three Dual X-Brick Configuration Options


Three X-Bricks to Four Dual X-Brick Configuration Options

Three X-Bricks to Two Three X-Brick Configuration Options


Three X-Bricks to Two Three X-Brick Configuration Options (Continued)


Four X-Bricks to Four X-Brick Configuration Options

Four X-Bricks to Two Single X-Brick Configuration Options


Four X-Bricks to Three Single X-Brick Configuration Options


Four X-Bricks to Four Single X-Brick Configuration Options

Four X-Bricks to Two Dual X-Brick Configuration Options


Four X-Bricks to Three Dual X-Brick Configuration Options


Four X-Bricks to Four Dual X-Brick Configuration Options


Four X-Bricks to Two Three X-Brick Configuration Options


Remote Support Requirements


The XMS Remote Support feature is integrated with remote support technologies such as
Dell EMC® SupportAssist Enterprise (SAE), Secure Remote Support (SRS) Virtual
Edition (SRS VE), the legacy type SRS gateway, and Connect Home Email. This section
provides details on the requirements for using Remote Support on the XMS with any of
these remote support technologies.

Note: As of version 6.1.0, IP Client and FTPS are no longer supported. If the Remote
Support connectivity of the XMS is currently configured to use IP Client or FTPS, contact
Global Technical Support for assistance in migrating to a supported Remote Support
configuration.

SAE/SRS provides a secure, IP-based, distributed remote support solution that enables
command, control and visibility of remote support access.
SAE/SRS configuration options with XtremIO are SAE & SRS VE gateway and legacy type
SRS gateway. These two configuration options provide Connect-In and Connect-Home
functionalities. The SAE & SRS VE configuration option also provides Managed File
Transfer (MFT) support, ESRS Advisories, and CloudIQ support. SAE & SRS VE is the
recommended configuration option with XtremIO.
If the customer declines to use SAE/SRS as a connectivity solution, the XMS can be
configured for connect-home only. The connect-home only option with XtremIO is Email.
This option does not provide connect-in functionality to the XMS, but ensures that
Dell EMC receives regular configuration reports and product alert information from the
customer’s XtremIO environment.

Note: The use of a connect-home only configuration with XtremIO is considered an
exception and should be pre-approved.


Preconditions for Deploying SAE & SRS VE Gateway and Legacy Type SRS Gateway
The following are preconditions for deploying the SAE & SRS VE gateway configuration
and the legacy type SRS gateway configuration on XtremIO at a customer site with SAE &
SRS VE or legacy-based GW systems:
 The customer should agree to deploy SAE or SRS as part of the XtremIO deployment. It
is recommended to use an SAE or SRS-VE gateway available on-site. An alternative
option is to use an on-site legacy type SRS gateway.
 The customer should open HTTPS connection between the XMS and the SAE, SRS VE
or legacy type SRS gateway. Refer to Table 24 on page 57.
 The XtremIO cluster should be in the Dell EMC Install Base (i.e. assigned by Dell EMC
Manufacturing with a formal PSNT).
 The customer should have a SAE or SRS VE on site (or legacy type SRS gateway).
 For Gateway Connect configuration with an XMS, the customer should have deployed
one of the following:
• Dell EMC SRS VE 3.20 (or later) within their VMware ESX Server/Windows Hyper V
environment.
• SAE 4.0.5 (or greater) within their VMware ESX Server/Windows Hyper V
environment.
When these preconditions are fully-met, it is possible to proceed and deploy SAE/SRS
integration with XtremIO as part of the installation.

Note: SupportAssist Enterprise, SRS Virtual Edition (VE) and legacy type SRS gateway
deployment, configuration, provisioning and upgrade are outside the scope of the
XtremIO system installation.


Preconditions for Deploying Email Configuration


The following are preconditions for deploying connect-home on XtremIO, using the Email
configuration:
 The customer should deploy connect-home integration as part of the XtremIO
installation, using the Email configuration on the XMS.
 The customer should open an Email connection between the XMS and the customer
SMTP server. Refer to Table 24 on page 57.
 The customer should configure the SMTP server to prevent the XMS from adding a
footer to outgoing email messages.
 The XtremIO cluster should be in the Dell EMC Install Base (i.e. assigned by Dell EMC
Manufacturing with a formal PSNT).
Once these preconditions are fully met, it is possible to proceed and deploy connect-home
integration with the Email configuration as part of your XtremIO installation.
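Before installation, reachability of the customer SMTP server from the XMS network can be verified with a minimal banner check such as the sketch below (the function name and any hostnames are placeholders; this is not an XtremIO tool):

```python
import socket

def smtp_banner_ok(host: str, port: int = 25, timeout: float = 5.0) -> bool:
    """Return True if an SMTP server on host:port answers with a 220 greeting."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            banner = sock.recv(256).decode("ascii", errors="replace")
        return banner.startswith("220")
    except OSError:  # refused, unreachable, or timed out
        return False
```

For example, `smtp_banner_ok("smtp.example.com")` confirms that the server greets correctly on TCP port 25, the port the XMS uses for connect-home Email (see Table 24).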


Ports and Protocols


Table 24 describes the ports that are used by the XtremIO Storage Array.

Table 24 Ports and Protocols

Protocol    Port          Service        Direction                       Notes

UDP         514           Syslog         XMS -> External Syslog
TCP         389           LDAP           XMS -> LDAP Server
TCP         636           LDAPS          XMS -> LDAP Server
TCP         3268          LDAP           XMS -> LDAP Server
TCP         3269          LDAPS          XMS -> LDAP Server
TCP         25            SMTP           XMS -> SYR SMTP Server          When Email connect-home only
                                                                         configuration is used
UDP         162           SNMP TRAP      XMS -> SNMP Server
TCP         443 & 9443    HTTPS          XMS <-> SAE/SRS GW Server       To/from SAE/SRS Gateway
                                         (SAE/SRS VE or legacy           (bi-directional connectivity required
                                         type SRS)                       from XMS to SAE/SRS GW)
ICMP        -             -              XMS <-> SAE/SRS GW Server       To/from SAE/SRS Gateway
                                         (SAE/SRS VE or legacy           (bi-directional connectivity required
                                         type SRS)                       from XMS to SAE/SRS GW)
TCP         22            SSH            SAE/SRS VE gateway              Allow remote support CLI (via SSH) to
            443           HTTPS          (SAE/SRS VE or legacy           the XMS from the SAE, SRS-VE, or
                                         type SRS) -> XMS                legacy type SRS gateways
TCP         11111         XMLRPC         XMS -> XtremIO Storage          Not to be used for Replication or
                                         Controller                      iSCSI TCP ports
TCP         11000-11031   XMLRPC         XMS -> XtremIO Storage          Used for Cluster expansion and FRU
                                         Controllers                     procedures
TCP         11112         IPv6           XMS <-> XtremIO Storage
                                         Controllers
TCP         22            SSH            MGMT Desktop -> XMS             Allow XMS shell access
TCP         22            SSH            XMS -> XtremIO Storage
                                         Controller
TCP         22000-22031   SSH            XMS -> XtremIO Storage          Used for Cluster expansion and FRU
                                         Controllers                     procedures
                                                                         Not to be used for Replication or
                                                                         iSCSI TCP ports
TCP         443           HTTPS          MGMT Desktop -> XMS             Used for XMCLI, RESTful API
UDP         123           NTP            XMS -> NTP Server +
                                         XtremIO Storage
                                         Controller -> XMS
UDP & TCP   53            DNS            XMS -> DNS Server
ICMP        -             -              XMS -> XtremIO Storage          Used for diagnostic purposes only
                                         Controller
TCP         3260          iSCSI          Hosts -> Storage                iSCSI TCP port can be altered if
                                         Controllers                     necessary
TCP         23000-23031   IPMI           XMS -> Storage                  Used for Cluster expansion and FRU
                                         Controllers                     procedures
                                                                         Not to be used for Replication or
                                                                         iSCSI TCP ports
TCP         443           HTTPS          XtremIO Storage                 Used for service procedures with
            22            SSH            Controller -> XMS               Technician Advisor
CIM-XML     5989          ECOM Daemon    XMS <-> Remote Client           www.dmtf.org
SLP         427           SLP Daemon     XMS <-> Remote Client           www.openslp.org
TCP         1758          Replication    Source Storage Controller       For each replication IP
            5016                         Replication Port <-> Target     Port 1758 can be changed, using the
            5201                         Storage Controller              following XMCLI command:
                                         Replication Port                modify-clusters-parameters
                                                                         replication-tcp-port
TCP         443           HTTPS          Source XMS <-> Target XMS       Connection between peer XMS
                                                                         managing the replication
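Table 24 can also serve as a pre-installation connectivity checklist. A minimal sketch for probing a set of TCP ports from the management desktop is shown below (the helper name and any hostnames are placeholders):

```python
import socket

def unreachable_tcp_ports(host: str, ports, timeout: float = 3.0) -> list:
    """Return the TCP ports on `host` that refuse or time out on connection."""
    failed = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass  # connection succeeded; port is reachable
        except OSError:
            failed.append(port)
    return failed
```

For example, `unreachable_tcp_ports("xms.example.com", [22, 443])` should return an empty list when the XMS SSH and HTTPS services are reachable from the management desktop.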

Provisioning WWNs and IQNs


Once you have obtained the cluster PSNT (serial number), you can provision WWPNs and
IQNs in order to preconfigure FC zoning and IP subnetting. Refer to the Dell EMC XtremIO
Storage Array Host Configuration Guide for zoning and subnetting best practices.
For further details, refer to Dell EMC KB 203234 (https://support.emc.com/kb/203234).


Copyright © 2020 Dell Inc. or its subsidiaries. All rights reserved. Published in the USA.

Published April 13, 2021

Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. Dell makes no representations or warranties of any kind with respect to the
information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use,
copying, and distribution of any Dell software described in this publication requires an applicable software license.

Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective
owners.

For the most up-to-date regulatory document for your product line, go to Dell EMC Online Support (https://support.emc.com).
