XtremIO X2 Site Preparation Guide: XMS 6.3 to 6.3.3, XIOS 6.0.0 to 6.3.3 (P/N 302-005-938, Rev 05)
Topics include:
Overview................................................................................................................... 2
Preparing the Site for Installing the Cluster within a "Dell EMC Titan-D" Rack............ 3
Preparing the Site for Installing the Cluster into a Non-"Dell EMC Titan-D" Rack ....... 17
Hardware Requirements.......................................................................................... 28
Physical XMS Requirements.................................................................................... 29
Virtual XMS Requirements....................................................................................... 29
Temperature Requirements ..................................................................................... 31
Shipping and Storage Requirements ....................................................................... 32
Security Requirements ............................................................................................ 34
Connecting the Cluster to Host................................................................................ 35
Remote Support Requirements ............................................................................... 54
Ports and Protocols................................................................................................. 57
Provisioning WWNs and IQNs.................................................................................. 58
Note: This document was accurate at publication time. Go to Dell EMC Online Support
(https://support.emc.com) to ensure that you are using the latest version of this
document.
This document provides specific information on XtremIO clusters that are managed by
XMS versions 6.3.0 to 6.3.2. For XtremIO clusters that are managed by other versions,
refer to the appropriate XtremIO product documents which are provided for the
corresponding versions.
Overview
Note: The customer should verify that all site preparation requirements in this guide
(including, but not limited to, adequate HVAC, power, floor space, and security) are
completely met before the XtremIO cluster is installed, and that the specified operating
environment is maintained for optimal system operation.
A Dell EMC® XtremIO Storage Array requires a properly equipped computer room with
controlled temperature and humidity, proper airflow and ventilation, proper power and
grounding, cluster cable routing facilities, and fire-suppression equipment.
To verify that the computer room meets the requirements for the XtremIO Storage Array,
confirm that:
• Any customer concerns have been addressed through planning sessions between Dell
  EMC and the customer.
• The site meets the requirements described in this document.
• The site contains LAN connections for remote service operation.
Note: Splitting an existing XtremIO cluster into multiple clusters, or merging multiple
existing XtremIO clusters into a single cluster, is not supported. This is due to the very
high logistical overhead required for performing such non-standard procedures.
Preparing the Site for Installing the Cluster within a "Dell EMC Titan-D" Rack
Rack Clearance
The Dell EMC® rack ventilates from front to back. You must provide adequate clearance at
the rear to service and cool the XtremIO Storage Array components. Depending on
component-specific connections within the rack, the available power cord length may be
somewhat shorter than the 15-foot standard. Figure 1 shows the required clearance for
the Dell EMC rack.
Figure 1 Dell EMC Rack Clearance Dimensions:
Width: 24 in. (61 cm)
Height: 75 in. (190.5 cm)
Depth: 44 in. (111.76 cm)
Power cord length: 15 ft. (4.57 m)
Front access: 48 in. (121.92 cm)
Rear access: 39 in. (99.1 cm)
Overall (as shown): 81.00 in. (2.06 m)
Make sure to leave approximately 96 inches (2.43 m) of clearance at the rear of the rack
in order to unload the unit and roll it off the pallet, as shown in Figure 3.
Rack Stabilizing
If you intend to secure the optional anti-tip bracket to your site floor, prepare the location
for the mounting bolts. The anti-tip bracket provides an extra measure of anti-tip security.
One or two kits may be used. For racks with components that slide, we recommend that
you use two kits.
Figure 4 Anti-Tip Bracket (dimensions in inches)
Note: The values shown in Table 1 are for X2-R configurations and do not include the rack
weight. For X2-S configurations of two, three, and four X-Bricks, subtract 18.7 lb
(8.5 kg). For X2-T configurations (single X-Brick), subtract 18 lb (8.16 kg).
Install the rack in raised or non-raised floor environments that are capable of supporting
at least 2,600 lb (1,180 kg) per rack. Your system may weigh less than this, but extra
floor support margin is required to accommodate equipment upgrades and/or reconfiguration.
In a raised floor environment:
Dell EMC recommends 24 x 24 in. (60 x 60 cm) heavy-duty, concrete-filled steel floor
tiles.
Use only floor tiles and stringers rated to withstand:
• Concentrated loads of two casters or leveling feet, each weighing up to 1,000 lb
(454 kg).
• Minimum static ultimate load of 3,000 lb (1,361 kg).
• Rolling loads of 1,000 lb (454 kg). On floor tiles that do not meet the 1,000 lb
rolling load rating, use coverings such as plywood to protect floors during system
roll.
Position adjacent racks with no more than two casters or leveling feet on a single floor
tile.
Note: Cutting tiles per specifications as shown in Figure 5 on page 7 ensures the
proper caster wheel placement.
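To illustrate the arithmetic behind these ratings, the following minimal sketch (Python) compares the per-caster load of a fully loaded rack against the tile ratings listed above; it assumes an even weight distribution across four casters, which is a simplification:

# Minimal sketch: floor-tile loading check for a populated rack.
# Ratings are taken from this guide; the even weight distribution
# across four casters is a simplifying assumption.

RACK_WEIGHT_LB = 2600                # floor-support design target per rack
CASTERS_PER_RACK = 4
TILE_CONCENTRATED_RATING_LB = 1000   # per caster or leveling foot
TILE_ROLLING_RATING_LB = 1000        # rolling load rating

def per_caster_load(rack_weight_lb: float, casters: int = CASTERS_PER_RACK) -> float:
    """Approximate load per caster, assuming even weight distribution."""
    return rack_weight_lb / casters

load = per_caster_load(RACK_WEIGHT_LB)
print(f"Approximate load per caster: {load:.0f} lb")     # 650 lb
if load > TILE_CONCENTRATED_RATING_LB:
    print("Concentrated-load rating exceeded; use higher-rated tiles.")
if load > TILE_ROLLING_RATING_LB:
    print("Use plywood or a similar covering to protect tiles during system roll.")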
Cable Routing
Dell EMC recommends installing the equipment in a room with a raised floor, to
accommodate the cabling under the floor.
If you are installing the rack onto a raised floor, cut a cable-access hole in one tile as
shown in Figure 5.
Figure 5 Floor Tile Cutout Specifications (top view of a 24 in. [61 cm] square tile)
Cutouts in 24 x 24 in. tiles must be no more than 8 inches (20.3 cm) wide by 6 inches
(15.2 cm) deep, and centered on the tiles, 9 inches (22.9 cm) from the front and rear and
8 inches (20.3 cm) from the sides. Since cutouts weaken the tile, you can minimize
deflection by adding pedestal mounts adjacent to the cutout. The number and placement
of additional pedestal mounts relative to a cutout must be in accordance with the floor
tile manufacturer's recommendations.
When positioning the rack, take care to avoid moving a caster into a floor tile cutout.
Make sure that the combined weight of any other objects in the data center does not
compromise the structural integrity of the raised floor and/or the sub-floor (non-raised
floor).
Dell EMC recommends that a certified data center design consultant inspect your site to
ensure that the floor is capable of supporting the system and surrounding weight. Note
that the actual rack weight depends on your specific product configuration. You can
calculate your total using the tools available at:
https://powercalculator.emc.com/Main.aspx
Rack caster and leveling feet locations (bottom, top, and right-side views; dimensions in
inches), with caster swivel-position details (Details A and B). Key dimensions: caster
swivel diameter 1.750; center of caster wheel 3.620 from the outer reference surface;
17.102 (minimum) to 20.580 (maximum) to the center of the caster from the outer surface
of the rear door, and 30.870 (minimum) to 32.620 (maximum) at the front, based on the
swivel position of the caster wheel.
The customer is ultimately responsible for ensuring that the data center floor on which the
Dell EMC system is to be configured is capable of supporting the system weight, whether
the system is configured directly on the data center floor, or on a raised floor supported by
the data center floor. Failure to comply with these floor-loading requirements could result
in severe damage to the Dell EMC system, the raised floor, subfloor, site floor and the
surrounding infrastructure. Notwithstanding anything to the contrary in any agreement
between Dell EMC and customer, Dell EMC fully disclaims any and all liability for any
damage or injury resulting from customer's failure to ensure that the raised floor, subfloor
and/or site floor are capable of supporting the system weight as specified in this guide.
The customer assumes all risk and liability associated with such failure.
Power Requirements
Depending on the rack configuration and the input AC power source (single-phase or
three-phase, as listed in Table 2 and Table 4 on page 11), the rack requires two to eight
independent power sources. To determine your site requirements, use the published
technical specifications and device rating labels for all non-Dell EMC equipment to
determine the current draw of the devices in each rack; the total current draw for each
rack can then be calculated. For Dell EMC products, refer to the "Dell EMC Power
Calculator" located at http://powercalculator.emc.com/XtremIO.aspx and select the
calculator for the XtremIO hardware currently in use.
Note: If the data center uses low line input (100/110 Volts), an RPQ should be submitted.
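As a simple illustration of this calculation, the following minimal sketch (Python) sums nameplate power over the devices in a rack and converts the total to current draw. The device names and wattages are hypothetical placeholders, not XtremIO ratings; use the published specifications and the Dell EMC Power Calculator for actual values:

# Minimal sketch: estimate total current draw per rack from device
# rating labels. Wattages below are hypothetical placeholders.

NOMINAL_VOLTAGE = 208.0   # volts; substitute your site's nominal voltage

devices_in_rack = {       # hypothetical nameplate power, in watts
    "storage-controller-1": 450,
    "storage-controller-2": 450,
    "dae": 600,
    "infiniband-switch": 150,
}

total_watts = sum(devices_in_rack.values())
total_amps = total_watts / NOMINAL_VOLTAGE   # I = P / V (unity power factor assumed)
print(f"Total power: {total_watts} W; current draw: {total_amps:.1f} A "
      f"at {NOMINAL_VOLTAGE:.0f} V")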
Table 2 Single-Phase AC Power Requirements

                       North America                 International / Australia
Input nominal voltage  200-240 VAC +/-10% L-L nom    220-240 VAC +/-10% L-L nom
Frequency              50-60 Hz                      50-60 Hz
Circuit breakers       30 A                          32 A
Plug / receptacle      NEMA L6-30P / NEMA L6-30R     -

Table 4 Three-Phase AC Power Requirements

                       North America (Delta)         International (Wye)
Input nominal voltage  200-240 VAC +/-10% L-L nom    220-240 VAC +/-10% L-L nom
Frequency              50-60 Hz                      50-60 Hz
Circuit breakers       50 A                          32 A

Note: The interface connector options for the Delta and Wye three-phase
PDUs are listed in Table 5.

Table 5 Three-Phase PDU Interface Connector Options (excerpt)
• FLY lead (a CE can add the appropriate plug based on the customer receptacle)
• International: 038-004-778, 3-Phase Wye, BLK Sursum K52S30A or Hubbell C530C6S (Black)
Power Consumption
Table 8 and Table 9 show the cluster power consumption and heat dissipation. The
calculations in these tables provide typical and maximum power and heat dissipation
values. Ensure that the installation site meets both the typical and the worst-case
requirements.
Note: For specific environmental conditions, refer to the “Dell EMC Power Calculator”
located at http://powercalculator.emc.com/XtremIO.aspx and select the calculator for the
XtremIO hardware currently in use.
Note: The figures refer to cluster configurations not including a physical XMS. The figures
for an XMS are detailed separately.
Note: A cluster's DAE can have varying SSD configurations. The figures refer to cluster
configurations with fully-populated DAEs.
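Because heat dissipation follows directly from power consumption (1 W is approximately 3.412 BTU/hr), the figures in these tables can be cross-checked with a one-line conversion. A minimal sketch (Python; the example wattage is a placeholder, not a published XtremIO figure):

# Convert power consumption (watts) to heat dissipation (BTU/hr).
# 1 W = 3.412142 BTU/hr; the example wattage is a placeholder.

WATTS_TO_BTU_PER_HR = 3.412142

def heat_dissipation_btu_hr(watts: float) -> float:
    """Heat dissipated by equipment drawing the given power, in BTU/hr."""
    return watts * WATTS_TO_BTU_PER_HR

print(f"1650 W -> {heat_dissipation_btu_hr(1650):.0f} BTU/hr")  # about 5630 BTU/hr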
PDU Configuration
Factory-assembled racks are shipped in a “four PDU” configuration.
Table 9 describes the number of line drops that are required per zone, according to the
number of X-Bricks in the cluster.

Cluster Size              Single-Phase   Three-Phase Delta   Three-Phase Wye
Single or Two X-Bricks    1              1                   1
Three or Four X-Bricks    2              1                   1
Note: The PDU configurations do not include a power On/Off switch. Make sure the (four)
circuit breaker switches on each PDU are UP, in the OFF (0) position until you are ready to
supply AC power. Make sure that the power is OFF before disconnecting power from a PDU.
Single-phase PDU outlet assignments (PDU = 100-563-477, Titan-D, Single Phase):
• X-Bricks 1 and 2, and X-Bricks 3 and 4, are powered from the first line cord; a second
  line cord powers X-Bricks 5 and 6, and X-Bricks 7 and 8.
• An XMS (if present) is connected to the console outlet at the rear of the PDU.
• Each PDU circuit breaker (CB1 through CB11) is rated at 16 A.
Preparing the Site for Installing the Cluster into a Non-"Dell EMC
Titan-D" Rack
When the XtremIO Storage Array is to be installed into a customer rack, the cluster is
delivered in non-standard packaging (also known as a mini-rack). The package comes in
a single size of 12U. Each mini-rack can contain one or two X-Bricks; therefore, single
and two X-Brick clusters arrive in a single mini-rack, and three and four X-Brick clusters
arrive in two mini-racks. If the customer has ordered a physical XMS, the XMS unit is
added in a separate box on top of the mini-rack, as shown in Figure 10.
Figure 10 Mini-Rack Package (dimensions: 40.75 in. [103.5 cm] x 33 in. [83.8 cm] x
42.5 in. [108 cm])
Rack Requirements
Non-”Dell EMC Titan-D” racks must meet the following requirements, depending on PDU
orientation:
Figure 11 shows a non-"Dell EMC Titan-D" rack with rear-facing PDUs.
Figure 11 Rear of Rack with Rear-Facing PDUs Service Clearance (Top View)
Non-"Dell EMC Titan-D" racks with rear-facing PDUs must meet the requirements shown in
Table 10.

Table 10 Rack Requirements for Rear-Facing PDUs (callouts refer to Figure 11)
a  Distance between the front surface of the rack and the front NEMA rail.
c  Distance between the rear surface of the chassis and the rear surface of the rack;
   minimum = 2.5 in. (63.5 mm).
d  If a front door exists, distance between the inner front surface of the front door and
   the front NEMA rail; minimum = 2.5 in. (63.5 mm).
e  Distance between the inside surface of the rear post and the rear vertical edge of
   the chassis and rails; a minimum of 2.5 in. (63.5 mm) is recommended.
   Note: If there is no rack post, the minimum recommended distance is measured to the
   inside surface of the rack.
g  Minimum = distance between the NEMA rails, 19 in. (482.6 mm), plus 2 x "e";
   minimum = 24 in. (609.6 mm).
h  Minimum = distance between the NEMA rails + "e" + "f" = 19 in. + 2.5 in. + "f" =
   21.5 in. + "f".
If not all of the requirements for a non-"Dell EMC Titan-D" rack (described in Table 10)
are met, and the customer wishes to continue with a non-compliant rack, an RPQ process
must be initiated.
Figure 12 Rear of Rack with Center-Facing PDUs Service Clearance (Top View)
Non-"Dell EMC Titan-D" racks with center-facing PDUs must meet the requirements shown
in Table 11.

Table 11 Rack Requirements for Center-Facing PDUs (callouts refer to Figure 12)
a  Distance between the front surface of the rack and the front NEMA rail.
c  Distance between the rear surface of the chassis and the rear surface of the rack;
   minimum = 2.5 in. (63.5 mm).
d  If a front door exists, distance between the inner front surface of the front door and
   the front NEMA rail; minimum = 2.5 in. (63.5 mm).
e  Distance between the inside surface of the rear post and the rear vertical edge of the
   chassis and rails; a minimum of 2.5 in. (63.5 mm) is recommended.
   Note: If there is no rack post, the minimum recommended distance is measured to the
   inside surface of the rack.
g  PDU depth + 3 in. (76.2 mm) of AC cable bend clearance. Racks equipped with
   center-facing PDUs that fail to meet this requirement are permitted, provided that the
   NEMA rail/product area is not compromised; however, some outlets may not be
   accessible. If those outlets are required, an RPQ request for in-rack PDUs must be
   submitted. In all cases, access to the DAEs' PSUs must remain unblocked.
j  Minimum rack depth = "i" + "c" = 36.5 in. (927.1 mm) + 2.5 in. (63.5 mm) =
   39 in. (990.6 mm).
Note: A rear door need not be present. Regardless, all hardware must be within the
boundaries of the rack.
If not all of the requirements for a non-"Dell EMC Titan-D" rack (described in Table 11)
are met, and the customer wishes to continue with a non-compliant rack, an RPQ process
must be initiated.
Essential Requirements:
• The rack space requirements of the different XtremIO Storage Array configurations are
  as follows:
  • A single X-Brick cluster requires 5U of contiguous rack space.
  • A two X-Brick cluster requires 11U of contiguous rack space.
  • A three X-Brick cluster requires 16U of contiguous rack space.
  • A four X-Brick cluster requires 20U of contiguous rack space.
  Note: An optional physical XMS may occupy the upper-most U in the rack.
• AC power:
  • 200-240 VAC +/-10%, 50-60 Hz, single-phase or three-phase power connection.
    Table 4 on page 11 shows the three-phase power connection requirements.
  • Redundant power zones, one on each side of the rack. Each power zone should
    have capacity for the maximum power load (refer to Table 14 on page 27).
  • The provided power cables suit AC outlets located within 24 inches of each
    component receptacle (this does not include the X2-R InfiniBand Switches, whose
    AC inlets are located in the front panel and which are therefore provided with
    longer power cables).
  Note: If you are using longer power cables, make sure that they are of an equivalent
  rating and quality.
Cluster Weight
Table 12 shows the approximate weights of XtremIO X2-R, X2-S and X2-T configurations,
when fully populated.
Configuration            X2-R Approx. Weight   X2-S Approx. Weight   X2-T Approx. Weight
Single X-Brick cluster   190 lb (86 kg)        190 lb (86 kg)        176 lb (79.8 kg)
Two X-Brick cluster      468.1 lb (212.7 kg)   431.4 lb (195.7 kg)   N/A
Three X-Brick cluster    664.8 lb (301.9 kg)   629 lb (284.9 kg)     N/A
Four X-Brick cluster     861.2 lb (391 kg)     824.5 lb (374 kg)     N/A
Note: The values shown in Table 12 do not include the rack weight. For configurations that
include a physical XMS, add 33 lb (15 kg).
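For planning purposes, the per-configuration weights in Table 12 and the physical-XMS adjustment in the note above can be combined programmatically. A minimal sketch (Python; the values are transcribed from Table 12 and the note, and the lookup raises a KeyError for combinations that do not exist, such as a multi-X-Brick X2-T):

# Approximate cluster weight per Table 12, optionally adding a physical
# XMS (+33 lb / 15 kg). Rack weight is excluded, as in Table 12.

CLUSTER_WEIGHT_LB = {   # (model, X-Brick count) -> approximate weight in lb
    ("X2-R", 1): 190.0, ("X2-R", 2): 468.1, ("X2-R", 3): 664.8, ("X2-R", 4): 861.2,
    ("X2-S", 1): 190.0, ("X2-S", 2): 431.4, ("X2-S", 3): 629.0, ("X2-S", 4): 824.5,
    ("X2-T", 1): 176.0,
}
PHYSICAL_XMS_LB = 33.0

def total_weight_lb(model: str, xbricks: int, physical_xms: bool = False) -> float:
    """Approximate cluster weight in pounds, excluding the rack itself."""
    weight = CLUSTER_WEIGHT_LB[(model, xbricks)]
    return weight + (PHYSICAL_XMS_LB if physical_xms else 0.0)

print(f"{total_weight_lb('X2-S', 3, physical_xms=True):.1f} lb")  # 662.0 lb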
Component Dimensions
Table 13 shows the dimensions, weight and rack space of each major component in the
XtremIO Storage Array.
Disk Array Enclosure (DAE): Height 88.9 mm (3.5 in.); Width 438 mm (17.25 in.);
Depth 927.1 mm (36.5 in.); rack space 2U; weight 97.0 lb (44 kg)¹
Components Stacking
The following figures show the component stacking order (top to bottom) according to the
purchased configuration.
A single X-Brick cluster requires 5U of contiguous rack space. Figure 13 shows the
stacking of a single X-Brick cluster:
XMS (Optional)
X1-SC2
X1-SC1
X1-DAE
Place Holder

Two X-Brick cluster stacking:
XMS (Optional)
IBSW-2
IBSW-1
X2-DAE
X2-SC2
X2-SC1
Cable Management Duct
X1-SC2
X1-SC1
X1-DAE
Place Holder

Three X-Brick cluster stacking:
XMS (Optional)
X3-SC2
X3-SC1
X3-DAE
IBSW-2
IBSW-1
X2-DAE
X2-SC2
X2-SC1
Cable Management Duct
X1-SC2
X1-SC1
X1-DAE
Place Holder

Four X-Brick cluster stacking:
XMS (Optional)
X4-DAE
X4-SC2
X4-SC1
Cable Management Duct
X3-SC2
X3-SC1
X3-DAE
IBSW-2
IBSW-1
X2-DAE
X2-SC2
X2-SC1
Cable Management Duct
X1-SC2
X1-SC1
X1-DAE
Place Holder
Power Requirements
Table 14 details the power requirements for each configuration, as well as the number of
IEC 320-C13 outlets required on each power zone.
Note: For specific environmental conditions, refer to the “Dell EMC Power Calculator”
located at http://powercalculator.emc.com/XtremIO.aspx and select the calculator for the
XtremIO hardware currently in use.
Note: The figures refer to cluster configurations, not including a physical XMS. The
requirements for an XMS are detailed separately.
XMS: 200 W; one IEC 320-C13 outlet required on each power zone.
Table 15 details the power consumption and socket related data for the cluster
components.
Hardware Requirements
Table 16 shows the hardware requirements for each XtremIO Storage Array configuration.
Note: For XtremIO virtual XMS pre-deployment requirements, refer to "Virtual XMS
Requirements".
Note: For XtremIO physical XMS pre-deployment requirements, refer to "Physical XMS
Requirements".
Virtual XMS Requirements
Table 17 Virtual XMS VM Configurations per the Expected Total Number of Volumes
Note: It is possible to initially configure the virtual XMS per the Regular
configuration, and at a later stage, adjust the virtual XMS to the Expanded
configuration. For details on expanding the virtual XMS configuration, refer to the
XtremIO Storage Array User Guide.
Note: Shared storage used in this case should not originate from the XtremIO cluster
managed by the Virtual XMS.
Network connectivity: The virtual XMS should be located in the same Local Area
Network (LAN) as the XtremIO cluster.
Host: The virtual XMS VM should be deployed on a single host running ESX 5.x or 6.x
(or on more hosts, if virtual XMS high availability is required).
Note: XtremIO Storage Array supports both ESX and ESXi. For simplification, all
references to ESX server/host apply to both ESX and ESXi, unless stated otherwise.
The host should be on VMware vSphere HCL approved hardware and meet the
following configuration requirements:
• Single-socket, dual-core CPU
• One 1 GbE NIC
• Redundant power supplies
Other Specifications
The OVA package from which the virtual XMS is deployed contains VMware tools.
Therefore, no VMware tools upgrade is required following virtual XMS deployment.
The deployed virtual XMS Shares memory resource allocation is set to High; therefore,
the virtual XMS is given high priority on memory allocation when required.
Note: If a non-standard Shares memory resource allocation is used, the virtual
XMS Shares memory resource allocation should be adjusted post-deployment.
For information pertaining to managing the virtual XMS, refer to the XtremIO Storage Array
User Guide.
Temperature Requirements
Table 18 shows the XtremIO Storage Array environmental operating range requirements.
Note: The environmental data shown in Table 18 complies with ASHRAE A3 standards.
Shipping and Storage Requirements
Systems that are mounted on an approved Dell EMC package have completed
transportation testing to withstand shock and vibration in the vertical direction only.
Table 21 shows the respective maximum shock and vibration values that must not be
exceeded.
Security Requirements
This section describes the security requirements in the data center.
Firewall Settings
Set the firewall rules prior to installation:
• If the XMS is on a different subnet than the X-Bricks, open TCP, UDP, and ICMP firewall
  ports in both directions. Refer to Table 24 on page 57.
• Open TCP ports between the XMS and the managing desktop running the XMS GUI.
  Refer to Table 24 on page 57.
• Open the services that you want to enable from the XMS to the relevant target systems.
  Refer to Table 24 on page 57.
Note: The XMS can manage clusters of the same IP version type only, either IPv4 or IPv6
(but not both types). The XMS's primary IP address must be of the same version type
as the clusters' IP address version type, IPv4 or IPv6. A secondary IP address
can be added to the XMS to serve user connections (for the GUI, RESTful API, etc.). The
secondary IP address cannot be of the same IP version type as the primary IP address.
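Once the firewall rules are set, a quick TCP reachability test can confirm that the required ports (listed in Table 24 on page 57) are open between the relevant endpoints. The following minimal sketch uses only the Python standard library; the XMS address and the port list are placeholder assumptions for your environment:

# Minimal sketch: verify TCP reachability of XMS-related ports after
# setting the firewall rules. The address and ports are placeholders;
# take the authoritative port list from Table 24.

import socket

XMS_ADDRESS = "xms.example.local"   # placeholder: your XMS hostname or IP
PORTS_TO_CHECK = [22, 443]          # e.g., SSH and HTTPS from the MGMT desktop

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS_TO_CHECK:
    state = "open" if tcp_port_open(XMS_ADDRESS, port) else "blocked or unreachable"
    print(f"{XMS_ADDRESS}:{port} is {state}")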
Connecting the Cluster to Host
FC connections use Storage Controller ports 3 and 4.
2. Make sure that the other ends of the external FC cables are connected to the
customer's switch.
Note: For connection via FC, at least one FC port of each Storage Controller in the cluster
must be connected to the host switch. However, it is highly recommended to connect both
FC ports of all Storage Controllers to two separate switches, so that each FC port of each
Storage Controller is connected to a different switch.
iSCSI connections use Storage Controller ports 1 and 2.
2. Make sure that the other ends of the external iSCSI cables are connected to the
customer's switch.
Note:
For connection via iSCSI, at least one iSCSI port of each Storage Controller in the cluster must be
connected to the host switch. However, it is highly recommended to connect both iSCSI ports of
all Storage Controllers to two separate switches, so that each iSCSI port of each Storage
Controller will be connected to a different switch.
Ports 3 and 4 can be configured during the create cluster procedure, to act as 10Gb Ethernet or
16Gb Fibre Channel. For more information, refer to the XtremIO Storage Array Software
Installation and Upgrade Guide.
If a Storage Controller is configured with four iSCSI ports, and only two ports are used, it is
recommended to use ports 3 and 4 for iSCSI connectivity.
Storage Controller on-board RJ45 ports (Port 1 and Port 2):
With this configuration, it is first necessary to modify the ports' types from iSCSI to
Replication. This is done by using the modify-target command to change the type
to Replication, and by providing the IP address and subnet for the replication port.
If the configuration uses four iSCSI ports, the preferred ports for Native Replication
are ports 1 and 2.
Note: For instructions on setting the replication ports, setting IPs for replication,
and modifying the port types from iSCSI to Replication, refer to the XtremIO Storage
Array User Guide.
Table 22 and Table 23 show the recommended port configurations that can be defined per
cluster.

Single X-Brick   2 or 4*   2 or 4*      2, 3* or 4*           2 or 4*
Two X-Brick      2 or 4    2, 4 or 8*   2, 3, 4, 6* or 8*     2, 4 or 8*

* Using the optic connectivity at the source cluster is required for this configuration.
** Using the optic connectivity at the source cluster is required for this configuration
on the destination.
Remote Support Requirements
Note: As of version 6.1.0, IP Client and FTPS are no longer supported. If the Remote
Support connectivity of the XMS is currently configured to use IP Client or FTPS, contact
Global Technical Support for assistance in migrating to a supported Remote Support
configuration.
SAE/SRS provides a secure, IP-based, distributed remote support solution that enables
command, control and visibility of remote support access.
SAE/SRS configuration options with XtremIO are SAE & SRS VE gateway and legacy type
SRS gateway. These two configuration options provide Connect-In and Connect-Home
functionalities. The SAE & SRS VE configuration option also provides Managed File
Transfer (MFT) support, ESRS Advisories, and CloudIQ support. SAE & SRS VE is the
recommended configuration option with XtremIO.
If the customer refuses to use SAE/SRS as a connectivity solution, the XMS can be
configured to connect-home only. The connect-home only option with XtremIO is Email.
This option does not provide connect-in functionality to the XMS, but merely ensures that
Dell EMC receives regular configuration report and product alert information from the
customer's XtremIO environment.
Preconditions for Deploying SAE & SRS VE Gateway and Legacy Type SRS Gateway
The following are preconditions for deploying the SAE & SRS VE gateway configuration
and the legacy type SRS gateway configuration on XtremIO at a customer site with SAE &
SRS VE or legacy-based GW systems:
The customer should agree to deploy SAE or SRS as part of the XtremIO deployment. It
is recommended to use an SAE or SRS-VE gateway available on-site. An alternative
option is to use an on-site legacy type SRS gateway.
The customer should open HTTPS connection between the XMS and the SAE, SRS VE
or legacy type SRS gateway. Refer to Table 24 on page 57.
The XtremIO cluster should be in the Dell EMC Install Base (i.e. assigned by Dell EMC
Manufacturing with a formal PSNT).
The customer should have a SAE or SRS VE on site (or legacy type SRS gateway).
For Gateway Connect configuration with an XMS, the customer should have deployed
one of the following:
• Dell EMC SRS VE 3.20 (or later) within their VMware ESX Server/Windows Hyper-V
  environment.
• SAE 4.0.5 (or later) within their VMware ESX Server/Windows Hyper-V
  environment.
When these preconditions are fully-met, it is possible to proceed and deploy SAE/SRS
integration with XtremIO as part of the installation.
Note: SupportAssist Enterprise, SRS Virtual Edition (VE) and legacy type SRS gateway
deployment, configuration, provisioning and upgrade are outside the scope of the
XtremIO system installation.
Ports and Protocols

Table 24 Ports and Protocols

Protocol  Port(s)       Service  Direction                                  Notes
TCP       25            SMTP     XMS -> SYR SMTP Server                     Used when the Email connect-home only
                                                                            configuration is used
TCP       443 and 9443  HTTPS    XMS <-> SAE/SRS GW Server                  To/from the SAE/SRS Gateway (bi-directional
                                 (SAE/SRS VE or legacy type SRS)            connectivity required from the XMS to the
                                                                            SAE/SRS GW)
TCP       22            SSH      SAE/SRS VE gateway (SAE/SRS VE or          Allow remote support CLI (via SSH) to the
          443           HTTPS    legacy type SRS) -> XMS                    XMS from the SAE, SRS VE, or legacy type
                                                                            SRS gateways
TCP       11111         XMLRPC   XMS -> XtremIO Storage Controller          Not to be used for Replication or iSCSI
                                                                            TCP ports
TCP       11000-11031   XMLRPC   XMS -> XtremIO Storage Controllers         Used for cluster expansion and FRU
                                                                            procedures
TCP       22            SSH      MGMT Desktop -> XMS                        Allow XMS shell access
TCP       22000-22031   SSH      XMS -> XtremIO Storage Controllers         Used for cluster expansion and FRU
                                                                            procedures; not to be used for Replication
                                                                            or iSCSI TCP ports
TCP       443           HTTPS    MGMT Desktop -> XMS                        Used for XMCLI and the RESTful API
ICMP      -             -        XMS -> XtremIO Storage Controller          Used for diagnostic purposes only
TCP       3260          iSCSI    Hosts -> Storage Controllers               The iSCSI TCP port can be altered if
                                                                            necessary
TCP       23000-23031   IPMI     XMS -> Storage Controllers                 Used for cluster expansion and FRU
                                                                            procedures; not to be used for Replication
                                                                            or iSCSI TCP ports
TCP       443           HTTPS    XtremIO Storage Controller -> XMS          Used for service procedures with
          22            SSH                                                 Technician Advisor
TCP       443           HTTPS    Source XMS <-> Target XMS                  Connection between peer XMSes managing the
                                                                            replication
Copyright © 2020 Dell Inc. or its subsidiaries. All rights reserved. Published in the USA.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
The information in this publication is provided as is. Dell makes no representations or warranties of any kind with respect to the
information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use,
copying, and distribution of any Dell software described in this publication requires an applicable software license.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective
owners.
For the most up-to-date regulatory document for your product line, go to Dell EMC Online Support (https://support.emc.com).