InfiniBox With EMC VPLEX Best Practice Guide
Introduction
This document describes the steps required to configure the INFINIDAT InfiniBox for use
with EMC VPLEX, a storage virtualization technology that connects to multiple storage
arrays and allows data migration and mirroring across sites.
METADATA VOLUMES
Metadata volumes are critical to the proper function of the VPLEX system. VPLEX Meta
Data Volumes, or Meta Volumes, contain information about devices, physical-to-virtual
device mappings and other internal system configuration data. The importance of the
information on these volumes justifies a high level of Meta Volume data redundancy.
Meta Volumes are provisioned as RAID 1 along with a minimum of two additional point-
in-time copies (one 24 hours old, the other 48 hours old). It is highly recommended that
Meta Volumes RAID 1 members be stored on two physically separate storage arrays,
using array-provided RAID protection for each member.
LOGGING VOLUMES
A logging volume is dedicated capacity for tracking any blocks written to a cluster. A
logging volume is a required prerequisite to creating a distributed device and a remote
device. Logging volumes keep track of any blocks written during inter-cluster link failure.
The system uses the information in logging volumes to synchronize the distributed
devices by sending only changed block regions across the link.
VIRTUAL VOLUMES
At the top layer of the VPLEX storage structures are virtual volumes. Virtual volumes are
the elements VPLEX exposes to hosts using its front-end (FE) ports. Access to virtual
volumes is controlled using storage views, which act as logical containers determining
which host initiators can access which VPLEX FE ports and virtual volumes.
Zoning configuration
Zone the InfiniBox storage array to the VPLEX back-end ports. Follow the
recommendations in the "Implementation and Planning Best Practices for EMC VPLEX
Technical Notes".
To ensure high data availability, present each node of the storage array to each
director of the VPLEX along separate physical paths.
Logical zoning
Zone VPLEX director A-00 ports to Port 1 of InfiniBox Node 1 and Node 2.
Zone VPLEX director B ports to one group of Port 5 on each InfiniBox node.
Repeat for additional VPLEX engines.
Create a separate host initiator group for each VPLEX cluster.
Map volumes to allow access to the appropriate VPLEX initiators for each port group.
Fabric A
cfg: VPLEX_NFINIDAT_FABA
InfiniBox_PLEXE1_DIRA_FABA; InfiniBox_PLEXE1_DIRB_FABA
zone: InfiniBox_PLEXE1_DIRA_FABA
infinidat_node01_port01; infinidat_node02_port01; vplex_c1e1_a1_00
zone: InfiniBox_PLEXE1_DIRB_FABA
infinidat_node01_port01; infinidat_node03_port01; vplex_c1e1_b1_01
alias: vplex_c1e1_a1_00
50:00:XX:XX:60:XX:f1:10
alias: vplex_c1e1_b1_01
50:00:XX:XX:70:XX:f1:11
alias: infinidat_node01_port01
57:42:XX:XX:XX:XX:28:11
alias: infinidat_node02_port01
57:42:XX:XX:XX:XX:28:21
alias: infinidat_node03_port01
57:42:XX:XX:XX:XX:28:31
Fabric B
cfg: VPLEX_NFINIDAT_FABB
InfiniBox_PLEXE1_DIRA_FABB; InfiniBox_PLEXE1_DIRB_FABB
zone: InfiniBox_PLEXE1_DIRA_FABB
infinidat_node02_port05; infinidat_node03_port05; vplex_c1e1_a1_01
zone: InfiniBox_PLEXE1_DIRB_FABB
infinidat_node02_port05; infinidat_node03_port05; vplex_c1e1_b1_00
alias: vplex_c1e1_a1_01
50:00:XX:XX:60:XX:f1:11
alias: vplex_c1e1_b1_00
50:00:XX:XX:70:XX:f1:10
alias: infinidat_node01_port05
57:42:XX:XX:XX:XX:28:15
alias: infinidat_node02_port05
57:42:XX:XX:XX:XX:28:25
alias: infinidat_node03_port05
57:42:XX:XX:XX:XX:28:35
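For reference, the aliases, zones, and configuration shown above can be created on a Brocade
fabric with standard zoning commands. The sketch below covers Fabric A only and reuses the
placeholder WWPNs from this example; repeat the pattern on Fabric B and for additional engines.
alicreate "vplex_c1e1_a1_00", "50:00:XX:XX:60:XX:f1:10"
alicreate "vplex_c1e1_b1_01", "50:00:XX:XX:70:XX:f1:11"
alicreate "infinidat_node01_port01", "57:42:XX:XX:XX:XX:28:11"
alicreate "infinidat_node02_port01", "57:42:XX:XX:XX:XX:28:21"
alicreate "infinidat_node03_port01", "57:42:XX:XX:XX:XX:28:31"
zonecreate "InfiniBox_PLEXE1_DIRA_FABA", "infinidat_node01_port01; infinidat_node02_port01; vplex_c1e1_a1_00"
zonecreate "InfiniBox_PLEXE1_DIRB_FABA", "infinidat_node01_port01; infinidat_node03_port01; vplex_c1e1_b1_01"
cfgcreate "VPLEX_NFINIDAT_FABA", "InfiniBox_PLEXE1_DIRA_FABA; InfiniBox_PLEXE1_DIRB_FABA"
cfgsave
cfgenable "VPLEX_NFINIDAT_FABA"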
4/28
InfiniBox provisioning
Hosts, and then clusters, must be created on InfiniBox in order to map provisioned
storage volumes. A host is a grouping of initiators associated with a physical host, and a
cluster is a user-defined grouping of hosts. The zoned initiators of each VPLEX engine
should be grouped into a single host, and these engine hosts should then be grouped into
a VPLEX cluster.
Once created, storage volumes can be mapped to all grouped initiators of a given
connected host. This section describes host and cluster creation, volume creation, and
volume-to-cluster mapping.
InfiniBox provisioning takes the following steps (a command-line sketch of the full flow follows this list):
Creating a host
Creating a cluster
Creating volumes
Mapping volumes to clusters
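While this document walks through the GUI, the same flow can also be scripted from InfiniShell,
the InfiniBox command-line interface. The sketch below is illustrative only: the object names
(Plex-C1E1, vplex-cluster-1, vplex-pool, vplex-vol-01), capacities, and placeholder WWPN are
examples, and the exact command names and parameters should be verified against the InfiniShell
reference for your InfiniBox software version.
Create a host per VPLEX engine and register its zoned initiators:
host.create name=Plex-C1E1
host.add_port host=Plex-C1E1 port=50:00:XX:XX:60:XX:f1:10
Group the engine hosts into a cluster that represents the VPLEX cluster:
cluster.create name=vplex-cluster-1
cluster.add_host cluster=vplex-cluster-1 host=Plex-C1E1
Create a pool and volumes, then map the volumes to the cluster:
pool.create name=vplex-pool physical_capacity=10TB
vol.create name=vplex-vol-01 size=2GB pool=vplex-pool
vol.map vol=vplex-vol-01 cluster=vplex-cluster-1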
CREATING A HOST
Suggestions for friendly host names are ones that describe the host being created. For
example, if creating a host for VPLEX Cluster 1 Engine 1, one might enter Plex-C1E1.
Using names that help identify the initiators facilitates maintenance and lifecycle
activities.
Step 1 On the InfiniBox GUI, click the Hosts & Clusters button on the toolbar on the left.
The Hosts & Clusters screen opens.
CREATING A CLUSTER
Step 1 On the InfiniBox GUI, click the Hosts & Clusters button on the toolbar on the left.
Click the hosts one by one to select them, and then click the Add button.
CREATING A POOL
Step 1 On the InfiniBox GUI, click the Pools button on the toolbar on the left.
The Pools screen opens.
Step 2 Click Create Pool. The Create Pool screen opens. Enter a name for the pool and provision
its physical capacity. By default, the virtual capacity is coupled with the physical capacity; it
is possible to decouple them.
Optionally, click the Advanced button to change the default values of more of the pool's
settings.
Click Create. The pool is created.
CREATING A VOLUME
Step 1 On the InfiniBox GUI, click the Volumes button on the toolbar on the left.
The Volumes screen opens.
-OR-
Right-click the pool and select Create Volume from the menu.
Step 2 Enter a name for the volume and provision its capacity. Set the pool that the volume
belongs to (there is no need to set this if you create the volume from the pool's screen).
Click Advanced to create several volumes at once.
Click Create. The volume is created. In our example, 10 volumes were created and they are
available on the Volumes screen.
MAPPING VOLUMES TO CLUSTERS
Step 2 Select the volumes from the list and click Map.
VPLEX Provisioning
In order to present devices to hosts, there are a number of steps to follow when
provisioning storage on the VPLEX:
LUNs created on the InfiniBox are mapped to the VPLEX ports. Appropriate zoning
must be configured on the fibre channel switch that is attached to both devices.
VPLEX is configured to claim the mapped LUNs. Extents are created on the claimed
LUNs.
Striped, mirrored, or concatenated devices (RAID 0, 1, and C geometries respectively)
can be provisioned by combining the created extents, depending on application
performance, resilience, and capacity requirements. Additionally, encapsulated (1:1
mapped) devices can be created when claimed LUN data must be preserved and
'imported' into the VPLEX.
The aforementioned device RAID geometries can be spanned across VPLEX clusters
to provide geographically diverse VPLEX RAID configurations.
Distributed devices consist of same-sized devices created on the VPLEX clusters.
Consistency groups ensure consistency across distributed devices.
Virtual volumes are created from these device types and are then exported to
connected hosts.
Creating a name mapping file for VPLEX for third-party
arrays
Create a mapping file to batch claim multiple LUNs exported from the InfiniBox array:
Step 2 Change context to the storage volumes on the VPLEX cluster being exported to. For example:
VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-volumes
Step 4 Cut and paste the command output and save it to a file in the /tmp folder of the
management server.
Step 5 Each claimed LUN needs a unique name. Preselect a unique string that will help identify
the LUNs to be claimed. Examples:
INFINIDAT_20140101
INFINDAT_aa3721_
Where:
file1 is the name of the file you saved the storage volume output to
claim_name is the unique name you selected for the LUNs to be claimed as
filename.txt is the name that you will use during the claimingwizard step
Example:
Edit filename.txt to add the phrase Generic storage-volumes to the very top of the file.
TIP: The Linux-based VPLEX management console includes vim, which can be used to create
and edit text files.
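The exact conversion command that produces filename.txt is not shown above. As a rough
illustration only (the two-column layout is an assumption; the VPD83 identifiers are taken from
the listing later in this document and the claim names are examples), the finished file typically
pairs each storage-volume identifier with the name it should be claimed as:
Generic storage-volumes
VPD83T3:6742b0f0000004280000000000005cae INFINIDAT_20140101_001
VPD83T3:6742b0f0000004280000000000005caf INFINIDAT_20140101_002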
Step 7 Enter the following command to claim the LUNs using the VPLEX claimingwizard.
Example:
service@VPLEX01:/tmp> vplexcli
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Password:
creating logfile:/var/log/VPlex/cli/session.log_service_localhost_T10175_20150205190610
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll
Name               VPD83 ID                                  Capacity  Use      Vendor    IO Status  Type    Thin   VIAS
-----------------  ----------------------------------------  --------  -------  --------  ---------  ------  -----  -----
NFINIDAT_volume_1  VPD83T3:6742b0f0000004280000000000005cae  2G        claimed  NFINIDAT  alive      normal  false  false
NFINIDAT_volume_2  VPD83T3:6742b0f0000004280000000000005caf  2G        claimed  NFINIDAT  alive      normal  false  false
NFINIDAT_volume_3  VPD83T3:6742b0f0000004280000000000005cb0  2G        claimed  NFINIDAT  alive      normal  false  false
NFINIDAT_volume_4  VPD83T3:6742b0f0000004280000000000005cb1  2G        claimed  NFINIDAT  alive      normal  false  false
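The claim command itself is not shown in the session above. A typical claimingwizard invocation
against the file created earlier would look roughly like the following (the file path and cluster
name are taken from this example; verify the option names in the VPLEX CLI guide for your release):
VPlexcli:/> claimingwizard --file /tmp/filename.txt --cluster cluster-1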
Create a meta-volume
As discussed, VPLEX requires four LUNs (minimum 78 GB each) for metadata volumes.
Step 1 Export the LUNs from the array.
Step 3 Use the meta-volume create command to create a new meta-volume. The syntax for the
command is:
Where:
The mirror can consist of multiple storage volumes (which will become a RAID 1), in which
case you would include each additional volume, separated by commas. The meta-volume
and mirror must be on separate arrays, and should be in separate failure domains. This
requirement also applies to the mirror volume and its backup volume.
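For illustration, the c1_meta volume shown in the listing below could be created from its two
backing storage volumes with a command along these lines (the name and VPD83 identifiers are
taken from this example; substitute your own):
VPlexcli:/> meta-volume create --name c1_meta --storage-volumes VPD83T3:6742b0f00000042800000000000118d2,VPD83T3:6742b0f00000042800000000000118d3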
VPlexcli:/clusters/cluster-1/system-volumes> ll c1_meta
/clusters/cluster-1/system-volumes/c1_meta:
Attributes:
Name Value
---------------------- ------------
active true
application-consistent false
block-count 23592704
block-size 4K
capacity 90G
component-count 2
free-slots 64000
geometry raid-1
health-indications []
health-state ok
locality local
operational-status ok
ready true
rebuild-allowed true
rebuild-eta -
rebuild-progress -
rebuild-status done
rebuild-type full
slots 64000
stripe-depth -
system-id c1_meta
transfer-size 128K
vias-based false
volume-type meta-volume
Contexts:
Name Description
---------- -------------------------------------------------------------------
components The list of components that support this device or system virtual
volume.
VPlexcli:/clusters/cluster-1/system-volumes/c1_meta> ll components/
/clusters/cluster-1/system-volumes/c1_meta/components:
Name                                      Slot    Type            Operational  Health  Capacity
                                          Number                  Status       State
----------------------------------------  ------  --------------  -----------  ------  --------
VPD83T3:6742b0f00000042800000000000118d2  0       storage-volume  ok           ok      90G
VPD83T3:6742b0f00000042800000000000118d3  1       storage-volume  ok           ok      90G
Use the ll command to display the new meta-volume's status, and verify that the active
attribute shows a value of true.
Step 3 Create the logging volume. The syntax for the command is:
logging-volume create --name name --geometry [raid-0 | raid-1] --extents context-path --stripe-depth depth
Where:
extents is a comma-separated list of claimed extents, for example
extent_se-logging-source01_1,extent_se-logging-source02_1
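Putting this together, the c1-logging-volume shown in the listing below could be created with a
command like the following (the name and extent names are taken from this example):
VPlexcli:/> logging-volume create --name c1-logging-volume --geometry raid-1 --extents extent_se-logging-source01_1,extent_se-logging-source02_1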
VPlexcli:/clusters/cluster-1/system-volumes> ll
Name                   Volume Type     Operational  Health  Active  Ready  Geometry  Component  Block   Block  Capacity  Slots
                                       Status       State                            Count      Count   Size
---------------------  --------------  -----------  ------  ------  -----  --------  ---------  ------  -----  --------  -----
c1-logging-volume_vol  logging-volume  ok           ok      -       -      raid-1    2          262560  4K     1G        -
VPlexcli:/clusters/cluster-1/system-volumes/c1-logging-volume_vol> ll components/
/clusters/cluster-1/system-volumes/c1-logging-volume_vol/components:
Name                          Slot    Type    Operational  Health  Capacity
                              Number          Status       State
----------------------------  ------  ------  -----------  ------  --------
extent_se-logging-source01_1  0       extent  ok           ok      1G
extent_se-logging-source02_1  1       extent  ok           ok      1G
On the cluster, click Storage Array, select the array, and then click "Show Logical Units". These
are the devices that the cluster can see; ensure that the cluster can see the LUNs you
intend to use to create your devices.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim
VPD83T3:6742b0f0000004280000000000003434 -n se-oralog-vmax
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim
VPD83T3:6742b0f0000004280000000000003435 -n se-oraredo-vmax
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim
VPD83T3:6742b0f0000004280000000000003436 -n se-oradata-vmax
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes>
Step 2 Create extents.
To create the extents, click Provision Storage > Cluster-1 > Physical Storage > Storage Volumes.
You should see your newly claimed volumes as well as any other devices, whether in use or
unclaimed.
The VPLEX will automatically populate the left side with any possible candidates; choose the
LUNs you want and add them to the right side.
Click Next, then Commit, and then Finish.
Create devices: click Create and select the devices. Devices can be created in different
configurations: RAID-0, RAID-1, RAID-C, or a 1:1 mapping of extents to devices.
Set "Automatically create a virtual volume on each device" to NO.
DO NOT create a virtual volume at this time. You will not be able to create a
distributed device if the virtual volume already exists on the device.
Click Next and then commit your changes.
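The same extent and device operations can also be done from the VPLEX CLI. The following is a
rough sketch only, reusing the claim names from the example above; the 1:1 encapsulation shown
here as raid-0 with a stripe depth of 1 is an assumption, so confirm the options against the CLI
guide for your VPLEX release.
Create extents on the claimed storage volumes:
VPlexcli:/> extent create --storage-volumes se-oralog-vmax,se-oraredo-vmax,se-oradata-vmax
Create a device on an extent (here, a 1:1 mapping of one extent to one device):
VPlexcli:/> local-device create --name dev_oradata --geometry raid-0 --extents extent_se-oradata-vmax_1 --stripe-depth 1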
Step 5 Create a storage view.
Add initiators (hosts, HBAs): go to Provision Storage and select cluster > Initiators.
Select the unregistered initiator and click Register.
Type a meaningful name for the initiator or accept the one provided.
Select a host type and click OK.
Add ports (VPLEX FE ports).
Add virtual volumes.
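From the CLI, an equivalent storage view could be assembled roughly as follows. The view name,
initiator name and WWPN, FE port names, and virtual volume name are all illustrative, and the
option spellings should be checked against the CLI guide for your release.
Register the host initiator, create the view on the VPLEX FE ports, then add the initiator and
the virtual volume:
VPlexcli:/> export initiator-port register --initiator-port esx01_hba0 --port 10:00:XX:XX:XX:XX:XX:01 --cluster cluster-1
VPlexcli:/> export storage-view create --cluster cluster-1 --name esx01_view --ports P000000003CA00147-A0-FC00,P000000003CB00147-B0-FC00
VPlexcli:/> export storage-view addinitiatorport --view esx01_view --initiator-ports esx01_hba0
VPlexcli:/> export storage-view addvirtualvolume --view esx01_view --virtual-volumes dev_oradata_vol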
Data migration use cases
Tech-refresh of an old array: In this use case, a new array is placed under VPLEX
management. Volumes from an existing array are migrated onto the new array.
Typically, the older array is then retired or repurposed.
Load balancing across arrays: In this use case, there are multiple arrays behind
VPLEX. Either because of capacity reasons or performance reasons or the need for
some specific capability, volumes need to be moved from one array to another.
Both arrays continue to be kept in service after the volume moves are complete.
Migrating across arrays in different data centers: VPLEX Metro extends the pool of
arrays that you can manage beyond the confines of your data center.
Migration procedure
1. Create a batch migration plan. A plan is a file that identifies the source and target
devices and other attributes.
2. Check the plan and then start the migration session.
3. Verify the status of the migration.
4. Verify that the migration has completed. When the migration completes, the
percentage done shows 100.
5. Once the synchronization completes, the migration session can be committed.
6. Clean up the migration. This dismantles the source device down to the storage
volume, and the source storage volume is changed to an unclaimed state.
7. Remove all information about the migration session from the VPLEX.
8. Perform post-migration tasks, depending on whether you want to redeploy the devices
for other uses in the VPLEX or the source storage system needs to be removed, by
performing the necessary masking, zoning, and other configuration changes. (A CLI
sketch of this procedure follows.)
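A rough CLI sketch of the procedure above using the VPLEX batch-migration commands follows; the
plan file name and device patterns are examples, and the option spellings should be verified
against the CLI guide for your release.
VPlexcli:/> batch-migrate create-plan --file migrate.txt --sources /clusters/cluster-1/devices/dev_old_* --targets /clusters/cluster-1/devices/dev_new_*
VPlexcli:/> batch-migrate check-plan --file migrate.txt
VPlexcli:/> batch-migrate start --file migrate.txt
VPlexcli:/> batch-migrate summary --file migrate.txt
VPlexcli:/> batch-migrate commit --file migrate.txt
VPlexcli:/> batch-migrate clean --file migrate.txt
VPlexcli:/> batch-migrate remove --file migrate.txt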
Migration Steps
Step 2 Establish a mirror between the source volume and the target volume.
From here, you have two options, depending on the scale of the operation.
Step 3 VPLEX ensures that the volumes on the two arrays are in sync. Host READ I/Os are directed
to the source leg. Host WRITE I/Os are sent to both legs of the mirror. After both volumes
are fully in sync, I/O continues until you decide to disconnect the source volume. Even
after the volumes are in sync, you have the option to remove the destination volume and go
back to the source.
Step 4 Once the volumes are in sync, disconnect the source volume / array.
From the host's standpoint, it does not know that anything has changed.
Step 1 Identify the volume(s) to be migrated. For each volume, identify the geometry (RAID type),
members (devices), and device size, taking note of the volume size (block count x block size).
The target volumes must be the same size as, or larger than, the source devices they replace.
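As a quick check of the size arithmetic, the c1_meta volume listed earlier in this document
works out to:
23,592,704 blocks x 4 KB per block = 96,635,715,584 bytes, or roughly 90 GB
which matches its reported capacity.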
Step 3 Select the device that you want to mirror and then click Next.
Step 4 On the next screen, select each source and target device. Click both devices and then
Add Mirror.
Step 5 Click Next to synchronize data, which brings you to the consistency group page. At this
time you can choose to create a new group, add to an existing group, or use no group at all. We
will create a new consistency group at this time.
Step 6 Commit your changes.
Step 7 If you check Distributed Devices now, you will see your newly created mirrored device.
Step 8 You'll notice that the device has an "unexported" tag under the service status. This means
that the device has not yet been masked to an initiator, and therefore no storage views exist for
this volume.
Step 9 Go back to Cluster-1 and click Storage Views. You'll see that there already exists a view
that includes the initiator as well as the ports on the VPLEX that present storage out to hosts.
Go to the Virtual Volumes tab and you'll see the volumes that are already presented out to the
host. Add your virtual volume.
Best practices summary
For each VPLEX cluster, allocate four storage volumes of at least 80 GB as metadata
volumes.
Configure the metadata volumes for each cluster with multiple back-end storage
volumes provided by different storage arrays of the same type.
Use Infini-RAID for metadata volumes. The data protection capabilities provided by
these storage arrays ensure the integrity of the system's metadata.
Read caching should be enabled.
A hot spare meta-volume must be preconfigured in case of a catastrophic failure of
the active meta-volume.
Use Infini-RAID for logging volumes. The data protection capabilities provided by
the storage array ensure the integrity of the logging volumes.
Each VPLEX cluster should have sufficient logging volumes to support its distributed
devices. The logging volume must be large enough to contain one bit for every
page of distributed storage space (see the sizing example after this list, and the EMC
documentation).
For logging volumes the best practice is to mirror them across two or more back-
end arrays to eliminate the possibility of data loss on these volumes.
You can have more than one logging volume, and can select which logging volume
is used for which distributed device.
Volumes that will be used for logging volumes must be initialized (have zeros
written to their entire LBA range) before they can be used.
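As a rough worked example of that sizing rule, assuming the commonly cited 4 KB page granularity
for distributed-device logging (confirm the granularity in the EMC documentation for your release):
320 TB of distributed storage / 4 KB per page ≈ 8 x 10^10 pages
8 x 10^10 pages x 1 bit per page / 8 bits per byte ≈ 10 GB
so on the order of 10 GB of logging volume capacity is needed for every 320 TB of distributed
device capacity.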