InfiniBox - VPLEX Integration
Product version: 1.0
Content: Initial release.
Introduction
The procedures in this document describe the configuration steps required to use the
INFINIDAT InfiniBox with EMC VPLEX, a virtual storage technology that connects to
multiple storage arrays, allowing for data migration and mirroring across sites.
Host access to VPLEX virtual volumes is controlled using storage views. They act as logical
containers determining host initiator access to VPLEX FE ports and virtual volumes.
Zoning configuration
Zone the InfiniBox storage array to the VPLEX back-end ports. Follow the recommendations in
the Implementation and Planning Best Practices for EMC VPLEX Technical Notes.
Note: To ensure high data availability, present each node of the storage array to each
director of the VPLEX along separate physical paths.
The general rule is to use a configuration that provides the best combination of simplicity and
redundancy. For back-end storage connectivity, the recommended SAN topology is a dual SAN
fabric design, which supplies redundant and resilient inter-hardware connectivity.
Each director in a VPLEX cluster must have a minimum of two paths to every local backend storage array and to every storage volume presented to VPLEX.
InfiniBox contains three or more independent interconnected nodes. Each node should
have a minimum of two ports connected to the VPLEX back-end ports via physically
separate SAN fabrics.
When configuring mirroring or migration across arrays, it is suggested that each array
be accessed through different back-end director ports.
ZONING RECOMMENDATIONS
Zone VPLEX director A-00 ports to Port 1 of InfiniBox Node 1 and Node 2.
Zone VPLEX director B ports to Port 5 on each InfiniBox node.
Map volumes to allow access to the appropriate VPLEX initiators for each port group.

Physical connectivity    Logical zoning
Fabric A switch          Zone E1A1; Zone E1B1
Fabric B switch          Zone E1A2; Zone E1B2

The resulting zoning configuration on each fabric switch is shown below.

Fabric A switch
cfg:    VPLEX_NFINIDAT_FABA         InfiniBox_PLEXE1_DIRA_FABA; InfiniBox_PLEXE1_DIRB_FABA
zone:   InfiniBox_PLEXE1_DIRA_FABA  infinidat_node01_port01; infinidat_node02_port01; vplex_c1e1_a1_00
zone:   InfiniBox_PLEXE1_DIRB_FABA  infinidat_node01_port01; infinidat_node03_port01; vplex_c1e1_b1_01
alias:  vplex_c1e1_a1_00            50:00:XX:XX:60:XX:f1:10
alias:  vplex_c1e1_b1_01            50:00:XX:XX:70:XX:f1:11
alias:  infinidat_node01_port01     57:42:XX:XX:XX:XX:28:11
alias:  infinidat_node02_port01     57:42:XX:XX:XX:XX:28:21
alias:  infinidat_node03_port01     57:42:XX:XX:XX:XX:28:31
Fabric B switch

cfg:    VPLEX_NFINIDAT_FABB         InfiniBox_PLEXE1_DIRA_FABB; InfiniBox_PLEXE1_DIRB_FABB
zone:   InfiniBox_PLEXE1_DIRA_FABB  infinidat_node02_port05; infinidat_node03_port05; vplex_c1e1_a1_01
zone:   InfiniBox_PLEXE1_DIRB_FABB  infinidat_node02_port05; infinidat_node03_port05; vplex_c1e1_b1_00
alias:  vplex_c1e1_a1_01            50:00:XX:XX:60:XX:f1:11
alias:  vplex_c1e1_b1_00            50:00:XX:XX:70:XX:f1:10
alias:  infinidat_node01_port05     57:42:XX:XX:XX:XX:28:15
alias:  infinidat_node02_port05     57:42:XX:XX:XX:XX:28:25
alias:  infinidat_node03_port05     57:42:XX:XX:XX:XX:28:35
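If the SAN switches are Brocade-based, the Fabric A configuration above could be created with standard zoning commands along the following lines. This is only a sketch: the alias, zone, and configuration names are taken from the example above, the WWPNs are masked placeholders, and the equivalent commands on other switch platforms will differ.

alicreate "vplex_c1e1_a1_00", "50:00:XX:XX:60:XX:f1:10"
alicreate "vplex_c1e1_b1_01", "50:00:XX:XX:70:XX:f1:11"
alicreate "infinidat_node01_port01", "57:42:XX:XX:XX:XX:28:11"
alicreate "infinidat_node02_port01", "57:42:XX:XX:XX:XX:28:21"
alicreate "infinidat_node03_port01", "57:42:XX:XX:XX:XX:28:31"
zonecreate "InfiniBox_PLEXE1_DIRA_FABA", "infinidat_node01_port01; infinidat_node02_port01; vplex_c1e1_a1_00"
zonecreate "InfiniBox_PLEXE1_DIRB_FABA", "infinidat_node01_port01; infinidat_node03_port01; vplex_c1e1_b1_01"
cfgcreate "VPLEX_NFINIDAT_FABA", "InfiniBox_PLEXE1_DIRA_FABA; InfiniBox_PLEXE1_DIRB_FABA"
cfgsave
cfgenable "VPLEX_NFINIDAT_FABA"

The Fabric B switch is configured in the same way using the FABB aliases, zones, and configuration.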
InfiniBox provisioning
Hosts, and then clusters, must be created on InfiniBox in order to map provisioned storage
volumes. A host is a group of initiators associated with a physical host, and a cluster is a
user-defined group of hosts. The zoned initiators of each VPLEX engine should be grouped into a
single host, and these engine hosts should then be grouped into a cluster representing the VPLEX cluster.
Once created, storage volumes can be mapped to all grouped initiators of a given connected
host. This section describes host and cluster creation, pool and volume creation, and
volume-to-cluster mapping.
InfiniBox provisioning takes the following steps:
Creating a host
Creating a cluster
Creating a pool
Creating volumes
Mapping volumes to clusters
CREATING A HOST
Suggestions for friendly host names are ones that describe the host being created. For
example, if creating a host for VPLEX Cluster 1 Engine 1, one might enter Plex-C1E1. Using
names that help identify the initiators facilitates maintenance and lifecycle activities.
Step 1
On the InfiniBox GUI, click the Hosts & Clusters button on the toolbar on the left.
Step 2
Step 2
Step 3
CREATING A POOL
Step 2
On the InfiniBox GUI, click the Pools button on the toolbar on the left.
Optionally, click the Advanced button to change the default values of more of the
pool settings.
Click Create. The pool is created.
CREATING A VOLUME
Step 1
Step 2
On the InfiniBox GUI, click the Volumes button on the toolbar on the left.
The Volumes screen opens.
-OR-
Right-click the pool and select Create Volume from the menu.
Click Create. The volume is created. In our example, 10 volumes were created and
they are available on the Volumes screen.
Step 2
VPLEX Provisioning
In order to present devices to hosts, there are a number of steps to follow when provisioning
storage on the VPLEX:
LUNs created on the InfiniBox are mapped to the VPLEX ports. Appropriate zoning
must be configured on the Fibre Channel switch that is attached to both devices.
VPLEX is configured to claim the mapped LUNs. Extents are created on the claimed
LUNs.
Striped, mirrored, or concatenated devices (RAID 0, RAID 1, and RAID C geometries,
respectively) can be provisioned by combining the created extents, depending on application
performance, resilience, and capacity requirements (see the sketch after this list). Additionally,
encapsulated (1:1 mapped) devices can be created when claimed LUN data must be preserved
and imported into the VPLEX.
The aforementioned device RAID geometries can be spanned across VPLEX clusters to
provide geographically diverse VPLEX RAID configurations.
Virtual volumes are created from these device types and are then exported to connected
hosts.
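As an illustrative sketch of the device-from-extents step, a two-legged RAID 1 device could be created from two extents with the local-device create command. The device and extent names below are hypothetical, and the exact options should be verified against the VPLEX CLI guide for your release:

VPlexcli:/> local-device create --name dev_NFINIDAT_r1_01 --geometry raid-1 --extents extent_NFINIDAT_volume_1_1,extent_NFINIDAT_volume_2_1

A virtual volume is then created on top of the resulting device and added to a storage view, as described later in this document.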
Step 1
Step 2
Change context to the storage volumes on the VPLEX cluster being exported to.
For example:
VPlexcli:/> cd /clusters/cluster-1/storage-elements/storage-volumes
Step 3
Use the ll command to list the available storage volumes. For example:

Name                                      VPD83 ID                                  Capacity  Use        Vendor    IO Status  Type         Thin Rebuild  VIAS Based
----------------------------------------  ----------------------------------------  --------  ---------  --------  ---------  -----------  ------------  ----------
VPD83T3:6742b0f0000004280000000000005cae  VPD83T3:6742b0f0000004280000000000005cae  2G        unclaimed  NFINIDAT  alive      traditional  false         false
VPD83T3:6742b0f0000004280000000000005caf  VPD83T3:6742b0f0000004280000000000005caf  2G        unclaimed  NFINIDAT  alive      traditional  false         false
VPD83T3:6742b0f0000004280000000000005cb0  VPD83T3:6742b0f0000004280000000000005cb0  2G        unclaimed  NFINIDAT  alive      traditional  false         false
VPD83T3:6742b0f0000004280000000000005cb1  VPD83T3:6742b0f0000004280000000000005cb1  2G        unclaimed  NFINIDAT  alive      traditional  false         false
Step 4
Cut and paste the command output and save it to a file on the management
server.
Step 5
Each claimed LUN needs a unique name. Preselect a unique string that will help
identify the LUNs to be claimed. Names:
Can begin only with an underscore or a letter
Can contain only letters, numbers, hyphens, or underscores for the remaining
characters
Cannot exceed 58 characters
Should end in an underscore
Cannot end in a hyphen
Examples:
InfiniBox_20140101
InfiniBox_aa3721_
Step 6
Where:
file1 is the name of the file to which you saved the storage-volume output
claim_name is the unique name you selected for the LUNs to be claimed
filename.txt is the name of the file that you will use during the claiming wizard step.
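As an illustration only, the claiming file could be generated from the saved output with standard shell tools on the management server. This sketch assumes a hints-file format of one storage-volume VPD83 name per line followed by the name to claim it as; the awk expression, the InfiniBox_ prefix, and the file names are examples to adapt, and the expected format should be confirmed against the claimingwizard documentation for your release.

service@VPLEX01:/tmp> awk '/^VPD83T3:/ { printf "%s InfiniBox_%d\n", $1, ++n }' file1 > filename.txt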
Step 7
Edit filename.txt to add the phrase Generic storage-volumes to the very top of the
file.
TIP: The Linux-based VPLEX management console includes vim, which can be
used to create and edit text files.
Enter the following command to claim the LUNs using the VPLEX claimingwizard.
Example:
service@VPLEX01:/tmp> vplexcli
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Password:
creating logfile:/var/log/VPlex/cli/session.log_service_localhost_T10175_20150205190610
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claimingwizard -f /tmp/NFINIDAT.txt -c cluster-1
Found unclaimed storage-volume VPD83T3:6742b0f0000004280000000000005cb1
vendor NFINIDAT : claiming and naming NFINIDAT_volume_4.
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> ll
Name               VPD83 ID                                  Capacity  Use      Vendor    IO Status  Type    Thin Rebuild  VIAS Based
-----------------  ----------------------------------------  --------  -------  --------  ---------  ------  ------------  ----------
NFINIDAT_volume_1  VPD83T3:6742b0f0000004280000000000005cae  2G        claimed  NFINIDAT  alive      normal  false         false
NFINIDAT_volume_2  VPD83T3:6742b0f0000004280000000000005caf  2G        claimed  NFINIDAT  alive      normal  false         false
NFINIDAT_volume_3  VPD83T3:6742b0f0000004280000000000005cb0  2G        claimed  NFINIDAT  alive      normal  false         false
NFINIDAT_volume_4  VPD83T3:6742b0f0000004280000000000005cb1  2G        claimed  NFINIDAT  alive      normal  false         false
Create a meta-volume
As discussed, VPLEX requires four LUNs (min 78GB) for metadata volumes.
Step 1
Step 2
Step 3
Name                                      Capacity  Vendor    IO Status  Type         Array Name
----------------------------------------  --------  --------  ---------  -----------  -------------------------
VPD83T3:6742b0f00000042800000000000118d2  90G       NFINIDAT  alive      traditional  NFINIDAT-InfiniBox-b0f000
VPD83T3:6742b0f00000042800000000000118d3  90G       NFINIDAT  alive      traditional  NFINIDAT-InfiniBox-b0f000
VPD83T3:6742b0f00000042800000000000118d4  90G       NFINIDAT  alive      traditional  NFINIDAT-InfiniBox-b0f000
VPD83T3:6742b0f00000042800000000000118d5  90G       NFINIDAT  alive      traditional  NFINIDAT-InfiniBox-b0f000
Use the meta-volume create command to create a new meta-volume. The syntax
for the command is:
meta-volume create --name meta-volume_name --storage-volumes storage-volume_1,storage-volume_2,storage-volume_3
Where:
meta-volume_name is a name assigned to the meta-volume.
storage-volume_1 is the VPD (vital product data) name of the meta-volume.
storage-volume_2 is the VPD name of the mirror.
The mirror can consist of multiple storage volumes (which will become a RAID 1),
in which case you would include each additional volume, separated by commas.
The meta-volume and mirror must be on separate arrays, and should be in
separate failure domains. This requirement also applies to the mirror volume and
its backup volume.
Note: Storage volumes must be unclaimed and on different arrays.
VPlexcli:/> meta-volume create --name c1_meta --storage-volumes VPD83T3:6742b0f00000042800000000000118d2,VPD83T3:6742b0f00000042800000000000118d3
This may take a few minutes...
Meta-volume c1_meta is created at /clusters/cluster-1/system-volumes.
Step 4
Use the ll command to display the new meta-volume's status, and verify that the
active attribute shows a value of true.
VPlexcli:/clusters/cluster-1/system-volumes> ll c1_meta
/clusters/cluster-1/system-volumes/c1_meta:
Attributes:
Name                    Value
----------------------  ------------
active                  true
application-consistent  false
block-count             23592704
block-size              4K
capacity                90G
component-count         2
free-slots              64000
geometry                raid-1
health-indications      []
health-state            ok
locality                local
operational-status      ok
ready                   true
rebuild-allowed         true
rebuild-eta             -
rebuild-progress        -
rebuild-status          done
rebuild-type            full
slots                   64000
stripe-depth            -
system-id               c1_meta
transfer-size           128K
vias-based              false
volume-type             meta-volume

Contexts:
Name        Description
----------  -----------
components
virtual
VPlexcli:/clusters/cluster-1/system-volumes/c1_meta> ll components/
/clusters/cluster-1/system-volumes/c1_meta/components:
Name                                      Slot Number  Type            Operational Status  Health State  Capacity
----------------------------------------  -----------  --------------  ------------------  ------------  --------
VPD83T3:6742b0f00000042800000000000118d2               storage-volume  ok                  ok            90G
VPD83T3:6742b0f00000042800000000000118d3               storage-volume  ok                  ok            90G
Create a logging volume
Step 2
Step 3
Create the logging volume. The syntax for the command is:
logging-volume create --name name --geometry [raid-0 | raid-1] --extents context-path --stripe-depth
Where:
--name - The name for the new logging volume
--geometry - Valid values are raid-0 or raid-1
--extents - Context paths to one or more extents to use to create the
logging volume.
--stripe-depth - Required if --geometry is raid-0. The stripe depth must be
greater than zero, not greater than the number of blocks of the smallest element
of the RAID 0 device being created, and a multiple of 4 KB.
For example:
VPlexcli:/clusters/cluster-1/system-volumes> logging-volume create --name c1-logging-volume --geometry raid-1 --extents extent_se-logging-source01_1,extent_se-logging-source02_1
Logging-volume 'c1-logging-volume_vol' is created at /clusters/cluster-1/system-volumes.
VPlexcli:/clusters/cluster-1/system-volumes> ll
Name                   Volume Type     Operational Status  Health State  Active  Ready  Geometry  Component Count  Block Count  Block Size  Capacity  Slots
---------------------  --------------  ------------------  ------------  ------  -----  --------  ---------------  -----------  ----------  --------  -----
c1-logging-volume_vol  logging-volume  ok                  ok            -       -      raid-1    2                262560       4K          1G        -

VPlexcli:/clusters/cluster-1/system-volumes/c1-logging-volume_vol> ll components/
/clusters/cluster-1/system-volumes/c1-logging-volume_vol/components:
Name                          Slot Number  Type    Operational Status  Health State  Capacity
----------------------------  -----------  ------  ------------------  ------------  --------
extent_se-logging-source01_1               extent  ok                  ok            1G
extent_se-logging-source02_1               extent  ok                  ok            1G
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim VPD83T3:6742b0f0000004280000000000003435 -n se-oraredo-vmax
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> claim VPD83T3:6742b0f0000004280000000000003436 -n se-oradata-vmax
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes>
Step 2
Create extents.
To create the extents, navigate to Provision Storage > Cluster-1 > Physical Storage >
Storage Volumes. You should see your newly claimed volumes as well as any other
devices, whether used or unclaimed.
Click Create Extents.
Step 3
Click Create.
DO NOT create a virtual volume at this time. You will not be able to
create a distributed device if the virtual volume already exists on the
device.
Step 4
Step 5
Tech-refresh of an old array: In this use case, a new array is placed under VPLEX
management. Volumes from an existing array are migrated onto the new array.
Typically, the older array is then retired or repurposed.
Load balancing across arrays: In this use case, there are multiple arrays behind VPLEX.
Because of capacity or performance reasons, or the need for some specific capability,
volumes need to be moved from one array to another. Both arrays remain in service
after the volume moves are complete.
Migrating across arrays and across data centers: VPLEX Metro extends the pool of arrays
that you can manage beyond the confines of your data center.
Available operations:
Migration procedure
1. Create a batch migration plan. A plan is a file that identifies the source and target
devices and other attributes.
2. Check the plan and then start the migration session.
3. Verify the status of the migration.
4. Verify that the migration has completed. When the migration completes, the
percentage done shows 100.
5. Once the synchronization completes, commit the migration session.
6. Clean up the migration. This dismantles the source device down to the storage volume
and changes the source storage volume to an unclaimed state.
7. Remove all information about the migration session from the VPLEX.
8. Perform post-migration tasks: depending on whether you want to redeploy the devices
for other uses in the VPLEX or remove the source storage system, perform the
necessary masking, zoning, and other configuration changes.
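In the VPLEX CLI, the numbered steps above correspond roughly to the batch-migrate command family, as sketched below. The plan-file name and device patterns are hypothetical, and the option syntax should be verified against the VPLEX CLI guide for your release.

VPlexcli:/> batch-migrate create-plan migrate.txt --sources /clusters/cluster-1/devices/dev_old_* --targets /clusters/cluster-1/devices/dev_new_*
VPlexcli:/> batch-migrate check-plan migrate.txt
VPlexcli:/> batch-migrate start migrate.txt
VPlexcli:/> batch-migrate summary migrate.txt
VPlexcli:/> batch-migrate commit migrate.txt
VPlexcli:/> batch-migrate clean migrate.txt
VPlexcli:/> batch-migrate remove migrate.txt

Repeat the summary command until the migration shows 100 percent done before committing.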
Migration Steps
Initial state
Step 1
Step 2
Step 3
Step 4
VPLEX ensures that the volumes on the two arrays are in sync. Host READ I/Os are
directed to the source leg. Host WRITE I/Os are sent to both legs of the mirror.
After both volumes are in complete sync, I/O continues until you decide to
disconnect the source volume. Even after the volumes are in sync, you have the
option to remove the destination volume and go back to the source.
Once the volumes are in sync, disconnect the source volume / array.
From the host's standpoint, quite literally, nothing has changed.
Identify the volume(s) to be migrated. For each volume, identify the geometry (RAID
type), members (devices), and device size, taking note of the volume size (blocks x
block size). The target volumes must be the same size as, or larger than, the
source devices to be replaced.
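One way to collect this information is from the CLI, for example by listing the devices and virtual-volumes contexts (the columns displayed vary by release):

VPlexcli:/> ll /clusters/cluster-1/devices
VPlexcli:/> ll /clusters/cluster-1/virtual-volumes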
Step 2
Step 3
Select the device that you want to mirror and then click Next.
Step 4
On the next screen, select each source and target device. Click both devices and
then click Add Mirror.
Step 5
Click Next to synchronize data, which brings you to the Consistency Group page.
You can choose to create a new group, add to an existing group, or use no
group at all. In this example, we create a new consistency group.
Step 6
If you check Distributed Devices now, you will see your newly created mirrored
device.
Step 7
You'll notice that there is an unexported tag under the service status. This
means that the device has not yet been masked to an initiator and therefore no
storage views exist for this volume.
Step 8
Go back to Cluster-1 and click Storage Views. You'll see that a view already
exists that includes the initiator as well as the ports on the VPLEX that present
storage out to hosts. Go to the Virtual Volumes tab and you'll see the volumes
that are already presented out to the host. Add your virtual volume.
If you go back to Virtual Volumes in the Distributed Storage tab, you'll see that the
service status is now running instead of unexported. This means that the host
can now see the newly created device.
Consider pausing data migration during critical hours of production and resuming it
during off-peak hours.
The default transfer size value is 2 MB. It is configurable from 4 KB to 32 MB. When the
transfer size is set large, migration is faster but can potentially impact
performance on the front end. A smaller transfer size results in less front-end impact,
but migrations take longer.
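For example, an in-progress batch migration can be paused during production hours and resumed off-peak, and the transfer size of an in-progress device migration can be adjusted from its context with the set command. The session names below are hypothetical, and the exact commands should be verified against the VPLEX CLI guide for your release.

VPlexcli:/> batch-migrate pause migrate.txt
VPlexcli:/> batch-migrate resume migrate.txt
VPlexcli:/data-migrations/device-migrations/migrate_0> set transfer-size 2M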
A batch can process either extents or devices, but not a mix of both.
For each VPLEX cluster, allocate four storage volumes of at least 80 GB as metadata
volumes.
Configure the metadata volumes for each cluster with multiple back-end storage
volumes provided by different storage arrays of the same type.
Use Infini-RAID for metadata volumes. The data protection capabilities provided by
these storage arrays ensure the integrity of the system's metadata.
Use Infini-RAID for logging volumes. The data protection capabilities provided by the
storage array ensure the integrity of the logging volumes.
Each VPLEX cluster should have sufficient logging volumes to support its distributed
devices. The logging volume must be large enough to contain one bit for every page of
distributed storage space. See EMC documentation.
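As a rough worked example, assuming a 4 KB page size (confirm the page size for your release in the EMC documentation): 320 TB of distributed storage is 320 TB / 4 KB = about 8.6 x 10^10 pages, which requires about 8.6 x 10^10 bits, or roughly 10 GB of logging volume capacity.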
For logging volumes the best practice is to mirror them across two or more back-end
arrays to eliminate the possibility of data loss on these volumes.
You can have more than one logging volume, and can select which logging volume is
used for which distributed device.
The logging devices can experience significant I/O bursts during and after link outages.
The best practice is to stripe each logging volume across many disks for speed and
also to have a mirror on a separate back-end array.
Volumes that will be used for logging volumes must be initialized (have zeros written to
their entire LBA range) before they can be used.
Extents should be sized to match the desired virtual volume's capacity. Do not create
smaller extents and then use devices to concatenate or stripe the extents. When disk
capacities are smaller than desired volume capacities, best practice is to create a single
slice per disk, and use RAID structures to concatenate or stripe these slices into a larger
user volume.
Each storage view contains a list of host/initiator ports, VPLEX FE ports, and virtual
volumes. A one-to-one mapping of storage view and host is recommended.
Each storage view should contain a minimum of two director FE ports, one from an A
director and one from a B director.
A storage view should contain a recommended minimum of two host initiator ports.
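As a sketch, a storage view with these elements could be built from the CLI roughly as follows. The view name, front-end port names, initiator names, and volume name are hypothetical, and the option syntax should be verified against the VPLEX CLI guide for your release.

VPlexcli:/> export storage-view create --cluster cluster-1 --name host1_view --ports P000000003CA00147-A0-FC00,P000000003CB00147-B0-FC00
VPlexcli:/> export storage-view addinitiatorport --view host1_view --initiator-ports host1_hba0,host1_hba1
VPlexcli:/> export storage-view addvirtualvolume --view host1_view --virtual-volumes dev_NFINIDAT_r1_01_vol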