VNX OE 1.1 Software Differences

The document outlines the features and enhancements of VNX Operating Environment File v7.0.35 and Block R31.5 software, focusing on improved replication capabilities, management through Unisphere, and VAAI enhancements. It is designed for users involved in the implementation and management of VNX systems, highlighting the benefits of Incremental Attach for efficient data replication and system upgrades. The publication emphasizes the importance of proper licensing and the proprietary nature of the software and trademarks mentioned.


Welcome to VNX Operating Environment File v7.0.35 and Block R31.5 Software Differences.

Click on the Supporting Materials tab to access the Student Resource Guide and course navigation information.
Copyright © 2011 EMC Corporation. All rights reserved.
These materials may not be copied without EMC's written consent.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to
change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR
WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS
IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software
license.
EMC² , EMC, EMC ControlCenter, AdvantEdge, AlphaStor, ApplicationXtender, Avamar, Captiva, Catalog Solution,
Celerra, Centera, CentraStar, ClaimPack, ClaimsEditor, ClaimsEditor Professional, CLARalert, CLARiiON, ClientPak,
CodeLink, Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document
Sciences, Documentum, EmailXaminer, EmailXtender, EmailXtract, enVision, eRoom, Event Explorer, FLARE, FormWare,
HighRoad, InputAccel, InputAccel Express, Invista, ISIS, Max Retriever, Navisphere, NetWorker, nLayers, OpenScale,
PixTools, Powerlink, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, RSA, RSA Secured, RSA Security,
SecurID, SecurWorld, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist,
WebXtender, where information lives, xPression, xPresso, Xtender, Xtender Solutions; and EMC OnCourse, EMC
Proven, EMC Snap, EMC Storage Administrator, Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated
Resource Manager, AutoStart, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, CLARevent, Codebook Correlation
Technology, Common Information Model, CopyCross, CopyPoint, DatabaseXtender, Digital Mailroom, Direct Matrix,
EDM, E-Lab, eInput, Enginuity, FarPoint, FirstPass, Fortress, Global File Virtualization, Graphic Visualization, InfoMover,
Infoscape, MediaStor, MirrorView, Mozy, MozyEnterprise, MozyHome, MozyPro, NetWin, OnAlert, PowerSnap,
QuickScan, RepliCare, SafeLine, SAN Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView,
StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, UltraFlex, UltraPoint, UltraScale, Viewlets,
VisualSRM are trademarks of EMC Corporation.
All other trademarks used herein are the property of their respective owners.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 1


This course will focus on the VNX OE for File v7.0.35 and Block R31.5 software features
including enhanced Replication V2 Incremental Attach, Unisphere 1.1.2 Domain integration
including true local roles and asynchronous logon, and VAAI for NFS and Block. It also covers
enhancements for upgrading File and Block code using USM.

The training is intended for those with a Block and File background who are involved in the
implementation, configuration, management, upgrading and support of the VNX Operating
Environment.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 2


This module describes how to position the software enhancements of VNX OE for File
v7.0.35 and Block R31.5 with end users.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 3


With this release of VNX OE software, the VNX message is even better. Enhancements made
to Unisphere for VNX and legacy CLARiiON and Celerra systems make management tasks
such as MirrorView much simpler. ReplicatorV2 Incremental Attach allows the efficient use
of time and network bandwidth when replacing aging systems with newer VNX models
without requiring full copies of file systems. The VAAI enhancements for NFS and Block
leverage the power of the VNX to offload critical operations such as cloning and Thin
Provisioning.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 4


With the release of VNX OE for File v7.0.35.x and Block R31.5.5xx comes the latest version of
Unisphere 1.1.2. Among other enhancements, this version of Unisphere supports multiple
VNX systems running the latest OE in a single domain, along with multi-domain support for
older VNX, legacy CLARiiON, and Celerra systems, all accessed through a single login.
Now users are able to more easily configure LDAP, NTP and both Local and Global users with
enhanced configuration screens.
The configuration of RecoverPoint and MirrorView between systems in the same domain or
across multiple domains can now be done quickly using Unisphere.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 5


The addition of Incremental Attach to the ReplicatorV2 feature now makes it easier and more
efficient than ever to replace legacy Celerra systems with the latest VNX Series models. Once
the new system is installed, a one-to-many or multi-hop replication is implemented from the
existing source to it. Using the Incremental Attach option of ReplicatorV2, the new system
and other destinations and the source are synchronized with a common base snapshot. Any
of the systems can now be removed without affecting your DR strategy. This process enables
the replacement of older systems while avoiding bandwidth consuming full copies.
Incremental Attach is supported on NS to NS and NS to VNX configurations.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 6


Enhancements in VNX OE 1.1 to the vStorage APIs for Array Integration, or VAAI, add
powerful tools for storage administrators when managing VMs by offloading specific
operations from the ESX servers to the EMC arrays. These enhancements for Block and NFS
will decrease I/O traffic while speeding the deployment of VMs. When using NFS datastores,
administrators can now use the power of the Storage Processors to implement Fast Clones
within the same file system and Full Clones between NFS datastores hosted on the same
NAS server. On the Block side, VAAI support for features that previously required a plug-in is
now built directly into the core storage stack, along with support for Thin Provisioned LUNs,
which allows space to be reclaimed when VMDKs are deleted.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 7


So when talking to users about the VNX OE software, the message is clear: simple
management through a single pane of glass with the latest version of Unisphere; a more
efficient use of time and network bandwidth using ReplicatorV2 Incremental Attach when
replacing older systems with new VNX models; and more powerful tools to provision
storage with the latest VAAI enhancements.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 8


This module describes the ReplicatorV2 Incremental Attach feature and how to configure it.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 9


Prior to VNX OE for File v7.0.35.x and Block R31.5.5xx, if a production system which was
using ReplicatorV2 to back up data remotely was replaced, new replication sessions had
to be started with a full data copy which consumed network bandwidth and took a long
time. Also, if the Source in a one-to-many configuration, or the middle unit in a cascade
replication configuration failed, there was no way to continue replication with the
remaining systems without starting new sessions with full data copies.
To solve this problem the Incremental Attach feature was added to the ReplicatorV2
command options. With Incremental Attach an administrator can create user checkpoints
of all the file systems in the replication sessions and then refresh the sessions
referencing the checkpoints. This will create a common base among all the sessions
which can be used later to start new sessions between any of the participating systems
with only an incremental copy of data to sync the file systems.
The Incremental Attach feature can only be used with CLI and there are no plans to offer it
in Unisphere. It can be used for both file system and VDM replication sessions. To run the
commands a global user must be created with the Data Recovery role, as the default
administrator account does not have this capability.
This feature is also available in DART v6.0.41.3.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 10


First, a review of ReplicatorV2 operations prior to OE for File v7.0.35.x. A single VNX or legacy
Celerra can be configured in a one-to-many implementation where one Source file system is
copied to multiple Destination file systems in the same or other storage systems. When the
first replication session is created, two hidden internal checkpoints are created on the source
file system which are then copied to two more hidden internal checkpoints on the
destination file system. The second replication session from the Source to the next
Destination is created with two more pairs of checkpoints as shown here. Notice that there is
no common base between the two destinations because the sessions are using different sets
of internal checkpoints to start from. If the Source device should become unusable due to
disaster or hardware problems, there would be no simple way to continue replication
between the remaining storage systems other than to start new replication sessions with a
full copy of data over the network or by manually copying it using tape or a third storage
system.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 11


Prior to VNX OE for File v7.0.35.x a VNX or legacy Celerra could also be configured in a
cascade implementation where a Source file system is copied to a Destination file system in
the same or other storage system, which in turn becomes the Source for a second replication
session. When the first leg of the replication session is created, two hidden internal
checkpoints are created on the Source file system which are then copied to two more
internal checkpoints on the Destination file system. The Destination file system from the first
replication session becomes the Source to the next Destination with two more pairs of
internal checkpoints created as shown here. Notice that there is no common base between
the first Source and second Destination because the sessions are using different sets of
internal checkpoints to start from. If the middle system should become unusable due to
disaster or hardware problems, there would be no simple way to continue replication
between the remaining storage systems other than to start new replication sessions with a
full copy of data over the network or by manually copying it using tape or a third storage
system.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 12


To overcome the full copy limitation of ReplicatorV2, it has been enhanced with the
Incremental Attach option. In a one-to-many configuration, an Administrator can manually
create a user checkpoint on the Source and the first Destination, and then refresh the
replication session pointing to the checkpoints. Using the same user checkpoint on the
Source, the Administrator refreshes the second replication session to a user created
checkpoint on the second destination. Once this has been completed, if the Source system
should become unusable due to hardware malfunction or disaster, replication can continue
with one of the previous Destinations becoming the new Source and an incremental copy
being used to start the session referencing the user checkpoints. This replication scenario
can also be used to replace aging systems with new VNX models by adding in the VNX as a
one-to-many configuration, removing the original Source, and restarting replication with the
VNX as the new Source.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 13


As with a one-to-many configuration, Incremental Attach can be used with a Cascade
replication configuration. The administrator would first manually create a checkpoint of the
Source file system, and the Destination file system on the first leg of the cascade and use
them with the Incremental Attach option to refresh the session. Next, a checkpoint of the
final Destination’s file system is manually created and along with the first Destination’s
checkpoint, is used to refresh the replication session using the Incremental Attach option.
This sets the checkpoints as the common base for all the sessions. If the middle storage
system becomes unavailable due to disaster or hardware malfunction, the original source
can replicate directly to the final destination with just an incremental copy because of
the common base that was formed. This feature can be used to replace older systems with
new ones. If the middle system was in need of replacement, a new system could be placed
next to it and replicated to in a cascaded configuration. Once all the data was copied the
Incremental Attach option could be used to remove the older middle unit and restart
replication from the source to the new VNX.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 14


There are two main use cases for the one-to-many replication with Incremental Attach.
The first use case, which is what the feature was primarily developed for, is replacing older
systems with a new VNX. In this case the replication session is Source to Destination-A. The
customer wants to replace the Source box with a new VNX. To do this a new session from
Source to the VNX, Destination-B is created. Once all the data has been migrated to the VNX,
checkpoints are created on all the systems and the three sessions are refreshed with them.
Replication can now be stopped from the Source and Incremental Attach used to start a
new session from Destination-B (the new Source) to Destination-A with just an incremental copy of data to
sync the file systems up.
The second one is in case the Production/Source system goes down due to disaster or
hardware failure. The pre-disaster replication sessions would consist of the Source system to
Destination-A and Source system to Destination-B. With replication running you would have
to periodically create checkpoints on all systems and refresh your sessions with them in case
of disaster. Once the Source system is destroyed, Destination-A or B would take the place of
the Production system. Using Incremental Attach a new session would be started between
Destination-A and Destination-B with just an incremental copying of data to sync the file
systems.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 15


As with the one-to-many configuration, there are two uses for Incremental Attach with
Cascade replications. The first one would be when trying to migrate the destination system
of a replication to a new VNX without having to do a full copy from the production system. In
this configuration there would be one replication session from Production to Destination-A.
To replace Destination-A you would add the VNX at the destination site and start a new
cascading replication session Destination-A to Destination-B. Once all the data is copied over,
user checkpoints would be made of all the file systems and the replication sessions would be
refreshed with the checkpoints so all the systems have a common base. Destination-A would
then be turned off and a new Incremental Attach replication session would be started from
Production to Destination-B with only an incremental copy to sync the file systems up.
The second use case is for disaster recovery in case the middle system of a cascade
replication configuration should fail. There would be two replication sessions, Source-A to
Middle-B and Middle-B to Destination-C. The administrator would have to create user
checkpoints for the replicated file systems on all three systems and then refresh the sessions
using the checkpoints periodically. If the Middle-B site was destroyed or disabled, a new
replication session could be started using Incremental Attach from Source-A to Destination-C
with just an incremental copy to sync the file systems.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 16


Shown here is the nas_replicate command with the –refresh option.
-refresh: Updates the destination side of the specified replication session based on changes to the
source side. Execute this command from the Control Station on the source side only. A refresh
operation handles updates on demand; as an alternative, the -max_time_out_of_sync option
performs an update automatically after a specified number of minutes.
If the data changes on the source are large, this command can take a long time to complete. Consider
running this command in background mode.
-source: Instructs the replication -refresh option to use a specific checkpoint on the source side and a
specific checkpoint on the destination side. Specifying source and destination checkpoints for the
-refresh option is optional. However, if you specify a source checkpoint, you must also specify a
destination checkpoint. Replication transfers the contents of the user-specified source checkpoint to
the destination file system. This transfer can be either a full copy or a differential copy depending on
the existing replication semantics. After the transfer, the replication internally refreshes the user
specified destination checkpoint and marks the two checkpoints as common bases.
After the replication refresh operation completes successfully, both the source and destination
checkpoints have the same view of their file systems. The replication continues to use these
checkpoints as common bases until the next transfer is completed. After a user checkpoint is marked
with a common base property, the property is retained until the checkpoint is refreshed or deleted. A
checkpoint that is already paired as a common base with another checkpoint propagates its common
base property when it is specified as the source in a replication refresh operation. This propagation
makes it possible for file systems without a direct replication relationship to have common base
checkpoints.
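
As a quick illustration of the syntax just described, here is a minimal sketch of a refresh that
pairs a source checkpoint with a destination checkpoint; the session and checkpoint names are
hypothetical placeholders:

   # Run from the Control Station on the source side; -background is optional for long transfers
   nas_replicate -refresh mySession -source mySrcFs_ckpt1 -destination myDstFs_ckpt1 -background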

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 17


Before you can perform replications with VNX OE 1.1, a new global user needs to be created
with the Data Recovery role, as the default administrator account sysadmin does not have
the required privileges to perform replications between systems. To start the process select
your VNX  Settings  Security  User Management to get to this screen. Click on the
Global Users icon which will open the Global User Management window. Here you can see
that the only global user currently is the default administrator account sysadmin. Clicking on
the Add button will open the Add Global User screen. Here you can enter the username and
password, and select from a drop-down list the Storage Domain Role, which will determine the
privileges the new user will have in each system in the local domain. To do replication the
Global User needs the Data Recovery role. Once OK is pressed the new user will appear in
the Global User Management window.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 18


To verify the privileges the new user has for file, click on the User Customization for File icon
to open the user screen. Next, select the Roles tab and look at the different roles that can
be assigned to each user. After selecting Data Recovery, click the Properties button to
look at the privileges that role is granted. With the Data Protection section expanded you
can see here that a user with the Data Recovery role has Full Control for both Replication and
Checkpoints which is needed to configure Incremental Attach.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 19


To implement Incremental Attach, we need to first have a regular replication configuration
running. For this example a cascade replication configuration will be used. The screen on the
left is the Create File System Replication window from a VNX called VNX-E-1 and will create
the first session leg of the cascade from the Source to the Middle systems. The name for the
session is cascade-p1, and the interconnect is called vnx1dm2-vnx2dm2 because we are
going from DM2 on VNX-E-1 to DM2 on VNX-E-2. Once the first replication session is running
we can go to the same screen on VNX-E-2 to create the middle to end replication session of
our cascade configuration. This session will be called cascade-p2, and use the interconnect
vnx2dm2-vnx1dm3 because we are going from DM2 on VNX-E-2 back to DM3 on VNX-E-1.
Once this is finished the cascade configuration will be complete.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 20


If we look at the Replications tab on VNX-E-1 the two sessions, cascade-p1 and cascade-p2
are listed. Cascade-p1 is replicating file system Source-1 on local Data Mover server_2 to
VNX-E-2. Session cascade-p2 is going in the opposite direction and is replicating from VNX-E-
2 to file system Source-1_replica1 on local Data Mover server_3. The direction can be
determined by the direction of the arrows displayed here. The upper session has the arrow
pointing to the right, away from Source-1 on the local Data Mover to the remote Celerra
Network Server, while the lower session has the arrow pointing from the remote device to
Source-1_replica1 on the left.

This screen displays the same two replication sessions but looking at them from the view of
VNX-E-2. Notice that the arrows for the two sessions point in the opposite direction, and
that the Local Objects are not the same. VNX-E-2 is the middle
system of the cascade configuration. When file system Source-1 is replicated to VNX-E-2, it
can use the same name as the original. For the second session it uses that same file system
and replicates it to the final destination. VNX-E-1 can only have one file system called
Source-1, so when the file system is replicated to the second system and then back again, it
must be given a different name, which is why _replica1 is appended to it.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 21


Now that the cascade configuration is running, checkpoints need to be made for each of the
three file systems being used. The first checkpoint to make is of the Source file system. On
VNX-E-1 the command fs_ckpt Source-1 –Create will be used. The next checkpoint to make
is of file system Source-1 on system VNX-E-2 which is the middle system. Once again the
command fs_ckpt Source-1 –Create is used. The last checkpoint to make is of file system
Source-1_replica1 which is back on VNX-E-1 which is being used as the destination of our
cascade configuration. The command fs_ckpt Source-1_replica1 –Create is used this time.
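
For reference, the three checkpoint creation steps just described can be summarized as
follows; each command is run from the Control Station of the system that owns the file system:

   # On VNX-E-1: checkpoint of the original Source file system
   fs_ckpt Source-1 -Create
   # On VNX-E-2: checkpoint of the middle copy of Source-1
   fs_ckpt Source-1 -Create
   # On VNX-E-1: checkpoint of the final destination file system
   fs_ckpt Source-1_replica1 -Create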

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 22


With the checkpoints created the two sessions can now be refreshed using them. First
refresh the Source to Middle session. Run the CLI command nas_replicate –refresh <session
name> -source <Source Checkpoint name> -destination <destination checkpoint name>.
For our command the session name is cascade-p1, and the two checkpoints are both named
Source-1_ckpt1, which is possible because they reside on different systems.
Next the Middle to End session will be refreshed. Run the command nas_replicate –refresh
<session name> -source <Source Checkpoint name> -destination <destination checkpoint
name>. This time use the session name cascade-p2, and notice that the two checkpoints
have different names. The Source checkpoint on the Middle system VNX-E-2 is still Source-
1_ckpt1 but the Destination checkpoint back on VNX-E-1 is now Source-1_replica1_ckpt1
because it matches the name of the destination file system.
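
Putting the two refresh operations together, the commands look like the following sketch;
each is run from the Control Station on the source side of its session:

   # On VNX-E-1: refresh the Source-to-Middle session
   nas_replicate -refresh cascade-p1 -source Source-1_ckpt1 -destination Source-1_ckpt1
   # On VNX-E-2: refresh the Middle-to-End session
   nas_replicate -refresh cascade-p2 -source Source-1_ckpt1 -destination Source-1_replica1_ckpt1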

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 23


To prove that Incremental Attach works the original replication sessions will be deleted. First
the Source session on VNX-E-1 will be deleted. In the Confirm Delete window can be seen
the session name cascade-p1, which was going to system VNX-E-2. Once that is done, delete
the session from the middle to the end on VNX-E-2. Here you can see the session is cascade-
p2 going to VNX-E-1.
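
The deletions shown here are performed in Unisphere. If done from the CLI instead, the
equivalent would look roughly like the sketch below; the -mode both option is an assumption
about the typical choice and should be verified against the nas_replicate man page:

   # On VNX-E-1: remove the Source-to-Middle session
   nas_replicate -delete cascade-p1 -mode both
   # On VNX-E-2: remove the Middle-to-End session
   nas_replicate -delete cascade-p2 -mode both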

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 24


With the replication sessions stopped, a new Incremental Attach session can be created that
goes from the original Source to the End system without using the Middle system.
Displayed in the top box is the nas_replicate command to create the session. The name for
the new session is cascade-a-c to represent that we are going from the source to destination
without the middle step. As soon as the create command completes, the nas_replicate –info
command should be run to determine whether a full copy or an incremental copy was used
to start the session. The output of the command nas_replicate –info
cascade-a-c is shown here. Notice that the line “Current Transfer is Full Copy=No” is
displayed. This verifies that only an incremental copy was used to create the new session.
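
A quick way to check the transfer type after creating the new session is to query the session
and look for the Full Copy line; the grep filter is simply a convenience on the Control Station:

   nas_replicate -info cascade-a-c | grep -i "full copy"
   # Expected result for a successful Incremental Attach: Current Transfer is Full Copy = No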

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 25


Back on the Replications window of VNX-E-1 can be seen the new replication session
created with the CLI. If we look at its properties, you can see the name we gave it, cascade-
a-c, that the source file system is Source-1 and the destination file system is Source-
1_replica1.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 26


This module describes the latest enhancements to Unisphere 1.1.2X.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 27


The following screens will explain the management capabilities of Unisphere 1.1.2X and the
systems it can be used with. This table should be used to determine the versions of OE for
File, Block and Unisphere which are running on each system. For example, a system which is
labeled VNX 1.1.0 has VNX OE for File 7.0.12, Block R31 and Unisphere 1.1.0 embedded in it
depending on whether it is a File, Block, or Unified system.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 28


There are three types of domains in the Unisphere 1.0 domain environment:
1. Celerra-only domains
2. CLARiiON-only domains
3. Mixed domains of CLARiiONs and Celerras.
Unisphere can manage Celerra Gateway and Integrated systems running NAS 6.0. If there is
no CLARiiON with a minimum of FLARE Release 30, log in using scope LOCAL to connect to
the Control Station of the Celerra.
If the storage domain has only CLARiiON arrays, and the domain master has at least FLARE
Release 30 installed, use GLOBAL scope for the login to the storage domain. The single sign-
on automatically gives access to all domain members. Unisphere can be used to manage all
CLARiiON systems in the domain running FLARE Release 19 or above. Older FLARE releases
can still only use the features that their version supports.
Mixed domains consist of a CLARiiON domain master running at least FLARE Release 30,
other CLARiiON systems running a minimum of FLARE Release 19, and Celerra systems
running NAS 6.0 or above, all of which can be added to the storage domain and managed by
Unisphere.
USM 1.0 can manage all Celerras and CLARiiONs in the domains and multi-domains.

Copyright © 2010 EMC Corporation. Do not Copy - All Rights Reserved. 29


When Unisphere 1.1.0 was introduced, it presented some incompatibilities with legacy
systems. It cannot be used in domains with Celerras running DART 6.0 and CLARiiONs
running FLARE R19-R30; to manage these domains, Unisphere 1.0 must still be used. Unisphere 1.1.0
can be used to manage VNX 1.1.0 systems, but a GUI session must be opened for each
system individually, and they are treated as standalone domains. Unisphere 1.1.0 can’t
manage MirrorView between two VNX so it must be done with CLI commands. Also
Unisphere 1.0 which is needed to manage legacy systems is unable to manage VNX 1.1.0
systems. USM 1.1, which was introduced with Unisphere 1.1.0, can be used with
Celerras running 6.0, CLARiiONs running R19-R31 and VNX 1.1 systems.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 30


With the introduction of Unisphere 1.1.2 comes the return of the single pane of glass when
managing VNX and legacy Celerras and CLARiiONs and full support for MirrorView.
Unisphere 1.1.2 can manage legacy domains of Celerras running DART 6.0 and CLARiiONs
running FLARE R19-R30. It can also manage VNX 1.1.0 standalone systems, and domains
made up of VNX 1.1.2 systems. VNX 1.1.2 domains are now able to form multi-domain
structures with legacy domains, and VNX 1.1.0 standalone domains. The latest version of
USM 1.1 can also be used to service all of these systems. Note that Unisphere 1.0 can still be
used to manage legacy domains and Unisphere 1.1.0 can still manage standalone VNX
systems.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 31


The transition plan for users who have VNX 1.1.0 systems which have to be managed as
standalone systems with a GUI session opened for each one, is to either add a VNX 1.1.2
system or upgrade an existing 1.1.0 VNX to 1.1.2. This will allow the VNX 1.1.2 system to
form multi-domain links to the other VNX and manage them all with one Unisphere session.
The added or upgraded system will then also be able to form a multi-domain link to legacy
domains for management. Unfortunately, VNX gateway systems are still unable to be linked
to the domain and need to be managed individually for this release.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 32


When the VNX 1.1.0 systems are upgraded to VNX 1.1.2, they can either be left as
standalone domains in a multi-domain configuration, or added together to form one large
domain. As before, this VNX 1.1.2 domain can form a multi-domain with legacy systems to be
managed by a single Unisphere session. The VNX VG2 and VG8 Gateways still must be
managed on their own.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 33


Shown here is the Domains screen in the Unisphere 1.1.2 version. On the top left are icons
for the Local Domain which is a 1.1.2 domain, and icons for VNX 1.1.0 systems which are in
standalone domains with multi-domain links which can be managed by this version of
Unisphere. Under the Multi-Domain section is the wizard to Manage Multi-Domain
Configurations. Below that in the Local Domain section are wizards for configuring NTP,
selecting the Domain Master, Adding and Removing Systems, and Scanning for Block
Systems. If you click on the Local Domain above, the bottom window will display all the
systems which are currently in it. Here on the right, you can find the Block software revision
and the revision of client software. As these systems are in the Local Domain they must be
running a 1.1.2 revision of Unisphere as shown.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 34


To add or remove a system in a multi-domain configuration, click on the Manage Multi-
Domain Configuration wizard which will open the Multi-Domain Management window. Enter
the IP address and a Local name to use for the system in the fields shown, and then click on
the arrow to search for the system. When the system is found, you will have to accept the
security certificate in order to manage the system. The new domain should now appear in
the Selected Domain panel. To use this wizard to remove a domain from the configuration
simply move it over to the Available Domains panel and click OK. It must be noted that to
manage legacy domains and VNX 1.1.0 systems, this wizard must be used to configure the
multi-domain configurations. Only VNX with OE for File v7.0.35.x and Block R31.5.5xx or
newer can be added to a local domain.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 35


To add a VNX running the latest code to the Local Domain, click on the Add/Remove System
wizard which will open the Add/Remove Systems to/from the Local Domain screen. From
here click on the Add button to open the Host Logon screen where you need to enter the IP
address of the VNX SP. If you enter the CS address it will return an error. When you click on
the Connect button an Add System message will appear informing you that this will remove
the VNX from its own domain and add it to this one. Clicking Yes here will open the
Unisphere Login page where you can enter the administrator credentials for that system.
Once you click on the Login button, the system will verify the credentials and then move the
system into the Local Domain where it will now be listed.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 36


Unisphere 1.1.2X comes with support for local users for both Block and File. On the User
Management page, which is accessed through Hostname > Settings > Security > User
Management, can be seen icons for Local Users for File, which will create new users on
the Control Station, and Local Users for Block which will create new users on the Storage
Processor. If you are logging in with a local user for File you would enter the IP address of the
CS, the user name and password, and scope Local. If you are logging in with a local user for
Block you enter the IP address for the SP, the user name and password, and scope Local.
Accessing the system like this is called degraded mode or async login which permits a user to
connect/authenticate to either the control station or the storage processor and occurs with
the error shown under the following circumstances:
- Certificates on one side are invalid or rejected
- User logs in as a local user which only exists on one side
- User logs in as a local user which has different credentials on each side
- User logs into SP when CS server is unavailable/not responding (or taking excessively long
to respond)
- User logs into CS when k10governor is unavailable/not responding (or taking excessively
long to respond)
Once you successfully log in, Unisphere will only display the fields the local user has rights to
use, and grays out all the rest. It should be noted that this same condition can occur if the
SPs and CS are not communicating with each other correctly. If local users are created on
both the File and Block sides with the same credentials, some of these errors will no longer occur.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 37


On prior versions of VNX OE code, the credentials cache on the CS would go out of sync
when the global administrator account password or role was changed or the account was
deleted. The Control Station uses the global administrator account to issue “naviseccli”
commands. If that global account on the block side is deleted or changed, the CS does
NOT get updated. As a result, naviseccli commands from the CS failed. A Primus solution provided
a manual way to update credentials on the CS.
With this version of VNX OE the “System” global account will be used to issue naviseccli
commands from the CS. This account will be monitored for all password changes and the
credentials cache will be automatically updated. There is now only one administrator
account, and its role and scope cannot be changed. Only one account can be the
system administrator and there is only one system administrator per domain.
This system account is tagged during the initialization of a new VNX, or when a VNX
which has been upgraded to v7.0.35.X is added to an existing domain where the
“System” account must already exist.
Note that if a VNX is upgraded from V7.0.1X to V7.0.35.X the “System” account must be
manually created.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 38


This module describes the latest enhancements to VAAI.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 39


VAAI stands for vStorage Application Programming Interfaces for Array Integration. These
APIs allow ESX to offload supported operations to the array. This reduces the host workload,
lessens network traffic, and improves performance by taking advantage of efficient array
functions. If an unsupported operation is attempted, the array returns an error to ESX and
the ESX server will use its native method to perform the operation. VNX OE for File v7.0.35.X
and Block R31.5.5xx supports VAAI with FC, iSCSI, and NFS protocols.
Please note that the ESX server must also support these features.
RecoverPoint using the VNX splitter fully supports VAAI features. Please refer to “EMC
RecoverPoint Replicating VMware Technical Notes” available on Powerlink for details on VAAI
and RecoverPoint.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 40


Let’s look at an example of VAAI.
An ESX admin initiates a clone operation on a VM located on a VNX.
The ESX server will use the vStorage API, issuing the EXTENDED COPY SCSI command to the
VNX to leverage the array’s efficient ability to copy data. The array performs the data copy,
and communicates this to the server. ESX completes the clone operation.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 41


One can verify that VAAI functionality is enabled on the ESX server using vSphere. Navigating
to the ESX server, Configuration tab, and choosing the Advanced Settings under Software
brings up the Advanced Settings window. The HardwareAccelerated parameters found in the
DataMover section and the HardwareAccelerated parameter found in the VMFS3 section
indicate whether VAAI is enabled on the ESX server. If the parameter is set to “1,” the VAAI
feature is enabled. If the parameter is set to “0,” the VAAI feature is disabled. The features
are enabled by default.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 42


The esxcfg-advcfg command can also be used to set and get the HardwareAccelerated
parameters.
The –s option is used to set the parameter, and the –g option is used to get the current
setting.
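
As a sketch of the command usage, the VAAI settings referenced on the previous slide can be
read and changed as shown below; the parameter paths are the standard DataMover and
VMFS3 hardware-acceleration settings, though the exact names may vary by ESX release:

   # Check the current values (1 = enabled, 0 = disabled)
   esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
   esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
   esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
   # Disable, then re-enable, one of the parameters
   esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
   esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove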

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 43


VNX OE for Block R31 included the VAAI features shown here. By offloading host-based
functions to take advantage of efficient disk-array storage functions, these features
significantly reduce host workload as well as improve the efficiency and performance of
the environment.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 44


With VNX OE for Block R31.5.5xx, VAAI for Block also includes native support for Virtual
Provisioning. This directly leverages the array’s capability to return space to the storage pool
when VMFS files are deleted. When a VMFS file is deleted, a SCSI UNMAP command is sent to
the array and that space is zeroed out. Space is returned to the pool in slices, so at least one
gigabyte’s worth of data needs to be zeroed out before space is returned to the pool. An
example of deleting VMFS files would be deleting Virtual Machines or Virtual Disks (i.e., .vmdk
files).

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 45


When a file is deleted from the vmfs file system, the file is removed from the file system,
the space is unallocated from the thin LUN, and returned to the storage pool.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 46


VNX OE for File v7.0.35.X enables several VAAI for File features. Fast and Full clones leverage
the array to copy and move blocks of data. Fast clones are instantaneous copies of VM disks
that use Temporary Working Snapshots to keep track of changes. Fast clones can be created
with the vmkfstools dash capital I (-I) option. Full clones are full copies of the VM disk. Full
clones can be created with the vmkfstools dash lowercase i (-i) option. Extended stats
provide the logical, physical, and unshared size of vmdk files. The unshared size of a vmdk
indicates bytes that are not shared with other vmdk files. As an example, in the case of Fast
cloning the initial value of unshared bytes of the Fast clone will be zero since the original and
Fast clones share the same data. The Reserve Space feature avoids “out of space” errors
during I/O for cloning by pre-provisioning the necessary space. Similar to block VAAI features,
these features reduce host workload, lessen network traffic, and improve over-all
performance.
VAAI for File features are currently only supported with NFS version 3 datastores, VMs with
hardware version 8 or later, and require installation of the NFS plug-in on the ESX server.
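
For illustration, a Full clone and a Fast clone of a VM disk on an NFS datastore might look like
the following sketch; the datastore paths and file names are hypothetical, and the -I option
assumes the NFS plug-in mentioned above is installed:

   # Full clone (-i): complete copy, here to another NFS datastore on the same NAS server
   vmkfstools -i /vmfs/volumes/nfs-ds1/vm1/vm1.vmdk /vmfs/volumes/nfs-ds2/vm1-copy/vm1-copy.vmdk
   # Fast clone (-I): near-instant, space-efficient copy within the same file system
   vmkfstools -I /vmfs/volumes/nfs-ds1/vm1/vm1.vmdk /vmfs/volumes/nfs-ds1/vm1-fast/vm1-fast.vmdk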

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 47


Some important limitations of VAAI for File features are shown here, please take a moment
to review them.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 48


The Control Station’s server_stats command has been enhanced to provide usage and
performance statistics for VAAI operations. The statpath is nfs.v3.vstorage. Shown here is an
example of the command and the output.
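
A sketch of such a query is shown below; the Data Mover name and the sampling options are
examples only:

   # Sample the VAAI statistics on server_2 every 30 seconds, 10 times
   server_stats server_2 -monitor nfs.v3.vstorage -interval 30 -count 10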

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 49


This module describes enhancements made to the latest versions of Unisphere and USM to
improve software compatibility.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 50


Every upgrade package to the VNX OE comes in two pieces, File and Block, which have to be
the correct matching versions in order to get compatibility between the File and Block
portions of the hardware. To accomplish this there have been several enhancements made
to USM and Unisphere. Here you see the Welcome Screen for the Software Installation
Wizard of USM. The top section gives options for installing File and Block code together, File
only, or Block only. For a Unified system the combined upgrade option should be chosen.
Below this is one of several warnings which have been placed throughout the process
reminding the user that File must always be upgraded before Block and that the two should
be upgraded as a set. It should be noted that there is still no enforcement for upgrading File
first in the USM code. Once the File portion of the upgrade is completed, the File Installation
Completion screen gives a warning that if the Block portion is not done immediately,
there could be system performance or management issues. Another enhancement to USM
is that the Cancel button has been disabled so that users won’t exit before doing the Block
portion of the upgrade. It should be noted that if you are doing a manual upgrade of the File
code there are no warning messages in the CLI.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 51


Enhancements have also been made to Unisphere to aid in keeping the two code versions in
line. Here you can see outlined a Critical error telling the administrator that the Block code is
incorrect. If you click on the error to open it up, you can see it goes into detail explaining
that the Block code must be updated to match the File revision or there may be system
compatibility issues. The alert in Unisphere will appear 4 hours after the File upgrade and
can’t be deleted until the Block code is upgraded successfully. The system will also initiate a
call home 4 hours after the upgrade so that the call center is aware of the issue, and will
repeat the call home every 48 hours until the issue has been resolved.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 52


The POST on VNX 5700 and 7500 systems shipped to date (~300 systems) cannot update the
firmware on the SP due to improper power supply status reporting. A new version of POST
has been created to work around the problem. A field action is in place to replace the
affected SPs, but it will take time for the field to execute. A new check in the PUHC has been
requested so an upgrade to VNX OE for Block R31.5.5xx will be prevented if the system is
running the old version of POST.
If the software upgrade is not blocked on a system without the POST FCO, the presence of
High Efficiency power supplies will disrupt necessary R31.5.5xx firmware upgrades. A mixing
of VNX OE for Block software with VNX OE for File v7.0.1X firmware is untested and likely to
be unstable. This check will block a customer from upgrading either the File or Block
components of the VNX system to VNX OE for Block R31.5.5xx (or later releases). The File
and Block software must be kept in sync, especially for R31.5.5xx.
If the VNX OE for Block is allowed to be upgraded without the latest POST, there is a risk to
customers of a DU/DL or functional regression – such as a panic due to lack of the 300 MB of
memory that the R31.5.5xx firmware can access.
Field Change Order FC041211FC has been implemented to swap out the older Storage
Processor assemblies with newer ones, which also have the correct version of POST installed.
The replacement procedure is documented in Primus emc268277. This procedure also
explains how to use the navicli command getresume to discover the version of the SP before
going onsite to do the SP replacement procedure.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 53


Before running an upgrade to the VNX OE for File, a check should be run to make sure the
system is operating properly. Run the Pre-Upgrade Health Check with the command
/nas/tools/check_nas_upgrade –pre. Shown here is an abbreviated output of the command.
Notice the new check which has been added to check for the correct version of POST on the
SP. In this run the check has failed as the POST is too old. Further on in the output of the
PUHC are listed the corrective actions for any errors found. The corrective action for this error
is to escalate the problem to support which would then have new Storage Processors
ordered.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 54


When using USM to upgrade VNX OE for File the PUHC will be run first to make sure the
system is capable of being upgraded. Here can be seen the same error for the SP POST
revision as when doing a manual upgrade. If the error is clicked on the Information window
will gave a detailed explanation of the problem as seen here in this expanded view.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 55


Like OE for File, if the VNX OE for Block is being upgraded it will first have to go through a
Health Check to make sure the system is capable of being upgraded and everything is
operating normally to prevent DU/DL conditions. Once again the PUHC has found an error
with the POST revision on the Storage Processors which needs to be corrected. If the error is
clicked on, the detailed error will appear in the lower window as shown here in the expanded
window.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 56


This module covers the options available with VNX OE for the VNX5100 to boost
performance or increase storage capacity utilization. It also includes environmental benefits,
and implementation scenarios.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 57


Let’s begin with an overview of FAST Cache and Thin Provisioning.

FAST Cache leverages Enterprise Flash Drives (EFDs) to add an extra layer of cache between
the Dynamic Random Access Memory (DRAM) cache and rotating spindles, which increases
the I/O service responsiveness. In a VNX series where FAST Cache is enabled, hot data (or
busy I/O) and cold data (or less busy I/O) are automatically identified. Hot data is moved to
faster drives or cached in the FAST Cache layer to facilitate faster access. Once the data has
gone cold (or less busy I/O), the system automatically identifies that and moves it out of the
cached space or moves it to lower speed drives.

Thin Provisioning is a pool-based storage provisioning solution to implement pool LUNs. If
Thin is not selected at creation, a Thick LUN is automatically created. Thin LUNs provide on-
demand storage, while Thick LUNs provide high and predictable performance
for your application. Thin Provisioning helps facilitate advanced data services such as Fully
Automated Storage Tiering (FAST VP) and compression.

Starting with VNX OE for Block R31.5.5xx, you can install either the Thin Provisioning
enabler or the FAST Cache enabler on the VNX5100 platform, but not both.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 58


Starting with the VNX OE for Block R31.5.5xx, a new mechanism is embedded in the
operating environment for the VNX5100. This mechanism allows for the installation of the
FAST Cache enabler or Thin Provisioning enabler but never both on the system. This is done
to allow the storage administrator to choose between the two solutions, depending on what
matters most, without impacting the system's performance or availability.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 59


The changes can also be seen in the command line interface. For this example, we attempt to
install both enablers, FAST Cache and Thin Provisioning. The interface will allow all the rules
to be run; however, after confirming the package installation, the error will show up.
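
Assuming the enablers are installed with the naviseccli ndu command, the attempt described
above would look roughly like the following sketch; the package file names are hypothetical:

   # Install the FAST Cache enabler, then attempt to also install the Thin Provisioning enabler
   naviseccli -h <SP_IP_address> ndu -install FASTCacheEnabler.ndu
   naviseccli -h <SP_IP_address> ndu -install ThinProvisioningEnabler.ndu   # fails on a VNX5100 once FAST Cache is present
   # List the packages committed on the array
   naviseccli -h <SP_IP_address> ndu -list
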
So, how would you decide which enabler is best for your assigned environment? Let’s review
some of the key benefits for both solutions.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 60


One method that you can follow is by informing the administrator about the benefits of both
enablers.
FAST Cache is a solution that leverages EFDs to extend the existing caching capacity of the
VNX storage system. It extends the caching by copying data that is accessed frequently from
their original drives to EFDs with a granularity of 64 KB. Data is constantly promoted to, or
demoted from, FAST Cache without polling or relocation cycles.
Thin Provisioning is a pool-based storage provisioning solution to implement Pool LUNs. Its
Thin technology allows the storage administrator to maximize the use of their storage
capacity by allocating storage as it is needed. For applications that require high and
predictable performance, its Thick technology can be used. Thin Provisioning can be
managed simply by a few mouse clicks using EMC Unisphere.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 61


Previously on any VNX series, a LUN created on EFDs automatically had its cache settings
disabled. With this release of the VNX OE, the read and write cache settings will
automatically be enabled for new EFD LUNs, but will remain disabled for EFD LUNs created
prior to the upgrade. As always, the end user can disable them after creation. On the
command line interface, the settings can be disabled using the bind command by specifying
the –wc option in the syntax.
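
As a sketch, binding a classic LUN on an EFD RAID group with write cache explicitly disabled
might look like the following; the RAID type, LUN number, RAID group ID, and capacity are
placeholders, and the exact form of the cache switches should be confirmed in the naviseccli
reference:

   # Bind LUN 20 on RAID group 0 with write cache disabled (-wc 0)
   naviseccli -h <SP_IP_address> bind r5 20 -rg 0 -cap 100 -sq gb -wc 0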

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 62


This module covers the newly bundled VNX ESRS IP Client. It focuses on uDoctor with
emphasis on theory of operation. It also includes information about the installation process
along with environment requirements.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 63


The VNX Series is monitored with EMC Secure Remote Support (ESRS). ESRS consists of a
secure, asynchronous messaging system that monitors device status and allows for execution
of remote diagnostics activities. Information is transferred securely with the use of
encryption. With the release of VNX OE for Block R31.5.5xx, the VNX ESRS IP Client is being
bundled with another product to aid with the call-home procedure.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 64


As part of this release, the VNX ESRS IP Client is being bundled with uDoctor. uDoctor is a
host-based solution that monitors events that are being generated, and it sends notification
only for specified events that match specific rules for business or technical purposes.
Without uDoctor, ESRS IP Client was simply taking the events and sending them to the
Support Group for analysis. With uDoctor, events now go through a set of rules to identify
the way in which each should be handled. This new process is made possible by uDoctor’s
three components: (1) Array Health Analyzer (AHA); (2) Triiage Real Time (TRT); (3) Triiage on
Management Station (TOMS).
It’s best that we identify any requirements that may exist, in the next slide we will review
them.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 65


Since uDoctor is bundled with the VNX ESRS IP Client, its requirements are the same as the
previous version of ESRS IP Client. The host-based requirements are that the host must be
running the Windows Operating System, and it must not be connected to the storage array.
On the array side, the requirement is that the storage system must either be a VNX for Block
system or one of the CLARiiON systems such as the CX3/5/7, the CX3 series, the CX4 series,
or the AX4-5 series running FLARE 19 and above.
Now that we’ve learned about what systems this solution currently works with, let’s see
what to expect from installing it.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 66


In order to realize the benefits from the use of uDoctor, the VNX ESRS IP Client 2.x must be
installed on a management station in the production environment. The installation process
is the same as installing the previous version of ESRS IP Client, but with one added bonus. In
the status page of the installation, you will notice AHA installing on two separate
occasions. Once the installation process has completed, there should be a TOMS icon on the
desktop of the management station.
With the VNX ESRS IP Client 2.x installed, it is important to identify and become familiar with
uDoctor’s operation.

Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 67


uDoctor’s method of operation can be best described in this example. Let us take a moment
to go through it.
We have a VNX series system along with the ESRS host. (Step-1) When an event is created,
AHA intercepts it before it gets to ConnectEMC. (Step-2) The event, in the form of an .xml
file, is processed by AHA which filters the event. If AHA approves it, (Step-3) the file is sent
to TRT for further processing. After receiving it, TRT uses naviseccli and ktcons commands on
the array to gather the necessary files needed to run its scripts. Based on TRT’s analysis, the
event will go one of two ways:
If the event is not worthy of a dial home, (Step-4) TRT will send it to AHA for archiving, where
the process will end; or, if the event is valid, (Step-5) TRT will process the .xml file and add relevant
information for the dial home before sending it to AHA.
Note: If after the .xml file is sent to TRT, AHA receives no response within five minutes, AHA
will consider the event valid and follow through with the rest of the process.
Once AHA has received the xml file back from TRT, (Step-6) it will send TOMS an
asynchronous request to generate SPCollects on the array and store them locally on the
management station at C:\toms_data. The SPCollect will also be triaged and placed in a file
named Triage.zip. (Step-7) AHA will also move the .xml file to the ConnectEMC poll location,
from which it is then sent to the support group.
In summary, uDoctor aids VNX ESRS IP Client by filtering each and every event, archiving each
event and sending each event to ConnectEMC for the support group’s analysis.
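For reference, the SPCollect generation that TOMS automates in Step 6 can also be triggered manually with naviseccli. The lines below are a minimal sketch; the SP address (10.0.0.1) is a placeholder, and the exact retrieval options can vary by release, so verify them against the naviseccli documentation.

  # Start an SPCollect on the SP (placeholder SP address)
  naviseccli -h 10.0.0.1 spcollect
  # List the files available on the SP, including the finished SPCollect archive
  naviseccli -h 10.0.0.1 managefiles -list
  # Retrieve the selected files to the local management station
  naviseccli -h 10.0.0.1 managefiles -retrieve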
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 68
This module covers the VNX OE for Block enhancement that allows LUNs to be configured as read-only through the CLI, for use by Cloud Service Providers.
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 69
Prior to the release of VNX OE for Block R31.5.5xx, all LUNs that were added to Storage Groups were writable by all members of the group. Cloud Service Providers requested a mechanism that would allow them to share LUNs with multiple hosts but with read-only privileges. To accomplish this, VNX OE for Block R31.5.5xx now supports adding LUNs to Storage Groups as read-only through the -addhlu option. This feature is only available from the CLI, and the -readonly switch is not displayed in the command-line syntax help. In the example shown, LUNs are added to the Storage Group with read-only access. This allows Cloud Service Providers to publish the information on the LUNs to the web, where it can be viewed safely by multiple users without the possibility of the data being changed or corrupted.
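The command from the slide is not reproduced in this transcript; the line below is a hedged reconstruction using placeholder values (SP address 10.0.0.1, Storage Group CloudSG, host LUN 14, array LUN 14). The placement of the -readonly switch is assumed from the narration.

  # Add array LUN 14 to the Storage Group as host LUN 14 with read-only access
  naviseccli -h 10.0.0.1 storagegroup -addhlu -gname CloudSG -hlu 14 -alu 14 -readonly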
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 70
Before adding read-only LUNs, let’s take a look at our existing Storage Group. Here we see that there are four LUNs currently in the group. The Properties page does not show whether a LUN is read-only; by default, all LUNs are read/write, and the read-only functionality can only be seen using naviseccli commands. If a naviseccli storagegroup -list command is run against the array, there is no indication of whether a LUN is read/write or read-only.
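A hedged example of that listing, using the same placeholder values as before:

  # List the Storage Group; without the -readonly switch, no Read Only field is displayed
  naviseccli -h 10.0.0.1 storagegroup -list -gname CloudSG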
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 71
To make a LUN read-only, the naviseccli command storagegroup -addhlu is used with the addition of the -readonly switch, as shown here. If formatted correctly, the command completes silently and returns only the command prompt. The command storagegroup -list displays the Storage Group with all the LUNs it contains. Notice that there is no field identifying which LUNs are read-only and which are read/write. The reason is that the -readonly switch was not used with the -list command, so the Read Only status field is not displayed for any LUN.
This is done purposely to hide the functionality from standard users.
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 72
To display the read-only status of the LUNs, simply add the -readonly switch at the end of the command. Shown here is the command with the -readonly option added. Notice that a Read Only column has been added to the output and that the LUN added in the previous step now shows Yes in that field.
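A hedged example of the command described here, with placeholder values:

  # List the Storage Group with the hidden -readonly switch to include the Read Only column
  naviseccli -h 10.0.0.1 storagegroup -list -gname CloudSG -readonly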
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 73
Displayed here is the LUNs page of the Storage Group to which the read-only LUN was added. Notice that no field in the GUI indicates that LUN 14 is currently read-only.
To make a LUN that is already in a Storage Group read-only, it must first be removed from the Storage Group and then added back with the -readonly switch. There is no modify switch that can change a LUN already in a Storage Group from writable to read-only. This was done deliberately, so that no one could make a LUN read-only by accident; doing so requires the conscious effort of running several commands.
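A minimal sketch of that conversion sequence, using the same placeholder values; plan for host access to the LUN being interrupted between the two commands.

  # Remove host LUN 14 from the Storage Group
  naviseccli -h 10.0.0.1 storagegroup -removehlu -gname CloudSG -hlu 14
  # Add it back with the -readonly switch
  naviseccli -h 10.0.0.1 storagegroup -addhlu -gname CloudSG -hlu 14 -alu 14 -readonly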
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 74
To reverse the read-only status and make the LUN writable again, first remove the LUN from the Storage Group with the -removehlu command option.
Next, add the LUN back into the Storage Group using the -addhlu command option, but without the -readonly switch. Finally, run the -list -readonly command options to see the status of the LUNs in the Storage Group. Notice that none of the LUNs has read-only status any longer.
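The same sequence as a hedged sketch, with placeholder values:

  # Remove the read-only LUN from the Storage Group
  naviseccli -h 10.0.0.1 storagegroup -removehlu -gname CloudSG -hlu 14
  # Add it back without the -readonly switch, restoring read/write access
  naviseccli -h 10.0.0.1 storagegroup -addhlu -gname CloudSG -hlu 14 -alu 14
  # Verify that the Read Only column no longer shows Yes for any LUN
  naviseccli -h 10.0.0.1 storagegroup -list -gname CloudSG -readonly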
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 75
This module covers the changes made to AVM to align it with Best Practices when using Pool-based LUNs.
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 76
Prior to the release of VNX OE for File v7.0.35.x, the scripts that AVM used when creating or extending file systems did not match the recommendations in the published Best Practices guide.
Starting with VNX OE for File v7.0.35.x, the AVM scripts are aligned with the Best Practices documents and procedures and can make better use of mapped pool LUNs. As part of the changes, the Slice Volumes checkbox is enabled by default for all file systems. This allows AVM to use just the amount of space needed without consuming whole volumes. The Thin Enabled checkbox is also now enabled so that this functionality can be used on mapped pools. Finally, the search order for selecting the LUNs/dvols to use when creating and extending file systems has been changed to reflect how file systems will be built.
It should be noted that for file system extension, AVM will always try to expand onto the existing volumes of the file system first.
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 77
The first step the AVM scripts take is to divide the LUNs into Thick and Thin groupings. Once this is done, AVM tries to stripe five dvols together that are the same size, have the same data services, and are balanced across the SPs. If five such dvols cannot be found, AVM will then try four, then three, and finally two dvols to build the stripe.
AVM prefers SP-balanced dvols over homogeneous data services.
If Thick LUNs/dvols cannot be found to satisfy the above, the same search is performed for Thin LUNs: a five-dvol stripe, then four, then three, and finally two dvols.
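Although AVM performs this selection automatically, the candidate dvols and pools it chooses from can be inspected from the Control Station. These are standard VNX for File commands; the listing itself is informational only.

  # List the disk volumes (dvols) visible to the Data Movers, with size, type, and in-use status
  nas_disk -list
  # List the storage pools, including the mapped pools that AVM draws from
  nas_pool -list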
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 78
If AVM cannot find enough Thick or Thin LUNs/dvols to stripe, it will then try to concatenate enough Thick LUNs to meet the size requirement. If AVM cannot find enough Thick LUNs to concatenate, it will try to find enough Thin LUNs to concatenate. Finally, if AVM cannot find enough Thin LUNs, it will try to concatenate Thick and Thin LUNs together to meet the size requirement. If all of these attempts fail, the file system creation or extension fails.
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 79
To comply with the best practice of never using Thin back-end storage for File, Unisphere now allows Thin to be set on the file side for mapped pools. This amounts to enabling the Thin Enabled checkbox for mapped pools in the file system creation pages of both Unisphere and the Create File System Wizard, so that a user can set Thin Enabled when creating or migrating a file system on mapped pools. The window on the left shows the file system creation page. Notice that the Thin Enabled checkbox is now available for use, and that the Slice Volumes checkbox is checked by default. If the Thin Enabled box is checked, the High Water Mark and Maximum Capacity must also be set.
The following pages are affected by this change: the file system creation page, the migration file system creation page, the file system properties page, the migration file system properties page, and the File System Wizard.
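For completeness, the same Thin Enabled settings can be applied from the Control Station CLI. The sketch below assumes typical VNX for File nas_fs syntax and placeholder names (file system fs01, mapped pool mapped_pool0); the option names, particularly -thin, should be verified against the nas_fs man page for your release. Slicing does not need to be requested explicitly because, as noted earlier, Slice Volumes is now the default.

  # Create a 10 GB thin-enabled file system on a mapped pool, with auto-extension,
  # a 90% high water mark, and a 100 GB maximum capacity (values are illustrative)
  nas_fs -name fs01 -create size=10G pool=mapped_pool0 -auto_extend yes -thin yes -hwm 90% -max_size 100G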
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 80
Displayed here is the New File System Wizard screen. You can see that during the Auto Extension Enabling step it is also possible to select Thin Enabled.
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 81
Listed here are the key points covered in this course. This concludes the instruction; please proceed to the course assessment.
Copyright © 2011 EMC Corporation. Do not Copy - All Rights Reserved. 82