Storage Automation
Abstract
Microsoft System Center 2012 Virtual Machine Manager introduces new storage
automation features enabled by the Storage Management Initiative
Specification (SMI-S) and supported by EMC Symmetrix VMAX, CLARiiON CX4,
and VNX series of storage systems. This document explains the new storage
architecture and how to set up the environment to explore and validate these
new storage capabilities.
October 2012
Copyright © 2012 EMC Corporation. All rights reserved. Published in the USA.
EMC believes the information in this publication is accurate as of its publication date.
The information is subject to change without notice.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC
Corporation in the United States and other countries. All other trademarks used
herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical
documentation and advisories section on EMC online support at
https://support.emc.com/.
Some examples depicted herein are provided for illustration only and are fictitious.
No real association or connection is intended or should be inferred.
This document does not provide you with any legal rights to any intellectual property
in any Microsoft product. You may copy and use this document for your internal,
reference purposes. You may modify this document for your internal, reference
purposes.
Microsoft, Active Directory, Hyper-V, SQL Server, Windows, Windows PowerShell, and
Windows Server are trademarks of the Microsoft group of companies. All other
trademarks are property of their respective owners.
Contents
Chapter 5 Validate Storage Automation in your Test Environment .. 107
Test environment validation .................................................................... 108
Set up the Microsoft VMM storage automation validation script ............. 108
Download the Microsoft VMM validation script .............................................. 109
Use a script editor that supports breakpoints................................................. 109
Script configuration XML input file contents ................................................... 109
Sample XML file ............................................................................................. 112
Configure trace log collection .................................................................. 113
Configure tracing for the Microsoft Storage Management Service ................... 113
ECOM I/O tracing for the EMC SMI-S Provider ................................................. 115
Review the full test case list developed by VMM ..................................... 117
Test case list for EMC storage arrays ....................................................... 119
Test results for Symmetrix VMAX arrays .......................................................... 119
Test results for CLARiiON CX4 arrays............................................................... 120
Test results for VNX arrays.............................................................................. 122
Test storage automation in your preproduction environment .................. 123
Chapter 6 Prepare for Production Deployment ............................... 125
Production deployment ........................................................................... 126
Identify issues unique to your production environment .......................... 126
Production deployment resources .......................................................... 126
Microsoft Private Cloud Fast Track program ............................................ 129
Appendix A Install VMM .................................................................. 131
Appendix B Array Masking and Hyper-V Host Clusters ...................... 137
Appendix C Enable Large LUNs on Symmetrix VMAX Arrays .............. 151
Appendix D Configure Symmetrix VMAX TimeFinder for Rapid
VM Provisioning ........................................................... 155
Appendix E Configure VNX and CLARiiON for Rapid
VM Provisioning ........................................................... 161
Appendix F Terminology.................................................................. 163
Appendix G References ................................................................... 175
Preface
As part of an effort to improve and enhance the performance and capabilities of its
product line, EMC® from time to time releases revisions of its hardware and
software. Therefore, some functions described in this guide may not be supported
by all revisions of the software or hardware currently in use. For the most up-to-date
information on product features, review your product release notes.
If a product does not function properly or does not function as described in this
document, please contact your EMC representative.
Purpose
This document describes how EMC supports new storage automation features that
are available in Microsoft® System Center 2012 – Virtual Machine Manager (VMM).
These new features build on the Storage Management Initiative Specification (SMI-S)
developed by the Storage Networking Industry Association (SNIA).
EMC has validated in a test environment the new VMM storage capabilities with
supported EMC storage systems. This document serves as a guide to build and test a
similar environment.
Audience
Each chapter addresses the following audiences:

Chapter 3: Plan a Private Cloud
  Cloud administrators, Storage administrators, VMM administrators, Security administrators, Hyper-V and other server administrators, Network administrators, Self-service portal administrators

Chapter 5: Validate Storage Automation in your Test Environment
  Cloud administrators, Storage administrators, VMM administrators, Hyper-V and other server administrators

Chapter 6: Prepare for Production Deployment
  Solution architects, Cloud administrators, VMM administrators, Security administrators, Hyper-V and other server administrators, Network administrators, Self-service portal administrators
Chapter 1 Overview
Introduction .................................................................................... 14
Why automate storage? ....................................................................... 14
Standards-based storage automation .................................................. 15
More sophisticated, yet simpler and faster ........................................... 16
Testing storage automation in a private cloud....................................... 17
Joint private cloud planning and implementation .................................. 17
Introduction
EMC and Microsoft collaborate to deliver a private cloud with new and enhanced
storage automation features. Microsoft® System Center 2012 Virtual Machine
Manager (VMM) introduces the automatic discovery of storage resources and
automated administration of those resources within a private cloud. Multiple EMC
storage systems support these new capabilities.
Half indicated that they do not have in-house expertise to automate storage
tasks.
Half indicated that they have so many different types of arrays that the
development effort and time required to automate storage tasks often blocks
major storage automation initiatives.
The 14 percent of respondents who do automate storage tasks do just enough automation to reduce the chance of human error. More advanced automation is a goal, but often a deferred one, because it requires expertise and time that are in short supply. An industry standard is needed that enables automation of storage tasks, yet simplifies storage automation across multiple types of array.
To take advantage of this new storage capability, EMC updated its SMI-S Provider to
support the System Center 2012 VMM release.
The EMC SMI-S Provider aligns with the SNIA goal to design a single interface that supports unified management of multiple types of storage array. The one-to-many model enabled by the SMI-S standard makes it possible for VMM to interoperate, by using the EMC SMI-S Provider, with multiple disparate storage systems from the same VMM Console that is used to manage all other VMM private cloud components.
Reduce costs
  On-demand storage: Aligns IT costs with business priorities by synchronizing storage allocation with fluctuating user demand. VMM elastic infrastructure supports thin provisioning; that is, VMM supports expanding or contracting the allocation of storage resources on EMC storage systems in response to waxing or waning demand.
  Ease of use: Simplifies consumption of storage capacity by enabling the interaction of EMC storage systems with, and the integration of storage automation capabilities within, the VMM private cloud. This saves time and lowers costs.
Simplify administration
  Private cloud GUI: Allows administration of private cloud assets (including storage) through a single management UI, the VMM Console, available to VMM or cloud administrators.
  Private cloud CLI: Enables automation through VMM's comprehensive set of Windows PowerShell™ cmdlets, including 25 new storage-specific cmdlets.
  Reduce errors: Minimizes errors by providing the VMM UI or CLI to view and request storage.
  Private cloud self-service portal: Provides a web-based interface that permits users to create VMs, as needed, with a storage capacity that is based on predefined classifications.
  Simpler storage requests: Automates storage requests to eliminate delays of days or weeks.
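These storage cmdlets are discoverable directly from the VMM command shell. The following sketch assumes the VMM 2012 module name (virtualmachinemanager) and the SCStorage* noun prefix; verify both on your VMM Server before relying on them.

# Enumerate the storage-specific cmdlets that ship with VMM 2012.
# Module name and noun prefix are assumptions; confirm with Get-Module -ListAvailable.
Import-Module virtualmachinemanager
Get-Command -Module virtualmachinemanager -Noun SCStorage* |
    Sort-Object Noun, Verb |
    Format-Table Verb, Noun -AutoSize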
Chapter 2 Architecture
Figure 1. The Storage Management component within the Fabric in VMM 2012
WBEM: WBEM is a collection of standards for accessing information and managing computer, network, and storage resources in an enterprise environment. WBEM includes:
  A CIM model that represents resources.
  An XML representation of CIM models and messages (xmlCIM) that travels by way of CIM-XML.
  An XML-based protocol, CIM-XML over HTTP, that lets network components communicate.
  A protocol based on SOAP (Simple Object Access Protocol) and Web Services for Management (WS-Management, or WS-Man) that supports communication between network components.
CIM: The CIM standard provides a model for representing heterogeneous compute, network, and storage resources as objects and for representing relationships among those objects. CIM lets VMM administer dissimilar elements in a common way. Both SMI-S and WBEM build on CIM.
  CIM Infrastructure Specification defines the object-oriented architecture of CIM.
  CIM Schema defines a common, extensible language for representing dissimilar objects.
  CIM Classes identify specific types of IT resources (for example, CIM_StorageVolume).
  Note: EMC SMI-S Provider version 4.4.0 (or later) supports DMTF CIM Schema version 2.31.0.
ECIM: The EMC Common Information Model (ECIM) defines a CIM-based model for representing IT objects (for example, EMC_StorageVolume, which is a subclass of CIM_StorageVolume).
ECOM: EMC Common Object Manager (ECOM) implements the DMTF WBEM infrastructure for EMC. The EMC SMI-S Provider utilizes ECOM to provide a single WBEM infrastructure across all EMC hardware and software platforms.
The EMC SMI-S Provider is certified by SNIA as compliant with SMI-S versions 1.3.0, 1.4.0, and 1.5.0. EMC plans to update the EMC SMI-S Provider, as appropriate, to keep current with the SMI-S standard as both the standard itself and VMM's support for the standard evolve.
For information about the SNIA CTP program and EMC participation in that program:
Figure 2. SMI-S Provider is the interface between VMM and storage arrays in a
VMM private cloud
VMM Server
The VMM Management Server, or VMM Server, is the service that cloud and VMM administrators use to manage VMM objects. These include hypervisor physical servers, VMs, storage resources, networks, clouds, and services, which are deployed together as a set of VMs.
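Storage cmdlets run in a VMM command shell session that is connected to the VMM Server. A minimal sketch, assuming a hypothetical server name:

# Connect the VMM command shell to the VMM Management Server before running
# any storage cmdlets. The server name is hypothetical.
$vmmServer = Get-SCVMMServer -ComputerName "vmmserver01.contoso.com"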
The VMM Server uses WS-Man and Windows Management Instrumentation (WMI), the
Microsoft implementation of DMTF’s WBEM and CIM standards, to enable
management applications to share information:
CIM-XML
CIM-XML is the protocol that is used as the communication mechanism between the VMM Server and the SMI-S Provider. The use of the CIM-XML protocol is mandated by the SMI-S standard.
The EMC SMI-S Provider is the SMI-S-compliant management server that enables
VMM to manage storage resources on EMC storage systems in a unified way.
Host Provider (N/A to VMM): The Host Provider is not used for VMM storage operations. Do not install the Host Provider in your test environment.
VMware® VASA Provider (N/A to VMM): VMware's Storage API for Storage Awareness (VASA) Provider is installed automatically during the Array Provider installation. This default is due to the VASA Provider's dependency on the Array Provider. VMM does not use the VASA Provider for VMM storage operations. However, if your environment includes VMware vSphere® as well as VMM, you have the option to use the same EMC SMI-S Provider in both environments.
Array
A storage array is a disk storage system that contains multiple disk drives attached to
a storage area network (SAN) in order to make storage resources available to servers
that have access to the SAN.
In the context of a VMM private cloud, storage arrays, also called storage systems, make storage resources available for use by cloud and VMM administrators and by cloud users.
iSCSI
Fibre Channel (FC)
Fibre Channel over Ethernet (FCoE)
CLARiiON CX4 and VNX arrays: All management traffic between the provider
and array travels over the TCP/IP network.
Symmetrix VMAX arrays: The communication path between the SMI-S Provider
server and the array is in band by means of FC, FCoE, or iSCSI. Communication
to VMAX arrays also requires gatekeeper LUNs. EMC recommends that six
gatekeeper LUNs be created on each VMAX array.
Within an array, the storage elements most important to VMM are:
Storage Pools: A pool of storage is located on an array. You can use VMM to
categorize storage pools based on service level agreement (SLA) factors such
as performance. An example naming convention is to classify pools as “Gold,
Silver, and Bronze.”
Logical Units: A logical unit of storage (a storage volume) is located within a
storage pool. In VMM, a logical unit is typically a virtual disk that contains the
VHD file for a VM. The SMI-S term for a logical unit is storage volume. (A SAN
logical unit is often, if somewhat imprecisely, referred to as a logical unit
number or LUN).
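Once an array is under VMM management, its pools and logical units can be listed from the VMM command shell. A minimal sketch; the StorageArray and StoragePool output properties are assumptions about the object model, so inspect the objects with Get-Member:

# List the storage pools and logical units that VMM has discovered through the
# SMI-S Provider. Property names other than Name are assumptions.
Get-SCStoragePool | Select-Object Name, StorageArray | Format-Table -AutoSize
Get-SCStorageLogicalUnit | Select-Object Name, StoragePool | Format-Table -AutoSize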
The storage administrator uses an Element Manager tool that is provided by the
vendor to access and manage storage arrays and, typically, the administrative
domain. An Element Manager is one of an administrator’s key Storage Resource
Management (SRM) tools. EMC Unisphere® is an example of an Element Manager.
Currently, VMM supports storage automation only for Hyper-V hosts and host
clusters.
In Figure 2, the stand-alone Hyper-V Server is both a VM host and a VMM Library
Server. This is the same configuration that is found in the SCVMM test validation
environment:
VM host: A physical computer managed by VMM and on which you can deploy
one or more VMs. VMM 2012 supports Hyper-V hosts (on which the VMM
agent is installed), VMware ESX hosts, and Citrix® XenServer® hosts. However,
in the current release, VMM supports storage provisioning only for Hyper-V
hosts.
Library Server: A file server managed by VMM that you can use as a repository
to store files used for VMM tasks. These files include virtual hard disks
(VHDs), ISOs, scripts, VM templates (typically used for rapid provisioning),
service templates, application installation packages, and other files.
You can use VHD files stored on the VMM Library Server to provision VMs.
VHD files used to support VM rapid provisioning are contained within
LUNs on storage arrays but are mounted to folders on the VMM Library
Server.
You can install the VMM Library Server on the VMM Server, on a VM host,
or on a stand-alone Hyper-V host. However, to fully implement (and test)
all VMM 2012 storage functionality, the VMM Library Server must be
installed on a stand-alone Hyper-V host that is configured as a VM host.
“Minimum hardware requirements explained” on page 55 has more
details.
Hyper-V hosts or host clusters in a VMM private cloud must be able to access one or
more storage arrays:
iSCSI initiator (on the host) to access iSCSI SAN: If you use an iSCSI SAN,
each Hyper-V host accesses a storage array by using the Microsoft iSCSI
initiator, which is part of the operating system. During storage operations,
such as creating a logical unit and assigning it to the host, the iSCSI initiator
on the host is logged on to the array.
An iSCSI initiator (on the Hyper-V host) is the endpoint that initiates a
SCSI session with an iSCSI target (the storage array). The target (array) is
the endpoint that waits for commands from initiators and returns
requested information.
Note Whether you use an iSCSI HBA, TCP/IP Offload Engines (TOE), or a network interface card (NIC), you are using the Microsoft iSCSI Initiator to manage them and to manage sessions established through them. (A sketch for verifying the initiator service on a host appears after this list.)
HBA Provider (on the host) to access FC SAN: If you use an FC SAN, each
Hyper-V host that accesses a storage array must have a host bus adapter
(HBA) installed. An HBA connects a host system (the computer) to a storage
fabric. Storage devices are also connected to this fabric. Each host and
related storage devices must be zoned correctly so that the host can access
the storage arrays.
NPIV Provider (on the host) for FC SAN: VMM supports N_Port ID Virtualization
(NPIV) on an FC SAN. NPIV uses HBA technology (which creates virtual HBA
ports, also called vPorts, on hosts) to enable a single physical FC port to
function as multiple logical ports, each with its own identity. One purpose of
this is to provide an identity for a VM on the host. In this case, a vPort enables
the host to see the LUN that is used by the VM. VMM 2012 does not support
creation or deletion of vPorts on the host as an individual operation. However, for an existing VM, VMM 2012 can move the vPort that identifies that particular VM from the source host to the destination host when SAN transfer is used to migrate the VM. Moving the vPort refers to deleting the vPort from the source host and creating the vPort on the destination host.
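As noted in the iSCSI initiator item above, the Microsoft iSCSI Initiator Service must be available on each Hyper-V host that uses an iSCSI SAN. A minimal sketch for checking and starting the service locally on a host:

# Verify that the Microsoft iSCSI Initiator Service (MSiSCSI) is running on the
# Hyper-V host and set it to start automatically.
Get-Service -Name MSiSCSI
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI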
VMM storage automation requires discovery of storage objects not only on arrays but
also on each host and host cluster:
VMM agent and software VDS on the host for discovery: Just as the Microsoft Storage Management Service on the VMM Server enables VMM, by using the SMI-S Provider, to discover storage objects on external arrays, VMM can also discover storage-related information on Hyper-V hosts and host clusters.
VMM agent on the host: VMM uses the VMM agent installed on a
physical Hyper-V host computer to ask the iSCSI initiator (on the host
side) for a list of iSCSI targets (on the array side). Similarly, the VMM
agent queries the FC HBA APIs for FC ports.
Microsoft VDS software provider on the host: VMM uses the VDS API
(VDS software provider) on the host to retrieve disk and volume
information on the host, to initialize and partition disks on the host, and
to format and mount volumes on the host.
VDS hardware provider on the VMM Server for arrays that do not support
SMI-S: The VDS hardware provider is used by VMM 2008 R2 SP1 to discover
and communicate with SAN arrays. In VMM 2012, the SMI-S Provider
supersedes the VDS hardware provider because SMI-S provides more
extensive support for storage automation than does the VDS hardware
provider. However, the VDS hardware provider is still available in VMM 2012
and can be used to enable SAN transfers if no SMI-S Provider is available.
However, if an SMI-S Provider is available, do not install the VDS hardware
provider in a VMM 2012 environment.
The SMI-S standard and VMM 2012 make it possible for one instance of VMM to use
a single provider to communicate with one or more arrays of different types. In
addition, a single VMM instance can communicate with multiple providers at the
same time. Some vendors implement more than one provider. Some customers might
choose to use multiple providers from different vendors and might incorporate
storage systems from different vendors in their private cloud. In addition, multiple
VMM instances can communicate, simultaneously, with multiple providers.
After you complete a top-down design, you can then implement that design from the bottom up. Becoming familiar with some of the scenarios of storage automation is useful before building your test environment. Reviewing the issues and limitations summarized in the planning section is also useful.
In VMM 2012, the deep integration of storage provisioning with the VMM Console and
VMM PowerShell substantially reduces the learning curve for administrators. For
example, you do not need a special plug-in to add shared storage capacity to a Hyper-
V cluster, nor do you have to learn complex new skills to perform rapid provisioning of
VMs. These capabilities are built into and delivered by VMM.
VMM 2012 discovers two broad categories of storage, which are remote (on the array)
and local (on the host), as summarized in Table 6.
Array object level 2 discovery (2 of 2)
  Discovered by: Microsoft Storage Management Service. Resides on the VMM Server. Discovers storage objects on remote arrays by using the SMI-S Provider.
  Returns: Level 2 discovery is targeted against storage pools already under VMM management and returns the following array objects:
    Storage logical units (commonly called LUNs) associated with that storage pool
    Storage initiators associated with the imported LUNs
    Storage groups (often called masking sets) associated with the imported LUNs
  Note: Storage groups are discovered by VMM but are not displayed in the VMM Console. You can display storage groups by using the following VMM PowerShell command line:
    Get-SCStorageArray -All | Select-Object Name, ObjectType, StorageGroups | Format-List

Host object agent discovery (1 of 2)
  Discovered by: VMM agent. Resides on a Hyper-V Server (a VM host). Discovers specific storage objects on the local host.
  Returns: VMM agent discovery returns information about the following Hyper-V (VM host) storage objects:
    FC endpoints
    FC ports
    iSCSI endpoints (iSCSI targets)
    iSCSI portals

Host object VDS discovery (2 of 2)
  Discovered by: Virtual Disk Service (VDS software provider). Resides on a Hyper-V Server (a VM host). Discovers specific storage objects on the local host.
  Returns: VDS discovery returns information about the following Hyper-V (VM host) storage objects:
    Disks
    Volumes
The VMM Level 1 discovery retrieves information about all storage objects of specific
types (storage pools, endpoints, and iSCSI portals) on an array with which VMM is
configured to interact through the SMI-S Provider.
Level 2 discovery starts by retrieving information about logical units (only about
logical units for storage pools that have already been brought under VMM
management), and then retrieves storage initiators and storage groups associated
with the imported logical units.
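Discovery can be re-run on demand by rescanning the provider. A hedged sketch; the provider name is hypothetical and the -StorageProvider parameter name is assumed from the VMM 2012 cmdlet set:

# Rescan the SMI-S Provider so that VMM refreshes its view of the arrays, pools,
# and logical units behind it.
$provider = Get-SCStorageProvider -Name "smisprovider01.contoso.com"
Read-SCStorageProvider -StorageProvider $provider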
As part of importing information about logical units, VMM also populates the VMM
database with any discovered associations between storage group objects and
logical unit objects. In VMM, storage groups are defined as objects that bind together
host initiators (on a Hyper-V host or host cluster) with target ports and logical units
(on the target storage array). Thus, if a storage group contains a host initiator, the
logical unit is unmasked to (assigned to) that host (or cluster). If no association
exists, the logical unit is masked (that is, it is not visible to the host or cluster).
By default, when VMM manages the assignment of logical units for a host cluster,
VMM creates storage groups per node (although it is also possible to specify storage
groups per cluster instead of by individual node). A storage group has one or more
host initiators, one or more target ports, and one or more logical units. “Appendix B:
Array Masking and Hyper-V Host Clusters” on page 137 provides more detail on how
VMM handles storage groups in the context of masking and unmasking Hyper-V host
clusters.
LUN-to-host map: With information now stored in the VMM database about
discovered associations between storage groups and logical units, VMM has
an initial logical map of each discovered logical unit that is associated with a
specific host.
Array-to-host map: However, detailed information about a Hyper-V host is
available only if the VMM agent is installed on the host. The VMM agent is
installed on any Hyper-V Server that acts as a VM host and is therefore managed by VMM. A more detailed map between storage objects on a VM
host and on any associated arrays is automatically created. This information
tells you which arrays are visible to a given host.
VM-to-LUN map: After VMM discovers all available storage assets, VMM maps
a VM, which is any VM that consumes storage from the SAN (VHDs or
passthrough disks), to its LUN. VMM then creates a complete VM-to-LUN map.
The administrator can access this VM-to-LUN map in the VMM Console or by
using a VMM PowerShell script. A sample script is provided in the “List all the
VMs hosted on a specific SAN array” blog at
http://blogs.technet.com/b/hectorl/archive/2011/07/26/list-all-the-vms-
hosted-on-a-specific-san-array.aspx.
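The referenced blog script builds this map with VMM PowerShell. A rough starting point is sketched below; the StoragePools and StorageLogicalUnits property names are assumptions to verify with Get-Member, and the array name is hypothetical:

# List the LUNs VMM has discovered on one array, grouped by storage pool.
$array = Get-SCStorageArray -Name "VNX5700-01"      # hypothetical array name
foreach ($pool in $array.StoragePools) {
    foreach ($lun in $pool.StorageLogicalUnits) {
        "{0}`t{1}`t{2}" -f $array.Name, $pool.Name, $lun.Name
    }
}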
Note Although VMM supports VMware ESX hosts and Citrix XenServer hosts in
addition to Hyper-V hosts, in the current release, the storage provisioning
functionality of VMM applies only to Hyper-V hosts.
Each VMM cloud must have one or more VMM host groups. Before you can provision
new logical units or assign storage to a host or cluster, you must first assign storage
to a host group. You can allocate both logical units and storage pools to a VMM host
group.
Storage pools and logical units are allocated to host groups differently:
After storage resources available to your private cloud are discovered and allocated
to a host group, you can start to make use of those storage resources. For example,
you can set up your private cloud so that users with different types of requirements,
such as a software development team, members of a marketing department, and
inventory-control staff, know what storage resources are allocated to them. From
storage allocated to their business unit, they can assign what they need to a Hyper-V
host or host cluster and can focus quickly on their job-related tasks because VMM
automates the provisioning process.
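A hedged sketch of the allocation step described earlier, allocating a storage pool and a logical unit to a host group. The -AddVMHostGroup and -VMHostGroup parameter names are assumptions based on the VMM 2012 storage cmdlets, and the object names are hypothetical; verify with Get-Help before use.

# Allocate a discovered storage pool and a logical unit to a VMM host group.
$hostGroup = Get-SCVMHostGroup -Name "LDMHostGroup1"
$pool      = Get-SCStoragePool -Name "Pool 0"
Set-SCStoragePool -StoragePool $pool -AddVMHostGroup $hostGroup

$lun = Get-SCStorageLogicalUnit -Name "GoldLUN01"
Set-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHostGroup $hostGroup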
In addition, you can assign storage to a new cluster by using the new cluster wizard.
VMM supports the creation of a new cluster from available Hyper-V hosts. In the new
cluster wizard you can select which logical units to assign to the cluster. As part of
creating the new cluster, the logical units are unmasked to all of the nodes and
prepared as cluster shared volumes (CSVs).
Assign a newly created logical unit (or an existing one) to a Hyper-V VM host: You can use VMM to assign a newly created logical unit, or an existing one, to a Hyper-V VM host or to an existing host cluster by unmasking (assigning) the logical unit to that host or cluster.
Provisioning sequence: Host disk and volume operations
Task: Prepare disks and volumes.
How? After storage is assigned to a host or cluster, VMM lets you perform the following tasks on the host or cluster:
  Disk (LUN) on a stand-alone host:
    Format the volume as an NTFS volume (optional):
      Specify partition type: GPT or MBR (GUID Partition Table or Master Boot Record)
      Specify a volume label
      Specify allocation unit size
      Choose Quick format (optional)
    Specify the mount point:
      Specify a drive letter, a path to an empty NTFS folder, or none
  Cluster disk (LUN):
    Format the volume as an NTFS volume (required):
      Specify partition type: GPT or MBR
      Specify a volume label
      Specify allocation unit size
      Choose Quick format (optional)
    Note: No mount point fields exist for a cluster disk.
In VMM 2012, the earlier rapid provisioning capability is greatly extended by the
introduction of SMI-S support. This support enables automated SAN-based rapid
provisioning of new VMs on a large scale. With VMM 2012, the entire process is
intrinsic to VMM and you can use either the VMM Console or VMM PowerShell to
rapidly provision new VMs.
Copying a VHD on a LUN from one location to another on a SAN (SAN transfer) when
two VM hosts are connected to the same SAN is far faster than copying a VHD from
one computer to another over a local area network (LAN transfer).
With VMM 2012, you can create and customize easy-to-use SAN-copy-capable
templates (SCC templates) to perform automated large-scale rapid provisioning of
VMs either to stand-alone Hyper-V VM hosts or to Hyper-V host clusters. Once
created, these templates are stored in the VMM library and are therefore reusable.
Table 8. VMM 2012 automates the entire workflow for VM rapid provisioning
Identify an SCC VHD in library: Identify a SAN-copy-capable VHD (SCC VHD) in the VMM Library that resides on a SAN array. The array must support copying a logical unit by cloning it or by creating a writeable snapshot of it (or both).
Create an SCC template: Create an SCC template that uses the SCC VHD as the source for repeatedly creating new VMs with identical hardware and software characteristics (as specified in this particular template). This is a SAN-copy-capable template (SCC template). Like the SCC VHD, it is stored in the VMM library and is available for reuse.
See also:
Existing VMs that use a dedicated logical unit can be migrated by using SAN transfer
(also called SAN migration). A Hyper-V-based VM can have either a virtual hard disk
(VHD) file attached or a passthrough SCSI disk. In either case, SAN transfer moves the
LUN regardless of whether the manifestation of the LUN on the Hyper-V side is a VHD
or a passthrough SCSI disk.
In the case of a VM with a VHD attached (the LUN contains the VHD), using SAN
transfer to migrate the VM from a source host to a destination host simply transfers
the path to the LUN from one Hyper-V server to another. Assuming that both the
source and destination Hyper-V VM hosts can access the storage array, the only
change required is to the path.
The mechanism for moving the LUN path is unmasking and masking. The path to the
storage volume (to the LUN) is masked (hidden) from the source host and unmasked
(exposed) to the destination host. The storage volume is mounted on the destination
host so that the VHD can be accessed.
A SAN transfer is much faster than copying a VHD file over a local area network (LAN)
to move a VM from a source to a destination host. The LUN is not moved. The only
change made is that the path to the LUN changes.
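A hedged sketch of a SAN migration started from the VMM command shell. Move-SCVirtualMachine is the VMM 2012 migration cmdlet; the host names and path are hypothetical, and VMM selects SAN transfer only when both hosts can reach the LUN, otherwise it falls back to a LAN copy.

# Migrate a VM between two Hyper-V hosts that are zoned to the same array.
$vm       = Get-SCVirtualMachine -Name "VM01"
$destHost = Get-SCVMHost -ComputerName "hyperv02.contoso.com"
Move-SCVirtualMachine -VM $vm -VMHost $destHost -Path "D:\VMs"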
iSCSI migration
VMM can use either of the following methods (based on what the
underlying array supports):
Unmask and mask
iSCSI initiator logon/logoff
FC migration
Prerequisite: Zoning must be set up appropriately.
VMM can use either of the following methods:
Unmask and mask
NPIV vPort creation/deletion
Chapter 3 Plan a Private Cloud
Before you set up your environment for testing, consider the following:
Developing your approach now for coordinating private cloud and storage
requirements, developing jointly agreed-on security measures, and gaining familiarity
with FAQs and known issues will enable you to set up your preproduction test
environment in an optimal way. Planning and coordination will also help ensure a
more efficient deployment into your production environment later.
Coordination between cloud and storage administrators, starting with design and
planning, is critical to the successful deployment of one or more private clouds that
can take advantage of all available storage automation capabilities.
However, the necessity for coordinated planning of all aspects of a private cloud goes
beyond cloud and storage administrators. Administrators who need to identify and
coordinate storage-related needs for a VMM-based private cloud include:
Storage administrators
VMM administrators
Cloud administrators
Self-service portal administrators
Hyper-V and other server administrators
Network administrators
Security administrators
Note In an enterprise-scale heterogeneous environment, this document assumes that the role of VMM cloud administrator, or cloud administrator, refers to a person who focuses on and is responsible for cloud services provided to users. Although a VMM cloud administrator must have VMM administrator or VMM delegated administrator permissions to view and manage cloud storage systems, the VMM cloud administrator role is different from the VMM administrator role. The VMM administrator role focuses on managing and monitoring the VMM infrastructure that supports the cloud and ensures that cloud services remain accessible at all times.
Design your cloud infrastructure based on the number and types of private clouds
that VMM will host. Each cloud will have capacity, performance, scale, and elasticity
requirements defined as SLAs.
Given the sophisticated storage automation capabilities introduced with the VMM
2012 private cloud, storage and non-storage administrators need to develop
systematic ways to communicate with each other any requirements, preferences, and
limitations.
Table 9 lists some areas where the storage administrator will likely need to take a
leadership role when working with other IT administrators.
Balance competing requests
  Respond to multiple, often simultaneous, competing storage and SAN requests from VMM, cloud, self-service portal, server, and network administrators.
  Will existing methods for balancing competing requests be modified to handle increased demand from private cloud administrators and users? For example:
    Can you expect to install additional SMI-S Providers in order to provide load balancing by reducing the number of arrays managed by each provider?
    Do you need to install additional SMI-S Providers in order to eliminate a service or workload interdependency?
Allocate for a private cloud
  Allocate storage in a systematic way that is appropriate for the new private cloud environment.
  Ask whether rapid provisioning will alter storage administration, and ask:
    How much? Will the quantity of storage allocated in a very short time in order to rapidly provision VMs change how storage resources are tracked and allocated?
    How fast? Will the speed at which storage is made available need to be expedited to keep up with rapid provisioning of large numbers of VMs?
  Define and create the appropriate storage classifications (a sketch follows this list). The following should be considered for each storage classification:
    Disk drive types
    Tiered storage
    Caching
    Thick and thin pools
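A hedged sketch of creating classifications and tagging a pool, matching the Gold, Silver, and Bronze convention mentioned in Chapter 2. The New-SCStorageClassification cmdlet and the -StorageClassification parameter are assumptions to verify against the VMM 2012 cmdlet reference.

# Create SLA-based storage classifications and assign one to a discovered pool.
$gold   = New-SCStorageClassification -Name "Gold" -Description "Highest-performance tier"
$silver = New-SCStorageClassification -Name "Silver" -Description "Mid-performance tier"
$bronze = New-SCStorageClassification -Name "Bronze" -Description "Capacity tier"

$pool = Get-SCStoragePool -Name "Pool 0"            # hypothetical pool name
Set-SCStoragePool -StoragePool $pool -StorageClassification $gold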
Table 10 lists some areas where IT administrators (other than storage administrators)
likely need to take a proactive role in communicating their needs to storage
administrators.
Table 10. IT administrators: private cloud storage requests and processes for global issues
Understand storage domain
  Gain familiarity with the impact of storage requests on the storage domain.
  How will storage administrators classify and allocate storage for IT areas?
Communicate storage needs
  Communicate to the storage administrator, in a predictable way, the specific storage needs for each IT area, and the specific storage needs of your users.
Identify available storage
  Ascertain how much storage the storage administrator can make available to each IT area and to each set of users within that area.
  Ascertain how the storage administrator plans to handle storage allocation for sets of users whose needs fluctuate significantly based on factors such as shopping season, accounting quarters, project development cycles, and so on.
Identify location of available storage
  Ascertain which specific storage pools the storage administrator can make available to each IT area and to each set of users in that area.
Cloud administrators
  Storage to support existing VMs and services (if any), and expansion of VMs and services
  Capacity-planning requirements that meet expected cloud workflow demands, including populating the Reserved LUN Pool with sufficient capacity to support rapid provisioning with snapshots
  Recovery of storage from deleted VMs and services
  Storage for new VMs and services
  Classification of storage based on established SLAs
  Required storage system features
Self-service administrators
  What are the backup and recovery requirements for self-service VMs and for their storage?
Storage administrators need the following from the other IT administrators:
Server administrators (non-Hyper-V)
  Do I need to install storage management software on one or more servers?
  What dependent software is required?
VMM role-based access control to grant rights to VMM host groups and
clouds
Run As Accounts and Basic Authentication
Storage system global administrator account
SMI-S Provider object security
The following section addresses these issues.
VMM role-based access control to grant rights to VMM host groups and clouds
VMM supports role-based access control (RBAC) security for defining the scope within
which a specific VMM user role can perform tasks. In VMM, this refers to having rights
to perform all administrative tasks on all objects within the scope allowed for that
user role. The scope for the VMM administrator role extends to all objects that VMM
manages. The scope for any particular delegated administrator role is limited to
objects within the assigned scope, which can include one, a few, or all host groups,
clouds, and library servers.
The current RBAC model allows members of the VMM administrator and delegated administrator roles to add and remove SMI-S Providers from VMM. Members of the VMM administrator and delegated administrator roles can also allocate storage, but which storage they can allocate is limited to those VMM host groups or clouds that they have the right to access.
The following screenshots illustrate how VMM defines the scope for a delegated
administrator role that is limited to host groups, but does not include clouds, and can
administer storage resources allocated to the defined host group:
Figure 6. The scope for this delegated administrator role includes only one host
group
Properties for LDMHostGroup1 host group: The Storage allocated to this host
group page shows total storage capacity in GB and represents allocated
storage in terms of logical units and storage pools.
However, the VMM Run As Accounts do not grant or deny rights to administer storage associated with a specific SMI-S Provider. Instead, you use the EMC SMI-S Provider Run As account (which is not associated with any VMM user role) to access storage. This account is an ECOM account and must be created separately (outside of VMM) by using the ECOM Configuration Tool. This account allows a connection from the VMM Server to the provider by using Basic Authentication.
The storage administrator should work with the VMM administrator and Security administrator to determine the security model to use for storage in a VMM-based private cloud. This includes determining how many Run As Accounts are needed based on the number of VMM and SMI-S Provider management servers.
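A hedged sketch of wiring these pieces together: creating a VMM Run As account that wraps the ECOM credentials, and then registering the EMC SMI-S Provider with it. The cmdlet parameter set, the provider host name, and the port (5988 is the default non-SSL ECOM CIM-XML port) are assumptions to confirm against your environment.

# Create a Run As account from the ECOM credentials (created earlier with the
# ECOM Configuration Tool), then register the SMI-S Provider with VMM.
$ecomCred = Get-Credential                      # the ECOM account credentials
$runAs    = New-SCRunAsAccount -Name "ECOM-SMIS-RunAs" -Credential $ecomCred

Add-SCStorageProvider -Name "smisprovider01" `
    -ComputerName "smisprovider01.contoso.com" `
    -TCPPort 5988 `
    -RunAsAccount $runAs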
See also:
Question: Can I install the SMI-S Provider and VMM on the same computer?
Answer: No, Microsoft and EMC recommend that you do not install the SMI-S Provider
on the VMM Server in either a preproduction or a production environment. This
configuration is untested and therefore unsupported. Install the SMI-S Provider on a
dedicated server with sufficient resources to support your performance requirements.
“Set up EMC SMI-S Provider for storage validation testing” on page 66 provides
details.
Note EMC SMI-S Provider can be installed on other Windows and Linux
platforms, as listed in the latest version of the EMC SMI-S Provider
Release Notes. However, the validation tests in this document
were performed with an EMC SMI-S Provider installed on a
Windows Server 2008 R2 SP1 64-bit computer.
Question: Why would you install fewer than five arrays per SMI-S Provider?
Answer: If you have an array with a large number of storage groups, or a large number
of storage volumes within its storage groups, reduce the number of storage systems
per SMI-S Provider to ensure acceptable performance. Storage groups are often called
masking views or SCSI Protocol Controllers (SPCs).
Question: Do I need to install the EMC VDS Hardware Provider on Hyper-V hosts?
Answer: No. VMM uses the Microsoft VDS Software Provider on a Hyper-V host to
retrieve and configure disk and volume information on the host. Installation of the
EMC VDS Hardware Provider is not needed on the Hyper-V host.
Note Install the VDS hardware provider on the VMM Server only in the
case where you use arrays (such as VNXe arrays) in your private
cloud environment that are not supported by the EMC SMI-S
Provider.
Question: If I install the EMC VDS Hardware Provider on my VMM Server, will I be able
to do rapid provisioning as it is available in VMM 2012?
Answer: No, you cannot do automated rapid provisioning at scale unless you use
VMM 2012 in conjunction with the EMC SMI-S Provider. Installing the EMC VDS
Hardware Provider on the VMM Server provides only the more limited rapid
provisioning capability that was possible with SCVMM 2008 R2 SP1.
Question: Is there a limit on how many VMs you can rapidly provision at the same
time?
Answer: Rapid VM provisioning should be batched to contain no more than eight VMs
to avoid the possibility of VMM and/or provider timeouts. Results will vary depending
on the configuration.
Question: What do I need to know about array management ports and their IP
addresses?
Answer: A CLARiiON or VNX array has two management port IP addresses that the
SMI-S Provider uses to manage the array. To configure a CLARiiON or VNX array with
the provider, you must specify both management port IP addresses and must open
port 443. The IP addresses of both management ports must be accessible so that the
provider can fully discover and manage the array.
Secure Sockets Layer (SSL) port 443 is the port used for the communication. If a
firewall exists between the SMI-S Provider installation and a CLARiiON or VNX array,
open SSL port 443 in the firewall (inbound and outbound) for management
communications to occur with the array.
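If a firewall sits between the provider and the array, a rule such as the following, run from an elevated prompt on the SMI-S Provider server, opens the outbound path. This sketch assumes the built-in Windows Firewall; the rule name is arbitrary, and a matching inbound rule may also be needed.

# Allow outbound HTTPS (SSL port 443) from the SMI-S Provider server to the
# CLARiiON or VNX management ports.
netsh advfirewall firewall add rule name="SMI-S to array mgmt (TCP 443)" dir=out action=allow protocol=TCP remoteport=443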
Answer: Some array features needed by VMM might be disabled by default and
require licenses to be added. For more information about licensing, review “Array
system requirements” on page 61.
Question: What is required for a Symmetrix VMAX array to communicate with the
SMI-S Provider?
Answer: Symmetrix VMAX arrays must have an inband (iSCSI, FC, or FCoE)
communication path between the SMI-S Provider server and each array. EMC
recommends that six gatekeeper LUNs be created on each array. To enable the provider to manage the array, the array must be zoned and unmasked to the SMI-S Provider server.
Important The latest version of the EMC SMI-S Provider Release Notes has the
most up-to-date information for:
Symmetrix VMAX series
  Issue: VMM can discover and modify cascading storage groups, but cannot create them.
  Details: You cannot use the VMM Console or VMM PowerShell commands to create cascaded storage groups. This is a product limitation, so no solution is available. However, if cascading storage groups are created and configured outside of VMM on VMAX arrays, VMM can discover these as externally created cascading storage groups. VMM can perform masking operations on cascaded storage groups that VMM has discovered. VMM can also modify an existing cascaded storage group by assigning storage from that cascaded storage group to a Hyper-V VM host or host cluster that is a member of the VMM host group.
CLARiiON CX4 series
  Issue: Cannot specify a new size when expanding snapshot pool capacity.
  Details: The workaround is to supply additional LUNs to increase the size of the reserved snapshot pool. Use EMC Unisphere to perform this operation.
CLARiiON CX4 series
  Issue: Managing pools with MetaLUNs is not supported.
  Details: This is a product limitation, so no solution is available.
EMC SMI-S Array Provider
  Issue: HTTPS connection fails.
  Details: The default configuration of ECOM can conflict with the Windows HTTPS implementation. To change the value:
    1. Open the following file: C:\Program Files\EMC\ECIM\ECOM\conf\security_settings.xml
    2. Change the following setting from the current value:
       <ECOMSetting Name="SSLClientAuthentication" Type="string" Value="Optional"/>
       To:
       <ECOMSetting Name="SSLClientAuthentication" Type="string" Value="None"/>
    3. Restart the ECOM service.
EMC SMI-S Array Provider
  Issue: Timeouts appear in the VMM event log when performing multiple provisioning steps at the same time.
  Details: When you perform multiple provisioning steps at the same time, VMM can create more connections to the SMI-S Provider server than the default configuration supports. "Install and configure the EMC SMI-S Provider" on page 68 provides connection limit settings.
Chapter 4 Build a Preproduction Test Environment
A quick preview of the test environment is followed by array, provider, and VMM
requirements that customers must consider when planning how to build and
deploy a private cloud.
Note This document does not include steps to configure the virtual network
required for the test infrastructure.
Notes
Both EMC and Microsoft used the VMM Storage Automation Validation
Script to test configurations with hardware similar to the preceding
minimum requirements. “Validate Storage Automation in your Test
Environment” on page 107 has more details. EMC also performed
validation testing with another configuration that used an eight-node Hyper-V host cluster.
The VMM server and SMI-S Provider server can be installed on VMs; however, Microsoft recommends installing all servers in the preceding list on physical servers for the reasons listed in the following section.
Register-SCStorageLogicalUnit [-StorageLogicalUnit] <StorageLogicalUnit[]> -JobGroup <Guid> -VMHost <Host> [-JobVariable <String>] [-PROTipID <Guid>] [-RunAsynchronously] [<CommonParameters>]
Notes
SMI-S Provider servers: one is the minimum, but the number can vary depending on:
Number of arrays
Size of arrays (that is, the number of pools and LUNs on each array)
Array product family and model
Array OS version
Connectivity
At scale, Microsoft recommends testing with physical rather than virtual
servers to maximize throughput, scalability, and reliability of the entire
system. Running the infrastructure servers, which include the VMM, SQL,
and SMI-S Provider servers, on VMs limits the throughput. The main cause
of limited throughput is the CPU sharing model when running
multithreaded applications in a VM. The storage validation tests kick off
multiple parallel operations. VMM uses multiple threads to handle those
parallel operations:
Physical VMM Server is recommended and requires a minimum of four
processor cores.
Virtual VMM Server is not recommended for scalability, but requires a
minimum of four logical processors.
Table 14 lists the communication protocol requirements for the test infrastructure
that is depicted in Figure 8.
FC or iSCSI: FC indicates that the two endpoints send SCSI commands over an FC network. iSCSI indicates that the two endpoints send SCSI commands over an IP network.
  Note: If a Hyper-V host has FC and iSCSI connections to an array, VMM uses FC by default.
FC or iSCSI / TCP/IP: Indicates that the EMC SMI-S Provider needs one of the following for communications between the provider and array through the EMC Solutions Enabler:
  TCP/IP for CLARiiON CX4 or VNX arrays
  FC or iSCSI for Symmetrix VMAX arrays
  Note: Communication to VMAX arrays also requires gatekeeper LUNs. EMC recommends that six gatekeeper LUNs be created on each Symmetrix array.
“Test case list for EMC storage arrays” on page 119 has detailed results of a VMM
Storage Automation Validation Script test.
Note If a Hyper-V host has both FC and iSCSI connectivity to the same array, VMM
uses FC by default.
The EMC storage systems in Table 15 were tested with VMM. EMC SMI-S version
4.3.2 introduced support for VMM. EMC recommends that you use SMI-S version
4.4.0 or later with a maximum of five arrays per SMI-S Provider. EMC Online
Support at https://support.emc.com/ has more details about EMC storage
systems.
Symmetrix VMAX 20K (SE) series Enginuity release level 5875 (or later) iSCSI, FC
Symmetrix VMAX 40K series Enginuity release level 5876 (or later) iSCSI, FC
Table 16. Tests run, by array type, that validate EMC support for VMM 2012 storage capabilities

                                                          EMC arrays tested
Private cloud scenario      Storage primitives            VMAX series   CX4 series   VNX series (1)
End-to-end discovery        Discover arrays               X             X            X
scenario (2)                Discover storage pools        X             X            X
                            Discover LUNs                 X             X            X
1. Supported arrays include the VNX storage family, with the exception of
VNXe, which is not supported.
2. Discovery primitives are only in reference to VMM discovery of storage
resources on arrays and not storage objects on hosts.
3. Table 17 has the number of snapshots and clones supported by each
array type. VMAX 10K (VMAXe) series arrays support only clones by
design.
4. “Appendix B: Array Masking and Hyper-V Host Clusters” on page 137 has
more details on array masking.
5. The number of arrays per provider at scale is one array to one provider.
Deployment to host or cluster is limited by array capabilities.
EMC performed comprehensive testing with the “VMM Storage Automation
Validation Script” provided by Microsoft on each of the array types listed in Table 16.
“Test case list for EMC storage arrays” on page 119 has detailed results of VMM
Storage Automation Validation Script testing.
“Test case list for EMC storage arrays” on page 119 has result details of snapshot
and clone testing for each EMC array series.
Table 17. Maximum number of clones and snapshots per source LUN
EMC array Maximum snapshots Maximum clones
VMAX 40K and 20K (VMAX SE) series 128 15
Note To see the maximum number of clones or snapshots per source LUN in your
environment, open a VMM PowerShell command shell and type the
following command line:
Get-SCStorageArray -All | Select-Object Name, ObjectType,
Manufacturer, Model, LogicalUnitCopyMethod,
IsCloneCapable, IsSnapshotCapable,
MaximumReplicasPerSourceClone,
MaximumReplicasPerSourceSnapshot | Format-List
The following sections list software that you need for Symmetrix VMAX, VNX, and
CLARiiON CX4 series of storage systems.
VMAX requirements
Table 18 lists software and license requirements and Table 19 lists configuration
and license requirements for Symmetrix VMAX arrays that support VMM storage
automation.
See also:
The latest version of the EMC SMI-S Provider Release Notes on EMC Online
Support at https://support.emc.com/
Hardware and platforms documentation also on EMC Online Support
EMC Symmetrix VMAX data sheets:
http://www.emc.com/collateral/hardware/data-sheet/h8816-symmetrix-vmax-10k-ds.pdf
http://www.emc.com/collateral/hardware/data-sheet/h6193-symmetrix-vmax-20k-ds.pdf
http://www.emc.com/collateral/hardware/data-sheet/h9716-symmetrix-vmax-40k-ds.pdf
http://www.emc.com/collateral/hardware/product-description/h6544-vmax-w-enginuity-pdg.pdf
Table 18. Symmetrix VMAX software and license requirements

Firmware
  Enginuity
    Description: An operating environment (OE) designed by EMC for data storage;
    used to control components in Symmetrix VMAX arrays. (Installed with the array)
    Version: VMAX 40K series: Enginuity 5876 (or later); VMAX 20K (VMAX SE):
    Enginuity 5875 (or later); VMAX 10K (VMAXe): Enginuity 5875 (or later)
    Additional license required? No

Management software
  EMC Solutions Enabler
    Description: Provides the interface between the EMC SMI-S Provider and
    Symmetrix VMAX, VNX, and CLARiiON arrays
    Version: 7.4.0 (or later) (installed with EMC SMI-S Provider kit)
    Additional license required? No
  Gatekeeper devices
    Description: Gatekeepers enable the EMC SMI-S Provider to manage the
    Symmetrix VMAX array. EMC recommends that six gatekeeper LUNs be created
    on the array and masked to the EMC SMI-S Provider server.
    Additional license required? No
VNX requirements
Table 20 lists software and license requirements and Table 21 lists configuration
and license requirements for VNX arrays that support VMM storage automation.
See also:
Latest version of the EMC SMI-S Provider Release Notes on EMC Online
Support at https://support.emc.com/
Hardware and platforms documentation also on EMC Online Support
EMC VNX Series Total Efficiency Pack
VNX data sheets:
http://www.emc.com/collateral/software/data-sheet/h8509-vnx-software-suites-ds.pdf
http://www.emc.com/collateral/hardware/data-sheets/h8520-vnx-family-ds.pdf
http://www.emc.com/collateral/software/specification-sheet/h8514-vnx-series-ss.pdf
Management software
  EMC Solutions Enabler
    Description: Provides the interface between the EMC SMI-S Provider and
    Symmetrix, CLARiiON, and VNX arrays
    Version: 7.4.0 (or later) (installed with EMC SMI-S Provider kit)
    Additional license required? No

1 For advanced features, you can buy add-ons, such as the Total Efficiency Pack. The
FAST Suite feature, for example, is purchased as part of a pack.
CLARiiON CX4 requirements
Table 22 lists software and license requirements and Table 23 lists configuration
and license requirements for CLARiiON CX4 arrays that support VMM storage
automation.
See also:
Latest version of the EMC SMI-S Provider Release Notes on EMC Online
Support at https://support.emc.com/
Hardware and platforms documentation on EMC Online Support
CLARiiON data sheets:
http://www.emc.com/collateral/hardware/data-sheet/h5527-emc-clariion-cx4-ds.pdf
http://www.emc.com/collateral/software/data-sheet/h2306-clariion-rep-snap-ds.pdf
http://www.emc.com/collateral/hardware/data-sheet/h5521-clariion-cx4-virtual-ds.pdf
Management software
  EMC Solutions Enabler
    Description: Provides the interface between the EMC SMI-S Provider and
    Symmetrix, CLARiiON, and VNX arrays
    Version: 7.4.0 (or later) (installed with EMC SMI-S Provider kit)
    Additional license required? No
EMC SMI-S Provider is hosted by the EMC CIMOM Server, ECOM, to provide an SMI-
S compliant interface for EMC Symmetrix VMAX, VNX, and CLARiiON CX4 series of
storage systems.
This section shows you how to set up the SMI-S Provider so that you can test VMM
storage capabilities with one or more EMC storage systems.
Important The latest EMC SMI-S Provider Release Notes on EMC Online Support
at https://support.emc.com/ has the most up-to-date information
for:
Installation
Post-installation tasks
Table 24. Software requirements for the SMI-S Provider server in your test environment

Server operating system
  Windows Server 2008 R2 SP1 64-bit
  Notes:
    EMC recommends hosting the 64-bit version of the EMC SMI-S Provider on a
    multi-core server with a minimum of 8 GB of physical memory.
    The EMC SMI-S Provider can be installed on any Windows or Linux platform listed in
    the latest version of the EMC SMI-S Provider Release Notes. EMC performed the
    tests in this document with the EMC SMI-S Provider installed on a Windows
    Server 2008 R2 SP1 64-bit computer.

Visual C++ 2008 SP1 (installed with EMC SMI-S Provider)
  The Visual C++ 2008 SP1 Redistributable Package with KB973923 applied is required
  for Windows environments (this is a Microsoft Visual Studio runtime requirement).
Important The latest version of the EMC SMI-S Provider Release Notes is on EMC
Online Support at https://support.emc.com/.
To install and configure the EMC SMI-S Provider for a VMM-based private cloud:
10. Change the ECOM External Connection Limit and HTTPS settings:
a. On the SMI-S Provider server, if necessary, stop Ecom.exe by typing
services.msc to open Services, click ECOM, and then click Stop.
b. Open Windows Explorer, navigate to and open the following XML file:
C:\Program Files\EMC\ECIM\ECOM\Conf\Security_Settings.xml
c. To increase the ECOM external connection limit and HTTP options,
change the following settings:
Change the default value for ExternalConnectionLimit from 100 to
600:
<ECOMSetting Name="ExternalConnectionLimit" Type="uint32"
Value="600"/>
3. On the SMI-S Provider, open a command prompt and type the following
command:
%ProgramFiles%\EMC\ECIM\ECOM\bin\TestSmiProvider.exe
4. Enter the requested information for the storage system and its
management ports. To accept the default values that are displayed just
left of the colon, press Enter for each line:
Connection Type (ssl,no_ssl) [no_ssl]:
Host [localhost]:
Port [5988]:
Username [admin]:
Password [#1Password]:
Log output to console [y|n (default y)]:
Log output to file [y|n (default y)]:
Logfile path [Testsmiprovider.log]:
6. To confirm that the Symmetrix arrays are configured correctly, check that
they are listed as the output of the dv command.
1. On the SMI-S Provider server, open a command prompt and type the
following command:
%ProgramFiles%\EMC\ECIM\ECOM\bin\TestSmiProvider.exe
4. Enter the requested information for the storage system and its
management ports. To accept the default values that are displayed just
left of the colon, press Enter for each line:
Connection Type (ssl,no_ssl) [no_ssl]:
Host [localhost]:
Port [5988]:
Username [admin]:
Password [#1Password]:
Log output to console [y|n (default y)]:
Log output to file [y|n (default y)]:
Logfile path [Testsmiprovider.log]:
6. At the prompt, provide the following key data for your storage system:
<YourIPAddress1> is the SP A management port, which is required
for CLARiiON or VNX arrays.
<YourIPAddress2> is the SP B management port; both addresses
are required to successfully connect to the array.
<YourGlobalAdminAccountName> is the user name required to
connect to the storage system.
<YourGlobalAdminAccountPwd> is the password required to connect
to the storage system.
And then type the following commands, but replace the <key data> with your
specific storage system data:
(localhost:5988) ? addsys
Add System {y|n} [n]: y
ArrayType (1=Clar, 2=Symm) [1]: 1
One or more IP address or Hostname or Array ID
Elements for Addresses
IP address or hostname or array id 0 (blank to quit):
<YourIPAddress1>
IP address or hostname or array id 1 (blank to quit):
<YourIPAddress2>
IP address or hostname or array id 2 (blank to quit):
Address types corresponding to addresses specified above.
(1=URL, 2=IP/Nodename, 3=Array ID)
Address Type (0) [default=2]: 2
Address Type (1) [default=2]: 2
User [null]: <YourGlobalAdminAccountName>
Password [null]: <YourGlobalAdminAccountPwd>
For public EMC and Microsoft web pages about EMC support for VMM 2012, see:
The EMC Community Network page (the parent page for “Everything
Microsoft at EMC”) at https://community.emc.com/index.jspa
Virtual Machine Manager page (online VMM product team page) at
http://technet.microsoft.com/en-us/library/gg610610.aspx
“Appendix G: References” on page 175
This document describes a simple installation of VMM that is sufficient for storage
validation testing.
See also:
VMM prerequisites
This section lists hardware and software requirements for installing the VMM
Server in the storage validation test environment.
Install VMM on a server running Windows Server 2008 R2 SP1 with at least four
processor cores. For large-scale testing, Microsoft recommends installing VMM on
a physical server.
See also:
The following table lists the software requirements for a test deployment, as
described in this guide. “System Requirements: VMM Management Server” at
http://technet.microsoft.com/en-us/library/gg610562.aspx has a comprehensive
list of system requirements for installing VMM 2012 in a production environment.
Table 25. Software requirements for installing VMM 2012 in a test environment

Active Directory
  One Active Directory domain.
  You must join the VMM Server, SQL Server (if it is on a separate server from the
  VMM Server), and Hyper-V Servers (VM Host or Library Server, and cluster nodes)
  to the domain. Optionally, you can join the EMC SMI-S Provider to the domain.
  Note VMM supports Active Directory with a domain functional level of Windows
  Server 2003 (or later) that includes at least one Windows Server 2003 (or later)
  domain controller.

WAIK
  Windows Automated Installation Kit (Windows AIK, or WAIK) for Windows 7.
  You can download WAIK from the Microsoft Download Center at
  http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=5753
Install VMM
“Appendix A: Install VMM” on page 131 provides instructions for installing a VMM
Server in your preproduction environment. The installation instructions in this
appendix do not relate to storage automation, but are simply the major installation
steps for a deployment.
Hyper-V Server: You must have a physical server running Windows Server
2008 R2 SP1 with the Hyper-V Server role installed. Join this server to the
same Active Directory domain to which the VMM Server belongs.
If the Windows Server computer that you want to add as a VM host does
not already have the Hyper-V Server role installed, make sure that the BIOS
on the computer is configured to support Hyper-V. If the BIOS is enabled to
support Hyper-V but the Hyper-V role is not already installed on the server,
VMM automatically adds and enables the Hyper-V role when you add the
server.
See also:
“Minimum hardware requirements for a test environment” on page 54
“Hyper-V Installation Prerequisites” at
http://technet.microsoft.com/library/cc731898.aspx and “System
Requirements: Hyper-V Hosts” at http://technet.microsoft.com/en-
us/library/gg610649
Note A test environment may not need to meet all requirements that
are recommended for a production environment.
Run As Account: You must have, or create, a Run As Account with the
following characteristics:
You must use an Active Directory domain account, and that account
must be added to the local administrators group on the Hyper-V host
that you want to add as a VM host to VMM.
If you configured your VMM Server to use a domain account when you
installed the VMM Server, then do not use the same domain account
to add and remove VM hosts.
Group Policy and WinRM: If you use Group Policy to configure Windows
Remote Management (WinRM) settings, then before you add a Hyper-V host
to VMM management, review the “Prerequisites” section in the “How to
Add Trusted Hyper-V Hosts and Host Clusters” online help topic at
http://technet.microsoft.com/en-us/library/gg610648.
1. In the lower left pane of the VMM Console, click Fabric and on the ribbon,
click the Home tab. Then click Add Resources and select Hyper-V Hosts
and Clusters.
6. In Select a Run As Account, click the name of the new Run As account that
you just created and click OK.
7. On the Credentials page, click Next.
8. On the Hyper-V host, open Server Manager, expand Configuration, and
then expand Local Users and Groups.
9. Click Groups, double-click Administrators.
10. On the Administrators Properties page, click Add, and then in Enter the
object names to select, type the applicable values:
<DomainName>\<NewRunAsAccountName>
11. Click Check Names and then click OK twice.
12. On the VMM Server Console, return to the Discovery scope page in the
Add Resources Wizard.
13. Select Specify Windows Server computers by names and in Computer
names, type the name (or part of the name) of the computer that you want
to add as a VM host.
14. Wait until the name of the server that you specified appears and in
Discovered computers, select the server name.
15. In the Host Settings page, specify:
For Host group, assign the host to a host group by selecting All Hosts
or by selecting the name of a specific host group.
For Add the following path, do one of the following to specify the path
to the directory on the host where you want to store VM files, which
will be deployed on this host:
To accept the default VM placement path of
%SystemDrive%\ProgramData\Microsoft\Windows\Hyper-V leave
this field blank and click Add.
Or to specify a VM placement path other than the default, type the
path and then click Add. For example, type C:\MyVMs as the path.
Note Add a path only for a stand-alone host. For a host cluster, VMM
automatically manages the paths that are available for VMs
based on the shared storage available to the host cluster.
16. In the Summary page, confirm the settings that you selected and then
click Finish.
17. In the Jobs dialog box, confirm that adding the host completes
successfully and then close the dialog box.
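The preceding steps use the Add Resources Wizard. The following is a rough sketch only of adding a stand-alone host from the VMM PowerShell command shell instead; the Run As account and host names are placeholders, and it assumes the Run As account object can be passed to the -Credential parameter (verify with Get-Help Add-SCVMHost).

# Sketch only: add a stand-alone Hyper-V host to VMM with an existing Run As account.
$runAs = Get-SCRunAsAccount -Name "VMHostRunAs"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"
Add-SCVMHost "HyperV01.contoso.com" -VMHostGroup $hostGroup -Credential $runAs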
Hyper-V Servers configured as a host cluster: You must have four servers
running Windows Server 2008 R2 SP1 with the Hyper-V Server role
installed. These servers must belong to the same Active Directory domain
as the VMM Server.
These four servers should be the nodes of an existing host cluster. The
steps used in the following procedure assume that you have an existing
Hyper-V host cluster that you want to add to VMM.
See also:
“Minimum hardware requirements for a test environment” on page 54
“Hyper-V: Using Hyper-V and Failover Clustering” at
http://technet.microsoft.com/en-
us/library/cc732181(v=WS.10).aspx.
Note A test environment may not need to meet all of the
requirements recommended for a production environment.
With VMM 2012, you can create a host cluster as described in “How to
Create a Hyper-V Host Cluster in VMM” at
http://technet.microsoft.com/en-us/library/gg610614.
Run As Account: You must have, or create, a Run As account with the
following characteristics:
An Active Directory domain account and that account must be added
to the local administrators group on each node (each Hyper-V host) in
the cluster.
If you configured your VMM server to use a domain account when you
installed the VMM server, do not use the same domain account to add
or remove host clusters.
Group Policy and WinRM: If you use Group Policy to configure Windows
Remote Management (WinRM) settings, before you add a host cluster to
VMM management, see the “Prerequisites” section for steps you might
need to take in the online help topic “How to Add Trusted Hyper-V Hosts
and Host Clusters” at http://technet.microsoft.com/en-
us/library/gg610648.
1. In the lower left pane of the VMM Console, click Fabric and on the ribbon,
click the Home tab. Then click Add Resources and select Hyper-V Hosts
and Clusters.
2. To specify the location of the cluster that you want to add, on the
Resource location page, select Windows Server Computers in a trusted
Active Directory domain.
3. Select Use an existing Run As account and click Browse.
4. On the Select a Run As Account dialog box, click the name of the Run As
account that you want to use (the one that you created for each of the
cluster nodes), and click OK.
5. On the Credentials page, click Next.
6. To search for the cluster that you want to add, on the Discovery scope
page, select Specify Windows Server computers by names.
7. In Computer names, type either:
The NETBIOS name of the cluster. For example: LAMANNA-CLUS01
Or you can type the fully qualified domain name (FQDN) of the cluster.
For example: LAMANNA-CLUS01.sr5fdom.eng.emc.com
8. For Skip AD verification, confirm that the check box is cleared (you do
not want to skip the AD verification).
9. In the Target resources page, wait until the Discovered computers are
listed and then confirm that both the FQDN of the cluster and the FQDN of
each cluster node for each VM host appears. Then select the name of the
cluster that you specified in the previous step.
10. In Host group on the Host settings page, select the specific host group
that you want to assign for the host cluster. Also confirm that the wizard
recognizes that you have chosen to add a cluster (rather than a stand-
alone host) and therefore, Add the following path is unavailable on this
page.
Note You add a VM path only for a stand-alone host. For a cluster,
VMM automatically manages paths based on the shared
storage available to the cluster:
For shared storage, VMM uses the Failover Clustering WMI API to list the
paths for shared storage. These paths are usually similar to
C:\ClusterStorage\Volume1 and C:\ClusterStorage\Volume2.
For SAN deployments to a cluster, VMM uses a volume GUID path (for
example, \\?\Volume{GUID}\). If this GUID path is used, the administrator
does not need to specify a path in Add the following path.
13. After all jobs complete successfully, in the lower left pane on the VMM
Console, click Fabric.
14. In the upper-left pane, expand Servers, expand All Hosts, and navigate to
the host group for the host cluster. Select and right-click the cluster name
and click Properties.
15. Click the General tab and for the Cluster reserve (nodes) specify 0, and
then click OK.
Note This setting specifies the number of node failures that a cluster
must sustain while supporting all virtual machines deployed on
the host cluster. “Configuring Hyper-V Host Cluster Properties”
at http://technet.microsoft.com/en-us/library/hh389112 has
more details.
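As with a stand-alone host, a host cluster can also be added and its cluster reserve changed from the VMM PowerShell command shell. The sketch below uses the example cluster name from this chapter; the Run As account name is a placeholder, and the parameter names (in particular -ClusterReserve) should be verified against your VMM build with Get-Help.

# Sketch only: add an existing Hyper-V host cluster and set its cluster reserve to 0.
$runAs = Get-SCRunAsAccount -Name "ClusterRunAs"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"
Add-SCVMHostCluster -Name "LAMANNA-CLUS01.sr5fdom.eng.emc.com" -VMHostGroup $hostGroup -Credential $runAs
$cluster = Get-SCVMHostCluster -Name "LAMANNA-CLUS01.sr5fdom.eng.emc.com"
Set-SCVMHostCluster -VMHostCluster $cluster -ClusterReserve 0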
Add EMC SMI-S Provider to VMM and place storage pools under management
The EMC SMI-S Provider is required in your test environment in order to test VMM
2012 storage automation functionality with EMC arrays.
Confirm SMI-S Provider server is installed. You must have already installed
the EMC SMI-S Provider on a server. “Install and configure the EMC SMI-S
Provider” on page 68 and “Configure the EMC SMI-S Provider to manage
EMC storage systems” on page 69 has more details.
Pick a port to use for the SMI-S Provider server. The default HTTP port for
the EMC SMI-S Provider is 5988 and the default HTTPS port is 5989 (SSL
port). When adding a provider, VMM assumes you will use a Secure Sockets
Layer (SSL) port. Ask your storage administrator which port to use in your
environment. A specific security policy might be required. Also, the provider
might have been configured with ports different from these defaults.
Confirm the availability of storage pools. Check with your Storage
administrator to see which storage pools are available for you to add to
your VMM private cloud.
Caution Identifying available storage pools is particularly important if
you plan to use storage arrays in your test environment from
which the Storage administrator has already allocated some
storage pools to the production environment.
Create a separate ECOM Run As account on the SMI-S Provider server for
VMM use. Before you can add the SMI-S Provider to VMM, you must create
a Run As account for the SMI-S Provider. This Run As account must be an
ECOM administrator account. VMM will use the account when it uses Basic
Authentication to connect to ECOM and to the provider.
EMC recommends that you create a separate account solely for VMM use so
that any required security policies can be applied independently of any
other IT services in your environment that are using the same provider.
Consult with your security and storage administrators for additional
guidance.
To create the ECOM account, use the ECOM Administration Web Server:
1. Open a browser on the SMI-S Provider server that needs the new ECOM
account and enter the following URL: http://localhost:5988/ecomconfig
2. Confirm that the ECOM Administration Login Page appears.
5. When the ECOM Security Admin Add User page opens, create an ECOM
account that you will use solely for VMM operations and type the
following field values. Replace <text> with the applicable values:
User Name: <YourNameForEcomUserAccountForVMM>
Password: <YourPassword>
Role: Administrator
Scope: Local
Password never expires: Select true or false, depending on your
organization's security policies. If you select false, the password
expires every 90 days.
To add EMC SMI-S Provider to VMM and place storage pools under management:
1. In the lower-left pane on the VMM Console, click Fabric and in the upper-
left pane, expand Storage. Then click Providers and review the list of
existing SMI-S Providers (if any).
2. In the lower-left pane on the VMM Console, click Fabric and on the ribbon,
click the Home tab. Then click Add Resources and select Storage Devices.
3. In the Specify Discovery Scope page, type one of these applicable values
for the Provider IP address or FQDN and the applicable default port
number, or if you have specified new port numbers, use your new
numbers:
<Provider IP Address>:5988
<Provider FQDN>:5988
<Provider IP Address>:5989
<Provider FQDN>:5989
Note 5988 is the default non-secure port number and 5989 is the
default secure port number (use 5989 only if the provider uses
SSL).
4. For Use Secure Sockets Layer (SSL) connection, do one of the following
based on the port in the path you received from the storage administrator:
Select the Use SSL connection checkbox if the port is an SSL port (this
is the default for VMM).
Or Clear the Use SSL connection checkbox if the port is not an SSL port
(this is the default for the EMC SMI-S Provider).
5. For Run As account, click Browse, and select a Run As account, which
must be a Run As Account that you created earlier on this SMI-S Provider
server by using the ECOM Administration Web Server. Then click Next.
6. In the Gather Information page, wait until the discovery of the storage
device information is complete and confirm that the discovered storage
arrays appear. Then click Next.
7. In the Select Storage Pools page, select one or more storage pools that
you want to place under VMM management and click Create
classification.
Caution You may see storage pools that the storage administrator
has assigned to other IT administrators. Confirm that you
know which storage pools on this array that you need to
assign for VMM management in your test environment.
Figure 18. Select the storage pools for VMM to manage and create classification
11. Wait until the job that adds the SMI-S Provider server and the jobs that
discover and import the storage information complete successfully, and
then close the Jobs dialog box.
12. In the lower-left pane of the VMM Console, click Fabric and in the upper-
left pane, expand Storage, and then click Classification and Pools.
13. In the main pane, confirm that the storage pools that you assigned to
VMM management are listed. These were assigned when you added the
EMC SMI-S Provider to VMM.
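The same provider registration can be scripted. The following is a sketch only: it assumes a provider host named SMISPROV01.contoso.com listening on the default non-SSL port 5988 and the ECOM Run As account created earlier (both placeholder names), and the Add-SCStorageProvider parameter names should be verified with Get-Help in your environment.

# Sketch only: add the EMC SMI-S Provider to VMM over the default non-SSL port.
$runAs = Get-SCRunAsAccount -Name "EcomRunAsForVMM"
Add-SCStorageProvider -Name "SMISPROV01.contoso.com" -NetworkDeviceName "http://SMISPROV01.contoso.com" -TCPPort 5988 -RunAsAccount $runAs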
1. In the lower-left pane of the VMM Console, click Fabric and in the upper-
left pane, expand Storage, and then click Arrays to display arrays under
VMM management in the main pane.
2. In the main pane, right click an array, and then click Properties.
3. Click the Settings tab to display the Storage array Settings page, select
the following applicable choice for VM rapid provisioning, and then click
OK. The choice you make depends on the capabilities of the array:
Select Snapshots if the array supports creating snapshots at scale
Select Clone logical units if the snapshot technology for this array is
not designed or optimized for application data.
Note The default value depends on the capabilities that are returned
from the array to VMM:
If using a Symmetrix VMAX, the default value depends on the
array.
If using a VNX or CLARiiON CX4, the default value depends on
the software packages installed on the array.
Specify the default for creating storage groups for a Hyper-V host cluster
By default, VMM sets the value for CreateStorageGroupsPerCluster (a property on a
storage array object) to False, which means that VMM creates storage groups per
node for a Hyper-V host cluster and adds host initiators to storage groups by node
(not by cluster). Storage groups are also called masking sets.
For some storage arrays, if the provider does not scale for unmasking storage
volumes to a cluster, it is preferable to specify that VMM manage storage groups
for the entire cluster. In this case, VMM adds host initiators for all cluster nodes (as
a set) to a single storage group.
If storage groups on an array are discovered by VMM, but do not display in the
VMM Console, perform the following procedure to change the defaults.
To change the defaults on an array for how VMM creates storage groups for a
cluster:
1. In the ribbon on the VMM Console, click PowerShell to open the Windows
PowerShell Virtual Machine Manager command shell.
2. To display storage groups, and other information about the arrays in your
test environment, type:
Get-SCStorageArray -All | Select-Object Name, Model,
ObjectType, StorageGroups, LogicalUnitCopyMethod,
CreateStorageGroupsPerCluster | fl
Name : 000194900376
Model : VMAX-1SE
ObjectType : StorageArray
StorageGroups : {ACLX View, ACLX View,
ACLX View, ACLX View1}
LogicalUnitCopyMethod : Snapshot
CreateStorageGroupsPerCluster : False
Name : APM00111102546
Model : Rack Mounted VNX5100
ObjectType : StorageArray
StorageGroups : {Storage Group}
LogicalUnitCopyMethod : Snapshot
CreateStorageGroupsPerCluster : False
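To change the value, a Set cmdlet can be used against the array object. The sketch below uses the VNX array name from the example output above and assumes that Set-SCStorageArray exposes a -CreateStorageGroupsPerCluster parameter matching the property shown; verify the parameter name with Get-Help Set-SCStorageArray before running it.

# Sketch only: switch an array to per-cluster storage groups (masking sets).
$array = Get-SCStorageArray -All | Where-Object { $_.Name -eq "APM00111102546" }
Set-SCStorageArray -StorageArray $array -CreateStorageGroupsPerCluster $true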
Create local shares, add shares as Library Shares, designate a VM host as a Library Server
Adding the VMM Library Server role to a Hyper-V Server already configured as a
stand-alone VM host is required in your test environment if you want to fully test all
VMM 2012 storage automation functionality with EMC arrays. Using the same
Hyper-V Server for a VM host and a Library Server lets you unmask and mask LUNs
to that server. This is because the folder mount path that you specify on the VM
host (in the test steps described in this document) is a path that is managed by the
Library Server.
Note If you prefer not to co-host a VM host and a Library Server on the same
physical server in your production environment, VMM also supports adding
each role to a different server. In this case, you have to do all
unmasking and masking for the Library Server outside of VMM (you cannot
use the VMM Console or VMM PowerShell for this).
VM Host: You need an existing server running Windows Server 2008 R2 with
the Hyper-V role installed that belongs to the same Active Directory domain
as the VMM Server. This server must have been already added to VMM as a
VM host. In this example test environment, this server is the VM host that
you added in “Add a stand-alone Hyper-V Server as a VM host to VMM” on
page 75.
This Library Server must be on a server that is also a VM host so that you can
use VMM to assign a logical unit to this server. VMM assigns logical units to
the VM host component, but cannot assign logical units to Library Servers.
See also:
“Minimum hardware requirements for a test environment” on page 55
“System Requirements: VMM Library Server” at
http://technet.microsoft.com/en-us/library/gg610631. However, this
test environment may not need to meet all requirements
recommended for a production environment.
Run As Account: When you add a Library Server to VMM, you must provide
credentials for a domain account that has administrative rights on the
computer that you want to add. In this procedure, you can use the same
Run As account that you used earlier to add the VM host.
Firewall: When you add a Library Server to VMM, the firewall on the server
that you want to add must allow File and Print Sharing (SMB) traffic so that
VMM can display available shares.
Windows shared folders become VMM library shares: To add resources to a
library share, an administrator typically needs to access the share through
Windows Explorer.
To create local shares, add shares as Library Shares, and add a VM host as a
Library Server:
1. On the VM host that you want to add to VMM as a Library Server, open
Windows Explorer and create the following parent folder:
C:\Library
3. In Windows Explorer, right-click the Library parent folder that you just
created, and then click Properties.
4. On the Properties page, click the Sharing tab, click Advanced Sharing, and
then in the Advanced Sharing dialog box, select Share this folder.
5. In the lower left pane of the VMM Console, click Library. In the upper-left
pane, right-click Library Servers, and then click Add Library Server.
6. In Use an existing Run As Account section, click Browse, select a Run As
account with permissions on the VM Host that you will now add as a
Library Server, click OK to return to the Credentials page, and then click
Next.
9. In the Summary page, confirm that the name of the server you want to add
as a Library Server appears under Confirm the Settings, and then click
Add Library Servers.
10. In the Jobs dialog box, confirm that the job to add the Library Server
completes successfully, and then close the dialog box.
In VMM, you allocate a storage pool on an array to a VMM host group. This action
makes that storage pool available for use by Hyper-V VM hosts (or by Hyper-V host
clusters) in the VMM host group. In VMM, the storage available to a host or cluster
from a storage pool is used only for VM workloads.
To allocate a storage pool to the host group to which the VM host Library Server
belongs:
1. In the lower left pane of the VMM Console, click Fabric. In the upper left
pane, expand Servers and expand the host group where the VM host
Library Server is stored. For example, LDMHostGroup1. Then right-click
the VM host, and click Properties.
2. Click the Storage tab and click Allocate Storage Pools.
3. In the Allocate Storage Pools dialog box, select a storage pool, click Add
and click OK.
4. On the Storage tab, click OK.
5. In the Jobs dialog box, confirm that the specific storage pool is
successfully allocated to the VMM host group (on which the VM host
Library Server computer resides) and then close the dialog box.
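The allocation can also be scripted. The following is a sketch only that reuses the pool and host group names appearing elsewhere in this document and assumes that Set-SCStoragePool accepts an -AddVMHostGroup parameter; confirm with Get-Help Set-SCStoragePool before use.

# Sketch only: allocate a managed storage pool to a VMM host group.
$pool = Get-SCStoragePool | Where-Object { $_.Name -eq "SMI-Thin" }
$hostGroup = Get-SCVMHostGroup -Name "LDMHostGroup1"
Set-SCStoragePool -StoragePool $pool -AddVMHostGroup $hostGroup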
Create and mount a LUN on the VM host Library Server for the HA Template
Before running the validation tests, you must create a logical unit and mount it on
the standalone VM host that is also a Library Server. You use this LUN in “Create an
HA VM template to test rapid deployment to a host cluster” on page 100.
Note VHD files used to support rapid provisioning of VMs are contained within
LUNs on the arrays but are mounted to folders on the VMM Library Server.
Confirm VM host connectivity to array: Confirm that you have configured the
VM host correctly to access the storage array. This varies by array.
Optionally, when configuring the connection from the host to the array, you
can add the Microsoft Multipath Input/Output (MPIO) feature to the host to
improve access to an FC or iSCSI array.
MPIO supports multiple data paths to storage and, in some cases, can
increase throughput by using multiple paths simultaneously. “How to
Configure Storage on a Hyper-V Host” at http://technet.microsoft.com/en-
us/library/gg610696.aspx and “Support for Multipath IP (MPIO)” at
http://technet.microsoft.com/library/cc770294.aspx provide more details.
SAN Type:
For FC SAN, the VM host must have a host bus adapter (HBA) installed
and must be zoned so that the host can access the array.
For iSCSI SAN, the VM host must have the Microsoft iSCSI Initiator
Service started and set to Automatic startup; a PowerShell sketch follows this list.
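For the iSCSI case, the service can be configured with standard Windows PowerShell on the host, for example:

# Set the Microsoft iSCSI Initiator Service to start automatically and start it now.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI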
Example output:
Name : <YourVMHostName>.<YourDomainName>.com
ObjectType : VMHost
FCSANStatus : Success (0)
ISCSISANStatus: Success (0)
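The command that produced the preceding output is not reproduced here. As a sketch only, output of this shape can typically be produced from the VMM PowerShell command shell with a command along the following lines, assuming the host object exposes the FCSANStatus and ISCSISANStatus properties shown above:

# Sketch only: display SAN connectivity status for a managed host.
Get-SCVMHost -ComputerName "<YourVMHostName>" | Select-Object Name, ObjectType, FCSANStatus, ISCSISANStatus | Format-List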
1. In the lower left pane of the VMM Console, click Fabric. In the upper left
pane, expand Servers and expand the host group where the VM host
Library Server is stored. For example, LDMHostGroup1. Then right-click
the VM host, and click Properties.
2. In the upper left menu, click the Storage tab and click Add (Disk: Add),
which opens the screen for creating a logical unit. Then click Create
Logical Unit.
3. In the Create Logical Unit dialog box, specify the following field values:
For Storage Pool, select SMI-Thin or an applicable storage pool from
your VMM management list.
For Description, type a description for the LUN. This field is optional.
For Name, type HATemplateLU1.
For Size, select the applicable size. For example, 25.
4. Click OK to return to the Storage tab. Wait until this step completes.
5. On the Storage tab, confirm that HATemplateLU1 appears in the Logical
unit field, and in the Mount point section, select Mount to the following
empty NTFS folder.
6. Click Browse to open the Select Destination Folder dialog box; in the
server name section, expand the C:\ drive, expand C:\Library, and then
click the HATemplateShare or applicable folder.
7. Click OK to return to the Storage tab, and then click OK to close the VM
host Properties page.
8. In the Jobs dialog box, confirm the HATemplateLU1 is successfully created
and then close the dialog box.
Create and mount a LUN on the Library Server for the SA Template
Before running the validation tests, you need to create a second LUN (logical unit)
and mount it on the stand-alone VM host, which is also a Library Server.
Use this new LUN when you “Create an SA VM template to test rapid deployment to
a stand-alone host” on page 102 for testing rapid provisioning of VMs to a stand-
alone Hyper-V host. The following procedure is almost identical to the previous
procedure, except for the folder and the logical unit names.
To create and mount a LUN as a SATemplateLU1 for the SA template on the VM host
Library Server:
1. In the lower left pane of the VMM Console, click Fabric. In the upper left
pane, expand Servers and expand the host group where the VM host
Library Server is stored. For example, LDMHostGroup1. Then right-click
the VM host, and click Properties.
2. In the upper left menu, click the Storage tab, click Add (Disk: Add), which
opens the screen that lets you create a logical unit.
3. On the Storage tab, click Create logical unit to open the Create Logical
Unit dialog box.
4. In the Create Logical Unit dialog box, specify the following field values:
For Storage Pool, select SMI-Thin or an applicable storage pool from
your VMM management list.
For Description, type a description for the LUN. This field is optional.
For Name, type SATemplateLU1.
For Size, select the applicable size. For example, 25.
5. Click OK to return to the Storage tab. Wait until this step completes.
6. On the Storage tab, confirm that SATemplateLU1 appears in the Logical
unit field, and in the Mount point section, select Mount to the following
empty NTFS folder.
7. Click Browse to open the Select Destination Folder dialog box and in the
server name section, expand the C:\ drive. Then expand C:\Library and
click the SATemplateShare or applicable folder.
8. Click OK to return to the Storage tab, and then click OK to close the VM
host Properties page.
9. In the Jobs dialog box, confirm that the job to create SATemplateLU1
completes successfully, and then close the dialog box to complete the
task.
Copy and import a dummy operating system VHD to local shared folders in VMM
For this test environment, you need to copy and import a VHD into the
HATemplateShare and SATemplateShare folders on the host in VMM. You will use
this VHD later to create the actual VM templates, which in this example test
environment are named HATemplate and SATemplate, respectively.
Windows OS VHD: For the following procedure, you can use a VHD that
contains an operating system, but you do not need an operating system to
test storage automation. Typically, administrators use a dummy VHD or an
empty VHD for this procedure. In this example procedure, the VHD is named
DummyW2k8r2.vhd, which helps explain that this VHD does not contain an
operating system.
Note If you had copied the VHD file into a Windows folder that is an
existing VMM Library share, you do not need to perform the
following steps to add a library share. Instead, right-click the
applicable <Library Share Name> and click Import.
2. In the lower-left pane of the VMM Console, click Library, and in the upper
left pane, expand Library Servers. Then click the applicable server name
that contains the C:\Library folder you just created.
3. Above the ribbon, click the Library Server tab, and then click Add Library
Shares.
4. On the Add Library Shares page, select C:\Library and click Next.
5. On the Summary page, click Add Library Shares.
6. In the Jobs dialog box, confirm that the library share import completes
successfully and then close the dialog box.
7. In the lower left pane of the VMM Console, confirm that the dummy VHD
appears in the VMM Library and click Library.
8. In the upper left pane, expand Library Servers, expand the name of the
server on which you created the C:\Library folder, expand Library, and
then expand HATemplateLU1.
9. In the Physical Library Objects section, confirm that DummyWin2k8r2.vhd
appears with SAN Copy Capable set to Yes.
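The same check can be made from the VMM PowerShell command shell. This is a sketch only and assumes the library object exposes a SANCopyCapable property matching the console column name:

# Sketch only: confirm that the dummy VHD in the library is SAN copy capable.
Get-SCVirtualHardDisk | Where-Object { $_.Name -eq "DummyWin2k8r2.vhd" } | Select-Object Name, Location, SANCopyCapable | Format-List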
You are now ready to create a VM template, which is named HATemplate in this test
environment. You can use this template to deploy VMs to a Hyper-V host cluster.
1. In the lower left pane of the VMM Console, click Library and on the ribbon,
click the Home tab. Then click Create VM Template and the Create VM
Template Wizard appears.
2. In the VM Template Source dialog box, select Use an existing VM
template or a virtual hard disk stored in the library and click Browse. The
Select VM Template Source dialog box appears.
3. In the HATemplateShare folder, select the applicable VHD or for this test
environment, select DummyWin2k8r2.vhd and click OK.
Caution Do not select DummyWin2k8r2.vhd in the SATemplateShare
folder.
Figure 33. Configure Operating System page in the Create VM Template Wizard
Notes
When creating a VM template for creating and deploying new VMs,
you will rarely choose not to install and customize an operating
system. However, because the HA and SA templates use dummy
VHDs, this option is applicable here. This choice saves time when testing
storage automation in the test environment.
If you do choose [None – customization not required], the new VM
template wizard skips the Configure Application and Configure SQL
Server pages and goes directly to the Summary page.
8. On the Summary page, confirm that HATemplate is set as the VM
Template and is the only setting specified, and then click Create.
You are now ready to create a VM template, which is named SATemplate in this test
environment. You can use this template to deploy VMs to a stand-alone Hyper-V
host.
This procedure is almost identical to the preceding one, except for the template
name. In this procedure, you use SATemplate, not HATemplate, and omit the
Availability High option.
1. In the lower left pane of the VMM Console, click Library and on the ribbon,
click the Home tab. Then click Create VM Template. The Create VM
Template Wizard appears.
2. In the VM Template Source dialog box, select Use an existing VM
template or a virtual hard disk stored in the library and click Browse. The
Select VM Template Source dialog box appears.
3. In the SATemplateShare folder, select the applicable VHD or for this test
environment, select DummyWin2k8r2.vhd and click OK.
Caution Do not select DummyWin2k8r2.vhd in the HATemplateShare
folder.
You have now created the two SAN-copy-capable (SCC) VM templates that you will
use to validate storage automation in your test environment. The following
procedure confirms that both templates exist and are available for use in the VMM
Library.
1. In the lower left pane of the VMM Console, click Library and in the upper
left pane, expand Templates. Then click VM Templates and confirm that
you see both templates in the main pane.
2. In the main pane, right-click HATemplate, click Properties, and click the
Hardware Configuration tab. In the Advanced section, click Availability and
confirm that the High option is selected so that this template can be used
to create a highly available VM.
3. In the main pane, right-click SATemplate, click Properties, and click the
Hardware Configuration tab. In the Advanced section, click Availability to
confirm that this template is set to Normal, which means it can be used to
create a VM with normal availability.
4. This step is optional, because any VM template stored in the VMM Library
can be used again. Before you start automated testing, you can
experiment with non-automated VM provisioning (non-scripted
provisioning) by using either or both of the VM templates that you just
created.
Right-click one of the templates, select Create Virtual Machine, and then
complete the steps as prompted in the Create Virtual Machine wizard.
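You can also confirm from the VMM PowerShell command shell that both templates exist in the VMM Library, for example:

# List the two SAN-copy-capable templates created for the validation tests.
Get-SCVMTemplate | Where-Object { @("HATemplate","SATemplate") -contains $_.Name } | Select-Object Name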
Chapter 5 Validate Storage Automation in your
Test Environment
EMC and Microsoft co-authored this document. Microsoft defined the structure, which
is common to all vendors who perform similar testing. This document provides:
Configuration for VMM, the EMC SMI-S Provider, and managed EMC arrays
Best practices and software and hardware configuration requirements for
enabling specific storage features
List of limitations and known issues that can emerge from the development
and testing process
You or your customers can use this document as a guide for deploying a configuration
in a lab or data center that is similar to the EMC preproduction environment. Setting
up a similar test environment enables customers to benefit directly from the storage
automation validation testing that is performed by EMC. “Test storage automation in
your preproduction environment” on page 123 has more details.
You can take this EMC testing one step further by setting up a similar test
environment and running the same VMM storage validation script that EMC and other
vendors use.
The purpose of the validation script is to validate that the EMC SMI-S Provider and
each supported EMC array meet VMM’s defined functionality and scale requirements.
1. Download the VMM Storage Automation Validation Script from the Microsoft
TechNet blogs site (requires a login account) at
http://blogs.technet.com/b/hectorl/archive/2012/06/12/sc-2012-vmm-
storage-automation-validation-script.aspx
Notes
The file name of the script that EMC used to validate the
preproduction environment described in this paper is named
StorageAutomationScript.ps1.
However, a later version of the script (for VMM 2012 SP1) is
expected to be backward compatible and also work in a VMM 2012
environment.
2. On the VMM server, open Windows Explorer and create the following folder:
C:\Toolbox\VMMValidationScript.
3. Unzip the contents of the downloaded validation script to:
C:\Toolbox\VMMValidationScript
Table 26. Contents of StorageConfig.xml input file read by the VMM validation
script
XML tag Description
VmmServer Name of the server on which the VMM Management Server is installed.
ProviderName Name of the provider used when you add it to the VMM Management Server:
ServerName:Port.
UserName Name of the ECOM user account used to add the provider to the VMM
Management Server.
NetName URL for the provider computer to which to connect. For example:
http://ServerName.
ArrayName Name of the array from which the storage pool should be selected, which is
usually the serial number of the array.
Note Required only if the provider manages multiple arrays and two or more
have duplicate names for storage pools. Otherwise, this tag is optional.
ClassificationName Any name to be used for classifying types of storage. This name must agree with
the pool specified.
HostName1 Name of the stand-alone VM host against which validation tests are run.
ClusterName1 Name of the Hyper-V host cluster against which validation tests are run.
ClusterNodes A list that contains the name of each node in the specified cluster.
Node Name of a node in the cluster. Add a node name for each node in the cluster.
LunDescPrefix Prefix used for all LUNs that are created by the validation test. This prefix
facilitates clean-up in case tests fail to complete.
ParallelSnapshotCount Number of parallel operations for creating snapshots. This value can be
overwritten in the test function.
ParallelCloneCount Number of parallel operations for creating clones. This value can be overwritten
in the test function.
VmTemplate Template name used for creating and deploying new VMs to a stand-alone host.
Note In this document, it is the SATemplate.
HaVmTemplate Template name used for creating and deploying new VMs to a Hyper-V host
cluster.
Note In this document, it is the HATemplate.
VmLocation Path to the location on the VM host where new VMs will be stored.
Note For SAN deployments to a Hyper-V host cluster and for VM rapid
provisioning to a cluster, no paths are required.
DomainUserName Name of the Active Directory user account that is a VMM administrator or
delegated administrator for the specified host and storage resources.
OutputCSVFile Name of the CSV file that contains the results of each test together with the
completion time for each operation.
LibLocalShare (optional) Local path to the shared folder (on the Library Server computer) where LUNs used to create SCC templates are mounted.
LibShareName (optional) Name of the VMM Library Share for the specified local LibLocalShare folder.
VhdName (optional) Name of the virtual hard disk that is copied onto the SCC LUN.
Note Assuming that you have already created the templates as specified earlier in this document, the following four values in the StorageConfig.xml file are optional: LibServer, LibLocalShare, LibShareName, and VhdName (they will not be used even if you do fill them in).
Figure 40 is an example of the tags and contents of a StorageConfig.xml file that EMC
used during one of its actual validation tests.
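Figure 40 itself is not reproduced here. The following sketch is illustrative only: the root element name and all server, array, cluster, account, and path values are hypothetical placeholders, and only the tags described in Table 26 are shown.
<StorageConfig>
  <VmmServer>VMMSERVER01</VmmServer>
  <ProviderName>SMISERVER01:5988</ProviderName>
  <UserName>admin</UserName>
  <NetName>http://SMISERVER01</NetName>
  <ArrayName>ARRAY-SERIAL-NUMBER</ArrayName>
  <ClassificationName>Gold</ClassificationName>
  <HostName1>HVHOST01</HostName1>
  <ClusterName1>HVCLUSTER01</ClusterName1>
  <ClusterNodes>
    <Node>HVNODE01</Node>
    <Node>HVNODE02</Node>
  </ClusterNodes>
  <LunDescPrefix>VMMTEST</LunDescPrefix>
  <ParallelSnapshotCount>5</ParallelSnapshotCount>
  <ParallelCloneCount>5</ParallelCloneCount>
  <VmTemplate>SATemplate</VmTemplate>
  <HaVmTemplate>HATemplate</HaVmTemplate>
  <VmLocation>C:\VMs</VmLocation>
  <DomainUserName>DOMAIN\vmmadmin</DomainUserName>
  <OutputCSVFile>C:\Toolbox\VMMValidationScript\Results.csv</OutputCSVFile>
</StorageConfig>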
In some cases, however, you will need to obtain CIM-XML output and trace output
from the Storage Management Service directly to help you troubleshoot.
The three levels of tracing that you will need, and how to enable each one, are
described in this section.
Enable VMM tracing to collect traces on the VMM server. “How to collect traces
in System Center Virtual Machine Manager” at
http://support.microsoft.com/kb/970066 has instructions to set up VMM
tracing.
VMM traces produce error and exception information. You need Hyper-V host traces only if the failure occurs on the Hyper-V side. For example, host traces can be helpful if you encounter volume mount issues.
Microsoft Storage Management Service uses CIM-XML to communicate with the SMI-S Provider.
The output produced by SCX CIM-XML command tracing is the raw call-and-response
interaction between the service and the provider. This information is very verbose, so
to help minimize noise, collect this information only when you reproduce the issue.
Microsoft Storage Management Service has its own trace output, which you can collect as Event Trace Log (ETL) files and view with TraceView.
1. In a command shell on the EMC SMI-S Provider server, shut down Ecom.exe:
C:\Program Files\EMC\ECIM\ECOM\bin\sm_service Stop Ecom.exe
Note Alternatively, you can use one of these to shut down ECOM:
Service Manager
Command shell command: net stop ECOM
2. Clean up the log files by deleting the existing log files in the ECOM log
folder:
C:\Program Files\EMC\ECIM\ECOM\log
3. Then open and edit the Log_settings.xml file to turn on ECOM HTTP I/O
tracing at the following location: C:\Program
Files\EMC\ECIM\ECOM\conf\Log_settings.xml
4. In the Log_settings.xml file, make the following changes:
a. Change the value for Severity from:
<ECOMSetting Name="Severity" Type="string" Value="NAVI_WARNING"/>
to:
<ECOMSetting Name="Severity" Type="string" Value="NAVI_TRACE"/>
b. Set HTTPTraceOutput to true:
<ECOMSetting Name="HTTPTraceOutput" Type="boolean" Value="true"/>
c. Set HTTPTraceInput to true:
<ECOMSetting Name="HTTPTraceInput" Type="boolean" Value="true"/>
d. Set HTTPTraceMaxVersions to 30:
<ECOMSetting Name="HTTPTraceMaxVersions" Type="uint32" Value="30"/>
Note Alternatively, you can use one of these to shut down ECOM:
Service Manager
Command shell command: net stop ECOM
9. Collect all of the files in both of the following locations:
C:\Program Files\EMC\ECIM\ECOM\log
C:\Program Files\EMC\SYMAPI\log
10. Undo each change made to Log_settings.xml by reverting the value for each
ECOMSetting modified above to its original value.
11. Restart ECOM:
C:\Program Files\EMC\ECIM\ECOM\bin\sm_service Start Ecom.exe
The following table lists the test cases for the three different types: single operation
tests, baseline scale tests, and full scale tests.
Note In Table 27, Test207 is the only test that is different between the full-scale
tests and the baseline-scale tests.
Table 27. Tests developed by VMM that exercise storage automation functionality
and scale
Test type: Single operations
  Test102_CreateDeleteOneLun -LunSizeinMB 10240
  Test103_CreateOneSnapshotOfLun -LunSizeinMB 10240
  Test104_CreateOneCloneOfLun -LunSizeinMB 10240
  Test105_RegisterUnRegisterOneLunToHost
  Test155_RegisterUnRegisterOneLunToCluster
  Test106_RegisterOneLunAndMountToHost -LunSizeinMB 10240
  Test107_RapidCreateOneVMToHost
  Test157_RapidCreateOneVMToCluster
Test type: End-to-end scenarios (baseline scale test)
  Test101_AddRemoveProvider
  Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
  Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
  Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240
  Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240
  Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240
  Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10
  Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240
  Test207_RapidCreateMultipleVMsToHost -Count 10
  Test257_RapidCreateMultipleVMsToCluster -Count 10
  Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10
  Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
  Test400_PerformAllClusterTests
Test type: End-to-end scenarios (full scale test)
  Test101_AddRemoveProvider
  Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
  Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
  Test204_CreateMultipleClonesOfLun -Count 10 -LunSizeinMB 10240
  Test205_RegisterUnRegisterMultipleLunsToHost -Count 10 -LunSizeinMB 10240
  Test255_RegisterUnRegisterMultipleLunsToCluster -Count 10 -LunSizeinMB 10240
  Test206_MountMultipleLunsToHost -LunSizeinMB 1024 -Count 10
  Test256_MountMultipleLunsToCluster -Count 10 -LunSizeinMB 10240
  Test207_RapidCreateMultipleVMsToHost -Count 10
  Test257_RapidCreateMultipleVMsToCluster -Count 10
  Test207_BatchRapidCreateMultipleVMsToCluster -BatchSize 10 -NumberofBatches 251
  Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10
  Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
  Test400_PerformAllClusterTests
The test results validate the operation of each supported array, its operating
environment, and the EMC SMI-S Provider that communicates with the Microsoft
Storage Management Service.
Table 28. Tests developed by VMM that EMC ran successfully on Symmetrix family
arrays
Test type: Single operations
  Test102_CreateDeleteOneLun -LunSizeinMB 10240
  Test103_CreateOneSnapshotOfLun -LunSizeinMB 10240
  Test105_RegisterUnRegisterOneLunToHost
  Test155_RegisterUnRegisterOneLunToCluster
  Test107_RapidCreateOneVMToHost
  Test157_RapidCreateOneVMToCluster
Test type: End-to-end scenarios (baseline scale tests)
  Test101_AddRemoveProvider
  Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
  Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
  Test207_RapidCreateMultipleVMsToHost -Count 10
  Test257_RapidCreateMultipleVMsToCluster -Count 10
  Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
  Test400_PerformAllClusterTests
Test type: End-to-end scenarios (full scale tests)
  Test101_AddRemoveProvider
  Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
  Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
  Test207_RapidCreateMultipleVMsToHost -Count 10
  Test257_RapidCreateMultipleVMsToCluster -Count 10
  Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10
  Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
  Test400_PerformAllClusterTests
Table 29. Tests developed by VMM that EMC ran successfully on EMC CLARiiON
family arrays
Test type: Single operations
  Test102_CreateDeleteOneLun -LunSizeinMB 10240
  Test103_CreateOneSnapshotOfLun -LunSizeinMB 10240
  Test105_RegisterUnRegisterOneLunToHost
  Test155_RegisterUnRegisterOneLunToCluster
  Test107_RapidCreateOneVMToHost
Test type: End-to-end scenarios (baseline scale tests)
  Test101_AddRemoveProvider
  Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
  Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
  Test207_RapidCreateMultipleVMsToHost -Count 10
  Test257_RapidCreateMultipleVMsToCluster -Count 10
  Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10
  Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
  Test400_PerformAllClusterTests
Test type: End-to-end scenarios (full scale tests)
  Test101_AddRemoveProvider
  Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
  Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
  Test207_RapidCreateMultipleVMsToHost -Count 10
  Test257_RapidCreateMultipleVMsToCluster -Count 10
  Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10
  Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
  Test400_PerformAllClusterTests
Table 30. Tests developed by VMM that EMC ran successfully on EMC VNX family
arrays
Test type: Single operations
  Test102_CreateDeleteOneLun -LunSizeinMB 10240
  Test103_CreateOneSnapshotOfLun -LunSizeinMB 10240
  Test105_RegisterUnRegisterOneLunToHost
  Test155_RegisterUnRegisterOneLunToCluster
  Test107_RapidCreateOneVMToHost
  Test157_RapidCreateOneVMToCluster
Test type: End-to-end scenarios (baseline scale tests)
  Test101_AddRemoveProvider
  Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
  Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
  Test207_RapidCreateMultipleVMsToHost -Count 10
  Test257_RapidCreateMultipleVMsToCluster -Count 10
  Test501_MigrateMultipleVMFromHost2Cluster -VMCount 10
  Test502_MigrateMultipleVMFromCluster2Host -VMCount 10
  Test400_PerformAllClusterTests
Test type: End-to-end scenarios (full scale tests)
  Test101_AddRemoveProvider
  Test202_CreateDeleteMultipleLun -Count 10 -LunSizeinMB 10240
  Test203_CreateMultipleSnapshotsOfLun -Count 10 -LunSizeinMB 10240
  Test207_RapidCreateMultipleVMsToHost -Count 10
  Test257_RapidCreateMultipleVMsToCluster -Count 10
Now, you can build your own preproduction test environment, download the VMM
validation script, and run your own validation testing. This will enable you to learn
about VMM 2012, EMC storage arrays, and how your private cloud components
interact in your own environment.
For example, suppose masking operations work in the preproduction setting but start to fail against the same 16-node cluster that you used earlier. Rule out issues with VMM first by restarting the failed masking job. If the job completes, investigate whether a timeout occurred for the job. Timeouts on the provider side might indicate an overloaded provider.
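You might locate and restart the failed job with VMM PowerShell along these lines (a sketch only; the filter and job selection depend on how the job appears in your environment):
# Find recent failed jobs, then restart the failed masking job from its last checkpoint
$failedJobs = Get-SCJob | Where-Object { $_.Status -eq 'Failed' } | Sort-Object StartTime -Descending
$failedJobs | Select-Object Name, Status, StartTime
Restart-SCJob -Job $failedJobs[0]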
Chapter 6 Prepare for Production Deployment
Production deployment
Carefully plan the transition from a preproduction or lab environment to a production
environment. Effectively using VMM to create and manage one or more private clouds
requires that you design and implement your management of storage resources with
VMM in mind. This chapter can help you do that by describing how to get started and
identifying important resources.
You must also understand what issues may exist in your production environment that
limit the performance and scale of your private cloud. One example is the number of
nodes that you plan to use in your production Hyper-V host clusters. Another example
is the set of rapid provisioning requirements that your organization plans to specify
for VMM host groups.
If you are just starting with VMM, viewing Microsoft Private Cloud Videos can help
familiarize you with VMM features and functionality. After viewing the videos, contact
an EMC representative about how to use the Microsoft Technology Centers (MTCs) to
aid you in building your private cloud. The representative might recommend validated
Fast Track configurations to expedite deploying VMM and EMC storage in your
production environment.
You can find the IT Showcase video “How Microsoft IT Uses System Center Virtual
Machine Manager to Manage the Private Cloud” at http://technet.microsoft.com/en-
us/edge/Video/hh748210.
VMM 2012 helps enable centralized management of both physical and virtual IT
infrastructure, increases server utilization, and improves dynamic resource
optimization across multiple virtualization platforms. Microsoft uses VMM to plan, deploy, manage, and optimize its own virtual infrastructure while maximizing its datacenter resources.
“Private Cloud Jump Start (01): Introduction to the Microsoft Private Cloud
with System Center 2012” at http://technet.microsoft.com/en-
US/edge/private-cloud-jump-start-01-introduction-to-the-microsoft-private-
cloud-with-system-center-2012
“Private Cloud Jump Start (02): Configure and Deploy Infrastructure
Components” at http://technet.microsoft.com/en-us/edge/video/private-
cloud-jump-start-02-configure-and-deploy-infrastructure-components
Today, you can find Microsoft Technology Centers (MTCs) all over the world. These
centers bring together Microsoft and its partners in a joint effort to help enterprise
customers find innovative solutions for their unique environments.
Customers can meet with solution and technology experts at an MTC location and
find answers to questions such as the following:
No matter what development stage you are at with your solution, MTC can help get
you to the next step.
To engage with EMC at the MTC, contact your local Microsoft or EMC account
manager. MTC visits are free, with convenient locations and flexible schedules. You
can also schedule a visit using the EMC online booking request form at
http://powerlink.emc.com/km/appmanager/km/secureDesktop?_nfpb=true&_pageL
abel=formsPgSecureContentBk&internalId=0b014066800248e7&_irrt=true.
Additional integration between System Center and EMC is provided in Fast Track
deliverables that include System Center 2012—Operations Manager and System
Center 2012—Orchestrator as well as other solutions that include EMC PowerShell
components. Customers implementing Microsoft Private Cloud Fast Track solutions
are provided with a pre-staged, validated configuration of storage, compute, and
network resources that fulfill all private cloud requirements. These solutions
significantly improve return on investment for private cloud deployments.
Appendix A Install VMM
If you plan to use shared ISO images with Hyper-V virtual machines,
you must use a domain account. “Specifying a Service Account for
VMM” at http://technet.microsoft.com/library/gg697600.aspx has
more details about which type of account to use.
Optionally, you can select Store my keys in Active Directory. However,
for this preproduction test installation, you may not need to select this
option. “Configuring Distributed Key Management in VMM” on
Microsoft TechNet at
http://technet.microsoft.com/library/gg697604.aspx has more
details.
11. For this and most test installations, you can accept the following default values for ports:
8100: Communication with the VMM Console
5985: Communication to agents on hosts and Library Servers
443: File transfers to agents on hosts and Library Servers
8102: Communication with Windows Deployment Services
8101: Communication with Windows PE agents
8103: Communication with Windows PE agent for time synchronization
Note The values you assign for these ports during setup cannot be
changed without uninstalling and reinstalling the VMM Server.
12. To specify a share for the VMM library, select Create a new library share.
13. Accept the following default values:
Share name: MSSCVMMLibrary
Share location: C:\ProgramData\Virtual Machine Manager Library Files
Share description: VMM Library Share
Notes
MSSCVMMLibrary is the default library share name; its location is:
%SYSTEMDRIVE%\ProgramData\Virtual Machine Manager Library
Files
Because ProgramData is a hidden folder, configure Windows Explorer to show hidden folders if you want to see the library share contents.
After VMM setup completes, you can add library shares and
additional library servers by using the VMM Console or by using
VMM PowerShell.
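For example, an additional library share can be added with VMM PowerShell along these lines (a sketch; the share path and description are hypothetical, and the share must already exist on a VMM library server):
Add-SCLibraryShare -SharePath '\\VMMSERVER01.contoso.com\ExtraLibrary' -Description 'Additional VMM library share'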
14. On the Review Installation summary page, review your selections and
click Install.
15. Wait for the VMM management server and VMM console to install. When
you see a message that the Setup wizard has completed the installation,
click Close.
16. After you have successfully installed the VMM Server in your
preproduction environment, configure storage by referring to the following
procedures:
“Configure VMM to discover and manage storage” on page 75
“Create SAN-copy-capable templates for testing VM rapid
provisioning” on page 90
Appendix B Array Masking and Hyper-V
Host Clusters
Before addressing how VMM handles unmasking operations for clusters, the next
section introduces the concept of storage groups and explains how VMM uses
storage groups to bind logical units on arrays to specific VM host servers.
VMM creates new storage groups and modifies existing storage groups. The following table lists commonly used synonyms for storage groups, initiator endpoints, and target endpoints.
Table 31. Commonly used synonyms for storage groups, initiators, and targets

Synonyms for the interface that binds initiators to targets:
  Storage groups
  Masking sets
  Masking views
  Views
  SCSI Protocol Controllers (SPCs)
  Note: SPC is the term typically used by SMI-S. SCSI is the common protocol used (over FC or over Ethernet) when storage is assigned remotely to a server.

Synonyms for the endpoint on a Hyper-V host:
  Initiator
  Storage initiator
  Host initiator
  Host initiator endpoint
  Host initiator port
  Initiator port
  Port
  Hardware ID
  HBA [1] port
  HBA [1]
  A specific implementation (FC SAN): FC initiator port
  A specific implementation (iSCSI SAN): iSCSI initiator port, iSCSI initiator

Synonyms for the endpoint on a storage array:
  Target
  Target endpoint
  Target port
  Target portal
  Target iSCSI portal
  Storage endpoint
  Storage target
  Storage port
  Port
  iSCSI portal [2]
  A specific implementation (FC SAN): FC target port
  A specific implementation (iSCSI SAN): iSCSI target port, iSCSI target
1. HBA is the physical adapter. An HBA may have one or more physical ports. In the NPIV case, one physical port can have multiple virtual ports associated with it, each with its own World Wide Name (WWN).
2. The portal in "iSCSI portal" refers to the IP address that initiators use to first gain access to iSCSI targets.
As indicated in the preceding table, the term "storage groups" is sometimes used interchangeably with SPCs. Using SCSI as the first element of the SPC acronym is appropriate because SCSI is the protocol used for both FC and iSCSI communications in a SAN. From an SMI-S perspective, a storage group is an instance of the CIM class CIM_SCSIProtocolController, as illustrated in the following figure.
VMM 2012 discovers existing storage groups during Level 2 discovery when it
retrieves storage groups (and storage endpoints) associated with discovered logical
units in VMM-managed storage pools on an array. VMM populates the VMM database
not only with discovered storage objects, but also with any discovered association
between a host and a logical unit. Storage groups act as the interface that binds host
initiator endpoints (called InitiatorPorts in the figure) on a Hyper-V VM host (or Hyper-
V host cluster) to storage endpoints (called TargetPorts in the figure) for specific
logical units on target arrays.
Figure 44. VMM modifies storage groups during masking operations to unmask LUNs to hosts
Thus, if a storage group contains a host initiator endpoint (InitiatorPort in the figure)
on the host side that maps to TargetPorts on the array side, VMM unmasks the logical
unit to that host through the association established by the storage group. If no
association exists, the logical unit is masked (the logical unit is not visible to the
host).
Ports per View (Ports refers to target ports on an array and View refers to
storage groups)
This property indicates that the array supports one of the following options:
Note The Hardware ID per View setting does not apply to EMC arrays but is included in this document for completeness. If you run the following VMM PowerShell command in an environment with EMC arrays, you can see that the value for MaskingOneHardwareIDPerView is always FALSE:
$Arrays = Get-SCStorageArray –All
$Arrays | Select-Object ObjectType, Name, Model,
MaskingOneHardwareIDPerView, HardwareIDFlags
A host-side configurable setting, Create Storage Groups per Cluster, is affected by the values of these two array-side properties. Hardware ID per View and Ports per View, individually and together, determine how you should configure VMM to manage storage groups for Hyper-V host clusters.
Valid values for the Ports per View property are a set of read-only strings limited to those in the following list. In each case, the value returned indicates the option that a specific type of array supports:
OnePortPerView (traditional):
Adding only one target port to the storage group is the only option
Not implemented by EMC VMAX, VNX, and CLARiiON CX4 arrays that support VMM 2012
AllPortsShareTheSameView (simplest):
Adding all target ports to the storage group is required
Supported by EMC VNX and CLARiiON CX4 arrays
MultiplePortsPerView (most flexible):
Adding one or more target ports to the storage group is allowed
Supported by EMC Symmetrix VMAX arrays (as shown in the example output that follows)
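The output below is consistent with a query of the following form (a sketch based on the cmdlet shown in the earlier note; the exact command used to capture this output is not reproduced in this section):
$Arrays = Get-SCStorageArray -All
$Arrays | Select-Object ObjectType, Name, Model, MaskingPortsPerView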
Example output
ObjectType : StorageArray
Name : APM00101000787
Model : Rack Mounted CX4_240
MaskingPortsPerView : AllPortsShareTheSameView
ObjectType : StorageArray
Name : 000194900376
Model : VMAX-1SE
MaskingPortsPerView : MultiplePortsPerView
ObjectType : StorageArray
Name : APM00111102546
Model : Rack Mounted VNX5100
MaskingPortsPerView : AllPortsShareTheSameView
Note The Hardware ID per View setting does not apply to EMC arrays but is included
in this document for completeness.
VMM creates a new masking set if no hardware ID already exists. The array detects
which hardware IDs exist on the host and a corresponding hardware ID object is
created on the array.
The Boolean value returned for the Hardware ID per View property indicates:
True (traditional):
This type of array supports only one hardware ID object (host initiator
port) per masking view (per SPC or storage group).
Not implemented by EMC VMAX, CLARiiON, or VNX arrays that support
VMM 2012.
False (more flexible):
This type of array supports multiple hardware ID objects (host initiator
ports) per masking view (per SPC or storage group). Storage groups can
contain multiple host initiator ports and more than one masking view can
exist.
Supported by EMC VMAX, CLARiiON and VNX arrays.
The Hardware ID per View property is an array-based property. The value is not set or
modified by VMM. However, the True or False value for this property is made available
to VMM through the SMI-S Provider. So you can use VMM cmdlets to return its value.
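For example, the output below can be produced with the cmdlet given in the earlier note:
$Arrays = Get-SCStorageArray -All
$Arrays | Select-Object ObjectType, Name, Model, MaskingOneHardwareIDPerView, HardwareIDFlags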
Example output
ObjectType : StorageArray
Name : APM00101000787
Model : Rack Mounted CX4_240
MaskingOneHardwareIDPerView : False
HardwareIDFlags : SupportsPortWWN, SupportsISCSIName
You can manually change the default value to specify that storage groups be created per cluster. Note that this setting is scoped to the array and therefore affects all host clusters that have storage allocated on that array.
The Boolean value that you can configure for Create Storage Groups per Cluster
specifies:
CreateStorageGroupsPerCluster = False
(more flexible default setting)
Creates storage groups on an array at the node level. Each storage group
contains all initiator ports for one node. Thus, the LUN (or LUNs)
associated with this storage group are made available to a single node or
to a subset of nodes in the cluster.
Drivers:
Supports the ability to make a specific LUN available to just one node,
which means that you can have a separate LUN for boot-from-SAN
scenarios. In the boot-from-SAN scenario, the boot LUN must be
specific to a particular host and only that host can access that LUN.
Supported by EMC VMAX, CLARiiON, and VNX arrays
CreateStorageGroupsPerCluster = True
(the simplest setting that improves performance with only one storage group
to manage)
Creates storage groups on an array at the cluster level. The storage group
contains all host initiator ports for all nodes in that cluster. Thus, the LUN
(or LUNs) associated with this storage group are made available to all
nodes in the cluster.
Drivers:
On some arrays, masking operations are serialized, which means that
the time required to unmask or mask a LUN increases if there are
multiple masking requests. In this case, timeouts can occur, so you
should consider setting CreateStorageGroupsPerCluster to True.
If you have a large number of nodes (8 to 16) in a cluster, you may
encounter timeout issues. The more nodes, the greater the chance of
timeouts. If so, consider setting CreateStorageGroupsPerCluster to
True.
If you have fewer than eight nodes per cluster, but the cluster is
heavily used, you may encounter timeout issues. If so, consider
setting CreateStorageGroupsPerCluster to True.
Notes
You can change the default value of False to True by using VMM cmdlets.
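A sketch of the cmdlets involved (assuming, as the property name in the output below suggests, that Set-SCStorageArray exposes a CreateStorageGroupsPerCluster parameter; the array name is taken from the example output):
# Inspect the current setting
Get-SCStorageArray -All | Select-Object ObjectType, Name, Model, StorageGroups, CreateStorageGroupsPerCluster
# Change the default value of False to True for a specific array
$array = Get-SCStorageArray -Name 'APM00101000787'
Set-SCStorageArray -StorageArray $array -CreateStorageGroupsPerCluster $true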
Example output
ObjectType : StorageArray
Name : APM00101000787
Model : Rack Mounted CX4_240
StorageGroups : {Storage Group, Storage Group}
CreateStorageGroupsPerCluster : False
How Ports per View and Hardware ID per View influence unmasking to a
cluster
This section ties together the array-side Hardware ID per View and Ports per View properties, described in the preceding two sections, with the host-side Storage Groups per Cluster setting. Together, they determine the appropriate way to unmask a LUN either to individual cluster nodes or to the entire cluster.
As noted earlier, configuring storage groups per cluster or per node is a VMM 2012
setting, whereas the value for Ports per View (one, all, or multiple) and for Hardware
ID per View (True or False) are array-based read-only properties. Because SMI-S
makes available to VMM the values for both of these properties, VMM can utilize both
properties to help determine the appropriate (or required) value for
CreateStorageGroupsPerCluster.
Impact of Ports per View and Hardware ID per View on storage groups per cluster
Table 32 shows how the combination of the value for the Hardware ID per View property and the value for the Ports per View property determines the configuration that VMM can or must use for host clusters. For each cell in Table 32, the combination of the values for these two array-side properties indicates whether CreateStorageGroupsPerCluster is True, False, or N/A (Not Applicable).
Notes
The term storage groups is used interchangeably with SPCs and masking views.
The values of these array-side properties affect how storage groups are managed or modified if storage groups already exist, or how they are created if none currently exist.
Table 32. Array-side properties whose values affect how storage groups are set for host clusters

One initiator port per storage group (Hardware ID per View) = FALSE:
  All target ports share the same storage group: CreateStorageGroupsPerCluster = TRUE or FALSE
  Multiple target ports per storage group: CreateStorageGroupsPerCluster = TRUE or FALSE
  One target port per storage group: CreateStorageGroupsPerCluster = TRUE or FALSE

One initiator port per storage group (Hardware ID per View) = TRUE:
  All target ports share the same storage group: CreateStorageGroupsPerCluster = N/A to EMC storage arrays
  Multiple target ports per storage group: CreateStorageGroupsPerCluster = N/A to EMC storage arrays
  One target port per storage group: CreateStorageGroupsPerCluster = N/A to EMC storage arrays

In the first two cases (all target ports share the same storage group, or multiple target ports per storage group), VMM creates one storage group for the entire cluster (for all nodes in the cluster). In the last case (one target port per storage group), VMM creates one storage group for each node in the cluster.
Figure 45. Each storage group has at least two target ports
In Figure 46, the result is not as intuitive as in Figure 45, because when you set
CreateStorageGroupsPerCluster to True, the result is one storage group per node.
Figure 46. Each storage group has only one target port (CreateStorageGroupsPerCluster set to either True or False)
Example 1 command
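The command itself is not reproduced in this copy of the document. Output of the form shown below can be obtained by expanding the StorageGroups property of a managed array, for example (a sketch; the array serial number is taken from the output that follows):
$array = Get-SCStorageArray -Name 'APM00111102546'
$array.StorageGroups | Format-List ObjectType, Name, ObjectId, StorageArray, StorageInitiators, StorageEndpoints, StorageLogicalUnits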
Example 1 output
ObjectType : StorageGroup
Name : Storage Group
ObjectId : root/emc:hSMIS-SRV-VM1.SR5DOM.ENG.EMC.COM:5988;Clar_LunMaskingSCSIProtocolController.CreationClassName=%'Clar_LunMaskingSCSIProtocolController%',DeviceID=%'CLARiiON+APM00111102546+b266edfa68a4e011bd47006016372cc9%',SystemCreationClassName=%'Clar_StorageSystem%',SystemName=%'CLARiiON+APM00111102546%'
StorageArray : APM00111102546
StorageInitiators : {5001438001343E40}
StorageEndpoints : {500601603DE00835, 500601683DE00835}
StorageLogicalUnits : {LaurieTestLun}
Example 2 command
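As with Example 1, the original command is not reproduced here. Output of this shape can be obtained from Get-SCStorageLogicalUnit, for example (a sketch; the LUN name is taken from the output that follows):
Get-SCStorageLogicalUnit | Where-Object { $_.Name -eq 'LaurieTestLun' } | Format-List ObjectType, Name, HostGroup, HostDisks, StorageGroups, StoragePool, NumberOfBlocks, ConsumableBlocks, AccessDescription, TotalCapacity, AllocatedCapacity, InUseCapacity, RemainingCapacity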
Example 2 output
ObjectType : StorageLUN
Name : LaurieTestLun
HostGroup : All Hosts
HostDisks : {\\.\PHYSICALDRIVE2, \\.\PHYSICALDRIVE2}
StorageGroups :
StoragePool : Pool 1
NumberOfBlocks : 33554432
ConsumableBlocks : 33554432
AccessDescription : Read/Write Supported
TotalCapacity : 17179869184
AllocatedCapacity : 0
InUseCapacity : 0
RemainingCapacity : 17179869184
Appendix C Enable Large LUNs on Symmetrix
VMAX Arrays
Note The contents of this appendix are adapted from “EMC Solutions Enabler
Symmetrix Array Controls CLI” available at
https://support.emc.com/docu40313_EMC-Solutions-Enabler-Symmetrix-
Array-Controls-CLI-V7.4-Product-Guide.pdf?language=en_US
For Enginuity 5874 and later, the maximum device size in cylinders is 262668.
For Enginuity 5773 and earlier, the maximum device size in cylinders is 65520.
If the auto_meta feature is set to DISABLED (the default value) and you try to create a
device larger than the allowable maximum, creating the device will fail. However, if
you set auto_meta to ENABLE and then specify the creation of a single standard
device larger than the maximum allowable size, Symmetrix will create a metadevice
instead of a standard device.
The following table lists, by Enginuity version, the meta device sizes that are enabled
by the auto_meta feature.
Table 33. Meta device sizes enabled by the Auto Meta feature

Enginuity version: 5874
  Maximum single device size (CYL): 262668
  Maximum single device size (GB): 240
  Minimum Auto Meta size (CYL): 262669
  Auto Meta member size (CYL): 262668
1. Open the command shell (using the Windows Command Prompt, Windows
PowerShell, or a Linux or Unix shell).
2. Run the following command to verify if auto_meta is disabled:
symcfg list -sid xxxx -v
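If auto_meta is disabled, it can be enabled from the same command shell with symconfigure. The following is a hypothetical sketch only (the option names follow the parameters listed later in this appendix and the values in Table 33; confirm the exact syntax for your Enginuity level in the Solutions Enabler Array Controls guide):
symconfigure -sid xxxx -cmd "set symmetrix auto_meta=ENABLE, min_auto_meta_size=262669, auto_meta_member_size=262668, auto_meta_config=CONCATENATED;" commit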
1. In the Console, right-click Symmetrix ID, and then select Symmetrix Admin.
2. Select Set Symmetrix Attributes, and then enable the Auto Meta feature.
3. Enter applicable values for each of the following parameters:
Minimum Auto Meta Size
Auto Meta Member Size
Auto Meta Configuration
“Maximum device size limits” on page 152 has more details on the correct values.
4. Select Add to Config Session List (which will create a configuration session
task).
5. Commit the task from the Config Session menu.
Appendix D Configure Symmetrix VMAX
TimeFinder for Rapid VM
Provisioning
You can use this appendix to help determine which configuration steps to perform
before deploying VMs.
See Also:
“EMC Solutions Enabler Symmetrix TimeFinder Family CLI V7.4 Product Guide” at
https://support.emc.com/docu40317_EMC-Solutions-Enabler-Symmetrix-
TimeFinder-Family-CLI-V7.4-Product-Guide.pdf?language=en_US
“EMC Symmetrix Timefinder Product Guide” at
https://support.emc.com/docu31118_Symmetrix-TimeFinder-Product-
Guide.pdf?language=en_US
TimeFinder/Snap overview
TimeFinder/Snap creates space-saving, logical point-in-time images called
snapshots. You can create multiple snapshots simultaneously on multiple target
devices from a single source device. Snapshots are not complete copies of data. They
are logical images of the original information, based on the time the snapshot was
created.
TimeFinder/Snap uses source and target devices where the target device is a special
Symmetrix device known as a virtual device (VDEV). Through the use of device
pointers to the original data, VDEVs allow you to allocate space based on changes to
a device (using an Asynchronous Copy on First Write, or ACOFW, mechanism), rather
than replicating the complete device.
SAVE device
A SAVE device is a Symmetrix device that is not accessible to the host and can be
accessed only through VDEVs that store data on SAVE devices. SAVE devices provide
pooled physical storage and are configured with any supported RAID scheme. SAVE
devices are placed within logical constructs called snap pools (also referred to as
SAVE pools) in order to aggregate or isolate physical disk resources for the purpose of
storing data associated with TimeFinder/Snap. The following figure shows the
relationship between source devices, VDEVs, and SAVE devices.
To support snapshot operations, the EMC SMI-S Provider can automatically select
appropriately sized VDEVs, or it can create new VDEVs. By default, the SMI-S Provider
first attempts to find pre-created VDEVs within the Symmetrix array before the
provider creates new VDEVs. You can find the settings that control this behavior in the
file called OSLSProvider.conf (located in the EMC\ECIM\ECOM\Providers installation
directory on your SMI-S Provider server). (These settings are described in the table
labeled “Property descriptions and default values in the OSLSProvider.conf settings
file” at the end of this section.)
When a VM is deleted from VMM (by using the VMM Console or VMM PowerShell), a
request is sent to the provider to automatically terminate the snapshot relationship.
However, the VDEV is not deleted as a part of the VM delete process.
TimeFinder/Clone overview
TimeFinder/Clone is a local Symmetrix replication solution that creates full-device
point-in-time copies that you can use for backups, decision support, data warehouse
refreshes, or any other process that requires parallel access to production data. To
support rapid VM deployment in a VMM 2012 private cloud, TimeFinder/Clone is
used to create full-device copies. VMM uses these copies to deploy VMs from VM
templates that reside on a source LUN on an array that the VM host can access.
When using TimeFinder/Clone, by default, the SMI-S Provider creates a full volume,
non-differential copy of the source device. Non-differential means that after the clone
copy is complete, no incremental relationship is maintained between the source
device and the clone target. The VM deployment process waits for the full data copy
(from the source to the clone target) to complete before VMM continues the
associated VM deployment job. After the copy completes, the provider terminates the
clone relationship.
You can find the settings that control this behavior in the file OSLSProvider.conf
(located in the EMC\ECIM\ECOM\Providers installation directory). For the provider to
select existing clone devices automatically, you must change the default setting in
the file OSLSProvider.conf. For a list of the possible default values, review Table 34.
One benefit of pre-creating clone targets for automatic selection is that doing so
accelerates the VM deployment process, especially when multiple clones are
requested in parallel. When the SMI-S Provider creates devices, it does so in a serial
fashion. By default, the clone copy process also occurs serially when there are
multiple requests. If multiple clone requests occur, and if those requests must create
clone targets as part of establishing the clone relationship, the VM deployment
process will be slower. By pre-creating clone targets based on the requirements listed
in the bullets, the provider must only choose the clone target, establish the clone
copy session, and then wait for the clone copy to complete.
When a VM is deleted from VMM (by using the VMM Console or VMM PowerShell), a
request is sent to the provider to automatically delete the device (in the case of a
clone, the device is not a VDEV) that is associated with the virtual machine. This frees
space within the disk group.
Appendix E Configure VNX and CLARiiON for
Rapid VM Provisioning
Appendix F Terminology
Terminology
The following defines terms used in this document.
boot from SAN Refers to a computer booting (loading) its operating system over a connection to a SAN rather than from a local hard disk on the computer.
CIM-XML client A component on the VMM Server that enables the Microsoft Storage Management Service (by using the SMI-S module) to communicate with the SMI-S Provider over the CIM-XML protocol.
CIM-XML protocol The communication mechanism between the VMM Server's Storage
Management Service and the SMI-S Provider.
Common Information Model (CIM) A DMTF (Distributed Management Task Force) standard that provides a model for representing heterogeneous computer, network, and storage resources as objects. The model also includes the relationships among these objects:
CIM Infrastructure Specification defines the object-oriented architecture of CIM.
CIM Schema defines a common, extensible language for representing dissimilar objects.
CIM Classes identify specific types of IT resources (for example: CIM_NetworkPort).
CIM enables VMM to administer dissimilar elements (storage-related objects) in a common way through the SMI-S Provider. The EMC SMI-S Provider version 4.4.0 (or later) supports CIM Schema version 2.31.0.
Distributed Management Task Force (DMTF) An international organization that promotes the development of standards that simplify management of millions of IT systems worldwide. DMTF creates standards that enable interoperability at the enterprise level among multi-vendor systems, tools, and solutions.
EMC Common Information Model (ECIM) Defines a CIM-based model for representing IT objects (for example: EMC_NetworkPort, which is a subclass of the CIM class CIM_NetworkPort).
EMC Common Object Manager (ECOM) Serves as the interoperability hub for the EMC Common Management Platform (CMP) that manages EMC storage systems.
EMC SMI-S Provider EMC software that uses SMI-S to allow management of EMC arrays. EMC SMI-S Provider version V4.4.0 is certified by SNIA as compliant with SMI-S 1.3, 1.4, and 1.5. VMM uses the EMC SMI-S Provider to discover arrays, storage pools, and logical units; to classify storage; to assign storage to one or more host groups; to create clones and snapshots; to delete logical units; and to unmask or mask logical units to a Hyper-V host or cluster.
endpoint (host initiator endpoints and storage endpoints) Two endpoints are associated with each other and are thus best described together: host-initiator endpoints and storage endpoints. Host-initiator endpoints on a Hyper-V VM host are bound (mapped) to storage endpoints on the target array. This mapping is done through an intermediary called a storage group (also called a masking set or SPC). See also the lists of synonyms for initiator endpoints and storage endpoints in "Storage groups unmask logical units to Hyper-V VM hosts" on page 138.
Fibre Channel Protocol (FCP) A transport protocol (analogous to TCP on IP networks) that sends SCSI
commands over Fibre Channel networks. All EMC storage systems support
FCP.
host agent Service installed on Hyper-V Servers (VM hosts) that communicates with
the VMM Server. VMM does not install host agents for Citrix XenServer
hosts or VMware ESX hosts.
Host Bus Adapter (HBA) Connects a host computer to a storage device for input/output (I/O)
processing. An HBA is a physical device that contains one or more ports; a
single system contains one or more HBAs. FC HBAs are more common, but
iSCSI HBAs also exist:
FC HBA: A physical card on the host that acts as the initiator that sends
commands from the host to storage devices on a target array.
iSCSI HBA: A physical card on the host that acts as the initiator that
sends commands from the host to storage devices on a target array.
A computer with more than one HBA can connect to multiple storage devices. In this test environment, HBA refers specifically to one or more devices on a VM host that initiate a connection to storage arrays; this connection is most likely an FC HBA connection.
initiator/target These terms are binary opposites and are thus best defined together:
Initiator (on the host): The endpoint (a SCSI port or an FC port) on the host that requests information and receives responses from the target array.
Target (on the array): The endpoint (a SCSI port or an FC port) that
returns information requested by the initiator. A target consists of
one or more LUNs and, typically, returns one or more LUNs to the
initiator. See “endpoint.”
Internet Engineering Task Force (IETF) An international organization that promotes the publication of high-quality, relevant technical documents and Internet standards that influence the way that people design, use, and manage the Internet. IETF focuses on improving the Internet from an engineering point of view. The IETF's official products are documents, called RFCs, published free of charge.
Internet Small Computer System Interface (iSCSI) An IP-based standard developed by IETF that links data storage devices to each other and to computers. iSCSI carries SCSI packets (SCSI commands) over TCP/IP networks, including local area networks (LANs), wide area networks (WANs), and the Internet. iSCSI supports storage area networks (SANs) by enabling location-independent data storage and retrieval and by increasing the speed of transmission of storage data. Almost all EMC storage systems support iSCSI, in addition to supporting FC (one exception is the VNX 5100, which supports only FC).
logical unit A unit of storage within a storage pool on a storage array in a SAN. Each
logical unit exported by an array controller corresponds to a virtual disk.
From the perspective of a host computer that can access that logical unit,
the logical unit appears as a disk drive.
In VMM, a logical unit is typically a virtual disk that contains the VHD file
for a VM.
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,Description,Enabled,SMDisplayName,SMName,SMLunIdFormat,SMLunIdDescription
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,BlockSize,NumberOfBlocks,ConsumableBlocks,TotalCapacity,InUseCapacity,AllocatedCapacity,RemainingCapacity
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,WorkloadType,Status,ThinlyProvisioned,StoragePool,StorageGroups,HostGroup,IsAssigned,IsViewOnly | fl
Get-SCStorageLogicalUnit | Select-Object ObjectID,ObjectType,Name,SourceLogicalUnit,LogicalUnitCopies,LogicalUnitCopySource | fl
Logical Unit Number (LUN) A number that identifies a logical unit of storage within a storage pool on a
SAN array. Frequently, the acronym LUN is used as a synonym for the
logical unit that it identifies.
LUN mapping Refers to configuring access paths (by means of a target port) to logical
units, which makes storage that is represented by logical units available
for use by servers.
LUN masking Refers to configuring access permissions to determine which hosts have
access to specific logical units on SANs.
LUN mask A set of access permissions that identify which initiator (on a host) can
access specific LUNs on a target (an array). This mask makes available a
LUN (and the logical unit of storage identified by that LUN) to specified
hosts, and makes that LUN unavailable to other hosts.
mask / unmask These terms are binary opposites and are thus best defined together:
Unmask: Assign a logical unit to a host or host cluster.
Mask: Hide a logical unit from a host or host cluster.
Microsoft Storage Management Service A service (a WMI provider installed by default on the VMM Server) used by VMM to discover storage objects and to manage storage operations. This service is an SMI-S client that communicates with the SMI-S Provider server over the network. It converts retrieved SMI-S objects to Storage Management Service objects that VMM can manage.
N_Port ID Virtualization (NPIV) (applies only to Fibre Channel) Enables multiple N_Port IDs to share a single physical N_Port. This allows multiple FC initiators to occupy a single physical port, easing hardware requirements for SANs.
In VMM, the NPIV Provider (on a VM host) uses HBA technology (which creates virtual HBA ports, also called vPorts, on hosts) to enable a single physical FC port to function as multiple logical ports, each with its own identity. VMM 2012 automates the creation (and deletion) of vPorts as part of the SAN transfer of a VM (from one computer to another) on an FC SAN. VMM 2012 does not create vPorts when creating a new VM.
Operating Environment (OE) The Operating Environment (array OS) on an EMC storage array:
Enginuity is a specialized operating environment (OE) designed by EMC
for data storage. It is used to control components in a Symmetrix array.
FLARE is a specialized operating environment (OE) designed by EMC for
data storage and used to control components in a CLARiiON array.
FLARE manages all input/output (I/O) functions of the storage array.
VNX OE is a specialized operating environment designed by EMC to
provide file and block code for a unified system. VNX OE contains basic
features, such as thin provisioning. For advanced features, you can buy
add-ons, such as the Total Efficiency Pack.
See also “Array OS.”
self-hosted service A service that runs within a process (application) that the developer
created. The developer controls its lifetime, sets the properties of the
service, opens the service (which sets it into a listening mode), and closes
the service. Services can be self-hosted or can be managed by an existing
hosting process.
Small Computer Systems Interface (SCSI) A set of standards that define how to physically connect, and transfer data between, computers and external devices such as storage arrays. SCSI standards define commands, protocols, and electrical and optical interfaces. Typically, a computer is an initiator and a data storage device is a target.
SMI-S module A component of the Microsoft Storage Management Service that maps
Storage Management Service objects to SMI-S objects.
storage area network (SAN) A dedicated network that provides access to consolidated, block level
data storage, thus making storage devices, such as disk arrays, accessible
to servers. Storage devices appear, to the server's operating system, like
locally attached devices.
VMM 2012 supports FC and iSCSI SANs:
FC SAN: The VM host uses a host bus adapter (HBA) to access the array
by initiating a connection to a target on the array.
iSCSI SAN: The VM host uses the Microsoft iSCSI Initiator Service to
access the array by issuing a SCSI command to a target on the array.
storage array A disk storage system that contains multiple disk drives attached to a SAN
in order to make storage resources available to servers. Also called a
storage system.
SANs make storage arrays available to servers. Arrays appear like
locally attached devices to the server operating system.
EMC storage systems that support the VMM private cloud include the
Symmetrix VMAX family, the CLARiiON CX4 series, and the VNX family.
VMM discovers storage resources on storage arrays and can then make
storage resources available to VM hosts. An array in a VMM private
cloud must support the FC or iSCSI storage protocol, or both. Within an
array, the storage elements most important to VMM are storage pools
and logical units.
storage classification A string value defined in VMM and associated with a storage pool that represents a level of service or quality of service guarantee. For example, a typical naming convention categorizes storage pools as Gold, Silver, and Bronze.
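As an illustrative sketch in the VMM PowerShell command shell (the classification name and the pool selection below are example values, not taken from this document), a classification can be created and then associated with a storage pool:
$classification = New-SCStorageClassification -Name "Gold" -Description "High-performance storage"
$pool = Get-SCStoragePool | Select-Object -First 1
# Associate the classification with the chosen storage pool
Set-SCStoragePool -StoragePool $pool -StorageClassification $classification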
storage group Binds host initiator endpoints on a Hyper-V host to storage endpoints on
the target array. VMM discovers existing storage groups but does not
display storage groups in the VMM Console. Instead, you can display
storage groups by using the following VMM PowerShell command:
Get-SCStorageArray -All | Select-Object
Name,ObjectType,StorageGroups | Format-List
Synonyms:
Masking set
SCSI Protocol Controller (SPC)
See also:
“endpoint”
“initiator / target”
“Appendix B: Array Masking and Hyper-V Host Clusters” on page 137
Storage Management Initiative Specification (SMI-S) A standard developed by the Storage Networking Industry Association (SNIA). SMI-S defines a standardized management interface that enables a management application, such as VMM, to discover, assign, configure, and automate functionality for heterogeneous storage systems in a unified way.
An SMI-S Provider implements the SMI-S standard. The EMC SMI-S Provider enables VMM to manage EMC VMAX, CLARiiON, and VNX arrays in a unified way.
storage pool A repository of homogeneous or heterogeneous physical disks on a
storage array from which logical units (often called LUNs) can be created.
A storage pool on an array can be categorized by VMM based on service
level agreement (SLA) factors, such as performance. For example, a typical
naming convention used is to categorize storage pools as Gold, Silver, and
Bronze.
To see information about the storage pools in your environment, open the
VMM PowerShell command shell and type the following:
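A likely form of that command, assuming the VMM 2012 storage cmdlets (the properties selected here are illustrative), is:
Get-SCStoragePool | Select-Object Name, Classification | Format-List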
thin provisioning Configurable feature that lets you allocate storage on demand, as data is written, rather than reserving the full capacity up front.
Virtual Disk Service (VDS) VDS can be either of the following, which should not be confused:
VDS software provider on the VM host (central to VMM 2012):
Retrieves disk and volume information on the host, initializes and
partitions disks on the host, and formats and mounts volumes on the
host.
VDS hardware provider on the VMM Server (deprecated in VMM 2012):
Used only for storage arrays that do not support SMI-S. The VDS
hardware provider can discover and communicate with SAN arrays and
can enable SAN transfers, but the VDS hardware provider does not
support automated provisioning.
VM host A physical computer (managed by VMM) on which you can deploy one or more VMs. VMM 2012 supports Hyper-V hosts (on which the VMM agent is installed), VMware ESX hosts, and Citrix XenServer hosts. However, in the current release, VMM supports storage provisioning only for Hyper-V hosts.
VMM PowerShell command shell Command-line interface (CLI) for the VMM Server. VMM 2012 provides 450 Windows PowerShell cmdlets developed specifically for VMM to perform all tasks that are available in the VMM Console. VMM 2012 includes 25 new storage-specific cmdlets.
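To list the storage-specific cmdlets available in a given installation, a quick check such as the following can be run in the VMM PowerShell command shell (the SCStorage* noun filter is an assumption based on the naming pattern of the storage cmdlets):
Get-Command -Noun SCStorage* | Sort-Object Name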
VMM Console Graphical user interface (GUI) for the VMM Server. You can use the VMM
Console on the VMM Server or from a remote computer.
VMM Library Server File server (managed by VMM) used as a repository to store files (used for VMM tasks) such as virtual hard disks (VHDs), ISOs, scripts, VM templates (typically used for rapid provisioning), service templates, application installation packages, and other files.
VHD files used to support rapid provisioning of VMs are contained within LUNs on the arrays but are mounted to folders on the Library Server.
You can install the VMM Library on the VMM Server, on a VM host, or on a stand-alone Hyper-V host.
VMM Management Server (VMM Server) Service used to manage VMM objects such as virtual machines, hypervisor physical servers, storage, network, clouds, and services. Also called VMM Server.
Web Services Management (WS-Man) Enables IT systems to access and exchange management information. WS-Man is a DMTF standard that supports the use of web services to enable remote access to network devices and promotes interoperability between management applications and managed resources.
Web-Based Enterprise Management (WBEM) A group of standards that enable accessing information and managing compute, network, and storage resources in an enterprise environment.
WBEM includes:
CIM: A model to represent resources.
CIM-XML: An XML-based protocol, CIM-XML over HTTP, that lets network components communicate.
WS-Man: A SOAP-based protocol, Web Services for Management (WS-Management, or WS-Man), that lets network components communicate.
xmlCIM: An XML representation of CIM models and messages (xmlCIM) that travels by way of CIM-XML.
Windows Management Instrumentation (WMI) The Microsoft implementation of the WBEM standard that enables accessing management information in an enterprise-scale distributed environment.
WMI uses the CIM standard to represent systems, applications, networks, devices, and other managed components.
The WMI Service is the Windows implementation of the CIM Object Manager (CIMOM), which provides applications with uniform access to management data.
The Microsoft Storage Management Service that VMM 2012 uses to communicate with the SMI-S Provider is implemented as a WMI provider.
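As a generic illustration of a WMI query from Windows PowerShell (not specific to VMM; the class shown is a standard CIMv2 class):
# Query a standard WMI class in the root\cimv2 namespace
Get-WmiObject -Namespace root\cimv2 -Class Win32_ComputerSystem | Format-List Name, Manufacturer, Model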
Windows Remote Management (WinRM) The Microsoft implementation of WS-Man. WinRM enables Windows PowerShell 2.0 cmdlets and scripts to be invoked on one or more remote machines.
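For example, a cmdlet can be invoked on a remote machine over WinRM as follows (the computer name is a placeholder):
# Run Get-Service on a remote computer through WinRM
Invoke-Command -ComputerName "Server01" -ScriptBlock { Get-Service WinRM }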
Appendix G: References
References
The sources in this appendix focus on storage automation enabled by the SNIA SMI-S
standard in the context of EMC storage systems and the VMM 2012 private cloud.
Standards sources
Table 36. SNIA, DMTF, and other standards related to storage automation
Source | Website | Link
CIM Infrastructure Specification | DMTF | http://dmtf.org/sites/default/files/standards/documents/DSP0004_2.6.0_0.pdf
Common Information Model (CIM) | DMTF | http://dmtf.org/standards/cim
Standards and Technology | DMTF | http://dmtf.org/standards
Web Services Management (WS-MAN) | DMTF | http://dmtf.org/standards/wsman
Web-Based Enterprise Management (WBEM) | DMTF | http://dmtf.org/standards/wbem
SMI Specification | SNIA | http://www.snia.org/sites/default/files/SMI-Sv1.6r4-Block.book_.pdf
SMI-S Conforming Provider Companies | SNIA | http://www.snia.org/ctp/conformingproviders/index.html
SNIA – SMI-S Conformance Testing Program – Official CTP Test Results – EMC Corporation | SNIA | http://www.snia.org/ctp/conformingproviders/emc.html
SNIA Conformance Testing Program (SNIA-CTP) | SNIA | http://www.snia.org/ctp/
Storage Management Initiative (SMI) forums | SNIA | http://www.snia.org/forums/smi
Storage Management Initiative Specification (SMI-S) | SNIA | http://www.snia.org/tech_activities/standards/curr_standards/smi
Storage Management Technical Specification Overview | SNIA | http://www.snia.org/sites/default/files/SMI-Sv1.3r6_Overview.book_.pdf
Storage Networking Industry Association (SNIA) | SNIA | http://www.snia.org/
EMC sources
Table 37 lists some of the EMC sources relevant to storage systems that support VMM storage automation.
You can find all EMC documents on EMC Online Support at https://support.emc.com.
Access to these documents requires login credentials. Contact your EMC sales representative for details about obtaining a valid support agreement or for answers to any questions about your account.
Table 37. EMC-related documents for VMM 2012 storage automation
EMC-related document | Link or location
Arrays–Announcing the EMC Symmetrix VMAX 40K, 20K, 10K Series and Enginuity 5876 | http://www.emc.com/collateral/hardware/white-papers/h10497-enginuity5876-new-features-vmax-wp.pdf
EMC SMI-S Provider Release Notes version 4.4.0 (or later) | "SMI-S Provider Release Notes" on EMC Online Support at https://support.emc.com/ (navigate to the most recent version)
Microsoft sources