HP 3PAR VMware ESX Implementation Guide
Abstract
This implementation guide provides information for establishing communications between an HP 3PAR StoreServ Storage and a VMware ESX host. General information is also provided on the basic steps required to allocate storage on the HP 3PAR StoreServ Storage that can then be accessed by the ESX host.
Copyright 2013 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Acknowledgments

Java and Oracle are registered trademarks of Oracle and/or its affiliates. Windows is a U.S. registered trademark of Microsoft Corporation.
Contents
1 Introduction...............................................................................................5
    Supported Configurations..........................................................................................................5
    HP 3PAR OS Upgrade Considerations.........................................................................................6
    Audience.................................................................................................................................6
    Modifying the Tuneable Parameters for Queue Depth Throttling in ESX 3.x............................................42
    ESX/ESXi 4.1, ESXi 5.x Additional Feature Considerations...........................................................44
        Storage I/O Control.......................................................................................................44
        vStorage APIs for Array Integration (VAAI)........................................................................44
        HP 3PAR VAAI Plugin 1.1.1 for ESXi 4.1.............................................................................45
        HP 3PAR VAAI Plugin 2.2.0 for ESXi 5.x.............................................................................45
        UNMAP (Space Reclaim) Storage Hardware Support for ESXi 5.x.................................................46
        Out-of-Space Condition for ESX 4.1 and ESXi 5.x................................................................46
        Additional New Primitives Support on ESXi 5.x...................................................................48
        VAAI and New Feature Support Table.................................................................................48
        VAAI Plugin Verification...................................................................................................49
        VMware All Paths Down....................................................................................................51
8 Booting the VMware ESX Server from the HP 3PAR StoreServ Storage.............73
9 Support and Other Resources.....................................................................74
    Contacting HP........................................................................................................................74
    HP 3PAR documentation..........................................................................................................74
    Typographic conventions.........................................................................................................77
    HP 3PAR branding information.................................................................................................77
10 Documentation feedback.........................................................................78
1 Introduction
This implementation guide provides information for establishing communications between an HP 3PAR StoreServ Storage and a VMware ESX host. General information is also provided on the basic steps required to allocate storage on the HP 3PAR StoreServ Storage that can then be accessed by the ESX host. The information contained in this implementation guide is the outcome of careful testing of the HP 3PAR StoreServ Storage with as many representative hardware and software configurations as possible.
Required
For predictable performance and results with your HP 3PAR StoreServ Storage, you must use the information in this guide in concert with the documentation set provided by HP 3PAR for the HP 3PAR StoreServ Storage and the documentation provided by the vendor for their respective products.
Supported Configurations
The following types of host connections are supported between the HP 3PAR StoreServ Storage and hosts running a VMware ESX OS:

Fibre Channel connections are supported between the HP 3PAR StoreServ Storage and the ESX host server in both a fabric-attached and direct-connect topology.

For information about supported hardware and software platforms, see the HP Single Point of Connectivity Knowledge (HP SPOCK) website:
http://www.hp.com/storage/spock

For more information about HP 3PAR storage products, follow the links in Table 1 (page 5).

Table 1 HP 3PAR Storage Products
Product                          See...
HP 3PAR StoreServ 7000 Storage   http://h20000.www2.hp.com/bizsupport/TechSupport/Home.jsp?lang=en&cc=us&prodTypeId=12169&prodSeriesId=5335712&lang=en&cc=us
                                 http://h20000.www2.hp.com/bizsupport/TechSupport/Home.jsp?lang=en&cc=us&prodTypeId=12169&prodSeriesId=5157544&lang=en&cc=us
                                 http://h20180.www2.hp.com/apps/Nav?h_pagetype=s-001&h_lang=en&h_cc=us&h_product=5044012&h_client=S-A-R163-1&h_page=hpcom&lang=en&cc=us
                                 http://h20180.www2.hp.com/apps/Nav?h_pagetype=s-001&h_lang=en&h_cc=us&h_product=5046476&h_client=S-A-R163-1&h_page=hpcom&lang=en&cc=us
                                 http://h20180.www2.hp.com/apps/Nav?h_pagetype=s-001&h_lang=en&h_cc=us&h_product=5053605&h_client=S-A-R163-1&h_page=hpcom&lang=en&cc=us
Required
All installation steps should be performed in the order described in this implementation guide.
Audience
This implementation guide is intended for system and storage administrators who perform and manage the system configurations and resource allocation for the HP 3PAR StoreServ Storage. This guide provides basic information that is required to establish communications between the HP 3PAR StoreServ Storage and the VMware ESX host and to allocate the required storage for a given configuration. However, the appropriate HP documentation must be consulted in conjunction with the ESX host and host bus adapter (HBA) vendor documentation for specific details and procedures.

NOTE: This implementation guide is not intended to reproduce or replace any third-party product documentation. For details about devices such as host servers, HBAs, fabric switches, and non-HP 3PAR software management tools, consult the appropriate third-party documentation.
Required
The following setup must be completed before connecting the HP 3PAR StoreServ Storage port to a device.

NOTE: When deploying HP Virtual Connect Direct-Attach for HP 3PAR Storage, where the HP 3PAR StoreServ Storage ports are cabled directly to the uplink ports on the HP Virtual Connect FlexFabric 10 Gb/24-port Module for c-Class BladeSystem, follow the steps for configuring the HP 3PAR StoreServ Storage ports for a fabric connection. For more information about HP Virtual Connect, HP Virtual Connect interconnect modules, and the HP Virtual Connect Direct-Attach feature, see the Virtual Connect documentation and the HP SAN Design Reference Guide. This documentation is available on the HP BSC website:
http://www.hp.com/go/3par/
(Sample showport -par output, showing the CfgRate, MaxRate, Class2, UniqNodeWwn, VCN, and IntCoal settings for each port.)
2. If the port has not been configured, take the port offline before configuring it for the ESX host by issuing the following HP 3PAR OS CLI command:

controlport offline [node:slot:port]

CAUTION: Before taking a port offline in preparation for a direct or fabric connection, verify that the port has not been previously defined and that it is not already connected to a host, as this would interrupt the existing host connection. If an HP 3PAR StoreServ Storage port is already configured for a direct or fabric connection, you can ignore this step; you do not have to take the port offline.
3. To configure the port for the host server, issue the following command, with the appropriate option for the -ct parameter:

controlport config host -ct loop/point [node:slot:port]

For a direct connection: use the -ct loop parameter.
For a fabric connection: use the -ct point parameter.
4. Issue the controlport rst command to reset and register the new port definitions.

The following example shows how to set up a fabric-connected port:
% controlport offline 1:5:1
% controlport config host -ct point 1:5:1
% controlport rst 1:5:1
2. To create host definitions, issue the createhost command with the -persona option to specify the persona and the host name.

For HP 3PAR OS 3.1.1 or earlier:
# createhost -persona 6 ESXserver1 10000000C9724AB2 10000000C97244FE
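For HP 3PAR OS 3.1.2, host persona 11 (VMware) is used instead, as described in the NOTE below. A minimal sketch, assuming the same host name and WWNs as the previous example:

# createhost -persona 11 ESXserver1 10000000C9724AB2 10000000C97244FE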
3. To verify that the host has been created, issue the showhost command.

For HP 3PAR OS 3.1.1 or earlier, using persona 6:
Id Name       Persona        -WWN/iSCSI_Name- Port
 0 ESXserver1 Generic-legacy 10000000C9724AB2 ---
                             10000000C97244FE ---
NOTE: If the persona is not correctly set, use the sethost -persona <host number> <hostname> command to correct the issue, where the host number is 6 (for HP 3PAR OS 3.1.1 or earlier) or 11 (for HP 3PAR OS 3.1.2). A reboot of the ESX host server is required if the host persona is changed to 11.

NOTE: See the HP 3PAR Command Line Interface Reference or the HP 3PAR Management Console Help for complete details on using the controlport, createhost, and showhost commands. These documents are available on the HP BSC website:
http://www.hp.com/go/3par/
Required
The following setup must be completed before connecting the HP 3PAR StoreServ Storage port to a device.
Verify port persona 1, connection type loop, using the HP 3PAR OS CLI showport -par command.
Verify port persona 7, connection type point, using the HP 3PAR OS CLI showport -par command.
2. To verify that the host has been created, issue the HP 3PAR OS CLI showhost command.
# showhost -persona
Id Name       -WWN/iSCSI_Name- Port
 0 ESXserver1 10000000C9724AB2 ---
              10000000C97244FE ---
3. (Optional) You can create a host set using the createhostset command, which allows multiple host names to be grouped into a single host definition set. A host set simplifies exporting storage volumes to hosts in a cluster: rather than exporting the same storage volumes individually to each host, you can export them once to the host set, and the export is applied to each of the hosts defined in the host set.
# createhostset ESXCluster
# createhostset -add ESXCluster ESXserver1
# createhostset -add ESXCluster ESXserver2
# showhostset
Id Name       Members
 0 ESXCluster ESXServer1
              ESXServer2
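A volume can then be exported to the whole cluster in one step with the createvlun command. A minimal sketch, assuming a hypothetical virtual volume named ESXVol exported as LUN 1:

# createvlun ESXVol 1 set:ESXCluster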
NOTE: See the HP 3PAR Command Line Interface Reference or the HP 3PAR Management Console Help for complete details on using the controlport, createhost, and showhost commands. These documents are available on the HP BSC website: http://www.hp.com/go/3par/
Required
Employ fabric zoning, using the methods provided by the switch vendor, to create relationships between host server HBA ports and storage server ports before connecting the host server HBA ports or HP 3PAR StoreServ Storage ports to the fabric(s).
Fibre Channel switch vendors support the zoning of the fabric end-devices in different zoning configurations. There are advantages and disadvantages with each zoning configuration, so choose a zoning configuration based on your needs. The HP 3PAR arrays support the following zoning configurations:

One initiator to one target per zone

One initiator to multiple targets per zone (zoning by HBA). This zoning configuration is recommended for the HP 3PAR StoreServ Storage. Zoning by HBA is required for coexistence with other HP Storage arrays.

NOTE: For high availability/clustered environments that require multiple initiators to access the same set of target ports, HP recommends that separate zones be created for each initiator with the same set of target ports.

NOTE: The storage targets in the zone can be from the same HP 3PAR StoreServ Storage, multiple HP 3PAR StoreServ Storages, or a mixture of HP 3PAR and other HP storage systems.

For more information about using one initiator to multiple targets per zone, see Zoning by HBA in the Best Practices chapter of the HP SAN Design Reference Guide. This document is available on the HP BSC website:
http://www.hp.com/go/3par/

If you use an unsupported zoning configuration and an issue occurs, HP may require that you implement one of the supported zoning configurations as part of the troubleshooting or corrective action.

After configuring zoning and connecting each host server HBA port and HP 3PAR StoreServ Storage port to the fabric(s), verify the switch and zone configurations using the HP 3PAR OS CLI showhost command to ensure that each initiator is zoned with the correct target(s).
HP 3PAR Coexistence
The HP 3PAR StoreServ Storage array can coexist with other HP array families. For supported HP array combinations and rules, see the HP SAN Design Reference Guide, available on the HP BSC website:
http://www.hp.com/go/3par/
The following fill-word modes are supported on a Brocade 8 Gb/s switch running FOS firmware 6.3.1a and later:
admin> portcfgfillword
Usage: portCfgFillWord PortNumber Mode [Passive]
Mode: 0/-idle-idle   - IDLE in Link Init, IDLE as fill word (default)
      1/-arbff-arbff - ARBFF in Link Init, ARBFF as fill word
      2/-idle-arbff  - IDLE in Link Init, ARBFF as fill word (SW)
      3/-aa-then-ia  - If ARBFF/ARBFF failed, then do IDLE/ARBFF
HP recommends that you set the fill word to mode 3 (aa-then-ia), which is the preferred mode, using the portcfgfillword command. If the fill word is not correctly set, er_bad_os counters (invalid ordered set) will increase when you use the portstatsshow command while connected to 8 Gb HBA ports, as they require the ARBFF-ARBFF fill word. Mode 3 will also work correctly for lower-speed HBAs, such as 4 Gb/2 Gb HBAs. For more information, see the Fabric OS Command Reference Manual supporting FOS 6.3.1a and the FOS release notes.

In addition, some HP switches, such as the HP SN8000B 8-slot SAN backbone director switch, the HP SN8000B 4-slot SAN director switch, the HP SN6000B 16 Gb FC switch, and the HP SN3000B 16 Gb FC switch, automatically select the proper fill-word mode 3 as the default setting.

McDATA switch or director ports should be in their default modes as G or GX-port (depending on the switch model), with their speed setting permitting them to autonegotiate.

Cisco switch ports that connect to HP 3PAR StoreServ Storage ports or host HBA ports should be set to AdminMode = FX and AdminSpeed = auto, with the port speed set to autonegotiate.

QLogic switch ports should be set to port type GL-port and port speed auto-detect. QLogic switch ports that connect to the HP 3PAR StoreServ Storage should be set to I/O Stream Guard disable or auto, but never enable.
QLogic 2G: 497
LSI 2G: 510
Emulex 4G: 959
HP 3PAR HBA 4G: 1638
HP 3PAR HBA 8G: 3276 (HP 3PAR StoreServ 10000 and HP 3PAR StoreServ 7000 systems only)
The I/O queues are shared among the connected host server HBA ports on a first-come, first-served basis. When all queues are in use and a host HBA port tries to initiate I/O, it receives a target queue full response from the HP 3PAR StoreServ Storage port. This condition can result in erratic I/O performance on each host server. If this condition occurs, each host server should be throttled so that it cannot overrun the HP 3PAR StoreServ Storage port's queues when all host servers are delivering their maximum number of I/O requests.
NOTE: When host server ports can access multiple targets on fabric zones, the target number assigned by the host driver for each discovered target can change when the host server is booted while some targets are not present in the zone. This situation may change the device node access point for devices during a host server reboot. This issue can occur with any fabric-connected storage and is not specific to the HP 3PAR StoreServ Storage.
Persistent Ports
NOTE: The Persistent Ports feature is supported only on HP 3PAR OS 3.1.2.

The Persistent Ports (or virtual ports) feature minimizes I/O disruption during an HP 3PAR Storage online upgrade or node-down event. Currently, persistent ports are supported only with Fibre Channel connections.

Persistent Ports allows a Fibre Channel HP 3PAR Storage port to assume the identity (port WWN) of a failed port while retaining its own identity. The solution uses the NPIV feature for Fibre Channel. This feature does not work in direct-connect mode and is supported only on Fibre Channel target ports that connect to a Fibre Channel fabric and are in point-to-point mode, where both the active and partner ports share the same fabric. Each Fibre Channel port has a partner port automatically assigned by the system. When a given physical port assumes the identity of its partner port, the assumed port is designated as a persistent port. Array port failover and failback with Persistent Ports is transparent to most host-based multipathing software, which, in most cases, can keep all of its I/O paths active.

The Persistent Ports feature is activated by default during node-down events (online upgrade or node reboot). Port shutdown or reset events do not trigger this feature. Persistent Ports is enabled by default starting with the HP 3PAR OS 3.1.2 software.

In the event that an HP 3PAR Storage node goes down during an online upgrade or node-down event, the Fibre Channel target ports fail over to their partner ports. For example, in a two-node HP 3PAR Storage array configuration, if ports 0:1:1, 0:5:1 and 1:1:1, 1:5:1 are connected to the fabric and node 0 goes down, ports 0:1:1 and 0:5:1 fail over to ports 1:1:1 and 1:5:1 and become active, while ports 1:1:1 and 1:5:1 remain active. In HP 3PAR Storage arrays with more than two nodes, failover behavior occurs on node pairs; that is, if node 0 goes down, ports on node 0 fail over to node 1; if node 2 goes down, ports on node 2 fail over to node 3, and so on. Conversely, when node 1 goes down, ports on node 1 fail over to node 0, and when node 3 goes down, ports on node 3 fail over to node 2. When the downed node is up again, the failed-over ports automatically fail back to their original ports. During the failover and failback process, the host could experience a short pause in I/O.
The FailoverState value of a port can be one of the following:

none: No failover in operation
failover_pending: In the process of failing over to the partner
failed_over: Failed over to the partner
active: The partner port is failed over to this port
active_down: The partner port is failed over to this port, but this port is down
failback_pending: In the process of failing back from the partner
Use the showport HP 3PAR CLI command to get the state of the persistent ports. In the showport output, under the Partner column, port 1:1:1 is the partner port to which 0:1:1 would fail over, and 0:1:1 is the partner port to which 1:1:1 would fail over. When Persistent Ports is not active, the FailoverState for the ports indicates none.
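A minimal sketch of the relevant showport columns (illustrative only; the real output contains additional columns, and the port numbers follow the example above):

# showport
N:S:P Mode   State ... Partner FailoverState
0:1:1 target ready ... 1:1:1   none
1:1:1 target ready ... 0:1:1   none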
When a node is down during an online upgrade or node reboot, the FailoverState column in the showport output shows that Persistent Ports is active. In the example below, node 1 has gone down, Persistent Ports for 1:1:1 has become active on port 0:1:1, and all filesystem I/O for port 1:1:1 is physically served by port 0:1:1.
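A sketch of the corresponding showport output (again abbreviated and illustrative):

# showport
N:S:P Mode   State   ... Partner FailoverState
0:1:1 target ready   ... 1:1:1   active
1:1:1 target offline ... 0:1:1   failed_over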
Before Persistent Ports is active, the output of the showhost command displays as follows:
# showhost
Id Name    Persona -WWN/iSCSI_Name- Port
 1 server1 Generic ...
When Persistent Ports is active, the output of the showhost command, under the Port column, shows both the physical port and the physical port where Persistent Ports is active. In the example below, port 0:1:1, logged in from each of the host HBA ports, appears twice, once for the physical port and once again for the persistent port that is active on the physical port.
# showhost
Id Name    Persona -WWN/iSCSI_Name- Port
 1 server1 Generic ...
After the controller node has been successfully rebooted, the FailoverState for the ports changes back to none.
After the node has been successfully rebooted, the node entry of node 0 reappears in the GUI and I/O is still in progress. You can also perform failover and failback manually, using the controlport failover <N:S:P> and controlport failback <N:S:P> command options.
2. If State=config_wait or Firmware=0.0.0.0, use the controlport config iscsi <n:s:p> command to configure the port, and then use the showport and showport -i commands to verify the configuration setting. For example:
# controlport config iscsi 0:1:1
# controlport config iscsi 1:1:1
# showport
N:S:P Mode   State ----Node_WWN---- -Port_WWN/HW_Addr- Type  Protocol
...
0:1:1 target ready                  2C27D7521F3E       iscsi iSCSI
...
1:1:1 target ready                  2C27D7521F3A       iscsi iSCSI

# showport -i
N:S:P Brand  Model   Rev Firmware Serial HWType
...
0:1:1 QLOGIC QLE8242 58  ...
1:1:1 QLOGIC QLE8242 58  ...
3. Issue the HP 3PAR OS CLI showport -iscsi command to check the current settings of the iSCSI ports:
4. Issue the HP 3PAR OS CLI controliscsiport command to set up the IP address and netmask address of the iSCSI target ports.
# controliscsiport addr 10.1.1.100 255.255.255.0 -f 0:1:1
# controliscsiport addr 10.1.1.102 255.255.255.0 -f 0:1:1
# controliscsiport addr 10.1.1.101 255.255.255.0 -f 0:1:2
# controliscsiport addr 10.1.1.103 255.255.255.0 -f 0:1:2
NOTE: Make sure the IP switch ports where the HP 3PAR StoreServ Storage iSCSI target port(s) and iSCSI initiator host are connected are able to communicate with each other by using the vmkping command on the ESX host. (The VMware ESX Server iSCSI initiator must be configured to perform this operation in accordance with Configuring the Host for an iSCSI Connection (page 55).) To verify that the ESX host can see the HP 3PAR StoreServ Storage, issue the following command:
# vmkping 10.1.1.100
To verify that the HP 3PAR StoreServ Storage can see the ESX host, issue the following command:
# controliscsiport ping 10.1.1.100 0:1:1
NOTE: A maximum of 64 host server iSCSI initiator ports can be connected to any one HP 3PAR StoreServ Storage target port.

NOTE: When the host initiator port and the HP 3PAR OS target port are in different IP subnets, the gateway address for the HP 3PAR OS port should be configured in order to avoid unexpected behavior.
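A minimal sketch of configuring a gateway on a target port, assuming the controliscsiport gw subcommand follows the same form as controliscsiport addr and using a hypothetical gateway address:

# controliscsiport gw 10.1.1.1 -f 0:1:1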
Creating the iSCSI Host Definition on an HP 3PAR StoreServ Storage Running HP 3PAR OS 3.1.x and OS 2.3.x
Create a host definition that ties all of the connections from a single host to a host name. Prior to creating a host definition using the following steps, the HP 3PAR StoreServ Storage iSCSI target ports must have been set up and an iSCSI connection/session must be established. The iSCSI connection/session is established by following the steps in Setting Up the Ports for an iSCSI Connection (page 17) and the steps in Configuring the Host for an iSCSI Connection (page 55) through Configuring the VMware iSCSI Initiator (page 61) (ESX host setup).
The following example of host definition creation depicts a VMware iSCSI initiator iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56 on an ESX server (the only iSCSI initiator for this server in this case) connecting through a VLAN to a pair of HP 3PAR StoreServ Storage iSCSI ports. The host definition is given the name ESX1 and the host persona is set to 6 (Generic-legacy).

1. Issue the HP 3PAR OS CLI showhost command to verify that the host iSCSI initiators are connected to the HP 3PAR StoreServ Storage iSCSI target ports.
# showhost
Id Name Persona ----------------WWN/iSCSI_Name---------------- Port
-- --           iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56  0:1:2
-- --           iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56  1:1:2
-- --           iqn.1998-01.com.vmware:dl360g8-02-42b20fff     0:1:2
-- --           iqn.1998-01.com.vmware:dl360g8-02-42b20fff     1:1:2
2. Issue the HP 3PAR OS CLI createhost command to create the appropriate host definition entry.
# createhost -iscsi -persona 6 ESX1 iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56
or:
# createhost -iscsi -persona 11 ESX2 iqn.1998-01.com.vmware:dl360g8-02-42b20fff
3. Issue the HP 3PAR OS CLI showhost command to verify that the host entry has been created.
# showhost
Id Name Persona        ----------------WWN/iSCSI_Name---------------- Port
 0 ESX1 Generic-legacy iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56  0:1:2
                       iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56  1:1:2
 1 ESX2 VMware         iqn.1998-01.com.vmware:dl360g8-02-42b20fff     0:1:2
                       iqn.1998-01.com.vmware:dl360g8-02-42b20fff     1:1:2
4. To test the connection, create some temporary virtual volumes and export the VLUNs to the host.

NOTE: See Allocating Storage for Access by the ESX Host (page 67) for complete details on creating, exporting, and discovering storage for an iSCSI connection.
5. On the ESX iSCSI initiator host, perform a rescan and verify that the disks have been discovered.
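A minimal sketch of the rescan from the ESX service console, assuming the software iSCSI adapter is vmhba33 (the adapter name is hypothetical; check the Storage Adapters list for yours):

# esxcfg-rescan vmhba33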
Creating the iSCSI Host Definition on an HP 3PAR StoreServ Storage Running HP 3PAR OS 2.2.x
Create a host definition that ties all of the connections from a single host to a host name. Prior to creating a host definition using the following steps, the HP 3PAR StoreServ Storage iSCSI target ports must have been set up and an iSCSI connection/session must be established. The iSCSI connection/session is established by following the steps in Setting Up the Ports for an iSCSI Connection (page 17) and the steps in Configuring the Host for an iSCSI Connection (page 55) through Configuring the VMware iSCSI Initiator (page 61) (ESX host setup).
The following example of host definition creation depicts a VMware iSCSI initiator iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56 on an ESX server (the only iSCSI initiator for this server in this case) connecting through a VLAN to a pair of HP 3PAR StoreServ Storage iSCSI ports. The host definition is given the name ESX1.

1. Issue the HP 3PAR OS CLI showhost command to verify that the host iSCSI initiators are connected to the HP 3PAR StoreServ Storage iSCSI target ports.
# showhost
Id Name ----------------WWN/iSCSI_Name---------------- Port
-- --   iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56  0:1:2
-- --   iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56  1:1:2
2. Issue the HP 3PAR OS CLI createhost command to create the appropriate host entry.
# createhost -iscsi ESX1 iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56
3. Issue the HP 3PAR OS CLI showhost command to verify that the host entry has been created.
# showhost
Id Name ----------------WWN/iSCSI_Name---------------- Port
 0 ESX1 iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56  0:1:2
        iqn.1998-01.com.vmware:sqahpbc02icm5-40e25c56  1:1:2
4. To test the connection, create some temporary virtual volumes and export the VLUNs to the host.

NOTE: See Allocating Storage for Access by the ESX Host (page 67) for complete details on creating, exporting, and discovering storage for an iSCSI connection.
5. On the ESX iSCSI initiator host, perform a rescan and verify that the disks have been discovered.
The following example uses the CHAP secret (CHAP password) host_secret3 for the ESX host. Be aware that the CHAP secret must be at least 12 characters long.

2. On the ESX server's VI/vSphere client, open the Storage Adapters tab, select the iSCSI Software Adapter, and then select the Properties link. For ESX 3.5, select the CHAP Authentication tab, and then select the Use the following CHAP credentials radio button.
For ESX 4.x or ESXi, select the Use initiator name check box.

Figure 2 CHAP Credentials in ESX 4.x or ESXi 5.x
3. Enter the CHAP Secret (must be at least 12 characters long).
4. Click OK when you are done. A warning screen appears, indicating that a reboot of the ESX server is required.

NOTE: A server reboot is required for ESX 3.5. For ESX 4.x and ESXi 5.x, a rescan of the HBA should pick up the changes.

5. Click OK again to confirm.
6. On the HP 3PAR StoreServ Storage, issue the HP 3PAR OS CLI sethost command with the initchap parameter to set the CHAP secret for the ESX host.
# sethost initchap -f host_secret3 ESX1
NOTE: If mutual CHAP on ESX is being configured, then target CHAP must be configured on the HP 3PAR StoreServ Storage in addition to initiator CHAP. Set the target CHAP secret using the HP 3PAR OS CLI sethost command with the targetchap parameter.
# sethost targetchap -f host_secret3 ESX1
a. For the target CHAP, make sure to give the storage system name as the Name field variable. The storage system name is obtained from the showsys command output.
b. For ESX 4.x and 5.x:

Figure 3 CHAP Credentials in ESX 4.x and 5.x
c. Issue the HP 3PAR OS CLI showhost -chap command to verify that the specified CHAP secret has been set for the host definition.
For initiator CHAP:
# showhost -chap
Id Name -Initiator_CHAP_Name- -Target_CHAP_Name-
 0 ESX1 ESX1                  --

For mutual CHAP:
# showhost -chap
Id Name -Initiator_CHAP_Name- -Target_CHAP_Name-
 0 ESX1 ESX1                  s331
... entered into the CNA card). The CN1100E can be configured to boot from SAN; SCSI targets are entered into the card. For general information about the CN1100E and other BE3 models supported, see the HP SPOCK website:
http://www.hp.com/storage/spock

To set a static IP address, follow these steps:

1. After installing the CN1100E, boot the system. The following text appears:
Emulex 10Gb iSCSI Initiator BIOS.. Press <Ctrl> <S> for iSCSISelect(TM) Utility
2. Select a controller and press Enter.
3. From the Controller Configuration screen, select Network Configuration and press Enter.
4. In the Network Configuration screen, select Configure Static IP Address and press Enter.
5. The screen for setting a static IP address displays.
6. After entering the IP address, subnet mask, and default gateway, click Save to return to the Controller Configuration menu.
If the configuration being set up will be booted from SAN rather than from the host, follow these steps.

1. After entering the iSCSI Initiator Configuration screen, which will be the first screen displayed, obtain the IQN for the card and create a host definition on the HP 3PAR StoreServ Storage. For example:
# createhost -iscsi -persona 11 Esx50Sys1 iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1
2. Assign a VLUN to this host definition to be used as the SAN boot LUN.
3. From the Controller Configuration menu, select Controller Properties.
4. In the properties screen, verify that boot support is enabled. If it is not, scroll to Boot Support and enable it, then save and exit this screen.
5. From the Controller Configuration menu, select iSCSI Target Configuration.
6. In the iSCSI Target Configuration menu, select Add New iSCSI Target and press Enter.
7. Fill in the information for the first iSCSI target. Make sure Boot Target is set to Yes.
8. After the information is filled in, click Ping to verify connectivity.
9. After a successful ping, click Save/Login.
10. After both controllers have been configured, issue the showiscsisession command to display the iSCSI sessions on the HP 3PAR StoreServ Storage and the host. If everything is configured correctly, the displays should appear as follows:
root@jnodec103140:S99814# showiscsisession
0:2:1 10.101.0.100 21 15 1 iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1 2012-09-24 09:57:58 PDT
1:2:1 10.101.1.100 121 15 1 iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1 2012-09-24 09:57:58 PDT

root@jnodec103140:S99814# showhost -d Esx50Sys1
1 Esx50Sys1 VMware iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1 0:2:1 10.101.0.100
1 Esx50Sys1 VMware iqn.1990-07.com.emulex:a0-b3-cc-1c-94-e1 1:2:1 10.101.1.100
11. If you do not want to use CHAP as an authentication method, exit the CN1100E setup screens and reboot now. If you would like to use CHAP as an authentication method, return to the Add/Ping iSCSI Target screen as shown in Figure 6 (page 26), select Authentication Method, and then choose one of the following options:

Select One-Way CHAP (see Figure 7 (page 27)).

Figure 7 One-Way CHAP

The CHAP Configuration screen appears (see Figure 8 (page 27)).

Figure 8 CHAP Configuration for One-Way CHAP
Fill in the Target CHAP Name (the initiator IQN name) and Target Secret, then click OK.

To use mutual CHAP instead, in the Authentication Method setting on the Add/Ping iSCSI Target screen (Figure 7 (page 27)), select Mutual CHAP. The CHAP Configuration screen appears (see Figure 9 (page 28)).

Figure 9 CHAP Configuration for Mutual CHAP
Fill in the Target CHAP Name (the initiator IQN name), the Target Secret, the Initiator CHAP Name (the DNS name of the storage), and an Initiator Secret, and then click OK.

If you want to remove CHAP authentication later on, select None in the Authentication Method setting on the Add/Ping iSCSI Target screen (Figure 7 (page 27)).
12. If you have set up CHAP authentication, then before rebooting the host system, make sure to set the matching CHAP parameters for the host in the HP 3PAR StoreServ Storage.

NOTE: If you do not want to configure CHAP using the BIOS, you can alter the iSCSI initiator properties after the system is booted.

If one-way CHAP has been selected, enter the matching CHAP secret as follows:
root@jnodec103140:S99814# sethost initchap -f aaaaaabbbbbb EsxHost1
root@jnodec103140:S99814# showhost -chap
If mutual CHAP has been selected, enter the mutual CHAP secret as follows:
root@jnodec103140:S99814# sethost targetchap -f bbbbbbcccccc EsxHost1
root@jnodec103140:S99814# showhost -chap
Id Name     -Initiator_CHAP_Name- -Target_CHAP_Name-
 1 EsxHost1 EsxHost1              S814
After entering the CHAP secret, exit the BIOS and reboot the host.
NOTE: VMware and HP recommend the LSI Logic adapter emulation for Windows 2003 Servers. The LSI Logic adapter is also the default option for Windows 2003 when creating a new virtual machine. HP testing has noted a high incidence of Windows 2003 virtual machine failures during an ESX multipath failover/failback event when the BusLogic adapter is used with Windows 2003 VMs.

NOTE: HP testing indicates that the SCSI timeout value for virtual machine guest operating systems should be 60 seconds in order to successfully ride out path failovers at the ESX layer. Most guest operating systems supported by VMware have a default SCSI timeout value of 60 seconds, but this value should be checked and verified for each GOS installation. In particular, Red Hat 4.x guest operating systems should have their SCSI timeout value changed from the default value of 30 seconds to 60 seconds.

This command line can be used to set the SCSI timeout on all SCSI devices presented to a Red Hat 4.x virtual machine to 60 seconds:

find /sys -name timeout | grep "host.*target.*timeout" | xargs -n 1 echo "echo 60 >"|sh

This must be added as a line in /etc/rc.local of the Red Hat 4.x guest OS in order for the timeout change to be maintained across virtual machine reboots. Example of a modified /etc/rc.local file:
# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

find /sys -name timeout | grep "host.*target.*timeout" | xargs -n 1 echo "echo 60 >"|sh
touch /var/lock/subsys/local
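A quick spot-check after a guest reboot, to confirm the timeouts took effect (a sketch; this prints one value per SCSI device, each of which should be 60):

find /sys -name timeout | grep "host.*target.*timeout" | xargs cat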
A path policy of round-robin is the preferred multipath implementation for ESX/ESXi 4.0 and later. For procedures on implementing and configuring the round-robin path policy on ESX/ESXi 4.0 and later with an HP 3PAR StoreServ Storage, see Configuring Round Robin Multipathing on ESX 4.x or later for Fibre Channel (page 32).

A path policy of fixed, with the preferred/active paths manually set to balance I/O load evenly across all paths, is the preferred multipath implementation for ESX 3.0 - 3.5.
In the event that the active path fails or is disabled, either at the fabric switch or on the storage array, all ESX server I/O to the storage array continues by failing over to a standby path. When the ESX server detects that the preferred path has been recovered or is enabled, I/O from the ESX server resumes on the preferred path, assuming a preferred path policy had previously been set to that path.

I/O from the ESX server should be manually distributed or balanced when two or more paths exist to more than one HP 3PAR StoreServ Storage volume on the storage array. Manually balancing the loads across available paths may improve I/O performance. This path load balancing to the storage array depends on the number of I/Os that are targeted for specific volumes on the storage array. Tuning I/Os to specific volumes on specific paths to the storage array varies from configuration to configuration and is totally dependent on the workload from the ESX server and the virtual machines to the devices on the storage array.
The following vSphere client screenshot depicts a LUN with five I/O paths in a FIXED I/O policy scheme. The path marked Active (I/O), with the '*' in the Preferred column, is the path chosen as preferred and is the path to which all I/O is currently assigned for the given LUN. The other paths listed are active but in standby mode. The paths in active standby will not be used for I/O traffic for this LUN unless the preferred path fails.
A path policy of MRU (most recently used) does not maintain or reinstate balancing of I/O load after a failover/failback multipath event. This could leave I/O in an unplanned, unbalanced state that may yield significant I/O performance issues. HP does not recommend implementing an MRU path policy.
NOTE: If I/O is active to a LUN and an attempt is made to modify the path policy, a failure can occur, indicating: "error during the configuration of the host: sysinfoException; Status=Busy: Message=Unable to Set". If this problem occurs while attempting to change the path policy, reduce the I/Os to that LUN and then try making the desired changes.

For additional information on this topic, refer to the Multipathing chapter of the VMware SAN Configuration Guide.
Configuring Round Robin Multipathing on ESX 4.x or later for Fibre Channel
With ESX version 4.0 and later, VMware supports a round-robin I/O path policy for active/active storage arrays such as the HP 3PAR StoreServ Storage. A round-robin I/O path policy is the preferred configuration for ESX 4.0 and later; however, this path policy is not enabled by default for HP 3PAR devices.

CAUTION: If you are running a Windows Server 2012 or Windows Server 2008 VM Cluster with RDM shared LUNs, then individually change these specific RDM LUNs from the Round Robin policy to the FIXED or MRU path policy (see the sketch following the note below).

Figure 10 (page 33), which is output from a Fibre Channel configuration, shows a LUN with a path that has been set to Round Robin (VMware).

NOTE: Each path status is shown as Active (I/O). The path status for an iSCSI configuration would be the same.
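A minimal sketch of changing one such RDM LUN back to a fixed policy on ESXi 5.x (the device ID is hypothetical; on ESX/ESXi 4.x the equivalent is esxcli nmp device setpolicy):

# esxcli storage nmp device set -d naa.50002ac0005800ac -P VMW_PSP_FIXED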
Managing a round-robin I/O path policy scheme through the VI/vSphere client GUI for a large network can be cumbersome and challenging to maintain because the policy must be specified for each LUN individually and updated whenever new devices are added. Alternatively, VMware provides a mechanism whereby the server administrator can use esxcli, vCLI, or vSphere Management Assistant (vMA) commands to manage the I/O path policy for storage devices on a per-host basis, using parameters defined within a set of native ESX/ESXi storage plugins.

The VMware native multipathing has two important plugins: a Storage Array Type Plugin (SATP) that handles path failover and monitors path health, and a Path Selection Plugin (PSP) that chooses the best path and routes I/O requests for a specific logical device (the PSP defines the path policy).

The correct ESX/ESXi host Storage Array Type Plugin (SATP) to use is related to the HP 3PAR array host persona in use. When HP 3PAR host persona 6/Generic-legacy is the host persona in use with an ESX/ESXi host, the SATP VMW_SATP_DEFAULT_AA should be used. When HP 3PAR host persona 11/VMware is the host persona in use with an ESX/ESXi host, the SATP VMW_SATP_ALUA should be used.

For ESX/ESXi 4.0 versions (4.0 GA through all 4.0 updates), the default SATP rules must be edited in order to automatically achieve a round-robin I/O path policy for storage devices. As of ESX/ESXi 4.1, additional custom SATP rules can be created that target SATP/PSP to specific vendors while leaving the default SATP rules unmodified. The custom SATP can be used to automatically achieve a round-robin I/O path policy for storage devices.
NOTE: SATP rule changes cannot be made through the vSphere GUI. SATP rule changes made through esxcli commands populate the esx.conf file.

A custom SATP rule is an added SATP rule that modifies or redefines parameters of an existing SATP default rule, defines the targeted devices to be affected, and is given a unique custom rule name. A custom SATP rule cannot be changed or edited; to change the parameters of a custom rule, the rule must be removed and a new one created with the desired changes. SATP and PSP creation, changes, additions, or removals take effect for any new devices presented afterward, without the need for a server reboot. The host must be rebooted for SATP rule creation, changes, additions, or removals to take effect on existing, previously presented devices.

Path policy changes made on an individual-device basis, whether through vCenter or an esxcli command, supersede the PSP path policy defined in a SATP rule, and such path policy changes to individual devices are maintained through host reboots.

Valid PSPs for SATP VMW_SATP_ALUA rules are:

VMW_PSP_RR
VMW_PSP_MRU

VMW_PSP_FIXED is not a valid PSP to be defined within an ALUA SATP rule. VMW_PSP_RR is preferred.

Changing from HP 3PAR host persona 6/Generic-legacy to host persona 11/VMware or vice versa:

A change from persona 6 to 11, or from 11 to 6, requires that the affected array ports be taken offline, or that the host for which the persona is being changed not be connected (not logged in). This is an HP 3PAR OS requirement.

For existing devices targeted in a custom SATP rule to be claimed by the rule, the host must be rebooted. This is an ESX/ESXi OS requirement.
Because of the requirements for this persona change listed above, the following procedure is recommended for changing from persona 6 to 11, or from 11 to 6:

1. Stop all host I/O and apply the necessary SATP changes (create a custom SATP rule and/or modify the default SATP rule PSP defaults) on the ESX/ESXi host.
2. Shut down the host.
3. Change the host persona on the array.
4. Boot the host.
5. Verify that the target devices have been claimed properly by the SATP rule, as desired.
HP 3PAR SATP rules for use with persona 6/Generic-legacy (Active-Active array port presentation)

A custom SATP rule is not used. The PSP (path policy) is changed on the default active-active SATP rule. The default multipath policy for VMW_SATP_DEFAULT_AA is VMW_PSP_FIXED (fixed path). The default is changed to the preferred PSP (path policy) of round-robin:
# esxcli nmp satp setdefaultpsp -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR
HP 3PAR SATP rules for use with persona 11/VMware (ALUA-compliant array port presentation)
# esxcli nmp satp setdefaultpsp -s VMW_SATP_ALUA -P VMW_PSP_RR
# esxcli nmp satp addrule -s "VMW_SATP_ALUA" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
CAUTION: The procedure for changing the default SATP rules to use the round-robin I/O multipathing policy is intended to apply only to VMware hosts using HP 3PAR StoreServ Storage LUNs. If the host is sharing storage from other vendors, then before making any I/O policy changes, consider the effect that changing the default rules will have on the storage environment as a whole.

A change of the default PSP for a given SATP affects all storage devices (FC, FCoE, iSCSI) that use the same default SATP rule. If a host server is sharing multiple storage vendors together with an HP 3PAR StoreServ Storage, and the other connected storage does not support active/active round-robin multipathing using the same SATP rule, such as VMW_SATP_DEFAULT_AA or VMW_SATP_ALUA, then its multipathing will also be affected. If the other storage uses a different SATP of its own, then the SATP VMW_SATP_DEFAULT_AA mapping can be changed to VMW_PSP_RR to take advantage of round-robin multipathing. You can check the SATP-PSP relationship of a given device for ESX 4.0 by using the esxcli nmp device list or esxcli nmp device list -d <device id> command.

For example, if the HP 3PAR StoreServ Storage and storage X are connected to the same host using VMW_SATP_DEFAULT_AA, and storage X does not have its own SATP, then the change might cause an issue if storage X does not support round-robin multipathing. If the HP 3PAR StoreServ Storage and storage Y are sharing the same host, and storage Y has its own SATP VMW_SATP_Y while HP uses VMW_SATP_DEFAULT_AA, then there will be no conflict, and the change can be made.
HP 3PAR custom SATP rule for use with persona 11/VMware (ALUA-compliant array port presentation)
# esxcli nmp satp addrule -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=100 -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
ESXi 5.x
HP 3PAR custom SATP rule for use with persona 6/Generic-legacy (Active-Active array port presentation)
# esxcli storage nmp satp rule add -s "VMW_SATP_DEFAULT_AA" -P "VMW_PSP_RR" -O iops=100 -c "tpgs_off" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE Rule"
HP 3PAR custom SATP rule for use with persona 11/VMware (ALUA-compliant array port presentation)
# esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=100 -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom iSCSI/FC/FCoE ALUA Rule"
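To confirm that a custom rule was added on ESXi 5.x, the rule list can be filtered (a sketch):

# esxcli storage nmp satp rule list | grep -i 3PARdata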
For persona 6:
ESX/ESXi 4.x example

The command is the same for ESX/ESXi 4.x. The output shown is for ESX 4.0:
# esxcli nmp device list naa.50002ac000b40125
    Device Display Name: 3PARdata Fibre Channel Disk (naa.50002ac000b40125)
    Storage Array Type: VMW_SATP_DEFAULT_AA
    Storage Array Type Device Config:
    Path Selection Policy: VMW_PSP_RR
    Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0;lastPathIndex=3: NumIOsPending=0,numBytesPending=0}
    Working Paths: vmhba5:C0:T0:L25, vmhba5:C0:T1:L25, vmhba4:C0:T0:L25, vmhba4:C0:T1:L25
For ESX 4.1, the iops will be 100 for the device list output shown above.
Script Alternative for Path Policy Changes on Storage Devices without a Host Reboot
If a reboot of the ESX/ESXi host to effect path policy changes through SATP on a large number of existing, previously presented storage devices is not desirable, the path policy changes on a batch of LUNs can be made by scripting esxcli commands. Create a script that uses the following commands (a combined loop sketch follows step 3):

1. List all the HP 3PAR devices present on the host:

ESXi 5.x
# esxcli storage nmp device list | grep -i naa.50002ac | grep -v Device
naa.50002ac0005800ac
naa.50002ac003b800ac
naa.50002ac0039300ac
ESX/ESXi 4.x
# esxcli nmp device list | grep -i naa.50002ac | grep -v Device
naa.50002ac0005800ac
naa.50002ac003b800ac
naa.50002ac0039300ac
2. Change the I/O path policy to round robin for each device identified in the previous output:

ESXi 5.x
# esxcli storage nmp device set -d naa.50002ac0005800ac -P VMW_PSP_RR
ESX/ESXi 4.x
# esxcli nmp device setpolicy -d naa.50002ac0005800ac -P VMW_PSP_RR
3. Verify that the path policy change has taken effect on each device:

ESXi 5.x

# esxcli storage nmp device list -d naa.50002ac0005800ac

ESX/ESXi 4.x

# esxcli nmp device list -d naa.50002ac0005800ac
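The per-device steps above can be combined into a single pass; a minimal sketch for the ESXi 5.x shell, reusing the same grep filter from step 1:

for dev in $(esxcli storage nmp device list | grep -i naa.50002ac | grep -v Device); do
    esxcli storage nmp device set -d $dev -P VMW_PSP_RR
done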
NOTE: If I/O is active to a LUN and an attempt is made to modify the path policy, a failure can occur:

error during the configuration of the host: sysinfoException; Status=Busy: Message=Unable to Set

If this problem occurs during an attempt to change the path policy, reduce the I/Os to that LUN and then try making the desired changes.
ESX/ESXi Handling SCSI Queue Full and Busy Messages from the HP 3PAR StoreServ Storage Array
VMware ESX Server Releases through ESX 3.5 Update 3
The default behavior of ESX 3.5 Update 3 and older servers to Queue Full and Busy SCSI messages from the HP 3PAR StoreServ Storage is to treat them as valid commands and to continue sending data. When outstanding commands continue to be sent to an HP 3PAR StoreServ Storage port, the port cannot recover and stops responding for attached hosts.

This behavior is critical where QLogic HBAs are used on the HP 3PAR StoreServ Storage because, when the storage port stops responding, the QLogic driver on the HP 3PAR StoreServ Storage has to issue a reset of the affected port. Neither the period during which the HP 3PAR StoreServ Storage port is at full capacity nor the reset of the port triggers a failover in the ESX server, since the ESX server never detects the port going away. This results in a virtual machine crash.

There are two solutions:

Upgrade to ESX 3.5 Update 4 or later.

Control the I/O that each array port receives by manipulating the HBA queue depth (see Modifying the Tuneable Parameters for Queue Depth Throttling in ESX 3.x (page 42)).
VMware ESX Server Release 3.5 Update 4 through ESX 4.x and ESXi 5.0
As of VMware ESX release 3.5 Update 4, and including ESX 4.0 GA, ESX 4.1 (with all ESX 4.x updates), and ESXi 5.0 (with all updates), an algorithm has been added that allows ESX to respond to Queue Full and Busy SCSI messages from the storage array. The Queue Full or Busy response by ESX is to back off of I/O for a period of time, thus helping to prevent overdriving of the HP 3PAR StoreServ Storage ports. This feature should be enabled as part of an ESX - HP 3PAR StoreServ Storage deployment.

The Queue Full and Busy LUN-throttling algorithm is disabled by default. To enable the algorithm, complete the following steps:

1. From the VI/vSphere client, select the ESX host Configuration tab, then Advanced Settings, then Disk.
2. Scroll to find and adjust the following HP-recommended settings:

QFullSampleSize = 32
QFullThreshold = 4

With the algorithm enabled, no additional I/O throttling scheme is necessary on ESX 3.5 Update 4 and newer ESX servers. Consult the additional information regarding the ESX Queue Full/Busy response algorithm found in KB 1008113, which is available on the VMware Knowledge Base website:
http://kb.vmware.com
The settings do not require a reboot to take effect and are persistent across reboots. You can retrieve the values for a device by using the corresponding list command:
# esxcli storage core device list
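On ESXi 5.x builds that expose per-device queue-full settings (an assumption; VMware added the per-device options in later 5.x releases), the same values can also be applied to an individual device. A sketch with a hypothetical device ID:

# esxcli storage core device set -d naa.50002ac0005800ac --queue-full-sample-size 32 --queue-full-threshold 4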
Recommendations for ESX Hosts Attached to a Storage Port on the HP 3PAR StoreServ Storage
For performance and stability, HP recommends that no more than 16 hosts be attached to any one 2 Gbps HP 3PAR StoreServ Storage port, and no more than 32 hosts to any one 4 Gbps or 8 Gbps HP 3PAR StoreServ Storage port. These limits follow from the HP recommendation to keep the total aggregate of outstanding commands from all hosts below the queue depth (and within the throughput) of the HP 3PAR StoreServ Storage port.

NOTE: These recommendations are guidelines for best practice. Attaching more than the recommended number of ESX hosts should only be attempted when the total expected workload has been calculated and shown not to overrun either the queue depth or the throughput of the storage port.
Modifying the Tuneable Parameters for Queue Depth Throttling in ESX 3.x
The default settings for target port queue depth on the ESX server can be modified to ensure that the total workload of all servers will not overrun the total queue depth of the target HP 3PAR StoreServ Storage port. The method endorsed by HP is to limit the queue depth on a per-target basis. This recommendation comes from the simplicity of limiting the number of outstanding commands on a target (HP 3PAR StoreServ Storage port) per ESX server.

The following values can be set on the instances of an HBA in an ESX operating system. These values limit the total number of outstanding commands that the operating system routes to one target:

ESX Emulex HBA target throttle = tgt_queue_depth
ESX QLogic HBA target throttle = ql2xmaxqdepth

The formula is as follows:
(3PAR port queue depth) / (total number of ESX servers attached) = recommended target port queue depth
The HP 3PAR port queue depth limitations used for the calculations are from the listings in Target Port Limits and Specifications (page 13).

Example 1 (set up as follows):

QLogic 2G FC HBAs installed on the HP 3PAR StoreServ Storage
16 ESX hosts attached to a QLogic port on the HP 3PAR StoreServ Storage
Formula:
497 / 16 = 31.xxx (recommended max target port queue depth = 31)
Example 2 (set up as follows):

LSI 2G FC HBA installed on the HP 3PAR StoreServ Storage
12 ESX hosts attached to an LSI port on the HP 3PAR StoreServ Storage
Formula:
510 / 12 = 42.xxx (recommended max target port queue depth = 42)
Setting tgt_queue_depth for Emulex in ESX 3.x (example)

To set the tgt_queue_depth for an Emulex FC HBA in ESX 3.x to something other than the default requires a multistep process:

1. Shut down all of the virtual machines.
2. Log into the ESX service console as root.
3. Make a backup copy of /etc/vmware/esx.conf:
cp /etc/vmware/esx.conf /etc/vmware/esx.conf.orig
4. Depending on the model of the HBA, the module can be one of the following:

lpfcdd_7xx
lpfcdd_732
lpfc_740
5. The target queue depth can now be modified via the command line using VMware-supplied binaries. The example shows the lpfc_740 module. Use the appropriate module based on the outcome of Step 4:

esxcfg-module -s lpfc_tgt_queue_depth=31 lpfc_740
esxcfg-boot -b
6. You can check to see that the change has been implemented as follows:

esxcfg-module -q
7. Reboot the ESX server for the changes to take effect. Upon boot-up, the ESX server will be throttled to a maximum of 31 outstanding commands (as per the example shown in Step 5) to the target HP 3PAR StoreServ Storage port.
Setting ql2xmaxqdepth for QLogic in ESX 3.x (example)

To set the ql2xmaxqdepth for a QLogic FC HBA in ESX 3.x to something other than the default requires a multistep process:

1. Shut down all of the virtual machines.
2. Log into the ESX service console as root.
3. Make a backup copy of /etc/vmware/esx.conf (as in the Emulex example above).
4. Depending on the model of the HBA, the module can be one of the following:

qla2300_707 (ESX Server 3.0.x)
qla2300_707_vmw (ESX 3.5)

5. The target port queue depth can now be modified via the service console command line using VMware-supplied binaries. The example shows the qla2300_707 module. Use the appropriate module based on the outcome of Step 4:
esxcfg-module -s ql2xmaxqdepth=42 qla2300_707
esxcfg-boot -b
The server must now be rebooted for the changes to take effect.

6. Reboot the ESX server. Upon boot-up, the ESX server will be throttled to a maximum of 42 outstanding commands (as per the example in Step 5) to the target HP 3PAR StoreServ Storage port.

NOTE: For additional information on changing the queue depth for HBAs, refer to the VMware Fibre Channel SAN Configuration Guide.
ESX extensions that make use of these primitives are collectively referred to as vStorage APIs for Array Integration (VAAI). The VMware primitives enable an ESX/ESXi host to convey virtual machine operations to storage hardware at a meta level instead of at the traditional data level. This reduces operational latency and traffic on the FC fabric/iSCSI network. Some of these primitives enable the storage hardware to participate in block allocation and de-allocation functions for virtual machines. These primitives are also known as hardware offloads.

A brief description of the primitives:

Full Copy (XCOPY) enables the storage array to make full copies of data within the array without having the ESX server read and write the data. This offloads some data copy processes to the storage array.

Block Zeroing (WRITE-SAME) enables the storage array to zero out a large number of blocks within the array without having the ESX server write the zeros as data, and helps expedite the provisioning of VMs. This offloads some of the file space zeroing functions to the storage array.

Hardware Assisted Locking (ATS) provides an alternative to SCSI reservations as a means of protecting the metadata for VMFS cluster file systems, and helps improve the scalability of large ESX server farms sharing a datastore.
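A minimal sketch of checking whether each primitive is enabled from the ESX/ESXi console, using VMware's advanced configuration options (a value of 1 means enabled):

# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove    # Full Copy (XCOPY)
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit    # Block Zeroing (WRITE-SAME)
# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking     # Hardware Assisted Locking (ATS)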
HP 3PAR OS 2.3.1 MU2 / HP 3PAR OS 3.1.1 or later: Supported. Does not require the HP 3PAR VAAI plugin (supported by the standard T10 ESX plugin).
NOTE: VAAI Plugin 2.2.0 is required if the ESXi 5.x server is connected to two or more arrays that are running a mix of HP 3PAR OS 2.3.1.x and HP 3PAR OS 3.1.x. For LUNs on HP 3PAR OS 3.1.x, the default VMware T10 plugin will be effective, and for LUNs on HP 3PAR OS 2.3.x, HP 3PAR VAAI Plugin 2.2.0 will be effective. For more information, refer to the HP 3PAR VAAI Plug-in Software for VMware vSphere User's Guide (HP part number QL226-96072). To download the HP 3PAR VAAI Plugin software, go to:
https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HP3PARVAAI&jumpid=reg_r1002_usen
NOTE: UNMAP will also free up space if files are deleted on UNMAP-supported VMs such as Red Hat Enterprise Linux 6, provided the LUN is an RDM LUN on a TPVV storage volume; for example, for RDM volumes on a Red Hat VM using the ext4 file system and mounted with the discard option.
# mount -t ext4 -o discard /dev/sda2 /mnt
This causes the RHEL 6 VM to issue the UNMAP command, releasing space back to the array for any deletions in that ext4 file system.
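Space can also be reclaimed manually at the datastore level. A minimal sketch, assuming ESXi 5.0 Update 1 or later and a datastore named datastore1 (the datastore name and the 60% reclaim value are illustrative only; consult the VMware documentation for your ESXi version before running this):
# cd /vmfs/volumes/datastore1
# vmkfstools -y 60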
Once additional disk space is added, use the Retry option on the warning message to bring the VM back to the read-write state. If you select the Cancel option, the VM is rebooted. In the following example, an HP 3PAR StoreServ Storage TPVV volume was created with a warning limit of 60%, as shown by the showvv -alert command.
When the warning limit is reached, the HP 3PAR StoreServ Storage sends a soft threshold error asc/q: 0x38/0x7 and ESX continues to write.
InServ debug log:
1 Debug Host error undefined Port 1:5:2 -- SCSI status 0x02 (Check condition) Host:sqa-dl380g5-14-esx5 (WWN 2101001B32A4BA98) LUN:22 LUN WWN:50002ac00264011c VV:0
CDB:280000AB082000000800 (Read10) Skey:0x06 (Unit attention) asc/q:0x38/07 (Thin provisioning soft threshold reached)
VVstat:0x00 (TE_PASS -- Success) after 0.000s (Abort source unknown) toterr:74882, lunerr:2

# showalert
Id: 193
State: New
Message Code: 0x0270001
Time: 2011-07-13 16:12:15 PDT
Severity: Informational
Type: TP VV allocation size warning
Message: Thin provisioned VV nospace1 has reached allocation warning of 60G (60% of 100G)
When the HP 3PAR StoreServ Storage runs out of disk space, a hard (permanent) error asc/q: 0x27/0x7 is sent. Use the showspace, showvv -r, and showalert commands to see the warning and space usage. ESX responds by stunning the VM.
InServ debug log:
1 Debug Host error undefined Port 1:5:2 -- SCSI status 0x02 (Check condition) Host:sqa-dl380g5-14-esx5 (WWN 2101001B32A4BA98) LUN:22 LUN WWN:50002ac00264011c VV:612
CDB:2A00005D6CC800040000 (Write10) Skey:0x07 (Data protect) asc/q:0x27/07 (Space allocation failed write protect)
VVstat:0x41 (VV_ADM_NO_R5 -- No space left on snap data volume) after 0.302s (Abort source unknown) toterr:74886, lunerr:3
The following figure shows the VM warning displayed on the vSphere with the Retry and Cancel options.
The vSphere client shows that hardware acceleration is supported on the HP 3PAR devices. To show the version of the plugin:
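A minimal sketch of this check on ESXi 5.x (the grep pattern is an assumption; the exact name of the installed VIB may differ on your system):
# esxcli software vib list | grep -i 3par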
To show that the plugin is active in the claimrule, which runs for each of the devices discovered:
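For example, on ESXi 5.x the VAAI-class claim rules can be listed as follows (a sketch; the rule names shown depend on the installed plugin):
# esxcli storage core claimrule list --claimrule-class=VAAI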
To show that the VAAI is enabled for the HP 3PAR data device:
# esxcfg-scsidevs -l
naa.50002ac0002200d7
   Device Type: Direct-Access
   Size: 51200 MB
   Display Name: 3PARdata Fibre Channel Disk (naa.50002ac0002200d7)
   Multipath Plugin: NMP
   Console Device: /vmfs/devices/disks/naa.50002ac0002200d7
   Devfs Path: /vmfs/devices/disks/naa.50002ac0002200d7
   Vendor: 3PARdata  Model: VV  Revis: 0000
   SCSI Level: 5  Is Pseudo: false  Status: on
   Is RDM Capable: true  Is Removable: false
   Is Local: false  Is SSD: false
   Other Names: vml.020022000050002ac0002200d7565620202020
   VAAI Status: supported
On ESX 4.1, you can verify that the VAAI Plugin is installed and enabled on devices using the following commands.
To show the version of the installed VAAI plugin:
# esxupdate --vib-view query | grep 3par
cross_3par-vaaip-inserv_410.1.1-230815    installed
To show that the claim rule is in effect for the HP 3PAR devices discovered:
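A sketch of the corresponding check on ESX 4.1 (note that the esxcli namespace is corestorage on 4.x; verify the option against your ESX 4.1 documentation):
# esxcli corestorage claimrule list --claimrule-class=VAAI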
ESXi 5.x with HP 3PAR OS 3.1.1 uses the native T10 plugin, and should not show any HP 3PAR plugin.
# esxcli storage core plugin list
Plugin name  Plugin class
-----------  ------------
NMP          MP
The following output shows that Hardware Acceleration is enabled on HP 3PAR LUNs to take advantage of the storage primitives on ESX 4.1 and ESXi 5.x. Use the esxcfg-advcfg command to check that the options are set to 1 (enabled):
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove # esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit # esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
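Each command prints the current value of the option; when the primitive is enabled, the output looks like the following (shown here for the first option):
Value of HardwareAcceleratedMove is 1
If an option is disabled, it can be set with the -s flag, for example:
# esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove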
NOTE: For complete and detailed instructions for configuring a server with a given Converged Network Adapter, refer to the CNA manufacturer documentation.
The FCoE switch or FCoE forwarder must be able to convert FCoE traffic to FC and also be able to trunk this traffic to the fabric that the HP 3PAR StoreServ Storage target ports are connected to. FCoE switch VLANs and routing setup and configuration are beyond the scope of this document. Consult your switch manufacturer's documentation for instructions on how to set up VLANs and routing.
1. Install the CNA card in the server just like any other PCIe card; refer to the server vendor documentation.
2. Install the CNA card driver following the CNA card installation instructions (this assumes the server is already running a supported operating system).
3. Physically connect the server CNA card ports to the FCoE Forwarder switch and configure the FCoE Forwarder switch ports; refer to the switch vendor documentation.
4. Configure the HP 3PAR StoreServ Storage ports in accordance with the guidelines in section Configuring the HP 3PAR StoreServ Storage Running HP 3PAR OS 3.1.x or OS 2.3.x (page 7), and connect the HP 3PAR StoreServ Storage ports either to the FCoE Forwarder FC switch ports or to the Fibre Channel fabric connected to the FCoE Forwarder.
5. Create FC zones for the host initiator ports and the HP 3PAR StoreServ Storage target ports. Once the initiators have logged in to the HP 3PAR StoreServ Storage target ports, create a host definition and provision storage to the host (see the example below).
NOTE: It is not possible to connect a server with a CNA directly to the HP 3PAR StoreServ Storage. An FCoE Forwarder switch must be used.
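A minimal sketch of the host definition step using the HP 3PAR OS CLI (the host name, WWNs, and persona value are illustrative assumptions; choose the persona appropriate to your HP 3PAR OS version):
# createhost -persona 11 esx-host01 1122334455667788 1122334455667799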
6. All contents of the Fibre Channel section of this guide apply to FCoE connectivity as well. See the following sections:
Multipath Failover Considerations and I/O Load Balancing (page 30)
Performance Considerations for Multiple Host Configurations (page 40)
ESX/ESXi 4.1, ESXi 5.x Additional Feature Considerations (page 44)
NOTE: When multiple teamed NICs are configured, all HP 3PAR StoreServ Storage iSCSI ports and ESX server iSCSI NICs must be in the same VLAN of the IP switch.
NOTE: NIC teaming is not supported in ESXi 5.x.
4. Select VMkernel and click Next. This lets you connect the VMkernel, which runs services for iSCSI storage, to the physical network. The Network Access page appears.
5. Select Create a Virtual Switch and select the NICs that will be used for iSCSI. (In this example, two NICs are selected to configure active/active teamed NICs that will connect to the HP 3PAR storage array.)
6. Configure Active/Active NIC teaming by bringing up all of the NIC adapters being used as "Active Adapters" in the vSwitch Properties. For each ESX server, use the VI/vSphere client Configuration tab→Networking→Properties, click the Edit button, and then highlight and use the Move Up button to bring each of the NIC adapters being used for NIC teaming from the Standby Adapters or Unused Adapters section to the Active Adapters section. The screen below shows that this has been completed for NIC adapters vmnic1 and vmnic2.
7. Click OK to complete.
NOTE: HP recommends an Active/Active NIC Teaming configuration for best failover performance. NIC Teaming, however, is not supported with ESXi 5.x.
8. Click Next.
9. Select a network label and the IP address that will be used for the iSCSI network.
10. Click Next. A window shows the settings about to be completed.
11. Click Close. A window will appear stating that no DNS setting and gateways have been set.
12. Add a VMkernel default gateway that is in the same subnet as the iSCSI network. Refer to the VMware ESX Server Configuration Guide for detailed instructions regarding these settings.
13. Click Finish when you have completed all the necessary changes.
2. Click Add.
3. Select the radio button for VMkernel to add support for host management traffic.
4. Click Next.
5. Enter the IP address for the service console used to communicate with the iSCSI software initiator. The IP address must be in the same subnet as the iSCSI network.
6. Click Next. A window appears showing the changes/additions that have been made.
7. Click Finish.
8. Close all windows associated with the network configuration.
9. The new network configuration is displayed with the addition of the iSCSI network. You should now be able to ping the HP 3PAR StoreServ Storage ports that were previously defined from the service console (COS).
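For example (a sketch; 10.1.1.10 is a placeholder for one of the previously defined HP 3PAR StoreServ Storage iSCSI port IP addresses):
# ping 10.1.1.10
To test connectivity through the VMkernel port specifically, vmkping can be used instead:
# vmkping 10.1.1.10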
2. Open up the ports that will be used for the iSCSI connection, then click OK. The iSCSI software initiator needs to be enabled before the ESX server can use it.
3. Click the Storage Adapters option in the Hardware menu box.
4. From the ESX Server Configuration tab, select iSCSI Software Adapter.
5. Click the Properties tab.
6. Select the General tab.
7. Click Configure...
8. Select the Enabled check box for the status.
9. Click OK.
10. Click the Dynamic Discovery tab. Dynamic discovery enables the Send Targets discovery method, where the initiator sends the Send Targets request to discover and log into the targets.
11. Click Add....
12. Enter the IP address of one of the previously defined HP 3PAR StoreServ Storage iSCSI ports.
13. Click OK.
14. Add additional HP 3PAR StoreServ Storage iSCSI ports if they exist and have been defined on the system.
15. When all of the desired HP 3PAR StoreServ Storage iSCSI ports have been added to the Dynamic Discovery tab, close this window.
16. Reboot the ESX server. If virtual machines are active, shut them down or suspend them before rebooting. The ESX server and HP 3PAR StoreServ Storage should now be configured for use. Using the showhost command on the HP 3PAR StoreServ Storage, the new iSCSI connections should now show as present.
# showhost
Id Name --
As new LUNs are exported to the ESX iSCSI host, a rescan must be performed on the iSCSI software adapter. This is accomplished in the same manner that Fibre Channel LUNs would be rescanned.
Click Rescan, select the Scan for New Storage Devices check box, then click OK to rescan for new LUNs exported to the ESX iSCSI server.
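The rescan can also be performed from the service console (a sketch; vmhba33 is a placeholder for the iSCSI software adapter name on your system):
# esxcfg-rescan vmhba33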
NOTE: If you are running a Windows Server 2008 (W2K8) VM cluster with RDM-shared LUNs, individually change these specific RDM LUNs from Round Robin policy to Fixed Path policy.
NOTE: HP recommends changing the ESX Scsi.ConflictRetries from its default value of 80 to a value of 200 when connected to an HP 3PAR StoreServ Storage running HP 3PAR OS version 2.2.4 or earlier. This change lengthens the time allowed for I/O retries due to ESX SCSI-2 reservation conflicts on VMFS LUNs caused by delayed processing of SCSI-2 reservation commands on the HP 3PAR StoreServ Storage, thereby helping to avoid VM I/O failure. To change this value using the VMware ESX VI/vSphere client: select ESX server→Configuration→Software Advanced Settings→Scsi, scroll to Scsi.ConflictRetries, and change the value in the field. Click OK to complete the change. A reboot is not required for the change to take effect.
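The same setting can likely be changed from the service console as well (a sketch; the /Scsi/ConflictRetries option path is an assumption based on the advanced-settings naming scheme, so verify it with esxcfg-advcfg -g before setting it):
# esxcfg-advcfg -s 200 /Scsi/ConflictRetries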
Here is an example:
# createvv -cnt 5 TESTLUNs 5G
Here is an example:
# createaldvv -cnt 5 TESTLUNs 5G
This creates five virtual volumes of 5 GB each, fully provisioned from PDs.
NOTE: To create a fully-provisioned or thinly-provisioned virtual volume, consult the HP 3PAR Command Line Interface Reference, which is available on the HP BSC website:
http://www.hp.com/go/3par/
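The newly created volumes can then be verified with showvv (a sketch; the TESTLUNs.* pattern assumes the example volume names above):
# showvv TESTLUNs.*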
For failover support using the QLogic or Emulex drivers, virtual volumes should be exported down multiple paths to the host server simultaneously. To facilitate this task, create a host definition on the HP 3PAR StoreServ Storage that includes the WWNs of multiple HBA ports on the host server and export the VLUNs to that host definition. HP has observed that provisioning several VMs on a smaller number of large LUNs, rather than a single VM per LUN, provides better overall results. Further examination and explanation of this recommendation is outlined in the document 3PAR Utility Storage with VMware vSphere, which is available on the HP BSC website:
http://www.hp.com/go/3par/
Concerning TPVVs: ESX server VMFS-3 does not write data to the entire volume at initialization, so VMFS can be used with TPVVs without any configuration changes. A further examination of this subject, including recommendations and limitations, is explored in the HP document 3PAR Utility Storage with VMware vSphere.
You have the option of exporting the LUNs through the HP 3PAR Management Console or the HP 3PAR OS CLI.
To create a "host sees" or "host set" VLUN template, issue the following command:
# createvlun [options] <VV_name | VV_set> <LUN> <host_name/set>
Here is an example:
# createvlun -cnt 5 TESTLUNs.0 0 hostname/hostdefinition
or:
# createvlun -cnt 5 TESTVLUN.0 0 set:hostsetdefinition
Consult the HP 3PAR Management Console Help and the HP 3PAR Command Line Interface Reference for complete details on exporting volumes and available options for the HP 3PAR OS version that is being used on the HP 3PAR StoreServ Storage. These documents are available on the HP BSC website: http://www.hp.com/go/3par/ NOTE: The commands and options available for creating a virtual volume may vary for earlier versions of the HP 3PAR OS.
Removing Volumes
After removing a VLUN exported from the HP 3PAR StoreServ Storage, perform an ESX host bus adapter rescan. ESX will update the disk inventory upon rescan. This applies to both Fibre Channel and iSCSI.
It is advisable to remove the disk/LUN from the virtual machine inventory, detach it from the ESX server, and then remove it from the HP 3PAR StoreServ Storage using the removevlun and removevv HP 3PAR OS CLI commands or the HP 3PAR Management Console. If a LUN is not detached but is removed from the HP 3PAR StoreServ Storage, it appears as a device in an error state, and is cleared after an ESX server reboot.
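A minimal sketch using the example volume names from the previous section (the names and LUN ID are illustrative; note that removevv permanently deletes the volume and its data):
# removevlun TESTLUNs.0 0 hostname
# removevv TESTLUNs.0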
If the user has set a hard limit on the HP 3PAR StoreServ Storage volume when using HP 3PAR OS 2.3.1 MU2 or later, the HP 3PAR StoreServ Storage will post the write protect error (ASC/Q: 0x27/0x7) when the limit is reached. This is considered a hard error, and the VM is stunned in ESX 4.x and ESXi 5.x. Additional space needs to be added to the storage to clear the error condition.
PDT 1 Debug Host error undefined Port 1:5:2 -- SCSI status 0x02 (Check condition) Host:sqa-dl380g5-14-esx5 (WWN 2101001B32A4BA98) LUN:22 LUN WWN:50002ac00264011c VV:612
CDB:2A00005D6CC800040000 (Write10) Skey:0x07 (Data protect) asc/q:0x27/07 (Space allocation failed write protect)
VVstat:0x41 (VV_ADM_NO_R5 -- No space left on snap data volume) after 0.302s (Abort source unknown) toterr:74886, lunerr:3
For ESX 4.x or ESXi 5.x, an ESX server managed by vSphere client or vCenter Server will scan for LUNs at 5-minute intervals by issuing the REPORT LUN command and discovering any new LUNs.
On ESXi 5.x, the following vmkernel log message may appear during these scans; it is not harmful and can be ignored:
34:41.371Z cpu0:7701)WARNING: ScsiDeviceIO: 6227: The device naa.50002ac00026011b does not permit the system to change the sitpua bit to 1.
For ESXi 5.x, the maximum virtual disk (VMDK) size is 2 TB, even though a physical LUN seen by the system can be as large as 16 TB.
8 Booting the VMware ESX Server from the HP 3PAR StoreServ Storage
This chapter provides a general overview of the procedures that are required to boot the VMware ESX operating system from the SAN. In a boot-from-SAN environment, each ESX server's operating system is installed on the HP 3PAR StoreServ Storage, rather than on the host's internal disk. In this situation, you should create a separate virtual volume for each ESX server to be used for the boot image. Here are the general steps in this process:
Boot the HBA BIOS
Enable the HBA port to boot
Perform the required zoning
Create a virtual volume and export it as a VLUN to the ESX host (see the sketch after this list)
Discover the LUN and designate it as bootable through the HBA BIOS
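A minimal sketch of the volume step using the HP 3PAR OS CLI, following the document's earlier createvv/createvlun examples (the volume name, size, LUN ID, and host name are illustrative assumptions):
# createvv ESXBOOT 10G
# createvlun ESXBOOT 0 esx-host01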
For detailed information, consult the VMware Fibre Channel SAN Configuration Guide. For information about setting a CN1100E CNA to boot the host from SAN, see Hardware iSCSI Support (page 23).
The VMware ESX server has an option that allows the VMware base OS to be installed and booted from a SAN or HP 3PAR StoreServ Storage virtual storage device. You can choose this option during the initial installation phase of the VMware server installation. Refer to the VMware documentation for further information regarding 'SAN boot'.
HP makes the following general recommendations for preparing the host HBAs for a SAN boot deployment:
NOTE: The NVRAM settings on HBAs can be changed by any server in which they are installed. These settings persist in the HBA even after it is removed from a server. To obtain the correct settings for this configuration, you must return all NVRAM settings to their default values.
1. After installation of the HBAs, reset all of the HBA NVRAM settings to their default values.
NOTE: Each HBA port is reported as a host bus adapter, and the HBA settings should be set to default.
2. Enter the HBA setup program during server boot by pressing the combination of keys indicated by the HBA. For example, press one of the following key combinations:
Alt+Q or Ctrl+Q for QLogic HBAs
Alt+E for Emulex HBAs
Alt+B or Ctrl+B for Brocade HBAs
Ctrl+S for the Emulex CN1100E
Change and save all HBA settings to their default settings.
NOTE: When using a McDATA fabric, set the HBA topology to 'point to point'.
There may be other vendor HBAs not listed here with different setup-entry key combinations.
3. Reboot the host computer.
HP 3PAR documentation
For information about: / See:
Supported hardware and software platforms: The Single Point of Connectivity Knowledge for HP Storage Products (SPOCK) website: http://www.hp.com/storage/spock
Locating HP 3PAR documents: The HP 3PAR StoreServ Storage site: http://www.hp.com/go/3par (to access HP 3PAR documents, click the Support link for your product)
HP 3PAR storage system software
Storage concepts and terminology: HP 3PAR StoreServ Storage Concepts Guide
Using the HP 3PAR Management Console (GUI) to configure and administer HP 3PAR storage systems: HP 3PAR Management Console User's Guide
Using the HP 3PAR CLI to configure and administer storage systems: HP 3PAR Command Line Interface Administrator's Manual
CLI commands: HP 3PAR Command Line Interface Reference
Analyzing system performance: HP 3PAR System Reporter Software User's Guide
Installing and maintaining the Host Explorer agent in order to manage host configuration and connectivity information: HP 3PAR Host Explorer User's Guide
Creating applications compliant with the Common Information Model (CIM) to manage HP 3PAR storage systems: HP 3PAR CIM API Programming Reference
Migrating data from one HP 3PAR storage system to another: HP 3PAR-to-3PAR Storage Peer Motion Guide
Configuring the Secure Service Custodian server in order to monitor and control HP 3PAR storage systems: HP 3PAR Secure Service Custodian Configuration Utility Reference
Using the CLI to configure and manage HP 3PAR Remote Copy: HP 3PAR Remote Copy Software User's Guide
Updating HP 3PAR operating systems: HP 3PAR Upgrade Pre-Planning Guide
Identifying storage system components, troubleshooting information, and detailed alert information: HP 3PAR F-Class, T-Class, and StoreServ 10000 Storage Troubleshooting Guide
Installing, configuring, and maintaining the HP 3PAR Policy Server: HP 3PAR Policy Server Installation and Setup Guide; HP 3PAR Policy Server Administration Guide
Hardware specifications, installation considerations, power requirements, networking options, and cabling information for HP 3PAR storage systems:
HP 3PAR 7200 and 7400 storage systems: HP 3PAR StoreServ 7000 Storage Site Planning Manual
HP 3PAR 10000 storage systems: HP 3PAR StoreServ 10000 Storage Physical Planning Manual; HP 3PAR StoreServ 10000 Storage Third-Party Rack Physical Planning Manual
Installing and maintaining HP 3PAR 7200 and 7400 storage systems:
Installing 7200 and 7400 storage systems and initializing the Service Processor: HP 3PAR StoreServ 7000 Storage Installation Guide; HP 3PAR StoreServ 7000 Storage SmartStart Software User's Guide
Maintaining, servicing, and upgrading 7200 and 7400 storage systems: HP 3PAR StoreServ 7000 Storage Service Guide
Troubleshooting 7200 and 7400 storage systems: HP 3PAR StoreServ 7000 Storage Troubleshooting Guide
Maintaining the Service Processor: HP 3PAR Service Processor Software User Guide; HP 3PAR Service Processor Onsite Customer Care (SPOCC) User's Guide
HP 3PAR host application solutions:
Backing up Oracle databases and using backups for disaster recovery: HP 3PAR Recovery Manager Software for Oracle User's Guide
Backing up Exchange databases and using backups for disaster recovery: HP 3PAR Recovery Manager Software for Microsoft Exchange 2007 and 2010 User's Guide
Backing up SQL databases and using backups for disaster recovery: HP 3PAR Recovery Manager Software for Microsoft SQL Server User's Guide
Backing up VMware databases and using backups for disaster recovery: HP 3PAR Management Plug-in and Recovery Manager Software for VMware vSphere User's Guide
Installing and using the HP 3PAR VSS (Volume Shadow Copy Service) Provider software for Microsoft Windows: HP 3PAR VSS Provider Software for Microsoft Windows User's Guide
Best practices for setting up the Storage Replication Adapter for VMware vCenter: HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager Implementation Guide
Troubleshooting the Storage Replication Adapter for VMware vCenter Site Recovery Manager: HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager Troubleshooting Guide
Installing and using vSphere Storage APIs for Array Integration (VAAI) plug-in software for VMware vSphere: HP 3PAR VAAI Plug-in Software for VMware vSphere User's Guide
Typographic conventions
Table 4 Document conventions
Convention: Element
Bold text: Keys that you press; text you typed into a GUI element, such as a text box; GUI elements that you click or select, such as menu items, buttons, and so on
Monospace text: File and directory names; system output; code; commands, their arguments, and argument values
<Monospace text in angle brackets>: Code variables; command variables
Bold monospace text: Commands you enter into a command line interface; system output emphasized for scannability
WARNING! Indicates that failure to follow directions could result in bodily harm or death, or in irreversible damage to data or to the operating system.
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
Required: Indicates that a procedure must be followed as directed in order to achieve a functional and supported implementation based on testing at HP.
10 Documentation feedback
HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback ([email protected]). Include the document title and part number, version number, or the URL when submitting your feedback.