Assignment 1
Reporting Problems
To send comments or report errors regarding this document,
please email: [email protected].
For issues not related to this document, contact your service provider.
Refer to Document ID:
1416882349914
Content Creation Date: November 24, 2014
If you are configuring your system connections to your ESX or Hyper-V server, you must
attach the ESX or Hyper-V server to your system before performing the steps in this
document. For information on attaching an ESX or Hyper-V server to your system,
generate a new document and select the appropriate ESX or Hyper-V server version.
Read the release notes for your system, which are available on the EMC Online
Support website.
You must have a supported Windows host on the same network as the system
management ports. You can use this host to run the Unisphere Service Manager, which
runs only on a Windows host.
You must have a Unisphere Server with a supported Internet browser that is on the
same network as the system management ports. This host can also be the server or a
Unisphere management station. For supported Internet browsers, see the Unisphere
release notes on the EMC Online Support website.
You must have one or more supported Fibre Channel host bus adapters (HBAs), which
may already be installed in the server. These adapters must have the latest
supported BIOS and driver.
We recommend that you do not mix Fibre Channel HBAs from different vendors in the
same server.
Each storage-processor (SP) Fibre Channel port that you will use on the system must
have an optical cable. These cables may already be connected for a configuration
with an existing system or server. We strongly recommend that you use OM3 50 μm
cables. For cable specifications, refer to the system's technical specifications.
You must have a method for writing data to a LUN on the system that will test the
paths from the server to the system. You can download an I/O simulator (Iometer)
from the following website: http://www.iometer.org.
For information on supported HBAs, BIOS, and drivers, refer to the E-Lab Interoperability
Navigator on the EMC Online Support website.
Note
We recommend that you never mix HBAs from different vendors in the same server.
Installing HBAs
CAUTION
HBAs are very susceptible to damage caused by static discharge and need to be handled
accordingly. Before handling HBAs, observe the following precautions:
Never plug or unplug HBAs with the power on. Severe component damage can result.
Procedure
1. If the server is powered up:
a. Shut down the server's operating system.
b. Power down the server.
c. Unplug the server's power cord from the power outlet.
2. Put on an ESD wristband, and clip its lead to bare metal on the server's chassis.
3. For each HBA that you are installing:
a. Locate an empty PCI bus slot or a slot in the server that is preferred for PCI cards.
b. Install the HBA following the instructions provided by the HBA vendor.
c. If you installed a replacement HBA, reconnect the cables that you removed exactly
as they were connected to the original HBA.
4. Plug the server's power cord into the power outlet, and power up the server.
Results
Note
The HBA driver is also on the installation CD that ships with the HBA. However, this
version may not be the latest supported version.
If you have an Emulex driver, download the latest supported version and instructions
for installing the driver from the vendor's website:
http://www.emulex.com/products/fibre-channel-hbas.html
If you have a QLogic driver, download the latest supported version and instructions
for installing the driver from the vendor's website:
http://support.qlogic.com/support/oem_emc.asp
If you have a Brocade driver, download the latest supported version and instructions
for installing the driver from the vendor's website:
http://www.brocade.com/services-support/driversdownloads/HBA/HBA_EMC.page
Any updates, such as hot fixes or patches to the server's operating system, that are
required for the HBA driver version you will install.
For information on any required updates, refer to one of the following:
If an update is available
If an update is available, follow the instructions to install it as described on the
http://www.novell.com/linux/ website for SuSE or on the http://www.redhat.com website for
Red Hat.
You must install the host agent or server utility on your Hyper-V or ESX server if you:
If you configured your system connections to your Windows virtual machine with NICs,
install the host agent or server utility on the Windows virtual machine.
If you have a Hyper-V or ESX server, you must install the host agent or server utility on
your Hyper-V or ESX server. Do not install these programs on your virtual machine.
If you plan to install Navisphere CLI or Admsnap, you must install them on a virtual
machine. For instructions on installing these software programs on a virtual machine,
generate a new document and select the operating system running on the virtual
machine.
Before you begin
Refer to the sections below to determine which application to install for host registration
and the requirements for installing each of these applications.
To run Unisphere server software, your server must meet the requirements outlined in
Requirements for Unisphere server software on page 5.
To determine whether to install the Unisphere Host Agent or Unisphere Server Utility
to register your HBAs with the system, refer to Determining whether to install the
Unisphere Host Agent on page 6.
Unisphere Host Agent: see Installing the Unisphere Host Agent on page 7.
Navisphere CLI: see Installing VNX for Block Secure CLI on page 12.
For Fibre Channel connections, have the EMC VNX supported HBA hardware and
driver installed.
Installing Unisphere server software
Be connected to at least one SP (two SPs for high availability) in each system either
directly or through a switch or hub. Each SP must have an IP connection.
For the host agent and CLI only - Be on a TCP/IP network connected to at least one SP
(two SPs for high availability) in the system.
The TCP/IP network connection allows the server to send LUN mapping information to
the system and it allows Navisphere CLI to communicate with the system over the
network.
Have a configured TCP/IP network connection to any remote hosts that you will use to
manage the systems, including:
any AIX, HP-UX, Linux, VMware ESX Server, Solaris, or Windows server running
VNX for Block CLI.
If you want to use VNX for Block CLI on the server to manage systems on a remote server,
the server must be on a TCP/IP network connected to both the remote server and each SP
in the system. The remote server can be running AIX, HP-UX, Linux, Solaris, or the
Windows operating system.
Note
For information about the specific revisions of the server operating system, the system
VNX for Block OE, and Access Logix software that are required for your version of the
host agent, see the release notes for the host agent on the EMC Online Support website.
Monitor system events and notify personnel by e-mail, page, or modem when any
designated event occurs.
Retrieve LUN world wide name (WWN) and capacity information from VNX systems.
Table 1 Host registration differences between the host agent and the server utility
Functions compared include:
Runs automatically to send information to the system.
Requires network connectivity to the system.
where version and build are the version number and the build number of the software.
Note
If you have an IA64 system, you must install the 32-bit package and the 32-bit OS
compatibility packages. The 64-bit package is currently supported on x86_64
systems only. Refer to the release notes for any updates.
If you are upgrading the utility on the server, use -U in place of -i.
6. Verify that Host Agent is installed: rpm -qa | grep -i agent
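As an illustrative sketch only, the install and verify sequence looks like the following; the package filename is hypothetical, so substitute the name, version, and build of the host agent package you actually downloaded:
rpm -ivh NaviHostAgent-Linux-32-x86-en_US-version-build.i386.rpm
rpm -qa | grep -i agent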
Note
Before you can use the host agent, you must modify the user login scripts and
configure the host agent configuration file.
where name is the person's account name and name@hostname is the name of the
remote server the person will be using.
The default host agent configuration file includes a user root entry.
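A minimal sketch of these entries, assuming the default configuration file location /etc/Navisphere/agent.config (the account and host names below are hypothetical):
user root
user admin@mgmtstation01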
/etc/init.d/naviagent start
When a system experiences heavy input/output traffic (that is, applications are using the
system), information may not be reported to the host agent in a timely manner, resulting
in the host agent taking several minutes to execute a system management task. This
behavior is most evident when one host agent is managing multiple systems. Also, if the
SP event log is large and the host agent configuration file is set up to read all events, it
may take a few minutes for the host agent to start.
Procedure
1. Log in as root or the equivalent.
2. Enter the following command: /etc/init.d/hostagent start
Results
The host agent now starts automatically at system startup.
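As a quick additional check that the agent process is running, you can use a generic process listing (not a product-specific command):
ps -ef | grep -i agent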
It may take a few minutes for the host agent to start when:
The SP event log is large and the host agent configuration file is set up to read all events.
b. In the listing, verify the path for each HBA installed in the host to the SP.
If you have an IA64 system, you must install the 32-bit package. The 64-bit
package is currently supported on x86_64 systems only. Refer to the release
notes for any updates.
If you are upgrading the utility on the server, use -U in place of -i.
Results
The installation process adds a line to the /etc/rc.d/rc.local file that starts the
server utility on reboot, provided root has execute permission for the
/etc/rc.d/rc.local file.
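To confirm that root has execute permission on that file, and to grant it if necessary, you can use standard commands such as:
ls -l /etc/rc.d/rc.local
chmod u+x /etc/rc.d/rc.local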
Installing the VNX for Block Secure CLI on a Linux server or Linux virtual machine
We recommend that you download and install the most recent version of the VNX for
Block Secure CLI software from the applicable support by product page on the EMC
Online Support website.
Procedure
1. Log in to the root account.
2. If your server is behind a firewall, open the TCP/IP ports listed in TCP/IP ports on page
12.
These ports are used by VNX for Block CLI. If these ports are not opened, the software
will not function properly.
Table 2 TCP/IP ports
3. At the command line prompt, look for any existing CLI by typing: rpm -qi navicli
If an earlier version of the software has been installed, you must remove it before
continuing.
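For example, if the query reports an earlier navicli package, you can remove it before continuing (verify the exact package name that rpm reports before removing it):
rpm -e navicli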
4. Download the software:
a. From the EMC Online Support website, select the VNX Series Support by Product
page and select Downloads.
b. Select the VNX for Block Secure CLI, and then select the option to save the zip file
to your server.
c. At the command line prompt, navigate to the directory where you saved the zip file
and unzip the file.
d. Install the software:
32-bit server: rpm -ivh NaviCLI-Linux-32-x86-language-version-build.i386.rpm
64-bit server: rpm -ivh NaviCLI-Linux-64-x86-language-version-build.x86_64.rpm
where
language is either en_US, when only the English version is available, or loc, when
the localized versions are available (including English).
version and build are the version number and the build number of the software.
Note
If you have an IA64 system, you must install the 32-bit package. The 64-bit
package is currently supported on x86_64 systems only. Refer to the release notes
for any updates.
If you are upgrading the utility on the server, use -U in place of -i.
The system displays the word navicli or naviseccli and a series of pound (#) signs.
When the installation is complete, the system prompt reappears.
5. Verify that VNX for Block Secure CLI is installed by using the rpm -qa | grep -i
navicli command.
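As a concrete sketch of steps 4d and 5 on a 64-bit server, assuming the English-only package (the version and build numbers below are hypothetical; substitute those of the package you downloaded):
rpm -ivh NaviCLI-Linux-64-x86-en_US-7.33.1.0.33-1.x86_64.rpm
rpm -qa | grep -i navicli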
Currently, you cannot install admsnap on your virtual machine if your virtual machine is
connected to Fibre Channel storage; you must install it on the Hyper-V server. For any
updates, refer to the EMC SnapView, Admsnap, and ADMhost Release Notes.
Installation prerequisites
Before you can install and use the Admsnap Utility, you must install SnapView on a
supported system.
For a list of supported systems, refer to the release notes for SnapView and admsnap.
If you are upgrading the utility on the server, use -U in place of -i.
The following files are installed in the /usr/admsnap directory:
/usr/admsnap/admsnap
/usr/admsnap/man/man1/admsnap.1
/usr/admsnap/readme
5. Verify that the correct version of admsnap is installed by entering the following
command: /usr/admsnap/admsnap help
This command displays a message about the help command, which includes the
revision number of the installed admsnap software.
6. Configure MANPATH to access the Linux man pages. Edit the /etc/man.config file
by adding the following line: MANPATH /usr/admsnap/man
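A short sketch of this step and its verification, assuming your distribution reads /etc/man.config (some distributions use a different file, such as /etc/manpath.config):
echo "MANPATH /usr/admsnap/man" >> /etc/man.config
man admsnap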
7. Configure sg devices by using the MAKEDEV utility.
For information on how to use the MAKEDEV utility, refer to the MAKEDEV man pages.
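As a hedged example on a distribution that ships the MAKEDEV script in /dev (check the MAKEDEV man page for the options supported on your system):
cd /dev
./MAKEDEV sg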
Keep the covers on all optical cables until you are ready to insert them.
Do not use optical cables to support weight (including their own unsupported weight
if they are long).
Do not pull long runs of cable; instead, lay the cable in place or pull only a few feet at
a time.
Place the cables where no one can step on them or roll equipment over them.
If the server has two HBAs, connect one HBA to a Fibre Channel host port on SP A and
the other HBA to a Fibre Channel host port on SP B.
If the server has four HBAs, connect two HBAs to separate Fibre Channel host ports on
SP A, and the other two HBAs to separate Fibre Channel host ports on SP B.
If the server has more than four HBAs, connect an HBA to a Fibre Channel host port on
SP A and another HBA to a Fibre Channel host port on SP B, and continue alternating
between Fibre Channel host ports on SP A and SP B until you have connected all the
HBAs or used all the available Fibre Channel host ports.
For each new HBA port you want to connect to a VNX Fibre Channel (FC) host port:
Procedure
1. Locate the FC host port to which you will connect the HBA port.
For information on identifying the host ports using Unisphere, refer to the Unisphere
online help.
Note
Booting from a SAN configuration - If you are booting from a SAN, do not register your
server using the Unisphere Host Agent or Unisphere Server Utility. You will perform a
manual registration in the next section.
You must run the Unisphere Server Utility on each server connected to the system to
register the server's HBAs with the system.
If you have a Hyper-V or ESX server, perform this procedure on your Hyper-V or ESX server.
For updates on SCSI pass through support for virtual machines with FC devices, refer to
the E-Lab Interoperability Navigator on the EMC Online Support website.
Procedure
1. If you do not have the Unisphere Service Manager running:
a. Download and install the Unisphere Service Manager from the EMC Online Support
website to a Windows management station that is connected to the system's
management ports. If you do not have a Windows management station, your
service provider can run this wizard.
b. Start the Unisphere Service Manager by doing one of the following:
Click the Unisphere Service Manager icon on your desktop,
or
Select Start > All Programs or Start > Programs, then select EMC > Unisphere >
Unisphere Service Manager > Unisphere Service Manager
2. Log in to your system.
3. From the System screen, select Diagnostics > Verify Storage System and follow the
instructions that appear.
4. Review the report that the wizard generates, and if it lists any problems, try to resolve
them.
blacklist
{
## Replace the wwid with the output of the command
## 'scsi_id -g -u -s /block/[internal scsi disk name]'
## Enumerate the wwid for all internal scsi disks.
## Optionally, the wwid of VCM database may also be listed
## here.
RHEL 5
Follow the instructions in the multipath.conf.annotated file for masking
internal SCSI disks or disks that need to be excluded from multipath control.
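As a sketch of that masking, the following finds the wwid of an internal disk and blacklists it; the device name and wwid value are hypothetical, so substitute your own internal disk and the value that scsi_id returns:
scsi_id -g -u -s /block/sda
blacklist {
        wwid 36006016012345678901234567890abcd
}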
#defaults {
#       udev_dir                /dev
#       polling_interval        10
#       selector                "round-robin 0"
#       path_grouping_policy    multibus
#       getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
#       prio_callout            /bin/true
#       path_checker            readsector0
#       rr_min_io               100
#       rr_weight               priorities
#       failback                immediate
#       no_path_retry           fail
#       user_friendly_names     yes
#}
##
## The wwid line in the following blacklist section is
## shown as an example of how to blacklist devices by wwid.
## The 3 devnode lines are the compiled in default blacklist.
## If you want to blacklist entire types of devices, such
## as all scsi devices, you should use a devnode line.
## However, if you want to blacklist specific devices, you
## should use a wwid line. Since there is no guarantee that
## a specific device will not change names on reboot
## (from /dev/sda to /dev/sdb for example)
## devnode lines are not recommended for blacklisting
## specific devices.
3. Perform a dry run and evaluate the setup with the following command: multipath -v2 -d
RHEL 4
The output looks similar to:
create: mpath21 (360060160b540160171f77f705558da11)
[size=10 GB][features="1
queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0
\_ 11:0:0:0 sdad 65:208
\_ round-robin 0
\_ 10:0:0:0 sdb 8:16
create: mpath72 (360060160b540160170f77f705558da11)
[size=10 GB][features="1
queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0
\_ 11:0:0:1 sdae 65:224
\_ round-robin 0
\_ 10:0:0:1 sdc 8:32
RHEL 5
The output looks similar to:
create: mpath3 (36006016005d01800b207271bb8ecda11) DGC,RAID 5
[size=3.1G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][undef]
\_ 5:0:0:21 sdar 66:176 [undef][ready]
\_ 1:0:0:21 sdj 8:144 [undef][ready]
\_ round-robin 0 [prio=0][undef]
\_ 4:0:0:21 sdad 65:208 [undef][ready]
\_ 6:0:0:21 sdbf 67:144 [undef][ready]
create: mpath4 (36006016005d018005714f1abbbecda11) DGC,RAID 5
[size=6.3G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][undef]
\_ 4:0:0:28 sdae 65:224 [undef][ready]
\_ 6:0:0:28 sdbg 67:160 [undef][ready]
\_ round-robin 0 [prio=0][undef]
\_ 5:0:0:28 sdas 66:192 [undef][ready]
\_ 1:0:0:28 sdk 8:160 [undef][ready]
For SLES 9 and 10, if required, upgrade the udev package and then use the following
command to recreate the new devices:
SLES 9: udevstart
SLES 10: /etc/init.d/boot.udev restart
2. For SLES 9, edit the /etc/sysconfig/hotplug file to disable the auto-mount
option as follows:
HOTPLUG_USE_SUBFS=no
devices {
        ## Device attributes for EMC CLARiiON
        device {
                vendor                  "DGC"
                product                 "*"
                path_grouping_policy    group_by_prio
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler        "1 emc"
                features                "1 queue_if_no_path"
                no_path_retry           300
                path_checker            emc_clariion
                failback                immediate
        }
}
SLES 10 or higher
The default configuration options for EMC systems are included as part of the
multipath package, so you do not need to replace the default
/etc/multipath.conf file.
5. Perform a dry run and evaluate the setup with the following command: multipath -v2 -d
SLES 9
The output looks similar to:
create: 360060160aa4018002ae6839182a8da11
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=2]
 \_ 3:0:0:6 sdbe 67:128 [ready]
 \_ 2:0:0:6 sdbk 67:128 [ready]
\_ round-robin 0
 \_ 3:0:1:6 sdbr 68:80 [ready]
 \_ 2:0:1:6 sdi 8:128 [ready]
create: 360060160aa4018002ce6839182a8da11
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=2]
 \_ 3:0:0:7 sdbf 67:144 [ready]
 \_ 2:0:0:7 sdbl 67:128 [ready]
\_ round-robin 0
 \_ 3:0:1:7 sdbs 68:96 [ready]
 \_ 2:0:1:7 sdj 8:144 [ready]
SLES 10 or higher
The output looks similar to:
create: mpath12 (360060160782918000be69f3182b2da11) DGC,RAID 3
[size=3G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][undef]
\_ 10:0:3:0 sdaq 66:160 [undef][ready]
\_ 11:0:3:0 sdcw 70:64 [undef][ready]
\_ round-robin 0 [prio=0][undef]
\_ 11:0:2:0 sdce 69:32 [undef][ready]
\_ 10:0:2:0 sdy 65:128 [undef][ready]
create: mpath13 (360060160782918004f64715b83b2da11) DGC,RAID 3
[size=3G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][undef]
\_ 11:0:2:1 sdcf 69:48 [undef][ready]
\_ 10:0:2:1 sdz 65:144 [undef][ready]
\_ round-robin 0 [prio=0][undef]
\_ 10:0:3:1 sdar 66:176 [undef][ready]
\_ 11:0:3:1 sdcx 70:80 [undef][ready]
Starting Unisphere
Procedure
1. Log in to a host (which can be a server) that is connected through a network to the
system's management ports and that has an Internet browser: Microsoft Internet
Explorer, Netscape, or Mozilla.
2. Start the browser.
3. In the browser window, enter the IP address of one of the following that is in the same
domain as the systems that you want to manage:
A system SP with the most recent version of the VNX Operating Environment (OE)
installed
A Unisphere management station with the most recent Unisphere Server and UIs
installed
Note
If you do not have a supported version of the JRE installed, you will be directed to the
Sun website where you can select a supported version to download. For information
on the supported JRE versions for your version of Unisphere, refer to Environment and
System Requirements in the Unisphere release notes on the EMC Online Support
website.
4. Enter your user name and password.
5. Select Use LDAP if you are using an LDAP-based directory server to authenticate user
credentials.
If you select the Use LDAP option, do not include the domain name.
When you select the LDAP option, the username / password entries are mapped to an
external LDAP or Active Directory server for authentication. Username / password
pairs whose roles are not mapped to the external directory will be denied access. If
the user credentials are valid, Unisphere stores them as the default credentials.
6. Select Options to specify the scope of the systems to be managed.
Global (default) indicates that all systems in the domain and any remote domains can
be managed. Local indicates that only the targeted system can be managed.
7. Click Login.
When the user credentials are successfully authenticated, Unisphere stores them as
the default credentials and the specified system is added to the list of managed
systems in the Local domain.
8. If you are prompted to add the system to a domain, add it now.
The first time that you log in to a system, you are prompted to add the system to a
Unisphere domain. If the system is the first one, create a domain for it. If you already
have systems in a domain, you can either add the new system to the existing domain
or create a new domain for it. For details on adding the system to a domain, use the
Unisphere help.
Committing VNX for Block Operating Environment (OE) software with Unisphere
If you did not install a VNX for Block OE update on the system, you need to commit the
VNX for Block OE software now.
Procedure
1. From Unisphere, select All Systems > System List.
2. From the Systems page, right-click the entry for the system for which you want to
commit the VNX for Block OE and select Properties.
3. Click the Software tab, select VNX-Block-Operating-Environment, and click Commit.
4. Click Apply.
Procedure
1. In the systems drop-down list on the menu bar, select a system.
2. Select Hosts > Storage Groups.
3. Under Storage Groups, select Create.
4. In Storage Group Name, enter a name for the Storage Group to replace the default
name.
5. Choose from the following:
Click OK to create the new Storage Group and close the dialog box, or
Click Apply to create the new Storage Group without closing the dialog box. This
allows you to create additional Storage Groups.
6. Select the storage group you just created and click Connect Hosts.
7. Move the host from Available Hosts to Hosts to be Connected and click OK.
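If you prefer to script this step with the CLI installed earlier, a rough equivalent is sketched below; the SP address, storage group name, and host name are hypothetical, and you should confirm the exact syntax in the CLI reference for your release:
naviseccli -h 10.1.1.10 storagegroup -create -gname ServerSG1
naviseccli -h 10.1.1.10 storagegroup -connecthost -host server01 -gname ServerSG1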
Verifying that Linux native multipath failover sees all paths to the LUNs
At a Linux prompt, view all LUNs this server has access to by entering the multipath -ll command. This command shows all LUNs that are available.
Sample output with two LUNs:
multipath -ll
mpath1 (360060160a1f01600ef72a9c70d8fda11) DGC,RAID 5
[size=5G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
\_ 2:0:1:1 sdh 8:112 [active][ready]
\_ 3:0:1:1 sdo 8:224 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:0:1 sdc 8:32 [active][ready]
\_ 3:0:0:1 sdi 8:128 [active][ready]
mpath2 (360060160ed301800d68dc75538f6da11) DGC,RAID 3
[size=3G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
\_ 2:0:0:2 sdd 8:48 [active][ready]
\_ 3:0:0:2 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=0][enabled]
\_ 2:0:1:2 sdk 8:160 [active][ready]
\_ 3:0:1:2 sdp 8:240 [active][ready]
In the output above, there are four paths from the server to the system.
The first line shows the alias name, WWN, vendor ID and RAID type of the LUN.
The second line shows unique system parameters used by the multipath driver.
The remaining output shows two path groups. The first path group, Active, has two
active paths and uses round-robin I/O load balancing. The second group, Enabled, is
available for use if the Active path fails, using round-robin I/O load balancing.
where host:channel:target:lun indicates the device mapping, for example, 2:0:0:2.
You can also view information for an individual LUN by using the multipath -ll
alias_name|WWN command. For example, multipath -ll mpath2 or multipath -ll
360060160ed301800a5e373e537f6da11.
Verifying that the system received the LUN information using Unisphere
Procedure
1. From Unisphere, select your system.
2. Select Hosts > Hosts List.
3. Select a host and then, on the Details page, click the LUNs tab.
4. Verify that the LUNs tab displays a physical drive and logical drive name for each LUN
on the host.
The server can send data to and receive data from the system.
Note
You can download an I/O simulator (Iometer) for writing data to the system from the
following website: http://www.iometer.org/.
Linux native multipath failover shows the paths from the server to the LUNs that you
expect for your configuration.
The Linux native multipath failover software returns several path states including:
This output shows two active (ready) paths: one from HBA 2 through SP A to LUN 1
and one from HBA 3 through SP B to LUN 1. The device mapping (for example, 2:0:0:1)
uses the mapping format host:channel:target:lun. See the Linux MPIO documentation
for further information.
This output shows four active (ready) paths from the server through each system SP to
LUN 1. The device mapping (for example, 2:0:1:1) uses the mapping format
host:channel:target:lun. See the Linux MPIO documentation for further information.
2. Disconnect a path from an HBA (HBA 2 in this example) by unplugging the cable from
the HBA.
3. With the path disconnected, enter the following command to show the active and
failed paths: multipath -ll
Sample output for a direct connection is shown below. It may vary depending on the
Linux version running on the server.
.
.
query command indicates error
mpath1 (360060160a1f01600ef72a9c70d8fda11) DGC,RAID 5
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 2:0:1:1 sde 8:64 [failed][faulty]
\_ round-robin 0 [enabled]
\_ 3:0:1:1 sdi 8:128 [active][ready]
The output shows that the path from HBA 2 has failed. The MPIO failover software
failed the path from HBA 2 over to HBA 3.
4. Reconnect the path.
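To confirm that the reconnected path recovers, rerun the path listing:
multipath -ll
The path that previously showed [failed][faulty] should return to [active][ready] after the next path check.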