
Here is Your Customized Document

Your Configuration is:


Attach a server
Operating System - Linux
Path Management Software - Linux native
Model - VNX5200
Connection Type - Fibre Channel Direct
Document ID - 1416882349914

Reporting Problems
To send comments or report errors regarding this document,
please email: [email protected].
For issues not related to this document, contact your service provider.
Refer to Document ID:
1416882349914
Content Creation Date November 24, 2014

EMC VNX Series


Attaching a Server to a Configuration
November, 2014
This document explains how to attach a Linux Server with Native Multipath Failover to a
VNX in a Fibre Channel direct configuration.
The main topics in this document are:

Before you start..........................................................................................................2


Installing HBAs in the server.......................................................................................2
Installing or updating the HBA driver.......................................................................... 3
Installing or updating the multipath tools package.....................................................4
Installing Unisphere server software.......................................................................... 5
Connecting the VNX to the server in a Fibre Channel direct configuration..................14
Determining if your server has a supported configuration.........................................16
Registering the server with the system..................................................................... 16
Verifying system health............................................................................................17
Configuring native multipath failover on the server...................................................18
Configuring your VNX system....................................................................................24
Preparing LUNs to receive data.................................................................................27
Sending Linux disk information to the system.......................................................... 27
Verifying your Linux native multipath failover configuration......................................28

Before you start


Note

This document uses the term system to refer to your VNX.


If you are an EMC partner, refer to the EMC online support website (support.emc.com) to
download the software mentioned in this guide.
NOTICE

If you will configure your system connections to your ESX or Hyper-V server, you must
attach the ESX or Hyper-V server to your system prior to performing the steps in this
document. For information on attaching an ESX or Hyper-V server to your system,
generate a new document and select the appropriate ESX or Hyper-V server version.
- Read the release notes for your system, which are available on the EMC Online
  Support website.
- You must have a supported Windows host on the same network as the system
  management ports. You can use this host:
  - As a client in which you launch the Unisphere software.
  - To run the Unisphere Service Manager, which runs only on a Windows host.
  - As an EMC Secure Remote Support (ESRS) IP Client, which must be a Windows
    host, but cannot be a server (that is, it cannot send I/O to the system data ports).
- You must have a Unisphere Server with a supported Internet browser that is on the
  same network as the system management ports. This host can also be the server or a
  Unisphere management station. For supported Internet browsers, see the Unisphere
  release notes on the EMC Online Support website.
- You must have one or more supported Fibre Channel host bus adapters (HBAs), which
  may already be installed in the server. These adapters must have the latest
  supported BIOS and driver.
  We recommend that you do not mix Fibre Channel HBAs from different vendors in the
  same server.
- Each storage-processor (SP) Fibre Channel port that you will use on the system must
  have an optical cable. These cables may already be connected for a configuration
  with an existing system or server. We strongly recommend you use OM3 50 µm
  cables. For cable specifications, refer to the system's technical specifications.
- You must have a method for writing data to a LUN on the system that will test the
  paths from the server to the system. You can download an I/O simulator (Iometer)
  from the following website: http://www.iometer.org.
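If you prefer a command-line alternative to Iometer for this path test, the following is a
minimal sketch using dd (it assumes the test LUN is exposed as a multipath device named
/dev/mapper/mpath1, a hypothetical name; the first command destroys any data on that LUN):

# Write 100 MB of test data to the LUN, then read it back
dd if=/dev/zero of=/dev/mapper/mpath1 bs=1M count=100
dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=100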

Installing HBAs in the server


For the server to communicate with the system Fibre Channel data ports, it must have one
or more supported HBAs installed.

Before you start


To complete this procedure, you will need one or more supported HBAs with the latest
supported BIOS and drivers.

For information on supported HBAs, BIOS, and drivers, refer to the E-Lab Interoperability
Navigator on the EMC Online Support website.
Note

We recommend that you never mix HBAs from different vendors in the same server.

Installing HBAs
CAUTION

HBAs are very susceptible to damage caused by static discharge and need to be handled
accordingly. Before handling HBAs, observe the following precautions:

- Store HBAs in antistatic bags.
- Use a ground (ESD) strap whenever you handle HBAs.
- Never plug or unplug HBAs with the power on. Severe component damage can result.

Procedure
1. If the server is powered up:
a. Shut down the server's operating system.
b. Power down the server.
c. Unplug the server's power cord from the power outlet.
2. Put on an ESD wristband, and clip its lead to bare metal on the server's chassis.
3. For each HBA that you are installing:
a. Locate an empty PCI bus slot or a slot in the server that is preferred for PCI cards.
b. Install the HBA following the instructions provided by the HBA vendor.
c. If you installed a replacement HBA, reconnect the cables that you removed in the
exact same way as they were connected to the original HBA.
4. Plug the server's power cord into the power outlet, and power up the server.

Installing or updating the HBA driver


The server must run a supported operating system and a supported HBA driver. EMC
recommends that you install the latest supported version of the driver.
For information on the supported HBA drivers, refer to the E-Lab Interoperability
Navigator on EMC Online Support website.

Before you start


To complete this procedure, you will need:

- The latest supported version of the HBA driver.

  Note

  The HBA driver is also on the installation CD that ships with the HBA. However, this
  version may not be the latest supported version.
  If you have an Emulex driver, download the latest supported version and instructions
  for installing the driver from the vendor's website:
  http://www.emulex.com/products/fibre-channel-hbas.html
  If you have a QLogic driver, download the latest supported version and instructions
  for installing the driver from the vendor's website:
  http://support.qlogic.com/support/oem_emc.asp
  If you have a Brocade driver, download the latest supported version and instructions
  for installing the driver from the vendor's website:
  http://www.brocade.com/services-support/driversdownloads/HBA/HBA_EMC.page

- Any updates, such as hot fixes or patches to the server's operating system, that are
  required for the HBA driver version you will install.
  For information on any required updates, refer to one of the following:
  - E-Lab Interoperability Navigator on the EMC Online Support website
  - The HBA vendor's website

Installing the HBA driver


Procedure
1. Install any updates, such as hot fixes or patches, to the server's operating system that
are required for the HBA driver version you are installing.
2. If the hot fix or patch requires it, reboot the server.
3. Install the driver following the instructions on the HBA vendor's website.
4. Reboot the server when the installation program prompts you to do so. If the
installation program did not prompt you to reboot, then reboot the server when the
driver installation is complete.

Installing or updating the multipath tools package


Install or update the required multipath tools package from the appropriate website below.
The multipath tools package is installed by default on SuSE SLES 10 or higher and is not
installed by default on any version of Red Hat.
For SuSE:
http://www.novell.com/linux/
The multipath tools package is included with SuSE SLES 9 SP3, and you can install it with
YaST or RPM.
For Red Hat:
http://www.redhat.com
The multipath tools package is included with Red Hat RHEL4 U3 or RHEL5, and you can
install it with RPM or the Package Manager.


If an update is available
If an update is available, follow the instructions to install it as described on the
http://www.novell.com/linux/ website for SuSE or on the http://www.redhat.com website for
Red Hat.
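To check quickly whether the package is already present before visiting the vendor site, a
sketch using rpm (this assumes the usual package names, device-mapper-multipath on Red Hat
and multipath-tools on SuSE):

# Red Hat: report the installed multipath package, if any
rpm -q device-mapper-multipath
# SuSE: report the installed multipath package, if any
rpm -q multipath-tools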

Installing Unisphere server software


This section describes how to install Unisphere server software.
NOTICE

You must install the host agent or server utility on your Hyper-V or ESX server if you:

- configured your system connections to your Hyper-V or ESX server,
- have a non-Windows virtual machine, or
- have a Windows virtual machine with iSCSI HBAs.

If you configured your system connections to your Windows virtual machine with NICs,
install the host agent or server utility on the Windows virtual machine.
If you have a Hyper-V or ESX server, you must install the host agent or server utility on
your Hyper-V or ESX server. Do not install these programs on your virtual machine.
If you plan to install Navisphere CLI or Admsnap, you must install them on a virtual
machine. For instructions on installing these software programs on a virtual machine,
generate a new document and select the operating system running on the virtual
machine.
Before you begin
Refer to the sections below to determine which application to install for host registration
and the requirements for installing each of these applications.
- To run Unisphere server software, your server must meet the requirements outlined in
  Requirements for Unisphere server software on page 5.
- To determine whether to install the Unisphere Host Agent or Unisphere Server Utility
  to register your HBAs with the system, refer to Determining whether to install the
  Unisphere Host Agent on page 6.

Depending on which Unisphere server software you are installing, refer to the appropriate
section below.

- Unisphere Host Agent: see Installing the Unisphere Host Agent on page 7.
- Unisphere Server Utility: see Installing the Unisphere Server Utility on page 11.
- Navisphere CLI: see Installing VNX for Block Secure CLI on page 12.
- Admsnap Utility: see Installing the Admsnap Utility on page 13.

Requirements for Unisphere server software


To run Unisphere server software, your server must meet the following requirements:

- Run a supported version of the Linux operating system.
- For Fibre Channel connections, have the EMC VNX supported HBA hardware and
  driver installed.
- Be connected to at least one SP (two SPs for high availability) in each system, either
  directly or through a switch or hub. Each SP must have an IP connection.
- For the host agent and CLI only: be on a TCP/IP network connected to at least one SP
  (two SPs for high availability) in the system.
  The TCP/IP network connection allows the server to send LUN mapping information to
  the system, and it allows Navisphere CLI to communicate with the system over the
  network.
- Have a configured TCP/IP network connection to any remote hosts that you will use to
  manage the systems, including:
  - any server whose browser you will use to access Unisphere,
  - a supported Windows server running Unisphere Server software,
  - any AIX, HP-UX, Linux, VMware ESX Server, Solaris, or Windows server running
    VNX for Block CLI.

If you want to use VNX for Block CLI on the server to manage systems on a remote server,
the server must be on a TCP/IP network connected to both the remote server and each SP
in the system. The remote server can be running AIX, HP-UX, Linux, Solaris, or the
Windows operating system.
Note

For information about the specific revisions of the server operating system, the system

VNX for Block OE, and Access Logix software that are required for your version of the
host agent, see the release notes for the host agent on the EMC Online Support website.

Determining whether to install the Unisphere Host Agent


Depending on your application needs, you can install the host agent to:

- Monitor system events and notify personnel by e-mail, page, or modem when any
  designated event occurs.
- Retrieve LUN world wide name (WWN) and capacity information from VNX systems.
- Register the server's HBAs with the system.

Alternatively, you can use the Unisphere Server Utility to register the server's HBAs
with the system. Host registration differences between the host agent and the server
utility on page 6 describes the host registration differences between the host agent and
the server utility.

Table 1 Host registration differences between the host agent and the server utility

Pushes LUN mapping and OS information to the system
- Unisphere Host Agent: Yes. LUN mapping information is displayed in the Unisphere UI
  next to the LUN icon, or with the CLI using the Server volmap command.
- Unisphere Server Utility: No. LUN mapping information is not sent to the system. Only
  the server's name, ID, and IP address are sent to the system. The text "Manually
  Registered" appears next to the hostname icon in the Unisphere UI, indicating that the
  host agent was not used to register this server.

Runs automatically to send information to the system
- Unisphere Host Agent: Yes. No user interaction is required.
- Unisphere Server Utility: No. You must manually update the information by starting the
  utility, or you can create a script to run the utility. Since you run the server utility on
  demand, you have more control over how often or when the utility is executed.

Requires network connectivity to the system
- Unisphere Host Agent: Yes. Network connectivity allows LUN mapping information to be
  available to the system.
- Unisphere Server Utility: No. LUN mapping information is not sent to the system. Note
  that if you are using the server utility to upload a high-availability report to the system,
  you must have network connectivity.

Installing the Unisphere Host Agent


This section describes how to install the Unisphere Host Agent.
To modify an existing host agent configuration, refer to the next section.

Installing the Unisphere Host Agent on a Linux server


We recommend that you download and install the most recent version of the Unisphere
Host Agent software from the EMC Online Support website.
Procedure
1. On the Linux server, log in to the root account.
2. If your server is behind a firewall, open TCP/IP port 6389.
This port is used by the host agent. If this port is not opened, the host agent will not
function properly.
3. At a command line prompt, look for an existing host agent package and use the
following command: rpm -qa | grep -i agent
If an earlier version of host agent package is listed, you must remove it before
installing the new host agent package.
4. Download the software:
a. From the EMC Online Support website, select the appropriate VNX Support by
Product page and locate the Software Downloads.
b. Select the Unisphere Host Agent, and then select the option to save the zip file to
your server.
c. At the command line prompt, navigate to the directory where you saved the zip file
   and unzip the file: unzip Navi_Agent_CLI_Linux-version.zip
   where version is the version listed in the filename.
5. Depending on which version you are installing, enter one of the following commands
   to install the software:
   - 32-bit server: rpm -ivh HostAgent-Linux-32-x86-en_US-version-build.i386.rpm
   - 64-bit server: rpm -ivh HostAgent-Linux-64-x86-en_US-version-build.x86_64.rpm
   where version and build are the version number and the build number of the software.

   Note

   If you have an IA64 system, you must install the 32-bit package and 32-bit OS
   compatibility packages. The 64-bit package is currently supported on x86_64
   systems only. Refer to the release notes for any updates.
   If you are upgrading the host agent on the server, use -U in place of -i.

6. Verify that the host agent is installed: rpm -qa | grep -i agent
Note

Before you can use the host agent, you must modify the user login scripts and
configure the host agent configuration file.

Configuring the Unisphere Host Agent


Verify that the host agent configuration file includes a privileged user, as described in
Adding privileged users on page 8.

Note

The pathname of the host agent configuration file is /etc/Unisphere/agent.config.

Adding privileged users


Before you begin
If you use Navisphere CLI to configure any system, the host agent configuration file must
include an entry that defines the person who will issue the CLI commands as a privileged
user.
To define a privileged user, add a local or remote privileged user by adding the
appropriate entry below.
For a local user:
user name

For a remote user:


user name@hostname

where name is the person's account name and name@hostname is the name of the
remote server the person will be using.
The default host agent configuration file includes a user root entry.
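For example, the privileged-user section of agent.config might look like the following (the
remote user and host name here are illustrative only):

# Local privileged user (present by default)
user root
# Remote privileged user who issues CLI commands from the host admhost1
user admin@admhost1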

Saving the host agent configuration file


Procedure
1. If you have finished adding information to the host agent configuration file, save the
host agent configuration file.
2. Stop and restart the host agent by entering the following:
/etc/init.d/naviagent stop
/etc/init.d/naviagent start

Using the event monitor configuration file


The Unisphere Host Agent can monitor system events and take such action as sending email or paging you if specified events occur.
The event monitor that ships with Unisphere provides an interactive way to define these
events and actions. If you do not have event monitor, you can still define such events and
actions by editing the event monitor configuration file, /etc/Unisphere/Navimon.cfg.
The file is self-documenting; that is, text in it describes how to define events and the
actions you want taken if the events occur. You can test the file after editing it using the
Navisphere CLI command responsetest, as explained in the Navisphere Command
Line Interface Reference.

Running the Unisphere Host Agent


This section describes how to start and stop the host agent and how to test the host
agent connections.

Starting the host agent on a Linux server


The host agent starts automatically when you bring the server up to init level 3. When you
first start the host agent, look at the system log for the server's operating system to make
sure the agent started and no device errors occurred.
The system log is located in /var/log/messages.
Note

When a system experiences heavy input/output traffic (that is, applications are using the
system), information may not be reported to the host agent in a timely manner, resulting
in the host agent taking several minutes to execute a system management task. This
behavior is most evident when one host agent is managing multiple systems. Also, if the
SP event log is large and the host agent configuration file is set up to read all events, it
may take a few minutes for the host agent to start.
Procedure
1. Log in as root or the equivalent.
2. Enter the following command: /etc/init.d/hostagent start
Results
The host agent now starts automatically at system startup.
It may take a few minutes for the host agent to start when:

- Applications are using the system, or
- The SP event log is large and the host agent configuration file is set up to read all events.

Stopping the host agent on a Linux server


Procedure
1. Log in as root or the equivalent.
2. Enter the following command: /etc/init.d/hostagent stop

Testing the host agent connections


Before continuing, you should test the host agent connections as follows:
Procedure
1. Start the host agent.
2. Look for any errors on the console and in the operating system log to make sure the
agent started and no device errors occurred.
3. Verify that the host agent on the server can see the system as follows:
a. Enter the following CLI command:
   naviseccli|navicli [-d device]|[-h hostname] port -list -hba

   Note

   You cannot specify both the -d switch and the -h switch.

   where
   [-d device] is the device name for the system (only supported with legacy
   systems).
   -h hostname is the IP address of the SP.
For each HBA in the server, a listing similar to the following will be displayed. For
systems in a SAN (shared storage) environment, the listing includes HBAs in all
connected hosts.
Information about each HBA:
HBA UID: 10:00:00:60:B0:3E:46:AC:10:00:00:60:B0:3E:46:AC
Server Name: siux134
Server IP Address: 128.221.208.134
HBA Model Description:
HBA Vendor Description:
HBA Device Driver Name:
Information about each port of this HBA:
SP Name: spa
HBA Devicename: sp0
Trusted: NO
Logged In: YES
Source ID: 1
Defined: YES
Initiator Type: 0
Storage Group Name:
Storage Group 134

b. In the listing, verify the path for each HBA installed in the host to the SP.

Host agent status and error logging


While the system is running, the operating system tracks information about host agent
events and host agent errors, and places this information in log files on the server.
The host agent error log tracks information about the host agent's startup, the host agent's
shutdown, and errors that might occur, such as the host agent's inability to access a
device in the configuration file. If problems occur, log files are a good place to start your
troubleshooting.
Host agent events and errors are logged in /var/log/agent.log. System events are
logged in /var/log/messages.
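For example, a quick way to inspect both logs from a shell:

# Show the most recent host agent events and errors
tail -n 50 /var/log/agent.log
# Show recent system events related to the agent
grep -i agent /var/log/messages | tail -n 20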


Installing the Unisphere Server Utility


This section describes how to install the Unisphere Server Utility on your server.

Installing the Unisphere Server Utility on a Linux server


We recommend that you download and install the most recent version of the Unisphere
Server Utility software from the applicable support by product page on the EMC Online
Support website.
Procedure
1. Log in to the root account.
2. At the command line prompt, look for any existing server utility:
rpm -qa | grep -i serverutil
3. If an earlier version of the software has been installed, you must remove it before
   continuing.
   Use the following command to get the installed package name: rpm -qa | grep -i serverutil
   You may get a result similar to: ServerUtil-Linux*
   Use the following command to remove the installed package: rpm -ev ServerUtil-Linux*
4. Download the software:
a. From the EMC Online Support website, select the VNX Series Support by Product
page and select Downloads.
b. Select the Unisphere Server Utility, and then select the option to save the zip file to
your server.
c. At the command line prompt, navigate to the directory where you saved the zip file
and unzip the file.
d. Install the software:
   - 32-bit server: rpm -ivh ServerUtil-Linux-32-x86-en_US-version-build.i386.rpm
     where version and build are the version number and the build number of the
     server utility.
   - 64-bit server: rpm -ivh UnisphereServerUtil-Linux-64-x86-en_US-version-build.x86_64.rpm
     where version and build are the version number and the build number of the
     software.

   Note

   If you have an IA64 system, you must install the 32-bit package. The 64-bit
   package is currently supported on x86_64 systems only. Refer to the release
   notes for any updates.
   If you are upgrading the utility on the server, use -U in place of -i.
Results
The installation process adds a line to the /etc/rc.d/rc.local file that starts the
server utility on reboot, provided root has execute permission for the /etc/rc.d/rc.local file.

Installing VNX for Block Secure CLI


This section describes how to install VNX for Block Secure CLI.
You can install VNX for Block CLI on either the server or virtual machine.

Installing the VNX for Block Secure CLI on a Linux server or Linux virtual machine
We recommend that you download and install the most recent version of the VNX for
Block Secure CLI software from the applicable support by product page on the EMC
Online Support website.
Procedure
1. Log in to the root account.
2. If your server is behind a firewall, open the TCP/IP ports listed in TCP/IP ports on page
12.
These ports are used by VNX for Block CLI. If these ports are not opened, the software
will not function properly.
Table 2 TCP/IP ports

Software      TCP/IP ports
Secure CLI    443, 2163

3. At the command line prompt, look for any existing CLI by typing: rpm -qi navicli
If an earlier version of the software has been installed, you must remove it before
continuing.
4. Download the software:
a. From the EMC Online Support website, select the VNX Series Support by Product
page and select Downloads.
b. Select the VNX for Block Secure CLI, and then select the option to save the zip file
to your server.
c. At the command line prompt, navigate to the directory where you saved the zip file
and unzip the file.
d. Install the software:
   - 32-bit server: rpm -ivh NaviCLI-Linux-32-x86-language-version-build.i386.rpm
   - 64-bit server: rpm -ivh NaviCLI-Linux-64-x86-language-version-build.x86_64.rpm
   where
   language is either en_US, when only the English version is available, or loc, when
   the localized versions are available (including English).
   version and build are the version number and the build number of the software.


Note

If you have an IA64 system, you must install the 32-bit package. The 64-bit
package is currently supported on x86_64 systems only. Refer to the release notes
for any updates.
If you are upgrading the utility on the server, use -U in place of -i.
The system displays the word navicli or naviseccli and a series of pound (#) signs.
When the installation is complete, the system prompt reappears.
5. Verify that VNX for Block Secure CLI is installed by using the rpm -qa | grep -i
navicli command.

Installing the Admsnap Utility


To access snapshots of LUNs in the system, install the Admsnap Utility.
You can install admsnap on the server or on the virtual machine.
NOTICE

Currently, you cannot install admsnap on your virtual machine if your virtual machine is
connected to Fibre Channel storage; you must install it on the Hyper-V server. For any
updates, refer to the EMC SnapView, Admsnap, and ADMhost Release Notes.
Installation prerequisites
Before you can install and use the Admsnap Utility, you must install SnapView on a
supported system.
For a list of supported systems, refer to the release notes for SnapView and admsnap.

Installing the Admsnap Utility on a Linux server or a Linux virtual machine


We recommend that you download and install the most recent version of the Admsnap
Utility software from the Downloads section of the VNX Series support by product page on
the EMC Online Support website.
Procedure
1. Log in as root or as a member of an administrative group.
2. Open a terminal window and enter the following command to list any admsnap
package that may be currently installed: rpm -qi admsnap
3. Examine the list for an earlier version of admsnap.
If an earlier version is installed, remove it.
4. Download the software:
a. From the EMC Online Support website, select the VNX Series Support by Product
page and select Downloads.
b. Select the Admsnap Utility version you want to download and select the option to
save the zip file to your server.
c. At the command line prompt, navigate to the directory where you saved the zip file
and unzip the file. unzip admsnap_version.zip
where version is the version listed in the filename.
d. Install the software:
   - 32-bit Linux server: rpm -ivh admsnap-Linux-32-x86-en_US-version-build.rpm
   - 64-bit Linux server: rpm -ivh admsnap-Linux-64-x86-en_US-version-build.rpm
   where version and build are the version number and the build number of the
   software.
Note

If you are upgrading the utility on the server, use -U in place of -i.
The following files are installed in the /usr/admsnap directory:
/usr/admsnap/admsnap
/usr/admsnap/man/man1/admsnap.1
/usr/admsnap/readme
5. Verify that the correct version of admsnap is installed by entering the following
command: /usr/admsnap/admsnap help
This command displays a message about the help command, which includes the
revision number of the installed admsnap software.
6. Configure MANPATH to access the Linux man pages. Edit the /etc/man.config file
by adding the following line: MANPATH /usr/admsnap/man
7. Configure sg devices by using the MAKEDEV utility.
For information on how to use the MAKEDEV utility, refer to the MAKEDEV man pages.
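For example, sg device nodes are typically created by running MAKEDEV from the /dev
directory; a sketch (the exact devices created depend on your distribution):

cd /dev
./MAKEDEV sg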

Connecting the VNX to the server in a Fibre Channel direct


configuration
To connect the VNX to the server in a Fibre Channel direct configuration, you need an
optical cable for each Fibre Channel SP host port on the VNX that you will connect to
a server HBA.
For cable specifications, refer to the technical specifications for your VNX available from
the Learn about VNX link on the VNX series support website or from the VNX Series
Support by Product page on the EMC Online Support website.
Note

An SP host port is also called an SP front-end data port.

Identifying VNX Fibre-Channel host ports for server connections


Module labels and FE port connectors
Each Fibre Channel I/O module has an 8 Gb Fibre label on its handle and an optical small-form-factor pluggable (SFP) transceiver module in each of its Fibre FE ports.

Handling optical cables


Optical cables are susceptible to damage, so take the following precautions when
handling them:

- Keep the covers on all optical cables until you are ready to insert them.
- Avoid tight bends. If you need to make a 90° bend, do it over 6 to 12 inches.
- Do not use optical cables to support weight (including their own unsupported weight
  if they are long).
- Do not pull long runs of cable; instead, lay the cable in place or pull only a few feet at
  a time.
- Place the cables where no one can step on them or roll equipment over them.

Cabling VNX host ports to server HBA ports


For the highest availability with a multiple-HBA server:

- If the server has two HBAs, connect one HBA to a Fibre Channel host port on SP A and
  the other HBA to a Fibre Channel host port on SP B.
- If the server has four HBAs, connect two HBAs to separate Fibre Channel host ports on
  SP A, and the other two HBAs to separate Fibre Channel host ports on SP B.
- If the server has more than four HBAs, connect an HBA to a Fibre Channel host port on
  SP A and another HBA to a Fibre Channel host port on SP B, and continue connecting
  an HBA to a Fibre Channel host port on SP A and the next HBA to a Fibre Channel
  host port on SP B until you have connected all the HBAs or used all the available Fibre
  Channel host ports.

For each new HBA port you want to connect to a VNX Fibre Channel (FC) host port:
Procedure
1. Locate the FC host port to which you will connect the HBA port.
For information on identifying the host ports using Unisphere, refer to the Unisphere
online help.
Note

Applications such as MirrorView/A, MirrorView/S, or SAN Copy software may restrict or


require the use of certain SP ports. Refer to the application documentation for specific
cabling information.
2. Remove the protective covers from the optical connector on the HBA port and from
one end of an optical cable and plug the cable into the HBA connector.
3. Remove the protective covers from the free end of the optical cable and from the FC
host port connector on the VNX storage processor (SP), and plug the cable into the
data port connector (Connecting an optical cable on page 15 and Sample cabling for
a Fibre Channel direct configuration on page 16).
Figure 1 Connecting an optical cable


Figure 2 Sample cabling for a Fibre Channel direct configuration

Determining if your server has a supported configuration


Before you can determine if your server has a supported configuration, you need to know
the revision and patch level of the operating system on the server.
If you have this information, go to Verifying a server's configuration with E-Lab Navigator.
If you do not have this information, you can generate a server configuration report for
your server using the Unisphere Server Utility.
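As a quick way to collect the revision and patch level by hand, a sketch (the release file
name varies by distribution):

# Kernel revision
uname -r
# Distribution release and update or service-pack level
cat /etc/redhat-release 2>/dev/null
cat /etc/SuSE-release 2>/dev/null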

Starting the Unisphere Server Utility on a Linux server


Procedure
1. Open a console window.
2. Navigate to the Unisphere bin directory and run the server utility:
/opt/Unisphere/bin/serverutilcli

Generating a high-availability report for a server


Procedure
1. In the Unisphere Server Utility, select option 3 from the server utility's Welcome
screen to generate a report of the server's environment.
This option detects if PowerPath or some other failover software, such as DMP, is
running. The utility will not detect any other native failover software, such as Linux
native multipath (MPIO). After the verification, the utility generates a summary report
and saves it to the server.
2. In the summary report, select the Checklist tab to view the information about the
server that you need to compare with the E-Lab Navigator information.

Registering the server with the system


NOTICE

Booting from a SAN configuration - If you are booting from a SAN, do not register your
server using the Unisphere Host Agent or Unisphere Server Utility. You will perform a
manual registration in the next section.
You must run the Unisphere Server Utility on each server connected to the system to
register the server's HBAs with the system.

Running the Unisphere Server Utility on a Linux server


You can run the Unisphere Server Utility from the server.
NOTICE

If you have a Hyper-V or ESX server, perform this procedure on your Hyper-V or ESX server.
For updates on SCSI pass through support for virtual machines with FC devices, refer to
the E-Lab Interoperability Navigator on the EMC Online Support website.


Starting the Unisphere Server Utility on a Linux server


Procedure
1. Open a console window.
2. Navigate to the Unisphere bin directory and run the server utility:
/opt/Unisphere/bin/serverutilcli

Registering the Linux server using the Unisphere Server Utility


Procedure
1. If the host agent is running, stop the host agent service.
2. In the server utility, enter 1 to select Update Server Information.
The utility automatically scans for connected systems, and displays a list of the ones
it finds.
3. In the server utility, enter u to register the server with each system the utility found.
The utility sends the server's name and IP address to each system. Once the server
has storage on the system, the utility also sends the Linux device name and volume or
file system information for each LUN (virtual disk) in the system that the server sees.
4. Enter c (cancel) to stop the utility.
5. If you stopped the host agent, restart it.

Starting the Unisphere Host Agent


Starting the host agent on a server automatically registers the server's HBAs with the
system.

Verifying HBA registration using Unisphere


Procedure
1. From Unisphere, select your system, then Hosts > Initiators.
2. In the Initiators list, select the initiator name, and verify that the SP port connection
is displayed as Registered.
Once all HBAs belonging to the server are registered, you can assign the server to
storage groups.

Verifying system health


Use the system verification wizard that is part of the Unisphere Service Manager (USM)
to:
- Validate the connectivity of the system hardware components
- Verify back-end functionality
- Verify the status of all field-replaceable units
- Analyze system logs

Procedure
1. If you do not have the Unisphere Service Manager running:

a. Download and install the Unisphere Service Manager from the EMC Online Support
website to a Windows management station that is connected to the system's
management ports. If you do not have a Windows management station, your
service provider can run this wizard.
b. Start the Unisphere Service Manager by doing one of the following:
Click the Unisphere Service Manager icon on your desktop,
or
Select Start > All Programs or Start > Programs, then select EMC > Unisphere >
Unisphere Service Manager > Unisphere Service Manager
2. Log in to your system.
3. From the System screen, select Diagnostics > Verify Storage System and follow the
instructions that appear.
4. Review the report that the wizard generates, and if it lists any problems, try to resolve
them.

Configuring native multipath failover on the server


How you configure native multipath failover varies with the version of Linux running on
the server.

Configuring multipath failover on a Red Hat server


Procedure
1. For RHEL 4:
a. Verify that you are running the default version of uDev that was included with your
operating system with the following command: rpm -q udev
If required, upgrade the uDev package and then recreate the devices in the /dev
directory with the following command: udevstart
b. If it is not already loaded, load the dm_multipath kernel module with the
following command: modprobe dm_multipath
2. When you attach to an EMC system, EMC recommends that you replace the
default /etc/multipath.conf file with the following multipath.conf file:
RHEL 4
Follow the instructions in the multipath.conf.annotated file for masking
internal SCSI disks or disks that need to be excluded from multipath control.
## This is the /etc/multipath.conf file recommended for
## EMC storage devices.
##
## OS     : RHEL 4 U3
## Arrays : CLARiiON and Symmetrix
##
## The blacklist is the enumeration of all devices that
## are to be excluded from multipath control
blacklist {
## Replace the wwid with the output of the command
## 'scsi_id -g -u -s /block/[internal scsi disk name]'
## Enumerate the wwid for all internal scsi disks.
## Optionally, the wwid of VCM database may also be listed
## here.
        wwid 35005076718d4224d
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][0-9]*"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
}
devices {
## Device attributes for EMC CLARiiON
        device {
                vendor                  "DGC"
                product                 "*"
                path_grouping_policy    group_by_prio
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                path_checker            emc_clariion
                path_selector           "round-robin 0"
                features                "1 queue_if_no_path"
                no_path_retry           300
                hardware_handler        "1 emc"
                failback                immediate
        }
}

RHEL 5
Follow the instructions in the multipath.conf.annotated file for masking
internal SCSI disks or disks that need to be excluded from multipath control.
#  This is an example configuration file for device mapper
#  multipath. For a complete list of the default configuration
#  values, see /usr/share/doc/device-mapper-multipath-0.4.5/
#  multipath.conf.defaults
#  For a list of configuration options with descriptions, see
#  /usr/share/doc/device-mapper-multipath-0.4.5/
#  multipath.conf.annotated

#  Blacklist all devices by default. Remove this to enable
#  multipathing on the default devices.
## Note: Insert # to disable the all device blacklist.
#blacklist {
#        devnode "*"
#}

## By default, devices with vendor = "IBM" and product =
## "S/390.*" are blacklisted. To enable multipathing on
## these devices, uncomment the following lines.
#blacklist_exceptions {
#        device {
#                vendor  "IBM"
#                product "S/390.*"
#        }
#}

## Use user friendly names, instead of using WWIDs as names.
defaults {
        user_friendly_names yes
}

##
## This is a template multipath-tools configuration file
## Uncomment the lines relevant to your environment
##
#defaults {
#        udev_dir                /dev
#        polling_interval        10
#        selector                "round-robin 0"
#        path_grouping_policy    multibus
#        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
#        prio_callout            /bin/true
#        path_checker            readsector0
#        rr_min_io               100
#        rr_weight               priorities
#        failback                immediate
#        no_path_retry           fail
#        user_friendly_names     yes
#}

##
## The wwid line in the following blacklist section is
## shown as an example of how to blacklist devices by wwid.
## The 3 devnode lines are the compiled in default blacklist.
## If you want to blacklist entire types of devices, such
## as all scsi devices, you should use a devnode line.
## However, if you want to blacklist specific devices, you
## should use a wwid line. Since there is no guarantee that
## a specific device will not change names on reboot
## (from /dev/sda to /dev/sdb for example)
## devnode lines are not recommended for blacklisting
## specific devices.
## Note: Remove # to enable the devnode blacklist.
blacklist {
        wwid 360060480000190101965533030303230
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*"
}

#multipaths {
#        multipath {
#                wwid                    3600508b4000156d700012000000b0000
#                alias                   yellow
#                path_grouping_policy    multibus
#                path_checker            readsector0
#                path_selector           "round-robin 0"
#                failback                manual
#                rr_weight               priorities
#                no_path_retry           5
#        }
#        multipath {
#                wwid                    1DEC_____321816758474
#                alias                   red
#        }
#}

#devices {
#        device {
#                vendor                  "COMPAQ  "
#                product                 "HSV110 (C)COMPAQ"
#                path_grouping_policy    multibus
#                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
#                path_checker            readsector0
#                path_selector           "round-robin 0"
#                hardware_handler        "0"
#                failback                15
#                rr_weight               priorities
#                no_path_retry           queue
#        }
#        device {
#                vendor                  "COMPAQ  "
#                product                 "MSA1000         "
#                path_grouping_policy    multibus
#        }
#}

3. Perform a dry run and evaluate the setup with the following command: multipath -v2 -d
RHEL 4
The output looks similar to:
create: mpath21 (360060160b540160171f77f705558da11)
[size=10 GB][features="1
queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0
\_ 11:0:0:0 sdad 65:208
\_ round-robin 0
\_ 10:0:0:0 sdb 8:16
create: mpath72 (360060160b540160170f77f705558da11)
[size=10 GB][features="1
queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0
\_ 11:0:0:1 sdae 65:224
\_ round-robin 0
\_ 10:0:0:1 sdc 8:32

RHEL 5
The output looks similar to:
create: mpath3 (36006016005d01800b207271bb8ecda11) DGC,RAID 5
[size=3.1G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][undef]
\_ 5:0:0:21 sdar 66:176 [undef][ready]
\_ 1:0:0:21 sdj 8:144 [undef][ready]
\_ round-robin 0 [prio=0][undef]
\_ 4:0:0:21 sdad 65:208 [undef][ready]
\_ 6:0:0:21 sdbf 67:144 [undef][ready]
create: mpath4 (36006016005d018005714f1abbbecda11) DGC,RAID 5
[size=6.3G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][undef]
\_ 4:0:0:28 sdae 65:224 [undef][ready]
\_ 6:0:0:28 sdbg 67:160 [undef][ready]
\_ round-robin 0 [prio=0][undef]
\_ 5:0:0:28 sdas 66:192 [undef][ready]
\_ 1:0:0:28 sdk 8:160 [undef][ready]

4. If the listing is appropriate, commit the configuration as follows:


a. Start the required multipath processes with the following commands:
   /etc/init.d/multipathd start
   multipath -v2
b. Verify that the multipathd daemon is running: ps -ef | grep multipathd
c. Use the following command to verify that the dm_emc, dm_round_robin,
   dm_multipath, and dm_mod kernel modules are loaded: lsmod | grep dm
5. To get a listing of the current setup, use the multipath -ll command.
6. Integrate the startup of the appropriate daemons in the boot sequence with the
   following commands:
   chkconfig --add multipathd
   chkconfig multipathd on
7. Reboot the server to verify that the required processes automatically start up.

Configuring multipath failover on a SuSE server


Procedure
1. Verify that you are running the default version of uDev that was included with your
operating system with the following command: rpm -q udev

For SLES 9 and 10, if required, upgrade the uDev package and then use the following
command to recreate the new devices:
SLES 9: udevstart
SLES 10: /etc/init.d/boot.udev restart
2. For SLES 9, edit the /etc/sysconfig/hotplug file to disable the auto-mount
option as follows:
HOTPLUG_USE_SUBFS=no

Note

To prevent significant boot-up delays in SLES 9, edit the /etc/sysconfig/boot file as
follows:
DISABLE_BLKID=yes
3. If it is not already loaded, load the dm_multipath kernel module with the following
command: modprobe dm_multipath
4. Replace the default /etc/multipath.conf with the following /etc/multipath.conf file
recommended by EMC when attaching to EMC systems.
SLES 9
Follow the instructions in the multipath.conf.annotated file for masking
internal SCSI disks or disks that need to be excluded from multipath control. This file
is located in the /usr/share/doc/packages/multipath-tools/ directory.
## This is the /etc/multipath.conf file recommended for
## EMC storage devices.
##
## OS     : SLES 9 SP3
## Arrays : CLARiiON and SYMMETRIX
##
## The blacklist is the enumeration of all devices that
## are to be excluded from multipath control
devnode_blacklist {
## Replace the wwid with the output of the command
## 'scsi_id -g -u -s /block/[internal scsi disk name]'
## Enumerate the wwid for all internal scsi disks.
## Optionally, the wwid of VCM database may also be listed here.
        wwid 20010b9fd080b7321
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
devices {
## Device attributes for EMC CLARiiON
        device {
                vendor                  "DGC"
                product                 "*"
                path_grouping_policy    group_by_prio
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler        "1 emc"
                features                "1 queue_if_no_path"
                no_path_retry           300
                path_checker            emc_clariion
                failback                immediate
        }
}

SLES 10 or higher
The default configuration options for EMC systems are included as part of the
multipath package, so you do not need to replace the default /etc/multipath.conf file.
5. Perform a dry run and evaluate the setup with the following command: multipath
-v2 -d
SLES 9
The output looks similar to:
create: 360060160aa4018002ae6839182a8da11
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=2]
  \_ 3:0:0:6 sdbe 67:128 [ready]
  \_ 2:0:0:6 sdbk 67:128 [ready]
\_ round-robin 0
  \_ 3:0:1:6 sdbr 68:80  [ready]
  \_ 2:0:1:6 sdi  8:128  [ready]
create: 360060160aa4018002ce6839182a8da11
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=2]
  \_ 3:0:0:7 sdbf 67:144 [ready]
  \_ 2:0:0:7 sdbl 67:128 [ready]
\_ round-robin 0
  \_ 3:0:1:7 sdbs 68:96  [ready]
  \_ 2:0:1:7 sdj  8:144  [ready]

SLES 10 or higher
The output looks similar to:
create: mpath12 (360060160782918000be69f3182b2da11) DGC,RAID 3
[size=3G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][undef]
\_ 10:0:3:0 sdaq 66:160 [undef][ready]
\_ 11:0:3:0 sdcw 70:64 [undef][ready]
\_ round-robin 0 [prio=0][undef]
\_ 11:0:2:0 sdce 69:32 [undef][ready]
\_ 10:0:2:0 sdy 65:128 [undef][ready]
create: mpath13 (360060160782918004f64715b83b2da11) DGC,RAID 3
[size=3G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][undef]
\_ 11:0:2:1 sdcf 69:48 [undef][ready]
\_ 10:0:2:1 sdz 65:144 [undef][ready]
\_ round-robin 0 [prio=0][undef]
\_ 10:0:3:1 sdar 66:176 [undef][ready]
\_ 11:0:3:1 sdcx 70:80 [undef][ready]

6. If the listing is appropriate, commit the configuration as follows:


a. Start the required multipath processes with the following commands:
   /etc/init.d/boot.multipath start
   /etc/init.d/multipathd start
b. Verify that the multipathd daemon is running with the following command: ps -ef | grep multipathd
c. Use the following command to verify that the dm_emc, dm_round_robin,
   dm_multipath, and dm_mod kernel modules are loaded: lsmod | grep dm
7. To get a listing of the current setup, use the multipath -ll command.
8. Integrate the startup of the appropriate daemons in the boot sequence with the
following command: insserv boot.device-mapper multipathd boot.multipath
9. Reboot the server to verify that the required processes automatically start up.


Verifying HBA registration using Unisphere


Procedure
1. From Unisphere, select your system, then Hosts > Initiators.
2. In the Initiators list, select the initiator name, and verify that the SP port connection
is displayed as Registered.
Once all HBAs belonging to the server are registered, you can assign the server to
storage groups.

Verifying devices are attached correctly


For help in verifying that the HBAs are installed correctly and that they can see the system
devices connected to them, refer to the Linux host connectivity guide on the EMC Online
Support website.
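As a quick local check (not a substitute for the connectivity guide), you can list the SCSI
devices the server sees; VNX LUNs report the vendor string DGC. For example:

# List attached SCSI devices and show the entries for VNX (DGC) LUNs
cat /proc/scsi/scsi | grep -B 1 -A 1 DGC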

Configuring your VNX system


To configure your VNX system, use either the Unisphere Service Manager wizards or
Unisphere.

Starting Unisphere
Procedure
1. Log in to a host (which can be a server) that is connected through a network to the
system's management ports and that has an Internet browser: Microsoft Internet
Explorer, Netscape, or Mozilla.
2. Start the browser.
3. In the browser window, enter the IP address of one of the following that is in the same
   domain as the systems that you want to manage:
   - A system SP with the most recent version of the VNX Operating Environment (OE)
     installed

     Note

     This SP can be in one of the systems that you want to manage.

   - A Unisphere management station with the most recent Unisphere Server and UIs
     installed

Note

If you do not have a supported version of the JRE installed, you will be directed to the
Sun website where you can select a supported version to download. For information
on the supported JRE versions for your version of Unisphere, refer to Environment and
System Requirements in the Unisphere release notes on the EMC Online Support
website.
4. Enter your user name and password.
5. Select Use LDAP if you are using an LDAP-based directory server to authenticate user
credentials.
If you select the Use LDAP option, do not include the domain name.

When you select the LDAP option, the username / password entries are mapped to an
external LDAP or Active Directory server for authentication. Username / password
pairs whose roles are not mapped to the external directory will be denied access. If
the user credentials are valid, Unisphere stores them as the default credentials.
6. Select Options to specify the scope of the systems to be managed.
Global (default) indicates that all systems in the domain and any remote domains can
be managed. Local indicates that only the targeted system can be managed.
7. Click Login.
When the user credentials are successfully authenticated, Unisphere stores them as
the default credentials and the specified system is added to the list of managed
systems in the Local domain.
8. If you are prompted to add the system to a domain, add it now.
The first time that you log in to a system, you are prompted to add the system to a
Unisphere domain. If the system is the first one, create a domain for it. If you already
have systems in a domain, you can either add the new system to the existing domain
or create a new domain for it. For details on adding the system to a domain, use the
Unisphere help.

Committing VNX for Block Operating Environment (OE) software with Unisphere
If you did not install a VNX for Block OE update on the system, you need to commit the
VNX for Block OE software now.
Procedure
1. From Unisphere, select All Systems > System List.
2. From the Systems page, right-click the entry for the system for which you want to commit
the VNX for Block OE and select Properties.
3. Click the Software tab, select VNX-Block-Operating-Environment, and click Commit.
4. Click Apply.

Verifying that each LUN is fully initialized using Unisphere


Although the storage group with a new LUN is assigned to the server, the server cannot
see the new LUN until it is fully initialized (completely bound). The time the initialization
process takes to complete varies with the size of the LUN and other parameters. While a
LUN is initializing, it is in a transitioning state, and when the initialization is complete, its
state becomes ready.
To determine the state of a LUN:
Procedure
1. From Unisphere, navigate to the LUN you want to verify (Storage > LUNs).
2. Right-click the LUN and click Properties.
3. Verify that the state of the LUN is Ready.
If the state is Transitioning, wait for the state to change to Ready before continuing.

Creating storage groups with Unisphere


If you do not have any storage groups created, create them now.


Procedure
1. In the systems drop-down list on the menu bar, select a system.
2. Select Hosts > Storage Groups.
3. Under Storage Groups, select Create.
4. In Storage Group Name, enter a name for the Storage Group to replace the default
name.
5. Choose from the following:
l

Click OK to create the new Storage Group and close the dialog box, or

Click Apply to create the new Storage Group without closing the dialog box. This
allows you to create additional Storage Groups.

6. Select the storage group you just created and click Connect Hosts.
7. Move the host from Available host to Host to be connected and click OK.

Making LUNs visible to a Linux server


To allow the Linux server to access the LUNs that you created, enter the following
commands.
For a server with HBA connections to the system:
rmmod driver_module
modprobe driver_module (or insmod driver_module)
where driver_module is the driver module name.
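For example, on a server with QLogic HBAs the driver module is typically qla2xxx (Emulex
HBAs use lpfc); a sketch, assuming nothing on the server is using the HBA while the module
is reloaded:

# Reload the QLogic HBA driver so the newly assigned LUNs are discovered
rmmod qla2xxx
modprobe qla2xxx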

Verifying that Linux native multipath failover sees all paths to the LUNs
At a Linux prompt, view all LUNs this server has access to by entering the multipath -ll
command. This command will show all LUNs that are available.
multipath -ll
mpath1 (360060160a1f01600ef72a9c70d8fda11) DGC,RAID 5
[size=5G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
  \_ 2:0:1:1 sdh 8:112 [active][ready]
  \_ 3:0:1:1 sdo 8:224 [active][ready]
\_ round-robin 0 [prio=0][enabled]
  \_ 2:0:0:1 sdc 8:32  [active][ready]
  \_ 3:0:0:1 sdi 8:128 [active][ready]
mpath2 (360060160ed301800d68dc75538f6da11) DGC,RAID 3
[size=3G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=2][active]
  \_ 2:0:0:2 sdd 8:48  [active][ready]
  \_ 3:0:0:2 sdj 8:144 [active][ready]
\_ round-robin 0 [prio=0][enabled]
  \_ 2:0:1:2 sdk 8:160 [active][ready]
  \_ 3:0:1:2 sdp 8:240 [active][ready]

In the output above, there are four paths from the server to the system.

- The first line shows the alias name, WWN, vendor ID, and RAID type of the LUN.
- The second line shows unique system parameters used by the multipath driver.
- The remaining output shows two path groups. The first path group, Active, has two
  active paths and uses round-robin I/O load balancing. The second group, Enabled, is
  available for use if the Active path fails, using round-robin I/O load balancing.

where
- host:channel:target:lun indicates the device mapping, for example, 2:0:0:2.
- sdx indicates the dev device, for example sdd.
- 8:xxx indicates the multipath path identifier, for example 8:48.
- active and ready indicate the status of the path.
Note

You can also view information for an individual LUN by using the multipath -ll
alias name| WWN command.
For example, multipath -ll mpath2 or multipath -ll
360060160ed301800a5e373e537f6da11.

Preparing LUNs to receive data


If you do not want to use a LUN as a raw disk or raw volume, then before Linux can send
data to a LUN, Linux must recognize the disk, and you must partition the disk and then
create and mount a file system on it. For information on how to perform these tasks, refer
to your operating system documentation.
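For example, a minimal sketch for a new LUN that appears as the multipath device
/dev/mapper/mpath1 (the device name, file system type, and mount point are illustrative;
this destroys any existing data on the LUN):

fdisk /dev/mapper/mpath1            # create a primary partition interactively
kpartx -a /dev/mapper/mpath1        # add the partition mapping, e.g. /dev/mapper/mpath1p1
mkfs.ext3 /dev/mapper/mpath1p1      # create an ext3 file system on the partition
mkdir -p /mnt/vnx_lun1              # create a mount point
mount /dev/mapper/mpath1p1 /mnt/vnx_lun1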
Before a virtual machine can send data to a virtual disk that is a VMFS volume, you must
do the following:
Linux virtual machine
1. Partition the VMware virtual disk.
2. Create and mount a file system on the partition.
Windows virtual machine
1. Write a signature to the VMware virtual disk.
2. Either create partitions on a basic disk or create volumes on a dynamic disk.

Sending Linux disk information to the system


If the Unisphere Host Agent is installed on the server, stop and then restart it to send the
system the operating system's device name and volume or file system information for
each LUN that the server sees. Unisphere displays this information in the LUN Properties
Host dialog box for each LUN.
The Unisphere Server Utility does not send operating system LUN mapping information to
the system, so this procedure is not required.
NOTICE

Perform this procedure on your Hyper-V or ESX server.

Starting the host agent


Log in as root and enter the following command: /etc/init.d/naviagent start
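If the agent is already running, stop it first and then start it again with the same init script:

/etc/init.d/naviagent stop
/etc/init.d/naviagent start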

Verifying that the system received the LUN information using Unisphere
Procedure
1. From Unisphere, select your system.
2. Select Hosts > Hosts List.

3. Select a host and then, on the Details page, click the LUNs tab.
4. Verify that the LUNs tab displays a physical drive and logical drive name for each LUN
on the host.

Verifying your Linux native multipath failover configuration


If your server has a high-availability configuration, that is, at least one path to each SP on
the system, your environment is enabled for failover and you can proceed to verify your
failover configuration for a dual-SP system.
If you do not have a high-availability configuration, your environment is not enabled for
failover, and an SP or path failure leaves the server without access to the data.
Before you start
Before you store data on LUNs, use the procedure in this section to verify that:
- The server can send data to and receive data from the system.

  Note

  You can download an I/O simulator (Iometer) for writing data to the system from the
  following website: http://www.iometer.org/.

- Linux native multipath failover shows the paths from the server to the LUNs that you
  expect for your configuration.

The Linux native multipath failover software returns several path states, including:

- Active - Kernel space
- Failed - Kernel space
- Ready - User space
- Faulty - User space

Verifying your Linux native multipath failover configuration (dual-SP)


This test applies to a server with two single-port HBAs connected.
Procedure
1. With all paths connected, at a Linux prompt, enter the following command to show
that the paths are active: multipath -ll
Sample output for a direct connection is shown below. It may vary depending on the
Linux version running on the server.
.
.
mpath1 (360060160a1f01600ef72a9c70d8fda11) DGC,RAID 5
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=2][active]
  \_ 2:0:0:1 sde 8:64  [active][ready]
\_ round-robin 0 [enabled]
  \_ 3:0:1:1 sdi 8:128 [active][ready]

This output shows two active (ready) paths: one from HBA 2 through SP A to LUN 1
and one from HBA 3 through SP B to LUN 1. The device mapping (for example, 2:0:0:1)
uses the mapping format host:channel:target:lun. See the Linux MPIO documentation
for further information.
2. Disconnect a path from an HBA (HBA 2 in this example) by unplugging the cable from
the HBA.
3. With the path disconnected, enter the following command to show the active and
failed paths: multipath -ll
Sample output for a direct connection is shown below. It may vary depending on the
Linux version running on the server.
.
.
query command indicates error
mpath1 (360060160a1f01600ef72a9c70d8fda11) DGC,RAID 5
[size=3 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
  \_ 2:0:1:1 sde 8:64  [failed][faulty]
\_ round-robin 0 [enabled]
  \_ 3:0:1:1 sdi 8:128 [active][ready]

The output shows that the path from HBA 2 has failed. The MPIO failover software
failed the path from HBA 2 over to HBA 3.
4. Reconnect the path.


Copyright 2010-2014 EMC Corporation. All rights reserved. Published in USA.


Published November, 2014
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with
respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a
particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software
license.
EMC, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries.
All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).