Configuration Guide for Sun Solaris Host Attachment
Hitachi Virtual Storage Platform
Hitachi Universal Storage Platform V/VM
FASTFIND LINKS
Document Organization
Product Version
Getting Help
Contents
MK-96RD632-05
Copyright © 2010 Hitachi, Ltd., all rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose
without the express written permission of Hitachi, Ltd. (hereinafter referred to as “Hitachi”) and Hitachi Data
Systems Corporation (hereinafter referred to as “Hitachi Data Systems”).
Hitachi Data Systems reserves the right to make changes to this document at any time without notice and
assumes no responsibility for its use. This document contains the most current information available at the
time of publication. When new and/or revised information becomes available, this entire document will be
updated and distributed to all registered users.
All of the features described in this document may not be currently available. Refer to the most recent product
announcement or contact your local Hitachi Data Systems sales office for information about feature and
product availability.
Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of the
applicable Hitachi Data Systems agreement(s). The use of Hitachi Data Systems products is governed by the
terms of your agreement(s) with Hitachi Data Systems.
Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data
Systems is a registered trademark and service mark of Hitachi, Ltd. in the United States and other countries.
All other trademarks, service marks, and company names are properties of their respective owners.
Microsoft product screen shot(s) reprinted with permission from Microsoft Corporation.
Preface ................................................................................................... v
Intended Audience .............................................................................................. vi
Product Version................................................................................................... vi
Document Revision Level ..................................................................................... vi
Source Documents for this Revision ..................................................................... vii
Changes in this Revision ..................................................................................... vii
Referenced Documents....................................................................................... vii
Document Organization ..................................................................................... viii
Document Conventions........................................................................................ ix
Convention for Storage Capacity Values .................................................................x
Accessing Product Documentation .........................................................................x
Getting Help ....................................................................................................... xi
Comments .......................................................................................................... xi
Please read this document carefully to understand how to use this product,
and maintain a copy for reference purposes.
Product Version
This document revision applies to the following microcode levels:
• Hitachi Virtual Storage Platform microcode 70-01-0x or later.
• Hitachi Universal Storage Platform V/VM microcode 60-03-2x or later.
Referenced Documents
Hitachi Virtual Storage Platform documentation:
• Provisioning Guide for Open Systems, MK-90RD7022
• Storage Navigator User’s Guide, MK-90RD7027
• Storage Navigator Messages, MK-90RD7028
• User and Reference Guide, MK-90RD7042
Chapter 1, Introduction: Provides a brief overview of the Hitachi RAID storage systems, supported device types, and an installation roadmap.
Chapter 2, Installing the Storage System: Provides instructions for installing and connecting the Hitachi RAID storage system to a Solaris host.
Chapter 3, Configuring the New Disk Devices: Provides instructions for configuring the new devices on the Hitachi RAID storage system for use.
Chapter 4, Failover and SNMP Operations: Describes how to configure the Hitachi RAID storage system for failover and SNMP.
Appendix B, Online Device Installation: Provides instructions for online installation of new devices.
Appendix C, Using MPxIO (Sun Path Failover Software): Describes how to use Solaris Operating Environment Multi-path I/O with the Hitachi RAID storage system.
Appendix D, Note on Using Veritas Cluster Server: Provides information about adding reserve keys for LUs to increase disk capacity.
The terms “Universal Storage Platform V” and “Universal Storage Platform VM”
refer to all models of the Hitachi Universal Storage Platform V and VM storage
systems, unless otherwise noted.
Bold: Indicates text on a window, other than the window title, including menus, menu options, buttons, fields, and labels. Example: Click OK.
Italic: Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: copy source-file target-file. Angled brackets (< >) are also used to indicate variables.
screen/code: Indicates text that is displayed on screen or entered by the user. Example: # pairdisplay -g oradb
< > angled brackets: Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: # pairdisplay -g <group>. Italic font is also used to indicate variables.
[ ] square brackets: Indicates optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.
| vertical bar: Indicates that you have a choice between two or more options or arguments. Examples: [ a | b ] indicates that you can choose a, b, or nothing. { a | b } indicates that you must choose either a or b.
Logical storage capacity values (e.g., logical device capacity) are calculated
based on the following values:
Comments
Please send us your comments on this document: [email protected]
Include the document title, number, and revision, and refer to specific
section(s) and paragraph(s) whenever possible.
Thank you! (All comments become the property of Hitachi Data Systems.)
The Hitachi RAID storage systems are configured with OPEN-V logical units
(LUs) and are compatible with most fibre-channel (FC) host bus adapters
(HBAs). Users can perform additional LU configuration activities using the LUN
Manager, Virtual LVI/LUN (VLL), and LUN Expansion (LUSE) features provided
by the Storage Navigator software, which is the primary user interface for the
storage systems.
For further information on storage solutions and the Hitachi RAID storage
systems, please contact your Hitachi Data Systems account team.
Table 1-1 Logical Devices Supported by the Hitachi RAID Storage Systems

OPEN-V Devices: OPEN-V logical units (LUs) are disk devices (VLL-based volumes) that do not have a predefined size.

OPEN-x Devices: OPEN-x logical units (LUs) (e.g., OPEN-3, OPEN-9) are disk devices of predefined sizes. The Hitachi RAID storage systems support OPEN-3, OPEN-8, OPEN-9, OPEN-E, and OPEN-L devices. For the latest information on usage of these device types, contact your Hitachi Data Systems account team.

LUSE Devices (OPEN-x*n): LUSE devices are combined LUs that can be from 2 to 36 times larger than standard OPEN-x LUs. Using LUN Expansion (LUSE) remote console software, you can configure these custom-size devices. LUSE devices are designated as OPEN-x*n, where x is the LU type (e.g., OPEN-9*n) and 2 ≤ n ≤ 36. For example, a LUSE device created from 10 OPEN-3 LUs is designated as an OPEN-3*10 disk device. This lets the host combine logical devices and access the data stored on the Hitachi RAID storage system using fewer LU numbers.

VLL Devices (OPEN-x VLL): VLL devices are custom-size LUs that are smaller than standard OPEN-x LUs. Using Virtual LVI/LUN remote console software, you can configure VLL devices by "slicing" a single LU into several smaller LUs that best fit your application needs and improve host access to frequently used files. The product name for the OPEN-x VLL devices is OPEN-x-CVS (CVS stands for custom volume size). The OPEN-L LU type does not support Virtual LVI/LUN.

VLL LUSE Devices (OPEN-x*n VLL): VLL LUSE devices combine Virtual LVI/LUN devices (instead of standard OPEN-x LUs) into LUSE devices. Use the Virtual LVI/LUN feature to create custom-size devices, then use the LUSE feature to combine the VLL devices. You can combine from 2 to 36 VLL devices into one VLL LUSE device. For example, an OPEN-3 LUSE volume created from 10 OPEN-3 VLL volumes is designated as an OPEN-3*10 VLL device (product name OPEN-3*10-CVS).

FX Devices (3390-3A/B/C, OPEN-x-FXoto): The Hitachi Cross-OS File Exchange (FX) feature allows you to share data across mainframe, UNIX, and PC server platforms using special multiplatform volumes. The VLL feature can be applied to FX devices for maximum flexibility in volume size. For more information about FX, see the Cross-OS File Exchange User's Guide, or contact your Hitachi Data Systems account team.
FX devices are not SCSI disk devices and must be installed and accessed as raw devices. UNIX/PC server hosts must use FX to access the FX devices as raw devices (no file system, no mount operation).
The 3390-3B devices are write-protected from UNIX/PC server access. The Hitachi RAID storage system rejects all UNIX/PC server write operations (including those from fibre-channel adapters) for 3390-3B devices.
The other multiplatform devices are not write-protected for UNIX/PC server access. Do not execute any write operation by the fibre-channel adapters on these devices, and do not create a partition or file system on these devices. Doing so will overwrite any data on the FX device and prevent the FX software from accessing the device.
SCSI disk devices (OPEN-x, OPEN-x VLL, OPEN-x*n LUSE, OPEN-x*n VLL LUSE): file system or raw device (e.g., some applications use raw devices).
1. Verify that the system on which you are installing the Hitachi RAID storage system
meets the minimum requirements for this release.
8. Create mount directories, mount and verify the file system, and set and verify auto-
mount parameters.
Hitachi RAID storage system: The availability of features and devices depends on the level of microcode installed on the Hitachi RAID storage system. Use the LUN Manager software on Storage Navigator to configure the fibre-channel ports.

Solaris operating system: Refer to the Hitachi Data Systems interoperability site for specific support information for the Solaris operating system: http://www.hds.com/products/interoperability. Root login access to the Solaris system is required.

Fibre-channel HBAs: The Hitachi RAID storage system supports fibre-channel HBAs equipped as follows:
• 8-Gbps fibre-channel interface, including shortwave non-OFC (open fibre control) optical interface and multimode optical cables with LC connectors.
• 4-Gbps fibre-channel interface, including shortwave non-OFC optical interface and multimode optical cables with LC connectors.
• 2-Gbps fibre-channel interface, including shortwave non-OFC optical interface and multimode optical cables with LC connectors.
• 1-Gbps fibre-channel interface, including shortwave non-OFC optical interface and multimode optical cables with SC connectors.
If a switch or HBA with a 1-Gbps transfer rate is used, configure the device to use a fixed 1-Gbps setting instead of Auto Negotiation; otherwise, a connection may not be established. Note that the transfer speed of a CHF port cannot be set to 1 Gbps when the CHF is 8US/8UFC/16UFC, so a 1-Gbps HBA or switch cannot be connected to these CHFs. Do not connect OFC-type fibre-channel interfaces to the storage system. For information about supported fibre-channel HBAs, drivers, optical cables, hubs, and fabric switches, contact your Hitachi Data Systems account team or see the Hitachi Data Systems interoperability site: http://www.hds.com/products/interoperability

Fibre-channel utilities and tools: Refer to the documentation for your fibre-channel HBA for information about installing the utilities and tools for your adapter.

Fibre-channel drivers: Do not install or load the driver(s) yet. When instructed in this guide to install the drivers for your fibre-channel HBA, refer to the documentation for your adapter.
The required host mode for Solaris is 09. Do not select a host mode other
than 09 for Solaris.
Use the LUN Manager software to set the host mode. For instructions, see the
LUN Manager User’s Guide for the USP V/VM or the Provisioning Guide for
Open Systems for the VSP.
Table 2-2 lists the HMOs for Solaris and specifies the conditions for setting the
mode. Note that HMO 13 is common to all platforms.
HMO 2, Veritas Database Edition/Advanced Cluster: Select HMO 2 if you are using either Veritas Database Edition/Advanced Cluster for Real Application Clusters or Veritas Cluster Server 4.0 or later (I/O fencing function). Setting this HMO is mandatory for these configurations. Do not apply this option to Sun Cluster.

HMO 7, Automatic recognition function of LUN: Select HMO 7 (optional) when all of the following conditions are satisfied: you are using host mode 00 Standard or 09 Solaris, you are using SUN StorEdge SAN Foundation Software Version 4.2 or later, and you want to automate recognition of increase and decrease of devices when a SUN HBA is connected.

HMO 13, SIM report at link failure: Select HMO 13 (optional) to enable SIM notification when the number of link failures detected between ports exceeds the threshold. This mode is common to all host platforms.

HMO 22, Veritas Cluster Server: When a reserved volume receives a Mode Sense command from a node that is not reserving this volume, the host will receive the following responses from the storage system: ON: Normal response; OFF (default): Reservation Conflict. Before setting HMO 22, ask your Hitachi Data Systems representative for assistance. HMO 22 can be changed while the host is online; however, I/O activity may be affected while it is being changed, so it is recommended to stop host I/O on the port where you want to change the HMO 22 setting.
1. When HMO 22 is ON, the volume status (reserved/non-reserved) will be checked more frequently (several tens of msec per LU).
2. When HMO 22 is ON, the host OS will not receive warning messages when a Mode Select command is issued to a reserved volume.
3. There is no impact on the Veritas Cluster Server software when HMO 22 is OFF. Set HMO 22 to ON when the software is experiencing numerous reservation conflicts.
4. Set HMO 22 to ON when Veritas Cluster Server is connected.
Table 2-3 explains the fibre parameter settings for the Hitachi RAID storage
system.
Notes:
• If you plan to connect different types of servers to the Hitachi RAID
storage system via the same fabric switch, use the zoning function
of the fabric switch.
• Contact Hitachi Data Systems for information about port topology
configurations supported by HBA/switch combinations. Not all
switches support F-port connection.
Table 2-4 shows the available AL-PA values ranging from 01 to EF. Fibre-
channel protocol uses the AL-PAs to communicate on the fibre-channel link,
but the software driver of the platform host adapter translates the AL-PA value
assigned to the port to a SCSI TID. See Appendix A for a description of the AL-
PA-to-TID translation.
Loop ID Conflicts
The Solaris operating system assigns port addresses from lowest (01) to
highest (EF). To avoid loop ID conflict, assign the port addresses from highest
to lowest (i.e., starting at EF). The AL-PAs should be unique for each device on
the loop to avoid conflicts. Do not use more than one port address with the
same TID in the same loop (e.g., addresses EF and CD both have TID 0; see
Appendix A for the TID-to-AL-PA mapping).
# dmesg
Nov 9 23:14
ems, Inc.
mem = 65536K (0x4000000)
avail mem = 60129280
Ethernet address = 8:0:20:92:32:48
root nexus = Sun Ultra 1 SBus (UltraSPARC 167MHz)
sbus0 at root: UPA 0x1f 0x0 ...
espdma0 at sbus0: SBus0 slot 0xe offset 0x8400000
esp0: esp-options=0x46
esp0 at espdma0: SBus0 slot 0xe offset 0x8800000 Onboard device sparc9 ipl 4
sd0 at esp0: target 0 lun 0
sd0 is /sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@0,0
<SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
sd6 at esp0: target 6 lun 0
sd6 is /sbus@1f,0/espdma@e,8400000/esp@e,8800000/sd@6,0
fca0: JNI Fibre Channel Adapter (1062 MB/sec), model FC                      ← Verify that
fca0: SBus 1: IRQ 4: FCODE Version 11.0.9 [1a6384]: SCSI ID 125: AL_PA 01    ← these items
fca0: Fibre Channel WWN: 100000e0690000d5                                    ← are listed.
fca0: FCA Driver Version 2.2.HIT.03, Oct 09, 1999 Solaris 2.5, 2.6
You can adjust the queue depth for the devices later as needed (within the
specified range) to optimize the I/O performance.
The required I/O time-out value (TOV) for Hitachi RAID storage system
devices is 60 seconds (default TOV=60). If the I/O TOV has been changed
from the default, change it back to 60 seconds by editing the sd_io_time or
ssd_io_time parameter in the /etc/system file.
Several other parameters (e.g., FC fibre support) may also need to be set.
Please refer to the user documentation that came with your HBA to determine
whether other options are required to meet your operational requirements.
Use the same settings and device parameters for all Hitachi RAID storage
system devices. For fibre-channel, the settings in the system file apply to the
entire system, not just to the HBA(s).
For Sun generic HBA: set ssd:ssd_max_throttle = x (for x see Table 2-5)
5. Save your changes, and exit the text editor.
6. Shut down and reboot to apply the I/O TOV setting.
:
* To set a variable named 'debug' in the module named 'test_module'
*
* set test_module:debug = 0x13
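As an illustration only (the queue-depth value shown is a placeholder; use the value from Table 2-5 that applies to your configuration), the lines added to /etc/system for a Sun generic HBA might look like this:

set ssd:ssd_io_time = 60
set ssd:ssd_max_throttle = 8

The settings take effect at the next reboot (step 6 above).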
Table 2-6 summarizes the steps for connecting the Hitachi RAID storage
system to the Solaris host. Some steps are performed by the Hitachi Data
Systems representative, while others are performed by the user.
Table 2-6 Steps for Connecting the Storage System to a Solaris Host
1. Verify storage system installation (Hitachi Data Systems representative): Confirm that the status of the fibre-channel ports and LDEVs is NORMAL.

2. Shut down the Solaris system (user): Power off the Solaris system before connecting the Hitachi RAID storage system:
Shut down the Solaris system.
When shutdown is complete, power off the Solaris display.
Power off all peripheral devices except for the Hitachi RAID storage system.
Power off the host system. You are now ready to connect the Hitachi RAID storage system.

3. Connect the storage system to the Solaris system (Hitachi Data Systems representative): Install fibre-channel cables between the storage system and the Solaris host. Follow all precautions and procedures in the Maintenance Manual. Check all specifications to ensure proper installation and configuration.

4. Power on the Solaris system (user): Power on the Solaris system after connecting the Hitachi RAID storage system:
Power on the Solaris system display.
Power on all peripheral devices. The Hitachi RAID storage system should be on, the fibre-channel ports should be configured, and the driver configuration file and system configuration file should be edited. If the fibre ports are configured or the configuration files are edited after the Solaris system is powered on, restart the system so that the new devices are recognized.
Confirm the ready status of all peripheral devices, including the Hitachi storage system.
Power on the Solaris system.

5. Boot the Solaris system (user): When the OK prompt appears, boot the system using the boot -r command. The -r option tells the system to rebuild the devices. Using boot by itself will not build the new devices on the Hitachi RAID storage system.
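For example, at the OpenBoot ok prompt:

ok boot -r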
For information about configuring the Hitachi RAID storage system for failover
and SNMP, see Chapter 4.
For information about fibre port addressing (AL-PA to SCSI TID mapping) for
Solaris systems, see Appendix A.
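# Example additions to /kernel/drv/sd.conf for the new devices.
# The target IDs (2 and 4) and LUN 0 shown below are examples only;
# use the target IDs and LUNs configured for your storage system ports.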
name="sd" class="scsi"
target=2 lun=0;
name="sd" class="scsi"
target=4 lun=0;
#
# halt                                    ← Enter halt.
Jan 11 10:10:09 sunss20 halt:halted by root
Jan 11 10:10:09 sunss20 syslogd:going down on signal 15
Syncing file systems... done
Halted
Program terminated
Type help for more information
OK
SUNW,fas0 at sbus0: SBus0 slot 0xe offset 0x8800000 and slot 0xe offset 0x8810000 Onboard
device sparc9 ipl 4
SUNW,fas0 is /sbus@1f,0/SUNW,fas@e,8800000
sd0 at SUNW,fas0: target 0 lun 0
sd0 is /sbus@1f,0/SUNW,fas@e,8800000/sd@0,0
<SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
sd6 at SUNW,fas0: target 6 lun 0
sd6 is /sbus@1f,0/SUNW,fas@e,8800000/sd@6,0
WARNING: fca0: fmle: sc1: 000e0000 sc2: 00000000
fca0: JNI Fibre Channel Adapter (1062 MB/sec), model FC
fca0: SBus 1 / IRQ 4 / FCODE Version 10 [20148b] / SCSI ID 125 / AL_PA 0x1
fca0: Fibre Channel WWN: 100000e0690002b7
fca0: FCA Driver Version 2.1+, June 24, 1998 Solaris 2.5, 2.6
fca0: All Rights Reserved.
fca0: < Total IOPB space used: 1100624 bytes >
fca0: < Total DMA space used: 532644 bytes >
fca0: <HITACHI :OPEN-3 :5235> target 2 (alpa 0xe4) lun 0 online
sd192 at fca: target 2 lun 0              ← target ID = 2, LUN = 0
sd192 is /sbus@1f,0/fca@1,0/sd@2,0
Note: When the Solaris system accesses the multiplatform devices, the
message “Request sense couldn’t get sense data” may be displayed. You can
disregard this message.
# dmesg | more
:
sbus0 at root: UPA 0x1f 0x0 ...
fas0: rev 2.2 FEPS chip
SUNW,fas0 at sbus0: SBus0 slot 0xe offset 0x8800000 and slot 0xe offset 0x8810000 Onboard device
sparc9 ipl 4
SUNW,fas0 is /sbus@1f,0/SUNW,fas@e,8800000
sd0 at SUNW,fas0: target 0 lun 0
sd0 is /sbus@1f,0/SUNW,fas@e,8800000/sd@0,0
<SUN2.1G cyl 2733 alt 2 hd 19 sec 80>
sd6 at SUNW,fas0: target 6 lun 0
sd6 is /sbus@1f,0/SUNW,fas@e,8800000/sd@6,0
WARNING: fca0: fmle: sc1: 000e0000 sc2: 00000000
fca0: JNI Fibre Channel Adapter (1062 MB/sec), model FC
fca0: SBus 1 / IRQ 4 / FCODE Version 10 [20148b] / SCSI ID 125 / AL_PA 0x1
fca0: Fibre Channel WWN: 100000e0690002b7
fca0: FCA Driver Version 2.1+, June 24, 1998 Solaris 2.5, 2.6
fca0: All Rights Reserved.
fca0: < Total IOPB space used: 1100624 bytes >
fca0: < Total DMA space used: 532644 bytes >
fca0: <HITACHI :OPEN-3 :5235> target 2 (alpa 0xe4) lun 0 online
sd192 at fca: target 2 lun 0              ← target ID = 2, LUN = 0
sd192 is /sbus@1f,0/fca@1,0/sd@2,0
WARNING: /sbus@1f,0/fca@1,0/sd@2,0 (sd192)
corrupt label - wrong magic number                             ← Not yet labeled.
Vendor 'HITACHI', product 'OPEN-3', 4806720 512 byte blocks    ← vendor name, product name, number of blocks
fca0: <HITACHI :OPEN-3 :5235> target 2 (alpa 0xdc) lun 1 online
sd193 at fca: target 2 lun 1 (LUN=1, target ID=2)
sd193 is /sbus@1f,0/fca@1,0/sd@2,1
WARNING: /sbus@1f,0/fca@1,0/sd@2,1 (sd193)
corrupt label - wrong magic number
Vendor 'HITACHI', product 'OPEN-3', 4806720 512 byte blocks
The disk partitioning and labeling procedure involves the following tasks:
1. Defining and setting the disk type.
2. Setting the partition(s).
3. Labeling the disk (required for devices to be managed by HDLM).
4. Verifying the disk label.
A good way to partition and label the disks is to partition and label all devices
of one type (e.g., OPEN-3), then all devices of the next type (e.g., OPEN-9),
and so on until you partition and label all new devices. You will enter this
information into the Solaris system during the disk partitioning and labeling
procedure.
Note:
• Do not use HITACHI-OPEN-x-0315, HITACHI-3390-3A/B-0315. These
disk types are created automatically by the Solaris system and cannot
be used for the Hitachi RAID storage system devices.
• LU capacity must be less than 1 TB. If you select other as the disk type, the disk type parameters described below cannot be set for an LU larger than 32,767 data cylinders.
6. If the disk type for the selected device is not already defined, enter the
number for other to define a new disk type.
7. Enter the disk type parameters for the selected device using the data
provided above. Be sure to enter the parameters exactly as shown in
Figure 3-5.
8. When prompted to label the disk, enter n for “no”.
9. When the format menu appears, enter partition to display the partition
menu.
10. Enter the desired partition number and the partition parameters shown in Figure 3-6 and Table 3-1 through Table 3-8.
11. At the partition> prompt, enter print to display the current partition
table.
Note: This step does not apply to the multiplatform devices (e.g., 3390-
3A/B/C), because these devices can only have one partition of fixed size.
13. After setting the partitions for the selected device, enter label at the
partition> prompt and enter y to label the device (see Figure 3-7).
14. Enter quit to exit the partition utility and return to the format utility.
15. At the format> prompt, enter disk to display the available disks. Be sure
the disk you just labeled is displayed with the proper disk type name and
parameters.
16. Repeat steps 2 through 15 for each new device to be partitioned and
labeled. After a device type is defined (e.g., HITACHI OPEN-3), you can
label all devices of that same type without having to enter the parameters
(skipping steps 6 and 7). For this reason, you may want to label the
devices by type (e.g., labeling all OPEN-3 devices, then all OPEN-9 devices,
and so on) until all new devices have been partitioned and labeled.
17. When you finish partitioning and labeling the disks and verifying the disk
labels, exit the format utility by entering quit or Ctrl-d.
c1t2d0: configured with capacity of 2.29GB (OPEN-3)    ← These devices are not yet labeled.
c1t2d1: configured with capacity of 2.29GB (OPEN-3)    ←
c2t4d0: configured with capacity of 6.88GB (OPEN-9)    ←
c2t5d0: configured with capacity of 2.77GB (3390-3B)   ←
c2t6d0: configured with capacity of 2.78GB (3390-3A)   ←
↑ These character-type device file names are used later to create the file systems.
1: The number of cylinders for the 3390-3B is 3346, and the Hitachi RAID
storage system returns ‘3346 cylinder’ to the Mode Sense command, and
‘5822040 blocks’ (Maximum LBA 5822039) to the Read capacity command.
When 3390-3B is not labeled yet, Solaris displays 3344 data cylinders and 2
alternate cylinders. When 3390-3B is labeled by the Solaris format type
subcommand, use 3340 for data cylinder and 2 for alternate cylinder. This is
similar to the 3390-3B VLL.
2: The Hitachi RAID storage system reports the RPM of the physical disk drive
in response to the type subcommand parameter.
3: It is also possible to follow the procedure by selecting type => "0. Auto Configure" and then labeling the drive, without calculating detailed values such as cylinders, heads, and blocks per track.
If the number of cylinders entered exceeds 65,533, the total LU block number
equals or is less than 65,533. Use the Format Menu to specify the numbers of
cylinders, heads, and blocks per track.
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volume - set 8-character volume name
quit
format> type                              ← Enter type.
PARTITION MENU:
0 - change '0' partition
1 - change '1' partition
2 - change '2' partition
3 - change '3' partition
4 - change '4' partition
5 - change '5' partition
6 - change '6' partition
7 - change '7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
quit
partition> 0                              ← Select partition number.
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 - 0 (0/0/0)
Specify disk (enter its number): 3        ← Enter the number for the next disk to label, or press Ctrl-d to quit.
Figure 3-7 Labeling the Disk and Verifying the Disk Label
Note: The Sun Solaris system displays the following warnings when an FX
device (e.g., 3390-3A) is labeled. You can ignore these warnings:
Warning: error writing VTOC.
Warning: no backup labels. Label failed.
Note: For the values indicated by Nxx (e.g., N15, N22), see Table 3-2 through Table 3-8.
Notes:
N1,N2,N3: Use value in Table 3-1.
N4: Use same value as N1. Specify as NNNNc, where NNNN = # of cylinders and c = cylinder (e.g.
enter 6674c for OPEN-3*2).
Notes:
N26,N27,N28 : Use values in Table 1-2.
N29: Use same value as N26. Specify as NNNNc, where NNNN = # of cylinders and c = cylinder (e.g.
enter 19930c for OPEN-8*2).
Note: When these values are specified as parameters of the Solaris format type subcommand, the number of data cylinders must be 32,767 or less, the number of heads must be 64 or less, and the number of blocks per track must be 256 or less. All data blocks of OPEN-3*2 through OPEN-3*36 can be used with these parameters.
Notes:
N5,N6,N7: Use value in Table 3-1 and Table 3-2.
N8: Use same value as N5. Specify as NNNNc, where NNNN = # of cylinders and c = cylinder (e.g.
enter 20030c for OPEN-9*2).
Notes:
N30,N31,N32: Use value in Table 3-1.
N33: Use same value as N30. Specify as NNNNc, where NNNN = # of cylinders and c = cylinder (e.g.
enter 19757c for OPEN-E*2).
Note: When these values are specified as parameters of the Solaris format type subcommand, the number of data cylinders must be 32,767 or less, the number of heads must be 64 or less, and the number of blocks per track must be 256 or less. All data blocks of OPEN-E*2 through OPEN-E*10 can be used with these parameters. For OPEN-E*11 through OPEN-E*18, some blocks become unusable.
Notes:
N34, N35, N36: Use value in Table 3-1.
N37: Use same value as N34. Specify as NNNNc, where NNNN = # of cylinders and c = cylinder (e.g.
enter 19013c for OPEN-L*2).
Note: When these values are specified as parameters of the Solaris format type subcommand, the number of data cylinders must be 32,767 or less, the number of heads must be 64 or less, and the number of blocks per track must be 256 or less. All data blocks of OPEN-L*2 through OPEN-L*6 can be used with these parameters. For OPEN-L*7, some blocks become unusable.
Notes:
N21: The number of blocks of a LUSE composed of VLL volumes is calculated as N21 = N20 × (# of heads) × (# of sectors per track).
N22: N20 – 2 (use the total number of cylinders – 2).
N23, N24: Use the values in Table 3-1 and Table 3-2.
N25: Use the same value as N22.
Example 1: Combinations of N22 (cylinders), N23 (heads), and N24 (blocks/track) such that N22 × N23 × N24 × 512 (bytes) is equal to or less than X GB (X × 1024 × 1024 × 1024 bytes), for example:
16000 (cyl) × 256 (heads) × 256 (blocks/track) × 512 (bytes) = 536,870,912,000 bytes = 500 GB
32000 (cyl) × 128 (heads) × 256 (blocks/track) × 512 (bytes) = 536,870,912,000 bytes = 500 GB
Table 3-9 Steps for Creating and Mounting the File Systems
Note: Do not create file systems or mount directories for the FX devices (e.g.,
3390-3A). These devices are accessed as raw devices and do not require any
further configuration after being partitioned and labeled.
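The file systems themselves are created with the newfs command on the character-type device files noted earlier. A minimal sketch (the device name is an example; specify the character-type device file for your partition):

# newfs /dev/rdsk/c1t2d0s0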
To create the mount directories for the newly installed SCSI disk devices:
1. Go to the root directory (see Figure 3-9).
2. Use the mkdir command to create the mount directory.
To delete a mount directory, use the rmdir command (e.g., rmdir
/USP_LU00).
3. Choose a name for the mount directory that identifies both the logical
volume and the partition. For example, to create a mount directory named
USP_LU00, enter:
mkdir /USP_LU00
4. Use the ls -x command to verify the new mount directory.
5. Repeat steps 2 and 3 for each logical partition on each new SCSI disk
device.
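A minimal sketch of steps 1 through 4, using the example directory name USP_LU00:

# cd /
# mkdir /USP_LU00
# ls -x          ← verify that USP_LU00 appears in the listing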
To mount and verify the file systems for the new devices (see Figure 3-10):
1. Mount the file system using the mount command. Be sure to use the
correct block-type device file name and mount directory for the
device/partition. For example, to mount the file /dev/dsk/c1t2d0s0 with
the mount directory /USP_LU00, enter:
mount /dev/dsk/c1t2d0s0 /USP_LU00
To unmount a file system, use the umount command (e.g., umount
/USP_LU00).
Note: If you already set the auto-mount parameters (see Setting and
Verifying the Auto-Mount Parameters), you do not need to specify the block-
type device file, only the mount directory.
2. Repeat step 1 for each partition of each newly installed SCSI disk device.
3. Display the mounted devices using the df -k command, and verify that all
new SCSI disk devices are displayed correctly. OPEN-x devices will display
as OPEN-3, OPEN-9, OPEN-E, OPEN-L devices.
4. As a final verification, perform some basic UNIX operations (e.g., file
creation, copying, and deletion) on each logical unit to ensure the new file
systems are fully operational.
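A minimal sketch of mounting and verifying one of the new file systems, using the example device and mount directory from step 1:

# mount /dev/dsk/c1t2d0s0 /USP_LU00
# df -k          ← verify that /USP_LU00 is listed with the expected capacity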
To set the auto-mount parameters for the desired devices (see Figure 3-11):
1. Make a backup copy of the /etc/vfstab file:
cp /etc/vfstab /etc/vfstab.standard
2. Edit the /etc/vfstab file to add one line for each device to be auto-
mounted. Table 3-10 shows the auto-mount parameters. If you make a
mistake while editing, exit the vi editor without saving the file, and then
begin editing again.
3. Reboot the Solaris system after you are finished editing the /etc/vfstab
file.
4. Use the df -k command to display the mounted devices and verify that the
desired devices were auto-mounted.
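As an illustration only (the device files and mount point are examples; use the parameters from Table 3-10 that apply to your devices), an /etc/vfstab entry for the device mounted above might look like this:

#device to mount     device to fsck       mount point   FS type   fsck pass   mount at boot   mount options
/dev/dsk/c1t2d0s0    /dev/rdsk/c1t2d0s0   /USP_LU00     ufs       2           yes             -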
This chapter describes how failover and SNMP operations are supported on the
Hitachi RAID storage system.
Host Failover
Path Failover
SNMP Remote System Management
Note: The user is responsible for configuring the failover and SNMP
management software on the UNIX/PC server host. For assistance with failover
and/or SNMP configuration on the host, refer to the user documentation, or
contact the vendor’s technical support.
Note: You must set HOST MODE=09 before installing Sun Cluster, or the
Quorum Device will not be assigned to the Hitachi RAID storage system.
For assistance with Veritas Cluster Server operations, refer to the Veritas user
documentation, see Appendix D, Note on Using Veritas Cluster Server, or
contact Symantec technical support. For assistance with Sun Cluster
operations, refer to the Sun Cluster user documentation, or contact the
vendor’s technical support. For assistance with specific configuration issues
related to the Hitachi RAID storage system, please contact your Hitachi Data
Systems representative.
Path Failover
The Hitachi RAID storage systems support the Hitachi Dynamic Link Manager
(HDLM) and Veritas Volume Manager path failover products for the Solaris
operating system. Be sure to configure the path failover software and any
other products as needed to recognize and operate with the newly attached
Hitachi RAID storage system devices.
Note: Devices that will be managed by HDLM must have a label (see
Partitioning and Labeling the New Devices).
For assistance with Hitachi Dynamic Link Manager, refer to the Hitachi
Dynamic Link Manager for Solaris User’s Guide or contact your Hitachi Data
Systems representative. For assistance with Veritas Volume Manager
operations, refer to the Veritas user documentation or contact Symantec
technical support.
When a SIM occurs, the SNMP agent initiates trap operations, which alert the
SNMP manager of the SIM condition. The SNMP manager receives the SIM
traps from the SNMP agent, and can request information from the SNMP agent
at any time.
Note: The user is responsible for configuring the SNMP manager on the Solaris
server host. For assistance with SNMP manager configuration on the Solaris
server host, refer to the user documentation, or contact the vendor’s technical
support.
(Figure: SNMP environment. SIM error information passes from the Hitachi RAID storage system to the service processor over the private LAN, and the SNMP manager on the UNIX/PC server receives the error information from the service processor over the public LAN.)
For troubleshooting information on the Hitachi RAID storage system, see the
User and Reference Guide for the storage system (e.g., Hitachi Virtual Storage
Platform User and Reference Guide).
The logical devices are not recognized by the system:
Ensure the READY indicator lights on the storage system are ON.
Ensure the fibre-channel cables are correctly installed and firmly connected.
Run dmesg to recheck the fibre buses for new devices.
Verify the contents of the /kernel/drv/sd.conf file.

File system cannot be created (newfs command):
Ensure the character-type device file is specified for the newfs command.
Verify that the logical unit is correctly labeled by the UNIX format command.

The file system is not mounted after rebooting:
Ensure the system was restarted properly.
Ensure the file system attributes are correct.
Ensure the /etc/vfstab file is correctly edited.

The Solaris system does not reboot properly after hard shutdown:
If the Solaris system is powered off without executing the shutdown process, wait three minutes before restarting the Solaris system. This allows the storage system's internal time-out process to purge all queued commands so that the storage system is available (not busy) during system startup. If the Solaris system is restarted too soon, the storage system will continue trying to process the queued commands, and the Solaris system will not reboot successfully.

The Hitachi RAID storage system responds Not Ready, or displays Not Ready and timed itself out:
Contact the Hitachi Data Systems Support Center.

The system detects a parity error:
Ensure the HBA is installed properly. Reboot the Solaris system.
ok " /sbus/fca" select-dev
ok true to fca-verbose
ok boot fcadisk
Error message:
Cannot Assemble drivers for /sbus@1f,0/fcaw@1,0/sd@0,0:a
Cannot Mount root on /sbus@1f,0/fcaw@1,0/sd@0,0:a
Problem:
The process of copying the OS to the fibre-channel drive was not complete, or the drive specified on the boot command is not the same as the one the OS was constructed on.
Error message:
Can’t open boot device
Problem:
The wwn specified with the set-bootn0-wwn does not correspond to the wwn of the device.
Could also be a cable problem – the adapter cannot initialize.
Error message:
The file just loaded does not appear to be bootable
Problem:
The bootblk was not installed on the target.
Error message:
mount: /dev/dsk/c0t0d0s0 – not of this fs type
Problem:
At this point the process hangs. This happens if the /etc/vfstab file on the fibre-channel boot drive has not been updated to reflect the new target.
Error message:
Get PortID request rejected by nameserver
Problem:
The wwn of the target is not correct. Select the adapter and perform set-bootn0-wwn. If this is correct, check the switch to see that the target is properly connected.
Error message:
Can’t read disk label
Problem:
The selected target is not a Solaris filesystem.
Error message:
Panic dump not saved
Problem:
After the system is successfully booted to Solaris from the fibre channel and a panic occurs, the panic dump is not saved to the swap device. This can be the result of an improperly defined swap partition.
Use the format command to view the slices on the fibre channel drive.
Take the partition option, then the print option.
The swap partition should look something like this:
1 swap wm 68-459 298.36MB (402/0/0) 611040
Sizes and cylinders will probably be different on your system. Make sure that the flag is wm and that the sizes are defined (not 0). Then use the label option from the partition menu to write the label to the drive. After this, the panic dump should be saved to the swap partition. If the partition needs to be changed, choose the partition option and enter 1 to select slice 1.
The Hitachi Data Systems customer support staff is available 24 hours a day,
seven days a week. If you need technical support, log on to the Hitachi Data
Systems Portal for contact information: https://hdssupport.hds.com
AL-PA TID   AL-PA TID   AL-PA TID   AL-PA TID
EF 0 CD 16 B2 32 98 48
E8 1 CC 17 B1 33 97 49
E4 2 CB 18 AE 34 90 50
E2 3 CA 19 AD 35 8F 51
E1 4 C9 20 AC 36 88 52
E0 5 C7 21 AB 37 84 53
DC 6 C6 22 AA 38 82 54
DA 7 C5 23 A9 39 81 55
D9 8 C3 24 A7 40 80 56
D6 9 BC 25 A6 41 7C 57
D5 10 BA 26 A5 42 7A 58
D4 11 B9 27 A3 43 79 59
D3 12 B6 28 9F 44 76 60
D2 13 B5 29 9E 45 75 61
D1 14 B4 30 9D 46 74 62
CE 15 B3 31 9B 47 73 63
72 64 55 80 3A 96 23 112
71 65 54 81 39 97 23 113
6E 66 53 82 36 98 1F 114
6D 67 52 83 35 99 1E 115
6C 68 51 84 34 100 1D 116
6B 69 4E 85 33 101 1B 117
6A 70 4D 86 32 102 18 118
69 71 4C 87 31 103 17 119
67 72 4B 88 2E 104 10 120
66 73 4A 89 2D 105 0F 121
65 74 49 90 2C 106 08 122
63 75 47 91 2B 107 04 123
5C 76 46 92 2A 108 02 124
MPxIO enables you to more effectively represent and manage devices that are accessible through multiple I/O controller interfaces within a single instance of the Solaris operating system. The MPxIO architecture:
• Helps protect against I/O outages due to I/O controller failures. Should one
I/O controller fail, MPxIO automatically switches to an alternate controller.
• Increases I/O performance by load balancing across multiple I/O channels.
6. Check for any target that is not configured (shown in red). Then issue the following command to see the unconfigured LUNs:
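The command itself is not reproduced in this excerpt. On typical Solaris MPxIO configurations this is a cfgadm listing, for example (assumed command; verify against your Solaris documentation):

# cfgadm -al -o show_FCP_dev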
Each node of VCS registers its reserve key when importing a disk group. A node registers the identical reserve key for all paths of all disks (LUs) in the disk group. The reserve key contains a unique value for each disk group and a value to distinguish nodes.
Key format:
<Node # + disk group-unique information>
Example:
APGR0000, APGR0001, BPGR0000, and so on
When the Hitachi RAID storage system receives a request to register a reserve key, the reserve key and the port WWN of the node are recorded in a key registration table for each storage system port on which the registration request is received. The number of reserve keys that can be registered is 128 per port. The storage system checks for duplicate registrations using the combination of the node port WWN and the reserve key, so the number of entries in the registration table does not increase even if a request to register a duplicate reserve key is accepted.
When the number of registered reserve keys exceeds the upper limit of 128, key registration fails, as do operations such as adding an LU to the disk group. To avoid failure of reserve key registration, keep the number of reserve keys below 128. To do this, apply restrictions such as limiting the number of nodes, limiting the number of server ports by using the LUN security function, or keeping the number of disk groups appropriate. When adding an LU to increase disk capacity, do not create additional disk groups; instead, add the LU to an existing disk group.
(Figure: Two nodes, Node A and Node B, with HBA ports WWNa0, WWNa1, WWNb0, and WWNb1, connect through fibre-channel switches to storage system ports 1A and 2A. LUs such as LU4, LU5, and LU6 are grouped into disk groups (disk group 2, disk group 3), and each port's key registration table holds up to 128 entries (0 through 127) of reserve keys and node WWNs.)
Figure D-1 Adding Reserve Keys for LUs to Increase Disk Capacity
blk block
FC fibre-channel
FCA fibre-channel adapter
FX Hitachi Cross-OS File Exchange
GB gigabyte
Gbps gigabits per second
I/O input/output
KB kilobyte
MB megabyte
msec millisecond
mto mainframe-to-open
PA physical address
PB petabyte
TID target ID
TOV time-out value
trk track
Corporate Headquarters
750 Central Expressway
Santa Clara, California 95050-2627
U.S.A.
Phone: 1 408 970 1000
www.hds.com
[email protected]
Europe Headquarters
Sefton Park
Stoke Poges
Buckinghamshire SL2 4HD
United Kingdom
Phone: + 44 (0)1753 618000
[email protected]
MK-96RD632-05