Configure LUNs For ASM Disks Using WWID ASMLIB Linux5
In this Document
Goal
Solution
1. Configure SCSI_ID to Return Unique Device Identifiers:
2. Configure LUNs for ASM:
4. Create ASM diskgroups:
5. To make the disk available enter the following commands:
7. Ensure that the allocated devices can be seen in /dev/mpath:
8. Ensure that the devices can be seen in /dev/mapper:
9. Check the device type:
10. Set the ASM parameter (ORACLEASM_SCANORDER) in the ASMLIB configuration file,
/etc/sysconfig/oracleasm, to force ASM to bind to the multipath devices
References
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.4 to 11.2.0.2 - Release: 10.2 to 11.2
Linux x86-64
Goal
This document details "how to" steps by using an example that creates devices for Automatic
Storage Management (ASM) using World Wide Identifier (WWID), DM-Multipathing, and
ASMLIB, utilizing a Hitachi Storage Sub-system. The simplified "how to" steps are for Red Hat
Enterprise Linux version 5 (RHEL 5) and Oracle Enterprise Linux version 5 (OEL 5) on Linux
x86-64 for preparing storage to use ASM.
Each multipath device has a World Wide Identifier (WWID), which is guaranteed to be globally
unique and unchanging. By default, the name of a multipath device is set to its WWID.
Alternatively, you can set the user_friendly_names option in the multipath configuration file,
which sets the alias to a node-unique name of the form mpathn. When the user_friendly_names
configuration option is set to yes, the name of the multipath device is set to /dev/mpath/mpathn.
Multipathing is configured by modifying the multipath configuration file, /etc/multipath.conf.
Solution
1. Configure SCSI_ID to Return Unique Device Identifiers:
1a. Whitelist SCSI devices
(System Administrator's Task)
Before udev can be configured to explicitly name devices, scsi_id(8) must first be
configured to return their device identifiers. SCSI commands are sent directly to the device
via the SG_IO ioctl interface. Modify the /etc/scsi_id.config file - add 'options=-g', or
replace the 'options=-b' entry with it if one exists, for example:
# cat /etc/scsi_id.config
vendor="ATA",options=-p 0x80
options=-g
360060e80045b2b0000005b2b00001007
360060e80045b2b0000005b2b00001679
360060e80045b2b0000005b2b0000163c
The two initial SCSI ids represent the local disks (/dev/sda and /dev/sdb). The remaining five
represent the SCSI ids of the fibre channel attached LUNs. A subset of the output string that
scsi_id generates for the fibre LUNs matches the World Wide Identifier (WWID). A simple
example would be a disk connected to two Fibre Channel ports: should one controller, port, or
switch fail, the operating system can route I/O through the remaining controller transparently to
the application, with no changes visible to the applications other than perhaps incremental
latency.
Important: If using Real Application Clusters (RAC), Clusterware devices must be visible and
accessible to all cluster nodes. Typically, cluster node operating systems need to be updated in
order to see newly provisioned (or modified) devices on shared storage, i.e. use '/sbin/partprobe
<device>' or '/sbin/sfdisk -r <device>', etc., or simply reboot. Resolve any issues preventing
cluster nodes from correctly seeing or accessing Clusterware devices before proceeding.
1c. Obtain Clusterware device unique SCSI identifiers:
Run the scsi_id(8) command against Clusterware devices from one cluster node to obtain their
unique device identifiers. When running the scsi_id(8) command with the -s argument, the
device path and name passed should be that relative to sysfs directory /sys/ i.e. /block/<device>
when referring to /sys/block/<device>. Record the unique SCSI identifiers of Clusterware
devices - these are required later when configuring multipathing, for example:
# for i in `cat /proc/partitions | awk '{print $4}' | grep sd`; do echo "### $i: `scsi_id -g -u -s /block/$i`"; done
...
### sdh: 360060e80045b2b0000005b2b0000163c
### sdh1:
### sdi: 360060e80045b2b0000005b2b0000163c
### sdi1:
...
### sdk: 360060e80045b2b0000005b2b00001679
### sdk1:
...
### sdm: 360060e80045b2b0000005b2b000006c4
### sdm1:
### sdn: 360060e80045b2b0000005b2b000006d8
### sdn1:
### sdo: 360060e80045b2b0000005b2b00001007
### sdo1:
...
### sdz: 360060e80045b2b0000005b2b00001679
### sdz1:
From the output above, note that multiple devices share common SCSI identifiers. It should now
be evident that devices such as /dev/sdh and /dev/sdi refer to the same shared storage device
(LUN).
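The WWID grouping above can be checked mechanically. The following is a minimal sketch (the group_by_wwid helper name is hypothetical, not part of any tool): given "device wwid" pairs such as those produced by the scsi_id loop in step 1c, it prints one line per LUN listing all paths that share a WWID.

```shell
# Sketch only: group device names by WWID so that paths to the same
# LUN appear together on one line. The sample data reuses IDs from
# the example output above; with real devices, feed it the output
# of the step 1c scsi_id loop instead of the printf.
group_by_wwid() {
  sort -k2 |
  awk '{paths[$2] = paths[$2] " " $1}
       END {for (w in paths) print w ":" paths[w]}' |
  sort
}

printf '%s\n' \
  'sdh 360060e80045b2b0000005b2b0000163c' \
  'sdi 360060e80045b2b0000005b2b0000163c' \
  'sdk 360060e80045b2b0000005b2b00001679' \
  'sdz 360060e80045b2b0000005b2b00001679' | group_by_wwid
```

Each output line shows one WWID followed by every device path that resolves to it, which makes the multipath pairs (e.g. sdh/sdi) easy to spot.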
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1011, default 1): [use default]
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011): [use default]
Using default value 1011
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
1e. Run fdisk(8) and/or 'cat /proc/partitions' to ensure the devices are visible.
(If using Real Application Clusters (RAC), verify the Clusterware devices are visible on each
node.) For example:
# cat /proc/partitions
major minor #blocks name
8 0 142577664 sda
8 1 104391 sda1
8 2 52428127 sda2
8 3 33551752 sda3
8 4 1 sda4
8 5 26218048 sda5
8 6 10482381 sda6
8 7 8385898 sda7
8 16 263040 sdb
8 17 262305 sdb1
8 32 263040 sdc
8 33 262305 sdc1
8 48 263040 sdd
8 49 262305 sdd1
8 64 263040 sde
8 65 262305 sde1
8 80 263040 sdf
8 81 262305 sdf1
8 96 263040 sdg
8 97 262305 sdg1
8 112 576723840 sdh
8 113 576709402 sdh1
8 128 576723840 sdi
8 129 576709402 sdi1
8 144 576723840 sdj
8 145 576709402 sdj1
8 160 52429440 sdk
8 161 52428096 sdk1
Note: At this point, if using Real Application Clusters (RAC), each node may refer to the would-be
Clusterware devices by different device file names. This is expected. Irrespective of which node
the scsi_id command is run from, the value returned for a given device (LUN) should always be
the same.
Note: DM-Multipath provides a way of organizing the I/O paths logically by creating a single
multipath device on top of the underlying devices. Each device presented by the Hitachi Storage
Sub-system (e.g. WWID 360060e80045b2b0000005b2b0000163c) is backed by two underlying
physical devices. These two physical devices have to be partitioned (e.g. sdh1 and sdi1), and the
WWID device has to be partitioned as well (e.g. 360060e80045b2b0000005b2b0000163cp1). The
two physical devices must also be the same size within that group, both non-partitioned
and partitioned, and across nodes in Real Application Clusters (RAC).
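The size requirement in the note above can be verified per path group. A minimal sketch follows; the same_size helper is hypothetical and the sample sizes are illustrative. With real devices the sizes would come from 'blockdev --getsz /dev/<device>'.

```shell
# Sketch only: succeed when every size passed in is identical, as
# required for the underlying paths of one multipath group (and for
# the corresponding devices across RAC nodes).
same_size() {
  [ "$(printf '%s\n' "$@" | sort -u | wc -l)" -eq 1 ]
}

# With live devices, e.g.:
#   same_size "$(blockdev --getsz /dev/sdh)" "$(blockdev --getsz /dev/sdi)"
if same_size 1153447680 1153447680; then
  echo "path sizes match"
else
  echo "path size mismatch"
fi
```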
In fact, various device names are created and used to refer to multipathed devices, for
example:
# dmsetup ls | sort
360060e80045b2b0000005b2b000006b0 (253, 0)
360060e80045b2b0000005b2b000006b0p1 (253, 15)
360060e80045b2b0000005b2b000006c4 (253, 1)
360060e80045b2b0000005b2b000006c4p1 (253, 17)
360060e80045b2b0000005b2b000006d8 (253, 2)
360060e80045b2b0000005b2b000006d8p1 (253, 26)
360060e80045b2b0000005b2b0000163c (253, 11)
360060e80045b2b0000005b2b0000163cp1 (253, 20)
# ll /dev/mpath/
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006b0 -> ../dm-0
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006b0p1 -> ../dm-15
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006c4 -> ../dm-1
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006c4p1 -> ../dm-17
lrwxrwxrwx 1 root root 7 Jun 27 07:17 360060e80045b2b0000005b2b000006d8 -> ../dm-2
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b000006d8p1 -> ../dm-26
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b0000163c -> ../dm-11
lrwxrwxrwx 1 root root 8 Jun 27 07:17 360060e80045b2b0000005b2b0000163cp1 -> ../dm-20
# ll /dev/mapper/
brw-rw---- 1 root disk 253, 0 Jun 27 07:17 360060e80045b2b0000005b2b000006b0
brw-rw---- 1 root disk 253, 15 Jun 27 07:17 360060e80045b2b0000005b2b000006b0p1
brw-rw---- 1 root disk 253, 1 Jun 27 07:17 360060e80045b2b0000005b2b000006c4
brw-rw---- 1 root disk 253, 17 Jun 27 07:17 360060e80045b2b0000005b2b000006c4p1
brw-rw---- 1 root disk 253, 11 Jun 27 07:17 360060e80045b2b0000005b2b0000163c
brw-rw---- 1 root disk 253, 20 Jun 27 07:17 360060e80045b2b0000005b2b0000163cp1
# ls -lR /dev|more
/dev:
drwxr-xr-x 3 root root 60 Jun 27 07:17 bus
lrwxrwxrwx 1 root root 4 Jun 27 07:17 cdrom -> scd0
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrom-hda -> hda
lrwxrwxrwx 1 root root 4 Jun 27 07:17 cdrom-sr0 -> scd0
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrw -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdrw-hda -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdwriter -> hda
lrwxrwxrwx 1 root root 3 Jun 27 07:17 cdwriter-hda -> hda
crw------- 1 root root 5, 1 Jun 27 07:18 console
lrwxrwxrwx 1 root root 11 Jun 27 07:17 core -> /proc/kcore
drwxr-xr-x 10 root root 200 Jun 27 07:17 cpu
drwxr-xr-x 6 root root 120 Jun 27 07:17 disk
brw-rw---- 1 root root 253, 0 Jun 27 07:17 dm-0
brw-rw---- 1 root root 253, 1 Jun 27 07:17 dm-1
brw-rw---- 1 root root 253, 10 Jun 27 07:17 dm-10
brw-rw---- 1 root root 253, 11 Jun 27 07:17 dm-11
brw-rw---- 1 root root 253, 12 Jun 27 07:17 dm-12
brw-rw---- 1 root root 253, 13 Jun 27 07:17 dm-13
brw-rw---- 1 root root 253, 14 Jun 27 07:17 dm-14
brw-rw---- 1 root root 253, 15 Jun 27 07:17 dm-15
brw-rw---- 1 root root 253, 16 Jun 27 07:17 dm-16
...
lrwxrwxrwx 1 root root 10 Jun 27 07:17 1 -> ../../sda5
lrwxrwxrwx 1 root root 10 Jun 27 07:17 boot1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jun 27 07:17 optapporacle1 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jun 27 07:17 SWAP-sda3 -> ../../sda3
lrwxrwxrwx 1 root root 10 Jun 27 07:17 tmp1 -> ../../sda7
lrwxrwxrwx 1 root root 10 Jun 27 07:17 var1 -> ../../sda6
...
Important: The ASMLIB driver version must match the running kernel version. Download the
matching ASMLIB driver from Oracle's download page for ASMLIB drivers:
http://www.oracle.com/technetwork/server-storage/linux/downloads/index-088143.html
3b. Check status (If Real Application Clusters (RAC), run this command on each
node):
Failed example:
# /etc/init.d/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no
Note: '/etc/init.d/oracleasm configure' configures the on-boot properties of the Oracle ASM
library driver. The questions it asks determine whether the driver is loaded on boot and what
permissions it will have. The current values are shown in brackets ('[]'). Hitting <ENTER>
without typing an answer keeps the current value. Ctrl-C aborts.
3c. Check status again:
# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
Important: The /dev/dm-n devices are internal to device-mapper-multipath and are non-persistent,
so they should not be used. The /dev/mpath/ devices are created so that multipath devices are
visible together; however, they may not be available during the early stages of the boot process,
so they should not typically be used either. The /dev/mapper/ devices, by contrast, are persistent
and are created early during boot. These are the only device names that should be used to access
multipathed devices.
Note: Use /dev/mapper/[WWID]p1 as the device name for createdisk. If using Real Application
Clusters (RAC), run createdisk only on the first node. Run all commands as root.
4a. Check prior to createdisk command:
# /etc/init.d/oracleasm querydisk DAT
Disk "DAT" does not exist or is not instantiated
# /etc/init.d/oracleasm querydisk
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
Device "/dev/mapper/360060e80045b2b0000005b2b0000163cp1" is not marked as an
ASM disk
Note: If using multiple devices/disks within an ASM diskgroup, a good practice is to run
/etc/init.d/oracleasm createdisk with numbers appended to the ASM alias name to create the
members of the diskgroup, for example:
# /etc/init.d/oracleasm createdisk DAT01
/dev/mapper/360060e80045b2b0000005b2b0000163cp1
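The numbered-alias practice can be scripted. A minimal sketch follows; the gen_createdisk helper and the DAT prefix are illustrative, not part of ASMLIB. It only prints the commands, so they can be reviewed before being run as root (and, in RAC, on the first node only).

```shell
# Sketch only: emit one oracleasm createdisk command per diskgroup
# member, numbering the ASM alias (DAT01, DAT02, ...) and using the
# persistent /dev/mapper/<WWID>p1 partition names.
gen_createdisk() {
  i=1
  for wwid in "$@"; do
    printf '/etc/init.d/oracleasm createdisk DAT%02d /dev/mapper/%sp1\n' "$i" "$wwid"
    i=$((i + 1))
  done
}

gen_createdisk \
  360060e80045b2b0000005b2b0000163c \
  360060e80045b2b0000005b2b00001679
```

Reviewing the generated commands before piping them to a root shell helps avoid stamping the wrong device as an ASM disk.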
Note: The numbers, [253, 20], indicate major and minor numbers that correspond to the major
and minor numbers in the file, /proc/partitions. These numbers can be used to validate the
multipathed device by cross-referencing these numbers with the file, /proc/partitions, and the
output of "multipath -ll" to ensure that the major and minor numbers match.
# cat /proc/partitions
major minor #blocks name
.
.
.
253 20 524281275 dm-20
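The cross-reference described in the note can be automated against /proc/partitions-style input. A minimal sketch follows; the find_dm helper name is hypothetical.

```shell
# Sketch only: print "major minor" for a named dm device from
# /proc/partitions-format input, for comparison against the
# (253, 20) pair reported by 'dmsetup ls' / 'multipath -ll'.
find_dm() {
  awk -v name="$1" '$4 == name {print $1, $2}'
}

# On a live system you would read /proc/partitions itself:
#   find_dm dm-20 < /proc/partitions
printf '%s\n' \
  '253 20 524281275 dm-20' \
  '8 112 576723840 sdh' | find_dm dm-20
```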
lrwxrwxrwx 1 ... -> ../dm-22
lrwxrwxrwx 1 ... -> ../dm-16
lrwxrwxrwx 1 ... -> ../dm-23
lrwxrwxrwx 1 ... -> ../dm-1
lrwxrwxrwx 1 ... -> ../dm-2
lrwxrwxrwx 1 ... -> ../dm-3
lrwxrwxrwx 1 ... -> ../dm-4
lrwxrwxrwx 1 ... -> ../dm-5
For example (as root) (if using Real Application Clusters (RAC), do this on each node):
# /sbin/blkid | grep oracleasm
/dev/dm-20: LABEL="DAT" TYPE="oracleasm"
/dev/dm-22: LABEL="ARC" TYPE="oracleasm"
/dev/dm-23: LABEL="FRA" TYPE="oracleasm"
/dev/sdh1: LABEL="DAT" TYPE="oracleasm"
/dev/sdx1: LABEL="ARC" TYPE="oracleasm"
/dev/sdj1: LABEL="FRA" TYPE="oracleasm"
/dev/sdi1: LABEL="DAT" TYPE="oracleasm"
/dev/sdy1: LABEL="ARC" TYPE="oracleasm"
/dev/sdz1: LABEL="FRA" TYPE="oracleasm"
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
Note: ASMLIB scans all disks listed in the /proc/partitions file. Within the multipath
directory, /dev/mpath, the alias names and the WWIDs are linked to the multipathed
dm[-n] names. With ORACLEASM_SCANEXCLUDE set to "sd", ASMLIB does not scan any disks
whose names start with "sd", i.e. all of the underlying SCSI disks.
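The effect of the exclusion can be illustrated on a plain device list. A minimal sketch, where the asmlib_scan_filter helper merely mimics ORACLEASM_SCANEXCLUDE="sd" and is not part of ASMLIB:

```shell
# Sketch only: drop devices whose names begin with "sd" from a
# candidate list, leaving only the multipathed dm-* devices for
# the ASMLIB scan to find.
asmlib_scan_filter() {
  grep -v '^sd'
}

printf '%s\n' sda1 sdh1 sdi1 dm-20 dm-22 | asmlib_scan_filter
```

Only the dm-* names survive the filter, which is why ASMLIB ends up bound to the multipath devices rather than the individual SCSI paths.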
Additional Resources
References
NOTE:564580.1 - Configuring raw devices (multipath) for Oracle Clusterware 10g Release 2 (10.2.0) on RHEL5/OEL5
NOTE:603868.1 - How to Dynamically Add and Remove SCSI Devices on Linux
NOTE:555603.1 - Configuration and Use of Device Mapper Multipathing on Oracle Enterprise Linux (OEL)
NOTE:743949.1 - Unable To Create ASMLIB Disk
NOTE:967461.1 - "Multipath: error getting device" seen in OS log causes ASM/ASMlib to shutdown by itself
NOTE:580153.1 - How To Setup ASM on Linux Using ASMLIB Disks, Raw Devices or Block Devices?
NOTE:1089399.1 - Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat
NOTE:602952.1 - How To Setup ASM & ASMLIB On Native Linux Multipath Mapper disks?
Related
Products
Oracle Database Products > Oracle Database > Oracle Database > Oracle Server Enterprise Edition > STORAGE > ASM Installation and Patching Issues
Keywords
ASM; ASMLIB; CLUSTER; DISKGROUP; ENTERPRISE LINUX; LINUX; MULTIPATH;
ORACLEASM; SCSI; STORAGE