
Front cover

IBM System Storage SAN Volume Controller

Install, use, and troubleshoot the SAN Volume Controller

Learn how to implement block virtualization

Perform backup and restore on a cluster

Jon Tate
Thorsten Hoss
Andy McManus
Massimo Rosati

ibm.com/redbooks

International Technical Support Organization

IBM System Storage SAN Volume Controller

September 2006

SG24-6423-04
Note: Before using this information and the product it supports, read the information in “Notices” on page xxv.

Fifth Edition (September 2006)

This edition applies to Version 4 Release 1 Modification 1 of the IBM System Storage SAN Volume Controller.

© Copyright International Business Machines Corporation 2003, 2004, 2005, 2006. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvi

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
The team that wrote this redbook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxviii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxx
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxi

Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxxiii


September 2006, Fifth Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxxiii

Chapter 1. Introduction to storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 The need for storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 In-band virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Out-of-band virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Chapter 2. IBM System Storage SAN Volume Controller overview . . . . . . . . . . . . . . . 11


2.1 Maximum supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Glossary of commonly used terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Virtualization overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Compass architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.1 SAN Volume Controller clustering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.2 SAN Volume Controller virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.3 SAN Volume Controller multipathing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5 SAN Volume Controller logical configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6 SAN Volume Controller compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.7 Software licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.8 What’s new and what’s in SVC 4.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

Chapter 3. Planning and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25


3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.1 Preparing your UPS environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2.2 Physical rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.2.3 Cable connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3 SAN planning and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.3.1 SAN definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.3.2 Fibre Channel switches, fabrics, interswitch links, and hops . . . . . . . . . . . . . . . . 35
3.3.3 General design considerations with the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.4 Boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3.5 Configuration saving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.3.6 High availability SAN design and configuration rules with SVC . . . . . . . . . . . . . . 40
3.4 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Naming conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.5.1 Dual room high availability configuration with the SVC. . . . . . . . . . . . . . . . . . . . . 46
3.5.2 Local and remote SAN fabrics with SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

3.5.3 Technologies for extending the distance between two SVC clusters . . . . . . . . . . 47
3.6 SVC disk subsystem planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.6.1 Block virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.6.2 MDGs, I/O groups, virtual disks, and managed disks . . . . . . . . . . . . . . . . . . . . . . 50
3.6.3 Extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.6.4 Image mode virtual disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.6.5 Managed mode virtual disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.6.6 Allocation of free extents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.6.7 Selecting MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.6.8 I/O handling and offline conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.6.9 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.6.10 Virtualization operations on virtual disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.6.11 Creating an MDisk group (extent size rules) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6.12 Creating a managed disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.6.13 Creating a virtual disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.6.14 Quality of service on VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.6.15 Creating a host (LUN masking). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.6.16 Port masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.6.17 Standard and persistent reserve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.6.18 Expanding an SVC cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.6.19 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.7 SVC supported capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.7.1 Adding DS8000 storage to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.7.2 Adding ESS storage to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.7.3 Adding DS4000 storage to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.7.4 LUN layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

Chapter 4. Performance and capacity planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81


4.1 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.1.1 SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.1.2 Disk subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.1.3 SVC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2 Planning guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.2.1 I/O queue depth handling in large SANs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.2.2 SVC managed and virtual disk layout planning. . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.3 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.3.1 Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.3.2 Cluster wide statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.3.3 Per-node statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

Chapter 5. Initial installation and configuration of the SVC . . . . . . . . . . . . . . . . . . . . . 97


5.1 Preparing for installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.2 Secure Shell (SSH) overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.2.1 Generating public and private SSH key pair using PuTTY . . . . . . . . . . . . . . . . . . 99
5.3 Basic installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.3.1 Creating the cluster (first time) using the service panel . . . . . . . . . . . . . . . . . . . 103
5.4 Completing the initial cluster setup using the SAN Volume Controller Console GUI . 106
5.4.1 Configuring the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5.4.2 Uploading the SSH public key to the SVC cluster. . . . . . . . . . . . . . . . . . . . . . . . 114
5.4.3 Configuring the PuTTY session for the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.4.4 Starting the PuTTY CLI session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

Chapter 6. Quickstart configuration using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

6.1 Adding nodes to the cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
6.2 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
6.3 Creating host definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.4 Displaying managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
6.5 Creating managed disk groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
6.6 Creating a virtual disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.7 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

Chapter 7. Quickstart configuration using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . 139


7.1 Adding nodes to the cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.1.1 Installing certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
7.2 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
7.3 Creating host definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7.4 Displaying managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.5 Creating managed disk groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
7.6 Creating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
7.7 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

Chapter 8. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165


8.1 SAN configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.2 SVC setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.3 Switch and zoning configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.3.1 Additional zoning considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.4 AIX-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.4.1 Configuring the AIX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.4.2 Support information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
8.4.3 Host adapter configuration settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8.4.4 SDD installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.4.5 Discovering the assigned VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.4.6 Using SDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.4.7 Creating and preparing volumes for use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
8.4.8 Expanding an AIX volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
8.4.9 Removing an SVC volume on AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
8.4.10 Running SVC commands from an AIX host system . . . . . . . . . . . . . . . . . . . . . 181
8.5 Windows-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
8.5.1 Configuring Windows 2000 and Windows 2003 hosts . . . . . . . . . . . . . . . . . . . . 182
8.5.2 Support information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
8.5.3 Host adapter installation and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
8.5.4 SDD installation on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
8.5.5 Windows 2003 and MPIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
8.5.6 Subsystem Device Driver Device Specific Module (SDDDSM) for SVC. . . . . . . 184
8.5.7 Discovering the assigned VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
8.5.8 Expanding a Windows 2000/2003 volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
8.5.9 Removing a disk on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
8.5.10 Using SDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
8.5.11 Running an SVC command line (CLI) from a Windows host system . . . . . . . . 199
8.6 Linux (on Intel) specific information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
8.6.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
8.6.2 Support information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
8.6.3 Host adapter configuration settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
8.6.4 Discovering the assigned VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
8.6.5 Using SDD on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
8.6.6 Creating and preparing volumes for use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

8.7 Sun Solaris support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
8.7.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 207
8.7.2 Multipath solutions supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
8.7.3 SDD dynamic pathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
8.8 HP-UX support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
8.9 VMware support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
8.10 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

Chapter 9. SVC configuration and administration using the CLI . . . . . . . . . . . . . . . . 211


9.1 Managing the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
9.1.1 Organizing on-screen content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
9.1.2 Viewing cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
9.1.3 Maintaining passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
9.1.4 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
9.1.5 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
9.1.6 Starting a statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
9.1.7 Stopping a statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
9.1.8 Audit Log commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
9.1.9 Status of discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
9.1.10 Status of copy operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
9.1.11 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
9.2 Working with nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
9.2.1 I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
9.2.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
9.3 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
9.3.1 Disk controller systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
9.3.2 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
9.3.3 Managed Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
9.4 Working with virtual disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
9.4.1 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
9.4.2 Virtual disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
9.4.3 Tracing a host disk back to its source physical disk . . . . . . . . . . . . . . . . . . . . . . 254
9.5 Managing copy services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
9.6 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
9.6.1 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
9.6.2 Running maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
9.6.3 Setting up error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
9.6.4 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
9.6.5 Setting features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
9.6.6 Viewing the feature log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
9.7 SVC cluster configuration backup and recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
9.7.1 Backing up the SVC cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
9.7.2 Restoring the SVC cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
9.7.3 Deleting configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
9.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
9.9 T3 recovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
9.10 Scripting and its usage under CLI for SVC task automation . . . . . . . . . . . . . . . . . . . 274

Chapter 10. SVC configuration and administration using the GUI. . . . . . . . . . . . . . . 275
10.1 Managing the cluster using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
10.1.1 Organizing on-screen content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
10.1.2 Viewing cluster properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
10.1.3 Maintaining passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283

10.1.4 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
10.1.5 Setting the cluster time zone and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
10.1.6 Starting the statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
10.1.7 Stopping the statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
10.1.8 Shutting down a cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
10.2 Working with nodes using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
10.2.1 I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
10.2.2 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
10.3 Viewing progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
10.4 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
10.4.1 Disk controller systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
10.4.2 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
10.4.3 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
10.4.4 Managed Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
10.5 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
10.5.1 Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
10.5.2 Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
10.6 Working with virtual disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
10.6.1 Using the Virtual Disks panel for VDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
10.6.2 Showing VDisks mapped to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10.7 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.8 Service and maintenance using the GUI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.8.1 Upgrading software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.8.2 Running maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
10.8.3 Setting error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
10.8.4 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
10.8.5 Setting features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
10.8.6 Viewing the feature log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
10.8.7 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
10.9 Backing up the SVC configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
10.9.1 Backup procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
10.9.2 Restoring the SVC configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
10.9.3 Deleting the configuration backup files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380

Chapter 11. Copy Services: FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383


11.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
11.1.1 How it works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
11.1.2 Practical uses for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
11.1.3 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
11.1.4 Consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11.1.5 FlashCopy indirection layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
11.1.6 FlashCopy rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
11.1.7 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
11.1.8 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
11.1.9 Background copy rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
11.1.10 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
11.1.11 Metadata management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
11.1.12 I/O handling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
11.1.13 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
11.1.14 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
11.1.15 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
11.2 FlashCopy commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
11.2.1 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405

11.2.2 Modifying the mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
11.2.3 Deleting the mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
11.2.4 Preparing (pre-triggering) the FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . 406
11.2.5 Preparing (pre-triggering) the FlashCopy consistency group . . . . . . . . . . . . . . 406
11.2.6 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
11.2.7 Stopping the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
11.2.8 Stopping the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
11.2.9 Creating the FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
11.2.10 Modifying the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . 408
11.2.11 Deleting the FlashCopy consistency group. . . . . . . . . . . . . . . . . . . . . . . . . . . 408
11.3 FlashCopy scenario using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
11.4 FlashCopy scenario using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412

Chapter 12. Copy Services: Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425


12.1 Metro Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
12.1.1 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
12.1.2 Supported methods for synchronizing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
12.1.3 The importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
12.1.4 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
12.1.5 SVC Metro Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
12.1.6 Metro Mirror states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
12.1.7 Metro Mirror configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
12.2 Metro Mirror commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
12.2.1 Listing available SVC cluster partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
12.2.2 Creating SVC cluster partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
12.2.3 Creating a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
12.2.4 Creating a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
12.2.5 Changing a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
12.2.6 Changing a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
12.2.7 Starting a Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
12.2.8 Stopping a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
12.2.9 Starting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
12.2.10 Stopping a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 443
12.2.11 Deleting a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
12.2.12 Deleting a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
12.2.13 Reversing a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
12.2.14 Reversing a Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . 445
12.2.15 Detailed states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
12.2.16 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
12.3 Metro Mirror scenario using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
12.4 Metro Mirror scenario using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
12.4.1 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461

Chapter 13. Copy Services: Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489


13.1 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
13.1.1 Supported methods for synchronizing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
13.1.2 The importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
13.1.3 Using Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
13.1.4 SVC Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
13.2 How Global Mirror works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
13.2.1 Intercluster communication and zoning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
13.2.2 Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
13.2.3 Global Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498

13.2.4 Global Mirror states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
13.2.5 Global Mirror configuration limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
13.3 Global Mirror commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
13.3.1 Listing available SVC cluster partners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
13.3.2 Creating an SVC cluster partnership. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
13.3.3 Creating a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
13.3.4 Creating a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
13.3.5 Changing a Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
13.3.6 Changing a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 507
13.3.7 Starting a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
13.3.8 Stopping a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
13.3.9 Starting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
13.3.10 Stopping a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 508
13.3.11 Deleting a Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
13.3.12 Deleting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 509
13.3.13 Reversing a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
13.3.14 Reversing a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . 509
13.3.15 Detailed states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
13.3.16 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
13.4 Global Mirror scenario using the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
13.4.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
13.4.2 Creating SVC partnership between ITSOSVC01 and ITSOSVC02 . . . . . . . . . 515
13.4.3 Changing link tolerance and cluster delay simulation . . . . . . . . . . . . . . . . . . . . 516
13.4.4 Executing Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
13.5 Global Mirror scenario using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
13.5.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
13.5.2 Creating an SVC partnership between ITSOSVC01 and ITSOSVC02 . . . . . . . 529
13.5.3 Creating a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
13.5.4 Creating the Global Mirror relationships for VDISK1 and VDISK2 . . . . . . . . . . 537
13.5.5 Creating the stand-alone Global Mirror relationship for VDISK3. . . . . . . . . . . . 541
13.5.6 Executing Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
13.5.7 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 546
13.5.8 Starting a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
13.5.9 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 550
13.5.10 Stopping a Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . 551
13.5.11 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . . . . . . . 553
13.5.12 Restarting a Global Mirror consistency group in the Idling state. . . . . . . . . . . 554
13.5.13 Switching copy direction for a Global Mirror relationship . . . . . . . . . . . . . . . . 555
13.5.14 Switching copy direction for a Global Mirror consistency group . . . . . . . . . . . 557

Chapter 14. Migration to and from the SAN Volume Controller . . . . . . . . . . . . . . . . . 559
14.1 Migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
14.2 Migration operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
14.2.1 Migrating multiple extents (within an MDG) . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
14.2.2 Migrating extents off an MDisk which is being deleted . . . . . . . . . . . . . . . . . . . 561
14.2.3 Migrating a VDisk between MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
14.2.4 Migrating the VDisk to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
14.2.5 Migrating a VDisk between I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
14.2.6 Monitoring the migration progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
14.3 Functional overview of migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
14.3.1 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
14.3.2 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
14.3.3 Migration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567

14.4 Migrating data from an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
14.4.1 Image mode VDisk migration concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
14.4.2 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
14.5 Data migration for Windows using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
14.5.1 Windows 2000 host system connected directly to the ESS . . . . . . . . . . . . . . . 572
14.5.2 SVC added between the Windows 2000 host system and the ESS . . . . . . . . . 573
14.5.3 Migrating the VDisk from image mode to managed mode . . . . . . . . . . . . . . . . 580
14.5.4 Migrating the VDisk from managed mode to image mode . . . . . . . . . . . . . . . . 583
14.5.5 Migrating the VDisk from image mode to image mode . . . . . . . . . . . . . . . . . . . 586
14.6 Migrating Linux SAN disks to SVC disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
14.6.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
14.6.2 Prepare your SVC to virtualize disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
14.6.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
14.6.4 Migrate the image mode VDisks to managed MDisks . . . . . . . . . . . . . . . . . . . 598
14.6.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
14.6.6 Migrate the VDisks to image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
14.6.7 Remove the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
14.7 Migrating ESX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
14.7.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
14.7.2 Prepare your SVC to virtualize disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
14.7.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
14.7.4 Migrate the image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
14.7.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
14.7.6 Migrate the managed VDisks to image mode VDisks . . . . . . . . . . . . . . . . . . . . 623
14.7.7 Remove the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
14.8 Migrating AIX SAN disks to SVC disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
14.8.1 Connecting the SVC to your SAN fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
14.8.2 Prepare your SVC to virtualize disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
14.8.3 Move the LUNs to the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
14.8.4 Migrate the image mode VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
14.8.5 Preparing to migrate from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
14.8.6 Migrate the managed VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
14.8.7 Remove the LUNs from the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642

Chapter 15. Master console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647


15.1 Hardware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
15.1.1 Example hardware configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
15.2 Management console software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
15.3 Installation planning information for the master console . . . . . . . . . . . . . . . . . . . . . . 651
15.4 Secure Shell overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
15.4.1 Uploading SSH public key(s) sample scenarios . . . . . . . . . . . . . . . . . . . . . . . . 652
15.5 Upgrading the Master Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
15.6 Call Home (service alert). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
15.7 Master console summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662

Appendix A. Copy services and open systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663


AIX specifics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
AIX and FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
AIX and Metro Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
Making updates to the LVM information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
Windows NT and 2000/2003 specifics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Windows NT and Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
Copy Services with Windows Volume Sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673

Windows 2000/2003 and Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676

Appendix B. DS4000 migration scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 689


Initial considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
Scenario 1: Total number of LUNs is less than maximum LUNs per partition . . . . . . . . . . 691
Scenario 2: Total number of LUNs is more than maximum LUNs per partition . . . . . . . . . 696

Appendix C. Scripting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703


Scripting structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
Automated VDisk creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
SVC tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
Scripting alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714

Appendix D. Node replacement and node upgrading procedure . . . . . . . . . . . . . . . . 715


Replacing a failed SVC node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
Prerequisites for replacing a failed node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
Replacement process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
Upgrading an SVC 4F2 node cluster to an 8F4 node cluster. . . . . . . . . . . . . . . . . . . . . . . 722
Prerequisites for upgrading a cluster from 4F2 to 8F4 nodes . . . . . . . . . . . . . . . . . . . . . . 722
Replacing the SVC 4F2 nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
Replacing the nodes within the I/O group by rezoning the SAN . . . . . . . . . . . . . . . 726
Replacing the nodes by rezoning and moving VDisks to new I/O group . . . . . . . . . . . 727

Appendix E. HP-UX 11i Metro Mirror using PVLinks . . . . . . . . . . . . . . . . . . . . . . . . . . 729

Appendix F. HP-UX 11i Metro Mirror with SDD vpath devices . . . . . . . . . . . . . . . . . 735


Summary of activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
Preparation for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
Start the Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739

Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
Other resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
Referenced Web sites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
How to get IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
IBM Redbooks collections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 744

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745

Figures

1-1 SNIA Block Aggregation Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3


1-2 IBM plan for block aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1-3 Conceptual diagram of the IBM SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . 5
1-4 SNIA file aggregation model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1-5 The IBM plan for file aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1-6 Conceptual diagram of the IBM SAN file system architecture . . . . . . . . . . . . . . . . . . . . 9
2-1 Extents being used to create a virtual disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2-2 The relationship between physical and virtual disks . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2-3 SAN Volume Controller logical view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2-4 Base software license diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2-5 FlashCopy storage license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2-6 MetroMirror/GlobalMirror intracluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2-7 MetroMirror/GlobalMirror intercluster relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3-1 SVC in its rack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3-2 Node uninterruptible power supply setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3-3 Sample rack layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3-4 Cable connection table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3-5 Master Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3-6 SVC physical topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3-7 SVC logical topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3-8 Simple two-node SVC high availability configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3-9 Host zoning example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3-10 Host zoning example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3-11 An example of name convention in dual SVC setup . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3-12 High availability SAN Volume Controller cluster in a two-site configuration . . . . . . . . 46
3-13 Disk subsystem shared . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3-14 Host connected to ESS and SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3-15 DS4000 supported configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3-16 Disk relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3-17 Simple view of block virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3-18 Remote copy scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3-19 FlashCopy scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3-20 Bad scenario for quorum disk and cluster co-location . . . . . . . . . . . . . . . . . . . . . . . . 59
3-21 Correct HA scenario for quorum disk and cluster co-location . . . . . . . . . . . . . . . . . . . 60
3-22 Storage Allocation window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3-23 Configure Host Adapter Ports window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3-24 ESS Modify Host System window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3-25 Volume assignments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3-26 Viewing the two paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3-27 Where to find the Storage Subsystem Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3-28 The Storage Subsystem Profile showing the firmware version. . . . . . . . . . . . . . . . . . 76
3-29 DS4000 mappings view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3-30 Array 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3-31 Array 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3-32 Host type for storage partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3-33 Port mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4-1 Distribute I/O load evenly among SVC node ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4-2 Distributed I/O load always guaranteed among SVC node ports . . . . . . . . . . . . . . . . . 85
4-3 I/O queue depth algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
5-1 PuTTY key generator GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5-2 PuTTY random key generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
5-3 Saving the public key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5-4 Saving the private key without passphrase. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5-5 SVC 4F2 Node front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5-6 SVC 8F2 Node and SVC 8F4 Node front and operator panel . . . . . . . . . . . . . . . . . . 104
5-7 GUI signon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5-8 Change default password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
5-9 Adding the SVC cluster for management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5-10 Adding Clusters panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5-11 Security Alert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
5-12 SVC cluster user ID and password signon window. . . . . . . . . . . . . . . . . . . . . . . . . . 110
5-13 Create New Cluster wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
5-14 Cluster details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5-15 Create New Cluster Progress page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5-16 Error Notification Settings configuration page. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5-17 Featurization Settings Configuration page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5-18 Add SSH Public Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
5-19 Adding SSH admin key successful . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5-20 Closing page after successful cluster creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
5-21 Cluster selection screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5-22 Invalid SSH fingerprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5-23 Maintaining SSH Keys panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5-24 Using the Viewing Clusters panel to Launch the SAN Volume Controller Application . . 118
5-25 PuTTY Configuration window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5-26 PuTTY SSH Connection Configuration window . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
5-27 PuTTY Configuration: Private key location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
5-28 PuTTY Configuration: Saving a session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
5-29 Open PuTTY command line session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5-30 PuTTY Security Alert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
7-1 GUI signon page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
7-2 GUI Welcome page. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
7-3 Selecting to launch the SAN Volume Controller application . . . . . . . . . . . . . . . . . . . . 141
7-4 SVC Console Welcome page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7-5 Viewing Nodes panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7-6 Adding a Node to a Cluster panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7-7 Node added successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
7-8 Security Alert window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
7-9 Certificate Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
7-10 Certificate Import Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7-11 Certificate Store panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7-12 Root Certificate Store . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
7-13 Certificate Import successful . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
7-14 Cluster Date and Time Settings panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
7-15 Viewing Hosts panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7-16 Creating Hosts panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
7-17 Host added successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7-18 Discover MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
7-19 Selecting the option to create an MDisk group . . . . . . . . . . . . . . . . . . . . . . . . . 153
7-20 Name the group and select the managed disks panel . . . . . . . . . . . . . . . . . . . . . . . 154
7-21 Select Extent Size panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
7-22 Verify MDG wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
7-23 MDG added successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
7-24 Viewing Virtual Disks panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
7-25 Choosing an I/O group and a MDG panel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
7-26 Select the Type of VDisk panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
7-27 Name the Virtual Disk(s) panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
7-28 Select Attributes for a VDisk panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
7-29 Verify VDisk Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
7-30 VDisk creation success . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
7-31 List of all created VDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
7-32 Assigning a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
7-33 Creating VDisk-to-Host Mappings panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
7-34 VDisk to host mapping successful . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
8-1 SAN Volume controller setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8-2 Zoning for W2K3_1_SVC1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
8-3 QLogic FC Host Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
8-4 Windows 2003 host system before adding a new volume from SVC . . . . . . . . . . . . . 185
8-5 Windows 2003 host system with three new volumes from SVC . . . . . . . . . . . . . . 187
8-6 Number of devices found related to the number of paths . . . . . . . . . . . . . . . . . . . . . . 188
8-7 Volume size before expansion on Windows 2003, disk manager view. . . . . . . . . . . . 190
8-8 Volume size before expansion on Windows 2003, disk properties view. . . . . . . . . . . 191
8-9 Expanded volume in disk manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
8-10 Disk manager after expansion of Disk1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
8-11 The new capacity of Disk1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
8-12 The Disk Manager before removing the disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
8-13 Disk Manager showing the remaining disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
8-14 Datapath query commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
8-15 SDD query information using Web browser at <Win2k_1 ip add>:20001 . . . . . . . . . 199
9-1 Starting PuTTY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
9-2 SDD command example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
10-1 Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
10-2 Viewing Virtual Disks: Filtered view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
10-3 Additional filter icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
10-4 Show filter row . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
10-5 Filter option on Name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
10-6 Filtered on Name containing the word copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
10-7 Clear all filter options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
10-8 Selecting Edit Sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
10-9 Sorting criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
10-10 Selecting to clear all sorts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
10-11 Online help using the i icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
10-12 Online help using the ? icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
10-13 View Cluster Properties: General properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
10-14 Maintain Passwords panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
10-15 Modifying passwords successful update messages . . . . . . . . . . . . . . . . . . . . . . . . 284
10-16 Modify IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
10-17 Cluster Date and Time Settings panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
10-18 Cluster Date and Time Settings update confirmation . . . . . . . . . . . . . . . . . . . . . . . 286
10-19 Starting collection of statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
10-20 Verifying that statistics collection is on . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
10-21 Stopping the collection of statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
10-22 Verifying that statistics collection is off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
10-23 Shutting down the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
10-24 Viewing Input/Output Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
10-25 Renaming the I/O group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
10-26 Viewing Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
10-27 General node details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
10-28 Adding a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
10-29 Add node Refresh button . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
10-30 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
10-31 Deleting node from a cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
10-32 Delete node refresh button . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
10-33 Shutting down a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
10-34 Showing MDisk Removal Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
10-35 Disk controller systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
10-36 Viewing general details about a disk controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
10-37 Renaming a controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
10-38 Discovery status view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
10-39 Viewing Managed Disks panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
10-40 Managed disk details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
10-41 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
10-42 Newly discovered managed disks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
10-43 Setting a quorum disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
10-44 Viewing Managed Disks: Excluding an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
10-45 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
10-46 Viewing Managed Disks: Verifying the included MDisk . . . . . . . . . . . . . . . . . . . . . 304
10-47 Show MDisk Group select. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
10-48 Show MDisk Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
10-49 View MDG details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
10-50 Show VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
10-51 VDisk list from a selected MDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
10-52 Create VDisk in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
10-53 Set attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
10-54 MDG name entry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
10-55 Select extent size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
10-56 Verify MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
10-57 Choose an I/O group and an MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
10-58 Verify imaged VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
10-59 Viewing MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
10-60 MDG details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
10-61 Name the group and select managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
10-62 Select the extent size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
10-63 Verifying the information about the MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
10-64 Renaming an MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
10-65 Renaming an MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
10-66 Deleting an MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
10-67 Confirming forced deletion of an MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
10-68 Adding an MDisk to an existing MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
10-69 Adding MDisks to an MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
10-70 Viewing MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
10-71 Removing MDisks from an MDG. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
10-72 Confirming forced deletion of MDisk from MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
10-73 View MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
10-74 Viewing MDisks in an MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
10-75 View MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
10-76 VDisks belonging to selected MDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
10-77 Viewing hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
10-78 Host details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
10-79 Host port details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
10-80 Host mapped I/O groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
10-81 Create a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
10-82 Creating a new host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
10-83 Create host results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
10-84 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
10-85 Modifying a host (choosing a new name) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
10-86 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
10-87 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
10-88 Forcing a deletion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
10-89 Add ports to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
10-90 Adding ports to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
10-91 Delete ports from a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
10-92 Deleting ports from a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
10-93 Port delete confirmation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
10-94 Viewing Fabrics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
10-95 Viewing Virtual Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
10-96 VDisk details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
10-97 Creating a VDisk wizard: Choose an I/O group and an MDG . . . . . . . . . . . . . . . . . 335
10-98 Creating a VDisk wizard: Select type of VDisk and number of VDisk . . . . . . . . . . . 336
10-99 Creating a VDisk wizard: Name the VDisks panel . . . . . . . . . . . . . . . . . . . . . . . . . 337
10-100 Creating a VDisk wizard: Select Attributes for Striped-mode VDisk . . . . . . . . . . . 338
10-101 Creating a VDisk wizard: Select attributes for sequential mode VDisks . . . . . . . . 339
10-102 Creating a VDisk wizard: Verify VDisk Striped type . . . . . . . . . . . . . . . . . . . . . . . 340
10-103 Creating a VDisk wizard: Verify VDisk sequential type . . . . . . . . . . . . . . . . . . . . . 340
10-104 Creating a VDisk wizard: final result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
10-105 Deleting a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
10-106 Deleting a VDisk: Forcing a deletion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
10-107 Deleting a VDisk-to-host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
10-108 Expanding a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
10-109 Mapping a VDisk to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
10-110 Progress of VDisk to host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
10-111 Modifying a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
10-112 Migrating a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
10-113 Migrate to image mode VDisk wizard: Select target MDisk . . . . . . . . . . . . . . . . . 349
10-114 Migrate to image mode VDisk wizard: Select MDG . . . . . . . . . . . . . . . . . . . . . . . 349
10-115 Migrate to image mode VDisk wizard: Select Threads . . . . . . . . . . . . . . . . . . . . . 350
10-116 Migrate to image mode VDisk wizard: Verify migration Attributes . . . . . . . . . . . . 350
10-117 Migrate to image mode VDisk wizard: Progress of Migration . . . . . . . . . . . . . . . . 351
10-118 Shrinking a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
10-119 Showing MDisks used by a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
10-120 Showing an MDG for a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
10-121 Show Host to VDisk mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
10-122 Select show Capacity Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10-123 Show capacity information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
10-124 VDisk to Host Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
10-125 Deleting VDisk to Host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
10-126 Service and Maintenance functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10-127 Cluster Software Upgrade Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
10-128 Using datapath query commands to check all paths are online . . . . . . . . . . . . . . 358
10-129 Getting software dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
10-130 Downloading software dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
10-131 Update Software panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
10-132 Software Upgrade (file upload) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
10-133 File upload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
10-134 Software Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
10-135 Confirm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
10-136 Software Upgrade Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
10-137 Denial of a task command during the software update . . . . . . . . . . . . . . . . . . . . . 362
10-138 Upgrade complete. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
10-139 Maintenance Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
10-140 Maintenance error log with unfixed errors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
10-141 Maintenance: error code description. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
10-142 Maintenance procedures: fixing Stage 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
10-143 Maintenance procedure: fixing Stage 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
10-144 Maintenance procedure: fixing Stage 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
10-145 Maintenance procedures: fixed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
10-146 Maintenance procedures: close . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
10-147 Setting error notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
10-148 Set the SNMP settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
10-149 Current Error Notification settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
10-150 Analyzing the error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
10-151 Analyzing Error Log: Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
10-152 Analyzing Error Log: Detailed error analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
10-153 Setting features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10-154 License agreement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10-155 Featurization settings update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
10-156 Feature Log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
10-157 List Dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
10-158 List Dumps from the partner node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
10-159 Copy dump files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
10-160 List Dumps: Error Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
10-161 List Dumps: Error log detail. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
10-162 Confirm Delete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
10-163 Backing up a Cluster Configuration data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
10-164 Configuration backup successful message and warnings . . . . . . . . . . . . . . . . . . 379
10-165 Deleting a cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
10-166 Deleting a Cluster Configuration confirmation message . . . . . . . . . . . . . . . . . . . . 381
11-1 Implementation of SVC FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
11-2 FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
11-3 FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
11-4 I/O processing with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
11-5 Logical placement of the FlashCopy indirection layer . . . . . . . . . . . . . . . . . . . . . . . 393
11-6 FlashCopy mapping state diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
11-7 FlashCopy scenario using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
11-8 Select FlashCopy Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
11-9 FlashCopy consistency group name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
11-10 Mapping results for created consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
11-11 Viewing FlashCopy consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
11-12 Create FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
11-13 Setting the properties for the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . 416
11-14 Filtering source VDisk candidates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
11-15 Selecting source VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
11-16 Selecting target VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
11-17 Verify FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
11-18 Viewing FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
11-19 Viewing all created FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
11-20 Prepare FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
11-21 Start FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
11-22 Confirm start of FlashCopy consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
11-23 Viewing FlashCopy consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
11-24 Viewing the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
11-25 FlashCopy consistency group, Idle or Copied . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
11-26 Selecting a single mapping to be started. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
11-27 Starting a single FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
11-28 Viewing FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
12-1 Write on VDisk in Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
12-2 Dependent writes for a database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
12-3 Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
12-4 Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
12-5 Metro Mirror mapping state diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
12-6 Metro Mirror scenario using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
12-7 Metro Mirror scenario using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
12-8 Selecting Metro Mirror Cluster Partnership on ITSOSVC01 . . . . . . . . . . . . . . . . . . . 462
12-9 Confirming that a Metro Mirror partnership is to be created . . . . . . . . . . . . . . . . . . . 463
12-10 Selecting SVC partner and specifying bandwidth for background copy . . . . . . . . . 463
12-11 Metro Mirror cluster partnership is partially configured . . . . . . . . . . . . . . . . . . . . . . 464
12-12 Selecting SVC partner and specifying bandwidth for background copy . . . . . . 464
12-13 Metro Mirror cluster partnership is fully configured . . . . . . . . . . . . . . . . . . . . . . . . . 464
12-14 Selecting Metro Mirror Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
12-15 Create a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
12-16 Specifying consistency group name and type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
12-17 There are no defined Metro Mirror relationships to be added . . . . . . . . . . . . . . . . . 466
12-18 Verifying the settings for the Metro Mirror consistency group . . . . . . . . . . . . . . . . . 467
12-19 Viewing Metro Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
12-20 Selecting Metro Mirror Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
12-21 Create a relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
12-22 Naming the Metro Mirror relationship and selecting the auxiliary cluster . . . . . . . . 469
12-23 Defining filter for master VDisk candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
12-24 Selecting the master VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
12-25 Selecting the auxiliary VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
12-26 Selecting the relationship to be part of a consistency group. . . . . . . . . . . . . . . . . . 471
12-27 Verifying the Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
12-28 Viewing Metro Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
12-29 Create a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
12-30 Specifying the Metro Mirror relationship name and auxiliary cluster. . . . . . . . . . . . 473
12-31 Filtering VDisk candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
12-32 Selecting the master VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
12-33 Selecting the auxiliary VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
12-34 Selecting options for the Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . 475
12-35 Verifying the Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
12-36 Viewing Metro Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
12-37 Starting a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
12-38 Selecting options and starting the copy process. . . . . . . . . . . . . . . . . . . . . . . . . . . 477
12-39 Viewing Metro Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
12-40 Selecting Metro Mirror Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
12-41 Selecting start copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
12-42 Selecting options and starting the copy process. . . . . . . . . . . . . . . . . . . . . . . . . . . 479
12-43 Viewing Metro Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
12-44 Viewing Metro Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
12-45 View Metro Mirror progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
12-46 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 481
12-47 Enable access to the secondary VDisk while stopping the relationship . . . . . . . . . 481
12-48 Viewing the Metro Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
12-49 Selecting the Metro Mirror consistency group to be stopped . . . . . . . . . . . . . . . . . 482
12-50 Stopping the consistency group, without enabling access to the secondary VDisk 482
12-51 Viewing Metro Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
12-52 Stopping the Metro Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
12-53 Enabling access to the secondary VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
12-54 Viewing the Metro Mirror consistency group in the Idling state . . . . . . . . . . . . 483
12-55 Starting a stand-alone Metro Mirror relationship in the Idling state . . . . . . . . . . . . 484
12-56 Starting the copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
12-57 Viewing the Metro Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
12-58 Starting the copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
12-59 Starting the copy process for the consistency group . . . . . . . . . . . . . . . . . . . . . . . 486
12-60 Viewing Metro Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
12-61 Selecting the relationship for which the copy direction is to be changed . . . . . . . . 487
12-62 Selecting the primary VDisk to switch the copy direction . . . . . . . . . . . . . . . . . . . . 487
12-63 Viewing Metro Mirror relationship, after changing the copy direction . . . . . . . . . . . 487
12-64 Selecting the consistency group for which the copy direction is to be changed . . . 488
12-65 Selecting the primary VDisk to switch the copy direction . . . . . . . . . . . . . . . . . . . . 488
13-1 Write on VDisk in Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
13-2 Write on VDisk in Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
13-3 Dependent writes for a database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
13-4 Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
13-5 Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
13-6 Global Mirror mapping state diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
13-7 Global Mirror scenario using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
13-8 Global Mirror scenario using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
13-9 Selecting Global Mirror Cluster Partnership on ITSOSVC01 . . . . . . . . . . . . . . . . . . 529
13-10 Confirming that a Global Mirror partnership is to be created . . . . . . . . . . . . . . . . . 529
13-11 Selecting SVC partner and specifying bandwidth for background copy . . . . . . . . . 530
13-12 Global Mirror cluster partnership is partially configured . . . . . . . . . . . . . . . . . . . . . 530
13-13 Selecting SVC partner and specifying bandwidth for background copy . . . . . . 531
13-14 Global Mirror cluster partnership is fully configured . . . . . . . . . . . . . . . . . . . . . . . . 531
13-15 Selecting SVC partner and specifying bandwidth for background copy . . . . . . 533
13-16 Global Mirror cluster partnership is fully configured . . . . . . . . . . . . . . . . . . . . . . . . 533
13-17 Selecting Global Mirror Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
13-18 Create a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
13-19 Specifying consistency group name and type. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
13-20 There are no defined Global Mirror relationships to be added . . . . . . . . . . . . . . . . 535
13-21 Verifying the settings for the Global Mirror consistency group . . . . . . . . . . . . . . . . 536
13-22 Viewing Global Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
13-23 Selecting Global Mirror Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
13-24 Create a relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
13-25 Naming the Global Mirror relationship and selecting the auxiliary cluster. . . . . . . . 538
13-26 Defining filter for master VDisk candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
13-27 Selecting the master VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
13-28 Selecting the auxiliary VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
13-29 Selecting the relationship to be part of a consistency group. . . . . . . . . . . . . . . . . . 540
13-30 Verifying the Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
13-31 Viewing Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
13-32 Viewing the Global Mirror relationships after creating GM_REL2. . . . . . . . . . . . . . 541
13-33 Create a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
13-34 Specifying the Global Mirror relationship name and auxiliary cluster . . . . . . . . . . . 542
13-35 Filtering VDisk candidates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
13-36 Selecting the master VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
13-37 Selecting the auxiliary VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
13-38 Selecting options for the Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . 544
13-39 Verifying the Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
13-40 Viewing Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
13-41 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . 546
13-42 Selecting options and starting the copy process. . . . . . . . . . . . . . . . . . . . . . . . . . . 546
13-43 Viewing Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
13-44 Selecting Global Mirror Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
13-45 Selecting start copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
13-46 Selecting options and starting the copy process. . . . . . . . . . . . . . . . . . . . . . . . . . . 548
13-47 Viewing Global Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
13-48 Viewing Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
13-49 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 550
13-50 Enable access to the secondary VDisk while stopping the relationship . . . . . . . . . 550
13-51 Viewing the Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
13-52 Selecting the Global Mirror consistency group to be stopped. . . . . . . . . . . . . . . . . 551
13-53 Stopping the consistency group, without enabling access to the secondary VDisk 551
13-54 Viewing Global Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
13-55 Selecting the Global Mirror consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
13-56 Enabling access to the secondary VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
13-57 Viewing the Global Mirror consistency group in the Idling state . . . . . . . . . . . . 552
13-58 Starting a stand-alone Global Mirror relationship in the Idling state. . . . . . . . . . . . 553
13-59 Starting the copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
13-60 Viewing the Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
13-61 Starting the copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
13-62 Starting the copy process for the consistency group . . . . . . . . . . . . . . . . . . . . . . . 555
13-63 Viewing Global Mirror consistency groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
13-64 Selecting the relationship for which the copy direction is to be changed . . . . . . . . 556
13-65 Selecting the primary VDisk to switch the copy direction . . . . . . . . . . . . . . . . . . . . 556
13-66 Viewing Global Mirror relationship, after changing the copy direction . . . . . . . . . . 556
13-67 Selecting the consistency group for which the copy direction is to be changed . . . 557
13-68 Selecting the primary VDisk to switch the copy direction . . . . . . . . . . . . . . . . . . . . 557
13-69 Viewing Global Mirror consistency groups, after changing the copy direction . . . . 558
14-1 VDisk migration between MDGs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
14-2 Migrating an extent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
14-3 Different states of a VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
14-4 Disk management: One volume from ESS with label S: . . . . . . . . . . . . . . . . . . . . . . 572
14-5 Drive S: from ESS with SDD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572
14-6 Volume properties of Drive S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
14-7 Files on volume S: (Volume from ESS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
14-8 Viewing VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
14-9 Create VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
14-10 Select the Type of VDisk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
14-11 Select Attributes for Image-mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
14-12 Verify VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
14-13 Viewing VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
14-14 Viewing Managed Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
14-15 Viewing VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
14-16 Creating a VDisk-to-Host mapping winimageVDisk1 . . . . . . . . . . . . . . . . . . . . . . . 577
14-17 The volume S: Volume_from_ESS is online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
14-18 The volume S: Volume_from_ESS is online and is a 2145 SDD Disk device . . 578
14-19 The volume S: Volume_from_ESS is online with the same data . . . . . . . . . . . . . . 579
14-20 The volume S: Volume_from_ESS is online and has four paths to SVC 2145 . . 579
14-21 Viewing VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
14-22 Migrating VDisks-winimagevdisk1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
14-23 View Managed Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
14-24 Viewing VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
14-25 Viewing MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
14-26 VDisk is now in Migrated_VDisks instead of ess_mdiskgrp0 . . . . . . . . . . . . . . . . . 581
14-27 Details for VDisk winimagevdisk1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
14-28 The MDGs after complete migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
14-29 The MDisks after migration is complete . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
14-30 Select VDisk and start migration to an image mode VDisk . . . . . . . . . . . . . . . 583
14-31 Select the target MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
14-32 Select MDG. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
14-33 Select the threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
14-34 Verify migration attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
14-35 Progress panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
14-36 Viewing VDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
14-37 VDisk details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
14-38 Migrate to an image mode VDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
14-39 Select the target MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
14-40 Select MDG. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
14-41 Select the threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
14-42 Verify migration attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
14-43 Progress on the migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
14-44 Linux SAN environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
14-45 SAN environment with SVC attached . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
14-46 Obtaining the disk serial numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
14-47 Environment with SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
14-48 ESX server SAN environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
14-49 SAN environment with SVC attached . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
14-50 Obtain your WWN using the VMware Management Console . . . . . . . . . . . . . . . . . 611
14-51 Obtaining the disk serial numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
14-52 VMware ESX Disks and LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
14-53 ESX Guest properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
14-54 Suspend VMware guest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
14-55 Rescan your SAN and discover the LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
14-56 ESX SVC SAN Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
14-57 Rescan your SAN and discover the LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
14-58 AIX SAN environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
14-59 SAN environment with SVC attached . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 631
14-60 Obtaining the disk serial numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
14-61 Environment with SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
15-1 2 SVC cluster nodes and UPSs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
15-2 Master console screen and keyboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
15-3 SSH client/server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
15-4 Communication interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
15-5 Inserting the Upgrade CDROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 654
15-6 Launching the upgrade wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655

15-7 Product Installation Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
15-8 Installation Confirmation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
15-9 Installation progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
15-10 Installation finished . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
15-11 Launching the upgraded SVC Master Console. . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
15-12 IBM Director Discovery Preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
15-13 Network addresses for SNMP discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
15-14 Director Action Plan Builder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
15-15 Updating the 2145 event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661
15-16 Customize window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
A-1 lspv after pv=clear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
A-2 Recreated FlashCopy target volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 668
A-3 Target file system stanza . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
A-4 Windows server before adding FlashCopy target disk . . . . . . . . . . . . . . . . . . . . . . . . 677
A-5 Discovered FlashCopy target disk on Windows server . . . . . . . . . . . . . . . . . . . . . . . 678
A-6 Choose disk and assign drive letter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
A-7 The target disk is now ready on the Windows server with drive letter E: . . . . . . . . . . 679
A-8 The data is both on source and target disk on the same Windows server . . . . . . . . . 680
A-9 Disk configuration before adding dynamic FlashCopy disks . . . . . . . . . . . . . . . . . . . 683
A-10 Dynamic disks added to Disk Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
A-11 Import Foreign Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
A-12 Import two foreign disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
A-13 Disks that are going to be imported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
A-14 Spanned volume to be imported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
A-15 The FlashCopied dynamic disks are online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
A-16 The data is ready to use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
B-1 Initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
B-2 Partition 3 created . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
B-3 Storage moved from partition 2 to partition 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 693
B-4 Storage moved from partition 0 and 1 to partition 3 . . . . . . . . . . . . . . . . . . . . . . . . . . 694
B-5 All storage under the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
B-6 Scenario 2 initial configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
B-7 Second DS4000 added . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
B-8 Partitions created on DS4000-2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
B-9 Storage for host C migrated to DS4000-2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
B-10 All storage under control of the SVC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
C-1 Scripting structure for SVC task automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
C-2 Using a predefined SSH connection with plink. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
C-3 VDiskScript.bat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.

Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L™, AIX®, DB2®, DS4000™, DS6000™, DS8000™, eServer™, Enterprise Storage Server®, FlashCopy®, FICON®, IBM®, OS/2®, Redbooks™, Redbooks (logo)™, RETAIN®, Storage Tank™, System Storage™, System x™, Tivoli®, WebSphere®

The following terms are trademarks of other companies:

Java, JRE, J2SE, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in
the United States, other countries, or both.

Internet Explorer, Microsoft, Windows NT, Windows Server, Windows, and the Windows logo are trademarks
of Microsoft Corporation in the United States, other countries, or both.

Intel, Itanium, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.



Preface

This IBM® Redbook is a detailed technical guide to the IBM System Storage™ SAN Volume
Controller (SVC), a virtualization appliance solution that maps virtualized volumes visible to
hosts and applications to physical volumes on storage devices. Each server within the SAN
has its own set of virtual storage addresses which are mapped to a physical address. If the
physical addresses change, the server continues running using the same virtual addresses
that it had before. This means that volumes or storage can be added or moved while the
server is still running. The IBM virtualization technology improves management of information
at the “block” level in a network, enabling applications and servers to share storage devices
on a network.

Successful businesses require real-time responsiveness to change, whether because of new


customer needs, changes in the supply chain, unexpected competitive moves, external
threats, or changes in the economic climate. Rapid response to change requires an IT
infrastructure that can turn information into a competitive advantage; the IT infrastructure
must provide the maximum benefit at an affordable cost, and must have the flexibility to
support changes in business processes. An on demand operating environment provides a
cost effective and flexible IT environment. With information at the heart of competitiveness,
storage becomes an ever more critical component of an on demand operating environment.

The IBM System Storage strategy addresses some of the most pressing needs currently
facing Chief Information Officers (CIO) and IT managers. As part of its strategy, IBM intends
to deliver industry leading technologies that will help dramatically reduce the total cost of
ownership (TCO) for storage, and help turn fixed costs into variable costs that scale with
business volume.

Success in the on demand world will depend on the ability to leverage information
technology. A greater dependence on information means a greater dependence on storage.
What distinguishes an on demand business is the ability to quickly sense and rapidly respond
to a dynamic marketplace; to do this, there are challenges that an on demand business must
overcome.

At the business level, customers are faced with three major storage challenges:
򐂰 Managing storage growth: Storage needs continue to grow at over 50% per year.
Managing storage is becoming more complex than ever, because we now have to deal
with multiple server platforms and different operating systems, which may be connected to
a storage area network (SAN) with multiple and diverse storage platforms.
򐂰 Increasing complexity: Although the declining cost of storage per megabyte makes it
attractive to add additional disks, the increasing complexity of managing this storage
results in over-utilized staff and under-utilized IT resources. Combined with the
shortage of skilled storage administrators, this complexity can add significant cost and
introduce risk to storage management.
򐂰 Maintaining availability: The added complexity of 24x7 environments significantly
reduces, for example, the efficiency of conducting routine maintenance, scheduling
backups, data migration, and introducing new software and hardware. This problem is
compounded by the fact that as availability increases, so does the cost of providing it.

These challenges still exist, although large SANs do offer desirable and tangible benefits, for
example, better connectivity, improved performance, distance flexibility, and scalability.
However, even these benefits may be outweighed by the added complexity that they
introduce.

As an example, large enterprise SANs often contain different types of storage devices.
These differences could be in the types of disk deployed, their level of performance, or the
functionality provided, such as RAID or mirroring. Often, customers have different vendor
storage devices as the result of mergers or consolidations. The result, however, is that
storage and SAN administrators need to configure storage to servers, and then keep track of
which servers own or have access to that storage. The storage administrative tasks can
become daunting as the SAN grows and as the storage administrators manually attempt to
manage the SAN.

Furthermore, the complexity of having different file systems in the same SAN requires that
storage administrators know how to administer each client operating system (OS) platform.
The management interfaces for each may be different, since there is no common standard
that all vendors adhere to. Lastly, since the file systems are tied to each of the servers,
storage management functions potentially have to be run on hundreds of servers. It is easy to
see why manageability and interoperability are the top areas for concern, especially in a SAN
where the number of possible storage and OS platform permutations is considerable.

These challenges are at odds with the commonly held belief that storage is decreasing in cost
per megabyte. It is clear that the cost of managing storage is greater than the initial purchase
price. A strategy is needed to address storage manageability, while at the same time
addressing the need for interoperability. This strategy is the IBM System Storage Open
Software Family.

This strategy represents the next stage in the evolution of storage networking. It affords you
the opportunity to fundamentally improve your company’s effectiveness and efficiency in
managing its storage resources. With the IBM SAN virtualization products, you are witnessing
IBM deliver on its continued promise to provide superior on demand solutions that will assist
in driving down costs, and reduce TCO.

The team that wrote this redbook


This redbook was produced by a team of specialists from around the world working at the
ITSO in the San Jose Center, San Jose, California.

Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International
Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he
worked in the IBM Technical Support Center, providing Level 2 support for IBM storage
products. Jon has 20 years of experience in storage software and management, services,
and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist.

Thorsten Hoss is a Product Field Engineer for the SAN Volume Controller and SAN File
System working for IBM Germany in Mainz. He joined IBM in 2000 after finishing his Electrical
Engineering degree at the Fachhochschule Wiesbaden - University of Applied Sciences,
Germany. He also works as a Level 2 support engineer for SAN and IBM storage products.
Thorsten is an IBM SAN Certified Specialist in Networking and Virtualization Architecture.

Andy McManus is an Advisory IT Specialist working in the Integrated Technical Services
department of IBM UK. He has been at IBM for 6 years and has worked within the SAN and
Storage environment since 1997. He is an IBM SAN Certified Specialist, a Brocade Certified
Fabric Professional, and has many years of experience with UNIX® Operating Systems,
Multipathing Device Drivers, and Multivendor Disk and Tape Solutions. Andy currently
provides both Level 1 and Level 2 Hardware and Software support to the UK and EMEA for
much of the IBM System Storage product range.

Massimo Rosati is a Senior IT Specialist working for IBM Global Technology Services in
Italy. He has 8 years of experience in implementing and designing storage solutions in the
Open Systems Environment. He joined IBM in 1985, and his areas of expertise include SAN,
Enterprise Disk Solutions and Business Continuity Solutions. He is an IBM Certified
Specialist in Systems Products Services.

We extend our thanks to the following people for their contributions to this project.

There are many people that contributed to this book. In particular, we thank the development
and PFE teams in Hursley. Matt Smith was also instrumental in moving any issues along and
ensuring that they maintained a high profile.

In particular, we thank the previous authors of this redbook:


Matt Amanat
Angelo Bernasconi
Steve Cody
Sean Crawford
Deon George
Amarnath Hiriyannappa
Philippe Jachimczyk
Bent Lerager
Craig McKenna
Joao Marcos Leite
Barry Mellish
Fred Scholten
Robert Symons
Marcus Thordal

We would also like to thank the following people for their contributions:

John Agombar
Alex Ainscow
Iain Bethune
Trevor Boardman
Peter Eccles
Carlos Fuente
Tim Graham
Alex Howell
Gary Jarman
Colin Jewell
Simon Linford
Andrew Martin
Paul Mason
Richard Mawson
Rob Nicholson
Nick O’Leary
Lucy Raw

Bill Scales
Matt Smith
Steve White
IBM Hursley

Bill Wiegand
IBM Advanced Technical Support

Timothy Crawford
Ross Hagglund
IBM Beaverton

Dorothy Faurot
IBM Raleigh

John Gressett
IBM Rochester

Chris Saul
IBM San Jose

Sharon Wang
IBM Chicago

Craig McKenna
IBM Australia

Fred Borchers
Charlotte Brooks
Tom Cady
Yvonne Lyon
Deanna Polm
Sangam Racherla
IBM ITSO

Tom and Jenny Chang


Garden Inn Hotel, Los Gatos, California

Become a published author


Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with
specific products or solutions, while getting hands-on experience with leading-edge
technologies. You'll team with IBM technical professionals, Business Partners and/or clients.

Your efforts will help increase product acceptance and client satisfaction. As a bonus, you'll
develop a network of contacts in IBM development labs, and increase your productivity and
marketability.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our Redbooks™ to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
򐂰 Use the online Contact us review redbook form found at:
ibm.com/redbooks
򐂰 Send your comments in an e-mail to:
[email protected]
򐂰 Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Summary of changes

This section describes the technical changes made in this edition of the book and in previous
editions. This edition may also include minor corrections and editorial changes that are not
identified.

Summary of Changes
for SG24-6423-04
for IBM System Storage SAN Volume Controller
as created or updated on September 14, 2006.

September 2006, Fifth Edition


This revision reflects the addition, deletion, or modification of new and changed information
described below.

New information
򐂰 Added Global Mirror

Changed information
򐂰 Numerous screen captures and their descriptions

Chapter 1. Introduction to storage virtualization
In this chapter we describe the need for storage virtualization and the IBM approach to both
in-band and out-of-band storage virtualization. The fundamental differences between the two
architectures are articulated to explain why IBM has chosen to use in-band virtualization for
the IBM System Storage SAN Volume Controller (the focus of the remainder of this redbook).

1.1 The need for storage virtualization
At the business level, clients are faced with three major storage challenges:
򐂰 Managing storage growth: Storage needs continue to grow at a rate that is normally
higher than what has been planned for each year. As an example, storage subsystems
can be purchased to last for 3 to 5 years; however, organizations are finding that they fill
them to capacity much earlier than that.
To accommodate this growth, customers either extend their current storage subsystems in
increments or buy different types of storage subsystems to match their storage needs and
budget.
򐂰 Increasing complexity: As storage needs grow, they can be filled by more than one
disk subsystem, which might not even be from the same vendor.
Together with the variety of server platforms and operating systems in a customer's
environment, customers can have storage area networks (SANs) with multiple and diverse
storage subsystems and host platforms.
Combined with the shortage of skilled storage administrators, the cost and risk of
storage increase as the environment becomes more complex.
򐂰 Maintaining availability: With the increased range of storage options available, the
storage growth rate, and no similar increase in storage budget, customers face
managing more storage with minimal or no additional staff.
Thus, with the complexity highlighted above, and with business requirements for ever
higher system availability, the room for error increases as each new storage
subsystem is added to the infrastructure.
Additionally, making changes to the storage infrastructure to accommodate storage
growth traditionally leads to outages that might not be acceptable to the business.

Storage needs are rising, and the challenge of managing disparate storage systems is
growing. The IBM System Storage SAN Volume Controller brings storage devices together in
a virtual pool to make all storage appear as:
򐂰 One “logical” device to centrally manage and to allocate capacity as needed
򐂰 One solution to help achieve the most effective use of key storage resources on demand

Virtualization solutions can be implemented in the storage network, in the server, or in the
storage device itself. The IBM storage virtualization solution is SAN-based, which helps allow
for a more open virtualization implementation. Locating virtualization in the SAN, and
therefore in the path of input/output (I/O) activity, helps to provide a solid basis for
policy-based management. The focus of IBM on open standards means its virtualization
solution supports freedom of choice in storage-device vendor selection.

The IBM System Storage SAN Volume Controller solution is designed to:
򐂰 Simplify storage management
򐂰 Reduce IT data storage complexity and costs while enhancing scalability
򐂰 Extend on-demand flexibility and resiliency to the IT infrastructure
򐂰 Increase application availability by making changes in the infrastructure without having to
shut down hosts

1.2 In-band virtualization


In a conventional SAN, the logical unit numbers (LUNs) that are defined within the storage
subsystem are directly presented to the host or hosts. In-band virtualization, otherwise
known as block aggregation, essentially means having an appliance in the data path that can
take physical storage from one or more storage subsystems and offer it to hosts in the form of
a virtual disk (VDisk).

The Storage Networking Industry Association (SNIA) Block Aggregation Model (Figure 1-1)
specifies that block aggregation can be performed within hosts (servers), in the storage
network (storage routers, storage controllers), or in storage devices (intelligent disk arrays).

Figure 1-1 SNIA Block Aggregation Model (© 2000, Storage Networking Industry Association)

While each of these approaches has pros and cons and all are available in various forms
from various vendors, IBM chose to develop its latest block aggregation product (IBM System
Storage SAN Volume Controller) within the storage network.

Block aggregation within the storage network provides four significant benefits to clients:
򐂰 Increased storage administrator productivity:
Administrators can manage, add, and migrate physical disks non-disruptively from an
application server point of view. This is accomplished by providing insulation between the
server’s view of the logical disks and the disks as presented by the storage subsystem.
Productivity is improved by allowing administrators to perform management functions
when convenient rather than waiting for ever decreasing maintenance windows.
Downtime requirements are almost eliminated.
򐂰 Providing a common platform for advanced functions:
By providing a logical view of physical storage, advanced functions like disaster recovery
can be done at a single point in the SAN in a consistent way regardless of the underlying
physical storage. FlashCopy®, Metro Mirror — formerly referred to as Peer-to-Peer
Remote Copy (PPRC) — and data migration can also be performed in a consistent way.
This common platform is used to provide other advanced functions over time such as
advanced security and quality of service (QoS) capabilities.

򐂰 Improved capacity utilization:
Spare capacity on underlying physical disks can be reallocated non-disruptively from an
application server point of view irrespective of the server operating system or platform
type. Logical disks can be created from any of the physical disks being managed by the
virtualization device (that is, vendor agnostic).
򐂰 Simplification of connectivity:
Each vendor storage subsystem would traditionally require a vendor’s device driver on the
host to access the subsystem.
Where there are many subsystems in the environment, managing the range of device
drivers is unnecessarily complex, regardless of whether any one host accesses more
than one vendor's storage subsystems.
The IBM approach means that only one device driver, the IBM System Storage Subsystem
Device Driver (SDD), is required to access any virtualized storage on the SAN regardless
of the vendor storage subsystem.

Figure 1-2 shows the IBM approach to block aggregation.

Figure 1-2 IBM plan for block aggregation

In addition to the four major benefits outlined above, abstracting the hosts from directly
accessing the storage subsystem or subsystems has many other benefits over other methods
of block aggregation, including these:
򐂰 It provides the ability to add advanced functions and apply them to the entire storage
infrastructure. The first release of the product offered these functions:
– Copy Services (“Metro Mirror” (formerly referred to as PPRC) and FlashCopy)
– Data migration
– Read and Write Caching
򐂰 Later releases of the product offer such functions as:
– Quality of Service
– Performance based data migration
– Performance optimization in the data path
– Advanced security
– Copy Services: Global Mirror
򐂰 It does not lock a client into a particular storage hardware vendor.
򐂰 It is not intrusive on the hosts.
򐂰 It can offload function from the hosts.
򐂰 It can support storage management from multiple ISVs.
򐂰 It offers superior scalability.

The IBM virtualization product provides redundant, modular, and scalable solutions. It is
based on a clustered IBM SAN appliance running a Linux® kernel to support high availability
and performance. Additional nodes can be added non-disruptively, providing
enterprise-class scalability. IBM’s long history of storage controller development has enabled
us to develop systems where, in the exceptionally rare case that a failure occurs, the
virtualization device can fail and recover gracefully. Figure 1-3 shows a representation of the
IBM System Storage SAN Volume Controller.

Figure 1-3 Conceptual diagram of the IBM SAN Volume Controller

In summary, enterprise class block aggregation functionality is added to the storage network.
The IBM solution improves storage administrator productivity, provides a common base for
advanced functions, and provides for more efficient use of storage. The IBM product is
designed to be delivered as a horizontally scalable, integrated solution based on the IBM SAN
appliance, and Linux, using a fault tolerant clustered architecture.

1.3 Out-of-band virtualization


Out-of-band virtualization, otherwise known as file aggregation, is when the virtualization
appliance is not in the data path. Typically, out-of-band virtualization is more geared toward
file sharing across the SAN. To this end, it typically involves a single file system in a single
name space.

File aggregation is a technique similar to block aggregation. However, rather than dealing
with blocks of data, file aggregation addresses the needs of accessing and sharing files in a
storage network. In the SNIA model, hosts get file metadata from file system or Network
Attached Storage (NAS) controllers, and then access the data directly. File aggregation can
be used in conjunction with or independent from block aggregation. Figure 1-4 shows the
SNIA file aggregation model.

Figure 1-4 SNIA file aggregation model

The IBM approach is through the use of a common file system based on the IBM Storage
Tank™ technology initiative. Initially, this file system covers all SAN-based files and is later
expanded to cover all files in an enterprise. IBM provides a metadata server cluster for
managing information about files and has designed its file system clients to access disks
directly. For clients with both SANs and NAS, IBM provides a converged SAN and NAS
solution based on the Storage Tank technology.

The IBM solution, the IBM System Storage SAN File System, is designed to provide a
common file system specifically designed for storage networks. By managing file details
(metadata) on the storage network instead of on individual servers, IBM can make a single
file system available to all application servers on that network. Doing so provides immediate
benefits: a single point of management and a single name space, and common management
for all files in the network, eliminating management of files on a server by server basis.

The SAN File System design automates routine and error prone tasks using policy based
automation, initially to manage file placement and handle “out of space” conditions. The SAN
File System design also allows the first true heterogeneous file sharing, where the reader and
writer of the exact same data can run different operating systems. Initially, the SAN File
System supports a range of the most commonly used operating systems in enterprise
SANs (see Figure 1-5).

Figure 1-5 The IBM plan for file aggregation

The SAN File System metadata servers are based on clustered IBM SAN appliances running
Linux to support high availability and performance. The metadata servers provide file locks
and all other file information (such as location) to authorized application servers, which are
running the SAN File System client code (no application changes necessary).

After file information is passed from the metadata server to the application server, the
application server can access the blocks that comprise that file directly through the SAN (not
through the metadata server). By providing direct access to data from the application server
to the underlying storage (virtualized or not), the IBM solution provides the benefits of
heterogeneous file sharing with local file system performance.

Since the metadata servers have a complete understanding of all files on the SAN, including
the essential metadata to make important decisions, it is a logical point to manage the
storage in the network through policy-based controls. For example, when new files are
created, the metadata server can decide where to place each file based on specified criteria
such as file type.

The SAN File System metadata server provides the ability to group storage devices
according to their characteristics, such as reliability, latency and throughput. These
groupings, called storage pools, allow administrators to manage data according to the
characteristics that matter to them. For example, an administrator can define a storage pool
for mission critical applications using highly reliable storage arrays that are backed up nightly
and have full disaster recovery capabilities. The administrator can also define a storage pool
for less critical applications with weekly tape backups and minimal disaster recovery
capabilities. Using this level of storage classification, an administrator can set up automated
policies that determine which files are placed in which storage pools based on the required
service levels.
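
As a hedged illustration of how such policy-based placement can work, the following short Python sketch chooses a storage pool for a new file based on its file type. The rule syntax, pool names, and file suffixes are invented for this example only; they are not the SAN File System policy language.

```python
# Hypothetical placement rules: file suffix -> storage pool.
# The pool names and suffixes below are assumptions for illustration.
PLACEMENT_RULES = {
    ".db": "mission_critical_pool",   # highly reliable arrays, nightly backup
    ".log": "standard_pool",          # weekly tape backup, minimal DR
}
DEFAULT_POOL = "standard_pool"

def choose_pool(filename):
    """Select a storage pool for a new file based on its type."""
    for suffix, pool in PLACEMENT_RULES.items():
        if filename.endswith(suffix):
            return pool
    return DEFAULT_POOL

print(choose_pool("orders.db"))    # -> mission_critical_pool
print(choose_pool("readme.txt"))   # -> standard_pool
```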

Because the SAN File System metadata is separate from the application data, files can be
manipulated while remaining active. For example, files being processed by a mission-critical
application can be non-disruptively moved within or across storage pools without stopping the
application. Similarly, data migration from one storage system to another can be handled
non-disruptively by having the metadata server move the pools to new physical disks, and
then disconnecting the old disks, all done without quiescing applications.

The SAN File System offers a logical extension to current NAS and SAN environments.
Although NAS has proven successful in the marketplace, it does not take advantage of a SAN
infrastructure. NAS Filers become the keepers of their own file metadata and must be
managed separately from the files on SAN attached application servers. The IBM approach is
to add NAS capabilities to the SAN File System, thereby allowing storage administrators to
manage the NAS file data with the same tools as for their application servers. This approach
of SAN and NAS convergence helps lower total cost of ownership (TCO) in these
environments.

To facilitate the adoption of the SAN File System, the client code and the client reference
implementation source code are licensed at no cost. In addition, the metadata server
protocols are made publicly available. IBM will work with the industry to encourage
convergence of the SAN File System protocols with other standards.

In summary, the IBM System Storage SAN File System is a common SAN-wide file system
that permits centralization of management and improved storage utilization at the file level.
The SAN File System is delivered in a highly available configuration based on an IBM SAN
appliance with active-active failover and clustering for the metadata servers, providing high
availability and fault tolerance. The SAN File System is also being designed to provide policy
based storage automation capabilities for provisioning and data placement, nondisruptive
data migration, and a single point of management for files on a storage network. The use of
the SAN File System can greatly simplify the management of files on SANs and result in a
significant reduction in TCO.

Figure 1-6 shows a diagram of the IBM SAN File System architecture.

Figure 1-6 Conceptual diagram of the IBM SAN file system architecture

1.4 Conclusion
In conclusion, the IBM System Storage SAN Volume Controller enables storage virtualization.
This allows clients to reap the benefit of better application business responsiveness,
maximized storage utilization, dynamic resource allocation, improved storage administration
utilization, and reduced storage outage.

In-band and out-of-band virtualization are two distinct yet complementary approaches.
IBM delivers each in a separate product; the two products fulfill different requirements
and therefore take different approaches to virtualization.
The rest of this redbook is dedicated to the IBM System Storage SAN Volume Controller
and its method of in-band virtualization. For greater detail about the technology and
implementation of the IBM System Storage SAN File System, see the redbook, IBM
System Storage SAN File System, SG24-7057-03.

Chapter 2. IBM System Storage SAN Volume Controller overview
In this chapter we describe the major concepts behind the IBM System Storage SAN Volume
Controller to provide the framework for discussion for the remainder of this redbook.

2.1 Maximum supported configurations
For a list of the maximum configurations, go to:
http://www-03.ibm.com/servers/storage/support/software/sanvc/installing.html

Under the Install and Use tab, select the link, V4.1.x configuration requirements and
guidelines, to download the PDF.

2.2 Glossary of commonly used terms


Before providing an overview of the IBM System Storage SAN Volume Controller, we begin
this chapter with a short glossary of terms (in alphabetical order) most commonly used
throughout the remainder of this redbook.

Boss node
A single node acts as the boss node for overall management of the cluster. If the boss node
fails, another node in the cluster will take over the responsibilities.

Configuration node
At any one time, a single node in the cluster is used to manage configuration activity. This
configuration node manages a cache of the configuration information that describes the
cluster configuration and provides a focal point for configuration commands. Similarly, at any
one time, a single node acts as the boss node for overall management of the cluster.

Extent
An extent is a fixed size unit of data that is used to manage the mapping of data between
MDisks and VDisks.

Front-end and back-end


SAN Volume Controller takes managed disks and presents these to application servers
(hosts). The managed disks are looked after by the “back-end” application of the SAN
Volume Controller. The virtual disks presented to hosts are looked after by the “front-end”
application in the SAN Volume Controller.

Grain
A grain is the unit of data represented by a single bit in a FlashCopy bitmap; in the SAN
Volume Controller, a grain is 256 KB.

I/O group
An input/output (I/O) group contains two SAN Volume Controller nodes defined by the
configuration process. Each SAN Volume Controller node is associated with exactly one I/O
group. The nodes in the I/O group provide access to the VDisks in the I/O group.

LU and LUN
Strictly speaking, there is a difference between a logical unit (LU) and a logical unit number
(LUN). A LUN is a unique identifier used on a SCSI bus that enables it to differentiate
between up to eight separate devices (each of which is a logical unit). In practice, the two
terms are used interchangeably. In this book, when we refer to a LUN, we refer to the unit of
storage that is defined in a storage subsystem such as an IBM System Storage Enterprise
Storage Server® (ESS), IBM System Storage DS4000™, DS6000™, and DS8000™ series
Storage Server, or storage servers from other vendors.

Managed disk
A managed disk (MDisk) is a SCSI disk presented by a RAID controller and managed by the
SAN Volume Controller. The MDisk must not be configured to be visible to host systems on
the SAN.

Managed disk group


The managed disk group (MDG) is a collection of MDisks that jointly contain all the data for a
specified set of VDisks.

Master console
The master console is the platform on which the software used to manage the SAN Volume
Controller runs.

Node
A node is a name given to the individual servers in a SAN Volume Controller cluster on which
the SAN Volume Controller software runs.

SAN Volume Controller


The SAN Volume Controller is a SAN appliance designed for attachment to a variety of host
computer systems, which carries out block level virtualization of disk storage.

Virtual disk
A virtual disk (VDisk) is a SAN Volume Controller device that appears to host systems
attached to the SAN as a SCSI disk. Each VDisk is associated with exactly one I/O group.

2.3 Virtualization overview


The SVC nodes are the hardware elements of the IBM System Storage SAN Volume
Controller, a member of the IBM System Storage virtualization family of solutions. The SAN
Volume Controller combines servers into a high availability cluster. Each of the servers in the
cluster is populated with 8 GB of high-speed memory, which serves as the cluster cache. A
management card is installed in each server to monitor various parameters which the cluster
uses to determine the optimum and continuous data path. The cluster is protected against
data loss by uninterruptible power supplies. The SAN Volume Controller nodes can only be
installed in pairs for high availability.

Storage virtualization addresses the increasing cost and complexity in data storage
management. It addresses this increased complexity by shifting storage management
intelligence from individual SAN disk subsystem controllers into the network via a
virtualization cluster of nodes.

The SAN Volume Controller solution is designed to reduce both the complexity and costs of
managing your SAN-based storage. With the SAN Volume Controller, you can:
򐂰 Simplify management and increase administrator productivity by consolidating storage
management intelligence from disparate disk subsystem controllers into a single view.
򐂰 Improve application availability by enabling data migration between disparate disk storage
devices non-disruptively.
򐂰 Improve disaster recovery and business continuance needs by applying and managing
copy services across disparate disk storage devices within the Storage Area Network
(SAN). These solutions include a Common Information Model (CIM) Agent, enabling
unified storage management based on open standards for units that comply with CIM
Agent standards.

򐂰 Provide advanced features and functions to the entire SAN, such as:
– Large scalable cache
– Copy Services
– Space management (later releases to include Policy Based Management)
– Mapping based on desired performance characteristics
– Quality of Service (QoS) metering and reporting
򐂰 Simplify device driver configuration on hosts, so all hosts within your network use the
same IBM device driver to access all storage subsystems through the SAN Volume
Controller.

Note: The SAN Volume Controller is not a RAID controller. The disk subsystems attached
to SANs that have the SAN Volume Controller provide the basic RAID setup. The SAN
Volume Controller uses what is presented to it as a managed disk to create virtual disks.

2.4 Compass architecture


The IBM System Storage SAN Volume Controller is based on the COMmodity PArts Storage
System (Compass) architecture developed at the IBM Almaden Research Center.

The overall goal of the Compass architecture is to create storage subsystem software
applications that require minimal porting effort to leverage a new hardware platform. To meet
this goal:
򐂰 Compass, although currently deployed on the Intel® hardware platform, can be ported to
other hardware platforms.
򐂰 Compass, although currently deployed on a Linux kernel, can be ported to other Portable
Operating System Interface (POSIX)-compliant operating systems.
򐂰 Compass uses commodity adapters and parts wherever possible. To the highest extent
possible, it only uses functions in the commodity hardware that are commonly exercised
by the other users of the parts. This is not to say that Compass software could not be
ported to a platform with specialized adapters. However, the advantage in specialized
function must be weighed against the disadvantage of future difficulty in porting and in
linking special hardware development plans to the release plans for applications based on
the Compass architecture.
򐂰 Compass is developed in such a way that it is as easy as possible to troubleshoot and
correct software defects.
򐂰 Compass is designed as a scalable, distributed software application that can run in
increasing sets of Compass nodes with near linear gain in performance while using a
shared data model that provides a single pool of storage for all nodes.
򐂰 Compass is designed so that there is a single configuration and management view of the
entire environment regardless of the number of Compass nodes in use.

The approach is to minimize the dependency on unique hardware, and to allow exploitation of
or migration to new SAN interfaces simply by plugging in new commodity adapters.
Performance growth over time is assured by the ability to port Compass to just about any
platform and remain current with the latest processor and chipset technologies on each. The
SAN Volume Controller implementation of the Compass architecture has exploited Linux as a
convenient development platform to deploy this function. This has, and will continue to
enhance the ability of IBM to deploy robust function in a timely way.

2.4.1 SAN Volume Controller clustering
In simple terms, a cluster is a collection of servers that, together, provide a set of resources to
a client. The key point is that the client has no knowledge of the underlying physical hardware
of the cluster.

This means that the client is isolated and protected from changes to the physical hardware,
which brings a number of benefits. Perhaps the most important of these benefits is high
availability.

Resources on clustered servers act as highly available versions of unclustered resources.


If a node (an individual computer) in the cluster is unavailable, or too busy to respond to a
request for a resource, the request is transparently passed to another node capable of
processing it. Clients are, therefore, unaware of the exact locations of the resources they are
using.

For example, a client can request the use of an application without being concerned about
either where the application resides or which physical server is processing the request. The
user simply gains access to the application in a timely and reliable manner. Another benefit is
scalability. If you need to add users or applications to your system and want performance to
be maintained at existing levels, additional systems can be incorporated into the cluster.

The IBM System Storage SAN Volume Controller is a collection of up to eight cluster nodes,
added in pairs. In future releases, the cluster size will be increased to permit further
performance scalability. These nodes are managed as a set (cluster) and present a single
point of control to the administrator for configuration and service activity.

Note: Although the SAN Volume Controller code is based on a Linux kernel, the clustering
feature is not based on Linux clustering code. The clustering failover and failback feature is
part of the SAN Volume Controller application software.

Within each cluster, one node is defined as the configuration node. This node is assigned the
cluster IP address and is responsible for transitioning additional nodes into the cluster.

During normal operation of the cluster, the nodes communicate with each other. If a node is
idle for a few seconds, then a heartbeat signal is sent to assure connectivity with the cluster.
Should a node fail for any reason, the workload intended for it is taken over by another node
until the failed node has been restarted and re-admitted to the cluster (which happens
automatically). In the event that the microcode on a node becomes corrupted, resulting in a
failure, the workload is transferred to another node. The code on the failed node is repaired,
and the node is re-admitted to the cluster (again, all automatically).
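
The heartbeat behavior just described can be sketched in a few lines of Python. This is our own simplification, not SVC code; the idle threshold and the message transport are assumptions for illustration.

```python
import time

IDLE_THRESHOLD = 5.0  # the text says "a few seconds"; the exact value is assumed

class ClusterNode:
    """Toy model: a node emits a heartbeat only when it has been idle."""

    def __init__(self, name, send):
        self.name = name
        self.send = send                    # function that delivers a message
        self.last_sent = time.monotonic()

    def on_message_sent(self):
        # Any normal cluster traffic already proves connectivity.
        self.last_sent = time.monotonic()

    def tick(self):
        # Called periodically by the node's event loop.
        if time.monotonic() - self.last_sent >= IDLE_THRESHOLD:
            self.send("heartbeat from " + self.name)
            self.on_message_sent()

node = ClusterNode("node1", send=print)
node.tick()  # silent: the node has not yet been idle for the threshold
```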

For I/O purposes, SAN Volume Controller nodes within the cluster are grouped into pairs,
called I/O groups, with a single pair being responsible for serving I/O on a given VDisk. One
node within the I/O group represents the preferred path for I/O to a given VDisk. The other
node represents the non-preferred path. This preference alternates between nodes as each
VDisk is created within an I/O group to balance the workload evenly between the two nodes.

Note: The preferred node by no means signifies absolute ownership. The data can still be
accessed by the partner node in the I/O group in the event of a failure.

Beyond automatic configuration and cluster administration, the data transmitted from
attached application servers is also treated in the most reliable manner. When data is written
by the host, the preferred node within the I/O group stores a write in its own write cache and
the write cache of its partner (non-preferred) node before sending an “I/O complete” status
back to the host application. The write cache is automatically destaged to disk after two
minutes of no writes to a VDisk. To ensure that data is written in the event of a node failure,
the surviving node empties all of its remaining write cache and proceeds in write-through
mode until the cluster is returned to a fully operational state.

Note: Write-through mode is where the data is not cached in the nodes, but written directly
to the disk subsystem instead. While operating in this mode, performance is somewhat
degraded. More importantly, it ensures that the data makes it to its destination without the
risk of data loss. A single copy of data in cache would constitute exposure to data loss.
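
To summarize the write path, the following Python sketch is an illustrative model only, not node software: a host write is acknowledged after it is held in both nodes' caches, and a surviving node destages its cache and switches to write-through when its partner is lost.

```python
class IOGroupNode:
    """Toy model of the mirrored write cache in an SVC I/O group pair."""

    def __init__(self, name):
        self.name = name
        self.write_cache = {}   # block address -> data
        self.partner = None     # the other node in the I/O group

    def host_write(self, block, data, backend):
        if self.partner is not None:
            # Normal operation: hold the write in both caches before the
            # acknowledgement; destaging to disk happens later.
            self.write_cache[block] = data
            self.partner.write_cache[block] = data
        else:
            # Partner lost: empty the remaining cache, then run in
            # write-through mode, committing to disk before the ack.
            self.destage_all(backend)
            backend[block] = data
        return "I/O complete"

    def destage_all(self, backend):
        while self.write_cache:
            block, data = self.write_cache.popitem()
            backend[block] = data

disk = {}
a, b = IOGroupNode("preferred"), IOGroupNode("partner")
a.partner, b.partner = b, a
a.host_write(0, "payload", disk)    # cached on both nodes, not yet on disk
a.partner = None                    # simulate partner failure
a.host_write(1, "payload2", disk)   # destages block 0, writes block 1 through
```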

As yet another data protection feature, the SAN Volume Controller is supplied with
uninterruptible power supply units. In addition to voltage regulation to protect valuable
electronic components within the SAN Volume Controller configuration, in the event of a main
power outage, the uninterruptible power supply provides enough power to destage data to the
SAN Volume Controller internal disk and shut down the nodes within the SAN Volume
Controller cluster gracefully. This is a feature found in most high-end disk subsystems.

2.4.2 SAN Volume Controller virtualization


The SAN Volume Controller provides block aggregation and volume management for disk
storage within the SAN. In simpler terms, this means that the SAN Volume Controller
manages a number of back-end disk subsystem controllers and maps the physical storage
within those controllers to logical disk images that can be seen by application servers and
workstations in the SAN. The SAN must be zoned in such a way that the application servers
cannot see the same back-end LUNs seen by the SAN Volume Controller, preventing any
possible conflict between the SAN Volume Controller and the application servers both trying
to manage the same back-end LUNs.

As described earlier, when an application server performs I/O to a VDisk assigned to it by the
SAN Volume Controller, it can access that VDisk via either of the nodes in the I/O group.
Each node can only be in one I/O group and since each I/O group only has two nodes, the
distributed redundant cache design in the SAN Volume Controller only needs to be two-way.

The SAN Volume Controller I/O groups are connected to the SAN in such a way that all
back-end storage and all application servers are visible to all of the I/O groups. The SAN
Volume Controller I/O groups see the storage presented to the SAN by the back-end
controllers as a number of disks, known as managed disks. Because the SAN Volume
Controller does not attempt to provide recovery from physical disk failures within the
back-end controllers, MDisks are recommended, but not necessarily required, to be a RAID
array. The application servers should not see the MDisks at all. Instead, they should see a
number of logical disks, known as virtual disks or VDisks, which are presented to the SAN by
the SAN Volume Controller.

MDisks are collected into groups, known as managed disk groups (MDGs). The MDisks that
are used in the creation of a particular VDisk must all come from the same MDG. Each MDisk
is divided into a number of extents. The minimum extent size is 16 MB, and the maximum
extent size is 512 MB, based on the definition of its MDG. These extents are numbered
sequentially from the start to the end of each MDisk. Conceptually, this is represented as
shown in Figure 2-1.


Figure 2-1 Extents being used to create a virtual disk
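
To make the extent arithmetic concrete, this small Python sketch (our own illustration, not part of the SVC software) computes how many extents a VDisk consumes at the minimum and maximum extent sizes. Capacity is allocated in whole extents, so the count rounds up.

```python
MB = 1024 * 1024
GB = 1024 * MB

def extents_needed(vdisk_bytes, extent_bytes):
    """Whole extents required to hold a VDisk (ceiling division)."""
    return -(-vdisk_bytes // extent_bytes)

# A 100 GB VDisk in MDGs defined with 16 MB and with 512 MB extents:
for extent_mb in (16, 512):
    count = extents_needed(100 * GB, extent_mb * MB)
    print(f"{extent_mb:>3} MB extents -> {count} extents for a 100 GB VDisk")
# 16 MB extents -> 6400 extents; 512 MB extents -> 200 extents
```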

The virtualization function in the SAN Volume Controller maps the VDisks seen by the
application servers to the MDisks presented by the back-end controllers. I/O traffic for a
particular VDisk is, at any one time, handled exclusively by the nodes in a single I/O group.
Although a cluster can have many nodes within it, the nodes handle I/O in independent pairs.
This means that the I/O capability of the SAN Volume Controller scales well (almost linearly),
since additional throughput can be obtained by simply adding additional I/O groups.

Figure 2-2 summarizes the various relationships that bridge the physical disks through to the
virtual disks within the SAN Volume Controller architecture.

Figure 2-2 The relationship between physical and virtual disks (mapping from VDisks to MDisks: extents striped across multiple MDisks, grouped sequentially across one or more MDisks, or mapped one-to-one in image mode)

Virtualization mappings
Several different mapping functions are provided by the SAN Volume Controller:
򐂰 Striped: Here a VDisk is mapped to a number of MDisks in a MDG. The extents on the
VDisk are striped over the MDisks. Therefore, if the VDisk is mapped to 5 MDisks, the
first, sixth, eleventh (and so on) extents come from the first MDisk; the second, seventh,
and twelfth extents come from the second MDisk; and so on. This is the default
mapping (see the sketch after this list).
򐂰 Sequential: Here a VDisk is mapped to a single MDisk in an MDG. There is no guarantee
that sequential extents on the MDisk map to sequential extents on the VDisk, although this
might be the case when the VDisk is created.

Note: There are no ordering requirements in the MDisk to VDisk extent mapping
function for either striped or sequential VDisks. This means that if you examine the
extents on an MDisk, it is quite possible for adjacent extents to be mapped to different
VDisks. It is also quite possible for adjacent extents on the MDisk to be mapped to
widely separated extents on the same VDisk, or to adjacent extents on the VDisk. In
addition, the position of the extents on the MDisks is not fixed by the initial mapping,
and can be varied by the user performing data migration operations.

򐂰 Image: Image mode sets up a one-to-one mapping of extents on an MDisk to the extents
on the VDisk. Because the VDisk has exactly the same extent mapping as the underlying
MDisk, any data already on the disk is still accessible when migrated to a SAN Volume
Controller environment. Within the SAN Volume Controller environment, the data can
(optionally) be seamlessly migrated off the image mode VDisk to a striped or sequential
VDisk within an MDG.
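
To make the striped mapping concrete, the following Python sketch computes which MDisk,
and which extent on that MDisk, backs a given VDisk extent under a simple round-robin
striping scheme. This is a conceptual illustration of the behavior described above, not the
actual SVC allocation algorithm; as the note explains, the real mapping imposes no ordering
requirements.

def striped_location(vdisk_extent, num_mdisks):
    """Map a VDisk extent number (0-based) to (MDisk index, extent offset on
    that MDisk) under simple round-robin striping across the MDisks of an MDG.
    Illustrative only - the real SVC mapping need not be this regular."""
    return (vdisk_extent % num_mdisks, vdisk_extent // num_mdisks)

# With 5 MDisks, VDisk extents 0, 5, 10, ... (the first, sixth, eleventh)
# all come from the first MDisk, as described above.
for e in (0, 1, 5, 6, 10):
    mdisk, offset = striped_location(e, num_mdisks=5)
    print(f"VDisk extent {e} -> MDisk {mdisk}, extent {offset}")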

2.4.3 SAN Volume Controller multipathing


Each SAN Volume Controller node presents a VDisk to the SAN via multiple paths, usually
four. In normal operation, two nodes provide redundant paths to the same storage. This
means that, depending on zoning, a single host bus adapter (HBA) sees up to eight paths to
each LUN presented by the SAN Volume Controller. Because most operating systems cannot
resolve multiple paths back to a single physical device, IBM provides a multipathing device
driver.

The multipathing driver supported by the SAN Volume Controller is the IBM Subsystem
Device Driver (SDD).

SDD load-balances and optimizes the I/O workload among the preferred paths from the host
to the SAN Volume Controller. In the event of failure of all preferred paths, it load-balances the
non-preferred paths in the same manner. The failback is automatic when the preferred paths
are recovered.

Note: The SDD code is updated to support both the SAN Volume Controller and the ESS.
Provided that the latest version is used, IBM supports the concurrent connections of a host
to both a SAN Volume Controller and native ESS environment as long as each ESS LUN is
only seen by either the SVC or the host but not by both.

2.5 SAN Volume Controller logical configuration


Figure 2-3 shows an example of a SAN Volume Controller configuration.



Figure 2-3 SAN Volume Controller logical view

Configuration notes
Here are some basic characteristics and recommendations in regard to the configuration:
򐂰 The Fibre Channel SAN connections between the SAN Volume Controller and the
switches are optical fiber, preferably running at 2 Gbps. However, the SAN Volume
Controller is also supported in 1 Gbps Fibre Channel fabrics.
򐂰 To provide high availability, the SAN Volume Controller nodes should be configured in
redundant SAN fabrics.
򐂰 The Fibre Channel switches need to be zoned to permit the hosts to see the SAN Volume
Controller nodes and the SAN Volume Controller nodes to see the RAID Controllers. The
SAN Volume Controller nodes within a cluster must be able to see each other and the
master console. In addition, if there are two SAN Volume Controller clusters with
MetroMirror (formerly referred to as Peer-to-Peer Remote Copy (PPRC)) and GlobalMirror
services between them, zoning must be set so that all the nodes in both clusters see all
the other nodes in both clusters.
򐂰 In addition to a Fibre Channel connection or connections, each device has an Ethernet
connection for configuration and error reporting. However, only one of the nodes, the
configuration node, binds an IP address to its Ethernet connection.

2.6 SAN Volume Controller compatibility


The SAN Volume Controller is capable of supporting Windows® NT, 2000, and 2003; AIX®;
SuSE Linux Enterprise Server 8 (SLES 8) and SuSE Linux Enterprise Server 9 (SLES 9);
Red Hat Enterprise Linux AS; Novell NetWare 6.5 with clustering; VMware ESX; Sun™
Solaris™; and HP-UX hosts. The SAN switch support is also broad and includes the IBM
System Storage SAN Switches and members of the Brocade, McDATA, CNT, and Cisco
families of SAN switches and directors.



Support of disk subsystems includes the IBM System Storage DS4000, DS6000, and
DS8000 series servers, and the IBM System Storage Enterprise Storage Server (ESS). It
also supports several models of Hitachi Thunder, Lightning, and Tagma Store; EMC Clariion
and Symmetrix; and Hewlett-Packard StorageWorks Modular Arrays and EVA. Furthermore,
the support includes the new IBM System Storage N series and NetApp FAS.

Future releases of the IBM System Storage SAN Volume Controller will continue to grow the
portfolio of supported disk subsystems, SAN switches, HBAs, hosts, and operating systems.

For SAN Volume Controller supported operating systems, hosts, HBAs, SAN switches, and
storage subsystems, see the links for supported hardware list and recommended software
levels for the appropriate SVC code level under the Install and Use tab on the Web at:
http://www.ibm.com/servers/storage/support/software/sanvc/installing.html

2.7 Software licensing


There are three parameters to consider when licensing the SAN Volume Controller software:
򐂰 The total amount of storage that is managed by the SAN Volume Controller cluster:
This can be greater than the amount of storage that is virtualized by the cluster. See
Figure 2-4.

Figure 2-4 Base software license diagram



򐂰 The amount of virtualized storage that you want to participate in simultaneous FlashCopy
relationships:
You might want to manage 8 TB of storage, but FlashCopy only 1 TB of storage. In this
case, you need a 2 TB FlashCopy license, since there is 1 TB of source and 1 TB of target
FlashCopy volumes. See Figure 2-5.

Figure 2-5 FlashCopy storage license



򐂰 The amount of virtualized storage that you want to create on another system through
MetroMirror/GlobalMirror.

Note: There is no additional license charge for the usage of GlobalMirror for existing
MetroMirror users.

For example, you might be managing 8 TB of storage, FlashCopy copying 1 TB (so a 2 TB
FlashCopy license is required), and MetroMirror/GlobalMirror copying 1 TB. Here, both
clusters that are taking part in the MetroMirror/GlobalMirror relationship require a 1 TB
MetroMirror/GlobalMirror license. In the case of intracluster MetroMirror/GlobalMirror,
where both primary and secondary volumes are in the same cluster, the license must be
large enough to cover both.
See Figure 2-6 for the intracluster scenario. For intercluster MetroMirror/GlobalMirror, only
the amount of virtualized storage that is in a MetroMirror/GlobalMirror relationship at that
site needs to be covered by the MetroMirror/GlobalMirror license.

Figure 2-6 MetroMirror/GlobalMirror intracluster



Figure 2-7 shows the intercluster MetroMirror/GlobalMirror relationship and how this affects
licensing.

Figure 2-7 MetroMirror/GlobalMirror intercluster relationship

You can increase any one of these three licenses independently of the other. That is, you can
increase the total amount of managed storage software without increasing the other licenses
if the amounts of storage being copied remains unchanged. Similarly, you can change the
copy licenses independently of each other, but you have to ensure that the total amount of
SVC managed storage software still covers the sum of all advanced copy service licenses.
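
As a quick sanity check, the sizing rules above can be expressed in a few lines of code. The
following Python sketch is purely illustrative; the function and field names are our own, not
SVC licensing terminology.

def required_licenses(managed_tb, flashcopy_source_tb, mirror_tb,
                      intracluster_mirror=False):
    """Rough license sizing per the rules described above (illustrative only)."""
    # FlashCopy is licensed for source plus target capacity.
    flashcopy_license = 2 * flashcopy_source_tb
    # Intracluster Metro/Global Mirror keeps both copies in one cluster,
    # so the license must cover both primary and secondary volumes.
    mirror_license = 2 * mirror_tb if intracluster_mirror else mirror_tb
    # The base (managed storage) license must still cover at least the sum
    # of all advanced copy service licenses.
    base_license = max(managed_tb, flashcopy_license + mirror_license)
    return {"base": base_license,
            "flashcopy": flashcopy_license,
            "mirror": mirror_license}

# The worked example above: 8 TB managed, 1 TB FlashCopied, 1 TB mirrored.
print(required_licenses(8, 1, 1))  # {'base': 8, 'flashcopy': 2, 'mirror': 1}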

2.8 What’s new and what’s in SVC 4.1


For the most up-to-date information as to supported hardware and software, go to:
http://www-03.ibm.com/servers/storage/support/software/sanvc/installing.html

Here is a brief summary of the features available with SVC 4.1:


򐂰 Advanced copy service:
– Global Mirror (asynchronous remote mirror)
򐂰 New IBM System Storage SAN Volume Controller hardware:
– System x™ 366 server
– 8 GB Cache per node
– 4 Gb/s FC Adapter



򐂰 New additional operating system support:
– Netapp V series
– Sun Solaris 10 for SPARC, including SAN Boot, clustering with Sun Cluster 3.x
– Red Hat Enterprise Linux 4.0
– Hewlett Packard HP-UX 11i V2 for PARISC and Itanium® systems, including SAN
Boot, clustering with HP Service Guard
– Hewlett Packard Open VMS 7.3
– Microsoft® Windows 2003, Enterprise x64 Edition, including SAN Boot
򐂰 Subsystem storage support:
– IBM System Storage N series: N5500/N5200
– Hitachi Data System Tagma Store:
• Universal Storage Platform USP100/USP600/USP1100
• Network Storage Controller NSC55
– Hewlett Packard Storage Works: Enterprise Virtual Array EVA 4000/6000/8000
– Network Appliance: Netapp FAS 3020/3050
򐂰 New SVC features:
– Audit log: This new feature provides an overview of the tasks performed by each user.
– New performance statistics: Statistics can now be collected cluster-wide and per node.
– Host port mask: This controls the node target ports that a host can access.
򐂰 New GUI to improve the storage administrator’s working experience:
– An additional host name to VDisk mapping information is now provided.
– The lsnode command now provides information about the node hardware type and the
MAC address. The GUI shows this information when displaying the node Vital Product
Data (VPD).
– A new discovery status command is available in CLI and GUI to query if a long running
LUN discovery is active or inactive.
– Automatic Code Level Check: This provides linkage to a static Web site that shows
whether or not an update is needed.
– Viewing Fabrics: This new panel provides more information about the fabrics that are
associated with this cluster.
– Mapped I/O Groups: This new view provides a quick overview of what I/O groups are
mapped to the host.
򐂰 IBM System Storage SAN Volume Controller 4.1 is backwards compatible with all prior
versions of IBM System Storage SAN Volume Controller.




Chapter 3. Planning and configuration


In this chapter we describe the steps required when planning to install an IBM System
Storage SAN Volume Controller (SVC) in your storage network. We look at the implications
for your storage network.

3.1 General planning rules
To achieve the most benefit from SAN Volume Controller (SVC), pre-installation planning
should include several important steps. These steps ensure that SVC provides the best
possible performance, reliability, and ease of management for your application needs. Proper
configuration also helps minimize downtime by avoiding changes to SVC and the storage
area network (SAN) environment to meet future growth needs.

Planning the SVC requires that you follow these steps:


1. Document the number of hosts (application servers) to attach to the SVC, the traffic profile
activity (read or write, sequential or random), the performance requirements (input/output
(I/O) per second), the total storage capacity, the storage capacity for point-in-time copy
(FlashCopy), the storage capacity for remote copy (Metro and Global Mirror), the storage
capacity per host, and the host logical unit number (LUN) quantities and sizes.
2. Define the local and remote SAN fabrics and clusters if a remote copy or a secondary site
are needed.
3. Define the number of clusters and the number of pairs of nodes (between 1 and 4) for
each site. Each pair of nodes (an I/O group) is the container for the virtual disks. How
many I/O groups are needed depends on the overall performance requirements.
4. Design the SAN according to the requirement for high availability and best performance.
Consider the total number of ports and the bandwidth needed between the host and the
SVC, the SVC and the disk subsystem, between the SVC nodes, and for the ISL between
the local and remote fabric.
5. Define the managed disks (MDisks) in the disk subsystem.
6. Define the managed disk groups (MDGs). This depends on the disk subsystem in place
and the data migration needs.
7. Create and distribute the VDisks between the different I/O groups and the different
MDGs in such a way as to optimize the I/O load between the hosts and the SVC. This can
be an equal distribution of all the VDisks between the different nodes, or a distribution
that takes into account the expected load from the different hosts.
8. Plan for the physical location of the equipment in the rack.

Note: See IBM TotalStorage Virtualization Family SAN Volume Controller: Planning
Guide, GA22-1052, for hardware location and connection charts.

9. You need IP addresses for the SVC Cluster, the SVC service IP address, master console,
and switches.

Note: See the IBM TotalStorage Virtualization Family SAN Volume Controller: Planning
Guide, GA22-1052 for more information.



3.2 Physical planning
There are several main factors to take into account when carrying out the physical planning of
an SVC installation. The physical site must have the following characteristics:
򐂰 Power, cooling, and location requirements must be met for the SVC and the
uninterruptible power supplies.
򐂰 An SVC node is one EIA unit high.
򐂰 Each of the uninterruptible power supplies (UPSs) that comes with SVC 4.1 version is one
EIA unit high; the UPS shipped with the earlier version of the SVC is two EIA units high.
򐂰 The master console is two EIA units high: one for the server and one for the keyboard and
monitor.
򐂰 Other hardware devices can be in the rack, such as IBM System Storage DS4000, IBM
System Storage DS6000, SAN switches, Ethernet switch, and others.
򐂰 The maximum power rating of the rack and input power supply must not be exceeded.



In Figure 3-1 we show the SVC in its rack.

Figure 3-1 SVC in its rack



3.2.1 Preparing your UPS environment
Ensure that your physical site meets the installation requirements for the uninterruptible
power supply (UPS).

Note: The 2145 UPS-1U is a Powerware 5115 and the 2145 UPS is a Powerware 5125.

2145 UPS-1U
When you configure the 2145 uninterruptible power supply-1U (2145 UPS-1U), the voltage
that is supplied to it must be 200 – 240 V, single phase.

Note: The 2145 UPS-1U has an integrated circuit breaker and does not require external
protection.

2145 UPS
The SAN Volume Controller 2145-8F2 and SAN Volume Controller 2145-8F4 can only
operate with the 2145 uninterruptible power supply-1U (2145 UPS-1U).

The SAN Volume Controller 2145-4F2 can operate with both the 2145 UPS-1U and the 2145
UPS.

Be aware of the following considerations when configuring the 2145 uninterruptible power
supply (2145 UPS):
򐂰 Each 2145 UPS must be connected to a separate branch circuit.
򐂰 A UL-listed 15 A circuit breaker must be installed in each branch circuit that supplies
power to the 2145 UPS.
򐂰 The voltage that is supplied to the 2145 UPS must be 200 – 240 V, single phase.
򐂰 The frequency supplied to the 2145 UPS must be 50 or 60 Hz.

Ensure that you comply with the following requirements for UPSs:
򐂰 If the UPS is cascaded from another UPS, the source UPS must have at least three times
the capacity per phase, and the total harmonic distortion must be less than 5% with any
single harmonic being less than 1%.
򐂰 The UPS must also have input voltage capture that has a slew rate faster than 3 Hz per
second and 1 msec glitch rejections.

Heat output
The maximum heat output parameters are as follows:
򐂰 142 watts (485 Btu per hour) during normal operation
򐂰 553 watts (1887 Btu per hour) when power has failed and the UPS is supplying power to
the nodes of the SAN Volume Controller

For more information, refer to the IBM System Storage SAN Volume Controller Planning
Guide, GA32-0551.



3.2.2 Physical rules
The SVC must be installed in pairs to provide high availability, and each node in an I/O group
must be connected to different UPSs as shown in Figure 3-2.

Figure 3-2 Node uninterruptible power supply setup

In SVC versions prior to SVC 2.1, the Powerware 5125 UPS was shipped with the SVC; in
SVC 4.1, Powerware 5115 UPS is shipped with the SVC. You can upgrade an existing SVC
cluster to 4.1 and still use the UPS Powerware 5125 that was delivered with the SVC prior to
2.1.
򐂰 Each SVC node of an I/O group must be connected to a different UPS.
򐂰 Each UPS shipped with SVC 3.1 and 4.1 supports one node only, but each UPS shipped
with earlier versions of SVC supports up to two SVC nodes (in distinct I/O groups).
򐂰 Each UPS pair that supports a pair of nodes must be connected to a different power
domain (if possible) to reduce the chances of input power loss.
򐂰 The UPSs must be installed in the lowest available position in the rack. If necessary, move
lighter units toward the top.
򐂰 A cluster can contain up to 8 SVC nodes.
򐂰 The power and serial connection from a node must be connected to the same UPS,
otherwise the node will not boot.
򐂰 The UPS in SVC 3.1 and 4.1 can be mixed with UPSs from earlier SVC versions, but the
UPS rules above have to be followed, and SVC nodes in the same I/O group must be
attached to the same type of UPS, though not to the same UPS.
򐂰 8F2 and 8F4 hardware models must be connected to a 5115 UPS; they will not boot with
a 5125 UPS.

Important: Do not share the SVC UPS with any other devices.



Figure 3-3 shows a layout sample within a rack.

Figure 3-3 Sample rack layout

3.2.3 Cable connections


Complete a cable connection table to document all of the connections required for the setup:
򐂰 Nodes
򐂰 UPS
򐂰 Ethernet
򐂰 Fibre Channel ports
򐂰 Master console



Parts of a typical planning chart are shown in Figure 3-4 and Figure 3-5.

Figure 3-4 Cable connection table

Figure 3-5 Master Console

3.3 SAN planning and configuration


SAN storage systems using the SVC can be configured with two to eight SVC nodes,
arranged in an SVC cluster. These are attached to the SAN fabric, along with disk
subsystems and host systems. The SAN fabric is zoned to allow the SVC nodes to “see”
each other and the disk subsystems, and to allow the hosts to “see” the SVC nodes. The hosts are
not able to directly “see” or operate LUNs on the disk subsystems that are assigned to the
SVC cluster. The SVC nodes within an SVC cluster must be able to see each other and all
the storage assigned to the SVC cluster.

The zoning capabilities of the SAN switch are used to create these distinct zones. The SVC in
Release 4 supports 1 Gbps, 2 Gbps, or 4 Gbps Fibre Channel fabric. This depends on the
hardware platform and on the switch where the SVC is connected.

All SVC nodes in the SVC cluster are connected to the same SANs, and present virtual disks
to the hosts. These virtual disks are created from managed disks presented by the disk
subsystems. There are two distinct zones in the fabric:
򐂰 Host zones, to allow host ports to see and address the SVC nodes. There can be multiple
host zones. See 3.3.3, “General design considerations with the SVC” on page 36 for more
information.
򐂰 One disk zone in which the SVC nodes can see and address the LUNs presented by the
disk subsystems.



Hosts are not permitted to operate on the disk subsystem LUNs directly if the LUNs are
assigned to the SVC. All data transfer happens through the SVC nodes. Under some
circumstances, a disk subsystem can present LUNs to both the SVC (as managed disks,
which it then virtualizes to hosts) and to other hosts in the SAN. There are some configuration
limitations as to how this can be achieved.

Figure 3-6 shows the data flow across the physical topology.

Figure 3-6 SVC physical topology



Logically, the three zones can be thought of as three separate logical SANs, leading to the
diagram shown in Figure 3-7.


Figure 3-7 SVC logical topology

3.3.1 SAN definitions


The following definitions are used in this section.

ISL hop
An interswitch link (ISL) is a connection between two switches, and is counted as an “ISL
hop.” The number of “hops” is always counted on the shortest route between two N-ports
(device connections). In an SVC environment, the number of ISL hops is counted on the
shortest route between the pair of nodes farthest apart. It measures distance only in terms of
ISLs in the fabric.

Oversubscription
Oversubscription is the ratio of the sum of the traffic on the initiator N-port connections to
the traffic on the most heavily loaded ISL (or ISLs, where more than one is used between
these switches). This assumes a symmetrical network, and a specific workload
applied evenly from all initiators and directed evenly to all targets. A symmetrical network
means that all the initiators are connected at the same level, and all the controllers are
connected at the same level.

As an example, on a 16-port switch where there are 14 host connections going through two
ISL connections, the oversubscription is 14:2, or 7:1 (14/2), with seven hosts “sharing” one
ISL. In the SVC environment, the oversubscription on each ISL must not exceed six (6:1).
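
As a minimal illustration, the following Python sketch reduces an initiator-to-ISL ratio and
checks it against the limit, using the worked example above.

from math import gcd

def oversubscription(initiator_ports, isl_count):
    """Reduce the initiator-to-ISL ratio, e.g. 14 hosts over 2 ISLs -> 7:1."""
    g = gcd(initiator_ports, isl_count)
    return (initiator_ports // g, isl_count // g)

ratio = oversubscription(14, 2)
print(f"{ratio[0]}:{ratio[1]}")   # 7:1
print(ratio[0] / ratio[1] <= 6)   # False - this exceeds the SVC limit of 6:1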



Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so
no matter what component fails, data traffic will continue. Connectivity between the devices
within the SAN is maintained, albeit possibly with degraded performance, when an error has
occurred. A redundant SAN design is normally achieved by splitting the SAN into two
independent counterpart SANs (two SAN fabrics), so even if one counterpart SAN is
destroyed, the other counterpart SAN keeps functioning.

Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN
provides all the connectivity of the redundant SAN, but without the 100% redundancy. An
SVC node is typically connected to a redundant SAN made out of two counterpart SANs. A
counterpart SAN is often called a SAN fabric. For example, if you have only one switch, this is
one fabric, or one counterpart SAN. However, if you have two switches but they are not
connected together, you have two fabrics, or two counterpart SANs, and one redundant SAN
if the devices are connected to both SANs.

Local fabric
Since the SVC supports remote copy, there might be significant distances between the
components in the local cluster and those in the remote cluster. The local fabric is composed
of those SAN components (switches, cables, and so on), which connect the components
(nodes, hosts, switches) of the local cluster together.

Remote fabric
Since the SVC supports remote copy, there might be significant distances between the
components in the local cluster and those in the remote cluster. The remote fabric is
composed of those SAN components (switches, cables, and so on) which connect the
components (nodes, hosts, switches) of the remote cluster together.

Local and remote fabric interconnect


These are the SAN components that are used to connect the local and remote fabrics. They
might simply be single mode optical fibers driven by high-power GBICs or SFPs. Or they
might be other more sophisticated components such as channel extenders or special SFP
modules, or using the CNT Ultranet Edge Storage Router. This can be used to extend the
distance to thousands of kilometers. Performance will degrade as distance increases.

Fibre Channel port logins


This is the number of hosts that can see any one SVC node port. Some disk subsystems,
such as the IBM System Storage Enterprise Storage Server (ESS) for example, recommend
limiting the number of hosts that use each port, to prevent excessive queuing at that port.
Clearly, if the port fails or the path to that port fails, the host might fail over to another port and
the fan-in criteria might be exceeded in this degraded mode.

Channel extender
A channel extender is a device for long distance communication connecting other SAN fabric
components. Generally, these can involve protocol conversion to asynchronous transfer mode
(ATM) or Internet Protocol (IP) or some other long distance communication protocol.

3.3.2 Fibre Channel switches, fabrics, interswitch links, and hops


Each local or remote fabric must not contain more than three ISL hops. Any
connected to a remote fabric for Metro Mirror, the hop count between a local node and a
remote node must not exceed seven.



For example, node A in fabric A wishes to connect to node B in fabric B. Within fabric A, node
A takes two hops to reach the ISL that connects fabric A to fabric B, so the hop count is two
at this point. Traversing the ISL between fabric A and fabric B takes the hop count to
three. Once in fabric B, it takes three hops to reach node B. The hop count total is
six (2+1+3), which is within our limits.

Another example would be where three hops have been used within fabric A, and three hops
have been used in fabric B. This means that there can only be one hop between fabric A and
fabric B, otherwise the supported hop count limit of seven will be exceeded. Alternatively,
where, for example, fabric A consisted of one hop, and fabric B also consisted of one hop,
then this would leave up to five hops that could be used to interconnect switches between
fabric A and fabric B.
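
The hop budget described above can be expressed as a simple check. The following Python
sketch is illustrative only; it simply encodes the counting rules from this section.

def hops_supported(local_hops, isl_hops, remote_hops):
    """Check an SVC Metro Mirror hop budget: at most three ISL hops inside
    each fabric, and at most seven hops end to end between a local node and
    a remote node."""
    total = local_hops + isl_hops + remote_hops
    return local_hops <= 3 and remote_hops <= 3 and total <= 7

print(hops_supported(2, 1, 3))  # True  - the 2+1+3 example above
print(hops_supported(3, 2, 3))  # False - eight hops end to end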

If multiple ISLs are available between switches, we recommend that these ISLs be trunked.
Follow the switch vendor's recommendations for trunking.

Note: The SVC supports the use of distance extender technology to increase the overall
distance between local and remote clusters; this includes DWDM and FCIP extenders. If
this extender technology involves a protocol conversion, then the local and remote fabrics
should be regarded as independent fabrics, limited to three hops each. The only restriction
on the interconnection between the two fabrics is the maximum latency allowed in the
distance extender technology.

For the latest information relating to distance limitations, visit the following Web site:
http://www-03.ibm.com/servers/storage/support/software/sanvc/

3.3.3 General design considerations with the SVC

Note: The SVC is not a RAID controller, so the data integrity has to be on a disk
subsystem that uses RAID to protect the data.

To ensure high availability in SVC installations, keep the following considerations in mind
when you design a SAN with the SVC.

For any SVC cluster


The following general guidelines apply:
򐂰 An SVC node, in this case the 4F2 and 8F2, always contains two host bus adapters
(HBAs), each of which has two Fibre Channel (FC) ports. If an HBA fails, this remains a
valid configuration, and the node operates in degraded mode. If an HBA is physically
removed from an SVC node, then the configuration is unsupported. The 8F4 has one HBA
and four ports.
򐂰 All nodes in a cluster must be on the same IP subnet. This is because the nodes in the
cluster must be able to assume the same cluster, or service IP, address.
򐂰 To maintain application uptime in the unlikely event of an individual SVC node failing, SVC
nodes are always deployed in pairs (I/O groups). If a node fails or is removed from the
configuration, the remaining node operates in a degraded mode, but the configuration is
still valid. The remaining node operates in write-through mode (the write cache is
disabled).
򐂰 The UPS must be in the same rack as the nodes, and a maximum of two nodes (in distinct
I/O groups) is allowed to be connected to each UPS. With UPSs shipped with SVC 2.1
and later, the UPS can only have one node connected.



򐂰 A node must be in the same rack as the UPS from which it is supplied.
򐂰 The Fibre Channel SAN connections between the SVC node and the switches are optical
fiber. These connections can run at either 1 Gbps, 2 Gbps, or 4Gbps depending on your
SVC and switch hardware. The 8F4 SVC nodes autonegotiate the connection speed with
the switch. The 4F2 and 8F2 nodes are capable of a maximum of 2 Gbps, which is
determined by the cluster speed.
򐂰 SVC node ports must be connected to the Fibre Channel fabric only. Direct connections
between SVC and host, or SVC and disk subsystem, are unsupported.
򐂰 We recommend that the two nodes within an I/O group be co-located; co-location is also
recommended for an SVC cluster as a whole (all nodes in a cluster should be located
close to one another, within the same set of racks, and within the same room or adjacent
rooms, for ease of service and maintenance). An SVC cluster can be connected (via the
SAN fabric switches) to application hosts, disk subsystems, or other SVC clusters through
short wave optical FC connections only; long wave connections are no longer supported.
Distances of up to 150 m (short wave 4 Gbps), 300 m (short wave 2 Gbps), or 500 m
(short wave 1 Gbps) are supported between the cluster and the host, and between the
cluster and the disk subsystem. Longer distances are supported between SVC clusters
when using intercluster Metro or Global Mirror.
򐂰 A cluster should be regarded as a single entity for disaster recovery purposes. This
includes the disk subsystem that is providing the quorum disks for that cluster. This means
that the cluster and the quorum disks should be co-located. We do not recommend
locating the components of a single cluster in different physical locations for the purpose
of disaster recovery, because this might lead to issues over maintenance, service, and
quorum disk management.

For multiple SVC clusters


Two SVC clusters cannot share the same disk subsystem; sharing one can result in data
loss. If the same MDisk becomes visible on two different SVC clusters, this is an error that
can cause data corruption.

For the SAN fabric


The following guidelines apply:
򐂰 The Fibre Channel switch must be zoned to permit the hosts to see the SVC nodes, and
the SVC nodes to see the disk subsystems. The SVC nodes within a cluster must be able
to see each other, the disk subsystems, and the front-end host HBAs.
򐂰 Mixed speeds are permitted within the fabric, but not for intracluster communication. You
can use lower speeds to extend distance or to make use of 1 Gbps components.
򐂰 Each of the local or remote fabrics should not contain more than three ISL hops within
each fabric. Operation with more ISLs is unsupported. When a local and a remote fabric
are connected together for remote copy purposes there should only be one ISL hop
between the two SVC clusters. This means that some ISLs can be used in a cascaded
switch link between local and remote clusters, provided that the local and remote cluster
internal ISL count is less than three. This gives a maximum of seven ISL hops in an SVC
environment with both local and remote fabrics.
򐂰 The switch configuration in an SVC fabric must comply with the switch manufacturer’s
configuration rules. This can impose restrictions on the switch configuration. For example,
a switch manufacturer might limit the number of supported switches in a SAN. Operation
outside the switch manufacturer’s rules is not supported.
򐂰 The SAN contains only supported switches as listed on the Web at:
http://www.ibm.com/storage/support/2145



Operation with other switches is unsupported.
򐂰 Host HBAs in dissimilar hosts or dissimilar HBAs in the same host need to be in separate
zones. For example, if you have AIX and Microsoft hosts, they need to be in separate
zones. Here dissimilar means that the hosts are running different operating systems or
use different hardware platforms. Therefore, different levels of the same operating system
are regarded as similar. This is a SAN interoperability issue rather than an SVC
requirement.
򐂰 We recommend that the host zones contain only one initiator (HBA) each, and as many
SVC node ports as you need, depending on the high availability and performance you
want to have from your configuration.

Note: In code version 3.1 and later, there is a new command, svcinfo lsfabric, as
described in “SAN debugging” on page 242, in order to help debug any zoning problem.

Disk subsystem guidelines


The following guidelines apply:
򐂰 In the SAN, disk subsystems are always connected to SAN switches and nothing else.
Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN. For example, in a
DS4000 configuration in which the DS4000 contains two redundant controllers, only two
controller minihubs are normally used. This means that Controller A in the DS4000 is
connected to counterpart SAN A, and controller B in the DS4000 is connected to
counterpart SAN B. Operation with direct connections between host and controller is
unsupported.
򐂰 The SVC is configured to manage LUNs exported only by disk subsystems as listed on the
Web at:
http://www.ibm.com/storage/support/2145
Operation with other disk subsystems is unsupported.
򐂰 All SVC nodes in an SVC cluster must be able to see the same set of disk subsystem
ports on each disk subsystem controller. Operation in a mode where two nodes see a
different set of ports on the same controller becomes degraded. The system logs errors
requesting a repair action. This can occur if inappropriate zoning was applied to the fabric.
It can also occur if inappropriate LUN masking is used. This has important implications for
disk subsystem, such as DS4000, which imposes exclusivity rules on which HBA world
wide names (WWNs) a storage partition can be mapped to. It is up to you to check that the
planned configuration is supported. You can find the supported hardware list at:
http://www.ibm.com/storage/support/2145

Host and application servers guidelines


The following guidelines apply:
򐂰 Each SVC node presents a virtual disk (VDisk) to the SAN through four paths. Since in
normal operation two nodes are used to provide redundant paths to the same storage, this
means that a host with two HBAs can see eight paths to each LUN presented by the SVC.
We suggest using zoning to limit the pathing from a minimum of two paths to the
maximum available of eight paths, depending on the kind of high availability and
performance you want to have in your configuration.
In our implementation, we use zoning to limit the pathing to four paths. The hosts must run
a multipathing device driver to resolve this back to a single device. The multipathing driver
supported and delivered by SVC is the IBM Subsystem Device Driver (SDD). Native



Multi-path I/O (MPIO) drivers on selected hosts are supported. For operating system
specific information about MPIO support, see:
http://www.ibm.com/storage/support/2145
򐂰 The number of paths from the SVC nodes to a host must not exceed eight, even though
this is not the maximum number of paths that SDD can handle. The maximum number of
host HBA ports must not exceed four. Without any switch zoning, note the following
calculation:
Number of SVC ports x number of HBA ports on host = number of paths
8 SVC ports x 4 (max) host ports = 32 paths
To restrict the number of paths to a host, zone the switches so that each host HBA port
sees only as many SVC node ports as you need from each node of the I/O groups that
serve the host’s VDisks, depending on the level of high availability and performance you
require (see the sketch after this list).
򐂰 If a host has multiple HBA ports, then each port should be zoned to a different HBA set of
SVC ports to maximize high availability and performance.
򐂰 To configure more than 256 hosts, you need to configure host-to-I/O-group mappings
on the SVC. Each I/O group can contain a maximum of 256 hosts, so it is possible to
create 1024 host objects on an eight node SVC cluster. The mappings can be configured
using the svctask mkhost, svctask addhostiogrp and svctask rmhostiogrp commands.
The mappings can be viewed using the svcinfo lshostiogrp and svcinfo lsiogrphost
commands. VDisks can only be mapped to a host which is associated with the I/O Group
to which the VDisk belongs.
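
To make the path arithmetic above concrete, the following Python sketch computes the
number of paths a host sees to a VDisk for a given zoning layout (referred to in the list
above). It is a conceptual illustration only; the function and parameter names are ours, not
SVC terminology.

def paths_per_vdisk(host_hba_ports, svc_ports_zoned_per_node, nodes_per_iogrp=2):
    """Paths a host sees to one VDisk: each zoned host HBA port reaches the
    zoned SVC ports on both nodes of the VDisk's I/O group."""
    return host_hba_ports * svc_ports_zoned_per_node * nodes_per_iogrp

# Two host HBA ports, each zoned to one port per SVC node: 4 paths per VDisk.
print(paths_per_vdisk(host_hba_ports=2, svc_ports_zoned_per_node=1))  # 4

# Two host HBA ports, each zoned to two ports per SVC node: 8 paths (the maximum).
print(paths_per_vdisk(host_hba_ports=2, svc_ports_zoned_per_node=2))  # 8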

For management
The following guidelines apply:
򐂰 In addition to a Fibre Channel connection, each device has an Ethernet connection for
configuration and error reporting. These connections are aggregated together through an
Ethernet switch.
򐂰 All nodes in an SVC cluster must be in the same IP subnet. This is because the node in
the SVC cluster must be able to assume the same SVC cluster IP address or SVC service
IP address.
򐂰 If IBM System Storage Productivity Center for Fabric on the master console is used to
monitor device status on the SAN, the master console must only be in the zones that you
want Productivity Center for Fabric to monitor. If you include the master console in the
storage zone, this might affect the boot time for the master console. The master console
is not intended to be used for any other purpose, and therefore does not need storage
from the SAN/SVC.

3.3.4 Boot support


The SVC supports SAN boot for AIX and Windows 2003 using MPIO, HP-UX by using
PVLinks as the multipathing software for the boot device and Solaris 9 running Veritas
Volume Manager/DMP, but the SAN boot support could change from time to time, so we
recommend regularly checking the following Web site:
http://www-03.ibm.com/servers/storage/software/virtualization/svc/interop.html

3.3.5 Configuration saving


The configuration data for the SVC cluster is hardened in each node by the advanced
reliability, availability, and serviceability (RAS) design and integration with the uninterruptible
power supply. We recommend that you save the configuration externally whenever changes,
such as adding new nodes or disk subsystems, have been made to the cluster.



3.3.6 High availability SAN design and configuration rules with SVC
Figure 3-8 shows a basic two node configuration. To provide high availability, the SVC should
be configured in redundant SAN fabrics. Our configuration, as shown in Figure 3-8, is a
redundant fabric made up of two 16-port switches.


Figure 3-8 Simple two-node SVC high availability configuration

3.4 Zoning
To manage the zoning in the fabric, port zoning or WWN zoning can be used.

For the latest information, regularly check whether your configuration is supported at the
following Web site:
http://www.ibm.com/storage/support/2145

Figure 3-9 shows an example of a host zone where each host adapter port is zoned to two
SVC I/O group ports, one from each node in the I/O group. The labels 11, 12, 13, and 14
denote node 1 FC ports 1 through 4 (the first digit is the node, the second digit is the port).
The blue zone consists of A1, 13, and 22, meaning host adapter FC port 1, node 1 FC port 3,
and node 2 FC port 2. The position in the switch where the FC port is connected is used
when making the zone definitions. For example, 13 is connected to the switch with domain
ID 11, port 3, written (11,3); remember that port numbering in the switch starts at zero.



Figure 3-9 Host zoning example

With this zoning configuration, each VDisk has four paths to the host. The next host is zoned
to the remaining, not yet used SVC node ports (11, 14, 21, and 24); in this way we perform
manual load balancing across the SVC node ports in the cluster. Additional hosts reuse
these port sets in a round-robin fashion.
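
When many hosts must be zoned, the manual load balancing just described is easy to
script. The following Python sketch alternates hosts between the two port sets; it is purely
illustrative, the port labels follow the node/port convention of Figure 3-9, and the particular
split of ports between host adapters is one possible choice, not a mandated one.

# Two alternating port sets; each adapter gets one port from each SVC node
# (label convention: first digit = node, second digit = FC port).
PORT_SETS = [
    {"FC0": ["13", "22"], "FC1": ["12", "23"]},  # first, third, ... host
    {"FC0": ["11", "24"], "FC1": ["14", "21"]},  # second, fourth, ... host
]

def host_zones(host_names):
    """Yield (zone_name, members) per host adapter, alternating the SVC port
    set per host to spread load across all eight node ports."""
    for i, host in enumerate(host_names):
        ports = PORT_SETS[i % len(PORT_SETS)]
        for adapter, svc_ports in ports.items():
            # Zone names follow the HOSTNAME_FCx_SVC1 convention of 3.5.
            yield (f"{host}_{adapter}_SVC1", [f"{host}_{adapter}"] + svc_ports)

for zone in host_zones(["AIX01", "W2K3_1"]):
    print(zone)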



Figure 3-10 shows an example of a host zone where each host adapter port is zoned to four
SVC I/O group ports, two from each node in the I/O group. Again, the labels 11, 12, 13, and
14 denote node 1 FC ports 1 through 4. The blue zone consists of A1, 11, 13, 22, and 24,
meaning host adapter FC port 1, node 1 FC ports 1 and 3, and node 2 FC ports 2 and 4. The
position in the switch where the FC port is connected is used when making the zone
definitions. For example, 13 is connected to the switch with domain ID 11, port 3, written
(11,3); remember that port numbering in the switch starts at zero.

Figure 3-10 Host zoning example

With this zoning configuration, each VDisk has eight paths to the host. Every new host that is
added uses the same ports.

3.5 Naming conventions


Naming conventions in the open systems environment have always been a challenge. The
challenges come from finding naming conventions that will continue to be steady as changes
occur to the environment. Everyone has their own way of naming equipment in an IT
infrastructure. When working in a SAN environment where an SVC cluster is installed, we
recommend assigning names that help in locating and identifying equipment, and that provide
information about connections so any changes and troubleshooting are easier.



One way to do this is to include site name, equipment name, and adapter information in the
naming convention. As an example, in a two-site solution, site A and site B, all equipment in
site A is identified by odd numbers, while all equipment at site B is identified by even
numbers. Such an example could look like this: SVC1N1P1, where SVC1 is the name of the
equipment, number 1 indicates that it is located in site A, N1 is the node name for the SVC
cluster, and the P1 is the SVC FC port number. On site B, the name would have been
SVC2N1P1. (Note that names stemming from popular culture can be amusing, but do not
always give any meaningful information about what the equipment is or where it is located,
which has several disadvantages.)

Figure 3-11 shows an example of a naming convention.


Figure 3-11 An example of name convention in dual SVC setup

Below, we list example names in an SVC cluster setup, which you can use as a basis for
building your own naming convention.

SVC naming convention examples:


򐂰 SVC1N1 = SVC cluster 1, node 1
򐂰 SVC2N1 = SVC cluster 2, node 1
򐂰 SVC2N2P3 = SVC cluster 2, node 2, FC port 3

Disk subsystem name convention examples:


򐂰 DS4301_A
򐂰 DS4302_B
򐂰 EVA301_A
򐂰 ESS01B3A1



Here is an explanation of names for the disk subsystem:
򐂰 DS4301_A_1, where DS43 tells you the type of storage back-end, here a DS4300
򐂰 DS4301_A_1, where 01 is the number of this DS4300 in your installation, and also gives
you the information that it is placed at site A (1 is an odd number).
򐂰 DS4301_A_1, where _A is the name of the controller in the DS4300
򐂰 DS4301_A_1, where _1 is the FC port number on controller A. The port number is only
used in SAN zoning information, not on the SVC, where we recommend simply DS4301
as the name.
򐂰 DS4302_B then means a DS4300, located at site B, controller B.
򐂰 EVA301_A then means an EVA3000, located at site A, controller A.
򐂰 ESS01B3A1 then means ESS01, located at site A, bay 3, adapter 1, reflecting the ESS
storage bay and adapter locations.

SVC cluster 1, Master console FC ports 1 and 2:


򐂰 SVC1MCP1
򐂰 SVC1MCP2

Host Fibre Channel ports FC0 and FC1:


򐂰 HOSTNAMExx_FC0
򐂰 HOSTNAMExx_FC1

Here, xx is a number that identifies the server and gives information about its location. For
example, AIX01_fc0 and AIX01_fc1 tell you the type of server, which server it is, and where
it is located; in this example, at site A.

SAN switch names:


򐂰 SW11, where SW indicates a switch, the first digit gives the fabric ID, and the last digit
gives the location; the two digits combined also serve as the domain ID of the
switch.
򐂰 SW12 is then SAN switch 2 in fabric 1, located at site B, domain id 12.
򐂰 SW21 is then SAN switch 1 in fabric 2, located at site A, domain id 21.

When using these kinds of names, we have a limit of 10 switches in a fabric, so if we need
more switches, we will use two digits for the switch number, for example, SW101 and so on.

SVC zone names:


򐂰 SVC1 which includes SVC1N1P1, SVC1N1P2, SVC1N1P3, SVC1N1P4, SVC1N2P1 and
so on for all SVC1 nodes.
򐂰 SVC2 which includes all SVC2 node ports.

Storage zone names:


򐂰 SVC1_DS4301
򐂰 SVC2_DS4302

Host zone names:


򐂰 HOSTNAME_FC0_SVC1
򐂰 HOSTNAME_FC1_SVC1



Master console zone name:
򐂰 SVC1MCP1_SVC1

Metro or Global Mirror zone name:


򐂰 SVC1_SVC2
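
A convention like this is also easy to generate and validate programmatically. The following
Python sketch builds names according to the scheme above; it is simply an illustration of the
convention, not part of any SVC tooling.

def svc_port_name(cluster, node, port):
    """SVC<cluster>N<node>P<port>, e.g. SVC2N2P3. Per the convention above,
    odd cluster numbers indicate site A and even numbers site B."""
    return f"SVC{cluster}N{node}P{port}"

def switch_name(fabric, location):
    """SW<fabric><location>; the two digits together also serve as the
    switch domain ID (for example, SW21 has domain ID 21)."""
    return f"SW{fabric}{location}"

print(svc_port_name(2, 2, 3))  # SVC2N2P3 -> cluster 2 (site B), node 2, port 3
print(switch_name(2, 1))       # SW21     -> fabric 2, site A, domain ID 21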

Changing domain IDs can affect your zoning setup; therefore, change them before you
change your zoning information. A standard SAN address is made up of the switch domain
ID, the port number, and the address of the port:
xxppaa

Here, xx is the Domain ID, pp is the Area (port number) and aa is the ALPA address of the
port.

Brocade did not have SAN hubs or switches with more than 16 ports in the beginning, and
used the first p to specify the type of the product, hub or switch. Because switches today
need to support more than 16 ports, Brocade created a function called core PID, so that the
addressing in the switch can be changed to use both digits of “pp” as the port
number.

A change of the domain ID or core PID disrupts some UNIX operating systems and also
affects how the UNIX operating system derives the device SCSI address. Here is an
example of a Brocade port address:
xxypaa

Here, xx is the domain, y is the PID, p is the port, and aa is the address, 00 for N ports or loop
address for NL ports. This is only for Brocade/IBM switches with 16 or fewer ports; in
switches with more ports, the start address is always 00. Consider this example for a 16 port
Brocade/IBM switch:
010100 domain=01, PID = 0, port=1

Some older Brocade/IBM switches have the PID set to 1 and switches that do not merge with
switches set to 0. This is from the early days of Fibre Channel. Now the switches need to use
both bytes for port addressing. See the IBM SAN Survival Guide, SG24-6143, for more
information regarding switches.

Note: A change of the domain ID or core PID disrupts some UNIX operating systems.
Make sure you check first before you attempt this when storage is already defined and in
use.



3.5.1 Dual room high availability configuration with the SVC
Figure 3-12 shows a high availability configuration of the SVC when two SVC clusters are in
two different rooms/locations. We recommend this configuration for maximum availability.

Figure 3-12 High availability SAN Volume Controller cluster in a two-site configuration

3.5.2 Local and remote SAN fabrics with SVC


The SVC supports both intracluster and intercluster Metro and Global Mirror. From the
intracluster point of view, the only reasonable candidate for a Metro or Global Mirror operation
is the other node in the same I/O group. Intercluster operation needs a pair of clusters,
separated by a number of moderately high bandwidth links, which means that the bandwidth
should be large enough to handle the writes from the Metro or Global Mirror process, but no
reads are done over the links. Here we describe the configuration that is shown in
Figure 3-12:
򐂰 In the SVC, the supported local and remote fabric interconnect is a single ISL hop
between a switch in the local fabric and a switch in the remote fabric, over single mode
fiber up to 10 km in length. Check the support Web site for any changes to this at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/installing.html
򐂰 SVC 4.1 supports operation with Fibre Channel DWDM extenders and SAN Routers. The
supported distances depend on the SAN fabric vendor, but the latency must not exceed a
68 ms round-trip delay (34 ms one way).
򐂰 In Metro or Global Mirror configurations, additional zones are required that contain only
the local nodes and the remote nodes. It is unsupported to create a zone that presents
both the local and remote disk subsystem and either local and remote nodes or both.



3.5.3 Technologies for extending the distance between two SVC clusters
Technologies for extending the distance between two SVC clusters can be divided into two
categories:
򐂰 Fibre Channel Extenders
򐂰 SAN Routers

Fibre Channel Extenders


Fibre Channel extenders simply extend a Fibre Channel link by transmitting Fibre Channel
packets across long links without changing the contents of those packets.

Here is a list of examples:


򐂰 FCIP extenders implemented in CISCO MDS 9500 series switches
򐂰 CNT Ultranet Edge Storage Router
򐂰 DWDM,CWDM and longwave SFP extenders
򐂰 Any Multiprotocol router (for example, Brocade Multiprotocol Routers) only when used in
FCIP tunnelling mode.

The maximum supported one way latency is 34 ms. Any Fibre Channel extender technology
is supported, provided that it is planned, installed, and tested to meet the following
requirements:
򐂰 The one-way latency between sites must not exceed 34 ms. Note that 1 ms equates to
approximately 100 km to 150 km, but this will depend on the type of equipment used and
the configuration.
򐂰 The bandwidth between sites must be sized to meet peak workload requirements while
maintaining the maximum latency of 34 ms.
򐂰 If the link between sites is configured with redundancy so that it can tolerate single
failures, then the link must be sized so that the bandwidth and the latency statements
continue to hold true even during such single failure conditions.
򐂰 A channel extender can be used only for intercluster links; intracluster use is not
supported.
򐂰 The entire configuration must be tested with the expected peak workload.
򐂰 The configuration must be tested to simulate a failure of the primary site and a
subsequent failback from the secondary site to the primary site (to test recovery
procedures).
򐂰 The configuration must be tested to confirm that any failover mechanism in the
intercluster links interoperates correctly with the SVC.
򐂰 Pay particular attention to compatibility between switches from different vendors, and
between the switches and the extender.
򐂰 Latency and bandwidth measurements must be made during installation, and the records
must be kept. Testing should be repeated before and after any significant change to the
infrastructure providing the intercluster links.

SAN Routers
SAN Routers extend the scope of a SAN by providing “virtual nPorts” on two or more SANs.

The router arranges that traffic at one virtual nPort is propagated to the other virtual nPort, but
the two Fibre Channel fabrics are independent of one another. Thus nPorts on each of the
fabrics cannot directly log into each other.



At the time of writing, IBM supports the following systems:
򐂰 McDATA 1620 and 2640: These are supported up to a one way latency of 10 ms. Note that
1 ms equates to approximately 100 km to 150 km, but this will depend on the type of
equipment used, and the configuration.
򐂰 Cisco MDS 9000 series Inter-VSAN Routing: The use of Inter-VSAN Routing in
configurations using MDS 9000 series switches is supported with a one way latency of up
to 10 ms. Again, 1 ms equates to approximately 100 km to 150 km.

You can find the supported list of inter-cluster extenders and routers on the Web at the
following address:
http://www.ibm.com/storage/support/2145

Important: It is the latency that IBM will support, not the distance. The above distances
are provided for illustrative purposes only.

3.6 SVC disk subsystem planning


This section describes the various types of disk groups and their relationships.

In the configuration shown in Figure 3-13, the disk subsystem is presenting a LUN to the SVC
and another LUN to host B. The SVC presents the VDisks created from the MDisk to the host
A. Since the disk subsystem is a DS4000, host B would have RDAC installed to support the
direct attachment, and SDD is installed on host A to support the attachment to the SVC. This
is a supported configuration.


Figure 3-13 Disk subsystem shared



With the ESS, you can attach directly to an ESS LUN and to a VDisk from the SVC that
comes from the ESS, as long as the same LUN is not assigned to both the host and the SVC.
The host uses SDD to access the LUNs presented from both the ESS and the SVC. This is a
supported configuration and is shown in Figure 3-14.


Figure 3-14 Host connected to ESS and SVC

In the configuration shown in Figure 3-15, the host needs to have both the RDAC driver
installed for access to the DS4000 and the SDD installed to access the SVC.

The SVC supports the use of the IBM SDD, native Multi-path I/O (MPIO) drivers on selected
operating systems, and some other operating system specific software.

Check for supported configurations on the Web at:


http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002471




Figure 3-15 DS4000 supported configuration

3.6.1 Block virtualization


The managed disk group (MDG) is at the center of the many-to-many relationship between
managed disks and virtual disks. It acts as a container into which managed disks contribute
chunks of disk blocks, known as extents, and from which virtual disks consume these extents
of storage. MDisks in the SVC are LUNs assigned from the disk subsystem to the SVC, and
can be either managed or unmanaged. A managed MDisk is an MDisk assigned to an MDG.
򐂰 MDGs are collections of managed disks. A managed disk is contained within exactly one
MDG.
򐂰 An SVC supports up to 128 MDGs.
򐂰 There is no limit to the number of virtual disks that can be in an MDG other than the limit
per cluster.
򐂰 MDGs are collections of virtual disks. Under normal circumstances, a virtual disk is
associated with exactly one MDG. The exception to this is when a virtual disk is migrated
between MDGs.

3.6.2 MDGs, I/O groups, virtual disks, and managed disks


Figure 3-16 shows three disk subsystems that were configured to provide a number of LUNs.
In SVC terminology, each of these logical units or LUNs is a managed disk. Disk subsystem A
contains two managed disks, known as M1 and M2. Disk subsystem B contains managed
disks M3 and M4. Disk subsystem C contains managed disks M5 and M6.



Figure 3-16 Disk relationships (I/O groups PQ and RS contain SVC nodes P, Q, R, and S and virtual disks V1 to V7; MDGs X, Y, and Z contain managed disks M1 to M6 from RAID controllers A, B, and C)

Figure 3-16 also shows three MDGs, X, Y, and Z. Each managed disk is contained within a
single MDG, and that one MDG can span controllers. SVC supports an arbitrary relationship
between disk subsystems and MDGs. The MDG simply contains a collection of managed
disks from the set of available controllers.

Note: We recommend that only LUNs from one disk subsystem form part of an MDG.

Additionally, Figure 3-16 shows virtual disks numbered V1 to V7. Each virtual disk is
contained entirely within an MDG. This is the normal situation. The only exception to this is
during migration.

Virtual disks are also members of another collection, namely I/O groups. Figure 3-16 shows
two I/O groups named PQ and RS. An I/O group contains an arbitrary set of virtual disks and
exactly two SVC nodes (unless one has failed). The I/O group defines which nodes support
I/O access from hosts.

There is no fixed relationship between I/O groups and MDGs. An individual virtual disk is
normally a member of one MDG and one I/O group:
򐂰 The MDG defines which managed disks from the disk subsystem make up the virtual
disk.
򐂰 The I/O group defines which SVC nodes provide I/O access to the virtual disk.



3.6.3 Extents
A virtual disk occupies an integer number of extents. Its length does not need to be an integer
multiple of the extent size, but must be an integer multiple of the block size. Any space left
over between the last logical block in the virtual disk and the end of the last extent in the
virtual disk is unused.

You can define a VDisk with the smallest granularity of 512 bytes (a block). However, an entire
extent is reserved even if it is only partially used. An extent is never shared between virtual
disks. Each extent on a managed disk is contained within at most one virtual disk. Free
extents are not associated with any virtual disk.

SVC supports extent sizes of 16 MB, 32 MB, 64 MB, 128 MB, 256 MB, and 512 MB. The
extent size is a property of the MDG, which is set when the MDG is created. It cannot be
changed and all managed disks, which are contained in the MDG, have the same extent size,
so all virtual disks associated with the MDG must also have the same extent size. Table 3-1
shows the relationship between the extent size and the maximum capacity of the cluster.

Table 3-1 Extent size and maximum cluster capacities

Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
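
The maximum capacity scales linearly with the extent size because a cluster can address only a fixed number of extents. Assuming a cluster-wide limit of 2^22 (4,194,304) extents, the figures in Table 3-1 follow directly; for example, 4,194,304 × 16 MB = 64 TB, and 4,194,304 × 512 MB = 2 PB.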

3.6.4 Image mode virtual disk


Image mode provides a direct block-for-block translation from the managed disk to the virtual
disk, with no virtualization. This mode is intended to allow virtualization of managed disks that
already contain data, written directly to the disk subsystem rather than through an SVC node,
on a pre-virtualized disk subsystem. When an image mode virtual disk is created, it directly
corresponds to the managed disk from which it is created.

This allows you to insert an SVC into the data path of an existing storage configuration with
minimal downtime. After the SVC is inserted into the data path using image mode, you can
use the migration facilities to migrate the data to managed mode and rearrange the data while
an application is accessing the data.

When you create an image mode virtual disk, the managed disk specified must not be a
member of an MDG. The managed disk is made a member of the specified MDG as a result
of the creation of the image mode virtual disk.

Image mode provides direct mapping from managed disk to virtual disk. You can think of it as
a property of both virtual disks and managed disks.

The capacity specified must be less than or equal to the size of the managed disk. If it is less
than the size of the managed disk, then the unused space in the managed disk is not
available for use in any other virtual disk. There is no facility to specify an offset. Therefore,
logical block address (LBA) “N” on the resulting image mode virtual disk maps directly to LBA
“N” on the image mode managed disk. Image mode virtual disks have a minimum size of one
block (512 bytes) and always occupy at least one extent.



Image mode managed disks are members of an MDG, but do not contribute free extents to
the pool of free extents. Therefore, an image mode managed disk can have at most one
virtual disk associated with it.
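
For illustration only, an image mode virtual disk might be created with a CLI command similar to the following, where the MDG name MDG_IMAGE, the unmanaged MDisk mdisk5, and the VDisk name are examples; as described above, the MDisk becomes a member of the specified MDG as part of the operation:

svctask mkvdisk -mdiskgrp MDG_IMAGE -iogrp io_grp0 -vtype image -mdisk mdisk5 -name VD_IMAGE1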

3.6.5 Managed mode virtual disk


Disks operating in managed mode provide a full set of virtualization functions.

Within an MDG, the SVC supports an arbitrary relationship between extents on (managed
mode) virtual disks and extents on managed disks. Subject to the constraint that each
managed disk extent is contained in at most one virtual disk, each virtual disk extent maps to
exactly one managed disk extent (except while a migration is in progress).

Figure 3-17 shows virtual disk V, which is made up of a number of extents. Each of these
extents is mapped to an extent on one of the managed disks A, B, or C. The mapping table
stores the details of this indirection. You can see that some of the managed disk extents are
unused. That is to say, there is no virtual disk extent which maps to them. These unused
extents are available for use in creating new virtual disks, migration, expansion, and so on.

Figure 3-17 Simple view of block virtualization

Creating managed mode virtual disks


When a virtual disk is created, the SVC needs to know the policy to apply to create the initial
assignment of managed disk extents to virtual disk extents. The supported policies are listed
in the following sections. These policies are only used for the creation of a new virtual disk.
After the virtual disk is created, the policy has no effect and is not considered when making
decisions during migration operations.



Striped
When a virtual disk is created using a striped policy, its extents are allocated from the
specified ordered list of managed disks. The allocation algorithm starts with the first managed
disk in the ordered list and attempts to allocate an extent from it, then it moves to the next
disk, and so on, for each managed disk in turn. If the specified managed disk has no free
extents, then it misses its turn and the turn passes to the next managed disk in the list. When
the end of the list is reached, the algorithm loops back to the first disk in the list. Allocation
proceeds until all extents required have been allocated.

When selecting which extent to allocate from the chosen managed disk, the policy followed is
as described in 3.6.6, “Allocation of free extents” on page 57. This allocation policy leads to a
coarse grained striping. The granularity of the striping is at the extent level. This coarse
grained striping is unlikely to result in large bandwidth for sequential transfers but is likely to
spread the workload caused by random small transactions across the managed disks from
which the extents are allocated.

Wide striping increases the probability that the data on the virtual disk will be lost due to the
failure of one of the managed disks across which the virtual disk is striped. It is acceptable for
the list to contain only one disk; in this case, extents are allocated from a single disk as
described in 3.6.6, “Allocation of free extents” on page 57. Contrast this with the allocation
scheme for the sequential policy.

Sequential
When a virtual disk is created using a sequential policy, its extents are allocated from a single
specified managed disk. The SVC searches for regions of the target managed disk that
contain free extents that are sequential so the region is large enough to allocate the virtual
disk from completely sequential extents. If it finds more than one such region, it chooses the
smallest region which satisfies this condition. If it doesn’t find any suitable regions, creation of
the virtual disk fails.
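
As an illustrative sketch, the two policies map to the -vtype parameter of the mkvdisk command; the group, MDisk, and VDisk names here are examples only:

svctask mkvdisk -mdiskgrp MDG1_DS43 -iogrp io_grp0 -size 10 -unit gb -vtype striped -name VD_STRIPED1
svctask mkvdisk -mdiskgrp MDG1_DS43 -iogrp io_grp0 -size 10 -unit gb -vtype seq -mdisk mdisk3 -name VD_SEQ1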

Cache modes and cache disabled VDisks


Prior to SVC 3.1, enabling any copy services function in a RAID array controller for a LUN that
was being virtualized by the SVC was not supported, because the behavior of the write-back
cache in the SVC would have led to data corruption. With the advent of cache-disabled
VDisks, it becomes possible to enable copy services in the underlying RAID array controller
for LUNs that are virtualized by the SVC.

Using underlying controller remote copy with SVC cache-disabled VDisks
Where synchronous or asynchronous remote copy is used in the underlying storage
controller, the controller LUNs at both the source and destination must be mapped through
the SVC as image mode disks with the SVC cache disabled. Note that it is possible to access
either the source or the target of the remote copy from a host directly, rather than through the
SVC. The SVC copy services can be usefully employed with the image mode virtual disk
representing the primary of the controller remote copy relationship, but it would not make
sense to use SVC copy services with the VDisk at the secondary site, because the SVC does
not see the data flowing to this LUN through the controller.



Using underlying controller FlashCopy with SVC cache-disabled VDisks
Where FlashCopy is used in the underlying storage controller, the controller LUNs for both
the source and target must be mapped through the SVC as image mode disks with the SVC
cache disabled. Note that it is possible to access either the source or the target of the
FlashCopy from a host directly, rather than through the SVC. The SVC copy services can be
used with the VDisk representing the source of the controller FlashCopy relationship, but it
would not make sense to use SVC copy services with the VDisk representing the controller
FlashCopy target, because the SVC does not see the data flowing to this LUN through the
controller.

Controlling copy services on the underlying storage controller


Where a storage controller has a copy services interface that is accessed over an IP
connection (out-of-band), there will be little difference in the way that the copy services are
controlled when the SVC is added between the controller and the servers. Where a storage
controller has a copy services interface that is accessed in-band, it might still be possible to
control the copy services from the hosts through the in-band interface. This should be
addressed on a controller by controller basis.

As stated, with SVC version 3.1 and later you can choose whether you want read and write
operations to be stored in cache by specifying a cache mode. You must specify the cache
mode when you create the VDisk; after the VDisk is created, the cache mode cannot be
changed.

Table 3-2 describes two different types of cache modes for a VDisk.

Table 3-2 Cache mode parameters


Cache modes Description

readwrite All read and write I/O operations that are performed by the VDisk are stored in
cache. This is the default cache mode for all VDisks.

none All read and write I/O operations that are performed by the VDisk are not stored
in cache.

Figure 3-18 and Figure 3-19 show two different scenarios where you might use cache mode
none instead of readwrite.

The first scenario consists of an image mode VDisk related to an MDisk in a storage
subsystem, where the VDisk needs to be replicated using the storage subsystem's
synchronous or asynchronous remote copy function.



Figure 3-18 Remote copy scenario

The second scenario consists of a FlashCopy of an image mode VDisk related to an MDisk in
a storage subsystem, using the storage subsystem's FlashCopy functions.

Figure 3-19 FlashCopy scenario



In both scenarios, in order to ensure the data consistency between the two copies at any
given time, we suggest that you use cache mode none.
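
For illustration, the cache mode is set when the VDisk is created; in a sketch such as the following (all names are examples), an image mode VDisk is created with its cache disabled so that the underlying controller copy services can be used:

svctask mkvdisk -mdiskgrp MDG_IMAGE -iogrp io_grp0 -vtype image -mdisk mdisk8 -cache none -name VD_RC_PRIMARY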

3.6.6 Allocation of free extents


Migration operations and some of the virtualization operations require the allocation of a
specific number of extents from a specific set of managed disks. The algorithm used to
achieve this is described in the following sections.

Choosing the managed disk to allocate from


Where the set of managed disks to allocate extents from contains more than one disk, extents
are allocated from managed disks in a round robin fashion. If a managed disk has no free
extents when its turn arrives, then its turn is missed and the round robin moves to the next
managed disk in the set which has a free extent.

As the algorithm progresses, disks with no free extents on the previous pass of the round
robin are queried for free extents on each turn of the algorithm in case extents become free.

Choosing the extent to allocate from a specific managed disk


When an extent is to be allocated from a specific managed disk, the allocation policy is to
allocate the next free extent from a list of free extents held by the SVC cluster for the specific
managed disk.

3.6.7 Selecting MDGs


There is only one question you might ask relating to the selection of MDGs:
򐂰 From which MDisk group should I create my VDisk?

The answer to this question is that you need to keep in mind that an individual virtual disk is a
member of one MDG and one I/O group:
򐂰 The MDG defines which managed disks provided by the disk subsystem make up the
virtual disk.
򐂰 The I/O group defines which SVC nodes provide I/O access to the virtual disk.

Note: There is no fixed relationship between I/O groups and MDGs.

Therefore, you could define the virtual disks using the following considerations:
򐂰 Optimize the performance between the hosts and SVC by repartitioning the VDisks
between the different nodes of the SVC cluster. This means spreading the load equally on
the nodes in the SVC cluster.
򐂰 Get the level of performance, reliability, and capacity you require by choosing the MDG
that fulfils those demands for your VDisk with respect to performance, reliability, and
capacity. You can access any MDG from any node.

3.6.8 I/O handling and offline conditions


For a virtual disk to be online, all managed disks in the MDG or MDGs associated with the
virtual disk must be online. This applies to image mode virtual disks and managed mode
virtual disks. A virtual disk is offline if any managed disk in the MDG is offline, even if that
managed disk does not contribute any extents to the virtual disk in question, or the managed
disk has no allocated extents.



Note: Normally, a virtual disk is associated with just one MDG. However, for the duration
of a migration between MDGs, the virtual disk is associated with two MDGs. In this case,
the offline rules apply to both MDGs for the duration of the migration only.

Referring back to Figure 3-16 on page 51, this means that if managed disk M1 is taken offline
by disk subsystem A, virtual disks V1 and V2 are taken offline by the SVC.

This notion of offline and online is managed on a node basis. Therefore, if a condition arises
that causes one SVC node to see a managed disk offline, then the affected virtual disks are
taken offline on that node only, but will still be online at the other node.

For example, refer again to Figure 3-16 on page 51. If the SAN connection between disk
subsystem B and SVC node P were to fail, node P would lose contact with managed disks
M3 and M4. Since M3 is in MDG X and M4 is in MDG Y, all the virtual disks in MDGs X and Y
are taken offline on node P. Therefore, hosts accessing virtual disk V2 through node P see it
go offline, while hosts accessing V2 via node Q continue to see the virtual disk as online.
When using SDD, the paths to node P show offline while the paths to node Q show online,
and the host still has access to the virtual disks.

3.6.9 Quorum disks


A quorum disk is used to resolve tie-break situations, when the “voting set” of nodes disagree
on the current cluster state. The voting set is an overview of the SVC cluster configuration
running at a given point in time, and is the set of nodes and quorum disk which are
responsible for the integrity of the SVC cluster. On cluster creation, the voting set consists of
a single node with a unique ID of 1, which was used to create the cluster. When nodes are
integrated into the SVC cluster, they get added to the voting set, and when a node is removed
from the SVC cluster it will also be removed from the voting set. A failed node is considered
as a removed node, and is removed from the voting set.

When MDisks are added to the SVC cluster, the cluster checks each MDisk to see whether it
can be used as a quorum disk. If an MDisk fulfils the requirements, the SVC assigns the first
three such MDisks as quorum candidates, and one of them is selected as the active quorum
disk. If possible, the SVC places the quorum candidates on different disk subsystems. Once
the quorum disk has been selected, however, no attempt is made to ensure that the other
quorum candidates are presented via different disk subsystems. When the set of quorum
disk candidates has been chosen, it is fixed. A new quorum disk candidate is only chosen if:
򐂰 The administrator requests that a specific MDisk becomes a quorum disk using the
svctask setquorum command.
򐂰 An MDisk that is a quorum disk is deleted from an MDG.
򐂰 An MDisk that is a quorum disk changes to image mode.

An MDisk will not be replaced as a quorum disk candidate simply because it is offline.
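
For illustration, the following command sketch requests that an MDisk (here the illustrative mdisk9) becomes quorum disk candidate number 1:

svctask setquorum -quorum 1 mdisk9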

The cluster must contain at least half of its nodes in the voting set to function. A tie-break
situation can occur if exactly half the nodes in the cluster fail at the same time, or if the cluster
is divided so that exactly half the nodes in the cluster cannot communicate with the other half.

For example, in a cluster of four nodes, if any two nodes fail at the same time, or any two
cannot communicate with the other two, a tie-break condition exists and must be resolved. To
resolve the tie-break condition, a quorum disk is used. The cluster automatically chooses
three managed disks to be quorum disks, and one of these disks is used to settle a tie-break
condition. If a tie-break condition occurs, the first half of the cluster to access the quorum
disk after the split has occurred locks the disk and continues to operate. The other side stops.
This action prevents both sides from becoming inconsistent with each other.

In a two-site solution, we recommend using two SVC clusters, one at each site, and mirroring
the data by using either host mirror software/functions or by using SVC Metro or Global
Mirror. In a two-site solution with only one SVC cluster, you can have a situation where you
will lose access to the data. For example, in a four node SVC cluster, with two nodes at each
location, the quorum will only be located at one of the sites, and if that site “dies,” the
remaining two nodes cannot get access to the quorum disk, and will also shut down. As a
result, the entire SVC cluster is shut down, even though only one site is out. The same
applies in a two-node SVC cluster, if you put the two nodes in different locations or rooms.

Important: A cluster should be regarded as a single entity for disaster recovery purposes.
This means that the cluster and the quorum disk should be co-located.

Figure 3-20 and Figure 3-21 show two different scenarios with respect to quorum disk and
cluster co-location.

Figure 3-20 Bad scenario for quorum disk and cluster co-location (one SVC cluster with a single I/O group split across sites A and B; the quorum disk resides at only one site)



Figure 3-21 Correct HA scenario for quorum disk and cluster co-location (two independent SVC clusters, one per site, each with its own quorum disk)

3.6.10 Virtualization operations on virtual disks


You can perform several operations on virtual disks as explained in the following sections.

Expanding a virtual disk


A virtual disk can be expanded. The granularity of expansion is one block (512 bytes). If the
expansion requires the allocation of additional extents, then these are allocated to the virtual
disk from the managed disks specified using the algorithm described in “Allocation of free
extents” on page 57. Expanding a virtual disk using the sequential policy forces the
virtualization policy to be changed to striped.

A virtual disk can also be expanded by a single extent. This gives you the ability to expand the
virtual disk by selecting individual managed disk extents, allowing any desired mapping from
virtual to managed extents to be created. A security clear feature is provided to allow the
resulting additional space to be overwritten with zeros.

Image mode virtual disks cannot be expanded. They must first be migrated to managed
mode.

Warning: Not all operating systems can tolerate the expansion of a virtual disk. A reboot or
remount of the disk might be needed to use the additional space.
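
For illustration, a managed mode VDisk might be expanded by 5 GB with a command similar to the following (the VDisk name is an example):

svctask expandvdisksize -size 5 -unit gb VD_STRIPED1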



Reducing a virtual disk
A virtual disk can be shrunk. The granularity of shrinking is one block (512 bytes). If the shrink
operation allows extents to be freed, then these are returned to the pool of free extents for
allocation by later virtualization and migration operations.

Image mode virtual disks cannot be reduced in size. They must first be migrated to managed
mode.

Warning: Not all operating systems can tolerate a virtual disk being reduced in size. You
must be cautious and know where data resides, otherwise data loss can occur.
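
Similarly, an illustrative command sketch for reducing a VDisk by 5 GB (again with an example name) is:

svctask shrinkvdisksize -size 5 -unit gb VD_STRIPED1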

Deleting a virtual disk


A virtual disk can be deleted. When a virtual disk is deleted, all host mappings are deleted
and any cached read or write data is discarded. Any FlashCopy mappings and Metro or
Global Mirror relationships in which the disk is participating are also deleted.

If the virtual disk was operating in managed mode, then the extents are returned to the pool of
free extents for allocation by later virtualization operations. If the virtual disk was an image
mode virtual disk, deleting the virtual disk causes the managed disk to be ejected from the
MDG. The mode of the managed disk is returned to “unmanaged” mode. This makes the
delete operation the inverse of the create operation for image mode disks.
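
An illustrative deletion sketch follows; note that the -force flag might be required if host mappings or copy services relationships still exist for the VDisk:

svctask rmvdisk VD_STRIPED1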

3.6.11 Creating an MDisk group (extent size rules)


There are several guidelines or rules that you must follow when creating an MDisk group.

Number of MDGs
The number of MDisk groups depends on the following factors:
򐂰 The need for image mode virtual disks (data migration)
򐂰 The need for managed mode virtual disks with sequential policy
򐂰 The models of the disk subsystem controller (disk subsystem with cache or without,
DS4000, ESS, and so on) that have different properties on performance, availability,
response time, and so on

It is possible to have a common MDG for the SVC cluster. However, a virtual disk (VDisk) is
offline if any managed disk in the MDG is offline, even if that managed disk does not
contribute any extents to the virtual disk in question, or the managed Disk has no allocated
extents. The more managed disks there are in an MDG, the more the VDisk (host LUN) is
striped and the better the performance is.

We recommend that you:


򐂰 Create at least one separate MDG for all the image mode virtual disks.
򐂰 Create one separate MDG for each array (or RAID) type presented from a disk subsystem,
or one separate MDG for each subsystem when the RAID protection is the same for the
whole subsystem. The MDGs are then characterized by performance, RAID level,
reliability, vendor, and so on. Keep in mind that finer MDG granularity reduces the
possibility that a VDisk goes offline due to an MDisk problem or subsystem maintenance
procedures, but this level of granularity increases the management activity required.



Note: It could be wise to keep each disk subsystem in a separate MDisk group. This
prevents a failure in storage subsystem A from affecting VDisks in an MDisk group
from storage subsystem B. If a VDisk is composed of MDisks from both A and B, then
a failure in either A or B causes the VDisk to be unavailable.

򐂰 Name them in such a way that it is easy for you (when you create a virtual disk) to
associate a virtual disk with an MDG that has the appropriate level of performance and
reliability needed; for example: pool1_high_perf_high_rela, pool2_low_perf_low_rela,
mDisk_grp_ESS1, mDisk_grp_DS40002, and mDisk_grp_raid10. We use the following
names in the SVC environment for this redbook: MDG1_DS43, MDG2_DS43, and so on.

Size of extent
If you want to migrate a VDisk from one MDisk group to another MDisk group, the extent size
must be the same for both MDisk groups. Because of this, it can be useful to set a common
extent size for all the MDisk groups. A value of 32 MB (128 TB maximum cluster capacity) or
64 MB (256 TB) can be a good trade-off between performance and capacity.

If you need to migrate a VDisk to an MDisk group that has a different extent size, you need to
use Metro or Global Mirror to copy the data to a new VDisk; if the source and target VDisks
are in the same SVC cluster, they have to belong to the same I/O group.

Configuration steps
The parameters needed to create an MDG are:
򐂰 The name you want to assign to the MDG
򐂰 List of the managed disk you want to include
򐂰 The extent size you want to use

You must perform the following operations:


1. Create an MDG
2. Add managed disks to the MDG

Details of these operations are provided in the following sections.
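
For illustration, these two operations map to CLI commands similar to the following; the group name, the 64 MB extent size, and the MDisk names are examples only:

svctask mkmdiskgrp -name MDG1_DS43 -ext 64
svctask addmdisk -mdisk mdisk0:mdisk1 MDG1_DS43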

3.6.12 Creating a managed disk


First, you need to create the logical disks (LUNs) in your disk subsystem. We recommend that
you use the maximum LUN size to be presented as an MDisk. The discovery of the managed
disk is automatically done by the SVC. The managed disk is in unmanaged mode until you
include it in an MDG.

You need at least one managed disk for the support of the quorum disk used in the cluster. All
SVC nodes must have access at any time to all the managed disks.

The size of a managed disk can be up to 2 TB. Using some common sizes for all the
managed disks (for example, 16 GB or 32 GB) helps with simplicity and ensures that, as
much as possible, all the MDisks are used in the striping process for a managed disk with
striped policy. If you have three managed disks, two of 4 GB and one of 210 GB, then very
quickly only the 210 GB disk is used in the striping process.

For image mode and managed mode with sequential policy virtual disks, you must create
managed disks (LUNs in the disk subsystem) of at least the same size as the origin disk that
you want to migrate (for image mode) or copy (for managed mode with sequential policy). If
no extents are available, the creation of the virtual disk fails.



Configuration steps
When creating a managed disk, follow these steps:
1. Define the logical or physical disks (logical units) in the disk subsystem.
2. When you include a managed disk into an MDG, the mode changes from unmanaged to
managed.

3.6.13 Creating a virtual disk


An individual virtual disk is a member of one MDG and one I/O group. The MDG defines
which managed disks, provided by the disk subsystem, make up the virtual disk. The I/O
group defines which SVC nodes provide I/O access to the virtual disk.

Note: There is no fixed relationship between I/O groups and MDGs.

You would define all the virtual disks in order to:


򐂰 Optimize the performance between the hosts and SVC by repartitioning the VDisks
between the different nodes of the SVC cluster.
򐂰 Get the level of performance, reliability, and capacity that you require by using the MDG
that corresponds to your needs. You can access any MDG from any node.

When you create a VDisk, it is associated to one node of an I/O group. By default, every time
you create a new VDisk, it is associated to the next node using a round robin algorithm. For
example, you might have four hosts (host1, host2, host3, host4) with 100 VDisks for each host
of the same size with the same level of I/O activity, and a four node (two I/O groups) cluster.
The result is 100 VDisks on each node (25 VDisks from host1, 25 VDisks from host2, and so
on).

You can specify a preferred access node. This is the node through which to send I/O to the
VDisk instead of using the round robin algorithm. For example, consider one host with four
VDisks (VD1, VD2, VD3, and VD4), where VD1 and VD3 have a high level of I/O activity and
VD2 and VD4 have a low level of I/O activity. If you use the round robin algorithm, VD1 and
VD3 can end up on the same node 1 of the I/O group, and VD2 and VD4 on the same node 2.
To avoid this, use the preferred node feature to specify VD1 on node 1 and VD3 on node 2.
We recommend that you use the preferred node for I/O option when you create your VDisks.

A virtual disk is defined for an I/O group which provides the following benefits:
򐂰 The VDisk is “exported” by the two nodes of the I/O group to the host via eight paths (four
paths for each node). We use zoning to limit it to four paths from each node.
򐂰 Each write is copied into the cache of the two nodes before acknowledgment is sent to the
host.

Even if you have eight paths for each virtual disk, all I/O traffic flows only towards one node
(the preferred node). Therefore, only four paths are really used by SDD. The other four are
used only in case of a failure of the preferred node.



Before you create a virtual disk, you can check the amount of space that is available in the
MDG. You can determine the free capacity for an MDisk or an MDisk group as shown in
Example 3-1.

Example 3-1 lsmdiskgrp command


IBM_2145:ITSOSVC01:admin>svcinfo lsmdiskgrp ARR36P5N
id 0
name ARR36P5N
status online
mDisk_count 2
vDisk_count 0
capacity 333.9GB
extent_size 32
free_capacity 333.9GB

Creating image mode virtual disks


Use image mode virtual disks when a managed disk already has data on it, from a
pre-virtualized disk subsystem. When an image mode virtual disk is created, it directly
corresponds to the managed disk from which it is created. Therefore, virtual disk LBA x =
managed disk LBA x.

When you create an image mode disk, the managed disk must have a mode of unmanaged
and therefore does not belong to any MDG. A capacity of 0 is not allowed. Image mode virtual
disks can be created in sizes with a minimum granularity of 512 bytes, and must be at least
one block (512 bytes) in size.

The SVC reserves an integer number of extents on which to hold the image mode disk. It
effectively rounds up the size of the disk to the nearest whole number of extents.

Creating managed mode virtual disks with sequential policy


When creating a managed mode virtual disk with sequential policy, you must use a managed
disk containing a run of sequential free extents whose total size is equal to or greater than the
size of the virtual disk you want to create. This is necessary because, even if you know the
number of free extents in a managed disk, you do not know whether they are sequential.

Modifying a virtual disk


You can change the I/O group with which a virtual disk is associated. This requires a flush of
the cache within the nodes in the current I/O group to ensure that all data is written to disk. I/O
should be suspended at the host level before you perform this operation.

Configuration steps
The parameters that you need to create a virtual disk are:
򐂰 The name you want to assign to the virtual disk
򐂰 The I/O group you want to use
򐂰 The MDG you want to use
򐂰 The capacity you want to define

You must perform the following operations:


1. Create a managed mode VDisk.
2. Create an image mode VDisk.
3. Modify the VDisk.
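
For illustration, the modify operation in step 3 might look similar to the following sketch for moving a VDisk to the other I/O group; the names are examples, and host I/O should be suspended first, as noted above:

svctask chvdisk -iogrp io_grp1 VD1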



3.6.14 Quality of service on VDisk
You can set the I/O governing rate, which is a cap on the amount of I/O that is accepted for a
virtual disk. You can set it in terms of I/Os per second or MBs per second. By default, no I/O
governing is set when a virtual disk is created.

An I/O threshold is expressed as a number of I/Os, or a number of MBs, over a minute. The
threshold is evenly divided between all SVC nodes that service the VDisk, that is to say,
between the nodes which form the I/O Group of which the VDisk is a member.

The algorithm operates two levels of policing: one while I/O is under the threshold, and one
when the threshold has been reached. While the VDisk on each SVC node is receiving I/O at
a rate below the governed level, no governing is performed. A check is made every minute
that the VDisk on each node is continuing to receive I/O below the threshold level. If this
check shows that the host has exceeded its threshold on one or more nodes, then policing
begins for new I/Os.

While policing is in force:


򐂰 A threshold quantity is calculated for a one-second period.
򐂰 I/Os are counted over a period of a second.
򐂰 If I/Os are received in excess of the one-second threshold quantity on any node in the I/O
group, those I/Os are grouped and later I/Os are pended.
򐂰 When the second expires, a new threshold quantity is established, and any pended I/Os
are re-driven under the new threshold quantity.

If a host stays within its one second threshold quantity on all nodes in the I/O group for a
period of one minute, then the policing is relaxed, and monitoring takes place over the
one-minute period as it was before the threshold was reached.
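
For illustration, an I/O governing rate might be set with sketches similar to the following (the values and the VDisk name are examples); the first form caps I/Os per second and the second caps MBs per second:

svctask chvdisk -rate 2000 VD1
svctask chvdisk -rate 40 -unitmb VD1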

3.6.15 Creating a host (LUN masking)


In this section we discuss how you would go about creating a host.

Configuration steps
The parameters needed to create a host and virtual disk to host mapping are:
򐂰 The name you want to assign to the host
򐂰 The list of the WWPN of the FC HBAs of the host
򐂰 The name of the VDisks you want to assign to the host

You must perform the following operations:


1. Create a host.
2. Create a VDisk to host mapping.
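
For illustration, these two operations map to command sketches similar to the following; the host name, WWPN, and VDisk name are examples only:

svctask mkhost -name host1 -hbawwpn 210000E08B054CAA
svctask mkvdiskhostmap -host host1 VD1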

3.6.16 Port masking


Starting with SVC 4.1.0 you can use a port mask to control the node target ports that a host
can access.

The port mask is an optional parameter of the svctask mkhost and svctask chhost commands.

The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to
1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default
value is 1111 (all ports enabled).
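
For example, a sketch that restricts an existing host (the name is illustrative) to ports 1 and 2 is:

svctask chhost -mask 0011 host1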



3.6.17 Standard and persistent reserve
Starting with software release 4.1.0, you can use the svctask rmvdiskhostmap command to
remove standard and persistent SCSI reservations that a host holds on a VDisk.
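
An illustrative sketch, with example names, is:

svctask rmvdiskhostmap -host host1 VD1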

3.6.18 Expanding an SVC cluster configuration


You can expand an SVC cluster configuration as explained in the following sections.

Adding a node to a cluster


SVC clusters of up to eight nodes are supported. You can easily add new nodes to attach
new hosts or to redistribute workload.
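
For illustration, a node might be added with a command sketch similar to the following; the panel name (the ID displayed on the node front panel) and the I/O group are examples:

svctask addnode -panelname 000683 -iogrp io_grp1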

Adding a new disk controller to an MDisk group


MDisk groups can span disk subsystems. We recommend that you do not do this. Each
MDisk group should, in normal circumstances, comprise disks from one disk subsystem. See
3.6.2, “MDGs, I/O groups, virtual disks, and managed disks” on page 50.

VDisk size increase and decrease


The SVC allows you to increase and decrease the size of VDisks. Not all operating systems
allow this. See “Expanding a virtual disk” on page 60.

3.6.19 Migration
This facility allows the mapping of virtual disk extents to managed disk extents to be changed,
without interrupting a host’s access to that virtual disk. You can perform this for any virtual
disk managed by the SVC. You can use this for:
򐂰 Redistributing workload within a cluster across the disk subsystem
򐂰 Moving workload onto newly installed storage
򐂰 Moving workload off old or failing storage, ahead of decommissioning it
򐂰 Moving workload to rebalance a changed workload
򐂰 Migrating data from a legacy disk subsystem to SVC managed storage
򐂰 Migrating data from one disk subsystem to another

For further details about the migration facility, see Chapter 14, “Migration to and from the SAN
Volume Controller” on page 559.
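
For illustration, a migration of a VDisk to another MDG (which must have the same extent size) might be started with a sketch such as the following; the names and thread count are examples:

svctask migratevdisk -mdiskgrp MDG2_DS43 -threads 4 -vdisk VD1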

3.7 SVC supported capabilities


For a list of the maximum configurations, go to:
http://www-03.ibm.com/servers/storage/support/software/sanvc/installing.html

Under the Install and Use tab, select the V4.1.x configuration requirements and
guidelines link to download the PDF.



3.7.1 Adding DS8000 storage to the SVC
Perform the following steps to add DS8000 storage using the DS Command line Interface:
1. Go to the Web site and check the prerequisites:
http://www.ibm.com/storage/support/2145
2. You might need to upgrade the microcode level of the DS8000 to support this attachment.
3. Before the MDisks can be presented to the SVC, zone the DS8000 ports together with all
SVC node ports in the storage zone.
4. Sign on to the DSCLI of DS8000 (Example 3-2).

Example 3-2 logon to DSCLI


C:\Program Files\ibm\dscli>dscli
Date/Time: 23. June 2006 15:50:43 CET IBM DSCLI Version: 5.1.0.289 DS:IBM.2107-7505081

Display a list of array sites to verify if any are available (Example 3-3).

Example 3-3 lsarraysite command


dscli> lsarraysite
Date/Time: 23. June 2006 16:53:40 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
arsite DA Pair dkcap (10^9B) State Array
=============================================
S1 2 146.0 Assigned A0
S2 2 146.0 Assigned A1
S3 2 146.0 Assigned A2
S4 2 146.0 Assigned A3
S5 2 146.0 Assigned A4
S6 2 146.0 Assigned A5
S7 2 146.0 Assigned A6
S8 2 146.0 Unassigned -

5. Make a new array (Example 3-4).

Example 3-4 mkarray command


dscli> mkarray -raidtype 5 -arsite S8
Date/Time: 23. June 2006 16:54:48 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
CMUC00004I mkarray: Array A7 successfully created.

6. Display the new array (Example 3-5).

Example 3-5 lsarray command


dscli> lsarray -l
Date/Time: 23. June 2006 16:55:11 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
====================================================================
A0 Assigned Normal 5 (7+P) S1 R0 2 146.0
A1 Assigned Normal 5 (7+P) S2 R1 2 146.0
A2 Assigned Normal 5 (7+P) S3 R2 2 146.0
A3 Assigned Normal 5 (6+P+S) S4 R3 2 146.0
A4 Assigned Normal 5 (6+P+S) S5 R4 2 146.0
A5 Assigned Normal 5 (6+P+S) S6 R5 2 146.0
A6 Assigned Normal 5 (6+P+S) S7 R6 2 146.0
A7 Unassigned Normal 5 (7+P) S8 - 2 146.0



7. Display a list of extpools to verify if any are available (Example 3-6).

Example 3-6 lsextpool command


dscli> lsextpool -l
Date/Time: 23. June 2006 16:55:42 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
Name       ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols numranks
=====================================================================================================
extpool_01 P0 ckd     0       below  727               18         826       0        64      1
extpool_02 P1 ckd     1       below  727               18         826       0        64      1
extpool_03 P2 ckd     0       below  896               0          1018      0        0       1
extpool_04 P3 ckd     1       below  769               0          873       0        0       1
extpool_05 P4 fb      0       below  219               71         219       0        17      1
extpool_06 P5 fb      1       below  379               51         379       0        1       1
extpool_07 P6 fb      0       below  379               51         379       0        4       1

8. Make a new extpool (Example 3-7).

Example 3-7 mkextpool command


dscli> mkextpool -rankgrp 1 -stgtype fb "extpool_08"
Date/Time: 23. June 2006 16:57:25 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
CMUC00000I mkextpool: Extent pool P7 successfully created.

9. Display the new extpool (Example 3-8).

Example 3-8 lsextpool


dscli> lsextpool -l
Date/Time: 23. June 2006 16:57:32 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
Name       ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols numranks
=====================================================================================================
extpool_01 P0 ckd     0       below  727               18         826       0        64      1
extpool_02 P1 ckd     1       below  727               18         826       0        64      1
extpool_03 P2 ckd     0       below  896               0          1018      0        0       1
extpool_04 P3 ckd     1       below  769               0          873       0        0       1
extpool_05 P4 fb      0       below  219               71         219       0        17      1
extpool_06 P5 fb      1       below  379               51         379       0        1       1
extpool_07 P6 fb      0       below  379               51         379       0        4       1
extpool_08 P7 fb      1       below  0                 100        0         0        0       0



10.Create the rank (Example 3-9).

Example 3-9 mkrank


dscli> mkrank -stgtype fb -array A7
Date/Time: 23. June 2006 16:58:00 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
CMUC00007I mkrank: Rank R7 successfully created.

11.Display the ranks (Example 3-10).

Example 3-10 lsrank


dscli> lsrank -l
Date/Time: 1. Februar 2006 16:58:13 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
========================================================================================
R0 0 Normal Normal A0 5 P0 extpool_01 ckd 1018 192
R1 1 Normal Normal A1 5 P1 extpool_02 ckd 1018 192
R2 0 Normal Normal A2 5 P2 extpool_03 ckd 1018 0
R3 1 Normal Normal A3 5 P3 extpool_04 ckd 873 0
R4 0 Normal Normal A4 5 P4 extpool_05 fb 779 560
R5 1 Normal Normal A5 5 P5 extpool_06 fb 779 400
R6 0 Normal Normal A6 5 P6 extpool_07 fb 779 400
R7 - Configuring Normal A7 5 - - fb 0 -

12.Assign the unassigned rank A7 to extent pool P7 (Example 3-11).

Example 3-11 chrank


dscli> chrank -extpool P7 R7
Date/Time: 23. June 2006 16:59:29 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
CMUC00008I chrank: Rank R7 successfully modified.

13.Display the new rank (Example 3-12).

Example 3-12 lsrank


dscli> lsrank -l
Date/Time: 23. June 2006 16:59:40 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts
===================================================================================
R0 0 Normal Normal A0 5 P0 extpool_01 ckd 1018 192
R1 1 Normal Normal A1 5 P1 extpool_02 ckd 1018 192
R2 0 Normal Normal A2 5 P2 extpool_03 ckd 1018 0
R3 1 Normal Normal A3 5 P3 extpool_04 ckd 873 0
R4 0 Normal Normal A4 5 P4 extpool_05 fb 779 560
R5 1 Normal Normal A5 5 P5 extpool_06 fb 779 400
R6 0 Normal Normal A6 5 P6 extpool_07 fb 779 400
R7 1 Normal Normal A7 5 P7 extpool_08 fb 909 0

14.Create FB volume with ID 1300,1301,1302, and 1303 (Example 3-13).

Example 3-13 mkfbvol


dscli> mkfbvol -extpool P7 -cap 100 1300-1303
Date/Time: 24. June 2006 09:02:27 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
CMUC00025I mkfbvol: FB volume 1300 successfully created.
CMUC00025I mkfbvol: FB volume 1301 successfully created.
CMUC00025I mkfbvol: FB volume 1302 successfully created.
CMUC00025I mkfbvol: FB volume 1303 successfully created.



15.Create the volume group (Example 3-14).

Example 3-14 mkvolgrp


dscli> mkvolgrp -volume 1300-1303 -type scsimap256 SVC_VG
Date/Time: 24. June 2006 10:00:40 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
CMUC00030I mkvolgrp: Volume group V1 successfully created.

16.Display the list of volume groups (Example 3-15).

Example 3-15 lsvolgrp


dscli> lsvolgrp
Date/Time: 24. June 2006 10:00:49 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
Name ID Type
=======================================
W2k3_VG V0 SCSI Map 256
SVC_VG V1 SCSI Map 256
All CKD V10 FICON/ESCON All
All Fixed Block-512 V20 SCSI All
All Fixed Block-520 V30 OS400 All

17.Display the volume group you created with its volumes (Example 3-16).

Example 3-16 showvolgrp


dscli> showvolgrp v1
Date/Time: 24. June 2006 10:01:03 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
Name SVC_VG
ID V1
Type SCSI Map 256
Vols 1300 1301 1302 1303

18.Configure IOPort (Example 3-17).

Example 3-17 setioport


dscli> setioport -topology SCSI-FCP IBM.2107-7505081/I0230
Date/Time: 23. June 2006 10:43:22 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
CMUC00011I setioport: I/O Port I0230 successfully configured..

19.Display the list of IOPorts (Example 3-18).

Example 3-18 lsioport


dscli> lsioport
Date/Time: 24. June 2006 10:43:41 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
ID WWPN State Type topo portgrp
===============================================================
I0030 50050763030300AC Online Fibre Channel-LW FICON 0
...
I0103 500507630308C0AC Online Fibre Channel-LW FICON 0
I0230 50050763031300AC Online Fibre Channel-SW SCSI-FCP 0
I0231 50050763031340AC Online Fibre Channel-SW SCSI-FCP 0
I0232 50050763031380AC Online Fibre Channel-SW SCSI-FCP 0



20.Configure the SVC attachment to the DS8000. You must define eight host connections in a
two-node environment, one for each single Fibre Channel port of SVC (Example 3-19).

Example 3-19 mkhostconnect


dscli> mkhostconnect -wwname 500507680140234F -profile "IBM SAN Volume Controller"
-volgrp V1 -ioport all SVCN1_hba1
Date/Time: 24. June 2006 10:48:14 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
CMUC00012I mkhostconnect: Host connection 0003 successfully created.

21.Repeat these steps for all FC ports of your SVC.

22.Finally, display the host connections defined (Example 3-20).

Example 3-20 lshostconnect


dscli> lshostconnect
Date/Time: 24. June 2006 10:08:23 CET IBM DSCLI Version: 5.1.0.289 DS: IBM.2107-7505081
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
=========================================================================================
Blade3_fc0 0000 210000112593288C - Intel - Windows 2003 0 V0 all
morpheus 0001 210000E08B0B6C36 - Intel - Windows 2003 0 V0 all
Blade3_fc1 0002 210000112593288D - Intel - Windows 2003 0 V0 all
SVCN1_hba1 0003 500507680140234F - San Volume Controller 0 V1 all
SVCN1_hba2 0004 500507680130234F - San Volume Controller 0 V1 all
SVCN1_hba3 0005 500507680120234F - San Volume Controller 0 V1 all
SVCN1_hba4 0006 500507680110234F - San Volume Controller 0 V1 all
SVCN2_hba1 0007 5005076801401FAA - San Volume Controller 0 V1 all
SVCN2_hba2 0008 5005076801301FAA - San Volume Controller 0 V1 all
SVCN2_hba3 0009 5005076801201FAA - San Volume Controller 0 V1 all
SVCN2_hba4 000A 5005076801101FAA - San Volume Controller 0 V1 all



3.7.2 Adding ESS storage to the SVC
Perform the following steps:
1. Go to the Web site and check the prerequisites:
http://www.ibm.com/storage/support/2145
2. You might need to upgrade the microcode level of the ESS to support this attachment.
3. Before the MDisks can be presented to the SVC, zone the ESS ports together with all
SVC node ports in the storage zone.
4. Sign on to the ESS specialist. On the left side of the window that opens, select Storage
Allocation. Figure 3-22 shows the Storage Allocation window.
5. On the Storage Allocation -- Graphical View panel, click Open System Storage.

Figure 3-22 Storage Allocation window



6. Go to the Configure Host Adapter Ports window (Figure 3-23). Configure the host bay
ports in the ESS if you haven’t already. Select the port or ports. For Fibre Channel
Topology, select Point to Point (Switched Fabric). For Fibre Channel Protocol, select
FCP (Open Systems).

Figure 3-23 Configure Host Adapter Ports window

7. Go to the Modify Host Systems window (Figure 3-24). Follow these steps:
a. Enter the host system nickname, which is h3_svclnode1_a, in this case.
b. Select its WWPN from the list of WWPNs or type it in manually.
c. Select the Fibre Channel ports that you want to present LUNs to the node.

Figure 3-24 ESS Modify Host System window



We defined all four node ports for both nodes to bay1, card 1 port A, and bay 3, card 1 port
A. This distributes access across two bays in the ESS for redundancy. See Figure 3-25.

Note: If you are using an ESS model 800, it is better to connect your bays to the
switches using one of the following configurations to give you better load balancing
between the two ESS internal clusters:
򐂰 Bays 1 and 4 to one switch, and 2 and 3 to the other switch, or
򐂰 Bays 1 and 2 to one switch, and 3 and 4 to the other switch.

This is because the ESS 800 has a different internal cluster configuration.

Figure 3-25 Volume assignments



8. Share the volumes across both ports in the ESS. Now the volumes are presented to the
SVC on both ports. You can find the MDisks on the SVC and rename them to a unique
name that identifies their origin. Consider the example E1_407_14830: ESS number 1,
volume serial number 407-14830 in that ESS. You can add the MDisk to an MDG called
ESS_14830, for example. See Figure 3-26.

Note: We recommend that, for the LUN size, you use a full array size that is presented
to the SVC as an MDisk.

Figure 3-26 Viewing the two paths

3.7.3 Adding DS4000 storage to the SVC


To add DS4000 storage to the SVC, follow these steps:
1. Check the prerequisites on the Web at:
http://www.ibm.com/storage/support/2145
2. Check the supported firmware levels and configurations before you connect to the SVC.
You can see the firmware version for the DS4000 in the Storage Manager by choosing
View → Subsystem Profile, as shown in Figure 3-27, Figure 3-28, and Figure 3-29.



Figure 3-27 Where to find the Storage Subsystem Profile

Figure 3-28 The Storage Subsystem Profile showing the firmware version



Figure 3-29 DS4000 mappings view

3. We defined one storage partition in the DS4000 with all of the SVC node ports defined,
that is, one host partition with eight ports. See Figure 3-30.

Figure 3-30 Array 1



These arrays were created with names that reflect the quantity and size of the physical
drives, which can be important for performance and size considerations when you create
MDisk groups. In our opinion, it is more important to map the MDisk names back to the
physical RAID arrays; therefore, you can use a convention such as
F1_Array_number_LUNname. The arrays should alternate between controller A and
controller B for the preferred path. See Figure 3-31.

Figure 3-31 Array 2

Figure 3-32 shows the host type for the SVC.

Figure 3-32 Host type for storage partition



Figure 3-33 shows the port mapping. Now the volumes are presented to the SVC on both
ports. You can find the MDisks on the SVC and rename them to a unique name that identifies
their origin. Consider the example F1_Array_LUN: DS4000 number 1, array number, LUN
number. You can add the MDisk to an MDisk group called DS4000_14830, for example.

Figure 3-33 Port mapping

3.7.4 LUN layout


When assigning LUNs from your disk subsystem to the SVC, assign the LUN to all ports in
the SVC, and spread the LUNs in the disk subsystem, so you get the best performance and
reliability.

In the DS4000, you should have an equal number of arrays and LUNs, spread equally across
the two controllers. After assigning spare disks in the DS4000, define your arrays with RAID
protection, and create as few LUNs as you can in each array. If possible, make the LUNs the
same size, so you can utilize the full capacity when striping VDisks on the SVC.

In the ESS, you should make a LUN in each disk group (eight pack), so the striping is spread
over all disk groups, device adapters and the two controllers in the ESS.

In an MDG where the VDisks are all striped, we recommend that all the MDisks be the same
size if possible, so you can utilize the full capacity when striping VDisks on the SVC.



Chapter 4. Performance and capacity planning
While storage virtualization with the SVC improves flexibility and provides simpler
management of a storage infrastructure, it can also provide a substantial performance
advantage for a variety of workloads. The SVC's caching capability and its ability to stripe
VDisks across multiple disk arrays are the reasons why the performance improvement is
especially significant with midrange disk subsystems, since these technologies are often
provided only with high-end enterprise disk subsystems.

To ensure the desired performance and capacity of your storage infrastructure, we
recommend that you perform a performance and capacity analysis to reveal the business
requirements of your storage environment. When this is done, you can use the guidelines in
this chapter to design a solution that meets the business requirements.

4.1 Performance considerations
When discussing performance for a system, it always comes down to identifying the
bottleneck, and thereby the limiting factor of that system. Keep in mind that the component
that is the limiting factor for one workload might not be the limiting factor for a different
workload.

When designing a storage infrastructure using SVC, or implementing SVC in an existing
storage infrastructure, you must therefore take into consideration the performance and
capacity of the SAN, the disk subsystems, the SVC, and the known/expected workload.

4.1.1 SAN
Today, you can have a SAN with throughput of 2 Gbps full-duplex per connection or 4 Gbps
full-duplex per connection. The SVC is equipped with FC adapters with throughput of 2 Gbps
full-duplex per connection, or 4 Gbps full-duplex per connection on the model 8F4. When
implementing the SVC, each I/O group (two SVC storage engines) requires eight Fibre
Channel connections; this means that each I/O group has a potential throughput of 16 Gbps,
or approximately 1.4 GBps full-duplex (assuming 2 Gbps equals about 180 MBps of actual
throughput), which exceeds the capability of the SVC engines. If you are using the 8F4
hardware model, the potential throughput is double that. Following best practices for SAN
design, it is unlikely that the SAN will be the limiting factor, and therefore we will not discuss it
further in this book.

4.1.2 Disk subsystem


Discussing capacity and performance for various disk subsystems is beyond the scope of this
redbook. In general, you must ensure that the performance of the disk subsystems is
sufficient compared to the workload.

In most cases, the SVC will be able to improve the performance of older disk systems with
slow controllers or uncached disk systems. This happens because of caching in the SVC, and
the ability to stripe VDisks across multiple arrays. Also, the SVC provides the ability to spread
the data load of a specific host across VDisks. These VDisks can be striped across disk
arrays, and can also reside on separate disk subsystems (in different MDGs).

Note: For availability reasons, we recommend that you do not include multiple disk
subsystems in the same MDG, since the failure of one disk subsystem will make the MDG
go offline, and thereby all VDisks belonging to the MDG will go offline.

4.1.3 SVC
The SVC cluster is scalable up to four I/O groups (four pairs of SVC nodes), and the
performance is almost linear when adding more I/O groups to an SVC cluster, until it
becomes limited by other components in the storage infrastructure. While virtualization with
the SVC provides a great deal of flexibility, it does not diminish the necessity to have a SAN
and disk subsystems that can deliver the desired performance.

In the following sections, we solely discuss the performance of the SVC and assume that
there are no bottlenecks in the SAN or on the disk subsystem.



Latency
Latency is the delay added to the response time for an I/O operation. All in-band storage
virtualization solutions add some latency to cache miss I/Os. This is not a unique
characteristic of the SVC.

However, the SVC latency is very low. For a 4 KB read operation, the SVC introduces
approximately 60 µs (microseconds, millionths of a second) of additional delay. Considering
that a typical cache miss response time is approximately 10 ms (milliseconds, thousandths of
a second), the delay typically caused by the SVC is negligible (less than 1% of total response
time).

In the real world, the effect of latency is normally even less. All writes to the SVC are cache
hits, so they add no latency. Because of the advanced cache algorithm in the SVC, many
reads are also cache hits. Only cache miss I/Os add latency.

Performance increases
As of SVC 3.1, running on up-to-date SVC storage engines, the SVC supports an increase
of over 50 percent in its maximum I/O throughput, as measured using 512-byte read hits
(from 805,000 I/Os per second with the previous generation of the product to 1,230,000
I/Os per second).

We do not intend to cover performance in-depth within this redbook, but rather, we refer the
reader to the excellent IBM white paper, IBM System Storage SAN Volume Controller
Release 3.1 Performance, written by Bruce McNutt and Vernon Miller (both of the IBM
Systems Group), which is available by contacting your IBM account representative.

Within IBM, this paper is available at:


http://w3-1.ibm.com/sales/systems/portal/_s.155/254?navID=f320s260&geoID=All&prodID=IBM%20TotalStorage%20Products&docID=tstlsvcontroller102605

This paper illustrates the performance and scalability that SVC Release 3.1 can now provide,
using a variety of random-access and sequential workloads.

4.2 Planning guidelines


The areas to be considered when planning a storage infrastructure using SVC, or
implementing the SVC in an existing storage infrastructure, are listed in the following sections.

Determine the number of SVC clusters


If you plan to deploy the SVC in a geographically dispersed environment — for example, a
dual site design for disaster recovery reasons — it is essential to use two SVC clusters. Due
to the design of the SVC cluster, we do not recommend splitting I/O groups geographically
(or the nodes in an I/O group), since this will not provide the resiliency needed for a disaster
recovery solution.

Note: IBM recommends the use of two cluster configurations for all production disaster
recovery systems. Customers who wish to use split cluster operation should contact their
IBM Regional Advanced Technical Specialist for specialist advice relating to their particular
circumstances.

Determine the number of I/O groups


When considering the number of I/O groups to use in an SVC cluster, note that the 1 GB/s
sequential throughput capability of an I/O group is usually the limiting factor when configuring
attached storage. If you have information about the workloads that you plan to use with the
SVC, you can use that information to size the amount of capacity you can configure per I/O
group. To be conservative, assume a throughput capability of about 800 MB/s per I/O group.

Number of ports in the SAN used by the SVC


Since each I/O group has eight Fibre Channel ports (four on each node), you need eight
ports in the SAN per I/O group, that is, four ports in each fabric (two from each node).

Number of paths from the SVC to disk subsystems


Generally, all SVC ports in an SVC cluster should see the same ports on the disk
subsystems. This means that, unless you reserve ports on the disk subsystem for direct host
access, all ports on the disk subsystem must be seen by all SVC node ports in the
respective fabrics.

When configuring, for example, the IBM System Storage DS4300 for use with the SVC, you
should configure two controller ports in the DS4300 to be accessed by all SVC ports in the
cluster.

Number of paths from hosts to the SVC


Each HBA on the host can see two or four SVC ports per I/O group, depending on the level
of high availability and performance you want in your configuration. This means that a host
with two HBAs has four or eight paths to each I/O group. If you choose four paths to each
I/O group, we recommend that you spread the paths for all host systems evenly across the
ports on each SVC node, based on the known or expected I/O load per host. If you choose
eight paths to each I/O group, this consideration does not apply, because every host always
uses all the SVC ports.

In Figure 4-1, we illustrate how the paths are distributed evenly across the SVC node ports for
two hosts using four SVC ports per I/O group.

[Figure 4-1 is a diagram: Host A and Host B, each with two HBAs, connect through Fabric 1 and Fabric 2 to SVC I/O group 0 (nodes 1 and 2), with each host HBA zoned to two of the eight node ports so that the I/O load is spread evenly across the SVC node ports.]
Figure 4-1 Distribute I/O load evenly among SVC node ports



In Figure 4-2, we illustrate how the paths are always distributed across the SVC node ports
for two hosts using eight SVC ports per I/O group.

[Figure 4-2 is a diagram: the same two hosts, but with each HBA zoned to all four node ports in its fabric, so that each host uses all eight SVC ports of I/O group 0.]
Figure 4-2 Distributed I/O load always guaranteed among SVC node ports

When implementing the SVC in an existing storage infrastructure, we generally recommend
that the zoning is performed so that each host has two or four paths per HBA to each I/O
group, as explained above. However, based on a carefully reasoned analysis, you can
preserve the same number of paths to the SVC I/O groups as the hosts previously had when
accessing the disk subsystems directly.

Important: For large SANs, you should be aware that configuring more than two paths per
HBA potentially increases the risk of reaching the maximum number of queued commands
for the SVC cluster.

For more information, refer to the Web site:


http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002858

4.2.1 I/O queue depth handling in large SANs


The purpose of discussing I/O queue depth is to avoid situations where an SVC node reaches
its maximum number of queued commands.

Note: Unless you have an SVC configuration close to the maximum supported, and all
your hosts are simultaneously busy, it is unlikely that you will encounter problems with I/O
queue depth limitations for your SVC.

As a guideline, you should consult your IBM representative if your calculation shows
that the I/O queue limit is less than 20; see "Homogeneous queue depth calculation" on
page 87.



The enqueueing of tasks consumes internal resources in the SVC node. Each SVC node can
handle 10,000 concurrent commands, distributed across all hosts and all VDisks.

Mechanisms are provided to ensure correct operation in the event that the I/O queue is full.
Each host port is guaranteed to be able to enqueue a single command on an SVC node
(this is per-node, not per-VDisk). I/O governing can be used to restrict the I/Os a host can
submit. If the SVC runs out of resources to enqueue an I/O that it has received, the algorithm
shown in Figure 4-3 takes effect to handle the situation when the maximum number of queued
commands is reached.

When the SVC is unable to enqueue an I/O, the handling is as follows:

- If the initiator has already consumed its specially reserved command on this node, the SVC sets a Unit Attention condition ("Commands cleared by another initiator") on the LUN for that initiator and discards the command. If a Unit Attention is already set, the command is simply discarded.
- Otherwise, if the initiator has at least one task queued for that LUN on this port, the SVC returns Task Set Full status, using the specially reserved command.
- Otherwise (the initiator has no tasks queued for that LUN on this port), the SVC returns Check Condition "Unit Attention - Commands aborted by another initiator" to the received command, using the specially reserved command.

Figure 4-3 I/O queue depth algorithm
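The decision flow can also be expressed as code. The following Python sketch is purely
illustrative (the function and argument names are ours, not part of any SVC interface); the
three arguments model the three tests in Figure 4-3:

def queue_full_response(consumed_reserved_slot,
                        unit_attention_already_set,
                        has_task_queued_on_lun_port):
    # Model of the Figure 4-3 decision flow for a command that cannot be enqueued.
    if consumed_reserved_slot:
        if unit_attention_already_set:
            return "discard the command"
        return "set Unit Attention (commands cleared) and discard the command"
    if has_task_queued_on_lun_port:
        return "return Task Set Full via the reserved command"
    return "return Check Condition: Unit Attention - commands aborted"

print(queue_full_response(False, False, False))
# -> return Check Condition: Unit Attention - commands aborted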

This algorithm allows the SVC to discard commands, and to give a valid reason to the host as
to why this has happened. This algorithm is also used when internal recoveries within the
SVC node mean that the SVC is unable to start new host I/Os immediately, and the SVC
consequently runs out of resources to enqueue all the I/Os that are received.

Unfortunately, many host operating systems do not have helpful recovery algorithms if this
situation persists for more than 15 seconds, and the result is often that one or more hosts
present errors to applications, resulting in application failures. Following the
recommendations in this section helps avoid this situation.

Note: This issue is not in any way specific to the SVC. All controllers and operating
systems have the same issues if the maximum queue depth is reached.



Calculating a queue depth limit
When calculating the queue depth, consider the following factors:
- Although the maximum number of queued commands is per-node and there are two nodes in an I/O group, the system must continue to function when one of the nodes in an I/O group is not available. Thus you must consider an I/O group to have the same number of queued commands as a node. However, when a node fails, the number of paths to each disk is halved. In practice, this effect can be neglected, and you can count nodes rather than I/O groups in the calculation below.
- If a VDisk is mapped so that it can be seen by more than one host, then each host that it is mapped to can send a number of commands to it.
- Multipathing drivers on most hosts round-robin I/Os among the available I/O paths. For hosts that do not currently do this, this behavior might change in the future, and you need to avoid breaking customers' configurations when it does.
- If a device driver times out a command, it typically reissues that command almost immediately. The SVC will have both the original command and the retry in the command queue, in addition to any Error Recovery Process (ERP) commands that are issued.

In order for the maximum queue depth not to be reached, the following must hold for an I/O
group: for all VDisks associated with the I/O group, for all hosts that are mapped to be able
to see each VDisk, and for all paths on each host, the sum of the queue depths must be
less than 10,000. Because ERPs can consume some number of queued command slots,
this number is reduced to 7,000 to allow a safety margin.

Homogeneous queue depth calculation


This calculation applies to systems where:

- The available queued commands are shared out among all paths, rather than giving some hosts additional resources.
- The VDisks are shared out evenly among the I/O groups in the cluster.

The queue depth for each VDisk should then be set on the hosts using the following
calculation:

q = roundup((n * 7000) / (v * p * c))

where:

q = the per device path queue depth setting

n = the number of nodes in the cluster

v = the number of VDisks configured in the cluster

p = the number of paths per VDisk per host. A path is a route from a host FC port to an SVC
FC port that is recognized by the host as giving access to the VDisk.

c = the number of hosts that can concurrently access each VDisk. Very few applications
support concurrent access from multiple hosts to a single VDisk. Examples where multiple
hosts have concurrent access to a disk include cases where the SAN File System (SFS) is
in use. Thus, typically c is 1.

In Example 4-1, we calculate the I/O queue depth for a homogeneous SVC configuration with
eight SVC nodes and the maximum number of supported VDisks.



Example 4-1 Calculation of I/O queue depth for a homogeneous system
n=8: An 8 node SVC cluster
v=4096: Number of VDisks per SVC cluster is max. 4096
c=1: 1 host is able to access each VDisk
p=4: Each host has 4 paths to each VDisk (2 HBAs, each with 2 paths to the I/O group)
Calculating the queue depth: q = roundup((8*7000) / (4096*4*1)) = 4

So, the queue depth in the operating systems should be set to four concurrent commands per
path.
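When sizing several configurations, the formula is easy to script. This minimal Python sketch
(our own illustration, not part of any SVC tooling) reproduces the formula and the numbers of
Example 4-1:

import math

def per_path_queue_depth(n_nodes, n_vdisks, paths_per_vdisk, hosts_per_vdisk=1):
    # q = roundup((n * 7000) / (v * p * c))
    return math.ceil((n_nodes * 7000) / (n_vdisks * paths_per_vdisk * hosts_per_vdisk))

print(per_path_queue_depth(8, 4096, 4))   # -> 4, as in Example 4-1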

Non-homogeneous queue depth calculation


In some cases, it could be appropriate to give favored hosts additional resources to allow
them to queue additional commands, or the number of VDisks supported by each I/O group
might be different. In these cases, the queue depth is calculated in the following way.

Consider each I/O group in turn:

For each VDisk, consider each host to which that VDisk has a mapping. This gives a set of
(host, VDisk) pairs. As long as the sum of the queue depths across all (host, VDisk) pairs is
less than 7,000, the system should not experience problems due to queue full situations.

The above calculation assumes that there is a significant probability that all of the hosts will
initiate the number of concurrent commands that they are limited to; that is to say, each host
is busy. If the configuration contains a large number of fairly idle hosts that are not going to
initiate very many concurrent commands, then you might reasonably conclude that the
queue depth does not need to be limited even if the calculation above says that it should be.
If this is the case, the queue depth can reasonably be increased or left unset.

How to limit the queue depth


Once you have determined the appropriate queue depth limit as described above, you must
apply it.

Each operating system has an OS/HBA-specific way to limit the queue depth on a per device
path basis. An alternative to setting a per path limit is to set a limit on the HBA. Example 4-2
shows how we calculate the I/O queue depth to be set on each HBA for a host accessing 40
VDisks, in an environment where the limit on concurrent I/O commands per path has been
determined to be 5.

Example 4-2 Calculating the I/O queue depth to be set on each HBA
If the limit on concurrent I/O commands per path is 5, and the host has access to 40
VDisks through two adapters (4 paths per VDisk, 2 paths per HBA per VDisk), the
calculation is:
4 paths per VDisk, with a limit of 5 concurrent I/O commands per path, equals 20
concurrent I/O commands per VDisk
40 VDisks with 20 concurrent I/O commands per VDisk, equals 800 concurrent I/O commands
for the host
Therefore it may be appropriate to place a queue depth limit of (40*(4*5))/2=400 on each
adapter.

This allows sharing of the queue depth allocation between VDisks.
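The same arithmetic from Example 4-2, sketched in Python for clarity (the values are those
of the example):

vdisks, paths_per_vdisk, per_path_limit, hbas = 40, 4, 5, 2

per_vdisk_cmds = paths_per_vdisk * per_path_limit   # 20 commands per VDisk
per_host_cmds = vdisks * per_vdisk_cmds             # 800 commands for the host
per_hba_limit = per_host_cmds // hbas               # 400 commands per adapter

print(per_vdisk_cmds, per_host_cmds, per_hba_limit)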

Note: For a system that is already configured, the result (v*p*c) is the same number as
can be determined by issuing datapath query device on all hosts and summing up the
number of paths.



4.2.2 SVC managed and virtual disk layout planning
This section details managed and virtual disk planning.

Extent size for managed disks


When configuring managed disks with the SVC, you should create managed disk groups
(MDGs) to use the largest practical extent size. Doing so maximizes the learning ability of the
SVC adaptive cache. Remember that managed disk space is allocated in units of whole
extents, so VDisks whose size is not a multiple of the extent size can waste space.

For example, an 18 GB VDisk uses 36 extents of 512 MB, but an 18.1 GB VDisk uses 37
such extents. Of course, a 17.9 GB VDisk also uses 36 extents of 512 MB and wastes little
space.
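The whole-extent allocation described above is easy to verify; a small Python sketch
(illustrative only), using the SVC convention of 1 GB = 1024 MB:

import math

def extents_used(vdisk_gb, extent_mb=512):
    # Managed disk space is allocated in units of whole extents.
    return math.ceil(vdisk_gb * 1024 / extent_mb)

for size_gb in (17.9, 18.0, 18.1):
    used = extents_used(size_gb)
    unused_mb = used * 512 - size_gb * 1024
    print(size_gb, "GB ->", used, "extents,", round(unused_mb), "MB unused")
# 17.9 GB -> 36 extents, 102 MB unused
# 18.0 GB -> 36 extents, 0 MB unused
# 18.1 GB -> 37 extents, 410 MB unused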

If most of your VDisks are a multiple of 512 MB in size, use 512 MB extents. If you expect to
have many LUNs that are not a multiple of 512 MB, use the largest extent size that results in
acceptable space utilization. Recent performance testing has shown that using an extent size
of 128 MB has no appreciable effect on performance.

Note: The extent size does not have a great impact on the performance of an SVC
installation. What is most important is to be consistent across MDGs, meaning that you
should use the same extent size in all MDGs within an SVC cluster, to avoid limitations
when migrating VDisks from one MDG to another. At the same time, using the largest
practical extent size increases the maximum capacity of your SVC cluster.

When configuring DS4300 storage servers, consider using a segment size of 256 KB. This
size helps to optimize sequential performance without hurting random I/O performance.

When creating VDisks, the default of striping the extents across all MDisks in the MDG is
normally the best choice, since it balances I/Os across all the managed disks in the
MDG, which tends to optimize overall performance and helps to reduce hot spots.

The SVC allocates a preferred path for each VDisk. This is the node within an I/O group that
is normally used to handle I/Os for that particular VDisk. The default, which is to alternate
VDisks between the nodes of an I/O group in the order the VDisks are created, normally
produces good results, and this will generally balance the load well across both nodes of an
I/O group.

In cases where it is known that the VDisks vary greatly in size, or where the I/O load to
different VDisks varies greatly, it is possible that a significant imbalance can arise, which
might impact performance. In these cases, you can use the -node parameter when creating
VDisks (or the equivalent graphical user interface parameter) to specify which node of an I/O
group should be the preferred path, in order to balance the workload evenly for the I/O group.

Defining LUNs on the managed disk subsystems


When installing new storage to be managed by the SVC, you need only define a simple LUN
arrangement. Generally, we recommend creating one LUN per disk array in the disk
subsystems. When placing an existing disk subsystem under the control of the SVC, there is
no specific need to change the LUN definitions. Simply include the LUNs in MDGs and start
to create VDisks.

VDisks are created using managed disks within an MDG. Accordingly, all the managed disks
in a single MDG should have the same (or similar) performance characteristics. If you mix
managed disks with different performance characteristics, VDisks might exhibit uneven
performance where I/Os to different portions of the VDisk perform differently.



4.3 Performance monitoring
In this section we detail some performance monitoring techniques.

4.3.1 Collecting performance statistics


By default, performance statistics are not collected. You can start statistics collection by
using the svctask startstats command as described in 9.8, “Listing dumps” on page 269,
and you can stop it using the svctask stopstats command as described in 9.1.7,
“Stopping a statistics collection” on page 218. Using the lsiostatsdumps command, you can
list the statistics files.

Statistics gathering is enabled or disabled on a cluster basis; when it is enabled, all nodes
in the cluster gather statistics.

The SVC supports sampling periods from 1 to 60 minutes, in steps of one minute, and the
gathering of this data is coordinated at the cluster level.

There are two sets of performance statistics:

- Cluster wide statistics
- Per-node statistics

Important: Enabling statistics collection with an interval less than 15 minutes will only
enable per-node statistics. Cluster wide statistics will not be collected.

4.3.2 Cluster wide statistics


A number of statistics are collected for every Virtual Disk and every Managed Disk known to
the cluster. The statistics reported are on a per-cluster, rather than a per-node, basis. Thus,
for example, the count of I/Os for a given managed disk is the aggregate of the I/Os for that
managed disk across all of the nodes in the cluster.

At the end of each sampling period the statistics gathered during the sampling period are
written to files on the configuration node. Each sampling period results in the creation of one
file for Virtual Disk statistics, and one file for Managed Disk statistics.

Statistics file naming


The files generated are written to the directory /dumps/iostats. The file names have the
following format:

- m_stats_<config_node_front_panel_id>_<date>_<time> for MDisk statistics
- v_stats_<config_node_front_panel_id>_<date>_<time> for VDisk statistics

The config_node_front_panel_id identifies the node from which the statistics are collected:

- The panel ID is taken from the current configuration node.
- The date is in the form YYMMDD.
- The time is in the form HHMMSS.

Example 4-3 shows some typical MDisk and the VDisk statistics filenames.

Example 4-3 Filename of per cluster wide statistics


id iostat_filename
0 m_stats_008875_060628_201807
1 v_stats_008875_060628_201807
2 m_stats_008875_060628_203410
3 v_stats_008875_060628_203410
4 m_stats_008875_060628_205013
5 v_stats_008875_060628_205013

A maximum of 12 files of each type can be present in the directory: for example, 12 files for
MDisk statistics and 12 files for VDisk statistics.
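Because the naming convention is fixed, the dump names can be taken apart
programmatically. A small Python sketch (illustrative only; the pattern simply encodes the
format described above):

import re

# Matches names such as m_stats_008875_060628_201807
# (kind: m = MDisk statistics, v = VDisk statistics).
STATS_NAME = re.compile(
    r"(?P<kind>[mv])_stats_(?P<panel_id>\w+)_(?P<date>\d{6})_(?P<time>\d{6})$"
)

m = STATS_NAME.match("m_stats_008875_060628_201807")
print(m.group("kind"), m.group("date"), m.group("time"))   # m 060628 201807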

Statistics collected
For each Virtual Disk and for each Managed Disk, the following statistics are collected during
the sample period:

- Number of SCSI READ commands processed
- Number of SCSI WRITE commands processed
- Number of blocks of data read
- Number of blocks of data written

Contents of statistics files

A cluster wide statistics file is a plain text file. The file contains one entry for every Managed or
Virtual disk.

In Example 4-4, the columns detail a count of reads, writes, block reads, and block writes.

Example 4-4 MDisk per cluster statistics


lun_id : num_reads : num_writes : block_reads : block_writes :
3 : 13824 : 547664 : 7077888 : 53400576 :
4 : 13568 : 540976 : 6946816 : 52808704 :
5 : 13568 : 483584 : 6946816 : 46268416 :
6 : 0 : 0 : 0 : 0 :
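Because the file is plain, colon-delimited text, it is straightforward to parse. The following
Python sketch assumes exactly the layout shown in Example 4-4 (a header row followed by
one row per disk, each line ending with a colon):

def parse_cluster_stats(path):
    # Return {lun_id: {counter_name: value}} from a cluster-wide statistics file.
    with open(path) as f:
        rows = [[v.strip() for v in line.split(":") if v.strip()]
                for line in f if line.strip()]
    header = rows[0]   # lun_id, num_reads, num_writes, block_reads, block_writes
    return {row[0]: dict(zip(header[1:], map(int, row[1:]))) for row in rows[1:]}

# parse_cluster_stats("m_stats_008875_060628_201807")["3"]["num_reads"] -> 13824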

4.3.3 Per-node statistics


The collecting of per-node statistics is enabled or disabled in the same way as cluster wide
statistics as described in 4.3, “Performance monitoring” on page 90.

Each node maintains a number of counters, which are reset to zero when a node is booted or
reset. Each of these counters is sampled at the end of each period. The sampled value is the
absolute value of the counter, not the increase of the counter during the sample period.

The file format for these statistics is XML.

Statistics file naming


The files generated are written to the directory /dumps/iostats.

The file names have the following format:

- Nm_stats_<node_front_panel_id>_<date>_<time> for MDisk statistics
- Nv_stats_<node_front_panel_id>_<date>_<time> for VDisk statistics
- Nn_stats_<node_front_panel_id>_<date>_<time> for Node statistics

The node_front_panel_id identifies the node from which the statistics are collected:

- The date is in the format YYMMDD.
- The time is in the format HHMMSS.



Example 4-5 shows MDisk, VDisk, and the node statistics filename.

Example 4-5 Node statistics filename


id iostat_filename
0 Nm_stats_008875_060628_194555
1 Nv_stats_008875_060628_194555
2 Nn_stats_008875_060628_194555

A maximum of 16 files of each type can be present in the directory: for example, 16 files for
MDisk statistics, 16 files for VDisk statistics, and 16 files for node statistics.

Per-node MDisk statistics


For each MDisk, the following statistics are collected during the sample period (the
abbreviation in parentheses is the tag used in the XML file in which the statistics are reported):

- MDisk read operations (ro)
- MDisk write operations (wo)
- MDisk read blocks (rb)
- MDisk write blocks (wb)
- MDisk cumulative read external response time in milliseconds (re)
- MDisk cumulative write external response time (we)
- MDisk cumulative read queued response time (rq)
- MDisk cumulative write queued response time (wq)

Example 4-6 shows the output of per-node MDisk statistics collection; the format is XML.

Example 4-6 Per-node MDisk statistics


<?xml version="1.0" encoding="utf-8" ?>
- <diskStatsColl
xmlns="http://ibm.com/storage/management/performance/api/2003/04/diskStats"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://ibm.com/storage/management/performance/api/2003/04/diskStats
schema/SVCPerfStats.xsd" scope="node" id="n3" cluster="ITSOSVC01"
node_id="0x0000000000000001" cluster_id="0x000002006040469e" sizeUnits="512B"
timeUnits="msec" contains="managedDiskStats" timestamp="2006-06-28 20:18:07"
timezone="GMT-8:00">
<mdsk id="mdisk3" ro="13824" wo="537680" rb="7077888" wb="52122624" re="42820"
we="72359810" rq="42840" wq="83027370" />
<mdsk id="mdisk4" ro="13568" wo="531760" rb="6946816" wb="51629056" re="39420"
we="70992740" rq="39420" wq="81369740" />
<mdsk id="mdisk5" ro="13568" wo="483584" rb="6946816" wb="46268416" re="43510"
we="69968760" rq="43530" wq="79881720" />
<mdsk id="mdisk6" ro="0" wo="0" rb="0" wb="0" re="0" we="0" rq="0" wq="0" />

Per-node VDisk read write statistics collected


For each VDisk, the following read and write statistics are collected during the sample period:

- VDisk read operations (ro)
- VDisk write operations (wo)
- VDisk cumulative read response time in milliseconds (rl)
- VDisk cumulative write response time in milliseconds (wl)
- VDisk worst read response time in microseconds (rlw)
- VDisk worst write response time in microseconds (wlw)
- VDisk cumulative total number of overlapping writes (gwo)
- VDisk relationship total secondary writes (gws)
- VDisk relationship cumulative secondary write latency in milliseconds (gwl)



Per-node VDisk cache statistics collected
For each VDisk, the following cache statistics are collected during the sample period:

- Track reads (ctr)
- Track read sector count (ctrs)
- Track writes (ctw)
- Track write sector count (ctws)
- Tracks prestaged (ctp)
- Prestage sector count (ctps)
- Track read cache hits (ctrh)
- Track read cache hits sector count (ctrhs)
- Track read cache hits on any prestaged area (ctrhp)
- Read cache hits on prestaged data sector count (ctrhps)
- Track read cache misses (ctrm)
- Track read cache misses sector count (ctrms)
- Track destages (ctd)
- Track destage sector count (ctds)
- Track writes in flush through mode (ctwft)
- Track writes in flush through mode sector count (ctwfts)
- Track writes in write through mode (ctwwt)
- Track writes in write through mode sector count (ctwwts)
- Track writes in fast write mode (ctwfw)
- Track writes in fast write mode sector count (ctwfws)
- Track writes in fast write mode that were written in write through mode due to a lack of memory (ctwfwsh)
- Track writes in fast write mode that were written in write through mode due to a lack of memory, sector count (ctwfwshs)
- Track misses on dirty data (ctwm)
- Track misses on dirty data sector count (ctwms)
- Track write hits on dirty data (ctwh)
- Track write hits on dirty data sector count (ctwhs)
- Quantity of write cache data in sectors (cm)
- Quantity of cache data in sectors (cv)

Example 4-7 shows the output of per-node VDisk statistics collection; the format is XML.

Example 4-7 Per-node VDisk statistics


<?xml version="1.0" encoding="utf-8" ?>
- <diskStatsColl
xmlns="http://ibm.com/storage/management/performance/api/2005/08/vDiskStats"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://ibm.com/storage/management/performance/api/2005/08/vDiskStats
schema/SVCPerfStatsV.xsd" scope="node" id="n3" cluster="ITSOSVC01"
node_id="0x0000000000000001" cluster_id="0x000002006040469e" sizeUnits="512B"
timeUnits="msec" contains="virtualDiskStats" timestamp="2006-06-28 20:18:07"
timezone="GMT-8:00">
<vdsk idx="5" ctr="0" ctrs="0" ctw="327680" ctws="20971520" ctp="0" ctps="0" ctrh="0"
ctrhs="0" ctrhp="0" ctrhps="0" ctrm="0" ctrms="0" ctd="327472" ctds="20958208" ctwft="208"
ctwfts="13312" ctwwt="0" ctwwts="0" ctwfw="327472" ctwfws="20958208" ctwfwsh="0"
ctwfwshs="0" ctwm="327472" ctwms="20958208" ctwh="0" ctwhs="0" cm="0" cv="0" gwo="0"
gws="0" gwl="0" id="Vdisk2005" ro="0" wo="327680" rb="0" wb="20971520" rl="0" wl="640088"
rlw="0" wlw="0" />



Per-node statistics collected: CPU usage
For each node, the CPU usage counter statistics are collected during the sample period.
Use this formula to calculate the percentage of CPU utilization:

Reported value = (busy cycles * 1000) / (clock rate * number of physical CPUs)

Per-node statistics collected: HBA ports


For each node, the following statistics are collected about the HBA ports during the sample
period:

- Port ID (id)
- Worldwide port name (wwpn)
- Bytes transmitted to hosts (hbt)
- Bytes received from hosts (hbr)
- Bytes transmitted to controllers (cbt)
- Bytes received from controllers (cbr)
- Bytes transmitted to other SVC nodes in the same cluster (lnbt)
- Bytes transmitted to other SVC nodes in other clusters (rmbt)
- Bytes received from other SVC nodes in the same cluster (lnbr)
- Bytes received from other SVC nodes in other clusters (rmbr)
- Exchanges initiated to hosts (het)
- Exchanges initiated to controllers (cet)
- Exchanges initiated to other SVC nodes in the same cluster (lnet)
- Exchanges initiated to other SVC nodes in other clusters (rmet)
- Exchanges received from hosts (her)
- Exchanges received from controllers (cer)
- Exchanges received from other SVC nodes in the same cluster (lner)
- Exchanges received from other SVC nodes in other clusters (rmer)

Per-node statistics collected: Node


For each node, the following statistics are collected about the node during the sample period:

- Node name (id)
- Cluster name (cluster)
- Node unique ID (node_id)
- Cluster unique ID (cluster_id)
- Number of messages or bulk data received (ro)
- Number of messages or bulk data sent (wo)
- Number of bytes received (rb)
- Number of bytes sent (wb)



Example 4-8 shows the output of per-node statistics collection.

Example 4-8 Node statistics


<?xml version="1.0" encoding="utf-8" ?>
- <diskStatsColl
xmlns="http://ibm.com/storage/management/performance/api/2006/01/nodeStats"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://ibm.com/storage/management/performance/api/2006/01/nodeStats
schema/SVCPerfStatsN.xsd" scope="node" id="n3" cluster="ITSOSVC02"
node_id="0x0000000000000001" cluster_id="0x000002006040469e" sizeUnits="512B"
timeUnits="msec" contains="nodeStats" timestamp="2006-06-28 20:18:07" timezone="GMT-8:00">
<node id="n4" cluster="ITSOSVC02" node_id="0x0000000000000004"
cluster_id="0x000002006040469e" ro="107278284" wo="107241289" rb="11189548808"
wb="48451227660" re="1557390" we="22510830" rq="15365690" wq="24127930" />
<node id="node2" cluster="ITSOSVC01" node_id="0x0000000000000002"
cluster_id="0x0000020060e03106" ro="4014" wo="4394" rb="2544092" wb="1453820" re="20"
we="970" rq="600" wq="970" />
<node id="node1" cluster="ITSOSVC01" node_id="0x0000000000000001"
cluster_id="0x0000020060e03106" ro="4275" wo="4721" rb="2730032" wb="1563360" re="0"
we="4550" rq="250" wq="4630" />
<node id="n4" cluster="ITSOSVC02" node_id="0x0000000000000002"
cluster_id="0x000002006040469e" ro="31997953" wo="31924820" rb="3261657636"
wb="72729370656" re="571970" we="15593090" rq="4309450" wq="19087380" />
<node id="node_n4" cluster="ITSOSVC02" node_id="0x0000000000000003"
cluster_id="0x000002006040469e" ro="58781" wo="60949" rb="8961432" wb="109888472" re="1480"
we="22140" rq="7830" wq="22460" />
<node id="SVCNode2" cluster="ITSOSVC01" node_id="0x0000000000000008"
cluster_id="0x0000020061003106" ro="130426" wo="132090" rb="495457964" wb="86541344"
re="143280" we="86120" rq="165090" wq="86220" />
<node id="SVCNode1" cluster="ITSOSVC01" node_id="0x0000000000000001"
cluster_id="0x0000020061003106" ro="1500888" wo="1471477" rb="1010329232" wb="16602896420"
re="627000" we="15194190" rq="3167380" wq="15252110" />
<port id="3" wwpn="0x500507680110234f" hbt="98121" hbr="0" het="0" her="1568"
cbt="26428766208" cbr="3714043724" cet="569617" cer="0" lnbt="30069709814"
lnbr="3496662020" lnet="48359464" lner="62789358" rmbt="4323232101" rmbr="14015912073"
rmet="8283745" rmer="12533659" />
<port id="4" wwpn="0x500507680120234f" hbt="109897" hbr="0" het="0" her="3030"
cbt="24616450048" cbr="70399659908" cet="875646" cer="0" lnbt="29982624303"
lnbr="3503566251" lnet="140716054" lner="97363347" rmbt="4248273757" rmbr="14140682122"
rmet="17232551" rmer="10080917" />
<port id="2" wwpn="0x500507680130234f" hbt="110865" hbr="0" het="0" her="3467"
cbt="3556769792" cbr="3557404440" cet="77998" cer="0" lnbt="29997602139" lnbr="3499059654"
lnet="52125673" lner="99578658" rmbt="4235270899" rmbr="14049156770" rmet="8628029"
rmer="8186209" />
<port id="1" wwpn="0x500507680140234f" hbt="101095" hbr="0" het="0" her="1608"
cbt="22273212416" cbr="90154588" cet="538126" cer="0" lnbt="30037417262" lnbr="3529425041"
lnet="77654314" lner="48149730" rmbt="4391480042" rmbr="14139204698" rmet="16836413"
rmer="14568713" />
<cpu busy="14067780" />
</diskStatsColl>



Chapter 5. Initial installation and configuration of the SVC
In this chapter we describe the initial installation and configuration procedures for the IBM
System Storage SAN Volume Controller (SVC) using the service panel and the cluster Web
interface.

Note: The service panel consists of the display window and buttons on the front of each
SVC node.

5.1 Preparing for installation
See Chapter 3, “Planning and configuration” on page 25, for information pertaining to
physical connectivity, storage area network (SAN) zoning, and assigning disk to the SVC.

5.2 Secure Shell (SSH) overview


Secure Shell (SSH) is used to secure data flow between the SVC cluster configuration node
(the SSH server) and a client, either a command line client via the command line interface (CLI)
or the CIMOM. The connection is secured by means of a private key and public key pair:

- A public key and a private key are generated together as a pair.
- The public key is uploaded to the SSH server.
- The private key identifies the client and is checked against the public key during the connection. The private key must be protected.
- The SSH server must also identify itself with a specific host key.
- If the client does not have that host key yet, it is added to a list of known hosts.

Secure Shell is the communication vehicle between the management system (usually the
master console) and the SVC cluster.

SSH is a client-server network application. The SVC cluster acts as the SSH server in this
relationship. The SSH client provides a secure environment from which to connect to a
remote machine. It uses the principles of public and private keys for authentication.

When an SSH client (A) attempts to connect to a server (B), a key is needed to authenticate
the connection. The key consists of two halves: the public and private keys. The public key is
put onto (B). When (A) tries to connect, the private key on (A) can authenticate with its public
half on (B).

SSH keys are generated by the SSH client software. This includes a public key, which is
uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administration and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added and unwanted IDs and keys can be deleted.

To use the CLI or SVC graphical user interface (GUI), an SSH client must be installed on that
system, the SSH key pair must be generated on the client system, and the client’s SSH public
key must be stored on the SVC cluster or clusters.

The SVC master console has the freeware implementation of SSH-2 for Windows called
PuTTY pre-installed. This software provides the SSH client function for users logged into the
master console who want to invoke the CLI or GUI to manage the SVC cluster.



5.2.1 Generating public and private SSH key pair using PuTTY
Perform the following steps to generate SSH keys on the SSH client system (master console).

Note: These keys will be used in the step documented in 5.4.3, “Configuring the PuTTY
session for the CLI” on page 118.

1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 5-1), generate the keys:
a. Select the SSH2 RSA radio button.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.

Figure 5-1 PuTTY key generator GUI



3. The message in the Key section of the window changes. Figure 5-2 shows this message.

Figure 5-2 PuTTY random key generation

Note: The blank area indicated by the message is the large blank rectangle on the GUI
inside the section of the GUI labelled Key. Continue to move the mouse pointer over the
blank area until the progress bar reaches the far right. This generates random
characters to create a unique key pair.



4. After the keys are generated, save them for later use as follows:
a. Click Save public key as shown in Figure 5-3.

Figure 5-3 Saving the public key

b. You are prompted for a name (for example, pubkey) and a location for the public key
(for example, C:\Support Utils\PuTTY). Click Save.
If another name or location is chosen, ensure that a record of them is kept, because
the name and location of this SSH public key must be specified in the step documented
in section 5.4.3, “Configuring the PuTTY session for the CLI” on page 118.

Note: The PuTTY Key Generator saves the public key with no extension by default.
We recommend that you use the string “pub” in naming the public key, for example,
“pubkey”, to easily differentiate the SSH public key from the SSH private key.

c. In the PuTTY Key Generator window, click Save private key.


d. You are prompted with a warning message as shown in Figure 5-4. Click Yes to save
the private key without a passphrase.

Figure 5-4 Saving the private key without passphrase



e. When prompted, enter a name (for example, icat) and location for the private key (for
example, C:\Support Utils\PuTTY). Click Save.
If you choose another name or location, ensure that you keep a record of it, because
the name and location of the SSH private key must be specified when the PuTTY
session is configured in the step documented in 5.4.2, “Uploading the SSH public key
to the SVC cluster” on page 114.

Note: The PuTTY Key Generator saves the private key with the PPK extension.

5. Close the PuTTY Key Generator GUI.


6. Using Windows Explorer on the master console, navigate to the directory where the
private key was saved (for example, C:\Support Utils\PuTTY).
7. Copy the private key file (for example, icat.ppk) to the C:\Program
Files\IBM\svcconsole\cimom directory.

Important: If the private key was named something other than icat.ppk, make sure
that you rename it to icat.ppk in the C:\Program Files\IBM\svcconsole\cimom folder.
The GUI (which will be used later) expects the file to be called icat.ppk and for it to be
in this location.

8. Stop and restart the IBM CIM Object Manager so that the change will take effect:
a. From the Windows desktop, select Start → Settings → Control Panel →
Administrative Tools → Services.
b. Right-click IBM CIM Object Manager. Select Stop.
c. When Windows has stopped the service, right-click it again and select Start.
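If you prefer to script key generation rather than use the PuTTYgen GUI, an SSH library can
produce an equivalent RSA key pair. A hedged sketch using the open source paramiko
library (file names are illustrative; paramiko writes OpenSSH-format keys, so PuTTYgen is
still needed to produce the icat.ppk file that the GUI expects):

import paramiko

# Generate a 1024-bit RSA key pair, matching the key size used above.
key = paramiko.RSAKey.generate(1024)

# Private key, in OpenSSH format (convert to .ppk with PuTTYgen if needed).
key.write_private_key_file("icat_openssh")

# Public key, in the one-line OpenSSH format.
with open("pubkey", "w") as f:
    f.write("ssh-rsa %s admin@masterconsole" % key.get_base64())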



5.3 Basic installation
This section provides step-by-step instructions for building the SVC cluster initially.

5.3.1 Creating the cluster (first time) using the service panel
This section provides the step-by-step instructions needed to create the cluster for the first
time using the service panel. Use Figure 5-5 as a reference for the SVC Node 4F2 model
buttons to be pushed in the steps that follow, and Figure 5-6 for the SVC Node 8F2 model.

Figure 5-5 SVC 4F2 Node front panel



Figure 5-6 SVC 8F2 Node and SVC 8F4 Node front and operator panel

Prerequisites
Ensure that the SVC nodes are physically installed. Prior to configuring the cluster, ensure
that the following information is available:
- License: The license indicates whether the customer is permitted to use FlashCopy, MetroMirror, or both. It also indicates how much capacity the customer is licensed to virtualize.
- Cluster IP addresses: These include one for the cluster and another for service access.
- Subnet IP mask.
- Gateway IP address.



Process
After the hardware is physically installed into racks, complete the following steps to initially
configure the cluster through the service panel:
1. Choose any node that is to become a member of the cluster being created.
2. At the service panel of that node, click and release the Up or Down navigation button
continuously until Node: is displayed.

Important: If a time-out occurs when entering the input for the fields during these
steps, you must begin again from step 2. All the changes are lost, so be sure to have all
the information on hand before beginning.

3. Click and release the Left or Right navigation button continuously until Create Cluster?
is displayed.
4. Click the Select button. If IP Address: is displayed on line 1 of the service display, go to
step 5. If Delete Cluster? is displayed in line 1 of the service display, this node is already
a member of a cluster. Either the wrong node was selected, or this node was already used
in a previous cluster. The ID of this existing cluster is displayed in line 2 of the service
display.
a. If the wrong node was selected, this procedure can be exited by clicking the Left,
Right, Up, or Down button (it cancels automatically after 60 seconds).
b. If it is certain that the existing cluster is not required, follow these steps:
i. Click and hold the Up button.
ii. Click and release the Select button. Then release the Up button. This deletes the
cluster information from the node. Go back to step 1 and start again.

Important: When a cluster is deleted, all client data contained in that cluster is lost.

5. Click the Select button.


6. Use the Up or Down navigation button to change the value of the first field of the IP
Address to the value that has been chosen.

Note: Pressing and holding the Up or Down buttons will increment or decrease the IP
address field by 10s. The field value rotates from 0 to 255 with the Down button, and
from 255 to 0 with the Up button.

7. Use the Right navigation button to move to the next field. Use the Up or Down navigation
buttons to change the value of this field.
8. Repeat step 7 for each of the remaining fields of the IP address.
9. When the last field of the IP address has been changed, click the Select button.
10.Click the Right button. Subnet Mask: is displayed.
11.Click the Select button.
12.Change the fields for Subnet Mask in the same way that the IP address fields were
changed.
13.When the last field of Subnet Mask has been changed, click the Select button.
14.Click the Right navigation button. Gateway: is displayed.
15.Click the Select button.



16.Change the fields for Gateway in the same way that the IP address fields were changed.
17.When changes to all Gateway fields have been made, click the Select button.
18.Click the Right navigation button. Create Now? is displayed.
19.When the settings have all been verified as accurate, click the Select navigation button.
To review the settings before creating the cluster, use the Right and Left buttons. Make
any necessary changes, return to Create Now?, and click the Select button.
If the cluster is created successfully, Password: is displayed in line 1 of the service display
panel. Line 2 contains a randomly generated password, which is used to complete the
cluster configuration in the next section.

Important: Make a note of this password now. It is case sensitive. The password is
displayed only for approximately 60 seconds. If the password is not recorded, the
cluster configuration procedure must be started again from the beginning.

20.After the Password: display times out, if the cluster is created successfully, Cluster: is
displayed in line 1 of the service display panel. Also, the cluster IP address is displayed on
line 2 when the initial creation of the cluster is completed.
If the cluster is not created, Create Failed: is displayed in line 1 of the service display.
Line 2 contains an error code. Refer to the error codes that are documented in the IBM
System Storage Virtualization Family SAN Volume Controller: Service Guide, GC26-7901,
to find the reason why the cluster creation failed and what corrective action to take.

Important: At this time, do not repeat this procedure to add other nodes to the cluster.
Adding nodes to the cluster is accomplished in 6.1, “Adding nodes to the cluster” on
page 128, and in 7.1, “Adding nodes to the cluster” on page 140.

5.4 Completing the initial cluster setup using the SAN Volume
Controller Console GUI
After you have performed the steps in 5.3, “Basic installation” on page 103, you need to
complete the cluster setup using the SAN Volume Controller Console.

Note: You can also perform the cluster creation via a secure Web browser session. Use a
Web browser from within the SVC cluster’s subnet (make sure that the cluster IP address
svcclusterip can be reached successfully with a ping command) to access the SVC
cluster. To do this, enter the following line in the Web browser’s address field:

https://svcclusterip/create

Here svcclusterip is the SVC cluster IP address configured in the service panel earlier
(for example, 9.11.120.80).

Important: After these steps, you should open the SAN Volume Controller Console GUI.
Do not select the Create (initialize) Cluster option when adding the cluster to the GUI.

Recommendation: We strongly recommend that you follow 5.4.1 to create the cluster.



5.4.1 Configuring the GUI
If this is the first time that the SAN Volume Controller administration GUI is being used, you
must configure it as explained here:
1. Open the GUI using one of the following methods:
– Double-click the icon marked SAN Volume Controller Console on the master
console’s desktop.
– Open a Web browser on the master console and point to the address:
http://localhost:9080/ica (we accessed the master console using this method).
– Open a Web browser on a separate workstation and point to the address:
http://masterconsoleipaddress:9080/ica
2. On the Signon page (Figure 5-7), type the user ID superuser and the default password of
passw0rd. Click OK.

Note: Passwords for the central administration GUI are separate from the passwords
set for individual SVC clusters.

Figure 5-7 GUI signon

3. The first time you sign on as the superuser, you are prompted to change the superuser
password. The Change Password panel is displayed as shown in Figure 5-8. Enter the
new password in the field labelled New Password and re-enter the same password in the
field labelled Re-Enter New Password. Click OK.

Figure 5-8 Change default password



Note: Like all passwords, this is case sensitive.

4. On the GUI Welcome panel (Figure 5-9), click the Add SAN Volume Controller Cluster
button in the center of the panel. If you changed the GUI default password in step 3, this
button might not be displayed. If so, click Clusters in the My Work Panel, then select Add
Cluster from the drop-down menu.

Figure 5-9 Adding the SVC cluster for management



Important: If you followed the setup method https://svcclusterip/create as stated in
5.4, “Completing the initial cluster setup using the SAN Volume Controller Console GUI”
on page 106, do not select the Create (initialize) Cluster box as stated in step 5.

Doing so will invoke the initial cluster installation process. If it is selected, the cluster will
be re-initialized and any configuration settings entered previously are lost.

5. On the Adding Clusters panel (Figure 5-10), type the IP address of the SVC cluster and
select Create (initialize) Cluster, then click OK.

Figure 5-10 Adding Clusters panel

6. A Security Alert will pop up as shown in Figure 5-11. Click Yes to continue.

Figure 5-11 Security Alert



7. A pop-up window appears and prompts for the user ID and password of the SVC cluster,
as shown in Figure 5-12. Enter the user ID admin and the cluster admin password that was
set earlier in 5.3.1, “Creating the cluster (first time) using the service panel” on
page 103, and click OK.

Figure 5-12 SVC cluster user ID and password signon window

8. The browser accesses the SVC and displays the Create New Cluster wizard window as
shown in Figure 5-13. Click Continue.

Figure 5-13 Create New Cluster wizard



9. The Create New Cluster page (Figure 5-14) opens. Fill in the following details:
– A new admin password to replace the random one that the cluster generated: The
password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore.
It cannot start with a number and has a minimum of one character, and a maximum of
15 characters.
– A service password to access the cluster for service operation: The password is case
sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start
with a number and has a minimum of one character and a maximum of 15 characters.
– A cluster name: The cluster name is case sensitive and can consist of A to Z, a to z, 0
to 9, and the underscore. It cannot start with a number and has a minimum of one
character and a maximum of 15 characters.
– A service IP address to access the cluster for service operations.

Note: The service IP address is different from the cluster IP address. However,
because the service IP address is configured for the cluster, it must be on the same
IP subnet.

– The fabric speed of the Fibre Channel network for the SVC model types 4F2 and 8F2
(either 1 Gb/s or 2 Gb/s). The newer SVC 8F4 nodes autonegotiate the fabric speed
on a per-port basis (either 1 Gb/s, 2 Gb/s, or 4 Gb/s).
– The Administrator Password Policy check box, if selected, enables a user to reset the
password from the service panel (helpful, for example, if the password is forgotten).
This check box is optional.

Note: The SVC should be in a secure room if this function is enabled, because
anyone who knows the correct key sequence can reset the admin password.
The key sequence is as follows:
1. From the Cluster: menu item displayed on the service panel, click the Left or
Right button until Recover Cluster? is displayed.
2. Click the Select button. Service Access? should be displayed.
3. Click and hold the Up button and then click and release the Select button. This
generates a new random password. Write it down.

Important: Be careful because clicking and holding the Down button, and clicking
and releasing the Select button places the node in service mode.



Click the Create New Cluster button (see Figure 5-14).

Figure 5-14 Cluster details

Important: Make sure you confirm and retain the Administrator and Service password for
future use.



10.A number of progress windows appear as shown in Figure 5-15. Click Continue each
time when prompted.

Figure 5-15 Create New Cluster Progress page

Note: By this time, the service panel display on the front of the configured node should
display the cluster name entered previously (for example, ITSOSVC01).

11.A new page with the Error Notification Settings is shown in Figure 5-16. This setting will
be covered in 9.6.3, “Setting up error notification” on page 262. For now, click Update
Settings and then, on the next page, click Continue when prompted.

Figure 5-16 Error Notification Settings configuration page

12.The Featurization Settings window (Figure 5-17) is displayed. To continue, at a minimum
the Virtualization Limit (Gigabytes) field must be filled in. If you are licensed for
FlashCopy and MetroMirror (the panel reflects Remote Copy in this example), the
Enabled radio buttons can also be selected here. Click the Set Features button, and
click Continue when prompted.



Figure 5-17 Featurization Settings Configuration page

13.When the changes are accepted, the cluster displays the Enter Network Password
window again. Type the user name admin and the new admin password you created
under step 9.

14.Log back in.

Note: The SVC uses the standard of 1 GB = 1024 MB. Therefore, typing 10 GB in the
Featurization Settings page provides you with 10240 MB rather than 10000 MB as with
other disk subsystems. This screen uses the previous term “Remote Copy” to refer to
MetroMirror.

5.4.2 Uploading the SSH public key to the SVC cluster


After updating the featurization settings, the Add SSH Public Key page (Figure 5-18) opens.
1. Browse or type the fully qualified directory path and file name of the public key created
and saved in 5.2.1, “Generating public and private SSH key pair using PuTTY” on
page 99. Then type the name of the user ID to be associated with this admin key pair
(for example, admin) and click Add Key.

Figure 5-18 Add SSH Public Key



2. On the next page (Figure 5-19), a message is displayed indicating that a new SSH
administrator key associated with the ID admin was added. Click Continue.

Figure 5-19 Adding SSH admin key successful

3. The basic setup requirements for the SVC cluster using the SVC cluster Web interface
have now been completed. Close the following page as illustrated in Figure 5-20.

Figure 5-20 Closing page after successful cluster creation



4. The next step is to complete the installation and configuration of the SVC cluster using
either the CLI or CIM Agent and Console for SVC GUI.
a. The Viewing Clusters panel (Figure 5-21) is displayed.
i. If it does not display automatically, click Clusters in the My Work panel menu.
ii. Click the Select box for the SVC cluster, highlight the option Launch the SAN
Volume Controller Application from the drop-down menu to select it, and click
Go.

Figure 5-21 Cluster selection screen

b. Sometimes the message “Invalid SSH Fingerprint” is shown in the Availability Status;
see Figure 5-22 for an example.

Figure 5-22 Invalid SSH fingerprint

c. In order to correct this situation, use the pull-down list options and choose “Reset
fingerprints” and then click GO. Click OK when prompted. The status should change
to “OK”.
d. If the message persists, you have an SSH key problem. To correct the SSH key,
follow the steps in “Generating public and private SSH key pair using PuTTY” on
page 99, paying particular attention to the Important notes in that section.



5. If you need to maintain your SSH keys to add more keys, the Maintaining SSH Keys
panel is displayed as shown in Figure 5-23.
a. Browse to find the public key.
b. In the Access Level box, select the access level you want to assign to your key
(Administrator or Service).
c. Enter a key ID in the ID field.
d. Click the Add Key button in the lower left of the panel.

Note: Using the same ID for both access levels helps identify them both as coming
from the same SSH client for potential later maintenance of SSH key pairs. Any
descriptive string will suffice; the ID does not have to be admin, nor does it have to
match the ID used earlier for administrative access level.

Figure 5-23 Maintaining SSH Keys panel



e. An Added message should be displayed.
6. Click the X in the upper right corner of the Maintaining SSH Keys panel as shown in
Figure 5-23 and you will see the cluster selection panel as shown in Figure 5-24.

Figure 5-24 Using the Viewing Clusters panel to Launch the SAN Volume Controller Application

7. To continue with the SVC configuration, select your cluster with the check box and click
Go.
8. You have now completed the tasks required to configure the GUI for SVC administration.
a. Either close the browser session completely or leave it open on the Welcome panel
and continue to Chapter 6, “Quickstart configuration using the CLI” on page 127 or
Chapter 7, “Quickstart configuration using the GUI” on page 139 to add the second
node to the cluster.
b. If SSH access from other workstations is desired, proceed to the next section(s).

5.4.3 Configuring the PuTTY session for the CLI


Before the CLI can be used, the PuTTY session must be configured using the SSH keys
generated earlier in 5.2.1, “Generating public and private SSH key pair using PuTTY” on
page 99.

Perform these steps to configure the PuTTY session on the SSH client system:
1. From the Windows desktop, select Start → Programs → PuTTY → PuTTY to open the
PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 5-25), from the Category panel on the left,
click Session.

Note: The items selected in the Category panel affect the content that appears in the
right panel.



Figure 5-25 PuTTY Configuration window

3. In the right panel, under the Specify your connection by host name or IP address section,
select the SSH radio button. Under the Close window on exit section, select the Only on
clean exit radio button. This ensures that if there are any connection errors, they will be
displayed on the user’s screen.
4. From the Category panel on the left side of the PuTTY Configuration window, click
Connection → SSH to display the PuTTY SSH Configuration window as shown in
Figure 5-26.

Figure 5-26 PuTTY SSH Connection Configuration window

5. In the right panel, in the section Preferred SSH protocol version, select radio button 2.
6. From the Category panel on the left side of the PuTTY Configuration window, click
Connection → SSH → Auth.



7. In the right panel, in the Private key file for authentication: field under the Authentication
Parameters section, type the fully qualified directory path and file name of the SSH client
private key file created earlier. See Figure 5-27.

Figure 5-27 PuTTY Configuration: Private key location

Tip: Either click Browse to select the file name from the system directory, or
alternatively, type the fully qualified file name (for example, C:\Support
Utils\PuTTY\icat.PPK).

8. From the Category panel on the left side of the PuTTY Configuration window, click
Session.
9. In the right panel, follow these steps as shown in Figure 5-28:
a. Under the Load, save, or delete a stored session section, select Default Settings and
click Save.
b. For Host Name (or IP address), type the IP address of the SVC cluster.
c. In the Saved Sessions field, type a name (for example, SVC) to associate with this
session.
d. Click Save.



Figure 5-28 PuTTY Configuration: Saving a session

The PuTTY Configuration window can now either be closed or left open to continue.

Tip: Normally, output that comes from the SVC is wider than the default PuTTY screen
size. We recommend that you change your PuTTY window appearance to use a font with a
character size of 8. To do this, click the Appearance item in the Window tree as in
Figure 5-28 and then click Font. Choose a font with character size of 8.
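A saved session can also be launched directly from a Windows command prompt, which is
convenient for quick access. This is a minimal sketch, assuming putty.exe is on the path and
that the session was saved under the name SVC as above:

putty -load SVC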

5.4.4 Starting the PuTTY CLI session


The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the
session as detailed here:
1. From the master console desktop, open the PuTTY application by selecting Start →
Programs → PuTTY.
2. On the PuTTY Configuration window (Figure 5-29), select the session saved earlier (in our
example, SVC) and click Load.
3. Click Open.



Figure 5-29 Open PuTTY command line session

4. If this is the first time the PuTTY application has been used since generating and uploading
the SSH key pair, a PuTTY Security Alert window pops up stating that the server’s host
key is not cached, as shown in Figure 5-30. Click Yes, which
invokes the CLI.

Figure 5-30 PuTTY Security Alert



5. At the Login as: prompt, type admin and press Enter (the user ID is case sensitive). As
shown in Example 5-1, the private key used in this PuTTY session is now authenticated
against the public key uploaded to the SVC cluster.

Example 5-1 login example


login as: admin
Authenticating with public key "rsa-key-20050705" from agent
IBM_2145:ITSOSVC01:admin>

You have now completed the tasks required to configure the CLI for SVC administration from
the Master Console. You can close the PuTTY session.

Continue with the next section to configure the GUI on the master console.

Note: Starting with SVC Version 3.1, the CLI prompt has been changed to include the
cluster name in the prompt.

Configuring SSH for non-master console Windows clients


You must be able to reach the SVC cluster IP address successfully with the ping command
from the Windows workstation from which cluster access is desired. The putty.exe and
puttygen.exe programs can be downloaded from the following site:
http://www.chiark.greenend.org.uk/~sgtatham/putty/

PuTTY can also be found on the SAN Volume Controller CD-ROM that was shipped with the
SVC nodes. Generate and store the key pair as in the examples above.

To upload the public key onto the SAN Volume Controller, follow these steps:
1. Browse to the SAN Volume Controller Console at:
http://<MasterConsole_IP_Address>:9080/ica
2. Log in using the superuser account.
3. Click Clusters in the My Work panel on the left.
4. Click the Select box to the left of the cluster to which access is desired.
5. From the drop-down menu, select Maintain SSH Keys and click Go.
6. Type a descriptive ID for the workstation in the ID field.
7. Select Administrator or Service for the level of access.
8. Click Browse and locate the SSH public key on the workstation.
9. Click the Add key button.
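Once the key has been added, a quick way to verify CLI access from the workstation is
PuTTY’s command line companion, plink. The following is a minimal sketch, assuming
plink.exe is on the path; the key file name icat.PPK and the cluster address placeholder are
illustrative:

plink -i icat.PPK admin@<cluster_ip> svcinfo lsnode

If the key pair is set up correctly, the node list is returned without a password prompt.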

Configuring SSH for AIX clients


To configure SSH for AIX clients, follow these steps:
1. You must be able to reach the SVC cluster IP address successfully with the ping
command from the AIX workstation from which cluster access is desired.
2. OpenSSL must be installed for OpenSSH to work.
3. Install OpenSSH on the AIX client:
a. Installation images can be found at:
http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully, as OpenSSL must be installed before using SSH.



4. Generate an SSH key pair:
a. Issue a cd to go to the /.ssh directory.
b. Run the command ssh-keygen -t rsa.
c. The following message is displayed: Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa).
d. Pressing Enter will use the default in parentheses above; otherwise, enter a file name
(for example, aixkey) and press Enter.
e. The following prompt is displayed: Enter passphrase (empty for no passphrase):.
Press Enter.
f. The following prompt is displayed: Enter same passphrase again:. Press Enter again.
g. A message is displayed indicating the key pair has been created. The private key file
will have the name entered above (for example, aixkey). The public key file will have
the name entered above with an extension of .pub (for example, aixkey.pub).
5. Upload the public key onto the SVC by browsing to the Master Console at
http://<MasterConsole_IP_Address>:9080/ica
6. Log in under the superuser account.
7. Click Clusters in the My Work panel on the left.
8. Click the Select box to the left of the cluster to which access is desired.
9. From the drop-down menu, select Maintain SSH Keys and click Go.
10.Type a descriptive ID for the workstation in the ID field.
11.Select Administrator or Service for the level of access.
12.Click Browse and locate the SSH public key on the AIX workstation.
13.Click the Add key button.
14.To SSH from the AIX client to the SVC, on the AIX client type:
ssh admin@<clusterip>
The private key to be used can be specified by typing:
ssh -i <privatekeyname> admin@<clusterip>
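Putting these steps together, a minimal sketch of the whole flow follows; the key file name
aixkey is illustrative, and the public key must have been uploaded through the console
(steps 5 through 13) before the final command will succeed:

cd /.ssh
ssh-keygen -t rsa -f aixkey     (press Enter at both passphrase prompts)
ssh -i /.ssh/aixkey admin@<clusterip> svcinfo lsnode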

5.5 Summary
This chapter provides an overview of the Secure Shell (SSH) client-server network
application used to secure the data flow between a client and the SVC cluster configuration
node (server). It gives you instructions for generating an SSH private and public key pair
using the PuTTY application provided by the SVC master console, uploading the public key to
the SVC cluster, and copying and renaming the private key to the master console for use by
the SVC GUI and PuTTY command line interfaces.

Instructions for creating a single-node cluster from the service panel are explained in detail,
followed by instructions for completing the cluster definition using the SVC graphical user
interface (GUI) on the master console.

The next activity outlined is the configuration of the PuTTY session on the master console,
and instructions are provided for invoking the command line interface using PuTTY.



For users desiring optional access to the SVC cluster from other workstations capable of
accessing the SVC cluster’s subnet, instructions are provided for configuring the SSH client
on Windows and AIX clients at the end of the chapter.

When you have completed the instructions in this chapter, you can manage the SVC cluster
either by using the master console GUI to launch the SVC GUI application or by invoking the
SVC command line interface (CLI) using either PuTTY or the SSH client. At this point, you
can use either method to add additional nodes and manage the cluster.



Chapter 6. Quickstart configuration using the CLI
In this chapter we describe the basic configuration procedures required to get the IBM
System Storage SAN Volume Controller (SVC) environment up and running as quickly as
possible using the command line interface (CLI).

See Chapter 9, “SVC configuration and administration using the CLI” on page 211, for more
information about these and other configuration and administration procedures.

Important: The CLI is case sensitive.

6.1 Adding nodes to the cluster
After cluster creation is completed through the service panel (the front panel of one of the
SVC nodes) and cluster Web interface, only one node (the configuration node) is set up. To
be a fully functional SVC cluster, you must add a second node to the configuration.

To add a node to a cluster, gather the necessary information as explained in the following
steps:
1. We need some information from the existing node. Gather this information by using the
svcinfo lsnode node1 command as shown in Example 6-1.

Note: The names of the nodes already in the cluster are shown on the service panel
display of each node. By default, the first node is called node1. We show how to
change this in a later topic.

Example 6-1 svcinfo lsnode command


IBM_2145:ITSOSVC01:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number YM100032B422
WWNN 5005076801000364
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id
partner_node_name
config_node yes
UPS_unique_id 20400000C2484082
port_id 5005076801400364
port_status active
port_speed 2Gb
port_id 5005076801300364
port_status active
port_speed 2Gb
port_id 5005076801100364
port_status active
port_speed 2Gb
port_id 5005076801200364
port_status active
port_speed 2Gb
hardware 4F2

The most important information to look for here is the IO_group_name because we use this
when adding our second node to the SVC cluster configuration.

Note: You can see in Example 6-1 that no partner_node_id or partner_node_name
exists for node1 as yet.

2. To see which nodes are available for inclusion in the SVC cluster configuration, enter the
svcinfo lsnodecandidate command:
IBM_2145:ITSOSVC01:admin>svcinfo lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id hardware
500507680100035A 000683 YM100032B425 20400000C2484085 4F2



Important: You must also ensure that the nodes within an I/O group are attached to
different UPSs.
You can determine the UPS to which a node is attached from the UPS_serial_number
column in the output of svcinfo lsnodecandidate.

If this command returns no information even though your second node is powered on and
the zones are correctly defined, the node might contain pre-existing cluster configuration
data. If you are sure this node is not part of another active SVC cluster, you can use the
service panel to delete the existing cluster information. After this is complete, reissue the
svcinfo lsnodecandidate command and you should see the node listed.
For information about how to delete an existing cluster configuration using the service
panel, see 5.3.1, “Creating the cluster (first time) using the service panel” on page 103.
For information about storage area network (SAN) zoning, see Chapter 3, “Planning and
configuration” on page 25.

3. Using a combination of the information obtained in the previous steps, add the second
node to the SVC cluster configuration. Enter the svctask addnode command. The full
syntax of the command is:
svctask addnode {-panelname panel_name | -wwnodename wwnn_arg} [-name new_name]
-iogrp iogrp_name_or_id
Note the following explanation:
– panelname: Name of the node as it appears on the panel
– wwnodename: Worldwide node name (WWNN) of the node
– name: Name to be allocated to the node
– iogrp: I/O group to which the node is added

Note: -wwnodename and -panelname are mutually exclusive; only one is required to
uniquely identify the node.

Here is an example of this command:


IBM_2145:ITSOSVC01:admin>svctask addnode -wwnodename 500507680100035a -iogrp io_grp0
Node, id [2], successfully added
In this example:
– 500507680100035a is the ID found using the svcinfo lsnodecandidate command.

Note: The wwnodename is one of the few things in the CLI that is not case sensitive.

– io_grp0 is the name of the I/O group to which node1 belonged as found using the
svcinfo lsnode node1 command.

Note: Because we did not provide the -name parameter, the SVC automatically
generates the name nodeX (where X is the ID sequence number assigned by the SVC
internally). In our case, this is node2.

If you want to provide a name, you can use A to Z, a to z, 0 to 9, and the underscore.
The name can be between one and 15 characters in length. However, it cannot start
with a number or the word node because this prefix is reserved for SVC assignment
only.
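As an alternative sketch (not run in our configuration), the same node could have been
added using its panel name from the svcinfo lsnodecandidate output and an explicit
name; the name SVCNode2 is illustrative:

IBM_2145:ITSOSVC01:admin>svctask addnode -panelname 000683 -name SVCNode2 -iogrp io_grp0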



4. If we display the node information for node1 again, as shown in Example 6-2, node1 now
has a partner_node_id of 2 and a partner_node_name of node2.

Example 6-2 svcinfo lsnode command


IBM_2145:ITSOSVC01:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number YM100032B422
WWNN 5005076801000364
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 20400000C2484082
port_id 5005076801100364
port_status active
port_speed 2Gb
port_id 5005076801200364
port_status active
port_speed 2Gb
port_id 5005076801300364
port_status active
port_speed 2Gb
port_id 5005076801400364
port_status active
port_speed 2Gb
hardware 4F2

Note: If you have more than two nodes, then you will have to add these nodes to new I/O
groups, since each I/O group consists of exactly two nodes. Follow the foregoing directions
for adding a node, changing the iogrp parameter whenever the current I/O group has
reached its two-node limit.

You have now completed the cluster configuration and you have a fully redundant SVC
environment.

6.2 Setting the cluster time zone and time


Perform the following steps to set the cluster time zone and time:
1. Find out for which time zone your cluster is currently configured. Enter the svcinfo
showtimezone command as shown here:
IBM_2145:ITSOSVC01:admin>svcinfo showtimezone
id timezone
522 UTC
2. To find what time zone code is associated with your time zone, enter the svcinfo
lstimezones command as shown in Example 6-3. A truncated list is provided for this
example. If this setting is correct (for example, 522 UTC), you can skip to Step 4. If not,
continue with Step 3.

Example 6-3 svcinfo lstimezones command


IBM_2145:ITSOSVC01:admin>svcinfo lstimezones
. . .



508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
. . .

3. Now that you know which time zone code is correct for you (in our example 514), set the
time zone by issuing the svctask settimezone command:
IBM_2145:ITSOSVC01:admin>svctask settimezone -timezone 514
4. Set the cluster time by issuing the svctask setclustertime command:
IBM_2145:ITSOSVC01:admin>svctask setclustertime -time 051215042006
The format of the time is MMDDHHmmYYYY.

You have now completed the tasks necessary to set the cluster time zone and time.
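To confirm that the change took effect, you can rerun the svcinfo showtimezone command.
Output along the following lines would be expected (illustrative):

IBM_2145:ITSOSVC01:admin>svcinfo showtimezone
id timezone
514 US/Central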

6.3 Creating host definitions


Perform the following steps to create host definitions within the SVC:
1. To determine which host ports are eligible for definition, issue the svcinfo
lshbaportcandidate command as shown in Example 6-4.

Example 6-4 svcinfo lshbaportcandidate command


IBM_2145:ITSOSVC01:admin>svcinfo lshbaportcandidate
id
210000E08B09691D
210100E08B25C440
210000E08B08AFD6
210100E08B29691D
210000E08B05C440
210000E08B04D451
10000000C92945EB
210100E08B29951D
210000E08B09951D
210100E08B24D451
10000000C92926C3
210100E08B259C41
210000E08B059C41

This command shows all WWPNs that are visible to the SVC and that are not already
defined to a host. If your WWPN does not appear, verify that the host is logged into the
switch and that zoning is updated to allow SVC and host ports to see each other as
explained in Chapter 3, “Planning and configuration” on page 25.
If you are working with an AIX host and do not see your adapter listed in the output of
svcinfo lshbaportcandidate, rerun the cfgmgr command to encourage the host
HBAs to communicate with the SVC.



Note: There are situations when the information presented can include host HBA ports
that are no longer logged in or even part of the SAN fabric. For example, a host HBA
port is unplugged from a switch, but svcinfo lshost still shows the WWPN logged in to
all SVC nodes. The incorrect entry will be removed when another device is plugged into
the same switch port that previously contained the removed host HBA port.

2. The output from this command shows that we have eleven QLogic ports (21xxx) and two
Emulex ports (10xxx). By checking the hosts and confirming with the switch nameserver,
you determine that the 10xxx WWPNs belong to the AIX host. Therefore, you have
everything necessary to create a host definition. We can do this in one of two ways:
– You can add WWPN port definitions to a host one at a time using the mkhost and
addhostport commands as shown in Example 6-5.

Example 6-5 svctask mkhost and addhostport commands


IBM_2145:ITSOSVC01:admin>svctask mkhost -name AIX_270 -hbawwpn 10000000c92945eb -mask 1001
Host id [0] successfully created

IBM_2145:ITSOSVC01:admin>svctask addhostport -hbawwpn 10000000c92926c3 AIX_270

Note: The -name parameter is optional. If you do not specify a -name, the default is hostX,
where X is the ID sequence number assigned by the SVC internally.

The -hbawwpn parameter is mandatory; the command fails if it is not specified.

The -mask parameter is optional. It lets you specify which ports the host object can access.
The port mask must be four characters in length and is made up of a combination of '0' and
'1' characters: '0' indicates that the port cannot be used, and '1' indicates that it can. For
example, a mask of 0011 enables port 1 and port 2. The default mask is 1111 (that is, all
ports are enabled).

See this Technote for more information about the port mask and a documentation correction:
http://www-1.ibm.com/support/docview.wss?rs=591&context=STCFKTH&context=STCFKTW&dc=DB500&uid=ssg1S1002904&loc=en_US&cs=utf-8&lang=en
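As an illustration of the mask (not run in our configuration), the following sketch defines a
host that can only use SVC ports 1 and 2 on each node; the host name is hypothetical, and
the WWPN is one of the candidates listed in Example 6-4:

IBM_2145:ITSOSVC01:admin>svctask mkhost -name LNX_TEST -hbawwpn 210000e08b04d451 -mask 0011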

If you prefer to provide a name for your host (as we have), you can use letters A to Z, a to
z, numbers 0 to 9, and the underscore. It can be between one and 15 characters in length.
However, it cannot start with a number or the word host because this prefix is reserved for
SVC assignment only.

Tip: Some HBA device drivers will not log in to the fabric until they can see target LUNs.
As they do not log in, their WWPNs will not be known as candidate ports. You can specify
the force flag (-force) with this command to stop the validation of the WWPN list.

The -name parameter is used to name the host (in our case AIX_270), and the -hbawwpn
parameter is filled in using data retrieved from the lshbaportcandidate command.
– Add all ports at the same time using a modification of the mkhost command, specifying
host WWPNs as a colon-separated list as shown here:
IBM_2145:ITSOSVC01:admin>svctask mkhost -name W2K_npsrv3 -hbawwpn
210100e08b259c41:210000e08b059c41 -mask 1001
Host id [1] successfully created



3. Check that the host definitions were correctly created using the svcinfo lshost command
as shown in Example 6-6.

Example 6-6 svcinfo lshost commands


IBM_2145:ITSOSVC01:admin>svcinfo lshost
id name port_count iogrp_count
0 AIX_270 2 4
1 W2K_npsrv3 2 4
IBM_2145:ITSOSVC01:admin>svcinfo lshost AIX_270
id 0
name AIX_270
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C92945EB
node_logged_in_count 2
state active
WWPN 10000000C92926C3
node_logged_in_count 2
state active

IBM_2145:ITSOSVC01:admin>svcinfo lshost W2K_npsrv3


id 1
name W2K_npsrv3
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210100E08B259C41
node_logged_in_count 2
state active
WWPN 210000E08B059C41
node_logged_in_count 2
state active

You have now completed the tasks required to add host definitions to your SVC configuration.

6.4 Displaying managed disks


Perform the following steps to display managed disks (MDisks):
1. First, see which MDisks are available. Enter the svcinfo lsmdiskcandidate command as
shown in Example 6-7. This displays all detected MDisks that are not currently part of a
managed disk group (MDG).

Example 6-7 svcinfo lsmdiskcandidate command


IBM_2145:ITSOSVC01:admin>svcinfo lsmdiskcandidate
id
0
1
2
3
4
5
6



Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo
lsmdisk command as shown in Example 6-8.

Example 6-8 svcinfo lsmdisk command


IBM_2145:ITSOSVC01:admin>svcinfo lsmdisk -delim:
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:unmanaged:::200.4GB:0000000000000000:controller0:600a0b80000fbdf0000002753f
b8b1c200000000000000000000000000000000
1:mdisk1:online:unmanaged:::200.4GB:0000000000000001:controller0:600a0b80000fbdfc000002a33f
b8b30900000000000000000000000000000000
2:mdisk2:online:unmanaged:::407.2GB:0000000000000002:controller0:600a0b80000fbdf0000002773f
b8b22a00000000000000000000000000000000
3:mdisk3:online:unmanaged:::407.2GB:0000000000000003:controller0:600a0b80000fbdfc0000029f3f
b8b1f700000000000000000000000000000000
4:mdisk4:online:unmanaged:::817.4GB:0000000000000004:controller0:600a0b80000fbdf0000002793f
b8b2ac00000000000000000000000000000000
5:mdisk5:online:unmanaged:::681.2GB:0000000000000005:controller0:600a0b80000fbdfc000002a13f
b8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:controller0:600a0b80000fbdfc000002a53f
bcda8100000000000000000000000000000000

From this output, you can see additional information about each MDisk (such as current
status). For the purpose of our current task, we are only interested in the unmanaged
disks because they are candidates for MDGs (all MDisks in our case).

Tip: The -delim : parameter prints each MDisk as a single line of colon-separated
fields instead of wrapping the output over multiple lines.

2. If not all of the MDisks that you expect are visible, rescan the Fibre Channel network by
entering the svctask detectmdisk command:
IBM_2145:ITSOSVC01:admin>svctask detectmdisk
3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are
still not visible, check that the logical unit numbers (LUNs) from your subsystem have
been properly assigned to the SVC and that appropriate zoning is in place (for example,
the SVC can see the disk subsystem). See Chapter 3, “Planning and configuration” on
page 25, for details about how to set up your SAN fabric.

Note: If you have assigned a large number of LUNs to your SVC, the discovery process
can take a while. Run the svcinfo lsmdisk command several times to check whether all
of the MDisks you are expecting are present. If not, take the corrective action suggested
above.
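For example, a recheck cycle might look like the following sketch. The -filtervalue option
shown here to list only unmanaged MDisks is an assumption that should be verified against
your code level:

IBM_2145:ITSOSVC01:admin>svctask detectmdisk
IBM_2145:ITSOSVC01:admin>svcinfo lsmdisk -filtervalue mode=unmanaged -delim :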

6.5 Creating managed disk groups


Perform the following steps to create managed disk groups (MDGs):
1. From the information obtained in the previous section, add MDisks to MDGs using one of
the following methods:
– Issue the svctask mkmdiskgrp command as shown here where you can add multiple
MDisks to the MDG at the same time:
IBM_2145:ITSOSVC01:admin>svctask mkmdiskgrp -name MDG0_DS43 -ext 32 -mdisk
mdisk0:mdisk2
mDisk Group, id [0], successfully created



This command creates an MDG called MDG0_DS43. The extent size used within this
group is 32 MB. Two MDisks (mdisk0 and mdisk2) are added to the group.

Note: The -name and -mdisk parameters are optional. If you do not enter a -name,
the default is MDiskgrpX, where X is the ID sequence number assigned by the SVC
internally. If you do not enter the -mdisk parameter, an empty MDG is created.

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and
the underscore. It can be between one and 15 characters in length, but it cannot
start with a number or the word MDiskgrp because this prefix is reserved for SVC
assignment only.

By running the svcinfo lsmdisk command again, you should now see the MDisks
(mdisk0 and mdisk2) as “managed” and part of the MDG MDG0_DS43 as shown in
Example 6-9.

Example 6-9 svcinfo lsmdisk command


IBM_2145:ITSOSVC01:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:controller0:600a0b80000fbdf000
0002753fb8b1c200000000000000000000000000000000
1:mdisk1:online:unmanaged:::200.4GB:0000000000000001:controller0:600a0b80000fbdfc000002a33f
b8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:controller0:600a0b80000fbdf000
0002773fb8b22a00000000000000000000000000000000
3:mdisk3:online:unmanaged:::407.2GB:0000000000000003:controller0:600a0b80000fbdfc0000029f3f
b8b1f700000000000000000000000000000000
4:mdisk4:online:unmanaged:::817.4GB:0000000000000004:controller0:600a0b80000fbdf0000002793f
b8b2ac00000000000000000000000000000000
5:mdisk5:online:unmanaged:::681.2GB:0000000000000005:controller0:600a0b80000fbdfc000002a13f
b8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:controller0:600a0b80000fbdfc000002a53f
bcda8100000000000000000000000000000000

– If you want to add an MDisk to an existing MDG, or want to add MDisks one at a time,
use the mkmdiskgrp command to create the initial MDG and then use the addmdisk
command, as shown in Example 6-10, to add the other MDisks to it.

Example 6-10 svctask mkmdiskgrp and addmdisk commands


IBM_2145:ITSOSVC01:admin>svctask mkmdiskgrp -name MDG1_DS43 -ext 32 -mdisk mdisk1
MDisk Group, id [1], successfully created

IBM_2145:ITSOSVC01:admin>svctask addmdisk -mdisk mdisk3 MDG1_DS43

The first command in this example creates an MDisk group called MDG1_DS43. The
extent size used within this group is 32 MB. One MDisk (mdisk1) is added to the group.
The second command adds a second MDisk (mdisk3) to the same MDG.
By running the svcinfo lsmdisk command again, you now see the MDisks (mdisk1
and mdisk3) as “managed” and part of the MDG MDG1_DS43 (see Example 6-11).



Example 6-11 svcinfo lsmdisk command
IBM_2145:ITSOSVC01:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:controller0:600a0b80000fbdf000
0002753fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:controller0:600a0b80000fbdfc00
0002a33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:controller0:600a0b80000fbdf000
0002773fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:controller0:600a0b80000fbdfc00
00029f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:unmanaged:::817.4GB:0000000000000004:controller0:600a0b80000fbdf0000002793f
b8b2ac00000000000000000000000000000000
5:mdisk5:online:unmanaged:::681.2GB:0000000000000005:controller0:600a0b80000fbdfc000002a13f
b8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:controller0:600a0b80000fbdfc000002a53f
bcda8100000000000000000000000000000000

For information about other tasks, such as adding MDisks to MDGs, renaming MDGs, or
deleting MDGs, see Chapter 9, “SVC configuration and administration using the CLI” on
page 211.

You have now completed the tasks required to create an MDG.

6.6 Creating a virtual disk


When creating a virtual disk (VDisk), you must enter several parameters (some mandatory,
some optional) at the CLI. The full command syntax is:
svctask mkvdisk -mdiskgrp name|id -iogrp name|id -size size [-fmtdisk]
[-vtype seq|striped|image] [-node name|id] [-unit b|kb|mb|gb|tb|pb]
[-mdisk name|id_list] [-name name] [-udid udid] [-cache readwrite|none]

The parameters are defined as follows:


- -mdiskgrp: Name or ID of the MDG in which to create the VDisk.
- -iogrp: Name or ID of the I/O group which is to own the VDisk.
- -udid: Optional parameter to specify a UDID for the VDisk. Valid options are a decimal
number from 0 to 32767, or a hex number from 0 to 0x7FFF. A hex number must be
preceded by ‘0x’ (such as 0x1234). Used in OpenVMS support.
- -size: Capacity (numerical); not necessary for image mode VDisks.
- -fmtdisk: Optional parameter to force a format of the new VDisk.
- -vtype: Optional parameter to specify the type of VDisk (sequential, striped, or image
mode). The default (if nothing is specified) is striped.
- -node: Optional parameter to specify the name or ID of the preferred node for I/O
operations to this VDisk. The default (if nothing is specified) is to alternate between nodes
in the I/O group.
- -unit: Optional parameter to specify the data units for the capacity parameter. The default
(if nothing is specified) is MB.
- -mdisk: Optional parameter to specify the name or ID of the MDisk or MDisks to be used
for the VDisk. This is only required for sequential and image mode VDisks, since striped
VDisks use all MDisks that are available in the MDG by default.

Note: You can use this parameter for striped VDisks, for example, if you want to
specify that the VDisk only uses a subset of the MDisks available within a MDG.

- -name: Optional parameter to assign a name to the new VDisk. The default (if nothing is
specified) is to assign the name VDiskX, where X is the ID sequence number assigned by
the SVC internally.

Note: If you want to provide a name, you can use A to Z, a to z, 0 to 9, and the
underscore. It can be between one and 15 characters in length. However, it cannot
start with a number or the word VDisk since this prefix is reserved for SVC assignment
only.

- -cache: Optionally specifies the caching options for the VDisk. Valid entries are readwrite
or none. If -cache is not entered, the default, readwrite, is used.
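Pulling several of the optional parameters together, the following sketch (not run in our
configuration; the VDisk name is illustrative, while the MDG and MDisks are the ones
created earlier) creates a striped VDisk restricted to a subset of MDisks with caching
disabled:

IBM_2145:ITSOSVC01:admin>svctask mkvdisk -mdiskgrp MDG0_DS43 -iogrp io_grp0 -size 5 -unit gb -vtype striped -mdisk mdisk0:mdisk2 -cache none -name VD_TEST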

Now perform the following steps to create VDisks:


1. Create a striped VDisk using the svctask mkvdisk command (we cover sequential and
image mode VDisks in a later section). See Example 6-12. This command creates a
10 GB striped VDisk called VD0_AIX_270 within the MDG MDG0_DS43 and assigns it to
the I/O group io_grp0.

Example 6-12 svctask mkvdisk commands


IBM_2145:ITSOSVC01:admin>svctask mkvdisk -mdiskgrp MDG0_DS43 -iogrp io_grp0 -size 10 -vtype
striped -unit gb -name VD0_AIX_270
Virtual Disk, id [0], successfully created

2. Create the remaining VDisks (five more in this example) by issuing the previous command
several times, changing the -name parameter for each VDisk. The result can be displayed
using the svcinfo lsvdisk command as shown in Example 6-13.

Example 6-13 svcinfo lsvdisk command


IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name:vdisk_UDID
0:VD0_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::60050768018500C47000000000000
000
1:VD1_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::60050768018500C47000000000000
001
2:VD2_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::60050768018500C47000000000000
002
3:VD3_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::60050768018500C47000000000000
003
4:VD4_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::60050768018500C47000000000000
004
5:VD5_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:::::60050768018500C47000000000000
005

To display more information about a specific VDisk, enter a variant of the svcinfo lsvdisk
command as shown in Example 6-14 on page 137.

Example 6-14 svcinfo lsvdisk command


IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk VD0_AIX_270
id 0
name VD0_AIX_270



IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018500C47000000000000000
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid

For information about other tasks, such as deleting a VDisk, renaming a VDisk, or expanding
a VDisk, see Chapter 9, “SVC configuration and administration using the CLI” on page 211.

You have now completed the tasks required to create a VDisk.

6.7 Assigning a VDisk to a host


Using the VDisk and host definitions created in the previous sections, assign VDisks to hosts
so they are ready for use. To do this, use the svctask mkvdiskhostmap command:
IBM_2145:ITSOSVC01:admin>svctask mkvdiskhostmap -host AIX_270 VD0_AIX_270
Virtual Disk to Host map, id [0], successfully created

IBM_2145:ITSOSVC01:admin>svctask mkvdiskhostmap -host AIX_270 VD1_AIX_270


Virtual Disk to Host map, id [1], successfully created

These commands assign the VDisks VD0_AIX_270 and VD1_AIX_270 to the host AIX_270,
as confirmed by the output of the following command:
IBM_2145:ITSOSVC01:admin>svcinfo lshostvdiskmap -delim :
id:name:SCSI_id:vdisk_id:vdisk_name:wwpn:vdisk_UID
0:AIX_270:0:1:VD1_AIX_270:10000000c92945eb:60050768018500C47000000000000001
1:AIX_270:0:0:VD0_AIX_270:10000000c92926c3:60050768018500C47000000000000000

Note: The optional parameter -scsi scsi_num can help assign a specific LUN ID to a
VDisk that is to be associated with a given host. The default (if nothing is specified) is to
increment based on what is already assigned to the host.
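For instance, a sketch of mapping a further VDisk with an explicit SCSI LUN ID (the value 2
here is illustrative) might look like this:

IBM_2145:ITSOSVC01:admin>svctask mkvdiskhostmap -host AIX_270 -scsi 2 VD2_AIX_270
Virtual Disk to Host map, id [2], successfully created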

For information about other tasks, such as deleting a VDisk to host mapping, see Chapter 9,
“SVC configuration and administration using the CLI” on page 211.

You have now completed all the tasks required to assign a VDisk to an attached host. You are
ready to proceed to Chapter 8, “Host configuration” on page 165, to begin to use the
assigned VDisks.



Chapter 7. Quickstart configuration using the GUI
In this chapter we describe the basic configuration procedures required to get your IBM
System Storage SAN Volume Controller (SVC) environment up and running as quickly as
possible using the Master Console and its associated Graphical User Interface (GUI).

See Chapter 10, “SVC configuration and administration using the GUI” on page 275, for more
information about these and other configuration and administration procedures.

Important: Data entries made through the GUI are case sensitive.

7.1 Adding nodes to the cluster
After cluster creation is completed through the service panel (the front panel of one of the
SVC nodes) and cluster Web interface, only one node (the configuration node) is set up. To
be a fully functional SVC cluster, at least a second node must be added to the configuration.

Perform the following steps to add nodes to the cluster:


1. Open the GUI using one of the following methods:
– Double-click the SAN Volume Controller Console icon on your master console’s
desktop.
– Open a Web browser on the master console and point to the address:
http://localhost:9080/ica
– Open a Web browser on a separate workstation and point to the address:
http://<masterconsole_ip_address>:9080/ica
On the Signon page (Figure 7-1), type the user ID superuser and the password passw0rd.
These are the default user ID and password. Click OK.

Figure 7-1 GUI signon page

2. You see the GUI Welcome page as shown in Figure 7-2. This page has several links:
My Work (top left), a Recent Tasks list (bottom left), the GUI version and build level
information (right, under the main graphic), and a hypertext link to the SVC download
page:
http://www.ibm.com/storage/support/2145



Under My Work on the left, click the Clusters link (Figure 7-2).

Figure 7-2 GUI Welcome page

3. On the Viewing Clusters panel (Figure 7-3), select the radio button next to the cluster on
which you want to perform actions (in our case ITSOSVC01). Select Launch the SAN
Volume Controller application from the drop-down list and click Go.

Figure 7-3 Selecting to launch the SAN Volume Controller application

4. The SAN Volume Controller Console Application launches in a separate browser window
(Figure 7-4). On this page, as with the Welcome page, you can see several links under My
Work (top left), a Recent Tasks list (bottom left), the SVC Console version and build level
information (right, under main graphic), and a hypertext link that will bring you to the SVC
download page:
http://www.ibm.com/storage/support/2145
Under My Work, click the Work with Nodes option and then the Nodes link.



Figure 7-4 SVC Console Welcome page

5. The Viewing Nodes panel (Figure 7-5) opens. Note the input/output (I/O) group name (for
example, io_grp0). Select the node you want to add. Ensure that Add a node is selected
from the drop-down list and click Go.

Figure 7-5 Viewing Nodes panel

Note: You can rename the existing node to your own naming convention standards
(we show how to do this later). On your panel, it should appear as node1 by default.



6. The next panel (Figure 7-6) displays the available nodes. Select the node, from the
Available Candidate Nodes drop-down list. Associate it with an I/O group and provide a
name (for example, SVCNode2). Click OK.

Figure 7-6 Adding a Node to a Cluster panel

Note: If you do not provide a name, the SVC automatically generates the name nodeX,
where X is the ID sequence number assigned by the SVC internally. In our case, that
would have been node2.

If you want to provide a name (as we have), you can use letters A to Z, a to z, numbers
0 to 9, and the underscore. It can be between one and 15 characters in length, but
cannot start with a number or the word node since this prefix is reserved for SVC
assignment only.

In our case, we only have enough nodes to complete the formation of one I/O group.
Therefore, we added our new node to the I/O group that SVCNode_1 was already using,
namely io_grp0 (you can rename it from the default of io_grp0 using your own naming
convention standards).
If this panel does not display any available nodes (indicated by the message
CMMVC1100I The candidate nodes are not available), check that your second node is
powered on and that zones are appropriately configured in your switches. It is also
possible that pre-existing cluster configuration data is stored on the node. If you are sure
this node is not part of another active SVC cluster, use the service panel to delete the
existing cluster information. When this is complete, return to this panel and you should
see the node listed.



For information about zoning requirements, see Chapter 3, “Planning and configuration”
on page 25. For information about how to delete an existing cluster configuration using the
service panel, see 5.3, “Basic installation” on page 103.
7. Return to the Viewing Nodes panel (Figure 7-7). It shows the status change of the node
from Adding to Online.

Figure 7-7 Node added successfully

Note: This panel does not refresh automatically. Therefore, you continue to see the
Adding status until you click the Refresh button.

You have completed the cluster configuration and now you have a fully redundant SVC
environment.

7.1.1 Installing certificates


As we continue setting up the SVC cluster, we come across many instances where we are
prompted with security warnings about unrecognized certificates. The security
warning panel (Figure 7-8) shows three options.

Figure 7-8 Security Alert window



These options are:
- Yes: Clicking Yes accepts the certificate for this task. This option allows you to proceed
using the unrecognized certificate. Each time you select a task that transmits secure
information, you are prompted to accept another certificate. In most cases, you are
prompted multiple times due to the two-way data exchange that occurs between the
management workstation and the SVC cluster. In some cases, this can cause your
browser to crash.
- No (default): Clicking this option rejects the certificate for this task and does not allow you
to proceed.
- View Certificate: Clicking this option launches the Certificate window (Figure 7-9), from
where you can install the certificate. If you do not want to be prompted repeatedly to
accept or reject certificates, we recommend that you choose this option.

Figure 7-9 Certificate Information

Follow these steps to install a certificate:


1. From the Security Alert window (Figure 7-8 on page 144), select View Certificate.
2. The Certificate window opens (see Figure 7-9). Click Install Certificate.



3. The Welcome to the Certificate Import Wizard information panel (Figure 7-10) opens.
Click Next.

Figure 7-10 Certificate Import Wizard

4. On the Certificate Store panel (Figure 7-11), click Next.

Figure 7-11 Certificate Store panel



5. You might be prompted with the Root Certificate Store confirmation window (Figure 7-12).
If you are prompted, click Yes.

Figure 7-12 Root Certificate Store

6. You should see a message stating that the import was successful (Figure 7-13). Click OK.

Figure 7-13 Certificate Import successful

7. You return to the Certificate Information window (Figure 7-9 on page 145) that you saw
earlier. Click OK.
8. Provide the admin user ID and password when prompted.

From this point, you should no longer be asked to accept or reject certificates from the SVC
cluster.

Note: Future code upgrades could result in new certificate IDs, so you might have to go
through this process again.



7.2 Setting the cluster time zone and time
Perform the following steps to set the cluster time zone and time:
1. From the SVC Welcome page (Figure 7-4 on page 142), select the Manage Cluster
option and the Set Cluster Time link.
2. The Cluster Date and Time Settings panel opens (Figure 7-14). At the top of the panel,
you see the existing settings. If necessary, make adjustments and ensure that the Update
cluster date and time and Update cluster time zone check boxes are selected. Click
Update.

Figure 7-14 Cluster Date and Time Settings panel

Note: You might be prompted for the cluster user ID and password. If you are, enter
admin and the password you set earlier.

3. You see the messages: The cluster time zone setting has been updated and
The cluster date and time settings have been updated.

You have now completed the tasks necessary to set the cluster time zone and time.



7.3 Creating host definitions
Perform the following steps to create host objects within the SVC:
1. From the SVC Welcome page (Figure 7-4 on page 142), select the Working with Hosts
option and then the Hosts link.
2. The Filtering Hosts panel (not shown) should appear. Click the Bypass filter button at the
top of this panel.
3. The Viewing Hosts panel opens (see Figure 7-15 below). Select Create a host from the
list and click Go.

Figure 7-15 Viewing Hosts panel

4. On the Creating Hosts panel (Figure 7-16), follow these steps:


a. Type a name for your host (for example, Stargazer).

Note: If you do not provide a name, the SVC automatically generates the name
hostX, where X is the ID sequence number assigned by the SVC internally.

If you want to provide a name (as we have), you can use the letters A to Z, a to z,
numbers 0 to 9, and the underscore. It can be between one and 15 characters in
length. However, it cannot start with a number or the word host because this prefix is
reserved for SVC assignment only.

b. From the Available Port list, select the WWN or WWNs, one at a time, and click the
Add button.
c. When you are done adding the WWNs, click OK.

Note: SVC firmware version 4.1 comes with a new feature called Host access by Port. Its
implementation can be seen in Figure 7-16 in the form of a Port Mask. Here we specify the
SVC FC ports that the host can access on each node. (The rightmost bit is associated with
Fibre Channel port 1 on each node. The leftmost bit is associated with port 4). For
example: 0111 prevents access by host using port 4 on each SVC node; 1100 allows
access on ports 3 and 4 but not on ports 1 and 2.

For more information see:


http://www-1.ibm.com/support/docview.wss?rs=591&context=STCFKTH&context=STCFKTW&dc=DB500&uid=ssg1S1002904&loc=en_US&cs=utf-8&lang=en



Figure 7-16 Creating Hosts panel

Note: This panel shows all WWNs that are visible to the SVC and that have not
already been defined to a host. If your WWN does not appear, check that the host
has logged into the switch and that zoning in the switches is updated to allow SVC
and host ports to see each other. This is described in Chapter 3, “Planning and
configuration” on page 25. Also note that if you are working with an AIX host and do
not see your adapter listed in the creating hosts panel, then rerun the cfgmgr
command to encourage the host HBAs to communicate with the SVC and refresh
this panel.



5. You return to the Viewing Hosts panel (Figure 7-17) where you should see your newly
created host. (We defined all our hosts connected to the SAN.)

Figure 7-17 Host added successfully

For information about other tasks, such as adding host ports, deleting host ports, or deleting
hosts, see Chapter 10, “SVC configuration and administration using the GUI” on page 275.

You have now completed the tasks required to add host definitions to your SVC configuration.

7.4 Displaying managed disks


Perform the following steps to display MDisks:
1. From the SVC Welcome page (Figure 7-4 on page 142), select the Work with Managed
Disks option and then the Managed Disks link.
2. When the Filtering Managed Disks (MDisk) panel opens, click Bypass filter to open the
Viewing Managed Disks panel.
3. On the Viewing Managed Disks panel (Figure 7-18), if your MDisks are not displayed,
rescan the Fibre Channel network. Select Discover MDisks from the list and click Go.



Figure 7-18 Discover MDisks

Note: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from
your subsystem are properly assigned to the SVC (for example, using storage partitioning
with a DS4000 or LUN masking with an ESS) and that appropriate zoning is in place (for
example, SVC can see the disk subsystem). See Chapter 3, “Planning and configuration”
on page 25, for more details about how to setup your storage area network (SAN) fabric.

7.5 Creating managed disk groups


Perform the following steps to create a managed disk group (MDG):
1. From the SVC Welcome page (Figure 7-4 on page 142), select the Work with Managed
Disks option and then the Managed Disks Groups link.
2. When the Filtering Managed Disks (MDisk) Groups panel opens, click Bypass filter.
3. The Viewing Managed Disks Groups panel opens (see Figure 7-19 below). Select Create
an MDisk Group from the list and click Go.



Figure 7-19 Selecting the option to create an MDisk group

4. On the Create Managed Disk Group panel, the wizard gives you an overview of what
will be done. Click Next.
5. On the Name the group and select the managed disks panel (Figure 7-20), follow these
steps:
a. Type a name for the MDG.

Note: If you do not provide a name, the SVC automatically generates the name
MDiskgrpX, where X is the ID sequence number assigned by the SVC internally.
If you want to provide a name (as we have), you can use the letters A to Z, a to z,
numbers 0 to 9, and the underscore. It can be between one and 15 characters in
length, but cannot start with a number or the word MDiskgrp because this prefix is
reserved for SVC assignment only.

b. From the MDisk Candidates box, one at a time, select the MDisks to put into the MDG.
Click Add to move them to the Selected MDisks box.
c. Click Next.



Figure 7-20 Name the group and select the managed disks panel

6. From the list shown in Figure 7-21, select the extent size to use. Then click Next.

Figure 7-21 Select Extent Size panel



7. On the Verify Managed Disk Group panel (Figure 7-22), verify that the information
specified is correct. Then click Finish.

Figure 7-22 Verify MDG wizard

8. Return to the Viewing Managed Disk Groups panel (Figure 7-23) where the MDG is
displayed (we created more following the same process).

Figure 7-23 MDG added successfully

For information about other tasks, such as adding MDisks to MDGs and renaming MDGs or
deleting MDGs, see Chapter 10, “SVC configuration and administration using the GUI” on
page 275.

You have now completed the tasks required to create an MDG.



7.6 Creating a VDisk
Perform the following steps to create VDisks:
1. From the SVC Welcome page (Figure 7-4 on page 142), select the Work with Virtual
Disks option and then the Virtual Disks link.
2. When the Filtering Virtual Disks (VDisks) panel opens, click Bypass filter.
3. The Viewing Virtual Disks panel opens (see Figure 7-24 here). Select Create VDisk from
the list. Click Go.

Figure 7-24 Viewing Virtual Disks panel

4. The Create Virtual Disks wizard is displayed. Click the Next button.
5. On the Choose an I/O Group and a Managed Disk Group panel (Figure 7-25), follow these
steps:
a. Select the I/O group to associate the VDisk with from the list. In our case, we only have
one, io_grp0, so we must select it from the list.

Note: You can let the system choose the preferred node and I/O group.

b. Optionally, choose a preferred node. The default (if nothing is selected) is to alternate
between nodes in the I/O group.
c. Select the MDisk group in which to create the VDisk from the list. In our case we
selected MDG_0.
d. Click Next.



Figure 7-25 Choosing an I/O group and a MDG panel



6. On the Set Attributes panel (Figure 7-26), follow these steps:
a. Select the type of VDisk to create (striped or sequential) from the list.
b. Select the cache mode (read/write, none) from the list.
c. Select a unit device identifier (a numerical value) for this VDisk.
d. Select the number of VDisks to create.
e. Click Next.

Figure 7-26 Select the Type of VDisk panel



7. On the Name the Virtual Disk(s) panel (Figure 7-27), type a name for the VDisk.
In our case, we used VDSK_. Click Next.

Note: If you do not provide a name, the SVC automatically generates the name
VDiskX, where X is the ID sequence number assigned by the SVC internally.
If you want to provide a name (as we have), you can use letters A to Z, a to z, numbers
0 to 9, and the underscore. It can be between one and 15 characters in length.
However, it cannot start with a number or the word VDisk because this prefix is
reserved for SVC assignment only.

Figure 7-27 Name the Virtual Disk(s) panel



8. On the Select Attributes for <modetype>-mode VDisk panel (modetype is the type of
VDisk you selected in the previous step) as shown in Figure 7-28, follow these steps:
a. Optionally, choose the Managed Disk Candidates upon which to create the VDisk.
Click Add to move them to the Managed Disks Striped in this Order box.
Striped VDisks, by default, use all MDisks within a MDG. Therefore, it is not necessary
to select anything here. However, you might want to select from the list, for example, if
you want to specify that the VDisk only uses a subset of the MDisks available within a
MDG.
For image and sequential VDisks, we do not see the Managed Disk Candidates or the
Managed Disks Striped in this Order box. Instead, we see the Managed Disk Used to
Create VDisk list, with the top entry selected by default.
b. Type the capacity of the VDisk. Select the unit of capacity from the list.
Remember, capacity is calculated based on 1 GB = 1024 MB. Therefore, an entry of 10
GB actually provides 10240 MB instead of 10000 MB as with other disk subsystems.
c. Optionally, choose to format the VDisk by selecting the check box.

Note: Formatting destroys any existing data on the VDisk.

d. After completing all the necessary entry fields, click Next.

Figure 7-28 Select Attributes for a VDisk panel



9. On the Verify VDisk panel (Figure 7-29), verify the selections. You can click the Back
button at any time to make changes.

Figure 7-29 Verify VDisk Attributes

10.After clicking the Finish option, we are presented with a screen (Figure 7-30) that tells
us the result of the action.

Figure 7-30 VDisk creation success



11.We click Close and see a list (Figure 7-31) of all created VDisks.

Figure 7-31 List of all created VDisks

For information about other tasks, such as deleting a VDisk, renaming a VDisk or expanding a
VDisk, see Chapter 10, “SVC configuration and administration using the GUI” on page 275.

You have now completed the tasks required to create a VDisk.

7.7 Assigning a VDisk to a host


Perform the following steps to map a VDisk to a host:
1. From the SVC Welcome page (Figure 7-4 on page 142), select the Work with Virtual
Disks option and then the Virtual Disks link.
2. When the Filtering Virtual Disks (VDisks) panel opens, click Bypass filter.
3. On the Viewing Virtual Disks panel (Figure 7-32), select the radio button next to the VDisk
to assign. Select Map a VDisk to a host from the list and click Go.

Figure 7-32 Assigning a VDisk to a host



4. On the Creating Virtual Disk-to-Host Mappings panel (Figure 7-33), select the target host,
and click OK.

Figure 7-33 Creating VDisk-to-Host Mappings panel

5. You get an information panel that displays the status as shown in Figure 7-34.

Figure 7-34 VDisk to host mapping successful

6. You now return to the Viewing Virtual Disks panel (Figure 7-32 on page 162).



For information about other tasks such as deleting a VDisk to host mapping, see Chapter 10,
“SVC configuration and administration using the GUI” on page 275.

You have now completed all the tasks required to assign a VDisk to an attached host. You are
ready to proceed to Chapter 8, “Host configuration” on page 165, to begin using the assigned
VDisks.
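For reference, the equivalent mapping can also be made from the SVC command line; a
minimal sketch, assuming the host and VDisk names used elsewhere in this book:

IBM_2145:ITSOSVC01:admin>svctask mkvdiskhostmap -host AIX_270 VD1_AIX_270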




Chapter 8. Host configuration


In this chapter we describe the basic host configuration procedures required to connect a
supported host to the IBM System Storage SAN Volume Controller (SVC).

8.1 SAN configuration
Great care must be taken when setting up the storage area network (SAN) to ensure that the
zoning rules, discussed in 3.4, “Zoning” on page 40, are followed accurately. A storage zone
must exist that comprises all SVC ports and all ports on the disk subsystem, and there should
be multiple host zones, each of which consists of one host HBA port and one port from each of
the two SVC nodes.

8.2 SVC setup


Figure 8-1 shows a basic configuration with multiple heterogeneous hosts connected to a
two-node SVC cluster through two switches.

Figure 8-1 SAN Volume Controller setup

8.3 Switch and zoning configuration


Example 8-1 lists the details of SAN switch SW21 used in our setup.

Example 8-1 Switch SW21 configuration (our “SWITCH_1”)


SW21:admin> switchshow
switchName: SW21
switchType: 9.2
switchState: Online
switchMode: Native
switchRole: Principal



switchDomain: 21
switchId: fffc15
switchWwn: 10:00:00:60:69:51:87:0a
switchBeacon: OFF
Zoning: ON (SVC_Redbook21_Conf)
port 0: id N2 Online F-Port 21:01:00:e0:8b:28:af:d6 --> Master Console
port 1: id N2 Online F-Port 50:05:07:68:01:10:03:64 --> SVC1 node 2 port 1
port 2: id N2 Online F-Port 50:05:07:68:01:10:03:5a --> SVC1 node 1 port 1
port 3: id N2 Online F-Port 50:05:07:68:01:30:03:64 --> SVC1 node 2 port 3
port 4: id N2 Online F-Port 50:05:07:68:01:30:03:5a --> SVC1 node 1 port 3
port 5: id N2 Online F-Port 20:08:00:a0:b8:0f:bd:f1 --> DS4300
port 6: id N2 No_Light
port 7: id N2 No_Light
port 8: id N2 Online F-Port 10:00:00:00:c9:29:26:c3 --> AIX_270 fcs1
port 9: id N2 Online F-Port 21:01:00:e0:8b:25:d8:21 --> Linux
port 10: -- N2 No_Module
port 11: id N2 No_Light
port 12: id N2 Online F-Port 21:00:00:e0:8b:13:82:40 --> W2K3_1
port 13: -- N2 No_Module
port 14: id N2 Online F-Port 21:00:00:e0:8b:0e:27:8c --> W2K3_2
port 15: -- N2 No_Module

SW21:admin> cfgshow
Effective configuration:
cfg: SVC_Redbook21_Conf
SVC_Storage; SVC_Console; AIX_270_SVC1; Linux_SVC1; W2K3_1_SVC1; W2K3_2_SVC1

zone: SVC_Storage 21,1; 21,2; 21,3; 21,4; 21,5


zone: SVC_Console 21,0; 21,1; 21,2; 21,3; 21,4
zone: AIX_270_SVC1 21,1; 21,2; 21,4;
zone: Linux_SVC1 21,1; 21,2; 21,9
zone: W2K3_1_SVC1 21,3; 21,4; 21,12
zone: W2K3_2_SVC1 21,3; 21,4; 21,14

Example 8-2 lists the details of SAN switch SW11 and the zoning used in Figure 8-1 on
page 166.

Example 8-2 Switch SW11 configuration (our “SWITCH_2”)


SW11:admin> switchshow
switchName: SW11
switchType: 9.2
switchState: Online
switchMode: Native
switchRole: Principal
switchDomain: 11
switchId: fffc0b
switchWwn: 10:00:00:60:69:51:87:2f
switchBeacon: OFF
Zoning: ON (SVC_Redbook21_Conf)
port 0: id N2 Online F-Port 21:00:00:e0:8b:08:af:d6 --> Master console
port 1: id N2 Online F-Port 50:05:07:68:01:20:03:64 --> SVC1 node 2 port 2
port 2: id N2 Online F-Port 50:05:07:68:01:20:03:5a --> SVC1 node 1 port 2
port 3: id N2 Online F-Port 50:05:07:68:01:40:03:64 --> SVC1 node 2 port 4
port 4: id N2 Online F-Port 50:05:07:68:01:40:03:5a --> SVC1 node 1 port 4
port 5: id N2 Online F-Port 20:09:00:a0:b8:0f:bd:f2 --> DS4300
port 6: id N2 No_Light
port 7: id N2 No_Light
port 8: id N2 Online F-Port 10:00:00:00:c9:29:45:eb --> AIX_270 fcs0
port 9: id N2 Online F-Port 21:01:00:e0:8b:02:84:d0 --> Linux



port 10: -- N2 No_Module
port 11: id N2 No_Light
port 12: id N2 Online F-Port 21:00:00:e0:8b:13:7c:47 --> W2K3_1
port 13: -- N2 No_Module
port 14: id N2 Online F-Port 21:00:00:e0:8b:0e:2a:8c --> W2K3_2
port 15: -- N2 No_Module

SW11:admin> cfgshow
Effective configuration:
cfg: SVC_Redbook21_Conf
SVC_Storage; SVC_Console; AIX_270_SVC1; Linux_SVC1; W2K3_1_SVC1; W2K3_2_SVC1

zone: SVC_Storage 11,1; 11,2; 11,3; 11,4; 11,5


zone: SVC_Console 11,0; 11,1; 11,2; 11,3; 11,4
zone: AIX_270_SVC1 11,1; 11,2; 11,4;
zone: Linux_SVC1 11,1; 11,2; 11,9
zone: W2K3_1_SVC1 11,3; 11,4; 11,12
zone: W2K3_2_SVC1 11,3; 11,4; 11,14

Although there are 16 possible paths (two server ports x eight SVC node ports), only four
paths exist because of the switch zoning. Each host zone consists of one port of the server
and one port of each node.
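As an illustration, a host zone such as W2K3_1_SVC1 can be created on a Brocade switch
with commands like the following sketch, which uses the domain,port members and the
configuration name from Example 8-1; verify the exact syntax against your switch firmware
documentation:

SW21:admin> zonecreate "W2K3_1_SVC1", "21,3; 21,4; 21,12"
SW21:admin> cfgadd "SVC_Redbook21_Conf", "W2K3_1_SVC1"
SW21:admin> cfgsave
SW21:admin> cfgenable "SVC_Redbook21_Conf"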

Figure 8-2 shows one of the zones. The zone W2K3_1_SVC1 connects the host “W2K3_1”
with SVC1. There are two paths from FC host adapter 1: one to port 1 of SVC1 node 1, and
one to port 1 of SVC1 node 2. Two other paths run from FC host adapter 2: one to port 2 of
SVC1 node 1, and one to port 2 of SVC1 node 2. That gives the host system “W2K3_1” a
maximum of four paths.

Figure 8-2 Zoning for W2K3_1_SVC1



8.3.1 Additional zoning considerations
In this section we discuss some additional considerations that you must take into account.

Host to iogrp mappings


SVC 3.1 introduced the concept of host to iogrp mappings. This allows you to scale up to
1024 hosts with an 8-node cluster by configuring 256 hosts in each iogrp. When zoning, the
host only needs to be zoned to the nodes in the iogrps with which it is associated; zoning it
to other nodes is not necessary. By default, a host is created across all iogrps. Refer to 3.3.3,
“General design considerations with the SVC” on page 36 for more information.
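As a sketch of how such an association might be made on the CLI (the host name and WWPN
are taken from our setup, while the iogrp arguments are illustrative; check the CLI reference
for the exact syntax):

IBM_2145:ITSOSVC01:admin>svctask mkhost -name W2K3_1 -hbawwpn 210000E08B138240 -iogrp io_grp0
IBM_2145:ITSOSVC01:admin>svctask addhostiogrp -iogrp io_grp1 W2K3_1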

Using port masking


From SVC 4.1, it is possible to configure which node ports a host HBA can access using port
masking on the SVC. In this case, the host port can be zoned to all SVC node ports, and load
balancing across node ports is configured on the SVC. Refer to 3.6.16, “Port masking” on
page 65 for more information.
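As an illustration, a port mask might be applied with a command like the following sketch;
the mask value here is an assumption for this example, with each bit enabling or disabling
one SVC node port (the current mask appears in the mask field of svcinfo lshost output):

IBM_2145:ITSOSVC01:admin>svctask chhost -mask 1001 W2K3_1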

8.4 AIX-specific information


The following section details specific information that relates to the connection of AIX-based
hosts to an SVC environment.

8.4.1 Configuring the AIX host


To configure the AIX host, follow these steps:
1. Install the HBA or HBAs into the AIX host system.
2. Install and configure the 2145 and SDD drivers.
3. Configure the Fibre Channel switches (zoning) if needed.
4. Connect the AIX host system to the Fibre Channel switches.
5. Configure the host, VDisks, and host mapping on the SAN Volume Controller.
6. Run the cfgmgr command to discover the VDisks created on the SVC.

8.4.2 Support information


This section details the current support information. It is vital that you check the Web sites
listed regularly for any updates.

Operating system versions and maintenance levels


The versions of AIX listed in Table 8-1 are supported. For the latest information, see this site:
http://www.storage.ibm.com/support/2145

Table 8-1 Versions of AIX supported with SVC


Operating system level More information

AIX 4.3.3 http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002865

AIX 5.1 http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002865

AIX 5.2 http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002865

AIX 5.3 http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002865



Subsystem Device Driver (SDD)
SDD is a pseudo device driver designed to support the multipathed configuration
environments within IBM products. It resides on a host system along with the native disk
device driver and provides the following functions:
򐂰 Enhanced data availability
򐂰 Dynamic input/output (I/O) load balancing across multiple paths
򐂰 Automatic path failover protection
򐂰 Concurrent download of licensed internal code

SDD works by grouping the physical paths to an SVC LUN, each represented by an individual
hdisk device within AIX, into a single vpath device (for example, four physical paths to an SVC
LUN produce four new hdisk devices within AIX, which SDD groups into one vpath). From this
moment onwards, AIX uses this vpath device to route I/O to the SVC LUN. Therefore, when
making an LVM volume group using mkvg, we specify the vpath device as the destination and
not the hdisk devices.

At the time of writing, the following version of SDD for AIX is supported:
򐂰 SDD for AIX version 1.6.1.0

See this Web site for the latest information:


http://www.storage.ibm.com/support/2145

Supported host adapters


For the most current information regarding the HBAs supported for AIX, refer to the following
Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002864#_Supported_HBAs_-_by_host_
operating_

8.4.3 Host adapter configuration settings


You can check the availability of the FC Host Adapters by using the command shown in
Example 8-3.

Example 8-3 FC Host Adapter availability


# lsdev -Cc adapter |grep fcs
fcs0 Available 20-58 FC Adapter
fcs1 Available 20-60 FC Adapter

You can also find the worldwide port name (WWPN) of your FC Host Adapter and check the
firmware level as shown in Example 8-4. The Network Address is the WWPN for the FC
adapter.

Example 8-4 FC Host Adapter settings and WWPN


# lscfg -vl fcs0
fcs0 P2-I4/Q1 FC Adapter

Part Number.................09P1162
EC Level....................D
Serial Number...............KT12203617
Feature Code/Marketing ID...2765
Manufacturer................0010
FRU Number..................09P1173
Network Address.............10000000C92945EB
ROS Level and ID............02903331
Device Specific.(Z0)........4002206D



Device Specific.(Z1)........10020193
Device Specific.(Z2)........3001506D
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF101493
Device Specific.(Z5)........02903331
Device Specific.(Z6)........06113331
Device Specific.(Z7)........07113331
Device Specific.(Z8)........20000000C92945EB
Device Specific.(Z9)........SS3.30X1
Device Specific.(ZA)........S1F3.30X1
Device Specific.(ZB)........S2F3.30X1
Device Specific.(YL)........P2-I4/Q1

# lscfg -vl fcs1


fcs1 P2-I1/Q1 FC Adapter

Part Number.................03N2452
EC Level....................D
Serial Number...............1C13308E15
Manufacturer................001C
Feature Code/Marketing ID...2765
FRU Number..................09P0102
Network Address.............10000000C92926C3
ROS Level and ID............02C03891
Device Specific.(Z0)........1002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C92926C3
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........P2-I1/Q1

8.4.4 SDD installation


At the time of writing, version 1.6.1.0 for AIX of SDD is supported. See the following Web site
for the latest information about SDD for AIX:
http://www.storage.ibm.com/support/2145

After downloading the appropriate version of the SDD, install it using the standard AIX
installation procedure.

In Example 8-5 we show the appropriate version of SDD downloaded into the /tmp/sdd
directory. From here we initiate the inutoc command, which generates a dot.toc (.toc) file that
is needed by the installp command prior to installing SDD. Finally, we initiate the installp
command, which installs SDD onto this AIX host.

Example 8-5 Installing SDD on AIX


# ls -al
total 3016
drwxr-xr-x 2 root system 256 Jun 27 13:45 .
drwxr-xr-x 23 root system 4096 Jun 27 13:45 ..
-rw-r----- 1 root system 1536000 Jun 27 13:45 devices.sdd.53.rte



# inutoc .

# ls -la
total 3024
drwxr-xr-x 2 root system 256 Jun 27 13:46 .
drwxr-xr-x 23 root system 4096 Jun 27 13:45 ..
-rw-r--r-- 1 root system 473 Jun 27 13:46 .toc
-rw-r----- 1 root system 1536000 Jun 27 13:45 devices.sdd.53.rte

# installp -ac -d . all

Example 8-6 checks the installation of SDD and the appropriate SVC drivers.

Example 8-6 Checking 2145 SAN Volume Controller and SDD device driver
# lslpp -l | grep -i sdd
devices.sdd.53.rte 1.6.1.0 COMMITTED IBM Subsystem Device Driver
devices.sdd.53.rte 1.6.1.0 COMMITTED IBM Subsystem Device Driver

# lslpp -l | grep "IBM FCP"


devices.fcp.disk.ibm.rte 1.0.0.6 COMMITTED IBM FCP Disk Device
devices.fcp.disk.ibm.rte 1.0.0.6 COMMITTED IBM FCP Disk Device

Note: A specific “2145” devices.fcp file no longer exists. The standard devices.fcp file
now provides combined support for SVC / ESS / DS8000 / DS6000.

We can also check that the SDD server is operational as shown in Example 8-7.

Example 8-7 SDD server is operational


# lssrc -s sddsrv
Subsystem Group PID Status
sddsrv 5828 active

# ps -eaf |grep sdd


root 5828 7744 0 09:27:13 - 0:22 /usr/sbin/sddsrv
root 18666 17554 1 18:00:55 pts/1 0:00 grep sdd

8.4.5 Discovering the assigned VDisk


Before adding a new volume from the SVC, the AIX host system “AIX_270” had a “vanilla”
configuration as shown in Example 8-8.

Example 8-8 Status of AIX host system ‘AIX_270’

# lspv
hdisk0 000c309d1df43813 rootvg active
hdisk1 000c309df4fd2353 None

# lsvg
rootvg



In Example 8-9 we show SVC configuration information relating to our AIX host; specifically
the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this
configuration.

Using the SVC CLI we can check that the host WWPNs, as listed in Example 8-4 on
page 170, are logged into the SVC for the host definition “AIX_270”, by entering:
svcinfo lshost AIX_270

We can also find the serial numbers of the VDisks using the following command:
svcinfo lshostvdiskmap

Example 8-9 SVC definitions for host system ‘AIX_270’


IBM_2145:ITSOSVC01:admin>svcinfo lshost AIX_270
id 0
name AIX_270
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C92945EB
node_logged_in_count 2
state active
WWPN 10000000C92926C3
node_logged_in_count 2
state active

IBM_2145:ITSOSVC01:admin>svcinfo lshostvdiskmap
id / name / SCSI_id / vdisk_id / vdisk_name / wwpn / vdisk_UID
0 / AIX_270 / 0 / 3 / VD1_AIX_270 / 10000000C92945EB / 600507680189801B2000000000000006
0 / AIX_270 / 1 / 4 / VD2_AIX_270 / 10000000C92945EB / 600507680189801B2000000000000007
0 / AIX_270 / 2 / 5 / VD3_AIX_270 / 10000000C92945EB / 600507680189801B2000000000000008
0 / AIX_270 / 3 / 6 / VD4_AIX_270 / 10000000C92945EB / 600507680189801B2000000000000009
0 / AIX_270 / 4 / 7 / VD5_AIX_270 / 10000000C92945EB / 600507680189801B200000000000000A

IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk VD1_AIX_270


id 3
name VD1_AIX_270
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000007
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid

IBM_2145:ITSOSVC01:admin>svcinfo lsvdiskhostmap VD1_AIX_270



id / name / SCSI_id / host_id / host_name / wwpn / vdisk_UID
3 / VD1_AIX_270 / 0 / 0 / AIX_270 / 10000000C92945EB / 600507680189801B2000000000000006
3 / VD1_AIX_270 / 0 / 0 / AIX_270 / 10000000C92926C3 / 600507680189801B2000000000000006

We need to run cfgmgr on the AIX host to discover the new disks and enable us to start the
vpath configuration; if you run the config manager (cfgmgr) on each FC adapter, it will not
create the vpaths, only the new hdisks. To configure the vpaths, we need to run the
cfallvpath command after issuing the cfgmgr command on each of the FC adapters:
# cfgmgr -l fcs0
# cfgmgr -l fcs1
# cfallvpath

Alternatively, use the cfgmgr -vS command to check the complete system. This command
will probe the devices sequentially across all FC adapters and attached disks; however, it is
very time intensive:
# cfgmgr -vS

The raw SVC disk configuration of the AIX host system now appears as shown in
Example 8-10. We can see the multiple hdisk devices, representing the multiple routes to the
same SVC LUN and we can see the vpath devices available for configuration.

Example 8-10 VDisks from SVC added with multiple different paths for each VDisk
# lsdev -Cc disk
hdisk0 Available 10-60-00-8,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-9,0 16 Bit SCSI Disk Drive
hdisk2 Available 10-70-01 SAN Volume Controller Device
hdisk3 Available 10-70-01 SAN Volume Controller Device
hdisk4 Available 20-58-01 SAN Volume Controller Device
hdisk5 Available 20-58-01 SAN Volume Controller Device
hdisk6 Available 10-70-01 SAN Volume Controller Device
hdisk7 Available 10-70-01 SAN Volume Controller Device
hdisk8 Available 10-70-01 SAN Volume Controller Device
hdisk9 Available 10-70-01 SAN Volume Controller Device
hdisk10 Available 10-70-01 SAN Volume Controller Device
hdisk11 Available 10-70-01 SAN Volume Controller Device
hdisk12 Available 10-70-01 SAN Volume Controller Device
hdisk13 Available 10-70-01 SAN Volume Controller Device
hdisk14 Available 20-58-01 SAN Volume Controller Device
hdisk15 Available 20-58-01 SAN Volume Controller Device
hdisk16 Available 20-58-01 SAN Volume Controller Device
hdisk17 Available 20-58-01 SAN Volume Controller Device
hdisk18 Available 20-58-01 SAN Volume Controller Device
hdisk19 Available 20-58-01 SAN Volume Controller Device
hdisk20 Available 20-58-01 SAN Volume Controller Device
hdisk21 Available 20-58-01 SAN Volume Controller Device
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver
vpath2 Available Data Path Optimizer Pseudo Device Driver
vpath3 Available Data Path Optimizer Pseudo Device Driver
vpath4 Available Data Path Optimizer Pseudo Device Driver

# lspv
hdisk0 000c309d1df43813 rootvg active
hdisk1 000c309df4fd2353 None
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None



hdisk6 none None
hdisk7 none None
hdisk8 none None
hdisk9 none None
hdisk10 none None
hdisk11 none None
hdisk12 none None
hdisk13 none None
hdisk14 none None
hdisk15 none None
hdisk16 none None
hdisk17 none None
hdisk18 none None
hdisk19 none None
hdisk20 none None
hdisk21 none None
vpath0 none None
vpath1 none None
vpath2 none None
vpath3 none None
vpath4 none None

To create a volume group (for example, itsoaixvg) to host the vpath1 device, we use the mkvg
command, passing the vpath device as a parameter instead of the hdisk device. This is shown
in Example 8-11.

Example 8-11 Running the mkvg command


# mkvg -y itsoaixvg vpath1

Now, by running the lspv command, we can see that vpath1 has been assigned to the
itsoaixvg volume group, as seen in Example 8-12.
Example 8-12 Showing the vpath assignment into the volume group
# lspv
hdisk0 000c309d1df43813 rootvg active
hdisk1 000c309df4fd2353 None
hdisk2 none None
hdisk3 none None
hdisk4 none None
hdisk5 none None
hdisk6 none None
hdisk7 none None
hdisk8 none None
hdisk9 none None
hdisk10 none None
hdisk11 none None
hdisk12 none None
hdisk13 none None
hdisk14 none None
hdisk15 none None
hdisk16 none None
hdisk17 none None
hdisk18 none None
hdisk19 none None
hdisk20 none None
hdisk21 none None
vpath0 none None
vpath1 000c309d2f0b1e01 itsoaixvg active



vpath2 none None
vpath3 none None
vpath4 none None

The lsvpcfg command displays the new relationship between vpath1 and the itsoaixvg
volume group, and also shows each hdisk associated with vpath1, as shown in Example 8-13.
Example 8-13 Displaying the vpath to hdisk to volume group relationship
# lsvpcfg
vpath0 (Avail ) 600507680189801B2000000000000006 = hdisk2 (Avail ) hdisk7 (Avail ) hdisk12
(Avail ) hdisk17 (Avail )
vpath1 (Avail pv itsoaixvg) 600507680189801B2000000000000007 = hdisk3 (Avail ) hdisk8
(Avail ) hdisk13 (Avail ) hdisk18 (Avail )
vpath2 (Avail ) 600507680189801B2000000000000008 = hdisk4 (Avail ) hdisk9 (Avail ) hdisk14
(Avail ) hdisk19 (Avail )
vpath3 (Avail ) 600507680189801B2000000000000009 = hdisk5 (Avail ) hdisk10 (Avail ) hdisk15
(Avail ) hdisk20 (Avail )
vpath4 (Avail ) 600507680189801B200000000000000A = hdisk6 (Avail ) hdisk11 (Avail ) hdisk16
(Avail ) hdisk21 (Avail )

Example 8-14 shows that running the lspv vpath1 command produces more verbose
output for vpath1.

Example 8-14 Verbose details of vpath1


# lspv vpath1
PHYSICAL VOLUME: vpath1 VOLUME GROUP: itsoaixvg
PV IDENTIFIER: 000c309d2f0b1e01 VG IDENTIFIER 000c309d00004c00000001003dc0f44b
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 16 megabyte(s) LOGICAL VOLUMES: 2
TOTAL PPs: 639 (10224 megabytes) VG DESCRIPTORS: 2
FREE PPs: 606 (9696 megabytes) HOT SPARE: no
USED PPs: 33 (528 megabytes)
FREE DISTRIBUTION: 128..95..127..128..128
USED DISTRIBUTION: 00..33..00..00..00

8.4.6 Using SDD


Within SDD we are able to check the status of the adapters and devices now under SDD
control with the use of the datapath command set. In Example 8-15 we can see the status of
both HBA cards as NORMAL and ACTIVE.

Example 8-15 SDD commands used to check the availability of the adapters
# datapath query adapter

Active Adapters :2

Adpt# Adapter Name State Mode Select Errors Paths Active


0 fscsi0 NORMAL ACTIVE 43 0 10 2
1 fscsi1 NORMAL ACTIVE 42 0 10 2

From Example 8-16 we see detailed information about each vpath device. Initially, we see that
vpath1 is the only vpath device in an open status, because it is the only vpath currently
assigned to a volume group. Additionally, for vpath1 we see that only path #1 and path #3
have been selected (used) by SDD, because these are the two physical paths that connect to
the preferred node of the I/O group of this SVC cluster. The remaining two paths within this
vpath device are accessed only in a failover scenario.

Example 8-16 SDD commands used to check the availability of the devices
# datapath query device
Total Devices : 5

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B2000000000000006
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk2 CLOSE NORMAL 0 0
1 fscsi0/hdisk7 CLOSE NORMAL 0 0
2 fscsi1/hdisk12 CLOSE NORMAL 0 0
3 fscsi1/hdisk17 CLOSE NORMAL 0 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B2000000000000007
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 0 0
1 fscsi0/hdisk8 OPEN NORMAL 43 0
2 fscsi1/hdisk13 OPEN NORMAL 0 0
3 fscsi1/hdisk18 OPEN NORMAL 42 0

DEV#: 2 DEVICE NAME: vpath2 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B2000000000000008
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk4 CLOSE NORMAL 0 0
1 fscsi0/hdisk9 CLOSE NORMAL 0 0
2 fscsi1/hdisk14 CLOSE NORMAL 0 0
3 fscsi1/hdisk19 CLOSE NORMAL 0 0

DEV#: 3 DEVICE NAME: vpath3 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B2000000000000009
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk5 CLOSE NORMAL 0 0
1 fscsi0/hdisk10 CLOSE NORMAL 0 0
2 fscsi1/hdisk15 CLOSE NORMAL 0 0
3 fscsi1/hdisk20 CLOSE NORMAL 0 0

DEV#: 4 DEVICE NAME: vpath4 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000000A
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk6 CLOSE NORMAL 0 0
1 fscsi0/hdisk11 CLOSE NORMAL 0 0
2 fscsi1/hdisk16 CLOSE NORMAL 0 0
3 fscsi1/hdisk21 CLOSE NORMAL 0 0

8.4.7 Creating and preparing volumes for use


The volume group itsoaixvg is created using vpath1 (VD1_AIX_270). A logical volume is
created in the volume group, and then the file system itsofs1 is created and mounted on the
mount point /itsofs1, as seen in Example 8-17.



Example 8-17 ‘AIX_270’ host system new volume group and file system configuration
# lsvg -o
itsoaixvg
rootvg

# lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv00 jfslog 1 1 1 open/syncd N/A
lv00 jfs 62 62 1 open/syncd /itsofs1

# df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 16384 7960 52% 1306 16% /
/dev/hd2 1867776 1265440 33% 20505 5% /usr
/dev/hd9var 16384 11020 33% 433 11% /var
/dev/hd3 32768 29196 11% 57 1% /tmp
/dev/hd1 16384 15820 4% 18 1% /home
/proc - - - - - /proc
/dev/hd10opt 32768 26592 19% 294 4% /opt
/dev/lv00 1015808 403224 61% 19 1% /itsofs1

8.4.8 Expanding an AIX volume


It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Some
operating systems, such as AIX version 5.2 and higher, can handle a volume being expanded
even if the host has applications running. The volume group to which the VDisk is assigned,
if any, must not be a concurrent-access volume group. A VDisk that is defined in a FlashCopy,
Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping
is removed, which means the FlashCopy, Metro Mirror, or Global Mirror relationship on that
VDisk has to be stopped before it is possible to expand the VDisk.

The following steps show how to expand a volume on an AIX host, where the volume is a
VDisk from the SVC:
1. To list a VDisk size, use the command svcinfo lsvdisk <VDisk_name>. Example 8-18
shows the VDisk aix_v1 that we have allocated to our AIX server before we expand it.
Here, the capacity is 6.0 GB, and the vdisk_UID is 600507680189801B200000000000002A.

Example 8-18 Expanding a VDisk on AIX


IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk aix_v1
id 7
name aix_v1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG1_DS43
capacity 6.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B200000000000002A



throttling 0
preferred_node_id 12
fast_write_state empty
cache readwrite
udid

2. To identify which vpath this VDisk is associated to on the AIX host, we use the SDD
command, datapath query device as shown in Example 8-19. Here we can see that the
VDisk with vdisk_UID 600507680189801B200000000000002A is associated to vpath0 as the
vdisk_UID matches the SERIAL field on the AIX host.

Example 8-19 Finding the vpath for a VDisk


# datapath query device

Total Devices : 4

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000002A
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk2 OPEN NORMAL 0 0
1 fscsi0/hdisk7 OPEN NORMAL 44 0
2 fscsi1/hdisk11 OPEN NORMAL 0 0
3 fscsi1/hdisk15 OPEN NORMAL 33 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000002B
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 10 0
1 fscsi0/hdisk8 OPEN NORMAL 0 0
2 fscsi1/hdisk12 OPEN NORMAL 13 0
3 fscsi1/hdisk16 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: vpath2 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000002C
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk5 OPEN NORMAL 0 0
1 fscsi0/hdisk9 OPEN NORMAL 40 0
2 fscsi1/hdisk13 OPEN NORMAL 0 0
3 fscsi1/hdisk17 OPEN NORMAL 29 0

DEV#: 3 DEVICE NAME: vpath3 TYPE: 2145 POLICY: Optimized


SERIAL: 600507680189801B200000000000002D
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk6 OPEN NORMAL 12 0
1 fscsi0/hdisk10 OPEN NORMAL 0 0
2 fscsi1/hdisk14 OPEN NORMAL 10 0
3 fscsi1/hdisk18 OPEN NORMAL 0 0

3. To see the size of the volume on the AIX host, we use the lspv command as shown in
Example 8-20. This shows that the volume size is 6128 MB, which corresponds, after
physical partition rounding, to the 6 GB seen in Example 8-18 on page 178.



Example 8-20 Finding the size of the volume in AIX
# lspv vpath0
PHYSICAL VOLUME: vpath0 VOLUME GROUP: fc_source_vg
PV IDENTIFIER: 000c309d6de458f3 VG IDENTIFIER 000c309d00004c00000001006de474a1
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 16 megabyte(s) LOGICAL VOLUMES: 1
TOTAL PPs: 383 (6128 megabytes) VG DESCRIPTORS: 2
FREE PPs: 382 (6112 megabytes) HOT SPARE: no
USED PPs: 1 (16 megabytes)
FREE DISTRIBUTION: 77..76..76..76..77
USED DISTRIBUTION: 00..01..00..00..00

4. To expand the volume on the SVC, we use the svctask expandvdisksize command to
increase the capacity of the VDisk. In Example 8-21 we expand the VDisk by 1 GB.

Example 8-21 Expanding a VDisk


IBM_2145:ITSOSVC01:admin>svctask expandvdisksize -size 1 -unit gb aix_v1

5. To check that the VDisk has been expanded, use the svcinfo lsvdisk command. Here
we can see that the VDisk aix_v1 has been expanded to 7 GB in capacity (Example 8-22).

Example 8-22 Verifying that the VDisk has been expanded


IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk aix_v1
id 7
name aix_v1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG1_DS43
capacity 7.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B200000000000002A
throttling 0
preferred_node_id 12
fast_write_state empty
cache readwrite
udid

6. AIX has not yet recognized the change in capacity of the vpath0 volume, because no
dynamic mechanism exists within the operating system to communicate this configuration
update.
a. Therefore, to make AIX recognize the extra capacity on the volume without stopping
any applications, we use the chvg -g fc_source_vg command, where fc_source_vg is
the name of the volume group to which vpath0 belongs.
b. If AIX does not return anything, the command was successful and the volume changes
in this volume group have been saved. If AIX cannot see any changes in the volumes,
it returns a message indicating this.



7. To verify that the size of vpath0 has changed, we use the lspv command again as seen in
Example 8-23.

Example 8-23 Verify that AIX can see the new expanded VDisk
# lspv vpath0
PHYSICAL VOLUME: vpath0 VOLUME GROUP: fc_source_vg
PV IDENTIFIER: 000c309d6de458f3 VG IDENTIFIER 000c309d00004c00000001006de474a1
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 16 megabyte(s) LOGICAL VOLUMES: 1
TOTAL PPs: 447 (7152 megabytes) VG DESCRIPTORS: 2
FREE PPs: 446 (7136 megabytes) HOT SPARE: no
USED PPs: 1 (16 megabytes)
FREE DISTRIBUTION: 89..89..89..89..90
USED DISTRIBUTION: 01..00..00..00..00

Here we can see that the volume now has a size of 7152 MB, equal to 7 GB. After this, we can
expand the file systems in this volume group to use the new capacity, as in the sketch that
follows.
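As a sketch, a journaled file system in the volume group could then be grown with the chfs
command. The file system name /itsofs1 is taken from the earlier example in 8.4.7 and is
illustrative here; note that the +1G shorthand assumes AIX 5.3 syntax (earlier releases
expect the size in 512-byte blocks):

# chfs -a size=+1G /itsofs1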

8.4.9 Removing an SVC volume on AIX


Before we remove a VDisk assigned to an AIX host, we have to make sure that there is no
data on it that we need, and that no applications depend on the volume. This is a standard AIX
procedure. We move all data off the volume, remove the volume from the volume group, and
delete the vpath and the hdisks associated with the vpath. Then we remove the VDisk-to-host
mapping on the SVC for that volume and, if the VDisk is no longer needed, delete it so that its
extents become available when we create a new VDisk on the SVC.
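The following is a minimal sketch of that sequence, assuming the volume group itsoaixvg on
vpath1 (VD1_AIX_270) from the earlier examples, whose hdisks were hdisk3, hdisk8, hdisk13,
and hdisk18; adapt the names to your configuration:

# umount /itsofs1
# varyoffvg itsoaixvg
# exportvg itsoaixvg
# rmdev -dl vpath1
# rmdev -dl hdisk3     (repeat for hdisk8, hdisk13, and hdisk18)

Then, on the SVC:

IBM_2145:ITSOSVC01:admin>svctask rmvdiskhostmap -host AIX_270 VD1_AIX_270
IBM_2145:ITSOSVC01:admin>svctask rmvdisk VD1_AIX_270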

8.4.10 Running SVC commands from an AIX host system


To issue CLI commands, you must install and prepare the SSH client system on the AIX host
system. For AIX 5L™ Power 5.1, 5.2 and 5.3, you get OpenSSH from the Bonus Packs. You
also need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for Power
Systems. For AIX 4.3.3, the software is available from the AIX toolbox for Linux applications.

The AIX installation images from IBM DeveloperWorks are available at this Web site:
http://sourceforge.net/projects/openssh-aix

Here is the procedure to follow:


1. To generate the key files on AIX, issue the following command:
ssh-keygen -t rsa -f filename
The -t flag specifies the type of key to generate: rsa1, rsa2, or dsa. The value for an rsa2
key is just rsa; for rsa1, the type needs to be rsa1. When creating the key for the SVC, use
type rsa2. The -f flag specifies the file name on the AIX server in which the private key is
stored (the public key is stored under the same file name with the extension .pub appended).
2. Next you have to install the public key on the SVC, which can be done by using the master
console. Copy the public key to the master console, and install the key to the SVC, as
described in the preceding chapters.
3. On the AIX server, make sure that the private key and the public key is in the .ssh
directory, and in the home directory of the user.



4. To connect to the SVC and use a CLI session from the AIX host, issue the following
command:
ssh -l admin -i filename svc
5. You can also issue the commands directly on the AIX host, and this is useful when making
scripts. To do this, add the SVC commands to the previous command. For example, to list
the hosts defined on the SVC, enter the following command:
ssh -l admin -i filename svc svcinfo lshost
In this command, -l admin is the user on the SVC to which we connect, -i filename is the
file name of the private key we generated, and svc is the name or IP address of the SVC;
the SVC command, svcinfo lshost, follows. A scripted example is sketched below.
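As an illustration, these calls can be collected in a small script; the key file path and the
cluster name in this sketch are assumptions:

#!/bin/ksh
# List the hosts and VDisk mappings defined on the SVC
KEY=$HOME/.ssh/svckey   # private key generated with ssh-keygen (assumed path)
SVC=ITSOSVC01           # SVC cluster name or IP address (assumed)
ssh -l admin -i $KEY $SVC svcinfo lshost
ssh -l admin -i $KEY $SVC svcinfo lshostvdiskmap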

8.5 Windows-specific information


In the following sections we detail specific information about the connection of Windows 2000
and Windows 2003 based hosts to the SVC environment.

8.5.1 Configuring Windows 2000 and Windows 2003 hosts


To configure the Windows hosts, follow these steps:
1. Install the HBA or HBAs on the Windows 2000/2003 server.
2. Install and configure SDD/MPIO.
3. Shut down the Windows 2000/2003 host system.
4. Configure the switches (zoning) if needed.
5. Connect the Windows 2000/2003 server FC Host adapters to the switches.
6. Restart the Windows 2000/2003 host system.
7. Configure the host, VDisks and host mapping in the SVC.
8. Use Rescan disk in Computer Management of the Windows 2000/2003 server to discover
the VDisks created on the SAN Volume Controller.

8.5.2 Support information


This section details where to obtain various types of support information.

Operating system versions and maintenance levels


At the time of writing, the versions of Windows listed in Table 8-2 are supported. See the
following Web site for the latest information:
http://www.storage.ibm.com/support/2145

Table 8-2 Versions of Windows supported with SVC at the time of this writing
Operating system level Machine level

Windows NT4 Enterprise Server Service Pack 6a

Windows 2000 Server and Advanced Server Service Pack 4 (rollup 1)

Windows 2003 Server (Standard and Enterprise Edition) Service Pack 1



Attention: For VDisk expansion to work on Windows 2000, apply Windows 2000 Hotfix
Q327020, which is available from the Microsoft Knowledge Base at:
http://update.microsoft.com/windowsupdate/v6/default.aspx?ln=en-us

Supported host adapters


See the supported hardware list for the latest information about supported HBAs and driver
levels for Windows:
http://www.storage.ibm.com/support/2145

8.5.3 Host adapter installation and configuration


Refer to the manufacturer’s instructions for installation and configuration of the HBAs. You can
check that the FC adapter is installed as shown in Figure 8-3.

Figure 8-3 QLogic FC Host Adapter

8.5.4 SDD installation on Windows


At the time of writing, SDD V1.5.1.1 is supported for Windows NT® 4, and V1.6.0.7-1 for
Windows 2000 and Windows 2003. See the following Web site for the latest information
about SDD for Windows:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S4000054&loc=en_US&c
s=utf-8&lang=en+en#SVC/

After downloading the appropriate version of SDD from this Web site, run setup to install
SDD. Answer Yes to the Windows Digital Signature prompt, and answer Yes to reboot the
system.

You can check that the installation of SDD is complete. From the Windows desktop, click
Start → Programs → Subsystem Device Driver → readme.



8.5.5 Windows 2003 and MPIO
Microsoft Multi Path Input Output (MPIO) solutions are designed to work in conjunction with
device specific modules (DSMs) written by vendors, but the MPIO driver package does not,
by itself, form a complete solution. This joint solution allows the storage vendors to design
device specific solutions that are tightly integrated with the Windows operating system.

MPIO drivers: MPIO is not shipped with the Windows operating system; storage vendors
must package the MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM
(SDDDSM) is the IBM multipath I/O solution based on Microsoft MPIO technology; it is a
device-specific module designed to support IBM storage devices on Windows 2003
servers.

The intention of MPIO is better integration of a multipath storage solution with the operating
system; it also allows the use of multiple paths in the SAN infrastructure during the boot
process for SAN boot hosts.

8.5.6 Subsystem Device Driver Device Specific Module (SDDDSM) for SVC
Subsystem Device Driver Device Specific Module (SDDDSM) is an installation package for
the SVC device on the Windows Server® 2003 operating system.

SDDDSM is the IBM multipath I/O solution based on Microsoft MPIO technology; it is a
device-specific module designed to support IBM storage devices. Together with MPIO, it is
designed to support the multipath configuration environments in the IBM System Storage
SAN Volume Controller. It resides in a host system with the native disk device driver and
provides the following functions:
򐂰 Enhanced data availability
򐂰 Dynamic I/O load-balancing across multiple paths
򐂰 Automatic path failover protection
򐂰 Concurrent download of licensed internal code
򐂰 Path-selection policies for the host system
To download SDDDSM, go to the Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000350&lo=
en_US&cs=utf-8&lang=en



8.5.7 Discovering the assigned VDisk
Before adding a new volume from the SAN Volume Controller, the Windows 2000 host system
had the configuration shown in Figure 8-4, with only local disks.

Figure 8-4 Windows 2003 host system before adding a new volume from SVC

The configuration of the host “W2K3_2”, the VDisk “W2K3_2_1”, and the mapping between
the host and the VDisk are defined in the SAN Volume Controller as described in
Example 8-24.

We can check that the WWPN is logged into the SAN Volume Controller for the host
“W2K3_2” by entering the following command:
svcinfo lshost W2K3_2

We can also find the serial number of the VDisks by entering the following command:
svcinfo lshostvdiskmap



Example 8-24 SVC configuration for Windows 2000/2003 host system
IBM_2145:ITSOSVC01:admin>svcinfo lshost W2K3_2
id 1
name W2K3_2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B0E2A8C
node_logged_in_count 2
state active
WWPN 210000E08B0E278C
node_logged_in_count 2
state active

IBM_2145:ITSOSVC01:admin>svcinfo lshostvdiskmap W2K3_2


id / name / SCSI_id / vdisk_id / vdisk_name / wwpn / vdisk_UID
1 / W2K3_2 / 0 / 20 / W2K3_2_1 / 210000E08B0E2A8C / 600507680189801B2000000000000019
1 / W2K3_2 / 1 / 21 / W2K3_2_2 / 210000E08B0E2A8C / 600507680189801B200000000000001A
1 / W2K3_2 / 2 / 22 / W2K3_2_3 / 210000E08B0E2A8C / 600507680189801B200000000000001B

IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk W2K3_2_1


id 20
name W2K3_2_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG2_DS43
capacity 6.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000019
throttling 0
preferred_node_id 12
fast_write_state empty
cache readwrite
udid

IBM_2145:ITSOSVC01:admin>svcinfo lsvdiskhostmap W2K3_2_1


id / name / SCSI_id / host_id / host_name / wwpn / vdisk_UID
20 / W2K3_2_1 / 0 / 1 / W2K3_2 / 210000E08B0E2A8C / 600507680189801B2000000000000019
20 / W2K3_2_1 / 0 / 1 / W2K3_2 / 210000E08B0E278C / 600507680189801B2000000000000019



After the rescan disks operation completes from the Computer Management window, the
new disk is found and assigned the drive letter W:, as shown in Figure 8-5.

Figure 8-5 Windows 2003 host system with three new volumes from SVC

The volume is identified as an IBM 2145 SCSI Disk Device. The number of IBM 2145 SCSI
Disk Devices that you see is equal to:
(# of VDisks) x (# of paths per IO group per HBA) x (# of HBAs)

This is shown in Figure 8-6. This corresponds to the number of paths between the Windows
2003 host system and the SAN Volume Controller. However, we see one IBM 2145 SDD Disk
Device per VDisk.

When following the SAN zoning recommendation, this gives us for one VDisk and a host with
two HBAs: (# of VDisk) x (# of paths per IO group per HBA) x (# of HBAs) = 1 x 2 x 2 = 4
paths.



Figure 8-6 Number of devices found related to the number of paths

8.5.8 Expanding a Windows 2000/2003 volume


It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Some
operating systems, such as Windows 2000 and Windows 2003, can handle a volume being
expanded even if the host has applications running. A VDisk that is defined in a FlashCopy,
Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping
is removed, which means the FlashCopy, Metro Mirror, or Global Mirror relationship on that
VDisk has to be stopped before it is possible to expand the VDisk.

If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut
down all nodes except one, and that applications in the resource that use the volume to be
expanded are stopped before expanding the volume. Applications running in other
resources can continue. After expanding the volume, start the application and the resource,
and then restart the other nodes in the MSCS.

To expand a volume while it is in use on Windows 2000 and Windows 2003, we used Diskpart.
The Diskpart tool is part of Windows 2003; for other Windows versions, you can download it
free of charge from Microsoft. Diskpart is a tool developed by Microsoft to ease the
administration of storage. It is a command-line interface with which you can manage disks,
partitions, and volumes by using scripts or direct input on the command line. You can list
disks and volumes, select them and obtain more detailed information, create partitions,
extend volumes, and more. For more information, see the Microsoft Web site:
http://www.microsoft.com
or
http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech



An example of how to expand a volume on a Windows 2003 host, where the volume is a
VDisk from the SVC, is shown in the following discussion.

To list a VDisk size, use the command svcinfo lsvdisk <VDisk_name>. This command lists
the information for the VDisk W2K3_2_1 before we expand it (Example 8-25).

Example 8-25 Expanding a VDisk attached to Windows 2003


IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk W2K3_2_1
id 20
name W2K3_2_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG2_DS43
capacity 6.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000019
throttling 0
preferred_node_id 12
fast_write_state empty
cache readwrite
udid

Here we can see that the capacity is 6 GB, and also what the vdisk_UID is. To find which disk
this VDisk is on the Windows 2003 host, we use the SDD command datapath query device
on the Windows host. To open a command window for SDD, from your desktop, click Start →
Programs → Subsystem Device Driver → Subsystem Device Driver Management
(Example 8-26).

Example 8-26 Expanding a VDisk attached to Windows 2003 (continued)


C:\Program Files\IBM\Subsystem Device Driver>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000019
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 6626 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 6641 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B200000000000001A
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 12 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0



2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 15 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B200000000000001B
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 13 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 14 0

Here we can see that the VDisk with vdisk_UID 600507680189801B2000000000000019 is
Disk1 on the Windows host, because the vdisk_UID matches the SERIAL on the Windows
host. To see the size of the volume on the Windows host, we use disk manager as shown in
Figure 8-7 and Figure 8-8.

Figure 8-7 Volume size before expansion on Windows 2003, disk manager view



Figure 8-8 Volume size before expansion on Windows 2003, disk properties view

This shows that the volume size is 5.99 GB, effectively equal to 6 GB. To expand the volume
on the SVC, we use the command svctask expandvdisksize to increase the capacity of the
VDisk. In this example we expand the VDisk by 1 GB:
IBM_2145:ITSOSVC01:admin>svctask expandvdisksize -size 1 -unit gb W2K3_2_1

To check that the VDisk has been expanded, we use the same procedure as before. Here we
can see that the VDisk W2K3_2_1 has been expanded to 7 GB in capacity:
IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk W2K3_2_1
id 20
name W2K3_2_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG2_DS43
capacity 7.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000019
throttling 0
preferred_node_id 12
fast_write_state empty
cache readwrite
udid



In Disk Management on the Windows host, we have to perform a rescan for the disks, after
which the new capacity is shown for the disk, as shown in Figure 8-9.

Figure 8-9 Expanded volume in disk manager

This shows that Disk1 now has 1020 MB of new, unallocated capacity. To make this capacity
available to the file system, use the diskpart command at a DOS prompt:
C:\>diskpart

Microsoft DiskPart version 5.2.3790


Copyright (C) 1999-2001 Microsoft Corporation.
On computer: NPSRV3

DISKPART> list volume

Volume ### Ltr Label Fs Type Size Status Info


---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 D CD-ROM 0 B Healthy
* Volume 1 C W2K3 NTFS Partition 19 GB Healthy System
Volume 2 W VDisk NTFS Partition 6142 MB Healthy

This gives a view of all the volumes on the Windows 2003 host. Select the volume labeled
VDisk:
DISKPART> select volume 2



Volume 2 is the selected volume:
DISKPART> list volume

Volume ### Ltr Label Fs Type Size Status Info


---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 D CD-ROM 0 B Healthy
Volume 1 C W2K3 NTFS Partition 19 GB Healthy System
* Volume 2 W VDisk NTFS Partition 6142 MB Healthy

DISKPART> detail volume

Disk ### Status Size Free Dyn Gpt


-------- ---------- ------- ------- --- ---
* Disk 1 Online 7162 MB 1020 MB

This shows detailed information about the disk, including the unallocated capacity. The size
shown is the total size of the volume, including the unallocated capacity of 1020 MB. To make
the capacity available to the file system, expand the volume by issuing the extend command:
DISKPART> extend

DiskPart successfully extended the volume:


DISKPART> detail volume

Disk ### Status Size Free Dyn Gpt


-------- ---------- ------- ------- --- ---
* Disk 1 Online 7162 MB 0 B

Here we can see that there is no free capacity on the volume anymore:
DISKPART> list volume

Volume ### Ltr Label Fs Type Size Status Info


---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 D CD-ROM 0 B Healthy
Volume 1 C W2K3 NTFS Partition 19 GB Healthy System
Volume 2 W VDisk NTFS Partition 7162 MB Healthy

The list volume command now shows that the file system’s new size is 7162 MB. The result
of the expansion is shown in Figure 8-10 and Figure 8-11.
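As a side note, the same steps can be scripted with Diskpart; a sketch, assuming a file
named extend_vdisk.txt that contains the select volume 2 and extend commands used
above:

C:\>diskpart /s extend_vdisk.txt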



Figure 8-10 Disk manager after expansion of Disk1

Figure 8-11 The new capacity of Disk1

The example here used a Windows basic disk. Dynamic disks can also be expanded by
expanding the underlying SVC VDisk. The new space appears as unallocated space at the
end of the disk.



In this case, you do not need to use the Diskpart tool; the Windows Disk Management
functions are enough to allocate the new space. Expansion works irrespective of the volume
type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded
without stopping I/O in most cases. The Windows 2000 operating system might require a
hotfix, as documented in Microsoft Knowledge Base article Q327020.

Important: Never try to convert your basic disk to a dynamic disk, or vice versa, without
backing up your data, because this operation is disruptive to the data due to a different
position of the LBA on the disks.

8.5.9 Removing a disk on Windows


When we want to remove a disk from Windows, and the disk is an SVC VDisk, we need to
follow the standard Windows procedure to make sure that there is no data that we want to
preserve on the disk, that no applications are using the disk, and that no I/O is going to the
disk. After this, we remove the VDisk mapping on the SVC. Here we need to make sure that
we are removing the correct VDisk; to check this, we use SDD to find the serial number of the
disk, and on the SVC we use lshostvdiskmap to find the VDisk name and number. We also
check that the SDD serial number on the host matches the UID on the SVC for the VDisk.

When the VDisk mapping is removed, we rescan the disks; Disk Management on the server
then removes the disk, and the vpath goes into close/offline status on the server. We can
check this by using the SDD command datapath query device, but the closed vpath is
removed only after a reboot of the server. In the following sequence of examples, we show
how to remove an SVC VDisk from a Windows server.

Figure 8-12 shows the Disk Manager before removing the disk.

Figure 8-12 The Disk Manager before removing the disk



First, we remove Disk 2 (E:). To find the correct VDisk information, we find the Serial/UID
number using SDD (Example 8-27).

Example 8-27 Removing SVC disk from Windows server


C:\Program Files\IBM\Subsystem Device Driver>datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000019
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 18510 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 18555 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000037
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 3793 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 3762 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

Knowing the Serial/UID of the VDisk and the host name W2K3_2, we find the VDisk mapping
to remove by using the lshostvdiskmap command on the SVC, and after this we remove the
actual VDisk mapping (Example 8-28).

Example 8-28 Finding and removing the VDisk mapping


IBM_2145:ITSOSVC01:admin>svcinfo lshostvdiskmap W2K3_2
id / name / SCSI_id / vdisk_id / vdisk_name / wwpn / vdisk_UID
1 / W2K3_2 / 0 / 20 / W2K3_2_1 / 210000E08B0E2A8C / 600507680189801B2000000000000019
1 / W2K3_2 / 1 / 24 / W2K3_2_tgt / 210000E08B0E2A8C / 600507680189801B2000000000000037
IBM_2145:ITSOSVC01:admin>svctask rmvdiskhostmap -host W2K3_2 W2K3_2_tgt
IBM_2145:ITSOSVC01:admin>svcinfo lshostvdiskmap W2K3_2
id / name / SCSI_id / vdisk_id / vdisk_name / wwpn / vdisk_UID
1 / W2K3_2 / 0 / 20 / W2K3_2_1 / 210000E08B0E2A8C / 600507680189801B2000000000000019

Here we can see that the VDisk mapping has been removed on the SVC. On the server, we
then rescan the disks in Disk Management, and we now see that the correct disk (Disk2) has
been removed, as shown in Figure 8-13. SDD also shows us that the status for Disk2 has
changed to close/offline.



Figure 8-13 Disk Manager showing the remaining disks

In Example 8-29 we show the results of the previous operations.

Example 8-29 Final results and status

C:\Program Files\IBM\Subsystem Device Driver>datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000019
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 18522 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 18560 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680189801B2000000000000037
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 CLOSE OFFLINE 3799 0
1 Scsi Port2 Bus0/Disk2 Part0 CLOSE OFFLINE 0 0
2 Scsi Port3 Bus0/Disk2 Part0 CLOSE OFFLINE 3770 0
3 Scsi Port3 Bus0/Disk2 Part0 CLOSE OFFLINE 0 0

The disk (Disk2) is now removed from the server. However, to remove the SDD information for
the disk, we need to reboot the server; this can wait until a more suitable time.

8.5.10 Using SDD
To open a command window for SDD, from the desktop, click Start → Programs →
Subsystem Device Driver → Subsystem Device Driver Management.

We can use the SDD-specific commands explained in the IBM System Storage Multipath
Subsystem Device Driver User's Guide, SC30-4096, as shown in Figure 8-14.

Figure 8-14 Datapath query commands

Or we can open our browser with the host’s IP address as shown in Figure 8-15.

Before this can work, we need to configure SDD to activate the Web interface. In SDD 1.5.0.x
(or earlier), sddsrv by default was bound to a TCP/IP port and listening for incoming requests.
In SDD 1.5.1.x (or later), sddsrv does not bind to any TCP/IP port by default, but allows port
binding to be dynamically enabled or disabled.

For all platforms except Linux, the SDD package ships a template file of sddsrv.conf that is
named sample_sddsrv.conf. On all UNIX platforms except Linux, the sample_sddsrv.conf file
is located in the /etc directory. On Windows platforms, the sample_sddsrv.conf file is in the
directory where SDD is installed.

To create the sddsrv.conf file, copy the sample_sddsrv.conf file in the same directory and
name the copy sddsrv.conf. You can then dynamically change port binding by modifying the
parameters in sddsrv.conf.
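
For example, on a Windows host, the file can be copied from the directory where SDD is installed; a minimal sketch, using the installation path shown earlier in this chapter:

C:\Program Files\IBM\Subsystem Device Driver>copy sample_sddsrv.conf sddsrv.conf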

Figure 8-15 SDD query information using Web browser at <Win2k_1 ip add>:20001

8.5.11 Running an SVC command line (CLI) from a Windows host system
To issue CLI commands, we must install and prepare the SSH client system on the Windows
host system.

We can install the PuTTY SSH client software on a Windows host using the PuTTY
Installation program. This is in the SSHClient\PuTTY directory of the SAN Volume Controller
Console CD-ROM. Or, you can download PuTTY from the following Web site:
http://www.chiark.greenend.org.uk/~sgtatham/putty/

The following Web site offers SSH client alternatives for Windows:
http://www.openssh.com/windows.html

Cygwin software has an option to install an OpenSSH client. You can download Cygwin from
the following Web site:
http://www.cygwin.com/
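
After an SSH key pair has been set up with the cluster, you can also run a single CLI command non-interactively from a Windows command prompt by using plink, the command line connection tool that ships with PuTTY. This is a sketch only; the private key file name and cluster IP address are illustrative and must be replaced with your own:

C:\>plink -i C:\keys\icat.ppk admin@9.42.164.155 svcinfo lscluster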

8.6 Linux (on Intel) specific information


The following section details specific information pertaining to the connection of Linux on
Intel-based hosts to the SVC environment.

8.6.1 Configuring the Linux host


Follow these steps to configure the Linux host:
1. Install the HBA or HBAs on the Linux server.
2. Install the Kernel.
3. Configure the switches (zoning) if needed.
4. Connect the Linux server FC Host adapters to the switches.
5. Install SDD for Linux.
6. Configure the host, VDisks, and host mapping in the SAN Volume Controller.
7. Reboot the Linux server to discover the VDisks created on SVC.

8.6.2 Support information
For the latest support information about hardware and software, consult the IBM System
Storage SAN Volume controller Web site at:
http://www-03.ibm.com/servers/storage/software/virtualization/svc/interop.html

For version 4.1 of the SVC, the following support information was available at the time of
writing:

Software supported levels:


http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002865

Hardware supported levels:


http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002864

8.6.3 Host adapter configuration settings


See the IBM TotalStorage Virtualization SAN Volume Controller: Host Attachment Guide,
SC26-7563 for detailed information.

8.6.4 Discovering the assigned VDisk


The cat /proc/scsi/scsi command in Example 8-30 shows the devices that the SCSI driver
has probed. In our configuration, we have one HBA installed in our server and we configured
the zoning in order to access our VDisk from four paths.

Example 8-30 cat /proc/scsi/scsi command example


[root@fermium root]# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: IBM-ESXS Model: ST318452LC !# Rev: B841
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 01 Lun: 00
Vendor: IBM-ESXS Model: ST318452LC !# Rev: B841
Type: Direct-Access ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 08 Lun: 00
Vendor: IBM Model: FTlV1 S2 Rev: 0
Type: Processor ANSI SCSI revision: 02
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2145 Rev: 0000
Type: Direct-Access ANSI SCSI revision: 04
Host: scsi2 Channel: 00 Id: 01 Lun: 00
Vendor: IBM Model: 2145 Rev: 0000
Type: Direct-Access ANSI SCSI revision: 04
Host: scsi2 Channel: 00 Id: 02 Lun: 00
Vendor: IBM Model: 2145 Rev: 0000
Type: Direct-Access ANSI SCSI revision: 04
Host: scsi2 Channel: 00 Id: 03 Lun: 00
Vendor: IBM Model: 2145 Rev: 0000
Type: Direct-Access ANSI SCSI revision: 04

8.6.5 Using SDD on Linux


To install and configure the SDD for Linux, refer to the IBM TotalStorage Multipath Subsystem
Device Driver User’s Guide, SC30-4096.

The rpm -ivh IBMsdd-1.6.0.1-11.3.i686.rhel3.rpm command installs the package as seen
in Example 8-31.

Example 8-31 rpm command example


[root@fermium software]# ls
IBMsdd-1.6.0.1-11.3.i686.rhel3.rpm
[root@fermium software]# rpm -ivh IBMsdd-1.6.0.1-11.3.i686.rhel3.rpm
Preparing... ########################################### [100%]
1:IBMsdd ########################################### [100%]
Manually verify the following line is enabled in /etc/inittab:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1

SDD is installed to the /opt/IBMsdd/bin directory, as can be seen from the rpm -ql command
in Example 8-32.

Example 8-32 SDD directory example


[root@fermium root]# rpm -ql IBMsdd
/etc/cron.hourly/sddsrv_log.sh
/etc/init.d/sdd
/etc/logrotate.d/sddsrv_log.d
/etc/sddsrv.conf
/etc/vpath.conf
/opt/IBMsdd
/opt/IBMsdd/LICENSE
/opt/IBMsdd/README
/opt/IBMsdd/bin
/opt/IBMsdd/bin/addpaths
/opt/IBMsdd/bin/cfgvpath
/opt/IBMsdd/bin/datapath
/opt/IBMsdd/bin/lsvpcfg
/opt/IBMsdd/bin/make_sddevs
/opt/IBMsdd/bin/pathtest
/opt/IBMsdd/bin/rmvpath
/opt/IBMsdd/bin/sdd.rcscript
/opt/IBMsdd/bin/sddgetdata
/opt/IBMsdd/bin/sddsrv
/opt/IBMsdd/kernel_list
/opt/IBMsdd/rd_linux.txt
/opt/IBMsdd/sdd-mod.o-2.4.21-15.0.2.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-15.0.2.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-15.0.3.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-15.0.3.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-15.0.4.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-15.0.4.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-15.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-15.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-20.0.1.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-20.0.1.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-20.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-20.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-27.0.1.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-27.0.1.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-27.0.2.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-27.0.2.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-27.0.4.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-27.0.4.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-27.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-27.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-32.0.1.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-32.0.1.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-32.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-32.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-37.0.1.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-37.0.1.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-37.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-37.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-40.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-40.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-9.0.1.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-9.0.1.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-9.0.3.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-9.0.3.ELsmp
/opt/IBMsdd/sdd-mod.o-2.4.21-9.ELhugemem
/opt/IBMsdd/sdd-mod.o-2.4.21-9.ELsmp
/opt/IBMsdd/sddsrv.conf
/opt/IBMsdd/sddsrv_log.d
/opt/IBMsdd/sddsrv_log.sh
/usr/sbin/addpaths
/usr/sbin/cfgvpath
/usr/sbin/datapath
/usr/sbin/lsvpcfg
/usr/sbin/pathtest
/usr/sbin/rmvpath
/usr/sbin/sdd
/usr/sbin/sddsrv
/usr/share/doc/IBMsdd-1.6.0.1
/usr/share/doc/IBMsdd-1.6.0.1/LICENSE
/usr/share/doc/IBMsdd-1.6.0.1/README
/usr/share/doc/IBMsdd-1.6.0.1/rd_linux.txt

To manually load and configure SDD on Linux, use the service sdd start command. (SuSE
Linux users can use the sdd start command.) If you are not running a supported kernel, you
will get an error message as displayed in Example 8-33.

If your kernel is supported, you should see an OK success message.

Example 8-33 Non-supported kernel for SDD


[root@fermium root]# service sdd start
Starting IBMsdd driver load:
Linux kernel 2.4.21-32.EL is not supported. [FAILED]
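
On a supported kernel, the same command loads the SDD driver and reports success. The following is a sketch of the expected output; the exact text depends on the distribution's service script:

[root@fermium root]# service sdd start
Starting IBMsdd driver load:                               [  OK  ]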

Issue the cfgvpath query command to view the name and serial number of the VDisk
configured in the SAN Volume Controller, as shown in Example 8-34.

Example 8-34 cfgvpath query example


[root@fermium IBMsdd]# cfgvpath query
Mount /dev/sda2
Mount /dev/sda1
Mount /dev/sdb2
2: numbootdisks 0
/dev/sda2 sd major 8 devno 0x800!
/dev/sda1 sd major 8 devno 0x800!
/dev/sdb2 sd major 8 devno 0x810!
/dev/sdb1 sd major 8 devno 0x810!
2 bootdisk devices:
model: -1 0x800, xxxxxxxxxxxx
model: -1 0x810, xxxxxxxxxxxx

ioctl success but serial# is invalid? model -1


ioctl success but serial# is invalid? model -1
/dev/sdc ( 8, 32) host=2 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018100c47000000000000016 lun_id=60050768018100c47000000000000016 ctlr_flag=1
ctlr_nbr=0 df_ctlr=0
/dev/sdd ( 8, 48) host=2 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018100c47000000000000016 lun_id=60050768018100c47000000000000016 ctlr_flag=1
ctlr_nbr=1 df_ctlr=0
/dev/sde ( 8, 64) host=2 ch=0 id=2 lun=0 vid=IBM pid=2145
serial=60050768018100c47000000000000016 lun_id=60050768018100c47000000000000016 ctlr_flag=1
ctlr_nbr=0 df_ctlr=0
/dev/sdf ( 8, 80) host=2 ch=0 id=3 lun=0 vid=IBM pid=2145
serial=60050768018100c47000000000000016 lun_id=60050768018100c47000000000000016 ctlr_flag=1
ctlr_nbr=1 df_ctlr=0

The cfgvpath command configures the SDD vpath devices as shown in Example 8-35.

Example 8-35 cfgvpath command example


[root@fermium IBMsdd]# cfgvpath
ls: /dev/IBMsdd: No such file or directory
Making character device file /dev/IBMsdd at major 253
major number 254 assigned to vpath (dev: vpatha)
Added vpatha 254 0 ...
writing out new configuration to file /etc/vpath.conf
[root@fermium IBMsdd]#

The configuration information is saved by default in the file /etc/vpath.conf. You can save the
configuration information to a specified file name by entering the following command:
cfgvpath -f file_name.cfg

Issue the chkconfig command to enable SDD to run at system startup:


chkconfig sdd on

To verify the setting, enter the following command:


chkconfig --list sdd

This is shown in Example 8-36.

Example 8-36 sdd run level example


[root@fermium IBMsdd]# chkconfig sdd on
[root@fermium IBMsdd]# chkconfig --list sdd
sdd 0:off 1:off 2:off 3:on 4:off 5:on 6:off

If necessary, you can disable the startup option by entering:


chkconfig sdd off

Run the datapath query commands to display the online adapters and the paths to the
adapters. Notice that the preferred paths to one of the nodes are used, that is to say,
paths 0 and 2. Paths 1 and 3 connect to the other node and are used as alternate or backup
paths for high availability, as in Example 8-37.

Example 8-37 datapath query command example
[root@fermium root]# datapath query adapter

Active Adapters :1

Adpt# Name State Mode Select Errors Paths Active


0 Host2Channel0 NORMAL ACTIVE 160981 5 4 4
[root@fermium root]# datapath query device

Total Devices : 1

DEV#: 0 DEVICE NAME: vpatha TYPE: 2145 POLICY: Optimized Sequential


SERIAL: 60050768018100c47000000000000016
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Host2Channel0/sdc OPEN NORMAL 75528 0
1 Host2Channel0/sdd OPEN NORMAL 0 0
2 Host2Channel0/sde OPEN NORMAL 85453 5
3 Host2Channel0/sdf OPEN NORMAL 0 0
[root@fermium root]#

SDD has three different path-selection policy algorithms.


򐂰 Failover only (fo): All I/O operations for the device are sent to the same (preferred) path
unless the path fails because of I/O errors. Then an alternate path is chosen for
subsequent I/O operations.
򐂰 Load balancing (lb): The path to use for an I/O operation is chosen by estimating the
load on the adapter to which each path is attached. The load is a function of the number of
I/O operations currently in process. If multiple paths have the same load, a path is chosen
at random from those paths. Load-balancing mode also incorporates failover protection.
The load-balancing policy is also known as the optimized policy.
򐂰 Round robin (rr): The path to use for each I/O operation is chosen at random from paths
that were not used for the last I/O operation. If a device has only two paths, SDD
alternates between the two.

You can dynamically change the SDD path-selection policy algorithm by using the SDD
command datapath set device policy.
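
For example, to switch device 0 to the round robin policy (a sketch; the device number is taken from the datapath query device output):

[root@fermium root]# datapath set device 0 policy rr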

The POLICY field of the datapath query device output shows which SDD path-selection
policy algorithm is active on a device. In Example 8-37, the active policy is Optimized
Sequential.

Example 8-38 shows the VDisk information from the SVC command line.

Example 8-38 svcinfo redhat1


IBM_2145:ITSOSVC01:admin>svcinfo lshost FERMIUM
id 8
name FERMIUM
port_count 1
type generic
iogrp_count 4
WWPN 210000E08B05EFED
node_logged_in_count 2
status active

IBM_2145:ITSOSVC01:admin>svcinfo lshostvdiskmap -delim : FERMIUM
id:name:SCSI_id:vdisk_id:vdisk_name:wwpn:vdisk_UID
8:FERMIUM:0:2:LNX-ALDO2:210000E08B05EFED:60050768018100C47000000000000016

IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk LNX-ALDO2


id 2
name LNX-ALDO2
IO_group_id 0
IO_group_name io_grp0_r
status online
mdisk_grp_id 1
mdisk_grp_name LinuxMdiskGrp
capacity 9.8GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018100C47000000000000016
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid

8.6.6 Creating and preparing volumes for use


Follow these steps to create and prepare the volumes:
1. Create a partition on the vpath device as shown in Example 8-39.

Example 8-39 fdisk example


[root@fermium root]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 10000.


There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): m


Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help): n


Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-10000, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-10000, default 10000):
Using default value 10000

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x


partitions, please see the fdisk manual page for additional
information.
Syncing disks.
[root@fermium root]#

2. Create a file system on the vpath as shown in Example 8-40.

Example 8-40 mkfs command example


[root@fermium root]# mkfs -t ext3 /dev/vpatha
mke2fs 1.26 (3-Feb-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1281696 inodes, 2560000 blocks
128000 blocks (5.00%) reserved for the super user
First data block=0
79 block groups
32768 blocks per group, 32768 fragments per group
16224 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done


Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@fermium root]#

3. Create the mount point and mount the vpath drive as shown in Example 8-41.

Example 8-41 Mount point
[root@fermium root]# mkdir /ITSOsvc
[root@fermium root]# mount -t ext3 /dev/vpatha /ITSOsvc

[root@fermium root]# df
Filesystem 1k-blocks Used Available Use% Mounted on
/dev/sda2 17385772 1888616 14613984 12% /
/dev/sda1 101089 19616 76254 21% /boot
/dev/sdb2 15472800 845340 13841480 6% /opt
none 1899356 0 1899356 0% /dev/shm
/dev/vpatha 1605632 845340 760292 41% /ITSOsvc
[root@fermium root]# datapath query device

Total Devices : 1

DEV#: 0 DEVICE NAME: vpatha TYPE: 2145 POLICY: Optimized Sequential


SERIAL: 60050768018100c47000000000000016
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Host2Channel0/sdc OPEN NORMAL 75528 0
1 Host2Channel0/sdd OPEN NORMAL 0 0
2 Host2Channel0/sde OPEN NORMAL 85453 5
3 Host2Channel0/sdf OPEN NORMAL 0 0
[root@fermium root]#

8.7 SUN Solaris support information


For the latest information about SUN Solaris support, see:
http://www-03.ibm.com/servers/storage/software/virtualization/svc/interop.html

8.7.1 Operating system versions and maintenance levels


At the time of writing, Sun Solaris 8 (V5.8), Sun Solaris 9 (V5.9) and Sun Solaris 10 are
supported in 64 bit only.

8.7.2 Multipath solutions supported


At the time of writing, SDD for Solaris is supported, and the current recommended version is
SDD 1.6.1.0-x, in 64 bit only.

For further information on the supported solutions and Multipath solutions co-existence with
SDD, visit this site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002865#_Multipath_SDD

8.7.3 SDD dynamic pathing


Solaris supports dynamic pathing when you either add more paths to an existing VDisk or
present a new VDisk to a host. No user intervention is required. SDD is aware of the
preferred paths that SVC sets per VDisk. SDD uses a round robin algorithm when failing
over paths; that is, it tries the next known preferred path. If this fails and all preferred paths
have been tried, it round robins on the non-preferred paths until it finds a path that is
available. If all paths are unavailable, the VDisk goes offline. Therefore, it can take some time
to perform path failover when multiple paths go offline.

SDD under Solaris performs load balancing across the preferred paths where appropriate.

Veritas Volume Manager with DMP Dynamic Pathing


Veritas VM with DMP automatically selects the next available I/O path for I/O requests
dynamically without action from the administrator. VM with DMP is also informed when you
repair or restore a connection, and when you add or remove devices after the system has
been fully booted (provided that the operating system recognizes the devices correctly). The
new JNI drivers support the mapping of new VDisks without rebooting the Solaris host.

Note the following support characteristics:


򐂰 Veritas VM with DMP does not support preferred pathing with SVC.
򐂰 Veritas VM with DMP does support load balancing across multiple paths with SVC.

Co-existence with SDD and Veritas VM with DMP


Veritas Volume Manager with DMP will coexist in “pass-thru” mode with SDD. This means that
DMP will use the vpath devices provided by SDD.

SAN Boot support


Note the following support characteristics:
򐂰 Boot from SAN is supported under Solaris 9 running Veritas Volume Manager/DMP.
򐂰 Boot from SAN is not supported when SDD is used as the multi-pathing software.

8.8 HP-UX support information


For the latest information about HP-UX support, refer to:
http://www-03.ibm.com/servers/storage/software/virtualization/svc/interop.html

Operating system versions and maintenance levels


At the time of writing, HP-UX V11.0, V11i v1, and V11i v2 for PA-RISC are supported.

Multipath solutions supported


At the time of writing, SDD for HP-UX is supported, and the recommended version is SDD
1.6.0.1-1, in 64 bit only. Multipathing Software PV Link and Cluster Software Service Guard
v11.16 are also supported, but in a cluster environment SDD is recommended. Dynamic
expanding and shrinking of VDisks mapped to HP-UX hosts is not supported.

The maximum number of VDisks supported by SDD on HP-UX is 512, with a maximum of 4
paths to each VDisk.

SDD dynamic pathing


HP-UX supports dynamic pathing when you either add more paths to an existing VDisk, or if
you present a new VDisk to a host.

SDD is aware of the preferred paths that SVC sets per VDisk. SDD uses a round robin
algorithm when failing over paths; that is, it tries the next known preferred path. If this fails
and all preferred paths have been tried, it round robins on the non-preferred paths until it
finds a path that is available. If all paths are unavailable, the VDisk goes offline. It can take
some time, therefore, to perform path failover when multiple paths go offline.

SDD under HP-UX performs load balancing across the preferred paths where appropriate.

PVLinks (Physical Volume Links) Dynamic Pathing
Unlike SDD, PVLinks does not load balance and is unaware of the preferred paths which SVC
sets per VDisk. Therefore SDD is strongly recommended, except when in a clustering
environment or when using an SVC VDisk as your boot disk.

When creating a Volume Group, specify the primary path you want HP-UX to use when
accessing the Physical Volume presented by SVC. This path, and only this path, will be used
to access the PV as long as it is available, no matter what SVC's preferred path to that VDisk
is. Therefore, care needs to be taken when creating Volume Groups so that the primary links
to the PVs (and load) are balanced over both HBAs, FC switches, SVC nodes, and so on.

When extending a Volume Group to add alternate paths to the PVs, the order you add these
paths is HP-UX's order of preference should the primary path become unavailable. Therefore
when extending a Volume Group, the first alternate path you add should be from the same
SVC node as the primary path, to avoid unnecessary node failover due to an HBA, FC link or
FC switch failure.
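
A sketch of this on HP-UX follows; the volume group name and device paths are illustrative, and we assume the volume group directory and group device file have already been created with mkdir and mknod. The primary path is the one given to vgcreate; the first alternate path, added with vgextend, reaches the same SVC node through a different HBA and FC switch, per the guidance above:

# vgcreate /dev/vg_svc /dev/dsk/c5t0d1
# vgextend /dev/vg_svc /dev/dsk/c7t0d1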

Co-existence of SDD and PV Links


If you want to multipath a VDisk with PVLinks while SDD is installed, you must make sure that
SDD does not configure a vpath for that VDisk. To do this, put the serial number of any VDisk
you want SDD to ignore in /etc/vpathmanualexcl.cfg. In the case of SAN Boot, if you are
booting from an SVC VDisk, SDD (from version 1.6 onwards) automatically ignores the boot
VDisk when you install it.
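
A sketch of excluding a VDisk by its serial number, assuming the file takes one serial number per line (the serial shown is illustrative; check the SDD User's Guide for the exact file format):

# echo 600507680189801B2000000000000037 >> /etc/vpathmanualexcl.cfg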

SAN Boot support


SAN Boot is supported on HP-UX by using PVLinks as the multi-pathing software on the boot
device. PVLinks or SDD can be used to provide the multi-pathing support for the other
devices attached to the system.

Using an SVC VDisk as Cluster Lock Disk


ServiceGuard does not provide a way to specify alternate links to a cluster lock disk. When
using an SVC VDisk as your lock disk, should the path to FIRST_CLUSTER_LOCK_PV
become unavailable, the HP node will not be able to access the lock disk should a 50-50 split
in quorum occur.

To ensure redundancy, when editing your cluster configuration ASCII file, make sure that the
variable FIRST_CLUSTER_LOCK_PV is a different path to the lock disk for each HP node in
your cluster. For example, when configuring a two node HP cluster, make sure that
FIRST_CLUSTER_LOCK_PV on HP server A is on a different SVC node and through a
different FC switch than the FIRST_CLUSTER_LOCK_PV on HP server B.
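
A sketch of the relevant entries in the cluster configuration ASCII file for a two node cluster; the node names and device paths are illustrative, chosen so that the two lock disk paths go through different SVC nodes and FC switches:

NODE_NAME hpserverA
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c5t0d1
NODE_NAME hpserverB
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c7t0d1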

Support for HP-UX greater than 8 LUNs


HP-UX does not recognize more than 8 LUNs per port when the generic SCSI behavior is used.

In order to accommodate this behavior, SVC supports a “type” associated with a host. This
can be set using the command svctask mkhost and modified using the command svctask
chhost. The type can be set to generic, which is the default, or to hpux.
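
A sketch of both commands; the host name and WWPN are illustrative placeholders, not values from our lab setup:

IBM_2145:itsosvc01:admin>svctask mkhost -name HPUX1 -hbawwpn 210000E08B000000 -type hpux
IBM_2145:itsosvc01:admin>svctask chhost -type hpux HPUX1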

When an initiator port that is a member of a host of type hpux accesses the SVC, the SVC
behaves in the following way:
򐂰 Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
򐂰 When an Inquiry command for any page is sent to LUN 0 using Peripheral Device
Addressing, it is reported as Peripheral Device Type 0Ch (controller).
򐂰 When any command other than Inquiry is sent to LUN 0 using Peripheral Device
Addressing SVC will respond as an unmapped LUN 0 would normally respond.
򐂰 When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral
Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0 or 1Fh Unknown
Device Type.
򐂰 When an inquiry is sent to an unmapped LUN which is not LUN 0 using Peripheral Device
Addressing, the Peripheral qualifier returned is 001b and the Peripheral Device type is 1Fh
(Unknown or no device type). This is in contrast to the behavior for generic hosts where
peripheral Device Type 00h is returned.

8.9 VMware support information


For the latest information about VMware support, refer to:
http://www-03.ibm.com/servers/storage/software/virtualization/svc/interop.html

Operating system versions and maintenance levels


At the time of writing, ESX 2.1, ESX 2.5.2, and ESX 2.5.3 versions are supported. Guest
Operating systems include Windows 2000 Advanced Server and Windows 2003 Enterprise
Edition, Red Hat 2.1, Red Hat 3.0, SLES8, and SLES9.

Multipath solutions supported


Only a single path is supported in ESX 2.1; multipathing is supported in ESX 2.5.x.

Dynamic Pathing
VMware multi-pathing software does not provide any dynamic pathing. It does not round robin
on the available paths, nor does it follow any of the SVC preferred path settings.

VMware multi-pathing software statically load balances based upon a host setting that
defines the preferred path for a given volume.

SAN Boot support


SAN Boot of any guest OS is supported under VMware.

Note: The very nature of VMware means that this is a requirement on any guest OS. The
guest OS itself must reside on a SAN disk.

8.10 More information


For more information about host attachment and configuration to the SVC, refer to the IBM
System Storage SAN Volume Controller: Host Attachment Guide, SC26-7563.

For more information about SDD configuration, refer to the IBM TotalStorage Multipath
Subsystem Device Driver User's Guide, SC30-4096.

Chapter 9. SVC configuration and administration using the CLI

In this chapter we describe how to use the command line interface (CLI) to perform additional
and advanced configuration and administration tasks that were not covered in Chapter 6,
“Quickstart configuration using the CLI” on page 127. We also discuss the backup and
recovery function.

9.1 Managing the cluster
This section details the various configuration and administration tasks that you can perform
on the cluster.

You must issue all of the following commands from a secure SSH command line. To launch
your PuTTY command line, follow these steps:
1. Open the PuTTY application. From your master console desktop, select Start →
Programs → PuTTY.
2. On the main screen (Figure 9-1), select the session you created and saved in 5.4.3,
“Configuring the PuTTY session for the CLI” on page 118 (for example, SVC), and click
Load. Then click Open to begin your session.
3. At the Login as: prompt, type admin and press Enter.

Figure 9-1 Starting PuTTY

Command syntax
Two major command sets are available. The svcinfo command set allows us to query the
various components within the IBM System Storage SAN Volume Controller (SVC)
environment. The svctask command set allows us to make changes to the various
components within the SVC.

When the command syntax is shown, you see some parameters in square brackets, for
example, [parameter]. This indicates that the parameter is optional in most, if not all
instances. Anything that is not in square brackets is required information. You can view the
syntax of a command by entering one of the following commands:
򐂰 svcinfo -?: Shows a complete list of information commands
򐂰 svctask -?: Shows a complete list of task commands
򐂰 svcinfo commandname -?: Shows the syntax of information commands
򐂰 svctask commandname -?: Shows the syntax of task commands
򐂰 svcinfo commandname -filtervalue?: Shows what filters you can use to reduce output of
the information commands

Note: You can also use -h instead of -?


For example: svcinfo -h or svctask commandname -h.

If you look at the syntax of a command by typing svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue, as stated
above.

Tip: You can use the up and down keys on your keyboard to recall commands recently
issued. Then, you can use the left and right, backspace, and delete keys to edit commands
before you resubmit them.

9.1.1 Organizing on-screen content


Sometimes the output of a command can be long and difficult to read on screen. In cases
where you need information about a subset of the total number of available items, you can
use filtering to reduce the output to a more manageable size.

Filtering
To reduce the output that is displayed by an svcinfo command, you can specify a number of
filters, depending on which svcinfo command you are running. To see which filters are
available, type the command followed by the -filtervalue? flag; Example 9-1 shows the
resulting output.

Example 9-1 svcinfo lsvdisk -filtervalue? command


IBM_2145:itsosvc01:admin>svcinfo lsvdisk -filtervalue?

Filters for this view are :


name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id

vdisk_UID

When you know the filters, you can be more selective in generating output.
򐂰 Multiple filters can be combined to create specific searches.
򐂰 You can use an * as a wildcard when using names.
򐂰 When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb

For instance, if we issue the svcinfo lsvdisk command with no filters, we see the output
shown in Example 9-2.

Example 9-2 svcinfo lsvdisk command: No filters


IBM_2145:itsosvc01:admin>svcinfo lsvdisk -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name:vdisk_UID
0:VdAIX1_FCT:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
1:VdAIX1V1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
2:VD0_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
3:VD1_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
4:VD2_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
5:VD3_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
6:VD4_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
7:VD5_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
8:VD0_UNIX1:0:io_grp0:online:1:MDG0_DS43:2.0GB:striped::::
9:VDisk1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap1::
10:VDisk1T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap1::
11:VDisk2:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
12:VDisk2T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
13:VDisk3:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::
14:VDisk3T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::

Tip: The -delim : parameter truncates the on-screen content and separates data fields
with colons as opposed to wrapping text over multiple lines. This is useful, for example,
when you need to capture output for processing by scripts.

If we now add a filter to our svcinfo command (such as FC_name), we can reduce the output
dramatically as shown in Example 9-3.

Example 9-3 svcinfo lsvdisk command: With filter


IBM_2145:itsosvc01:admin>svcinfo lsvdisk -filtervalue 'FC_name=FCMap3' -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name:vdisk_UID
13:VDisk3:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::
14:VDisk3T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::

IBM_2145:itsosvc01:admin>svcinfo lsvdisk -filtervalue 'name=VDisk*' -delim :


id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name:vdisk_UID
9:VDisk1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap1::
10:VDisk1T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap1::
11:VDisk2:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
12:VDisk2T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
13:VDisk3:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::
14:VDisk3T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:3:FCMap3::

The first command shows all Virtual Disks (VDisks) with FC_name=FCMap3 in the SVC
environment. The second command shows all VDisks with names starting with VDisk. The
wildcard * can be used with names.

We are now ready to continue.

9.1.2 Viewing cluster properties
Use the svcinfo lscluster command to display summary information about all clusters
visible to the SVC. To display more detailed information about a specific cluster, run the
command again and append the cluster name parameter (for example, SVC1). Both of these
commands are shown in Example 9-4.

Example 9-4 svcinfo lscluster command


IBM_2145:itsosvc01:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:id_ali
as
00000200626006C8:itsosvc01:local:::9.42.164.155:9.42.164.156:00000200626006C8

IBM_2145:itsosvc01:admin>svcinfo lscluster itsosvc01


id 00000200626006C8
name itsosvc01
location local
partnership
bandwidth
cluster_IP_address 9.42.164.155
cluster_service_IP_address 9.42.164.156
total_mdisk_capacity 2914.2GB
space_in_mdisk_grps 2713.8GB
space_allocated_to_vdisks 142.0GB
total_free_space 2772.2GB
statistics_status off
statistics_frequency 15
required_memory 4096
cluster_locale en_US
SNMP_setting all
SNMP_community public
SNMP_server_IP_address 9.42.164.140
subnet_mask 255.255.255.0
default_gateway 9.42.164.1
time_zone 520 US/Pacific
email_setting none
email_id
code_level 4.1.0.0 (build 4.25.0606080000)
FC_port_speed 2Gb
console_IP 0.0.0.0:80
id_alias 00000200626006C8

9.1.3 Maintaining passwords


Use the svctask chcluster command to change the admin and services passwords. The full
syntax of the svctask chcluster command is:
svctask chcluster [-clusterip ip_address] [-serviceip ip_address] [-name cluster_name]
[-admpwd [password]] [-servicepwd [password]] [-gw gateway] [-mask subnet_mask]
[-speed speed] [-icatip icat_ip_address:port] [-alias id_alias]

Note the following explanation:


򐂰 clusterip: IP address to access the cluster
򐂰 serviceip: IP address used if a node has been expelled from the cluster
򐂰 name: Name of the cluster
򐂰 admpwd: Administrator’s password
򐂰 servicepwd: Service user’s password
򐂰 gw: cluster’s gateway IP address
򐂰 mask: Subnet mask
򐂰 speed: Fabric speed
򐂰 autoquorum: Indicates if quorum disks allocate automatically or not (default = true)
򐂰 alias: Alias of the cluster
򐂰 icatip: IP address of the ICAT console

The command to change the admin and the service password is:
IBM_2145:itsosvc01:admin>svctask chcluster -admpwd admin -servicepwd service

This command changes the current admin password to admin and the current service
password to service.

You have now completed the tasks required to change the admin and service passwords for
your SVC cluster.

Note: You can use the letters A to Z, a to z, the numbers 0 to 9, and the underscore in a
password. The password can be between one and 15 characters in length.

Tip: Also, as you can see, these passwords are displayed in clear text on the screen, so
make sure that no one is looking over your shoulder, or follow this recommendation.

If you are changing the password in a public place, enter the parameter without an
associated value; you are then prompted for the password, which is not displayed as you type.

# svctask chcluster -servicepwd

Enter a value for -servicepwd :

Enter password:

Confirm password:

If the passwords do not match, the following message will appear:

CMMVC6007E The two passwords that were entered do not match

9.1.4 Modifying IP addresses


Using the svctask chcluster command again, we can change the cluster IP address as
shown here:
IBM_2145:itsosvc01:admin>svctask chcluster -clusterip 9.42.164.156 -serviceip 9.42.164.161

This command changes the current IP address of the cluster to 9.42.164.156 and the current
service IP address to 9.42.164.161.

Important: If you specify a new cluster IP address, the existing communication with the
cluster through the CLI is broken and the PuTTY application automatically closes. You
must relaunch the PuTTY application and point to the new IP address.

Modifying the IP address of the cluster, although quite simple, means some reconfiguration
for other items within the SVC environment (such as reconfiguring our PuTTY application
and the central administration GUI).

The -clusterip and -serviceip parameters can be used in isolation or, as shown in the
previous example, in combination with each other and with other chcluster command parameters.

We have now completed the tasks required to change the IP addresses (cluster and service)
of the SVC environment.

9.1.5 Setting the cluster time zone and time


Perform the following steps to set the cluster time zone and time:
1. Determine what time zone your cluster is currently configured for by issuing the svcinfo
showtimezone command as shown here:
IBM_2145:itsosvc01:admin>svcinfo showtimezone
id timezone
520 US/Pacific
If this setting is correct (for example, 514 US/Eastern), skip to Step 4. If not, continue with
Step 2.
2. Determine the time zone code that is associated with the time zone you want. To find this,
enter the svcinfo lstimezones command shown in Example 9-5. The list has been edited
for the purposes of this example.

Example 9-5 svcinfo lstimezones command


IBM_2145:itsosvc01:admin>svcinfo lstimezones
id timezone
0 Africa/Abidjan
1 Africa/Accra
2 Africa/Addis_Ababa
3 Africa/Algiers
4 Africa/Asmera
5 Africa/Bamako
. . .
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
. . .

In this example, the correct time zone code is 514.


3. Set the time zone by issuing the svctask settimezone command:
IBM_2145:itsosvc01:admin>svctask settimezone -timezone 514
4. With the correct time zone, set the cluster time by issuing the svctask setclustertime
command:
IBM_2145:itsosvc01:admin>svctask setclustertime -time 1105180504
The format of the time is MMDDHHmmYY.
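For example, -time 1105180504 sets the cluster time to 18:05 on November 5, 2004
(MM=11, DD=05, HH=18, mm=05, YY=04).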

We have now completed the tasks necessary to set the cluster time zone and time.

9.1.6 Starting a statistics collection


Use the svctask startstats command to start the collection of statistics within the cluster:
IBM_2145:itsosvc01:admin>svctask startstats -interval 1

The interval we specify (minimum 1, maximum 60) is in minutes. This command starts
statistics collection and gathers data at 1 minute intervals.

Note: To verify that statistics collection is set, display the cluster properties again, as
shown in Example 9-6, and look for statistics_status and statistics_frequency.

Example 9-6 Statistics collection status and frequency


IBM_2145:itsosvc01:admin>svcinfo lscluster itsosvc01
id 00000200626006C8
name itsosvc01
location local
partnership
bandwidth
cluster_IP_address 9.42.164.155
cluster_service_IP_address 9.42.164.156
total_mdisk_capacity 2914.2GB
space_in_mdisk_grps 2713.8GB
space_allocated_to_vdisks 142.0GB
total_free_space 2772.2GB
statistics_status on
statistics_frequency 15
required_memory 4096
cluster_locale en_US
SNMP_setting all
SNMP_community public
SNMP_server_IP_address 9.42.164.140
subnet_mask 255.255.255.0
default_gateway 9.42.164.1
time_zone 522 UTC
email_setting none
email_id
code_level 4.1.0.0 (build 4.25.0606080000)
FC_port_speed 2Gb
console_IP 0.0.0.0:80
id_alias 00000200626006C8

We have now completed the tasks required to start statistics collection on the cluster.

9.1.7 Stopping a statistics collection


Use the svctask stopstats command to stop the collection of statistics within the cluster:
IBM_2145:itsosvc01:admin>svctask stopstats

This command stops statistics collection. Do not expect any prompt message from this
command.

Note: To verify that statistics collection is stopped, display the cluster properties again, as
shown in Example 9-7, and look for statistics_status and statistics_frequency.

Example 9-7 Statistics collection status and frequency


IBM_2145:itsosvc01:admin>svcinfo lscluster itsosvc01
id 00000200626006C8
name itsosvc01
location local
partnership
bandwidth
cluster_IP_address 9.42.164.155
cluster_service_IP_address 9.42.164.156
total_mdisk_capacity 2914.2GB
space_in_mdisk_grps 2713.8GB
space_allocated_to_vdisks 142.0GB
total_free_space 2772.2GB
statistics_status off
statistics_frequency 15
required_memory 4096
cluster_locale en_US
SNMP_setting all
SNMP_community public
SNMP_server_IP_address 9.42.164.140
subnet_mask 255.255.255.0
default_gateway 9.42.164.1
time_zone 522 UTC
email_setting none
email_id
code_level 4.1.0.0 (build 4.25.0606080000)
FC_port_speed 2Gb
console_IP 0.0.0.0:80
id_alias
00000200626006C8

Notice that the interval parameter is not changed but the status is off. We have now
completed the tasks required to stop statistics collection on our cluster.

9.1.8 Audit Log commands


Starting with software release 4.1.0, all action commands issued as a result of actions in
the CLI, ICAT GUI, and native GUI are logged to the audit log. View commands and
commands in service mode are not logged. The audit log cannot be disabled in any way.

Audit log entries give the following information:


򐂰 The timestamp of the time when the action was initiated on the current configuration node
򐂰 The new CLI name of the action taken
򐂰 All parameters given with the action
򐂰 The status of the action, indicating whether it completed successfully

Use the svcinfo catauditlog -first 15 command to return a list of 15 in-memory Audit
Log entries as shown in Example 9-8.

Example 9-8 catauditlog command


IBM_2145:itsosvc01:admin>svcinfo catauditlog -delim '|' -first 15
audit_seq_no|timestamp|cluster_user|ssh_label|icat_user|result|res_obj_id|action
_cmd
133|060621000715|admin|admin|superuser|0|23|svctask mkvdisk -name Vdisk20011 -iogrp 0
-mdiskgrp 0 -size 419430400 -unit b -vtype striped -fmtdisk -mdisk mdisk3:
mdisk5:mdisk4 -udid 0
134|060621000719|admin|admin|superuser|0|24|svctask mkvdisk -name Vdisk20012 -iogrp 0
-mdiskgrp 0 -size 419430400 -unit b -vtype striped -fmtdisk -mdisk mdisk3:
mdisk5:mdisk4 -udid 0
135|060621000723|admin|admin|superuser|0|25|svctask mkvdisk -name Vdisk20013 -iogrp 0
-mdiskgrp 0 -size 419430400 -unit b -vtype striped -fmtdisk -mdisk mdisk3:
mdisk5:mdisk4 -udid 0
136|060621000726|admin|admin|superuser|0|26|svctask mkvdisk -name Vdisk20014 -iogrp 0
-mdiskgrp 0 -size 419430400 -unit b -vtype striped -fmtdisk -mdisk mdisk3:
mdisk5:mdisk4 -udid 0
137|060621000729|admin|admin|superuser|0|27|svctask mkvdisk -name Vdisk20015 -iogrp 0
-mdiskgrp 0 -size 419430400 -unit b -vtype striped -fmtdisk -mdisk mdisk3:
mdisk5:mdisk4 -udid 0
138|060621012510|admin|admin|superuser|0||svctask chcluster -icatip 9.43.86.42:9080
139|060621012510|admin|admin|superuser|0||svctask chcluster -icatip 9.43.86.42:9080
140|060621012751|admin(GUI)|||0||svctask addsshkey -label Max -file /tmp/SSH_Key -user
admin
141|060621015445|admin|Max||0||svctask stopstats
142|060621085000|admin|admin|superuser|0||svctask chcluster -icatip 9.43.86.42:9080
143|060621085000|admin|admin|superuser|0||svctask chcluster -icatip 9.43.86.42:9080
144|060622013659|admin|admin|superuser|0||svctask chcluster -icatip 9.43.86.42:9080
145|060622013700|admin|admin|superuser|0||svctask chcluster -icatip 9.43.86.42:9080
146|060622015709|admin|admin|superuser|0||svctask chcluster -icatip 9.43.86.42:9080
147|060622015709|admin|admin|superuser|0||svctask chcluster -icatip 9.43.86.42:9080

If you need to dump the contents of the in-memory audit log to a file on the current
configuration node, use the command svctask dumpauditlog. This command does not
provide any feedback, just the prompt. To obtain a list of the audit log dumps, use svcinfo
lsauditlogdumps as described in Example 9-9.

Example 9-9 lsauditlogdumps command


IBM_2145:itsosvc01:admin>svcinfo lsauditlogdumps
id auditlog_filename
0 auditlog_0_147_20060621185709_000002006040469e
1 auditlog_148_148_20060622233746_000002006040469e
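
The dump itself is triggered as shown below; as noted above, the command returns nothing but the prompt:

IBM_2145:itsosvc01:admin>svctask dumpauditlog
IBM_2145:itsosvc01:admin>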

9.1.9 Status of discovery


Use the svcinfo lsdiscoverystatus command, as shown in Example 9-10, to determine
whether a discovery operation is in progress. The output of this command is a status of
active or inactive. This command is new in software release 4.1.0.

Example 9-10 lsdiscoverystatus command


IBM_2145:itsosvc01:admin>svcinfo lsdiscoverystatus
status
inactive

9.1.10 Status of copy operation


Use the svcinfo lscopystatus command, as shown in Example 9-11, to determine whether a
file copy operation is in progress. Only one file copy operation can be performed at a time.
The output of this command is a status of active or inactive. This command is new in
software release 4.1.0.

Example 9-11 lscopystatus command


IBM_2145:itsosvc01:admin>svcinfo lscopystatus
status
inactive

9.1.11 Shutting down a cluster
If all input power to an SVC cluster is to be removed for more than a few minutes (for example,
if the machine room power is to be shut down for maintenance), it is important to shut down
the cluster before removing the power. The reason for this is that if the input power is removed
from the uninterruptible power supply units without first shutting down the cluster and the
uninterruptible power supplies themselves, the uninterruptible power supply units remain
operational and eventually become drained of power.

When input power is restored to the uninterruptible power supplies, they start to recharge.
However the SVC does not permit any input/output (I/O) activity to be performed to the
VDisks until the uninterruptible power supplies are charged enough to enable all the data on
the SVC nodes to be destaged in the event of a subsequent unexpected power loss.
Recharging the uninterruptible power supply can take as long as three hours.

Shutting down the cluster prior to removing input power to the uninterruptible power supply
units prevents the battery power from being drained. It also makes it possible for I/O activity to
be resumed as soon as input power is restored.

You can use the following procedure to shut down the cluster:
1. Use the svctask stopcluster command to shut down your SVC cluster:
IBM_2145:itsosvc01:admin>svctask stopcluster
This command shuts down the SVC cluster. All data is flushed to disk before the power is
removed. At this point you lose administrative contact with your cluster, and the PuTTY
application automatically closes.
2. The resulting output presents the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy)
relationships, data migration operations, and forced deletions before continuing. Entering y
executes the command; entering anything other than y(es) or Y(ES) results in the command
not executing. In either case, no feedback is displayed.

Important: Before shutting down a cluster, quiesce all I/O operations that are destined
for this cluster because you will lose access to all VDisks being provided by this cluster.
Failure to do so can result in failed I/O operations being reported to the host operating
systems. There is no need to do this when you shut down a node.

Begin the process of quiescing all I/O to the cluster by stopping the applications on the
hosts that are using the VDisks provided by the cluster.

3. We have now completed the tasks required to shut down the cluster. To shut down the
uninterruptible power supplies, press the power button on their front panels.

Note: To restart the cluster, you must first restart the uninterruptible power supply units
by pressing the power button on their front panels. Then you go to the service panel of
one of the nodes within the cluster and press the power on button. After it is fully booted
up (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the
display panel), you can start the other nodes in the same way.

As soon as all nodes are fully booted, you can re-establish administrative contact using
PuTTY, and your cluster is fully operational again.

9.2 Working with nodes
This section explains the various configuration and administration tasks that you can perform
on the nodes within an SVC cluster.

9.2.1 I/O groups


This section explains the tasks that you can perform at an I/O group level.

Viewing I/O group details


Use the svcinfo lsiogrp command, as shown in Example 9-12, to view information about
I/O groups defined within the SVC environment.
Example 9-12 I/O group details
IBM_2145:itsosvc01:admin>svcinfo lsiogrp
id name node_count vdisk_count host_count
0 io_grp0 2 15 2
1 io_grp1 0 0 0
2 io_grp2 0 0 0
3 io_grp3 0 0 0
4 recovery_io_grp 0 0 0

As we can see, the SVC predefines five I/O groups. In a two-node cluster (like ours), only one
I/O group is actually in use. In a four node cluster, we would have two I/O groups in use. The
other I/O groups (io_grp2 and io_grp3) are for a six or eight node cluster.

The recovery I/O group is a temporary home for VDisks when all nodes in the I/O group that
normally owns them have suffered multiple failures. This allows us to move the VDisks to the
recovery I/O group and then into a working I/O group. Of course, while temporarily assigned
to the recovery I/O group, I/O access is not possible.
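
A sketch of moving a VDisk out of the recovery I/O group into a working I/O group; the VDisk and I/O group names are illustrative, and we assume the -iogrp parameter of the svctask chvdisk command for this purpose:

IBM_2145:itsosvc01:admin>svctask chvdisk -iogrp io_grp0 VDisk1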

Renaming an I/O group


Use the svctask chiogrp command to rename an I/O group:
IBM_2145:itsosvc01:admin>svctask chiogrp -name io_grpSVC2 io_grp1

This command renames the I/O group io_grp1 to io_grpSVC2.

Note: The chiogrp command specifies the new name first.

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the dash
’-’ and the underscore ’_’. It can be between one and 15 characters in length. However, it
cannot start with a number, dash, or the word iogrp since this prefix is reserved for SVC
assignment only.

To see whether the renaming was successful, issue the svcinfo lsiogrp command again
and you should see the change reflected.

We have now completed the tasks required to rename an I/O group.

Adding and removing hostiogrp


To map or unmap a specific host object to a specific I/O group, in order to reach the maximum
number of hosts supported by an SVC cluster, use the svctask addhostiogrp command to map
a specific host to a specific I/O group as follows:

IBM_2145:itsosvc01:admin>svctask addhostiogrp -iogrp 0:1 LINUX1

Parameters:
򐂰 -iogrp iogrp_list: Specifies a list of one or more I/O groups that must be mapped to the
host. This parameter is mutually exclusive with -iogrpall.
򐂰 -iogrpall: Specifies that all the I/O groups must be mapped to the specified host. This
parameter is mutually exclusive with -iogrp.
򐂰 -host host_id_or_name: Identifies the host, either by ID or name, to which the I/O groups
must be mapped.

Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O group as
follows:

IBM_2145:itsosvc01:admin>svctask rmhostiogrp -iogrp 0 LINUX1

Parameters:
򐂰 -iogrp iogrp_list: Specifies a list of one or more I/O groups that must be unmapped from
the host. This parameter is mutually exclusive with -iogrpall.
򐂰 -iogrpall: Specifies that all the I/O groups must be unmapped from the specified host.
This parameter is mutually exclusive with -iogrp.
򐂰 -force: If the removal of a host to I/O group mapping results in the loss of VDisk to host
mappings, the command fails unless the -force flag is used. The -force flag overrides this
behavior and forces the host to I/O group mapping to be deleted.
򐂰 host_id_or_name: Identifies the host, either by ID or name, from which the I/O groups
must be unmapped.

Listing I/O groups


Starting with SVC version 3.1, we have an SVC command to list all the I/O groups mapped to
the specified host and vice versa.

To list all the I/O groups mapped to the specified host, use the svcinfo lshostiogrp
command as follows:

IBM_2145:itsosvc01:admin>svcinfo lshostiogrp LINUX1

Where LINUX1 is, for example, the host name.

To list all the host object mapped to the specified I/O group use the svcinfo lsiogrphost
command as follows:

IBM_2145:itsosvc01:admin>svcinfo lsiogrphost iogrp_0

Where iogrp_0 is the I/O group name.

9.2.2 Nodes
This section details the tasks which can be performed at an individual node level.

Viewing node details


Use the svcinfo lsnode command to view summary information about nodes defined within
the SVC environment. To view more details about a specific node, append the node name
(for example, SVC1_node1) to the command.

Both of these commands are shown in Example 9-13.

Tip: The -delim : parameter truncates the on-screen content and separates data fields
with colons as opposed to wrapping text over multiple lines.

Example 9-13 svcinfo lsnode command


IBM_2145:itsosvc01:admin>svcinfo lsnode -delim :
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:h
ardware
1:SVC1_node1:YM100032B422:5005076801000364:online:0:io_grp0:yes:20400000C2484082:4F2
11:SVC1_node2:YM100032B425:500507680100035A:online:0:io_grp0:no:20400000C2484085:4F2

IBM_2145:itsosvc01:admin>svcinfo lsnode SVC1_node1


id 1
name SVC1_node1
UPS_serial_number YM100032B422
WWNN 5005076801000364
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 12
partner_node_name SVC1_node2
config_node no
UPS_unique_id 20400000C2484082
port_id 5005076801400364
port_status active
port_speed 2Gb
port_id 5005076801300364
port_status active
port_speed 2Gb
port_id 5005076801100364
port_status active
port_speed 2Gb
port_id 5005076801200364
port_status active
port_speed 2Gb
hardware 4F2

Adding a node
Before you can add a node, you must know which unconfigured nodes you have as
“candidates”. You can find this out by issuing the svcinfo lsnodecandidate command:
IBM_2145:itsosvc01:admin>svcinfo lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id
500507680100035A 000683 YM100032B425 20400000C2484085

Note: The node you want to add must be attached to a UPS with a different serial number
than the UPS on the first node.

Now that we know the available nodes, we can use the svctask addnode command to add
the node to the SVC cluster configuration. The complete syntax of the addnode command is:
addnode {-panelname panel_name | -wwnodename wwnn_arg} [-name new_name]
-iogrp iogrp_name_or_id

In the following explanation, note that panelname and wwnodename are mutually exclusive:
򐂰 panelname: Name of the node as it appears on the panel
򐂰 wwnodename: Worldwide node name (WWNN) of the node
򐂰 name: Name to be allocated to the node
򐂰 iogrp: I/O group to which the node is added

The command to add a node to the SVC cluster is:


IBM_2145:itsosvc01:admin>svctask addnode -panelname 000667 -name SVC1_node2 -iogrp io_grp0
Node, id [10], successfully added

This command adds the candidate node with the panelname of 000667 to the I/O group
io_grp0 and names it SVC1_node2.

We used the -panelname parameter (000667), but we could have used the -wwnodename
parameter (5005076801000364) instead, for example:
svctask addnode -wwnodename 5005076801000364 -name SVC1_node2 -iogrp io_grp0

We also used the optional -name parameter (SVC1_node2). If you do not provide the -name
parameter, the SVC automatically generates the name nodeX (where X is the ID sequence
number assigned by the SVC internally). In our case it would be node10.

Note: If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9,
the dash ’-’ and the underscore ’_’. It can be between one and 15 characters in length.
However, it cannot start with a number, dash, or the word node since this prefix is reserved
for SVC assignment only.

Renaming a node
Use the svctask chnode command to rename a node within the SVC cluster configuration.
From now on, we are following our naming convention.
IBM_2145:itsosvc01:admin>svctask chnode -name SVC1N1 SVC1_node1
IBM_2145:itsosvc01:admin>svctask chnode -name SVC1N2 SVC1_node2

This command renames node SVC1_node1 to SVC1N1 and SVC1_node2 to SVC1N2.

Note: The chnode command specifies the new name first. You can use letters A to Z,
a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash, or the word node, since
this prefix is reserved for SVC assignment only.

Deleting a node
Use the svctask rmnode command to remove a node from the SVC cluster configuration:
IBM_2145:itsosvc01:admin>svctask rmnode SVC1_node2

This command removes node SVC1_node2 from the SVC cluster.

Since SVC1_node2 was also the configuration node, the SVC transfers the configuration
node responsibilities to a surviving node (in our case SVC1_node1). Unfortunately the PuTTY
session cannot be dynamically passed to the surviving node. Therefore the PuTTY
application loses communication and closes automatically.

We must restart the PuTTY application to establish a secure session with the new
configuration node.

Important: If this is the last node in an I/O Group, and there are Virtual Disks still assigned
to the I/O Group, the node will not be deleted from the cluster.

If this is the last node in the cluster, and the I/O Group has no Virtual Disks remaining, the
cluster will be destroyed and all virtualization information will be lost. Any data that is still
required should be backed up or migrated prior to destroying the cluster.

Shutting down a node


Earlier we showed how to shut down the complete SVC cluster in a controlled manner. On
occasion, it can be necessary to shut down a single node within the cluster, to perform such
tasks as scheduled maintenance, while leaving the SVC environment up and running.

Use the svctask stopcluster -node command as shown in Example 9-14 to shut down a
node.

Example 9-14 svctask stopcluster -node command


IBM_2145:itsosvc01:admin>svctask stopcluster -node SVC1N1
Are you sure that you want to continue with the shut down? Ensure that you have stopped all
FlashCopy mappings, Remote Copy relationships, data migration operations and forced
deletions before continuing.
yes <ENTER>

This command shuts down SVC1N1 in a graceful manner. When this is done, the other node
in the I/O Group will destage the contents of its cache and will go into write through mode until
the node is powered up and rejoins the cluster.

Note: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data
migration operations. The surviving node will handle this, but be aware that the cluster now
has a single point of failure.

If this is the last node in an I/O Group, all access to the Virtual Disks in the I/O Group will be
lost. Ensure that this is what you want to do before executing this command; in this case, you
also need to specify the -force flag.
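
As a sketch (assuming the -force flag is accepted together with the -node parameter, as
described above), shutting down the last node in an I/O Group would look like this:

IBM_2145:itsosvc01:admin>svctask stopcluster -force -node SVC1N1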

By re-issuing the svcinfo lsnode command (as shown in Example 9-15), we can see that the
node is now offline.

Example 9-15 svcinfo lsnode


IBM_2145:itsosvc01:admin>svcinfo lsnode -delim :
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:h
ardware
1:SVC1N1:YM100032B422:0000000000000000:offline:0:io_grp0:no:20400000C2484082:4F2
12:SVC1N2:YM100032B425:500507680100035A:online:0:io_grp0:yes:20400000C2484085:4F2

IBM_2145:itsosvc01:admin>svcinfo lsnode SVC1N1


CMMVC5782E The object specified is offline

To restart the node, simply go to the service panel of the node and push the power on button.

We have now completed the tasks required to view, add, delete, rename, and shut down a
node within an SVC environment.

9.3 Working with managed disks
This section details the various configuration and administration tasks that you can perform
on the managed disks (MDisks) within the SVC environment.

9.3.1 Disk controller systems


This section details the tasks that you can perform on a disk controller level.

Viewing disk controller details


Use the svcinfo lscontroller command to display summary information about all available
back-end storage systems. To display more detailed information about a specific controller,
run the command again and append the controller name parameter (for example,
controller0). Both of these commands are shown in Example 9-16.

Tip: The -delim : parameter condenses the on-screen content and separates data fields
with colons as opposed to wrapping text over multiple lines.

Example 9-16 svcinfo lscontroller command


IBM_2145:itsosvc01:admin>svcinfo lscontroller -delim :
id:controller_name:ctrl_s/n:vendor_id:product_id_low:product_id_high
0:controller0::IBM :1722-600:

IBM_2145:itsosvc01:admin>svcinfo lscontroller controller0


id 0
controller_name controller0
WWNN 200800A0B80FBDF0
mdisk_link_count 7
max_mdisk_link_count 7
degraded no
vendor_id IBM
product_id_low 1722-600
product_id_high
product_revision 0520
ctrl_s/n
WWPN 200800A0B80FBDF1
path_count 3
max_path_count 6
WWPN 200900A0B80FBDF2
path_count 4
max_path_count 8

Renaming a controller
Use the svctask chcontroller command to change the name of a storage controller. To
verify the change, run the svcinfo lscontroller command. Both of these commands are
shown in Example 9-17.

Example 9-17 svctask chcontroller command


IBM_2145:itsosvc01:admin>svctask chcontroller -name DS4301 controller0

IBM_2145:itsosvc01:admin>svcinfo lscontroller -delim :


id:controller_name:ctrl_s/n:vendor_id:product_id_low:product_id_high
0:DS4301::IBM :1722-600:

This command renames the controller named controller0 to DS4301.

Note: The chcontroller command specifies the new name first. You can use letters A to
Z, a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash, or the word controller
since this prefix is reserved for SVC assignment only.

9.3.2 Managed disks


This section details the tasks that can be performed at an MDisk level.

MDisk information
Use the svcinfo lsmdisk command to display summary information about all available
managed disks. To display more detailed information about a specific MDisk, run the
command again and append the MDisk name parameter (for example, mdisk0). Both of these
commands are shown in Example 9-18.

Example 9-18 svcinfo lsmdisk command


IBM_2145:itsosvc01:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4300:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4300:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4300:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4300:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4300:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4300:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4300:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

IBM_2145:itsosvc01:admin>svcinfo lsmdisk mdisk0


id 0
name mdisk0
status online
mode managed
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 200.4GB
quorum_index 0
block_size 512
controller_name DS4300
ctrl_type 4
ctrl_WWNN 200800A0B80FBDF0
controller_id 0
path_count 1
max_path_count 1
ctrl_LUN_# 0
UID 600a0b80000fbdf0000002753fb8b1c200000000000000000000000000000000
preferred_WWPN 200800A0B80FBDF0
active_WWPN 200800A0B80FBDF0

Renaming an MDisk
Use the svctask chmdisk command to change the name of an MDisk. To verify the change,
run the svcinfo lsmdisk command. Both of these commands are shown in Example 9-19.

Example 9-19 svctask chmdisk command


IBM_2145:itsosvc01:admin>svctask chmdisk -name newmdisk0 mdisk0

IBM_2145:itsosvc01:admin>svcinfo lsmdisk -delim :


id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:newmdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4300:600a0b80000fbdf00000
02753fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4300:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4300:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4300:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4300:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4300:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4300:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

This command renamed the MDisk named mdisk0 to newmdisk0.

Note: The chmdisk command specifies the new name first. You can use letters A to Z,
a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash, or the word mdisk since
this prefix is reserved for SVC assignment only.

Discovering MDisks
In general, the cluster detects the MDisks automatically when they appear on the network.
However, some Fibre Channel controllers do not send the required SCSI primitives that are
necessary to automatically discover the new MDisks.

If new storage has been attached and the cluster has not detected it, it might be necessary to
run this command before the cluster will detect the new MDisks.

Use the svctask detectmdisk command to scan for newly added MDisks:
IBM_2145:itsosvc01:admin>svctask detectmdisk

To check whether any newly added MDisks were successfully detected, run the svcinfo
lsmdisk command as before. If the disks do not appear, check that the disk is appropriately
assigned to the SVC in the disk subsystem, and that the zones are properly set up as
explained in Chapter 3, “Planning and configuration” on page 25.

Note: If you have assigned a large number of LUNs to your SVC, the discovery process
can take a while. Run the svcinfo lsmdisk command several times to check whether all
the MDisks you were expecting are present.
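
For example, while waiting for the discovery to complete, you could list only the MDisks that
are still unmanaged (a sketch using the -filtervalue parameter shown later in this chapter,
and assuming that mode is accepted as a filter attribute):

IBM_2145:itsosvc01:admin>svcinfo lsmdisk -filtervalue 'mode=unmanaged' -delim :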

Setting up a quorum disk


The SVC cluster, after the process of node discovery, automatically chooses three MDisks as
quorum disks. Each disk is assigned an index number of 0, 1, or 2.

The quorum disks are created only once, when at least one MDisk with an available extent is
placed in managed mode.

In the event that half the nodes in a cluster are missing for any reason, the other half cannot
simply assume that the nodes are “dead”. It can simply mean that the cluster state
information is not being successfully passed between nodes for some reason (network failure
for example). For this reason, if half of the cluster disappears from the view of the other half,
each surviving half attempts to lock the active quorum disk.

Note: There can be only one active quorum disk. When the SVC first discovers LUNs as
MDisks, it chooses three MDisks as quorum disk candidates. One is then chosen as active,
and the others are not considered quorum disks in any way. Only if the active quorum disk
becomes unavailable will the cluster choose one of the other two candidates to take its
place. Because the other quorum disk candidates are nothing but candidates, they are not
even considered in any cluster quorum event.

So, in the event of quorum disk index 0 not being available, the next disk (index 1) becomes
the quorum, and so on. The half of the cluster that is successful in locking the quorum disk
becomes the exclusive processor of I/O activity. It attempts to reform the cluster with any
nodes it can still see. The other half will stop processing I/O. This provides a tie-break solution
and ensures that both halves of the cluster do not continue to operate. If both halves of the
cluster can see the quorum disk, they will use it to communicate with each other and will
decide which half becomes the exclusive processor of I/O activity.

If for any reason you want to set your own quorum disks (for example, additional back-end
storage has been installed and you want to move one or two quorum disks onto this newly
installed back-end storage subsystem), you can use the svctask setquorum command, as
shown in Example 9-20, to reassign the quorum indexes. The managed disk that is currently
assigned the quorum index number is set to a non-quorum disk.

Example 9-20 svctask setquorum command


IBM_2145:itsosvc01:admin>svctask setquorum -quorum 0 mdisk0

IBM_2145:itsosvc01:admin>svcinfo lsmdisk mdisk0


id 0
name mdisk0
status online
mode managed
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 200.4GB
quorum_index 0
block_size 512
controller_name DS4300
ctrl_type 4
ctrl_WWNN 200800A0B80FBDF0
controller_id 0
path_count 1
max_path_count 1
ctrl_LUN_# 0
UID 600a0b80000fbdf0000002753fb8b1c200000000000000000000000000000000
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000006
preferred_WWPN 200800A0B80FBDF0
active_WWPN 200800A0B80FBDF0

As you can see, this command has set mdisk0 as a quorum disk using quorum index 0. You
can also do this for quorum indexes 1 and 2.

Including an MDisk
If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These
errors can result from a hardware problem, a storage area network (SAN) zoning problem, or
poorly planned maintenance. If it was a hardware fault, you should have received Simple
Network Management Protocol (SNMP) alerts about the state of the disk subsystem (before
the disk was excluded) and undertaken preventive maintenance. If not, the hosts that were
using VDisks, which used the excluded MDisk, now have I/O errors.

By running the svcinfo lsmdisk command, you can see that mdisk3 is excluded in
Example 9-21.

Example 9-21 svcinfo lsmdisk command: Excluded MDisk


IBM_2145:itsosvc01:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4301:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4301:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:excluded:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4301:600a0b80000fbdfc00000
29f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4301:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4301:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4300:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

After taking the necessary corrective action to repair the MDisk (for example, replace failed
disk, repair SAN zones, and so on), we must tell the SVC to include the MDisk again by
issuing the svctask includemdisk command:
IBM_2145:itsosvc01:admin>svctask includemdisk mdisk3

Running the svcinfo lsmdisk command again should show mdisk3 online again, as shown
in Example 9-22.

Example 9-22 svcinfo lsmdisk command: Verifying that MDisk is included


IBM_2145:itsosvc01:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4300:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4300:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4300:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4300:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4300:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4300:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4300:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

Showing the MDisk group
Use the svcinfo lsmdisk command as before to display information about the managed disk
group (MDG) to which an MDisk belongs, as shown in Example 9-23.

Example 9-23 svcinfo lsmdisk command


IBM_2145:itsosvc01:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4301:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4301:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4301:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4301:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4301:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:unmanaged:::200.4GB:0000000000000007:DS4301:600a0b80000fbdfc000002a53fbcda8
100000000000000000000000000000000

See 9.3.3, “Managed Disk Groups” on page 234, for more details about MDGs.

Showing a VDisk for an MDisk


Use the svcinfo lsmdiskmember command to display information about the VDisks that use
space on a specific MDisk, as shown in Example 9-24.

Example 9-24 svcinfo lsmdiskmember command


IBM_2145:itsosvc01:admin>svcinfo lsmdiskmember mdisk0
id
0
1
3
4
5
6

This command shows that the VDisks with IDs 0, 1, 3, 4, 5, and 6 are all using space on mdisk0.

To correlate the IDs displayed in this output to VDisk names, we can run the svcinfo lsvdisk
command, which we discuss in more detail in 9.4, “Working with virtual disks” on page 237.

Creating a VDisk in image mode


An image mode disk is a VDisk that has an exact one-to-one (1:1) mapping of VDisk extents
with the underlying MDisk. For example, extent 0 on the VDisk contains the same data as
extent 0 on the MDisk, and so on. Without this one-to-one mapping (for example, if extent 0 on
the VDisk mapped to extent 3 on the MDisk), there is little chance that the data on a newly
introduced MDisk is still readable.

Image mode is intended for the purpose of migrating data from an environment outside the
SVC, to an environment within the SVC. A logical unit number (LUN) that was previously
directly assigned to a SAN attached host can now be reassigned to the SVC (possible short
outage) and given back to the same host as an image mode VDisk. During the same outage,
the host and zones can be reconfigured to access the disk via the SVC.

After access is re-established, the host workload can resume while the SVC manages the
transparent migration of the data to other SVC managed VDisks on the same or another disk
subsystem.

We recommend that during the migration phase of the SVC implementation, you add one
MDisk at a time to the SVC environment. This reduces the possibility of error. It also means
that the short outages required to reassign the LUNs from the subsystem or subsystems and
reconfigure the SAN and host can be staggered over a period of time to minimize the
business impact.

Important: Creating an image mode VDisk can be done only using an unmanaged disk
(that is, before you add it to an MDG). We recommend that you create an empty
MDG, called image_mode or similar, since you need to add your newly created image
mode VDisk to an MDG. See 9.3.3, “Managed Disk Groups” on page 234, for information
about creating an MDG.
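
As a sketch (using the svctask mkmdiskgrp command covered in 9.3.3, and assuming an
extent size of 32 MB), such an empty MDG could be created as follows:

IBM_2145:itsosvc01:admin>svctask mkmdiskgrp -name image_mode -ext 32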

Use the svctask mkvdisk command to create an image mode VDisk. The full syntax of this
command is:
svctask mkvdisk -mdiskgrp mdisk_group_id | mdisk_group_name
-iogrp io_group_id | io_group_name -size disk_size [-udid vdisk_udid] [-fmtdisk]
[-vtype seq | striped | image] [-node node_id | node_name] [-unit b|kb|mb|gb|tb|pb]
[-mdisk mdisk_id_list | mdisk_name_list] [-name new_name_arg] [-cache readwrite |
none|]

Here, the parameters are defined as follows:


򐂰 mdiskgrp: Name or ID of the MDG in which to create the VDisk.
򐂰 iogrp: Name or ID of the I/O group which is to own the VDisk.
򐂰 size: Capacity (numerical); not necessary for image mode VDisks.
򐂰 fmtdisk: Optional parameter to force a format of the new VDisk.
򐂰 vtype: Optional parameter to specify the type of VDisk (sequential, striped, or image
mode). Default (if nothing is specified) is striped.
򐂰 node: Optional parameter to specify the name or ID of the preferred node. Default
(if nothing is specified) is to alternate between nodes in the I/O group.
򐂰 unit: Optional parameter to specify the data units for capacity parameter. Default
(if nothing is specified) is megabytes (MB).
򐂰 mdisk: Optional parameter to specify the name or ID of the MDisk or MDisks to be used
for the VDisk. This is only required for sequential and image mode VDisks because striped
VDisks use all MDisks that are available in the MDG by default.

Note: You can use this parameter for striped VDisks, for example, if you want to
specify that the VDisk only uses a subset of the MDisks available within a MDG.

򐂰 name: Optional parameter to assign a name to the new VDisk. Default (if nothing is
specified) is to assign the name vdiskX, where X is the ID sequence number assigned by
the SVC internally.

Note: If you do not provide the -name parameter, the SVC automatically generates the
name vdiskX (where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the dash
’-’ and the underscore ’_’. It can be between one and 15 characters in length. However, it
cannot start with a number, dash, or the word vdisk, since this prefix is reserved for SVC
assignment only.

򐂰 cache: Optional parameter to specify the cache options for the VDisk. Valid entries are
readwrite or none. The default is readwrite. If cache is not entered, the default is used.
This parameter was introduced in software release 3.1.

The command to create an image mode VDisk, and the system response, are as follows:
IBM_2145:itsosvc01:admin>svctask mkvdisk -mdiskgrp MDG2_DS43 -iogrp io_grp0 -vtype image
-mdisk mdisk6 -name imagedisk1
Virtual Disk, id [15], successfully created

This command creates an image mode VDisk called imagedisk1 using MDisk mdisk6. The
VDisk belongs to the MDG MDG2_DS43 and is owned by the I/O group io_grp0.

If we run the svcinfo lsmdisk command again, notice that mdisk6 now has a mode of
image, as shown in Example 9-25.
Example 9-25 svcinfo lsmdisk command: mdisk status
IBM_2145:itsosvc01:admin>svcinfo lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
0:mdisk0:online:managed:1:MDG0_DS43:200.4GB:0000000000000000:DS4301:600a0b80000fbdf00000027
53fb8b1c200000000000000000000000000000000
1:mdisk1:online:managed:0:MDG1_DS43:200.4GB:0000000000000001:DS4301:600a0b80000fbdfc000002a
33fb8b30900000000000000000000000000000000
2:mdisk2:online:managed:1:MDG0_DS43:407.2GB:0000000000000002:DS4301:600a0b80000fbdf00000027
73fb8b22a00000000000000000000000000000000
3:mdisk3:online:managed:0:MDG1_DS43:407.2GB:0000000000000003:DS4301:600a0b80000fbdfc0000029
f3fb8b1f700000000000000000000000000000000
4:mdisk4:online:managed:2:MDG2_DS43:817.4GB:0000000000000004:DS4301:600a0b80000fbdf00000027
93fb8b2ac00000000000000000000000000000000
5:mdisk5:online:managed:2:MDG2_DS43:681.2GB:0000000000000005:DS4301:600a0b80000fbdfc000002a
13fb8b25b00000000000000000000000000000000
6:mdisk6:online:image:2:MDG2_DS43:200.4GB:0000000000000007:DS4301:600a0b80000fbdfc000002a53
fbcda8100000000000000000000000000000000

9.3.3 Managed Disk Groups


This section explains the tasks that we can perform at an MDG level.

Viewing MDisk group information


Use the svcinfo lsmdiskgrp command, as shown in Example 9-26, to display information
about the MDGs defined in the SVC.

Example 9-26 svcinfo lsmdiskgrp command


IBM_2145:itsosvc01:admin>svcinfo lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:0:607.3GB:32:607.3GB
1:MDG0_DS43:online:2:15:607.0GB:32:465.0GB
2:MDG2_DS43:online:3:1:1498.7GB:32:1498.5GB

Creating an MDisk group
Use the svctask mkmdiskgrp command to create an MDG. The full syntax of this command is:
svctask mkmdiskgrp [-name name] [-mdisk name|id_list] -ext size

Note the following explanation:


򐂰 name: Name to assign to new group
򐂰 mdisk: List of names or IDs of MDisks to assign to group
򐂰 ext: Size of extents in this group

The command to create an MDG is:


IBM_2145:itsosvc01:admin>svctask mkmdiskgrp -name MDG3_DS43 -ext 32
MDisk Group, id [3], successfully created

This command creates an MDG called MDG3_DS43 with an extent size of 32 MB. Since we did
not specify any MDisks to add to the group with the -mdisk parameter, this is an empty MDG.
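
If we had wanted to populate the group at creation time, we could have supplied the -mdisk
parameter as well. As a sketch (MDG4_DS43 is a hypothetical name, and the listed MDisk
must still be unmanaged):

IBM_2145:itsosvc01:admin>svctask mkmdiskgrp -name MDG4_DS43 -mdisk mdisk6 -ext 32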

If we run the svcinfo lsmdiskgrp command, we should see the MDG created as shown in
Example 9-27.

Example 9-27 svcinfo lsmdiskgrp command


IBM_2145:itsosvc01:admin>svcinfo lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:0:607.3GB:32:607.3GB
1:MDG0_DS43:online:2:15:607.0GB:32:465.0GB
2:MDG2_DS43:online:3:1:1498.7GB:32:1498.5GB
3:MDG3_DS43:online:0:0:0:32:0

Renaming an MDisk group


Use the svctask chmdiskgrp command to change the name of an MDG. To verify the
change, run the svcinfo lsmdiskgrp command. Both of these commands are shown in
Example 9-28.

Example 9-28 svctask chmdiskgrp command


IBM_2145:itsosvc01:admin>svctask chmdiskgrp -name MDG3image_DS43 MDG3_DS43

IBM_2145:itsosvc01:admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:0:607.3GB:32:607.3GB
1:MDG0_DS43:online:2:15:607.0GB:32:465.0GB
2:MDG2_DS43:online:3:1:1498.7GB:32:1498.5GB
3:MDG3image_DS43:online:0:0:0:32:0

This command renamed the MDG from MDG3_DS43 to MDG3image_DS43.

Note: The chmdiskgrp command specifies the new name first. You can use letters A to Z,
a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash, or the word mdiskgrp
since this prefix is reserved for SVC assignment only.

Deleting an MDisk group


Use the svctask rmmdiskgrp command to remove an MDG from the SVC cluster
configuration:

IBM_2145:itsosvc01:admin>svctask rmmdiskgrp MDG3image_DS43

IBM_2145:itsosvc01:admin>svcinfo lsmdiskgrp -delim :


id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity
0:MDG1_DS43:online:2:0:607.3GB:32:607.3GB
1:MDG0_DS43:online:2:15:607.0GB:32:465.0GB
2:MDG2_DS43:online:3:1:1498.7GB:32:1498.5GB

This command removes the MDG MDG3image_DS43 from the SVC configuration.

Note: If there are MDisks within the MDG, you must use the -force flag, for example:
svctask rmmdiskgrp MDG3image_DS43 -force

Ensure that you really want to use this flag: it destroys all mapping information, and the
data held on the VDisks cannot be recovered.

Adding MDisks
If you created an empty MDG as we did, or if you simply assign additional MDisks to your SVC
environment later, you can use the svctask addmdisk command to populate the MDG:
IBM_2145:itsosvc01:admin>svctask addmdisk -mdisk mdisk6 MDG3_DS43

You can only add unmanaged MDisks to an MDG. This command adds MDisk mdisk6 to the
MDG named MDG3_DS43.

Important: Do not do this if you want to create an image mode VDisk from the MDisk you
are adding. As soon as you add an MDisk to an MDG, it becomes managed, and extent
mapping is not necessarily 1:1 anymore.

Removing MDisks
Use the svctask rmmdisk command to remove an MDisk from a MDG:
IBM_2145:itsosvc01:admin>svctask rmmdisk -mdisk mdisk6 MDG3_DS43

This command removes the MDisk called mdisk6 from the MDG named MDG3_DS43.

Note: If VDisks are using the MDisks you are removing from the MDG, you must use the
-force flag:
svctask rmmdisk -force -mdisk mdisk6 MDG3_DS43

Even then, the removal only takes place if there is sufficient space to migrate the VDisk
data to other extents on other MDisks which remain in the MDG. After you remove the
MDisk, it takes some time for its mode to change from managed to unmanaged.
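
To watch the mode change complete, you can view the MDisk details again, for example:

IBM_2145:itsosvc01:admin>svcinfo lsmdisk mdisk6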

Showing MDisks in this group


Use the svcinfo lsmdisk -filtervalue command, as shown in Example 9-29, to see which
MDisks are part of a specific MDG. This command shows all MDisks that are part of the MDG
MDG3_DS43.

Example 9-29 svcinfo lsmdisk -filtervalue: mdisks in MDG


IBM_2145:itsosvc01:admin>svcinfo lsmdisk -filtervalue 'mdisk_grp_name=MDG3_DS43' -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
6:mdisk6:online:managed:3:MDG3_DS43:200.4GB:0000000000000007:DS4301:600a0b80000fbdfc000002a
53fbcda8100000000000000000000000000000000

Showing VDisks using this group
Use the svcinfo lsvdisk -filtervalue command, as shown in Example 9-30, to see which
VDisks are part of a specific MDG. This command shows all VDisks that are part of the MDG
MDG0_DS43.

Example 9-30 svcinfo lsvdisk -filtervalue: vdisks in MDG


IBM_2145:itsosvc01:admin>svcinfo lsvdisk -filtervalue 'mdisk_grp_name=MDG0_DS43' -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name:vdisk_UID
3:VD1_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::

We have now completed the tasks required to manage the disk controller systems, managed
disks, and MDGs within an SVC environment.

9.4 Working with virtual disks


This section details the various configuration and administration tasks which can be
performed on the VDisks within the SVC environment.

9.4.1 Hosts
This section explains the tasks that can be performed at a host level.

Host information
Use the svcinfo lshost command to display summary information about all hosts defined
within the SVC environment. To display more detailed information about a specific host, run
the command again and append the host name parameter (for example, W2K_npsrv3). Both
of these commands are shown in Example 9-31.

Tip: The -delim : parameter condenses the on-screen content and separates data fields
with colons as opposed to wrapping text over multiple lines.

Example 9-31 svcinfo lshost command


IBM_2145:itsosvc01:admin>svcinfo lshost
id name port_count iogrp_count
0 AIX_270 2 4
1 W2K_npsrv3 2 4
2 LINUX1 2 4

IBM_2145:itsosvc01:admin>svcinfo lshost W2K_npsrv3


id 1
name W2K_npsrv3
port_count 2
type generic
WWPN 210100E08B259C41
node_logged_in_count 2
WWPN 210000E08B059C41
node_logged_in_count 2
state=active

Creating a host
Before creating a host, you need to know that its host bus adapter (HBA) worldwide port
names (WWPNs) are visible to the SVC. To do this, issue the svcinfo lshbaportcandidate
command as shown in Example 9-32.

Example 9-32 svcinfo lshbaportcandidate command


IBM_2145:itsosvc01:admin>svcinfo lshbaportcandidate
id
210000E08B09691D
210100E08B25C440
210000E08B08AFD6
210100E08B29691D
210000E08B05C440
210100E08B29951D

After you verify that the WWPNs displayed match your host (use host or SAN switch
utilities to verify), use the svctask mkhost command to create a host. The full
syntax of this command is:
svctask mkhost [-name name] [-hbawwpn wwpn_list] [-iogrp iogrp_list] [-force] [-mask
host_port_mask] [-type generic|hpux]

Note the following explanation:


򐂰 name: Name to be assigned to the host.
򐂰 hbawwpn: List of HBA WWPNs to be added to host.
򐂰 iogrp iogrp_list: Optionally specifies a set of one or more I/O groups that the host
accesses VDisks from. I/O groups are specified using their name or ID, separated by a
colon. Names and IDs can be mixed in the list. If the parameter is omitted, the host will be
associated with all I/O groups.
򐂰 force: Force the creation using the user-entered WWPNs. Use this if, for any reason, the
HBA is not online.
򐂰 mask: An optional parameter that specifies which ports the host object can access. The
port mask must be four characters in length and is made up of a combination of
‘0’ and ‘1’, where ‘0’ indicates that the port cannot be used and ‘1’ indicates that it can. The
default mask is 1111 (all ports are enabled). For example, a mask of 0011 enables ports 1 and 2.
򐂰 type: An optional parameter that specifies the type of host. Valid entries are hpux or
generic. The default is generic.

Note: If you do not provide the -name parameter, the SVC automatically generates the
name hostX (where X is the ID sequence number assigned by the SVC internally).

You can use letters A to Z, a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can
be between one and 15 characters in length. However, it cannot start with a number, dash,
or the word host since this prefix is reserved for SVC assignment only.

The command to create a host is shown here:


IBM_2145:itsosvc01:admin>svctask mkhost -name SANFS1 -hbawwpn 210000e08b09951d
Host id [3] successfully created

This command creates a host called SANFS1 using WWPN 210000e08b09951d.
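
As a sketch combining the optional parameters described above (SANFS3 is a hypothetical
host name; the WWPN is taken from the earlier lshbaportcandidate output), a more
restrictive host definition might look like this:

IBM_2145:itsosvc01:admin>svctask mkhost -name SANFS3 -hbawwpn 210000e08b08afd6 -iogrp io_grp0 -mask 0011 -type generic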

Note: You can define a host with multiple ports by using the separator (:) between
WWPNs:
svctask mkhost -name SANFS1 -hbawwpn 210000e08b09951d:210100e08b29951d

Or you can use the addport command, which we show later.

Perhaps your WWPN or WWPNs did not display when you issued the svcinfo
lshbaportcandidate command, but you are sure your adapter is functioning (for example,
you see the WWPN in the switch name server) and your zones are correctly set up. In this
case, you can type the WWPN of your HBA or HBAs and use the -force flag to create the
host regardless, as shown here:
IBM_2145:itsosvc01:admin>svctask mkhost -name SANFS2 -hbawwpn 210000e08b0995ff -force
Host id [8] successfully created

This command forces the creation of a host called SANFS2 using WWPN 210000e08b0995ff.

Note: WWPNs are one of the few things within the CLI that are not case sensitive.

If you run the svcinfo lshost command again, you should now see your host.

Modify a host
Use the svctask chhost command to change the name of a host. To verify the change, run
the svcinfo lshost command. Both of these commands are shown in Example 9-33.

Example 9-33 svctask chhost command


IBM_2145:itsosvc01:admin>svctask chhost -name sanfs1 SANFS1

IBM_2145:itsosvc01:admin>svcinfo lshost
id name port_count iogrp_count
0 AIX_270 2 4
1 W2K_npsrv3 2 4
2 LINUX1 2 4
3 sanfs1 1 4

This command renamed the host from SANFS1 to sanfs1.

Note: The chhost command specifies the new name first. You can use letters A to Z,
a to z, numbers 0 to 9, the dash ’-’ and the underscore ’_’. It can be between one and 15
characters in length. However, it cannot start with a number, dash, or the word host since
this prefix is reserved for SVC assignment only.

Note: To get support for more than eight LUNs on HP-UX, use the -type flag with this
command. Valid options are -type hpux (for HP-UX only) or -type generic (the default).
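
For example (a sketch, assuming an existing HP-UX host named HPUX1):

IBM_2145:itsosvc01:admin>svctask chhost -type hpux HPUX1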

Deleting a host
Use the svctask rmhost command to delete a host from the SVC configuration. This
command deletes the host called sanfs1 from the SVC configuration.
IBM_2145:itsosvc01:admin>svctask rmhost sanfs1

Note: If there are any VDisks assigned to the host, you must use the -force flag, for
example:
svctask rmhost -force sanfs1

Adding ports
If you add an HBA to a server that is already defined within the SVC, you can use the svctask
addhostport command to add WWPN definitions to it.

Before you add the new WWPN, you need to know that it is visible to the SVC. To do this, you
issue the svcinfo lshbaportcandidate command as shown here:
IBM_2145:itsosvc01:admin>svcinfo lshbaportcandidate
id
210000E08B09691D
210100E08B25C440
210000E08B08AFD6
210100E08B29691D
210000E08B05C440
210100E08B29951D

After you verify that the WWPNs displayed match your host (use host or SAN switch
utilities to verify), use the svctask addhostport command to add the port or ports to the host.
The full syntax of this command is:
svctask addhostport -hbawwpn wwpn_list [-force] host_id|host_name

Note the following explanation:


򐂰 hbawwpn: The list of HBA WWPNs
򐂰 force: Forces the system to use the provided WWPNs
򐂰 host_id | host_name: The ID or name of the host to which the ports are added

The command to add a host port is:


IBM_2145:itsosvc01:admin>svctask addhostport -hbawwpn 210100e08b29951d SANFS1

This command adds the WWPN of 210100e08b29951d to the host SANFS1.

Note: You can add multiple ports at a time by using the separator (:) between WWPNs, for
example:
svctask addhostport -hbawwpn 210000E08B05C440:210000E08B05C440 SANFS2

Perhaps your WWPN or WWPNs did not display when you issued the svcinfo
lshbaportcandidate command, but you are sure your adapter is functioning (for example,
you see the WWPN in the switch name server) and your zones are correctly set up. In this
case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to add
the port regardless, as shown here:
IBM_2145:itsosvc01:admin>svctask addhostport -hbawwpn 10000000c935b623 -force AIX_270

This command forces the addition of the WWPN 10000000c935b623 to the host called AIX_270.

Note: WWPNs are one of the few things within the CLI that are not case sensitive.

If you run the svcinfo lshost command again, you should see your host with an updated
port count (2 in our example), as shown in Example 9-34.

Example 9-34 svcinfo lshost command: port count
IBM_2145:itsosvc01:admin>svcinfo lshost
id name port_count iogrp_count
0 AIX_270 2 4
1 W2K_npsrv3 2 4
2 LINUX1 2 4
3 SANFS1 2 4

Deleting ports
If you added a port by mistake, or if you remove an HBA from a server that is already
defined within the SVC, you can use the svctask rmhostport command to remove WWPN
definitions from an existing host.

Before you remove the WWPN, be sure that it is the right one. To find this out, you issue the
svcinfo lshost command (our host is SANFS1) as shown in Example 9-35.

Example 9-35 svcinfo lshost command


IBM_2145:itsosvc01:admin>svcinfo lshost SANFS1
id 3
name SANFS1
port_count 2
type generic
WWPN 210000E08B09951D
node_logged_in_count 2
WWPN 210100E08B29951D
node_logged_in_count 2
status:active

When you know the WWPN, use the svctask rmhostport command to delete a host port.
The full syntax of this command is:
svctask rmhostport -hbawwpn wwpn_list [-force] host_id|host_name

Note the following explanation:


򐂰 hbawwpn: List of HBA WWPNs
򐂰 force: Forces the system to use the provided WWPNs
򐂰 host_id | host_name: The ID or name of the host from which the ports are removed

The command to remove a host port is:


IBM_2145:itsosvc01:admin>svctask rmhostport -hbawwpn 210100e08b29951d SANFS1

This command removes the WWPN of 210100e08b29951d from host SANFS1.

Note: You can remove multiple ports at a time by using the separator (:) between WWPNs,
for example:

svctask rmhostport -hbawwpn 210000E08B05C440:210000E08B05C440 SANFS2

Showing the VDisks mapped to this host


Use the svcinfo lshostvdiskmap command to show which VDisks are assigned to a specific
host:
IBM_2145:itsosvc01:admin>svcinfo lshostvdiskmap -delim : LINUX1
id:name:SCSI_id:vdisk_id:vdisk_name:wwpn:vdisk_UID
2:LINUX1:0:8:VD0_UNIX1:210000E08B04D451:600507680189801B200000000000000B

From this command, you can see that the host LINUX1 has only one VDisk, called VD0_UNIX1,
assigned. The SCSI LUN id is also shown; this is the id by which the Virtual Disk is
presented to the host. If no host is specified, all defined host-to-VDisk mappings are
returned.

Note: Although the -delim: flag normally comes at the end of the command string, in this
case, you must specify this flag before the host name. Otherwise, it returns the following
message: CMMVC6070E An invalid or duplicated parameter, unaccompanied argument,
or incorrect argument sequence has been detected. Ensure that the input is as per
the help.

SAN debugging
There are SVC commands to help debug and display connectivity between SAN Volume
Controller nodes, storage subsystems, and hosts.

Use the svcinfo lsfabric command as shown in Example 9-36. The output in this example
has been deliberately truncated:

Example 9-36 lsfabric command example


IBM_2145:itsosvc01:admin>svcinfo lsfabric
remote_wwpn remote_nportid id node_name local_wwpn local_port local_nportid state name
cluster_name type
5005076801301883 010E00 1 node1 500507680140188E 1 010C00 inactive SVCNode2
itsosvc01 node
5005076801301883 010E00 1 node1 500507680130188E 2 010D00 inactive SVCNode2
itsosvc01 node
5005076801301883 010E00 1 node1 500507680110188E 3 010500 inactive SVCNode2
itsosvc01 node
5005076801301883 010E00 1 node1 500507680120188E 4 011500 inactive SVCNode2
itsosvc01 node
5005076801401883 010F00 1 node1 500507680140188E 1 010C00 inactive SVCNode2
itsosvc01 node
5005076801401883 010F00 1 node1 500507680130188E 2 010D00 inactive SVCNode2
itsosvc01 node
5005076801401883 010F00 1 node1 500507680110188E 3 010500 inactive SVCNode2
itsosvc01 node
5005076801401883 010F00 1 node1 500507680120188E 4 011500 inactive SVCNode2
itsosvc01 node
210000E08B05F0ED 010900 1 node1 500507680140188E 1 010C00 active
host
210000E08B05F0ED 010900 2 SVCNode2 5005076801301883 2 010E00 active
host
.
.
.
.
.
200500A0B8174432 010400 2 SVCNode2 5005076801101883 3 010600 inactive DS4000
controller
200500A0B8174432 010400 2 SVCNode2 5005076801201883 4 011600 inactive DS4000
controller
200400A0B8174432 010000 1 node1 500507680140188E 1 010C00 inactive DS4000
controller
200400A0B8174432 010000 1 node1 500507680130188E 2 010D00 inactive DS4000
controller
200400A0B8174432 010000 1 node1 500507680110188E 3 010500 inactive DS4000
controller
.

.
.
IBM_2145:itsosvc01:admin>

It can also be condensed using the -delim : parameter as shown in Example 9-37.

Example 9-37 lsfabric -delim command example


IBM_2145:itsosvc01:admin>svcinfo lsfabric -delim :
remote_wwpn:remote_nportid:id:node_name:local_wwpn:local_port:local_nportid:state:name:cluster_name:type
5005076801301883:010E00:1:node1:500507680140188E:1:010C00:inactive:SVCNode2:itsosvc01:node
5005076801301883:010E00:1:node1:500507680130188E:2:010D00:inactive:SVCNode2:itsosvc01:node
5005076801301883:010E00:1:node1:500507680110188E:3:010500:inactive:SVCNode2:itsosvc01:node
5005076801301883:010E00:1:node1:500507680120188E:4:011500:inactive:SVCNode2:itsosvc01:node
5005076801401883:010F00:1:node1:500507680140188E:1:010C00:inactive:SVCNode2:itsosvc01:node
5005076801401883:010F00:1:node1:500507680130188E:2:010D00:inactive:SVCNode2:itsosvc01:node
5005076801401883:010F00:1:node1:500507680110188E:3:010500:inactive:SVCNode2:itsosvc01:node
5005076801401883:010F00:1:node1:500507680120188E:4:011500:inactive:SVCNode2:itsosvc01:node
210000E08B05F0ED:010900:1:node1:500507680140188E:1:010C00:active:::host
210000E08B05F0ED:010900:2:SVCNode2:5005076801301883:2:010E00:active:::host
200500A0B8174432:010400:1:node1:500507680140188E:1:010C00:inactive:DS4000::controller
200500A0B8174432:010400:1:node1:500507680130188E:2:010D00:inactive:DS4000::controller

These are the parameters:

򐂰 -node node_id_or_name: Optionally specifies a node ID or name. This parameter is
mutually exclusive with all other parameters except -port, which can be used in
conjunction with -node. Using this parameter by itself displays output for all ports of the
specified SAN Volume Controller node. Output is given for each SAN Volume Controller
port in turn.
򐂰 -port port_id: Optionally specifies a port ID, and can only be used together with the
-node parameter. It displays a concise view of all the WWPNs currently logged into the
specified SAN Volume Controller node and port. Valid data is a number in the range 1 - 4
that matches the port with the same number in the VPD, or the actual hex WWPN of the
local port.
򐂰 -wwpn wwpn: Optionally specifies a WWPN. This parameter is mutually exclusive with all
other parameters. It displays a list of all ports which have a login to the specified WWPN.
򐂰 -host host_id_or_name: Optionally specifies a host name or ID. This parameter is
mutually exclusive with all other parameters. It is equivalent to issuing the svcinfo
lsfabric -wwpn command for every configured WWPN of the specified host. Output is
sorted by remote WWPNs, then SAN Volume Controller WWPNs. For example, a host
with 2 ports zoned to 1 port of every node in an 8 node cluster should produce 16 lines
of output.
򐂰 -controller controller_id_or_name: Optionally specifies a controller ID or name. This
parameter is mutually exclusive with all other parameters. It is equivalent to issuing the
svcinfo lsfabric -wwpn command for every configured WWPN of the specified
controller. Output is sorted by remote WWPNs, then SAN Volume Controller WWPNs.
For example, a controller with 4 ports connected to an 8 node cluster with 2 counterpart
SANs should produce 64 lines of output.
򐂰 -cluster cluster_id_or_name: Optionally specifies a cluster ID or name. This parameter
is mutually exclusive with all other parameters. It is equivalent to issuing the svcinfo
lsfabric -wwpn command for every known WWPN in the specified cluster. Output is
sorted by remote WWPNs, then SAN Volume Controller WWPNs. This command can be
used to check the state of connections within the local cluster or between local and
remote clusters.

When the local cluster ID or name is specified, each node-to-node connection will be listed
twice: once from each end. For example, an 8 node cluster with 2 counterpart SANs should
produce 8 nodes * 7 other nodes * 2 SANs * 4 point-to-point logins = 448 lines of output.
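
For example (a sketch reusing the node name from the earlier output), to display a concise
view of the WWPNs currently logged in to port 1 of node1, you could enter:

IBM_2145:itsosvc01:admin>svcinfo lsfabric -node node1 -port 1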

9.4.2 Virtual disks


This section explains the tasks that you can perform at a virtual disk level.

VDisk information
Use the svcinfo lsvdisk command to display summary information about all VDisks defined
within the SVC environment. To display more detailed information about a specific VDisk, run
the command again and append the VDisk name parameter (for example, aix_vdisk0). Both
of these commands are shown in Example 9-38.

Example 9-38 svcinfo lsvdisk command


IBM_2145:itsosvc01:admin>svcinfo lsvdisk -delim :
id:name:IO_group_id:IO_group_name:status:mdisk_grp_id:mdisk_grp_name:capacity:type:FC_id:FC
_name:RC_id:RC_name:vdisk_UID
0:VdAIX1_FCT:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
1:VdAIX1V1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
2:VD0_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
3:VD1_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
4:VD2_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
5:VD3_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
6:VD4_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
7:VD5_AIX_270:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped::::
8:VD0_UNIX1:0:io_grp0:online:1:MDG0_DS43:2.0GB:striped::::
9:VDisk1:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:0:FCMap1::
10:VDisk1T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:0:FCMap1::
11:VDisk2:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap2::
12:VDisk2T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:1:FCMap2::
13:VDisk3:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:2:FCMap3::
14:VDisk3T:0:io_grp0:online:1:MDG0_DS43:10.0GB:striped:2:FCMap3::
15:vdisk15:0:io_grp0:online:2:MDG2_DS43:200.0MB:striped::::

IBM_2145:itsosvc01:admin>svcinfo lsvdisk VD0_AIX_270


id 2
name VD0_AIX_270
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768017F06BF7800000000000001
throttling 0
preferred_node_id 12
fast_write_state empty
cache readwrite
udid 0

Creating a VDisk
Use the svctask mkvdisk command to create a VDisk. The full syntax of the
mkvdisk command is:
mkvdisk -mdiskgrp mdisk_group_id | mdisk_group_name
-iogrp io_group_id | io_group_name -size disk_size [-udid vdisk_udid] [-fmtdisk]
[-vtype seq | striped | image] [-node node_id | node_name] [-unit b|kb|mb|gb|tb|pb]
[-mdisk mdisk_id_list | mdisk_name_list] [-name new_name_arg] [-cache readwrite |
none|]

Here the parameters are defined as follows:


򐂰 mdiskgrp: Name or ID of the MDG in which to create the VDisk.
򐂰 iogrp: Name or ID of I/O group that is to own the VDisk.
򐂰 size: Capacity (numerical); not needed for image mode VDisks.
򐂰 fmtdisk: Optional parameter to force a format of the new VDisk.
򐂰 vtype: Optional parameter to specify the type of VDisk (sequential, striped or image
mode). Default (if nothing is specified) is striped.
򐂰 node: Optional parameter to specify the name or ID of the preferred node. Default
(if nothing is specified) is to alternate between nodes in the I/O group.
򐂰 unit: Optional parameter to specify the data units for capacity parameter. Default
(if nothing is specified) is MB.
򐂰 mdisk: Optional parameter to specify the name or ID of the MDisk or MDisks to be used
for the VDisk. This is only required for sequential and image mode VDisks because striped
VDisks use all MDisks that are available in the MDG by default.

Note: You can use this parameter for striped VDisks. For example, you might want to
specify that the VDisk only uses a subset of the MDisks available within a MDG.

򐂰 name: Optional parameter to assign a name to the new VDisk. Default (if nothing is
specified) is to assign the name vdiskX, where X is the ID sequence number assigned by
the SVC internally.

Note: If you do not provide the -name parameter, the SVC automatically generates the
name vdiskX (where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, the
dash ’-’ and the underscore ’_’. It can be between one and 15 characters in length.
However, it cannot start with a number, dash, or the word vdisk since this prefix is
reserved for SVC assignment only.

򐂰 cache: Optional parameter to specify the cache options for the VDisk. Valid entries are
readwrite or none. The default is readwrite. If cache is not entered, the default is used.
This parameter was introduced in software release 3.1.

The command to create a VDisk is shown here:


IBM_2145:itsosvc01:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -size 20 -vtype striped
-node 3 -unit gb -mdisk 0:1 -name Vdisk1
Virtual Disk, id [3], successfully created

This command creates a striped (default) VDisk called Vdisk1 of 20 GB. The VDisk belongs
to the MDG with id 0 and is owned by the I/O group io_grp0, with the node with id 3 as its
preferred node, and it has been striped on the MDisks with ids 0 and 1. This can be verified
with the lsmdiskmember command as described in “Showing a VDisk for an MDisk” on page 232.

Note: When you want to stripe a VDisk across specific MDisks, you need to specify a list
of MDisks. For example, the list can be in the format 0:1:2...... using the MDisk ids, or
mdisk1:mdisk2....... using the MDisk names.

To create a sequential VDisk, simply specify the -vtype seq and -mdisk parameters as shown:
IBM_2145:itsosvc01:admin>svctask mkvdisk -mdiskgrp 0 -iogrp io_grp0 -size 20 -vtype seq
-mdisk mdisk0 -name Vdisk2
Virtual Disk, id [4], successfully created

This command creates a sequential VDisk called Vdisk2 of 20 MB (MB default). The VDisk is
created on MDisk mdisk0 within the MDG with id:0 and is owned by the I/O group io_grp0.

Note: An entry of 1 GB uses 1024 MB.

Deleting a VDisk
When the svctask rmvdisk command is executed on an existing managed mode VDisk, any
data that remains on it will be lost. The extents that made up this VDisk will be returned to the
pool of free extents available in the Managed Disk Group.

If any Remote Copy, FlashCopy, or host mappings still exist for this virtual disk, the delete
will fail unless the -force flag is specified. In this case, any mappings that remain are deleted
first, and then the VDisk is deleted.

If the VDisk is currently the subject of a migration to image mode, the delete will fail unless
the -force flag is specified. In this case, the migration is halted and the VDisk is deleted.

If the command succeeds (without the -force flag) for an image mode disk, then the
underlying back-end controller logical unit will be consistent with the data which a host could
previously have read from the Image Mode Virtual Disk. That is, all fast write data will have
been flushed to the underlying LUN. If the -force flag is used, then this guarantee does not
hold.

If there is any un-destaged data in the fast write cache for this VDisk, the deletion of the
VDisk fails unless the -force flag is specified. In this case, any un-destaged data in the fast
write cache is discarded.

Use the svctask rmvdisk command to delete a VDisk from your SVC configuration:
IBM_2145:itsosvc01:admin>svctask rmvdisk LINUX3

This command deletes VDisk LINUX3 from the SVC configuration. If the VDisk is assigned to
a host, you need to use the -force flag to delete the VDisk, for example:
svctask rmvdisk -force LINUX3



Deleting a VDisk-to-host mapping
If you mapped a VDisk to a host by mistake, or you simply want to reassign the VDisk to
another host, use the svctask rmvdiskhostmap command to unmap a VDisk from a host:
IBM_2145:itsosvc01:admin>svctask rmvdiskhostmap -host LINUX1 VD0_UNIX1

This command unmaps the VDisk called VD0_UNIX1 from the host LINUX1.

Note: With SVC 4.1.0, this command also removes any persistent reservation that the
host is holding.

Expanding a VDisk
Expanding a VDisk presents a larger capacity disk to your operating system. Although this is
easily done using the SVC, you must ensure that your operating system supports expansion
before using this function.

Assuming your operating system supports it, you can use the svctask expandvdisksize
command to increase the capacity of a given VDisk. The full syntax of the expandvdisksize
command is:
expandvdisksize -size size [-mdisk name_list|id_list] [-fmtdisk]
[-unit b|kb|mb|gb|pb|tb] vdisk_name|vdisk_id

Note the following explanation:


򐂰 size: Capacity by which to expand
򐂰 mdisk: Disks to use as stripe set (optional for striped)
򐂰 fmtdisk: Format disk before use
򐂰 unit: Unit for capacity
򐂰 vdisk_name | vdisk_id: Name or ID of disk to be expanded

Important: Be very careful here. The format option formats the entire VDisk, not only the
new extents.

A sample of this command is:


IBM_2145:itsosvc01:admin>svctask expandvdisksize -size 20 -unit mb LINUX2

This command expands the 20 MB LINUX2 VDisk by another 20 MB for a total of 40 MB.

Important: If a VDisk is expanded, its type will become striped even if it was previously
sequential or in image mode. If there are not enough extents to expand your VDisk to the
specified size, you receive the following error message: CMMVC5860E
Ic_failed_vg_insufficient_virtual_extents.

Mapping a VDisk to a host


Use the svctask mkvdiskhostmap to map a VDisk to a host. The full syntax is:
svctask mkvdiskhostmap [-force] -host host_id | host_name [-scsi scsi_num_arg] vdisk_name |
vdisk_id

When executed, this command will create a new mapping between the Virtual Disk and the
Host specified. This will essentially present this Virtual Disk to the Host, as if the disk was
directly attached to the Host. It is only after this command is executed that the Host can
perform I/O to the Virtual Disk. Optionally, a SCSI LUN id can be assigned to the mapping.



When the HBA in the host scans for devices attached to it, it will discover all Virtual Disks that
are mapped to its Fibre Channel ports. When the devices are found, each one is allocated an
identifier (SCSI LUN id). For example, the first disk found will generally be SCSI LUN 1, and
so on. You can control the order in which the HBA discovers Virtual Disks by assigning the
SCSI LUN id as required. If you do not specify a SCSI LUN id, then the cluster will
automatically assign the next available SCSI LUN id, given any mappings that already exist
with that Host.

It is worth noting that some HBA device drivers will stop when they find a gap in the SCSI
LUN ids. For example:
򐂰 Virtual Disk 1 is mapped to Host 1 with SCSI LUN id 1
򐂰 Virtual Disk 2 is mapped to Host 1 with SCSI LUN id 2
򐂰 Virtual Disk 3 is mapped to Host 1 with SCSI LUN id 4

When the device driver scans the HBA, it might stop after discovering Virtual Disks 1 and 2,
because there is no SCSI LUN mapped with id 3. Care should therefore be taken to ensure
that the SCSI LUN id allocation is contiguous.

It is not possible to map a virtual disk to a host more than once at different LUN numbers.
IBM_2145:itsosvc01:admin>svctask mkvdiskhostmap -host LINUX1 LINUX2
Virtual Disk to Host map, id [0], successfully created

This command maps the VDisk called LINUX2 to the host called LINUX1.
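
If you want to control the SCSI LUN ID rather than let the cluster assign the next available
one, add the -scsi parameter. The following command is an illustrative sketch that reuses
the Vdisk1 VDisk created earlier in this chapter; the LUN ID of 3 is an arbitrary example:
IBM_2145:itsosvc01:admin>svctask mkvdiskhostmap -host LINUX1 -scsi 3 Vdisk1
Virtual Disk to Host map, id [3], successfully created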

Modifying a VDisk
Executing the svctask chvdisk command will modify a single property of a Virtual Disk. Only
one property can be modified at a time. So to change the name and modify the I/O Group
would require two invocations of the command. A new name, or label, can be specified. The
new name can be used subsequently to reference the Virtual Disk. The I/O Group with which
this Virtual Disk is associated can be changed. Note that this requires a flush of the cache
within the nodes in the current I/O Group to ensure that all data is written to disk. I/O must be
suspended at the Host level before performing this operation.

The full syntax of the svctask chvdisk command is:


svctask chvdisk [-iogrp iogrp_name|iogrp_id] [-rate throttle_rate [-unitmb]] [-name
new_name_arg] [-force] vdisk_name|vdisk_id

Note the following explanation:


򐂰 iogrp: Name or ID of new I/O group
򐂰 rate: Throttling rate (see I/O governing)
򐂰 unitmb: Specify throttle rate in mb (default is ios)
򐂰 name: New name for disk
򐂰 force: Force the removal of the disk
򐂰 name | id: Existing name, or ID, of disk being changed

Changing the name of a VDisk is quite an obvious task. However, the I/O governing
parameter is a new concept.

I/O governing
I/O governing effectively throttles the number of I/Os per second (or MBs per second) that
can be achieved to and from a specific VDisk. You might want to do this if you have a VDisk
with an access pattern that adversely affects the performance of other VDisks on the same
set of MDisks, for example, because it uses most of the available bandwidth.



Of course, if this application is highly important, then migrating the VDisk to another set of
MDisks might be advisable. However, in some cases, it is an issue with the I/O profile of the
application rather than a measure of its use or importance.

The choice between I/O and MB as the I/O governing throttle should be based on the disk
access profile of the application. Database applications generally issue large amounts of I/O
but only transfer a relatively small amount of data. In this case, setting an I/O governing
throttle based on MBs per second does not achieve much. It is better to use an I/O per
second throttle.
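
For example, to set an I/O-per-second throttle, omit the -unitmb parameter so that the rate
is interpreted as I/Os per second. The following command is an illustrative sketch; the rate
value of 4000 is an arbitrary example:
IBM_2145:itsosvc01:admin>svctask chvdisk -rate 4000 VD0_UNIX1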

At the other extreme, a streaming video application generally issues a small amount of I/O
but transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle based on I/Os per second does not achieve much; it is better to use an
MB-per-second throttle.

Note: An I/O governing rate of 0 (displayed as throttling in CLI output of svcinfo lsvdisk
command) does not mean that zero I/O per second (or MBs per second) can be achieved.
It means that no throttle is set.

An example of the chvdisk command is shown here:


IBM_2145:itsosvc01:admin>svctask chvdisk -rate 20 -unitmb VD0_UNIX1

IBM_2145:itsosvc01:admin>svctask chvdisk -name VD0_LINUX1 VD0_UNIX1

Note: In the chvdisk command, the new name (the -name parameter) is specified before
the existing VDisk name. The name can consist of the letters A to Z and a to z, the
numbers 0 to 9, the dash (-), and the underscore (_). It can be between one and 15
characters in length. However, it cannot start with a number, a dash, or the word vdisk,
because this prefix is reserved for SVC assignment only.

The first command changes the VDisk throttling to 20 MB per second, while the second
command changes the name of the VDisk from VD0_UNIX1 to VD0_LINUX1.

If you want to verify the changes, issue the svcinfo lsvdisk command, as shown in
Example 9-39.

Example 9-39 svcinfo lsvdisk command: Verifying throttling


IBM_2145:itsosvc01:admin>svcinfo lsvdisk VD0_LINUX1
id 8
name VD0_LINUX1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 2.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768017F06BF7800000000000001
throttling (MB) 20
preferred_node_id 12



fast_write_state empty
cache readwrite
udid 0

Migrating a VDisk
From time to time, you might want to migrate VDisks from one set of MDisks to another to
retire an old disk subsystem, to better balance performance across your virtualized
environment, or simply to migrate data into the SVC environment transparently using image
mode. To do so, use the svctask migratevdisk command. The full syntax of the command is:
svctask migratevdisk -mdiskgrp name|id [-threads threads] -vdisk name|id

Note the following explanation:


򐂰 mdiskgrp: MDisk group name or ID
򐂰 threads: Number of threads
򐂰 vdisk: VDisk name or ID

Important: After migration is started, it continues to completion unless it is stopped or
suspended by an error condition, or unless the VDisk being migrated is deleted.

As you can see from these parameters, before you can migrate your VDisk, you must know
the name or ID of the VDisk and the name of the MDG to which you want to migrate it. To
find these, simply run the svcinfo lsvdisk and svcinfo lsmdiskgrp commands.

When you know these details, you can issue the migratevdisk command as shown here:
IBM_2145:itsosvc01:admin>svctask migratevdisk -mdiskgrp MDG3_DS43 -vdisk VD0_LINUX1

This command moves the VDisk VD0_LINUX1 to the MDG MDG3_DS43.

Note: If insufficient extents are available within your target MDG, you receive an error
message. Make sure the source and target MDisk group have the same extent size.

The optional threads parameter allows you to assign a priority to the migration process.
The default is 4, which is the highest priority setting. However, if you want the process to
take a lower priority over other types of I/O, you can specify 3, 2, or 1.
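
For example, to run the previous migration at a lower priority, you can add the -threads
parameter. The following command is an illustrative sketch; the thread count of 2 is an
arbitrary example:
IBM_2145:itsosvc01:admin>svctask migratevdisk -mdiskgrp MDG3_DS43 -threads 2 -vdisk VD0_LINUX1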

You can run the svcinfo lsmigrate command at any time to see the status of the migration
process. This is shown in Example 9-40.

Example 9-40 svcinfo lsmigrate command


IBM_2145:itsosvc01:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 20
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 3
max_thread_count 4
IBM_2145:itsosvc01:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 37
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 3
max_thread_count 4
IBM_2145:itsosvc01:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 50



migrate_source_vdisk_index 8
migrate_target_mdisk_grp 3
max_thread_count 4
IBM_2145:itsosvc01:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 3
max_thread_count 4
IBM_2145:itsosvc01:admin>svcinfo lsmigrate
IBM_2145:itsosvc01:admin>

Note: The progress is given as percent complete. When the command returns no further
output, the migration process has finished.

Migrate a VDisk to an image mode VDisk


Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data
path. This might be useful where the SVC is used as a data mover appliance. You can use
the svctask migratetoimage command to do this.

To migrate a VDisk to an image mode VDisk, the following rules apply:


򐂰 The destination MDisk must be greater than or equal to the size of the VDisk.
򐂰 The MDisk specified as the target must be in an unmanaged state.
򐂰 Regardless of the mode that the VDisk starts in, it is reported as managed mode during
the migration.
򐂰 Both of the MDisks involved are reported as being Image mode during the migration.
򐂰 If the migration is interrupted by a cluster recovery or by a cache problem, then the
migration will resume after the recovery completes.

The full syntax of the svctask migratetoimage command is:
svctask migratetoimage -vdisk source_vdisk_id|name -mdisk unmanaged_target_mdisk_id|name
-mdiskgrp managed_disk_group_id|name [-threads number_of_threads]

򐂰 -vdisk: Specifies the name or ID of the source Virtual Disk to be migrated.
򐂰 -mdisk: Specifies the name of the MDisk to which the data must be migrated. This disk
must be unmanaged and large enough to contain the data of the disk being migrated.
򐂰 -mdiskgrp: Specifies the mdisk group into which the MDisk must be placed once the
migration has completed.
򐂰 -threads: Optionally specifies the number of threads to use while migrating these extents,
from 1 to 4.

Here is an example of the command:
IBM_2145:itsosvc01:admin>svctask migratetoimage -vdisk LINUX1_VD_2 -mdisk mdisk8
-mdiskgrp MDG4_DS43_IMG

In this example, you migrate the data from LINUX1_VD_2 onto mdisk8, and the MDisk is
placed into the MDisk group MDG4_DS43_IMG after the migration completes.



Shrinking a VDisk
The method that the SVC uses to shrink a VDisk is to remove the required number of extents
from the end of the VDisk. Depending on where your data actually resides on the VDisk, this
can be quite destructive.

For example, you might have a VDisk that consists of 128 extents (0 to 127) of 16 MB (2 GB
capacity) and you want to decrease the capacity to 64 extents (1 GB capacity). In this case,
the SVC simply removes extents 64 to 127. Depending on the operating system, there is no
easy way to ensure that your data resides entirely on extents 0 to 63. Therefore, you might
lose data. Although easily done using the SVC, you must ensure your operating system
supports shrinking, either natively or by using third-party tools, before using this function. In
addition, we recommend that you always have a good up-to-date backup before you execute
this task.

Note: Image Mode Virtual Disks cannot be reduced in size. They must first be migrated to
Managed Mode.

Assuming your operating system supports it, you can use the svctask shrinkvdisksize
command to decrease the capacity of a given VDisk. The full syntax of the svctask
shrinkvdisksize command is:
svctask shrinkvdisksize -size size [-unit b|kb|mb|gb|pb|tb] name|id

Note the following explanation:


򐂰 size: Capacity by which to shrink
򐂰 unit: Units applicable to capacity (default is mb)
򐂰 name | id: The name or ID of the disk to be shrunk

Here is an example of this command:


IBM_2145:itsosvc01:admin>svctask shrinkvdisksize -size 1 -unit gb VD0_LINUX1

This command shrinks the VDisk VD0_LINUX1 by 1 GB to a new total size of 1 GB.

Important: This feature should only be used to make a target or auxiliary VDisk the same
size as the source or master VDisk when creating FlashCopy mappings or Metro Mirror
relationships. You should also ensure that the target VDisk is not mapped to any hosts
prior to performing this operation. If the virtual disk contains data, you should not shrink
the disk.

Showing the MDisks


Use the svcinfo lsvdiskmember command as shown in Example 9-41 to show which MDisks
are used by a specific VDisk.

Example 9-41 svcinfo lsvdiskmember command


IBM_2145:itsosvc01:admin>svcinfo lsvdiskmember VD0_LINUX1
id
6

If you want to know more about these MDisks, you can run the svcinfo lsmdisk command as
explained in 9.3.2, “Managed disks” on page 228 (using the ID displayed above rather than
the name).



Showing the MDisk group
Use the svcinfo lsvdisk command, as shown in Example 9-42, to show to which MDG a
specific VDisk belongs.

Example 9-42 svcinfo lsvdisk command: MDG name


IBM_2145:itsosvc01:admin>svcinfo lsvdisk VD0_LINUX1
id 8
name VD0_LINUX1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 3
mdisk_grp_name MDG3_DS43
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768017F06BF7800000000000001
throttling (MB) 20
preferred_node_id 12
fast_write_state empty
cache readwrite
udid 0

If you want to know more about these MDGs, you can run the svcinfo lsmdiskgrp command
as explained in 9.3.3, “Managed Disk Groups” on page 234.

Showing the host to which the VDisk is mapped


To show the hosts to which a specific VDisk has been assigned, run the svcinfo
lsvdiskhostmap command as shown in Example 9-43.

Example 9-43 svcinfo lsvdiskhostmap command


IBM_2145:itsosvc01:admin>svcinfo lsvdiskhostmap -delim : LINUX2
id:name:SCSI_id:host_id:host_name:wwpn:vdisk_UID
16:LINUX2:0:2:LINUX1:210000E08B04D451:600507680189801B2000000000000013
16:LINUX2:0:2:LINUX1:210100E08B24D451:600507680189801B2000000000000013

This command shows the host or hosts to which the VDisk LINUX2 is mapped. It is normal
to see duplicate entries, because there are multiple paths between the cluster and the host.
To be sure that the operating system on the host sees the disk only one time, you must
install and configure a multipath software application such as SDD. For more information, see
Chapter 8, “Host configuration” on page 165 where SDD is explained.

Note: Although the optional -delim : flag normally comes at the end of the command
string, in this case you must specify this flag before the VDisk name. Otherwise, the
command does not return any data.

You have now completed the tasks required to manage the hosts and VDisks within an SVC
environment.



Showing the VDisk to which the host is mapped
To show the VDisk to which a specific host has been assigned, run the svcinfo
lshostvdiskmap command as shown in Example 9-44.

Example 9-44 lshostvdiskmap command example


IBM_2145:itsosvc01:admin>svcinfo lshostvdiskmap -delim : KANAGA
id:name:SCSI_id:vdisk_id:vdisk_name:wwpn:vdisk_UID
3:KANAGA:0:1:AIXVdisk1:10000000C932A7D1:60050768018100C4700000000000000A

This command shows which VDisk or VDisks are mapped to the host called KANAGA.

Note: Although the optional -delim : flag normally comes at the end of the command
string, in this case you must specify this flag before the host name. Otherwise, the
command does not return any data.

9.4.3 Tracing a host disk back to its source physical disk


Follow this procedure:
1. On your host, run the datapath query device command. You see a long disk serial
number for each vpath device, as shown in Example 9-45.

Example 9-45 datapath query device


C:\Program Files\IBM\Subsystem Device Driver>datapath query device
Total Devices : 1

DEV#: 0 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED


SERIAL: 600507680183001AD00000000000000B
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port4 Bus0/Disk3 Part0 OPEN NORMAL 4 0
1 Scsi Port5 Bus0/Disk3 Part0 OPEN NORMAL 3 0
2 Scsi Port5 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port4 Bus0/Disk3 Part0 OPEN NORMAL 0 0

Note: In Example 9-45 the state of each path is OPEN. Sometimes you will find the state
CLOSED, and this does not necessarily indicate any kind of problem, as it might be due to
the stage of processing that it is in.



2. Run the svcinfo lshostvdiskmap command to return a list of all assigned VDisks.
IBM_2145:itsosvc01:admin>svcinfo lshostvdiskmap -delim : W2K
id:name:SCSI_id:vdisk_id:vdisk_name:wwpn:vdisk_UID
0:W2K:0:0:vdisk0:21000000B3474536:600507680183001AD00000000000000B
Look for the disk serial number that matches your datapath query device output. This
host was defined in our SVC as W2K.
3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks
that make up the specified VDisk:
IBM_2145:itsosvc01:admin>svcinfo lsvdiskmember vdisk0
id
0
4. Query the MDisks with the svcinfo lsmdisk mdiskID command to find their controller and
LUN number information, as shown in Example 9-46. The output displays the controller name
and the controller LUN ID, which should be enough (provided you named your controller
something unique such as a serial number) to track back to a LUN within the disk
subsystem.

Example 9-46 svcinfo lsmdisk command


IBM_2145:itsosvc01:admin>svcinfo lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name mdiskgrp0
capacity 100.0GB
quorum_index 0
block_size 512
controller_name DS4301
ctrl_type 4
ctrl_WWNN 200400A0B80CEB7C
controller_id 0
path_count 2
max_path_count 2
ctrl_LUN_# 7
UID 600a0b80000fbdf0000002753fb8b1c200000000000000000000000000000000
preferred_WWPN 200400A0B80CEB7C
active_WWPN 200400A0B80CEB7C

9.5 Managing copy services


See Chapter 11, “Copy Services: FlashCopy” on page 383, Chapter 12, “Copy Services:
Metro Mirror” on page 425, and Chapter 13, “Copy Services: Global Mirror” on page 489 for
more information.

9.6 Service and maintenance


This section details the various service and maintenance tasks that you can execute within
the SVC environment.



9.6.1 Upgrading software
This section explains how to upgrade the SVC software.

Package numbering and version


The format for software upgrade packages is four positive integers separated by dots. For
example, a software upgrade package name contains something similar to 4.1.0.0.

Each software package is given a unique number.

SVC 4.1.0 includes an additional prerequisite that means the code being upgraded must be at
least at level 3.1.0.5.

You can check the recommended software levels on the Web at:
http://www.ibm.com/storage/support/2145

New software utility


This utility, which resides on the master console, checks the software levels in the system
against the recommended levels, which are documented on the support Web site. You are
informed if the software levels are up-to-date, or if you need to download and install newer
levels.

After the software file has been uploaded to the cluster (to the /home/admin/upgrade
directory), it can be selected and applied to the cluster by using the svcservicetask
applysoftware command. When a new code level is applied, it is automatically installed on
all the nodes within the cluster.

The underlying command line tool runs the script sw_preinstall, which checks the validity of
the upgrade file, and whether it can be applied over the current level. If the upgrade file is
unsuitable, the preinstall script deletes the files. This prevents the build up of invalid files on
the cluster.

Precaution before upgrade


Software installation is normally considered to be a customer task. The SVC supports
concurrent software upgrade; that is, the upgrade can be performed concurrently with user
I/O operations and some management activities. However, only the following CLI commands
are operational from the time the install command is started until the upgrade operation has
either terminated successfully or been backed out. All other commands fail with a message
indicating that a software upgrade is in progress:
svcinfo lsxxxx
svcinfo lsxxxxcandidate
svcinfo lsxxxxprogress
svcinfo lsxxxxmember
svcinfo lsxxxxextent
svcinfo lsxxxxdumps
svcinfo caterrlog
svcinfo lserrlogbyxxxx
svcinfo caterrlogbyseqnum
svctask rmnode

Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs
are working. Otherwise, the applications might have I/O failures during the software upgrade.
You can do that using the SDD command as shown in Figure 9-2.



Figure 9-2 SDD command example
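
As a sketch of that check, you can run a command similar to the following on each host
(see Example 9-45 on page 254 for sample output) and confirm that every path is in the
OPEN and NORMAL state:
C:\Program Files\IBM\Subsystem Device Driver>datapath query device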

Note: During a software upgrade there are periods where not all of the nodes in the cluster
are operational, and as a result the cache operates in write through mode. This will have
an impact upon throughput, latency, and bandwidth aspects of performance.

It is also worth double-checking that your UPS power configuration is set up correctly
(even if your cluster is running without problems). Specifically, make sure:
򐂰 That your UPSs are all getting their power from an external source, and that they are not
daisy chained. In other words, make sure that each UPS is not supplying power to another
node’s UPS.
򐂰 That the power cable and the serial cable coming from each node go back to the same
UPS. If the cables are crossed and go back to different UPSs, then during the upgrade,
as one node is shut down, another node might also be mistakenly shut down.

Important: Do not share the SVC UPS with any other devices.

You must also ensure that all I/O paths are working for each host that is running I/O
operations to the SAN during the software upgrade. You can check the I/O paths by using
datapath query commands. Refer to IBM TotalStorage ESS, SAN Volume Controller, SAN
Volume Controller for Cisco, SC26-7608, for more information about datapath query
commands. You do not need to check for hosts that have no active I/O operations to the SAN
during the software upgrade.

Procedure
To upgrade the SVC cluster software, perform the following steps:
1. Before starting the upgrade, you must back up the configuration (see “Backing up the SVC
cluster configuration” on page 267) and save the backup config file in a safe place.
2. Also, save a data collection for support diagnosis in case problems occur, as shown in
Example 9-47.



Example 9-47 svc_snap example
IBM_2145:itsosvc01:admin>svc_snap
WRN: Busy copying files, please wait
snap_data collected in /dumps/snap.008057.060622.022049.tgz

3. List the dump generated by the previous command as shown in Example 9-48.

Example 9-48 List dump example


IBM_2145:itsosvc01:admin>svcinfo ls2145dumps
id 2145_filename
0 svc.config.backup.bak_SVCNode2
1 008057.060613.211821.ups_log.tar.gz
2 008057.060613.212313.ups_log.tar.gz
3 008057.messages.gz
4 svc.config.cron.bak_SVCNode2
5 svc.config.cron.log_SVCNode2
6 svc.config.cron.xml_SVCNode2
7 svc.config.cron.sh_SVCNode2
8 008057.trc
9 svc.config.backup.tmp.xml_SVCNode2
10 svc.config.backup.xml_SVCNode2
11 svc.config.backup.now.xml
12 snap.008057.060622.022049.tgz
13 ups_log.a
14 ups_log.b

4. Save the generated dump in a safe place using the pscp command as in Example 9-49.

Example 9-49 Save dump example


C:\Program Files\PuTTY>pscp admin@SVC1:/dumps/snap_050830_1803.tgz c:\
Authenticating with public key "rsa-key-20030514"
snap_050830_1803.tgz | 254 kB | 254.6 kB/s | ETA: 00:00:00 | 100%

5. Upload the new software package using PuTTY Secure Copy. Enter the command as
shown in Example 9-50.

Example 9-50 SVC code upload command example


C:\Program Files\PuTTY>pscp d:\svc_code\IBM2145_INSTALL_4.1.0.0 admin@SVC1:/upgrade
Authenticating with public key "rsa-key-20030514"
IBM2145_INSTALL_4.1.0.0 | 80394 kB | 2063.7 kB/s | ETA: 00:00:12 | 92%

6. Check that the package was successfully delivered through the PuTTY command line
application by entering the svcinfo lssoftwaredumps command, as shown in
Example 9-51.

Example 9-51 svcinfo lssoftwaredumps command example


IBM_2145:itsosvc01:admin>svcinfo lssoftwaredumps
id software_filename
0 IBM2145_INSTALL_4.1.0.0

7. Now that the package is uploaded, use the hidden svcservicetask command set to apply
the software upgrade as shown in Example 9-52.



Example 9-52 Apply upgrade command example
IBM_2145:itsosvc01:admin>svcservicetask applysoftware -file IBM2145_INSTALL_4.1.0.0

8. The new code is distributed and applied to each node in the SVC cluster. After installation,
each node is automatically restarted in turn. If a node does not restart automatically during
the upgrade, it will have to be repaired manually.
9. Eventually, both nodes should display Cluster: on line 1 of the SVC front panel and the
name of your cluster on line 2. Be prepared for a long wait (in our case, we waited
approximately 40 minutes).

Note: During this process, both your CLI and GUI vary from sluggish (very slow) to
unresponsive. The important thing is that I/O to the hosts can continue.

10.To verify that the upgrade was successful, you can perform either of the following options:
– Run the svcinfo lscluster and svcinfo lsnodevpd commands as shown in
Example 9-53. We have truncated the lscluster and lsnodevpd information for the
purposes of this example.

Example 9-53 svcinfo lscluster and lsnodevpd commands


IBM_2145:itsosvc01:admin>svcinfo lscluster itsosvc01
id 000002006060311C
name itsosvc01
location local
partnership
bandwidth
cluster_IP_address 9.1.39.29
cluster_service_IP_address 9.1.39.30
code_level 4.1.0.0 (build 4.25.0606080000)
FC_port_speed 2Gb
console_IP 0.0.0.0:80
id_alias 000002006040311C
IBM_2145:itsosvc01:admin>

IBM_2145:itsosvc01:admin>svcinfo lsnodevpd 1
id 1

system board: 17 fields


part_number 64P7826
system_serial_number 75ABWGA
number_of_processors 2
number_of_memory_slots 4
number_of_fans 5
number_of_FC_cards 2
number_of_scsi/ide_devices 3
BIOS_manufacturer IBM
BIOS_version -[T2EH05AUS-1.06]-
BIOS_release_date 09/26/2003
system_manufacturer IBM
system_product eServer System x 335 -[21454F2]-
planar_manufacturer IBM
power_supply_part_number 49P2090
CMOS_battery_part_number 33F8354
power_cable_assembly_part_number 64P7940
service_processor_firmware T28T15A

software: 6 fields



code_level 4.1.0.0 (build 4.25.0606080000)
node_name SVCNode1
ethernet_status 1
WWNN 0x500507680100188e
id 1

IBM_2145:itsosvc01:admin>svcinfo lsnodevpd 3
id 3

system board: 17 fields


part_number 64P7826
system_serial_number 75abwda
number_of_processors 2
number_of_memory_slots 4
number_of_fans 5
number_of_FC_cards 2
number_of_scsi/ide_devices 3
BIOS_manufacturer IBM
BIOS_version -[T2EH05AUS-1.06]-
BIOS_release_date 09/26/2003
system_manufacturer IBM
system_product eServer System x 335 -[21454F2]-
planar_manufacturer IBM
power_supply_part_number 49P2090
CMOS_battery_part_number 33F8354
power_cable_assembly_part_number 64P7940
service_processor_firmware T28T15A

software: 6 fields
code_level 4.1.0.0 (build 4.25.0606080000)
node_name SVCNode2
ethernet_status 1
WWNN 0x5005076801001883
id 3

– Copy the error log to your management workstation as explained in 9.6.2, “Running
maintenance procedures” on page 260. Open it in WordPad and search for Software
Install completed.

You have now completed the tasks required to upgrade the SVC software.

9.6.2 Running maintenance procedures


Use the svctask finderr command to generate a list of any unfixed errors in the system. This
command analyzes the last generated log that resides in the /dumps/elogs/ directory on the
cluster.

If you want to generate a new log before analyzing for unfixed errors, run the svctask
dumperrlog command:
IBM_2145:itsosvc01:admin>svctask dumperrlog

This generates a file called errlog_timestamp, such as errlog_000667_050902_174042,
where:
򐂰 errlog is part of the default prefix for all error log files.
򐂰 000667 is the panel name of the current configuration node.
򐂰 050902 is the date (YYMMDD).
򐂰 174042 is the time (HHMMSS).



You can add the -prefix parameter to your command to change the default prefix of errlog
to something else, for example:
svctask dumperrlog -prefix svcerrlog

This command creates a file called svcerrlog_timestamp.

To see what the filename is, you must enter the following command:
IBM_2145:itsosvc01:admin>svcinfo lserrlogdumps
id filename
0 errlog_008057_050714_154230
1 errlog_008057_050715_111027
2 errlog_008057_050831_114246
3 errlog_008057_050831_114327

Note: A maximum of ten error log dump files per node will be kept on the cluster. When the
eleventh dump is made, the oldest existing dump file for that node will be overwritten. Note
that the directory might also hold log files retrieved from other nodes. These files are not
counted. The SVC will delete the oldest file (when necessary) for this node in order to
maintain the maximum number of files. The SVC will not delete files from other nodes
unless you issue the cleardumps command.

After you generate your error log, you can issue the svctask finderr command to scan it for
any unfixed errors, as shown here:
IBM_2145:itsosvc01:admin>svctask finderr
Highest priority unfixed error code is [1060]

As you can see, we have one unfixed error on our system. To analyze it, you need to
download the error log onto your own PC.

To know more about this unfixed error, you need to look at the error log in more detail. Use
the PuTTY Secure Copy process to copy the file from the cluster to your local management
workstation as shown in Example 9-54.

Example 9-54 pscp command: Copy error logs off SVC


In W2K3 → Start → Run →

C:\D-drive\Redbook-4H02\PuTTY\pscp admin@SVC1:/dumps/elogs/errlog_000683_041111_174042
c:\SVC_Dumps\errlog.txt

This opens a new connection to the SVC1.


Wait until you see 100% on the screen
Authenticating with public key "rsa-key-20030514"
errlog.txt | 367 kB | 367.1 kB/s | ETA: 00:00:00 | 100%

In order to use the Run option, you must know where your pscp.exe is located. In this case it is
in C:\D-drive\Redbook-4H02\PuTTY\

This command copies the file called errlog_000683_041111_174042 to the C:\SVC_Dumps
directory on our local workstation and names the file errlog.txt.



Open the file in WordPad (Notepad does not format the screen as well). You should see
information similar to Example 9-55. The list was truncated for the purposes of this example.

Example 9-55 errlog in WordPad


//-------------------
// Error Log Entries
// ------------------

Error Log Entry 0


Node Identifier : SVC1N1
Object Type : cluster
Object ID : 0
Sequence Number : 101
Root Sequence Number : 101
First Error Timestamp : Wed Nov 3 15:10:37 2004
: Epoch + 1099512637
Last Error Timestamp : Wed Nov 3 15:10:37 2004
: Epoch + 1099512637
Error Count : 1
Error ID : 981001 : Cluster Fabric View updated by fabric discovery
Error Code :
Status Flag :
Type Flag : INFORMATION

06 00 00 00 01 00 00 00 00 00 00 00 01 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 04 02 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Scrolling through, or searching for the term unfixed, you should find more detail about the
problem. There can be more entries in the error log that have a status of unfixed.

After you take the necessary steps to rectify the problem, you can mark the error as fixed in
the log by issuing the svctask cherrstate command against its sequence numbers:
IBM_2145:itsosvc01:admin>svctask cherrstate -sequencenumber 195
IBM_2145:itsosvc01:admin>svctask cherrstate -sequencenumber 197

If you accidentally mark the wrong error as fixed, you can mark it as unfixed again by entering
the same command and appending the -unfix flag to the end, for example:
svctask cherrstate -sequencenumber 195 -unfix

9.6.3 Setting up error notification


To set up error notification, use the svctask setevent command. The full syntax of the
setevent command is:
svctask setevent [-snmptrap all|no_state|none] [-snmpip ip_address] [-community community]

Note the following explanation:


򐂰 snmptrap: When to raise a trap
򐂰 snmpip: IP address of host running SNMP
򐂰 community: SNMP community



An example of the setevent command is shown here:
IBM_2145:itsosvc01:admin>svctask setevent -snmptrap all -snmpip 9.42.164.160 -community SVC

This command sends all events (errors and changes in state) to the SVC community on the
SNMP manager with the IP address 9.42.164.160.
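
If you later want to stop traps from being raised, you can set the trap option to none, as in
this illustrative sketch:
IBM_2145:itsosvc01:admin>svctask setevent -snmptrap none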

9.6.4 Analyzing the error log


The following types of events and errors are logged in the error log:
򐂰 Events: State changes that are detected by the cluster software and that are logged for
informational purposes. Events are recorded in the cluster error log.
򐂰 Errors: Hardware or software problems that are detected by the cluster software and that
require some repair. Errors are recorded in the cluster error log.
򐂰 Unfixed errors: Errors that were detected and recorded in the cluster error log and that
have not yet been corrected or repaired.
򐂰 Fixed errors: Errors that were detected and recorded in the cluster error log and that have
subsequently been corrected or repaired.

To display the error log, use the svcinfo lserrlog or svcinfo caterrlog commands as
shown in Example 9-56 (output is the same).

Example 9-56 svcinfo caterrlog command


IBM_2145:itsosvc01:admin>svcinfo caterrlog -delim :
id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_number:first_timestamp:last_timestamp:number_of_errors:error_code
0:cluster:no:no:6:SVC1N1:101:101:041103201037:041103201037:1:00981001
0:cluster:no:no:6:SVC1N1:102:102:041103201037:041103201037:1:00981001
0:cluster:no:no:6:SVC1N1:103:102:041103201037:041103201037:1:00981001
0:cluster:no:no:6:SVC1N1:104:102:041103201037:041103201037:1:00981001
0:cluster:no:no:6:SVC1N1:105:102:041103201052:041103201052:1:00981001
0:cluster:no:yes:6:SVC1N1:106:106:041103202640:041103202640:1:00981001
0:cluster:no:yes:6:SVC1N1:107:106:041103202640:041103202640:1:00981001
0:cluster:no:yes:6:n/a:108:108:041103203957:041103203957:1:00981001
2:node:no:yes:6:n/a:109:109:041103203957:041103203957:1:00987102
0:cluster:no:yes:6:n/a:110:108:041103203957:041103203957:1:00981001
1:node:no:yes:6:SVC1N1:9000002:9000002:041104010001:041104010001:1:00988100
0:cluster:no:yes:6:SVC1N1:111:111:041104175926:041104175926:1:00981001
0:cluster:no:yes:6:n/a:112:111:041104175935:041104175935:1:00981001
0:flash:no:yes:6:n/a:113:113:041104223227:041104223227:1:00983001
0:flash:no:yes:6:n/a:114:114:041104223728:041104223728:1:00983003
2:fc_const_grp:no:yes:6:n/a:115:115:041104225231:041104225231:1:00983001
0:cluster:no:yes:6:n/a:116:116:041104225532:041104225532:1:00981001
2:node:no:yes:6:SVC1N1:117:117:041104225532:041104225532:1:00980371
0:cluster:no:yes:6:SVC1N1:118:116:041104225532:041104225532:1:00981001
0:cluster:no:yes:6:SVC1N1:119:119:041104225917:041104225917:1:00981001
0:cluster:no:yes:6:n/a:120:120:041104225932:041104225932:1:00981001
0:cluster:no:yes:6:SVC1N1:121:120:041104225932:041104225932:1:00981001
0:cluster:no:yes:6:n/a:122:122:041104230103:041104230103:1:00981001
3:node:no:yes:6:SVC1N1:123:123:041104230103:041104230103:1:00980371
.........

This command views the error log that was last generated. Use the method described in
9.6.2, “Running maintenance procedures” on page 260, to upload and analyze the error log
in more detail.



To clear the error log, you can issue the svctask clearerrlog command as shown here:
IBM_2145:itsosvc01:admin>svctask clearerrlog
Do you really want to clear the log? y

Using the -force flag will stop any confirmation requests from appearing.
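
For example, to clear the log without the confirmation prompt, enter:
IBM_2145:itsosvc01:admin>svctask clearerrlog -force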

When executed, this command will clear all entries from the error log. This will proceed even
if there are unfixed errors in the log. It also clears any status events that are in the log.

This is a destructive command for the error log and should only be used when you have
either rebuilt the cluster or fixed a major problem that caused many entries in the error log
that you do not wish to fix manually.

9.6.5 Setting features


To change the licensing feature settings, use the svctask chlicense command. The full
syntax of the svctask chlicense command is:
svctask chlicense [-flash on|off] [-remote on|off] [-size capacity]

Note the following explanation:


򐂰 flash: Enable/disable FlashCopy
򐂰 remote: Enable/disable Metro Mirror (Peer-to-Peer Remote Copy (PPRC))
򐂰 size: Set licensed capacity (in GBs)

All three arguments are mutually exclusive.

Before you change the licensing, see what license you already have by issuing the svcinfo
lslicense command as shown in Example 9-57.

Example 9-57 svcinfo lslicense command


IBM_2145:itsosvc01:admin>svcinfo lslicense
feature_flash on
feature_remote off
feature_num_gb 2000

Consider, for example, that you have purchased an additional 4 TB of licensing and the PPRC
premium feature. The commands you need to enter are shown here:
IBM_2145:itsosvc01:admin>svctask chlicense -remote on

IBM_2145:itsosvc01:admin>svctask chlicense -size 6000

The first command turns the remote copy feature on. The second command changes the
licensed capacity to 6000 GB (6 TB), which is 4 TB more than before.

To verify that the changes you made are reflected in your SVC configuration, you can issue
the svcinfo lslicense command as before. See Example 9-58.

Example 9-58 svcinfo lslicense command: Verifying changes


IBM_2145:itsosvc01:admin>svcinfo lslicense
feature_flash on
feature_remote on
feature_num_gb 6000



9.6.6 Viewing the feature log
To view the feature log using the CLI, you must first create a feature log dump. Then copy the
feature log to your management workstation using PuTTY Secure Copy. Finally, open the file
in WordPad.

To create the feature log dump, enter the svctask dumpinternallog command as shown
here:
IBM_2145:itsosvc01:admin>svctask dumpinternallog

This creates a file called feature.txt in the /dumps/feature directory on the cluster. To see
whether creation was successful, you can enter the svcinfo lsfeaturedumps command, as
shown here:
IBM_2145:itsosvc01:admin>svcinfo lsfeaturedumps
id feature_filename
0 feature.txt

Note: Only one of these files exists. Therefore, each time you run the dumpinternallog
command, it overwrites any existing feature.txt file.

Now that you have created the file, you must copy it to your management workstation using
PuTTY Secure Copy as shown here:
C:\PuTTY>pscp admin@SVC1:/dumps/feature/feature.txt c:\svc_dumps\feature.txt
Authenticating with public key "rsa-key-20030514"
feature.txt | 18 kB | 18.5 kB/s | ETA: 00:00:00 | 100%

Now open the file in WordPad (Notepad does a poor job formatting) to view the output. It
should look similar to Example 9-59. The output list was truncated for purposes of this
example.

Example 9-59 Feature dump in WordPad


//---------------------
//---------------------
// Feature Log Entries
//---------------------
time type value0 value1 value2 value3 value4 value5
3ebfe92f 00000001 00000000 00000000 00000000 00000000 00000000 00000000
3ebfe92f 00000003 00000000 00000000 00000000 00000000 00000000 00000000
3ebfe92f 00000005 00000000 00000000 00000400 00000000 00000000 00000000
3ecb9ddf 00000005 00000000 00000400 000007d0 0000015c 00000000 00000000
3ed286b8 00000003 00000000 00000000 00000000 00000000 00000000 00000000
3ed286bd 00000005 00000000 000007d0 00001770 0000002e 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
........
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
30400364 00000100
aa226003 f9e4c5f1 829a130c 6d12be12



9.7 SVC cluster configuration backup and recovery
Important: The svctask dumpconfig command and the svcinfo lsconfigdumps
command are no longer available. You must use the svcconfig backup command instead.

The SVC configuration data is stored on all the nodes in the cluster. In normal circumstances,
the SVC should never lose its configuration settings. However, in exceptional circumstances,
such as a rogue fire sprinkler soaking the SVC cluster, or a multiple hardware failure, this data
might become corrupted or lost.

This section details the tasks that you can perform to save the configuration data from an
SVC configuration node and restore it. The following configuration information is backed up:
򐂰 SVC cluster
򐂰 Storage controllers
򐂰 Hosts
򐂰 I/O groups
򐂰 Software licenses
򐂰 Managed disks
򐂰 MDGs
򐂰 SVC nodes
򐂰 SSH keys
򐂰 Virtual disks
򐂰 VDisk-to-host mappings

Important: Before you begin the restore process, you must consult IBM Support to
determine the cause as to why you cannot access your original configuration data. After
the restore process starts, the original data on the VDisks is destroyed. Therefore, you
must ensure that you have a backup of all user data on the VDisks. IBM has a procedure
guided by L3 support to help you recover your data that is still on the back-end storage.

The svcconfig command line tool is a script, used under the CLI, to save and restore
configuration data. It uses secure communications to communicate with a configuration node.
The tool is designed to work if the hardware configuration for restoration is identical to that
during saving.

The prerequisites for having a successful backup are as follows:


򐂰 All nodes in the cluster must be online.
򐂰 No object name can begin with an underscore (_).
򐂰 Do not run any independent operations that could change the cluster configuration while
the backup command runs.
򐂰 Do not make any changes to the fabric or the cluster between backup and restore. If
changes are made, back up your configuration again or you might not be able to restore it
later.

Note: We recommend that you make a backup of the SVC configuration data after each
major change in the environment, such as defining or changing VDisks, VDisk-to-host
mappings, and so on. In addition, you can make a backup after each change. Be aware that
only two versions of the backup file are maintained for each cluster (the previous one has
.bak appended), unless you copy the XML or XML BAK files to another folder.



9.7.1 Backing up the SVC cluster configuration
You can back up your cluster configuration by using the Backing Up a Cluster Configuration
panel or the CLI svcconfig command. This section describes the overall procedure for
backing up your cluster configuration and the conditions that must be satisfied to perform a
successful backup.

Important: We recommend that you make a backup of the SVC configuration data after
each major change in the environment, such as defining or changing VDisks,
VDisk-to-host mappings, and so on.

The backup command extracts configuration data from the cluster and saves it to
svc.config.backup.xml in /tmp. A file svc.config.backup.sh is also produced. You can study this
file to see what other commands were issued to extract information.

A log svc.config.backup.log is also produced. You can study this log for details in regard to
what was done and when. This log also includes information about the other commands
issued.

Any pre-existing svc.config.backup.xml file is archived as svc.config.backup.bak. Only one
such archive is kept. We recommend that you immediately move the XML file and related
KEY files (see the limitations below) off the cluster for archiving. Then erase the files from
/tmp using the svcconfig clear -all command. We also recommend that you change all
objects having default names to non-default names. Otherwise, a warning is produced for
objects with default names, and such an object is restored with its original name with “_r”
appended. The prefix _ (underscore) is reserved for backup and restore command usage
and should not be used in any object names.

Important: The tool backs up logical configuration data only, not client data. It does not
replace a traditional data backup and restore tool, but supplements such a tool with a way
to back up and restore the client's configuration. To provide a complete backup and
disaster recovery solution, you must back up both user (non-configuration) data and
configuration (non-user) data. After restoration of the SVC configuration, you are expected
to fully restore user (non-configuration) data to the cluster's disks.

Prerequisites
You must have the following prerequisites in place:
򐂰 All nodes must be online.
򐂰 No object name can begin with an underscore.
򐂰 All objects should have non-default names, that is, names that are not assigned by the
SAN Volume Controller.

Although we recommend that objects have non-default names at the time the backup is taken,
this is not mandatory. Objects with default names are renamed when they are restored.



Example 9-60 shows an example of the svcconfig backup command.

Example 9-60 svcconfig backup command


IBM_2145:itsosvc01:admin>svcconfig backup
............
CMMVC6112W io_grp io_grp1 has a default name
.
CMMVC6112W io_grp io_grp2 has a default name
.
CMMVC6112W io_grp io_grp3 has a default name
.
CMMVC6112W io_grp recovery_io_grp has a default name
.................
CMMVC6136W No SSH key file svc.config.barry.admin.key
CMMVC6136W No SSH key file svc.config.service.service.key
...................
IBM_2145:itsosvc01:admin>svcconfig clear -all
IBM_2145:itsosvc01:admin>

Example 9-61 shows the pscp command.

Example 9-61 pscp command


C:\Support Utils\Putty>pscp admin@SVC1:/tmp/svc.config.backup.xml c:\clibackup.xml
Authenticating with public key "rsa-key-20031031"
clibackup.xml | 22 kB | 22.2 kB/s | ETA: 00:00:00 | 100%

C:\Support Utils\Putty>

Context
The following scenario illustrates the value of configuration backup:
1. Use the svcconfig command to create a backup file on the cluster that contains details
about the current cluster configuration.
2. Store the backup configuration on some form of tertiary storage. You must copy the
backup file from the cluster or it becomes lost if the cluster crashes.
3. If a severe enough failure occurs, the cluster might be lost. Both configuration data (for
example, the cluster definitions of hosts, I/O groups, MDGs, MDisks) and the application
data on the virtualized disks are lost. In this scenario, it is assumed that the application
data can be restored from normal client backup procedures. However, before you can
carry this out, you must reinstate the cluster, as configured at the time of the failure. This
means you restore the same MDGs, I/O groups, host definitions, and the VDisks that
existed prior to the failure. Then you can copy the application data back onto these VDisks
and resume operations.
4. Recover the hardware. This includes hosts, SVCs, disk controller systems, disks, and SAN
fabric. The hardware and SAN fabric must physically be the same as those used before
the failure.
5. Re-initialize the cluster with just the configuration node; the other nodes are recovered
when the configuration is restored.
6. Restore your cluster configuration using the backup configuration file generated prior to
the failure.
7. Restore the data on your virtual disks (VDisks) using your preferred restore solution or
with help from IBM Service.
8. Resume normal operations.



9.7.2 Restoring the SVC cluster configuration
In this section we discuss restoration of the SVC cluster configuration.

Important: Always consult IBM Support to restore the SVC cluster configuration from
backup to determine the cause of the loss of your cluster configuration. After the svcconfig
restore -execute command is started, any prior user data on the VDisks should be
considered destroyed and must be recovered from your usual application data backup
process.

See also IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line
Interface User's Guide, SC26-7544.

For a detailed description of the SVC configuration backup and restore functions, see IBM
TotalStorage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543.

9.7.3 Deleting configuration backup


This section details the tasks that you can perform to delete the configuration backup files
from the default folder on the SVC master console. You can do this if you have already
copied them to an external and secure place.

Delete the SVC configuration backup files using the svcconfig clear -all command.

9.8 Listing dumps


Several commands are available for you to list the dumps that were generated over a period
of time. You can use the lsxxxxdumps commands, where xxxx is the object type, to return a
list of dumps in the appropriate directory:
svcinfo lserrlogdumps [node_id | node_name]
svcinfo lsfeaturedumps [node_id | node_name]
svcinfo lsiostatsdumps [node_id | node_name]
svcinfo lsiotracedumps [node_id | node_name]
svcinfo lssoftwaredumps [node_id | node_name]
svcinfo ls2145dumps [node_id | node_name]

If no node is specified, the dumps available on the configuration node are listed.

When executed, these commands return a list of the dump files found in the relevant
directory of the specified node. The ls2145dumps command displays a list of dumps relating
to node assets, including actual node dumps, which are typically named dump.nnn.xxx.yyy,
and trace files, which are typically named nnn.trc. Here nnn, xxx, and yyy are variable length
sequences of characters, and xxx and yyy relate to the date and time of the dump.
򐂰 lserrlogdumps displays error log files from the /dumps/elogs directory.
򐂰 lsfeaturedumps displays feature log files from the /dumps/feature directory.
򐂰 lsiotracedumps displays I/O trace files from the /dumps/iotrace directory.
򐂰 lsiostatsdumps displays I/O statistics files from the /dumps/iostats directory.
򐂰 lssoftwaredumps displays software upgrade files from the /home/admin/upgrade directory.
򐂰 ls2145dumps displays a list of node_assert dumps from the /dumps directory.

Software upgrade packages are contained in the /home/admin/upgrade directory. These
directories exist on every node in the cluster.



Configuration dump
A configuration dump is created by using the svcconfig backup command. This saves the
current configuration of the cluster to svc.config.backup.xml. When a new .xml file is
written, any existing file is renamed to svc.config.backup.bak.

Error or event dump


Dumps contained in the /dumps/elogs directory are dumps of the contents of the error and
event log at the time that the dump was taken. An error or event log dump is created by using
the svctask dumperrlog command. This dumps the contents of the error or event log to the
/dumps/elogs directory. If no filename prefix is supplied, the default errlog_ is used. The full,
default file name is errlog_NNNNNN_YYMMDD_HHMMSS. Here NNNNNN is the node front
panel name. If the command is used with the -prefix option, then the value entered for the
-prefix is used instead of errlog.
The command to list all dumps in the /dumps/elogs directory is svcinfo lserrlogdumps.

Featurization log dump


Dumps contained in the /dumps/feature directory are dumps of the featurization log. A
featurization log dump is created by using the svctask dumpinternallog command. This
dumps the contents of the featurization log to the /dumps/feature directory to a file called
feature.txt. Only one of these files exists, so every time the svctask dumpinternallog
command is run, this file is overwritten.
The command to list all dumps in the /dumps/feature directory is svcinfo lsfeaturedumps.

I/O statistics dump


Dumps contained in the /dumps/iostats directory are dumps of the I/O statistics for disks on
the cluster. An I/O statistics dump is created by using the svctask startstats command. As
part of this command, you can specify a time interval at which you want the statistics to be
written to the file (the default is 15 minutes). Every time the time interval is encountered, the
I/O statistics that are collected up to this point are written to a file in the /dumps/iostats
directory. The file names used for storing I/O statistics dumps are
m_stats_NNNNNN_YYMMDD_HHMMSS, or v_stats_NNNNNN_YYMMDD_HHMMSS,
depending on whether the statistics are for MDisks or VDisks. Here NNNNNN is the node
front panel name.
The command to list all dumps in the /dumps/iostats directory is svcinfo lsiostatsdumps.

I/O trace dump


Dumps contained in the /dumps/iotrace directory are dumps of I/O trace data. The type of
data that is traced depends on the options specified by the svctask settrace command.
The collection of the I/O trace data is started by using the svctask starttrace command.
The I/O trace data collection is stopped when the svctask stoptrace command is used.
When the trace is stopped, the data is written to the file. The file name is
prefix_NNNNNN_YYMMDD_HHMMSS. Here NNNNNN is the node front panel name, and
prefix is the value entered by the user for the -filename parameter in the svctask settrace
command.
The command to list all dumps in the /dumps/iotrace directory is svcinfo lsiotracedumps.

Application abends dump


Dumps contained in the /dumps directory are dumps resulting from application abends. Such
dumps are written to the /dumps directory. The default file names are
dump.NNNNNN.YYMMDD.HHMMSS. Here NNNNNN is the node front panel name. In
addition to the dump file, it is possible that there might be some trace files written to this
directory. These are named NNNNNN.trc.
The command to list all dumps in the /dumps directory is svcinfo ls2145dumps.



Software dump
The final option available in the svcinfo lsxxxxdumps command series is the svcinfo
lssoftwaredumps command. This command lists the contents of the /home/admin/upgrade
directory. Any files in this directory were copied there in preparation for a software upgrade.
Example 9-62 shows these commands.

Example 9-62 Listing dumps


IBM_2145:itsosvc01:admin>svcinfo lsconfigdumps
id config_filename
0 SVC1_000683_041111_173553
1 SVC1_000683_041111_173949

IBM_2145:itsosvc01:admin>svcinfo lserrlogdumps
id filename
0 errlog_000683_041111_120658
1 errlog_000683_041111_170137
2 errlog_000683_041111_171618
3 errlog_000683_041111_171725

IBM_2145:itsosvc01:admin>svcinfo lsfeaturedumps
id feature_filename
0 feature.txt

IBM_2145:itsosvc01:admin>svcinfo lsiostatsdumps
id iostat_filename
0 m_stats_000683_041111_145411
1 v_stats_000683_041111_145412
2 m_stats_000683_041111_150911
3 v_stats_000683_041111_150912
4 v_stats_000683_041111_152412
5 m_stats_000683_041111_152412

IBM_2145:itsosvc01:admin>svcinfo ls2145dumps
id 2145_filename
0 svc.config.cron.bak_node2
1 svc.config.cron.xml_node2
2 svc.config.cron.log_node2
3 svc.config.cron.sh_node2
4 svc.config.cron.bak_node3
5 svc.config.cron.log_node3
6 svc.config.cron.xml_node3
7 svc.config.cron.sh_node3
8 dump.000683.041012.201906
9 dump.000683.041028.200908
10 000683.trc.old
11 000683.messages.gz
12 000683.trc
13 ups_log.a
14 ups_log.b

IBM_2145:itsosvc01:admin>svcinfo lsiotracedumps
id iotrace_filename
0 vdisktrace_000683_041111_162005

IBM_2145:itsosvc01:admin>svcinfo lssoftwaredumps
id software_filename
0 040929_full.tgz.gpg



Other node dumps
All of the svcinfo lsxxxxdumps commands can accept a node identifier as input (for example,
append the node name to the end of any of the above commands). If this identifier is not
specified, then the list of files on the current configuration node (in our case ITSO_node2) is
displayed. If the node identifier is specified, then the list of files on that node is displayed.
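For example (node2 here stands for any node name or ID in your cluster):

IBM_2145:itsosvc01:admin>svcinfo ls2145dumps node2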

However, files can only be copied from the current configuration node (using PuTTY Secure
Copy). Therefore, you must issue the svctask cpdumps command to copy the files from a
non-configuration node to the current configuration node. Subsequently, you can copy them to
the management workstation using PuTTY Secure Copy.

For example, you discover a dump file and want to copy it to your management workstation
for further analysis. In this case, you must first copy the file to your current configuration node.

To copy dumps from other nodes to the configuration node, use the svctask cpdumps
command. The full syntax of the svctask cpdumps command is:
svctask cpdumps -prefix prefix name|id

Note the following explanation:


򐂰 prefix: Directory and or files to retrieve
򐂰 name|id: Name or ID of node from which the dumps must be retrieved

The prefix you enter depends on which dumps you want to retrieve from the remote node. The
valid -prefix directories are:
򐂰 /dumps
򐂰 /dumps/iostats
򐂰 /dumps/iotrace
򐂰 /dumps/feature
򐂰 /dumps/config
򐂰 /dumps/elog
򐂰 /home/admin

In addition to the directory, a file filter or wildcard can be specified. For example,
/dumps/elog/*.txt retrieves all files in the /dumps/elog directory that end in .txt.

When you use the wildcard *, the expression must be quoted or escaped in one of the following ways:


򐂰 svctask cpdumps -prefix '/dumps/*.txt' (single-quotes)
򐂰 svctask cpdumps -prefix /dumps/\*.txt (backslash)
򐂰 svctask cpdumps -prefix "/dumps/*.txt" (double-quotes)

If the node specified is the current configuration node, no file will be copied.

An example of the command is shown here:


IBM_2145:itsosvc01:admin>svctask cpdumps -prefix /dumps/configs SVC1N1

Now that you have copied the configuration dump file from SVC1N1 to your configuration
node, you can use PuTTY Secure Copy to copy the file to your management workstation for
further analysis, as described earlier.
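For example, from a command prompt on the management workstation (a sketch; the key
file, cluster IP address, and dump file name are placeholders for your own values):

pscp -i icat.ppk admin@9.42.164.155:/dumps/configs/SVC1_000683_041111_173553 c:\temp\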

To clear the dumps, you can run the svctask cleardumps command. Again, you can append
the node name if you want to clear dumps off a node other than the current configuration
node (the default for the svctask cleardumps command). The full syntax of the command is:
svctask cleardumps -prefix prefix [name|id]



Note the following explanation:
򐂰 prefix: Directory or file filter
򐂰 name|id: Name or ID of the node. If not specified, the configuration node is cleaned.

Here, -prefix must be one of these directories:


򐂰 /dumps
򐂰 /dumps/iostats
򐂰 /dumps/iotrace
򐂰 /dumps/feature
򐂰 /dumps/config
򐂰 /dumps/elog
򐂰 /home/admin

In addition to the directory, a file filter or wildcard can be specified. For example,
/dumps/elog/*.txt clears all files in the /dumps/elog directory that end in .txt.

When you use the wildcard *, the expression must be quoted or escaped in one of the following ways:


򐂰 svctask cleardumps -prefix '/dumps/*.txt' (single-quotes)
򐂰 svctask cleardumps -prefix /dumps/\*.txt (backslash)
򐂰 svctask cleardumps -prefix "/dumps/*.txt" (double-quotes)

The commands in Example 9-63 clear all logs or dumps from the SVCN1 node.

Example 9-63 svctask cleardumps command


IBM_2145:itsosvc01:admin>svctask cleardumps -prefix /dumps SVCN1
IBM_2145:itsosvc01:admin>svctask cleardumps -prefix /dumps/iostats SVCN1
IBM_2145:itsosvc01:admin>svctask cleardumps -prefix /dumps/iotrace SVCN1
IBM_2145:itsosvc01:admin>svctask cleardumps -prefix /dumps/feature SVCN1
IBM_2145:itsosvc01:admin>svctask cleardumps -prefix /dumps/config SVCN1
IBM_2145:itsosvc01:admin>svctask cleardumps -prefix /dumps/elog SVCN1
IBM_2145:itsosvc01:admin>svctask cleardumps -prefix /home/admin SVCN1

9.9 T3 recovery process


A procedure called “T3 recovery” has been tested and used in select cases where the cluster
has been completely destroyed. (An example would be simultaneously pulling power cords
from all nodes to their UPSs. In this case all nodes would boot up to node error 578 when
power was restored.)

This procedure can, in certain circumstances, recover most user data. However, it is not to be
used by the customer or IBM CE without direct involvement from IBM Level 3 support. It is not
published; we refer to it here only to indicate that the loss of a cluster can be recoverable
without the total data loss that would require restoring application data from backup. It is a
very sensitive procedure, is only to be used as a last resort, and cannot recover any data that
had not been destaged from cache at the time of the total cluster failure.



9.10 Scripting and its usage under CLI for SVC task automation
Scripting is well suited to automating regular operational jobs, and you can develop scripts in
any available shell. To run scripts on an SVC console whose operating system is Windows
2000, you can either purchase licensed shell emulation software or download Cygwin from:
http://www.cygwin.com

Scripting enhances the productivity of SVC administrators and the integration of the SVC into
their storage virtualization environment.

We show an example of scripting in Appendix C, “Scripting” on page 703.

You can create your own customized scripts to automate a large number of tasks for
completion at a variety of times and run them through the CLI.
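As a flavor of what is possible, the following minimal sketch runs a CLI query over SSH from
a bash shell (for example, under Cygwin). The cluster IP address and private key path are
placeholders that you must substitute with your own values:

#!/bin/bash
# Placeholder values: substitute your cluster IP address and SSH private key.
SVC="ssh -i $HOME/.ssh/svc_key admin@9.42.164.155"

# List every VDisk in colon-delimited format without headings,
# and print the ID and name of each one.
$SVC svcinfo lsvdisk -nohdr -delim : | while IFS=: read -r id name rest; do
    echo "VDisk $id: $name"
done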



Chapter 10. SVC configuration and administration using the GUI
In this chapter we describe how to use the IBM System Storage SAN Volume Controller
graphical user interface (GUI). This allows you to perform additional and advanced
configuration and administration tasks, which are not covered in Chapter 7, “Quickstart
configuration using the GUI” on page 139.

10.1 Managing the cluster using the GUI
This section explains the various configuration and administration tasks that you can perform
on the cluster. All screen captures are taken using the GUI version 4.1.0.566, which is
available for SVC V4.1.

Installing certificates: You might have already accepted certificates, as suggested in
7.1.1, “Installing certificates” on page 144. If you did not, you might notice many instances
where you are prompted with security warnings regarding unrecognized certificates.
Return to 7.1.1, “Installing certificates” on page 144, and complete those steps to avoid
getting these messages. Lack of correct certificates might cause the browser to exit.

10.1.1 Organizing on-screen content


In the following sections, there are several panels within the SVC GUI that let you perform
filtering (to minimize the amount of data shown on the screen) and sorting (to organize the
content on the screen). Because we have not covered these functions elsewhere, this section
provides a brief overview of them.

To show how the filtering features work in the GUI, go to the SVC Welcome page, click the
Work with Virtual Disks option, and click the Virtual Disks link.

Filtering
There are two types of filtering: pre-filtering and table filtering.

Pre-filtering
When you select an item, such as the Virtual Disks link, you are first prompted with a filtering
panel (Figure 10-1). On this panel, you can provide the criteria from which the SVC generates
the next panel.

Figure 10-1 Filtering

Note: The asterisk (*) acts as a wildcard.



After you have entered the filter criteria (in our case, *LNX*), click the OK button. Then the
Viewing Virtual Disks panel (Figure 10-2) is displayed. It shows only those virtual disks
(VDisks) that meet your filtering criteria.

Figure 10-2 Viewing Virtual Disks: Filtered view

You cannot clear this filter by selecting Clear All Filters from the list and clicking Go, nor by
clicking the small icon highlighted by the mouse pointer in Figure 10-2; those controls are
used by the additional filtering method. To clear the pre-filtering, you must close the Viewing
VDisks panel and re-open it by clicking the Bypass Filter button.

Table filtering
When you are in the Viewing Virtual Disks list, you can use the additional filter option to filter
this list further, which is useful if the list of entries is still too large to work with. You can
change this filtering as many times as you like to get further reduced lists or different views,
because it only works on the result of the pre-filter that you used above.

Use the Additional Filter icon as shown in Figure 10-3, or use the Show Filter Row option in
the pull-down menu and click Go.

Figure 10-3 Additional filter Icon

This will enable you to filter based on the column names, shown in Figure 10-4. The Filter
under each column name shows that no filter is in effect for that column.

Figure 10-4 Show filter row



If you want to filter on a column, click the word Filter, which opens the filter dialog shown
in Figure 10-5. Our example filters on the Name field, to show only entries that contain copy.

Figure 10-5 Filter option on Name

A list of VDisks is displayed that only contains VDisks with copy somewhere in the name, as
shown in Figure 10-6. (Notice the filter line under each column heading showing our filter in
place.) If you want, you can do additional filtering on the other columns to narrow your view
further.

Figure 10-6 Filtered on Name containing the word copy

The option to reset the filters is shown in Figure 10-7. Use the Clear All Filters icon, or use the
Clear all filters option in the pull-down menu and click Go.

Figure 10-7 Clear all filter options

Sorting
Regardless of whether you use the pre-filtering or table filtering options, when you are on the
Viewing Virtual Disks panel, you can sort the displayed data by selecting Edit Sort from the
list and clicking Go.

Or you can click the small icon highlighted by the mouse pointer in Figure 10-8.

Figure 10-8 Selecting Edit Sort

As shown in Figure 10-9, you can sort based on up to three criteria, including: name, I/O
group name, status, MDisk group name, capacity (MB), type, FC pair name, and MM name.

Note: The actual sort criteria differs based on the information that you are sorting.



Figure 10-9 Sorting criteria

When you finish making your choices, click OK to regenerate the display based on your
sorting criteria. Look at Figure 10-10 at the icons next to each column name to see the sort
criteria currently in use.

If you want to clear the sort, simply select Clear All Sorts from the list and click Go. Or click
the icon highlighted by the mouse pointer in Figure 10-10.

Figure 10-10 Selecting to clear all sorts

Documentation
If you need access to the online documentation, click the i icon in the upper right corner of
the panel. This opens the Help Assistant panel on the left side of the panel, as shown in
Figure 10-11.

Figure 10-11 Online help using the i icon

Help
If you need to access online help, click the ? icon in the upper right corner of the panel.
This opens a new window called Information center, where you can search for any item you
want help with (see Figure 10-12).

Figure 10-12 Online help using the ? icon



General housekeeping
If at any time the content in the right side of the frame is “cut off”, you can collapse the My
Work column by clicking the small left-pointing arrow at the top of the My Work column. When
collapsed, the arrow changes from pointing to the left to pointing to the right. Clicking the
small arrow that points right expands the My Work column back to its original size.

In addition, each time you open a configuration or administration window using the GUI in the
following sections, it creates a link for that panel along the top of your Web browser beneath
the main banner graphic. As a general housekeeping task, we recommend that you close
each window when you finish with it by clicking the close icon to the right of the panel name.
Be careful not to close the entire browser.

10.1.2 Viewing cluster properties


Perform the following steps to display the cluster properties:
1. From the SVC Welcome page, select the Manage Cluster option and then the View
Cluster Properties link.
2. The Viewing General Properties panel (Figure 10-13) opens. Click the IP Addresses,
Space, SNMP, Statistics or Metro Mirror links and you see additional information that
pertains to your cluster.

Figure 10-13 View Cluster Properties: General properties

10.1.3 Maintaining passwords


Perform the following steps to maintain passwords:
1. From the SVC Welcome page, select the Manage Cluster option and then the Maintain
Passwords link.
2. Before you can access the Maintain Passwords panel, enter the existing SVC
administration user ID and password when prompted. Click OK.
3. The Maintain Passwords panel (Figure 10-14) opens. Enter the new passwords for the
administrator account, the service account, or both. Click Modify Password.

Note: Passwords are a maximum of 15 alphanumeric case-sensitive characters. Valid


characters are uppercase letters [A through Z], lowercase letters [a through z], digits [0
through 9], dash [ - ], and underscore [ _ ]. The first character cannot be a dash [ - ].

4. Before the next panel is displayed, enter the new user ID and password combination when
prompted (Figure 10-14).

Figure 10-14 Maintain Passwords panel

When complete, you see the successful update messages as shown in Figure 10-15.

Figure 10-15 Modifying passwords successful update messages

You have now completed the tasks required to change the admin and service passwords for
your SVC cluster.

10.1.4 Modifying IP addresses


In this section we discuss the modification of IP addresses.

Important: If you specify a new cluster IP address, the existing communication with the
cluster through the GUI is broken. You need to relaunch the SAN Volume Controller
Application from the GUI Welcome page.

Modifying the IP address of the cluster, although quite simple, requires some
reconfiguration of other items within the SVC environment. This includes reconfiguring
the central administration GUI by re-adding the cluster with its new IP address.



Perform the following steps to modify the cluster and service IP addresses of our SVC
configuration:
1. From the SVC Welcome page, select the Manage Cluster option and the Modify IP
Addresses link.
2. The Modify IP Addresses panel (Figure 10-16) opens. Make any necessary changes.
Then click Modify Settings.

Figure 10-16 Modify IP Addresses

3. You advance to the next panel which shows a message indicating that the IP addresses
were updated.

You have now completed the tasks required to change the IP addresses (cluster, service,
gateway and master console) for your SVC environment.

10.1.5 Setting the cluster time zone and time


Perform the following steps to set the cluster time zone and time:
1. From the SVC Welcome page, select the Manage Cluster option and the Set Cluster
Time link.
2. The Cluster Date and Time Settings panel (Figure 10-17) opens. At the top of the panel,
you see the current settings. If necessary, make adjustments and ensure that the Update
cluster date and time and Update cluster time zone check boxes are selected. Click
Update.

Note: You might be prompted for the cluster user ID and password. If you are, type
admin and the password you set earlier.

Figure 10-17 Cluster Date and Time Settings panel

3. You return to the Cluster Date and Time Settings panel (Figure 10-18), which shows the
new settings.

Figure 10-18 Cluster Date and Time Settings update confirmation

You have now completed the tasks necessary to set the cluster time zone and time.



10.1.6 Starting the statistics collection
Perform the following steps to start statistics collection on your cluster:
1. From the SVC Welcome page, select the Manage Cluster option and the Start Statistics
Collection link.
2. The Starting the Collection of Statistics panel (Figure 10-19) opens. Make an interval
change, if desired. The interval you specify (minimum 1, maximum 60) is in minutes. Click
OK.

Figure 10-19 Starting collection of statistics

3. Although it does not state the current status and it is not obvious, clicking OK turns on the
statistics collection. To verify, click the Cluster Properties link as you did in 10.1.2,
“Viewing cluster properties” on page 283. Then, click the Statistics link. You see the
interval as specified in Step 2 and the status of On as shown in Figure 10-20.

Figure 10-20 Verifying that statistics collection is on

You have now completed the tasks required to start statistics collection on your cluster.

10.1.7 Stopping the statistics collection


Perform the following steps to stop statistics collection on your cluster:
1. From the SVC Welcome page, select the Manage Cluster option and the Stop Statistics
Collection link.
2. The panel Stopping the Collection of Statistics (Figure 10-21) opens, and you see a
message asking whether you are sure that you want to stop the statistics collection. Click
Yes to stop the ongoing task.

Figure 10-21 Stopping the collection of statistics

3. The window closes. To verify that the collection has stopped, click the Cluster Properties
link as you did in 10.1.2, “Viewing cluster properties” on page 283. Then, click the
Statistics link. Now you see the status has changed to Off as shown in Figure 10-22.

Figure 10-22 Verifying that statistics collection is off

You have now completed the tasks required to stop statistics collection on your cluster.

10.1.8 Shutting down a cluster


If all input power to a SAN Volume Controller cluster is to be removed for more than a few
minutes (for example, if the machine room power is to be shut down for maintenance), it is
important that you shut down the cluster before you remove the power. Shutting down the
cluster while still connected to the mains power will ensure that the UPS batteries are still fully
charged (when power is restored).

If you remove the mains power while the cluster is still running, the UPS will detect the loss of
power and instruct the nodes to shut down. This can take several minutes to complete and
while the UPS will have sufficient power to do this, you will be unnecessarily draining the UPS
batteries.

When power is restored, the SVC nodes start. However, one of the first checks they make is
to ensure that the UPS batteries have sufficient charge to survive another power failure,
enabling the node to perform a clean shutdown. (We do not want the UPS to run out of power
while the node’s shutdown activities have not yet completed.) If the UPS batteries are not
sufficiently charged, the node will not start.

It can take up to three hours to charge the batteries sufficiently for a node to start.

Note: When a node shuts down due to loss of power, it will dump the cache to an internal
hard drive, so the cache data can be retrieved when the cluster starts. With SVC 4.1, the
cache is 8GB, and as such, it can take several minutes to dump to the internal drive.



SVC UPSs are designed to survive at least two power failures in a short time; after that, the
nodes refuse to start until the batteries have sufficient charge to survive another immediate
power failure. If, during your maintenance activities, the UPS detected power and power loss
more than once (and thus the nodes started and shut down more than once in a short
time frame), you might find that you have unknowingly drained the UPS batteries, and have to
wait until they are charged sufficiently before the nodes will start.

Perform the following steps to shut down your cluster:

Important: Before shutting down a cluster, you should quiesce all I/O operations that are
destined for this cluster because you will lose access to all VDisks being provided by this
cluster. Failure to do so might result in failed I/O operations being reported to your host
operating systems.

There is no need to do this if you will only shut down one SVC node.

Begin the process of quiescing all I/O to the cluster by stopping the applications on your
hosts that are using the VDisks provided by the cluster. If you are unsure which hosts are
using the VDisks provided by the cluster, follow the procedure called “Showing the Host to
which the VDisk is mapped” on page 353, and repeat it for all VDisks.

1. From the SVC Welcome page, select the Manage Cluster option and the Shut Down
Cluster link.
2. The Shutting down cluster panel (Figure 10-23) opens. You see a message asking you to
confirm whether you want to shut down the cluster. Ensure that you have stopped all
FlashCopy mappings, Remote Copy relationships, data migration operations, and forced
deletions before continuing. Click Yes to begin the shutdown process.

Note: At this point, you lose administrative contact with your cluster.

Figure 10-23 Shutting down the cluster

You have now completed the tasks required to shut down the cluster. Now you can shut down
the uninterruptible power supplies by pressing the power button on their front panels.

Tip: When you shut down the cluster, it will not automatically start, and will have to be
manually started.

If the cluster shuts down because the UPS has detected a loss of power, it will
automatically restart when the UPS has detected the power has been restored (and the
batteries have sufficient power to survive another immediate power failure).

Note: To restart the SVC cluster, you must first restart the uninterruptible power supply
units by pressing the power button on their front panels. After they are on, go to the service
panel of one of the nodes within your SVC cluster and press the power on button,
releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and
the cluster name on line 2 of the SVC front panel), you can start the other nodes in the
same way.

As soon as all nodes are fully booted and you have re-established administrative contact
using the GUI, your cluster is fully operational again.
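For reference, a sketch of the CLI equivalent follows; the svctask stopcluster command shuts
down the whole cluster, and with the -node parameter it shuts down a single node instead
(the node name shown is a placeholder):

IBM_2145:itsosvc01:admin>svctask stopcluster
IBM_2145:itsosvc01:admin>svctask stopcluster -node SVCNode_1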

10.2 Working with nodes using the GUI


This section discusses the various configuration and administration tasks that you can
perform on the nodes within an SVC cluster.

10.2.1 I/O groups


This section details the tasks that can be performed at an I/O group level.

Renaming an I/O group


Perform the following steps to rename an I/O group:
1. From the SVC Welcome page, select the Work with Nodes option and the I/O Groups
link.
2. The Viewing Input/Output Groups panel (Figure 10-24) opens. Select the radio button to
the left of the I/O group you want to rename. In this case, we select io_grp1. Ensure that
Rename an I/O Group is selected from the drop-down list. Click Go.

Figure 10-24 Viewing Input/Output Groups



3. On the Renaming I/O Group panel (I/O Group Name is the I/O group you selected in the
previous step), type the New Name you want to assign to the I/O group. Click OK as
shown in Figure 10-25. Our new name is IO_grp_SVC01.

Figure 10-25 Renaming the I/O group

Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash, and
the underscore. It can be between one and 15 characters in length, but cannot start with a
number, the dash, or the word iogrp because this prefix is reserved for SVC assignment
only.

You have now completed the tasks required to rename an I/O group.

10.2.2 Nodes
This section discusses the tasks that you can perform at a node level. You perform each task
from the Viewing Nodes panel (Figure 10-26). To access this panel, from the SVC Welcome
page, select the Work with Nodes options and the Nodes link.

Figure 10-26 Viewing Nodes

The drop-down shows the options available at a node level. We will work in this example with
SVCNode_1.

Viewing the node details
Perform the following steps to view information about a node within the SVC cluster:
1. From the Viewing Nodes panel (Figure 10-26 on page 291), click the highlighted name of
the node (SVCNode_1).
2. The Viewing General Details nodename panel (where nodename is the node you chose
(SVCNode_1)) opens as shown in Figure 10-27. Click the Ports and Vital Product Data
links to view additional information about your selected node.

Figure 10-27 General node details

Adding a node
Perform the following steps to add a node to the SVC cluster:
1. From the Viewing Nodes panel (Figure 10-26 on page 291), select Add a Node and click
Go.
2. On the Adding a Node to a Cluster panel (Figure 10-28), select a node from the list of
available nodes. Select the I/O group to which you want to assign the new node. Enter a
suitable name for the new node. Click OK.

Note: If you do not provide the name, the SVC automatically generates the name
nodeX (where X is the ID sequence number assigned by the SVC internally). The name
can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash, and the
underscore. It can be between one and 15 characters in length, but cannot start with a
number, the dash, or the word node because this prefix is reserved for SVC assignment
only.



Figure 10-28 Adding a node

3. Use the Refresh button in Figure 10-29 until the new_node has the status Online.

Figure 10-29 Add node Refresh button

Renaming a node
Perform the following steps to rename a node in the SVC cluster:

1. From the Viewing Nodes panel (Figure 10-26 on page 291), select the radio button to the
left of the node you want to rename. Select Rename a Node from the drop-down list, and
click Go.
2. On the Renaming Node nodename panel (where nodename is the node you selected
previously), type the new name you want to assign to the node. Click OK (Figure 10-30).

Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash,
and the underscore. It can be between one and 15 characters in length, but cannot start
with a number, the dash, or the word node because this prefix is reserved for SVC
assignment only.

Figure 10-30 Renaming a node

Deleting a node
Perform the following steps to delete a node from the SVC cluster:
1. From the Viewing Nodes panel (Figure 10-26 on page 291), select the radio button to the
left of the node you want to delete. Select Delete a Node from the drop-down list; click Go.
2. On the Deleting a Node from Cluster nodename panel (where nodename is the name of
the node you selected in the previous step), confirm your decision by selecting Yes. See
Figure 10-31.

Figure 10-31 Deleting node from a cluster



Note: If the node you are deleting is the Configuration Node, then that responsibility will
automatically be passed to another node in the cluster before it is deleted.

3. Use the Refresh button in Figure 10-32 until SVCNode1 is no longer on the list.

Figure 10-32 Delete node refresh button

Shutting down a node


Earlier we showed how to shut down the complete SVC cluster in a controlled manner
(“Shutting down a cluster” on page 288). On occasion, it might be necessary to shut down a
single node within the cluster to perform such tasks as scheduled maintenance, while leaving
the SVC environment up and running. This function shuts down one node in a graceful
manner. When this is done, the other node in the I/O Group destages the contents of its
cache and goes into write-through mode until the node is powered up again and rejoins the
cluster.

To shut down a single node in an SVC cluster, perform the following steps:
1. From the Viewing Nodes panel (Figure 10-26 on page 291), select the radio button to the
left of the node you want to shut down. Select Shut Down a Node from the list. Click Go.
2. On the confirmation panel (Figure 10-33) that appears next, select Yes to continue with
the shutdown process.

Figure 10-33 Shutting down a node

To restart the SVC node, simply go to the front panel of that node and push the power on
button.

Note: The 2145 UPS-1U does not power off when the SAN Volume Controller is shut
down. However, the previous model, the 2145 UPS-2U, goes into standby mode after 5
minutes if the last node attached to it is powered down.

To be able to turn on an SVC node running on a 2145 UPS-2U, you first need to
press the power button on the UPS front panel.

You have now completed the tasks that are required to view, add, delete, rename, and shut
down a node within the SVC environment.

10.3 Viewing progress


With this view you can see the status of activities like VDisk Migration, MDisk Removal,
Image Mode Migration, Extend Migration, FlashCopy, Metro Mirror, and VDisk Formatting.

Figure 10-34 shows the status of an MDisk Removal that we performed in “Removing MDisks”
on page 318.

Figure 10-34 Showing MDisk Removal Status

10.4 Working with managed disks


This section details the various configuration and administration tasks that you can perform
on the managed disks (MDisks) within the SVC environment.

10.4.1 Disk controller systems


This section details the tasks that you can perform at a disk controller level.

Viewing disk controller details


Perform the following steps to view information about a back-end disk controller in use by the
SVC environment:
1. Select the Work with Managed Disks option and then the Disk Controller Systems link.



2. The Viewing Disk Controller Systems panel (Figure 10-35) opens. For more detailed
information about a specific controller, click its ID (highlighted by the mouse cursor in
Figure 10-35).

Figure 10-35 Disk controller systems

3. When you click the controller name (Figure 10-35), the Viewing General Details panel
(Figure 10-36) opens for the controller that you selected. Review the details and click
Close to return to the previous panel.

Figure 10-36 Viewing general details about a disk controller

Renaming a disk controller
Perform the following steps to rename a disk controller used by the SVC cluster:
1. Select the radio button to the left of the controller you want to rename. Then select
Rename a Disk Controller System from the list and click Go.
2. On the Renaming Disk Controller System controllername panel (where controllername is
the controller you selected in the previous step), type the new name you want to assign to
the controller and click OK. See Figure 10-37.

Figure 10-37 Renaming a controller

3. You return to the Disk Controller Systems panel. You should now see the new name of
your controller displayed.

Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash,
and the underscore. It can be between one and 15 characters in length. However, it
cannot start with a number, the dash, or the word controller because this prefix is
reserved for SVC assignment only.

10.4.2 Discovery status


You can view the status of a managed disk (MDisk) discovery from the Viewing Discovery
Status panel. This panel tells you whether an MDisk discovery is ongoing; a running MDisk
discovery is displayed with a status of Active.

Perform the following steps to view the status of an MDisk discovery:


1. Click Work with Managed Disks > Discovery Status. The Viewing Discovery Status panel
is displayed as shown in Figure 10-38.



Figure 10-38 Discovery status view

2. Click Close to close this panel.

10.4.3 Managed disks


This section details the tasks which can be performed at an MDisk level. You perform each of
the following tasks from the Managed Disks panel (Figure 10-39). To access this panel, from
the SVC Welcome page, click the Work with Managed Disks option and then the Managed
Disks link.

Note: At the Filtering Managed Disks (MDisks) panel, click Bypass filter.

Figure 10-39 Viewing Managed Disks panel

MDisk information
To retrieve information about a specific MDisk, perform the following steps:
1. On the Viewing Managed Disks panel (Figure 10-40), click the name of any MDisk in the
list to reveal more detailed information about the specified MDisk.

Figure 10-40 Managed disk details

Tip: If at any time the content in the right side of the frame is “cut off”, you can minimize
the My Work column by clicking the arrow to the right of the My Work heading at the top
right of the column (highlighted with the mouse pointer in Figure 10-39).

After you minimize the column, you see an arrow in the far left position in the same
location where the My Work column formerly appeared. See Figure 10-41.

2. Review the details and then click Close to return to the previous panel.

Renaming an MDisk
Perform the following steps to rename an MDisk controlled by the SVC cluster:
1. Select the radio button to the left of the MDisk that you want to rename in Figure 10-39 on
page 299. Select Rename an MDisk from the list and click Go.
2. On the Renaming Managed Disk MDiskname panel (where MDiskname is the MDisk you
selected in the previous step), type the new name you want to assign to the MDisk and
click OK. See Figure 10-41.



Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, the dash,
and the underscore. It can be between one and 15 characters in length. However, it
cannot start with a number, the dash, or the word MDisk because this prefix is reserved
for SVC assignment only.

Figure 10-41 Renaming an MDisk

Discovering MDisks
Perform the following steps to discover newly assigned MDisks:
1. Select Discover MDisks from the pull-down list of Figure 10-39 and click Go.
2. Any newly assigned MDisks are displayed in the window shown in Figure 10-42.

Figure 10-42 Newly discovered managed disks

Setting up a quorum disk


The SVC cluster, after the process of node discovery, automatically chooses three MDisks as
quorum disks. Each disk is assigned an index number of either 0, 1, or 2.

In the event that half the nodes in a cluster are missing for any reason, the other half cannot
simply assume that those nodes are “dead”. It can simply mean that the cluster state
information is not being successfully passed between the nodes for some reason (a network
failure, for example). For this reason, if half the cluster disappears from the view of the other,
each surviving half attempts to lock the first quorum disk (index 0). If quorum disk index 0 is
not available to any node, the next disk (index 1) becomes the quorum, and so on.

The half of the cluster that is successful in locking the quorum disk becomes the exclusive
processor of I/O activity. It attempts to reform the cluster with any nodes it can still see. The
other half will stop processing I/O. This provides a tie-break solution and ensures that both
halves of the cluster do not continue to operate.

If both halves of the cluster can see the quorum disk, they use the quorum disk to
communicate with each other and decide which half becomes the exclusive processor of I/O
activity.

If, for any reason, you want to set your own quorum disks (for example, if you have installed
additional back-end storage and you want to move one or two quorum disks onto this newly
installed back-end storage subsystem), complete the following tasks:
1. Select the radio button to the left of the MDisk that you want to designate as a quorum.
Then select Set a quorum disk from the list and click Go.
2. On the Setting a Quorum Disk in Figure 10-43, assign a quorum index of 0, 1, or 2 and
click OK.

Figure 10-43 Setting a quorum disk

Quorum disks are only created if at least one MDisk is in managed mode (that is, it was
formatted by the SVC with extents in it). Otherwise, a 1330 cluster error message is displayed
on the SVC front panel. You can correct this only by placing MDisks in managed mode.
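For reference, a sketch of the CLI equivalent of this task follows; the quorum index and
MDisk name are placeholders:

IBM_2145:itsosvc01:admin>svctask setquorum -quorum 0 mdisk5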

Including an MDisk
If a significant number of errors occurs on an MDisk, the SVC automatically excludes it.
These errors can result from a hardware problem, a storage area network (SAN) zoning
problem, or poorly planned maintenance. If it is a hardware fault, you should have received
SNMP alerts about the state of the hardware (before the disk was excluded) and undertaken
preventive maintenance. If not, the hosts that were using VDisks, which used the excluded
MDisk, now have I/O errors.



From the Viewing Managed Disks panel (Figure 10-44), you can see that mdisk3 is excluded.

Figure 10-44 Viewing Managed Disks: Excluding an MDisk

After you take the necessary corrective action to repair the MDisk (for example, replace the
failed disk, repair SAN zones), you can tell the SVC to include the MDisk again. Select the
radio button to the left of the excluded MDisk. Then select Include an MDisk from the
drop-down list and click Go. See Figure 10-45.

Figure 10-45 Including an MDisk

When you return to the Viewing Managed Disks panel (Figure 10-46), you see that mdisk3 is
now back in an online state.

Figure 10-46 Viewing Managed Disks: Verifying the included MDisk

Showing an MDisk group


To display information about the managed disk group (MDG) to which an MDisk belongs,
perform the following steps:
1. Select the radio button to the left of the MDisk you want to obtain MDG information about.
Select Show MDisk Group from the list and click Go as shown in Figure 10-47.

Figure 10-47 Show MDisk Group select



2. Click the name of the Managed Disk Group as shown in Figure 10-48.

Figure 10-48 Show MDisk Group

3. You now see a subset (specific to the MDisk you chose in the previous step) as shown in
Figure 10-49.

Figure 10-49 View MDG details

Showing a VDisk for an MDisk


To display information about VDisks that reside on an MDisk, perform the following steps:
1. Select the radio button, as shown in Figure 10-50, to the left of the MDisk you want to
obtain VDisk information about. Select Show VDisks from the list and click Go.

Figure 10-50 Show VDisk

2. You now see a subset (specific to the MDisk you chose in the previous step) of the View
Virtual Disks panel in Figure 10-51. We cover the View Virtual Disks panel in more detail
in 10.5, “Working with hosts” on page 321.

Figure 10-51 VDisk list from a selected MDisk

Creating a VDisk in image mode


An image mode disk is a VDisk that has an exact one-to-one (1:1) mapping of VDisk extents
with the underlying MDisk. For example, extent 0 on the VDisk contains the same data as
extent 0 on the MDisk, and so on. Without this 1:1 mapping (for example, if extent 0 on the
VDisk mapped to extent 3 on the MDisk), there is little chance that the data on a newly
introduced MDisk is still readable.

Image mode is intended for the purpose of migrating data from an environment without the
SVC to an environment with the SVC. A LUN that was previously directly assigned to a
SAN-attached host can now be reassigned to the SVC (during a short outage) and returned
to the same host as an image mode VDisk, with the user’s data intact. During the same
outage, the host, cables, and zones can be reconfigured to access the disk, now via the SVC.



After access is re-established, the host workload can resume while the SVC manages the
transparent migration of the data to other SVC managed MDisks on the same or another disk
subsystem.

We recommend that, during the migration phase of the SVC implementation, you add one
image mode VDisk at a time to the SVC environment. This reduces the possibility of error. It
also means that the short outages required to reassign the LUNs from the subsystem or
subsystems and reconfigure the SAN and host can be staggered over a period of time to
minimize the business impact.

Important: You can create an image mode VDisk only by using an unmanaged disk. That
is, you must do this before you add the MDisk that corresponds to your original logical
volume to a Managed Disk Group.

To create an image mode VDisk, perform the following steps:


1. Select the radio button to the left of the unmanaged MDisk, as shown in Figure 10-52, on
which you want to create an image mode VDisk. Select Create VDisk in image mode
from the list and click Go.

Figure 10-52 Create VDisk in image mode

2. The first thing you see is the image mode VDisk creation wizard; after reading the steps,
click Next.
3. The Set attributes panel (Figure 10-53) then appears, where you enter the name of
the VDisk you want to create. You can also choose whether read and write operations are
stored in cache by specifying a cache mode. Additionally, you can specify a unit device
identifier. Check the box if you want to create an empty MDisk group. Click Next to continue.

Attention: You must specify the cache mode when you create the VDisk. After the
VDisk is created, you cannot change the cache mode.

a. We describe the VDisk cache modes in Table 10-1.

Table 10-1 VDisk cache modes

Read/Write: All read and write I/O operations that are performed by the VDisk are stored in
cache. This is the default cache mode for all VDisks.

None: Read and write I/O operations that are performed by the VDisk are not stored in
cache.

b. Figure 10-53 shows how to set the attributes.

Figure 10-53 Set attributes

Note: If you do not provide a name, the SVC automatically generates the name VDiskX
(where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9,
a dash, and the underscore. It can be between one and 15 characters in length, but
cannot start with a number, a dash, or the word VDisk because this prefix is reserved
for SVC assignment only.



4. In the next panel (Figure 10-54) you must enter the name of the MDG where you want to
add the new MDisk. Click Next to proceed.

Figure 10-54 MDG name entry

5. In the following panel (Figure 10-55) you select the correct extent size for your VDisk. Click
Next to proceed.

Figure 10-55 Select extent size

6. The next panel (Figure 10-56) shows you the results of your previous MDG inputs. Click
Next to proceed.

Figure 10-56 Verify MDG

7. Now (see Figure 10-57) you can select another MDG if the one entered before does not
have enough space available. In our case we had to select AixImgMdiskGrp. Click Next
to proceed.



Figure 10-57 Choose an I/O group and an MDG

8. This last panel (Figure 10-58) shows you the characteristics of the new image VDisk.

Figure 10-58 Verify imaged VDisk
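For reference, a hedged sketch of the CLI equivalent of this wizard follows; the MDG, MDisk,
and VDisk names are placeholders based on our lab conventions:

IBM_2145:itsosvc01:admin>svctask mkvdisk -mdiskgrp AixImgMdiskGrp -iogrp io_grp0 -vtype image -mdisk mdisk10 -name AIX_imgvdisk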

10.4.4 Managed Disk Groups
This section details the tasks that can be performed at an MDG level.



Each of the following tasks is performed from the Viewing Managed Disk Groups panel
(Figure 10-59). To access this panel, from the SVC Welcome page, click the Work with
Managed Disks option and then the Managed Disk Groups link.

Note: At the Filtering Managed Disk (MDisk) Groups panel, click Bypass filter.

Figure 10-59 Viewing MDGs

Viewing MDisk group information


To retrieve information about a specific MDG, perform the following steps:
1. On the Viewing Managed Disk Groups panel (Figure 10-59), click the name of any MDG in
the list.
2. On the View MDisk Group Details panel (Figure 10-60), you see more detailed information
about the specified MDG. Here you see information pertaining to the number of MDisks
and VDisks as well as the capacity (both total and free space) within the MDG. When you
finish viewing the details, click Close to return to the previous panel.



Figure 10-60 MDG details

Creating an MDisk group


To create an MDG, perform the following steps:
1. Select Create an MDisk group from the list in Figure 10-59 and click Go.
2. On the Create Managed Disk Group wizard panel, click Next.
3. On the Name the group and select the managed disks panel (Figure 10-61), give the
MDG a name. Optionally, select MDisks from the MDisk Candidates list and add them to
the Selected MDisks list (one at a time) in the desired order.
Selecting no MDisk candidates creates an “empty” MDG. You can add MDisks to an
“empty” MDG at a later time.
Click Next.

Note: If you do not provide a name, the SVC automatically generates the name
mdiskgrpX (where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to
9, a dash, and the underscore. It can be between one and 15 characters in length.
However, it cannot start with a number, the dash, or the word mdiskgrp because this
prefix is reserved for SVC assignment only.

Figure 10-61 Name the group and select managed disks

4. On the Select the extent size panel (Figure 10-62), select the extent size in MB with which
you want to format your MDG. Then click Next.

Figure 10-62 Select the extent size



5. Verify the information that you specified in the previous panels (Figure 10-63) and if it is
correct, click Finish. If you need to correct something, click Back.

Figure 10-63 Verifying the information about the MDG
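For reference, a sketch of the CLI equivalent follows; the group name, extent size, and
MDisk names are placeholders:

IBM_2145:itsosvc01:admin>svctask mkmdiskgrp -name MDG0 -ext 16 -mdisk mdisk4:mdisk5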

Renaming an MDisk group


To rename an MDG, perform the following steps:
1. Select the radio button in Viewing MDGs (Figure 10-64) to the left of the MDG you want to
rename. Select Rename an MDisk Group from the list and click Go.

Figure 10-64 Renaming an MDG

2. From the Renaming Managed Disk Group MDGname panel (where MDGname is the
MDG you selected in the previous step), type the new name you want to assign and click
OK (see Figure 10-65).

Note: The name can consist of letters A to Z, a to z, numbers 0 to 9, a dash, and the
underscore. It can be between one and 15 characters in length, but cannot start with a
number, a dash, or the word mdiskgrp because this prefix is reserved for SVC
assignment only.

Figure 10-65 Renaming an MDG

Deleting an MDisk group


To delete an MDG, perform the following steps:
1. Select the radio button to the left of the MDG you want to delete. Select Delete an MDisk
Group from the list and click Go.
2. On the Deleting a Managed Disk Group MDGname panel (where MDGname is the MDG
you selected in the previous step), click OK to confirm that you want to delete the MDG
(see Figure 10-66).

Figure 10-66 Deleting an MDG

3. If there are MDisks and VDisks within the MDG you are deleting, you are required to click
Forced delete for the MDG (Figure 10-67).

Important: If you delete an MDG with the Forced Delete option, and VDisks were
associated with that MDisk group, you lose the data on those VDisks, because they are
deleted before the MDisk group. If you want to save your data, migrate the VDisks to
another MDisk group before you delete the MDisk group to which they were previously
assigned.



Figure 10-67 Confirming forced deletion of an MDG

Adding MDisks
If you created an empty MDG as we did, or if you assign additional MDisks to your SVC
environment later, you can add MDisks to existing MDGs by performing the following steps:

Note: You can only add unmanaged MDisks to an MDG.

1. Select the radio button (Figure 10-68) to the left of the MDG to which you want to add
MDisks. Select Add MDisks from the list and click Go.

Figure 10-68 Adding an MDisk to an existing MDG

2. From the Adding Managed Disks to Managed Disk Group MDiskname panel (where
MDiskname is the MDG you selected in the previous step), select the desired MDisk or
MDisks from the MDisk Candidates list (Figure 10-69). After you select all the desired
MDisks, click OK.

Figure 10-69 Adding MDisks to an MDG

Removing MDisks
To remove an MDisk from an MDG, perform the following steps:
1. Select the radio button to the left (Figure 10-70) of the MDG from which you want to
remove an MDisk. Select Remove MDisks from the list and click Go.

Figure 10-70 Viewing MDGs



2. From the Deleting Managed Disks from Managed Disk Group MDGname panel (where
MDGname is the MDG you selected in the previous step), select the desired MDisk or
MDisks from the list (Figure 10-71). After you select all the desired MDisks, click OK.

Figure 10-71 Removing MDisks from an MDG

3. If VDisks are using the MDisks that you are removing from the MDG, you are required to
click the Forced Delete button to confirm the removal of the MDisk, as shown in
Figure 10-72. Even then, the removal only takes place if there is sufficient space to
migrate the VDisk data to other extents on other MDisks that remain in the MDG.

Figure 10-72 Confirming forced deletion of MDisk from MDG

Showing MDisks in this group
To show a list of MDisks within an MDG, perform the following steps:
1. Select the radio button to the left (Figure 10-73) of the MDG from which you want to
retrieve MDisk information. Select Show MDisks in this group from the list and click Go.

Figure 10-73 View MDGs

2. You now see a subset (specific to the MDG you chose in the previous step) of the Viewing
Managed Disk panel (Figure 10-74) from 10.4.3, “Managed disks” on page 299.

Figure 10-74 Viewing MDisks in an MDG

Note: Remember, you can collapse the column entitled My Work at any time by clicking
the arrow to the right of the My Work column heading.

Showing VDisks using this group


To show a list of VDisks associated with MDisks within an MDG, perform the following steps:
1. Select the radio button to the left (Figure 10-75 on page 321) of the MDG from which you
want to retrieve VDisk information. Select Show VDisks using this group from the list
and click Go.



Figure 10-75 View MDisks

2. You see a subset (specific to the MDG you chose in the previous step) of the Viewing
Virtual Disks panel in Figure 10-76. We cover the Viewing Virtual Disks panel in more
detail in “VDisk information” on page 334.

Figure 10-76 VDisks belonging to selected MDG

You have now completed the tasks required to manage the disk controller systems, managed
disks, and MDGs within the SVC environment.

10.5 Working with hosts


In this section we describe the various configuration and administration tasks that you can
perform on the hosts within the SVC environment.

10.5.1 Hosts
This section details the tasks that you can perform at a host level. Each of the following tasks
is performed from the Viewing Hosts panel (Figure 10-77). To access this panel, from the
SVC Welcome page, click the Work with Virtual Disks option and then the Hosts link.

Note: At the Filtering Hosts panel, click Bypass filter.

Figure 10-77 Viewing hosts

Host information
To retrieve information about a specific host, perform the following steps:
1. On the Viewing Hosts panel (see Figure 10-77 above), click the name of any host in the
list displayed.
2. Next, you can get details for the host you requested:
a. On the Viewing General Details panel (Figure 10-78), you can see more detailed
information about the specified host.

Figure 10-78 Host details



b. You can click the Port Details (Figure 10-79) link to see information about the Fibre
Channel Host Bus Adapters (HBAs) that were defined within the host.

Figure 10-79 Host port details

c. You can click Mapped I/O Group (Figure 10-80) to see which I/O groups this host can
access.

Figure 10-80 Host mapped I/O groups

When you are finished viewing the details, click Cancel to return to the previous panel.

Creating a host
To create a new host, perform the following steps:
1. As shown in Figure 10-81, select the option Create a host from the list and click Go.

Figure 10-81 Create a host

2. On the Creating Hosts panel (Figure 10-82), type a name for your host (Host Name).

Note: If you do not provide a name, the SVC automatically generates the name hostX
(where X is the ID sequence number assigned by the SVC internally).

If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9,
and the underscore. It can be between one and 15 characters in length. However, it
cannot start with a number or the word host because this prefix is reserved for SVC
assignment only. Although using an underscore might work in some circumstances, it
violates the RFC 2396 definition of Uniform Resource Identifiers (URIs) and thus can
cause problems. So we recommend that you do not use the underscore in host names.

3. Select the mode (Type) for the host. You must choose HP_UX to get more than 8 LUNs
supported for HP_UX machines. For all other hosts, select Generic mode (default).
You can use a Port Mask to control the node target ports that a host can access. The port
mask applies to logins from the host initiator ports that are associated with the host object.

Note: For each login between a host HBA port and node port, the node examines the
port mask that is associated with the host object for which the host HBA is a member
and determines if access is allowed or denied. If access is denied, the node responds
to SCSI commands as if the HBA port is unknown.

The port mask is four binary bits. Valid mask values range from 0000 (no ports
enabled) to 1111 (all ports enabled). The right-most bit in the mask corresponds to the
lowest numbered SVC port (1 not 4) on a node.



As shown in Figure 10-82, our port mask is 1111; this means that the host HBA port can
access all node ports. If, for example, a port mask is set to 0011, only port 1 and port 2 are
enabled for this host access.
4. Select and add the worldwide port names (WWPNs) that correspond to your HBA or
HBAs. Click OK.
Your WWPN or WWPNs might not display, although you are sure your adapter is
functioning (for example, you see the WWPN in the switch name server) and your zones
are correctly set up. In this case, you can manually type the WWPN of your HBA or HBAs
into the Additional Ports field (type in WWPNs, one per line) at the bottom of the panel
before you click OK.

Figure 10-82 Creating a new host

5. This brings you back to the viewing host panel (Figure 10-83) where you can see the
added host.

Figure 10-83 Create host results
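Although this chapter works through the GUI, the same host definition can be created from
the SVC CLI with the svctask mkhost command. The following is a minimal sketch; the host
name, WWPN, and port mask values are examples only, and you should verify the
parameters against the command reference for your code level:

IBM_2145:ITSOSVC01:admin>svctask mkhost -name Helium_1 -hbawwpn 210000E08B054CAA -type generic -mask 1111

Here -mask 1111 enables all four node ports for the host, as described in the port mask
discussion above.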

Modifying a host
To modify a host, perform the following steps:
1. Select the radio button to the left of the host you want to rename (Figure 10-84). Select
Rename a host from the list and click Go.

Figure 10-84 Modifying a host



2. From the Modifying Host panel (Figure 10-85), type the new name you want to assign or
change the Type parameter and click OK.

Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, and the
underscore. It can be between one and 15 characters in length. However, it cannot start
with a number or the word host because this prefix is reserved for SVC assignment only.
While using an underscore might work in some circumstances, it violates the RFC 2396
definition of Uniform Resource Identifiers (URIs) and thus can cause problems. So we
recommend that you do not use the underscore in host names.

Figure 10-85 Modifying a host (choosing a new name)

Deleting a host
To delete a Host, perform the following steps:
1. Select the radio button to the left of the host you want to delete (Figure 10-86). Select
Delete a host from the list and click Go.

Figure 10-86 Deleting a host

2. On the Deleting Host hostname panel (where hostname is the host you selected in the
previous step), click OK if you are sure you want to delete the host. See Figure 10-87.

Figure 10-87 Deleting a host

3. If you still have VDisks associated with the host, you see a panel (Figure 10-88)
requesting confirmation for the forced deletion of the host. Click OK and all the mappings
between this host and its VDisks are deleted before the host is deleted.

Figure 10-88 Forcing a deletion
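The CLI equivalent of this panel is svctask rmhost. As a hedged sketch (the host name is
an example), the -force flag removes any remaining VDisk mappings together with the host:

IBM_2145:ITSOSVC01:admin>svctask rmhost -force Helium_1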



Adding ports
If you add an HBA to a server that is already defined within the SVC, you can simply add
additional ports to your host definition by performing the following steps:
1. Select the radio button to the left of the host to which you want to add WWPNs
(Figure 10-89). Select Add ports from the list and click Go.

Figure 10-89 Add ports to a host

2. From the Adding ports to hostname panel (where hostname is the host you selected in the
previous step), select the desired WWPN from the Available Ports list (one at a time) and
click Add. After you select all the desired WWPNs, click OK. See Figure 10-90.
If your WWPNs are not in the list of the Available Ports and you are sure your adapter is
functioning (for example, you see WWPN in the switch name server) and your zones are
correctly set up, then you can manually type the WWPN of your HBAs into the Add
Additional Ports field at the bottom of the panel before you click OK.

Figure 10-90 Adding ports to a host
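From the CLI, an additional port can be added to an existing host definition with svctask
addhostport. A minimal sketch; the WWPN and host name are examples:

IBM_2145:ITSOSVC01:admin>svctask addhostport -hbawwpn 210000E08B054CAB Helium_1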

Deleting ports
To delete a port from a host, perform the following steps:
1. Select the radio button to the left of the host from which you want to delete a port
(Figure 10-91). Select Delete ports from the list and click Go.

Figure 10-91 Delete ports from a host



2. On the Deleting Ports From hostname panel (where hostname is the host you selected in
the previous step), select the ports you want to delete from the Available Ports list and
click Add to move them to the column on the right. When you have moved all the ports you
want to delete from your host, click OK. See Figure 10-92.

Figure 10-92 Deleting ports from a host

3. If you have VDisks that are associated with the host, you receive a warning about deleting
a host port. You need to confirm your action when prompted, as shown in Figure 10-93.

Figure 10-93 Port delete confirmation
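The CLI equivalent is svctask rmhostport. A minimal sketch with example values:

IBM_2145:ITSOSVC01:admin>svctask rmhostport -hbawwpn 210000E08B054CAB Helium_1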

10.5.2 Fabrics
This view was added to the SVC management interface in version 4.1. With it you can easily
collect information about the attached hosts and controller subsystems, their local and remote
WWPNs, local and remote N_Port IDs, the type of connection (host, node, controller), and
the current state (active or inactive).
1. Click Work with Hosts and then Fabrics.
2. The Viewing Fabrics panel should open as shown in Figure 10-94. In this view you can
search and filter as described in 10.1.1, “Organizing on-screen content” on page 276.

Figure 10-94 Viewing Fabrics
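The same connectivity information can be retrieved from the CLI with the svcinfo lsfabric
command, which was also introduced with version 4.1. A sketch; the -host filter shown is
our assumption of a convenient option, while plain svcinfo lsfabric lists all logins:

IBM_2145:ITSOSVC01:admin>svcinfo lsfabric -host Helium_1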

You have now completed the tasks required to manage the hosts within an SVC environment.



10.6 Working with virtual disks
In this section we describe the tasks that you can perform at a VDisk level.

10.6.1 Using the Virtual Disks panel for VDisks


Each of the following tasks is performed from the Viewing Virtual Disks panel
(Figure 10-95). To access this panel, from the SVC Welcome page, click the Work with
Virtual Disks option and then the Virtual Disks link. The drop-down menu contains all the
actions you can perform at the Virtual Disks panel.

Note: At the Filtering Virtual Disks (VDisks) panel, click Bypass filter. However, if you
have more than 1024 VDisks, you cannot use the Bypass filter selection.

Figure 10-95 Viewing Virtual Disks

VDisk information
To retrieve information about a specific VDisk, perform the following steps:
1. On the Viewing Virtual Disks panel, click the name of the desired VDisk in the list.
2. The next panel (Figure 10-96) that opens shows detailed information. Review the
information. When you are done, click Close to return to the Viewing Virtual Disks panel.

Figure 10-96 VDisk details



Creating a VDisk
To create a new VDisk, perform the following steps:
1. Select Create a VDisk from the list (Figure 10-95 on page 333) and click Go.
2. The Create Virtual Disks wizard launches. Click Next.
3. The Choose an I/O group and a Managed Disk Group panel (Figure 10-97) opens. In our
case we cannot select the I/O group, because our cluster has only one I/O group. Select
the MDG within which you want to create the VDisk and check that there is enough disk
space for the VDisks you want to create. Click Next.

Figure 10-97 Creating a VDisk wizard: Choose an I/O group and an MDG

4. The Select type of VDisk and number of VDisk panel opens. Choose the type of VDisk
you want to create: striped or sequential. If desired, enter a unit device identifier. Enter the
number of VDisks you want to create and click Next (Figure 10-98).

Figure 10-98 Creating a VDisk wizard: Select type of VDisk and number of VDisk

5. In the Name the Virtual Disks panel (Figure 10-99) you can enter the VDisk name if you
create just one VDisk or the naming prefix if you create multiple VDisks. Click Next.

Tip: When you create more than one VDisk, the wizard will not ask you for a name for each
VDisk to be created. Instead, the name you use here will have a number, starting at zero,
appended to it as each one is created.



Figure 10-99 Creating a VDisk wizard: Name the VDisks panel

Note: If you do not provide a name, the SVC automatically generates the name
VDiskX (where X is the ID sequence number assigned by the SVC internally).
If you want to provide a name, you can use the letters A to Z, a to z, the numbers 0 to 9,
and the underscore. It can be between one and 15 characters in length, but cannot
start with a number or the word VDisk because this prefix is reserved for SVC
assignment only.

6. If you selected Striping in Figure 10-98 on page 336, you should get the panel shown
below in Figure 10-100. On the Select Attributes for Striped-mode VDisk panel, you can
Add or Remove the MDisks candidates; and with the up and down arrows, you can set
the striping sequence on the selected MDisks.
7. Enter the size of the VDisk you want to create and select the capacity measurement
(MB or GB) from the list.

Note: An entry of 1 GB uses 1024 MB.

8. Optionally, format the new VDisk by selecting the Format virtual disk check box (write
zeros to its managed disk extents) at the bottom of the panel. Click Next.

Figure 10-100 Creating a VDisk wizard: Select Attributes for Striped-mode VDisk

9. If you selected Sequential in Figure 10-98 on page 336, you should get the panel in
Figure 10-101. If you are creating multiple virtual disks, choose whether to use a single
managed disk or multiple managed disks. Choose the managed disk(s) and type a
capacity for the virtual disk(s). Specify whether to format the virtual disk(s) and click Next.



Figure 10-101 Creating a VDisk wizard: Select attributes for sequential mode VDisks

10.On the Verify VDisk panel (see Figure 10-102 for striped and Figure 10-103 for
sequential), check if you are satisfied with the information shown, then click Finish to
complete the task. Otherwise, click Back to return and make any corrections.

Figure 10-102 Creating a VDisk wizard: Verify VDisk Striped type

Figure 10-103 Creating a VDisk wizard: Verify VDisk sequential type



11.The last panel (Figure 10-104) shows you the progress during the creation of your VDisks
on the storage and the final results.

Figure 10-104 Creating a VDisk wizard: final result
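The whole wizard corresponds to a single CLI command, svctask mkvdisk. A minimal sketch
for a striped VDisk; the MDG, I/O group, and VDisk names below are examples from an
assumed lab setup:

IBM_2145:ITSOSVC01:admin>svctask mkvdisk -mdiskgrp MDG1 -iogrp io_grp0 -vtype striped -size 10 -unit gb -name VD_TEST

For a sequential VDisk, -vtype seq is used together with the -mdisk parameter to name the
MDisk that supplies the extents.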

Deleting a VDisk
To delete a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to delete (Figure 10-95 on
page 333). Select Delete a VDisk from the list and click Go.
2. On the Deleting Virtual Disk VDiskname panel (where VDiskname is the VDisk you just
selected), click OK to confirm your desire to delete the VDisk. See Figure 10-105.

Tip: Remember, you can collapse the My Work column by clicking the arrow to the right of
the My Work column heading.

Figure 10-105 Deleting a VDisk

If the VDisk is currently assigned to a host, you receive a secondary message where you
must click Forced Delete to confirm your decision. See Figure 10-106. This deletes the
VDisk-to-host mapping before deleting the VDisk.

Important: Deleting a VDisk is a destructive action for the user data residing on that
VDisk.

Figure 10-106 Deleting a VDisk: Forcing a deletion

Deleting a VDisk-to-host mapping


To unmap (unassign) a VDisk from a host, perform the following steps:
1. Select the radio button to the left of the VDisk you want to unmap. Select Delete a
VDisk-to-host mapping from the list and click Go.
2. On the Deleting a VDisk-to-host mapping panel (Figure 10-107), from the Host Name list,
select the host from which to unassign the VDisk. Click OK.

Tip: Make sure that the host is no longer using that disk. Un-mapping a disk from a host
will not destroy its contents.

Un-mapping a disk has the same effect as powering off the computer without first
performing a clean shutdown, and thus might leave the data in an inconsistent state. Also,
any running application that was using the disk will start to receive I/O errors.

Figure 10-107 Deleting a VDisk-to-host mapping
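The CLI equivalent is svctask rmvdiskhostmap. A sketch using example names from our
setup:

IBM_2145:ITSOSVC01:admin>svctask rmvdiskhostmap -host Helium_1 LNX-BEN1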



Expanding a VDisk
Expanding a VDisk presents a larger capacity disk to your operating system. Although you
can do this easily using the SVC, you must ensure that your operating system is prepared for
it and supports the volume expansion before you use this function.

Dynamic expansion of a VDisk is only supported when the VDisk is in use by:
򐂰 AIX 5.2 and above
򐂰 W2K and W2K3 for basic disks
򐂰 W2K and W2K3 with a hot fix from Microsoft (Q327020) for dynamic disks

Assuming your operating system supports it, to expand a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to expand in Figure 10-95 on
page 333. Select Expand a VDisk from the list and click Go.
2. The Expanding Virtual Disks VDiskname panel (where VDiskname is the VDisk you
selected in the previous step) opens. See Figure 10-108. Follow these steps:
a. Select the new size of the VDisk. This is the increment to add. For example, if you have
a 5 GB disk and you want it to become 10 GB, you specify 5 GB in this field.

b. Optionally, select the managed disk candidates from which to obtain the additional
capacity. The default for a striped VDisk is to use equal capacity from each MDisk in
the MDG.

Notes:
򐂰 With sequential VDisks, you must specify the MDisk from which you want to
obtain space.
򐂰 There is no support for the expansion of image mode VDisks.
򐂰 If there are not enough extents to expand your VDisk to the specified size, you
receive an error message.

c. Optionally, format the full VDisk with zeros by selecting the Format virtual disk (write
zeros to its managed disk extents) check box at the bottom of the panel.

Important: The Format Virtual Disk check box is not selected by default. But if you
check it, the entire VDisk will be formatted, not just the new extents, so be very
careful.

When you are done, click OK.



Figure 10-108 Expanding a VDisk

3. Go to your host and perform necessary operations to discover the additional space and
expand your volumes into it. This procedure differs depending on the operating system.
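The CLI equivalent of step 2 is svctask expandvdisksize. A sketch that adds 5 GB,
matching the example above (the VDisk name is an example):

IBM_2145:ITSOSVC01:admin>svctask expandvdisksize -size 5 -unit gb VD_TEST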

Mapping a VDisk to a host


To map (assign) a virtual disk to a host, perform the following steps:
1. Select the radio button to the left of the VDisk you want to assign to a host (Figure 10-95
on page 333). Select Map a VDisk to a host from the list and click Go.
2. On the Creating a Virtual Disk-to-Host mapping VDiskname panel (where VDiskname is
the VDisk you selected in the previous step), from the Target Host list, select the desired
host. The SCSI LUN ID increments based on what is already assigned to the host.
Click OK. See Figure 10-109.

Tip: The option Allow the virtual disks to be mapped even if they are already mapped to a
host allows you to map a VDisk to more than one host. This would normally be used in
clustered environments, where the responsibility on access to the disks is negotiated
between the hosts (and not enforced by the SVC), or when using global file systems such
as the IBM System Storage SAN File System.



Figure 10-109 Mapping a VDisk to a host

3. The next panel (Figure 10-110) shows you the progress of the VDisk to host mapping.

Figure 10-110 Progress of VDisk to host mapping
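The CLI equivalent is svctask mkvdiskhostmap; the names below are examples. We believe
the optional -scsi parameter can be used to set an explicit SCSI LUN ID instead of the
automatically incremented one:

IBM_2145:ITSOSVC01:admin>svctask mkvdiskhostmap -host Helium_1 VD_TEST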

Modifying a VDisk
The Modifying Virtual Disk menu item allows you to rename the VDisk, reassign the VDisk to
another I/O group, and set throttling parameters.

To modify a VDisk, perform the following steps:


1. Select the radio button to the left of the VDisk you want to modify (Figure 10-95 on
page 333). Select Modify a VDisk from the list and click Go.
2. The Modifying virtual disk VDiskname panel (where VDiskname is the VDisk you selected
in the previous step) opens. See Figure 10-111 below. You can perform the following steps
separately or in combination:
a. Type a new name for your VDisk.

Note: The name can consist of the letters A to Z, a to z, the numbers 0 to 9, and the
underscore. It can be between one and 15 characters in length. However, it cannot
start with a number or the word VDisk because this prefix is reserved for SVC
assignment only.

b. Select an alternate I/O group from the list to alter the I/O group to which it is assigned.
c. Set performance throttling for a specific VDisk. In the I/O Governing field, type a
number and select either I/O or MB from the list.
• I/O governing effectively throttles the amount of I/Os per second (or MBs per
second) that can be achieved to and from a specific VDisk. You might want to do
this if you have a VDisk that has an access pattern that adversely affects the
performance of other VDisks on the same set of MDisks. For example, it uses most
of the available bandwidth.
• If this application is highly important, then migrating the VDisk to another set of
MDisks might be advisable. However, in some cases, it is an issue with the I/O
profile of the application rather than a measure of its use or importance.
• The choice between I/O and MB as the I/O governing throttle should be based on
the disk access profile of the application. Database applications generally issue
large amounts of I/O but only transfer a relatively small amount of data. In this case,
setting an I/O governing throttle based on MBs per second does not achieve much.
It is better for you to use an I/O per second throttle. On the other extreme, a
streaming video application generally issues a small amount of I/O, but transfers
large amounts of data. In contrast to the database example, setting an I/O
governing throttle based on I/Os per second does not achieve much. Therefore, you
should use an MB per second throttle.
Click OK when you are done making changes.



Figure 10-111 Modifying a VDisk
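The renaming and I/O governing described above map onto the svctask chvdisk CLI
command. A sketch that throttles an example VDisk to 40 MBs per second; to our
understanding, omitting -unitmb makes the -rate value count I/Os per second instead:

IBM_2145:ITSOSVC01:admin>svctask chvdisk -rate 40 -unitmb VD_TEST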

Migrating a VDisk
To migrate a VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to migrate (Figure 10-95 on
page 333). Select Migrate a VDisk from the list and click Go.
2. The Migrating Virtual Disks-VDiskname panel (where VDiskname is the VDisk you
selected in the previous step) opens as shown in Figure 10-112. From the MDisk Group
Name list, select the MDG to which you want to reassign the VDisk. Specify the number of
threads to devote to this process (a value from 1 to 4).
The optional threads parameter allows you to assign a priority to the migration process.
A setting of 4 is the highest priority setting. If you want the process to take a lower priority
over other types of I/O, you can specify 3, 2, or 1.

Important: After a migration is started, you cannot manually stop it. Migration continues
until it is complete, unless it is suspended by an error condition or the VDisk
being migrated is deleted.

When you are done making your selections, click OK to begin the migration process.
3. You need to manually refresh your browser or close it and return to the Viewing Virtual
Disks panel periodically to see the MDisk Group Name column in the Viewing Virtual
Disks panel update to reflect the new MDG name.

Figure 10-112 Migrating a VDisk
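The CLI equivalent is svctask migratevdisk. A sketch with example names, using the
highest priority setting of four threads:

IBM_2145:ITSOSVC01:admin>svctask migratevdisk -vdisk VD_TEST -mdiskgrp MDG2 -threads 4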

Migrating a VDisk to an image mode VDisk


Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path.
This might be useful where the SVC is used as a data mover appliance.

To migrate a VDisk to an image mode VDisk, the following rules apply:


򐂰 The destination MDisk must be greater than or equal to the size of the VDisk.
򐂰 The MDisk specified as the target must be in an unmanaged state.
򐂰 Regardless of the mode that the VDisk starts in, it is reported as being in managed mode
during the migration.
򐂰 Both of the MDisks involved are reported as being in image mode during the migration.
򐂰 If the migration is interrupted by a cluster recovery, or by a cache problem, then the
migration will resume after the recovery completes.

To accomplish the migration, perform the following steps:


1. Select Migrate to an Image Mode VDisk from the drop-down list (Figure 10-95 on
page 333) and click Go.
2. The Migrate to Image Mode VDisk wizard launches (not shown here). Read the steps on
this panel and click Next.
3. Select the radio button to the left of the MDisk where you want the data to be migrated
(Figure 10-113). Click Next.



Figure 10-113 Migrate to image mode VDisk wizard: Select target MDisk

4. Select the MDG to which the MDisk will join (Figure 10-114). Click Next.

Figure 10-114 Migrate to image mode VDisk wizard: Select MDG

5. Select the priority of the migration by selecting the number of threads (Figure 10-115).
Click Next.

Figure 10-115 Migrate to image mode VDisk wizard: Select Threads

6. Verify that the information you specified is correct (Figure 10-116). If you are satisfied,
click Finish. If you want to change something, use the Back option.

Figure 10-116 Migrate to image mode VDisk wizard: Verify migration Attributes



7. The last panel (Figure 10-117) displays the details of the VDisk that you are migrating.

Figure 10-117 Migrate to image mode VDisk wizard: Progress of Migration
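The wizard corresponds to the svctask migratetoimage CLI command. A sketch; the VDisk,
target MDisk, and MDG names are examples, and the target MDisk must be unmanaged, as
stated in the rules above:

IBM_2145:ITSOSVC01:admin>svctask migratetoimage -vdisk VD_TEST -mdisk mdisk10 -mdiskgrp MDG_IMG -threads 4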

Shrinking a VDisk
The method that the SVC uses to shrink a VDisk is to remove the required number of extents
from the end of the VDisk. Depending on where the data actually resides on the VDisk, this
can be quite destructive. For example, you might have a VDisk that consists of 128 extents
(0 to 127) of 16 MB (2 GB capacity) and you want to decrease the capacity to 64 extents
(1 GB capacity). In this case, the SVC simply removes extents 64 to 127. Depending on the
operating system, there is no easy way to ensure that your data resides entirely on extents 0
through 63, so be aware that you might lose data.

Although easily done using the SVC, you must ensure that your operating system supports
shrinking, either natively or by using third-party tools, before using this function.

Dynamic shrinking of a VDisk is only supported when the VDisk is in use by:
򐂰 W2K and W2K3 for basic disks
򐂰 W2K and W2K3 with a special fix from Microsoft (Q327020) for dynamic disks.

In addition, we recommend that you always have a good current backup before you execute
this task.

Shrinking a VDisk is useful in certain circumstances, such as:


򐂰 Reducing the size of a candidate target VDisk of a PPRC relationship to make it the same
size as the source.
򐂰 Releasing space from VDisks to have free extents in the MDG, provided you do not use
that space any more and take precautions with the remaining data, as explained earlier.

Assuming your operating system supports it, perform the following steps to shrink a VDisk:
1. Perform any necessary steps on your host to ensure that you are not using the space you
are about to remove.
2. Select the radio button to the left of the VDisk you want to shrink (Figure 10-95 on
page 333). Select Shrink a VDisk from the list and click Go.
3. The Shrinking Virtual Disks VDiskname panel (where VDiskname is the VDisk you
selected in the previous step) opens as shown in Figure 10-118. In the Reduce Capacity
By field, enter the capacity you want to reduce. Select MB or GB accordingly. The final
capacity of the VDisk is the Current Capacity minus the capacity that you specify.

Note: Be careful with the capacity information. The Current Capacity field shows it in
MBs, while you can specify a capacity to reduce in GBs. SVC calculates 1 GB as being
1024 MB.

When you are done, click OK. The changes should become apparent on your host.

Figure 10-118 Shrinking a VDisk
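The CLI equivalent is svctask shrinkvdisksize. A sketch that removes 1 GB from an
example VDisk; as in the GUI, the value given is the capacity to remove, not the final size:

IBM_2145:ITSOSVC01:admin>svctask shrinkvdisksize -size 1 -unit gb VD_TEST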

Showing the MDisks


To show the MDisks that are used by a specific VDisk, perform the following steps:
1. Select the radio button to the left of the VDisk you want to view MDisk information about
(Figure 10-95 on page 333). Select Show the MDisks from the list and click Go.
2. You see a subset (specific to the VDisk you chose in the previous step) of the Viewing
Managed Disks panel (Figure 10-119).

Figure 10-119 Showing MDisks used by a VDisk



For information about what you can do on this panel, see 10.4.3, “Managed disks” on
page 299.

Showing the MDisk group


To show the MDG to which a specific VDisk belongs, perform the following steps:
1. Select the radio button to the left of the VDisk you want to view MDG information about
(Figure 10-95 on page 333). Select Show the MDisk Group from the list and click Go.
2. You see a subset (specific to the VDisk you chose in the previous step) of the Viewing
MDGs panel (Figure 10-120).

Figure 10-120 Showing an MDG for a VDisk

Showing the Host to which the VDisk is mapped


To show the host to which a specific VDisk is mapped, perform the following steps:
1. Select the radio button to the left of the VDisk for which you want to view host mapping
information (Figure 10-95 on page 333). Select Show the Host this VDisk is mapped to
from the list and click Go.
2. Specific to the VDisk you chose in the previous step, this shows you the host to
which the VDisk is mapped (Figure 10-121). Alternatively, you can use the procedure
described in “Showing VDisks mapped to a host” on page 354 to see all VDisk to Host
mappings.

Figure 10-121 Show Host to VDisk mapping
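The same mapping information is available from the CLI; both names below are examples.
The first command lists the hosts to which a VDisk is mapped, and the second lists all
VDisks mapped to a host:

IBM_2145:ITSOSVC01:admin>svcinfo lsvdiskhostmap LNX-BEN1
IBM_2145:ITSOSVC01:admin>svcinfo lshostvdiskmap Helium_1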

Showing capacity information
To show the capacity information of the cluster, perform the following steps:
1. In Figure 10-122, select Show Capacity Information from the drop-down list and click
Go. In the following Figure 10-123 you should then see the capacity information for this
cluster.

Figure 10-122 Select show Capacity Information

2. Figure 10-123 shows you the total MDisk capacity, the space in the MDGs, the space
allocated to the VDisks, and the total free space.

Figure 10-123 Show capacity information

10.6.2 Showing VDisks mapped to a host


To show the VDisks assigned to a specific host, perform the following steps:
1. From the SVC welcome page, click the Work with Virtual Disks option and then the
Virtual Disk to Host Mapping link (Figure 10-124).



Figure 10-124 VDisk to Host Mapping

2. Now you can see to which host each VDisk is mapped. If this is a long list, you can use
the additional filtering and sort options described in 10.1.1, “Organizing on-screen content”
on page 276.

Deleting VDisks from a host


In the same panel where you can view the VDisk to Host mapping (Figure 10-124 on
page 355) you can also delete a mapping. Select the radio button to the left of the Host and
VDisk combination you want to delete. Ensure that Delete from Mapping is selected from the
list. Click Go.
1. Confirm the selection you made on Figure 10-125 by clicking the Delete button.

Figure 10-125 Deleting VDisk to Host mapping

2. Now you are back to the panel shown in Figure 10-124. Check that this VDisk (LNX-BEN1)
is no longer mapped to this Host (Helium_1). Now you can assign this VDisk to another
Host as described in “Mapping a VDisk to a host” on page 344.

You have now completed the tasks required to manage virtual disks within an SVC
environment.

10.7 Managing Copy Services


See Chapter 11, “Copy Services: FlashCopy” on page 383, Chapter 12, “Copy Services:
Metro Mirror” on page 425 and Chapter 13, “Copy Services: Global Mirror” on page 489, for
more information about the tasks related to the management of Copy Services in the SVC
environment.

10.8 Service and maintenance using the GUI


This section discusses the various service and maintenance tasks that you can perform
within the SVC environment. To perform all of the following activities, on the SVC Welcome
page (Figure 10-126), select the Service and Maintenance option.

Note: You are prompted for a cluster user ID and password for some of the following tasks.

Figure 10-126 Service and Maintenance functions

10.8.1 Upgrading software


This section explains how to upgrade the SVC software.

Package numbering and version


The format for software upgrade packages is four positive integers separated by dots. For
example, a software upgrade package contains something similar to 4.1.0.0.



New software utility
A new software utility, which resides on the master console, checks the software levels in the
system against recommended levels that are documented on the support Web site. You
are informed if software levels are up-to-date, or if you need to download and install newer
levels. This information is provided after you log in to the SVC GUI. As you can see in the
middle of Figure 10-127, some new software is available. Use the provided link to download
the new software and get more information about it.

Important: To use this feature, the Master Console must be able to access the Internet. If
the Master Console cannot access the Internet because of restrictions such as a local
firewall, you will see a message: “The update server cannot be reached at this time.” Use
the Web link provided in the message for the latest software information.

Figure 10-127 Cluster Software Upgrade Status

Precautions before upgrade


In this section we describe precautions you should take before attempting an upgrade.

Important: Before attempting any SVC code update, please read and understand the SAN
volume controller concurrent compatibility and code cross reference matrix. Go to the
following site and click the link for Latest SAN volume controller code.
http://www-03.ibm.com/servers/storage/support/software/sanvc/downloading.html

During the upgrade, each node in your cluster will be automatically shut down and restarted
by the upgrade process. Since each node in an I/O group provides an alternate path to
VDisks, you need to make sure that all I/O paths between all hosts and SANs are working.

If you have not performed this check, some hosts might lose connectivity to their VDisks
and experience I/O errors when the SVC node providing that access is shut down during the
upgrade process (Figure 10-128).

Figure 10-128 Using datapath query commands to check all paths are online

You can check the I/O paths by using datapath query commands as shown here in
Figure 10-128. You do not need to check for hosts that have no active I/O operations to the
SANs during the software upgrade.

Tip: See the Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise
Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540 for more
information about datapath query commands.

It is also worth double-checking that your UPS power configuration is set up correctly
(even if your cluster is running without problems). Specifically:
򐂰 Ensure that your UPSs are all getting their power from an external source, and that they
are not daisy chained. In other words, make sure that each UPS is not supplying power to
another node’s UPS.
򐂰 Ensure that the power cable, and the serial cable coming from the back of each node goes
back to the same UPS. If the cables are crossed and are going back to different UPSs,
then during the upgrade, as one node is shut down, another node might also be
mistakenly shut down.

Procedure
To upgrade the SVC cluster software, perform the following steps:
1. Use the Run Maintenance Procedure in the GUI and correct all open problems first.
2. Back up the SVC Config as described in “Backup procedure” on page 379.
3. Back up the support data, just in case there is a problem during the upgrade that renders
a node unusable. This information could assist IBM support in determining why the
upgrade might have failed and help with a resolution. Example 10-1 shows the necessary
commands that need to be run. This command is only available in the CLI.

Example 10-1 Creating an SVC snapshot


IBM_2145:ITSOSVC01:admin>svc_snap
WRN: Busy copying files, please wait
snap_data collected in /dumps/snap_008057_060619_214025.tgz

Note: You can ignore the No such file or directory error.



Then, using the SVC GUI under the Software Maintenance → List Dumps → Software
Dumps, download the dump that was created in Example 10-1 and store it in a safe place
with the SVC Config that you created above (see Figure 10-129 and Figure 10-130).

Figure 10-129 Getting software dumps

Figure 10-130 Downloading software dumps

4. From the SVC Welcome page, click the Service and Maintenance option and then the
Upgrade Software link.
5. When prompted, enter the admin user ID and password, and click Yes if prompted with
security alerts concerning certificates.
6. On the Upgrade Software panel shown in Figure 10-131, you can either upload a new
software upgrade file or list the upgrade files. Click the Upload button to upload the latest
SVC cluster code.

Figure 10-131 Update Software panel

7. On the Software Upgrade (file upload) panel (Figure 10-132), type or browse to the
directory on your management workstation (for example, master console) where you
stored the latest code level and click Upload.

Figure 10-132 Software Upgrade (file upload)



8. The File Upload panel (Figure 10-133) is displayed if the file is uploaded. Click Continue.

Figure 10-133 File upload

9. The Software Upgrade panel (Figure 10-134) lists the available software packages. Make
sure the radio button next to the package you want to apply is selected. Click the Apply
button.

Figure 10-134 Software Upgrade

10.On the confirmation panel (Figure 10-135), click the Confirm button to begin the upgrade
process.

Figure 10-135 Confirm

The upgrade will start by upgrading one node in each I/O group.

11.The Software Upgrade Status panel (Figure 10-136) opens. Click the Check Upgrade
Status button periodically. This process might take a while to complete. When the software
is completely upgraded, you see the panel shown in Figure 10-138.

Figure 10-136 Software Upgrade Status

12.During the upgrade process, you can only issue informational commands. All task
commands, such as creating, modifying, mapping, and deleting objects (see the denied
VDisk creation in Figure 10-137), are denied. This applies to both the GUI and the CLI.

Figure 10-137 Denial of a task command during the software update



Figure 10-138 Upgrade complete

13.The new code is distributed and applied to each node in the SVC cluster. After installation,
each node is automatically restarted in turn.
Prior to SVC code 4.1, the concurrent code load (CCL) failed if any node in the cluster
failed to install the new code; that node (and all the other nodes) automatically reverted
to the previous code level. With SVC 4.1 and higher, the SVC cluster forces all other
nodes to complete the code update. The failed node can then be fixed and updated later.

Tip: Be patient! After the software update is applied, the first SVC node in a cluster will
update and install the new SVC code version shortly afterwards. If there is more than one
I/O group (up to four I/O groups are possible) in an SVC cluster, the second node of the
second I/O group will load the new SVC code and restart with a 10 minute delay to the first
node. A 30 minute delay between the update of the first node and the second node in an
I/O group ensures that all paths, from a multipathing point of view, are available again.

An SVC cluster update with one I/O group takes approximately one hour.

14.If you run into an error, go to the Analyze Error Log panel. Search for Software Install
completed. Select the radio button Sort by date with the newest first and then click
Perform. This should list the software install event near the top. For more information about
how to work with the Analyze Error Log panel, see 10.8.4, “Analyzing the error log” on page 368.
You might also find it worthwhile to capture information for IBM support to help you
diagnose what went wrong. We covered this in step 3 on page 358.

You have now completed the tasks required to upgrade the SVC software. Click the X icon in
the upper right corner of the display area to close the Upgrade Software panel. Do not close
the browser by mistake.
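For reference, the upload and apply steps can also be performed from the CLI with the
svcservicetask applysoftware command after copying the upgrade file to the cluster; the
file name below is an example, and you should verify the exact syntax in the command
reference for your code level:

IBM_2145:ITSOSVC01:admin>svcservicetask applysoftware -file IBM2145_INSTALL_4.1.0.0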

10.8.2 Running maintenance procedures
To run the maintenance procedures on the SVC cluster, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
Run Maintenance Procedures link.
2. Click Start Analysis as shown in Figure 10-139. This will analyze the cluster log and
guide you through the maintenance procedures.

Figure 10-139 Maintenance Procedures

3. This generates a new error log file named errlog_008057_060619_150353 in the
/dumps/elogs/ directory (Figure 10-140).
– errlog part of the file name is generic for all error log files.
– 008057 is the panel name of the current configuration node.
– 060619 is the date (YYMMDD).
– 150353 is the time.

Figure 10-140 Maintenance error log with unfixed errors

4. Click the error number in the Error Code column in Figure 10-140. This gives you the
explanation for this error as shown in Figure 10-141.



Figure 10-141 Maintenance: error code description

5. To perform problem determination, click Continue. If the problem occurred while
something was deliberately removed, just click OK. Your choices are shown in
Figure 10-142.

Figure 10-142 Maintenance procedures: fixing Stage 2

6. If the underlying problem is solved, the SVC maintenance procedure will prompt you to
click OK as shown in Figure 10-143.

Figure 10-143 Maintenance procedure: fixing Stage 3

7. The missing managed disk is now back and is included as shown in Figure 10-144.

Figure 10-144 Maintenance procedure: fixing Stage 4

8. The entry in the error log is now marked as fixed as shown in Figure 10-145. Click Exit.

Figure 10-145 Maintenance procedures: fixed

9. Click the X icon in the upper right corner of the display area in Figure 10-146 to close the
Run Maintenance Procedures panel. Do not close the browser by mistake.

Figure 10-146 Maintenance procedures: close

10.8.3 Setting error notification


To set up error notification, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
Set Error Notifications link.



2. On the Modify Error Notification Settings panel (Figure 10-147), select the level of
notification (default is None) to apply to both SNMP and e-mail alerting. Click Modify
Settings.

Figure 10-147 Setting error notification

3. Type the IP address of your SNMP Manager and community string to use (Figure 10-148).
Click Continue.

Figure 10-148 Set the SNMP settings

4. The Modifying Error Notification Settings panel now shows the current status as shown in
Figure 10-149.

Figure 10-149 Current Error Notification settings

5. Click the X icon in the upper right corner of the display area to close the Set Error
Notification panel. Do not close the browser by mistake.

10.8.4 Analyzing the error log


The following types of events and errors are logged in the error log:
򐂰 Events: State changes that are detected by the cluster software and that are logged for
informational purposes. Events are recorded in the cluster error log.
򐂰 Errors: Hardware or software problems that are detected by the cluster software and that
require some sort of repair. Errors are recorded in the cluster error log.
򐂰 Unfixed errors: Errors that were detected and recorded in the cluster error log and that
were not yet corrected or repaired.
򐂰 Fixed errors: Errors that were detected and recorded in the cluster error log and that were
subsequently corrected or repaired.



To display the error log for analysis, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance options and then the
Analyze Error Log link.
2. From the Error Log Analysis panel (Figure 10-150), you can choose either the Process or
Clear Log button.

Figure 10-150 Analyzing the error log

a. Select the appropriate radio buttons and click the Process button to display the log for
analysis. The Analysis Options and Display Options radio button boxes allow you to
filter the results of your log enquiry to reduce the output.
b. You can display the whole log, or you can filter the log so that only errors, events, or
unfixed errors are displayed. You can also sort the results by selecting the appropriate
display options. For example, you can sort the errors by error priority (lowest number =
most serious error) or by date. If you sort by date, you can specify whether the newest
or oldest error is to display at the top of the table. You can also specify the number of
entries you want to display on each page of the table.

c. Click the Log File Options radio button to use the existing log file or to generate a
fresh one. Using the existing log file displays entries that exist in the log file that was
last generated. If this is the first time you are using this option, no error log exists. To
obtain the latest status of your cluster, or if it is the first time you are using this option,
select the Generate a new error log file option. The errlog_008057_060619_150542
error log file is created in the /dumps/elogs/ directory and is ready for analysis
(Figure 10-151):
• errlog: This part of the file name is generic for all error log files.
• 008057: This is the panel name of the current configuration node.
• 060619: This is the date (YYMMDD).
• 150542: This is the time (HHMMSS).

Figure 10-151 Analyzing Error Log: Process



d. Click a Sequence Number; this gives you the detailed log of this error (Figure 10-152).

Figure 10-152 Analyzing Error Log: Detailed error analysis

e. Click the Clear Log button at the bottom of the panel in Figure 10-150 on page 369 to
clear the log. If the error log contains unfixed errors, a warning message is displayed
when you click Clear Log.
3. Click the X icon in the upper right corner of the display area to close the Analyze Error Log
panel. Do not close the browser by mistake.

10.8.5 Setting features


To change licensing feature settings, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance options and then the
Set Features link.
2. On the Featurization Settings panel (Figure 10-153), consult your license before you make
changes in this panel. If you purchased additional features (for example, FlashCopy or
PPRC) or if you increased the capacity of your license, make the appropriate changes.
Then click the Update Feature Settings button.

Figure 10-153 Setting features

3. You now see a license confirmation panel as shown in Figure 10-154. Review this panel
and ensure that you are in compliance. If you are in compliance, click I Agree to make the
requested changes take effect.

Figure 10-154 License agreement



4. You return to the Set Features panel (Figure 10-155), where your changes should be
reflected.

Figure 10-155 Featurization settings update

5. Click the X icon in the upper right corner of the display area to close the Set Features
panel. Do not close the browser by mistake.

10.8.6 Viewing the feature log


To view the feature log, which registers the events related to the SVC licensed features,
perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
View Feature Log link.
2. The Feature Log panel (Figure 10-156) opens. It displays the current feature settings and
a log of when changes were made.

Figure 10-156 Feature Log

3. Click the X icon in the upper right corner of the display area to close the View Feature Log
panel. Do not close the browser by mistake.

10.8.7 Listing dumps
To list the dumps that were generated, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
List Dumps link.
2. On the List Dumps panel (Figure 10-157), you see several dumps and log files that were
generated over time on this node. They include the configuration dump we generated in the
previous section. Click any of the available links (the underlined text in the table under the
List Dumps heading) to go to another panel that displays the available dumps. To see the
dumps on the other node, you must click Check other nodes.

Note: By default, the dump and log information that is displayed is available from the
configuration node. In addition to these files, each node in the SVC cluster keeps a
local software dump file. Occasionally, other dumps are stored on them. Click the
Check Other Nodes button at the bottom of the List Dumps panel (Figure 10-157) to
see which dumps or logs exist on other nodes in your cluster.

Figure 10-157 List Dumps



3. Figure 10-158 shows the list of dumps from the partner node.

Figure 10-158 List Dumps from the partner node

4. To copy a file from this partner node to the config node, you simply click the file you want
to copy as shown in Figure 10-159.

Figure 10-159 Copy dump files

After all the necessary files are copied to the SVC config node, click Cancel to finish the copy
operation, and Cancel again to return to the SVC config node. Now, for example, if you click
the Error Logs link, you should see information similar to that shown in Figure 10-160.

Figure 10-160 List Dumps: Error Logs



5. From this panel, you can perform either of the following tasks:
– Click any of the available log file links (indicated by the underlined text) to display the
log in complete detail as shown in Figure 10-161.

Figure 10-161 List Dumps: Error log detail

– Delete one or all of the dump or log files. To delete all, click the Delete All button.
To delete some, select the radio button or buttons to the right of the file and click the
Delete button. In Figure 10-162 you have to confirm the deletion by clicking
Confirm Delete.

Figure 10-162 Confirm Delete

6. Click the X icon in the upper right corner of the display area to close the List Dumps panel.
Do not close the browser by mistake.

10.9 Backing up the SVC configuration
The SVC configuration data is stored on all the nodes in the cluster. It is specially hardened
so that, in normal circumstances, the SVC should never lose its configuration settings.
However, in exceptional circumstances, this data might become corrupted or lost.

This section details the tasks that you can perform to save the configuration data from an
SVC configuration node and restore it. The following configuration information is backed up:
򐂰 Storage subsystem
򐂰 Hosts
򐂰 Managed disks (MDisks)
򐂰 Managed disk groups (MDGs)
򐂰 SVC nodes
򐂰 SSH keys
򐂰 Virtual disks
򐂰 VDisk-to-host mappings
򐂰 FlashCopy mappings
򐂰 FlashCopy consistency groups
򐂰 Mirror relationships
򐂰 Mirror consistency groups

Backing up the cluster configuration enables you to restore your cluster configuration in the
event that it is lost. However, only the data that describes the cluster configuration is backed
up. In order to back up your application data, you need to use the appropriate backup methods.

To begin the restore process, consult IBM Support to determine why you cannot access
your original configuration data.

The prerequisites for having a successful backup are as follows:


򐂰 All nodes in the cluster must be online.
򐂰 No object name can begin with an underscore (_).
򐂰 Do not run any independent operations that could change the cluster configuration while
the backup command runs.
򐂰 Do not make any changes to the fabric or cluster between backup and restore. If changes
are made, back up your configuration again or you might not be able to restore it later.

Note: We recommend that you make a backup of the SVC configuration data after each
major change in the environment, such as defining or changing VDisks, VDisk-to-host
mappings, and so on.

The output of the SVC configuration backup is a file with the name svc.config.backup.xml
that is stored in the C:\Program Files\IBM\svcconsole\cimom\backup\SVCclustername folder
in the SVC master console (where SVCclustername is the SVC cluster name of the
configuration from which you backed up). This differs from backing up the configuration using
CLI. The svc.config.backup.xml file is stored in the /tmp folder on the configuration node and
must be copied to an external and secure place for backup purposes.

Important: We strongly recommend that you change the default names of all objects to
non-default names. For objects with a default name, a warning is produced and the object
is restored with its original name and “_r” appended to it.



10.9.1 Backup procedure
To back up the SVC configuration data, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance option and then the
Backup Configuration link.
2. On the Backing up a Cluster Configuration panel (Figure 10-163), click the Backup
button.

Figure 10-163 Backing up a Cluster Configuration data

3. After the configuration backup is successfully done, you see the message as shown in
Figure 10-164. Make sure that you read, understand, act on, and document the
warning messages, since they can influence the restore procedure.

Figure 10-164 Configuration backup successful message and warnings

4. Click OK to close the Backing up a Cluster Configuration panel. Do not close the
browser by mistake.

Info: To avoid getting the CMMVC messages that are shown in Figure 10-164, you must
replace all the default names (for example, mdisk1, vdisk1, and so on).
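For reference, the CLI backup mentioned above consists of running svcconfig backup on
the cluster and then copying the resulting file off the configuration node. A sketch; the
cluster IP address is an example, and any SSH-based copy utility configured with your
cluster's SSH key can be used in place of pscp:

IBM_2145:ITSOSVC01:admin>svcconfig backup

C:\>pscp admin@9.43.86.117:/tmp/svc.config.backup.xml c:\backup\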

10.9.2 Restoring the SVC configuration


It is very important that you perform the configuration backup as described in “Backup
procedure” on page 379 periodically, and every time after you change the configuration of
your cluster.

You can carry out the restore procedure only under the direction of IBM Level 3 support.

10.9.3 Deleting the configuration backup files


This section details the tasks that you can perform to delete the configuration backup files
from the default folder in the SVC master console. You can do this if you have already copied
them to another external and secure place.

To delete the SVC Configuration backup files, perform the following steps:
1. From the SVC Welcome page, click the Service and Maintenance options and then the
Delete Configuration link.



2. On the Deleting a Cluster Configuration panel (Figure 10-165), click the OK button to
confirm the deletion. This deletes the C:\Program Files\IBM\svcconsole\cimom
\backup\SVCclustername folder (where SVCclustername is the SVC cluster name on
which you are working) on the SVC master console and all its contents.

Figure 10-165 Deleting a cluster configuration

3. Click Delete to confirm the deletion of the configuration backup data. See Figure 10-166.

Figure 10-166 Deleting a Cluster Configuration confirmation message

4. Click the X icon in the upper right corner of the display area to close the Deleting a Cluster
Configuration panel. Do not close the browser by mistake.


Chapter 11. Copy Services: FlashCopy


The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides
the capability to perform a point-in-time (PiT) copy of one or more VDisks.

In this chapter we describe how FlashCopy works on SVC, and we present examples on how
to configure and utilize FlashCopy.

11.1 FlashCopy
The challenge of creating a consistent copy of a data set that is constantly updated can be
met by using the FlashCopy function on the SVC. FlashCopy provides the capability to
perform an instantaneous point-in-time (PiT) copy of one or more VDisks. Since it is
performed at the block level, it is necessary to flush the cache and OS buffers prior to
executing the FlashCopy in order to ensure consistency at the application level.

11.1.1 How it works


FlashCopy works by defining a FlashCopy mapping consisting of one source VDisk together
with one target VDisk. Multiple FlashCopy mappings can be defined and PiT consistency can
be observed across multiple FlashCopy mappings using consistency groups; see
“Consistency groups” on page 387.

When FlashCopy is started, it makes a copy of a source VDisk to a target VDisk, and the
original contents of the target VDisk are overwritten. When the FlashCopy operation is
started, the target VDisk presents the contents of the source VDisk as they existed at the
single point in time (PiT) the FlashCopy was started. This is often also referred to as a
Time-Zero copy (T0).

When a FlashCopy is started, the source and target VDisks are instantaneously available.
This is possible because, when the mapping is started, bitmaps are created to govern and
redirect I/O to the source or target VDisk, respectively, depending on where the requested
block is present, while the blocks are copied in the background from the source to the
target VDisk.

For more details on background copy, see “Grains and the FlashCopy bitmap” on page 390.

For more details on how I/O to FlashCopy source and target VDisk is directed, see “I/O
handling” on page 402.

Both the source and target VDisks are available for read and write operations, even though
the background copy process has not yet completed copying the data from the source to the
target volumes.



Figure 11-1 explains the redirection of the host I/O towards the source and target VDisks.
In summary: when the FlashCopy command is issued, the copy is immediately available at
T0, after the bitmaps (metadata) are built, and read/write to the copy is possible. At T0 + t,
blocks that are not yet written to the target are read from the source, and before a write to
the source, the data is copied to the target. When a background copy is complete, the source
and target are logically independent, and the FlashCopy mapping can be deleted without
affecting the target.

Figure 11-1 Implementation of SVC FlashCopy

Overview of FlashCopy features


FlashCopy supports these features:
• The target is the time-zero copy of the source (known as FlashCopy mapping targets).
• The source VDisk and target VDisk are available (almost) immediately.
• Consistency groups are supported to enable FlashCopy across multiple VDisks.
• The target VDisk can be updated independently of the source VDisk.
• Bitmaps governing I/O redirection (the I/O indirection layer) are maintained in both nodes of
the SVC I/O group to prevent a single point of failure.
• FlashCopy is useful for backup, improved availability, and testing.

11.1.2 Practical uses for FlashCopy


The business applications for FlashCopy are many and various. An important use is
facilitating consistent backups of changing data. In this application, a FlashCopy is created to
capture a PiT copy. The resulting image is backed up to tertiary storage such as tape. After
the copied data is on tape, the FlashCopy target is redundant.

Different tasks can benefit from the use of FlashCopy. In the following sections, we describe
the most common situations.

Moving and migrating data


When you need to move a consistent data set from one host to another, FlashCopy can
facilitate this action with a minimum of downtime for the host application dependent on the
source VDisk.

It is very important to quiesce the application on the host and flush the application and OS
buffers, so that the new VDisk contains data that is “clean” to the application. Failing to do this
might result in the newly created VDisk being a mirrored copy of inconsistent data, and thus it
might not be usable by the application.



The cache on the SVC is also flushed, using the FlashCopy prepare command, prior to
performing the FlashCopy; see “Preparing” on page 399.

The created data set on the FlashCopy target is immediately available, as is the source
VDisk.

Backup
FlashCopy does not reduce the time a backup takes, but it allows you to create a PiT
consistent data set (across VDisks) with a minimum of downtime for your source host. The
FlashCopy target can then be mounted on a different host (or the backup server) and backed
up. With this procedure, the backup speed becomes less important, because the backup no
longer requires downtime for the host dependent on the source VDisks.

Restore
You can keep periodically created FlashCopy targets online to provide very fast restores of
specific files from the PiT consistent data set presented on the FlashCopy targets; the files
can simply be copied back to the source VDisk when a restore is needed.

When a background copy process has completed (that is, entered the copied state; see
“Idling_or_copied” on page 398), and a complete data set restore is needed, it is possible to
delete the FlashCopy mappings and create corresponding FlashCopy mappings in the
opposite direction. This is often referred to as a FlashBack procedure.

This procedure can be used to very quickly restore to the PiT consistent data set obtained
from the preceding FlashCopy.
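
For illustration only, the FlashBack procedure can be sketched with the CLI commands
described later in this chapter. Assuming a hypothetical mapping FCMapDB, whose source
DB_SRC was copied to target DB_TGT and which has reached the copied state (the mapping
and VDisk names here are our own, not taken from the examples in this chapter), a restore
might look like this:

svctask rmfcmap FCMapDB
svctask mkfcmap -source DB_TGT -target DB_SRC -name FCMapDBBack
svctask startfcmap -prep FCMapDBBack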

Application testing
You can test new applications and new operating system releases against a FlashCopy of
your production data. The risk of data corruption is eliminated, and your application does not
need to be taken offline for an extended period of time to perform the copy of the data.

Data mining is a good example of an area where FlashCopy can help you. Data mining can
now extract data without affecting your application.

11.1.3 FlashCopy mappings


In the SVC, FlashCopy occurs between a source VDisk and a target VDisk. The source and
target VDisks must be equal in size. The minimum granularity that the SVC supports for
FlashCopy is an entire VDisk; it is not possible to FlashCopy only part of a VDisk.

The source and target VDisks must both belong to the same SVC Cluster, but can be in
different I/O groups within that Cluster. SVC FlashCopy associates a source VDisk and a
target VDisk together in a FlashCopy mapping. Each VDisk can be a member of only one
FlashCopy mapping, and a FlashCopy mapping always has exactly one source and one
target VDisk. Therefore, it is not possible for a VDisk to simultaneously be the source for one
FlashCopy mapping and the target for another.

VDisks that are members of a FlashCopy mapping cannot have their size increased or
decreased while they remain members of the mapping. The SVC supports the creation of
enough FlashCopy mappings to allow every VDisk to be a member of a FlashCopy mapping.



Creating a FlashCopy mapping establishes the relationship between a source VDisk and a
target VDisk. FlashCopy mappings can be either stand-alone or members of a consistency
group. You can prepare, start, or stop either a stand-alone mapping or a consistency group.

Note: Once a mapping is in a consistency group, you can only operate on the group; you
can no longer prepare, start, or stop the individual mapping.

Figure 11-2 illustrates the concept of FlashCopy mapping.

Figure 11-2 FlashCopy mapping (source VDisk1 mapped to target VDisk1T)

11.1.4 Consistency groups


Consistency groups address the requirement to preserve data consistency across multiple
VDisks when applications have related data that spans multiple VDisks. A requirement for
preserving the integrity of data being written is to ensure that “dependent writes” are
executed in the application's intended sequence. Because the SVC provides PiT semantics
across the group, a self-consistent data set is obtained.

Every FlashCopy mapping is part of a consistency group; if no FlashCopy consistency group
is specified, the mapping belongs upon creation to the default group 0 (zero). The default
consistency group 0 is a pseudo consistency group, which means that no commands can be
directed at FlashCopy consistency group 0; it is intended for FlashCopy mappings that are to
be handled as single instances.

FlashCopy commands can be issued to a FlashCopy consistency group, which affects all
FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if not part of
a defined FlashCopy consistency group.



Figure 11-3 illustrates a consistency group consisting of two FlashCopy mappings.

Figure 11-3 FlashCopy consistency group (Consistency Group 1 contains FC_Mapping 1, source
VDisk1 to target VDisk1T, and FC_Mapping 2, source VDisk2 to target VDisk2T)

Dependent writes
To illustrate why it is crucial to use consistency groups when a data set spans multiple
VDisks, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is to be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update
has completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next. However, if the database log (updates 1 and 3) and the
database itself (update 2) are on different VDisks, and a FlashCopy mapping is started during
this update, you need to exclude the possibility that the database is copied slightly before the
database log, which would result in the target VDisks seeing writes (1) and (3) but not (2),
because the database VDisk was copied before write (2) completed.

In this case, if the database were restarted using the backup made from the FlashCopy target
disks, the database log would indicate that the transaction had completed successfully when,
in fact, it had not: the FlashCopy of the VDisk with the database file was started (its bitmap
was created) before the write reached the disk. The transaction is therefore lost, and the
integrity of the database is in question.

To overcome the issue of dependent writes across VDisks and create a consistent image of
the client data, it is necessary to perform the FlashCopy operation on multiple VDisks as an
atomic operation. To achieve this, the SVC supports the concept of consistency groups.



A FlashCopy consistency group can contain an arbitrary number of FlashCopy mappings up
to the maximum number of FlashCopy mappings supported by the SVC Cluster. FlashCopy
commands can then be issued to the FlashCopy consistency group and thereby
simultaneously for all FlashCopy mappings defined in the consistency group. For example,
when issuing a FlashCopy start command to the consistency group, all of the FlashCopy
mappings in the consistency group are started at the same time, resulting in a PiT copy which
is consistent across all of the FlashCopy mappings which are contained in the consistency
group.

Consistency group zero


For FlashCopy mappings where there is no need for the complexity of consistency groups,
SVC allows a FlashCopy mapping to be treated as an independent entity. In this case, the
FlashCopy mapping will become a member of the pseudo consistency group zero.

For FlashCopy mappings that are configured in this way, the prepare and start commands
are directed at the FlashCopy mapping name or FlashCopy mapping ID rather than at a
consistency group ID. A prepare or start command directed at a FlashCopy mapping that is a
member of any other consistency group is illegal and fails; likewise, the pseudo consistency
group zero itself cannot be prepared or started.

For more information, see “Preparing (pre-triggering) the FlashCopy mapping” on page 406.

Maximum configurations
Table 11-1 shows the FlashCopy properties and maximum configurations.

Table 11-1 FlashCopy properties and maximum configurations

FlashCopy property                 Maximum  Comment
FC mappings per SVC cluster        2048     Each FC mapping requires two VDisks, so the
                                            maximum number of FC mappings equals half the
                                            maximum number of VDisks.
FC consistency groups per cluster  128      An arbitrary limit policed by the SVC software.
FC VDisk capacity per I/O group    16 TB    There is a per I/O group limit of 16 TB on the
                                            quantity of source VDisk address space that can
                                            participate in FC mappings.
FC mappings per consistency group  512      Due to the time taken to prepare a consistency
                                            group with a large number of mappings.

11.1.5 FlashCopy indirection layer


The FlashCopy indirection layer governs the I/O to both the source and target VDisks when a
FlashCopy mapping is started; this is done using a FlashCopy bitmap. The purpose of the
FlashCopy indirection layer is to enable both the source and target VDisks for read and write
I/O immediately after the FlashCopy has been started.

To illustrate how the FlashCopy indirection layer works, we look at what happens when a
FlashCopy mapping is prepared and subsequently started.



When a FlashCopy mapping is started, the following sequence is applied:
• Flush write data in the cache onto the source VDisk, or VDisks if part of a consistency group.
• Put the cache into write-through mode on the source VDisk(s).
• Discard the cache for the target VDisk(s).
• Establish a sync point on all source VDisks in the consistency group (creating the
FlashCopy bitmap).
• Ensure that the indirection layer governs all I/O to the source and target VDisks.
• Enable the cache on both the source and target VDisks.

FlashCopy provides the semantics of a PiT copy, using the indirection layer which intercepts
I/Os targeted at either the source or target VDisks. The act of starting a FlashCopy mapping
causes this indirection layer to become active in the I/O path. This occurs as an atomic
command across all FlashCopy mappings in the consistency group. The indirection layer
makes a decision about each I/O. This decision is based upon:
• The VDisk and logical block address (LBA) to which the I/O is addressed
• Its direction (read or write)
• The state of an internal data structure, the FlashCopy bitmap

The indirection layer either allows the I/O through to the underlying storage, redirects the I/O
from the target VDisk to the source VDisk, or stalls the I/O while it arranges for data to be
copied from the source VDisk to the target VDisk. To explain in more detail which action is
applied for each I/O, we first look at the FlashCopy bitmap.

Grains and the FlashCopy bitmap


When data is copied from the source VDisk to the target VDisk, it is copied in units of address
space known as grains. In the SVC, the grain size is 256 KB. The FlashCopy bitmap contains
one bit for each grain. The bit records whether the associated grain has yet been split, that is,
copied from the source to the target. The rate at which the grains are copied from the source
VDisk to the target VDisk is called the copy rate. By default, the copy rate is 50%, though this
can be altered. For more information about copy rates, see “Background copy rate” on
page 400.
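
As a worked example (our own arithmetic, not a figure quoted from the product
documentation): with a 256 KB grain and one bit per grain, a 16 GB VDisk comprises
16 GB / 256 KB = 65,536 grains, so each copy of its FlashCopy bitmap occupies 65,536 bits,
or 8 KB.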

The FlashCopy indirection layer algorithm


Think of the FlashCopy indirection layer as the I/O traffic cop when a FlashCopy mapping is
active. Each I/O is intercepted and handled according to whether it is directed at the source
VDisk or the target VDisk, the nature of the I/O (read or write), and the state of the grain
(whether it has already been copied).



Figure 11-4 illustrates how the background copy runs, at a tunable rate, while I/Os are
handled according to the indirection layer algorithm: I/Os to data already copied are handled
as normal; reads to data not yet copied are redirected to the source volume; and writes to
data that is not yet copied on either volume cause a copy on demand, after which the write is
handled as normal. Full data access is retained during the copy.

Figure 11-4 I/O processing with FlashCopy

In the following topics, we describe how the FlashCopy indirection layer handles read and
write I/O respectively to the source and target vdisks.

Source reads
Reads of the source are always passed through to the underlying source disk.

Target reads
In order for FlashCopy to process a read from the target disk, it must consult its bitmap:
• If the data being read has already been copied to the target (the grain is split), the read is
sent to the target disk.
• If the data being read has not yet been copied (the grain is unsplit), the read is sent to the
source disk.

Clearly, this algorithm requires that, while this read is outstanding, no writes that would
change the data being read from the source are allowed to execute. The SVC satisfies this
requirement with a cluster-wide locking scheme.

Note: The current implementation of FlashCopy limits the number of concurrent reads to
an unsplit target grain to one. If more than one concurrent read to an unsplit target grain is
received by the FlashCopy mapping layer, then they are serialized.



Writes to the source or target
Where writes occur to the source or target in an area (grain) that has not yet been copied,
they are usually stalled while a copy operation copies the data (the grain) from the source to
the target, to maintain the illusion that the target contains its own copy. Once the copy is
successful, the grain is marked as split in the FlashCopy bitmap, and the original write I/O
continues as normal.

The SVC has specific optimization code that detects whether a write to the target is the size
of a complete grain; if it is, there is no need to copy the source area to the target first,
because it will be completely overwritten anyway. In this case, the new grain contents are
written to the target VDisk, and if this succeeds, the grain is marked as split in the FlashCopy
bitmap. If the write fails, the grain is not marked as split.

Summary of the FlashCopy indirection layer algorithm


In Table 11-2, the indirection layer algorithm is summarized.

Table 11-2 Summary of the FlashCopy indirection layer algorithm

VDisk accessed  Grain split?  Host read               Host write
Source          No            Read from source VDisk  Split grain (copy to target VDisk),
                                                      then write to source VDisk
Source          Yes           Read from source VDisk  Write to source VDisk
Target          No            Read from source VDisk  Split grain (copy to target VDisk),
                                                      then write to target VDisk
Target          Yes           Read from target VDisk  Write to target VDisk

Interaction with the cache


The copy-on-write process, which is applied by the FlashCopy indirection layer when a write
I/O addresses LBAs on the target VDisk that have not yet been copied (from the source
VDisk), can introduce significant latency into write operations.

To isolate the host application that issued the write I/O from this latency, the FlashCopy
indirection layer is placed logically below the cache. This means that the copy latency is
typically only seen on a destage from the cache, rather than for write operations from the host
application, which otherwise might be blocked waiting for the write to complete.



In Figure 11-5, we illustrate the logical placement of the FlashCopy indirection layer.

Figure 11-5 Logical placement of the FlashCopy indirection layer

11.1.6 FlashCopy rules


There is a per I/O group limit of 16 TB on the quantity of source and target VDisk address
space that can participate in FlashCopy mappings. This address space is allocated in units of
8 GB; that is to say, creating a FlashCopy mapping between a pair of VDisks whose size is
less than 8 GB consumes 8 GB of FlashCopy mapping address space.

For SVC 3.1, the maximum number of supported FlashCopy mappings is 2048 per SVC
cluster. This equals half the number of supported VDisks (4096 per SVC cluster), which
means that in the maximum configuration every VDisk can be part of a FlashCopy mapping,
as either a source or a target VDisk.



Here is an overview of the rules that apply to FlashCopy on the SVC:
• There is a one-to-one mapping of the source VDisk to the target VDisk.
• The source and target VDisks can be in different I/O groups, but must be within the same
cluster.
• The complete source VDisk is copied to the target VDisk.
• The minimum granularity is the entire VDisk.
• The source and target must be exactly equal in size.
• A VDisk can be part of only one FlashCopy mapping, as either source or target.
• FlashCopy cannot be performed incrementally.
• An existing FlashCopy mapping can be stopped.
• A FlashCopy mapping is persistent until it is explicitly unmapped (deleted).
• The size of the source and target VDisks cannot be altered (increased or decreased) after
the FlashCopy mapping is created.
• The maximum quantity of source VDisks per I/O group is 16 TB.

FlashCopy and image mode disks


You can use FlashCopy with an image mode VDisk. Since the source and target VDisks must
be exactly the same size when creating a FlashCopy mapping, a VDisk must be created with
the exact same size as the image mode VDisk. To accomplish this, use the command
svcinfo lsvdisk -bytes VDiskName. The size in bytes is then used to create the VDisk to be
used in the FlashCopy mapping.

In Example 11-1, we list the size of the image mode VDisk VDISK1 in bytes. Subsequently,
the VDisk VDISK1T is created specifying the same size.
Example 11-1 Listing the size of a VDisk in bytes and creating a VDisk of equal size.
IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk -bytes VDISK1
id 6
name VDISK1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 5
mdisk_grp_name MDGROUP1
capacity 53687091200
type image
formatted no
mdisk_id 2
mdisk_name MDISK2
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018200C47000000000000006
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
IBM_2145:ITSOSVC01:admin>svctask mkvdisk -size 53687091200 -unit b -name VDISK1T -mdiskgrp
MDGROUP2 -vtype striped -iogrp 0 -mdisk MDISK3
Virtual Disk, id [7], successfully created



Tip: Alternatively, the expand and shrink VDisk commands can be used to modify the size
as the commands support specification of the size in bytes. See “Expanding a VDisk” on
page 247 and “Shrinking a VDisk” on page 252 for more information.

An image mode VDisk can be used as either a FlashCopy source or target VDisk.

Note: VDisk extents are not used when the SVC verifies the size of a VDisk. The VDisk
size comparison is based on the size available to the host. Therefore, you need to query
the size of the VDisk using svcinfo lsvdisk -bytes, and create the corresponding VDisk
specifying the equal size in bytes.

The number of bytes is always rounded up to 512 bytes (the size of an LBA).

11.1.7 FlashCopy mapping events


In this section, we explain the series of events that modify the state of a FlashCopy mapping.
Figure 11-6, the FlashCopy mapping state diagram, shows an overview of the states that
apply to a FlashCopy mapping.

Here is an overview of a FlashCopy sequence of events:


1. Associate the source data set with a target location (one or more source and target
VDisks).
2. Create FlashCopy mapping for each source VDisk to the corresponding target VDisk.
The target VDisk must be equal in size to the source VDisk.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush cache for the source.
b. Discard cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.

Below each event (numbered in the diagram) that transitions a FlashCopy mapping from one
state to another, we provide an explanation of that event.



The diagram shows the states idle or copied, preparing, prepared, copying, stopped, and
suspended, together with the numbered events (1 through 6, plus 4a/4b and 6b) that cause
the transitions between them.

Figure 11-6 FlashCopy mapping state diagram

• 1. Create: A new FlashCopy mapping is created by specifying the source VDisk and the
target VDisk. The operation fails if either the source or target VDisk is already a member
of a FlashCopy mapping. The operation also fails if the source and target VDisks are not
equal in size.
• 2. Prepare: The prepare command is directed either to a consistency group of FlashCopy
mappings or to a stand-alone FlashCopy mapping. The prepare command places the
FlashCopy mappings in the preparing state.

Important: The act of preparing for start might corrupt any data that previously resided
on the target VDisk, because cached writes are discarded. Even if the FlashCopy
mapping is never started, the data from the target might be logically changed by the act
of preparing for start.

• 3. Flush done: The FlashCopy relationship moves from the preparing state to the
prepared state automatically after all cached data for the source is flushed and all cached
data for the target is invalidated.

Note: If the flush of data from the cache cannot be completed, then the FlashCopy
mapping enters the stopped state.



• 4. Start: After all of the FlashCopy mappings in a consistency group are in the prepared
state, the FlashCopy mappings can be started. This is often referred to as triggering the
FlashCopy. The moment of starting is the T0 of the PiT copy.
During the processing of the start command, the following actions occur in sequence:
a. New reads and writes to all source VDisks in the consistency group are paused in the
cache layer until all ongoing reads and writes below the cache layer are completed.
b. After all FlashCopy mappings in the consistency group are paused, internal metadata
is set to allow the FlashCopy operation, creating the FlashCopy bitmap.
c. After all FlashCopy mappings in the consistency group have their metadata set, read
and write operations are unpaused on the source VDisks.
d. The target VDisks are brought online.
As part of the start command, read and write caching is enabled for both the source and
target VDisks.
• 5. Copy completed: After every grain of the source VDisk is split (copy progress is
100%), the FlashCopy mapping enters the copied state.
• 6. Delete: This event requests that the specified FlashCopy mapping be deleted. When a
FlashCopy mapping that is in the copied state is deleted, the data on the target VDisk is
not affected, because the source and target VDisks are independent in the copied state.

Special FlashCopy events


Here we describe some special FlashCopy events:
• 4a/4b. Bitmap offline/online: If access to both SVC nodes in the I/O group to which the
source VDisk belongs is lost while in the copying state, the FlashCopy mapping enters the
suspended state, and access to both the source and target VDisks in the FlashCopy
mapping is suspended. This happens because the FlashCopy bitmap, on which the
indirection layer (and the background copy process) depends, becomes inaccessible.
– When the FlashCopy bitmap becomes available again (at least one of the SVC nodes
in the I/O group is accessible), the FlashCopy mapping returns to the copying state,
access to the source and target VDisks is restored, and the background copy process
is resumed.
Unflushed data that was written to the source or target before the FlashCopy was
suspended is pinned in the cache, consuming resources, until the FlashCopy mapping
leaves the suspended state.
– Normally, two copies of the FlashCopy bitmap are maintained (in non-volatile
memory), one on each of the two SVC nodes making up the I/O group of the source
VDisk.
If only one of the SVC nodes in the I/O group to which the source VDisk belongs goes
offline, the FlashCopy mapping continues in the copying state with a single copy of the
FlashCopy bitmap.
When the failed SVC node recovers, or a replacement SVC node is added to the I/O
group, up-to-date FlashCopy bitmaps are re-established on the resuming SVC node,
which again provides a redundant location for the FlashCopy bitmaps.



Note: If both nodes in the I/O group to which the target VDisk belongs become
unavailable, then host access to the target VDisk will not be possible.

This is standard SVC behavior and is unaffected by FlashCopy. The FlashCopy state is
also unaffected, and any background copy will continue, provided that the I/O group to
which the source VDisk belongs is available, since that is where the FlashCopy bitmap is
located.

• 6b. Forced Delete: If a FlashCopy mapping in the stopped state is to be deleted, the
-force flag must be used.
Deleting a FlashCopy mapping in the stopped state can allow unflushed write data from
the cache to be destaged to what was the target VDisk. This does not affect the data
integrity of the system, because following a forced delete, nothing can be inferred about
the contents of the target VDisk; the data contained in the target VDisk could be anything.
The destaging of old data to what was the target VDisk does not affect the future use of
the VDisk: any new data is written over this old data, in the cache or on disk.
• Modify: A FlashCopy mapping has two parameters that can be modified (apart from
renaming it): the background copy rate and the consistency group.
The background copy rate can be modified in any state, regardless of whether the
FlashCopy mapping is part of a consistency group. However, attempting to modify the
consistency group in any state other than idle_or_copied or stopped fails.
• Stop: There are two mechanisms by which a FlashCopy mapping can be stopped: a user
command or an I/O error. When a FlashCopy mapping enters the stopped state, the
target VDisk is taken offline.

11.1.8 FlashCopy mapping states


In this section, we explain the states of a FlashCopy mapping in more detail.

Idling_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping
exists between the source and target, but they behave as independent VDisks in this state.

Copying
The FlashCopy indirection layer governs all I/O to the source and target VDisks while the
background copy is running.

Reads and writes are executed on the target as though the contents of the source were
instantaneously copied to the target during the start command.

The source and target can be independently updated. Internally, the target depends on the
source for some tracks.

Read and write caching is enabled on the source and the target.

Stopped
The FlashCopy was stopped either by user command or by an I/O error.

When a FlashCopy mapping is stopped, any useful data in the target VDisk is lost. Because
of this, while the FlashCopy mapping is in this state, the target VDisk is in the offline state.



To regain access to the target, the mapping must be started again (the previous FlashCopy is
lost) or the FlashCopy mapping must be deleted.

While in the stopped state, any data that was written to the target VDisk and was not flushed
to disk before the mapping was stopped, is pinned in the cache. It cannot be accessed, and
takes up space in the cache. This data is destaged after a subsequent delete command or
discarded during a subsequent prepare command.

The source VDisk is accessible, and read and write caching is enabled for the source.

Suspended
The I/O group to which the source VDisk belongs became inaccessible while the FlashCopy
mapping was in the copying state, which means that access to the FlashCopy bitmap is lost.

As a consequence, both the source and target VDisks are offline. The background copy
process is halted.

When the FlashCopy bitmap becomes available again, the FlashCopy mapping returns to the
copying state. Then, access to the source and target VDisks are restored, and the
background copy process is resumed.

Unflushed data that was written to the source or target before the FlashCopy was suspended
is pinned in the cache, consuming resources, until the FlashCopy mapping leaves the
suspended state.

Preparing
Since the FlashCopy function is placed logically below the cache to anticipate any write
latency problem, it demands no read or write data for the target and no write data for the
source in the cache at the time that the FlashCopy operation is started. This ensures that the
resulting copy is consistent.

Performing the necessary cache flush as part of the start command unnecessarily delays
the I/Os received after the start command is executed, since these I/Os need to wait for the
the cache flush to complete.

To overcome this problem, SVC FlashCopy supports the prepare command, which prepares
for a FlashCopy start while still allowing I/Os to continue to the source VDisk.

In the preparing state, the FlashCopy mapping is prepared by the following steps:
• Flushing any modified write data associated with the source VDisk from the cache. Read
data for the source is left in the cache.
• Placing the cache for the source VDisk into write-through mode, so that subsequent writes
wait until data has been written to disk before completing the write command received
from the host application.
• Discarding any read or write data associated with the target VDisk from the cache.

While in this state, writes to the source VDisk experience additional latency because the
cache is operating in write through mode. While the FlashCopy mapping is in this state, the
target VDisk is in the offline state.

Before starting the FlashCopy mapping, it is important that any caches at the host level, for
example, buffers in the host OS or application, are also instructed to flush any outstanding
writes to the source VDisk. Flushing for specific operating systems or applications is beyond
the scope of this redbook.



Prepared
When in the prepared state, the FlashCopy mapping is ready to perform a start. While the
FlashCopy mapping is in this state, the target VDisk is in the offline state.

In the prepared state, writes to the source VDisk experience additional latency because the
cache is operating in write through mode.

Summary of FlashCopy mapping states


In Table 11-3, the various FlashCopy mapping states and the corresponding state of the
source and target VDisks are listed.

Table 11-3 FlashCopy mapping state summary

State          Source online/offline  Source cache   Target online/offline  Target cache
Idling/Copied  Online                 Write-back     Online                 Write-back
Copying        Online                 Write-back     Online                 Write-back
Stopped        Online                 Write-back     Offline                -
Suspended      Offline                Write-back     Offline                -
Preparing      Online                 Write-through  Offline                -
Prepared       Online                 Write-through  Offline                -

11.1.9 Background copy rate


A FlashCopy mapping has a background copy rate property. This is expressed as a
percentage and can take values between 0 and 100.

The background copy rate can be changed when the FlashCopy mapping is in any state.

If a value of 0 is specified, then background copy is disabled. This is equivalent to a NOCOPY
option, which is suitable for short-lived FlashCopy mappings that are used for backup
purposes only. Because the source data set is not expected to change much during the
lifetime of the FlashCopy mapping, it is more efficient in terms of managed disk I/Os not to
perform a background copy.

The relationship of the background copy rate value to the attempted number of grains to be
split (copied) per second is shown in Table 11-4.

Table 11-4 Background copy rate

User percentage  Data copied per second  Grains per second
1-10             128 KB                  0.5
11-20            256 KB                  1
21-30            512 KB                  2
31-40            1 MB                    4
41-50            2 MB                    8
51-60            4 MB                    16
61-70            8 MB                    32
71-80            16 MB                   64
81-90            32 MB                   128
91-100           64 MB                   256

The grains per second numbers represent the maximum number of grains the SVC will copy
per second assuming that the bandwidth to the MDisks can accommodate this.
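
As a worked example of what these rates imply (our own arithmetic, assuming the back-end
bandwidth is available): at the default copy rate of 50%, the SVC attempts to copy 8 grains, or
2 MB, per second, so the background copy of a 100 GB source VDisk takes roughly
102,400 MB / 2 MBps = 51,200 seconds, or about 14 hours. In the 91-100% band, the same
copy takes 102,400 MB / 64 MBps = 1,600 seconds, or under 27 minutes.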

The SVC is unable to achieve these copy rates if insufficient bandwidth is available from the
SVC nodes to the physical disks making up the managed disks, after taking into account the
requirements of foreground I/O.

If this situation arises, then background copy I/O contends for resources on an equal basis
with I/O arriving from hosts. Both tend to see an increase in latency, and a consequent
reduction in throughput, compared with the situation where bandwidth is not limited.
Degradation is graceful.

Both background copy and foreground I/O continue to make forward progress, and do not
stop, hang, or cause the node to fail. The background copy is performed by one of the nodes
belonging to the I/O group in which the source VDisk resides. This responsibility is failed over
to the other node in the I/O group in the event of the failure of the node performing the
background copy.

The background copy is performed “backwards”; that is, it starts with the grain containing the
highest LBAs and works backward toward the grain containing LBA 0. This is done to avoid
any unwanted interactions with sequential write streams from the application.

Important: We do not recommend setting background copy rates higher than 91% unless
this is tested in your environment, since it can severely impact the response time of
applications.

11.1.10 Synthesis
The FlashCopy function in the SVC simply creates a copy of a VDisk. All the data in the
source VDisk is copied to the destination VDisk, including operating system control
information as well as application data and metadata.

Some operating systems are unable to use FlashCopy without an additional step which is
termed synthesis. In general, synthesis performs some transformation on the operating
system metadata in the target VDisk so that the operating system can use the disk. Operating
system specifics are discussed in Appendix A, “Copy services and open systems” on
page 663.

11.1.11 Metadata management


A bitmap is maintained, with one bit for each grain of the FlashCopy mapping.

The bitmap is maintained in non-volatile storage on both SVC nodes of the I/O group to which
the source VDisk belongs. While both of these nodes are functioning members of the cluster,
the two copies of the bitmap are updated and kept consistent with one another.



Other SVC nodes, which are not members of the I/O group for the source VDisk, maintain a
volatile pessimistic bitmap of the FlashCopy mapping. Pessimistic means that the bitmap
might hold a bit which indicates that the grain must be copied, where in fact it does not need
to be copied.

Access to grains that have not yet been copied is coordinated by the source extent owner
only. The actual reads and writes are performed on the node that wants to access the grain.

No provision is made for storing bitmaps within the back-end storage.

11.1.12 I/O handling


To understand how the SVC cluster behaves in the presence of errors, it is necessary to
describe how the FlashCopy I/O path operates. It is important to note that the binding of
VDisks to I/O groups affects only the cache and layers above the cache. Below the cache in
the SVC software stack, VDisks are available for I/O on all nodes.

As mentioned earlier, the background copy is performed by one of the nodes belonging to the
source VDisk’s I/O group. It should be clear that any of the nodes in an SVC cluster might
want to submit an I/O from the FlashCopy layer to a VDisk.

It is assumed that the nodes and managed disks in the cluster have complete connectivity. If
the nodes comprising the source I/O group do not have access to the managed disk extents
comprising the target VDisk, then an I/O error occurs, and the FlashCopy mapping is
probably stopped.

Similarly, the nodes comprising the target I/O group must have access to the managed disk
extents comprising the source VDisk.

11.1.13 Serialization of I/O by FlashCopy


In general, the FlashCopy function in SVC introduces no explicit serialization into the I/O
path. Therefore, many concurrent I/Os are allowed to the source and target VDisks.

However, there is a lock for each grain. The lock can be taken shared or exclusive, and is
taken in the following modes under the following conditions:
• The lock is taken shared for the duration of a read from the target VDisk that touches a
grain that is yet to be split.
• The lock is taken exclusive during a grain split. This happens before FlashCopy actions
any destage (or write-through) from the cache to a grain that is yet to be split; the destage
waits for the grain to be split. The lock is held during the grain split and released before
the destage is processed.

If the lock is held shared, and another process wants to take the lock shared, then this
request is granted unless a process is already waiting to take the lock exclusive.

If the lock is held shared and it is requested exclusive, then the requesting process must wait
until all holders of the shared lock free it.

Similarly, if the lock is held exclusive, then a process wanting to take the lock in either shared
or exclusive mode must wait for it to be freed.



11.1.14 Error handling
When a FlashCopy mapping is not copying, the FlashCopy function does not affect the
handling or reporting of errors in the I/O path. Error handling and reporting are affected by
FlashCopy only when a FlashCopy mapping is copying.

We describe these scenarios in the following sections.

Node failure
Normally, two copies of the FlashCopy bitmaps are maintained, one on each of the two nodes
making up the I/O group of the source VDisk.

When a node fails, one copy of the bitmap for every FlashCopy mapping whose source VDisk
is a member of the failing node's I/O group becomes inaccessible. FlashCopy continues, with
a single copy of each FlashCopy bitmap stored in non-volatile storage in the remaining node
of the source I/O group. The cluster metadata is updated to indicate that the missing node no
longer holds up-to-date bitmap information.

When the failing node recovers or a replacement node is added to the I/O group, up-to-date
bitmaps are re-established on the new node. Once again, it provides a redundant location for
the bitmaps.

If access to both nodes in an I/O group is lost, or if access to the single remaining valid copy
of a FlashCopy bitmap is lost, then any FlashCopy mappings that were in the copying state,
and for which the source was in the lost I/O group, enter the suspended state. As stated,
access to both the source and target VDisks in the FlashCopy mapping is suspended.

If both nodes in the I/O group to which the target VDisk belongs become unavailable, then
host access to the target VDisk is not possible. This is standard SVC behavior and is
unaffected by FlashCopy.

Path failure (path offline state)


In a fully functioning cluster, all nodes have a software representation of every VDisk in the
cluster within their application hierarchy.

Since the SAN which links the SVC nodes to each other, and to the managed disks, is made
up of many independent links, it is possible for some subset of the nodes to be temporarily
isolated from some of the managed disks. When this happens, the managed disks are said to
be path offline on some nodes.

Note: Other nodes might see the managed disks as online, because their connection to
the managed disks is still functioning.

When a managed disk enters the path offline state on an SVC node, all the VDisks that have
any extent on that managed disk also become path offline, again only on the affected nodes.
When a VDisk is path offline on a particular SVC node, host access to that VDisk through that
node fails with SCSI sense indicating offline.

Path offline for the source VDisk


If a FlashCopy mapping is in the copying state and the source VDisk goes path offline, then
this path offline state is propagated to both source and target VDisks.

Again, note that path offline is a state which exists on a per-node basis. Other nodes might
not be affected.



If the source VDisk comes back online, then both the source and target VDisks are brought
back online.

Path offline for the target VDisk


If the target VDisk goes path offline but the source VDisk is still online, then above FlashCopy
in the application stack, only the target VDisk is in the path offline state. The source VDisk
remains online.

I/O errors caused by path failures


Special handling is invoked when all of the following conditions apply:
• A FlashCopy mapping is copying.
• The source has not suffered a path failure (it is online).
• The target has suffered a path failure (it is path offline).
• A copy operation is requested, either by:
– A background copy operation
– A host I/O to the source (but specifically not to the target)
• The copy operation fails when the write is attempted to the target.

In this case, the FlashCopy mapping is placed into the stopped state. I/O to the source is
allowed to continue unaffected by FlashCopy. The target VDisk is held offline, and all I/O to it
fails.

Although this situation arises on just one node, the FlashCopy mapping state is held on a
cluster-wide basis and is propagated to all nodes.

In all other cases, I/Os failing due to path failures simply result in the original I/O being failed
because of the path failure.

11.1.15 Asynchronous notifications


FlashCopy raises informational error logs when mappings or consistency groups make
certain state transitions, as detailed below. These state transitions occur as a result of
configuration events that complete asynchronously, and the informational errors can be used
to generate Simple Network Management Protocol (SNMP) traps to notify the user. Other
configuration events complete synchronously, and no informational errors are logged as a
result of these events.
• PREPARE_COMPLETED: This is logged when the FlashCopy mapping or consistency
group enters the prepared state as a result of a user request to prepare. The user can now
start (or stop) the mapping or group.
• COPY_COMPLETED: This is logged when the FlashCopy mapping or consistency group
enters the idle/copied state when it was previously in the copying state. This indicates that
the target disk now contains a complete copy and no longer depends on the source.
• STOP_COMPLETED: This is logged when the FlashCopy mapping or consistency group
enters the stopped state as a result of a user request to stop. It is distinct from the error
that is logged when a mapping or group enters the stopped state as a result of an I/O error.



11.2 FlashCopy commands
In this section we explain the various commands used to create, modify, and delete
FlashCopy mappings. For complete details about the FlashCopy commands, see the IBM
TotalStorage Virtualization Family SAN Volume Controller: Command-Line Interface User's
Guide, SC26-7544.

11.2.1 Creating a FlashCopy mapping


To create a FlashCopy mapping, we use the command svctask mkfcmap.

svctask mkfcmap
The svctask mkfcmap command enables you to create a new FlashCopy mapping, which
maps a source VDisk to a target VDisk ready for subsequent copying.

When executed, this command creates a new FlashCopy mapping logical object, for which
you must specify the source and target VDisks; the mapping persists until it is deleted. The
target VDisk must be identical in size to the source VDisk; otherwise, the command fails.
Neither the source VDisk nor the target VDisk can already be part of an existing mapping,
because a VDisk can be a source or target VDisk in one, and only one, FlashCopy mapping.

When created, the mapping can be started at the time the PiT copy is required.

Upon creation, the FlashCopy mapping can be named, made a member of a consistency
group, and given a background copy rate. If these are not specified, the FlashCopy mapping
is named fcmap#, is a member of the pseudo consistency group 0, and has a background
copy rate of 50%. The parameters can be changed later using svctask chfcmap, if needed.

11.2.2 Modifying the mapping


To change a FlashCopy mapping we use the command svctask chfcmap.

svctask chfcmap
This command allows you to modify the attributes of an existing mapping.

When executed, this command can change the name of the mapping, the copy rate, or the
consistency group. When modifying the name of a mapping, you cannot modify any of the
other attributes at the same time; the name change is mutually exclusive with the other
modifications.

The consistency group to which the mapping belongs can only be modified when the
mapping is inactive; that is, it has not been triggered or, if it has been triggered, the copy has
run to completion. Similarly, if the target consistency group is active, the mapping cannot be
moved into it.

11.2.3 Deleting the mapping


To delete a FlashCopy mapping, we use the command svctask rmfcmap.

svctask rmfcmap
This command is used to delete an existing FlashCopy mapping.

When the command is executed, it attempts to delete the FlashCopy mapping specified. If the
FlashCopy mapping is active, the command fails unless the -force flag is specified.



Deleting a mapping only deletes the logical relationship between the two VDisks; it does not
affect the VDisks themselves. However, deleting an active FlashCopy mapping with the
-force flag renders the data on the target VDisk inconsistent.
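
As a minimal sketch, deleting an inactive mapping, and forcing the deletion of an active one,
might look like the following (the mapping names are reused from the scenario in 11.3;
remember that forcing the deletion of an active mapping renders the target VDisk data
inconsistent):

svctask rmfcmap FCMap1
svctask rmfcmap -force FCMap2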

11.2.4 Preparing (pre-triggering) the FlashCopy mapping


To prepare a FlashCopy mapping, we use the command svctask prestartfcmap.

svctask prestartfcmap
This command prepares a mapping for starting. It flushes the cache of any data destined for
the source VDisk, and forces the cache into write through until the mapping is triggered.

When executed, this command prepares a single FlashCopy mapping. The prepare step
ensures that any data that resides in the cache for the source VDisk is first flushed to disk.
This ensures that when the copy is made it is consistent with what the operating system
thinks is on disk.

When the command is issued, the FlashCopy mapping enters the preparing state, and upon
completion (when in write-through), it enters the prepared state. At this point, the FlashCopy
mapping is ready for triggering.

Preparing, and the subsequent triggering, is usually performed on a consistency group basis.
Only mappings belonging to consistency group 0 can be prepared on their own.
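
As a minimal sketch, preparing the stand-alone mapping FCMap3 from the scenario in 11.3
might look like this:

svctask prestartfcmap FCMap3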

11.2.5 Preparing (pre-triggering) the FlashCopy consistency group


To prepare a FlashCopy consistency group, we use the command svctask
prestartfcconsistgrp.

svctask prestartfcconsistgrp
This command prepares all FlashCopy mappings in the consistency group for triggering. It
flushes the cache of any data destined for the source VDisks, and forces the cache into write
through until the mapping is triggered.

When executed, this command prepares all FlashCopy mappings in the consistency group.
The prepare step ensures that any data that resides in the cache for the source VDisks is first
flushed to disk. This ensures that when the copy is made, it is consistent with what the
operating system thinks is on the disks.

When the command is issued, the FlashCopy mappings in the consistency group enter the
preparing state, and upon completion (when all are in write-through), the consistency group
enters the prepared state. At this point, the FlashCopy consistency group is ready for
triggering.

11.2.6 Starting (triggering) FlashCopy mappings


The command svctask startfcmap is used to start a single FlashCopy mapping.

svctask startfcmap
The command svctask startfcmap triggers a FlashCopy mapping. When invoked, a PiT
copy of the source VDisk is created on the target VDisk.



The FlashCopy mapping must be in the prepared state prior to triggering, unless you run this
command with the optional -prep flag, which prepares the mapping and then triggers the
FlashCopy as soon as the preparation is complete. Note that this means the timing of the
trigger is under the system's control. Because the prepare step might take some time to
complete before the trigger is executed, this can prolong the period for which your application
must be quiesced.

If you want to control the triggering, you should use the prepare command first. See svctask
prestartfcmap and svctask prestartfcconsistgrp.

When the FlashCopy mapping is triggered it enters the copying state. The way the copy
proceeds depends on the background copy rate attribute of the mapping.

If the mapping is set to 0% (NOCOPY), then only data that is subsequently updated on the
source is copied to the destination. This means that the target VDisk can only be used as a
backup copy while the mapping exists in the copying state. If the copy is stopped, the target
VDisk is not usable.

When the intention is to end up with a duplicate copy of the source at the target VDisk, it is
necessary to set the background copy rate greater than 0. This means that the system copies
all the data (even unchanged data) to the target VDisk, and eventually reaches the
idle/copied state. At this time, you can delete the FlashCopy mapping and create an
independent PiT copy of the source at the target VDisk.

11.2.7 Stopping the FlashCopy mapping


The command svctask stopfcmap is used to stop a FlashCopy mapping.

svctask stopfcmap
This command allows you to stop an active (copying) or suspended mapping. When
executed, this command stops a single FlashCopy mapping.

When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by
the SVC. The FlashCopy mapping needs to be reprepared or retriggered to bring the target
VDisk online again.

Note: Stopping a FlashCopy mapping should only be done when the data on the target
VDisk is of no interest.

When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline
by the SVC, regardless of the state of the mapping.
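
As a minimal sketch, stopping the stand-alone mapping FCMap3 from the scenario in 11.3,
and thereby discarding the data on its target VDisk, might look like this:

svctask stopfcmap FCMap3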

11.2.8 Stopping the FlashCopy consistency group


The command svctask stopfcconsistgrp is used to stop all FlashCopy mappings in a
FlashCopy consistency group.

svctask stopfcconsistgrp
This command allows you to stop an active (copying) or suspended consistency group. When
executed, this command stops all FlashCopy mappings in the consistency group.

When a FlashCopy consistency group is stopped, the target VDisks become invalid and are
set offline by the SVC. The FlashCopy consistency group needs to be reprepared or
retriggered to bring the target VDisks online again.
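
As a minimal sketch, stopping the consistency group FCCG1 from the scenario in 11.3, which
takes all of its target VDisks offline, might look like this:

svctask stopfcconsistgrp FCCG1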



11.2.9 Creating the FlashCopy consistency group
The command svctask mkfcconsistgrp is used to create a FlashCopy consistency group.

svctask mkfcconsistgrp
This command allows you to create a new FlashCopy consistency group. When executed, a
new FlashCopy consistency group is created, and the ID of the new group is returned. If you
do not specify a name for the FlashCopy consistency group, a name will automatically be
assigned.

11.2.10 Modifying the FlashCopy consistency group


The command svctask chfcconsistgrp is used to modify a FlashCopy consistency group.

svctask chfcconsistgrp
This command allows you to modify the name of an existing consistency group. When
executed, this command changes the name of the consistency group that is specified.
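
As a minimal sketch, assuming the -name flag for specifying the new name (check the
Command-Line Interface User's Guide for the authoritative syntax; FCCG1NEW is a name
we made up), renaming the group FCCG1 might look like this:

svctask chfcconsistgrp -name FCCG1NEW FCCG1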

11.2.11 Deleting the FlashCopy consistency group


The command svctask rmfcconsistgrp is used to delete a FlashCopy consistency group.

svctask rmfcconsistgrp
This command allows you to delete an existing consistency group.

When executed, this command deletes the consistency group specified. If there are
mappings that are members of the group, the command fails unless the -force flag is
specified.

If you want to delete all the mappings in the consistency group as well, you must first delete
the mappings, and then delete the consistency group.
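
As a minimal sketch, deleting the group FCCG1 from the scenario in 11.3 might look like one
of the following, depending on whether the group still contains mappings:

svctask rmfcconsistgrp FCCG1
svctask rmfcconsistgrp -force FCCG1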

11.3 FlashCopy scenario using the CLI


In the following scenario, we want to FlashCopy the following VDisks:
VDisk1: Database files
VDisk2: Database log files
VDisk3: Application files



Since data consistency is needed across VDisk1 and VDisk2, we create a consistency group
to handle the FlashCopy of VDisk1 and VDisk2. Because, in this scenario, the application
files are independent of the database, we create a stand-alone FlashCopy mapping for
VDisk3. The FlashCopy setup is illustrated in Figure 11-7.

Figure 11-7 FlashCopy scenario using the CLI (Consistency Group 1 contains FC_Mapping 1, VDisk1
to VDisk1T, and FC_Mapping 2, VDisk2 to VDisk2T; FC_Mapping 3, VDisk3 to VDisk3T, is stand-alone)

Setting up FlashCopy
In the following section, we assume that the target VDisks have already been created.

To set up the FlashCopy, you must perform the following steps:
• Create a FlashCopy consistency group:
– Name FCCG1
• Create the FlashCopy mapping for VDisk1:
– Source WIN_1
– Target WIN_2
– Name FCMap1
– Consistency group FCCG1
– Copy rate 60%
• Create the FlashCopy mapping for VDisk2:
– Source WIN_3
– Target WIN_4
– Name FCMap2
– Consistency group FCCG1
– Copy rate 60%
• Create the FlashCopy mapping for VDisk3:
– Source LNX_1
– Target LNX_copy_1
– Name FCMap3
– Copy rate 0%

Each of these steps is carried out using the CLI as detailed below.



In Example 11-2, the consistency group FCCG1 is created.

Example 11-2 Creating the FlashCopy consistency group


IBM_2145:ITSOSVC01:admin>svctask mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created

In Example 11-3, the FlashCopy mapping for VDisk1 is created.

Example 11-3 Create FlashCopy mapping for VDisk1


IBM_2145:ITSOSVC01:admin>svctask mkfcmap -source WIN_1 -target WIN_2 -name FCMap1 -consistgrp FCCG1
-copyrate 60
FlashCopy Mapping, id [0], successfully created

In Example 11-4, the FlashCopy mapping for VDisk2 is created.

Example 11-4 Create FlashCopy mapping for VDisk2


IBM_2145:ITSOSVC01:admin>svctask mkfcmap -source WIN_3 -target WIN_4 -name FCMap2 -consistgrp FCCG1
-copyrate 60
FlashCopy Mapping, id [1], successfully created

In Example 11-5, the FlashCopy mapping for VDisk3 is created.

Example 11-5 Create FlashCopy mapping for VDisk3


IBM_2145:ITSOSVC01:admin>svctask mkfcmap -source LNX_1 -target LNX_copy_1 -name FCMap3 -copyrate 0
FlashCopy Mapping, id [2], successfully created

In Example 11-6, we list the FlashCopy mappings to verify they were created as intended.

Example 11-6 Listing the FlashCopy mappings


IBM_2145:ITSOSVC01:admin>svcinfo lsfcmap -delim :
id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:group_id:group_
name:status:progress:copy_rate
0:FCMap1:0:WIN_1:9:WIN_2:1:FCCG1:idle_or_copied:0:60
1:FCMap2:1:WIN_3:8:WIN_4:1:FCCG1:idle_or_copied:0:60
2:FCMap3:4:LNX_1:5:LNX_copy_1:::idle_or_copied:0:0

Executing FlashCopy
Now that we have created the FlashCopy mappings and the consistency group, we are ready
to use the FlashCopy mappings in our environment.

When performing the FlashCopy on the VDisks with the database we want to be able to
control the PiT when the FlashCopy is triggered, in order to keep our quiesce time at a
minimum. To achieve this we prepare the consistency group in order to flush the cache for
the source VDisks.

In Example 11-7, we execute the command prestartfcconsistgrp. When listing the FlashCopy
consistency groups, we verify that the group has entered the prepared state.

Example 11-7 Prestart FlashCopy consistency group


IBM_2145:ITSOSVC01:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:ITSOSVC01:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 prepared

Now that the source VDisks are in write-through, we flush all database and OS buffers and
quiesce the database.

Immediately after the quiesce, we execute the command startfcconsistgrp as shown in
Example 11-8, and afterwards the database can be resumed. We have created a PiT
consistent copy of the database on the target VDisks.

Example 11-8 Start FlashCopy consistency group


IBM_2145:ITSOSVC01:admin>svctask startfcconsistgrp FCCG1

When executing the single FlashCopy mapping, we decide to let the SVC perform the
FlashCopy triggering as soon as the FlashCopy enters the prepared state. To do this, we
simply issue the command svctask startfcmap with the flag -prep.

In Example 11-9, the single FlashCopy mapping is executed, and we verify that the
FlashCopy mapping enters the copying state.

Example 11-9 Execution of single FlashCopy mapping


IBM_2145:ITSOSVC01:admin>svctask startfcmap -prep FCMap3
IBM_2145:ITSOSVC01:admin>svcinfo lsfcmap FCMap3
id 2
name FCMap3
source_vdisk_id 4
source_vdisk_name LNX_1
target_vdisk_id 5
target_vdisk_name LNX_copy_1
group_id
group_name
status copying
progress 0
copy_rate 0

We created the single FlashCopy mapping with the background copyrate set to 0%. This
means that, unless we issue a command to alter the copyrate, the mapping stays in the
copying state until it is stopped. This is often referred to as nocopy.
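
If we later want the mapping to complete in the background, the copyrate can be altered with
the svctask chfcmap command; as a sketch, using the mapping from this scenario:

IBM_2145:ITSOSVC01:admin>svctask chfcmap -copyrate 50 FCMap3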

To monitor the background copy progress of the FlashCopy mappings in the consistency
group, we issue the command svcinfo lsfcmapprogress for each FlashCopy mapping.

Alternatively, the copy progress can also be queried using the command svcinfo lsfcmap.
As shown in Example 11-10, both commands report that the background copy is 23%
complete.

Example 11-10 Monitoring background copy progress


IBM_2145:ITSOSVC01:admin>svcinfo lsfcmapprogress FCMap1
id progress
0 23
IBM_2145:ITSOSVC01:admin>svcinfo lsfcmapprogress FCMap2
id progress
1 23
IBM_2145:ITSOSVC01:admin>svcinfo lsfcmap -delim :
id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:group_id:group_
name:status:progress:copy_rate
0:FCMap1:0:WIN_1:9:WIN_2:1:FCCG1:idle_or_copied:23:60
1:FCMap2:1:WIN_3:8:WIN_4:1:FCCG1:idle_or_copied:23:60
2:FCMap3:4:LNX_1:5:LNX_copy_1:::idle_or_copied:0:0

When the background copy has completed, the FlashCopy mapping enters the
idle_or_copied state; when all FlashCopy mappings in a consistency group enter this
state, the consistency group also enters the idle_or_copied state.

In this state, the FlashCopy mapping can be deleted and the target disk used independently,
if, for example, another target disk is to be used for the next FlashCopy of the particular
source VDisk.

In Example 11-11, we verify the state of the FlashCopy consistency group, that the
background copy has completed (100% copied), and then we delete the FlashCopy mapping
FCMap2.

Example 11-11 Verifying the copied state and deleting a FlashCopy mapping
IBM_2145:ITSOSVC01:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status idle_or_copied
FC_mapping_id 0
FC_mapping_name FCMap1
FC_mapping_id 1
FC_mapping_name FCMap2
IBM_2145:ITSOSVC01:admin>svcinfo lsfcmapprogress FCMap1
id progress
0 100
IBM_2145:ITSOSVC01:admin>svcinfo lsfcmapprogress FCMap2
id progress
1 100
IBM_2145:ITSOSVC01:admin>svctask rmfcmap FCMap2
IBM_2145:ITSOSVC01:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status idle_or_copied
FC_mapping_id 0
FC_mapping_name FCMap1

11.4 FlashCopy scenario using the GUI


In the following section, we use the same scenario as described in “FlashCopy scenario using
the CLI” on page 408.

The scenario is that we want to FlashCopy the following VDisks:


VDisk1: Database files
VDisk2: Database log files
VDisk3: Application files

Since data consistency is needed across VDisk1 and VDisk2, we create a consistency group
to handle the FlashCopy of VDisk1 and VDisk2. Because, in this scenario, the application
files are independent of the database, we create a stand-alone FlashCopy mapping for
VDisk3. Refer to the FlashCopy setup, which is illustrated in Figure 11-7 on page 409.

Setting up FlashCopy
In the following discussion, we assume that the target VDisks have already been created.

To set up the FlashCopy, you must perform the following steps:


򐂰 Create a FlashCopy consistency group:
– Name FCCG1
򐂰 Create FlashCopy mapping for VDisk1:
– Source WIN_1
– Target WIN_2
– Name FCMap1
– Consistency group FCCG1
– Copyrate 60%
򐂰 Create FlashCopy mapping for VDisk2:
– Source WIN_3
– Target WIN_4
– Name FCMap2
– Consistency group FCCG1
– Copyrate 60%
򐂰 Create FlashCopy mapping for VDisk3:
– Source LNX_1
– Target LNX_copy_1
– Name FCMap3
– Copyrate 0%

Each of these steps is carried out using the GUI as described below.

Creating a FlashCopy consistency group


The first step is to create our FlashCopy consistency group for our VDisk1 and VDisk2 disks.
In the SVC GUI expand Manage Copy Services in the Task pane and select FlashCopy
Consistency Groups.

When prompted for filtering, we select Bypass Filter and click OK. (This will then show us all
the defined consistency groups, if any were created previously.)

Then, from the drop-down list, we select Create a Consistency Group and click Go, as shown
in Figure 11-8.

Figure 11-8 Select FlashCopy Consistency Groups

In Figure 11-9 we name the FlashCopy consistency group (FCCG1). Click OK.

Figure 11-9 FlashCopy consistency group name

When prompted with the mapping results in Figure 11-10, we click Close.

Figure 11-10 Mapping results for created consistency group

In Figure 11-11 the created FlashCopy consistency group is displayed.

Figure 11-11 Viewing FlashCopy consistency groups

The FlashCopy consistency group is now created and ready for use.

Creating the FlashCopy mappings


Next, we will create the FlashCopy mappings for each of our VDisks to their respective
targets. In the SVC GUI, we expand Manage Copy Services in the Task pane and select
FlashCopy mappings.

When prompted for filtering, we select Bypass Filter. (This will then show us all the defined
FlashCopy mappings, if there were any created previously.)

As shown in Figure 11-12, we select Create a Mapping from the scroll menu and click Go to
start the creation process of a FlashCopy mapping.

Figure 11-12 Create FlashCopy mapping

We are then presented with the FlashCopy creation wizard overview of the creation process
for a FlashCopy mapping, and click Next to proceed.

As shown in Figure 11-13, we name the first FlashCopy mapping FCMap1, select the
previously created consistency group FCCG1, set the background copy priority to 60% and
click Next to proceed.

Figure 11-13 Setting the properties for the FlashCopy mapping

The next step is to select the source VDisk. If there were many source VDisks (that were not
already defined in a FlashCopy mapping), we could filter that list here. In Figure 11-14, we
define the filter * (which shows us all our VDisks) for the source VDisk and click Next to
proceed.

Figure 11-14 Filtering source VDisk candidates

As shown in Figure 11-15, we select WIN_1 as the source disk and click Next to proceed.

Figure 11-15 Selecting source VDisk

The next step is to select our target VDisk. The FlashCopy mapping wizard will only present a
list of VDisks that are the same size as the source VDisks and not already in a FlashCopy
mapping, nor defined in a Metro Mirror relationship. In Figure 11-16, we select the target
WIN_2 and click Next to proceed.

Figure 11-16 Selecting target VDisk

Finally, we verify our FlashCopy mapping (Figure 11-17) and click Finish to create it.

Figure 11-17 Verify FlashCopy mapping

After the FlashCopy mapping is successfully created, we are returned to the FlashCopy
mapping list (Figure 11-18), listing all the currently defined FlashCopy mappings.

Figure 11-18 Viewing FlashCopy mappings

The remaining FlashCopy mappings are created in the same way (starting with Figure 11-12
on page 415) according to the properties for the scenario. (Note that for FCMap3 we omitted
using a FlashCopy Consistency Group name.)

In Figure 11-19, the created FlashCopy mappings are displayed.

Figure 11-19 Viewing all created FlashCopy mappings

Executing FlashCopy
Now that we have created the FlashCopy mappings and the consistency group, we are ready
to use the FlashCopy mappings in our environment.

Note: It is unlikely that you will use the GUI to execute FlashCopy except for testing
purposes, because FlashCopy execution is often performed periodically and at scheduled
times.

To ensure a consistent data set is created, it is crucial to flush application and OS buffers,
and quiesce the application. To do this, scripting using the CLI is much more powerful.

When performing the FlashCopy on the VDisks with the database, we want to be able to
control the PiT when the FlashCopy is triggered, in order to keep our quiesce time at a
minimum. To achieve this we prepare the consistency group in order to flush the cache for
the source VDisks.

In Figure 11-20, we select the FlashCopy consistency group, select Prepare a Consistency
Group from the action list, and click Go. The status goes to Preparing, and then finally to
Prepared. Click the Refresh button several times until the group is in the Prepared state.

Figure 11-20 Prepare FlashCopy consistency group

As shown in Figure 11-21, the FlashCopy consistency group enters the prepared state. To
start the FlashCopy consistency group, we select the consistency group and select Start a
Consistency Group from the scroll menu and click Go.

Figure 11-21 Start FlashCopy consistency group

In Figure 11-22, we are prompted to confirm starting the FlashCopy consistency group. We
now flush the database and OS buffers and quiesce the database, then click OK to start the
FlashCopy consistency group.

Note: Since we have already prepared the FlashCopy consistency group, this option is
grayed out when prompted to confirm starting the FlashCopy consistency group.

Figure 11-22 Confirm start of FlashCopy consistency group

As shown in Figure 11-23, we verify that the consistency group is in the copying state, and
subsequently, we resume the database.

Figure 11-23 Viewing FlashCopy consistency groups

To monitor the progress of the FlashCopy mappings in the consistency group, we navigate to
the FlashCopy mappings window. As shown in Figure 11-24, the progress of the background
copy is displayed; to update the window, we must click the Refresh button.

Figure 11-24 Viewing the background copy progress

Note: Even if you click the Refresh button several times, the SVC only updates progress
of the background copy once a minute.

When the background copy is completed for all FlashCopy mappings in the consistency
group, the status is changed to Idle or Copied as shown in Figure 11-25.

Figure 11-25 FlashCopy consistency group, Idle or Copied

Executing a single FlashCopy mapping


When executing the single FlashCopy mapping we decide to let the SVC perform the
FlashCopy triggering as soon as the FlashCopy enters the prepared state.

To do this, we select the FlashCopy mapping FCMap3, select Start a Mapping from the scroll
menu, and click Go to proceed, as shown in Figure 11-26.

Figure 11-26 Selecting a single mapping to be started.

Because we want to start a FlashCopy mapping which has not yet been prepared, the
Prepare box is checked, as shown in Figure 11-27, and we click OK to start the FlashCopy
mapping.

Figure 11-27 Starting a single FlashCopy mapping

After starting the FlashCopy mapping, we are returned to the FlashCopy mappings list shown
in Figure 11-28.

Figure 11-28 Viewing FlashCopy mappings

Note: FlashCopy can be invoked from the SVC graphical user interface (GUI), but this
might not make much sense if you plan to handle a large number of FlashCopy mappings
or consistency groups periodically, or at varying times. In this case, creating a script to use
the CLI is much more powerful.


Chapter 12. Copy Services: Metro Mirror


In this chapter we describe the Metro Mirror copy service. Metro Mirror in an IBM System
Storage SAN Volume Controller (SVC) is similar to Metro Mirror in the IBM System Storage
DS family. The SVC provides a single point of control while enabling Metro Mirror in your SAN
regardless of the disk subsystems used.

12.1 Metro Mirror
The general application of Metro Mirror is to maintain two real-time synchronized copies of a
data set. Often, the two copies are geographically dispersed on two SVC clusters, though it is
possible to use Metro Mirror in a single cluster (within an I/O group). If the primary copy fails,
the secondary copy can then be enabled for I/O operation.

A typical application of this function is to set up a dual-site solution using two SVC clusters
where the first site is considered the primary production site, and the second site is
considered the failover site, which is activated when a failure of the first site is detected.

Note: Before SVC release 2.1 this function was called PPRC.

12.1.1 Metro Mirror overview


Metro Mirror works by defining a Metro Mirror relationship between VDisks of equal size.
To provide management (and consistency) across a number of Metro Mirror relationships,
consistency groups are supported (as with FlashCopy).

The SVC provides both intracluster and intercluster Metro Mirror as described below.

Intracluster Metro Mirror


Intracluster Metro Mirror can be applied within any single I/O group.

Metro Mirror across I/O groups in the same SVC cluster is not supported, since intracluster
Metro Mirror can only be performed between VDisks in the same I/O group.

Intercluster Metro Mirror


Intercluster Metro Mirror operations require a pair of SVC clusters that are separated by a
number of moderately high bandwidth links. The two SVC clusters must be defined in an SVC
partnership, which must be defined on both SVC clusters to establish a fully functional
Metro Mirror partnership.

Using standard single-mode connections, the supported distance between two SVC clusters
in a Metro Mirror partnership is 10 km, although greater distances can be achieved by using
extenders. For extended distance solutions, contact your IBM representative.

Note: When a local and a remote fabric are connected together for Metro Mirror purposes,
then the ISL hop count between a local node and a remote node cannot exceed seven.

Metro Mirror remote copy technique


Metro Mirror is a synchronous remote copy technique, briefly explained below. To illustrate
the differences between synchronous and asynchronous remote copy, asynchronous remote
copy is also explained.

Synchronous remote copy


Metro Mirror is a fully synchronous remote copy technique which ensures that updates are
committed at both primary and secondary VDisks before the application is given completion
to an update.

Figure 12-1 illustrates how a write to the master VDisk is mirrored to the cache for the
auxiliary VDisk before an acknowledge of the write is sent back to the host issuing the write.
This ensures that the secondary is real-time synchronized, in case it is needed in a failover
situation.

However, this also means that the application is fully exposed to the latency and bandwidth
limitations of the communication link to the secondary site. This might lead to unacceptable
application performance, particularly when placed under peak load. This is the reason for the
distance limitations when applying Metro Mirror.

The sequence is: (1) the host write is received at the master VDisk cache; (2) the write is
mirrored to the cache for the auxiliary VDisk; (3) the auxiliary acknowledges the write; (4) the
write is acknowledged to the host.

Figure 12-1 Write on VDisk in Metro Mirror relationship

Asynchronous remote copy


In asynchronous remote copy, the application is given completion to an update when it is sent
to the secondary site, but the update is not necessarily committed at the secondary site at
that time. This provides the capability of performing remote copy over distances exceeding
the limitations of synchronous remote copy.

In a failover situation, some updates might be missing at the secondary site, and therefore the
application must have some external mechanism for recovering the missing updates and
reapplying them. This mechanism might involve user intervention.

12.1.2 Supported methods for synchronizing


This section describes three methods that can be used to establish a relationship.

Full synchronization after Create


This is the default method. It is the simplest, in that it requires no administrative activity apart
from issuing the necessary commands. However, in some environments, the bandwidth
available will make this method unsuitable.

The sequence for a single relationship is:


򐂰 A CreateRelationship is issued with CreateConsistent set to FALSE.
򐂰 A Start is issued with Clean set to FALSE.

Synchronized before Create


In this method, the administrator must ensure that the master and auxiliary virtual disks
contain identical data before creating the relationship. There are two ways in which this might
be done:
򐂰 Both disks are created with the security delete feature so as to make all data zero.

򐂰 A complete tape image (or other method of moving data) is copied from one disk to the
other.

In either technique, no write I/O must take place to either Master or Auxiliary before the
relationship is established.

Then, the administrator must ensure that:


򐂰 A CreateRelationship is issued with CreateConsistent set to TRUE.
򐂰 A Start is issued with Clean set to FALSE.

If these steps are not performed correctly, then Metro Mirror will report the relationship as
being consistent, when it is not. This is likely to make any secondary disk useless. This
method has an advantage over the full synchronization, in that it does not require all the data
to be copied over a constrained link. However, if the data needs to be copied, the master and
auxiliary disks cannot be used until the copy is complete, which might be unacceptable.

Quick synchronization after Create


In this method, the administrator must still copy data from master to auxiliary. But it can be
used without stopping the application at the master. The administrator must ensure that:
򐂰 A CreateRelationship is issued with CreateConsistent set to TRUE.
򐂰 A Stop is issued with EnableAccess set to TRUE.
򐂰 A tape image (or other method of transferring data) is used to copy the entire master disk
to the auxiliary disk.

Once the copy is complete, the administrator must ensure that:


򐂰 A Start is issued with Clean set to TRUE.

With this technique, only the data that has changed since the relationship was created,
including all regions that were incorrect in the tape image, is copied from master to
auxiliary. As with “Synchronized before Create” on page 427, the copy step must be
performed correctly, or else the auxiliary will be useless, even though Metro Mirror will report
it as being synchronized.
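
In SVC CLI terms (a sketch under our reading of the command set, with hypothetical VDisk,
cluster, and relationship names), CreateConsistent corresponds to the -sync flag of svctask
mkrcrelationship, so the “Synchronized before Create” method might look like this:

IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master MM_Master1 -aux MM_Aux1 -cluster SVC_CLUSTER_B -sync -name MMRel1
IBM_2145:ITSOSVC01:admin>svctask startrcrelationship MMRel1

The individual commands are described in 12.2, “Metro Mirror commands” on page 439.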

12.1.3 The importance of write ordering


Many applications that use block storage have a requirement to survive failures such as loss
of power, or a software crash, and not lose data that existed prior to the failure. Since many
applications need to perform large numbers of update operations in parallel to that storage,
maintaining write ordering is key to ensuring the correct operation of applications following a
disruption.

An application that is performing a large set of updates will have been designed with the
concept of dependent writes. These are writes where it is important to ensure that an earlier
write has completed before a later write is started. Reversing the order of dependent writes
can undermine the application's algorithms and can lead to problems such as detected, or
undetected, data corruption.

Dependent writes that span multiple VDisks


The following scenario illustrates a simple example of a sequence of dependent writes, and in
particular what can happen if they span multiple VDisks. Consider the following typical
sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update is to be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update
has completed successfully.

In Figure 12-2 we illustrate the write sequence.

The figure shows the three steps over time: the log update that starts the transaction, the
database file update, and the log update that marks the transaction complete.

Figure 12-2 Dependent writes for a database

The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next.

Note: All databases have logs associated with them. These logs keep records of database
changes. If a database needs to be restored to a point beyond the last full, offline backup,
logs are required to roll the data forward to the point of failure.

But imagine if the database log and the database itself are on different VDisks and a Metro
Mirror relationship is stopped during this update. In this case, you need to exclude the
possibility that the Metro Mirror relationship for the VDisk with the database file is stopped
slightly before the relationship for the VDisk containing the database log.

If this were the case, then it could be possible that the secondary VDisks see writes (1) and
(3) but not (2).

Then, if the database was restarted using the backup made from the secondary disks, the
database log would indicate that the transaction had completed successfully, when in fact it
had not. In this scenario, the integrity of the database is in question.

To overcome the issue of dependent writes across VDisks, and to ensure a consistent data
set, the SVC supports the concept of consistency groups for Metro Mirror relationships. A
Metro Mirror consistency group can contain an arbitrary number of relationships up to the
maximum number of Metro Mirror relationships supported by the SVC Cluster.

Metro Mirror commands are then issued to the Metro Mirror consistency group, and thereby
simultaneously for all Metro Mirror relationships defined in the consistency group. For
example, when issuing a Metro Mirror start command to the consistency group, all of the
Metro Mirror relationships in the consistency group are started at the same time.

12.1.4 Practical use of Metro Mirror


To use Metro Mirror, you must define a relationship between two VDisks.

When creating the Metro Mirror relationship, one VDisk should be defined as the master, and
the other as the auxiliary. The relationship between the two copies is symmetric. When the
Metro Mirror relationship is created, the master VDisk is initially considered the primary copy
(often referred to as the source), and the auxiliary VDisk is considered the secondary copy
(often referred to as the target).

The master VDisk is the production VDisk and updates to this copy are real time mirrored to
the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was
created are destroyed.

Note: The copy direction for a Metro Mirror relationship can be switched so the auxiliary
VDisk becomes the primary and the master VDisk becomes the secondary.

While the Metro Mirror relationship is active, the secondary copy (VDisk) is not accessible for
host application write I/O at any time. The SVC allows read-only access to the secondary
VDisk when it contains a “consistent” image. This is only intended to allow boot time
operating system discovery to complete without error, so that any hosts at the secondary site
can be ready to start up the applications with minimum delay if required.

For instance, many operating systems need to read Logical Block Address (LBA) 0 to
configure a logical unit. Although read access is allowed at the secondary, in practice the
data on the secondary volumes cannot be read by a host. The reason for this is that most
operating systems write a “dirty bit” to the file system when it is mounted. Because this write
operation is not allowed on the secondary volume, the volume cannot be mounted.

This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads performed at the secondary and later
write I/Os performed at the primary.

To enable access to the secondary VDisk for host operations, the Metro Mirror relationship
must be stopped, specifying the -access parameter.
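
For example (the relationship name is hypothetical):

IBM_2145:ITSOSVC01:admin>svctask stoprcrelationship -access MMRel1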

While access to the secondary VDisk for host operations is enabled, the host must be
instructed to mount the VDisk and perform related tasks before the application can be started,
or be instructed to perform a recovery process.

For example, the Metro Mirror requirement to enable the secondary copy for access
differentiates it from third party mirroring software on the host, which aims to emulate a single,
reliable disk regardless of what system is accessing it. Metro Mirror retains the property that
there are two volumes in existence, but suppresses one while the copy is being maintained.

Using a secondary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks to be performed on the host to establish operation on the
secondary copy are substantial. The goal is to make this rapid (much faster when compared
to recovering from a backup copy) but not seamless.

The failover process can be automated through failover management software. The SVC
provides Simple Network Management Protocol (SNMP) traps and programming (or
scripting) towards the Command Line Interface (CLI) to enable this automation.

12.1.5 SVC Metro Mirror features


The SVC Metro Mirror supports the following features:
򐂰 Synchronous remote copy of VDisks dispersed over metropolitan scale distances is
supported.
򐂰 SVC implements the Metro Mirror relationship between VDisk pairs, each VDisk in the pair
being managed by an SVC cluster.
򐂰 SVC supports intracluster Metro Mirror, where both VDisks belong to the same cluster
(and IO group).
򐂰 SVC supports intercluster Metro Mirror, where each VDisk belongs to a separate SVC
cluster. A given SVC cluster can be configured for partnership with another cluster. A
given SVC cluster can only communicate with one other cluster. All intercluster Metro
Mirror takes place between the two SVC clusters in the configured partnership.
򐂰 Intercluster and intracluster Metro Mirror can be used concurrently within a cluster for
different relationships.
򐂰 SVC does not require a control network or fabric to be installed to manage Metro Mirror.
For intercluster Metro Mirror, SVC maintains a control link between the two clusters. This
control link is used to control state and co-ordinate updates at either end. The control link
is implemented on top of the same FC fabric connection as SVC uses for Metro Mirror I/O.
򐂰 SVC implements a configuration model which maintains the Metro Mirror configuration
and state through major events such as failover, recovery, and resynchronization to
minimize user configuration action through these events.
򐂰 SVC maintains and polices a strong concept of consistency and makes this available to
guide configuration activity.
򐂰 SVC implements flexible resynchronization support enabling it to re-synchronize VDisk
pairs which have suffered write I/O to both disks and to resynchronize only those regions
which are known to have changed.

How Metro Mirror works


There are several major steps in the Metro Mirror process:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Metro
Mirror).
2. A Metro Mirror relationship is created between two VDisks of the same size.
3. To manage multiple Metro Mirror relationships as one entity, the relationships can be
made part of a Metro Mirror consistency group. This is to ensure data consistency across
multiple Metro Mirror relationships, or simply for ease of management.
4. The Metro Mirror relationship is started, and when the background copy has completed
the relationship is consistent and synchronized.
5. Once synchronized, the secondary VDisk holds a copy of the production data at the
primary, which can be used for disaster recovery.

6. To access the auxiliary VDisk, the Metro Mirror relationship must be stopped with the
access option enabled before write I/O is submitted to the secondary.
7. The remote host server is mapped to the auxiliary VDisk and the disk is available for I/O.
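
The following condensed sketch maps these steps onto CLI commands for an intercluster
setup; all cluster, VDisk, consistency group, and relationship names are hypothetical, and
each command is described in detail in 12.2, “Metro Mirror commands” on page 439:

IBM_2145:ITSOSVC01:admin>svctask mkpartnership -bandwidth 50 SVC_CLUSTER_B
IBM_2145:ITSOSVC02:admin>svctask mkpartnership -bandwidth 50 SVC_CLUSTER_A
IBM_2145:ITSOSVC01:admin>svctask mkrcconsistgrp -name MMCG1
IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master MM_Master1 -aux MM_Aux1 -cluster SVC_CLUSTER_B -consistgrp MMCG1 -name MMRel1
IBM_2145:ITSOSVC01:admin>svctask startrcconsistgrp MMCG1
IBM_2145:ITSOSVC01:admin>svctask stoprcconsistgrp -access MMCG1

The first mkpartnership is issued on the local cluster and the second on the remote cluster;
the final stoprcconsistgrp with -access is only issued when the secondary is to be used, for
example in a failover.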

Intercluster communication and zoning


All intercluster communication is performed through the SAN. Prior to creating intercluster
Metro Mirror relationships, you must create a partnership between the two clusters.

All SVC node ports on each SVC cluster must be able to access each other to facilitate the
partnership creation. Therefore, a zone in each fabric must be defined for intercluster
communication; see Chapter 3, “Planning and configuration” on page 25.

SVC cluster partnership


Each SVC cluster can only be in a partnership with one other SVC cluster. When the SVC
cluster partnership has been defined on both clusters, further communication facilities
between the nodes in each of the clusters are established. These comprise:
򐂰 A single control channel, which is used to exchange and coordinate configuration
information
򐂰 I/O channels between each of the nodes in the clusters

These channels are maintained and updated as nodes appear and disappear and as links
fail, and are repaired to maintain operation where possible. If communication between the
SVC clusters is interrupted or lost, an error is logged (and consequently Metro Mirror
relationships will stop).

To handle error conditions the SVC can be configured to raise SNMP traps to the enterprise
monitoring system.

Maintenance of the intercluster link


All SVC nodes maintain a database of the other devices that are visible on the fabric. This is
updated as devices appear and disappear.

Devices that advertise themselves as SVC nodes are categorized according to the SVC
cluster to which they belong. SVC nodes that belong to the same cluster establish
communication channels between themselves and begin to exchange messages to
implement the clustering and functional protocols of SVC.

Nodes that are in different clusters do not exchange messages after the initial discovery is
complete unless they have been configured together to perform Metro Mirror.

The intercluster link carries the control traffic to coordinate activity between the two clusters. It
is formed between one node in each cluster which is termed the focal point. The traffic
between the focal point nodes is distributed among the logins that exist between those nodes.

If the focal point node should fail (or all its logins to the remote cluster fail), then a new focal
point is chosen to carry the control traffic. Changing the focal point causes I/O to pause but
does not cause relationships to become ConsistentStopped.

Metro Mirror relationship


Metro Mirror relationships are similar to FlashCopy mappings. They can be stand-alone or
combined in consistency groups. Start and stop commands can be issued either against the
stand-alone relationship, or the consistency group.

Figure 12-3 illustrates the Metro Mirror relationship.

The figure shows a single relationship, MM_Relationship, between the master VDisk1M
(MM_Master) and the auxiliary VDisk1A (MM_Auxiliary).

Figure 12-3 Metro Mirror relationship

A Metro Mirror relationship is composed of two VDisks equal in size. The master VDisk and
the auxiliary VDisk can be in the same I/O group, within the same SVC cluster (intracluster
Metro Mirror), or can be on separate SVC clusters that are defined as SVC partners
(intercluster Metro Mirror).

Note: Be aware that:


򐂰 A VDisk can only be part of one Metro Mirror relationship at a time.
򐂰 A VDisk that is a FlashCopy target cannot be part of a Metro Mirror relationship.

Metro Mirror relationship between primary and secondary VDisk


When creating a Metro Mirror relationship, initially the master VDisk is assigned as the
primary, and the auxiliary VDisk the secondary. This implies that the initial copy direction is
mirroring the master VDisk to the auxiliary VDisk. After the initial synchronization is complete,
the copy direction can be changed if appropriate.

In most common applications of Metro Mirror, the master VDisk contains the production copy
of the data, and is used by the host application, while the auxiliary VDisk contains the
mirrored copy of the data and is used for failover in disaster recovery scenarios. The terms
master and auxiliary help support this use. If Metro Mirror is applied differently, the terms
master and auxiliary VDisks need to be interpreted appropriately.

Metro Mirror consistency groups


Certain uses of Metro Mirror require the manipulation of more than one relationship. Metro
Mirror consistency groups provides the ability to group relationships, so that they are
manipulated in unison.

Consistency groups address the issue where the objective is to preserve data consistency
across multiple Metro Mirrored VDisks because the applications have related data which
spans multiple VDisks. A requirement for preserving the integrity of data being written is to
ensure that “dependent writes” are executed in the application's intended sequence.

Metro Mirror commands can be issued to a Metro Mirror consistency group, which affects all
Metro Mirror relationships in the consistency group, or to a single Metro Mirror relationship if
not part of a Metro Mirror consistency group.

In Figure 12-4, the concept of Metro Mirror consistency groups is illustrated. Since
MM_Relationship 1 and 2 are part of the consistency group, they can be handled as one
entity, while the stand-alone MM_Relationship 3 is handled separately.

Consistency Group 1 contains MM_Relationship 1 (VDisk1M to VDisk1A) and
MM_Relationship 2 (VDisk2M to VDisk2A); MM_Relationship 3 (VDisk3M to VDisk3A) is a
stand-alone relationship.

Figure 12-4 Metro Mirror consistency group

򐂰 Metro Mirror relationships can be part of a consistency group, or be stand-alone and
therefore handled as single instances.
򐂰 A consistency group can contain zero or more relationships. An empty consistency group,
with zero relationships in it, has little purpose until it is assigned its first relationship,
except that it has a name.
򐂰 All the relationships in a consistency group must have matching master and auxiliary SVC
clusters.

Although it is possible that consistency groups can be used to manipulate sets of
relationships that do not need to satisfy these strict rules, that manipulation can lead to some
undesired side effects. The rules behind consistency mean that certain configuration
commands are prohibited where this would not be the case if the relationship was not part of
a consistency group.

For example, consider the case of two applications that are completely independent, yet they
are placed into a single consistency group. In the event of an error there is a loss of
synchronization, and a background copy process is required to recover synchronization.
While this process is in progress, Metro Mirror rejects attempts to enable access to the
Secondary VDisks of either application.

If one application finishes its background copy much more quickly than the other, Metro
Mirror still refuses to grant access to its secondary, even though this is safe in this case,
because the Metro Mirror policy is to refuse access to the entire consistency group if any part
of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state
model. All the relationships in a non-empty consistency group have the same state as the
consistency group.

12.1.6 Metro Mirror states and events


In this section we explain the different states of a Metro Mirror relationship, and the series of
events that modify these states. In Figure 12-5, the Metro Mirror relationship state diagram
shows an overview of the states that apply to a Metro Mirror relationship in the connected
state.

The diagram shows the states Consistent Stopped, Inconsistent Stopped, Consistent
Synchronized, Inconsistent Copying, and Idling, together with the transitions between them:
create (in sync or out of sync), start, forced start (out of sync), stop or error, background copy
complete, and stop with enable access. The numbered transitions (1a through 5a) are
explained below.

Figure 12-5 Metro Mirror mapping state diagram

When creating the Metro Mirror relationship, you can specify if the auxiliary VDisk is already
in sync with the master VDisk, and the background copy process is then skipped. This is
especially useful when creating Metro Mirror relationships for VDisks that have been created
with the format option.
1. Step 1 is done as follows:
a. The Metro Mirror relationship is created with the -sync option and the Metro Mirror
relationship enters the Consistent stopped state.
b. The Metro Mirror relationship is created without specifying that the master and auxiliary
VDisks are in sync, and the Metro Mirror relationship enters the Inconsistent stopped
state.

2. Step 2 is done as follows:
a. When starting a Metro Mirror relationship in the Consistent stopped state, it enters the
Consistent synchronized state. This implies that no updates (write I/O) have been
performed on the primary VDisk while in the Consistent stopped state; otherwise, the
-force option must be specified, and the Metro Mirror relationship then enters the
Inconsistent copying state, while background copy is started.
b. When starting a Metro Mirror relationship in the Inconsistent stopped state, it enters the
Inconsistent copying state, while background copy is started.
3. Step 3 is done as follows:
– When the background copy completes, the Metro Mirror relationship transitions from
the Inconsistent copying state to the Consistent synchronized state.
4. Step 4 is done as follows:
a. When stopping a Metro Mirror relationship in the Consistent synchronized state,
specifying the -access option which enables write I/O on the secondary VDisk, the
Metro Mirror relationship enters the Idling state.
b. To enable write I/O on the secondary VDisk, when the Metro Mirror relationship is in
the Consistent stopped state, issue the command svctask stoprcrelationship
specifying the -access option, and the Metro Mirror relationship enters the Idling state.
5. Step 5 is done as follows:
a. When starting a Metro Mirror relationship which is in the Idling state, it is required to
specify the -primary argument to set the copy direction. Given that no write I/O has
been performed (to either master or auxiliary VDisk) while in the Idling state, the Metro
Mirror relationship enters the Consistent synchronized state.
b. In case write I/O has been performed to either the master or the auxiliary VDisk, then
the -force option must be specified, and the Metro Mirror relationship then enters the
Inconsistent copying state, while background copy is started.

Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an
error), a state transition is applied.
򐂰 For example, this means that Metro Mirror relationships in the Consistent synchronized
state enter the Consistent stopped state and Metro Mirror relationships in the Inconsistent
copying state enter the Inconsistent stopped state.
򐂰 In case the connection is broken between the SVC clusters in a partnership, then all
(intercluster) Metro Mirror relationships enter a disconnected state. For further information
refer to the following topic, “Connected versus disconnected”.

Note: Stand-alone relationships and consistency groups share a common configuration
and state model. This means that all the Metro Mirror relationships in a non-empty
consistency group have the same state as the consistency group.

State overview
The SVC defined concepts of state are key to understanding the configuration concepts and
are therefore explained in more detail below.

Connected versus disconnected


This distinction can arise when a Metro Mirror relationship is created with the two virtual disks
in different clusters.

Under certain error scenarios, communications between the two clusters might be lost. For
instance, power might fail causing one complete cluster to disappear. Alternatively, the fabric
connection between the two clusters might fail, leaving the two clusters running but unable to
communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them
are described as connected. When they cannot communicate, the clusters and the
relationships spanning them are described as disconnected.

In this scenario, each cluster is left with half the relationship and has only a portion of the
information that was available to it before. Some limited configuration activity is possible, and
is a subset of what was possible before.

The disconnected relationships are portrayed as having a changed state. The new states
describe what is known about the relationship, and what configuration commands are
permitted.

When the clusters can communicate again, the relationships become connected once again.
Metro Mirror automatically reconciles the two state fragments, taking into account any
configuration or other event that took place while the relationship was disconnected. As a
result, the relationship can either return to the state it was in when it became disconnected or
it can enter a different connected state.

Relationships that are configured between virtual disks in the same SVC cluster (intracluster)
will never be described as being in a disconnected state.

Consistent versus inconsistent


Relationships that contain VDisks operating as secondaries can be described as being
consistent or inconsistent. Consistency groups that contain relationships can also be
described as being consistent or inconsistent. The consistent or inconsistent property
describes the relationship of the data on the secondary to that on the primary virtual disk. It
can be considered a property of the secondary VDisk itself.

A secondary is described as consistent if it contains data that could have been read by a host
system from the primary if power had failed at some imaginary point in time while I/O was in
progress and power was later restored. This imaginary point in time is defined as the recovery
point. The requirements for consistency are expressed with respect to activity at the primary
up to the recovery point:
򐂰 The secondary virtual disk contains the data from all writes to the primary for which the
host had received good completion and that data had not been overwritten by a
subsequent write (before the recovery point)
򐂰 For writes for which the host did not receive good completion (that is it received bad
completion or no completion at all) and the host subsequently performed a read from the
primary of that data and that read returned good completion and no later write was sent
(before the recovery point), the secondary contains the same data as that returned by the
read from the primary.

From the point of view of an application, consistency means that a secondary virtual disk
contains the same data as the primary virtual disk at the recovery point (the time at which the
imaginary power failure occurred).

If an application is designed to cope with unexpected power failure, this guarantee of
consistency means that the application will be able to use the secondary and begin operation
just as if it had been restarted after the hypothetical power failure.

Again, the application is dependent on the key properties of consistency:
򐂰 Write ordering
򐂰 Read stability for correct operation at the secondary

If a relationship, or set of relationships, is inconsistent and an attempt is made to start an
application using the data in the secondaries, a number of outcomes are possible:
򐂰 The application might decide that the data is corrupt and crash or exit with an error code.
򐂰 The application might fail to detect the data is corrupt and return erroneous data.
򐂰 The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Metro
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or a set of relationships in a
consistency group. Write ordering is a concept that an application can maintain across a
number of disks accessed through multiple systems and therefore consistency must operate
across all those disks.

When deciding how to use consistency groups, the administrator must consider the scope of
an application’s data, taking into account all the interdependent systems which communicate
and exchange information.

If two programs or systems communicate and store details as a result of the information
exchanged, then either of the following actions might occur:
򐂰 All the data accessed by the group of systems must be placed into a single consistency
group.
򐂰 The systems must be recovered independently (each within its own consistency group).
Then, each system must perform recovery with the other applications to become
consistent with them.

Consistent versus synchronized


A copy which is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the primary and secondary virtual disks are only different in regions where writes
are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data which was frozen at some point in time in the past. Write I/O might have
continued to a primary and not have been copied to the secondary. This state arises when it
becomes impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between clusters when writing to the secondary.

When communication is lost for an extended period of time, Metro Mirror tracks the changes
that happen at the primary, but not the order of such changes, nor the details of such changes
(write data). When communication is restored, it is impossible to make the secondary
synchronized without sending write data to the secondary out-of-order, and therefore losing
consistency.

Two policies can be used to cope with this:


򐂰 Take a point-in-time copy of the consistent secondary before allowing the secondary to
become inconsistent. In the event of a disaster before consistency is achieved again, the
point-in-time copy target provides a consistent, though out-of-date, image.
򐂰 Accept the loss of consistency, and loss of useful secondary, while making it
synchronized.

12.1.7 Metro Mirror configuration limits
Table 12-1 lists the Metro Mirror configuration limits.

Table 12-1 Metro Mirror configuration limits

Parameter                                    Value
Number of Metro Mirror consistency groups    256 per SVC cluster
Number of Metro Mirror relationships         1024 per SVC cluster
Total VDisk size per I/O group               16 TB is the per I/O group limit on the
                                             quantity of primary and secondary VDisk
                                             address space that can participate in
                                             Metro Mirror relationships

12.2 Metro Mirror commands


For all the details about the Metro Mirror Commands, see IBM TotalStorage Virtualization
Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544.

The command set for Metro Mirror contains two broad groups:
򐂰 Commands to create, delete and manipulate relationships and consistency groups
򐂰 Commands to cause state changes

Where a configuration command affects more than one cluster, Metro Mirror performs the
work to coordinate configuration activity between the clusters. Some configuration commands
can only be performed when the clusters are connected and fail with no effect when they are
disconnected.

Other configuration commands are permitted even though the clusters are disconnected. The
state is reconciled automatically by Metro Mirror when the clusters become connected once
more.

For any given command, with one exception, a single cluster actually receives the command
from the administrator. This is significant for defining the context for a CreateRelationship
(mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the
cluster receiving the command is called the local cluster.

The exception mentioned previously is the command that sets the clusters into a Metro
Mirror partnership. The mkpartnership command must be issued to both the local and the
remote cluster.

The commands here are described as an abstract command set. These are implemented as:
򐂰 A command line interface (CLI) which can be used for scripting and automation
򐂰 A graphical user interface (GUI) which can be used for one-off tasks

12.2.1 Listing available SVC cluster partners
To create an SVC cluster partnership, you use the command svcinfo lsclustercandidate.

svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for
setting up a two-cluster partnership. This is a prerequisite for creating Metro Mirror
relationships.
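
For example, issued with no parameters:

IBM_2145:ITSOSVC01:admin>svcinfo lsclustercandidate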

12.2.2 Creating SVC cluster partnership


To create an SVC cluster partnership, you use the command svctask mkpartnership.

svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Metro Mirror
partnership between the local cluster and a remote cluster.

To establish a fully functional Metro Mirror partnership, you must issue this command to both
clusters. This step is a prerequisite to creating Metro Mirror relationships between VDisks on
the SVC clusters.

When creating the partnership, you can specify the bandwidth to be used by the background
copy process between the local and the remote SVC cluster; if it is not specified, the
bandwidth defaults to 50 MB/s. The bandwidth should be set to a value that is less than or
equal to the bandwidth that can be sustained by the intercluster link.
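
As a sketch with hypothetical cluster names, issued on each cluster in turn to establish a fully
functional partnership:

IBM_2145:ITSOSVC01:admin>svctask mkpartnership -bandwidth 50 SVC_CLUSTER_B
IBM_2145:ITSOSVC02:admin>svctask mkpartnership -bandwidth 50 SVC_CLUSTER_A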

Background copy bandwidth impact on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy for the
IBM System Storage Metro Mirror for SAN Volume Controller will be attempted. The
background copy bandwidth can affect foreground I/O latency in one of three ways:
򐂰 The following results can occur if the background copy bandwidth is set too high for the
Metro Mirror intercluster link capacity:
– The background copy I/Os can back up on the Metro Mirror intercluster link.
– There is a delay in the synchronous secondary writes of foreground I/Os.
– The foreground I/O latency will increase as perceived by applications.
򐂰 If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
򐂰 If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary overload the secondary storage and again delay
the synchronous secondary writes of foreground I/Os.

In order to set the background copy bandwidth optimally, make sure that you consider all
three resources (the primary storage, the intercluster link bandwidth and the secondary
storage). Provision the most restrictive of these three resources between the background
copy bandwidth and the peak foreground I/O workload. This provisioning can be done by
calculation as above or alternatively by determining experimentally how much background
copy can be allowed before the foreground I/O latency becomes unacceptable and then
backing off to allow for peaks in workload and some safety margin.
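
As a simplified worked example (the figures are purely illustrative): if the intercluster link can
sustain 200 MB/s, and the peak foreground write workload, which is mirrored synchronously
across the same link, is 120 MB/s, then at most 200 - 120 = 80 MB/s remains for background
copy. Allowing for peaks in workload and a safety margin, a background copy bandwidth of
around 60 MB/s would be a reasonable starting point, provided the primary and secondary
storage can also sustain that additional load.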

svctask chpartnership
If you need to change the bandwidth available for background copy in an SVC cluster
partnership, the command svctask chpartnership can be used to specify the new
bandwidth.
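
For example, reducing the background copy bandwidth to 40 MB/s (the cluster name is
hypothetical):

IBM_2145:ITSOSVC01:admin>svctask chpartnership -bandwidth 40 SVC_CLUSTER_B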

12.2.3 Creating a Metro Mirror consistency group


To create a Metro Mirror consistency group, you use the command svctask mkrcconsistgrp.

svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new empty Metro Mirror
consistency group.

The Metro Mirror consistency group name must be unique across all consistency groups
known to the clusters owning this consistency group. If the consistency group involves two
clusters, the clusters must be in communication throughout the create process.

The new consistency group does not contain any relationships and will be in the empty state.
Metro Mirror relationships can be added to the group either upon creation or afterwards, using
the svctask chrcrelationship command.
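
A minimal sketch, with a hypothetical group name:

IBM_2145:ITSOSVC01:admin>svctask mkrcconsistgrp -name MMCG1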

12.2.4 Creating a Metro Mirror relationship


To create a Metro Mirror relationship, you use the command svctask mkrcrelationship.

svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Metro Mirror relationship.
This relationship persists until it is deleted.

The auxiliary virtual disk must be equal in size to the master virtual disk or the command will
fail, and if both VDisks are in the same cluster, they must both be in the same I/O group. The
master and auxiliary VDisk cannot be in an existing relationship, nor can they be the target of
a FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.

When creating the Metro Mirror relationship, it can be added to an already existing
consistency group, or be a stand-alone Metro Mirror relationship if no consistency group is
specified.
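
A sketch with hypothetical VDisk, cluster, and group names, creating a relationship and
placing it directly into an existing consistency group:

IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master MM_Master1 -aux MM_Aux1 -cluster SVC_CLUSTER_B -consistgrp MMCG1 -name MMRel1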

To check whether the master or auxiliary VDisks comply with the prerequisites to participate
in a Metro Mirror relationship, use the command svcinfo lsrcrelationshipcandidate as
explained below.

svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list available VDisk eligible to
a Metro Mirror relationship.

When issuing the command you can specify the master VDisk name and auxiliary cluster to
list candidates that comply with prerequisites to create a Metro Mirror relationship. If the
command is issued with no flags, all VDisks that are not disallowed by some other
configuration state, such as being a FlashCopy target, are listed.
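
For example, issued with no flags, so that all eligible VDisks are listed:

IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationshipcandidate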

12.2.5 Changing a Metro Mirror relationship
To modify the properties of a Metro Mirror relationship, you use the command svctask
chrcrelationship.

svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a
Metro Mirror relationship:
򐂰 Change the name of a Metro Mirror relationship
򐂰 Add a relationship to a group
򐂰 Remove a relationship from a group, using the -force flag

Note: When adding a Metro Mirror relationship to a consistency group that is not empty,
the relationship must have the same state and copy direction as the group in order to be
added to it.
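
As a sketch, renaming a relationship and adding it to a consistency group might look like the
following (the names are from the scenario in 12.3, the new name is an arbitrary placeholder,
and the exact flags should be verified against your SVC code level):

IBM_2145:ITSOSVC01:admin>svctask chrcrelationship -name MMREL3_NEW MMREL3
IBM_2145:ITSOSVC01:admin>svctask chrcrelationship -consistgrp CG_W2K_MM MMREL1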

12.2.6 Changing a Metro Mirror consistency group


To change the name of a Metro Mirror consistency group, you use the command svctask
chrcconsistgrp.

svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Metro Mirror
consistency group.
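
For example, to rename the consistency group used in the scenario in 12.3 (the new name is
an arbitrary placeholder):

IBM_2145:ITSOSVC01:admin>svctask chrcconsistgrp -name CG_W2K_MM_NEW CG_W2K_MM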

12.2.7 Starting a Metro Mirror relationship


To start a stand-alone Metro Mirror relationship, you use the command svctask
startrcrelationship.

svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Metro
Mirror relationship.

When issuing the command, the copy direction can be set if it is undefined and, optionally,
the secondary VDisk of the relationship can be marked as clean. The command fails if it is
used to attempt to start a relationship that is already part of a consistency group.

This command can only be issued to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (primary and secondary roles) and begins the
copy process. Otherwise this command restarts a previous copy process that was stopped
either by a stop command or by some I/O error.

If the resumption of the copy process leads to a period when the relationship is not consistent,
then you must specify the -force flag when restarting the relationship. This situation can
arise if, for example, the relationship was stopped, and then further writes were performed on
the original primary of the relationship. The use of the -force flag here is a reminder that the
data on the secondary becomes inconsistent while resynchronization (background copying) is
in progress, and is therefore not usable for disaster recovery purposes until the background
copy has completed.

In the idling state, you must specify the primary VDisk to indicate the copy direction. In other
connected states, you can provide the primary argument, but it must match the existing
setting.

12.2.8 Stopping a Metro Mirror relationship
To stop a stand-alone Metro Mirror relationship, you use the command svctask
stoprcrelationship.

svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a
relationship. It can also be used to enable write access to a consistent secondary VDisk
specifying the -access flag.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a
relationship that is part of a consistency group. You can issue this command to stop a
relationship that is copying from primary to secondary.

If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue a svctask startrcrelationship command. Write activity is no longer copied
from the primary to the secondary virtual disk. For a relationship in the
ConsistentSynchronized state, this command causes a consistency freeze.

When a relationship is in a consistent state (that is, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be
used with the stoprcrelationship command to enable write access to the secondary
virtual disk.

12.2.9 Starting a Metro Mirror consistency group


To start a Metro Mirror consistency group, you use the command svctask startrcconsistgrp.

svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Metro Mirror consistency group.
This command can only be issued to a consistency group that is connected.

For a consistency group that is idling, this command assigns a copy direction (primary and
secondary roles) and begins the copy process. Otherwise this command restarts a previous
copy process that was stopped either by a stop command or by some I/O error.

12.2.10 Stopping a Metro Mirror consistency group


To stop a Metro Mirror consistency group, you use the command svctask stoprcconsistgrp.

svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Metro
Mirror consistency group. It can also be used to enable write access to the secondary VDisks
in the group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary virtual disks belonging to the relationships in the
group. For a consistency group in the ConsistentSynchronized state, this command causes a
consistency freeze.

When a consistency group is in a consistent state (for example, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), then the -access argument can
be used with the svctask stoprcconsistgrp command to enable write access to the
secondary VDisks within that group.

12.2.11 Deleting a Metro Mirror relationship
To delete a Metro Mirror relationship, you use the command svctask rmrcrelationship.

svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two virtual disks. It
does not affect the virtual disks themselves.

If the relationship is disconnected at the time that the command is issued, then the
relationship is only deleted on the cluster on which the command is being run. When the
clusters reconnect, then the relationship is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still wish to remove the relationship on
both clusters, you can issue the rmrcrelationship command independently on both of the
clusters.

A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.

If you delete an inconsistent relationship, the secondary virtual disk becomes accessible even
though it is still inconsistent. This is the one case in which Metro Mirror does not inhibit
access to inconsistent data.
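
For example, to delete the stand-alone relationship used in the scenario in 12.3:

IBM_2145:ITSOSVC01:admin>svctask rmrcrelationship MMREL3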

12.2.12 Deleting a Metro Mirror consistency group


To delete a Metro Mirror consistency group, you use the command svctask rmrcconsistgrp.

svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Metro Mirror consistency group.
This command deletes the specified consistency group. You can issue this command for any
existing consistency group.

If the consistency group is disconnected at the time that the command is issued, then the
consistency group is only deleted on the cluster on which the command is being run. When
the clusters reconnect, the consistency group is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the consistency
group on both clusters, you can issue the svctask rmrcconsistgrp command separately on
both of the clusters.

If the consistency group is not empty, then the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
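
For example, to delete the consistency group used in the scenario in 12.3:

IBM_2145:ITSOSVC01:admin>svctask rmrcconsistgrp CG_W2K_MM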

12.2.13 Reversing a Metro Mirror relationship


To reverse a Metro Mirror relationship, you use the command svctask
switchrcrelationship.

svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of the primary and
secondary VDisks when a stand-alone relationship is in a consistent state. When issuing the
command, the desired primary is specified.

12.2.14 Reversing a Metro Mirror consistency group
To reverse a Metro Mirror consistency group, you use the command svctask
switchrcconsistgrp.

svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of the primary and
secondary VDisks when a consistency group is in a consistent state. This change is applied
to all the relationships in the consistency group. When issuing the command, the desired
primary is specified.

12.2.15 Detailed states


In the following sections, we detail the states that are reported for either consistency groups
or relationships, as well as the extra information that is available in each state. The major
states are designed to provide guidance about which configuration commands are available.

InconsistentStopped
This is a connected state. In this state, the primary is accessible for read and write I/O but the
secondary is not accessible for either. A copy process needs to be started to make the
secondary consistent.

This state is entered when the relationship or consistency group was InconsistentCopying
and has either suffered a persistent error or received a Stop command which has caused the
copy process to stop.

A Start command causes the relationship or consistency group to move to the
InconsistentCopying state. A Stop command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the secondary side
transitions to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

InconsistentCopying
This is a connected state. In this state, the primary is accessible for read and write I/O but the
secondary is not accessible for either read or write I/O.

This state is entered after a Start command is issued to an InconsistentStopped relationship
or consistency group. It is also entered when a forced start is issued to an idling or
ConsistentStopped relationship or consistency group.

A background copy process runs which copies data from the primary to the secondary virtual
disk.

In the absence of errors, an InconsistentCopying relationship is active, and the Copy
Progress increases until the copy process completes. In some error situations, the copy
progress might freeze or even regress.

A persistent error or Stop command places the relationship or consistency group into
InconsistentStopped state. A Start command is accepted, but has no effect.

If the background copy process completes on a stand-alone relationship, or on all
relationships for a consistency group, the relationship or consistency group transitions to
ConsistentSynchronized.

If the relationship or consistency group becomes disconnected, then the secondary side
transitions to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

ConsistentStopped
This is a connected state. In this state, the secondary contains a consistent image, but it
might be out-of-date with respect to the primary.

This state can arise when a relationship was in ConsistentSynchronized state and suffers an
error which forces a consistency freeze. It can also arise when a relationship is created with a
CreateConsistentFlag set to TRUE.

Normally, following an I/O error, subsequent write activity causes updates to the primary, and
the secondary is no longer synchronized (the synchronized attribute is set to FALSE). In this
case, to re-establish synchronization, consistency must be given up for a period. A Start
command with the Force option must be used to acknowledge this, and the relationship or
consistency group transitions to InconsistentCopying. Do this only after all outstanding
errors are repaired.

In the unusual case where the primary and secondary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a Start command takes the
relationship to ConsistentSynchronized. No Force option is required. Also in this unusual
case, a Switch command is permitted which moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the primary and secondary.

If the relationship or consistency group becomes disconnected, then the secondary side
transitions to ConsistentDisconnected. The primary side transitions to IdlingDisconnected.

An informational status log is generated every time a relationship or consistency group enters
the ConsistentStopped state with a status of Online. This can be configured to enable an
SNMP trap, providing a trigger for automation software to consider issuing a Start
command following a loss of synchronization.

ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O.
The secondary VDisk is accessible for read-only I/O.

Writes that are sent to the primary VDisk are sent to both primary and secondary VDisks.
Either good completion must be received for both writes, or the write must be failed to the
host, or a state transition out of ConsistentSynchronized must take place before a write is
completed to the host.

A Stop command takes the relationship to ConsistentStopped state. A Stop command with
the -access argument takes the relationship to the Idling state.

A Switch command leaves the relationship in the ConsistentSynchronized state, but reverses
the primary and secondary roles.

A Start command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the same transitions are
made as for ConsistentStopped.

Idling
This is a connected state. Both master and auxiliary disks are operating in the primary role.
Consequently both are accessible for write I/O.

In this state, the relationship or consistency group accepts a Start command. Metro Mirror
maintains a record of regions on each disk which received write I/O while Idling. This is used
to determine what areas need to be copied following a Start command.

The Start command must specify the new copy direction. A Start command can cause a
loss of consistency if either virtual disk in any relationship has received write I/O. This is
indicated by the synchronized status. If the Start command leads to loss of consistency, then
a Forced flag must be specified.

Following a Start command, the relationship or consistency group transitions to
ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there
is such a loss.

Also, while in this state, the relationship or consistency group accepts a Clean option on the
Start command. If the relationship or consistency group becomes disconnected, then both
sides change their state to IdlingDisconnected.

IdlingDisconnected
This is a disconnected state. The virtual disk or disks in this half of the relationship or
consistency group are all in the primary role and accept read or write I/O.

The main priority in this state is to recover the link and make the relationship or consistency
group connected once more.

No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state that is entered depends on the state of the other half of the
relationship or consistency group, which in turn depends on:
򐂰 The state when it became disconnected
򐂰 The write activity since it was disconnected
򐂰 The configuration activity since it was disconnected

If both halves are IdlingDisconnected, then the relationship becomes idling when
reconnected.

While IdlingDisconnected, if a write I/O is received that causes a loss of synchronization
(the synchronized attribute transitions from TRUE to FALSE) and the relationship was not
already stopped (either through a user stop or a persistent error), then an error log is raised
to notify you of this. This error log is the same as the one raised when the same situation
arises in the ConsistentSynchronized state.

InconsistentDisconnected
This is a disconnected state. The virtual disks in this half of the relationship or consistency
group are all in the secondary role and do not accept read or write I/O.

No configuration activity except for deletes is permitted until the relationship becomes
connected again.

When the relationship or consistency group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either:
򐂰 The relationship was InconsistentStopped when it became disconnected
򐂰 The user issued a Stop while disconnected

In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected
This is a disconnected state. The VDisks in this half of the relationship or consistency group
are all in the secondary role and accept read I/O but not write I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the
secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This corresponds to the time of the last successful heartbeat to the other
cluster.

A Stop with EnableAccessFlag set to TRUE transitions the relationship or consistency group
to IdlingDisconnected state. This allows write I/O to be performed to the virtual disks and is
used as part of a disaster recovery scenario.

When the relationship or consistency group becomes connected again, the relationship or
consistency group becomes ConsistentSynchronized only if this does not lead to a loss of
Consistency. This is the case provided:
򐂰 The relationship was ConsistentSynchronized when it became disconnected.
򐂰 No writes received successful completion at the primary while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state only applies to consistency groups. It is the state of a consistency group which has
no relationships and no other state information to show.

It is entered when a consistency group is first created. It is exited when the first relationship is
added to the consistency group at which point the state of the relationship becomes the state
of the consistency group.

12.2.16 Background copy


Metro Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy takes place on relationships which are in
InconsistentCopying state with a Status of Online.

The quota of background copy (configured on the intercluster link) is divided evenly between
the nodes that are performing background copy for one of the eligible relationships. This
allocation is made without regard for the number of disks that node is responsible for. Each
node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
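
For example, if the partnership bandwidth is set to 200 MBps and two nodes are performing
background copy, each node is allocated 100 MBps; if one of those nodes is performing
background copy for four relationships, each of those four relationships receives 25 MBps.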

For intracluster relationships, each node is assigned a static quota of 25 MBps.

12.3 Metro Mirror scenario using the CLI

Note: This example is for intercluster only. If you wish to set up intracluster, we highlight
those parts of the following procedure that you do not need to perform.

In the following scenario, we want to set up an intercluster Metro Mirror relationship for the
following VDisks between SVC cluster ITSOSVC01 and SVC cluster ITSOSVC02 at the
secondary site:
VDISK1: Database files
VDISK2: Database log files
VDISK3: Application files

Since data consistency is needed across VDISK1 and VDISK2, we create a consistency
group to handle Metro Mirror for them. Because, in this scenario, the application files are
independent of the database, we create a stand-alone Metro Mirror relationship for VDISK3.
The Metro Mirror setup is illustrated in Figure 12-6.

Figure 12-6 Metro Mirror scenario using the CLI

Setting up Metro Mirror


In the following section, we assume that the source and target VDisks have already been
created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up the Metro Mirror, the following steps must be performed:


򐂰 Create SVC partnership between ITSOSVC01 and ITSOSVC02, on both SVC clusters:
򐂰 Create a Metro Mirror consistency group:
– Name CG_W2K_MM
򐂰 Create the Metro Mirror relationship for VDISK1:
– Master VDISK1
– Auxiliary VDISK1T
– Auxiliary SVC cluster ITSOSVC02
– Name MMREL1
– Consistency group CG_W2K_MM
򐂰 Create the Metro Mirror relationship for VDISK2:
– Master VDISK2
– Auxiliary VDISK2T
– Auxiliary SVC cluster ITSOSVC02
– Name MMREL2
– Consistency group CG_W2K_MM
򐂰 Create the Metro Mirror relationship for VDISK3:
– Master VDISK3
– Auxiliary VDISK3T
– Auxiliary SVC cluster ITSOSVC02
– Name MMREL3

In the following section, each step is carried out using the CLI.

Creating SVC partnership between ITSOSVC01 and ITSOSVC02


We create the partnership on both clusters.

Note: If you are creating an intracluster Metro Mirror do not perform the next step; instead
go to “Creating a Metro Mirror Consistency Group” on page 451.

To verify that both clusters can communicate with each other, we can use the svcinfo
lsclustercandidate command. Example 12-1 confirms that our clusters are communicating,
as ITSOSVC02 is an eligible SVC cluster candidate for the SVC cluster partnership.

Example 12-1 Listing available SVC cluster for partnership


IBM_2145:ITSOSVC01:admin>svcinfo lsclustercandidate
id configured cluster_name
000002006040469E no ITSOSVC02

In Example 12-2, we create the partnership between ITSOSVC01 and ITSOSVC02, specifying a
background copy bandwidth of 500 MBps.

To verify the creation of the partnership, we issue the command svcinfo lscluster and see
that the partnership is only partially configured. It remains partially configured until we run
mkpartnership on the other cluster.

Example 12-2 Creating the partnership from ITSOSVC01 to ITSOSVC02


IBM_2145:ITSOSVC01:admin>svctask mkpartnership -bandwidth 500 ITSOSVC02

IBM_2145:ITSOSVC01:admin>svcinfo lscluster
id name location partnership bandwidth
cluster_IP_address cluster_service_IP_address id_alias
000002006180311C ITSOSVC01 local 9.43.86.29
9.43.86.30 000002006180311C
000002006040469E ITSOSVC02 remote partially_configured_local 500
9.43.86.40 9.43.86.41 000002006040469E

In Example 12-3, we create the partnership from ITSOSVC02 back to ITSOSVC01, again
specifying a background copy bandwidth of 500 MBps.

For completeness, we issue svcinfo lscluster and svcinfo lsclustercandidate prior to
creating the partnership.

After creating the partnership, we verify that the partnership is fully configured by re-issuing
the svcinfo lscluster command.

Example 12-3 Creating the partnership from ITSOSVC02 to ITSOSVC01
IBM_2145:ITSOSVC02:admin>svcinfo lscluster
id name location partnership bandwidth
cluster_IP_address cluster_service_IP_address id_alias
000002006040469E ITSOSVC02 local 9.43.86.40
9.43.86.41 000002006040469E

IBM_2145:ITSOSVC02:admin>svcinfo lsclustercandidate
id configured cluster_name
000002006180311C yes ITSOSVC01

IBM_2145:ITSOSVC02:admin>svctask mkpartnership -bandwidth 500 ITSOSVC01

IBM_2145:ITSOSVC02:admin>svcinfo lscluster
id name location partnership bandwidth
cluster_IP_address cluster_service_IP_address id_alias
000002006040469E ITSOSVC02 local 9.43.86.40
9.43.86.41 000002006040469E
000002006180311C ITSOSVC01 remote fully_configured 500
9.43.86.29 9.43.86.30 000002006180311C

Creating a Metro Mirror Consistency Group


In Example 12-4, we create the Metro Mirror consistency group using the svctask
mkrcconsistgrp command. This consistency group will be used for the Metro Mirror
relationships for the database VDisks and is named CG_W2K_MM.

Example 12-4 Creating the Metro Mirror consistency group CG_W2K_MM


IBM_2145:ITSOSVC01:admin>svctask mkrcconsistgrp -cluster ITSOSVC02 -name CG_W2K_MM
RC Consistency Group, id [255], successfully created
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id
aux_cluster_name primary state relationship_count copy_type
255 CG_W2K_MM 000002006180311C ITSOSVC01 000002006040469E
ITSOSVC02 empty 0 empty_group

Creating the Metro Mirror relationship for VDISK1 and VDISK2


In Example 12-5, we create the Metro Mirror relationships MMREL1 and MMREL2 and make them
members of the Metro Mirror consistency group CG_W2K_MM.

To verify the created Metro Mirror relationships we list them with the command svcinfo
lsrcrelationship.

Example 12-5 Creating the Metro Mirror relationships MMREL1 and MMREL2
IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
name vdisk_UID
6 VDISK3 0 io_grp0 online
600507680181011A7800000000000039
5 VDISK2 0 io_grp0 online
MMREL2 600507680181011A7800000000000038
4 VDISK1 0 io_grp0 online
MMREL1 600507680181011A7800000000000037

IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master VDISK1 -aux VDISK1T -cluster
ITSOSVC02 -consistgrp CG_W2K_MM -name MMREL1
RC Relationship, id [4], successfully created

IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master VDISK2 -aux VDISK2T -cluster
ITSOSVC02 -consistgrp CG_W2K_MM -name MMREL2
RC Relationship, id [5], successfully created

IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship -delim :


id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster
_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_g
roup_name:state:bg_copy_priority:progress:copy_type
4:MMREL1:000002006180311C:ITSOSVC01:4:VDISK1:000002006040469E:ITSOSVC02:20:VDISK1T:master:2
55:CG_W2K_MM:inconsistent_stopped:50:0:metro
5:MMREL2:000002006180311C:ITSOSVC01:5:VDISK2:000002006040469E:ITSOSVC02:19:VDISK2T:master:2
55:CG_W2K_MM:inconsistent_stopped:50:0:metro

Creating the stand-alone Metro Mirror relationship for VDISK3


In Example 12-6, we create the stand-alone Metro Mirror relationship MMREL3 for VDISK3. Once
it is created, we will check the status of each of our Metro Mirror relationships.

You will note that the state of MMREL3 is consistent_stopped, and this is because it was
created with the -sync option. The -sync option indicates that the secondary (auxiliary) virtual
disk is already synchronized with the primary (master) virtual disk. The initial background
synchronization is skipped when this option is used.

MMREL2 and MMREL1 are in the inconsistent_stopped state, because they were not created with
the -sync option, so their auxiliary VDisks need to be synchronized with their primary VDisks.

Example 12-6 Creating a stand-alone Metro Mirror relationship and checking


IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master VDISK3 -aux VDISK3T -sync
-cluster ITSOSVC02 -name MMREL3
RC Relationship, id [6], successfully created

IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL3


id 6
name MMREL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 6
master_vdisk_name VDISK3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 18
aux_vdisk_name VDISK3T
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress
freeze_time
status online
sync in_sync
copy_type metro

IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL2


id 5
name MMREL2
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 5
master_vdisk_name VDISK2
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 19
aux_vdisk_name VDISK2T
primary master
consistency_group_id 255
consistency_group_name CG_W2K_MM
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type metro

IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL1


id 4
name MMREL1
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 4
master_vdisk_name VDISK1
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 20
aux_vdisk_name VDISK1T
primary master
consistency_group_id 255
consistency_group_name CG_W2K_MM
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type metro

Executing Metro Mirror


Now that we have created the Metro Mirror consistency group and relationships, we are
ready to use the Metro Mirror relationships in our environment.

When implementing Metro Mirror, the goal is to reach a consistent and synchronized state
which can provide redundancy for a dataset, in case a hardware failure occurs that affects the
SAN at the production site.

In the following section, we show how to stop and start the stand-alone Metro Mirror
relationships and the consistency group.

Starting a stand-alone Metro Mirror relationship


In Example 12-7, we start the stand-alone Metro Mirror relationship MMREL3. Because the
Metro Mirror relationship was in the Consistent stopped state and no updates have been
made to the primary VDisk, the relationship enters the Consistent synchronized state.

Example 12-7 Starting the stand-alone Metro Mirror relationship
IBM_2145:ITSOSVC01:admin>svctask startrcrelationship MMREL3
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL3
id 6
name MMREL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 6
master_vdisk_name VDISK3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 18
aux_vdisk_name VDISK3T
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

Starting a Metro Mirror consistency group


In Example 12-8, we start the Metro Mirror consistency group CG_W2K_MM. Because the
consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying
state until the background copy has completed for all relationships in the consistency group.

Upon completion of the background copy, it enters the Consistent synchronized state (see
Figure 12-5 on page 435).

Example 12-8 Starting the Metro Mirror consistency group


IBM_2145:ITSOSVC01:admin>svctask startrcconsistgrp CG_W2K_MM
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_MM
id 255
name CG_W2K_MM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status online
sync
copy_type metro
RC_rel_id 4
RC_rel_name MMREL1
RC_rel_id 5
RC_rel_name MMREL2

Monitoring background copy progress


To monitor the background copy progress, we can use the svcinfo lsrcrelationship
command. This command will show us all defined Metro Mirror relationships if used without
any arguments.

Our Metro Mirror relationship is shown in Example 12-9.

Note: Setting up SNMP traps for the SVC enables automatic notification when Metro
Mirror consistency groups or relationships change state.

Example 12-9 Monitoring background copy progress example


IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL1
id 4
name MMREL1
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 4
master_vdisk_name VDISK1
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 20
aux_vdisk_name VDISK1T
primary master
consistency_group_id 255
consistency_group_name CG_W2K_MM
state inconsistent_copying
bg_copy_priority 50
progress 6
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL2
id 5
name MMREL2
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 5
master_vdisk_name VDISK2
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 19
aux_vdisk_name VDISK2T
primary master
consistency_group_id 255
consistency_group_name CG_W2K_MM
state inconsistent_copying
bg_copy_priority 50
progress 6
freeze_time
status online
sync
copy_type metro

When all the Metro Mirror relationships complete the background copy, the consistency group
enters the consistent synchronized state, as shown in Example 12-10.

Example 12-10 Listing the Metro Mirror consistency group


IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_MM
id 255
name CG_W2K_MM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status online
sync
copy_type metro
RC_rel_id 4
RC_rel_name MMREL1
RC_rel_id 5
RC_rel_name MMREL2

Stopping a stand-alone Metro Mirror relationship


In Example 12-11, we stop the stand-alone Metro Mirror relationship, while enabling access
(write I/O) to both the primary and the secondary VDisk, and the relationship enters the Idling
state.

Example 12-11 Stopping stand-alone Metro Mirror relationship & enabling access to secondary VDisk
IBM_2145:ITSOSVC01:admin>svctask stoprcrelationship -access MMREL3
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL3
id 6
name MMREL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 6
master_vdisk_name VDISK3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 18
aux_vdisk_name VDISK3T
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro

Stopping a Metro Mirror consistency group


In Example 12-12, we stop the Metro Mirror consistency group without specifying the -access
flag. This means that the consistency group enters the Consistent stopped state.

Example 12-12 Stopping a Metro Mirror consistency group


IBM_2145:ITSOSVC01:admin>svctask stoprcconsistgrp CG_W2K_MM
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_MM
id 255
name CG_W2K_MM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary master
state consistent_stopped
relationship_count 2
freeze_time 2006/06/29/23/57/48
status online
sync in_sync
copy_type metro
RC_rel_id 4
RC_rel_name MMREL1
RC_rel_id 5
RC_rel_name MMREL2

If afterwards we want to enable access (write I/O) to the secondary VDisk, we can re-issue
the svctask stoprcconsistgrp command, specifying the -access flag, and the consistency
group transitions to the Idling state, as shown in Example 12-13.

Example 12-13 Stopping a Metro Mirror consistency group and enabling access to the secondary
IBM_2145:ITSOSVC01:admin>svctask stoprcconsistgrp -access CG_W2K_MM
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_MM
id 255
name CG_W2K_MM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 4
RC_rel_name MMREL1
RC_rel_id 5
RC_rel_name MMREL2

Restarting a Metro Mirror relationship in the Idling state


When restarting a Metro Mirror relationship in the Idling state, we must specify the copy
direction.

If any updates have been performed on either the master or the auxiliary VDisk, consistency
will be compromised. Therefore, we must issue the -force flag to restart the relationship. If
the -force flag is not used, the command will fail, as shown in Example 12-14.

Example 12-14 Restarting a Metro Mirror relationship after updates in the Idling state
IBM_2145:ITSOSVC01:admin>svctask startrcrelationship -primary master MMREL3
CMMVC5978E The operation was not performed because the relationship is not synchronized.
IBM_2145:ITSOSVC01:admin>svctask startrcrelationship -primary master -force MMREL3
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL3
id 6
name MMREL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 6
master_vdisk_name VDISK3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 18
aux_vdisk_name VDISK3T
primary master
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50
progress 2
freeze_time
status online
sync
copy_type metro

Restarting a Metro Mirror consistency group in the Idling state


When restarting a Metro Mirror consistency group in the Idling state, we must specify the
copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the
Metro Mirror relationships in the consistency group, then consistency will be compromised.
Therefore we must issue the -force flag to start the relationship. If the -force flag is not used,
then the command will fail.

In Example 12-15, we change the copy direction by specifying the auxiliary VDisks to be the
primaries.

Example 12-15 Restarting a Metro Mirror relationship while changing the copy direction
IBM_2145:ITSOSVC01:admin>svctask startrcconsistgrp -primary aux CG_W2K_MM
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_MM
id 255
name CG_W2K_MM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status online
sync
copy_type metro
RC_rel_id 4
RC_rel_name MMREL1
RC_rel_id 5
RC_rel_name MMREL2

Switching copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the consistent synchronized state, we can change the
copy direction for the relationship, using the command svctask switchrcrelationship,
specifying the primary VDisk.

If the primary is specified when issuing the command, and it is already the primary, the
command has no effect.

In Example 12-16, we change the copy direction for the stand-alone Metro Mirror relationship,
specifying the auxiliary VDisk to be the primary.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to
the VDisk which transitions from primary to secondary, since all I/O will be inhibited to that
VDisk when it becomes the secondary. Therefore, careful planning is required prior to
using the svctask switchrcrelationship command.

Example 12-16 Switching the copy direction for a Metro Mirror relationship
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL3
id 6
name MMREL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 6
master_vdisk_name VDISK3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 18
aux_vdisk_name VDISK3T
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

IBM_2145:ITSOSVC01:admin>svctask switchrcrelationship -primary aux MMREL3


IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship MMREL3
id 6
name MMREL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 6
master_vdisk_name VDISK3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 18
aux_vdisk_name VDISK3T
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

Switching copy direction for a Metro Mirror consistency group
When a Metro Mirror consistency group is in the consistent synchronized state, we can
change the copy direction for the consistency group, using the command svctask
switchrcconsistgrp and specifying the desired primary.

If the primary specified when issuing the command is already the primary, the command has
no effect.

In Example 12-17, we change the copy direction for the Metro Mirror consistency group,
specifying the auxiliary to be the primary.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to
the VDisks which transitions from primary to secondary, since all I/O will be inhibited when
they become the secondary. Therefore, careful planning is required prior to using the
svctask switchrcconsistgrp command.

Example 12-17 Switching the copy direction for a Metro Mirror consistency group
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_MM
id 255
name CG_W2K_MM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status online
sync
copy_type metro
RC_rel_id 4
RC_rel_name MMREL1
RC_rel_id 5
RC_rel_name MMREL2

IBM_2145:ITSOSVC01:admin>svctask switchrcconsistgrp -primary aux CG_W2K_MM


IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_MM
id 255
name CG_W2K_MM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status online
sync
copy_type metro
RC_rel_id 4
RC_rel_name MMREL1
RC_rel_id 5
RC_rel_name MMREL2

12.4 Metro Mirror scenario using the GUI
In this section, we show how to perform Metro Mirror operations using the GUI.

Note: This example is for intercluster only. If you wish to set up intracluster, we highlight
those parts of the following procedure that you do not need to perform.

In the following scenario, we will set up Metro Mirror for the following VDisks from ITSOSVC01
to ITSOSVC02 at the secondary site:
VDISK1: Database files
VDISK2: Database log files
VDISK3: Application files

Since data consistency is needed across VDISK1 and VDISK2, we will create a consistency
group to ensure that those two VDisks maintain consistency. Because, in this scenario, the
application files are independent of the database, we create a stand-alone Metro Mirror
relationship for VDISK3. The Metro Mirror setup is illustrated in Figure 12-7.

Figure 12-7 Metro Mirror scenario using the GUI

12.4.1 Setting up Metro Mirror


In the following section, we assume that the source and target VDisks have already been
created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up the Metro Mirror, you must perform the following steps:
򐂰 Create SVC partnership between ITSOSVC01 and ITSOSVC02, on both SVC clusters
򐂰 Create a Metro Mirror consistency group
– Name CG_W2K_MM
򐂰 Create the Metro Mirror relationship for VDISK1
– Master VDISK1
– Auxiliary VDISK1T
– Auxiliary SVC cluster ITSOSVC02
– Name MMREL1
– Consistency group CG_W2K_MM
򐂰 Create the Metro Mirror relationship for VDISK2
– Master VDISK2
– Auxiliary VDISK2T
– Auxiliary SVC cluster ITSOSVC02
– Name MMREL2
– Consistency group CG_W2K_MM
򐂰 Create the Metro Mirror relationship for VDISK3
– Master VDISK3
– Auxiliary VDISK3T
– Auxiliary SVC cluster ITSOSVC02
– Name MMREL3

Creating an SVC partnership between ITSOSVC01 and ITSOSVC02


In this section, each step is carried out using the GUI. We do this operation on both clusters.

Note: If you are creating an intracluster Metro Mirror, do not perform the next step,
“Creating Cluster Partnership”; instead go to “Creating a Metro Mirror Consistency Group”
on page 465.

To create a Metro Mirror partnership between the SVC clusters using the GUI we launch the
SVC GUI for ITSOSVC01. Then we select Manage Copy Services and click Metro & Mirror
Cluster Partnership, as shown in Figure 12-8.

Figure 12-8 Selecting Metro Mirror Cluster Partnership on ITSOSVC01

To confirm that we want to create a Metro Mirror SVC cluster partnership, we click Create, as
shown in Figure 12-9.

Figure 12-9 Confirming that a Metro Mirror partnership is to be created

In Figure 12-10, the available SVC cluster candidates are listed, which in our case is only
ITSOSVC02. We select ITSOSVC02 and specify the available bandwidth for background copy, in
this case 500 MBps and then click OK.

Figure 12-10 Selecting SVC partner and specifying bandwidth for background copy

In the resulting window shown in Figure 12-11, the created Metro Mirror cluster partnership is
shown as Partially Configured.

To fully configure the Metro Mirror cluster partnership, we must carry out the same steps on
ITSOSVC02 as we did on ITSOSVC01, and for simplicity, in the following figures only the last
two windows are shown.

Figure 12-11 Metro Mirror cluster partnership is partially configured

Launching the SVC GUI for ITSOSVC02, we select ITSOSVC01 for the Metro Mirror cluster
partnership and specify the available bandwidth for background copy, again 500 MBps, and
then click OK, as shown in Figure 12-12.

Figure 12-12 Selecting SVC partner and specifying bandwidth for background copy

Now that both sides of the SVC Cluster Partnership are defined, the resulting window shown
in Figure 12-13 confirms that our Metro Mirror cluster partnership is Fully Configured.

Figure 12-13 Metro Mirror cluster partnership is fully configured

The GUI for ITSOSVC02 is no longer needed. Close this and use the GUI for cluster ITSOSVC01
for all further steps.

Creating a Metro Mirror Consistency Group
To create the consistency group to be used for the Metro Mirror relationships for the VDisks
with the database and log files, we select Manage Copy Services and click Metro Mirror
Consistency Groups, as shown in Figure 12-14.

Figure 12-14 Selecting Metro Mirror Consistency Groups

Next, we have the opportunity to filter the list of defined consistency groups; however, we
just click Bypass Filter to continue to the next window.

To start the creation process, we select Create Consistency Group from the scroll menu
and click Go, as shown in Figure 12-15.

Figure 12-15 Create a consistency group

We are presented with an overview of the steps in the process of creating a consistency
group; we click Next to proceed.

As shown in Figure 12-16, we specify the consistency group name and whether it is to be
used for inter-cluster or intra-cluster relationships. In our scenario we select inter-cluster and
click Next.

Figure 12-16 Specifying consistency group name and type

As shown in Figure 12-17, there are currently no defined Metro Mirror relationships to be
included in the Metro Mirror consistency group (since we have not defined any at this point),
and we click Next to proceed.

Figure 12-17 There are no defined Metro Mirror relationships to be added

As shown in Figure 12-18, we verify the settings for the consistency group and click Finish to
create the Metro Mirror consistency group.

Figure 12-18 Verifying the settings for the Metro Mirror consistency group

When the Metro Mirror consistency group is created, we are returned to the list of defined
consistency groups shown in Figure 12-19.

Figure 12-19 Viewing Metro Mirror consistency groups

Creating the Metro Mirror relationships for VDISK1 and VDISK2
To create the Metro Mirror relationships for VDISK1 and VDISK2 we select Manage Copy
Services and click Metro Mirror Cluster Relationships, as shown in Figure 12-20.

Figure 12-20 Selecting Metro Mirror Relationships

Next, we have the opportunity to filter the list of defined Metro Mirror relationships;
however, we just click Bypass Filter to continue to the next window.

To start the creation process we select Create a Relationship from the scroll menu and click
Go, as shown in Figure 12-21.

Figure 12-21 Create a relationship

Next we are presented with an overview of the steps in the process of creating a relationship;
click Next to proceed.

As shown in Figure 12-22, we name our first Metro Mirror relationship (MMREL1) and specify
that the relationship will be intercluster.

Figure 12-22 Naming the Metro Mirror relationship and selecting the auxiliary cluster

The next step enables us to select a master VDisk. As this list could potentially be large,
the Filtering Master VDisks Candidates window appears, which enables us to reduce the
list of eligible VDisks based on a defined filter.

In Figure 12-23, we filter for VDISK* and click Next.

Figure 12-23 Defining filter for master VDisk candidates

As shown in Figure 12-24, we select VDISK1 to be the master VDisk of the relationship, and
click Next to proceed.

Figure 12-24 Selecting the master VDisk

The next step will require us to select an auxiliary VDisk. The SVC wizard will automatically
filter this list, so that only eligible VDisks are shown. Eligible VDisks are those that have the
same size as the master VDisk and are not already part of a Metro Mirror relationship.

As shown in Figure 12-25, we select VDISK1T as the auxiliary VDisk of the relationship,
and click Next to proceed.

Figure 12-25 Selecting the auxiliary VDisk

As shown in Figure 12-26, we select the relationship to be part of the consistency group that
we created and click Next to proceed.

Figure 12-26 Selecting the relationship to be part of a consistency group

Finally, in Figure 12-27 we verify the Metro Mirror relationship and click Finish to create it.

Figure 12-27 Verifying the Metro Mirror relationship

Once the relationship is successfully created, we are returned to the Metro Mirror relationship
list.

Following the same process, we create the second Metro Mirror relationship, MMREL2; both
relationships are listed in Figure 12-28.

Figure 12-28 Viewing Metro Mirror relationships

Creating the stand-alone Metro Mirror relationship for VDISK3


To create the stand-alone Metro Mirror relationship, we start the creation process by selecting
Create a Relationship from the scroll menu and click Go, as shown in Figure 12-29.

Figure 12-29 Create a Metro Mirror relationship

Next, we are presented with an overview of the steps in the process of creating a
relationship; we click Next to proceed.

As shown in Figure 12-30, we name the relationship (MMREL3) and specify that it is an
intercluster relationship and click Next.

Figure 12-30 Specifying the Metro Mirror relationship name and auxiliary cluster

As shown in Figure 12-31, we are queried for a filter prior to the presentation of the master
VDisk candidates. We filter for VDISK* and click Next.

Figure 12-31 Filtering VDisk candidates

As shown in Figure 12-32, we select VDISK3 to be the master VDisk of the relationship, and
click Next to proceed.

Figure 12-32 Selecting the master VDisk

As shown in Figure 12-33, we select VDISK3T as the auxiliary VDisk of the relationship,
and click Next to proceed.

Figure 12-33 Selecting the auxiliary VDisk

As shown in Figure 12-34, we specify that the master and auxiliary VDisks are already
synchronized (for the purpose of this example, we can assume that they are pristine). As we
did not select a consistency group, we are creating a stand-alone Metro Mirror relationship.

Figure 12-34 Selecting options for the Metro Mirror relationship

Note: To add a Metro Mirror relationship to a consistency group, it must be in the same
state as the consistency group.

Even if we intended to make the Metro Mirror relationship MMREL3 part of the consistency
group CG_W2K_MM, we are not offered the option since the state of the relationship MMREL3 is
Consistent stopped, because we selected the synchronized option, and the state of the
consistency group CG_W2K_MM is currently Inconsistent stopped.

The status of the Metro Mirror relationships can be seen in Figure 12-36.

Finally, Figure 12-35 shows the actions that will be performed. We click Finish to create this
new relationship.

Figure 12-35 Verifying the Metro Mirror relationship

After successful creation, we are returned to the Metro Mirror relationship screen.
Figure 12-36 now shows all our defined Metro Mirror relationships.

Figure 12-36 Viewing Metro Mirror relationships

Executing Metro Mirror


Now that we have created the Metro Mirror consistency group and relationships, we are
ready to use the Metro Mirror relationships in our environment.

When performing Metro Mirror, the goal is to reach a consistent and synchronized state which
can provide redundancy for a dataset, in case a hardware failure occurs that affects the SAN
at the production site.

In the following section, we show how to stop and start the stand-alone Metro Mirror
relationship and the consistency group.

Starting a stand-alone Metro Mirror relationship


In Figure 12-37, we select the stand-alone Metro Mirror relationship MMREL3, and from the
scroll menu, we select Start Copy Process and click Go.

Figure 12-37 Starting a stand-alone Metro Mirror relationship

In Figure 12-38, we do not need to change the Forced start, Mark as clean, or Copy direction
parameters, as this is the first time we are invoking this Metro Mirror relationship (and we
defined the relationship as already synchronized in Figure 12-34 on page 475). We click OK
to start the stand-alone Metro Mirror relationship MMREL3.

Figure 12-38 Selecting options and starting the copy process

Since the Metro Mirror relationship was in the Consistent stopped state and no updates have
been made on the primary VDisk, the relationship enters the Consistent synchronized state,
as shown in Figure 12-39.

Figure 12-39 Viewing Metro Mirror relationships

Starting a Metro Mirror consistency group
To start the Metro Mirror consistency group CG_W2K_MM, we select Metro Mirror Consistency
Groups shown in Figure 12-40.

Figure 12-40 Selecting Metro Mirror Consistency Groups

We click Bypass Filter to continue.

In Figure 12-41, we select the Metro Mirror consistency group CG_W2K_MM, and from the scroll
menu, we select Start Copy Process and click Go.

Figure 12-41 Selecting start copy process

As shown in Figure 12-42, we click OK to start the copy process. (We cannot select the
Forced start, Mark as clean, or Copy Direction options, as our consistency group is currently
in the Inconsistent stopped state.)

Figure 12-42 Selecting options and starting the copy process

As shown in Figure 12-43, we are returned to the Metro Mirror consistency group list and the
consistency group CG_W2K_MM has transitioned to the Inconsistent copying state.

Figure 12-43 Viewing Metro Mirror consistency groups

Since the consistency group was in the Inconsistent stopped state, it enters the Inconsistent
copying state until the background copy has completed for all relationships in the consistency
group. Upon completion of the background copy for all relationships in the consistency group,
it enters the Consistent synchronized state.

Monitoring background copy progress
The status of the background copy progress can either be shown on the Viewing Metro Mirror
Relationships panel, as in Figure 12-44, or by opening the Manage Progress view under My
Work and clicking View progress, which displays the Metro Mirror progress as shown in
Figure 12-45.

Figure 12-44 Viewing Metro Mirror relationships

Figure 12-45 View Metro Mirror progress

Note: Setting up SNMP traps for the SVC enables automatic notification when Metro
Mirror consistency groups or relationships change state.

Stopping a stand-alone Metro Mirror relationship
To stop a Metro Mirror relationship while enabling access (write I/O) to both the primary and
the secondary VDisk, we select the relationship, select Stop Copy Process from the scroll
menu, and click Go, as shown in Figure 12-46.

Figure 12-46 Stopping a stand-alone Metro Mirror relationship

As shown in Figure 12-47, we check the enable access option and click OK to stop the Metro
Mirror relationship.

Figure 12-47 Enable access to the secondary VDisk while stopping the relationship

As shown in Figure 12-48, the Metro Mirror relationship transitions to the Idling state when
stopped while enabling access to the secondary VDisk.

Figure 12-48 Viewing the Metro Mirror relationships
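
For reference, the equivalent CLI invocation is sketched below, again using the relationship name from this example:

svctask stoprcrelationship -access MMREL3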



Stopping a Metro Mirror consistency group
As shown in Figure 12-49, we select the Metro Mirror consistency group and Stop Copy
Process from the scroll menu and click Go.

Figure 12-49 Selecting the Metro Mirror consistency group to be stopped

As shown in Figure 12-50, we click OK without specifying access to the secondary VDisk.

Figure 12-50 Stopping the consistency group, without enabling access to the secondary VDisk

As shown in Figure 12-51, the consistency group enters the Consistent stopped state when
stopped without enabling access to the secondary.

Figure 12-51 Viewing Metro Mirror consistency groups

If we afterwards want to enable access (write I/O) to the secondary VDisks, we can reissue
Stop Copy Process, this time specifying that access to the secondary VDisks is to be enabled.



In Figure 12-52, we select the Metro Mirror consistency group and Stop Copy Process from
the scroll menu and click Go.

Figure 12-52 Stopping the Metro Mirror consistency group

As shown in Figure 12-53, we check the Enable access box and click OK.

Figure 12-53 Enabling access to the secondary VDisks

When applying the enable access option, the consistency group transitions to the Idling state
shown in Figure 12-54.

Figure 12-54 Viewing the Metro Mirror consistency group is in the Idling state



Restarting a Metro Mirror relationship in the Idling state
When restarting a Metro Mirror relationship in the Idling state, we must specify the copy
direction.

If any updates have been performed on either the master or the auxiliary VDisk in the Metro
Mirror relationship, then consistency will have been compromised. In this situation, we must
check the Force option to start the copy process; otherwise, the command will fail.

As shown in Figure 12-55, we select the Metro Mirror relationship and Start Copy Process
from the scroll menu and click Go.

Figure 12-55 Starting a stand-alone Metro Mirror relationship in the Idling state

As shown in Figure 12-56, we check the Force option, since write I/O has been performed
while in the Idling state, and we select the copy direction by defining the master VDisk as the
primary; then we click OK.

Figure 12-56 Starting the copy process



As shown in Figure 12-57, the Metro Mirror relationship enters the Consistent copying state.
When the background copy is complete, the relationship transitions to the Consistent
synchronized state.

Figure 12-57 Viewing the Metro Mirror relationships

Restarting a Metro Mirror consistency group in the Idling state


When restarting a Metro Mirror consistency group in the Idling state, we must specify the
copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the
Metro Mirror relationships in the consistency group, then consistency will have been
compromised. In this situation, we must check the Force option to start the copy process;
otherwise, the command will fail.

As shown in Figure 12-58, we select the Metro Mirror consistency group and Start Copy
Process from the scroll menu and click Go.

Figure 12-58 Starting the copy process



As shown in Figure 12-59, we check the Force option and set the copy direction by selecting
the master as the primary.

Figure 12-59 Starting the copy process for the consistency group

When the background copy completes, the Metro Mirror consistency group enters the
Consistent synchronized state, as shown in Figure 12-60.

Figure 12-60 Viewing Metro Mirror consistency groups

Switching copy direction for a Metro Mirror relationship


When a Metro Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship.

In Figure 12-61, we select the relationship MMREL3 and Switch Copy Direction from the scroll
menu and click Go.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to
the VDisk that transitions from primary to secondary, since all I/O will be inhibited to that
VDisk when it becomes the secondary. Therefore, careful planning is required prior to
switching the copy direction for a Metro Mirror relationship.



Figure 12-61 Selecting the relationship for which the copy direction is to be changed

In Figure 12-62, we see that the current primary VDisk is the master, so to change the copy
direction for the stand-alone Metro Mirror relationship we specify the auxiliary VDisk to be the
primary, and click OK.

Figure 12-62 Selecting the primary VDisk to switch the copy direction

The copy direction is now switched and we are returned to the Metro Mirror relationship list,
where we see that the copy direction has been switched as shown in Figure 12-63.

Figure 12-63 Viewing Metro Mirror relationship, after changing the copy direction
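
On the CLI, the same switch can be sketched as follows, where -primary aux makes the auxiliary VDisk the primary (relationship name from this example):

svctask switchrcrelationship -primary aux MMREL3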



Switching copy direction for a Metro Mirror consistency group
When a Metro Mirror consistency group is in the Consistent synchronized state, we can
change the copy direction for the Metro Mirror consistency group.

In Figure 12-64 we select the consistency group CG_W2K_MM and Switch Copy Direction from
the scroll menu and click Go.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to
the VDisks that transition from primary to secondary, since all I/O will be inhibited when
they become the secondary. Therefore, careful planning is required prior to switching the
copy direction.

Figure 12-64 Selecting the consistency group for which the copy direction is to be changed

In Figure 12-65, we see that currently the primary VDisks are the master, so to change the
copy direction for the Metro Mirror consistency group, we specify the auxiliary VDisks to
become the primary, and click OK.

Figure 12-65 Selecting the primary VDisk to switch the copy direction

The copy direction is now switched and we are returned to the Metro Mirror consistency
group list.



Chapter 13. Copy Services: Global Mirror


In this chapter we describe the Global Mirror (GM) copy service. GM provides and maintains
a consistent mirrored copy of a source VDisk on a target VDisk. Data is written from the
source VDisk to the target VDisk asynchronously. This method was previously known as
Asynchronous Peer-to-Peer Remote Copy.

13.1 Global Mirror
Global Mirror works by defining a GM relationship between two VDisks of equal size and
maintaining data consistency in an asynchronous manner. When a host writes to the source
VDisk, the data is copied from the source VDisk cache to the target VDisk cache; confirmation
of I/O completion is transmitted back to the host at the initiation of that data copy, rather than
upon its completion.

Note: The minimum firmware requirement for GM functionality is v4.1.1. Any cluster or
partner cluster not running this minimum level will not have GM functionality available.
Even if you only wish to use intracluster GM, the functionality will not be available while the
cluster is in a partnership with a downlevel partner cluster.

The SVC provides both intracluster and intercluster Global Mirror, which are described below.

Intracluster Global Mirror


Although Global Mirror is available for intracluster use, it has no functional value for
production; intracluster Metro Mirror provides the same capability with less overhead.
However, leaving this functionality in place simplifies testing and does allow customer
experimentation (for example, to validate server failover on a single test cluster).

Intercluster Global Mirror


Intercluster Global Mirror operations require a pair of SVC clusters that are commonly
separated by a number of moderately high bandwidth links. The two SVC clusters must each
be defined in an SVC cluster partnership to establish a fully functional Global Mirror
relationship.

Note: When a local and a remote fabric are connected together for Global Mirror purposes,
the ISL hop count between a local node and a remote node should not exceed seven hops.

The Global Mirror remote copy technique


Global Mirror is an asynchronous remote copy technique, which is briefly explained below. To
illustrate the differences between the two techniques, synchronous remote copy is also
explained.

Synchronous remote copy


Metro Mirror is a fully synchronous remote copy technique that ensures updates are
committed at both the primary and the secondary VDisk before the application receives
completion for an update.

Figure 13-1 illustrates how a write operation to the master VDisk is mirrored to the cache for
the auxiliary VDisk before an acknowledge of the write is sent back to the host issuing the
write. This ensures that the secondary is real-time synchronized, in case it is needed in a
failover situation.

However, this also means that the application is fully exposed to the latency and bandwidth
limitations of the communication link to the secondary site. This might lead to unacceptable
application performance, particularly when placed under peak load. This is the reason for the
distance limitations when applying Metro Mirror.



Figure 13-1 Write on VDisk in Metro Mirror relationship (diagram: (1) host write to master VDisk cache; (2) mirror write to auxiliary VDisk cache; (3) acknowledge write from the auxiliary; (4) acknowledgment to the host)

Asynchronous remote copy


In an asynchronous remote copy, the application is given completion to an update when it is
sent to the secondary site, but the update is not necessarily committed at the secondary site
at that time. This provides the capability of performing remote copy over distances exceeding
the limitations of synchronous remote copy.

Figure 13-2 illustrates that a write operation to the master VDisk is acknowledged back to the
host issuing the write before it is mirrored to the cache for the auxiliary VDisk. In a failover
situation where the secondary site needs to become the primary source of your data, any
applications that will use this data must have their own built-in recovery mechanisms, for
example, transaction log replay.

Figure 13-2 Write on VDisk in Global Mirror relationship (diagram: (1) host write to master VDisk cache; (2) acknowledge write to the host; (3) write to the remote auxiliary VDisk cache)



13.1.1 Supported methods for synchronizing
This section describes three methods that can be used to establish a relationship.

Full synchronization after Create


This is the default method. It is the simplest, in that it requires no administrative activity apart
from issuing the necessary commands. However, in some environments, the bandwidth
available will make this method unsuitable.

The sequence for a single relationship is:


򐂰 A mkrcrelationship is issued without specifying the -sync flag.
򐂰 A startrcrelationship is issued without the -clean flag.
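
As an illustration, the sequence above might look like the following on the CLI (the VDisk, cluster, and relationship names are examples only):

svctask mkrcrelationship -master GM_VDisk1 -aux GM_VDisk2 -cluster ITSO-CLS2 -global -name GMREL1
svctask startrcrelationship GMREL1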

Synchronized before Create


In this method, the administrator must ensure that the master and auxiliary virtual disks
contain identical data before creating the relationship. There are two ways in which this might
be done:
򐂰 Both disks are created with the -fmtdisk feature so as to make all data zero.
򐂰 A complete tape image (or other method of moving data) is copied from one disk to the
other.

In either technique, no write I/O must take place to either the master or the auxiliary VDisk
before the relationship is established.

Then, the administrator must ensure that:


򐂰 A mkrcrelationship is issued with the -sync flag.
򐂰 A startrcrelationship is issued without the -clean flag.
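
Under the same naming assumptions as before, the CLI sequence for this method might be:

svctask mkrcrelationship -master GM_VDisk1 -aux GM_VDisk2 -cluster ITSO-CLS2 -global -sync -name GMREL2
svctask startrcrelationship GMREL2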

If these steps are not performed correctly, the relationship will be reported as being
consistent when it is not. This is likely to make any secondary disk useless. This method has
an advantage over full synchronization in that it does not require all the data to be copied
over a constrained link. However, if the data needs to be copied, the master and auxiliary
disks cannot be used until the copy is complete, which might be unacceptable.

Quick synchronization after Create


In this method, the administrator must still copy data from the master to the auxiliary, but this
can be done without stopping the application at the master. The administrator must ensure that:
򐂰 A mkrcrelationship is issued with the -sync flag.
򐂰 A stoprcrelationship is issued with the -access flag.
򐂰 A tape image (or other method of transferring data) is used to copy the entire master disk
to the auxiliary disk.

Once the copy is complete, the administrator must ensure that:


򐂰 A startrcrelationship is issued with the -clean flag.
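
Again as a sketch with illustrative names, this method might be driven as follows (the copy of the master disk to the auxiliary disk by tape image or similar takes place between the second and third commands):

svctask mkrcrelationship -master GM_VDisk1 -aux GM_VDisk2 -cluster ITSO-CLS2 -global -sync -name GMREL3
svctask stoprcrelationship -access GMREL3
svctask startrcrelationship -clean GMREL3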

With this technique, only the data that has changed since the relationship was created,
including all regions that were incorrect in the tape image, is copied from the master to the
auxiliary. As with “Synchronized before Create” on page 492, the copy step must be
performed correctly, or else the auxiliary will be useless, even though the relationship will
report it as being synchronized.



13.1.2 The importance of write ordering
Many applications that use block storage have a requirement to survive failures such as loss
of power, or a software crash, and not lose data that existed prior to the failure. Since many
applications need to perform large numbers of update operations in parallel to that storage,
maintaining write ordering is key to ensuring the correct operation of applications following a
disruption.

An application that is performing a large set of updates will have been designed with the
concept of dependent writes. These are writes where it is important to ensure that an earlier
write has completed before a later write is started. Reversing the order of dependent writes
can undermine the application's algorithms and can lead to problems such as detected, or
undetected, data corruption.

Dependent writes that span multiple VDisks


The following scenario illustrates a simple example of a sequence of dependent writes, and in
particular what can happen if they span multiple VDisks. Consider the following typical
sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is to be
performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update
has completed successfully.

In Figure 13-3 the write sequence is illustrated.

Figure 13-3 Dependent writes for a database (timeline: step 1, log write “Update record xyz started”; step 2, database write of record xyz; step 3, log write “Update record xyz completed”)



The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next.

Note: All databases have logs associated with them. These logs keep records of database
changes. If a database needs to be restored to a point beyond the last full, offline backup,
logs are required to roll the data forward to the point of failure.

But imagine if the database log and the database itself are on different VDisks, and a Global
Mirror relationship is stopped during this update. In this case, you cannot exclude the
possibility that the Global Mirror relationship for the VDisk containing the database file is
stopped slightly before the relationship for the VDisk containing the database log. If this
happens, the secondary VDisks could see writes (1) and (3) but not (2).

Then, if the database was restarted using the backup made from the secondary disks, the
database log would indicate that the transaction had completed successfully, when that is not
the case. In this scenario, the integrity of the database is in question.

To overcome the issue of dependent writes across VDisks, and to ensure a consistent data
set, the SVC supports the concept of consistency groups for Global Mirror relationships. A
Global Mirror consistency group can contain an arbitrary number of relationships up to the
maximum number of Global Mirror relationships supported by the SVC cluster.

Global Mirror commands are then issued to the Global Mirror consistency group, and thereby
simultaneously for all Global Mirror relationships defined in the consistency group. For
example, when issuing a Global Mirror start command to the consistency group, all of the
Global Mirror relationships in the consistency group are started at the same time.

13.1.3 Using Global Mirror


To use Global Mirror, a relationship must be defined between two VDisks.

When creating the Global Mirror relationship, one VDisk is defined as the master, and the
other as the auxiliary. The relationship between the two copies is asymmetric. When the
Global Mirror relationship is created the master VDisk is initially considered the primary copy
(often referred to as the source), and the auxiliary VDisk is considered the secondary copy
(often referred to as the target).

The master VDisk is the production VDisk, and updates to this copy are mirrored to the
auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was
created are destroyed.

Note: The copy direction for a Global Mirror relationship can be switched so the auxiliary
VDisk becomes the primary and the master VDisk becomes the secondary.

While the Global Mirror relationship is active, the secondary copy (VDisk) is not accessible for
host application write I/O at any time. The SVC allows read-only access to the secondary
VDisk when it contains a “consistent” image. This is only intended to allow boot time
operating system discovery to complete without error, so that any hosts at the secondary site
can be ready to start up the applications with minimum delay if required.

For instance, many operating systems need to read Logical Block Address (LBA) 0 to
configure a logical unit. Although read access is allowed at the secondary, in practice the data
on the secondary volumes cannot be read by a host. The reason for this is that most
operating systems write a “dirty bit” to the file system when it is mounted. Because this write
operation is not allowed on the secondary volume, the volume cannot be mounted.



This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads performed at the secondary and later
write I/Os performed at the primary.

To enable access to the secondary VDisk for host operations, the Global Mirror relationship
must be stopped, specifying the -access parameter.

While access to the secondary VDisk for host operations is enabled, the host must be
instructed to mount the VDisk and related tasks before the application can be started, or
instructed to perform a recovery process.

The Global Mirror requirement to enable the secondary copy for access differentiates it from,
for example, third party mirroring software on the host, which aims to emulate a single,
reliable disk regardless of which system is accessing it. Global Mirror retains the property that
there are two volumes in existence, but suppresses one while the copy is being maintained.

Using a secondary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks to be performed on the host involved in establishing
operation on the secondary copy are substantial. The goal is to make this rapid (much faster
when compared to recovering from a backup copy) but not seamless.

The failover process can be automated through failover management software. The SVC
provides Simple Network Management Protocol (SNMP) traps and programming (or
scripting) towards the Command Line Interface (CLI) to enable this automation.

13.1.4 SVC Global Mirror features


SVC Global Mirror supports the following features:
򐂰 Asynchronous remote copy of VDisks dispersed over metropolitan scale distances is
supported.
򐂰 SVC implements the Global Mirror relationship between VDisk pairs, with each VDisk in
the pair being managed by an SVC cluster.
򐂰 SVC supports intracluster Global Mirror, where both VDisks belong to the same cluster
(and I/O group), although, as stated earlier, intracluster Metro Mirror is better suited to
this use.
򐂰 SVC supports intercluster Global Mirror, where each VDisk belongs to its own separate
SVC cluster. A given SVC cluster can be configured for partnership with another cluster. A
given SVC cluster can only communicate with one other cluster. All intercluster Global
Mirror takes place between the two SVC clusters in the configured partnership.
򐂰 Intercluster and intracluster Global Mirror can be used concurrently within a cluster for
different relationships.
򐂰 SVC does not require a control network or fabric to be installed to manage Global Mirror.
For intercluster Global Mirror the SVC maintains a control link between the two clusters.
This control link is used to control state and co-ordinate updates at either end. The control
link is implemented on top of the same FC fabric connection as the SVC uses for Global
Mirror I/O.
򐂰 SVC implements a configuration model which maintains the Global Mirror configuration
and state through major events such as failover, recovery, and resynchronization to
minimize user configuration action through these events.
򐂰 SVC maintains and polices a strong concept of consistency and makes this available to
guide configuration activity.



򐂰 SVC implements flexible resynchronization support enabling it to re-synchronize VDisk
pairs which have suffered write I/O to both disks and to resynchronize only those regions
which are known to have changed.

13.2 How Global Mirror works


There are several steps in the Global Mirror process:
1. An SVC cluster partnership is created between two SVC clusters (for intercluster Global
Mirror).
2. A Global Mirror relationship is created between two VDisks of the same size.
3. To manage multiple Global Mirror relationships as one entity, the relationships can be
made part of a Global Mirror consistency group. This is to ensure data consistency across
multiple Global Mirror relationships, or simply for ease of management.
4. The Global Mirror relationship is started, and when the background copy has completed
the relationship is consistent and synchronized.
5. Once synchronized, the secondary VDisk holds a copy of the production data at the
primary, which can be used for disaster recovery.
6. To access the auxiliary VDisk, the Global Mirror relationship must be stopped with the
access option enabled before write I/O is submitted to the secondary.
7. The remote host server is mapped to the auxiliary VDisk and the disk is available for I/O.
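
As a hedged end-to-end sketch of these steps on the CLI (all cluster, VDisk, group, and relationship names here are examples only; the individual commands are described in 13.3):

svctask mkpartnership -bandwidth 50 ITSO-CLS2 (issued on each cluster, naming the other)
svctask mkrcconsistgrp -cluster ITSO-CLS2 -name CG_W2K_GM
svctask mkrcrelationship -master GM_VDisk1 -aux GM_VDisk2 -cluster ITSO-CLS2 -global -consistgrp CG_W2K_GM -name GMREL1
svctask startrcconsistgrp CG_W2K_GM
svctask stoprcconsistgrp -access CG_W2K_GM (only when access to the secondary is required)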

13.2.1 Intercluster communication and zoning


All intercluster communication is performed through the SAN. Prior to creating intercluster
Global Mirror relationships, you must create a partnership between the two clusters.

All SVC node ports on each SVC cluster must be able to access each other to facilitate the
partnership creation. Therefore, a zone in each fabric must be defined for intercluster
communication; see Chapter 3, “Planning and configuration” on page 25.

SVC cluster partnership


Each SVC cluster can only be in a partnership with one other SVC cluster. When the SVC
cluster partnership has been defined on both clusters, further communication facilities
between the nodes in each of the clusters are established. This comprises:
򐂰 A single control channel, which is used to exchange and coordinate configuration
information
򐂰 I/O channels between each of the nodes in the clusters

These channels are maintained and updated as nodes appear and disappear and as links
fail, and are repaired to maintain operation where possible. If communication between the
SVC clusters is interrupted or lost, an error is logged (and consequently Global Mirror
relationships will stop).

To handle error conditions the SVC can be configured to raise SNMP traps to the enterprise
monitoring system.

Maintenance of the intercluster link


All SVC nodes maintain a database of the other devices that are visible on the fabric. This is
updated as devices appear and disappear.



Devices that advertise themselves as SVC nodes are categorized according to the SVC
cluster to which they belong. SVC nodes that belong to the same cluster establish
communication channels between themselves and begin to exchange messages to
implement the clustering and functional protocols of SVC.

Nodes that are in different clusters do not exchange messages after the initial discovery is
complete unless they have been configured together to perform Global Mirror.

The intercluster link carries the control traffic to coordinate activity between the two clusters. It
is formed between one node in each cluster which is termed the focal point. The traffic
between the focal point nodes is distributed among the logins that exist between those nodes.

If the focal point node should fail (or all its logins to the remote cluster fail), then a new focal
point is chosen to carry the control traffic. Changing the focal point causes I/O to pause but
does not cause relationships to become ConsistentStopped.

13.2.2 Global Mirror relationship


Global Mirror relationships are similar to FlashCopy mappings. They can be stand-alone or
combined in consistency groups. Start and stop commands can be issued either against the
stand-alone relationship, or the consistency group.

Figure 13-4 illustrates the Global Mirror relationship.

Figure 13-4 Global Mirror relationship

A Global Mirror relationship is composed of two VDisks that are equal in size. The master
VDisk and the auxiliary VDisk can be in the same I/O group, within the same SVC cluster
(intracluster Global Mirror), or can be on separate SVC clusters that are defined as SVC
partners (intercluster Global Mirror).

Note: Be aware that:


򐂰 A VDisk can only be part of one Global Mirror relationship at a time.
򐂰 A VDisk that is a FlashCopy target cannot be part of a Global Mirror relationship.

Global Mirror relationship between primary and secondary VDisk


When creating a Global Mirror relationship, initially the master VDisk is assigned as the
primary, and the auxiliary VDisk the secondary. This implies that the initial copy direction is
mirroring the master VDisk to the auxiliary VDisk. After the initial synchronization is complete,
the copy direction can be changed if appropriate.

In most common applications of Global Mirror, the master VDisk contains the production copy
of the data, and is used by the host application, while the auxiliary VDisk contains the
mirrored copy of the data and is used for failover in disaster recovery scenarios. The terms
master and auxiliary help support this use. If Global Mirror is applied differently, the terms
master and auxiliary VDisks need to be interpreted appropriately.



13.2.3 Global Mirror consistency groups
Certain uses of Global Mirror require the manipulation of more than one relationship. Global
Mirror consistency groups provide the ability to group relationships, so that they are
manipulated in unison.

Consistency groups address the issue where the objective is to preserve data consistency
across multiple Global Mirrored VDisks because the applications have related data which
spans multiple VDisks. A requirement for preserving the integrity of data being written is to
ensure that “dependent writes” are executed in the application's intended sequence.

Global Mirror commands can be issued to a Global Mirror consistency group, which affects all
Global Mirror relationships in the consistency group, or to a single Global Mirror relationship if
not part of a Global Mirror consistency group.

In Figure 13-5, the concept of Global Mirror consistency groups is illustrated. Since
MM_Relationship 1 and MM_Relationship 2 are part of the consistency group, they can be
handled as one entity, while the stand-alone MM_Relationship 3 is handled separately.

Figure 13-5 Global Mirror consistency group

򐂰 Global Mirror relationships can be part of a consistency group, or be stand-alone and
therefore handled as single instances.
򐂰 A consistency group can contain zero or more relationships. An empty consistency group,
with zero relationships in it, has little purpose until it is assigned its first relationship,
except that it has a name.
򐂰 All the relationships in a consistency group must have matching master and auxiliary SVC
clusters.



Although it is possible that consistency groups can be used to manipulate sets of
relationships that do not need to satisfy these strict rules, that manipulation can lead to some
undesired side effects. The rules behind consistency mean that certain configuration
commands are prohibited where this would not be the case if the relationship was not part of
a consistency group.

For example, consider the case of two applications that are completely independent, yet they
are placed into a single consistency group. In the event of an error there is a loss of
synchronization, and a background copy process is required to recover synchronization.
While this process is in progress, Global Mirror rejects attempts to enable access to the
secondary VDisks of either application.

If one application finishes its background copy much more quickly than the other, Global
Mirror still refuses to grant access to its secondary, even though this is safe in this case,
because the Global Mirror policy is to refuse access to the entire consistency group if any part
of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state
model. All the relationships in a non-empty consistency group have the same state as the
consistency group.

13.2.4 Global Mirror states and events


In this section we explain the different states of a Global Mirror relationship, and the series of
events that modify these states. In Figure 13-6, the Global Mirror relationship state diagram
shows an overview of the states that apply to a Global Mirror relationship in the connected
state.

Figure 13-6 Global Mirror mapping state diagram (states: Consistent stopped, Inconsistent stopped, Consistent synchronized, Inconsistent copying, and Idling; transitions: 1a/1b create (in sync / out of sync), 2a/2b start or stop or error, 3b background copy complete, 4a/4b stop with enable access, 5a start from Idling (in sync), forced start (out of sync))



When creating the Global Mirror relationship, you can specify if the auxiliary VDisk is already
in sync with the master VDisk, and the background copy process is then skipped. This is
especially useful when creating Global Mirror relationships for VDisks that have been created
with the format option.
1. Step 1 is done as follows:
a. The Global Mirror relationship is created with the -sync option and the Global Mirror
relationship enters the Consistent stopped state.
b. The Global Mirror relationship is created without specifying that the master and
auxiliary VDisks are in sync, and the Global Mirror relationship enters the Inconsistent
stopped state.
2. Step 2 is done as follows:
a. When starting a Global Mirror relationship in the Consistent stopped state, it enters the
Consistent synchronized state. This implies that no updates (write I/O) have been
performed on the primary VDisk while in the Consistent stopped state, otherwise the
-force option must be specified, and the Global Mirror relationship then enters the
Inconsistent copying state, while background copy is started.
b. When starting a Global Mirror relationship in the Inconsistent stopped state, it enters
the Inconsistent copying state, while background copy is started.
3. Step 3 is done as follows:
– When the background copy completes, the Global Mirror relationship transitions from
the Inconsistent copying state to the Consistent synchronized state.
4. Step 4 is done as follows:
a. When stopping a Global Mirror relationship in the Consistent synchronized state,
specifying the -access option which enables write I/O on the secondary VDisk, the
Global Mirror relationship enters the Idling state.
b. To enable write I/O on the secondary VDisk, when the Global Mirror relationship is in
the Consistent stopped state, issue the command svctask stoprcrelationship
specifying the -access option, and the Global Mirror relationship enters the Idling
state.
5. Step 5 is done as follows:
a. When starting a Global Mirror relationship which is in the Idling state, it is required to
specify the -primary argument to set the copy direction. Given that no write I/O has
been performed (to either master or auxiliary VDisk) while in the Idling state, the
Global Mirror relationship enters the Consistent synchronized state.
b. In case write I/O has been performed to either the master or the auxiliary VDisk, then
the -force option must be specified, and the Global Mirror relationship then enters the
Inconsistent copying state, while background copy is started.

Stop or Error: When a Global Mirror relationship is stopped (either intentionally or due to an
error), a state transition is applied:
򐂰 For example, Global Mirror relationships in the Consistent synchronized state enter the
Consistent stopped state, and Global Mirror relationships in the Inconsistent copying state
enter the Inconsistent stopped state.
򐂰 If the connection is broken between the SVC clusters in a partnership, then all
(intercluster) Global Mirror relationships enter a disconnected state. For further
information, refer to the following topic, “Connected versus disconnected”.



Note: Stand-alone relationships and consistency groups share a common configuration
and state model. This means that all the Global Mirror relationships in a non-empty
consistency group have the same state as the consistency group.

State overview
The SVC-defined concepts of state are key to understanding the configuration concepts and
are therefore explained in more detail below.

Connected versus disconnected


This distinction can arise when a Global Mirror relationship is created with the two virtual
disks in different clusters.

Under certain error scenarios, communications between the two clusters might be lost. For
instance, power might fail causing one complete cluster to disappear. Alternatively, the fabric
connection between the two clusters might fail, leaving the two clusters running but unable to
communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them
are described as connected. When they cannot communicate, the clusters and the
relationships spanning them are described as disconnected.

In this scenario, each cluster is left with half the relationship and has only a portion of the
information that was available to it before. Some limited configuration activity is possible, and
is a subset of what was possible before.

The disconnected relationships are portrayed as having a changed state. The new states
describe what is known about the relationship, and what configuration commands are
permitted.

When the clusters can communicate again, the relationships become connected once again.
Global Mirror automatically reconciles the two state fragments, taking into account any
configuration or other event that took place while the relationship was disconnected. As a
result, the relationship can either return to the state it was in when it became disconnected or
it can enter a different connected state.

Relationships that are configured between virtual disks in the same SVC cluster (intracluster)
will never be described as being in a disconnected state.

Consistent versus inconsistent


Relationships that contain VDisks operating as secondaries can be described as being
consistent or inconsistent. Consistency groups that contain relationships can also be
described as being consistent or inconsistent. The consistent or inconsistent property
describes the relationship of the data on the secondary to that on the primary virtual disk. It
can be considered a property of the secondary VDisk itself.

A secondary is described as consistent if it contains data that could have been read by a host
system from the primary if power had failed at some imaginary point in time while I/O was in
progress and power was later restored. This imaginary point in time is defined as the recovery
point. The requirements for consistency are expressed with respect to activity at the primary
up to the recovery point:
򐂰 The secondary virtual disk contains the data from all writes to the primary for which the
host had received good completion and that data had not been overwritten by a
subsequent write (before the recovery point)



򐂰 For writes for which the host did not receive good completion (that is it received bad
completion or no completion at all) and the host subsequently performed a read from the
primary of that data and that read returned good completion and no later write was sent
(before the recovery point), the secondary contains the same data as that returned by the
read from the primary.

From the point of view of an application, consistency means that a secondary virtual disk
contains the same data as the primary virtual disk at the recovery point (the time at which the
imaginary power failure occurred).

If an application is designed to cope with unexpected power failure, this guarantee of
consistency means that the application will be able to use the secondary and begin operation
just as if it had been restarted after the hypothetical power failure.

Again, the application is dependent on the key properties of consistency:


򐂰 Write ordering
򐂰 Read stability for correct operation at the secondary

If a relationship, or set of relationships, is inconsistent and an attempt is made to start an
application using the data in the secondaries, a number of outcomes are possible:
򐂰 The application might decide that the data is corrupt and crash or exit with an error code.
򐂰 The application might fail to detect the data is corrupt and return erroneous data.
򐂰 The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Global
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or a set of relationships in a
consistency group. Write ordering is a concept that an application can maintain across a
number of disks accessed through multiple systems, and therefore consistency must operate
across all those disks.

When deciding how to use consistency groups, the administrator must consider the scope of
an application’s data, taking into account all the interdependent systems which communicate
and exchange information.

If two programs or systems communicate and store details as a result of the information
exchanged, then either of the following actions might occur:
򐂰 All the data accessed by the group of systems must be placed into a single consistency
group.
򐂰 The systems must be recovered independently (each within its own consistency group).
Then, each system must perform recovery with the other applications to become
consistent with them.

Consistent versus synchronized


A copy which is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the primary and secondary virtual disks are only different in regions where writes
are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at some point in time in the past. Write I/O might have continued
to a primary and not have been copied to the secondary. This state arises when it becomes
impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between clusters when writing to the secondary.



When communication is lost for an extended period of time, Global Mirror tracks the changes
that happen at the primary, but not the order of such changes, nor the details of such changes
(write data). When communication is restored, it is impossible to make the secondary
synchronized without sending write data to the secondary out-of-order, and therefore losing
consistency.

Two policies can be used to cope with this:


򐂰 Take a point-in-time copy of the consistent secondary before allowing the secondary to
become inconsistent. In the event of a disaster before consistency is achieved again, the
point-in-time copy target provides a consistent, though out-of-date, image.
򐂰 Accept the loss of consistency, and the loss of a useful secondary, while making it
synchronized.

13.2.5 Global Mirror configuration limits


Table 13-1 lists the Global Mirror configuration limits.

Table 13-1 Global Mirror configuration limits


Parameter                                      Value

Number of Global Mirror consistency groups     256 per SVC cluster

Number of Global Mirror relationships          1024 per SVC cluster

Total VDisk size per I/O group                 16 TB is the per I/O group limit on the quantity of primary and secondary VDisk address space that can participate in Global Mirror relationships

13.3 Global Mirror commands


Here we summarize some of the most important GM commands. For complete details about
all the Global Mirror Commands, see IBM System Storage SAN Volume Controller:
Command-Line Interface User's Guide, SC26-7903.

The command set for Global Mirror contains two broad groups:
򐂰 Commands to create, delete and manipulate relationships and consistency groups
򐂰 Commands which cause state changes

Where a configuration command affects more than one cluster, Global Mirror performs the
work to coordinate configuration activity between the clusters. Some configuration commands
can only be performed when the clusters are connected and fail with no effect when they are
disconnected.

Other configuration commands are permitted even though the clusters are disconnected. The
state is reconciled automatically by Global Mirror when the clusters become connected once
more.

For any given command, with one exception, a single cluster actually receives the command
from the administrator. This is significant for defining the context of a CreateRelationship
(mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the
cluster receiving the command is called the local cluster.



This exception, as mentioned previously, is the command that sets clusters into a Global
Mirror partnership. The mkpartnership command must be issued to both the local and to the
remote cluster.

The commands are described here as an abstract command set. These are implemented as:
򐂰 A command line interface (CLI) which can be used for scripting and automation
򐂰 A graphical user interface (GUI) which can be used for one-off tasks

13.3.1 Listing available SVC cluster partners


To list the clusters available for an SVC cluster partnership, we use the command svcinfo lsclustercandidate.

svcinfo lsclustercandidate
The svcinfo lsclustercandidate command is used to list the clusters that are available for
setting up a two-cluster partnership. This is a prerequisite for creating Global Mirror
relationships.

To display the characteristics of the cluster, we use the command svcinfo lscluster
specifying the name of the cluster.
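
For example (the cluster name is illustrative):

svcinfo lsclustercandidate
svcinfo lscluster ITSO-CLS1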

svctask chcluster
The svctask chcluster command has three parameters relevant to Global Mirror:

-gmlinktolerance link_tolerance

Specifies the maximum period of time that the system will tolerate delay before stopping GM
relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds.
The default value is 300. Do not change this value except under the direction of IBM support.

-gminterdelaysimulation delay_milliseconds

Specifies the number of milliseconds that I/O activity (intercluster copying to a secondary
VDisk) is delayed. This permits you to test performance implications before deploying Global
Mirror and obtaining a long distance link. Specify a value from 0 to 100 in 1 millisecond
increments. The default value is 0. Use this argument to test each intercluster Global Mirror
relationship separately.

-gmintradelaysimulation delay_milliseconds

Specifies the number of milliseconds that I/O activity (intracluster copying to a secondary
VDisk) is delayed. This permits you to test performance implications before deploying Global
Mirror and obtaining a long distance link. Specify a value from 0 to 100 in 1 millisecond
increments. The default value is 0. Use this argument to test each intracluster Global Mirror
relationship separately.

Use svctask chcluster to adjust these values, as in the following example:

svctask chcluster -gmlinktolerance 300

You can view all the above parameter values with the svcinfo lscluster <clustername>
command.
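
Similarly, the delay simulation parameters can be set the same way; for example, to simulate a 20 millisecond intercluster delay (an illustrative value within the documented 0 to 100 range):

svctask chcluster -gminterdelaysimulation 20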

13.3.2 Creating an SVC cluster partnership


To create an SVC cluster partnership, we use the command svctask mkpartnership.



svctask mkpartnership
The svctask mkpartnership command is used to establish a one-way Global Mirror
partnership between the local cluster and a remote cluster.

To establish a fully functional Global Mirror partnership, you must issue this command to both
clusters. This step is a prerequisite to creating Global Mirror relationships between VDisks on
the SVC clusters.

When creating the partnership, you can specify the bandwidth to be used by the background
copy process between the local and the remote SVC cluster; if it is not specified, the
bandwidth defaults to 50 MB/s. The bandwidth should be set to a value that is less than or
equal to the bandwidth that can be sustained by the intercluster link.
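
For example, assuming the remote cluster is named ITSO-CLS2 (an illustrative name) and a 50 MB/s background copy bandwidth:

svctask mkpartnership -bandwidth 50 ITSO-CLS2

The matching command, naming the local cluster, must then be issued on the remote cluster.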

Background copy bandwidth impact on foreground I/O latency


The background copy bandwidth determines the rate at which the background copy for the
IBM System Storage Global Mirror for SAN Volume Controller will be attempted. The
background copy bandwidth can affect foreground I/O latency in one of three ways:
򐂰 The following result can occur if the background copy bandwidth is set too high for the
Global Mirror intercluster link capacity:
– The background copy I/Os can back up on the Global Mirror intercluster link.
– There is a delay in the synchronous secondary writes of foreground I/Os.
– The foreground I/O latency will increase as perceived by applications.
򐂰 If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
򐂰 If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary overload the secondary storage and again delay
the synchronous secondary writes of foreground I/Os.

In order to set the background copy bandwidth optimally, make sure that you consider all
three resources (the primary storage, the intercluster link bandwidth and the secondary
storage). Provision the most restrictive of these three resources between the background
copy bandwidth and the peak foreground I/O workload. This provisioning can be done by
calculation as above or alternatively by determining experimentally how much background
copy can be allowed before the foreground I/O latency becomes unacceptable and then
backing off to allow for peaks in workload and some safety margin.
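
As a worked example with assumed figures: if the intercluster link can sustain 100 MB/s and the peak foreground write workload at the primary is 40 MB/s, the background copy bandwidth should be provisioned at no more than about 100 - 40 = 60 MB/s, and then reduced further to allow for workload peaks and a safety margin.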

svctask chpartnership
To change the bandwidth available for background copy in an SVC cluster partnership, the
command svctask chpartnership can be used to specify the new bandwidth.
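
For example, to lower the background copy bandwidth of an existing partnership to 20 MB/s (the cluster name is illustrative):

svctask chpartnership -bandwidth 20 ITSO-CLS2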

13.3.3 Creating a Global Mirror consistency group


To create a Global Mirror consistency group, we use the command svctask mkrcconsistgrp.

svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new, empty Global Mirror
consistency group.

The Global Mirror consistency group name must be unique across all consistency groups
known to the clusters owning this consistency group. If the consistency group involves two
clusters, the clusters must be in communication throughout the create process.



The new consistency group does not contain any relationships and will be in the empty state.
Global Mirror relationships can be added to the group, either upon creation or afterwards,
using the svctask chrcrelationship command.
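
For example (the group and cluster names are illustrative):

svctask mkrcconsistgrp -cluster ITSO-CLS2 -name CG_W2K_GM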

13.3.4 Creating a Global Mirror relationship


To create a Global Mirror relationship, we use the command svctask mkrcrelationship.

Note: If you do not use the -global optional parameter, a Metro Mirror relationship will be
created instead of a Global Mirror relationship.

svctask mkrcrelationship
The svctask mkrcrelationship command is used to create a new Global Mirror relationship.
This relationship persists until it is deleted.

The auxiliary virtual disk must be equal in size to the master virtual disk or the command will
fail, and if both VDisks are in the same cluster, they must both be in the same I/O group. The
master and auxiliary VDisk cannot be in an existing relationship, nor can they be the target of
a FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.

When creating the Global Mirror relationship, it can be added to an already existing
consistency group, or be a stand-alone Global Mirror relationship if no consistency group is
specified.
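
A minimal sketch, with illustrative VDisk, cluster, group, and relationship names:

svctask mkrcrelationship -master GM_VDisk1 -aux GM_VDisk2 -cluster ITSO-CLS2 -global -consistgrp CG_W2K_GM -name GMREL1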

To check whether the master or auxiliary VDisks comply with the prerequisites to participate
in a Global Mirror relationship, use the command svcinfo lsrcrelationshipcandidate as
explained below.

svcinfo lsrcrelationshipcandidate
The svcinfo lsrcrelationshipcandidate command is used to list the available VDisks
eligible to form a Global Mirror relationship.

When issuing the command you can specify the master VDisk name and auxiliary cluster to
list candidates that comply with the prerequisites to create a Global Mirror relationship. If the
command is issued with no parameters, all VDisks that are not disallowed by some other
configuration state, such as being a FlashCopy target, are listed.
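
For example, issued with no parameters to list all eligible VDisks:

svcinfo lsrcrelationshipcandidate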

13.3.5 Changing a Global Mirror relationship


To modify the properties of a Global Mirror relationship, we use the command svctask
chrcrelationship.

svctask chrcrelationship
The svctask chrcrelationship command is used to modify the following properties of a
Global Mirror relationship:
򐂰 Change the name of a Global Mirror relationship
򐂰 Add a relationship to a group
򐂰 Remove a relationship from a group, using the -force flag
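
For example, to add the relationship GMREL1 to the consistency group CG_W2K_GM (both names illustrative):

svctask chrcrelationship -consistgrp CG_W2K_GM GMREL1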

Note: When adding a Global Mirror relationship to a consistency group that is not empty,
the relationship must have the same state and copy direction as the group in order to be
added to it.



13.3.6 Changing a Global Mirror consistency group
To change the name of a Global Mirror consistency group, we use the command svctask
chrcconsistgrp.

svctask chrcconsistgrp
The svctask chrcconsistgrp command is used to change the name of a Global Mirror
consistency group.
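
For example, renaming an illustrative group:

svctask chrcconsistgrp -name CG_W2K_GM2 CG_W2K_GM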

13.3.7 Starting a Global Mirror relationship


To start a stand-alone Global Mirror relationship, we use the command svctask
startrcrelationship.

svctask startrcrelationship
The svctask startrcrelationship command is used to start the copy process of a Global
Mirror relationship.

When issuing the command, the copy direction can be set if it is undefined, and optionally the
secondary VDisk of the relationship can be marked as clean. The command fails if it is used
to attempt to start a relationship that is part of a consistency group.

This command can only be issued to a relationship that is connected. For a relationship that
is idling, this command assigns a copy direction (primary and secondary roles) and begins
the copy process. Otherwise this command restarts a previous copy process that was
stopped either by a stop command or by some I/O error.

If the resumption of the copy process leads to a period when the relationship is not
consistent, then you must specify the -force parameter when restarting the relationship. This
situation can arise if, for example, the relationship was stopped, and then further writes were
performed on the original primary of the relationship. The use of the -force parameter here is
a reminder that the data on the secondary will become inconsistent while resynchronization
(background copying) takes place, and therefore is not usable for disaster recovery purposes
before the background copy has completed.

In the idling state, you must specify the primary VDisk to indicate the copy direction. In other
connected states, you can provide the primary argument, but it must match the existing
setting.
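
For example, to restart an idling relationship with the master as the primary, forcing resynchronization (the relationship name is illustrative):

svctask startrcrelationship -primary master -force GMREL1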

13.3.8 Stopping a Global Mirror relationship


To stop a stand-alone Global Mirror relationship, we use the command svctask
stoprcrelationship.

svctask stoprcrelationship
The svctask stoprcrelationship command is used to stop the copy process for a
relationship. It can also be used to enable write access to a consistent secondary VDisk by
specifying the -access parameter.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a
relationship that is part of a consistency group. You can issue this command to stop a
relationship that is copying from primary to secondary.



If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue an svctask startrcrelationship command. Write activity is no longer copied
from the primary to the secondary virtual disk. For a relationship in the
ConsistentSynchronized state, this command causes a consistency freeze.

When a relationship is in a consistent state (that is, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), the -access parameter can be
used with the stoprcrelationship command to enable write access to the secondary
virtual disk.
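
For example, to stop the relationship and enable write access to the secondary:

svctask stoprcrelationship -access GMREL1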

13.3.9 Starting a Global Mirror consistency group


To start a Global Mirror consistency group, we use the command svctask
startrcconsistgrp.

svctask startrcconsistgrp
The svctask startrcconsistgrp command is used to start a Global Mirror consistency
group. This command can only be issued to a consistency group that is connected.

For a consistency group that is idling, this command assigns a copy direction (primary and
secondary roles) and begins the copy process. Otherwise this command restarts a previous
copy process that was stopped either by a stop command or by some I/O error.
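
For example, to start an idling consistency group with the master VDisks as the primaries (the group name is illustrative):

svctask startrcconsistgrp -primary master CG_W2K_GM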

13.3.10 Stopping a Global Mirror consistency group


To stop a Global Mirror consistency group, we use the command svctask stoprcconsistgrp.

svctask stoprcconsistgrp
The svctask stoprcconsistgrp command is used to stop the copy process for a Global
Mirror consistency group. It can also be used to enable write access to the secondary VDisks
in the group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the primary to the secondary virtual disks belonging to the relationships in the
group. For a consistency group in the ConsistentSynchronized state, this command causes a
consistency freeze.

When a consistency group is in a consistent state (for example, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), the -access parameter can be
used with the svctask stoprcconsistgrp command to enable write access to the secondary
VDisks within that group.
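
For example:

svctask stoprcconsistgrp -access CG_W2K_GM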

13.3.11 Deleting a Global Mirror relationship


To delete a Global Mirror relationship, we use the command svctask rmrcrelationship.

svctask rmrcrelationship
The svctask rmrcrelationship command is used to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two virtual disks. It
does not affect the virtual disks themselves.



If the relationship is disconnected at the time that the command is issued, then the
relationship is only deleted on the cluster on which the command is being run. When the
clusters reconnect, then the relationship is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still wish to remove the relationship on
both clusters, you can issue the rmrcrelationship command independently on both of the
clusters.

A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.

If you delete an inconsistent relationship, the secondary virtual disk becomes accessible even
though it is still inconsistent. This is the one case in which Global Mirror does not inhibit
access to inconsistent data.
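As a sketch, to delete the stand-alone relationship GM_REL3 used in the scenario later in this chapter, the command takes the following form:
svctask rmrcrelationship GM_REL3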

13.3.12 Deleting a Global Mirror consistency group


To delete a Global Mirror consistency group, we use the command svctask rmrcconsistgrp.

svctask rmrcconsistgrp
The svctask rmrcconsistgrp command is used to delete a Global Mirror consistency group.
This command deletes the specified consistency group. You can issue this command for any
existing consistency group.

If the consistency group is disconnected at the time that the command is issued, then the
consistency group is only deleted on the cluster on which the command is being run. When
the clusters reconnect, the consistency group is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the consistency
group on both clusters, you can issue the svctask rmrcconsistgrp command separately on
both of the clusters.

If the consistency group is not empty, then the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
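As a sketch, to delete the consistency group CG_W2K_GM used in the scenario later in this chapter, the command takes the following form:
svctask rmrcconsistgrp CG_W2K_GM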

13.3.13 Reversing a Global Mirror relationship


To reverse a Global Mirror relationship, we use svctask switchrcrelationship.

svctask switchrcrelationship
The svctask switchrcrelationship command is used to reverse the roles of primary and secondary VDisk when a stand-alone relationship is in a consistent state. When issuing the command, you specify the desired primary.
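For example, to make the auxiliary VDisk the primary of the stand-alone relationship GM_REL3, as shown in the scenario later in this chapter, the command takes the following form:
svctask switchrcrelationship -primary aux GM_REL3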

13.3.14 Reversing a Global Mirror consistency group


To reverse a Global Mirror consistency group, we use the command svctask
switchrcconsistgrp.



svctask switchrcconsistgrp
The svctask switchrcconsistgrp command is used to reverse the roles of primary and secondary VDisks when a consistency group is in a consistent state. This change is applied to all the relationships in the consistency group, and when issuing the command, you specify the desired primary.
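For example, to make the auxiliary VDisks the primaries of all the relationships in the consistency group CG_W2K_GM, as shown in the scenario later in this chapter, the command takes the following form:
svctask switchrcconsistgrp -primary aux CG_W2K_GM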

13.3.15 Detailed states


The following sections detail the states that are portrayed to the user, for either consistency groups or relationships. They also detail the extra information that is available in each state. The major states are designed to provide guidance about which configuration commands are available.
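Throughout these descriptions, the Start, Stop, and Switch commands correspond to the svctask startrcrelationship, stoprcrelationship, and switchrcrelationship commands for stand-alone relationships, and to the svctask startrcconsistgrp, stoprcconsistgrp, and switchrcconsistgrp commands for consistency groups, as described in the preceding sections.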

InconsistentStopped
This is a connected state. In this state, the primary is accessible for read and write I/O but the
secondary is not accessible for either. A copy process needs to be started to make the
secondary consistent.

This state is entered when the relationship or consistency group was InconsistentCopying
and has either suffered a persistent error or received a Stop command which has caused the
copy process to stop.

A Start command causes the relationship or consistency group to move to the InconsistentCopying state. A Stop command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the secondary side transitions to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

InconsistentCopying
This is a connected state. In this state, the primary is accessible for read and write I/O but the
secondary is not accessible for either read or write I/O.

This state is entered after a Start command is issued to an InconsistentStopped relationship or consistency group. It is also entered when a forced start is issued to an idling or ConsistentStopped relationship or consistency group.

A background copy process runs which copies data from the primary to the secondary virtual
disk.

In the absence of errors, an InconsistentCopying relationship is active, and the Copy Progress increases until the copy process completes. In some error situations, the copy progress might freeze or even regress.

A persistent error or Stop command places the relationship or consistency group into
InconsistentStopped state. A Start command is accepted, but has no effect.

If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transitions to ConsistentSynchronized.

If the relationship or consistency group becomes disconnected, then the secondary side
transitions to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.



ConsistentStopped
This is a connected state. In this state, the secondary contains a consistent image, but it
might be out-of-date with respect to the primary.

This state can arise when a relationship was in ConsistentSynchronized state and suffers an
error which forces a consistency freeze. It can also arise when a relationship is created with a
CreateConsistentFlag set to TRUE.

Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the synchronized attribute is set to FALSE). In this case, to re-establish synchronization, consistency must be given up for a period. A Start command with the Force option must be used to acknowledge this, and the relationship or consistency group transitions to InconsistentCopying. Do this only after all outstanding errors are repaired.

In the unusual case where the primary and secondary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a Start command takes the
relationship to ConsistentSynchronized. No Force option is required. Also, in this unusual
case, a Switch command is permitted which moves the relationship or consistency group to
ConsistentSynchronized and reverses the roles of the primary and secondary.

If the relationship or consistency group becomes disconnected, then the secondary side
transitions to ConsistentDisconnected. The primary side transitions to IdlingDisconnected.

An informational status log is generated every time a relationship or consistency group enters the ConsistentStopped state with a status of Online. This log can be configured to raise an SNMP trap and provide a trigger to automation software to consider issuing a Start command following a loss of synchronization.

ConsistentSynchronized
This is a connected state. In this state, the primary VDisk is accessible for read and write I/O.
The secondary VDisk is accessible for read-only I/O.

Writes that are sent to the primary VDisk are sent to both primary and secondary VDisks.
Either good completion must be received for both writes, or the write must be failed to the
host, or a state transition out of ConsistentSynchronized must take place before a write is
completed to the host.

A Stop command takes the relationship to ConsistentStopped state. A Stop command with
the -access parameter takes the relationship to the Idling state.

A Switch command leaves the relationship in the ConsistentSynchronized state, but reverses
the primary and secondary roles.

A Start command is accepted, but has no effect.

If the relationship or consistency group becomes disconnected, the same transitions are
made as for ConsistentStopped.

Idling
This is a connected state. Both master and auxiliary disks are operating in the primary role.
Consequently, both are accessible for write I/O.

In this state, the relationship or consistency group accepts a Start command. Global Mirror
maintains a record of regions on each disk which received write I/O while Idling. This is used
to determine what areas need to be copied following a Start command.



The Start command must specify the new copy direction. A Start command can cause a
loss of consistency if either virtual disk in any relationship has received write I/O. This is
indicated by the synchronized status. If the Start command leads to loss of consistency, then
a Force parameter must be specified.

Following a Start command, the relationship or consistency group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is such a loss.

Also, while in this state, the relationship or consistency group accepts a Clean option on the
Start command. If the relationship or consistency group becomes disconnected, then both
sides change their state to IdlingDisconnected.

IdlingDisconnected
This is a disconnected state. The virtual disk or disks in this half of the relationship or
consistency group are all in the primary role and accept read or write I/O.

The main priority in this state is to recover the link and make the relationship or consistency
group connected once more.

No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state which is entered depends on the state of the other half of the
relationship or consistency group, which depends on:
򐂰 The state when it became disconnected
򐂰 The write activity since it was disconnected
򐂰 The configuration activity since it was disconnected

If both halves are IdlingDisconnected, then the relationship becomes idling when
reconnected.

While IdlingDisconnected, if a write I/O is received which causes loss of synchronization (the synchronized attribute transitions from TRUE to FALSE) and the relationship was not already stopped (either through a user stop or a persistent error), then an error log is raised to report this condition. This error log is the same as that raised when the same situation arises in the ConsistentSynchronized state.

InconsistentDisconnected
This is a disconnected state. The virtual disks in this half of the relationship or consistency
group are all in the secondary role and do not accept read or write I/O.

No configuration activity except for deletes is permitted until the relationship becomes
connected again.

When the relationship or consistency group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either:
򐂰 The relationship was InconsistentStopped when it became disconnected.
򐂰 The user issued a Stop while disconnected.

In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected
This is a disconnected state. The VDisks in this half of the relationship or consistency group
are all in the secondary role and accept read I/O but not write I/O.



This state is entered from ConsistentSynchronized or ConsistentStopped when the
secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or consistency group was known to
be consistent. This corresponds to the time of the last successful heartbeat to the other
cluster.

A Stop with EnableAccessFlag set to TRUE transitions the relationship or consistency group to
IdlingDisconnected state. This allows write I/O to be performed to the virtual disks and is used
as part of a disaster recovery scenario.

When the relationship or consistency group becomes connected again, the relationship or
consistency group becomes ConsistentSynchronized only if this does not lead to a loss of
Consistency. This is the case provided:
򐂰 The relationship was ConsistentSynchronized when it became disconnected.
򐂰 No writes received successful completion at the primary while disconnected.

Otherwise the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty
This state only applies to consistency groups. It is the state of a consistency group which has
no relationships and no other state information to show.

It is entered when a consistency group is first created. It is exited when the first relationship is
added to the consistency group at which point the state of the relationship becomes the state
of the consistency group.

13.3.16 Background copy


Global Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy takes place on relationships which are in the
InconsistentCopying state with a Status of Online.

The quota of background copy (configured on the intercluster link) is divided evenly between
the nodes that are performing background copy for one of the eligible relationships. This
allocation is made without regard for the number of disks that node is responsible for. Each
node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
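As an illustrative calculation based on the even division described above (not measured behavior): if the partnership bandwidth is set to 100 MBps, as in the scenario later in this chapter, and two nodes are performing background copy, each node receives an allocation of 50 MBps; a node copying for two eligible relationships then paces each of those relationships at approximately 25 MBps.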

For intracluster relationships, each node is assigned a static quota of 25 MBps.

13.4 Global Mirror scenario using the CLI


In the following scenario, we want to set up Global Mirror for the following VDisks from
ITSOSVC01 to ITSOSVC02 at the secondary site:
VDISK1: Database files
VDISK2: Database log files
VDISK3: Application files



Since data consistency is needed across VDISK1 and VDISK2, we create a consistency group to handle Global Mirror for them. Because in this scenario the application files are independent of the database, we create a stand-alone Global Mirror relationship for VDISK3. The Global Mirror setup is illustrated in Figure 13-7.

Figure 13-7 Global Mirror scenario using the CLI

13.4.1 Setting up Global Mirror


In the following section, we assume that the source and target VDisks have already been
created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up the Global Mirror, the following steps must be performed:


򐂰 Create SVC partnership between ITSOSVC01 and ITSOSVC02, on both SVC clusters:
– Bandwidth 100 MBps
򐂰 Create a Global Mirror consistency group:
– Name CG_W2K_GM
򐂰 Create the Global Mirror relationship for VDISK1:
– Master GM_VDisk1
– Auxiliary GM_VDisk_T1
– Auxiliary SVC cluster ITSOSVC02
– Name GM_REL1
– Consistency group CG_W2K_GM
򐂰 Create the Global Mirror relationship for VDISK2:
– Master GM_VDisk2
– Auxiliary GM_VDisk_T2
– Auxiliary SVC cluster ITSOSVC02
– Name GM_REL2
– Consistency group CG_W2K_GM



򐂰 Create the Global Mirror relationship for VDISK3:
– Master GM_VDisk3
– Auxiliary GM_VDisk_T3
– Auxiliary SVC cluster ITSOSVC02
– Name GM_REL3

In the following section, each step is carried out using the CLI.

13.4.2 Creating SVC partnership between ITSOSVC01 and ITSOSVC02


To verify that both clusters can communicate with each other, we can use the svcinfo
lsclustercandidate command. Example 13-1 confirms that our clusters are communicating,
as ITSOSVC02 is an eligible SVC cluster candidate for the SVC cluster partnership.

Example 13-1 Listing available SVC cluster for partnership


IBM_2145:ITSOSVC01:admin>svcinfo lsclustercandidate
id configured cluster_name
000002006040469E no ITSOSVC02

In Example 13-2, we create the partnership from ITSOSVC01 to ITSOSVC02, specifying 100 MBps as the bandwidth to be used for background copy.

To verify the creation of the partnership, we issue the command svcinfo lscluster and see that the partnership is only partially configured. It remains partially configured until we run mkpartnership on the other cluster.

Example 13-2 Creating the partnership from ITSOSVC01 to ITSOSVC02

IBM_2145:ITSOSVC01:admin>svctask mkpartnership -bandwidth 100 ITSOSVC02

IBM_2145:ITSOSVC01:admin>svcinfo lscluster
id name location partnership bandwidth
cluster_IP_address cluster_service_IP_address id_alias
000002006180311C ITSOSVC01 local 9.43.86.29
9.43.86.30 000002006180311C
000002006040469E ITSOSVC02 remote partially_configured_local 100
9.43.86.40 9.43.86.41 000002006040469E

In Example 13-3, we create the partnership from ITSOSVC02 back to ITSOSVC01, specifying 100 MBps as the bandwidth to be used for background copy.

For completeness, we issue svcinfo lscluster and svcinfo lsclustercandidate prior to creating the partnership.

After creating the partnership, we verify that the partnership is fully configured by re-issuing
the svcinfo lscluster command.
Example 13-3 Creating the partnership from ITSOSVC02 to ITSOSVC01
IBM_2145:ITSOSVC02:admin>svcinfo lscluster
id name location partnership bandwidth
cluster_IP_address cluster_service_IP_address id_alias
000002006040469E ITSOSVC02 local 9.43.86.40
9.43.86.41 000002006040469E

IBM_2145:ITSOSVC02:admin>svcinfo lsclustercandidate



id configured cluster_name
000002006180311C yes ITSOSVC01

IBM_2145:ITSOSVC02:admin>svctask mkpartnership -bandwidth 100 ITSOSVC01


IBM_2145:ITSOSVC02:admin>svcinfo lscluster
id name location partnership bandwidth
cluster_IP_address cluster_service_IP_address id_alias
000002006040469E ITSOSVC02 local 9.43.86.40
9.43.86.41 000002006040469E
000002006180311C ITSOSVC01 remote fully_configured 100
9.43.86.29 9.43.86.30 000002006180311C

13.4.3 Changing link tolerance and cluster delay simulation


The gm_link_tolerance parameter defines how sensitive the SVC is to intercluster link overload conditions. The value is the number of seconds of continuous link difficulties that will be tolerated before the SVC stops the rcrelationships in order to prevent impacting host I/O at the primary site.
In order to change the value, use the following command:
svctask chcluster -gmlinktolerance link_tolerance

The link tolerance values are between 60 and 86400 seconds, in increments of 10 seconds. The default value for the link tolerance is 300 seconds.

Recommendation: We strongly recommend that you use the default value. If the link is
overloaded for a period which would impact host I/O at the primary site, the relationships
will be stopped to protect those hosts.

Intercluster and Intracluster Delay Simulation


This Global Mirror feature permits a simulation of a delayed write to a remote VDisk. This
feature allows testing to be performed that detects colliding writes and so can be used to test
an application before full deployment of the Global Mirror feature. The delay simulation can be enabled separately for intracluster or intercluster Global Mirror. To enable this feature, run the appropriate one of the following commands:
Intercluster: svctask chcluster -gminterdelaysimulation inter_cluster_delay_simulation
and
Intracluster: svctask chcluster -gmintradelaysimulation intra_cluster_delay_simulation

“intra_cluster_delay_simulation” expresses the amount of time that intracluster secondary I/Os are delayed, and “inter_cluster_delay_simulation” expresses the amount of time that intercluster secondary I/Os are delayed. These values specify the number of milliseconds that I/O activity (copying a primary VDisk to a secondary VDisk) is delayed. A value from 0 to 100, in 1 millisecond increments, can be set for the delay simulation parameters in the commands above. A value of zero disables the feature.

To check the current settings for the Delay simulation use the following command:
svcinfo lscluster clustername

In Example 13-4 we show the modification of the delay simulation and a change of the Global Mirror link tolerance. We also show the changed values for the Global Mirror link tolerance and delay feature settings.



Example 13-4 Delay simulation modification
IBM_2145:ITSOSVC01:admin>svctask chcluster -gmintradelaysimulation 20

IBM_2145:ITSOSVC01:admin>svctask chcluster -gminterdelaysimulation 40

IBM_2145:ITSOSVC01:admin>svctask chcluster -gmlinktolerance 200

IBM_2145:ITSOSVC01:admin>svcinfo lscluster ITSOSVC01


id 000002006180311C
name ITSOSVC01
location local
partnership
bandwidth
cluster_IP_address 9.43.86.29
cluster_service_IP_address 9.43.86.30
total_mdisk_capacity 410.2GB
space_in_mdisk_grps 264.0GB
space_allocated_to_vdisks 6.0GB
total_free_space 403.3GB
statistics_status on
statistics_frequency 10
required_memory 4096
cluster_locale en_US
SNMP_setting none
SNMP_community
SNMP_server_IP_address 0.0.0.0
subnet_mask 255.255.255.0
default_gateway 9.43.86.1
time_zone 520 US/Pacific
email_setting none
email_id
code_level 4.1.0.10 (build 5.6.0606080000)
FC_port_speed 2Gb
console_IP 9.43.85.141:9080
id_alias 000002006180311C
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 40
gm_intra_cluster_delay_simulation 20

Creating a Global Mirror consistency group


In Example 13-5, we create the Global Mirror consistency group using the svctask
mkrcconsistgrp command. This consistency group will be used for the Global Mirror
relationships for the database VDisks and is named CG_W2K_GM.

Example 13-5 Creating the Global Mirror consistency group CG_W2K_GM


IBM_2145:ITSOSVC01:admin>svctask mkrcconsistgrp -cluster ITSOSVC02 -name CG_W2K_GM
RC Consistency Group, id [255], successfully created
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id
aux_cluster_name primary state relationship_count copy_type
255 CG_W2K_GM 000002006180311C ITSOSVC01 000002006040469E
ITSOSVC02 empty 0 empty_group



Creating the Global Mirror relationship for GM_VDisk1 and GM_VDisk2
In Example 13-6 we create the Global Mirror relationships GM_REL1 and GM_REL2 and make
them members of the Global Mirror consistency group CG_W2K_GM.

To verify the created Global Mirror relationships we list them with the command svcinfo
lsrcrelationship.

Example 13-6 Creating the Global Mirror relationships GM_REL1 and GM_REL2
IBM_2145:ITSOSVC01:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID
2 GM_VDisk3 0 io_grp0 online 0
MDSKGRP_0 3.0GB striped
60050768018600C4700000000000000B
5 GM_VDisk2 0 io_grp0 online 0
MDSKGRP_0 2.0GB striped
60050768018600C47000000000000008
6 GM_VDisk1 0 io_grp0 online 0
MDSKGRP_0 1.0GB striped
60050768018600C47000000000000007

IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master GM_VDisk1 -aux GM_VDisk_T1


-cluster ITSOSVC02 -consistgrp CG_W2K_GM -name GM_REL1 -global
RC Relationship, id [6], successfully created
IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master GM_VDisk2 -aux GM_VDisk_T2
-cluster ITSOSVC02 -consistgrp CG_W2K_GM -name GM_REL2 -global
RC Relationship, id [5], successfully created

IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship -delim :


id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster
_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_g
roup_name:state:bg_copy_priority:progress:copy_type
5:GM_REL2:000002006180311C:ITSOSVC01:5:GM_VDisk2:000002006040469E:ITSOSVC02:26:GM_VDisk_T2:
master:255:CG_W2K_GM:inconsistent_stopped:50:0:global
6:GM_REL1:000002006180311C:ITSOSVC01:6:GM_VDisk1:000002006040469E:ITSOSVC02:27:GM_VDisk_T1:
master:255:CG_W2K_GM:inconsistent_stopped:50:0:global

Creating the stand-alone Global Mirror relationship for GM_VDisk3


In Example 13-7, we create the stand-alone Global Mirror relationship GM_REL3 for GM_VDisk3. Once it is created, we check the status of each of our Global Mirror relationships.

You will note that the status of GM_REL3 is consistent_stopped, and this is because it was
created with the -sync option. The -sync option indicates that the secondary (auxiliary) virtual
disk is already synchronized with the primary (master) virtual disk. The initial background
synchronization is skipped when this option is used.

GM_REL2 and GM_REL1 are in the inconsistent_stopped state, because they were not created
with the -sync option, so their auxiliary VDisks need to be synchronized with their primary
VDisks.

Example 13-7 Creating a stand-alone Global Mirror relationship and checking


IBM_2145:ITSOSVC01:admin>svctask mkrcrelationship -master
GM_VDisk3 -aux GM_VDisk_T3 -sync -cluster ITSOSVC02 -name GM_REL3 -global
RC Relationship, id [2], successfully created
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL3
id 2



name GM_REL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 2
master_vdisk_name GM_VDisk3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 25
aux_vdisk_name GM_VDisk_T3
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type global

IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL2


id 5
name GM_REL2
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 5
master_vdisk_name GM_VDisk2
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 26
aux_vdisk_name GM_VDisk_T2
primary master
consistency_group_id 255
consistency_group_name CG_W2K_GM
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type global

IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL1


id 6
name GM_REL1
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 6
master_vdisk_name GM_VDisk1
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 27
aux_vdisk_name GM_VDisk_T1
primary master
consistency_group_id 255
consistency_group_name CG_W2K_GM
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time



status online
sync
copy_type global

13.4.4 Executing Global Mirror


Now that we have created the Global Mirror consistency group and relationships, we are
ready to use the Global Mirror relationships in our environment.

When implementing Global Mirror, the goal is to reach a consistent and synchronized state
which can provide redundancy in case a hardware failure occurs that affects the SAN at the
production site.

In the following section we show how to stop and start the stand-alone Global Mirror
relationships and the consistency group.

Starting a stand-alone Global Mirror relationship


In Example 13-8, we start the stand-alone Global Mirror relationship GM_REL3. Because the Global Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship enters the Consistent synchronized state.

Example 13-8 Starting the stand-alone Global Mirror relationship


IBM_2145:ITSOSVC01:admin>svctask startrcrelationship GM_REL3
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL3
id 2
name GM_REL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 2
master_vdisk_name GM_VDisk3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 25
aux_vdisk_name GM_VDisk_T3
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

Starting a Global Mirror consistency group


In Example 13-9, we start the Global Mirror consistency group CG_W2K_GM. Because the
consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying
state until the background copy has completed for all relationships in the consistency group.

Upon completion of the background copy, it enters the Consistent synchronized state (see
Figure 13-6 on page 499).



Example 13-9 Starting the Global Mirror consistency group
IBM_2145:ITSOSVC01:admin>svctask startrcconsistgrp CG_W2K_GM
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_GM
id 255
name CG_W2K_GM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status online
sync
copy_type global
RC_rel_id 5
RC_rel_name GM_REL2
RC_rel_id 6
RC_rel_name GM_REL1

Monitoring background copy progress


To monitor the background copy progress, we can use the svcinfo lsrcrelationship
command. This command will show us all defined Global Mirror relationships if used without
any parameters.

Our Global Mirror relationship is shown in Example 13-10.

Note: Setting up SNMP traps for the SVC enables automatic notification when Global
Mirror consistency groups or relationships change state.

Example 13-10 Monitoring background copy progress example


IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL2
id 5
name GM_REL2
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 5
master_vdisk_name GM_VDisk2
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 26
aux_vdisk_name GM_VDisk_T2
primary master
consistency_group_id 255
consistency_group_name CG_W2K_GM
state inconsistent_copying
bg_copy_priority 50
progress 29
freeze_time
status online
sync
copy_type global
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL1
id 6
name GM_REL1



master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 6
master_vdisk_name GM_VDisk1
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 27
aux_vdisk_name GM_VDisk_T1
primary master
consistency_group_id 255
consistency_group_name CG_W2K_GM
state inconsistent_copying
bg_copy_priority 50
progress 67
freeze_time
status online
sync
copy_type global

When all the Global Mirror relationships complete the background copy, the consistency
group enters the consistent synchronized state, as shown in Example 13-11.

Example 13-11 Listing the Global Mirror consistency group


IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_GM
id 255
name CG_W2K_GM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status online
sync
copy_type global
RC_rel_id 5
RC_rel_name GM_REL2
RC_rel_id 6
RC_rel_name GM_REL1

Stopping a stand-alone Global Mirror relationship


In Example 13-12, we stop the stand-alone Global Mirror relationship, while enabling access
(write I/O) to both the primary and the secondary VDisk, and the relationship enters the Idling
state.

Example 13-12 Stopping stand-alone Global Mirror relationship & enabling access to secondary VDisk
IBM_2145:ITSOSVC01:admin>svctask stoprcrelationship -access GM_REL3
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL3
id 2
name GM_REL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 2
master_vdisk_name GM_VDisk3
aux_cluster_id 000002006040469E



aux_cluster_name ITSOSVC02
aux_vdisk_id 25
aux_vdisk_name GM_VDisk_T3
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global

Stopping a Global Mirror consistency group


In Example 13-13 we stop the Global Mirror consistency group without specifying the -access
parameter. This means that the consistency group enters the Consistent stopped state.

Example 13-13 Stopping a Global Mirror consistency group without -access


IBM_2145:ITSOSVC01:admin>svctask stoprcconsistgrp CG_W2K_GM
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_GM
id 255
name CG_W2K_GM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary master
state consistent_stopped
relationship_count 2
freeze_time 2006/07/06/04/19/54
status online
sync in_sync
copy_type global
RC_rel_id 5
RC_rel_name GM_REL2
RC_rel_id 6
RC_rel_name GM_REL1

If we afterward want to enable access (write I/O) to the secondary VDisks, we can reissue the svctask stoprcconsistgrp command, specifying the -access parameter, and the consistency group transitions to the Idling state, as shown in Example 13-14.

Example 13-14 Stopping a Global Mirror consistency group and enabling access to the secondary
IBM_2145:ITSOSVC01:admin>svctask stoprcconsistgrp -access CG_W2K_GM
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_GM
id 255
name CG_W2K_GM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary
state idling
relationship_count 2
freeze_time
status



sync in_sync
copy_type global
RC_rel_id 5
RC_rel_name GM_REL2
RC_rel_id 6
RC_rel_name GM_REL1

Restarting a Global Mirror relationship in the Idling state


When restarting a Global Mirror relationship in the Idling state, we must specify the copy
direction.

If any updates have been performed on either the master or the auxiliary VDisk, consistency will be compromised. Therefore, we must specify the -force parameter to restart the relationship. If the -force parameter is not used, the command fails, as shown in Example 13-15.

Example 13-15 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSOSVC01:admin>svctask startrcrelationship -primary master GM_REL3
CMMVC5978E The operation was not performed because the relationship is not synchronized.
IBM_2145:ITSOSVC01:admin>svctask startrcrelationship -primary master -force GM_REL3
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL3
id 2
name GM_REL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 2
master_vdisk_name GM_VDisk3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 25
aux_vdisk_name GM_VDisk_T3
primary master
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

Restarting a Global Mirror consistency group in the Idling state


When restarting a Global Mirror consistency group in the Idling state, we must specify the
copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, then consistency will be compromised. Therefore, we must specify the -force parameter to start the relationship. If the -force parameter is not used, the command fails.

In Example 13-16, we change the copy direction by specifying the auxiliary VDisks to be the
primaries.



Example 13-16 Restarting a Global Mirror relationship while changing the copy direction
IBM_2145:ITSOSVC01:admin>svctask startrcconsistgrp -primary aux CG_W2K_GM
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_GM
id 255
name CG_W2K_GM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status online
sync
copy_type global
RC_rel_id 5
RC_rel_name GM_REL2
RC_rel_id 6
RC_rel_name GM_REL1

Switching copy direction for a Global Mirror relationship


When a Global Mirror relationship is in the consistent synchronized state, we can change the
copy direction for the relationship, using the command svctask switchrcrelationship,
specifying the primary VDisk.

If the VDisk specified as the primary is already the primary when the command is issued, the command has no effect.

In Example 13-17, we change the copy direction for the stand-alone Global Mirror
relationship, specifying the auxiliary VDisk to be the primary.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to
the VDisk which transitions from primary to secondary, since all I/O will be inhibited to that
VDisk when it becomes the secondary. Therefore, careful planning is required prior to
using the svctask switchrcrelationship command.

Example 13-17 Switching the copy direction for a Global Mirror consistency group
IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL3
id 2
name GM_REL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 2
master_vdisk_name GM_VDisk3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 25
aux_vdisk_name GM_VDisk_T3
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time



status online
sync
copy_type global

IBM_2145:ITSOSVC01:admin>svctask switchrcrelationship -primary aux GM_REL3


IBM_2145:ITSOSVC01:admin>svcinfo lsrcrelationship GM_REL3
id 2
name GM_REL3
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
master_vdisk_id 2
master_vdisk_name GM_VDisk3
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
aux_vdisk_id 25
aux_vdisk_name GM_VDisk_T3
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

Switching copy direction for a Global Mirror consistency group


When a Global Mirror consistency group is in the consistent synchronized state, we can
change the copy direction for the relationship using the command svctask
switchrcconsistgrp, specifying the primary VDisk.

If the VDisk specified as the primary is already the primary when the command is issued, the command has no effect.

In Example 13-18 we change the copy direction for the Global Mirror consistency group,
specifying the auxiliary to become the primary.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks that transition from primary to secondary, since all I/O will be inhibited when they become the secondary. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.

Example 13-18 Switching the copy direction for a Global Mirror consistency group
IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_GM
id 255
name CG_W2K_GM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary master
state consistent_synchronized
relationship_count 2
freeze_time



status online
sync
copy_type global
RC_rel_id 5
RC_rel_name GM_REL2
RC_rel_id 6
RC_rel_name GM_REL1

IBM_2145:ITSOSVC01:admin>svctask switchrcconsistgrp -primary aux CG_W2K_GM


IBM_2145:ITSOSVC01:admin>svcinfo lsrcconsistgrp CG_W2K_GM
id 255
name CG_W2K_GM
master_cluster_id 000002006180311C
master_cluster_name ITSOSVC01
aux_cluster_id 000002006040469E
aux_cluster_name ITSOSVC02
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status online
sync
copy_type global
RC_rel_id 5
RC_rel_name GM_REL2
RC_rel_id 6
RC_rel_name GM_REL1

13.5 Global Mirror scenario using the GUI


In the following scenario, we will set up Global Mirror for the following VDisks from ITSOSVC01
to ITSOSVC02 at the secondary site:
VDISK1: Database files
VDISK2: Database log files
VDISK3: Application files

Since data consistency is needed across VDisk1 and VDisk2, we will create a consistency group to ensure that those two VDisks maintain consistency. Because, in this scenario, the application files are independent of the database, we create a stand-alone Global Mirror relationship for VDisk3. The Global Mirror setup is illustrated in Figure 13-8.



Figure 13-8 Global Mirror scenario using the GUI

13.5.1 Setting up Global Mirror


In the following section we assume that the source and target VDisks have already been
created and that the ISLs and zoning are in place enabling the SVC clusters to communicate.

To set up the Global Mirror, you must perform the following steps:
򐂰 Create SVC partnership between ITSOSVC01 and ITSOSVC02, on both SVC clusters:
– Bandwidth 100 MBps
򐂰 Create a Global Mirror consistency group:
– Name CG_W2K_GM
򐂰 Create the Global Mirror relationship for VDISK1:
– Master GM_VDisk1
– Auxiliary GM_VDisk_T1
– Auxiliary SVC cluster ITSOSVC02
– Name GM_REL1
– Consistency group CG_W2K_GM
򐂰 Create the Global Mirror relationship for VDISK2:
– Master GM_VDisk2
– Auxiliary GM_VDisk_T2
– Auxiliary SVC cluster ITSOSVC02
– Name GM_REL2
– Consistency group CG_W2K_GM
򐂰 Create the Global Mirror relationship for VDISK3:
– Master GM_VDisk3
– Auxiliary GM_VDisk_T3
– Auxiliary SVC cluster ITSOSVC02
– Name GM_REL3



In the following section, each step is carried out using the GUI.

13.5.2 Creating an SVC partnership between ITSOSVC01 and ITSOSVC02


We do this on both clusters.

To create a Global Mirror partnership between the SVC clusters using the GUI we launch the
SVC GUI for ITSOSVC01. Then we select Manage Copy Services and click Metro & Global
Mirror Cluster Partnership, as shown in Figure 13-9.

Figure 13-9 Selecting Global Mirror Cluster Partnership on ITSOSVC01

To confirm that we want to create a Global Mirror SVC cluster partnership, we click Create, as shown in Figure 13-10.

Figure 13-10 Confirming that a Global Mirror partnership is to be created

In Figure 13-11, the available SVC cluster candidates are listed, which in our case is only ITSOSVC02. We select ITSOSVC02, specify the available bandwidth for background copy, in this case 500 MBps, and then click OK.



Figure 13-11 Selecting SVC partner and specifying bandwidth for background copy

In the resulting window shown in Figure 13-12, the created Global Mirror cluster partnership
is shown as Partially Configured.

To fully configure the Global Mirror cluster partnership, we must carry out the same steps on ITSOSVC02 as we did on ITSOSVC01. For simplicity, only the last two windows are shown in the following figures.

Figure 13-12 Global Mirror cluster partnership is partially configured

Launching the SVC GUI for ITSOSVC02, we select ITSOSVC01 for the Global Mirror cluster partnership, specify the available bandwidth for background copy, again 500 MBps, and then click OK, as shown in Figure 13-13.



Figure 13-13 Selecting SVC partner and specify bandwidth for background copy

Now that both sides of the SVC Cluster Partnership are defined, the resulting window shown
in Figure 13-14 confirms that our Global Mirror cluster partnership is Fully Configured.

Figure 13-14 Global Mirror cluster partnership is fully configured

Info: Link Tolerance, Intercluster Delay Simulation and Intracluster Delay Simulation are
introduced with the use of the Global Mirror feature.

Global Mirror Link Tolerance


The “gm_link_tolerance” parameter defines how sensitive the SVC is to intercluster link overload conditions. The value is the number of seconds of continuous link difficulties that will be tolerated before the SVC stops the rcrelationships in order to prevent impacting host I/O at the primary site. To change the value, use the following command:
svctask chcluster -gmlinktolerance link_tolerance

The link tolerance values are between 60 and 86400 seconds, in increments of 10 seconds. The default value for the link tolerance is 300 seconds.

Recommendation: We strongly recommend that you use the default value. If the link is overloaded for a period which would impact host I/O at the primary site, the relationships will be stopped to protect those hosts.



Intercluster and Intracluster Delay Simulation
This Global Mirror feature permits a simulation of a delayed write to a remote VDisk. This
feature allows testing to be performed that detects colliding writes and so can be used to test
an application before full deployment of the Global Mirror feature. The Delay Simulation can
be enabled separately for either intracluster or intercluster Global Mirror. To enable this
feature, you need to run the following command, either for the intercluster or intracluster
simulation:
Intercluster: svctask chcluster -gminterdelaysimulation inter_cluster_delay_simulation
and
Intracluster: svctask chcluster -gmintradelaysimulation intra_cluster_delay_simulation

“intra_cluster_delay_simulation” expresses the amount of time that intracluster secondary I/Os are delayed, and “inter_cluster_delay_simulation” expresses the amount of time that intercluster secondary I/Os are delayed. These values specify the number of milliseconds that I/O activity (copying a primary VDisk to a secondary VDisk) is delayed. A value from 0 to 100, in 1 millisecond increments, can be set for the delay simulation parameters in the commands above. A value of zero disables the feature.

To check the current settings for the delay simulation, use the following command:
svcinfo lscluster clustername

In Example 13-19 we show the modification of the delay simulation and the change of the Global Mirror link tolerance. We also show the changed values for the Global Mirror link tolerance and delay feature settings.

Example 13-19 View GM link tolerance and delay settings


IBM_2145:ITSOSVC01:admin>svctask chcluster -gmintradelaysimulation 20

IBM_2145:ITSOSVC01:admin>svctask chcluster -gminterdelaysimulation 40

IBM_2145:ITSOSVC01:admin>svctask chcluster -gmlinktolerance 200

IBM_2145:ITSOSVC01:admin>svcinfo lscluster ITSOSVC01


id 000002006180311C
name ITSOSVC01
location local
partnership
bandwidth
cluster_IP_address 9.43.86.29
cluster_service_IP_address 9.43.86.30
total_mdisk_capacity 410.2GB
space_in_mdisk_grps 264.0GB
space_allocated_to_vdisks 6.0GB
total_free_space 403.3GB
statistics_status on
statistics_frequency 10
required_memory 4096
cluster_locale en_US
SNMP_setting none
SNMP_community
SNMP_server_IP_address 0.0.0.0
subnet_mask 255.255.255.0
default_gateway 9.43.86.1
time_zone 520 US/Pacific
email_setting none
email_id



code_level 4.1.0.10 (build 5.6.0606080000)
FC_port_speed 2Gb
console_IP 9.43.85.141:9080
id_alias 000002006180311C
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 40
gm_intra_cluster_delay_simulation 20

Launching the SVC GUI for ITSOSVC02, we select ITSOSVC01 for the Global Mirror cluster partnership, specify the available bandwidth for background copy, and click OK, as shown in Figure 13-15.

Figure 13-15 Selecting SVC partner and specify bandwidth for background copy

Now that both sides of the SVC Cluster Partnership are defined, the resulting window shown
in Figure 13-16 confirms that our Global Mirror cluster partnership is now Fully Configured.

Figure 13-16 Global Mirror cluster partnership is fully configured

After performing the steps to create a fully configured partnership in the SVC GUI for ITSOSVC02, we close the SVC GUI for ITSOSVC02. The following steps are performed from the SVC GUI for ITSOSVC01.



13.5.3 Creating a Global Mirror consistency group
To create the consistency group to be used for the Global Mirror relationships for the VDisks with the database and log files, we select Manage Copy Services and click Global Mirror Consistency Groups, as shown in Figure 13-17.

Figure 13-17 Selecting Global Mirror Consistency Groups

Next, we have the opportunity to filter the list of consistency groups; however, we just click Bypass Filter to continue to the next window.

To start the creation process, we select Create Consistency Group from the scroll menu and
click Go, as shown in Figure 13-18.

Figure 13-18 Create a consistency group

We are presented with an overview of the steps in the process of creating a consistency group; we click Next to proceed.



As shown in Figure 13-19, we specify the consistency group name and whether it is to be
used for intercluster or intracluster relationships. In our scenario we select Intercluster and
click Next.

Figure 13-19 Specifying consistency group name and type

As shown in Figure 13-20, there are currently no defined Global Mirror relationships (since we
have not defined any at this point) to be included in the Global Mirror consistency group and
we click Next to proceed.

Figure 13-20 There are no defined Global Mirror relationships to be added



As shown in Figure 13-21, we verify the settings for the consistency group and click Finish to
create the Global Mirror consistency group.

Figure 13-21 Verifying the settings for the Global Mirror consistency group

When the Global Mirror consistency group is created we are returned to the list of defined
consistency groups shown in Figure 13-22.

Figure 13-22 Viewing Global Mirror consistency groups



13.5.4 Creating the Global Mirror relationships for VDISK1 and VDISK2
To create the Global Mirror relationships for VDISK1 and VDISK2 we select Manage Copy
Services and click Global Mirror Cluster Relationships, as shown in Figure 13-23.

Figure 13-23 Selecting Global Mirror Relationships

Next, we have the opportunity to filter the list of Global Mirror relationships; however, we just click Bypass Filter to continue to the next window.

To start the creation process we select Create a Relationship from the scroll menu and click
Go, as shown in Figure 13-24.

Figure 13-24 Create a relationship

Next we are presented with an overview of the steps in the process of creating a relationship;
click Next to proceed.



As shown in Figure 13-25, we name our first Global Mirror relationship (GM_REL1) and specify that the relationship will be intercluster.

Figure 13-25 Naming the Global Mirror relationship and selecting the auxiliary cluster

The next step enables us to select a master VDisk. As this list could potentially be large, the Filtering Master VDisks Candidates window appears, which enables us to reduce the list of eligible VDisks based on a defined filter.

In Figure 13-26, we use the filter *VDisk* and click Next.

Figure 13-26 Defining filter for master VDisk candidates



As shown in Figure 13-27, we select GM_VDisk1 to be the master VDisk of the relationship,
and click Next to proceed.

Figure 13-27 Selecting the master VDisk

The next step requires us to select an auxiliary VDisk. The SVC wizard automatically filters this list so that only eligible VDisks are shown. Eligible VDisks are those that are the same size as the master VDisk and are not already part of a Global Mirror relationship.

As shown in Figure 13-28, we select GM_VDisk_T1 as the auxiliary VDisk of the relationship,
and click Next to proceed.

Figure 13-28 Selecting the auxiliary VDisk



As shown in Figure 13-29 we select the relationship to be part of the consistency group that
we created and click Next to proceed.

Figure 13-29 Selecting the relationship to be part of a consistency group

Finally, in Figure 13-30, we verify the Global Mirror relationship and click Finish to create it.

Figure 13-30 Verifying the Global Mirror relationship



Once the relationship is successfully created, we are returned to the Global Mirror
relationship list as shown in Figure 13-31.

Figure 13-31 Viewing Global Mirror relationships

We create the second Global Mirror relationship, GM_REL2, by repeating the process starting at Figure 13-24 on page 537. After creating all our relationships, they are listed in Figure 13-32.

Figure 13-32 Viewing the Global Mirror relationships after creating GM_REL2

13.5.5 Creating the stand-alone Global Mirror relationship for VDISK3


To create the stand-alone Global Mirror relationship, we start the creation process by selecting Create a Relationship from the scroll menu and clicking Go, as shown in Figure 13-33.

Figure 13-33 Create a Global Mirror relationship



Next, we are presented with an overview of the steps in the process of creating a relationship; we click Next to proceed.

As shown in Figure 13-34, we name the relationship (GM_REL3) and specify that it is an
intercluster relationship and click Next.

Figure 13-34 Specifying the Global Mirror relationship name and auxiliary cluster

As shown in Figure 13-35, we are queried for a filter prior to presenting the master VDisk candidates. We choose to filter for *VDisk* and click Next.

Figure 13-35 Filtering VDisk candidates



As shown in Figure 13-36 we select GM_VDisk3 to be the master VDisk of the relationship, and
click Next to proceed.

Figure 13-36 Selecting the master VDisk

As shown in Figure 13-37, we select GM_VDisk_T3 as the auxiliary VDisk of the relationship,
and click Next to proceed.

Figure 13-37 Selecting the auxiliary VDisk



As shown in Figure 13-38, we specify that the master and auxiliary VDisks are already synchronized (for the purpose of this example, we can assume that they are pristine). As we did not select a consistency group, we are creating a stand-alone Global Mirror relationship.

Figure 13-38 Selecting options for the Global Mirror relationship

Note: To add a Global Mirror relationship to a consistency group it must be in the same
state as the consistency group.

Even if we intended to make the Global Mirror relationship GM_REL3 part of the consistency group CG_W2K_GM, we are not offered the option, because the state of the relationship GM_REL3 is Consistent stopped (we selected the synchronized option), while the state of the consistency group CG_W2K_GM is currently Inconsistent stopped.

The status of the Global Mirror relationships can be seen in Figure 13-40.



Finally, Figure 13-39 shows the actions that will be performed. We click Finish to create this new relationship.

Figure 13-39 Verifying the Global Mirror relationship

After successful creation we are returned to the Global Mirror relationship screen.
Figure 13-40 now shows all our defined Global Mirror relationships.

Figure 13-40 Viewing Global Mirror relationships

13.5.6 Executing Global Mirror


Now that we have created the Global Mirror consistency group and relationships, we are
ready to use the Global Mirror relationships in our environment.

When implementing Global Mirror, the goal is to reach a consistent and synchronized state which can provide redundancy in case a hardware failure occurs that affects the SAN at the production site.

In the following section, we show how to stop and start the stand-alone Global Mirror
relationship and the consistency group.



13.5.7 Starting a stand-alone Global Mirror relationship
In Figure 13-41 we select the stand-alone Global Mirror relationship GM_REL3, and from the
scroll menu, we select Start Copy Process and click Go.

Figure 13-41 Starting a stand-alone Global Mirror relationship

In Figure 13-42 we do not need to change the Forced start, Mark as clean, or Copy direction
parameters, as this is the first time we are invoking this Global Mirror relationship (and we
defined the relationship as being already synchronized in Figure 13-38 on page 544). We
click OK to start the stand-alone Global Mirror relationship GM_REL3.

Figure 13-42 Selecting options and starting the copy process



Since the Global Mirror relationship was in the Consistent stopped state and no updates have
been made on the primary VDisk, the relationship enters the Consistent synchronized state
shown in Figure 13-43.

Figure 13-43 Viewing Global Mirror relationships

13.5.8 Starting a Global Mirror consistency group


To start the Global Mirror consistency group CG_W2K_GM, we select Global Mirror
Consistency Groups shown in Figure 13-44.

Figure 13-44 Selecting Global Mirror Consistency Groups

We click Bypass Filter to proceed without filtering.

In Figure 13-45 we select the Global Mirror consistency group CG_W2K_GM, and from the scroll
menu, we select Start Copy Process and click Go.

Figure 13-45 Selecting start copy process

As shown in Figure 13-46 we click OK to start the copy process. We cannot select the Forced
start, Mark as clean, or Copy Direction options, as our consistency group is currently in the
Inconsistent stopped state.

Figure 13-46 Selecting options and starting the copy process

As shown in Figure 13-47, we are returned to the Global Mirror consistency group list and the
consistency group CG_W2K_GM has transitioned to the Inconsistent copying state.

Figure 13-47 Viewing Global Mirror consistency groups

Because the consistency group was in the Inconsistent stopped state, it enters the
Inconsistent copying state until the background copy has completed for all relationships in
the consistency group. Upon completion of the background copy, the group enters the
Consistent synchronized state.

Monitoring background copy progress


The status of the background copy can be seen on the Viewing Global Mirror Relationships
page shown in Figure 13-48. Alternatively, use the Manage Progress section under My Work
and check the Mirror copy progress there.

Figure 13-48 Viewing Global Mirror relationships
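
The same information is available from the CLI. As a sketch using this example's names, the
detailed view of a relationship includes a progress field while the background copy is
running:

svcinfo lsrcrelationship GM_REL3
svcinfo lsrcconsistgrp CG_W2K_GM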

Note: Setting up SNMP traps for the SVC enables automatic notification when Global
Mirror consistency groups or relationships change state.

13.5.9 Stopping a stand-alone Global Mirror relationship
To stop a Global Mirror relationship while enabling access (write I/O) to both the primary and
the secondary VDisk, we select the relationship and Stop Copy Process from the scroll menu
and click Go, as shown in Figure 13-49.

Figure 13-49 Stopping a stand-alone Global Mirror relationship

As shown in Figure 13-50 we check the Enable write access option and click OK to stop the
Global Mirror relationship.

Figure 13-50 Enable access to the secondary VDisk while stopping the relationship

As shown in Figure 13-51 the Global Mirror relationship transitions to the Idling state when
stopped while enabling write access to the secondary VDisk.

Figure 13-51 Viewing the Global Mirror relationships

13.5.10 Stopping a Global Mirror consistency group
As shown in Figure 13-52 we select the Global Mirror consistency group and Stop Copy
Process from the scroll menu and click Go.

Figure 13-52 Selecting the Global Mirror consistency group to be stopped

As shown in Figure 13-53 we click OK without specifying Enable write access to the
secondary VDisk.

Figure 13-53 Stopping the consistency group, without enabling access to the secondary VDisk

As shown in Figure 13-54 the consistency group enters the Consistent stopped state when
stopped.

Figure 13-54 Viewing Global Mirror consistency groups

If we afterward want to enable access (write I/O) to the secondary VDisks, we can reissue
the Stop Copy Process, specifying that access be enabled to the secondary VDisks.

In Figure 13-55 we select the Global Mirror consistency group and Stop Copy Process from
the scroll menu and click Go.

Figure 13-55 Selecting the Global Mirror consistency group

As shown in Figure 13-56 we check the Enable write access box and click OK.

Figure 13-56 Enabling access to the secondary VDisks

When applying the Enable write access option, the consistency group transitions to the Idling
state shown in Figure 13-57.

Figure 13-57 The Global Mirror consistency group is in the Idling state

13.5.11 Restarting a Global Mirror relationship in the Idling state
When restarting a Global Mirror relationship in the Idling state, we must specify the copy
direction.

If any updates have been performed on either the master or the auxiliary VDisk of the Global
Mirror relationship, then consistency will have been compromised. In this situation, we must
check the Force option to start the copy process; otherwise, the command fails.

As shown in Figure 13-58, we select the Global Mirror relationship and Start Copy Process
from the scroll menu and click Go.

Figure 13-58 Starting a stand-alone Global Mirror relationship in the Idling state

As shown in Figure 13-59, we check the Force option since write I/O has been performed
while in the Idling state and we select the copy direction by defining the master VDisk as the
primary and click OK.

Figure 13-59 Starting the copy process

As shown in Figure 13-60, the Global Mirror relationship enters the Consistent copying state.
When the background copy is complete the relationship transitions to the Consistent
synchronized state.

Figure 13-60 Viewing the Global Mirror relationships

13.5.12 Restarting a Global Mirror consistency group in the Idling state


When restarting a Global Mirror consistency group in the Idling state, we must specify the
copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the
Global Mirror relationships in the consistency group, then consistency will have been
compromised. In this situation, we must check the Force option to start the copy process;
otherwise, the command fails.

As shown in Figure 13-61, we select the Global Mirror consistency group and Start Copy
Process from the scroll menu and click Go.

Figure 13-61 Starting the copy process

As shown in Figure 13-62, we check the Force option and set the copy direction by selecting
the master VDisks as the primary.

Figure 13-62 Starting the copy process for the consistency group

When the background copy completes the Global Mirror consistency group enters the
Consistent synchronized state shown in Figure 13-63.

Figure 13-63 Viewing Global Mirror consistency groups

13.5.13 Switching copy direction for a Global Mirror relationship


When a Global Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship.

In Figure 13-64, we select the relationship GM_REL3 and Switch Copy Direction from the
scroll menu and click Go.

Note: When the copy direction is switched, it is crucial that no I/O is outstanding to the
VDisk that transitions from primary to secondary, because all I/O to that VDisk is inhibited
when it becomes the secondary. Therefore, careful planning is required prior to switching
the copy direction for a Global Mirror relationship.

Figure 13-64 Selecting the relationship for which the copy direction is to be changed

In Figure 13-65, we see that the current primary VDisk is the master, so to change the copy
direction for the stand-alone Global Mirror relationship we specify the auxiliary VDisk to be the
primary, and click OK.

Figure 13-65 Selecting the primary VDisk to switch the copy direction

The copy direction is now switched and we are returned to the Global Mirror relationship list,
where we see that the copy direction has been switched as shown in Figure 13-66.

Figure 13-66 Viewing Global Mirror relationship, after changing the copy direction

13.5.14 Switching copy direction for a Global Mirror consistency group
When a Global Mirror consistency group is in the Consistent synchronized state, we can
change the copy direction for the Global Mirror consistency group.

In Figure 13-67 we select the consistency group CG_W2K_GM and Switch Copy Direction from
the scroll menu and click Go.

Note: When the copy direction is switched, it is crucial that no I/O is outstanding to the
VDisks that transition from primary to secondary, because all I/O to them is inhibited when
they become the secondary. Therefore, careful planning is required prior to switching the
copy direction.

Figure 13-67 Selecting the consistency group for which the copy direction is to be changed

In Figure 13-68 we see that currently the primary VDisks are the master, so to change the
copy direction for the Global Mirror consistency group, we specify the auxiliary VDisks to
become the primary, and click OK.

Figure 13-68 Selecting the primary VDisk to switch the copy direction

The copy direction is now switched and we are returned to the Global Mirror consistency
group list, where we see that the copy direction has been switched. Figure 13-69 shows that
the auxiliary is now the primary.

Figure 13-69 Viewing Global Mirror consistency groups, after changing the copy direction

We are now finished with the Global Mirror GUI operations.

Chapter 14. Migration to and from the SAN Volume Controller
In this chapter we explain how to migrate from a conventional storage infrastructure to a
virtualized storage infrastructure using the SVC. We also explain how the SVC can be
phased out of a virtualized storage infrastructure, for example, after a trial period.

14.1 Migration overview
The SVC allows the mapping of Virtual Disk (VDisk) extents to Managed Disk (MDisk) extents
to be changed, without interrupting host access to the VDisk. This functionality is utilized
when performing VDisk migrations, and can be performed for any VDisk defined on the SVC.

This functionality can be used for:


• Redistribution of VDisks, and thereby the workload within an SVC cluster, across back-end storage:
  – Moving workload onto newly installed storage.
  – Moving workload off old or failing storage, ahead of decommissioning it.
  – Moving workload to rebalance a changed workload.
• Migrating data from legacy back-end storage to SVC managed storage.
• Migrating data from one back-end controller to another, using the SVC as a data block mover, and afterwards removing the SVC from the SAN.
• Migrating data from managed mode back into image mode prior to removing the SVC from a SAN.

14.2 Migration operations


Migration can be performed at either the VDisk or the extent level depending on the purpose
of the migration. The different supported migration activities are:
• Migrating extents within a Managed Disk Group (MDG), redistributing the extents of a given VDisk on the MDisks in the MDG
• Migrating extents off an MDisk that is to be removed from the MDG (to other MDisks in the MDG)
• Migrating a VDisk from one MDG to another MDG
• Migrating a VDisk to change the virtualization type of the VDisk to image
• Migrating a VDisk between I/O Groups

14.2.1 Migrating multiple extents (within an MDG)


A number of VDisk extents can be migrated at once using the migrateexts command.
Extents are allocated on the destination MDisk using the algorithm described in 3.6.6,
“Allocation of free extents” on page 57.

When executed, this command migrates a given number of extents from the source MDisk,
on which extents of the specified VDisk reside, to a defined target MDisk that must be part of
the same MDG.

The number of migration threads which will be used in parallel can be specified from 1 to 4.

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is
migrated while the MDisk access mode transitions from image to managed.

The syntax of the CLI command is:


svctask migrateexts -source src_mdisk_id | src_mdisk_name -exts num_extents -target
target_mdisk_id | target_mdisk_name [-threads number_of_threads] -vdisk vdisk_id |
vdisk_name

The parameters for the CLI command are:
• -vdisk: Specifies the VDisk ID or name to which the extents belong.
• -source: Specifies the source Managed Disk ID or name on which the extents currently reside.
• -exts: Specifies the number of extents to migrate.
• -target: Specifies the target MDisk ID or name onto which the extents are to be migrated.
• -threads: Optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
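
As an illustration, the following hypothetical invocation moves 16 extents of VDisk VD1 from
mdisk0 to mdisk1 using two threads (all names here are illustrative only):

svctask migrateexts -source mdisk0 -exts 16 -target mdisk1 -threads 2 -vdisk VD1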

14.2.2 Migrating extents off an MDisk which is being deleted


When an MDisk is deleted from an MDG using the rmmdisk -force command, any occupied
extents on the MDisk are migrated off the MDisk (to other MDisks in the MDG) prior to its
deletion.

In this case, the extents that need to be migrated are moved onto the set of MDisks that are
not being deleted, and the extents are distributed according to the algorithm described in
3.6.6, “Allocation of free extents” on page 57. This also holds true if multiple MDisks are
being removed from the MDG at the same time; MDisks that are being removed are not
candidates for supplying free extents to the allocation algorithm.

If a VDisk uses one or more extents which need to be moved as a result of a delete mdisk
command, then the virtualization type for that VDisk is set to striped (if it previously was
sequential or image).

If the MDisk is operating in image mode, the MDisk transitions to managed mode while the
extents are being migrated, and upon deletion it transitions to unmanaged.

The syntax of the CLI command is:


svctask rmmdisk -mdisk mdisk_id_list | mdisk_name_list [-force]
mdisk_group_id | mdisk_group_name

The parameters for the CLI command are:


• -mdisk: Specifies one or more MDisk IDs or names to delete from the group.
• -force: Migrates any data that belongs to VDisks before removing the MDisk.

Note: If the -force flag is not supplied and VDisks occupy extents on one or more of the
MDisks specified, the command fails.

When the -force flag is supplied and VDisks exist that are made from extents on one or
more of the MDisks specified, all extents on the MDisks are migrated to the other MDisks
in the MDG, provided there are enough free extents in the MDG. The deletion of the
MDisks is postponed until all extents are migrated, which can take some time. If there are
not enough free extents in the MDG, the command fails.

When the -force flag is supplied, the command will complete asynchronously.
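
As an illustration, removing a hypothetical MDisk mdisk5 from a group MDG1 and letting the
SVC migrate its occupied extents first (both names are illustrative only):

svctask rmmdisk -mdisk mdisk5 -force MDG1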

14.2.3 Migrating a VDisk between MDGs
An entire VDisk can be migrated from one MDG to another MDG using the migratevdisk
command. A VDisk can be migrated between MDGs regardless of the virtualization type
(image, striped, or sequential), though it will transition to the virtualization type of striped.
The command will vary depending on the type of migration as shown in Table 14-1.

Table 14-1 Migration type


MDG to MDG type Command

Managed to managed migratevdisk

Image to managed migratevdisk

Managed to image migratetoimage

Image to image migratetoimage

The syntax of the CLI command is:


svctask migratevdisk -mdiskgrp mdisk_group_id | mdisk_group_name [-threads
number_of_threads] -vdisk vdisk_id | vdisk_name

The parameters for the CLI command are:


• -vdisk: Specifies the VDisk ID or name to migrate into another MDG.
• -mdiskgrp: Specifies the target MDG ID or name.
• -threads: Optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
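
For example, to migrate a hypothetical VDisk VD3 into the group MDG2 using four threads
(names are illustrative only):

svctask migratevdisk -mdiskgrp MDG2 -threads 4 -vdisk VD3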

The syntax of the CLI command is:


svctask migratetoimage -vdisk source_vdisk_id | name -mdisk unmanaged_target_mdisk_id |
name -mdiskgrp managed_disk_group_id | name [-threads number_of_threads]

The parameters for the CLI command are:


• -vdisk: Specifies the name or ID of the source VDisk to be migrated.
• -mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk must be unmanaged and large enough to contain the data of the disk being migrated.)
• -mdiskgrp: Specifies the MDG into which the MDisk must be placed when the migration has completed.
• -threads: Optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.

In Figure 14-1 we illustrate how the VDisk V3 is being migrated from MDG1 to MDG2.

Important: In order for the migration to be “legal”, the source and destination MDGs must
have the same extent size.


Figure 14-1 VDisk migration between MDGs

Extents are allocated to the migrating VDisk, from the set of MDisks in the target MDG using
the extent allocation algorithm described in 3.6.6, “Allocation of free extents” on page 57.

The process can be prioritized by specifying the number of threads (from 1 to 4) to use in
parallel while migrating; using only one thread puts the least background load on the system.

For the duration of the move, the offline rules described in 3.6.8, “I/O handling and offline
conditions” on page 57, apply to both MDGs. Therefore, referring back to Figure 14-1, if any
of the MDisks M4, M5, M6, or M7 goes offline, then VDisk V3 goes offline. If MDisk M4 goes
offline, then V3 and V5 go offline, but V1, V2, V4, and V6 remain online.

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is
migrated while the MDisk access mode transitions from image to managed.

For the duration of the move, the VDisk is listed as being a member of the original MDG. For
the purposes of configuration, the VDisk moves to the new MDG instantaneously at the end
of the migration.

14.2.4 Migrating the VDisk to image mode
The facility to migrate a VDisk to an image mode VDisk can be combined with the ability to
migrate between MDGs. The source for the migration can be a managed mode or an image
mode VDisk. This leads to four possibilities:
• Migrate image mode to image mode within an MDG.
• Migrate managed mode to image mode within an MDG.
• Migrate image mode to image mode between MDGs.
• Migrate managed mode to image mode between MDGs.

To be able to migrate:
• The destination MDisk must be greater than or equal to the size of the VDisk.
• The MDisk specified as the target must be in an unmanaged state at the time the command is run.
• If the migration is interrupted by a cluster recovery, the migration resumes after the recovery completes.
• If the migration involves moving between Managed Disk Groups, the VDisk behaves as described in “Migrating a VDisk between MDGs” on page 562.

The syntax of the CLI command is:


svctask migratetoimage -vdisk source_vdisk_id | name -mdisk unmanaged_target_mdisk_id |
name -mdiskgrp managed_disk_group_id | name [-threads number_of_threads]

The parameters for the CLI command are:


• -vdisk: Specifies the name or ID of the source VDisk to be migrated.
• -mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk must be unmanaged and large enough to contain the data of the disk being migrated.)
• -mdiskgrp: Specifies the MDG into which the MDisk must be placed when the migration has completed.
• -threads: Optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
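
For example, a hypothetical invocation that moves VDisk VD1 onto the unmanaged MDisk
mdisk8 and places that MDisk in the group MDG2 (names are illustrative only):

svctask migratetoimage -vdisk VD1 -mdisk mdisk8 -mdiskgrp MDG2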

Regardless of the mode that the VDisk starts in, it is reported as managed mode during the
migration. Also, both of the MDisks involved are reported as being image mode during the
migration. At completion of the command, the VDisk is classified as an image mode VDisk.

14.2.5 Migrating a VDisk between I/O groups


A VDisk can be migrated between I/O groups using the svctask chvdisk command. This is
only supported if the VDisk is not in a FlashCopy Mapping or Remote Copy relationship.

In order to move a VDisk between I/O groups, the cache must be flushed. The SVC will
attempt to destage all write data for the VDisk from the cache during the I/O group move. This
flush will fail if data has been pinned in the cache for any reason (such as an MDG being
offline). By default, this causes the migration between I/O Groups to fail, but this behavior
can be overridden using the -force flag. If the -force flag is used and if the SVC is unable to
destage all write data from the cache, then the result is that the contents of the VDisk are
corrupted by the loss of the cached data. During the flush, the VDisk operates in cache
write-through mode.

You must quiesce host I/O before the migration for two reasons:
• If there is significant data in the cache that takes a long time to destage, the command line will time out.
• SDD vpaths associated with the VDisk are deleted before the VDisk move takes place in order to avoid data corruption. Data corruption could therefore occur if I/O is still ongoing at a particular LUN ID when it is reused for another VDisk.

When migrating a VDisk between I/O Groups, you do not have the ability to specify the
preferred node. The preferred node is assigned by the SVC.

The syntax of the CLI command is:


svctask chvdisk [-iogrp io_group_id|io_group_name [-force]] [-rate throttle_rate [-unitmb]]
[-name new_name_arg] [-force ] vdisk_name | vdisk_id [-udid vdisk_udid]

The parameters for the CLI command are:


• -iogrp: Optionally specifies a new I/O Group to move the VDisk to, either by ID or name. -force can be used together with this parameter in order to force the removal of the VDisk from the I/O Group.
• -rate: Optionally sets I/O governing rates for the VDisk. The default units are I/Os per second, but -unitmb (-mb) can be used in conjunction with it to specify the rate in MB per second.
• -name: Optionally specifies a new name to assign to the VDisk.
• -force: Specifies that the I/O Group be changed without completing the destage of the cache. This can corrupt the contents of the VDisk.
• -udid: Optionally specifies the udid for the disk. Valid options are a decimal number from 0 to 32767, or a hex number from 0 to 0x7FFF. A hex number must be preceded by '0x' (for example, 0x1234). If this parameter is omitted, the default udid is 0.
• vdisk_id | vdisk_name: Entered as the last entry on the CLI; specifies the VDisk to modify, either by ID or by name.
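
For example, a hypothetical invocation that moves VDisk VD1 to I/O Group io_grp1 (after
quiescing host I/O as described above; names are illustrative only):

svctask chvdisk -iogrp io_grp1 VD1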

Notes: The three optional items are mutually exclusive. So, to change the name and
modify the I/O Group would require two invocations of the command.

A VDisk that is a member of a FlashCopy or Remote Copy relationship cannot be moved
to another I/O Group, and this cannot be overridden by using the -force flag.

14.2.6 Monitoring the migration progress


To monitor the progress of ongoing migrations, use the CLI command:
svcinfo lsmigrate

To determine the extent allocation of MDisks and VDisks, use the following commands:
• To list the VDisk IDs and the corresponding number of extents that the VDisks occupy on the queried MDisk, use the CLI command:
svcinfo lsmdiskextent <mdiskname>
• To list the MDisk IDs and the corresponding number of extents that the queried VDisk occupies on the listed MDisks, use the CLI command:
svcinfo lsvdiskextent <vdiskname>

To list the number of free extents available on an MDisk, use the CLI command:
svcinfo lsfreeextents <mdiskname>

Important: After a migration has been started, there is no way for you to stop the
migration. The migration runs to completion unless it is stopped or suspended by an error
condition, or if the VDisk being migrated is deleted.

14.3 Functional overview of migration


This section describes the functional view of data migration.

14.3.1 Parallelism
Some of the activities described below can be carried out in parallel.

Per cluster
An SVC cluster supports up to 32 active concurrent instances of members of the set of
migration activities:
• Migrate multiple extents
• Migrate between MDGs
• Migrate off deleted MDisk
• Migrate to image mode

These high-level migration tasks operate by scheduling single extent migrations, as
described below.

Up to 256 single extent migrations can run concurrently. This number includes single extent
migrates which result from the operations listed above, and single extent operations that you
requested.

The Migrate Multiple Extents and Migrate Between MDGs commands support a flag that
allows you to specify the number of “threads” to use, between 1 and 4. This parameter
affects the number of extents that are concurrently migrated for that migration operation.
Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that
operation, subject to other resource constraints.

Per MDisk
The SVC supports up to four concurrent single extent migrates per MDisk. This limit does not
take into account whether the MDisk is the source or the destination. If more than four single
extent migrates are scheduled for a particular MDisk, further migrations are queued pending
the completion of one of the currently running migrations.

14.3.2 Error handling


If a medium error occurs on a read from the source and the destination's medium error table
is full, or if an I/O error occurs repeatedly on a read from the source, or if the MDisks go
offline repeatedly, the migration is suspended or stopped.

The migration is suspended if any of the following conditions exist; otherwise, it is stopped:
• The migration is between Managed Disk Groups and has progressed beyond the first extent. These migrations are always suspended rather than stopped, because stopping a migration in progress would leave a VDisk spanning MDGs, which is not a valid configuration other than during a migration.
• The migration is a migrate to image mode (even if it is processing the first extent). These migrations are always suspended rather than stopped, because stopping a migration in progress would leave the VDisk in an inconsistent state.
• The migration is waiting for a metadata checkpoint that has failed.

If a migration is stopped and any migrations are queued awaiting the use of the MDisk for
migration, those migrations are now considered. If, however, a migration is suspended, the
migration continues to use resources, and so another migration is not started.

The SVC attempts to resume the migration if the error log entry is marked as fixed using the
CLI or the GUI. If the error condition no longer exists, the migration proceeds. The migration
might resume on a different node from the one that started the migration.

14.3.3 Migration algorithm


This section describes the effect of the migration algorithm.

Chunks
Regardless of the extent size for the MDG, data is migrated in units of 16 MB. In this
description, this unit is referred to as a chunk.

The algorithm followed to migrate an extent is as follows:


1. Pause all I/O on the source MDisk on all nodes in the SVC cluster (that is, queue all new
I/O requests in the virtualization layer in the SVC and wait for all outstanding requests to
complete). I/O to other extents is unaffected.
2. Unpause I/O on the source MDisk extent, apart from writes to the specific chunk that is
being migrated. Writes to the extent are mirrored to the source and destination as
described below.
3. On the node performing the migrate, for each 256K section of the chunk:
– Synchronously read 256K from the source.
– Synchronously write 256K to the target.
4. Once the entire chunk has been copied to the destination, repeat the process for the next
chunk within the extent.
5. Once the entire extent has been migrated, pause all I/O to the extent being migrated,
checkpoint the extent move to on-disk metadata, redirect all further reads to the
destination, and stop mirroring writes (writes only to destination).
6. If the checkpoint fails, then the I/O is unpaused.

During the migration, the extent can be divided into three regions as shown in Figure 14-2.
Region B is the chunk which is being copied. Writes to region B are queued (paused) in the
virtualization layer waiting for the chunk to be copied. Reads to Region A are directed to the
destination since this data has already been copied. Writes to Region A are written to both
the source and the destination extent in order to maintain the integrity of the source extent.
Reads and writes to Region C are directed to the source because this region has yet to be
migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack (such as cache
destages) are held back. If the back-end storage is operating with significant latency, then it is
possible that this operation might take some time (minutes) to complete. This can have an
adverse effect on the overall performance of the SVC. To avoid this situation, if the migration
of a particular chunk is still active after one minute, then the migration is paused for 30
seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the
migration of the chunk is resumed. This algorithm is repeated as many times as necessary to
complete the migration of the chunk.

Figure 14-2 Migrating an extent

SVC guarantees read stability during data migrations even if the data migration is stopped by
a node reset or a cluster shutdown. This is possible because SVC disallows writes on all
nodes to the area being copied, and upon a failure the extent migration is restarted from the
beginning.

14.4 Migrating data from an image mode VDisk


This section describes how to migrate data from an image mode VDisk to a VDisk.

14.4.1 Image mode VDisk migration concept


First, we describe the concepts associated with this operation.

MDisk modes
There are three different MDisk modes:
1. Unmanaged MDisk:
An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged
MDisk is not associated with any VDisks and has no metadata stored on it. The SVC will
not write to an MDisk which is in unmanaged mode except when it attempts to change the
mode of the MDisk to one of the other modes.
2. Image Mode MDisk:
Image Mode provides a direct block-for-block translation from the MDisk to the VDisk with
no virtualization. Image Mode VDisks have a minimum size of one block (512 bytes) and
always occupy at least one extent. An Image Mode MDisk is associated with exactly one
VDisk.
3. Managed Mode MDisk:
Managed Mode MDisks contribute extents to the pool of extents available in the MDG.
Zero or more Managed Mode VDisks might use these extents.

Transitions between the different modes


The following state transitions can occur to an MDisk (see Figure 14-3):
1. Unmanaged mode to managed mode:
This occurs when an MDisk is added to an MDisk group. This makes the MDisk eligible for
the allocation of data and metadata extents.
2. Managed mode to unmanaged mode:
This occurs when an MDisk is removed from an MDisk group.
3. Unmanaged mode to image mode:
This occurs when an image mode MDisk is created on an MDisk which was previously
unmanaged. It also occurs when an MDisk is used as the target for a Migrate to Image
Mode.
4. Image mode to unmanaged mode:
There are two distinct ways in which this can happen:
– When an image mode VDisk is deleted. The MDisk which supported the VDisk
becomes unmanaged.
– When an image mode VDisk is migrated in image mode to another MDisk, the MDisk
which it is being migrated from remains in image mode until all data has been moved
off it. It then transitions to unmanaged mode.
5. Image mode to managed mode:
This occurs when the image mode VDisk which is using the MDisk is migrated into
managed mode.
6. Managed mode to image mode is not possible:
There is no operation that will take an MDisk directly from managed mode to image mode.
This can be achieved by performing operations which convert the MDisk to unmanaged
mode and then to image mode.

Figure 14-3 Different states of an MDisk

Image mode VDisks have the special property that the last extent in the VDisk can be a
partial extent. Managed mode disks do not have this property.

To perform any type of migration activity on an image mode VDisk, the image mode disk must
first be converted into a managed mode disk. If the image mode disk has a partial last extent,
then this last extent in the image mode VDisk must be the first to be migrated. This migration
is handled as a special case.

After this special migration operation has occurred, the VDisk becomes a managed mode
VDisk and is treated in the same way as any other managed mode VDisk. If the image mode
disk does not have a partial last extent, then no special processing is performed, the image
mode VDisk is simply changed into a managed mode VDisk, and is treated in the same way
as any other managed mode VDisk.

After data is migrated off a partial extent, there is no way to migrate data back onto the partial
extent.

14.4.2 Migration tips
You have several methods to migrate an image mode VDisk into a managed mode VDisk:
• If your image mode VDisk is in the same MDG as the MDisks onto which you want to migrate the extents, you can:
  – Migrate a single extent. You have to migrate the last extent of the image mode VDisk (number N-1).
  – Migrate multiple extents.
  – Migrate all the in-use extents off an MDisk that is being deleted.
• If you have two MDGs, one for the image mode VDisk and one for the managed mode VDisks, you can migrate the VDisk from one MDG to the other.

The recommended method is to have one MDG for all the image mode VDisks and other
MDGs for the managed mode VDisks, and to use the migrate VDisk facility.

Do not forget to check that enough extents are available in the target MDG.
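
You can verify this from the CLI. For example, for a hypothetical target group MDG2, the
free_capacity column of the output indicates the space available for the migration:

svcinfo lsmdiskgrp MDG2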

14.5 Data migration for Windows using the GUI


This configuration is the same as the one shown in Figure 14-18 on page 578. This scenario
consists of the migration of one volume (from ESS label S:) of a Windows 2000 host from an
ESS to an SVC. The ESS is, both before and after the migration, connected directly to the
Windows 2000 host (via the 2109 switches).

Before the migration, the LUN masking is defined in the ESS to give access to the Windows
2000 host system for the volume from ESS label S:.

After the migration, LUN masking is defined in the ESS to give access to the SVC nodes for
the volume from ESS label S:.

The following actions occur for the migration:


1. Shut down the Windows 2000 host system before changing LUN masking in the ESS.
2. The volume is first discovered as an “MDisk unmanaged” by the SVC. It is MDisk10.
3. A VDisk is created in image mode using this MDisk.
4. This new VDisk is mapped to the host system Win2K.
5. Restart the Windows 2000 host system.
6. The VDisk is again available for the host system Win2K.

The VDisk can be migrated from image mode to managed mode concurrently with access
from the Win2K host system. The migration method used is migrating a VDisk from one
MDisk group to another.

14.5.1 Windows 2000 host system connected directly to the ESS
Figure 14-4 shows the disk from the ESS.

Figure 14-4 Disk management: One volume from ESS with label S:

Figure 14-5 shows the properties using the Subsystem Device Driver (SDD).

Figure 14-5 Drive S: from ESS with SDD

Figure 14-6 shows the volume properties.

Figure 14-6 Volume properties of Drive S

Figure 14-7 shows the files on volume S.

Figure 14-7 Files on volume S: (Volume from ESS)

14.5.2 SVC added between the Windows 2000 host system and the ESS
After you change LUN masking in the ESS, a new MDisk is discovered in the SVC, MDisk10.
A VDisk named VDisk0 already exists. It was created after migrating one hdisk from a host
system.

Create a new VDisk named winimageVDisk1 in image mode using the MDisk10 in the MDisk
group ess_MDiskgrp0:
1. As shown in Figure 14-8, select Create a VDisk from the list and click Go.

Figure 14-8 Viewing VDisks

2. The Create VDisks panel (Figure 14-9) of the wizard is displayed. Click Next.

Figure 14-9 Create VDisks

3. On the Select the Type of VDisk panel (Figure 14-10), type the name of the disk, select the
I/O group, select the MDisk group, and select the type of VDisk. Then click Next.

Figure 14-10 Select the Type of VDisk

4. On the Select Attributes for Image-mode VDisk panel (Figure 14-11), select the preferred
node for I/O and the MDisk used to create the VDisk. Then click Next.

Figure 14-11 Select Attributes for Image-mode VDisk

5. Verify the options that you selected as shown in Figure 14-12. Click Finish.

Figure 14-12 Verify VDisk

6. You can view the VDisk that you created as shown in Figure 14-13.

Figure 14-13 Viewing VDisks

7. The MDisk view is shown in Figure 14-14.

Figure 14-14 Viewing Managed Disks

8. Map the VDisk again to the Windows 2000 host system “WIN2K”. As shown in
Figure 14-15, select the name of the VDisk. Then select Map a VDisk to a host and click
Go.

Figure 14-15 Viewing VDisks

9. Select the target host as shown in Figure 14-16 and click OK.

Figure 14-16 Creating a VDisk-to-Host mapping winimageVDisk1

10. Restart the Windows 2000 host system. Figure 14-17 shows the result.

Figure 14-17 The volume S: Volume_from_ESS is online

11. Figure 14-18 shows the disk properties.

Figure 14-18 The volume S: Volume_from_ESS is online and is 2145 SDD Disk device

12. The volume is now online with the same data as before the migration. See Figure 14-19.
Figure 14-19 The volume S: Volume_from_ESS is online with the same data

13. Figure 14-20 shows the properties.

Figure 14-20 The volume S: Volume_from_ESS is online and has four paths to SVC 2145

14.5.3 Migrating the VDisk from image mode to managed mode
Now the VDisk is migrated to managed mode by migrating the entire VDisk from the MDG
ess_MDiskgrp0 to the MDG Migrated_VDisks. This MDG is based on FAStT 600 back-end
storage instead of ESS. It consists of MDisk4 and MDisk5.
1. As shown in Figure 14-21, select the VDisk. Then select Migrate a VDisk from the list and
click Go.

Figure 14-21 Viewing VDisks

2. Select the MDG to which to migrate the disk as shown in Figure 14-22. Click OK.

Figure 14-22 Migrating VDisks-winimagevdisk1

3. You can now view the MDG as shown in Figure 14-23.

Figure 14-23 View Managed Disk Groups

Before the migration is complete, the VDisk still belongs to ess_mdiskgrp0 as shown in
Figure 14-24.

Figure 14-24 Viewing VDisks

After the migration is complete, you see the results shown in Figure 14-25.

Figure 14-25 Viewing MDisks

Viewing the VDisks gives the results shown in Figure 14-26.

Figure 14-26 VDisk is now in Migrated_VDisks instead of ess_mdiskgrp0

Figure 14-27 shows the details of the VDisk.

Figure 14-27 Details for VDisk winimagevdisk1

Viewing the MDGs after the migration, you see the results shown in Figure 14-28.

Figure 14-28 The MDGs after complete migration

Finally, viewing the MDisks, you see the information as shown in Figure 14-29.

Figure 14-29 The MDisks after migration is complete

14.5.4 Migrating the VDisk from managed mode to image mode


A VDisk in managed mode can be migrated to image mode. One reason for doing this would
be that an SVC virtualization trial period has expired and you are returning the volume to its
original state.

In this example we migrate VDisk VD1_LINUX1 from managed mode to image mode.
1. Select VD1_LINUX1 and select migrate to an image mode VDisk from the pull-down menu
(Figure 14-30). Click Go.

Figure 14-30 Select VDisk and start migrate to an image mode VDisk

2. Select a target MDisk (mdisk3) by clicking the radio button (Figure 14-31). Click Next.

Figure 14-31 Select the target MDisk

3. Select a MDG (MDG2_DS43) by clicking the radio button (Figure 14-32). Click Next.

Figure 14-32 Select MDG

4. Select the number of threads (1 to 4). The higher the number, the higher the priority
(Figure 14-33). Click Next.

Figure 14-33 Select the threads

5. Verify the migration attributes (Figure 14-34) and click Finish.

Figure 14-34 Verify migration attributes

6. Check the progress panel (Figure 14-35) and click Close.

Figure 14-35 Progress panel

7. This brings you back into the viewing VDisks panel (Figure 14-36). Now you can see that
VD1_LINUX1 is in image mode.

Figure 14-36 Viewing VDisks

8. Click VD1_LINUX1 to see the details (Figure 14-37).

Figure 14-37 VDisk details

9. Free the data from the SVC by using the procedure in “Deleting a VDisk” on page 341.

If the command succeeds, then the underlying back-end storage controller will be consistent
with the data that a host could previously have read from the image mode VDisk
(VD1_LINUX1). That is, all fast write data will have been flushed to the underlying LUN.
Deleting an image mode VDisk causes the MDisk (mdisk3) associated with the VDisk
(VD1_LINUX1) to be ejected from the MDG. The mode of the MDisk (mdisk3) is returned to
unmanaged.

14.5.5 Migrating the VDisk from image mode to image mode


Migrating a VDisk from image mode to image mode is used to move image mode VDisks
from one storage subsystem to another storage subsystem. The data stays available to the
applications during this migration, so this is a zero-downtime data move from one disk
subsystem to another. This procedure is nearly the same as the one in “Migrating the VDisk
from managed mode to image mode” on page 583.

In this example we migrate VDisk VD9_img from image mode to image mode.

1. Select VD9_img and select migrate to an image mode VDisk from the pull-down menu
(Figure 14-38). Click Go.

Figure 14-38 Migrate to an image mode VDisk

2. Select a target MDisk (mdisk8) by clicking the radio button (Figure 14-39). Click Next.

Figure 14-39 Select the target MDisk

3. Select an MDG (MDG2_DS43) by clicking the radio button (Figure 14-40). Click Next.

Figure 14-40 Select MDG

4. Select the number of threads (1 to 4). The higher the number, the higher the priority
(Figure 14-41). Click Next.

Figure 14-41 Select the threads

5. Verify the migration attributes (Figure 14-42) and click Finish.

Figure 14-42 Verify migration attributes

6. Check the progress panel (Figure 14-43) and click Close.

Figure 14-43 Progress on the migration

7. Free the data from the SVC by using the procedure in “Deleting a VDisk” on page 341.

If the command succeeds, then the underlying back-end storage controller will be consistent
with the data which a host could previously have read from the Image Mode VDisk
(VD9_img). That is, all fast write data will have been flushed to the underlying LUN. Deleting
an Image Mode VDisk causes the MDisk (mdisk8) associated with the VDisk (VD9_img) to be
ejected from the MDG. The mode of the MDisk (mdisk8) will be returned to Unmanaged.

14.6 Migrating Linux SAN disks to SVC disks
In this section we will move the two LUNs from a Linux server that is currently booting directly
off of our DS4000 storage subsystem over to the SVC.

We will then manage those LUNs with SVC, move them between other managed disks, and
then finally move them back to image mode disks, so that those LUNs can then be
masked/mapped back to the Linux server directly.

Using this example will help you perform any one of the following activities in your
environment:
• Move a Linux server's SAN LUNs from a storage subsystem and virtualize those same LUNs via the SVC. This would be the first activity that you would do when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap/remask disks using your storage subsystem LUN management tool. This step is detailed in “Prepare your SVC to virtualize disks” on page 593.
• Move data between storage subsystems while your Linux server is still running and servicing your business application. You might perform this activity if you were removing a storage subsystem from your SAN environment, or if you wanted to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking into account availability, performance, and redundancy. This step is covered in “Migrate the image mode VDisks to managed MDisks” on page 598.
• Move your Linux server's LUNs back to image mode VDisks so that they can be remapped/remasked directly back to the Linux server. This step is detailed in “Preparing to migrate from the SVC” on page 600.

These three activities can be used individually or together, enabling you to migrate your
Linux server's LUNs from one storage subsystem to another storage subsystem using the
SVC as your migration tool. Used individually, they enable you to introduce the SVC into, or
remove it from, your environment.

The only downtime required for these activities will be the time it takes you to remask/remap
the LUNs between the storage subsystems and your SVC.

In Figure 14-44 we show our Linux environment.

Figure 14-44 Linux SAN environment

Figure 14-44 shows our Linux server connected to our SAN infrastructure. It has two LUNs
that are masked directly to it from our storage subsystem:
• LOCHNESS_BOOT_LUN_0 has the host operating system (our host is Red Hat Enterprise Linux 3.0), and this LUN is used to boot the system directly from the storage subsystem.

Note: To successfully boot a host off of the SAN, the LUN needs to have been
assigned as SCSI LUN ID 0.

This LUN is seen by Linux as our /dev/sda disk, on which we have created three partitions:
• /dev/sda1 is the /boot filesystem, which holds our boot kernels.
• /dev/sda2 is the / (root) filesystem.
• /dev/sda3 is a Logical Volume Manager (LVM) partition, which holds the vgroot LVM group. This LVM group provides our Linux system with the /tmp, /usr, /var, /opt, and /home filesystems, and a swap volume.
• LOCHNESS_DATA_LUN_1 is being used as our application disk. If our Linux server had an application running, its binary and data files would be stored on this volume. This disk has been partitioned and is also using LVM, providing the vgdata LVM group. We have created a 50 GB filesystem, which we mounted on /data (a sketch of how such a layout can be built follows this list).
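
As a minimal sketch, such an LVM layout might be created with commands like the following
(the device name /dev/sdb1 and the sizes are illustrative only, not taken from our actual
configuration):

# initialize the partition for LVM and build the volume group
pvcreate /dev/sdb1
vgcreate vgdata /dev/sdb1
# carve out a 50 GB logical volume, build a filesystem, and mount it
lvcreate -L 50G -n lvdata vgdata
mkfs.ext3 /dev/vgdata/lvdata
mount /dev/vgdata/lvdata /data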

To simulate an application, we have created 10 files and filled those files with random data.
We then calculated the sha1 checksum for each of the files so that while we move the data
between LUNs, we can recalculate the sha1 checksum and verify that we have 100% data
integrity during the moves. Example 14-1 shows the files and calculated sha1 checksums.

Note: sha1sum is a useful tool to validate the integrity of any file. It calculates a SHA1
(160 bit) checksum, which should be the same every time it calculates it on the same file.

Example 14-1 Displaying our random data


[root@lochness data]# df /data
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vgdata/lvdata 49541936 20532872 26492484 44% /data

[root@lochness data]# ls -la /data


total 20500065
drwxr-xr-x 3 root root 4096 Sep 13 16:59 .
drwxr-xr-x 21 root root 1024 Sep 13 15:53 ..
drwx------ 2 root root 16384 Sep 13 15:28 lost+found
-rw-r--r-- 1 root root 2097152000 Sep 13 16:53 random.0
-rw-r--r-- 1 root root 2097152000 Sep 13 16:52 random.1
-rw-r--r-- 1 root root 2097152000 Sep 13 16:53 random.2
-rw-r--r-- 1 root root 2097152000 Sep 13 16:53 random.3
-rw-r--r-- 1 root root 2097152000 Sep 13 16:52 random.4
-rw-r--r-- 1 root root 2097152000 Sep 13 16:53 random.5
-rw-r--r-- 1 root root 2097152000 Sep 13 16:52 random.6
-rw-r--r-- 1 root root 2097152000 Sep 13 16:52 random.7
-rw-r--r-- 1 root root 2097152000 Sep 13 16:53 random.8
-rw-r--r-- 1 root root 2097152000 Sep 13 16:53 random.9
-rw-r--r-- 1 root root 510 Sep 13 17:11 sha1sum.txt

[root@lochness data]# sha1sum random.? > sha1sum.txt


[root@lochness data]# cat sha1sum.txt
b827b294b2c1c86198f7b79be0e0d060666198c1 random.0
dc36b6f6550509f7a74a953125c62f340c04d1d7 random.1
c772dba05d387d5881647f5e85abed1ce6e08c71 random.2
c00c6ab83bd7b15113230f2c4c80e1f22c37551b random.3
21d01919a7ac7f54bc89069e91d2592829bf103c random.4
ef991bdff7c92458e59acb6642b8c1bc44ea403b random.5
2c33f20329fa2d9649eb884042cd4a5fdf4fdabd random.6
69f7f9d5b5213b87d34c12251a60624be2fc7392 random.7
85924adcd23641a8f26e29336814ebf3d50f7af4 random.8
ba812a6bc6e24c3c9984bc3ae155143cb1715b6e random.9
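
Later, after each data move, the integrity check can be repeated against the stored file;
sha1sum -c recomputes each checksum and reports OK or FAILED per file (a sketch):

[root@lochness data]# sha1sum -c sha1sum.txt
random.0: OK
random.1: OK
...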

Our Linux server represents a typical SAN environment with a host directly using LUNs
created on a SAN storage subsystem, as shown in Figure 14-44 on page 590:
• The Linux server's HBA cards are zoned so that they are in the Green zone with our storage subsystem.
• LOCHNESS_BOOT_LUN_0 and LOCHNESS_DATA_LUN_1 are two LUNs that have been defined on the storage subsystem and, using LUN masking, are directly available to our Linux server.

14.6.1 Connecting the SVC to your SAN fabric
This section covers the basic steps that you would take to introduce the SVC into your SAN
environment. While this section only summarizes these activities, you should be able to
accomplish this without any downtime to any host or application that is also using your
storage area network.

If you have an SVC already connected, then you can safely go to “Prepare your SVC to
virtualize disks” on page 593.

Be very careful connecting the SVC into your storage area network, as it will require you to
connect cables to your SAN switches, and alter your switch zone configuration. Doing these
activities incorrectly could render your SAN inoperable, so make sure you fully understand the
impact of everything you are doing.

Connecting the SVC to your SAN fabric will require you to:
• Assemble your SVC components (nodes, UPS, master console), cable it correctly, power it on, and verify that it is visible on your storage area network. This is covered in much greater detail in Chapter 3, “Planning and configuration” on page 25.
• Create and configure your SVC cluster. This is covered in greater detail in Chapter 5, “Initial installation and configuration of the SVC” on page 97 and Chapter 6, “Quickstart configuration using the CLI” on page 127.
• Create these additional zones:
  – An SVC node zone (our Black zone in Figure 14-45). This zone should contain just the ports (or WWNs) for each of the SVC nodes in your cluster. Our SVC is made up of a two-node cluster, where each node has four ports, so our Black zone has eight WWNs defined.
  – A storage zone (our Red zone). This zone should also have all the ports/WWNs from the SVC node zone, as well as the ports/WWNs for all the storage subsystems that the SVC will virtualize.
  – A host zone (our Blue zone). This zone should contain the ports/WWNs for each host that will access VDisks, together with the ports defined in the SVC node zone.

Attention: Do not put your storage subsystems in the host (Blue) zone. This is an
unsupported configuration and could lead to data loss!

Our environment has been set up as described above and can be seen in Figure 14-45.


Figure 14-45 SAN environment with SVC attached

14.6.2 Prepare your SVC to virtualize disks


This section covers the preparation tasks that we can perform before taking our Linux server
offline.

These are all non-disruptive activities and should not affect your SAN fabric, nor your existing
SVC configuration (if you already have a production SVC in place).

Create a managed disk group


When we move the two Linux LUNs to the SVC they will first be used in image mode, and as
such we need a managed disk group to hold those disks.

First we need to create an empty managed disk group for each of the disks, using the
commands in Example 14-2. Our managed disk groups will be called LIN-BOOT-MDG and
LIN-DATA-MDG, to hold our boot LUN and data LUN respectively.
Example 14-2 Create empty mdiskgroup
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name LIN-BOOT-MDG -ext 64
MDisk Group, id [8], successfully created
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name LIN-DATA-MDG -ext 64
MDisk Group, id [7], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
7 LIN-DATA-MDG online 0 0 0 64 0
8 LIN-BOOT-MDG online 0 0 0 64 0

Create your host definition
If your zone preparation (as described above) has been performed correctly, the SVC should
be able to see the Linux server’s HBA adapters on the fabric (our host only had one HBA).

First we get the WWN for our Linux server's HBA, because we have many hosts connected
to our SAN fabric and in the Blue zone, and we want to make sure we have the correct WWN
to reduce our Linux server's downtime. Example 14-3 shows the command to get the WWN;
our host has a WWN of 210000E08B18558E.

Example 14-3 Find out your WWN


[root@lochness data]# grep port /proc/scsi/qla2300/?
scsi-qla0-adapter-port=210000e08b18558e;

The svcinfo lshbaportcandidate command on the SVC lists all the WWNs that the SVC
can see on the SAN fabric and that have not yet been allocated to a host. Example 14-4
shows the output of the nodes it found on our SAN fabric. (If the port did not show up, it
would indicate that we have a zone configuration problem.)

Example 14-4 Add the host to the SVC


IBM_2145:itsosvc1:admin>svcinfo lshbaportcandidate
id
210000E08B1A5996
210100E08B3A5996
210000E08B05F3ED
210000E08B05F2ED
210000E08B18558E

After verifying that the SVC can see our host (LOCHNESS), we will create the host entry and
assign the WWN to this entry. These commands can be seen in Example 14-5.

Example 14-5 Create the host entry


IBM_2145:itsosvc1:admin>svctask mkhost -name LOCHNESS -hbawwpn 210000E08B18558E
Host id [13] successfully created
IBM_2145:itsosvc1:admin>svcinfo lshost LOCHNESS
id 13
name LOCHNESS
port_count 1
type generic
iogrp_count 4
WWPN 210000E08B18558E
node_logged_in_count 2

Verify that we can see our storage subsystem


If our zoning has been performed correctly, the SVC should also be able to see the storage
subsystem with the svcinfo lscontroller command (Example 14-6). We will also rename
the storage subsystem to something more meaningful. (If we had many storage subsystems
connected to our SAN fabric, then renaming them makes it considerably easier to identify
them.)

Example 14-6 Discover and rename the storage controller


IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 controller0 IBM 1742-900
IBM_2145:itsosvc1:admin>svctask chcontroller -name DS4000 controller0
IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900

Get the disk serial numbers


To help avoid the possibility of creating the wrong VDisks from all the available unmanaged
MDisks (in case there are many seen by the SVC), we will get the LUN serial numbers from
our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we will confirm that we have the right serial numbers before
we create the image mode VDisks.

If you are also using a DS4000 family storage subsystem, Storage Manager will provide the
LUN serial numbers. Right-click your logical drive and choose Properties. Our serial
numbers are shown in Figure 14-46.

Figure 14-46 Obtaining the disk serial numbers
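
If you prefer the command line, the DS4000 Storage Manager client also ships the SMcli
scripting utility, which can display the logical drives and their identifiers. This is only a
sketch; the array name ITSODS4000 is hypothetical, and you should verify the exact script
syntax against your Storage Manager release:

SMcli -n ITSODS4000 -c "show allLogicalDrives;"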

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as VDisks.

14.6.3 Move the LUNs to the SVC


In this step, we will move the LUNs assigned to the Linux server and reassign them to the
SVC.

Our Linux server has two LUNs: one holds our boot disk and operating system filesystems,
and the other holds our application and data files. Moving both LUNs at once requires the
host to be shut down.

If we only wanted to move the LUN that holds our application and data files, we could do
that without rebooting the host. The only requirement would be to unmount the filesystem
and vary off the volume group to ensure data integrity during the reassignment.

Before you start: Moving LUNs to the SVC requires that the SDD device driver is
installed on the Linux server. This driver can be installed ahead of time; however,
installing it might require an outage on your host.
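
As a quick pre-check, the following minimal sketch confirms from the Linux shell that SDD is
present; it assumes an RPM-based distribution, and the exact package and module names vary
by SDD release:

lsmod | grep -i sdd          # is the SDD kernel module (sdd-mod) loaded?
rpm -qa | grep -i sdd        # is an SDD package installed?
datapath query device        # list the vpath devices that SDD manages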

As we will move both LUNs at the same time, these are the required steps:
1. Confirm that the SDD device driver is installed.
2. Shut down the host.
If you were just moving the LUNs that contained the application and data, then you could
follow this procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those filesystems, with the umount MOUNT_POINT command.
c. If the filesystems are an LVM volume, then deactivate that volume group with the
vgchange -a n VOLUMEGROUP_NAME.
d. If you can, also unload your HBA driver, using rmmod DRIVER_MODULE. This will
remove the SCSI definitions from the kernel (we will reload this module and rediscover
the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks
without requiring you to unload the HBA driver, however these details are not provided
here.
3. Using Storage Manager (our storage subsystem management tool), we can
unmap/unmask the disks from the Linux server and remap/remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks
will be discovered and named as mdiskN, where N is the next available MDisk number
(starting from 0). Example 14-7 shows the commands we used to discover our MDisks
and verify that we have the correct ones.

Example 14-7 Discover the new MDisks


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 mdisk0 online unmanaged 50.0GB 0000000000000000
DS4000 600a0b80001744310000000542d658ce00000000000000000000000000000000
1 mdisk1 online unmanaged 50.0GB 0000000000000001
DS4000 600a0b80001742330000000b431d7d1900000000000000000000000000000000

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
task display) with the serial number you took earlier (in Figure 14-46 on page 595).

5. Once we have verified that we have the correct MDisks, we will rename them, to avoid
confusion in the future when we perform other MDisk related tasks (Example 14-8).

Example 14-8 Rename the MDisks


IBM_2145:itsosvc1:admin>svctask chmdisk -name LIN-BOOT-MD mdisk0
IBM_2145:itsosvc1:admin>svctask chmdisk -name LIN-DATA-MD mdisk1
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 LIN-BOOT-MD online unmanaged 50.0GB 0000000000000000
DS4000 600a0b80001744310000000542d658ce00000000000000000000000000000000
1 LIN-DATA-MD online unmanaged 50.0GB 0000000000000001
DS4000 600a0b80001742330000000b431d7d1900000000000000000000000000000000

6. We create our image mode VDisks with the svctask mkvdisk command (Example 14-9).
This command will virtualize the disks, in the exact same layout as if they were not
virtualized.

Example 14-9 Create the image mode VDisks


IBM_2145:itsosvc1:admin>svctask mkvdisk -mdiskgrp LIN-BOOT-MDG -iogrp 0 -vtype image -mdisk
LIN-BOOT-MD -name LIN-BOOT-VD
Virtual Disk, id [10], successfully created
IBM_2145:itsosvc1:admin>svctask mkvdisk -mdiskgrp LIN-DATA-MDG -iogrp 0 -vtype image -mdisk
LIN-DATA-MD -name LIN-DATA-VD
Virtual Disk, id [11], successfully created

7. Finally, we can map the new image mode VDisks to the host (Example 14-10).

Example 14-10 Map the VDisks to the host


IBM_2145:itsosvc1:admin>svctask mkvdiskhostmap -host LOCHNESS -scsi 0 LIN-BOOT-VD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:itsosvc1:admin>svctask mkvdiskhostmap -host LOCHNESS -scsi 1 LIN-DATA-VD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:itsosvc1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
13 LOCHNESS 0 10 LIN-BOOT-VD 210000E08B18558E 60050768018200C4700000000000000F
13 LOCHNESS 1 11 LIN-DATA-VD 210000E08B18558E 60050768018200C4700000000000000E

Since one of the disks is our boot disk, we need to make sure that it has SCSI ID 0, so that
the BIOS recognizes it as a bootable disk.

Tip: If one of the disks is used to boot your Linux server, then you need to make sure that
it is presented back to the host as SCSI ID 0, so that the FC adapter BIOS finds it during its
initialization.

Note: While the application is in a quiescent state, you could choose to FlashCopy the new
image VDisks onto other VDisks. You will not need to wait until the FlashCopy has
completed before starting your application.
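
As a minimal sketch of such a copy, assuming a target VDisk of the same size has already
been created (the names LIN-DATA-VD-COPY and LIN-DATA-FCMAP are hypothetical):

svctask mkfcmap -source LIN-DATA-VD -target LIN-DATA-VD-COPY -name LIN-DATA-FCMAP
svctask prestartfcmap LIN-DATA-FCMAP
svctask startfcmap LIN-DATA-FCMAP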

We are now ready to restart the Linux server.

If you only moved the application LUN to the SVC and left your Linux server running, then you
would need to follow these steps to see the new VDisk:
1. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (or
cannot) unload your HBA driver, you can instead issue commands to the kernel to rescan the
SCSI bus to see the new VDisks (both approaches are sketched after this list).
2. Check your syslog and verify that the kernel found the new VDisks. On Red Hat Enterprise
Linux, the syslog is stored in /var/log/messages.
3. If your application and data is on an LVM volume, run vgscan to rediscover the volume
group, then run the vgchange -a y VOLUME_GROUP to activate the volume group.
4. Mount your filesystems with the mount /MOUNT_POINT command.
5. You should be ready to start your application.
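
Put together, the rediscovery might look like the following sketch. It assumes the qla2300
driver used earlier, a 2.4 kernel (hence the /proc/scsi interface), and a hypothetical
volume group named datavg with an /etc/fstab entry for /data:

modprobe qla2300                 # reload the HBA driver; its targets are rediscovered
# Alternatively, with the driver still loaded, tell the SCSI layer to add a device
# (the four numbers are host, channel, id, and lun):
# echo "scsi add-single-device 0 0 1 0" > /proc/scsi/scsi
grep Vendor /proc/scsi/scsi      # confirm the kernel now sees the new disks
vgscan                           # rediscover the LVM volume groups
vgchange -a y datavg             # activate the volume group
mount /data                      # remount the filesystem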

To verify that we did not lose any data or compromise the integrity during this process, we
re-ran the sha1 checksum calculations on all our files in the /data directory. Example 14-11
verifies that everything is intact.

Example 14-11 Verify the sha1 checksums


[root@lochness data]# cd /data
[root@lochness data]# sha1sum -c sha1sum.txt
random.0: OK
random.1: OK
random.2: OK
random.3: OK
random.4: OK
random.5: OK
random.6: OK
random.7: OK
random.8: OK
random.9: OK
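
For reference, the checksum file itself was created in /data before the migration began,
with a command along these lines:

sha1sum random.* > sha1sum.txt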

14.6.4 Migrate the image mode VDisks to managed MDisks


While the Linux server is still running, and our filesystems are in use, we will now migrate the
image mode VDisks onto striped VDisks, with the extents being spread over three other
MDisks.

Preparing MDisks for striped mode VDisks


From our storage subsystem, we have:
򐂰 Created and allocated three LUNs to the SVC.
򐂰 Discovered them as MDisks.
򐂰 Renamed these LUNs to something more meaningful.
򐂰 Created a new MDisk group.
򐂰 Finally, put all these MDisks into this group.

You can see the output of our commands in Example 14-12.

Example 14-12 Create a new MDisk group


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name IBMOEM-LIN-MDG -ext 64
MDisk Group, id [5], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 LIN-BOOT-MD online image 8 LIN-BOOT-MDG 50.0GB
0000000000000000 DS4000
600a0b80001744310000000542d658ce00000000000000000000000000000000
1 LIN-DATA-MD online image 7 LIN-DATA-MDG 50.0GB
0000000000000001 DS4000
600a0b80001742330000000b431d7d1900000000000000000000000000000000
2 mdisk2 online managed 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 mdisk3 online managed 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 mdisk4 online managed 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-LIN-MD1 mdisk2
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-LIN-MD2 mdisk3
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-LIN-MD3 mdisk4
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-LIN-MD1 IBMOEM-LIN-MDG
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-LIN-MD2 IBMOEM-LIN-MDG
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-LIN-MD3 IBMOEM-LIN-MDG
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 LIN-BOOT-MD online image 8 LIN-BOOT-MDG 50.0GB
0000000000000000 DS4000
600a0b80001744310000000542d658ce00000000000000000000000000000000
1 LIN-DATA-MD online image 7 LIN-DATA-MDG 50.0GB
0000000000000001 DS4000
600a0b80001742330000000b431d7d1900000000000000000000000000000000
2 IBMOEM-LIN-MD1 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-LIN-MD2 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-LIN-MD3 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

Migrate the VDisks


We are now ready to migrate the image mode VDisks onto striped VDisks in the
IBMOEM-LIN-MDG, with the svctask migratevdisk command (Example 14-13).

While the migration is running, our Linux server is still running, and we can continue to run the
sha1 checksum calculations on all our random files in the /data directory.

To check the overall progress of the migration, we use the svcinfo lsmigrate command
as seen in Example 14-13. Listing the MDisk groups with the svcinfo lsmdiskgrp command
shows that the free capacity on the old MDisk groups is slowly increasing as those extents
are moved to the new MDisk group.

Example 14-13 Migrating image mode VDisks to striped VDisks


IBM_2145:itsosvc1:admin>svctask migratevdisk -vdisk LIN-BOOT-VD -mdiskgrp IBMOEM-LIN-MDG
IBM_2145:itsosvc1:admin>svctask migratevdisk -vdisk LIN-DATA-VD -mdiskgrp IBMOEM-LIN-MDG
IBM_2145:itsosvc1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 2
migrate_source_vdisk_index 11
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_type MDisk_Group_Migration
progress 9
migrate_source_vdisk_index 10
migrate_target_mdisk_grp 5
max_thread_count 4
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-LIN-MDG online 3 0 149.1GB 64 49.1GB
7 LIN-DATA-MDG online 1 1 50.0GB 64 2.3GB
8 LIN-BOOT-MDG online 1 1 50.0GB 64 6.9GB
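
Because a migration of this size can take some time, you might prefer to poll the progress
from a management workstation instead of retyping the command. This is only a sketch, and it
assumes SSH key access as the admin user to the cluster (itsosvc1 in our case):

while true; do
  ssh admin@itsosvc1 svcinfo lsmigrate
  sleep 60
done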

Once this task has completed, Example 14-14 shows that the VDisks are now spread over
three MDisks.

Example 14-14 Migration complete


IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-LIN-MDG online 3 2 149.1GB 64 49.1GB
7 LIN-DATA-MDG online 1 0 50.0GB 64 50.0GB
8 LIN-BOOT-MDG online 1 0 50.0GB 64 50.0GB
IBM_2145:itsosvc1:admin>svcinfo lsvdiskmember LIN-BOOT-VD
id
2
3
4
IBM_2145:itsosvc1:admin>svcinfo lsvdiskmember LIN-DATA-VD
id
2
3
4

Our migration to the SVC is now complete. The original MDisks (LIN-BOOT-MD and
LIN-DATA-MD) can now be removed from the SVC, and these LUNs removed from the storage
subsystem.

If these LUNs were the last used LUNs on our storage subsystem, then we could remove it
from our SAN fabric.

14.6.5 Preparing to migrate from the SVC


Before we move the Linux server's LUNs from being accessed by the SVC as virtual disks to
being accessed directly from the storage subsystem, we need to convert the VDisks into
image mode VDisks.

You might want to perform this activity for any one of these reasons:
򐂰 You purchased a new storage subsystem, and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
򐂰 You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no
longer need that host connected to the SVC.
򐂰 You want to ship a host and its data that is currently connected to the SVC to a site where
there is no SVC.
򐂰 Changes to your environment no longer require this host to use the SVC.

There are also some other preparation activities that we can do before we need to shut down
the host, and reconfigure the LUN masking/mapping. This section covers those activities.

If you are moving the data to a new storage subsystem, it is assumed that storage subsystem
is connected to your SAN fabric, powered on, and visible from your SAN switches. Your
environment should look similar to ours as shown in Figure 14-47.

Figure 14-47 Environment with SVC (zoning for the migration scenarios: the Linux host, SVC I/O group 0, and the IBM or OEM storage subsystems in the Green, Red, Blue, and Black zones)

Make fabric zone changes


The first step is to set up the SAN configuration so that all the zones are created. The new
storage subsystem should be added to the Red zone so that the SVC can talk to it directly.

We will also need a Green zone for our host to use when we are ready for it to directly access
the disk, after it has been removed from the SVC.

It is assumed that you have created the necessary zones.
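
For illustration only, on a Brocade-based fabric the Green zone could be built with
commands like the following. The zone and configuration names are hypothetical, the second
WWPN (a port on the new storage subsystem) is made up, and the syntax differs on other
switch vendors:

zonecreate "GreenZone", "21:00:00:e0:8b:18:55:8e; 50:05:07:63:00:c8:94:51"
cfgadd "ITSO_CFG", "GreenZone"
cfgsave
cfgenable "ITSO_CFG"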

Once your zone configuration is set up correctly, the SVC should see the new storage
subsystem's controller using the svcinfo lscontroller command, as shown in
Example 14-15. It is also a good idea to rename it to something more meaningful.

Example 14-15 Discovering the new storage subsystem


IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900
1 controller1 IBM 2107-921
IBM_2145:itsosvc1:admin>svctask chcontroller -name DS8000 controller1
IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900
1 DS8000 IBM 2107-921

Create new LUNs
On our storage subsystem we created two LUNs, and masked the LUNs so that the SVC can
see them. These two LUNs will eventually be given directly to the host, removing the VDisks
that it currently has. To check that the SVC can use them, issue the svctask detectmdisk
command as shown in Example 14-16.

Example 14-16 Discover the new MDisks


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 mdisk0 online unmanaged 50.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 mdisk1 online unmanaged 50.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-LIN-MD1 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-LIN-MD2 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-LIN-MD3 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename
them to something more meaningful, just so that they do not get confused with other MDisks
being used by other activities. Also, we will create the MDisk groups to hold our new MDisks.
This is shown in Example 14-17.

Example 14-17 Rename the MDisks


IBM_2145:itsosvc1:admin>svctask chmdisk -name LIN-BOOT-MD mdisk0
IBM_2145:itsosvc1:admin>svctask chmdisk -name LIN-DATA-MD mdisk1
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name LIN-BOOT-MDG -ext 64
MDisk Group, id [8], successfully created
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name LIN-DATA-MDG -ext 64
MDisk Group, id [7], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-LIN-MDG online 3 2 149.1GB 64 49.1GB
7 LIN-DATA-MDG online 0 0 0 64 0
8 LIN-BOOT-MDG online 0 0 0 64 0

Our SVC environment is now ready for the VDisk migration to image mode VDisks.

14.6.6 Migrate the VDisks to image mode VDisks
While our Linux server is still running, we will migrate the managed VDisks onto the new
MDisks using image mode VDisks. The command to perform this action is svctask
migratetoimage and is shown in Example 14-18.

Example 14-18 Migrate the VDisks to image mode VDisks


IBM_2145:itsosvc1:admin>svctask migratetoimage -vdisk LIN-BOOT-VD -mdisk LIN-BOOT-MD
-mdiskgrp LIN-BOOT-MDG
IBM_2145:itsosvc1:admin>svctask migratetoimage -vdisk LIN-DATA-VD -mdisk LIN-DATA-MD
-mdiskgrp LIN-DATA-MDG
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 LIN-BOOT-MD online image 8 LIN-BOOT-MDG 50.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 LIN-DATA-MD online image 7 LIN-DATA-MDG 50.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-LIN-MD1 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-LIN-MD2 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-LIN-MD3 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000
IBM_2145:itsosvc1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 9
migrate_source_vdisk_index 10
migrate_target_mdisk_index 0
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_type Migrate_to_Image
progress 15
migrate_source_vdisk_index 11
migrate_target_mdisk_index 1
migrate_target_mdisk_grp 7
max_thread_count 4

During the migration our Linux server will not be aware that its data is being physically moved
between storage subsystems. We can continue to run the sha1 checksum calculations on our
files to verify that we still have 100% data integrity during the move.

Once the migration has completed, the image mode VDisks will be ready to be removed from
the Linux server, and the real LUNs can be mapped/masked directly to the host using the
storage subsystems tool.

14.6.7 Remove the LUNs from the SVC
The next step requires downtime on the Linux server, as we will remap/remask the disks so
that the host sees them directly through the Green zone.

Our Linux server has two LUNs: one holds our boot disk and operating system filesystems,
and the other holds our application and data files. Moving both LUNs at once requires the
host to be shut down.

If we only wanted to move the LUN that holds our application and data files, we could do
that without rebooting the host. The only requirement would be to unmount the filesystem
and vary off the volume group to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might require a different
driver than SDD. Check with the storage subsystem's vendor to see which driver you will
need. You might be able to install this driver ahead of time.

As we will move both LUNs at the same time, here are the required steps:
1. Confirm that the correct device driver for the new storage subsystem is loaded. As we are
moving to a DS8000, we can continue to use the SDD device driver.
2. Shut down the host.
If you were just moving the LUNs that contained the application and data, then you could
follow this procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those filesystems with the umount MOUNT_POINT command.
c. If the filesystems are an LVM volume, then deactivate that volume group with the
vgchange -a n VOLUMEGROUP_NAME.
d. If you can, also unload your HBA driver, using rmmod DRIVER_MODULE. This will
remove the SCSI definitions from the kernel (we will reload this module and rediscover
the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks
without requiring you to unload the HBA driver, however these details are not provided
here.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command
(Example 14-19). To double check that you have removed the VDisks, use the svcinfo
lshostvdiskmap command, which should show that these disks are no longer mapped to
the Linux server.

Example 14-19 Remove the VDisks from the host


IBM_2145:itsosvc1:admin>svctask rmvdiskhostmap -host LOCHNESS LIN-BOOT-VD
IBM_2145:itsosvc1:admin>svctask rmvdiskhostmap -host LOCHNESS LIN-DATA-VD
IBM_2145:itsosvc1:admin>svcinfo lshostvdiskmap LOCHNESS
IBM_2145:itsosvc1:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This will make
them unmanaged as seen in Example 14-20.

Note: When you run the svctask rmvdisk command, the SVC will first double check
that there is no outstanding dirty cache data for the VDisk being removed. If there is still
uncommitted cached data, then the command will fail with the error message:

CMMVC6212E The command failed because data in the cache has not been committed
to disk

You will have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the VDisk.

The SVC automatically destages uncommitted cached data two minutes after the
last write activity for the VDisk. How long this command takes to complete depends on
how much data there is to destage and how busy the I/O subsystem is.

You can check if the VDisk has uncommitted data in the cache by using the command
svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This
attribute has the following meanings:
empty No modified data exists in the cache
not_empty Some modified data might exist in the cache
corrupt Some modified data might have existed in the cache, but any
such data has been lost
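
For example, from a Unix-like management workstation you can check just that attribute
(the grep runs locally, outside the SVC restricted shell):

ssh admin@itsosvc1 svcinfo lsvdisk LIN-DATA-VD | grep fast_write_state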

Example 14-20 Remove the VDisks from the SVC


IBM_2145:itsosvc1:admin>svctask rmvdisk LIN-BOOT-VD
IBM_2145:itsosvc1:admin>svctask rmvdisk LIN-DATA-VD
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 LIN-BOOT-MD online unmanaged 50.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 LIN-DATA-MD online unmanaged 50.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-LIN-MD1 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-LIN-MD2 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-LIN-MD3 online managed 5 IBMOEM-LIN-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

5. Using Storage Manager (our storage subsystem management tool), unmap/unmask the
disks from the SVC back to the Linux server.

Since one of the disks is our boot disk, we need to make sure that it has SCSI ID 0, so that
the BIOS recognizes it as a bootable disk.

Tip: If one of the disks is used to boot your Linux server, then you need to make sure that
it is presented back to the host as SCSI ID 0, so that the FC adapter BIOS finds it during its
initialization.

Important: This is the last step that you can perform and still safely back out everything
you have done so far.

Up to this point you can reverse all the actions that you have performed so far to get the
server back online without data loss, that is:
򐂰 Remap/remask the LUNs back to the SVC.
򐂰 Run the svctask detectmdisk to rediscover the MDisks.
򐂰 Recreate the VDisks with svctask mkvdisk.
򐂰 Remap the VDisks back to the server with svctask mkvdiskhostmap.

Once you start the next step, you might not be able to turn back without the risk of data
loss.
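
As a sketch, backing out our two Linux LUNs (after remapping/remasking them back to the
SVC) reuses the commands shown earlier in this section:

svctask detectmdisk
svctask mkvdisk -mdiskgrp LIN-BOOT-MDG -iogrp 0 -vtype image -mdisk LIN-BOOT-MD -name LIN-BOOT-VD
svctask mkvdisk -mdiskgrp LIN-DATA-MDG -iogrp 0 -vtype image -mdisk LIN-DATA-MD -name LIN-DATA-VD
svctask mkvdiskhostmap -host LOCHNESS -scsi 0 LIN-BOOT-VD
svctask mkvdiskhostmap -host LOCHNESS -scsi 1 LIN-DATA-VD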

We are now ready to restart the Linux server. If all the zoning and LUN masking/mapping was
done correctly, our Linux server should boot as if nothing had happened.

If you only moved the application LUN away from the SVC and left your Linux server running,
then you need to follow these steps to see the new disk:
1. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (or
could not) unload your HBA driver, you can instead issue commands to the kernel to rescan
the SCSI bus to see the new disks, as sketched in 14.6.3, "Move the LUNs to the SVC".
2. Check your syslog and verify that the kernel found the new disks. On Red Hat Enterprise
Linux, the syslog is stored in /var/log/messages.
3. If your application and data is on an LVM volume, run vgscan to rediscover the volume
group, then run vgchange -a y VOLUME_GROUP to activate the volume group.
4. Mount your filesystems with the mount /MOUNT_POINT command.
5. You should be ready to start your application.

To verify that we did not lose any data or compromise the integrity during this process, we
re-ran the sha1 checksum calculations on all our files in the /data directory. Example 14-21
verifies that everything is intact.

Example 14-21 Verify the sha1 checksums


[root@lochness data]# cd /data
[root@lochness data]# sha1sum -c sha1sum.txt
random.0: OK
random.1: OK
random.2: OK
random.3: OK
random.4: OK
random.5: OK
random.6: OK
random.7: OK
random.8: OK
random.9: OK

Finally, to make sure that the MDisks are removed from the SVC, run the svctask
detectmdisk command. The MDisks are first discovered as offline, and are then
automatically removed once the SVC determines that there are no VDisks associated with
these MDisks.
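
For example:

svctask detectmdisk
svcinfo lsmdisk             # LIN-BOOT-MD and LIN-DATA-MD should no longer be listed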

14.7 Migrating ESX SAN disks to SVC disks
In this section we move the two LUNs of our VMware ESX server, which currently boots
directly off our DS4000 storage subsystem, to the SVC.

We will then manage those LUNs with the SVC, move them between other managed disks,
and then finally move them back to image mode disks, so that those LUNs can then be
masked/mapped back to the VMware ESX server directly.

This example should help you perform any one of the following activities in your environment:
򐂰 Move your ESX server’s data LUNs (that are your VMware vmfs filesystems where you
might have your virtual machines stored), which are directly accessed from a storage
subsystem, to virtualized disks under the control of the SVC.
򐂰 Move your ESX server’s boot disk, which is directly accessed from your storage
subsystem, to virtualized disks under the control of the SVC.
򐂰 Move LUNs between storage subsystems while your VMware virtual machines are still
running. You might perform this activity if you are removing a storage subsystem from
your SAN environment, or if you want to move the data onto LUNs that are more appropriate
for the type of data stored on them, taking into account availability, performance, and
redundancy. This step is covered in “Migrate the image mode VDisks” on page 618.
򐂰 Move your VMware ESX server’s LUNs back to image mode VDisks so that they can be
remapped/remasked directly back to the server. This step starts in “Preparing to migrate
from the SVC” on page 620.

These activities can be used individually or together, enabling you to migrate your
VMware ESX server's LUNs from one storage subsystem to another, using the SVC as your
migration tool. Using only some of them enables you to introduce the SVC into, or remove it
from, your environment.

The only downtime required for these activities will be the time it takes you to remask/remap
the LUNs between the storage subsystems and your SVC.

In Figure 14-48 we show our environment.

Figure 14-48 ESX server SAN environment

Figure 14-48 shows our ESX server connected to the SAN infrastructure. It has two LUNs
that are masked directly to it from our storage subsystem:
򐂰 LOCHNESS_BOOT_LUN_0 - has the VMware ESX server operating system and this LUN is
used to boot the system directly from the storage subsystem.

Note: To successfully boot a host off of the SAN, it needs to have LUN ID 0.

򐂰 LOCHNESS_DATA_LUN_1 - holds a vmfs filesystem containing our virtual machine disks. We
have created a Red Hat Enterprise Linux 3.0 server virtual machine and a Windows
2000 server virtual machine, which will remain running throughout this exercise.

Our ESX server represents a typical SAN environment with a host directly using LUNs
created on a SAN storage subsystem. As shown in the diagram in Figure 14-48:
򐂰 The ESX Server’s HBA cards are zoned so that they are in the Green zone with our
storage subsystem.
򐂰 LOCHNESS_BOOT_LUN_0 and LOCHNESS_DATA_LUN_1 are two LUNs that have been defined on
the storage subsystem, and using LUN masking are directly available to our ESX server.

14.7.1 Connecting the SVC to your SAN fabric
This section covers the basic steps to take to introduce the SVC into your SAN environment.
While this section only summarizes these activities, you should be able to accomplish this
without any downtime to any host or application that is also using your storage area network.

If you have an SVC already connected, then you can safely jump to the instructions given in
“Prepare your SVC to virtualize disks” on page 610.

Be very careful connecting the SVC into your storage area network, as it will require you to
connect cables to your SAN switches, and alter your switch zone configuration. Doing these
activities incorrectly could render your SAN inoperable, so make sure you fully understand the
impact of everything you are doing.

Connecting the SVC to your SAN fabric will require you to:
򐂰 Assemble your SVC components (nodes, UPS, master console), cable it correctly, power it
on, and verify that it is visible on your storage area network.
򐂰 Create and configure your SVC cluster.
򐂰 Create these additional zones:
– An SVC node zone (our Black zone in Figure 14-49). This zone should contain only
the ports (or WWNs) of each of the SVC nodes in your cluster. Our SVC is a two-node
cluster where each node has four ports, so our Black zone has eight WWNs defined.
– A storage zone (our Red zone). This zone should also have all the ports/WWN from the
SVC node zone as well as the ports/WWN for all the storage subsystems that SVC will
virtualize.
– A host zone (our Blue zone). This zone should contain the ports/WWNs for each host
that will access VDisks, together with the ports defined in the SVC node zone.

Attention: Do not put your storage subsystems in the host (Blue) zone. This is an
unsupported configuration and could lead to data loss!

Our environment has been set up as described above and can be seen in Figure 14-49.

Figure 14-49 SAN environment with SVC attached

14.7.2 Prepare your SVC to virtualize disks


This section covers the preparatory tasks that we will perform before taking our ESX server or
virtual machines offline.

These are all non-disruptive activities, and should not affect your SAN fabric, nor your existing
SVC configuration (if you already have a production SVC in place).

Create a managed disk group


When we move the two ESX LUNs to the SVC, they will first be used in image mode, and as
such we need a managed disk group to hold those disks.

First we need to create an empty managed disk group for each of the disks, using the
commands in Example 14-22. Our managed disk groups will be called ESX-BOOT-MDG and
ESX-DATA-MDG, holding our boot LUN and data LUN, respectively.
Example 14-22 Create empty mdiskgroup
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name ESX-BOOT-MDG -ext 64
MDisk Group, id [8], successfully created
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name ESX-DATA-MDG -ext 64
MDisk Group, id [7], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
7 ESX-DATA-MDG online 0 0 0 64 0
8 ESX-BOOT-MDG online 0 0 0 64 0

Create our host definition
If your zone preparation above has been performed correctly, the SVC should be able to see
the ESX server’s HBA adapters on the fabric (our host only had one HBA).

First we will get the WWN of our ESX server's HBA, as we have many hosts connected to our
SAN fabric and in the Blue zone, and making sure we have the correct WWN reduces our ESX
server's downtime.

Log into your VMware management console as root and navigate to Options and then select
Storage Management. A new browser window will open and choose the Adapter Bindings
tab. Figure 14-50 shows our WWN which is 210000E08B18558E.

Figure 14-50 Obtain your WWN using the VMware Management Console

The svcinfo lshbaportcandidate command on the SVC lists all the WWNs that the SVC
can see on the SAN fabric that have not yet been allocated to a host. Example 14-23 shows
the WWNs it found on our SAN fabric. (If our host's port did not show up, it would
indicate a zone configuration problem.)

Example 14-23 Add the host to the SVC


IBM_2145:itsosvc1:admin>svcinfo lshbaportcandidate
id
210000E08B1A5996
210100E08B3A5996
210000E08B05F3ED
210000E08B05F2ED
210000E08B18558E

After verifying that the SVC can see our host (LOCHNESS), we will create the host entry and
assign the WWN to this entry. These commands can be seen in Example 14-24.

Example 14-24 Create the host entry


IBM_2145:itsosvc1:admin>svctask mkhost -name LOCHNESS -hbawwpn 210000E08B18558E
Host id [13] successfully created
IBM_2145:itsosvc1:admin>svcinfo lshost LOCHNESS
id 13
name LOCHNESS
port_count 1
type generic
iogrp_count 4
WWPN 210000E08B18558E
node_logged_in_count 2

Verify that you can see your storage subsystem
If our zoning has been performed correctly, the SVC should also be able to see the storage
subsystem with the svcinfo lscontroller command (Example 14-25). We will also rename
the storage subsystem to something more meaningful. (If we had many storage subsystems
connected to our SAN fabric, then renaming them makes it considerably easier to identify
them.)

Example 14-25 Discover and rename the storage controller


IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 controller0 IBM 1742-900
IBM_2145:itsosvc1:admin>svctask chcontroller -name DS4000 controller0
IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900

Get your disk serial numbers


To help avoid the possibility of creating the wrong VDisks from all the available unmanaged
MDisks (in case there are many seen by the SVC), we will get the LUN serial numbers from
our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we will confirm that we have the right serial numbers before
we create the image mode VDisks.

If you are also using a DS4000 family storage subsystem, Storage Manager will provide the
LUN serial numbers. Right-click your logical drive and choose Properties. Our serial
numbers are shown in Figure 14-51.

Figure 14-51 Obtaining the disk serial numbers

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as VDisks.

14.7.3 Move the LUNs to the SVC
In this step we will move the LUNs assigned to the ESX server and reassign them to the SVC.

Our ESX server has two LUNs. On LOCHNESS_BOOT_LUN_0 we have just the ESX server
software (that is, no VMware guests), so in order to move this LUN under the control of the
SVC, we have to shut down the ESX server. (This means that all VMware guests need to be
stopped or suspended first.)

On our LOCHNESS_DATA_LUN_1 we have virtual machines, so in order to move this LUN under
the control of the SVC, we only have to stop or suspend the VMware guests that are using
this LUN.

We will perform this action in two phases, first moving the LOCHNESS_DATA_LUN_1, and then the
LOCHNESS_BOOT_LUN_0, just to demonstrate that we do not need to reboot the ESX server
when we are just moving LUNs used by guests. You could move both at once.

Move VMware guest LUNs


We first move our LOCHNESS_DATA_LUN_1 using these steps:
1. Identify how ESX has labelled this LUN.
Using our storage subsystem tool (Storage Manager), we have identified that this LUN is
given to the ESX Server as SCSI LUN ID 1.
Using the VMware ESX management console and navigating to Options, Storage
Management, we then identify which LUN is SCSI ID 1. Figure 14-52 shows the only disk
with LUN ID 1 is vmhba0:4:1:1 which has a VMware label of VMWARE-GUESTS.

Tip: VMware ESX sees disks using the following syntax:

vmhbaA:B:C:D

Where:
A Adapter (SCSI channel) number; the first adapter discovered is assigned 0, the second
adapter 1, and so on
B SCSI target number (you can look at the Adapter Bindings tab to see the WWN of the
controllers at this target)
C SCSI LUN ID
D Partition number, where the LUN has been partitioned into multiple smaller partitions

For example, vmhba0:4:1:1 above is adapter 0, target 4, LUN 1, partition 1.

Figure 14-52 VMware ESX Disks and LUNs

2. Next, identify all the VMware guests that are using this LUN (with the label
VMWARE-GUESTS). One way to identify them is to look at their Hardware properties in the
VMware management interface. Figure 14-53 shows that our Linux virtual machine is
using this LUN.

Figure 14-53 ESX Guest properties

3. As both our Windows and Linux virtual machines have their virtual disks on this LUN, we
will suspend them (Figure 14-54).

Figure 14-54 Suspend VMware guest

4. Once the guests are suspended, we will use Storage Manager (our storage subsystem
management tool) to unmap/unmask the disks from the ESX server and remap/remask
the disks to the SVC.
5. From the SVC, discover the new disks with the svctask detectmdisk command. The disks
will be discovered and named as mdiskN, where N is the next available MDisk number
(starting from 0). Example 14-26 shows the commands we used to discover our MDisks
and verify that we have the correct ones.

Example 14-26 Discover the new MDisks


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 mdisk0 online unmanaged 20.0GB 0000000000000000
DS4000 600a0b80001744310000000542d658ce00000000000000000000000000000000

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
task display) with the serial number you obtained earlier (in Figure 14-51 on page 612).

6. Once we have verified that we have the correct MDisks, we will rename them to avoid
confusion in the future when we perform other MDisk related tasks (Example 14-27).

Example 14-27 Rename the MDisks


IBM_2145:itsosvc1:admin>svctask chmdisk -name ESX-DATA-MD mdisk0
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 ESX-DATA-MD online unmanaged 20.0GB 0000000000000000
DS4000 600a0b80001744310000000542d658ce00000000000000000000000000000000

7. We create our image mode VDisks with the svctask mkvdisk command (Example 14-28).
This command will virtualize the disks in the exact same layout as if they were not
virtualized.

Example 14-28 Create the image mode VDisks
IBM_2145:itsosvc1:admin>svctask mkvdisk -mdiskgrp ESX-DATA-MDG -iogrp 0 -vtype image -mdisk
ESX-DATA-MD -name ESX-DATA-VD
Virtual Disk, id [11], successfully created

8. Finally, we can map the new image mode VDisks to the host (Example 14-29).

Example 14-29 Map the VDisks to the host


IBM_2145:itsosvc1:admin>svctask mkvdiskhostmap -host LOCHNESS -scsi 1 ESX-DATA-VD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:itsosvc1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
13 LOCHNESS 1 11 ESX-DATA-VD 210000E08B18558E 60050768018200C4700000000000000F

9. Now, using the VMware management console, rescan to discover the new VDisk
(Figure 14-55).

Figure 14-55 Rescan your SAN and discover the LUNs

During the rescan you might receive geometry errors as ESX discovers that the old disk
has disappeared. Your VDisk will appear with a new vmhba address, and VMware will
recognize it as our VMWARE-GUESTS disk. (A command-line alternative to the rescan is
sketched after this list.)
10. We are now ready to restart the VMware guests. If you want to move the ESX server's
boot disk at the same time, continue with the next steps.
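
As a command-line alternative to the rescan in step 9, ESX can also rescan an adapter from
the service console. Treat the utility and flag below as an assumption to verify against
your ESX release's documentation, as they vary between versions:

vmkfstools -s vmhba0        # rescan adapter vmhba0 for new LUNs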

Move VMware boot LUNs


Moving the VMware ESX server's boot LUNs to the SVC requires the system to be shut
down while we reassign the LUNs.

Note: We found that while this process worked, the VMware ESX server had not activated
its swap file when it was restarted.

We believe the reason is that the swap file was created on VMware device vmhba0:4:0:6;
when the VMware server was restarted (with the LUN now under the control of the SVC),
the swap file was at location vmhba0:1:0:6.

One method that could eliminate this problem is to label the vmfs filesystem on
your boot LUN and recreate the swap file, selecting this labeled filesystem to store it.

We followed this procedure:


1. Suspend all the VMware guests; you can follow the same procedure in Figure 14-54 on
page 615.
2. Shut down our ESX server.
3. Using Storage Manager (our storage subsystem management tool), we can
unmap/unmask the disks from the ESX server and remap/remask the disks to the SVC.
4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks
will be discovered and named as mdiskN, where N is the next available MDisk number
(starting from 0). Example 14-30 shows the commands we used to discover our MDisks,
and verify that we have the correct ones.

Example 14-30 Discover the new MDisks


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 ESX-DATA-MD online image 7 ESX-DATA-MDG 20.0GB 0000000000000000
DS4000 600a0b80001744310000000542d658ce00000000000000000000000000000000
1 mdisk1 online unmanaged 20.0GB 0000000000000001
DS4000 600a0b80001742330000000b431d7d1900000000000000000000000000000000

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
task display) with the serial number you obtained earlier (in Figure 14-51 on page 612).

5. Once we verified that we have the correct MDisks, we will rename them, to avoid
confusion in the future when we perform other MDisk related tasks (Example 14-31).

Example 14-31 Rename the MDisks


IBM_2145:itsosvc1:admin>svctask chmdisk -name ESX-BOOT-MD mdisk1
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 ESX-DATA-MD online image 7 ESX-DATA-MDG 20.0GB 0000000000000000
DS4000 600a0b80001744310000000542d658ce00000000000000000000000000000000
1 ESX-BOOT-MD online unmanaged 20.0GB 0000000000000001
DS4000 600a0b80001742330000000b431d7d1900000000000000000000000000000000

6. We create our image mode VDisks with the svctask mkvdisk command (Example 14-32).
This command will virtualize the disks, in the exact same layout as if they were not
virtualized.

Example 14-32 Create the image mode VDisks
IBM_2145:itsosvc1:admin>svctask mkvdisk -mdiskgrp ESX-BOOT-MDG -iogrp 0 -vtype image -mdisk
ESX-BOOT-MD -name ESX-BOOT-VD
Virtual Disk, id [10], successfully created

7. Finally, we can map the new image mode VDisks to the host (Example 14-33).

Example 14-33 Map the VDisks to the host


IBM_2145:itsosvc1:admin>svctask mkvdiskhostmap -host LOCHNESS -scsi 0 ESX-BOOT-VD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:itsosvc1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
13 LOCHNESS 0 10 ESX-BOOT-VD 210000E08B18558E 60050768018200C4700000000000000E
13 LOCHNESS 1 11 ESX-DATA-VD 210000E08B18558E 60050768018200C4700000000000000F

Tip: Since ESX-BOOT-VD will be used to boot our ESX server, we must make sure that it
is presented back to the host as SCSI ID 0, so that the FC adapter BIOS finds it during its
initialization.

Note: While the ESX server’s boot LUNs are in a quiescent state, at this point you could
choose to FlashCopy the new image VDisk onto other VDisks. You will not need to wait
until the FlashCopy has completed before starting your application.

The ESX boot LUN migration to the SVC is now complete, and you can now restart your ESX
server, and resume your VMware guests.

14.7.4 Migrate the image mode VDisks


While the VMware server and its virtual machines are still running, we will now migrate the
image mode VDisks onto striped VDisks, with the extents being spread over three other
MDisks.

Preparing MDisks for striped mode VDisks


From our storage subsystem, we have:
򐂰 Created and allocated three LUNs to the SVC.
򐂰 Discovered them as MDisks.
򐂰 Renamed these LUNs to something more meaningful.
򐂰 Created a new MDisk group.
򐂰 Finally, put all these MDisks into this group.

You can see the output of our commands in Example 14-34.

Example 14-34 Create a new MDisk group


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name IBMOEM-ESX-MDG -ext 64
MDisk Group, id [5], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 ESX-DATA-MD online image 7 ESX-DATA-MDG 20.0GB
0000000000000000 DS4000
600a0b80001744310000000542d658ce00000000000000000000000000000000
1 ESX-BOOT-MD online image 8 ESX-BOOT-MDG 20.0GB
0000000000000001 DS4000
600a0b80001742330000000b431d7d1900000000000000000000000000000000
2 mdisk2 online managed 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 mdisk3 online managed 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 mdisk4 online managed 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-ESX-MD1 mdisk2
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-ESX-MD2 mdisk3
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-ESX-MD3 mdisk4
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-ESX-MD1 IBMOEM-ESX-MDG
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-ESX-MD2 IBMOEM-ESX-MDG
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-ESX-MD3 IBMOEM-ESX-MDG
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 ESX-DATA-MD online image 7 ESX-DATA-MDG 20.0GB
0000000000000000 DS4000
600a0b80001744310000000542d658ce00000000000000000000000000000000
1 ESX-BOOT-MD online image 8 ESX-BOOT-MDG 20.0GB
0000000000000001 DS4000
600a0b80001742330000000b431d7d1900000000000000000000000000000000
2 IBMOEM-ESX-MD1 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-ESX-MD2 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-ESX-MD3 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

Migrate the VDisks


We are now ready to migrate the image mode VDisks onto striped VDisks in the
IBMOEM-ESX-MDG, with the svctask migratevdisk command (Example 14-35).

While the migration is running, our VMware ESX server will remain running, as will our
VMware guests.

To check the overall progress of the migration we will use the svcinfo lsmigrate command
as shown in Example 14-35. Listing the MDisk group with the svcinfo lsmdiskgrp command
shows that the free capacity on the old MDisk group is slowly increasing as those extents are
moved to the new MDisk group.

Example 14-35 Migrating image mode VDisks to striped VDisks


IBM_2145:itsosvc1:admin>svctask migratevdisk -vdisk ESX-BOOT-VD -mdiskgrp IBMOEM-ESX-MDG
IBM_2145:itsosvc1:admin>svctask migratevdisk -vdisk ESX-DATA-VD -mdiskgrp IBMOEM-ESX-MDG
IBM_2145:itsosvc1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 4
migrate_source_vdisk_index 10
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_type MDisk_Group_Migration
progress 3
migrate_source_vdisk_index 11
migrate_target_mdisk_grp 5
max_thread_count 4
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-ESX-MDG online 3 0 149.1GB 64 109.1GB
7 ESX-DATA-MDG online 1 1 20.0GB 64 2.3GB
8 ESX-BOOT-MDG online 1 1 20.0GB 64 6.9GB

Once this task is completed, Example 14-36 shows that the VDisks are now spread over
three MDisks.

Example 14-36 Migration complete


IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-ESX-MDG online 3 2 149.1GB 64 109.1GB
7 ESX-DATA-MDG online 1 0 20.0GB 64 20.0GB
8 ESX-BOOT-MDG online 1 0 20.0GB 64 20.0GB
IBM_2145:itsosvc1:admin>svcinfo lsvdiskmember ESX-BOOT-VD
id
2
3
4
IBM_2145:itsosvc1:admin>svcinfo lsvdiskmember ESX-DATA-VD
id
2
3
4

Our migration to the SVC is now complete. The original MDisks (ESX-BOOT-MD and
ESX-DATA-MD) can now be removed from the SVC, and these LUNs removed from the storage
subsystem.

If these LUNs were the last used LUNs on our storage subsystem, then we could remove
them from our SAN fabric.

14.7.5 Preparing to migrate from the SVC


Before we move the ESX server's LUNs from being accessed by the SVC as virtual disks to
being accessed directly from the storage subsystem, we need to convert the VDisks into
image mode VDisks.

You might want to perform this activity for any one of these reasons:
򐂰 You purchased a new storage subsystem, and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
򐂰 You used SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no
longer need that host connected to the SVC.
򐂰 You want to ship a host and its data that currently is connected to the SVC, to a site where
there is no SVC.
򐂰 Changes to your environment no longer require this host to use the SVC.

There are also some other preparatory activities that we can do before we need to shut down
the host and reconfigure the LUN masking/mapping. This section covers those activities.

If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment should look similar to ours as shown in Figure 14-56.

Figure 14-56 ESX SVC SAN Environment

Make fabric zone changes


The first step is to set up the SAN configuration so that all the zones are created. The new
storage subsystem should be added to the Red zone so that the SVC can talk to it directly.

We will also need a Green zone for our host to use when we are ready for it to directly access
the disk, after it has been removed from the SVC.

We assume that you have created the necessary zones.

Once your zone configuration is set up correctly, the SVC should see the new storage
subsystem's controller using the svcinfo lscontroller command, as shown in
Example 14-37. It is also a good idea to rename it to something more meaningful.

Example 14-37 Discovering the new storage subsystem


IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900
1 controller1 IBM 2107-921
IBM_2145:itsosvc1:admin>svctask chcontroller -name DS8000 controller1
IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900
1 DS8000 IBM 2107-921

Create new LUNs
On our storage subsystem we created two LUNs and masked the LUNs so that the SVC can
see them. These two LUNs will eventually be given directly to the host, removing the VDisks
that it currently has. To check that the SVC can use them, issue the svctask detectmdisk
command as shown in Example 14-38.

Example 14-38 Discover the new MDisks


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 mdisk0 online unmanaged 20.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 mdisk1 online unmanaged 20.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-ESX-MD1 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-ESX-MD2 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-ESX-MD3 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

Even though the MDisks will not stay in the SVC for long, it is still recommended to rename
them to something more meaningful, just so that they do not get confused with other MDisks
being used by other activities. Also, we will create the MDisk groups to hold our new MDisks.
This is all shown in Example 14-39.

Example 14-39 Rename the MDisks


IBM_2145:itsosvc1:admin>svctask chmdisk -name ESX-BOOT-MD mdisk1
IBM_2145:itsosvc1:admin>svctask chmdisk -name ESX-DATA-MD mdisk0
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name ESX-BOOT-MDG -ext 64
MDisk Group, id [8], successfully created
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name ESX-DATA-MDG -ext 64
MDisk Group, id [7], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-ESX-MDG online 3 2 149.1GB 64 109.1GB
7 ESX-DATA-MDG online 0 0 0 64 0
8 ESX-BOOT-MDG online 0 0 0 64 0

Our SVC environment is now ready for the VDisk migration to image mode VDisks.



14.7.6 Migrate the managed VDisks to image mode VDisks
While our ESX server is still running, we will migrate the managed VDisks onto the new
MDisks using image mode VDisks. The command to perform this action is svctask
migratetoimage and is shown in Example 14-40.

Example 14-40 Migrate the VDisks to image mode VDisks


IBM_2145:itsosvc1:admin>svctask migratetoimage -vdisk ESX-BOOT-VD -mdisk ESX-BOOT-MD
-mdiskgrp ESX-BOOT-MDG
IBM_2145:itsosvc1:admin>svctask migratetoimage -vdisk ESX-DATA-VD -mdisk ESX-DATA-MD
-mdiskgrp ESX-DATA-MDG
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 ESX-DATA-MD online image 7 ESX-DATA-MDG 20.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 ESX-BOOT-MD online image 8 ESX-BOOT-MDG 20.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-ESX-MD1 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-ESX-MD2 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-ESX-MD3 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000
IBM_2145:itsosvc1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 17
migrate_source_vdisk_index 11
migrate_target_mdisk_index 0
migrate_target_mdisk_grp 7
max_thread_count 4
migrate_type Migrate_to_Image
progress 15
migrate_source_vdisk_index 10
migrate_target_mdisk_index 1
migrate_target_mdisk_grp 8
max_thread_count 4

During the migration, our ESX server will not be aware that its data is being physically moved
between storage subsystems. We can continue to run and use the virtual machines running
on the server.

Once the migration has completed, the image mode VDisks will be ready to be removed from
the ESX server, and the real LUNs can be mapped/masked directly to the host using the
storage subsystem’s tool.

14.7.7 Remove the LUNs from the SVC


How your ESX server is configured determines the order in which your LUNs are removed
from the control of the SVC, and whether you need to reboot the ESX server and suspend
the VMware guests.

On our LOCHNESS_BOOT_LUN_0, we have just the ESX server software (that is, no VMware
guests), so in order to remove this LUN from the control of the SVC we will have to shut down
the ESX server. (Thus, all VMware guests will need to be stopped or suspended before this
can occur.)

On our LOCHNESS_DATA_LUN_1, we have just virtual machine disks, so in order to remove this
LUN from the control of the SVC, we only have to stop/suspend all VMware guests that
are using this LUN.

Remove VMware guest LUNs


We will first move our LOCHNESS_DATA_LUN_1 using these steps:
1. Shut down/suspend all our guests using this LUN. You can use the same method as used
in “Move VMware guest LUNs” on page 613 to identify the guests using this LUN.
2. Remove the VDisks from the host by using the svctask rmvdiskhostmap command
(Example 14-41). To double check that you have removed the VDisks, use the svcinfo
lshostvdiskmap command, which should show that these VDisks are no longer mapped to
the ESX server.

Example 14-41 Remove the VDisks from the host


IBM_2145:itsosvc1:admin>svctask rmvdiskhostmap -host LOCHNESS ESX-DATA-VD
IBM_2145:itsosvc1:admin>svcinfo lshostvdiskmap LOCHNESS
13 LOCHNESS 0 10 ESX-BOOT-VD 210000E08B18558E 60050768018200C4700000000000000E

3. Remove the VDisks from the SVC by using the svctask rmvdisk command. This will make
the MDisks unmanaged as shown in Example 14-42.

Note: When you run the svctask rmvdisk command, the SVC will first double check
that there is no outstanding dirty cache data for the VDisk being removed. If there is still
uncommitted cached data, then the command will fail with the error message:

CMMVC6212E The command failed because data in the cache has not been committed
to disk

You will have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the VDisk.

The SVC will automatically de-stage uncommitted cached data two minutes after the
last write activity for the VDisk. How long this command takes to complete depends on
how much data there is to destage and how busy the I/O subsystem is.

You can check if the VDisk has uncommitted data in the cache by using the command
svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This
attribute has the following meanings:
empty No modified data exists in the cache.
not_empty Some modified data might exist in the cache.
corrupt Some modified data might have existed in the cache, but any such
data has been lost.
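For example, before removing our data VDisk we could verify the cache state with the following command (output abridged to the relevant attribute):

IBM_2145:itsosvc1:admin>svcinfo lsvdisk ESX-DATA-VD
...
fast_write_state empty
...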



Example 14-42 Remove the VDisks from the SVC
IBM_2145:itsosvc1:admin>svctask rmvdisk ESX-DATA-VD
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 ESX-DATA-MD online unmanaged 20.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 ESX-BOOT-MD online image 8 ESX-BOOT-MDG 20.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-ESX-MD1 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-ESX-MD2 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-ESX-MD3 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

4. Using Storage Manager (our storage subsystem management tool), unmap/unmask the
disks from the SVC and map/mask them back to the ESX server.

Important: This is the last step that you can perform, and still safely back out
everything you have done so far.

Up to this point you can reverse all the actions that you have performed so far to get the
server back online without data loss, that is:
򐂰 Remap/remask the LUNs back to the SVC.
򐂰 Run the svctask detectmdisk to rediscover the MDisks.
򐂰 Recreate the VDisks with svctask mkvdisk.
򐂰 Remap the VDisks back to the server with svctask mkvdiskhostmap.

Once you start the next step, you might not be able to turn back without the risk of data
loss.
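As an illustration, the backout described in the Important box above reuses commands we have already shown. After remapping the LUNs back to the SVC in Storage Manager, the sketch for our data LUN (using the names from this scenario) would be:

IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svctask mkvdisk -mdiskgrp ESX-DATA-MDG -iogrp 0 -vtype image -mdisk ESX-DATA-MD -name ESX-DATA-VD
IBM_2145:itsosvc1:admin>svctask mkvdiskhostmap -host LOCHNESS ESX-DATA-VD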

5. Now, using the VMware management console, rescan to discover the LUN, as shown in
Figure 14-57.

Figure 14-57 Rescan your SAN and discover the LUNs

During the rescan you might receive geometry errors as ESX discovers that the old disk
has disappeared. The LUN will appear with a new vmhba address, and VMware will
recognize it as our VMWARE-GUESTS disk.
6. We are now ready to restart the VMware guests. If you want to move the ESX server's
boot disk at the same time, continue with the next steps.

And finally, to make sure that the MDisks are removed from the SVC, run the svctask
detectmdisk command. The MDisks will first be discovered as offline, and then will
automatically be removed, once the SVC determines that there are no VDisks associated
with these MDisks.

Remove VMware BOOT LUNs


These steps cover moving our LOCHNESS_BOOT_LUN_0 LUN off the SVC:
1. Since this is the VMware ESX server's boot LUN, we need to shut down/suspend all our
virtual machines. Once they have all been suspended or shut down, we will shut down the
ESX server.
2. Remove the VDisks from the host by using the svctask rmvdiskhostmap command
(Example 14-43). To double check that you have removed the VDisks, use the svcinfo
lshostvdiskmap command, which should show that these VDisks are no longer mapped to
the ESX server.

Example 14-43 Remove the VDisks from the host


IBM_2145:itsosvc1:admin>svctask rmvdiskhostmap -host LOCHNESS ESX-BOOT-VD
IBM_2145:itsosvc1:admin>svcinfo lshostvdiskmap LOCHNESS
IBM_2145:itsosvc1:admin>



3. Remove the VDisks from the SVC by using the svctask rmvdisk command. This will make
the MDisks unmanaged as shown in Example 14-44.

Note: When you run the svctask rmvdisk command, the SVC will first double check
that there is no outstanding dirty cache data for the VDisk being removed. If there is still
uncommitted cached data, then the command will fail with the error message:

CMMVC6212E The command failed because data in the cache has not been committed
to disk

You will have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the VDisk.

The SVC will automatically de-stage uncommitted cached data two minutes after the
last write activity for the VDisk. How long this command takes to complete depends on
how much data there is to destage and how busy the I/O subsystem is.

Example 14-44 Remove the VDisks from the SVC


IBM_2145:itsosvc1:admin>svctask rmvdisk ESX-BOOT-VD
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
1 ESX-BOOT-MD online unmanaged 20.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-ESX-MD1 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-ESX-MD2 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-ESX-MD3 online managed 5 IBMOEM-ESX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

4. Using Storage Manager (our storage subsystem management tool), unmap/unmask the
disks from the SVC and map/mask them back to the ESX server.

Important: This is the last step that you can perform and still safely back out everything
you have done so far.

Up to this point, you can reverse all the actions that you have performed so far to get
the server back online without data loss, that is:
򐂰 Remap/remask the LUNs back to the SVC.
򐂰 Run svctask detectmdisk to rediscover the MDisks.
򐂰 Recreate the VDisks with svctask mkvdisk.
򐂰 Remap the VDisks back to the server with svctask mkvdiskhostmap.

Once you start the next step, you might not be able to turn back without the risk of data
loss.

Note: One of the LUNs is the ESX server’s boot disk; this will need to be given directly
to the ESX server as LUN ID 0.

5. We are now ready to restart the ESX server. If our zoning configuration and our
masking/mapping have been performed correctly, the ESX server will start unaware that
anything has occurred.

And finally, to make sure that the MDisks are removed from the SVC, run the svctask
detectmdisk command. The MDisks will first be discovered as offline, and then will
automatically be removed, once the SVC determines that there are no VDisks associated
with these MDisks.

14.8 Migrating AIX SAN disks to SVC disks


In this section we will move two LUNs that an AIX server currently accesses directly from
our DS4000 storage subsystem over to the SVC.

We will then manage those LUNs with the SVC, move them between other managed disks,
and then finally move them back to image mode disks, so that those LUNs can then be
masked/mapped back to the AIX server directly.

This example should help you perform any of the following activities in your
environment:
򐂰 Move an AIX server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs via the SVC. This would be the first activity that you would do when introducing the
SVC into your environment. This section shows that your host downtime is only a few
minutes while you remap/remask disks using your storage subsystem LUN management
tool. This step starts in “Prepare your SVC to virtualize disks” on page 631.
򐂰 Move data between storage subsystems while your AIX server is still running and
servicing your business application. You might perform this activity if you were removing a
storage subsystem from your SAN environment, or if you wanted to move the data onto
LUNs that are more appropriate for the type of data stored on them, taking into account
availability, performance, and redundancy. This step is covered in “Migrate the image mode
VDisks” on page 637.
򐂰 Move your AIX server’s LUNs back to image mode VDisks, so that they can be
remapped/remasked directly back to the AIX server. This step starts in “Preparing to
migrate from the SVC” on page 639.

These three activities can be used individually or together, enabling you to migrate your AIX
server's LUNs from one storage subsystem to another, using the SVC as your migration tool.
Using only some of the activities enables you to introduce the SVC into, or remove it from,
your environment.

The only downtime required for these activities will be the time it takes you to remask/remap
the LUNs between the storage subsystems and your SVC.



In Figure 14-58 we show our AIX environment.

Figure 14-58 AIX SAN environment

Figure 14-58 shows our AIX server connected to our SAN infrastructure. It has two LUNs
(hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.

These disks both make up the LVM group ITSOvg, which has one filesystem mounted at
/ITSOfs. This filesystem has two random files as shown in Example 14-45.

Example 14-45 AIX SAN configuration


# lsdev -Ccdisk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
# lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cdda43ca4d87 rootvg active
hdisk3 0009cdda6f85f4fe ITSOvg active
hdisk4 0009cdda6f85f666 ITSOvg active
# lsvg
rootvg
datavg
ITSOvg
# lsvg -l ITSOvg
ITSOvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
ITSOlv jfs 32 32 1 closed/syncd N/A

loglv03 jfslog 1 1 1 open/syncd N/A
lv02 jfs 512 512 2 open/syncd /ITSOfs
# ls -la /ITSOfs/ITSOSVCTEST
total 745488
drwxr-sr-x 2 root sys 512 Sep 19 13:09 .
drwxr-sr-x 4 sys sys 512 Sep 19 13:04 ..
-rw-r--r-- 1 root sys 631242752 Sep 19 13:11 random1
-rw-r--r-- 1 root sys 14680064 Sep 19 13:09 random2

Our AIX server represents a typical SAN environment with a host directly using LUNs created
on a SAN storage subsystem, as shown in Figure 14-58 on page 629.
򐂰 The AIX server’s HBA cards are zoned so that they are in the Green zone, with our
storage subsystem.
򐂰 hdisk3 and hdisk4 are two LUNs that have been defined on the storage subsystem, and
using LUN masking, are directly available to our AIX server.

14.8.1 Connecting the SVC to your SAN fabric


This section covers the basic steps that you would take to introduce the SVC into your SAN
environment. While this section only summarizes these activities, you should be able to
accomplish this without any downtime to any host or application that is also using your
storage area network.

If you have an SVC already connected, then you can safely jump to “Prepare your SVC to
virtualize disks” on page 631.

Be very careful, as connecting the SVC into your storage area network will require you to
connect cables to your SAN switches, and alter your switch zone configuration. Doing these
activities incorrectly could render your SAN inoperable, so make sure you fully understand the
impact of everything you are doing.

Connecting the SVC to your SAN fabric will require you to:
򐂰 Assemble your SVC components (nodes, UPS, master console), cable it correctly, power it
on, and verify that it is visible on your storage area network.
򐂰 Create and configure your SVC cluster.
򐂰 Create these additional zones:
– An SVC node zone (our Black zone in Figure 14-59; see the zoning sketch after
this list). This zone should contain all the ports (or WWNs) for each of the SVC
nodes in your cluster. Our SVC is a two node cluster, where each node has four
ports, so our Black zone has eight WWNs defined.
– A storage zone (our Red zone). This zone should also have all the ports/WWN from the
SVC node zone as well as the ports/WWN for all the storage subsystems that SVC will
virtualize.
– A host zone (our Blue zone). This zone should contain the ports/WWNs for each host
that will access the VDisk, together with the ports defined in the SVC node zone.

Attention: Do not put your storage subsystems in the host (Blue) zone. This is an
unsupported configuration and could lead to data loss!
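As a sketch only, on a Brocade fabric the Black zone for our two node cluster could be created like this; the zone and configuration names are placeholders, the WWPNs stand in for the eight SVC node ports, and the Red and Blue zones are built the same way with their respective members:

zonecreate "BlackZone", "50:05:07:68:01:40:37:e5; 50:05:07:68:01:40:37:dc; ..."
cfgadd "ITSO_cfg", "BlackZone"
cfgenable "ITSO_cfg"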



Our environment has been set up as described above and can be seen in Figure 14-59.

Figure 14-59 SAN environment with SVC attached

14.8.2 Prepare your SVC to virtualize disks


This section covers the preparatory tasks that we will perform before taking our AIX server
offline. These are all non-disruptive activities and should not affect your SAN fabric, nor your
existing SVC configuration (if you already have a production SVC in place).

Create a managed disk group


When we move the two AIX LUNs to the SVC, they will first be used in image mode, and as
such we need a managed disk group to hold those disks.

First, we need to create an empty managed disk group for each of the disks, using the
commands in Example 14-46. Our managed disk groups will be called KANAGA_MDG_0 and
KANAGA_MDG_1, one for each of our two data LUNs.

Example 14-46 Create empty mdiskgroup


IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name KANAGA_MDG_0 -ext 64
MDisk Group, id [8], successfully created
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name KANAGA_MDG_1 -ext 64
MDisk Group, id [7], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
7 KANAGA_MDG_1 online 0 0 0 64 0
8 KANAGA_MDG_0 online 0 0 0 64 0

Create our host definition
If your zone preparation above has been performed correctly, the SVC should be able to see
the AIX server’s HBA adapters on the fabric (our host only had one HBA).

First we will get the WWN for our AIX server's HBA, as we have many hosts connected to our
SAN fabric and in the Blue zone. We want to make sure we have the correct WWN to reduce
our AIX server's downtime. Example 14-47 shows the commands to get the WWN; our host
has a WWN of 10000000C932A7FB.

Example 14-47 Find out your WWN


# lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
# lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Feature Code/Marketing ID...2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1

The svcinfo lshbaportcandidate command on the SVC will list all the WWNs that the SVC
can see on the SAN fabric that have not yet been allocated to a host. Example 14-48 shows
the output of the nodes it found on our SAN fabric. (If the port did not show up, it would
indicate that we have a zone configuration problem.)

Example 14-48 Add the host to the SVC


IBM_2145:itsosvc1:admin>svcinfo lshbaportcandidate
id
210000E08B1A5996
210100E08B3A5996
210000E08B05F3ED



210000E08B05F2ED
10000000C932A7FB

After verifying that the SVC can see our host (KANAGA), we will create the host entry and
assign the WWN to this entry. These commands can be seen in Example 14-49.

Example 14-49 Create the host entry


IBM_2145:itsosvc1:admin>svctask mkhost -name KANAGA -hbawwpn 10000000C932A7FB
Host id [13] successfully created
IBM_2145:itsosvc1:admin>svcinfo lshost KANAGA
id 13
name KANAGA
port_count 1
type generic
iogrp_count 4
WWPN 10000000C932A7FB
node_logged_in_count 2

Verify that we can see our storage subsystem


If our zoning has been performed correctly, the SVC should also be able to see the storage
subsystem with the svcinfo lscontroller command (Example 14-50). We will also rename
the storage subsystem to something more meaningful. (If we had many storage subsystems
connected to our SAN fabric, then renaming them makes it considerably easier to identify
them.)

Example 14-50 Discover and rename the storage controller


IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 controller0 IBM 1742-900
IBM_2145:itsosvc1:admin>svctask chcontroller -name DS4000 controller0
IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900

Get the disk serial numbers


To help avoid the possibility of creating the wrong VDisks from all the available unmanaged
MDisks (in case there are many seen by the SVC), we will get the LUN serial numbers from
our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we will confirm that we have the correct serial numbers
before we create the image mode VDisks.

If you are also using a DS4000 family storage subsystem, Storage Manager will provide the
LUN serial numbers. Right-click your logical drive and choose Properties. Our serial
numbers are shown in Figure 14-60.

Figure 14-60 Obtaining the disk serial numbers

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks
and give them back to the host as VDisks.

14.8.3 Move the LUNs to the SVC


In this step, we will move the LUNs assigned to the AIX server and reassign them to the SVC.

As we are only moving the LUNs that hold our application and data files, we can do that
without rebooting the host. We only need to unmount the file system and vary off the volume
group to ensure data integrity during the reassignment.

Before you start: Moving LUNs to the SVC will require that the SDD device driver is
installed on the AIX server. This could also be installed ahead of time; however, it might
require an outage to your host to do so.

As we will move both LUNs at the same time, here are the required steps:
1. Confirm that the SDD device driver is installed.
2. Unmount and vary off the volume groups:
a. Stop the applications that are using the LUNs.
b. Unmount those filesystems, with the umount MOUNT_POINT command.
c. If the filesystems are an LVM volume, then deactivate that volume group with the
varyoffvg VOLUMEGROUP_NAME.
Example 14-51 shows the commands that we ran on Kanaga.



Example 14-51 AIX command sequence
# umount /ITSOfs
# varyoffvg ITSOvg
# lsvg -o
rootvg
# lsvg
rootvg
datavg
ITSOvg
# df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd4 131072 46584 65% 2016 16% /
/dev/hd2 3473408 982172 72% 49677 19% /usr
/dev/hd9var 65536 32796 50% 490 7% /var
/dev/hd3 589824 355456 40% 1410 2% /tmp
/dev/hd1 360448 327516 10% 485 1% /home
/proc - - - - - /proc
/dev/hd10opt 360448 213828 41% 1442 3% /opt

3. Using Storage Manager (our storage subsystem management tool), we can
unmap/unmask the disks from the AIX server and remap/remask the disks to the SVC.
4. From the SVC, discover the new disks, with the svctask detectmdisk command. The
disks will be discovered and named as mdiskN, where N is the next available mdisk
number (starting from 0). Example 14-52 shows the commands we used to discover our
mdisks and verify that we have the correct ones.

Example 14-52 Discover the new mdisks.


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 mdisk0 online unmanaged 20.0GB 0000000000000000
DS4000 600a0b800017443100000016432e9a4a00000000000000000000000000000000
1 mdisk1 online unmanaged 20.0GB 0000000000000001
DS4000 600a0b800017443100000017432e9a7000000000000000000000000000000000

Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo
lsmdisk output) with the serial numbers that you recorded earlier (in Figure 14-60 on page 634).

5. Once we have verified that we have the correct MDisks, we will rename them to avoid
confusion in the future when we perform other MDisk related tasks (Example 14-53).

Example 14-53 Rename the MDisks


IBM_2145:itsosvc1:admin>svctask chmdisk -name AIX-DAT1-MD mdisk0
IBM_2145:itsosvc1:admin>svctask chmdisk -name AIX-DAT2-MD mdisk1
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 AIX-DAT1-MD online unmanaged 20.0GB 0000000000000000
DS4000 600a0b800017443100000016432e9a4a00000000000000000000000000000000
1 AIX-DAT2-MD online unmanaged 20.0GB 0000000000000001
DS4000 600a0b800017443100000017432e9a7000000000000000000000000000000000

6. We create our image mode VDisks with the svctask mkvdisk command (Example 14-54).
This command will virtualize the disks, in the exact same layout as if they were not
virtualized.

Example 14-54 Create the image mode vdisks
IBM_2145:itsosvc1:admin>svctask mkvdisk -mdiskgrp KANAGA_MDG_0 -iogrp 0 -vtype image -mdisk
AIX-DAT1-MD -name AIX-DAT1-VD
Virtual Disk, id [10], successfully created
IBM_2145:itsosvc1:admin>svctask mkvdisk -mdiskgrp KANAGA_MDG_1 -iogrp 0 -vtype image -mdisk
AIX-DAT2-MD -name AIX-DAT2-VD
Virtual Disk, id [11], successfully created

7. Finally, we can map the new image mode VDisks to the host (Example 14-55).

Example 14-55 Map the VDisks to the host


IBM_2145:itsosvc1:admin>svctask mkvdiskhostmap -host KANAGA AIX-DAT1-VD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:itsosvc1:admin>svctask mkvdiskhostmap -host KANAGA AIX-DAT2-VD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:itsosvc1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
13 KANAGA 0 10 AIX-DAT1-VD 10000000C932A7FB 60050768018200C4700000000000000F
13 KANAGA 1 11 AIX-DAT2-VD 10000000C932A7FB 60050768018200C4700000000000000E

Note: While the application is in a quiescent state, you could choose to FlashCopy the new
image VDisks onto other VDisks. You will not need to wait until the FlashCopy has
completed before starting your application.
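A minimal sketch of such a copy, assuming a target VDisk named AIX-DAT1-BKP of the same size has already been created, might be:

IBM_2145:itsosvc1:admin>svctask mkfcmap -source AIX-DAT1-VD -target AIX-DAT1-BKP -name DAT1-BKP
IBM_2145:itsosvc1:admin>svctask startfcmap -prep DAT1-BKP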

We are now ready to restart the AIX server.

If you only moved the application LUN to the SVC and left your AIX server running, you need
to follow these steps to see the new VDisk (a condensed command sketch follows the list):
1. Remove the old disk definitions, if you have not done so already.
2. Run cfgmgr to rediscover the available LUNs.
3. If your application and data are on an LVM volume, run the varyonvg VOLUME_GROUP
command to activate the volume group (if the volume group is no longer known to the
system, import it first with importvg).
4. Mount your filesystems with the mount /MOUNT_POINT command.
5. You should be ready to start your application.
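For our environment, that sequence condenses to commands similar to the following (the volume group and mount point are ours; yours will differ):

# cfgmgr
# varyonvg ITSOvg
# mount /ITSOfs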

To verify that we did not lose any data integrity during this process, we ran lqueryvg -Atp
ITSOvg. Example 14-56 verifies that everything is intact.

Example 14-56 Verify that the LVM group is intact


# lqueryvg -Atp ITSOvg
Max LVs: 256
PP Size: 26
Free PPs: 93
LV count: 3
PV count: 2
Total VGDAs: 3
Conc Allowed: 0
MAX PPs per PV 1016
MAX PVs: 32
Conc Autovaryo 0
Varied on Conc 0
Logical: 0009cdda00004c00000001066f85f8aa.1 ITSOlv 1
0009cdda00004c00000001066f85f8aa.2 loglv03 1
0009cdda00004c00000001066f85f8aa.3 lv02 1



Physical: 0009cdda6f85f4fe 2 0
0009cdda6f85f666 1 0
Total PPs: 638
LTG size: 128
HOT SPARE: 0
AUTO SYNC: 0
VG PERMISSION: 0
SNAPSHOT VG: 0
IS_PRIMARY VG: 0
PSNFSTPP: 4352
VARYON MODE: 0

14.8.4 Migrate the image mode VDisks


While the AIX server is still running and our filesystems are in use, we will now migrate the
image mode VDisks onto striped VDisks, with the extents spread over the three other
MDisks.

Preparing MDisks for striped mode VDisks


From our storage subsystem, we have:
򐂰 Created and allocated three LUNs to the SVC.
򐂰 Discovered them as MDisks.
򐂰 Renamed these LUNs to something more meaningful.
򐂰 Created a new MDisk group.
򐂰 Finally, put all these MDisks into this group.

You can see all the output of our commands in Example 14-57.

Example 14-57 Create a new MDisk group


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name IBMOEM-AIX-MDG -ext 64
MDisk Group, id [5], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 AIX-DAT1-MD online image 8 KANAGA_MDG_0 20.0GB
0000000000000000 DS4000
600a0b800017443100000016432e9a4a00000000000000000000000000000000
1 AIX-DAT2-MD online image 7 KANAGA_MDG_1 20.0GB
0000000000000001 DS4000
600a0b800017443100000017432e9a7000000000000000000000000000000000
2 mdisk2 online managed 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 mdisk3 online managed 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 mdisk4 online managed 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-AIX-MD1 mdisk2
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-AIX-MD2 mdisk3
IBM_2145:itsosvc1:admin>svctask chmdisk -name IBMOEM-AIX-MD3 mdisk4
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-AIX-MD1 IBMOEM-AIX-MDG
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-AIX-MD2 IBMOEM-AIX-MDG
IBM_2145:itsosvc1:admin>svctask addmdisk -mdisk IBMOEM-AIX-MD3 IBMOEM-AIX-MDG
IBM_2145:itsosvc1:admin>svcinfo lsmdisk

id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 AIX-DAT1-MD online image 8 KANAGA_MDG_0 20.0GB
0000000000000000 DS4000
600a0b800017443100000016432e9a4a00000000000000000000000000000000
1 AIX-DAT2-MD online image 7 KANAGA_MDG_1 20.0GB
0000000000000001 DS4000
600a0b800017443100000017432e9a7000000000000000000000000000000000
2 IBMOEM-AIX-MD1 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-AIX-MD2 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-AIX-MD3 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

Migrate the VDisks


We are now ready to migrate the image mode VDisks onto striped VDisks in the
IBMOEM-AIX-MDG, with the svctask migratevdisk command (Example 14-58).

While the migration is running, our AIX server is still running, and we can continue to run
sha1 checksum calculations on all our random files in the /ITSOfs filesystem.

To check the overall progress of the migration, we will use the svcinfo lsmigrate command
as shown in Example 14-58. Listing the MDisk groups with the svcinfo lsmdiskgrp command
shows that the free capacity on the old MDisk groups is slowly increasing as extents are
moved to the new MDisk group.

Example 14-58 Migrating image mode VDisks to striped VDisks


IBM_2145:itsosvc1:admin>svctask migratevdisk -vdisk AIX-DAT1-VD -mdiskgrp IBMOEM-AIX-MDG
IBM_2145:itsosvc1:admin>svctask migratevdisk -vdisk AIX-DAT2-VD -mdiskgrp IBMOEM-AIX-MDG
IBM_2145:itsosvc1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 2
migrate_source_vdisk_index 11
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_type MDisk_Group_Migration
progress 9
migrate_source_vdisk_index 10
migrate_target_mdisk_grp 5
max_thread_count 4
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-AIX-MDG online 3 0 149.1GB 64 109.1GB
7 KANAGA_MDG_1 online 1 1 20.0GB 64 2.3GB
8 KANAGA_MDG_0 online 1 1 20.0GB 64 6.9GB
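If you prefer not to retype the svcinfo lsmigrate command, a simple polling loop run from the master console can track progress; this sketch assumes an OpenSSH client and an SSH key that has already been uploaded to the cluster, and the cluster address is a placeholder:

while true; do ssh admin@<cluster_ip> svcinfo lsmigrate; sleep 60; done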



Once this task has completed, Example 14-59 shows that the VDisks are now spread over
three MDisks.

Example 14-59 Migration complete


IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-AIX-MDG online 3 2 149.1GB 64 109.1GB
7 KANAGA_MDG_1 online 1 0 20.0GB 64 20.0GB
8 KANAGA_MDG_0 online 1 0 20.0GB 64 20.0GB
IBM_2145:itsosvc1:admin>svcinfo lsvdiskmember AIX-DAT1-VD
id
2
3
4
IBM_2145:itsosvc1:admin>svcinfo lsvdiskmember AIX-DAT2-VD
id
2
3
4

Our migration to the SVC is now complete. The original MDisks (AIX-DAT1-MD and
AIX-DAT2-MD) can now be removed from the SVC, and these LUNs removed from the storage
subsystem.

If these LUNs were the last LUNs in use on our storage subsystem, we could now remove
that storage subsystem from our SAN fabric.

14.8.5 Preparing to migrate from the SVC


Before we move the AIX server's LUNs from being accessed by the SVC as virtual disks to
being directly accessed from the storage subsystem, we need to convert the VDisks into
image mode VDisks.

You might want to perform this activity for any one of these reasons:
򐂰 You purchased a new storage subsystem, and you were using the SVC as a tool to
migrate from your old storage subsystem, to this new storage subsystem.
򐂰 You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk and you no
longer need that host connected to the SVC.
򐂰 You want to ship a host and its data that is currently connected to the SVC, to a site where
there is no SVC.
򐂰 Changes to your environment no longer require this host to use the SVC.

There are also some other preparatory activities that we can do before we need to shut down
the host, and reconfigure the LUN masking/mapping. This section covers those activities.

If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on and visible from your SAN switches.
Your environment should look similar to ours as shown in Figure 14-61.

Figure 14-61 Environment with SVC

Make fabric zone changes


The first step is to set up the SAN configuration so that all the zones are created. The new
storage subsystem should be added to the Red zone so that the SVC can talk to it directly.

We will also need a Green zone for our host to use when we are ready for it to directly access
the disk, after it has been removed from the SVC.

It is assumed that you have created the necessary zones.

Once your zone configuration is set up correctly, the SVC should see the new storage
subsystem's controller when you issue the svcinfo lscontroller command, as shown in
Example 14-60. It is also a good idea to rename it to something more meaningful.

Example 14-60 Discovering the new storage subsystem


IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900
1 controller1 IBM 2107-921
IBM_2145:itsosvc1:admin>svctask chcontroller -name DS8000 controller1
IBM_2145:itsosvc1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4000 IBM 1742-900
1 DS8000 IBM 2107-921



Create new LUNs
On our storage subsystem we created two LUNs, and masked the LUNs so that the SVC can
see them. These two LUNs will eventually be given directly to the host, removing the VDisks
that it currently has. To check that the SVC can use them, issue the svctask detectmdisk
command as shown in Example 14-61.

Example 14-61 Discover the new MDisks


IBM_2145:itsosvc1:admin>svctask detectmdisk
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 mdisk0 online unmanaged 20.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 mdisk1 online unmanaged 20.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-AIX-MD1 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-AIX-MD2 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-AIX-MD3 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename
them to something more meaningful, just so that they do not get confused with other MDisks
being used by other activities. Also, we will create the MDisk groups to hold our new MDisks.
This is shown in Example 14-62.

Example 14-62 Rename the MDisks


IBM_2145:itsosvc1:admin>svctask chmdisk -name AIX-DAT1-MD mdisk0
IBM_2145:itsosvc1:admin>svctask chmdisk -name AIX-DAT2-MD mdisk1
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name KANAGA_MDG_0 -ext 64
MDisk Group, id [8], successfully created
IBM_2145:itsosvc1:admin>svctask mkmdiskgrp -name KANAGA_MDG_1 -ext 64
MDisk Group, id [7], successfully created
IBM_2145:itsosvc1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
5 IBMOEM-AIX-MDG online 3 2 149.1GB 64 109.1GB
7 KANAGA_MDG_1 online 0 0 0 64 0
8 KANAGA_MDG_0 online 0 0 0 64 0

Our SVC environment is now ready for the VDisk migration to image mode VDisks.

14.8.6 Migrate the managed VDisks


While our AIX server is still running, we will migrate the managed VDisks onto the new
MDisks using image mode VDisks. The command to perform this action is svctask
migratetoimage and is shown in Example 14-63.

Example 14-63 Migrate the VDisks to image mode VDisks
IBM_2145:itsosvc1:admin>svctask migratetoimage -vdisk AIX-DAT1-VD -mdisk AIX-DAT1-MD
-mdiskgrp KANAGA_MDG_0
IBM_2145:itsosvc1:admin>svctask migratetoimage -vdisk AIX-DAT2-VD -mdisk AIX-DAT2-MD
-mdiskgrp KANAGA_MDG_1
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
0 AIX-DAT1-MD online image 8 KANAGA_MDG_0 20.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 AIX-DAT2-MD online image 7 KANAGA_MDG_1 20.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-AIX-MD1 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-AIX-MD2 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-AIX-MD3 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000
IBM_2145:itsosvc1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 9
migrate_source_vdisk_index 10
migrate_target_mdisk_index 0
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_type Migrate_to_Image
progress 15
migrate_source_vdisk_index 11
migrate_target_mdisk_index 1
migrate_target_mdisk_grp 7
max_thread_count 4

During the migration, our AIX server will not be aware that its data is being physically moved
between storage subsystems.

Once the migration has completed, the image mode VDisks will be ready to be removed from
the AIX server, and the real LUNs can be mapped/masked directly to the host using the
storage subsystem's tool.

14.8.7 Remove the LUNs from the SVC


The next step requires downtime on the AIX server, as we will remap/remask the disks so
that the host sees them directly via the Green zone.

As both our LUNs hold only data files and belong to a volume group of their own, we can do
this without rebooting the host. We only need to unmount the file system and vary off the
volume group to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might require a different
driver than SDD. Check with the vendor of the storage subsystem to see which driver you
will need. You might be able to install this driver ahead of time.



As we will move both LUNs at the same time, here are the required steps:
1. Confirm that the correct device driver for the new storage subsystem is loaded. As we are
moving to a DS8000, we can continue to use the SDD device driver.
2. Shut down any applications and unmount the file systems:
a. Stop the applications that are using the LUNs.
b. Unmount those filesystems, with the umount MOUNT_POINT command.
c. If the filesystems are an LVM volume, then deactivate that volume group with the
varyoffvg VOLUMEGROUP_NAME.
3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command
(Example 14-64). To double check that you have removed the VDisks, use the svcinfo
lshostvdiskmap command, which should show that these disks are no longer mapped to
the AIX server.

Example 14-64 Remove the VDisks from the host


IBM_2145:itsosvc1:admin>svctask rmvdiskhostmap -host KANAGA AIX-DAT1-VD
IBM_2145:itsosvc1:admin>svctask rmvdiskhostmap -host KANAGA AIX-DAT2-VD
IBM_2145:itsosvc1:admin>svcinfo lshostvdiskmap KANAGA
IBM_2145:itsosvc1:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This will make
the MDisks unmanaged as seen in Example 14-65.

Note: When you run the svctask rmvdisk command, the SVC will first double check
that there is no outstanding dirty cache data for the VDisk being removed. If there is still
uncommitted cached data, then the command will fail with the error message:

CMMVC6212E The command failed because data in the cache has not been committed
to disk

You will have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the VDisk.

The SVC will automatically de-stage uncommitted cached data two minutes after the
last write activity for the VDisk. How long this command takes to complete depends on
how much data there is to destage and how busy the I/O subsystem is.

You can check if the VDisk has uncommitted data in the cache by using the command
svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This
attribute has the following meanings:
empty No modified data exists in the cache
not_empty Some modified data might exist in the cache
corrupt Some modified data might have existed in the cache, but any such
data has been lost

Example 14-65 Remove the VDisks from the SVC


IBM_2145:itsosvc1:admin>svctask rmvdisk AIX-DAT1-VD
IBM_2145:itsosvc1:admin>svctask rmvdisk AIX-DAT2-VD
IBM_2145:itsosvc1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID

0 AIX-DAT1-MD online unmanaged 20.0GB
0000000000000000 DS8000
600a0b80005551212000000542d658ce00000000000000000000000000000000
1 AIX-DAT2-MD online unmanaged 20.0GB
0000000000000001 DS8000
600a0b80005551212000000b431d7d1900000000000000000000000000000000
2 IBMOEM-AIX-MD1 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000002 DS4000
600a0b80001744310000000342d658a800000000000000000000000000000000
3 IBMOEM-AIX-MD2 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000003 DS4000
600a0b80001742330000000a42d65b1500000000000000000000000000000000
4 IBMOEM-AIX-MD3 online managed 5 IBMOEM-AIX-MDG 50.0GB
0000000000000004 DS4000
600a0b8000174233000000114328075f00000000000000000000000000000000

5. Using Storage Manager (our storage subsystem management tool), unmap/unmask the
disks from the SVC and map/mask them back to the AIX server.

Important: This is the last step that you can perform and still safely back out everything
you have done so far.

Up to this point, you can reverse all the actions that you have performed so far to get
the server back online without data loss, that is:
򐂰 Remap/remask the LUNs back to the SVC.
򐂰 Run the svctask detectmdisk to rediscover the MDisks.
򐂰 Recreate the VDisks with svctask mkvdisk.
򐂰 Remap the VDisks back to the server with svctask mkvdiskhostmap.

Once you start the next step, you might not be able to turn back without the risk of data
loss.

We are now ready to access the LUNs from the AIX server. If all the zoning and LUN
masking/mapping was done correctly, our AIX server should see its disks as if nothing had happened.
1. Remove the references to all the old disks as seen in Example 14-66.

Example 14-66 Remove references to old paths


# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 SAN Volume Controller Device
hdisk4 Available 1Z-08-02 SAN Volume Controller Device
hdisk5 Available 1Z-08-02 SAN Volume Controller Device
hdisk6 Available 1Z-08-02 SAN Volume Controller Device
hdisk7 Available 1D-08-02 SAN Volume Controller Device
hdisk8 Available 1D-08-02 SAN Volume Controller Device
hdisk9 Available 1D-08-02 SAN Volume Controller Device
hdisk10 Available 1D-08-02 SAN Volume Controller Device
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver
#
# for i in 3 4 5 6 7 8 9 10;do rmdev -dl hdisk$i -R;done
hdisk3 deleted



hdisk4 deleted
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk9 deleted
hdisk10 deleted
#
# rmdev -dl dpo -R
vpath0 deleted
vpath1 deleted
dpo deleted

2. Run cfgmgr -S to discover the storage subsystem.


3. Use lsdev -Cc disk to verify that you discovered your new disks.
4. If your application and data are on an LVM volume, run the varyonvg VOLUME_GROUP
command to activate the volume group (import it first with importvg if it is no longer
known to the system).
5. Mount your filesystems with the mount /MOUNT_POINT command.
6. You should be ready to start your application.

To verify that we did not lose any data integrity during this process, we used lqueryvg -Atp
ITSOvg. Example 14-67 verifies that everything is intact.

Example 14-67 Verify that the volume group is recognizable


# lqueryvg -Atp ITSOvg
Max LVs: 256
PP Size: 26
Free PPs: 93
LV count: 3
PV count: 2
Total VGDAs: 3
Conc Allowed: 0
MAX PPs per PV 1016
MAX PVs: 32
Conc Autovaryo 0
Varied on Conc 0
Logical: 0009cdda00004c00000001066f85f8aa.1 ITSOlv 1
0009cdda00004c00000001066f85f8aa.2 loglv03 1
0009cdda00004c00000001066f85f8aa.3 lv02 1
Physical: 0009cdda6f85f4fe 2 0
0009cdda6f85f666 1 0
Total PPs: 638
LTG size: 128
HOT SPARE: 0
AUTO SYNC: 0
VG PERMISSION: 0
SNAPSHOT VG: 0
IS_PRIMARY VG: 0
PSNFSTPP: 4352
VARYON MODE: 0

And finally, to make sure that the MDisks are removed from the SVC, run the svctask
detectmdisk command. The MDisks will first be discovered as offline, and then will
automatically be removed, once the SVC determines that there are no VDisks associated
with these MDisks.

Chapter 15. Master console


In this chapter we present an overview of the SVC master console (MC), including the
required hardware and software components. The master console is provided to help
manage the virtualization environment and consists of both a hardware and software unit.
From MC software release 2.1, the master console can optionally be ordered separately as a
software-only package available on a set of CDs to be installed on hardware that meets or
exceeds minimum requirements. Individual component software packages needed to provide
the full MC capability are available on the Web, except for IBM DB2® and IBM Director. The
CDs will not be available on the Web.

When using the master console with the SAN Volume Controller, you need to install and
configure it before configuring the SAN Volume Controller. The installation and configuration
steps are different between the hardware master console and the software-only master
console. For the hardware master console (in which the software is preinstalled), you will
need to customize the default factory settings.

This console can be used to manage the SVC clusters using the CIM Agent for SVC by
means of ICAT via a Web browser, or through the Administrative command line interface (CLI)
via a Secure Shell (SSH) session.

For a detailed guide to the master console in both cases, we recommend that you refer to the
IBM TotalStorage Master Console: Installation and User’s Guide, GC30-4090.

The master console provides the ability to diagnose problems remotely. It can be configured
to send an e-mail or electronic page if a configured event occurs. Historically, the customer
had the option to configure and initiate a Virtual Private Network (VPN) connection which
would have allowed IBM support to remotely access the master console. Today the preferred
method of remotely connecting to the MC is via Virtually on Site (VoS). For further information
regarding VoS, go to the following URL:
https://www-304.ibm.com/jct03004c/support/electronic/portal/!ut/p/_s.7_0_A/7_0_CI?category=
4&locale=en_US

To set up the configuration of the dial-home facility, see the IBM TotalStorage Virtualization
Family SAN Volume Controller: Configuration Guide, SC26-7543, and the IBM TotalStorage
Master Console: Installation and User’s Guide, GC30-4090 at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

15.1 Hardware
In Figure 15-1 we show the SVC and the UPSs.

Figure 15-1 2 SVC cluster nodes and UPSs



In Figure 15-2 we show the master console screen and keyboard.

Figure 15-2 Master console screen and keyboard

For a complete and current list of the hardware components that are included with the master
console, refer to the SVC support page at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

If a master console is not already installed, the customer must obtain a rack-mounted,
high-performance, highly-reliable Intel server (such as the IBM eServer System x Model 306
or equivalent) with the following options:
򐂰 One Pentium® 4 processor, minimum 2.6 GHz.
򐂰 Minimum of 4 GB of system memory.
򐂰 Two IDE hard disk drives, minimum 40 GB each. As a required step in the installation
process, these drives must be configured as mirrored.
򐂰 CD-ROM and diskette drives.
򐂰 Two 1 Gb ports for Ethernet connections (FC or copper).
򐂰 Keyboard, such as the Space Saver NLS keyboard or equivalent.



򐂰 Monitor, such as Netbay 1U Flat Panel Monitor Console kit (without keyboard) or
equivalent.
򐂰 Mouse or equivalent pointing device.

15.1.1 Example hardware configuration


Here is a list of hardware for an example configuration:
򐂰 IBM System x 306 server (1U).
򐂰 Intel Pentium® 4 3 GHz processor.
򐂰 4 GB memory DIMM (256 MB comes with base unit).
򐂰 Two 70 GB IDE hard disk drives (one comes with base unit).
򐂰 Two 10/100/1000 Copper Ethernet ports on planar.
򐂰 NetBay 1U Flat Panel Monitor Console Kit with keyboard.

15.2 Management console software


The master console requires that you obtain the following software:
򐂰 Operating system:
– The hardware master console is shipped with Windows Server 2003 preinstalled.
– The software master console requires that one of the following operating systems is
provided on your hardware platform:
• Microsoft Windows Server 2003 Enterprise Edition
• Microsoft Windows Server 2003 Standard Edition
• Microsoft Windows 2000 Server Edition with Service Pack 4 or higher
• Microsoft Windows 2000 with Update 818043
– To obtain this update:
i. Point your browser to this Web site:
http://v4.windowsupdate.microsoft.com/catalog/en/default.asp
ii. Click Find updates for Windows operating systems.
iii. Select Windows 2000 Professional SP4.
iv. Select Advanced search options.
v. In the field, Contains these words, enter 818043, and then click Search.
vi. Follow the directions on the Web site to download the update.
vii. After you have downloaded the update, you will need to navigate to the location
where the update was downloaded, and run the .exe file to install the update.
򐂰 Microsoft Windows Internet Explorer® version 6.0 with Service Pack 1.
򐂰 Antivirus software (not required but strongly recommended).
򐂰 J2SE™ Java™ Runtime Environment (JRE™) 1.4.2.
You can obtain JRE 1.4.2 by going to the following Web site and then clicking
Downloads, Java & Technologies, Java 2 Platform, Standard Edition 1.4, and then
Download J2SE JRE: www.sun.com/

For a complete and current list of the supported software levels for the master console, refer
to the SVC Support page at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html



15.3 Installation planning information for the master console
Take the following steps when planning the master console installation:
򐂰 Verify that the hardware and software prerequisites have been met.
򐂰 Determine the cabling required.
򐂰 Determine the Network IP address: A static IP address is required to use IBM Tivoli® SAN
Manager.
򐂰 Determine the master console host name.
򐂰 Determine the location in the rack where the master console is to be installed.
򐂰 Determine how the ports on the master console will be configured.
򐂰 Determine the switch zoning that will be used for the master console.

For detailed installation guidance, see the IBM TotalStorage Virtualization Family SAN
Volume Controller: Configuration Guide, SC26-7543 and IBM TotalStorage Master Console:
Installation and User’s Guide, GC30-4090 at:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html

15.4 Secure Shell overview


Secure Shell (SSH) is used to secure data flow between the SVC cluster (SSH server) and a
client (either a command line client (CLI), or the CIMOM). The connection is secured by the
means of a private/public key pair:
򐂰 A public key and a private key are generated together as a pair.
򐂰 A public key is uploaded to the SSH server.
򐂰 A private key identifies the client and is checked against the public key during the
connection. The private key must be protected.
򐂰 The SSH server must also identify itself with a specific host key.
򐂰 If the client does not have that host key yet, it is added to a list of known hosts.
򐂰 If the client already has a key related to that server's IP address:
– The client can overwrite the existing key, or
– The client can refuse the connection to allow checking of that key.

These mechanisms (public/private key pair and host key) are used so that each party is sure
about the identity of the other one, as shown in Figure 15-3.

Figure 15-3 SSH client/server: the client identifies itself with its public/private key pair, the server identifies itself with its host key, and the messages exchanged between them are encrypted
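
To make the key-pair mechanism concrete, here is a minimal sketch of generating a key pair and running a native SVC CLI command over the secured connection. It assumes an OpenSSH client (the master console itself ships PuTTY, whose puttygen and plink tools play the same roles); the key file name svc_key and the cluster IP address are placeholders:

# Generate an RSA public/private key pair (creates svc_key and svc_key.pub)
ssh-keygen -t rsa -f svc_key -N ""
# After uploading svc_key.pub to the cluster as an administrator key,
# run an SVC CLI command over SSH using the private key:
ssh -i svc_key admin@<cluster_ip_address> svcinfo lscluster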



The communication interfaces are shown in Figure 15-4.

Figure 15-4 Communication interfaces: the CLI client and the ICAT CIM agent on the master console communicate with the SVC cluster (2-4 node pairs, Linux-based kernel) over Ethernet using the native SVC CLI secured by SSH; the Web browser and ICAT GUI clients communicate with the ICAT CIM agent using xmlCIM over HTTP

15.4.1 Uploading SSH public key(s) sample scenarios


For each SVC cluster to be managed by the SVC master console, the SSH public key must
be uploaded from the master console. A public key must also be uploaded from every other
system that requires access to each new SVC cluster.

Perform this task using a Web browser. This same information is included in the IBM
TotalStorage Virtualization Family SAN Volume Controller: Configuration Guide, SC26-7543.

Important: If the SSH public key from a specified server is not stored on a particular SVC cluster, the SVC access software cannot connect to that cluster from that server.

Here is a summary of the main steps:


1. Start the browser to access the SVC console.
2. Log onto the SVC console using the superuser account and password.
3. Identify the SVC cluster to the SVC master console.
4. Store the SSH public key on the SVC cluster.
5. Launch the secondary browser window to manage the selected cluster.

The detailed procedure follows:


1. Start a browser and log onto the server on which the SVC console is installed by pointing
to the uniform resource locator (URL):
http://<masterconsoleipaddress>:9080/ica
2. Log onto the SAN Volume Controller Console using the superuser account and password.



3. Identify the SVC clusters to the SVC master console. The steps required depend on the
current status of the cluster to be configured:
– SVC cluster which has not yet been initialized: If an SVC cluster has not yet been
created using the front panel of the SVC cluster, that phase of the cluster creation will
need to be performed first. See 5.3.1, “Creating the cluster (first time) using the service
panel” on page 103. A special password is displayed on the SVC front (service) panel
for 60 seconds to be used in later steps to initialize the SVC master console.
After completing the first phase to create the SVC cluster using the front panel of an
SVC node, the next step required is to complete the creation of the cluster by using the
SVC console native Web interface, as described in 5.4, “Completing the initial cluster
setup using the SAN Volume Controller Console GUI” on page 106.
– Previously initialized SVC cluster: If the SVC cluster has completed the initialization
(creation) process but is not yet registered with the SVC console, log on with the
superuser ID and password, select Add Cluster from the list in the SVC Welcome
page to add the cluster. Enter the IP address of the cluster to be added but do not
select the Create (Initialize) Cluster check box, which is above the OK button. When
you click the OK button, the system displays the page to provide the SSH public key for
upload to the cluster. Step 4 continues with the SSH key input description.
As part of this process, the program prompts you to enter the network password. Type the admin user name and the password that is configured for the cluster.
4. Store the SSH public key on the SAN Volume Controller cluster. Each key is associated
with an ID string that is user-defined and that can consist of up to 30 characters. Up to 100
keys can be stored on a cluster. Keys can be added to provide either administrator access
or service access.
5. Launch the secondary browser window to manage the new cluster. Select the specific
cluster to be managed and then launch the browser window specifically for that cluster.
a. Under the My Work section of the browser window, click Clusters. A new view is
displayed in the work area (main frame).
b. In the Select column, select the check box to the left of the cluster to be managed.
Select Launch the SAN Volume Controller Application from the drop-down menu in the
work area and click Go. A secondary browser window opens to the SVC application to
work with the specific SVC cluster which was selected. Notice the ClusterName
parameter in the browser location URL, which identifies the IP address of the cluster
currently being managed, as shown here:
http://9.43.147.38:9080/ica/Console?Console.loginToken=79334064:f46d035f31:-7ff1
&Console.ClusterName=9.43.225.208

There is an issue with Windows registration of an SSH key when a cluster is deleted and then
recreated with the same IP address. When a cluster definition is added to the ICAT for
management, SVC will send a host key to the master console. This host key is saved in the
Windows registry. If a cluster is deleted and another cluster is created with the same IP
address, SVC will again send a host key to the master console.

Since a key for this IP address is already saved, the Windows registry is not updated with the
new key and the cluster cannot be managed from the ICAT. This is for security reasons since
the console erroneously detects that another device is attempting to access it. The
workaround is to delete the host key from the registry after deleting the cluster and before the
new cluster is recreated. There is a function (Reset SSH Fingerprint) provided in the
drop-down list to correct this situation. This is not an issue with the command line SSH, since
you are prompted to overwrite the host key.



To establish an SSH connection to the cluster, the master console stores the host key sent by the SVC under the following registry path: HKEY_USERS\.DEFAULT\Software\SimonTatham\PuTTY\SshHostKeys. The name of the registry value is rsa2@22:cluster_IP_address.

The reset function fixes the registry to use the correct host key sent from the SVC.
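
If you ever need to clear a stale host key by hand instead of using the Reset SSH Fingerprint function, the registry value can be deleted from a Windows command prompt. This is a hedged sketch: reg.exe is standard on Windows Server 2003, and the cluster IP address 9.43.225.208 is a placeholder taken from the URL example above:

reg delete "HKEY_USERS\.DEFAULT\Software\SimonTatham\PuTTY\SshHostKeys" /v [email protected] /f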

15.5 Upgrading the Master Console


This section takes you through the steps to upgrade your existing SVC Master Console. You
can also use these steps to install a new Master Console on another server. It is assumed
that you already received your SVC Master Console 4.1 installation/upgrade CD.

Proceed as follows:
1. When you insert the CDROM, the autorun should start and present you with the
installation wizard as shown in Figure 15-5.

Figure 15-5 Inserting the Upgrade CDROM



2. Click Installation wizard to start the setup wizard. This first screen (as shown in
Figure 15-6) will ask you to:
– Shut down any running Windows programs.
– Review the README file on the installation CD.

Once you are ready, click Next.

Figure 15-6 Launching the upgrade wizard

3. The installation should detect your existing SVC Master Console installation (if you are
upgrading). If it does, it will ask you to:
– Select Preserve Configuration if you want to keep your old configuration.
(You should make sure that this is checked).
– Manually shut down the Master Console services, namely:
• IBM CIM Object Manager - SVC
• Service Location Protocol
• IBM WebSphere® Application Server V5 - SVC
– We also shut down the IBM WebSphere Application Server V5 - ITSANM Manager service, just to be sure that there were no conflicts during the installation.
This can be seen in Figure 15-7.



Figure 15-7 Product Installation Check
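
If you prefer to stop these services from a command prompt rather than from the Services panel, the net stop command accepts the display names listed above. A sketch, assuming the display names match your installed release exactly:

net stop "IBM CIM Object Manager - SVC"
net stop "Service Location Protocol"
net stop "IBM WebSphere Application Server V5 - SVC"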

Important: If you want to keep your SVC configuration, then make sure you check the
Preserve Configuration check box. If you omit this, you will lose your entire SVC console
setup, and will have to reconfigure your console as if it was a fresh install.



4. The installation wizard will then check that appropriate services are shut down and take
you to the Installation Confirmation screen as shown in Figure 15-8. If the wizard
detects any problems, it first shows you a page detailing the possible problems, giving you
time to fix them before proceeding.

Figure 15-8 Installation Confirmation

5. The progress of the installation is shown in Figure 15-9. For our environment it took
approximately 20 minutes to complete.

Figure 15-9 Installation progress



6. On successful completion the wizard will restart all the appropriate SVC Master Console
processes and then give you a summary of the installation (shown in Figure 15-10).

Figure 15-10 Installation finished

7. Finally, to see what the new interface looks like, launch the SVC Master Console by using the icon on the desktop; log in and confirm that the upgrade was successful by noting the Console Version number on the right-hand side of the screen (under the graphic). See Figure 15-11.

Figure 15-11 Launching the upgraded SVC Master Console

You have now completed the upgrade of your SVC Master Console.

15.6 Call Home (service alert)


The SVC makes all error reports available to network managers using industry-standard error reporting techniques. Call Home is supported by providing the error data needed by an external call home agent. The necessary machine information to generate a call home record is sent to network management software via an SNMP trap.



The SVC uses e-mail to RETAIN® for Call Home. The agent that receives the SNMP trap and generates the e-mail in the correct format is IBM Director. The e-mail to RETAIN Call Home system works by sending an e-mail to an IBM catcher machine, which reformats the data based on the machine type and model information and then opens a fault call within the RETAIN system.

The e-mail address for Call Home for the United States and Canada is:
[email protected]

For all other countries, it is:
[email protected]

To configure IBM Director settings for the SVC Call Home feature, complete these steps:
1. To launch the IBM Director, click the IBM Director icon on the SVC console desktop area.
2. Log in using the master console Windows user ID and password.
3. After IBM Director becomes active, select Options → Discovery Preferences as shown
in Figure 15-12.

Figure 15-12 IBM Director Discovery Preferences



4. The Discovery Preferences window (Figure 15-13) opens. Select the SNMP Discovery
tab. Change the entry to the IP address of the master console and the switches using a
subnet mask of all zeros. Then click Add or Replace.

Figure 15-13 Network addresses for SNMP discovery

5. With the Director console application launched, select Task → Event Action Plan
Builder, as shown in Figure 15-14.

Figure 15-14 Director Action Plan Builder



6. On the Event Action Plan Builder window (Figure 15-15), in the Actions panel, select Send
an Internet (SMTP) E-mail. Then right-click 2145 and select Update.

Figure 15-15 Updating the 2145 event

7. The Customize action:2145 window (Figure 15-16) opens. Complete the following items:
– Internet E-mail address to which the SVC alerts are sent: type the IBM Retain e-mail
address:
[email protected] for the U.S.A. and Canada clients
[email protected] for all other countries
– Reply to: Type the e-mail address to which any replies should be directed.
– SMTP E-mail server: type the address of the e-mail (SMTP) server.
– SMTP Port: change this, if required, to the SMTP server port number.
– Subject of E-mail Message: type 2145 Error Notification.
– Body of the E-mail Message: type the following information:
• Contact name
• Contact Phone number
• Off-shift phone number
• Machine location
• Record Type = 1



Figure 15-16 Customize window

Important: Do not change the line with text Record Type = 1.

15.7 Master console summary


This chapter has presented an overview of the SVC master console hardware and software.
For customers using IBM Tivoli SAN Manager (ITSANM), many specific examples of its
configuration and typical usage are provided. Topics covered in detail by the master console
Installation and User’s Guide are then listed. These topics include installation planning,
installation, configuration, and management functions. An overview of the SSH application is
provided, with two sample scenarios for uploading the SSH public key to the SVC from the
master console. Finally, Call Home (or service alert) configuration steps are described in
detail.




Appendix A. Copy services and open systems


In this appendix we describe the basic tasks that you need to perform on the individual host
systems when using IBM System Storage SAN Volume Controller (SVC) Copy Services. We
explain how to bring FlashCopy target vpaths online to the same host as well as to a second
host. This appendix covers AIX and Windows.

AIX specifics
In this section we describe what is necessary to use Copy Services on AIX systems.

AIX and FlashCopy


The FlashCopy functionality in SVC Copy Services makes available the entire contents of a
source virtual disk (VDisk) to a target VDisk. If the source VDisk is defined to the AIX Logical
Volume Manager (LVM), all of its data structures and identifiers are copied to the target VDisk
as well. This includes the Volume Group Descriptor Area (VGDA), which contains the
physical volume identifier (PVID) and volume group identifier (VGID).

For AIX LVM, it is currently not possible to activate a volume group with a disk that contains a
VGID in the VGDA and a PVID that is already used in an active volume group on the same
server, even if the PVID is cleared and reassigned with the following two commands:
chdev -l <vpath#> -a pv=clear
chdev -l <vpath#> -a pv=yes

Therefore, it is necessary to redefine the volume group information on the FlashCopy target
using the recreatevg command. Refer to “AIX recreatevg command” on page 666 of this
redbook for further details. This alters the VGID on the VGDA of the FlashCopy target so that
there are no conflicts with existing VGIDs on active volume groups. If you do not redefine the
volume group information prior to importing the volume group, then the importvg command
fails.

Accessing FlashCopy target on another AIX host


Accessing a FlashCopy target on another AIX host poses some problems. As a result of copying the entire VDisk contents, all the data structures and identifiers used by the LVM are also copied, but only those located on the specified VDisks. If you FlashCopy only one half of the volumes (hdisks) that are part of a mirror, 50% of the VDisks in the VGDA would be missing. Therefore, it is necessary to either turn off quorum for the volume group that you are FlashCopying, or force the vary on of the volume group on the target VDisks.

Important: When using FlashCopy, make sure to FlashCopy all volumes in the volume group, and at the same time. This can be done by using FlashCopy Consistency Groups.

The following procedure makes the data of the FlashCopy target VDisk available to another AIX host that has no prior definitions in its configuration database:
1. The target volume (VDisk) is new to AIX. Therefore the Configuration Manager should be
run on all Fibre Channel adapters:
cfgmgr -l <host_adapter>
2. When using Subsystem Device Driver (SDD), the cfallvpath command should be run to
make the vpaths for the new volumes discovered:
cfallvpath

Note: If you just execute cfgmgr or cfgmgr -vS, it discovers the new disks and makes the vpaths automatically. However, because the cfgmgr command walks the entire server configuration, it can take a while; the cfgmgr -l command can therefore be the fastest way to discover the new disks.



3. Check which new vpath is on the host; this is your FlashCopy target:
lspv | grep vpath
4. Import the volume group:
importvg -y <volume_group_name> <vpath#>
5. The importvg command should vary on the volume group; if it does not, use the varyonvg command:
varyonvg <volume_group_name>
6. Verify the access to the volume group using:
lqueryvg -Atp <vg_name>
7. Verify that all the paths are working fine using:
datapath query device
8. Verify consistency of all file systems on the FlashCopy target:
fsck -y <filesystem_name>
9. Mount all the target file systems:
mount <filesystem_name>

The data is now available. You can create a backup of the data on the FlashCopy volume to a
tape device, or use the data for other purposes, such as testing software updates. This
procedure can be run after the relationship between FlashCopy source and target is
established, even if data is still being copied from the source to the target in the background.
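
The whole sequence can be strung together as a small script. The following sketch repeats the commands from the steps above; the adapter name fcs0, the volume group name fc_target_vg, the vpath number, and the file system name are placeholders that you must replace with your own:

cfgmgr -l fcs0                    # discover the new disks on the Fibre Channel adapter
cfallvpath                        # build the vpaths for the new disks (SDD)
lspv | grep vpath                 # identify the new vpath, for example vpath4
importvg -y fc_target_vg vpath4   # import the volume group
varyonvg fc_target_vg             # vary it on if importvg did not
fsck -y /filesystem_name          # verify consistency of each file system
mount /filesystem_name            # and mount it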

It might be the case that the disks containing the target VDisk were previously defined to the
target AIX system, for example, if you periodically do backups from the same VDisk. In this
case, perform the following actions on the target system:
1. Unmount all file systems in the target volume group:
umount <tgt_filesystem>
2. Vary off the volume group:
varyoffvg <tgt_volume_group_name>
3. Export the volume group:
exportvg <tgt_volume_group_name>
4. Delete the target volumes and vpath:
rmdev -dl <hdisk#>
rmdev -dl <vpath#>
5. Perform the FlashCopy to the target volumes.
6. The target volume (VDisk) is new to AIX. Therefore, the Configuration Manager should be
run on all Fibre Channel adapters:
cfgmgr -l <host_adapter>
7. When using Subsystem Device Driver (SDD), the cfallvpath command should be run to
make the vpaths for the new volumes discovered.
cfallvpath
8. Check which new vpath is on the host; this is your FlashCopy target:
lspv | grep vpath
9. Import the volume group:
importvg -y <volume_group_name> <vpath#>



10.The importvg command should vary on the volume group; if it does not, use the varyonvg command:
varyonvg <volume_group_name>
11.Verify the access to the volume group using:
lqueryvg -Atp <vg_name>
12.Verify that all the paths are working fine using:
datapath query device
13.Verify consistency of all file systems on the FlashCopy target:
fsck -y <filesystem_name>
14.Mount all the target file systems:
mount <filesystem_name>
15.Perform tasks as though the volumes were new to the system as previously described.

Accessing FlashCopy source and target on the same AIX host


This section describes a method to access the FlashCopy target volume on a single AIX host
while the source is active on the same server. The procedure is intended to be used as a
guide and might not cover all scenarios.

AIX recreatevg command


Point-in-time (PiT) copying of a source volume's content in all cases causes all of the data
structures and identifiers used by the AIX LVM to be duplicated to the target volume. The
duplicate definitions, in turn, cause conflicts within LVM. Until recently, none of the existing LVM commands had the capability to access the logical volumes or file systems on the target disk. This problem is now solved by the AIX recreatevg command.

The recreatevg command is packaged as a PTF for AIX 4.3.3 in APAR IY10456 and higher.
It is officially available in:
򐂰 AIX 4.3.3 Recommended Maintenance Level 05 (RML05) or higher
򐂰 AIX 5L

The recreatevg command overcomes the problem of duplicated LVM data structures and
identifiers caused by a disk copying process such as FlashCopy. It is used to recreate an AIX
volume group (VG) on a set of disks that are copied from a set of disks belonging to a specific
volume group. The command allocates new PVIDs for the member disks and a new VGID to
the volume group. The command also provides options to rename the logical volumes with a
prefix you specify, and options to rename “labels” to specify different mount points for file
systems.

Here is the AIX man page synopsis:


recreatevg [-y VGname] [-p] [-f] [-Y lv_prefix | -l LvNameFile] [-L label_prefix] [-n] \
PVname...

You can use this command to recreate a volume group on a set of disks that are mirrored from a set of disks belonging to a specific volume group. This command allocates new PVIDs for the member disks, since the PVIDs are also duplicated by the disk mirroring. Similarly, other LVM logical members that are duplicated are also changed to new names with the specified prefixes.



Note the following flags:
򐂰 -y VGname specifies the volume group name rather than having the name generated
automatically. Volume group names must be unique system wide and can range from one
to 15 characters. The name cannot begin with a prefix already defined in the PdDv class in
the device configuration database for other devices. The volume group name that is
created is sent to standard output.
򐂰 -p disables the automatic generation of the new PVIDs. If a -p flag is used, you must
ensure that there are no duplicated PVIDs on the system. All the disks that were hardware
mirrored must have had their PVIDs changed to a unique value.
򐂰 -Y lv_prefix causes the logical volumes on the volume group being recreated to be renamed with this prefix. The total length of the prefix and the logical volume name must be less than or equal to 15 characters. If the length exceeds
15 characters, the logical volume is renamed with the default name. The name cannot
begin with a prefix already defined in the PdDv class in the device configuration database
for other devices, nor can it be a name already used by another device.
򐂰 -L label_prefix causes the labels of logical volumes on the volume group being recreated to be changed with this prefix. The user must modify the /etc/filesystems stanza manually if a simple modification of the mount point is not enough to define the stanza uniquely.
򐂰 -l LvNameFile entries in the LvNameFile must be in the format LV1:NEWLV1. After
recreatevg, LV1 is renamed with NEWLV1. All the logical volumes that are not included in
the LvNameFile will be recreated with the default system generated name.
򐂰 -f allows a volume group to be recreated that does not have all disks available.
򐂰 -n: After recreatevg, the volume group is imported but varied off. The default is imported
and vary on.

Note the following points:


򐂰 To use this command, you must have root user authority.
򐂰 All the physical volumes (hdisk) of the volume group must be specified on the command
line. The command fails if the input list does not match with the list compiled from the
VGDA.
򐂰 If you perform a Copy Services function on one half of a RAID-1 pair to reduce the
capacity required for FlashCopy targets or Metro Mirror secondary volumes, then use the
-f option to force the creation of the volume group. Otherwise the VGDA has PVIDs of
volumes that made up the other half of the mirror at the source or primary site.
򐂰 In some situations the volume group is imported or recreated on the hdisks. In this case SDD has no effect on the volume group access. To switch your VG PVID from hdisk to vpath, use the command:
dpovgfix <vg_name>
or
hd2vp <vg_name>

An example of accessing a FlashCopy target on the same host using recreatevg


If we want to mount the FlashCopy target volume on the same AIX host where the source volume is located, we have to use the recreatevg command. For example, we have a volume group that contains two volumes (vpaths). We want to FlashCopy the volumes for the purpose of creating a backup. To achieve this, we must have two target VDisks available, of equal size to or greater than the sources, in the same SVC cluster.

In this example, the source volume group is fc_source_vg containing vpath0 and vpath1, and
the target volume group is fc_target_vg containing vpath2 and vpath3.



Perform the following tasks to create the FlashCopy and make the target volumes available to
AIX:

1. Stop all applications that access the FlashCopy source volumes.


2. Unmount all related file systems for the short period of FlashCopy establishment.
3. Establish the FlashCopy pairs with the copy parameter set to zero, if you don’t want all the data to be physically copied.

Attention: If you use the copy parameter set to zero, and you lose your source VDisk, you
will not be able to restore data from the target VDisk, because no “real” data is flashed
from the source VDisk to the target VDisk, only the pointers.

4. Mount all related file systems, if previously unmounted.


5. Restart applications that access the FlashCopy source volumes.
6. The target vpath2 and vpath3 now have the same volume group data structures as the
source vpath0 and vpath1. Clear the PVIDs from the target vpaths to allow a new volume
group to be made:
chdev -l vpath2 -a pv=clear
chdev -l vpath3 -a pv=clear
The output from lspv shows the result. See Figure A-1.

# lspv
vpath0 000c309d6de458f3 fc_source_vg active
vpath1 000c309d6de45f7d fc_source_vg active
vpath2 none None
vpath3 none None

Figure A-1 lspv after pv=clear

7. Create the target volume group and prefix all file system path names with /backup and
prefix all AIX logical volumes with bkup:
recreatevg -y fc_target_vg -L /backup -Y bkup vpath2 vpath3
Specify the vpath names of all disk volumes participating in the volume group. The
output from lspv illustrates the new volume group definition. See Figure A-2.

# lspv
vpath0 000c309d6de458f3 fc_source_vg active
vpath1 000c309d6de45f7d fc_source_vg active
vpath2 000c309d6e021590 fc_target_vg active
vpath3 000c309d6e021bf6 fc_target_vg active

Figure A-2 Recreated FlashCopy target volumes

An extract from /etc/filesystems shows how recreatevg generates a new file system
stanza. The file system named /u01 in the source volume group is renamed to
/backup/u01 in the target volume group. Also, the directory /backup/u01 is created. Notice
also that the logical volume and JFS log logical volume are renamed. The remainder of
the stanza is the same as the stanza for /u01. See Figure A-3.



/backup/u01:
dev = /dev/bkupelv001
vfs = jfs
log = /dev/bkupelvlog001
mount = true
check = false
options = rw
account = false

Figure A-3 Target file system stanza

8. Mount the new file systems that belong to the target to make them accessible.
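
As a minimal illustration (assuming the /backup label prefix chosen above), the recreated file systems are mounted and verified like this:

mount /backup/u01
df -k /backup/u01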

AIX and Metro Mirror


When you have Metro Mirror primary and secondary volumes in a copy pair relationship,
it is not possible to read from the secondary volume. Therefore, if you need the secondary
volumes mounted onto a target server, it is necessary to terminate the copy pair relationship.
When using Metro Mirror, you have an active set of volumes, the primary volumes, to which all updates and changes are made. These changes are then copied to the secondary volumes by the SVC; writes to the secondary volumes are not permitted from any host. In an AIX environment, when you mount a volume, the operating system automatically performs a write to the volume, so it is not possible to have both the primary and the secondary volumes mounted at the same time on any host.

When the volumes are in a simplex state, the secondary volumes can be configured (cfgmgr) into the target system’s customized device class (CuDv). To get the volume group information onto the target server, the copy pair relationship has to be stopped and the cfgmgr command executed on the target system, or the procedure described in “Making updates to the LVM information” on page 670 has to be performed. Because these volumes are new to
the system, there is no conflict with existing PVIDs. The volume group on the secondary
volumes containing the LV and file system information can now be imported into the Object
Data Manager (ODM) and the /etc/filesystems file using the importvg command.

If the Metro Mirror secondary volumes were previously defined on the target AIX system as
hdisks or vpaths, but the original volume group information was destroyed on the volumes,
you must remove the old volume group and disk definitions (using exportvg and rmdev) and
redefine (cfgmgr) them before running the importvg command to attain the new volume group
definitions. If this is not done first, the importvg command imports the volume group
improperly and the file systems are not accessible.

When you execute the lsdev -Cc disk command, you observe that the state of the original
Metro Mirror secondary volumes becomes Defined during reboot.

It is important to execute both the lspv and lsdev commands back-to-back, so that you can
be certain which disks are the phantoms. From the lspv output, the phantom disks have no
PVIDs and are not assigned to a volume group. From the lsdev output, the phantom is in a
Defined state. The original disks have PVIDs, are assigned to a volume group, and are
marked in an Available state.

Run the following command on each phantom disk device in order to remove the phantom
hdisks from the configuration database:
rmdev -dl <hdisk_name> -R

Set the original hdisks and vpath to an Available state with the mkdev command.



You can reactivate the volume group using the varyonvg command and mount its file systems. While you do this, the copy pair relationship has to be stopped.

Making updates to the LVM information


When performing Metro Mirror between primary and secondary volumes, the primary AIX
host might create, modify, or delete existing LVM information from a volume group. When in a
Metro Mirror relationship the secondary volume is not accessible, and the LVM information on
the secondary AIX host is out-of-date. Therefore, you need to get the secondary AIX host
updated every time you make changes to the LVM information.

If you don’t want to have scheduled periods where write I/Os to the primary Metro Mirror volume are quiesced and file systems unmounted (so the copy pair relationship can be terminated and the secondary AIX host can perform a learn on the volume group with importvg -L), you can instead execute the varyonvg command on the primary AIX host for the volume group you have changed; this removes the SCSI lock on the volumes in the SVC. The parameters needed for the varyonvg command are -b -u. Thereafter, execute the importvg -L command on the secondary AIX host. After the LVM changes are updated on the secondary AIX host, execute the varyonvg command on the primary AIX host again to reactivate the SCSI lock on the volumes on the SVC.

The importvg -L <volumegroup> command takes a volume group and learns about possible
changes performed to that volume group. Any new logical volumes created as a result of this
command emulate the ownership, group identification, and permissions of the /dev special
file for the volume group listed in the -y flag. The -L flag performs the functional equivalent of
the -F and -n flags during execution.

Restrictions:
򐂰 The volume group must not be in an active state on the system executing the -L flag.
򐂰 The volume group’s disks must be unlocked on all systems that have the volume group
varied on and operational. Volume groups and their disks might be unlocked, remain
active, and used via the varyonvg -b -u command.
򐂰 The physical volume name provided must be in a good and known state; the disk named must not be in the missing or removed state.
򐂰 If an active node has both added and deleted logical volumes on the volume group, the -L
flag might produce inconsistent results. The -L flag should be used after each addition or
deletion, rather than being deferred until after a sequence of changes.
򐂰 If a logical volume name clash is detected, the command will fail. Unlike the basic
importvg actions, clashing logical volume names will not be renamed.

Here is an example of how to use the -L flag on a multi-tailed system:


򐂰 Primary AIX node A has the volume group datavg varied on.
򐂰 Secondary AIX node B is aware of datavg, but it is not varied on.
򐂰 Primary AIX node A: varyonvg -b -u datavg
򐂰 Secondary AIX node B: importvg -L datavg hdisk07
򐂰 Primary AIX node A: varyonvg datavg

In this case, datavg is the name of the volume group, and hdisk07 is the physical volume (PV) that contains the volume group.

If you prefer to have scheduled periods where write I/Os to the primary Metro Mirror volume can be quiesced and file systems unmounted, then after the updates are read into the secondary AIX host’s ODM, you can re-establish the Metro Mirror copy pair.



Windows NT and 2000/2003 specifics
This section describes the tasks that are necessary when performing Copy Services
operations on volumes owned by Microsoft Windows NT and 2000/2003 operating systems.

Windows NT and Copy Services


This section explains the actions that you need to perform on Metro Mirror and FlashCopy
volumes owned by Microsoft Windows NT operating systems.

Windows NT handles disks in a way that is unlike any other operating system covered in this book. The need to reboot a server to scan for new disks and the need to run a
GUI-based Disk Administrator to manipulate the disks are the main factors that restrict the
routine use of Metro Mirror and FlashCopy and make automation virtually impossible. It is
possible to automate the actions of the GUI-based Disk Administrator using third-party
software to remotely reboot the server. It is also possible to remotely assign the drive letter
from the server that starts the Copy Services task. This was not tested during our project.

If you are going to create an automated script with Windows NT, you need to be careful about
data consistency. It could be that some part of the automation process might run a script on a
source server, and subsequent actions might be taken by a script on a target server.
Therefore, interprocess communication across servers might be required for timing.
Otherwise, you might get inconsistent data. Not all applications allow this.

You have two options on how to make the Metro Mirror or FlashCopy target available to the
server: with reboot or without reboot. We recommend that you reboot the server. It is safer
because then it is guaranteed that all the registry entries are created. However, using Metro
Mirror or FlashCopy without rebooting is faster.

Registering the Metro Mirror and FlashCopy volumes to Windows NT


If you are going to reboot the server, you do not have to make the target disks known to
Windows NT before you do the Metro Mirror or FlashCopy. However, we recommend that you
preassign and register them in the server. The “assign disk and run Metro Mirror or
FlashCopy” approach is useful for a non-routine Metro Mirror or FlashCopy, for example, for
testing or migration.

For routine purposes, we recommend that you have target disks already present in Disk
Administrator with partitions created and partition information saved. Click Start →
Programs → Administrative Tools → Disk Administrator. Then follow these steps:
1. If the target disk was not previously seen by the system, Disk Administrator issues a
pop-up message saying “No signature on Disk X. Should I write a signature?”,
where X is the number assigned to the newly present disk.
Click OK to save the signature on the target disk.
2. The Disk Administrator opens. Click the disk that is to be used as the Metro Mirror or
FlashCopy target (it should be gray and marked as free space) and select Create.
3. Confirm the partition parameters and click OK. The partition appears as Unknown.
4. Click the newly created partition and select Commit Changes Now.
5. Right-click the partition and select Assign Drive letter.
6. Assign a drive letter and click OK.
7. Exit Disk Administrator.



After this procedure, the Metro Mirror or FlashCopy target is properly registered in the
Windows NT.

Bringing down the target server


Bring down the server that will use the target if you want to use the safer method. Also keep in
mind that if you assign the volume to the host just before you perform the Metro Mirror or
FlashCopy, you must use the volume serial number for the target.

Performing a Metro Mirror or FlashCopy


Stop all applications using the source volume. Now flush the data to the source volume. Click
Start → Programs → Administrative Tools → Disk Administrator. Then follow these
steps:
1. Right-click the disk that is to be used as the Metro Mirror or FlashCopy source. It should
have a drive letter assigned and be formatted. Then select Assign Drive letter.
2. From the pop-up window, select Do not assign a drive letter and click OK.
3. Now the data is flushed to the source. You can start the Metro Mirror or FlashCopy task
from the SVC Copy Services Web Interface or from any server CLI.
4. Observe the GUI. Or enter the following command to see if the Metro Mirror or FlashCopy
task successfully started:
svcinfo lsfcmapprogress and svcinfo lsrcrelationshipprogress
5. Reassign the drive letter to the source volume. Right-click the disk that is a Metro Mirror or
FlashCopy source and select Assign Drive Letter.
6. Assign a drive letter and click OK.
7. Exit Disk Administrator.

You can resume using the source volume.

Bringing up the target server


Next you can boot up the target server. In this case, you just assigned the target volumes to the host, which creates the disk entry in the Windows NT registry. To verify that the registry
entry is created, complete these tasks:
1. Click Start → Settings → Control Panel → Hardware → Device Manager.
2. In Control Panel, double-click Disk Drives.
3. Click the adapter that has the target volume attached.
4. A list of targets opens. Verify the list includes the target ID and LUN of the volume you just
made available to the server. If you are using SDD, you see each disk entry several times
[(# of vDisks) x (# of Nodes) x (4 Ports/Node) x (# of HBAs/host)], which is the
number of paths to the volume that you have.

You can also run the datapath query device command from the SDD command line to
check whether the Metro Mirror or FlashCopy targets are listed between the volumes. This
command also enables you to check volume serial numbers and gives you a more
understandable overview of the volumes and their paths.



Making the Metro Mirror or FlashCopy target available
Log in, start the Windows NT Disk Administrator, write a signature if necessary (do not write a
signature if data was already copied into this volume), and assign a drive letter. To begin, click
Start → Programs → Administrative Tools → Disk Administrator. Then follow these
steps:
1. If the disk was not previously seen by this system, Disk Administrator issues the “No
signature on Disk X. Should I write a signature?” message, where X is the number
assigned to the newly present disk. Click OK to save the signature on the target disk.
2. The Disk Administrator opens. Click the disk that is a Metro Mirror or FlashCopy target.
You should see a formatted partition on it. Select Assign Drive Letter.
3. If you cannot assign a drive letter, the target might be corrupt. Try repeating the whole
process and consider the scenario that includes reboot.
4. Assign a drive letter and click OK. Exit Disk Administrator.
5. From a Windows NT command prompt, run the following command, where x is the letter
assigned to the Metro Mirror or FlashCopy target:
chkdsk x: /f /r
An option is to run the disk check from Properties of a disk in Windows NT Explorer.

After you complete this procedure, the Metro Mirror or FlashCopy target is available to Windows NT and can be handled like a normal disk.

Copy Services with Windows Volume Sets


This section explains how to perform Copy Services functions with Windows Volume Sets.

Copy Services with Windows NT Volume Sets


Both Metro Mirror and FlashCopy are supported when using normal disks and Volume Sets.
When using either Metro Mirror or FlashCopy with Volume Sets, because these outboard
copy features do not copy the Volume Set information in the Windows Registry, certain
limitations exist and a special procedure is required, as outlined below. After SP6, it is
possible to have the FlashCopy source and target volumes accessible by the same server.
Prior to SP6, the FlashCopy source and target volumes must be attached to different servers.
Metro Mirror primary and secondary volumes must be attached to different servers.

Procedure for using Metro Mirror and FlashCopy with Volume Sets
This special procedure is required to FlashCopy or Metro Mirror a Windows NT volume set.
The procedure can also be applied to other Windows NT fault tolerant disk configurations,
such as mirrored sets, striped sets, and striped sets with parity.

Consider the case where the target disks are in the same order as the source disks, and the
target disks are contiguous (that is all the disks are next to each other as viewed by the target
machine’s Disk Administrator). Then simply create an identical volume set on the target
machine and reboot prior to performing the FlashCopy. You do this before you perform
FlashCopy or Metro Mirror for the first time. Subsequent copies should work as expected,
provided that the file system is unmounted (the drive letter is unassigned) on the target prior
to performing a copy.

If the target disks do not appear contiguous to Windows NT or appear in a different order than
on the source machine, then a different procedure must be used. Microsoft’s FTEDIT,
available on the NT Resource Kit, is a Microsoft supported tool designed to write volume set
information into the registry. Using FTEDIT is much safer than editing the registry directly.



Important: Incorrect use of FTEDIT could result in loss of access to software RAID arrays.
We recommend that you use Disk Administrator to save your disk configuration before
using FTEDIT. In general, most errors made using FTEDIT are recoverable. For more
information about how to recover from FTEDIT errors, and on FTEDIT in general, see the
Microsoft Knowledge Base article for Q131658:
http://support.microsoft.com/default.aspx?scid=kb;en-us;131658

The following procedure explains how to use FlashCopy and Metro Mirror with FTEDIT.

Preparation
On the target machine, complete the following tasks:
1. Back up the disk data using Disk Administrator, and registry information using REGEDIT.
2. If the target disks were previously used, delete all of the target disks in Disk Administrator.
Do not simply unmount them, but delete all of the partitions on the target disks. Commit
the changes.
3. In the control panel, double-click Devices. Make sure that Ftdisk is started and set to start
on boot. Ftdisk is the driver used by Windows NT to identify and access fault tolerant
drives, such as volume sets. If there are any fault tolerant drives in use on the system,
Ftdisk is started and set to start on boot. If it is not started, one way to start it is to create a
fault tolerant drive on a couple of spare disks. This requires a reboot.

On the source machine, obtain the order in which the disks were added to the volume set.
One way to do this is to use a freeware utility called diskkey.exe, available from:
http://www.sysinternals.com

This utility is not supported by IBM and is known to report disk numbering and other
information that is different than what Disk Administrator reports. However, the order in which
the disks are included in the volume set is correct. Also the correct ordering of the disks is the
information required to create a duplicate volume set on the target server.

Map the disks on the source machine to the disks on the target machine. For example,
determine that Disk6 on the source is FlashCopy copied to Disk9 on the target.

Performing the Metro Mirror and FlashCopy


On the target machine, follow these steps:
1. Run the FlashCopy establish or Metro Mirror terminate tasks.
2. Start Disk Administrator. If it asks you to write a signature on any of the disks, click No
(except in the special cases, see the following Important box). After Disk Administrator is
up, commit the changes (this is very important), and close Disk Administrator.



Important: Disk Administrator asks to write a signature when the FlashCopy is
performed to the same machine because it detects a duplicate disk signature (the
source and target volumes have the same disk signature) and needs to write a new
one. It is safe to do this, but be sure that you are writing the signature to the FlashCopy
target disk. If a signature is written to the wrong disk, it might cause data corruption.

When FlashCopying to a different machine, usually the disk signature on the target
machine's disks are different than the FlashCopy source disks’ signature, so Disk
Administrator does not need to write a new signature to the target disks to use it. It is
unlikely, but possible, that by coincidence the disk signature of one of the source disks is the same as that of one of the disks on the target machine. In this case, you must write a
signature on the target disk before you use it. Again, it is safe to do this, but be sure
that you are writing the signature to the right disk.

3. Start FTEDIT. Select Start → Resource Kit 4.0 → Disk Tools → Fault Tolerance
Editor.
4. Read the warning and click OK.
5. There are two panes in the FTEDIT window. On the left pane is a list of the disks in the
system. On the right pane is the list of partitions on that disk. You must add the disks to
the volume set in the right order. Use the results of diskkey.exe to determine the order in
which the disks were added on the source volume set.

Note: If active Metro Mirror target volumes are on the target, then the disk numbering
used in FTEDIT might differ from the disk numbering used in the Disk Administrator.
The Metro Mirror target volumes are not seen by FTEDIT and are not included in the
disk numbering scheme. Adjust your disk choices accordingly.

6. Click Make FT set in the lower left corner.


7. When it asks you what kind of set you want, choose Volume set and click OK.
8. Click the first target disk in the left pane.
9. The list of partitions on that disk should appear in the right pane. Choose the partition that
contains the volume set on that disk (usually Partition 1). Double-click Partition 1 in the
right pane. This adds this disk or partition to the volume set, in order.
10.Repeat Steps 8 and 9 for the rest of the disks. If you make a mistake, you can cancel and
start from scratch. The disks must be added in the correct order.
11.After you add all of the disks, click Save FT set at the bottom.
12.Click Edit → Save Changes to System.
13.Close Ftedit.
14.Reboot the system.
15.When Windows NT restarts, start Disk Administrator. The target disks should be yellow
now, indicating that they are in a volume set. Assign a drive letter and commit the
changes. If the drives are not usable at this point, then the disks were probably added in
the wrong order.

As long as the disk configuration does not change on the source or target, FlashCopy should
work as expected. If the disk configuration is changed in any way, such as adding an
additional disk to the volume set or rearranging the disks, then you have to perform this
procedure again.



Windows 2000/2003 and Copy Services
Windows 2000/2003 handles its disks differently than Windows NT does. Windows
2000/2003 incorporates a stripped-down version of the Veritas Volume Manager, called the
logical disk manager (LDM).

With the LDM, you can create logical partitions, perform disk mounts, and create dynamic
volumes. There are five types of dynamic volumes: simple, spanned, mirrored, striped, and
RAID-5.

On Windows NT, the information relating to the disks was stored in the Windows NT registry.
With Windows 2000, this information is stored on the disk drive itself in a partition called the
LDM database, which is kept on the last few tracks of the disk. Each volume has its own
128-bit Globally Unique Identifier (GUID). This is similar to the disk PVID in AIX. Since the
LDM is stored on the physical drive itself, with Windows 2000, it is possible to move disk
drives between different computers.

Copy Services limitations with Windows 2000 and Windows 2003


Having the drive information stored on the disk itself imposes some limitations when using
Copy Services functionality on a Windows 2000/2003 system:
򐂰 The source and target volumes must be of the same physical size for two reasons:
– The LDM database holds information relating to the size of the volume. Since this is
copied from the source to the target, if the target volume is a different size from the
source, then the database information is incorrect, and the host system returns an
exception.
– The LDM database is stored at the end of the volume. The copy process is a track-by-track copy; unless the target is an identical size to the source, the database is not at the end of the target volume.
򐂰 It is not possible to have the source and target FlashCopy Volume on the same Windows
2000/2003 system when they were created as Windows 2000/2003 dynamic volumes.
The reason is that each dynamic volume must have its own 128-bit GUID. As its name
implies, the GUID must be unique on one system. When you perform FlashCopy, the
GUID is copied as well. This means that if you tried to mount the source and target volume
on the same host system, you would have two volumes with exactly the same GUID. This
is not allowed, and you are not able to mount the target volume.

Copy Services on basic volumes


Basic disks are the same as the Windows NT disks, with the same restrictions. Dynamic disks are supported for both Metro Mirror and FlashCopy, and the primary source and secondary target volumes must be attached to different servers. For basic disks, on the other hand, it is possible to attach the secondary target volume to the same server. In the following steps we show how to make a FlashCopy on a basic disk and mount it on the same server where the source disk is mounted.



Before making any FlashCopy on the Windows server, we have two disks, the internal C drive
and an SVC disk W: (see Figure A-4).

Figure A-4 Windows server before adding FlashCopy target disk

To make a FlashCopy on Windows 2000/2003, we need to make a VDisk that we will use as the FlashCopy target, and the VDisk needs to be exactly the same size as the source VDisk. To ensure that the size is exactly the same, we first list the source VDisk on the SVC using the parameter -bytes, which gives the precise size. If we do not use the -bytes parameter, the size is reported in GB, and 8 GB is not an exact size; it can be a rounded value. We create a new VDisk of the same size and give it the name W2K3_2_tgt, also specifying the size unit in bytes (-unit b):
IBM_2145:ITSOsvc1:admin>svcinfo lsvdisk -bytes W2K3_2_1
id 20
name W2K3_2_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG2_DS43
capacity 8589934592
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680189801B2000000000000019
throttling 0



preferred_node_id 12
fast_write_state not_empty

IBM_2145:ITSOsvc1:admin>svctask mkvdisk -mdiskgrp 2 -iogrp 0 -size 8589934592 -unit b -name


W2K3_2_tgt
Virtual Disk, id [24], successfully created

After creating the target VDisk, we define the FlashCopy mapping on the SVC and start the actual FlashCopy process.
IBM_2145:ITSOsvc1:admin>svctask mkfcmap -source W2K3_2_1 -target W2K3_2_tgt -name basic_fc
-copyrate 90
FlashCopy Mapping, id [2], successfully created
IBM_2145:ITSOsvc1:admin>svctask startfcmap -prep basic_fc

We now have a point-in-time copy of the source volume. Next, we map the target VDisk to the same Windows host and perform a rescan on the server.
IBM_2145:ITSOsvc1:admin>svctask mkvdiskhostmap -host W2K3_2 W2K3_2_tgt
Virtual Disk to Host map, id [1], successfully created

After the rescan, the Windows server now shows the new volume, already in NTFS format but without a drive letter assigned yet, as shown in Figure A-5.

Figure A-5 Discovered FlashCopy target disk on Windows server



We then select the disk, assign a drive letter, and check that the data is on the target disk.
This is shown in Figure A-6, Figure A-7, and Figure A-8.

Figure A-6 Choose disk and assign drive letter

Figure A-7 The target disk is now ready on the Windows server with drive letter E:



Figure A-8 The data is both on source and target disk on the same Windows server

Enlarging an extended basic volume


The Copy Services source might initially be a single simple volume. However, as
requirements change on the application servers, the logical volume might be extended. You
should not independently extend the target volumes on the Windows host, but let Windows
detect the correct sequence of the extended volumes during the import process of the
extended SVC target volume.

When you have an extended volume, the logical drive might in time grow to include more of the initial volume (extended disk). When this occurs, it is necessary to perform a rescan of the disks and use the diskpart command to extend the disk, so that the capacity is available to an existing partition.

On the initial FlashCopy, reboot the target server to configure the additional disks, or perform a rescan of disks. On subsequent FlashCopy copies to the target volume group, run only a chkdsk <drive letter> /F command on the target volume to make Windows aware of the changes on the target volume. If you do not execute the chkdsk /F command, you cannot rely on the data. Using our previous example, the complete command would be chkdsk E: /F

When expanding a basic disk using diskpart that is subsequently being used as a FlashCopy source, the process to keep the FlashCopy in a usable state on the target disk, either on the same server or on a different server, includes the following steps (a hedged CLI sketch follows the list):
򐂰 Remove the FlashCopy map between the source and target VDisk on the SVC.
򐂰 Extend the VDisk on the SVC for the Windows volume that is going to be extended.
򐂰 Rescan for disks on the server where the source volume is located.
򐂰 Extend the Windows volume using the diskpart program.
򐂰 Remove the target disk on the server.
򐂰 Remove the FlashCopy mapping on the SVC.
򐂰 Remove the mapping of the target VDisk.
򐂰 Rescan for disk on the target server, to remove old target disk.
򐂰 Extend the target VDisk to match the new size for the source VDisk on the SVC.
򐂰 Make a “new” FlashCopy mapping on the SVC for the VDisks.
򐂰 Make a new FlashCopy.
򐂰 Rescan for disks on the server where the target volume is located.
򐂰 Assign drive letter to the new extended target volume.



Copy Services on dynamic volumes
To see target dynamic volumes on a second Windows 2000/2003 host, you have to complete
these tasks:
1. Perform the Metro Mirror or FlashCopy function onto the target volume. Ensure that, when using Metro Mirror, the primary and secondary volumes were in consistent mode and write I/O was ceased prior to terminating the copy pair relationship.
2. Map the target volume (VDisk) to the second Windows 2000/2003 host.
3. Click Computer Management → Disk Management.
4. Find the Disk that is associated with your volume. There are two “panes” for each disk.
The left pane should read Dynamic and Foreign. It is likely that no drive letter is
associated with that volume.
5. Right-click that pane, and select Import Foreign Disks. Select OK, and then OK again.
The volume now has a drive letter assigned to it. It is of Simple Layout and Dynamic Type.
You can read and write to that volume.

Tip: Disable the fast-indexing option on the source disk. Otherwise, operations to that
volume are cached to speed up disk access. However, this means that data is not
flushed from memory and the target disk might have copies of files or folders that were
deleted from the source system.

When performing subsequent Metro Mirror or FlashCopy copies to the target volume, to
detect any changes to the contents of the target volume, it is necessary to run the following
command on the target volume:
chkdsk.exe /F

Example of a FlashCopy spanned over two dynamic disks


Next we describe an example of how to make a FlashCopy of a volume that is spanned over
two dynamic disks.

First, find the VDisk of which you will make a FlashCopy. On the Windows source host, use
the SDD command datapath query device to find the VDisk UID, and check the VDisk
information on the SVC. If this is the first time you make the FlashCopy, make sure that you
have a target VDisk of the same size; if not, create it. When specifying the size, use the unit
byte (b) so that the target VDisk is exactly the same size as the source.

Here we list the source VDisk, using the -bytes parameter to get the exact size:
IBM_2145:ITSOsvc1:admin>svcinfo lsvdisk -bytes dyn2_src
id 18
name dyn2_src
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0_DS43
capacity 10736344064
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name

vdisk_UID 600507680189801B2000000000000017
throttling 0
preferred_node_id 12
fast_write_state empty

Here we create the new target VDisk, using the unit byte, so that the VDisk has exactly the
same size:
IBM_2145:ITSOsvc1:admin>svctask mkvdisk -mdiskgrp MDG1_DS43 -iogrp 0 -size 10736344064
-unit b -name dyn2_tgt

Because the volume is spanned over two dynamic disks, we have to make sure that the two
FlashCopies are taken at exactly the same time. To ensure this, we make the two FlashCopy
mappings members of a consistency group on the SVC. First, we create the FlashCopy
consistency group by executing the svctask mkfcconsistgrp command:
IBM_2145:ITSOsvc1:admin>svctask mkfcconsistgrp -name dyn_fc_grp
FlashCopy Consistency Group, id [3], successfully created

We need to add a FlashCopy mapping for each of the VDisks we are going to perform the
FlashCopy on. In this example the source VDisks are named dyn1_src and dyn2_src, while
the target VDisks are named dyn1_tgt and dyn2_tgt.

We add the FlashCopy mappings to the consistency group:


IBM_2145:ITSOsvc1:admin>svctask mkfcmap -source dyn1_src -target dyn1_tgt -name dyn1_fc
-consistgrp dyn_fc_grp -copyrate 80
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSOsvc1:admin>svctask mkfcmap -source dyn2_src -target dyn2_tgt -name dyn2_fc
-consistgrp dyn_fc_grp -copyrate 80
FlashCopy Mapping, id [4], successfully created

Before we can do the FlashCopy, we need to make sure that there is no I/O to the disk on the
Windows host, and we need to prepare the FlashCopy consistency group to be flashcopied.
To prepare the FlashCopy consistency group, we execute the svctask
prestartfcconsistgrp command. This command makes sure that all data on the involved
VDisks is flushed from the cache in the SVC.
IBM_2145:ITSOsvc1:admin>svctask prestartfcconsistgrp dyn_fc_grp

When this is done, we check that the consistency group is prepared with the
svcinfo lsfcconsistgrp command:
IBM_2145:ITSOsvc1:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FC_AIX_GRP idle_or_copied
3 dyn_fc_grp prepared

Now we are ready to start the FlashCopy on the FlashCopy consistency group, so all the
VDisks will be flashcopied at the exact same time.
IBM_2145:ITSOsvc1:admin>svctask startfcconsistgrp dyn_fc_grp

It is possible to see how the FlashCopy is progressing while data is copied from source to
target. We do not need to wait until the copy is done before we add the copy to another
Windows host. To see the progress, we use the svcinfo lsfcmapprogress command, but we
need to run it against both mappings, because we cannot use the command on the FlashCopy
consistency group:
IBM_2145:ITSOsvc1:admin>svcinfo lsfcmapprogress dyn1_fc
id progress
3 18
IBM_2145:ITSOsvc1:admin>svcinfo lsfcmapprogress dyn2_fc
id progress
4 15

On the Windows host we now need to scan for new disks. Before we made the FlashCopy,
the target Windows system we used had the disk configuration shown in Figure A-9.

Figure A-9 Disk configuration before adding dynamic FlashCopy disks

After performing a disk rescan, the Disk Manager on the Windows host looked as shown in
Figure A-10.

Figure A-10 Dynamic disks added to Disk Manager

The disks, Disk 4 and Disk 5, are shown as Dynamic and Foreign, because this is the first
time these disks have been discovered by the Windows host. To make the disks available to
the Windows host, we need to import them. To do this, we right-click one of the disks and
select Import Foreign Disks as shown in Figure A-11.

Figure A-11 Import Foreign Disk

When importing the new disks to the Windows host, if you selected only one of the dynamic
disks, Windows tells you how many disks are going to be imported, as shown in Figure A-12.

Figure A-12 Import two foreign disks

This tells you that two disks are going to be imported and that the two dynamic disks make up
one spanned volume. If you want to verify which disks are going to be imported, you can
select the Disks button, which gives the information shown in Figure A-13.

Figure A-13 Disks that are going to be imported

When you click OK, Windows tells you that there is a spanned volume on the dynamic disks
you are importing, as shown in Figure A-14.

Figure A-14 Spanned volume going to be imported

This is what we expected, so we accept it by clicking the OK button. When we have done
this, the Windows Disk Manager shows the dynamic disks as online; because they are now
known to the host, they are no longer foreign, as shown in Figure A-15.

Figure A-15 The flashcopied dynamic disks are online

All we need to do now is assign a drive letter to the spanned volume; then we are able to use
the data on the target Windows host, as shown in Figure A-16.

Figure A-16 The data is ready to use

Metro Mirror and Windows spanned volumes
We followed this procedure when carrying out the Metro Mirror of a Windows 2000 spanned
volume set from server (A) to server (B); a sketch of the corresponding CLI commands
follows the list:
1. On the source server (A), we created a Windows spanned volume set of multiple dynamic
disks.
2. We rebooted the target server (B), imported multiple target disks, and wrote a disk
signature on each as basic disks.
3. We established Metro Mirror between the source (A) and target volumes (B).
4. After the source and target volumes were synchronized, we terminated Metro Mirror.
5. We rebooted the target host (B).
6. We started Disk Manager, the Metro Mirror target volumes were seen as Foreign Dynamic
Disks.
7. The disks were imported into the target host and were seen as a spanned volume.
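
As a sketch, steps 3 and 4 map to the following CLI commands; the relationship name
span_mm1 and the VDisk names span_src1 and span_tgt1 are hypothetical, and an
intracluster relationship is assumed:
IBM_2145:ITSOsvc1:admin>svctask mkrcrelationship -master span_src1 -aux span_tgt1 -cluster ITSOsvc1 -name span_mm1
IBM_2145:ITSOsvc1:admin>svctask startrcrelationship span_mm1
IBM_2145:ITSOsvc1:admin>svctask stoprcrelationship -access span_mm1
Create one relationship per member VDisk of the spanned set and, to guarantee that they
stop at a consistent point, place them in a consistency group (svctask mkrcconsistgrp) and
start and stop the group instead of the individual relationships. Monitor the state with
svcinfo lsrcrelationship and stop only after it reports consistent_synchronized; the
-access flag makes the secondary VDisks accessible to the target host.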

To demonstrate failback to the original setup, we carried out the following steps:
1. We removed the original paths and re-established them in the reverse direction from (B) to
(A).
2. We removed the spanned volume drive letter from the original source, the spanned
volume on server (A).
3. We established Metro Mirror from (B) to (A) and wrote some data onto the spanned
volume.
4. Metro Mirror was terminated.
5. We restored the drive letter to the spanned volume on server (A).
6. The contents of the spanned volume could now be read from server (A).

Appendix B. DS4000 migration scenarios


In this appendix we present a high-level overview of migrating from “normal” storage area
network (SAN) attached storage to virtualized storage. In all of the examples, we use the IBM
System Storage DS4000 Series as the storage system.

Initial considerations
Here are some basic factors that you must take into account in all situations before starting:
򐂰 Dependent upon the firmware version, a DS4000 can scale up to 2048 logical unit
numbers (LUNs) across 64 host partitions.
򐂰 Host device drivers must be changed, so all LUNs in a host partition must be moved to the
SVC partition in one step.
򐂰 Each partition can only access a unique set of host bus adapter (HBA) ports, as defined
by worldwide port names (WWPNs) of those adapters.
򐂰 Only one storage partition must be created for the SVC, and it must include the ports of all
IBM System Storage SVC nodes that are in the same SVC cluster.
򐂰 The contents of an existing partition must be moved to the SVC partition at the same time.
Some configurations might require backup, reconfigure, and restore.
򐂰 Some versions of DS4000 firmware allow RAID arrays to be expanded so that their
capacity increases. This is not recommended, although it might be helpful in some
configurations.

Important: If you have more logical units than are supported in one partition, then some
spare storage is required to allow for temporary migration of the data while the DS4000
is re-configured to have fewer logical units.

Scenario 1: Total number of LUNs is less than maximum LUNs
per partition
The existing host or hosts attached to the DS4000 have fewer LUNs than are supported in
one partition on the DS4000: either only one partition is used per host, or the partitions on a
single host have fewer than the maximum number of supported LUNs when combined.
Figure B-1 shows the initial configuration.

Note: Access LUN (A) is used by Host A for in-band configuration of the DS4000. This LUN
is deleted and the DS4000 is configured to use the SVC master console over Ethernet. The
access LUN is not required by the SVC.


Figure B-1 Initial configuration

Then we add the SVC and create an SVC partition on the DS4000. See Figure B-2. The
following steps are required for this task:
1. Modify the zoning so that only the SVC can “see” the DS4000. This allows partition 3 to be
created, and access to partitions 0, 1, and 2 can continue.
2. The port or ports of the SVC master console must not be in any partition.


Figure B-2 Partition 3 created

We move the storage for host C from host partition 2 to partition 3 (SVC partition), to be
managed by the SVC. See Figure B-3. Note the following points:
򐂰 Concurrent access from host C to its logical units is not possible.
򐂰 Host C requires reconfiguration from RDAC device drivers to SDD. Changes to the
adapter configuration and microcode levels, settings, and so on, might also be required.
򐂰 Switch zoning changes are also required to prevent host C from “seeing” the DS4000
ports and instead “seeing” the SVC ports.
򐂰 LUNs from host partition 2, which are now in partition 3, must be configured as image mode
VDisks in the SVC and mapped to host C (see the sample commands after the note below).

Note: Partition 2 should now be deleted after all logical units are moved to partition 3.
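
As a minimal sketch of that last step, with hypothetical MDisk, MDisk group, VDisk, and host
names:
IBM_2145:ITSOsvc1:admin>svctask mkvdisk -mdiskgrp img_grp -iogrp 0 -vtype image -mdisk mdisk5 -name hostC_vd0
IBM_2145:ITSOsvc1:admin>svctask mkvdiskhostmap -host hostC -scsi 0 hostC_vd0
The -scsi parameter lets you present the VDisk to the host with the same logical unit number
that it used before the migration.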


Figure B-3 Storage moved from partition 2 to partition 3

The next step is to move the storage for hosts A and B from host partitions 0 and 1 to partition 3
(SVC partition), to be managed by the SVC. This is shown in Figure B-4.

The following steps are required to do this:


1. Stop access from hosts A and B to their logical units.
2. Hosts A and B require reconfiguration from RDAC device drivers to SDD. Changes to the
adapter configuration and microcode levels, settings, and so on, might also be required.
3. Switch zoning changes are also required to prevent hosts A and B from “seeing” the
DS4000 ports and instead “seeing” the SVC ports.
4. LUNs from host partitions 0 and 1, which are now in partition 3, must be configured as
image mode VDisks in the SVC and mapped to hosts A and B as they were before the
migration, using their original logical unit numbers.
5. Partitions 0 and 1 can be deleted if required, after all LUNs are moved to partition 3. Note
that LUNs moved from partitions 0 and 1 to partition 3 have different logical unit numbers
on the DS4000, but the SVC will present the LUNs to the hosts with the same logical unit
numbers as before the migration.

Note: Access LUN (A) can no longer be used by host A for in-band configuration of the
DS4000. This can therefore be deleted and the DS4000 configured to use the SVC
master console over the Ethernet. The access LUN is not required by SVC.


Figure B-4 Storage moved from partition 0 and 1 to partition 3

We must now move any remaining host storage from the host partitions to partition 3
(SVC partition), using the previous steps. This gives us the configuration shown in Figure B-5.
Image mode VDisks can now be migrated to managed MDisks using the data migration
commands as required.


Figure B-5 All storage under the SVC

Scenario 2: Total number of LUNs is more than maximum LUNs
per partition
It is not possible to use the same solution as in “Scenario 1: Total number of LUNs is less than
maximum LUNs per partition” on page 691, because we will exceed the number of supported
LUNs in one partition.

An easy way to do the migration is shown here, but it is also possible to solve this without a
second DS4000, provided there is free capacity and the sum of the LUNs in the largest
partition and the SVC partition does not exceed the maximum number of supported LUNs in
one partition.

In this case, follow the procedure in the previous scenario one partition at a time, and migrate
the LUNs from image mode to managed mode disks in the SVC; the image mode MDisks are
ejected from the group automatically.

Thereafter you can move the next partition into the SVC partition. Before you do this, you
might need to expand the capacity available to the SVC, using the capacity in the DS4000
that was freed by removing the old LUNs.
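
As a sketch, freed capacity is added to the SVC Managed Disk Group with the svctask
addmdisk command; the MDisk and group names here are hypothetical:
IBM_2145:ITSOsvc1:admin>svctask addmdisk -mdisk mdisk8 SVC_grp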

The initial configuration is shown in Figure B-6. Note the following points:
򐂰 More LUNs than maximum supported in one partition on DS4000-1
򐂰 New DS4000 providing new storage, larger than or equal to the capacity of DS4000-1
򐂰 Only one partition per host

Figure B-6 Scenario 2 initial configuration

We then add another DS4000 and carry out the following steps:
1. Create RAID arrays on DS4000-2, one LUN per array, using equal numbers of arrays.
2. Rezone the switch to allow SVC ports to access DS4000-2 ports.
3. Create the partition, including all LUNs and SVC ports.

This is shown in Figure B-7.

Host C
Host A Host B

Figure B-7 Second DS4000 added

We then move the storage for host C under the control of the SVC. This is shown in
Figure B-8.


Figure B-8 Partitions created on DS4000-2

We carry out the following steps:
1. Stop host C.
2. Rezone the switch so that the host C port accesses the SVC ports as required, not the
DS4000-1 ports.
3. Rezone the switch to allow the SVC ports to access the DS4000-1 ports.
4. Change the host C device drivers, settings, software, and so on, to support the SVC.
5. Change partition 2 to the SVC host type and change the port names to the SVC ports,
removing the ports of host C.
6. Create SVC managed mode disks from the storage in partition 0 on DS4000-2.
7. Create SVC image mode disks from the storage in partition 2 on DS4000-1.
8. Migrate the image mode VDisks for host C to managed disks on DS4000-2 (a sample
command follows this list).
9. When the migration completes, delete the LUNs and partition 2 on DS4000-1.
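
Step 8 maps to the svctask migratevdisk command. As a minimal sketch, with hypothetical
VDisk and MDisk group names:
IBM_2145:ITSOsvc1:admin>svctask migratevdisk -vdisk hostC_vd0 -mdiskgrp DS4000_2_grp
The progress of the migration can be monitored with svcinfo lsmigrate.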

Figure B-9 shows the result of this.


Figure B-9 Storage for host C migrated to DS4000-2

We repeat this procedure for the remaining hosts until all the storage is migrated to the control
of the SVC. This is shown in Figure B-10. DS4000-1 is now unused.

Note: Although we used a second DS4000 in this scenario, it is possible to carry out a
similar procedure if there is enough spare capacity on DS4000-1.


Figure B-10 All storage under control of the SVC

Appendix C. Scripting
In this appendix we present a high-level overview of how to automate different tasks by
creating scripts using the SVC Command Line Interface (CLI).

Scripting structure
When creating scripts to automate tasks on the SVC, use the structure illustrated in
Figure C-1.

(Figure C-1 shows three stages: create an SSH connection to the SVC, run the command or
commands, and perform logging, with either scheduled or manual activation of the script.)

Figure C-1 Scripting structure for SVC task automation

Creating a connection (SSH) to the SVC


To create a connection to the SVC, the user running the script must have access to a private
key that corresponds to a public key previously uploaded to the SVC. The private key is used
to establish the SSH connection that is needed to use the CLI on the SVC.

PuTTY provides a command line connection utility called plink.exe (you can use another
SSH client). In the following examples we use plink to create the SSH connection to the SVC.

Executing the command(s)


When using the CLI, you can use the examples in Chapter 9, “SVC configuration and
administration using the CLI” on page 211 for inspiration, or refer to the SVC Command-Line
Interface User’s Guide, which can be downloaded from the SVC documentation page for each
SVC code level.

Performing logging
When using the CLI, not all commands provide a usable response to determine the status of
the invoked command. Therefore, we recommend that you always create checks that can be
logged for monitoring and troubleshooting purposes.
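
As a minimal sketch, a bat script can test the exit code that plink returns from the remote
command and append the result to a log file; the session name SVC1, the VDisk parameters,
and the log path are hypothetical:
plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size 10 -unit gb -name testvd -mdiskgrp 1
if errorlevel 1 (echo %date% %time% mkvdisk testvd FAILED >> C:\SVC_Jobs\VDiskScript.log) else (echo %date% %time% mkvdisk testvd OK >> C:\SVC_Jobs\VDiskScript.log)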

Automated VDisk creation
In the following example, we create a simple bat script to be used to automate VDisks
creation to illustrate how scripts are created. Creating scripts to automate SVC administration
tasks is not limited to bat scripting, and you can, in principle, encapsulate the CLI commands
in scripts using any programming language you prefer, or even program applets to be used to
perform routine tasks.

Connecting to the SVC using a predefined SSH connection


The easiest way to create an SSH connection to the SVC is to have plink call a predefined
PuTTY session, as illustrated in Figure C-2.

Define a session, including:

򐂰 The Auto-login username, set to your SVC admin user name (for example, admin). This
parameter is set under the Connection → Data category.
򐂰 The Private key for authentication (for example, icat.ppk). This is the private key that you
have already created. This parameter is set under the Connection → SSH → Auth
category.
򐂰 The IP address of the SVC cluster. This parameter is set under the Session category.
򐂰 A session name. Our example uses SVC1:cluster1.

Your version of PuTTY might have these parameters in different categories.

Figure C-2 Using a predefined SSH connection with plink

To use this predefined PuTTY session, the syntax is:

plink SVC1:cluster1

If a predefined PuTTY session is not used, the syntax is:

plink admin@<cluster IP address> -i "C:\DirectoryPath\KeyName.PPK"

Creating VDisks using the CLI

In our example, the following parameters are variables when creating the VDisks:
VDisk size (in GB): %1
VDisk name: %2
Managed Disk Group (MDG): %3

svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3

Listing created VDisks

To log that our script created the VDisk that we specified when executing the script, we use
the -filtervalue parameter:
svcinfo lsvdisk -filtervalue 'name=%2' >> C:\DirectoryPath\VDiskScript.log

Invoking the sample script VDiskScript.bat
Finally, putting it all together, our sample bat script for creating a VDisk is shown in
Figure C-3.

-------------------------------------VDiskScript.bat------------------------------------
plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3
plink SVC1 -l admin svcinfo lsvdisk -filtervalue 'name=%2' >> C:\DirectoryPath\VDiskScript.log
----------------------------------------------------------------------------------------

Figure C-3 VDiskScript.bat

Using the script we now create a VDisk with the following parameters:
򐂰 VDisk size (in GB): 20 (%1)
򐂰 VDisk name: Host1_F_Drive (%2)
򐂰 Managed Disk Group (MDG): 1 (%3)
This is illustrated in Example C-1.

Example: C-1 Executing the script to create the VDisk


C:\SVC_Jobs>VDiskScript 20 Host1_F_Drive 1
C:\SVC_Jobs>plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size 20
-unit gb -name Host1_F_Drive -mdiskgrp 1
Authenticating with public key "rsa-key-20040116"

C:\SVC_Jobs>plink SVC1 -l admin svcinfo lsvdisk -filtervalue 'name=Host1_F_Drive


' 1>>C:\SVC_Jobs\VDiskScript.log
Authenticating with public key "rsa-key-20040116"

From the log output displayed in Example C-2, we verify that the VDisk was created as
desired.

Example: C-2 Logfile output from Example C-1 VDiskScript.bat


id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name
55 Host1_F_Drive 0 io_grp0 online 1 MDiskGrp1_FST2 20.0GB striped

SVC tree
Here is another example of using scripting to talk to the SVC. This script will display a
tree-like structure for the SVC, shown in Example C-3. It has been written in perl and should
work without modification using perl on UNIX systems (for example, AIX, Linux), perl for
Windows or perl in a Windows Cygwin environment.

The script is shown in Example C-4 on page 709.

Example: C-3 SVC Tree script output
$ ./svctree.pl 9.1.39.29 admin /cygdrive/d/home/deon/ssh/icat.ssh
+ itsosvc1 (9.1.39.29)
+ CONTROLLERS
+ DS4000 (0)
+ mdisk0 (ID: 0 CAP: 20.0GB MODE: unmanaged)
+ mdisk1 (ID: 1 CAP: 20.0GB MODE: unmanaged)
+ IBMOEM-ESX-MD1 (ID: 2 CAP: 50.0GB MODE: managed)
+ IBMOEM-ESX-MD2 (ID: 3 CAP: 50.0GB MODE: managed)
+ IBMOEM-ESX-MD3 (ID: 4 CAP: 50.0GB MODE: managed)
+ MDISK GROUPS
+ Kanaga_MDgrp_1 (ID: 4 CAP: 0 FREE: 0)
+ VSS_MDGRP (ID: 5 CAP: 149.1GB FREE: 128.9GB)
+ IBMOEM-ESX-MD1 (ID: 2 CAP: 50.0GB MODE: managed)
+ IBMOEM-ESX-MD2 (ID: 3 CAP: 50.0GB MODE: managed)
+ IBMOEM-ESX-MD3 (ID: 4 CAP: 50.0GB MODE: managed)
+ Kanaga_MDgrp_0 (ID: 6 CAP: 0 FREE: 0)
+ ESX-DATA-MDG (ID: 7 CAP: 0 FREE: 0)
+ ESX-BOOT-MDG (ID: 8 CAP: 0 FREE: 0)
+ io_grp0 (0)
+ NODES
+ node1 (1)
+ SVCnode2 (4)
+ HOSTS
+ FERMIUM (0)
+ VSS_RESERVED (1)
+ KANAGA (8)
+ WISLA (10)
+ VSS_FREE (12)
+ VSS_VD_T (ID: 9 CAP: 10.0GB TYPE: striped STAT: online)
+ LOCHNESS (13)
+ vdisk10 (ID: 10 CAP: 100.0MB TYPE: striped STAT: online)
+ VDISKS
+ VSS_VD_T (ID: 9 CAP: 10.0GB TYPE: striped)
+ IBMOEM-ESX-MD1 (ID: 2 CAP: 50.0GB MODE: managed CONT: DS4000)
+ IBMOEM-ESX-MD2 (ID: 3 CAP: 50.0GB MODE: managed CONT: DS4000)
+ IBMOEM-ESX-MD3 (ID: 4 CAP: 50.0GB MODE: managed CONT: DS4000)
+ vdisk10 (ID: 10 CAP: 100.0MB TYPE: striped)
+ IBMOEM-ESX-MD1 (ID: 2 CAP: 50.0GB MODE: managed CONT: DS4000)
+ VSS_VD_S (ID: 11 CAP: 10.0GB TYPE: striped)
+ IBMOEM-ESX-MD1 (ID: 2 CAP: 50.0GB MODE: managed CONT: DS4000)
+ IBMOEM-ESX-MD2 (ID: 3 CAP: 50.0GB MODE: managed CONT: DS4000)
+ IBMOEM-ESX-MD3 (ID: 4 CAP: 50.0GB MODE: managed CONT: DS4000)
+ io_grp1 (1)
+ NODES
+ HOSTS
+ FERMIUM (0)
+ KANAGA (8)
+ WISLA (10)
+ LOCHNESS (13)
+ VDISKS
+ io_grp2 (2)
+ NODES
+ HOSTS
+ VDISKS
+ io_grp3 (3)
+ NODES
+ HOSTS
+ FERMIUM (0)
+ VDISKS

+ recovery_io_grp (4)
+ NODES
+ HOSTS
+ itsosvc1 (2200642269468)
+ VDISKS

Example: C-4 svctree.pl

#!/usr/bin/perl

$SSHCLIENT = "ssh"; # SSH client to use (plink or ssh)

$HOST = $ARGV[0];
$USER = ($ARGV[1] ? $ARGV[1] : "admin");
$PRIVATEKEY = ($ARGV[2] ? $ARGV[2] : "/path/toprivatekey");
$DEBUG = 0;

die(sprintf("Please call script with cluster IP address. The syntax is: \n%s ipaddress loginname privatekey\n",$0))
  if (! $HOST);

# Run an svcinfo command on the cluster over SSH and return its output lines.
sub TalkToSVC() {
  my $COMMAND = shift;
  my $NODELIM = shift;
  my $ARGUMENT = shift;
  my @info;

  if ($SSHCLIENT eq "plink" || $SSHCLIENT eq "ssh") {
    $SSH = sprintf('%s -i %s %s@%s ',$SSHCLIENT,$PRIVATEKEY,$USER,$HOST);
  } else {
    die ("ERROR: Unknown SSHCLIENT [$SSHCLIENT]\n");
  }

  if ($NODELIM) {
    $CMD = "$SSH svcinfo $COMMAND $ARGUMENT\n";
  } else {
    $CMD = "$SSH svcinfo $COMMAND -delim : $ARGUMENT\n";
  }
  print "Running $CMD" if ($DEBUG);

  open SVC,"$CMD|";
  while (<SVC>) {
    print "Got [$_]\n" if ($DEBUG);
    chomp;
    push(@info,$_);
  }
  close SVC;

  return @info;
}

# Turn colon-delimited svcinfo output into a hash keyed by column heading
# (and by line number when MULTILINE is set).
sub DelimToHash() {
  my $COMMAND = shift;
  my $MULTILINE = shift;
  my $NODELIM = shift;
  my $ARGUMENT = shift;
  my %hash;

  @details = &TalkToSVC($COMMAND,$NODELIM,$ARGUMENT);
  print "$COMMAND: Got [",join('|',@details)."]\n" if ($DEBUG);

  my $linenum = 0;
  foreach (@details) {
    print "$linenum, $_" if ($DEBUG);

    if ($linenum == 0) {
      @heading = split(':',$_);
    } else {
      @line = split(':',$_);

      $counter = 0;
      foreach $id (@heading) {
        printf("$COMMAND: ID [%s], value [%s]\n",$id,$line[$counter]) if ($DEBUG);
        if ($MULTILINE) {
          $hash{$linenum,$id} = $line[$counter++];
        } else {
          $hash{$id} = $line[$counter++];
        }
      }
    }

    $linenum++;
  }

  return %hash;
}

# Print one line of the tree at the given indentation.
sub TreeLine() {
  my $indent = shift;
  my $line = shift;
  my $last = shift;

  for ($tab=1;$tab<=$indent;$tab++) {
    print " ";
  }

  if (! $last) {
    print "+ $line\n";
  } else {
    print "| $line\n";
  }
}

# Print a tree line for each item of a hash, optionally filtered by a condition.
sub TreeData() {
  my $indent = shift;
  my $printline = shift;
  *data = shift;
  *list = shift;
  *condition = shift;
  my $item;

  foreach $item (sort keys %data) {
    @show = ();
    ($numitem,$detail) = split($;,$item);
    next if ($numitem == $lastnumitem);
    $lastnumitem = $numitem;

    printf("CONDITION:SRC [%s], DST [%s], DSTVAL [%s]\n",$condition{"SRC"},$condition{"DST"},$data{$numitem,$condition{"DST"}}) if ($DEBUG);
    next if (($condition{"SRC"} && $condition{"DST"}) && ($condition{"SRC"} != $data{$numitem,$condition{"DST"}}));

    foreach (@list) {
      push(@show,$data{$numitem,$_})
    }

    &TreeLine($indent,sprintf($printline,@show),0);
  }
}

# Gather our cluster information.
%clusters = &DelimToHash('lscluster',1);
%iogrps = &DelimToHash('lsiogrp',1);
%nodes = &DelimToHash('lsnode',1);
%hosts = &DelimToHash('lshost',1);
%vdisks = &DelimToHash('lsvdisk',1);
%mdisks = &DelimToHash('lsmdisk',1);
%controllers = &DelimToHash('lscontroller',1);
%mdiskgrps = &DelimToHash('lsmdiskgrp',1);

# We are now ready to display it.

# CLUSTER
$indent = 0;
foreach $cluster (keys %clusters) {
  ($numcluster,$detail) = split($;,$cluster);
  next if ($numcluster == $lastnumcluster);
  $lastnumcluster = $numcluster;

  &TreeLine($indent,sprintf('%s (%s)',$clusters{$numcluster,'name'},$clusters{$numcluster,'cluster_IP_address'}),0);

  # CONTROLLERS
  &TreeLine($indent+1,'CONTROLLERS',0);
  $lastnumcontroller = "";
  foreach $controller (sort keys %controllers) {
    $indentcontroller = $indent+2;

    ($numcontroller,$detail) = split($;,$controller);
    next if ($numcontroller == $lastnumcontroller);
    $lastnumcontroller = $numcontroller;

    &TreeLine($indentcontroller,
      sprintf('%s (%s)',
        $controllers{$numcontroller,'controller_name'},
        $controllers{$numcontroller,'id'})
      ,0);

    # MDISKS
    &TreeData($indentcontroller+1,
      '%s (ID: %s CAP: %s MODE: %s)',
      *mdisks,
      ['name','id','capacity','mode'],
      {"SRC"=>$controllers{$numcontroller,'controller_name'},"DST"=>"controller_name"});
  }

  # MDISKGRPS
  &TreeLine($indent+1,'MDISK GROUPS',0);
  $lastnummdiskgrp = "";
  foreach $mdiskgrp (sort keys %mdiskgrps) {
    $indentmdiskgrp = $indent+2;

    ($nummdiskgrp,$detail) = split($;,$mdiskgrp);
    next if ($nummdiskgrp == $lastnummdiskgrp);
    $lastnummdiskgrp = $nummdiskgrp;

    &TreeLine($indentmdiskgrp,
      sprintf('%s (ID: %s CAP: %s FREE: %s)',
        $mdiskgrps{$nummdiskgrp,'name'},
        $mdiskgrps{$nummdiskgrp,'id'},
        $mdiskgrps{$nummdiskgrp,'capacity'},
        $mdiskgrps{$nummdiskgrp,'free_capacity'})
      ,0);

    # MDISKS
    &TreeData($indentmdiskgrp+1,
      '%s (ID: %s CAP: %s MODE: %s)',
      *mdisks,
      ['name','id','capacity','mode'],
      {"SRC"=>$mdiskgrps{$nummdiskgrp,'id'},"DST"=>"mdisk_grp_id"});
  }

  # IOGROUP
  $lastnumiogrp = "";
  foreach $iogrp (sort keys %iogrps) {
    $indentiogrp = $indent+1;

    ($numiogrp,$detail) = split($;,$iogrp);
    next if ($numiogrp == $lastnumiogrp);
    $lastnumiogrp = $numiogrp;

    &TreeLine($indentiogrp,sprintf('%s (%s)',$iogrps{$numiogrp,'name'},$iogrps{$numiogrp,'id'}),0);

    $indentiogrp++;

    # NODES
    &TreeLine($indentiogrp,'NODES',0);
    &TreeData($indentiogrp+1,
      '%s (%s)',
      *nodes,
      ['name','id'],
      {"SRC"=>$iogrps{$numiogrp,'id'},"DST"=>"IO_group_id"});

    # HOSTS
    &TreeLine($indentiogrp,'HOSTS',0);
    $lastnumhost = "";
    %iogrphosts = &DelimToHash('lsiogrphost',1,0,$iogrps{$numiogrp,'id'});
    foreach $host (sort keys %iogrphosts) {
      my $indenthost = $indentiogrp+1;

      ($numhost,$detail) = split($;,$host);
      next if ($numhost == $lastnumhost);
      $lastnumhost = $numhost;

      &TreeLine($indenthost,
        sprintf('%s (%s)',$iogrphosts{$numhost,'name'},$iogrphosts{$numhost,'id'}),
        0);

      # HOSTVDISKMAP
      %vdiskhostmap = &DelimToHash('lshostvdiskmap',1,0,$hosts{$numhost,'id'});

      $lastnumvdisk = "";
      foreach $vdiskhost (sort keys %vdiskhostmap) {
        ($numvdisk,$detail) = split($;,$vdiskhost);
        next if ($numvdisk == $lastnumvdisk);
        $lastnumvdisk = $numvdisk;

        next if ($vdisks{$numvdisk,'IO_group_id'} != $iogrps{$numiogrp,'id'});

        &TreeData($indenthost+1,
          '%s (ID: %s CAP: %s TYPE: %s STAT: %s)',
          *vdisks,
          ['name','id','capacity','type','status'],
          {"SRC"=>$vdiskhostmap{$numvdisk,'vdisk_id'},"DST"=>"id"});
      }
    }

    # VDISKS
    &TreeLine($indentiogrp,'VDISKS',0);
    $lastnumvdisk = "";
    foreach $vdisk (sort keys %vdisks) {
      my $indentvdisk = $indentiogrp+1;

      ($numvdisk,$detail) = split($;,$vdisk);
      next if ($numvdisk == $lastnumvdisk);
      $lastnumvdisk = $numvdisk;

      &TreeLine($indentvdisk,
        sprintf('%s (ID: %s CAP: %s TYPE: %s)',
          $vdisks{$numvdisk,'name'},
          $vdisks{$numvdisk,'id'},
          $vdisks{$numvdisk,'capacity'},
          $vdisks{$numvdisk,'type'}),
        0)
        if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'});

      # VDISKMEMBERS
      if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'}) {
        %vdiskmembers = &DelimToHash('lsvdiskmember',1,1,$vdisks{$numvdisk,'id'});

        foreach $vdiskmember (sort keys %vdiskmembers) {
          &TreeData($indentvdisk+1,
            '%s (ID: %s CAP: %s MODE: %s CONT: %s)',
            *mdisks,
            ['name','id','capacity','mode','controller_name'],
            {"SRC"=>$vdiskmembers{$vdiskmember},"DST"=>"id"});
        }
      }
    }
  }
}

Scripting alternatives
For an alternative to scripting, see:
http://www-306.ibm.com/software/tivoli/products/storage-mgr-hardware/

Additionally, IBM provides a suite of scripting tools based on perl. These can be downloaded
from:

http://www.alphaworks.ibm.com/tech/svctools

Appendix D. Node replacement and node upgrading procedure
You can use this appendix in two ways. The first is to replace a failing SVC node with a
spare. The second is to perform a concurrent upgrade of an SVC cluster with 4F2 nodes to
an SVC cluster running with 8F4 nodes.

Although both processes are very similar, be aware that the upgrade process requires SVC
code 4.1 at a minimum. There are two possible routes for the SVC node upgrade: a customer
can choose between a simple, albeit disruptive, upgrade from 4F2 to 8F4, or a non-disruptive
upgrade. With either route, the upgrade results in two SVC node types within the same I/O
group. Code versions below SVC code 4.1 do not support this.

The hardware update from an 8F2 node to an 8F4 node is not covered in this appendix, as the
only difference between these nodes is the speed of the FC card. The 4 Gbps HBA update for
the 8F2 models is available via the IBM ordering system. Ask your local sales representative
for more information.

Replacing a failed SVC node
If an IBM System Storage SAN Volume Controller (SVC) node fails, the SVC cluster
continues to operate with degraded performance until the failed node is repaired. In
environments where the best availability and performance are critical at all times, it might be
useful to replace the failed node with a spare.

However, to replace a failed node without interrupting I/O, and without any risk to data
integrity when a new spare node is reconnected to the SAN fabric, a specific procedure must
be followed. The procedure involves changing the worldwide node name (WWNN) of a node.
This procedure must be executed with care since inaccurate or duplicate WWNNs can cause
data corruption.

In this appendix we describe the basic tasks involved in setting up a node as a standby node
and how to clone a node, particularly a failed node in the cluster. See also the online hint and
tip titled SAN Volume Controller Node Replacement Procedure at:
http://www-1.ibm.com/support/docview.wss?rs=591&context=STCFPBW&context=STCFMTF&dc=DB500&q1
=replacement&uid=ssg1S1001909&loc=en_US&cs=utf-8&lang=en

Important: The advantage of cloning a node as opposed to simply adding a new node is
that, by replicating the failed node’s WWNN, the new node’s worldwide port names
(WWPNs) are assigned to be the same as the failed node. With cloning, all logical unit
number (LUN) maskings, persistent bindings, and zonings by WWNN that are set up in the
storage, hosts, and switches attached to the cluster can remain unchanged.

Prerequisites for replacing a failed node


Before attempting to replace a failed node, the following prerequisites must be met:
1. Have SVC software V1.1.1 or higher installed on the SVC cluster and on the spare node.
2. Know the name of the cluster that contains the failed node.
3. Have a spare node in the same rack as the SVC cluster that contains the failed node.
4. Make a record of the last five characters of the original WWNN of the spare node since
this might be needed again if the customer wants to stop using a spare node and use this
node as a normal node that can be assigned to any cluster. Complete the following steps
to display the WWNN of the node:
a. Display the node status on the front panel display of the node.
b. With the node status displayed on the front panel, press and hold the Down button.
Press and release the Select button. Release the Down button.
c. The text "WWNN" is displayed on line one of the display. Line 2 of the display contains
the last five characters of the WWNN.
d. Record this number in a safe place. It will be needed when the customer wants to stop
using a spare node.

Replacement process
When a node is swapped, the following actions might occur:
򐂰 Front Panel ID can be changed. This is the number that is printed on the front of the node
and used to select the node that is to be added to a cluster.
򐂰 Node Name can be changed. If the SVC application is permitted to assign default names
when adding nodes to the cluster, it creates a new name each time a node is added. If you
choose to assign node names, then you must type in the desired node name. If the
customer is using scripts to perform management tasks on the cluster and those scripts
use the node name, then, by assigning the original name to a replacement node, it is not
necessary to make changes to the scripts following service activity on the cluster.
򐂰 Node ID changes. A new node ID is assigned each time a node is added to a cluster. The
node ID or the node name can be used when performing management tasks on the
cluster. If scripts are being used to perform those tasks, we recommend that the node
name be used instead of the node ID. This is because the node name remains unchanged
following service activity on the cluster.
򐂰 World Wide Node Name does not change. The WWNN is used to uniquely identify the
node and the Fibre Channel ports. The node replacement procedure changes the WWNN
of the spare node to match that of the failed node. The node replacement procedures must
be followed exactly to avoid any duplication of WWNNs.
򐂰 The World Wide Port Name of each Fibre Channel port does not change. The WWPNs
are derived from the WWNN that is written to the replacement node as part of this
procedure.

Tip: When the FC adapter in one node of the cluster is replaced, the WWPNs do not
change because they are derived from the node WWNN, and no action is required to
reconfigure zoning or LUN masking.

Complete these steps:


1. Use the SVC console or the command line interface (CLI) to gather and record the
following information about the failed node:
– Node name: To display the node name using the console:
i. From the Welcome window, select Work with Nodes → Nodes.
ii. The failed node is offline. Note the name.
To display the node name using the CLI:
i. Use the command:
svcinfo lsnode
ii. The failed node is offline. Note the name as in Example D-1.

Example: D-1 lsnode example


IBM_2145:itsosvc01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node
UPS_unique_id hardware
1 node1 YM100053N094 500507680100188E online 0 io_grp0 yes
2040000143780244 4F2
3 node2 YM100053N093 5005076801001883 offline 0 io_grp0 no
2040000143780243 4F2

– I/O group name: To display the I/O group name using the console:
i. From the Welcome window, select Work with Nodes → Nodes.
ii. The failed node is offline. Note the I/O group name.
To display the node name using the CLI:
i. Use the command:
svcinfo lsnode
ii. The failed node is offline. Note the I/O_group_name as in Example D-1 on
page 717.
– The last five characters of the WWNN: To display the WWNN using the console:
i. From the Welcome window, select Work with Nodes → Nodes.
ii. The failed node is offline. Note the last five characters of the WWNN.
To display the WWNN using the CLI:
i. Use the command:
svcinfo lsnodevpd <node_name>
Here <node_name> is the name recorded in Step 1.
ii. Find the WWNN field in the output. Note the last five characters of the WWNN as in
Example D-2.

Example: D-2 lsnodevpd command example


IBM_2145:itsosvc01:admin>svcinfo lsnodevpd node3
id 3

system board: 17 fields


part_number 64P7826
system_serial_number 75abwda
number_of_processors 2
number_of_memory_slots 4
number_of_fans 5
number_of_FC_cards 2
number_of_scsi/ide_devices 3
BIOS_manufacturer IBM
BIOS_version -[T2EH05AUS-1.06]-
BIOS_release_date 09/26/2003
system_manufacturer IBM
system_product eServer System x 335 -[21454F2]-
planar_manufacturer IBM
power_supply_part_number 49P2090
CMOS_battery_part_number 33F8354
power_cable_assembly_part_number 64P7940
service_processor_firmware T28T15A

software: 6 fields
code_level 3.1.0.0 (build 3.12.0509190000)
node_name node3
ethernet_status 1
WWNN 0x5005076801001883
id 3
MAC_address 00 0d 60 1c d8 0a
.
.
.
front panel assembly: 3 fields
part_number 64P7858

front_panel_id 008057
front_panel_locale en_US
UPS: 10 fields
electronics_assembly_part_number 64P8104
battery_part_number 18P5880
UPS_assembly_part_number 18P5864
input_power_cable_part_number CountryDependant
UPS_serial_number YM100053N093
UPS_type 2145UPS
UPS_internal_part_number P64P8103
UPS_unique_id 0x2040000143780243
UPS_main_firmware 1.09
UPS_comms_firmware 2.06

– Front panel ID: To display the front panel id using the console:
i. From the Welcome window, select Work with Nodes → Nodes.
ii. The failed node is offline. Click the name of the offline node.
iii. Select the Vital Product Data tab.
iv. The front panel ID is under the Front panel assembly section of the VPD. Note the
front panel ID.
To display the front panel ID using the CLI:
i. Use the command:
svcinfo lsnodevpd <node_name>
Here <node_name> is the name recorded in the step above.
ii. Find the front_panel_id field in the output. Note the front panel ID as in
Example D-2 on page 718.
– The uninterruptible power supply serial number: To display the uninterruptible
power supply serial number using the console:
i. From the Welcome window, select Work with Nodes → Nodes.
ii. The failed node is offline. Click the name of the offline node.
iii. Select the Vital Product Data tab.
iv. The uninterruptible power supply serial number is in the UPS section of the VPD.
Note the serial number.
To display the node VPD using the CLI:
i. Enter the command:
svcinfo lsnodevpd <node_name>
Here <node_name> is the name recorded in Step 1.
ii. Find the UPS_serial_number field in the output. Note the uninterruptible power
supply serial number as in Example D-2 on page 718.
2. Use the front panel ID to locate the failed node. Disconnect all four Fibre Channel cables
from the node.

Important: The cables must not be reconnected until the node is repaired and the
WWNN has been changed to match the default spare WWNN.

3. Connect the power or signal cable from the spare node to the uninterruptible power supply
with the serial number noted in Step 1.

Note: The signal cable can be plugged into any vacant position on the top row of the
serial connectors on the uninterruptible power supply. If no spare serial connectors are
available on the uninterruptible power supply, disconnect the cables from the failed
node.

4. Power on the spare node.


5. Display the node status on the service panel.
6. Complete the following steps to change the WWNN of the spare node:
a. With the node status displayed on the front panel, press and hold the Down button.
Press and release the Select button. Release the Down button.
The text "WWNN" is displayed on line one of the display. Line two of the display
contains the last five characters of the WWNN.
b. With the WWNN displayed on the service panel, press and hold the Down button.
Press and release the Select button. Release the Down button. This switches the
display into edit mode.
c. Change the displayed number to match the WWNN recorded in Step 1. To edit the
displayed number, use the Up and Down buttons to increase or decrease the numbers
displayed. Use the left and right buttons to move between fields. When the five
characters match the number recorded in Step 1, press the Select button twice to
accept the number.
7. Connect the four Fibre Channel cables that were disconnected from the failed node to the
spare node.
8. Depending on how the original node failed, the replacement node might or might not
automatically join the cluster. If the node does not rejoin the cluster, delete the offline node
on the master console. Then, add the spare node into the cluster on the master console
as in Example D-3.

Example: D-3 removing and adding node sequence


IBM_2145:itsosvc01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node
UPS_unique_id hardware
1 node1 YM100053N094 500507680100188E online 0 io_grp0 yes
2040000143780244 4F2
3 node2 YM100053N093 0000000000000000 offline 0 io_grp0 no
2040000143780243 unknown
IBM_2145:itsosvc01:admin>svctask rmnode node2
IBM_2145:itsosvc01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node
UPS_unique_id hardware
1 node1 YM100053N094 500507680100188E online 0 io_grp0 yes
2040000143780244 4F2
3 node2 YM100053N093 0000000000000000 pending 0 io_grp0 no
2040000143780243 unknown
IBM_2145:itsosvc01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node
UPS_unique_id hardware
1 node1 YM100053N094 500507680100188E online 0 io_grp0 yes
2040000143780244 4F2
IBM_2145:itsosvc01:admin>
IBM_2145:itsosvc01:admin>svcinfo lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id hardware
5005076801001883 008057 YM100053N093 2040000143780243 4F2

IBM_2145:itsosvc01:admin>svctask addnode -wwnodename 5005076801001883 -iogrp 0
Node, id [4], successfully added
IBM_2145:itsosvc01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node
UPS_unique_id hardware
1 node1 YM100053N094 500507680100188E online 0 io_grp0 yes
2040000143780244 4F2
4 node4 YM100053N093 0000000000000000 adding 0 io_grp0 no
2040000143780243 unknown
IBM_2145:itsosvc01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node
UPS_unique_id hardware
1 node1 YM100053N094 500507680100188E online 0 io_grp0 yes
2040000143780244 4F2
4 node4 YM100053N093 5005076801001883 online 0 io_grp0 no
2040000143780243 4F2
IBM_2145:itsosvc01:admin>

9. If you wish to keep the same node name as before, you can change the node name as
shown in Example D-4.

Example: D-4 chnode command example


IBM_2145:itsosvc01:admin>svctask chnode -name Node2 node4

10.Use the Subsystem Device Driver (SDD) management tool on the host systems to verify
that all paths are now online.
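
As a sketch, on a Windows host running SDD this check is done with:
datapath query device
All paths to the replaced node should return to the OPEN state before you continue.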

When the failed node is repaired, do not connect the Fibre Channel cables to it. Connecting
the cables might cause data corruption. When the node repair returns the node to an
operational state, perform the following steps:
1. Display the node status on the service panel.
2. With the SVC status displayed on the front panel, press and hold the Down button. Press
and release the Select button. Release the Down button.
The text "WWNN" is displayed on line one of the display. Line two of the display contains
the last five characters of the WWNN.
3. With the WWNN displayed on the service panel, press and hold the Down button, press
and release the Select button, and release the Down button. This switches the display
into edit mode.
4. Change the displayed number to 00000. To edit the displayed number, use the Up and
Down buttons to increase or decrease the numbers displayed. Use the left and right
buttons to move between fields. When the number is set to 00000, press the Select button
twice to accept the number.

This SVC node can now be used as a spare node. If this SVC is no longer required as a
spare and is to be used for normal attachment to a cluster, the procedure described
previously must be used to change the WWNN to the number saved when a spare was being
created. See “Replacing a failed SVC node” on page 716. Using any other number might
cause data corruption.

Important: Never connect a node with a WWNN of "00000" to the cluster.

Upgrading an SVC 4F2 node cluster to an 8F4 node cluster
The customer can choose between three different methods of upgrading the SVC cluster
nodes from a cluster running 4F2 nodes to a cluster running 8F4 nodes. For more information
about the actual upgrade process, see the related publication SVC Configuration Guide
Version 4.1, SC26-7902. All of the latest publications are available at the following link:
http://www-03.ibm.com/servers/storage/support/software/sanvc/installing.html

Prerequisites for upgrading a cluster from 4F2 to 8F4 nodes


Before attempting to upgrade the nodes, the following prerequisites must be met:
1. Have SVC software V4.1 or higher installed on the SVC cluster and on the spare node.
2. Have a 2145 uninterruptible power supply 1U (2145 UPS-1U) unit for each new SVC
2145-8F4 node.
3. Know the name of the cluster that contains the nodes to replace.
4. Ensure that there are no open errors in the cluster or on any node, and that all VDisks,
MDisks, MDGs, and nodes are online.
5. Have spare nodes in the same rack as the SVC cluster that contains the nodes to be
replaced.
6. Make a record of the last five characters of the original WWNN of the spare node since
this might be needed again if the customer wants to stop using a spare node and use this
node as a normal node that can be assigned to any cluster. Complete the following steps
to display the WWNN of the node:
a. Display the node status on the front panel display of the node.
b. With the node status displayed on the front panel, press and hold the Down button.
Press and release the Select button. Release the Down button.
c. The text "WWNN" is displayed on line one of the display. Line 2 of the display contains
the last five characters of the WWNN.
d. Record this number in a safe place. It will be needed when the customer wants to stop
using a spare node.

Replacing the SVC 4F2 nodes


Check the prerequisites in “Prerequisites for upgrading a cluster from 4F2 to 8F4 nodes” on
page 722 before replacing the nodes.

Perform the following steps to replace nodes:


1. Perform the following steps to record the WWNN of the node that you want to replace:
a. Issue the following command from the command-line interface (CLI):
svcinfo lsnode node_name or node_id
Where node_name or node_id is the name or ID of the node for which you want to
determine the WWNN.
b. Record the WWNN of the node that you want to replace.

Example: D-5 WWNN from the svcinfo lsnode command
IBM_2145:ITSOSVC01:admin>svcinfo lsnode 1
id 1
name node1
UPS_serial_number YM100053N093
WWNN 500507680100188E
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 2040000143780243
port_id 500507680140188E
port_status active
port_speed 2Gb
port_id 500507680130188E
port_status active
port_speed 2Gb
port_id 500507680110188E
port_status active
port_speed 2Gb
port_id 500507680120188E
port_status active
port_speed 2Gb
hardware 4F2

2. Issue the following CLI command to delete this node from the cluster and I/O group:
svctask rmnode node_name or node_id
Where node_name or node_id is the name or ID of the node that you want to delete.

Note: The node is not deleted until the SAN Volume Controller cache is destaged to disk.
During this time, the partner node in the I/O group transitions to write through mode. You
can use the command-line interface (CLI) to verify that the deletion process has
completed.

3. Issue the following CLI command to ensure that the node is no longer a member of the
cluster:
svcinfo lsnode node_name or node_id
Where node_name or node_id is the name or ID of the node. The node is no longer listed in
the command output. In the following Example D-6, the first output shows both nodes with the
status online. After removing node2, the node first shows the status pending, and after a few
seconds it disappears from the lsnode output.

Example: D-6 Show lsnode with one remaining node


IBM_2145:ITSOSVC01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id
IO_group_name config_node UPS_unique_id hardware
1 node1 YM100053N093 500507680100188E online 0
io_grp0 yes 2040000143780243 4F2
2 node2 YM100053N094 5005076801001883 online 0
io_grp0 no 2040000143780244 4F2
IBM_2145:ITSOSVC01:admin>svctask rmnode node2
IBM_2145:ITSOSVC01:admin>svcinfo lsnode

id name UPS_serial_number WWNN status IO_group_id
IO_group_name config_node UPS_unique_id hardware
1 node1 YM100053N093 500507680100188E online 0
io_grp0 yes 2040000143780243 4F2
2 node2 YM100053N094 0000000000000000 pending 0
io_grp0 no 2040000143780244 unknown
IBM_2145:ITSOSVC01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id
IO_group_name config_node UPS_unique_id hardware
1 node1 YM100053N093 500507680100188E online 0
io_grp0 yes 2040000143780243 4F2
IBM_2145:ITSOSVC01:admin>

4. Perform the following steps to change the WWNN of the node that you just deleted from
the cluster to FFFFF:
a. From the front panel of the node, press the up button and then use the right and left
navigation buttons to display the Node Status menu.
b. Press and hold the up button and press the select button. The WWNN of the node is
displayed.
c. Press the down and select buttons to start the WWNN edit mode. The first character of
the WWNN is highlighted.
d. Press the up or down button to increment or decrement the character that is displayed.
Note: The characters wrap F to 0 or 0 to F.
e. Press the left navigation button to move to the next field or the right navigation button
to return to the previous field and repeat step 4d for each field. At the end of this step,
the characters that are displayed must be FFFFF.
f. Press the select button to retain the characters that you have updated and return to the
WWNN Selection menu.
g. Press the select button to apply the characters as the new WWNN for the node.
5. Power off and remove the node from the rack.

Tip: Record and mark the order of the fibre-channel cables so that you can use the same
order for the replacement node.

6. Install the replacement node in the rack and connect the 2145 UPS-1U cables.

Important: Do not connect the fibre-channel cables during this step.

7. Power-on the node.


8. Perform the following steps to change the WWNN of the replacement node to match the
WWNN that you recorded in step 1:
a. From the front panel of the node, press the up button and then use the right and left
navigation buttons to display the Node Status menu.
b. Press and hold the up button and press the select button. The WWNN of the node is
displayed.
c. Press the down and select buttons to start the WWNN edit mode. The first character of
the WWNN is highlighted.
d. Press the up or down button to increment or decrement the character that is displayed.
Note: The characters wrap F to 0 or 0 to F.

e. Press the left navigation button to move to the next field or the right navigation button
to return to the previous field and repeat step 8d for each field. At the end of this step,
the characters that are displayed must be the same as the WWNN you recorded in
step 1.
f. Press the select button to retain the characters that you have updated and return to the
WWNN Selection menu.
g. Press the select button to apply the characters as the new WWNN for the node.
9. Connect the fibre-channel cables to the node.
10.Issue the following CLI command to verify that the last five characters of the WWNN are
correct:
svcinfo lsnodecandidate

Important: If the WWNN is not correct, you must repeat step 8.

Example: D-7 Show candidate node


IBM_2145:ITSOSVC01:admin>svcinfo lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id hardware
5005076801001883 010977 YM100053N094 2040000143780244 8F4

11.Add the node to the cluster and I/O group:
svctask addnode -wwnodename WWNN -iogrp io_grp0
For more information about the svctask addnode CLI command, see the IBM System
Storage SAN Volume Controller: Command-Line Interface User’s Guide.

Example: D-8 Show addnode


IBM_2145:ITSOSVC01:admin>svctask addnode -wwnodename 5005076801001883 -iogrp io_grp0
Node, id [3], successfully added
IBM_2145:ITSOSVC01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id
IO_group_name config_node UPS_unique_id hardware
1 node1 YM100053N093 500507680100188E online 0
io_grp0 yes 2040000143780243 4F2
3 node3 YM100053N094 0000000000000000 adding 0
io_grp0 no 2040000143780244 unknown
IBM_2145:ITSOSVC01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id
IO_group_name config_node UPS_unique_id hardware
1 node1 YM100053N093 500507680100188E online 0
io_grp0 yes 2040000143780243 4F2
3 node3 YM100053N094 5005076801001883 adding 0
io_grp0 no 2040000143780244 8F4
IBM_2145:ITSOSVC01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id
IO_group_name config_node UPS_unique_id hardware
1 node1 YM100053N093 500507680100188E online 0
io_grp0 yes 2040000143780243 4F2
3 node3 YM100053N094 5005076801001883 adding 0
io_grp0 no 2040000143780244 8F4
IBM_2145:ITSOSVC01:admin>svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id
IO_group_name config_node UPS_unique_id hardware
1 node1 YM100053N093 500507680100188E online 0
io_grp0 yes 2040000143780243 4F2

3 node3 YM100053N094 5005076801001883 online 0
io_grp0 no 2040000143780244 8F4
IBM_2145:ITSOSVC01:admin>

Important: Both nodes in the I/O group cache data; however, the cache sizes are
asymmetric if the remaining partner node in the I/O group is a SAN Volume Controller
2145-4F2 node. The replacement node is limited by the cache size of its partner node and
therefore does not utilize the full size of its own cache.
From the host point of view, you do not have to reconfigure the host multipathing device
drivers, because the replacement node uses the same WWNN as the previous node. The
multipathing device drivers should detect the recovery of the paths that are available to the
replacement node. The drivers can take up to approximately 30 minutes to recover the
paths, but in most cases recovery is significantly quicker.

12.See the documentation that is provided with your multipathing device driver for information
on how to query paths to ensure that all paths have been recovered before proceeding to
the next step.
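For example, with SDD the path states can be queried from the host as follows (a sketch; the
exact command and output format depend on your multipathing driver and platform):
datapath query device
All paths to the replacement node should be back in an OPEN/NORMAL state (or your
driver’s equivalent) before you continue.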
13.Repeat steps 1 to 12 for each node that you want to replace.

Note: If you upgrade both SAN Volume Controller 2145-4F2 nodes in the I/O group to SAN
Volume Controller 2145-8F4 nodes, the cache sizes are symmetric and the full 8 GB of
cache is utilized.

Replacing the nodes within the I/O group by rezoning the SAN
Before replacing the nodes, check the prerequisites in “Prerequisites for upgrading
a cluster from 4F2 to 8F4 nodes” on page 722.
1. Quiesce all I/O from the hosts that access the I/O group of the node that you are replacing.
2. Delete the node that you want to replace from the cluster and I/O group.

Note: The node is not deleted until the SAN Volume Controller cache is destaged to disk.
During this time, the partner node in the I/O group transitions to write through mode. You
can use the command-line interface (CLI) or the SAN Volume Controller Console to verify
that the deletion process has completed.

3. Ensure that the node is no longer a member of the cluster.
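A minimal CLI sketch of steps 2 and 3 (the node name is an assumption; use the name or
ID reported by svcinfo lsnode):
svctask rmnode node3
svcinfo lsnode
When the cache destage has finished, the deleted node no longer appears in the svcinfo
lsnode output.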


4. Power off the node and remove it from the rack.
5. Install the replacement (new) 8F4 node in the rack and connect the uninterruptible power
supply (UPS) cables and the Fibre Channel cables.
6. Power on the node.
7. Rezone your switch zones to remove the ports of the node that you are replacing from the
host and storage zones. Replace these ports with the ports of the replacement node.
8. Add the replacement node to the cluster and I/O group.

Important: Both nodes in the I/O group cache data; however, the cache sizes are
asymmetric. The replacement node is limited by the cache size of the partner node in the
I/O group. Therefore, the replacement node does not utilize the full size of its cache.
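Step 8 uses the same addnode command shown in Example D-8 (the WWNN here is an
assumption; use the value reported by svcinfo lsnodecandidate):
svctask addnode -wwnodename 5005076801001883 -iogrp io_grp0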

9. From each host, issue a rescan of the multipathing software to discover the new paths to
VDisks.

Note: If your system is inactive, you can perform this step after you have replaced all
nodes in the cluster. The host multipathing device drivers take up to approximately 30
minutes to recover the paths, but in most cases it is significantly quicker.
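For example, on an AIX host running SDD, the rescan and path check might look like the
following (a sketch; other platforms use their own discovery commands):
cfgmgr
datapath query device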

10.See the documentation that is provided with your multipathing device driver for information
on how to query paths to ensure that all paths have been recovered before proceeding to
the next step.
11.Repeat steps 1 to 10 for the partner node in the I/O group.

Note: After you have upgraded both nodes in the I/O group, the cache sizes are symmetric
and the full 8 GB of cache is utilized.

12.Repeat steps 1 to 11 for each node in the cluster that you want to replace.
13.Resume host I/O.

Replacing the nodes by rezoning and moving VDisks to a new I/O group
Before replacing the nodes, check the prerequisites in “Prerequisites for upgrading
a cluster from 4F2 to 8F4 nodes” on page 722.
1. Quiesce all I/O from the hosts that access the I/O groups of the nodes that you are
replacing.
2. Zone in the ports of the replacement (new) 8F4 nodes.
3. Add two replacement nodes to the cluster to create a new I/O group.
4. Move all of the VDisks from the I/O group of the nodes you are replacing to the new I/O
group.
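A hedged CLI sketch of step 4 (the VDisk and I/O group names are assumptions):
svctask chvdisk -iogrp io_grp1 VDISK1
Repeat the command for each VDisk that is owned by the I/O group being replaced.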
5. From each host, issue a rescan of the multipathing software to discover the new paths to
VDisks. The host multipathing device drivers take up to approximately 30 minutes to
recover the paths, but in most cases it is significantly quicker.
6. See the documentation that is provided with your multipathing device driver for information
on how to query paths to ensure that all paths have been recovered before proceeding to
the next step.
7. Delete the nodes that you are replacing from the cluster and remove the ports from the
switch zones.
8. Repeat steps 1 to 7 for each node in the cluster that you want to replace.

Appendix E. HPUX11i Metro Mirror using PVlinks

Note: This appendix was provided by:

John Cooper ([email protected])


Azeem Jafer ([email protected])
Clement Yau ([email protected])

All questions about this information should be directed to these individuals.

In this test, the source VDisk remains varied online with its file system mounted. SDD was
previously installed but was removed (from the last example); in doing so, our existing volume
group physical volumes were moved over to the corresponding native device files.

When implementing Metro Mirror using native HPUX PVlinks, it is especially critical that we
be able to determine which device files on the host represent our source and target LUNs.
Without SDD, we have no straightforward way to get the serial number of the disk (the HP
diskinfo command does not provide it).

We have used a utility called scu to obtain the device serial number. This utility can be
obtained at the following URL:
http://home.comcast.net/~SCSIguy/SCSI_FAQ/RMiller_Tools/scu.html

Here is an example of using the tool to determine the LUN serial number:
# cd /opt/PA-RISC
# ./scu -f /dev/rdsk/c125t0d0

Then, at the scu prompt, run the following command:


scu> show inquiry pages
Unit Serial Number Page:
Page Code: 0x80
Page Length: 16

Product Serial Number: 02006120029aXX00
Device Identification Page:
Page Code: 0x83
Page Length: 56
Code Set: 0x1 (identifier is binary)
Identifier Type: 0x3
Identifier Length: 16
FC-PH Name Identifier: 6005-0768-0184-800a-6800-0000-0000-001d

The FC-PH Name Identifier value above corresponds to a VDisk UID, allowing a device file
to be properly matched to a particular SVC VDisk.
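The UID can be cross-checked from the SVC CLI; for example, svcinfo lshostvdiskmap lists
a vdisk_UID column (host ID 6 is the host defined in Appendix F):
IBM_2145:CLSSSVC:admin>svcinfo lshostvdiskmap 6
Matching the FC-PH Name Identifier 6005-0768-0184-800a-6800-0000-0000-001d to
vdisk_UID 600507680184800A680000000000001D ties /dev/rdsk/c125t0d0 to the hp_src
VDisk.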

Query the source vdisk volume group


# vgdisplay -v /dev/srcvg
--- Volume groups ---
VG Name /dev/srcvg
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 1016
VGDA 2
PE Size (Mbytes) 4
Total PE 512
Alloc PE 250
Free PE 262
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0

--- Logical volumes ---


LV Name /dev/srcvg/srclv
LV Status available/syncd
LV Size (Mbytes) 1000
Current LE 250
Allocated PE 250
Used PV 1

--- Physical volumes ---


PV Name /dev/dsk/c125t0d0
PV Name /dev/dsk/c82t0d0 Alternate Link
PV Status available
Total PE 512
Free PE 262
Autoswitch On

Vary off and export the target volume group (tgtvg); leave the source volume group (srcvg) online.
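A minimal sketch of this step with standard HP-UX commands (the mount point is an
assumption; the same commands appear with real output at the end of Appendix F):
# umount /tgtfs
# vgchange -a n /dev/tgtvg
# vgexport -m /dev/null /dev/tgtvg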

Remove the Metro Mirror relationship at the SVC (left over from previous testing with SDD).
IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name
aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary
consistency_group_id consistency_group_name state bg_copy_priority progress
23 hp_rc_tst 000002006120029A CLSSSVC 23 hp_src 000002006120029A CLSSSVC 24 hp_tgt
idling 50
IBM_2145:CLSSSVC:admin>svctask rmrcrelationship 23
IBM_2145:CLSSSVC:admin>
IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship
IBM_2145:CLSSSVC:admin>

Re-create the Metro Mirror relationship
IBM_2145:CLSSSVC:admin>svctask mkrcrelationship -master hp_src -aux hp_tgt -name
hp_rc_tst -cluster 000002006120029A
RC Relationship, id [23], successfully created
IBM_2145:CLSSSVC:admin>
IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:au
x_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_grou
p_id:consistency_group_name:state:bg_copy_priority:progress
23:hp_rc_tst:000002006120029A:CLSSSVC:23:hp_src:000002006120029A:CLSSSVC:24:hp_tgt
:master:::inconsistent_stopped:50:0
IBM_2145:CLSSSVC:admin>

Observe that our source vg is still mounted:


# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 204800 73072 123631 37% /
/dev/vg00/lvol1 299157 62130 207111 23% /stand
/dev/vg00/lvol8 1560576 845573 671140 56% /var
/dev/vg00/lvol7 1310720 1106402 191579 85% /usr
/dev/vg00/lvol4 360448 132548 214707 38% /tmp
/dev/vg00/lvol6 1228800 992841 221379 82% /opt
/dev/vg00/lvol5 20480 2468 16952 13% /home
/dev/e20vg/e20lv 12022068 1585474 9234387 15% /e20
/dev/ds8kvg/ds8klv 8014635 6940 7206231 0% /ds8kfs
/dev/srcvg/srclv 1001729 9 901547 0% /srcfs

Start the Metro Mirror relationship
IBM_2145:CLSSSVC:admin>svctask startrcrelationship hp_rc_tst

Create a large file on the source vg while the Metro Mirror relationship is active
# cd /srcfs
# dd if=/dev/zero of=bigfile count=10000 bs=32k

Query the status of the Metro Mirror relationship


IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:au
x_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_grou
p_id:consistency_group_name:state:bg_copy_priority:progress
23:hp_rc_tst:000002006120029A:CLSSSVC:23:hp_src:000002006120029A:CLSSSVC:24:hp_tgt
:master:::inconsistent_copying:50:18

The Metro Mirror relationship is now consistent and synchronized
IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:au
x_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_grou
p_id:consistency_group_name:state:bg_copy_priority:progress
23:hp_rc_tst:000002006120029A:CLSSSVC:23:hp_src:000002006120029A:CLSSSVC:24:hp_tgt
:master:::consistent_synchronized:50:
IBM_2145:CLSSSVC:admin>

Stop the Metro Mirror relationship; ensure that the -access flag is used to allow the target volume to be used
IBM_2145:CLSSSVC:admin>svctask stoprcrelationship -access hp_rc_tst
IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:au
x_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_grou
p_id:consistency_group_name:state:bg_copy_priority:progress
23:hp_rc_tst:000002006120029A:CLSSSVC:23:hp_src:000002006120029A:CLSSSVC:24:hp_tgt
::::idling:50:

Run vgchgid on the PVlink for the target VDisk


# vgchgid -f /dev/rdsk/c125t0d1
The -f option will alter the VGID on physical volume "". Do you want to continue
(y/n)?
y

Create target vg
# cd /dev/
# mkdir tgtvg
# cd /tmp
# vgimport -m srcvg.map -v /dev/tgtvg /dev/dsk/c125t0d1
Beginning the import process on Volume Group "/dev/tgtvg".
Logical volume "/dev/tgtvg/srclv" has been successfully created
with lv number 1.
Volume group "/dev/tgtvg" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating
the volume group.

Vary on the target vg
# vgchange -a y /dev/tgtvg
Activated volume group
Volume group "/dev/tgtvg" has been successfully changed.
#

Extend target vg by its alternate link


# vgextend tgtvg /dev/dsk/c82t0d1
Volume group "tgtvg" has been successfully extended.
Volume Group configuration for /dev/tgtvg has been saved in
/etc/lvmconf/tgtvg.conf

Run fsck
# fsck -F hfs -y /dev/tgtvg/srclv
** /dev/tgtvg/srclv
** Last Mounted on /srcfs
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
13 files, 0 icont, 1001721 used, 8 free (8 frags, 0 blocks)
***** MARKING FILE SYSTEM CLEAN *****

***** FILE SYSTEM WAS MODIFIED *****

Check the contents of the file system; note that our bigfile is there


# mount /dev/tgtvg/srclv /tgtfs
# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 204800 73073 123629 37% /
/dev/vg00/lvol1 299157 62130 207111 23% /stand
/dev/vg00/lvol8 1560576 845699 671022 56% /var
/dev/vg00/lvol7 1310720 1106402 191579 85% /usr
/dev/vg00/lvol4 360448 132548 214707 38% /tmp
/dev/vg00/lvol6 1228800 992841 221379 82% /opt
/dev/vg00/lvol5 20480 2468 16952 13% /home
/dev/e20vg/e20lv 12022068 1585474 9234387 15% /e20
/dev/ds8kvg/ds8klv 8014635 6940 7206231 0% /ds8kfs
/dev/srcvg/srclv 1001729 1001721 0 100% /srcfs
/dev/tgtvg/srclv 1001729 1001721 0 100% /tgtfs
# cd /tgtfs
# ls
a bigfile d lll ooo y
b c jjj lost+found x z
#

Appendix F. HPUX11i Metro Mirror with SDD vpath devices

Note: This appendix was provided by:

John Cooper ([email protected])


Azeem Jafer ([email protected])
Clement Yau ([email protected])

All questions about this information should be directed to these individuals.

Summary of activities
These are the activities we performed:
- Define the host and assign VDisks from the SVC.
- Create a vg and file system on the source VDisk.
- Create some files.
- Vary off the source.
- Perform Metro Mirror.
- Stop Metro Mirror, enabling write access to the target VDisk.
- Vary on and remount the source file system.
- Run vgchgid and vgimport against the target VDisk.
- Vary on and mount the target.
- Compare the source and target contents.

Create HP host on 2145


IBM_2145:CLSSSVC:admin>svctask mkhost -name clsshp -hbawwpn 50060b000009ca1c
-iogrp 0 -type hpux -force

Map 2 LUNs
IBM_2145:CLSSSVC:admin>svcinfo lshostvdiskmap 6
id name SCSI_id vdisk_id vdisk_name
wwpn vdisk_UID
6 clsshp 0 23 hp_src
50060B000009CA1C 600507680184800A680000000000001D
6 clsshp 1 24 hp_tgt
50060B000009CA1C 600507680184800A680000000000001E

Modify the switch zone to allow the HPUX host access to a pair of SVC ports
G3LAB_F32a:admin> zoneshow CLSSHP_Disk
zone: CLSSHP_Disk
CLSSE20_B3; CLSSE20_B4; CLSSF20_B1; CLSSF20_B4; CLSSHP;
DS8Ka_I0000; DS8Ka_i0100; DS8Ka_i0130; DS8Kb_i0200;
DS8Kb_i0300; DS8Kb_i0330; SVC_N1P1_Host; SVC_N1P2_Host;
SVC_N2P1_Host; SVC_N2P2_Host

Create devices and vpaths on HPUX host


# ioscan -fnC disk
Class I H/W Path Driver S/W State H/W Type Description
===========================================================================
disk 1 0/0/2/0.6.0 sdisk CLAIMED DEVICE IBM-PSG ST39204LC !#
/dev/dsk/c1t6d0 /dev/rdsk/c1t6d0
disk 42 1/12/0/0.16.5.0.44.0.2 sdisk CLAIMED DEVICE IBM 2105E20
/dev/dsk/c120t0d2 /dev/rdsk/c120t0d2
disk 137 1/12/0/0.16.8.0.42.0.0 sdisk CLAIMED DEVICE IBM 2107900
/dev/dsk/c124t0d0 /dev/rdsk/c124t0d0
disk 140 1/12/0/0.16.9.0.0.0.0 sdisk CLAIMED DEVICE IBM 2145
disk 141 1/12/0/0.16.9.0.0.0.1 sdisk CLAIMED DEVICE IBM 2145
disk 138 1/12/0/0.17.2.0.0.0.0 sdisk CLAIMED DEVICE IBM 2145
disk 139 1/12/0/0.17.2.0.0.0.1 sdisk CLAIMED DEVICE IBM 2145
disk 58 1/12/0/0.17.21.0.42.0.0 sdisk CLAIMED DEVICE IBM 2107900
/dev/dsk/c122t0d0 /dev/rdsk/c122t0d0
disk 10 1/12/0/0.17.26.0.44.0.2 sdisk CLAIMED DEVICE IBM 2105E20
/dev/dsk/c119t0d2 /dev/rdsk/c119t0d2
# insf -evC disk
insf: Installing special files for sdisk instance 1 address 0/0/2/0.6.0
insf: Installing special files for sdisk instance 42 address
1/12/0/0.16.5.0.44.0.2
insf: Installing special files for sdisk instance 137 address
1/12/0/0.16.8.0.42.0.0
insf: Installing special files for sdisk instance 140 address
1/12/0/0.16.9.0.0.0.0
making dsk/c82t0d0 b 31 0x520000
making rdsk/c82t0d0 c 188 0x520000
insf: Installing special files for sdisk instance 141 address
1/12/0/0.16.9.0.0.0.1
making dsk/c82t0d1 b 31 0x520100
making rdsk/c82t0d1 c 188 0x520100
insf: Installing special files for sdisk instance 138 address
1/12/0/0.17.2.0.0.0.0
making dsk/c125t0d0 b 31 0x7d0000
making rdsk/c125t0d0 c 188 0x7d0000

insf: Installing special files for sdisk instance 139 address
1/12/0/0.17.2.0.0.0.1
making dsk/c125t0d1 b 31 0x7d0100
making rdsk/c125t0d1 c 188 0x7d0100
insf: Installing special files for sdisk instance 58 address
1/12/0/0.17.21.0.42.0.0
insf: Installing special files for sdisk instance 10 address
1/12/0/0.17.26.0.44.0.2
# 11:40:21 AM

# cfgvpath -r
Running Dynamic reconfiguration
Add disk: vpath3
Add disk: vpath4
Vpath: After reconfiguration we have 4 devices 11:40:52 AM
Dev#: 2 Device Name: vpath3 Type: 2145 Policy: Optimized
Serial: 600507680184800A680000000000001E
==================================================================================
Path# Adapter H/W Path Hard Disk State Mode Select Error
0 1/12/0/0 c125t0d1 CLOSE NORMAL 0 0
1 1/12/0/0 c82t0d1 CLOSE NORMAL 0 0

Dev#: 3 Device Name: vpath4 Type: 2145 Policy: Optimized


Serial: 600507680184800A680000000000001D
==================================================================================
Path# Adapter H/W Path Hard Disk State Mode Select Error
0 1/12/0/0 c125t0d0 CLOSE NORMAL 0 0
1 1/12/0/0 c82t0d0 CLOSE NORMAL 0 0 11:41:14 AM

Create an HPUX volume group using the source VDisk 23 (hp_src)
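The source volume group queried below was created earlier with standard HP-UX LVM
steps along these lines (a sketch; the group file minor number and logical volume size are
assumptions inferred from the vgdisplay output):
# pvcreate /dev/rdsk/vpath4
# mkdir /dev/srcvg
# mknod /dev/srcvg/group c 64 0x010000
# vgcreate /dev/srcvg /dev/dsk/vpath4
# lvcreate -L 1000 -n srclv /dev/srcvg
# newfs -F hfs /dev/srcvg/rsrclv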


# vgdisplay -v /dev/srcvg
--- Volume groups ---
VG Name /dev/srcvg
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 1016
VGDA 2
PE Size (Mbytes) 4
Total PE 512
Alloc PE 250
Free PE 262
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0

--- Logical volumes ---


LV Name /dev/srcvg/srclv
LV Status available/syncd
LV Size (Mbytes) 1000

Current LE 250
Allocated PE 250
Used PV 1

--- Physical volumes ---


PV Name /dev/dsk/vpath4
PV Status available
Total PE 512
Free PE 262
Autoswitch On

The file system is mounted
# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 204800 76067 120820 39% /
/dev/vg00/lvol1 299157 62564 206677 23% /stand
/dev/vg00/lvol8 1560576 845843 670883 56% /var
/dev/vg00/lvol7 1310720 1106850 191156 85% /usr
/dev/vg00/lvol4 360448 132548 214707 38% /tmp
/dev/vg00/lvol6 1228800 995817 218580 82% /opt
/dev/vg00/lvol5 20480 2468 16952 13% /home
/dev/e20vg/e20lv 12022068 1585470 9234391 15% /e20
/dev/ds8kvg/ds8klv 8014635 6940 7206231 0% /ds8kfs
/dev/srcvg/srclv 1001729 9 901547 0% /srcfs

Touch some files
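The files shown in the listing were created beforehand with a simple touch; the command
itself is not part of the captured session:
# cd /srcfs
# touch a b c d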


# cd /srcfs
# pwd
/srcfs
# ls
a b c d lost+found

Observe that we now have selects (completed I/O operations) recorded on our vpath


# 12:13:35 PM
Dev#: 3 Device Name: vpath4 Type: 2145 Policy: Optimized
Serial: 600507680184800A680000000000001D
==================================================================================
Path# Adapter H/W Path Hard Disk State Mode Select Error
0 1/12/0/0 c125t0d0 OPEN NORMAL 0 0
1 1/12/0/0 c82t0d0 OPEN NORMAL 778 0

Preparation for Metro Mirror


Unmount the file system and vary off the volume group
# umount /srcfs
# vgchange -a n /dev/srcvg
Volume group "/dev/srcvg" has been successfully changed.

Preview the export and create a map file


# cd /tmp
# vgexport -m srcvg.map -p /dev/srcvg

Define the Metro Mirror relationship
svctask mkrcrelationship -master hp_src -aux hp_tgt -name hp_rc_tst -cluster 000002006120029A

Query the relationship


IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:au
x_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_grou
p_id:consistency_group_name:state:bg_copy_priority:progress
23:hp_rc_tst:000002006120029A:CLSSSVC:23:hp_src:000002006120029A:CLSSSVC:24:hp_tgt
:master:::inconsistent_stopped:50:0

IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship hp_rc_tst


id 23
name hp_rc_tst
master_cluster_id 000002006120029A
master_cluster_name CLSSSVC
master_vdisk_id 23
master_vdisk_name hp_src
aux_cluster_id 000002006120029A
aux_cluster_name CLSSSVC
aux_vdisk_id 24
aux_vdisk_name hp_tgt
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync

Start the Metro Mirror relationship
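The start command itself is not shown in the captured session; it is the same command
used in Appendix E:
IBM_2145:CLSSSVC:admin>svctask startrcrelationship hp_rc_tst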


List again
IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name
master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id
aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress
23 hp_rc_tst 000002006120029A CLSSSVC 23
hp_src 000002006120029A CLSSSVC 24 hp_tgt
master inconsistent_copying 50
24

IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship -delim :


id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:au
x_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_grou
p_id:consistency_group_name:state:bg_copy_priority:progress
23:hp_rc_tst:000002006120029A:CLSSSVC:23:hp_src:000002006120029A:CLSSSVC:24:hp_tgt
:master:::inconsistent_copying:50:33

IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship hp_rc_tst
id 23
name hp_rc_tst
master_cluster_id 000002006120029A
master_cluster_name CLSSSVC
master_vdisk_id 23
master_vdisk_name hp_src
aux_cluster_id 000002006120029A
aux_cluster_name CLSSSVC
aux_vdisk_id 24
aux_vdisk_name hp_tgt
primary master
consistency_group_id
consistency_group_name
state inconsistent_copying
bg_copy_priority 50
progress 39
freeze_time
status online
sync

The background copy is complete
IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:au
x_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_grou
p_id:consistency_group_name:state:bg_copy_priority:progress
23:hp_rc_tst:000002006120029A:CLSSSVC:23:hp_src:000002006120029A:CLSSSVC:24:hp_tgt
:master:::consistent_synchronized:50:

Remounting the source volume


# vgchange -a y /dev/srcvg
Activated volume group
Volume group "/dev/srcvg" has been successfully changed.
# mount /dev/srcvg/srclv /srcfs

Touch more files


# cd /srcfs
# ls
a b c d lost+found
# touch x y z
# ls -l
total 16
-rw-r--r-- 1 root sys 0 Jun 1 13:47 a
-rw-r--r-- 1 root sys 0 Jun 1 13:47 b
-rw-r--r-- 1 root sys 0 Jun 1 13:47 c
-rw-r--r-- 1 root sys 0 Jun 1 13:47 d
drwxr-xr-x 2 root root 8192 Jun 1 13:47 lost+found
-rw-r--r-- 1 root sys 0 Jun 1 14:03 x
-rw-r--r-- 1 root sys 0 Jun 1 14:03 y
-rw-r--r-- 1 root sys 0 Jun 1 14:03 z

Stop the Metro Mirror relationship, and permit read/write access to the
target volume
IBM_2145:CLSSSVC:admin>svctask stoprcrelationship -access hp_rc_tst

List status again


IBM_2145:CLSSSVC:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:au
x_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_grou
p_id:consistency_group_name:state:bg_copy_priority:progress
23:hp_rc_tst:000002006120029A:CLSSSVC:23:hp_src:000002006120029A:CLSSSVC:24:hp_tgt
::::idling:50:

Modify the VGID on the PV for the secondary volume (it cannot be the same as the source)

This step is required if the target VDisk is used on the same server as the source.
# vgchgid -f /dev/rdsk/vpath3
The -f option will alter the VGID on physical volume "". Do you want to continue
(y/n)?
y

Re-import the VG using the target vpath device


# cd /tmp
# vgimport -m srcvg.map -v /dev/tgtvg /dev/dsk/vpath3
Beginning the import process on Volume Group "/dev/tgtvg".
Logical volume "/dev/tgtvg/srclv" has been successfully created
with lv number 1.
Volume group "/dev/tgtvg" has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating
the volume group.
# vgchange -a y /dev/tgtvg
Activated volume group
Volume group "/dev/tgtvg" has been successfully changed.
# fsck -F hfs -y /dev/tgtvg/srclv
** /dev/tgtvg/srclv
** Last Mounted on /srcfs
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
9 files, 0 icont, 9 used, 1001720 free (8 frags, 125214 blocks)
***** MARKING FILE SYSTEM CLEAN *****

***** FILE SYSTEM WAS MODIFIED *****


#

Check contents of the target volume
# mkdir /tgtfs
# mount /dev/tgtvg/srclv /tgtfs
# cd /tgtfs
# ls -l
total 16
-rw-r--r-- 1 root sys 0 Jun 1 13:47 a
-rw-r--r-- 1 root sys 0 Jun 1 13:47 b
-rw-r--r-- 1 root sys 0 Jun 1 13:47 c
-rw-r--r-- 1 root sys 0 Jun 1 13:47 d
drwxr-xr-x 2 root root 8192 Jun 1 13:47 lost+found
-rw-r--r-- 1 root sys 0 Jun 1 14:31 x
-rw-r--r-- 1 root sys 0 Jun 1 14:31 y
-rw-r--r-- 1 root sys 0 Jun 1 14:31 z
#

# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 204800 76069 120817 39% /
/dev/vg00/lvol1 299157 62564 206677 23% /stand
/dev/vg00/lvol8 1560576 845976 670758 56% /var
/dev/vg00/lvol7 1310720 1106850 191156 85% /usr
/dev/vg00/lvol4 360448 132549 214706 38% /tmp
/dev/vg00/lvol6 1228800 995817 218580 82% /opt
/dev/vg00/lvol5 20480 2468 16952 13% /home
/dev/e20vg/e20lv 12022068 1585470 9234391 15% /e20
/dev/ds8kvg/ds8klv 8014635 6940 7206231 0% /ds8kfs
/dev/srcvg/srclv 1001729 9 901547 0% /srcfs
/dev/tgtvg/srclv 1001729 9 901547 0% /tgtfs

To finish, unmount the file systems, vary off the volume groups, and export them


# cd /
# cd /tmp
# umount /srcfs
# umount /tgtfs

# vgchange -a n /dev/srcvg
Volume group "/dev/srcvg" has been successfully changed.
# vgexport -m srcvg.map -p /dev/srcvg

# vgchange -a n /dev/tgtvg
Volume group "/dev/tgtvg" has been successfully changed.
# vgexport -m /dev/null /dev/tgtvg

Related publications

The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.

IBM Redbooks
For information about ordering these publications, see “How to get IBM Redbooks” on
page 744.
- Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
- IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
- IBM TotalStorage: Introducing the SAN File System, SG24-7057
- Virtualization in a SAN, REDP3633
- SAN: What is a VSAN?, TIPS0199

Other resources
These publications are also relevant as further information sources:
- IBM System Storage Open Software Family SAN Volume Controller: Planning Guide, GA22-1052
- IBM System Storage Master Console: Installation and User’s Guide, GC30-4090
- IBM System Storage Open Software Family SAN Volume Controller: Installation Guide, SC26-7541
- IBM System Storage Open Software Family SAN Volume Controller: Service Guide, SC26-7542
- IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543
- IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544
- IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developers Reference, SC26-7545
- IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096
- IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563

Referenced Web sites
These Web sites are also relevant as further information sources:
- IBM TotalStorage home page:
http://www.storage.ibm.com
- SAN Volume Controller supported platform:
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
- Download site for Windows SSH freeware:
http://www.chiark.greenend.org.uk/~sgtatham/putty
- IBM site to download SSH for AIX:
http://oss.software.ibm.com/developerworks/projects/openssh
- Open source site for SSH for Windows and Mac:
http://www.openssh.com/windows.html
- Cygwin Linux-like environment for Windows:
http://www.cygwin.com
- IBM Tivoli Storage Area Network Manager site:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
- Microsoft Knowledge Base Article 131658:
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
- Microsoft Knowledge Base Article 149927:
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
- Sysinternals home page:
http://www.sysinternals.com
- Subsystem Device Driver download site:
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
- IBM TotalStorage Virtualization Home Page:
http://www-1.ibm.com/servers/storage/software/virtualization/index.html

How to get IBM Redbooks


Hardcopy Redbooks can be ordered, and softcopy Redbooks viewed, downloaded, or
searched at the following Web site:
ibm.com/redbooks

Additional materials (code samples or diskette/CD-ROM images) can also be downloaded from that site.

IBM Redbooks collections


Redbooks are also available on CD-ROMs. Click the CD-ROMs button on the Redbooks Web
site for information about all the CD-ROMs offered, as well as updates and formats.


IBM System Storage
SAN Volume Controller
IBM System Storage
SAN Volume Controller
(1.0” spine)
0.875”<->1.498”
460 <-> 788 pages
IBM System Storage
IBM System Storage
IBM System Storage
SAN Volume Controller
IBM System Storage
SAN Volume Controller
Back cover ®

IBM System Storage


SAN Volume Controller

Install, use, and This IBM Redbook is an updated, detailed technical guide to the
IBM System Storage SAN Volume Controller (SVC), a virtualization
INTERNATIONAL
troubleshoot the SAN
appliance solution that maps virtualized volumes visible to hosts TECHNICAL
Volume Controller
and applications to physical volumes on storage devices. SUPPORT
ORGANIZATION
Learn how to
implement block Each server within the SAN has its own set of virtual storage
addresses, which are mapped to a physical address. If the
virtualization
physical addresses change, the server continues running using
the same virtual addresses it had before. This means that
Perform backup and BUILDING TECHNICAL
volumes or storage can be added or moved while the server is INFORMATION BASED ON
restore on a cluster still running. PRACTICAL EXPERIENCE
IBM's virtualization technology improves management of IBM Redbooks are developed
information at the “block” level in a network, enabling by the IBM International
applications and servers to share storage devices on a network. Technical Support
Organization. Experts from
IBM, Customers and Partners
This book covers the following areas: from around the world create
- Storage virtualization high-level overview timely technical information
- Architecture of the SVC based on realistic scenarios.
- Implementing and configuring the SVC Specific recommendations
- Using virtualization and advanced copy services functions are provided to help you
implement IT solutions more
- Migrating existing storage to the SVC effectively in your
environment.

For more information:


ibm.com/redbooks

SG24-6423-04 ISBN 0738494828
