SVC 7.2 Implementation
Front cover
Implementing the IBM System Storage SAN Volume Controller V7.2
Install, use, and troubleshoot the SAN Volume Controller
Become familiar with the exciting new GUI
Learn how to use the Easy Tier function
Sangam Racherla
Matus Butora
Hartmut Lonzer
Libor Miklas
ibm.com/redbooks
International Technical Support Organization
Implementing the IBM System Storage SAN Volume Controller V7.2
February 2014
SG24-7933-02
Copyright International Business Machines Corporation 2014. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Third Edition (February 2014)
This edition applies to Version 7.2 of the IBM System Storage SAN Volume Controller.
This document was created or updated on March 27, 2014.
Note: Before using this information and the product it supports, read the information in Notices on page xix.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Authors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Summary of changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
February 2014, Third Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Chapter 1. Introduction to storage virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Storage virtualization terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 User requirements driving storage virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Benefits of using the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 What is new in SAN Volume Controller V7.2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Chapter 2. IBM System Storage SAN Volume Controller. . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Brief history of the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 SAN Volume Controller architectural overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.1 SAN Volume Controller topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 SAN Volume Controller terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4 SAN Volume Controller components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.1 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.2 I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.3 System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.4 Split cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.5 MDisks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.6 Quorum disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.7 Disk tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.8 Storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.9 Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.10 Easy Tier performance function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.11 Hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.12 Maximum supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5 Volume overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.1 Image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.2 Managed mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.3 Cache mode and cache-disabled volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5.4 Mirrored volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5.5 Thin-provisioned volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.5.6 Volume I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.6 iSCSI overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.6.1 Use of IP addresses and Ethernet ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.6.2 iSCSI volume discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.6.3 iSCSI authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.6.4 iSCSI multipathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.7 Advanced Copy Services overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.7.1 Synchronous/Asynchronous remote copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.7.2 FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.7.3 Image mode migration and volume mirroring migration . . . . . . . . . . . . . . . . . . . . 40
2.8 SAN Volume Controller clustered system overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.8.1 Quorum disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.8.2 Split I/O Groups or split cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.8.3 Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.8.4 Clustered system management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.8.5 IBM System Storage Productivity Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.9 User authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.9.1 Remote authentication via LDAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.9.2 SAN Volume Controller user names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.9.3 SAN Volume Controller superuser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.9.4 SAN Volume Controller Service Assistant Tool . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.9.5 SAN Volume Controller roles and user groups . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
2.9.6 SAN Volume Controller local authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.9.7 SAN Volume Controller remote authentication and single sign-on . . . . . . . . . . . . 58
2.10 SAN Volume Controller hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.10.1 Fibre Channel interfaces. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
2.10.2 LAN interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.10.3 FCoE interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.11 Solid-state drives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.11.1 Storage bottleneck problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2.11.2 Solid-state drive solution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.11.3 Solid-state drive market . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.11.4 Solid-state drives and SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.12 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.12.1 Evaluation mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.12.2 Automatic data placement mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.13 What is new with SAN Volume Controller 7.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.13.1 SAN Volume Controller 7.2 supported hardware list, device driver, and firmware levels . . . . . . . . . . . . . . . . . . . . . 67
2.13.2 SAN Volume Controller 7.2.0 new features . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
2.14 Useful SAN Volume Controller web links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Chapter 3. Planning and configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.1 General planning rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
3.2 Physical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.2.1 Preparing your uninterruptible power supply unit environment . . . . . . . . . . . . . . . 74
3.2.2 Physical rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.2.3 Cable connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.3 Logical planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.3.1 Management IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.3.2 SAN zoning and SAN connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.3.3 iSCSI IP addressing plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.3.4 IP Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3.3.5 Back-end storage subsystem configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.3.6 SAN Volume Controller clustered system configuration . . . . . . . . . . . . . . . . . . . . 97
3.3.7 Split-cluster system configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.3.8 Storage pool configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.3.9 Virtual disk configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.3.10 Host mapping (LUN masking) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.3.11 Advanced Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.3.12 SAN boot support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
3.3.13 Data migration from a non-virtualized storage subsystem . . . . . . . . . . . . . . . . 112
3.3.14 SAN Volume Controller configuration backup procedure . . . . . . . . . . . . . . . . . 113
3.4 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.4.1 SAN. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.4.2 Disk subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.4.3 SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.4.4 Performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Chapter 4. SAN Volume Controller initial configuration . . . . . . . . . . . . . . . . . . . . . . . 117
4.1 Managing the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.1.1 Network requirements for SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . 119
4.2 Setting up the SAN Volume Controller cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.2.1 Introducing the service panels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.2.2 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.2.3 Initiating cluster from the front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.3 Configuring the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.3.1 Completing the Create Cluster Wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.3.2 Changing the default superuser password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.3.3 Configuring the Service IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
4.3.4 Postrequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.4 Secure Shell overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.4.1 Generating public and private SSH key pairs using PuTTY . . . . . . . . . . . . . . . . 139
4.4.2 Uploading the SSH public key to the SAN Volume Controller cluster. . . . . . . . . 141
4.4.3 Configuring the PuTTY session for the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
4.4.4 Starting the PuTTY CLI session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4.4.5 Configuring SSH for AIX clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.5 Using IPv6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.5.1 Migrating a cluster from IPv4 to IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.5.2 Migrating a cluster from IPv6 to IPv4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Chapter 5. Host configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.1 Host attachment overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.2 SAN Volume Controller setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
5.2.1 Fibre Channel and SAN setup overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5.3 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
5.3.1 Initiators and targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.3.2 iSCSI nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.3.3 iSCSI qualified name (IQN). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
5.3.4 iSCSI setup for the SAN Volume Controller and host server . . . . . . . . . . . . . . . 162
5.3.5 Volume discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.3.6 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.3.7 Target failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.3.8 Host failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5.4 AIX-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.4.1 Configuring the AIX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5.4.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 166
5.4.3 HBAs for IBM System p hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.4.4 Configuring fast fail and dynamic tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.4.5 Installing the 2145 host attachment support package. . . . . . . . . . . . . . . . . . . . . 168
5.4.6 Subsystem Device Driver Path Control Module . . . . . . . . . . . . . . . . . . . . . . . . . 168
5.4.7 Configuring the assigned volume using SDDPCM . . . . . . . . . . . . . . . . . . . . . . . 169
5.4.8 Using SDDPCM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM. . . . . . . . 173
5.4.10 Expanding an AIX volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.4.11 Running SAN Volume Controller commands from an AIX host system . . . . . . 174
5.5 Windows-specific information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.5.1 Configuring Windows Server 2008 and 2012 hosts . . . . . . . . . . . . . . . . . . . . . . 175
5.5.2 Configuring Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5.5.3 Hardware lists, device driver, HBAs, and firmware levels. . . . . . . . . . . . . . . . . . 176
5.5.4 Installing and configuring the host adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5.5.5 Changing the disk timeout on the Windows Server . . . . . . . . . . . . . . . . . . . . . . 176
5.5.6 Installing the SDDDSM multipath driver on Windows . . . . . . . . . . . . . . . . . . . . . 177
5.5.7 Attaching the SAN Volume Controller volumes to Windows Server 2008 R2 and 2012 . . . . . . . . . . . . . . . . . . . . . 179
5.5.8 Extending a Windows Server 2008 (R2) volume . . . . . . . . . . . . . . . . . . . . . . . . 186
5.5.9 Removing a disk on Windows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.6 Using the SAN Volume Controller CLI from a Windows host . . . . . . . . . . . . . . . . . . . 194
5.7 Microsoft Volume Shadow Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.7.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
5.7.2 System requirements for the IBM System Storage hardware provider . . . . . . . . 196
5.7.3 Installing the IBM System Storage hardware provider . . . . . . . . . . . . . . . . . . . . 196
5.7.4 Verifying the installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.7.5 Creating the free and reserved pools of volumes . . . . . . . . . . . . . . . . . . . . . . . . 200
5.7.6 Changing the configuration parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.8 Specific Linux (on x86/x86_64) information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.8.1 Configuring the Linux host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.8.2 Configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.8.3 Disabling automatic Linux system updates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.8.4 Setting queue depth with QLogic HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.8.5 Multipathing in Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.9 VMware configuration information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.9.1 Configuring VMware hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.9.2 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . . 210
5.9.3 HBAs for hosts running VMware. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5.9.4 VMware storage and zoning guidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5.9.5 Setting the HBA timeout for failover in VMware . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.9.6 Multipathing in ESX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.9.7 Attaching VMware to volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5.9.8 Volume naming in VMware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.9.9 Setting the Microsoft guest operating system timeout . . . . . . . . . . . . . . . . . . . . 216
5.9.10 Extending a VMFS volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.9.11 Removing a datastore from an ESX host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.10 Sun Solaris support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.10.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 219
5.10.2 SDD dynamic pathing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.11 Hewlett-Packard UNIX configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.11.1 Operating system versions and maintenance levels. . . . . . . . . . . . . . . . . . . . . 220
5.11.2 Multipath solutions supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.11.3 Coexistence of SDD and PVLinks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5.11.4 Using a SAN Volume Controller volume as a cluster lock disk. . . . . . . . . . . . . 221
5.11.5 Support for HP-UX with greater than eight LUNs . . . . . . . . . . . . . . . . . . . . . . . 221
5.12 Using SDDDSM, SDDPCM, and SDD web interface . . . . . . . . . . . . . . . . . . . . . . . . 222
5.13 Calculating the queue depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.14 Further sources of information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.14.1 Publications containing SAN Volume Controller storage subsystem attachment guidelines . . . . . . . . . . . . . . . . . . . . . 224
Chapter 6. Data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
6.1 Migration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6.2 Migration operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
6.2.1 Migrating multiple extents (within a storage pool) . . . . . . . . . . . . . . . . . . . . . . . . 226
6.2.2 Migrating extents off of an MDisk that is being deleted. . . . . . . . . . . . . . . . . . . . 227
6.2.3 Migrating a volume between storage pools. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
6.2.4 Migrating the volume to image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
6.2.5 Non-disruptive Volume Move . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
6.2.6 Monitoring the migration progress. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
6.3 Functional overview of migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.3.1 Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.3.2 Error handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.3.3 Migration algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
6.4 Migrating data from an image mode volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
6.4.1 Image mode volume migration concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
6.4.2 Migration tips. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
6.5 Data migration for Windows using the SAN Volume Controller GUI . . . . . . . . . . . . . . 240
6.5.1 Windows Server 2008 host system connected directly to the LSI 3500 . . . . . . . 241
6.5.2 Adding the SAN Volume Controller between the host system and the LSI 3500 . . . 244
6.5.3 Importing the migrated disks into an online Windows Server 2008 host. . . . . . . 257
6.5.4 Adding the SAN Volume Controller between the host and LSI 3500 using the CLI . . . . . . . . . . . . . . . . . . . . . 260
6.5.5 Migrating a volume from managed mode to image mode. . . . . . . . . . . . . . . . . . 262
6.5.6 Migrating the volume from image mode to image mode. . . . . . . . . . . . . . . . . . . 267
6.5.7 Removing image mode data from the SAN Volume Controller. . . . . . . . . . . . . . 275
6.5.8 Mapping the free disks onto the Windows Server 2008 . . . . . . . . . . . . . . . . . . . . . 277
6.6 Migrating Linux SAN disks to SAN Volume Controller disks. . . . . . . . . . . . . . . . . . . . 279
6.6.1 Connecting the SAN Volume Controller to your SAN fabric . . . . . . . . . . . . . . . . 281
6.6.2 Preparing your SAN Volume Controller to virtualize disks . . . . . . . . . . . . . . . . . 282
6.6.3 Moving the LUNs to the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . 286
6.6.4 Migrating the image mode volumes to managed MDisks . . . . . . . . . . . . . . . . . . 289
6.6.5 Preparing to migrate from the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . 292
6.6.6 Migrating the volumes to image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . 295
6.6.7 Removing the LUNs from the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . 296
6.7 Migrating ESX SAN disks to SAN Volume Controller disks . . . . . . . . . . . . . . . . . . . . 299
6.7.1 Connecting the SAN Volume Controller to your SAN fabric . . . . . . . . . . . . . . . . 301
6.7.2 Preparing your SAN Volume Controller to virtualize disks . . . . . . . . . . . . . . . . . 302
6.7.3 Moving the LUNs to the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . 306
6.7.4 Migrating the image mode volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
6.7.5 Preparing to migrate from the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . 312
6.7.6 Migrating the managed volumes to image mode volumes . . . . . . . . . . . . . . . . . 315
6.7.7 Removing the LUNs from the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . 316
6.8 Migrating AIX SAN disks to SAN Volume Controller volumes. . . . . . . . . . . . . . . . . . . 320
6.8.1 Connecting the SAN Volume Controller to your SAN fabric . . . . . . . . . . . . . . . . 322
6.8.2 Preparing your SAN Volume Controller to virtualize disks . . . . . . . . . . . . . . . . . 323
6.8.3 Moving the LUNs to the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . . . . . 328
6.8.4 Migrating image mode volumes to volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
6.8.5 Preparing to migrate from the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . 332
6.8.6 Migrating the managed volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6.8.7 Removing the LUNs from the SAN Volume Controller . . . . . . . . . . . . . . . . . . . . 336
6.9 Using SAN Volume Controller for storage migration. . . . . . . . . . . . . . . . . . . . . . . . . . 339
6.10 Using volume mirroring and thin-provisioned volumes together . . . . . . . . . . . . . . . . 340
6.10.1 Zero detect feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
6.10.2 Volume mirroring with thin-provisioned volumes. . . . . . . . . . . . . . . . . . . . . . . . 342
Chapter 7. Advanced features for storage efficiency . . . . . . . . . . . . . . . . . . . . . . . . . 349
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
7.2 Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
7.2.1 Easy Tier concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
7.2.2 Disk tiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
7.2.3 Easy Tier process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.2.4 Easy Tier operating modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.2.5 Implementation considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
7.2.6 More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7.3 Thin provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
7.3.1 Configuring a thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
7.3.2 Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
7.3.3 Limitations of virtual capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
7.4 Real-time Compression. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
7.4.1 Real-time compression concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
7.4.2 Configuring compressed volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
7.4.3 Differences from IBM Real-time Compression Appliances . . . . . . . . . . . . . . . . . 364
Chapter 8. Advanced Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
8.1 FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
8.1.1 Business requirements for FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
8.1.2 Backup improvements with FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
8.1.3 Restore with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
8.1.4 Moving and migrating data with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
8.1.5 Application testing with FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
8.1.6 Host and application considerations to ensure FlashCopy integrity . . . . . . . . . . 368
8.1.7 FlashCopy attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
8.2 Reverse FlashCopy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
8.2.1 FlashCopy and Tivoli Storage FlashCopy Manager . . . . . . . . . . . . . . . . . . . . . . 370
8.3 FlashCopy functional overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
8.4 Implementing SAN Volume Controller FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
8.4.1 FlashCopy mappings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
8.4.2 Multiple Target FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
8.4.3 Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
8.4.4 FlashCopy indirection layer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
8.4.5 Grains and the FlashCopy bitmap. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
8.4.6 Interaction and dependency between Multiple Target FlashCopy mappings . . . 379
8.4.7 Summary of the FlashCopy indirection layer algorithm. . . . . . . . . . . . . . . . . . . . 381
8.4.8 Interaction with the cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
8.4.9 FlashCopy and image mode volumes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
8.4.10 FlashCopy mapping events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
8.4.11 FlashCopy mapping states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
8.4.12 Thin-provisioned FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
8.4.13 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
8.4.14 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
8.4.15 Serialization of I/O by FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
8.4.16 Event handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
8.4.17 Asynchronous notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
8.4.18 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . . . . . . . . . . . . 390
8.4.19 FlashCopy presets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
8.5 Volume mirroring and migration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
8.6 Native IP replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
8.6.1 Native IP Replication Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
8.6.2 IP partnership limitations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
8.6.3 IP partnership and SAN Volume Controller terminology. . . . . . . . . . . . . . . . . . . 397
8.6.4 States of IP partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
8.6.5 Remote Copy Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
8.6.6 Supported configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
8.6.7 Setting up SAN Volume Controller system IP partnership using the SAN Volume Controller GUI . . . . . . . . . . . . . . . . . . . . . 411
8.7 Metro Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
8.7.1 Metro Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
8.7.2 Remote copy techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
8.7.3 Metro Mirror features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
8.7.4 Multiple SAN Volume Controller System Mirroring . . . . . . . . . . . . . . . . . . . . . . . 414
8.7.5 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
8.7.6 Remote copy intercluster communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
8.7.7 Metro Mirror attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
8.7.8 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
8.7.9 Metro Mirror states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
8.7.10 Practical use of Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
8.7.11 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror . . . . . . . . . 430
8.7.12 Metro Mirror configuration limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
8.8 Metro Mirror commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
8.8.1 Listing available SAN Volume Controller system partners . . . . . . . . . . . . . . . . . 432
8.8.2 Creating the SAN Volume Controller system partnership. . . . . . . . . . . . . . . . . . 432
8.8.3 Creating a Metro Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8.8.4 Creating a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8.8.5 Changing a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
8.8.6 Changing a Metro Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
8.8.7 Starting a Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
8.8.8 Stopping a Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
8.8.9 Starting a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
8.8.10 Stopping a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . . 435
8.8.11 Deleting a Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.8.12 Deleting a Metro Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
8.8.13 Reversing a Metro Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
8.8.14 Reversing a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . 437
8.8.15 Background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
8.9 Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
8.9.1 Intracluster Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
8.9.2 Intercluster Global Mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
8.9.3 Asynchronous remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
8.9.4 SAN Volume Controller Global Mirror features . . . . . . . . . . . . . . . . . . . . . . . . . . 440
8.9.5 Global Mirror relationship between master and auxiliary volumes . . . . . . . . . . . 442
8.9.6 Using Change Volumes with Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
8.9.7 Importance of write ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
8.9.8 Global Mirror Consistency Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
8.9.9 Distribution of work among nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
8.9.10 Background copy performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
8.9.11 Thin-provisioned background copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
8.10 Global Mirror process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
8.10.1 Methods of synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
8.10.2 Global Mirror states and events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
8.10.3 Practical use of Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
8.10.4 Global Mirror configuration limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
8.11 Global Mirror commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
8.11.1 Listing the available SAN Volume Controller system partners . . . . . . . . . . . . . 458
8.11.2 Creating a SAN Volume Controller system partnership . . . . . . . . . . . . . . . . . . 461
8.11.3 Creating a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 462
8.11.4 Creating a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
8.11.5 Changing a Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.11.6 Changing a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.11.7 Starting a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
8.11.8 Stopping a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
8.11.9 Starting a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
8.11.10 Stopping a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 465
8.11.11 Deleting a Global Mirror relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
8.11.12 Deleting a Global Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . 465
8.11.13 Reversing a Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
8.11.14 Reversing a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . 466
8.12 Troubleshooting remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
8.12.1 1920 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
8.12.2 1720 error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Chapter 9. SAN Volume Controller operations using the command-line interface. . 471
9.1 Normal operations using CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
9.1.1 Command syntax and online help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
9.2 New commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
9.3 Working with managed disks and disk controller systems . . . . . . . . . . . . . . . . . . . . . 474
9.3.1 Viewing disk controller details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
9.3.2 Renaming a controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
9.3.3 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
9.3.4 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
9.3.5 Viewing MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
9.3.6 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
9.3.7 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
9.3.8 Adding MDisks to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
9.3.9 Showing MDisks in a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
9.3.10 Working with a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
9.3.11 Creating a storage pool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
9.3.12 Viewing storage pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
9.3.13 Renaming a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
9.3.14 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
9.3.15 Removing MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
9.4 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
9.4.1 Creating a Fibre Channel-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
9.4.2 Creating an iSCSI-attached host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
9.4.3 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
9.4.4 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
9.4.5 Adding ports to a defined host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
9.4.6 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
9.5 Working with the Ethernet port for iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
9.6 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
9.6.1 Creating a volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
9.6.2 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
9.6.3 Creating a thin-provisioned volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
9.6.4 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
9.6.5 Adding a mirrored volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
9.6.6 Splitting a mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
9.6.7 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
9.6.8 I/O governing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
9.6.9 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
9.6.10 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
9.6.11 Assigning a volume to a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
9.6.12 Showing volumes to host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
9.6.13 Deleting a volume to host mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.6.14 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
9.6.15 Migrating a fully managed volume to an image mode volume . . . . . . . . . . . . . 509
9.6.16 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
9.6.17 Showing a volume on an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
9.6.18 Showing which volumes are using a storage pool . . . . . . . . . . . . . . . . . . . . . . 511
9.6.19 Showing which MDisks are used by a specific volume. . . . . . . . . . . . . . . . . . . 511
9.6.20 Showing from which storage pool a volume has its extents . . . . . . . . . . . . . . . 512
9.6.21 Showing the host to which the volume is mapped . . . . . . . . . . . . . . . . . . . . . . 513
9.6.22 Showing the volume to which the host is mapped . . . . . . . . . . . . . . . . . . . . . . 513
9.6.23 Tracing a volume from a host back to its physical disk. . . . . . . . . . . . . . . . . . . 513
9.7 Scripting under the CLI for SAN Volume Controller task automation . . . . . . . . . . . . . 515
9.7.1 Scripting structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
9.8 SAN Volume Controller advanced operations using the CLI . . . . . . . . . . . . . . . . . . . 519
9.8.1 Command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
9.8.2 Organizing on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
9.9 Managing the clustered system using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
9.9.1 Viewing clustered system properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
9.9.2 Changing system settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
9.9.3 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
9.9.4 Modifying IP addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
9.9.5 Supported IP address formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
9.9.6 Setting the clustered system time zone and time . . . . . . . . . . . . . . . . . . . . . . . . 526
9.9.7 Starting statistics collection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
9.9.8 Determining the status of a copy operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
9.9.9 Shutting down a clustered system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
9.10 Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
9.10.1 Viewing node details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
9.10.2 Adding a node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
9.10.3 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.10.4 Deleting a node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.10.5 Shutting down a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
9.11 I/O Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
9.11.1 Viewing I/O Group details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
9.11.2 Renaming an I/O Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
9.11.3 Adding and removing hostiogrp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
9.11.4 Listing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
9.12 Managing authentication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
9.12.1 Managing users using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
9.12.2 Managing user roles and groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
9.12.3 Changing a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
9.12.4 Audit log command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
9.13 Managing Copy Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
9.13.1 FlashCopy operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
9.13.2 Setting up FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
9.13.3 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
9.13.4 Creating a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
9.13.5 Preparing (pre-triggering) the FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . 543
9.13.6 Preparing (pre-triggering) the FlashCopy Consistency Group . . . . . . . . . . . . . 544
9.13.7 Starting (triggering) FlashCopy mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
9.13.8 Starting (triggering) FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . 546
9.13.9 Monitoring the FlashCopy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
9.13.10 Stopping the FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
9.13.11 Stopping the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 548
9.13.12 Deleting the FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
9.13.13 Deleting the FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 549
9.13.14 Migrating a volume to a thin-provisioned volume . . . . . . . . . . . . . . . . . . . . . . 550
9.13.15 Reverse FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
9.13.16 Split-stopping of FlashCopy maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
9.14 Metro Mirror operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 556
9.14.1 Setting up Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
9.14.2 Creating a SAN Volume Controller partnership between ITSO_SVC1 and
ITSO_SVC4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
9.14.3 Creating a Metro Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
9.14.4 Creating the Metro Mirror relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
9.14.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri. . . . . . . . . . 563
9.14.6 Starting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
9.14.7 Starting a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
9.14.8 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
9.14.9 Stopping and restarting Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
9.14.10 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . . . . . . . . . . . . . 566
9.14.11 Stopping a Metro Mirror Consistency Group. . . . . . . . . . . . . . . . . . . . . . . . . . 567
9.14.12 Restarting a Metro Mirror relationship in the Idling state. . . . . . . . . . . . . . . . . 568
9.14.13 Restarting a Metro Mirror Consistency Group in the Idling state . . . . . . . . . . 569
9.14.14 Changing the copy direction for Metro Mirror . . . . . . . . . . . . . . . . . . . . . . . . . 569
9.14.15 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . . . . . 569
9.14.16 Switching the copy direction for a Metro Mirror Consistency Group . . . . . . . . 571
9.14.17 Creating a SAN Volume Controller partnership among many clustered systems . . . . 572
9.14.18 Star configuration partnership. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
9.15 Global Mirror operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
9.15.1 Setting up Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
9.15.2 Creating a SAN Volume Controller partnership between ITSO_SVC1 and
ITSO_SVC4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
9.15.3 Changing link tolerance and system delay simulation . . . . . . . . . . . . . . . . . . . 581
9.15.4 Creating a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 583
9.15.5 Creating Global Mirror relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
9.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri . . . . . . . . 584
9.15.7 Starting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
9.15.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . . . 585
9.15.9 Starting a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
9.15.10 Monitoring the background copy progress . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
9.15.11 Stopping and restarting Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
9.15.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . . . . . . . . . . . . 588
9.15.13 Stopping a Global Mirror Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . 589
9.15.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . . . . . . . 590
9.15.15 Restarting a Global Mirror Consistency Group in the Idling state . . . . . . . . . . 590
9.15.16 Changing the direction for Global Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
9.15.17 Switching the copy direction for a Global Mirror relationship . . . . . . . . . . . . . 591
9.15.18 Switching the copy direction for a Global Mirror Consistency Group . . . . . . . 593
9.15.19 Changing a Global Mirror relationship to the cycling mode. . . . . . . . . . . . . . . 594
9.15.20 Creating the thin-provisioned Change Volumes . . . . . . . . . . . . . . . . . . . . . . . 595
9.15.21 Stopping the stand-alone remote copy relationship . . . . . . . . . . . . . . . . . . . . 596
9.15.22 Setting the cycling mode on the stand-alone remote copy relationship . . . . . 596
9.15.23 Setting the Change Volume on the master volume. . . . . . . . . . . . . . . . . . . . . 597
9.15.24 Setting the Change Volume on the auxiliary volume . . . . . . . . . . . . . . . . . . . 597
9.15.25 Starting the stand-alone relationship in the cycling mode. . . . . . . . . . . . . . . . 598
9.15.26 Stopping the Consistency Group to change the cycling mode . . . . . . . . . . . . 599
9.15.27 Setting the cycling mode on the Consistency Group . . . . . . . . . . . . . . . . . . . 600
9.15.28 Setting the Change Volume on the master volume relationships of the Consistency
Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
9.15.29 Setting the Change Volumes on the auxiliary volumes. . . . . . . . . . . . . . . . . . 601
9.15.30 Starting the Consistency Group CG_W2K3_GM in the cycling mode . . . . . . 602
9.16 Service and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
9.16.1 Upgrading software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
9.16.2 Running the maintenance procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
9.16.3 Setting up SNMP notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
9.16.4 Setting the syslog event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
9.16.5 Configuring error notification using an email server . . . . . . . . . . . . . . . . . . . . . 612
9.16.6 Analyzing the event log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
9.16.7 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
9.16.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
9.17 Backing up the SAN Volume Controller system configuration . . . . . . . . . . . . . . . . . 619
9.17.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
9.18 Restoring the SAN Volume Controller clustered system configuration . . . . . . . . . . . 621
9.18.1 Deleting the configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
9.19 Working with the SAN Volume Controller Quorum MDisks. . . . . . . . . . . . . . . . . . . . 622
9.19.1 Listing the SAN Volume Controller Quorum MDisks. . . . . . . . . . . . . . . . . . . . . 622
9.19.2 Changing the SAN Volume Controller Quorum Disks. . . . . . . . . . . . . . . . . . . . 622
9.20 Working with the Service Assistant menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
9.20.1 SAN Volume Controller CLI Service Assistant menu . . . . . . . . . . . . . . . . . . . . 623
9.21 SAN troubleshooting and data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
9.22 T3 recovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
Chapter 10. SAN Volume Controller operations using the GUI. . . . . . . . . . . . . . . . . . 627
10.1 SAN Volume Controller normal operations using the GUI . . . . . . . . . . . . . . . . . . . . 628
10.1.1 Introduction to SAN Volume Controller normal operations using the GUI . . . . 628
10.1.2 Organizing based on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
10.1.3 Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
10.2 Working with external disk controllers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
10.2.1 Viewing the disk controller details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
10.2.2 Naming a disk controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 639
10.2.3 Discovering MDisks from the external panel. . . . . . . . . . . . . . . . . . . . . . . . . . . 640
10.3 Working with storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
10.3.1 Viewing storage pool information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
10.3.2 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
10.3.3 Creating storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642
10.3.4 Renaming a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
10.3.5 Deleting a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
10.3.6 Adding or removing MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . 646
10.3.7 Showing the volumes that are associated with a storage pool . . . . . . . . . . . . . 646
10.4 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
10.4.1 MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
10.4.2 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
10.4.3 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
10.4.4 Assigning MDisks to a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
10.4.5 Unassigning MDisks from a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
10.4.6 Including an excluded MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
10.4.7 Activating Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 653
10.5 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
10.6 Working with hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
10.6.1 Host information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
10.6.2 Creating a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
10.6.3 Renaming a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
10.6.4 Modifying a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664
10.6.5 Deleting a host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
10.6.6 Adding ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
10.6.7 Deleting ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
10.6.8 Creating or modifying the host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
10.6.9 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
10.6.10 Deleting all host mappings for a given host . . . . . . . . . . . . . . . . . . . . . . . . . . 672
10.7 Working with volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
10.7.1 Volume information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
10.7.2 Creating a volume. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
10.7.3 Renaming a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 685
10.7.4 Modifying a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
10.7.5 Modifying thin-provisioned or compressed volume properties . . . . . . . . . . . . . 688
10.7.6 Deleting a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
10.7.7 Creating or modifying the host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
10.7.8 Deleting a host mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
10.7.9 Deleting all host mappings for a given volume . . . . . . . . . . . . . . . . . . . . . . . . . 696
10.7.10 Shrinking a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
10.7.11 Expanding a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
10.7.12 Shrinking the real capacity of a thin-provisioned or compressed volume . . . . 700
10.7.13 Expanding the real capacity of a thin-provisioned or compressed volume . . . 702
10.7.14 Migrating a volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
10.7.15 Adding a mirrored copy to an existing volume . . . . . . . . . . . . . . . . . . . . . . . . 706
10.7.16 Deleting a mirrored copy from a volume mirror. . . . . . . . . . . . . . . . . . . . . . . . 709
10.7.17 Splitting a volume copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
10.7.18 Validating volume copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
10.7.19 Migrating to a thin-provisioned volume using volume mirroring . . . . . . . . . . . 712
10.7.20 Creating a volume in image mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
10.7.21 Migrating a volume to an image mode volume . . . . . . . . . . . . . . . . . . . . . . . . 716
10.7.22 Creating an image mode mirrored volume . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
10.8 Copy Services: Managing FlashCopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
10.8.1 Creating a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
10.8.2 Creating and starting a snapshot preset with a single click . . . . . . . . . . . . . . . 728
10.8.3 Creating and starting a clone preset with a single click . . . . . . . . . . . . . . . . . . 729
10.8.4 Creating and starting a backup preset with a single click . . . . . . . . . . . . . . . . . 730
10.8.5 Creating a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 731
10.8.6 Creating FlashCopy mappings in a Consistency Group . . . . . . . . . . . . . . . . . . 733
10.8.7 Showing related volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
10.8.8 Moving a FlashCopy mapping to a Consistency Group . . . . . . . . . . . . . . . . . . 738
10.8.9 Removing a FlashCopy mapping from a Consistency Group . . . . . . . . . . . . . . 739
10.8.10 Modifying a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
10.8.11 Renaming a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
10.8.12 Renaming a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
10.8.13 Deleting a FlashCopy mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
10.8.14 Deleting a FlashCopy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . 744
10.8.15 Starting the FlashCopy copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 745
10.8.16 Stopping the FlashCopy copy process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 746
10.8.17 Starting a FlashCopy Consistency Group copy process. . . . . . . . . . . . . . . . . 747
10.8.18 Stopping the FlashCopy Consistency Group copy process . . . . . . . . . . . . . . 748
10.8.19 Migrating between a fully allocated volume and a Space-Efficient volume. . . 749
10.8.20 Reversing and splitting a FlashCopy mapping . . . . . . . . . . . . . . . . . . . . . . . . 749
10.9 Copy Services: Managing remote copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 750
10.9.1 System partnership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
10.9.2 Creating the fibre channel partnership between two remote SAN Volume Controller
systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
10.9.3 Creating the IP partnership between two remote SAN Volume Controller systems . . . . 755
10.9.4 Creating stand-alone remote copy relationships. . . . . . . . . . . . . . . . . . . . . . . . 757
10.9.5 Creating a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
10.9.6 Renaming a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
10.9.7 Renaming a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 765
10.9.8 Moving a stand-alone remote copy relationship to a Consistency Group. . . . . 766
10.9.9 Removing a remote copy relationship from a Consistency Group . . . . . . . . . . 767
10.9.10 Starting a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
10.9.11 Starting a remote copy Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . 769
10.9.12 Switching the copy direction for a remote copy relationship . . . . . . . . . . . . . . 770
10.9.13 Switching the copy direction for a Consistency Group . . . . . . . . . . . . . . . . . . 771
10.9.14 Stopping a remote copy relationship. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 772
10.9.15 Stopping a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
10.9.16 Deleting stand-alone remote copy relationships . . . . . . . . . . . . . . . . . . . . . . . 775
10.9.17 Deleting a Consistency Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
10.10 Managing the SAN Volume Controller clustered system using the GUI . . . . . . . . . 777
10.10.1 System Status information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
10.10.2 View I/O Groups and their associated nodes . . . . . . . . . . . . . . . . . . . . . . . . . 778
10.10.3 View SAN Volume Controller clustered system properties . . . . . . . . . . . . . . . 780
10.10.4 Renaming a SAN Volume Controller clustered system. . . . . . . . . . . . . . . . . . 780
10.10.5 Shutting down a SAN Volume Controller clustered system . . . . . . . . . . . . . . 781
10.10.6 Upgrading software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
10.11 Managing I/O Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
10.11.1 Viewing I/O Group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
10.11.2 Modifying I/O Group properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
10.12 Managing nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
10.12.1 Viewing node properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
10.12.2 Renaming a node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
10.12.3 Adding a node to the SAN Volume Controller clustered system. . . . . . . . . . . 791
10.12.4 Removing a node from the SAN Volume Controller clustered system . . . . . . 792
10.13 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
10.13.1 Events panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 794
10.13.2 Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
10.13.3 Running the fix procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 805
10.13.4 Support panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 807
10.14 User Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
10.14.1 Creating a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
10.14.2 Modifying the user properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
10.14.3 Removing a user password. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
10.14.4 Removing a user SSH Public Key. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
10.14.5 Deleting a user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
10.14.6 Creating a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
10.14.7 Modifying the user group properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
10.14.8 Deleting a user group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
10.14.9 Audit log information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 823
10.15 Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
10.15.1 Configuring the Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 826
10.15.2 Configuring the service IP addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
10.15.3 Configuring Ethernet ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
10.15.4 iSCSI configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
10.15.5 Fibre Channel information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
10.15.6 Event notifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
10.15.7 Email notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
10.15.8 SNMP notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
10.15.9 Using the General panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
10.15.10 Date and time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
10.15.11 Licensing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
10.15.12 Upgrading software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
10.15.13 Setting GUI preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
10.16 Upgrading SAN Volume Controller software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
10.16.1 Precautions before the upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
10.16.2 SAN Volume Controller software upgrade test utility . . . . . . . . . . . . . . . . . . . 844
10.16.3 Upgrade procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
10.17 Service Assistant Tool with the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
10.17.1 Placing a SAN Volume Controller node into the service state . . . . . . . . . . . . 852
10.17.2 Exiting the service state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
10.17.3 Rebooting a SAN Volume Controller node . . . . . . . . . . . . . . . . . . . . . . . . . . . 855
10.17.4 Collect Logs page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 857
10.17.5 Manage System page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
10.17.6 Recover System page. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
10.17.7 Reinstall software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 859
10.17.8 Upgrade Manually page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
10.17.9 Modify WWNN page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
10.17.10 Change Service IP page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
10.17.11 Configure CLI Access page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
10.17.12 Restart Service page . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
Appendix A. Performance data and statistics gathering. . . . . . . . . . . . . . . . . . . . . . . 865
SAN Volume Controller performance overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
SAN Volume Controller performance perspectives. . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
Performance monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
Collecting performance statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
Real-time performance monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
Performance data collection and Tivoli Storage Productivity Center for Disk. . . . . . . . 875
Appendix B. Terminology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
Commonly encountered terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
Appendix C. SAN Volume Controller Stretched Cluster . . . . . . . . . . . . . . . . . . . . . . . 885
Stretched cluster overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
Non-ISL configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
ISL configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
More information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
Notices
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX 5L
AIX
DB2
developerWorks
DS4000
DS8000
Easy Tier
FlashCopy
GPFS
IBM Flex System
IBM Systems Director Active Energy
Manager
IBM
Power Systems
pureScale
Real-time Compression
Real-time Compression Appliance
Redbooks
Redbooks (logo)
Storwize
System p
System Storage DS
System Storage
System x
Tivoli
WebSphere
XIV
The following terms are trademarks of other companies:
Intel Xeon, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbooks publication is a detailed technical guide to the IBM System
Storage SAN Volume Controller Version 7.2.
SAN Volume Controller is a virtualization appliance solution that maps virtualized volumes,
which are visible to hosts and applications, to physical volumes on storage devices. Each server
within the storage area network (SAN) has its own set of virtual storage addresses that are
mapped to physical addresses. If the physical addresses change, the server continues
running using the same virtual addresses that it had before. Therefore, volumes or storage
can be added or moved while the server is still running.
The IBM virtualization technology improves the management of information at the block
level in a network, thus enabling applications and servers to share storage devices on a
network.
This book is intended for readers who need to implement the SAN Volume Controller at a 7.2
release level with a minimum of effort.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Sangam Racherla is an IT Specialist. He holds a degree in Electronics and Communication
Engineering and has eleven years of experience in the IT field. His areas of expertise include
Microsoft Windows, Linux, IBM AIX, IBM System x, and IBM System P servers, and
various SAN and storage products.
Matus Butora is an IT Specialist and leader of Storage Support in the IBM ITD Delivery
Center in the Czech Republic. He works with enterprise storage environments providing
solutions and support for global strategic customers. Matus has ten years of
experience with Open storage hardware and software including IBM DS8000, IBM Midrange
Storage Storwize, NetApp, Tivoli Storage Management, and Tivoli Storage Productivity
Center. Matus is a certified IBM Professional, Brocade, and NetApp certified administrator.
Hartmut Lonzer is a System Partner Technical Sales Specialist in Germany. He works in the
German headquarters in Ehningen. As a former Storage Field Technical Sales Support
(FTSS) member, his main focus is on storage and IBM System x. He is responsible for
educating, supporting, and enabling the Business Partners in his area in technical matters.
His experience with SAN Volume Controller and V7000 storage goes back to the beginning of
these products. Hartmut has been with IBM in various technical roles for 36 years.
Libor Miklas is an IT Architect working at the IBM Integrated Delivery Center in the Czech
Republic. He has ten years of extensive experience in the IT industry. During
the last eight years, his main focus has been on the data protection solutions and on storage
management. Libor and his team design, implement, and support midrange and enterprise
storage environments for various global and local clients, worldwide. He is an IBM Certified
Deployment Professional for the Tivoli Storage Manager family of products and holds a
Masters degree in Electrical Engineering and Telecommunications.
Thanks to the following people for their contributions to this project:
Megan Gilge
International Technical Support Organization, San Jose Center
Thanks to the authors of the previous editions of this book.
Authors of the first edition, Implementing the IBM System Storage SAN Volume Controller
V6.1, published in May 2011, were:
Angelo Bernasconi
Alexandre Chabrol
Peter Crowhurst
Frank Enders
Ian MacQuarrie
Jon Tate
Authors of the second edition, Implementing the IBM System Storage SAN Volume
Controller V6.3, published in April 2012, were:
Alejandro Berardinelli
Mark Chitti
Torben Jensen
Massimo Rosati
Christian Schroeder
Jon Tate
Now you can become a published author, too!
Here's an opportunity to spotlight your skills, grow your career, and become a published
author, all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-7933-02
for Implementing the IBM System Storage SAN Volume Controller V7.2
as created or updated on March 27, 2014.
February 2014, Third Edition
This revision reflects the addition, deletion, or modification of new and changed information
described below.
New information
IP replication capability
Enhancements to IBM Real-time Compression
Enhancements to stretched cluster
Changed information
Updates for Version 7.2
Chapter 1. Introduction to storage
virtualization
In this chapter, we define the concept of storage virtualization. Then, we present an overview
explaining how you can apply virtualization to help address challenging storage requirements.
1.1 Storage virtualization terminology
Although storage virtualization is a term that is used extensively throughout the storage
industry, it can be applied to a wide range of technologies and underlying capabilities. In
reality, most storage devices can technically claim to be virtualized in one form or another.
Therefore, we must start by defining the concept of storage virtualization as used in this book.
IBM defines storage virtualization in the following manner:
Storage virtualization is a technology that makes one set of resources look and feel like
another set of resources, preferably with more desirable characteristics.
It is a logical representation of resources that is not constrained by physical limitations:
It hides part of the complexity.
It adds or integrates new function with existing services.
It can be nested or applied to multiple layers of a system.
When discussing storage virtualization, it is important to understand that virtualization can be
implemented at various layers within the I/O stack. We have to clearly distinguish between
virtualization at the disk layer and virtualization at the file system layer.
The focus of this book is virtualization at the disk layer, which is referred to as block-level
virtualization, or the block aggregation layer. A discussion of file system virtualization is
beyond the scope of this book.
However, if you are interested in file system virtualization, refer to the IBM General Parallel
File System (GPFS) or IBM Scale Out Network Attached Storage (SONAS), which is based
on GPFS.
To obtain more information and an overview of GPFS, visit the following website:
http://www.ibm.com/systems/software/gpfs/
To obtain more information about SONAS, visit the following website:
http://www.ibm.com/systems/storage/network/sonas/
The Storage Networking Industry Association's (SNIA) block aggregation model (Figure 1-1
on page 3) provides a useful overview of the storage domain and its layers. The figure shows
the three layers of a storage domain: the file, the block aggregation, and the block subsystem
layers.
The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage
controllers), or in storage devices (intelligent disk arrays).
The IBM implementation of a block aggregation solution is the IBM System Storage SAN
Volume Controller. The SAN Volume Controller is implemented as a clustered appliance in
the storage network layer. Chapter 2, "IBM System Storage SAN Volume Controller" on
page 9 explains the reasons why IBM chose to implement its IBM System Storage SAN
Volume Controller in the storage network layer.
Figure 1-1 SNIA block aggregation model (1)
The key concept of virtualization is to decouple the storage from the storage functions that
are required in the storage area network (SAN) environment.
Decoupling means abstracting the physical location of data from the logical representation of
the data. The virtualization engine presents logical entities to the user and internally manages
the process of mapping these entities to the actual location of the physical storage.
The actual mapping that is performed depends on the specific implementation, such as the
granularity of the mapping, which can range from a small fraction of a physical disk, up to the
full capacity of a physical disk. A single block of information in this environment is identified by
its logical unit number (LUN), which is the physical disk, and an offset within that LUN, which
is known as a logical block address (LBA).
The term physical disk is used in this context to describe a piece of storage that might be
carved out of a RAID array in the underlying disk subsystem.
Specific to the SAN Volume Controller implementation, the logical entity whose address space
is mapped in this way is referred to as a volume, and the physical disk is referred to as a
managed disk (MDisk).
Figure 1-2 on page 4 shows an overview of block-level virtualization.
(1) This figure is produced by the Storage Networking Industry Association.
Figure 1-2 Block-level virtualization overview
The server and application are only aware of the logical entities, and they access these
entities using a consistent interface that is provided by the virtualization layer.
The functionality of a volume that is presented to a server, such as expanding or reducing the
size of a volume, mirroring a volume, creating an IBM FlashCopy, and thin provisioning, is
implemented in the virtualization layer. It does not rely in any way on the functionality that is
provided by the underlying disk subsystem. Data that is stored in a virtualized environment is
stored in a location-independent way, which allows a user to move or migrate data between
physical locations, which are referred to as storage pools.
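To make this terminology concrete, the following minimal CLI sketch shows how these layers appear when a SAN Volume Controller system is administered from its command line. The volume name VDISK01 is a placeholder, and the output columns vary by code level; the listing commands themselves are described in detail in Chapter 9.

lsmdiskgrp              # list the storage pools (collections of MDisks)
lsmdisk                 # list the managed disks presented by the back-end storage
lsvdisk                 # list the volumes (the logical entities presented to hosts)
lsvdiskmember VDISK01   # show which MDisks provide extents for the volume VDISK01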
We refer to the following capabilities as the cornerstones of block-level storage virtualization.
They are the core benefits that a product, such as the SAN Volume Controller, can provide
over traditional directly attached or SAN storage.
The SAN Volume Controller provides the following benefits:
The SAN Volume Controller provides online volume migration while applications are
running, which is possibly the greatest single benefit of storage virtualization (a CLI sketch
of this capability follows at the end of this section). This capability allows data to be migrated
on and between the underlying storage subsystems without any impact to the servers and
applications. In fact, the servers and applications are not even aware that the migration occurred.
The SAN Volume Controller simplifies storage management by providing a single image
for multiple controllers and a consistent user interface for provisioning heterogeneous
storage.
The SAN Volume Controller provides enterprise-level Copy Services functions. Performing
the Copy Services functions within the SAN Volume Controller removes dependencies on
the storage subsystems, therefore enabling the source and target copies to be on other
storage subsystem types.
Storage utilization can be increased by pooling storage across the SAN.
System performance is often improved with the SAN Volume Controller as a result of
volume striping across multiple arrays or controllers and the additional cache that it
provides.
The SAN Volume Controller delivers these functions in a homogeneous way on a scalable
and highly available platform, over any attached storage, and to any attached server.
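As a brief illustration of the online volume migration capability that is listed first above, the following hedged CLI sketch moves a volume to another storage pool while host I/O continues. The volume and pool names are placeholders; Chapter 9 describes these commands and their options in detail.

migratevdisk -mdiskgrp TARGET_POOL -vdisk VDISK01   # start the online migration of VDISK01
lsmigrate                                           # monitor the progress of active migrations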
1.2 User requirements driving storage virtualization
Today, an emphasis exists on a smarter planet and dynamic infrastructure. Thus, there is a
need for a storage environment that is as flexible as the application and server mobility.
Business demands change quickly.
These key client concerns drive storage virtualization:
Growth in data center costs
Inability of IT organizations to respond quickly to business demands
Poor asset utilization
Poor availability or service levels
Lack of skilled staff for storage administration
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs relate to managing the storage system.
But how much of the management of multiple systems, with separate interfaces, can be
handled as a single entity? In a non-virtualized storage environment, every system is an
island that needs to be managed separately.
1.2.1 Benefits of using the SAN Volume Controller
The SAN Volume Controller can reduce the number of separate environments that need to
be managed down to a single environment. It provides a single interface for storage
management. After the initial configuration of the storage subsystems, all of the day-to-day
storage management operations are performed from the SAN Volume Controller.
Because the SAN Volume Controller provides advanced functions, such as mirroring and
FlashCopy, there is no need to purchase them again for each new disk subsystem.
Today, it is typical that open systems run at less than 50% of the usable capacity that is
provided by the RAID disk subsystems. Using the installed raw capacity in the disk
subsystems will, depending on the RAID level that is used, show utilization numbers of less
than 35%. A block-level virtualization solution, such as the SAN Volume Controller, can allow
capacity utilization to increase to approximately 75 - 80%.
With SAN Volume Controller, free space does not need to be maintained and managed within
each storage subsystem, which further increases capacity utilization.
1.3 What is new in SAN Volume Controller V 7.2.0
One of the most important new functions in the IBM Storwize family is IP replication, which
enables the use of lower cost Ethernet connections for remote mirroring. The capability is
available as a chargeable option on SAN Volume Controller and all Storwize family systems
(Remote Mirror license). IP connections that are used for replication can have long latency
(the time to transmit a signal from one end to the other), which can be caused by distance or
by many hops between switches and other appliances in the network.
Traditional replication solutions transmit data, wait for a response, and then transmit more
data, which can result in network utilization as low as 20% (based on IBM measurements),
and the effect worsens as latency increases. Some customers have deployed optimization
appliances to work around this issue. The Bridgeworks SANSlide technology that is
integrated with the IBM Storwize family requires no separate appliances, so it adds no extra
cost and no extra configuration effort. It uses artificial intelligence technology to transmit
multiple data streams in parallel, adjusting automatically to changing network environments
and workloads. Because it does not use compression, it is independent of application or data
type. Most importantly, SANSlide improves network bandwidth utilization by up to three times,
so customers might be able to deploy a less costly network infrastructure, or take advantage
of faster data transfer to speed replication cycles, improve remote data currency, and enjoy
faster recovery.
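For orientation only, the following sketch shows the general form of setting up IP-based replication from the CLI, assuming the mkippartnership command that Version 7.2 introduces for this purpose. The IP address and bandwidth values are placeholders, the same step must also be performed on the partner system, and 10.9.3 shows the equivalent GUI procedure.

mkippartnership -type ipv4 -clusterip 10.10.10.20 -linkbandwidthmbits 100   # create the IP partnership
lspartnership                                                               # verify the partnership state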
Storwize family IBM Real-time Compression is also significantly enhanced. A new
compression algorithm delivers significant performance improvements. When V7.2 software
is installed, this new algorithm is used automatically for all new compressed volumes and for
any new data that is written to existing compressed volumes. So there is nothing for users to
do to experience the benefits of the new algorithm. The new algorithm delivers up to 3x
throughput when performing sequential writes, which is especially important for VMware
vMotion. The new algorithm also uses 35% less CPU for random workloads, which enables
compression to be used with more workloads or with more demanding workloads. In version 7.1, the use of
IBM Easy Tier is enabled with Real-time Compression. In version 7.2, Real-time
Compression is optimized to position data so that hot and cold data is kept segregated,
which helps improve the efficiency of Easy Tier.
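Because the new algorithm is applied automatically, a compressed volume is created exactly as before. As a reminder, the following CLI sketch creates a thin-provisioned, compressed volume; the pool name, I/O Group, and size are placeholders, and the volume options are covered in detail later in this book.

mkvdisk -mdiskgrp POOL01 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name CVOL01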
Stretched cluster is enhanced. Before this release, stretched cluster configurations did not
provide manual failover capability, and data being sent across a long distance link had the
potential to be sent twice. The addition of site awareness in Storwize Family Software V7.2
routes I/O traffic between SAN Volume Controller nodes and storage controllers to optimize
the data flow, and it polices I/O traffic during a failure condition to allow for a manual cluster
invocation to ensure consistency. The use of stretched cluster continues to follow all the same
hardware installation guidelines as previously announced and found in the product
documentation. Use of enhanced stretched cluster is optional, and existing stretched cluster
configurations will continue to be supported.
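The site awareness behind the enhanced stretched cluster is configured by assigning a site attribute to each node and external storage controller and by changing the system topology. The following lines are only a rough sketch under the assumption that the V7.2 CLI accepts the syntax shown; verify the exact commands in the product documentation and see Appendix C before changing a live system.

chnode -site 1 node1              # assumed syntax: assign node1 to site 1
chnode -site 2 node2              # assumed syntax: assign node2 to site 2
chcontroller -site 1 controller0  # assumed syntax: assign a storage controller to site 1
chsystem -topology stretched      # assumed syntax: enable the enhanced stretched topology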
1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major
storage vendors offer storage virtualization products. Using storage virtualization as the
foundation for a flexible and reliable storage solution helps enterprises to better align
business and IT by optimizing the storage infrastructure and storage management to meet
business demands.
The IBM System Storage SAN Volume Controller is a mature, seventh-generation
virtualization solution that uses open standards and is consistent with the Storage Networking
Industry Association (SNIA) storage model. The SAN Volume Controller is an
appliance-based in-band block virtualization process, in which intelligence, including
advanced storage functions, is migrated from individual storage devices to the storage
network.
The IBM System Storage SAN Volume Controller can improve the utilization of your storage
resources, simplify your storage management, and improve the availability of your
applications.
Chapter 2. IBM System Storage SAN Volume
Controller
In this chapter, we explain the major concepts that underlie the IBM System Storage SAN
Volume Controller.
We begin by presenting a brief history of the SAN Volume Controller product, and then
provide you with an architectural overview. After defining SAN Volume Controller terminology,
we describe software and hardware concepts and the additional functionalities that will be
available with the newest release. Finally, we provide links to websites where you can find
more information about SAN Volume Controller.
2.1 Brief history of the SAN Volume Controller
The IBM implementation of block-level storage virtualization, the IBM System Storage SAN
Volume Controller, is based on an IBM project that was initiated in the second half of 1999 at
the IBM Almaden Research Center. The project was called COMmodity PArts Storage
System, or COMPASS.
One goal of this project was to create a system that was almost exclusively composed of
off-the-shelf standard parts. As with any enterprise-level storage control system, it had to
deliver a level of performance and availability comparable to the highly optimized storage
controllers of previous generations. The idea of building a storage control system based on a
scalable cluster of lower performance servers, instead of a monolithic architecture of two
nodes, is still a compelling idea.
COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.
The first documentation covering this project was released to the public in 2003 in the form of
the IBM Systems Journal, Vol. 42, No. 2, 2003, The software architecture of a SAN storage
control system, by J. S. Glider, C. F. Fuente, and W. J. Scales, which you can read at this
website:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853
The results of the COMPASS project defined the fundamentals for the product architecture.
The announcement of the first release of the IBM System Storage SAN Volume Controller
took place in July 2003.
Each of the following releases brought new and more powerful hardware nodes, which
approximately doubled the I/O performance and throughput of its predecessors, provided new
functionality, and offered more interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).
The most recently released hardware node, the 2145-CG8, is based on IBM System x 3550
M3 server technology with an Intel Xeon 5600 2.53 GHz quad-core processor (Nehalem), 24
GB of cache, and one four-port 8 Gbps Fibre Channel card, optional second four port 8 Gbps
Fibre Channel card, two 1 Gbps ports, and an optional 10 Gbps iSCSI/FCoE card. Additional
four-port FC cards and iSCSI and FCoE cards are mutually exclusive. The SAN Volume
Controller node is capable of supporting up to four internal solid-state drives (SSDs). The
optional SSDs and optional 10 Gbps Ethernet card cannot be in the same node.
2.2 SAN Volume Controller architectural overview
The IBM System Storage SAN Volume Controller is a SAN block aggregation virtualization
appliance that is designed for attachment to various host computer systems.
There are two major approaches in use today to consider for the implementation of block-level
aggregation and virtualization:
Symmetric: In-band appliance
The device is a SAN appliance that sits in the data path, and all I/O flows through the
device. This implementation is also referred to as symmetric virtualization or in-band.
The device is both target and initiator. It is the target of I/O requests from the host
perspective, and the initiator of I/O requests from the storage perspective. The redirection
is performed by issuing new I/O requests to the storage. The SAN Volume Controller uses
symmetric virtualization.
Asymmetric: Out-of-band or controller-based
The device is usually a storage controller that provides an internal switch for external
storage attachment. In this approach, the storage controller intercepts and redirects I/O
requests to the external storage as it does for internal storage. The actual I/O requests are
themselves redirected. This implementation is also referred to as asymmetric
virtualization or out-of-band.
Figure 2-1 shows variations of the two virtualization approaches.
Figure 2-1 Overview of block-level virtualization architectures
Although these approaches provide essentially the same cornerstones of virtualization, there
can be interesting side effects, as discussed here.
The controller-based approach has high functionality, but it fails in terms of scalability or
upgradeability. Because of the nature of its design, there is no true decoupling with this
approach, which becomes an issue for the lifecycle of this solution, such as with a controller.
Data migration issues and questions are challenging, such as how to reconnect the servers to
the new controller, and how to reconnect them online without any effect on your applications.
Be aware that with this approach, you not only replace a controller but also implicitly replace
your entire virtualization solution. In addition to replacing the hardware, it might be necessary
to update or repurchase the licenses for the virtualization feature, advanced copy functions,
and so on.
With a SAN or fabric-based appliance solution that is based on a scale-out cluster
architecture, lifecycle management tasks, such as adding or replacing new disk subsystems
or migrating data between them, are extremely simple. Servers and applications remain
online, data migration takes place transparently on the virtualization platform, and licenses for
virtualization and copy services require no update: that is, they require no additional costs
when disk subsystems are replaced.
Only the fabric-based appliance solution provides an independent and scalable virtualization
platform that can provide enterprise-class copy services; is open for future interfaces and
protocols; allows you to choose the disk subsystems that best fit your requirements; and does
not lock you into specific SAN hardware.
For these reasons, IBM has chosen the SAN or fabric-based appliance approach for the
implementation of the IBM System Storage SAN Volume Controller.
The SAN Volume Controller possesses the following key characteristics:
It is highly scalable, providing an easy growth path from two nodes up to eight nodes (nodes are added in pairs).
It is SAN interface-independent. It supports FC and FCoE and iSCSI, but it is also open for
future enhancements.
It is host-independent, for fixed block-based Open Systems environments.
It is external storage RAID controller-independent, providing a continuous and ongoing
process to qualify additional types of controllers.
It can use disks that are internally located within the nodes (SSDs).
It can use disks that are locally attached to the nodes (SAS and SSD drives).
On the SAN storage that is provided by the disk subsystems, the SAN Volume Controller can
offer the following services:
It can create and manage a single pool of storage that is attached to the SAN.
It can manage multiple tiers of storage.
It provides block-level virtualization (logical-unit virtualization).
It provides automatic block-level or sub-LUN-level data migration between storage tiers.
It provides advanced functions to the entire SAN:
Large scalable cache
Advanced Copy Services:
FlashCopy (point-in-time copy)
Metro Mirror and Global Mirror (remote copy, synchronous and asynchronous)
These mirror functions can be either FC or IP based.
It provides nondisruptive and concurrent data migration.
This list of features grows with each release, because the layered architecture of the SAN
Volume Controller can easily implement new storage features.
2.2.1 SAN Volume Controller topology
SAN-based storage is managed by the SAN Volume Controller in one or more pairs of SAN
Volume Controller hardware nodes, referred to as a clustered system or system. These nodes
are attached to the SAN fabric, along with RAID controllers and host systems. The SAN fabric
is zoned to allow the SAN Volume Controller to see the RAID controllers, and for the hosts to
see the SAN Volume Controller. The hosts are not allowed to see or operate on the same
physical storage (LUN) from the RAID controller that has been assigned to the SAN Volume
Controller. Storage controllers can be shared between the SAN Volume Controller and direct
host access as long as the same LUNs are not shared. The zoning capabilities of the SAN
switch must be used to create distinct zones to ensure that this rule is enforced.
SAN fabrics can include standard FC, FC over Ethernet, iSCSI over Ethernet, or possible
future types.
Figure 2-2 on page 13 shows a conceptual diagram of a storage system using the SAN
Volume Controller. It shows a number of hosts that are connected to a SAN fabric or LAN. In
practical implementations that have high-availability requirements (the majority of the target
clients for SAN Volume Controller), the SAN fabric cloud represents a redundant SAN. A
redundant SAN consists of a fault-tolerant arrangement of two or more counterpart SANs,
therefore providing alternate paths for each SAN-attached device.
Both scenarios (using a single network and using two physically separate networks) are
supported for iSCSI-based and LAN-based access networks to the SAN Volume Controller.
Redundant paths to volumes can be provided in both scenarios.
For simplicity, Figure 2-2 on page 13 shows only one SAN fabric and two zones, namely host
and storage. In a real environment, it is a preferred practice to use two redundant SAN
fabrics. The SAN Volume Controller can be connected to up to four fabrics. Zoning details are
described in 3.3.2, SAN zoning and SAN connections on page 80.
Figure 2-2 SAN Volume Controller conceptual and topology overview
A clustered system of SAN Volume Controller nodes that are connected to the same fabric
presents logical disks or volumes to the hosts. These volumes are created from managed
LUNs or managed disks (MDisks) that are presented by the RAID disk subsystems. There are
two distinct zones shown in the fabric:
A host zone, in which the hosts can see and address the SAN Volume Controller nodes
A storage zone, in which the SAN Volume Controller nodes can see and address the
MDisks/logical unit numbers (LUNs) that are presented by the RAID subsystems
Hosts are not permitted to operate on the RAID LUNs directly, and all data transfer happens
through the SAN Volume Controller nodes. This design is commonly described as symmetric
virtualization. LUNs that are not managed by the SAN Volume Controller can still be provided directly to the hosts.
For iSCSI-based access, using two networks and separating iSCSI traffic within the networks
by using a dedicated virtual local area network (VLAN) path for storage traffic prevents any IP
interface, switch, or target port failure from compromising the host servers' access to the volumes (LUNs).
2.3 SAN Volume Controller terminology
To provide a higher level of consistency among IBM storage products, the terminology used
starting with SAN Volume Controller Version 7, and therefore throughout the rest of this book,
has changed when compared to previous SAN Volume Controller releases. Table 2-1 on
page 14 summarizes the major changes.
Table 2-1 SAN Volume Controller terminology mapping

SAN Volume Controller terminology | Previous SAN Volume Controller term | Description
Clustered system or system | Cluster | A clustered system consists of between one and four I/O Groups.
Event | Error | An occurrence of significance to a task or system. Events can include completion or failure of an operation, a user action, or a change in the state of a process.
Host mapping | VDisk-to-host mapping | The process of controlling which hosts have access to specific volumes within a system.
Storage pool | Managed disk (MDisk) group | A collection of storage capacity that provides the capacity requirements for a volume.
Thin provisioning (or thin-provisioned) | Space-efficient | The ability to define a storage unit (full system, storage pool, or volume) with a logical capacity size that is larger than the physical capacity that is assigned to that storage unit.
Volume | Virtual disk (VDisk) | A discrete unit of storage on disk, tape, or other data recording medium that supports a form of identifier and parameter list, such as a volume label or input/output control.
For a detailed glossary containing the terms and definitions that are used in the SAN Volume
Controller environment, see Appendix B, Terminology on page 877.
2.4 SAN Volume Controller components
The SAN Volume Controller product provides block-level aggregation and volume
management for attached disk storage. In simpler terms, the SAN Volume Controller
manages a number of back-end storage controllers or locally attached disks and maps the
physical storage within those controllers or disk arrays into logical disk images, or volumes,
that can be seen by application servers and workstations in the SAN.
The SAN is zoned so that the application servers cannot see the back-end physical storage,
which prevents any possible conflict between the SAN Volume Controller and the application
servers both trying to manage the back-end storage. The SAN Volume Controller is based on
the following components, which are discussed in more detail in later sections of this chapter.
2.4.1 Nodes
Each SAN Volume Controller hardware unit is called a node. The node provides the
virtualization for a set of volumes, cache, and copy services functions. SAN Volume Controller
nodes are deployed in pairs and multiple pairs make up a clustered system or system. A
system can consist of between one and four SAN Volume Controller node pairs.
One of the nodes within the system is known as the configuration node. The configuration
node manages the configuration activity for the system. If this node fails, the system chooses
a new node to become the configuration node.
Because the nodes are installed in pairs, each node provides a failover function to its partner
node in the event of a node failure.
2.4.2 I/O Groups
Each pair of SAN Volume Controller nodes is also referred to as an I/O Group. A SAN
Volume Controller clustered system can have from one to four I/O Groups. A specific volume
is always presented to a host server by a single I/O Group of the system.
When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to one specific I/O Group in the system. Also, under normal conditions, the I/Os for
that specific volume are always processed by the same node within the I/O Group. This node
is referred to as the preferred node for this specific volume.
Both nodes of an I/O Group act as the preferred node for their own specific subset of the total
number of volumes that the I/O Group presents to the host servers. A maximum of
2,048 volumes per I/O Group is allowed. However, both nodes also act as failover nodes for
their respective partner node within the I/O Group. Therefore, a node takes over the I/O
workload from its partner node, if required.
Thus, in a SAN Volume Controller based environment, the I/O handling for a volume can
switch between the two nodes of the I/O Group. For this reason, it is mandatory for servers
that are connected through FC to use multipath drivers to be able to handle these failover
situations.
The SAN Volume Controller I/O Groups are connected to the SAN so that all application
servers accessing volumes from this I/O Group have access to this group. Up to 512 host
server objects can be defined per I/O Group. The host server objects can access volumes
that are provided by this specific I/O Group.
If required, host servers can be mapped to more than one I/O Group within the SAN Volume
Controller system; therefore, they can access volumes from separate I/O Groups. You can
move volumes between I/O Groups to redistribute the load between the I/O Groups; however,
moving volumes between I/O Groups cannot be done concurrently with host I/O and requires
a brief interruption to remap the host.
2.4.3 System
The system or clustered system consists of between one and four I/O Groups. Certain
configuration limitations are then set for the individual system. For example, the maximum
number of volumes supported per system is 8,192 (having a maximum of 2,048 volumes per
I/O Group), and the maximum managed storage capacity supported is 32 PB per system.
All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management IP address is set for the system.
A process is provided to back up the system configuration data onto disk so that it can be
restored in the event of a disaster. Note that this method does not back up application data.
Only SAN Volume Controller system configuration information is backed up.
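The backup itself is typically triggered from the CLI. The following sketch is for illustration only; verify the exact procedure against the documentation for your code level:

svcconfig backup

The command writes a configuration backup file (an XML file) on the configuration node, which you should then copy to a location outside the system, for example by using secure copy.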
For the purposes of remote data mirroring, two or more systems must form a partnership
prior to creating relationships between mirrored volumes.
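For illustration only (the partner system name and bandwidth value are hypothetical, not taken from this book), a partnership is typically established from the CLI, for example:

svctask mkpartnership -bandwidth 200 ITSO_SVC_B

Running the equivalent command on the partner system brings the partnership to the fully configured state.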
For details about the maximum configurations that are applicable to the system, I/O Group,
and nodes, see the following link:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004510
2.4.4 Split cluster
Normally, a pair of nodes from the same I/O Group is physically located within the same rack,
in the same computer room. Since SAN Volume Controller 5.1, to provide protection against
failures that affect an entire location (for example, a power failure), you can split a single
system between two physical locations, up to 10 km (6.2 miles) apart. All inter-node
communication between SAN Volume Controller node ports in the same system must not
cross ISLs. Also, all communication between the SAN Volume Controller nodes and the back-end disk controllers must not cross ISLs.
Therefore, the FC path between sites cannot use an inter-switch link (ISL). The remote node
must have a direct path to the switch to which its partner and other system nodes connect.
Starting with SAN Volume Controller 6.3, the distance limit has been extended to Metro Mirror
distance (about 300 km (186.4 miles)). Appendix C, SAN Volume Controller Stretched
Cluster on page 885 has more information.
2.4.5 MDisks
The SAN Volume Controller system and its I/O Groups view the storage that is presented to
the SAN by the back-end controllers as a number of disks or LUNs, known as managed disks
or MDisks. Because the SAN Volume Controller does not attempt to provide recovery from
physical disk failures within the back-end controllers, an MDisk is usually provisioned from a
RAID array. The application servers, however, do not see the MDisks at all. Instead, they see
a number of logical disks, known as virtual disks or volumes, which are presented by the SAN
Volume Controller I/O Groups through the SAN (FC/FCoE) or LAN (iSCSI) to the servers.
The MDisks are placed into storage pools where they are divided into a number of extents,
which can range in size from 16 MB to 8192 MB, as defined by the SAN Volume Controller
administrator. See the following link for an overview of the total storage capacity that is
manageable per system regarding the selection of extents:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004368#_Extents
A volume is host-accessible storage that has been provisioned out of one storage pool, or if it
is a mirrored volume, out of two storage pools.
The maximum size of an MDisk is 1 PB. A SAN Volume Controller system supports up to
4096 MDisks (including internal RAID arrays). At any point in time, an MDisk is in one of the
following three modes:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and has no metadata stored on it.
The SAN Volume Controller does not write to an MDisk that is in unmanaged mode,
except when it attempts to change the mode of the MDisk to one of the other modes. The
SAN Volume Controller can see the resource, but the resource is not assigned to a
storage pool.
Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute
extents to the storage pool. Volumes (if not operated in image mode) are created from
these extents. MDisks operating in managed mode might have metadata extents allocated
from them and can be used as quorum disks. This mode is the most common and normal
mode for an MDisk.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by
using virtualization. This mode is provided to satisfy three major usage scenarios:
Image mode allows the virtualization of MDisks already containing data that was
written directly and not through a SAN Volume Controller; rather, it was created by a
direct-connected host.
This mode allows a client to insert the SAN Volume Controller into the data path of an
existing storage volume or LUN with minimal downtime. Chapter 6, Data migration on
page 225, provides details of the data migration process.
Image mode allows a volume that is managed by the SAN Volume Controller to be
used with the native copy services function provided by the underlying RAID controller.
To avoid the loss of data integrity when the SAN Volume Controller is used in this way,
it is important that you disable the SAN Volume Controller cache for the volume.
SAN Volume Controller provides the ability to migrate to image mode, which allows the
SAN Volume Controller to export volumes and access them directly from a host without
the SAN Volume Controller in the path.
Each MDisk presented from an external disk controller has an online path count that is the
number of nodes having access to that MDisk. The maximum count is the maximum
number of paths detected at any point in time by the system. The current count is what the
system sees at this point in time. A current value less than the maximum can indicate that
SAN fabric paths have been lost. See 2.5.1, Image mode volumes on page 23 for more
details.
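A quick way to check the mode that is reported for each MDisk is to list them from the CLI. This is a sketch for illustration only; the output columns can vary by code level:

svcinfo lsmdisk

The mode column of the output reports unmanaged, managed, or image (RAID arrays built from internal drives are reported with a mode of array).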
Solid-state drives (SSDs) that are located in SAN Volume Controller 2145-CF8 nodes are
presented to the cluster as MDisks. To determine whether the selected MDisk is an SSD,
click the link on the MDisk name to display the Viewing MDisk Details panel. If the selected
MDisk is an SSD that is located on a SAN Volume Controller 2145-CF8 node, the Viewing
MDisk Details panel displays values for the Node ID, Node Name, and Node Location
attributes. Alternatively, you can select Work with Managed Disks → Disk Controller Systems from the portfolio. On the Viewing Disk Controller panel, you can match the MDisk to the disk controller system that has the corresponding values for these attributes.
2.4.6 Quorum disk
A quorum disk is a managed disk (MDisk) that contains a reserved area for use exclusively by
the system. The system uses quorum disks to break a tie when exactly half the nodes in the
system remain after a SAN failure: this situation is referred to as split brain. Quorum
functionality is not supported on SSDs within SAN Volume Controller nodes.
There are three candidate quorum disks. However, only one quorum disk is active at any time.
Quorum disks are discussed in more detail in 2.8.1, Quorum disks on page 41.
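For illustration only, the quorum disk candidates and the currently active quorum disk can be listed from the CLI; if necessary, the active quorum disk can be changed with the svctask chquorum command:

svcinfo lsquorum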
2.4.7 Disk tier
It is likely that the MDisks (LUNs) presented to the SAN Volume Controller system have
various performance attributes due to the type of disk or RAID array on which they reside.
The MDisks can be on 15K disk revolutions per minute (RPMs) Fibre Channel or SAS disk,
Nearline SAS or SATA, or even SSDs.
Therefore, a storage tier attribute is assigned to each MDisk, with the default being
generic_hdd. Starting with SAN Volume Controller V6.1, a new tier 0 (zero) level disk attribute
is available for SSDs, and it is known as generic_ssd.
2.4.8 Storage pool
A storage pool is a collection of up to 128 MDisks that provides the pool of storage from which
volumes are provisioned. A single system can manage up to 128 storage pools. The size of
these pools can be changed (expanded or shrunk) at run time by adding or removing MDisks,
without taking the storage pool or the volumes offline.
At any point in time, an MDisk can only be a member in one storage pool, with the exception
of image mode volumes. See 2.5.1, Image mode volumes on page 23 for more information
about this topic.
Figure 2-3 illustrates the relationships of the SAN Volume Controller entities to each other.
Figure 2-3 Overview of SAN Volume Controller clustered system with I/O Group
Each MDisk in the storage pool is divided into a number of extents. The size of the extent is
selected by the administrator at the creation time of the storage pool and cannot be changed
later. The size of the extent ranges from 16 MB up to 8192 MB.
It is a preferred practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring (see 2.5.4,
Mirrored volumes on page 26) to copy volumes between pools.
The SAN Volume Controller limits the number of extents in a system to 2^22 (approximately four million).
Because the number of addressable extents is limited, the total capacity of a SAN Volume
Controller system depends on the extent size that is chosen by the SAN Volume Controller
administrator. The capacity numbers that are specified in Table 2-2 for a SAN Volume
Controller system assume that all defined storage pools have been created with the same
extent size.
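As a brief worked check of this relationship: with 256 MB extents, the addressable capacity is 2^22 extents x 256 MB per extent = 2^30 MB = 1 PB, and with 16 MB extents it is 2^22 x 16 MB = 64 TB, which matches Table 2-2.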
Table 2-2 Extent size-to-addressability matrix

Extent size    Maximum system capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1024 MB        4 PB
2048 MB        8 PB
4096 MB        16 PB
8192 MB        32 PB

For most systems, a capacity of 1 to 2 PB is sufficient. A preferred practice is to use 256 MB
or, for larger clustered systems, 512 MB as the standard extent size.
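The extent size is specified when the storage pool is created. The following CLI sketch is for illustration only; the pool and MDisk names are hypothetical:

svctask mkmdiskgrp -name Pool_DS8K_1 -ext 256 -mdisk mdisk0:mdisk1:mdisk2

This example creates a storage pool named Pool_DS8K_1 with a 256 MB extent size from three MDisks.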
Single-tiered storage pool
MDisks that are used in a single-tiered storage pool must have the following characteristics to
avoid causing performance problems and other issues:
They have the same hardware characteristics, for example, the same RAID type, RAID
array size, disk type, and RPMs.
The disk subsystems providing the MDisks must have similar characteristics, for example,
maximum input/output operations per second (IOPS), response time, cache, and
throughput.
The MDisks that are used are the same size and are therefore MDisks that provide the
same number of extents. If that is not feasible, check the distribution of the volumes' extents in that storage pool.
For further details, see SAN Volume Controller Best Practices and Performance Guidelines,
SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Multitiered storage pool
A multitiered storage pool has a mix of MDisks with more than one type of disk tier attribute,
for example, a storage pool containing a mix of generic_hdd and generic_ssd MDisks.
A multitiered storage pool therefore contains MDisks with various characteristics, as opposed
to a single-tier storage pool. However, it is a preferred practice for each tier to have MDisks of
the same size and MDisks that provide the same number of extents.
Multitiered storage pools are used to enable the automatic migration of extents between disk
tiers using the IBM Storage System SAN Volume Controller Easy Tier function. These storage
pools are described in more detail in Chapter 7, Advanced features for storage efficiency on
page 349.
2.4.9 Volumes
Volumes are logical disks that are presented to the host or application servers by the SAN
Volume Controller. The hosts cannot see the MDisks; they can only see the logical volumes
created from combining extents from a storage pool.
There are three types of volumes: striped, sequential, and image. These types are
determined by the way in which the extents are allocated from the storage pool, as explained
here:
A volume created in striped mode has extents allocated from each MDisk in the storage
pool in a round-robin fashion.
With a sequential mode volume, extents are allocated sequentially from an MDisk.
Image mode is a one-to-one mapped extent mode volume.
Using striped mode is the best method to use for most cases. However, sequential extent
allocation mode can slightly increase the sequential performance for certain workloads.
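As an illustration only (the pool name, volume name, and size are hypothetical), a striped volume is typically created from the CLI as follows; striped is also the default if -vtype is omitted:

svctask mkvdisk -mdiskgrp Pool_DS8K_1 -iogrp 0 -size 100 -unit gb -vtype striped -name vol01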
Figure 2-4 shows the striped volume mode and sequential volume mode, and it illustrates
how the extent allocation from the storage pool differs.
Figure 2-4 Storage pool extents overview
You can allocate the extents for a volume in many ways. The process is under full user control
at volume creation time and can be changed at any time by migrating single extents of a
volume to another MDisk within the storage pool.
Chapter 6, Data migration on page 225, Chapter 9, SAN Volume Controller operations
using the command-line interface on page 471, and Chapter 10, SAN Volume Controller
operations using the GUI on page 627 provide detailed explanations about how to create
volumes and migrate extents by using the GUI or command-line interface (CLI).
2.4.10 Easy Tier performance function
Easy Tier is a performance function that automatically migrates or moves extents off a volume
to, or from, one MDisk storage tier to another MDisk storage tier. Easy Tier monitors the host
I/O activity and latency on the extents of all volumes with the Easy Tier function turned on in a
multitier storage pool over a 24-hour period.
Next, it creates an extent migration plan based on this activity and then dynamically moves
high activity or hot extents to a higher disk tier within the storage pool. It also moves extents
whose activity has dropped off or cooled from the high-tier MDisks back to a lower-tiered
MDisk.
To experience the potential benefits of using Easy Tier in your environment before actually
installing expensive SSDs, you can turn on the Easy Tier function for a single-level storage
pool. Next, turn on the Easy Tier function for the volumes within that pool. Easy Tier then
starts monitoring activity on the volume extents in the pool.
Easy Tier: The Easy Tier function can be turned on or off at the storage pool level and
volume level.
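For illustration only (the object names are hypothetical; verify the parameter values for your code level), Easy Tier is typically controlled with the -easytier parameter at both levels:

svctask chmdiskgrp -easytier on Pool_Multi_1
svctask chvdisk -easytier on vol01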
Easy Tier creates a report every 24 hours, providing information about how Easy Tier
behaves if the tier were a multitiered storage pool. So, even though Easy Tier extent migration
is not possible within a single-tiered pool, the Easy Tier statistical measurement function is
available.
The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes.
The usage statistics file can be offloaded from the SAN Volume Controller nodes. Then, you
can use an IBM Storage Advisor Tool to create a summary report.
For more detailed information about Easy Tier functionality and more information about
statistics generation using the IBM Storage Advisor Tool, see Chapter 7, Advanced features
for storage efficiency on page 349.
2.4.11 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A
host within the SAN Volume Controller is a collection of host bus adapter (HBA) worldwide
port names (WWPNs) or iSCSI qualified names (IQNs), defined on the specific server. Note
that iSCSI names are internally identified by fake WWPNs, or WWPNs that are generated
by the SAN Volume Controller. Volumes can be mapped to multiple hosts, for example, a
volume that is accessed by multiple hosts of a server system.
iSCSI is an alternative means of attaching hosts. However, all communication with back-end
storage subsystems, and with other SAN Volume Controller systems, is still through FC.
Node failover can be handled without having a multipath driver installed on the iSCSI server.
An iSCSI-attached server can simply reconnect after a node failover to the original target IP
address, which is now presented by the partner node. To protect the server against link
failures in the network or HBA failures, using a multipath driver is mandatory.
Volumes are LUN-masked to the host's HBA WWPNs by a process called host mapping.
Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that
are configured on the host object.
For a SCSI over Ethernet connection, the IQN identifies the iSCSI target (destination)
adapter. Host objects can have both IQNs and WWPNs.
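The following CLI sketch illustrates defining host objects and mapping a volume to one of them; the WWPN, IQN, and object names are hypothetical examples, not values from this book:

svctask mkhost -name server01 -hbawwpn 210000E08B05AAAA
svctask mkhost -name server02 -iscsiname iqn.1991-05.com.microsoft:server02
svctask mkvdiskhostmap -host server01 vol01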
2.4.12 Maximum supported configurations
For details about the maximum configurations that are applicable to the system, I/O Group,
and nodes, select the Restrictions hot link in the section of the SAN Volume Controller
support site that corresponds to your SAN Volume Controller code level:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
There are certain configuration limits in the SAN Volume Controller. The following list includes
several of the more important limits. For the most current details, consult the SAN Volume
Controller support site.
16 WWNNs per storage subsystem
1 PB MDisk
8192 MB extents
Long Object Names can be up to 63 characters
See 2.13, What is new with SAN Volume Controller 7.2 on page 67 for a more detailed
explanation of the new features.
2.5 Volume overview
The maximum size of a single volume is 256 TB. A single SAN Volume Controller system
supports up to 8,192 volumes.
Volumes have the following characteristics or attributes:
Volumes can be created and deleted.
Volumes can be resized (expand or shrink).
Volume extents can be migrated at run time to another MDisk or storage pool.
Volumes can be created as fully allocated or thin-provisioned. A conversion from a fully
allocated to a thin-provisioned volume and vice versa can be done at run time.
Volumes can be stored in multiple storage pools (mirrored) to make them resistant to disk
subsystem failures or to improve the read performance.
Volumes can be mirrored synchronously or asynchronously for longer distances. A SAN
Volume Controller system can run active volume mirrors to a maximum of three other SAN
Volume Controller systems, but not from the same volume.
Volumes can be copied using FlashCopy. Multiple snapshots and quick restore from
snapshots (reverse flash copy) are supported.
Volumes can be compressed
Volumes have two major modes: image mode and managed mode. Managed mode volumes
have two policies: the sequential policy and the striped policy. Policies define how the extents
of a volume are allocated from a storage pool.
2.5.1 Image mode volumes
Image mode volumes are used to migrate LUNs that were previously mapped directly to host
servers over to the control of the SAN Volume Controller.
Image mode provides a one-to-one mapping of logical block addresses (LBAs) between a volume and an MDisk. Image mode volumes have a minimum size of one block
(512 bytes) and always occupy at least one extent.
An image mode MDisk is mapped to one and only one image mode volume.
The volume capacity that is specified must be equal to the size of the image mode MDisk.
When you create an image mode volume, the specified MDisk must be in unmanaged mode
and must not be a member of a storage pool. The MDisk is made a member of the specified
storage pool (Storage Pool_IMG_xxx) as a result of the creation of the image mode volume.
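As an illustration only (the names are hypothetical), an image mode volume is typically created by specifying the unmanaged MDisk and the target storage pool; the volume size is taken from the MDisk:

svctask mkvdisk -mdiskgrp Pool_IMG_1 -iogrp 0 -vtype image -mdisk mdisk10 -name legacy_vol01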
The SAN Volume Controller also supports the reverse process in which a managed mode
volume can be migrated to an image mode volume. If a volume is migrated to another MDisk,
it is represented as being in managed mode during the migration and is only represented as
an image mode volume after it has reached the state where it is a straight-through mapping.
An image mode MDisk is associated with exactly one volume. If the size of the (image mode) MDisk is not a multiple of the storage pool's extent size, the last extent is only partially filled. An image
mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and
does not have any SAN Volume Controller metadata extents assigned to it. Managed or
image mode MDisks are always members of a storage pool.
It is a preferred practice to put image mode MDisks in a dedicated storage pool and use a
special name for it (for example, Storage Pool_IMG_xxx). Remember that the extent size
chosen for this specific storage pool must be the same as the extent size into which you plan
to migrate the data. All of the SAN Volume Controller copy services functions can be applied
to image mode disks.
Figure 2-5 Image mode volume versus striped volume
2.5.2 Managed mode volumes
Volumes operating in managed mode provide a full set of virtualization functions. Within a
storage pool, the SAN Volume Controller supports an arbitrary relationship between extents
on (managed mode) volumes and extents on MDisks. Each volume extent maps to exactly
one MDisk extent.
Figure 2-6 on page 25 represents this mapping diagrammatically. It shows a volume that is
made up of a number of extents shown as V0 to V7. Each of these extents is mapped to an
extent on one of the MDisks: A, B, or C. The mapping table stores the details of this
indirection.
Note that several of the MDisk extents are unused; no volume extent maps to them.
These unused extents are available for use in creating new volumes, migration, expansion,
and so on.
Figure 2-6 Simple view of block virtualization
The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm: if the set of MDisks from which to allocate extents contains more than
one MDisk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no
free extents when its turn arrives, its turn is missed and the round-robin moves to the next
MDisk in the set that has a free extent.
When creating a new volume, the first MDisk from which to allocate an extent is chosen in a
pseudo-random way rather than by simply choosing the next disk in a round-robin fashion.
The pseudo-random algorithm avoids the situation whereby the striping effect inherent in a
round-robin algorithm places the first extent for a large number of volumes on the same
MDisk. Placing the first extent of a number of volumes on the same MDisk can lead to poor
performance for workloads that place a large I/O load on the first extent of each volume, or
that create multiple sequential streams.
2.5.3 Cache mode and cache-disabled volumes
Under nominal conditions, a volume's read and write data is held in the cache of its preferred
node, with a mirrored copy of write data held in the partner node of the same I/O Group.
However, it is possible to create a volume with cache disabled, which means that the I/Os are
passed directly through to the back-end storage controller rather than being held in the node's
cache.
Having cache-disabled volumes makes it possible to use the native copy services in the
underlying RAID array controller for MDisks (LUNs) that are used as SAN Volume Controller
image mode volumes. Using SAN Volume Controller Copy Services instead of the underlying
disk controller copy services gives better results.
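For illustration only (the names are hypothetical; check the cache options available at your code level), the cache mode can be selected when the volume is created:

svctask mkvdisk -mdiskgrp Pool_IMG_1 -iogrp 0 -vtype image -mdisk mdisk11 -cache none -name legacy_vol02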
2.5.4 Mirrored volumes
The mirrored volume feature provides a simple RAID 1 function; thus, a volume has two
physical copies of its data. This approach allows the volume to remain online and accessible
even if one of the MDisks sustains a failure that causes it to become inaccessible.
The two copies of the volume are typically allocated from separate storage pools or by using
image-mode copies. The volume can participate in FlashCopy and remote copy relationships,
it is serviced by an I/O Group, and it has a preferred node.
Each copy is not a separate object and cannot be created or manipulated except in the
context of the volume. Copies are identified through the configuration interface with a copy ID
of their parent volume. This copy ID can be either 0 or 1.
The feature provides a point-in-time copy functionality that is achieved by splitting a copy
from the volume. Note, however, that the mirrored volume feature does not address other
forms of mirroring based on remote copy (sometimes called Hyperswap), which mirrors
volumes across I/O Groups or clustered systems. It is also not intended to manage mirroring
or remote copy functions in back-end controllers.
Figure 2-7 provides an overview of volume mirroring.
Figure 2-7 Volume mirroring overview
A second copy can be added to a volume with a single copy, or removed from a volume with
two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A
newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as fresh and the secondary copy
is defined as stale.
The synchronization process updates the secondary copy until it is fully synchronized. This
update is done at the default synchronization rate or at a rate defined when creating the
volume or modifying it. The synchronization status for mirrored volumes is recorded on the
quorum disk.
If a two-copy mirrored volume is created with the format parameter, both copies are formatted
in parallel and the volume comes online when both operations are complete with the copies in
sync.
If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.
If it is known that MDisk space, which is used for creating copies, is already formatted, or if
the user does not require read stability, a no synchronization option can be selected that
declares the copies as synchronized (even when they are not).
To minimize the time required to resynchronize a copy that has become out of sync, only the
256 KB grains that have been written to since the synchronization was lost are copied.
This approach is known as an incremental synchronization. Only the changed grains need
to be copied to restore synchronization.
Where there are two copies of a volume, one copy is known as the primary copy. If the
primary is available and synchronized, reads from the volume are directed to it. The user can
select the primary when creating the volume, or can change it later.
Placing the primary copy on a high-performance controller maximizes the read performance
of the volume.
Write I/O operations data flow with a mirrored volume
For write I/O operations to a mirrored volume, the SVC preferred node definition, together
with the multipathing driver on the host, are used to determine the preferred path. The host
routes the I/Os via the preferred path, and the corresponding node is responsible to further
destage written data from cache to both volume copies. Figure 2-8 shows the data flow for
write I/O processing when using volume mirroring.
Figure 2-8 Data flow for write I/O processing in a mirrored volume in SAN Volume Controller
All the writes are sent by the host to the preferred node for each volume (1); then, the data is
mirrored to the cache of the partner node in the I/O group (2) and then acknowledgement of
the write operation is sent to the host (3). The preferred node then destages the written data
to the two volume copies (4).
Important: An unmirrored volume can be migrated from one location to another by simply
adding a second copy to the desired destination, waiting for the two copies to synchronize,
and then removing the original copy 0. This operation can be stopped at any time. The two
copies can be in separate storage pools with separate extent sizes.
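The following CLI sketch outlines that migration procedure for illustration only; the pool and volume names are hypothetical:

svctask addvdiskcopy -mdiskgrp Pool_Target vol01
svcinfo lsvdisksyncprogress vol01
svctask rmvdiskcopy -copy 0 vol01

Run the lsvdisksyncprogress command periodically until the new copy reports full synchronization, and then remove the original copy 0.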
A volume with copies can be checked to see whether all of the copies are identical or
consistent. If a medium error is encountered while reading from one copy, it is repaired using
data from the other copy. This consistency check is performed asynchronously with host I/O.
Mirrored volumes consume bitmap space at a rate of 1 bit per 256 KB grain, which translates
to 1 MB of bitmap space supporting 2 TB-worth of mirrored volume. The default allocation of
bitmap space is 20 MB, which supports 40 TB of mirrored volumes. If all 512 MB of variable
bitmap space is allocated to mirrored volumes, 1 PB of mirrored volumes can be supported.
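As a brief worked check: 1 MB of bitmap space is 2^23 bits, and each bit covers a 256 KB (2^18 byte) grain, so 2^23 x 2^18 bytes = 2^41 bytes = 2 TB of mirrored volume capacity. The 20 MB default therefore covers 40 TB, and the 512 MB maximum covers 1 PB.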
2.5.5 Thin-provisioned volumes
Volumes can be configured to be either thin-provisioned or fully allocated. A
thin-provisioned volume behaves, with respect to application reads and writes, as though it
were fully allocated. When creating a thin-provisioned volume, the user specifies two
capacities: the real physical capacity allocated to the volume from the storage pool, and its
virtual capacity available to the host. In a fully allocated volume, these two values are the
same.
Thus, the real capacity determines the quantity of MDisk extents that is initially allocated to
the volume. The virtual capacity is the capacity of the volume reported to all other SAN
Volume Controller components (for example, FlashCopy, Cache, and remote copy) and to the
host servers.
The real capacity is used to store both the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.
Thin-provisioned volumes can be used as volumes assigned to the host, by FlashCopy to
implement thin-provisioned FlashCopy targets, and also with the mirrored volumes feature.
When a thin-provisioned volume is initially created, a small amount of the real capacity is
used for initial metadata. Write I/Os to grains of the thin volume that were not previously
written to cause grains of the real capacity to be used to store metadata and the actual user
data. Write I/Os to grains that were previously written to update the grain where data was
previously written. The grain size is defined when the volume is created and can be 32 KB, 64
KB, 128 KB, or 256 KB. The default grain size is 256 KB, and is the strongly recommended
option. If you select 32 KB for the grain size, the volume size cannot exceed 260,000 GB. The
grain size cannot be changed after the thin-provisioned volume has been created. Generally,
smaller grain sizes save space but require more metadata access, which can adversely
impact performance. If you are not going to use the thin-provisioned volume as a FlashCopy
source or target volume, use 256 KB to maximize performance. If you are going to use the
thin-provisioned volume as a FlashCopy source or target volume, specify the same grain size
for the volume and for the FlashCopy function.
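As an illustration only (the names, sizes, and percentage are hypothetical), a thin-provisioned volume with a 500 GB virtual capacity, 2% initially allocated real capacity, autoexpand, and the default 256 KB grain size could be created as follows:

svctask mkvdisk -mdiskgrp Pool_DS8K_1 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -grainsize 256 -name thinvol01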
Figure 2-9 illustrates the thin-provisioning concept.
Important: Mirrored volumes can be taken offline if there is no quorum disk available. This
behavior occurs because the synchronization status for mirrored volumes is recorded on
the quorum disk.
Figure 2-9 Conceptual diagram of thin-provisioned volume
Thin-provisioned volumes store both user data and metadata. Each grain of data requires
metadata to be stored. Therefore, the I/O rates that are obtained from thin-provisioned
volumes are less than the I/O rates that are obtained from fully allocated volumes.
The metadata storage overhead is never greater than 0.1% of the user data. The overhead is
independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in
a FlashCopy map, for the best performance, use the same grain size as the map grain size. If
you are using the thin-provisioned volume directly with a host system, use a small grain size.
The real capacity of a thin volume can be changed if the volume is not in image mode.
Increasing the real capacity allows a larger amount of data and metadata to be stored on the
volume. Thin-provisioned volumes use the real capacity that is provided in ascending order as
new data is written to the volume. If the user initially assigns too much real capacity to the
volume, the real capacity can be reduced to free storage for other uses.
A thin-provisioned volume can be configured to autoexpand. This feature causes the SAN
Volume Controller to automatically add a fixed amount of additional real capacity to the thin
volume as required. Autoexpand therefore attempts to maintain a fixed amount of unused real
capacity for the volume. This amount is known as the contingency capacity.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
Thin-provisioned volume format: Thin-provisioned volumes do not need formatting. A
read I/O, which requests data from unallocated data space, returns zeros. When a write I/O
causes space to be allocated, the grain is zeroed prior to use. However, if the node is a
CF8, space is not allocated for a host write that contains all zeros. The formatting flag is
ignored when a thin volume is created or when the real capacity is expanded; the
virtualization component never formats the real capacity of a thin-provisioned volume.
A volume that is created without the autoexpand feature, and thus has a zero contingency
capacity, will go offline as soon as the real capacity is used and needs to expand.
Autoexpand will not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity will be recalculated.
To support the auto expansion of thin-provisioned volumes, the storage pools from which they
are allocated have a configurable capacity warning. When the used capacity of the pool
exceeds the warning capacity, a warning event is logged. For example, if a warning of 80%
has been specified, the event will be logged when 20% of the free capacity remains.
A thin-provisioned volume can be converted nondisruptively to a fully allocated volume, or
vice versa, by using the volume mirroring function. For example, you can add a
thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated
copy from the volume after they are synchronized.
The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so
that grains containing all zeros do not cause any real capacity to be used.
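As a sketch of that conversion procedure (for illustration only; the names are hypothetical), the thin-provisioned copy is added with a -rsize definition, and the fully allocated copy is removed after the copies are synchronized:

svctask addvdiskcopy -mdiskgrp Pool_DS8K_1 -rsize 2% -autoexpand vol02
svctask rmvdiskcopy -copy 0 vol02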
2.5.6 Volume I/O governing
It is possible to constrain I/O operations so that the maximum amount of I/O activity that a
host can perform on a volume can be limited over a specific period of time. This governing
feature can be used to satisfy a Quality Of Service (QoS) requirement or a contractual
obligation (for example, if a client agrees to pay for I/Os performed, but will not pay for I/Os
beyond a certain rate). Only Read, Write, and Verify commands that access the physical
medium are subject to I/O governing.
The governing rate can be set in I/Os per second or MB per second. It can be altered by
changing the throttle value through the svctask chvdisk command and specifying the -rate
parameter.
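For illustration only (the volume name and rates are hypothetical), a throttle can be set either in I/Os per second or, with the -unitmb option, in MB per second:

svctask chvdisk -rate 2000 vol01
svctask chvdisk -rate 200 -unitmb vol01

Setting the rate to 0 typically removes the governing limit.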
An I/O budget is expressed as a number of I/Os, or MBs, over a minute. The budget is evenly
divided between all SAN Volume Controller nodes that service that volume, that is, between
the nodes that form the I/O Group of which that volume is a member.
The algorithm operates two levels of policing. While a volume on each SAN Volume Controller
node receives I/O at a rate lower than the governed level, no governing is performed.
However, when the I/O rate exceeds the defined threshold, then adjustments to the policy are
made. A check is made every minute to see that each node is continuing to receive I/O below
the threshold level. Whenever this check shows that the host has exceeded its limit on one or
more nodes, then policing begins for new I/Os.
The following conditions exist while policing is in force:
A budget allowance is calculated for a one-second period.
I/Os are counted over a period of a second.
If I/Os are received in excess of the one-second budget on any node in the I/O Group,
those I/Os and later I/Os are pended.
I/O governing: I/O governing on Metro Mirror or Global Mirror secondary volumes does
not affect the data copy rate from the primary volume. Governing has no effect on
FlashCopy or data migration I/O rates.
When the second expires, a new budget is established, and any pended I/Os are redriven
under the new budget.
This algorithm might cause I/O to backlog in the front end, which might eventually cause a
Queue Full Condition to be reported to hosts that continue to flood the system with I/O. If a
host stays within its one-second budget on all nodes in the I/O Group for a period of
one minute, the policing is relaxed and monitoring takes place over the one-minute period as
before.
2.6 iSCSI overview
iSCSI is an alternative means of attaching hosts to the SAN Volume Controller. All
communications with back-end storage subsystems and with other SAN Volume Controller
systems only occur through FC.
The iSCSI function is a software function that is provided by the SAN Volume Controller code,
not hardware.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that
encapsulates SCSI commands into TCP/IP packets and therefore uses an existing IP
network, instead of requiring expensive FC HBAs and a SAN fabric infrastructure.
A pure SCSI architecture is based on the client/server model. A client (for example, server or
workstation) initiates read or write requests for data from a target server (for example, a data
storage system). Commands, which are sent by the client and processed by the server, are
put into the Command Descriptor Block (CDB). The server executes a command, and
completion is indicated by a special signal alert.
The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the TCP/IP network, especially over a
potentially unreliable IP network.
The concepts of names and addresses have been carefully separated in iSCSI:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies not only the iSCSI name of an iSCSI node, but also a location
of that node. The address consists of a host name or IP address, a TCP port number (for
the target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). A SAN Volume Controller node represents
an iSCSI node and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN),
which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted
for Internet nodes.
The iSCSI qualified name format is defined in RFC3720 and contains (in order) these elements:
- The string iqn.
- A date code specifying the year and month in which the organization registered the domain or subdomain name used as the naming authority string.
- The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.
- Optionally, a colon (:), followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique.
For SAN Volume Controller, the IQN for its iSCSI target is specified as:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Windows server, the IQN, that is, the name for the iSCSI Initiator, can be defined as:
iqn.1991-05.com.microsoft:<computer name>
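As a purely illustrative example (the system, node, and computer names are hypothetical), a system named ITSO_SVC1 with a node named node1 and a Windows host named WIN2K8HOST might yield IQNs similar to the following:
iqn.1986-03.com.ibm:2145.itsosvc1.node1
iqn.1991-05.com.microsoft:win2k8host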
The IQNs can be abbreviated by using a descriptive name, known as an alias. An alias can be assigned to an initiator or a target. The alias is independent of the name and does not have to be unique. Because it is not unique, the alias must be used in a purely informational way. It cannot be used to specify a target at login or used during authentication. Both targets and initiators can have aliases.
An iSCSI name provides the correct identification of an iSCSI device irrespective of its
physical location. Remember, the IQN is an identifier, not an address.
The iSCSI session, which consists of a login phase and a full feature phase, is completed with
a special command.
The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator.
If the iSCSI login phase is completed successfully, the target confirms the login for the
initiator; otherwise, the login is not confirmed and the TCP connection breaks.
As soon as the login is confirmed, the iSCSI session enters the full feature phase. If more than one TCP connection was established, iSCSI requires that each command and response pair go through a single TCP connection. Thus, each separate read or write command is carried out without the need to trace each request across separate flows. However, separate transactions can be delivered through separate TCP connections within one session.
Figure 2-10 illustrates an overview of the various block-level storage protocols and shows
where the iSCSI layer is positioned.
Be careful: Before changing system or node names for a SAN Volume Controller system that has servers connected to it by way of iSCSI, be aware that because the system and node names are part of the SAN Volume Controller's IQN, you can lose access to your data by changing these names. The SAN Volume Controller GUI displays a warning, but the CLI does not.
Figure 2-10 Overview of block-level protocol stacks
2.6.1 Use of IP addresses and Ethernet ports
The SAN Volume Controller node hardware has two Ethernet ports. The configuration details
of the two Ethernet ports can be displayed by the GUI, CLI, or panel on the front of the node.
There are two kinds of IP addresses:
System management IP address
This address is used for access to the SAN Volume Controller CLI, SAN Volume Controller
GUI, and to the Common Information Model Object Manager (CIMOM) that runs on the
SAN Volume Controller configuration node. Only one node, the configuration node,
presents a system management IP address at any one time. There can be two system
management IP addresses, one for each of the two Ethernet ports. Configuration node
failover is also supported.
Port IP address
This address is used to perform iSCSI I/O to the system. Each node can have a port IP
address for each of its ports.
SAN Volume Controller nodes have two or four Ethernet ports, which support either 1 Gbps or 10 Gbps, depending on the model. System management is possible only over the 1 Gbps ports.
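As a sketch of how a port IP address might be assigned and then verified from the CLI (the node ID, port ID, and addresses are examples only; verify the parameters against the CLI reference for your code level):
svctask cfgportip -node 1 -ip 10.1.1.11 -mask 255.255.255.0 -gw 10.1.1.1 1
svcinfo lsportip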
Figure 2-11 shows an overview of the IP addresses on a SAN Volume Controller node port
and illustrates how these IP addresses are moved between the nodes of an I/O Group.
The management IP addresses and the iSCSI target IP addresses will fail over to the partner
node N2 if node N1 fails (and vice versa). The iSCSI target IPs will fail back to their
corresponding ports on node N1 when node N1 is running again.
Figure 2-11 SAN Volume Controller IP address overview
It is a preferred practice to keep all of the eth0 ports on all of the nodes in the system on the
same subnet. The same practice applies for the eth1 ports; however, it can be a separate
subnet to the eth0 ports.
A maximum of 512 hosts per I/O Group is supported for CF8 and CG8 nodes, or a maximum of 256 hosts per I/O Group for other node types. Only 256 iSCSI hosts can be configured per SAN Volume Controller system, due to IQN limits.
You can find detailed examples of the SAN Volume Controller port configuration in Chapter 9,
SAN Volume Controller operations using the command-line interface on page 471 and
Chapter 10, SAN Volume Controller operations using the GUI on page 627.
2.6.2 iSCSI volume discovery
The iSCSI target implementation on the SAN Volume Controller nodes uses the hardware offload features that are provided by the node's hardware. This implementation results in a minimal effect on the node's CPU load for handling iSCSI traffic, and simultaneously delivers excellent throughput (up to 95 MBps of user data) on each of the two LAN ports. The use of jumbo frames (maximum transmission unit (MTU) sizes greater than 1,500 bytes) is a preferred practice.
Hosts can discover volumes through one of the following mechanisms (a command sketch follows this list):
- Internet Storage Name Service (iSNS)
  SAN Volume Controller can register itself with an iSNS name server; you set the IP address of this server by using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.
- Service Location Protocol (SLP)
  The SAN Volume Controller node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node, such as the CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be reported.
- iSCSI Send Target request
  The host can also send a Send Target request by using the iSCSI protocol to the iSCSI TCP/IP port (port 3260).
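For example, the iSNS server address might be registered as shown below. The IP address is hypothetical, and the -isnsip parameter name is an assumption to verify against the CLI reference for your code level:
svctask chcluster -isnsip 10.1.1.50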
2.6.3 iSCSI authentication
Authentication of the host server from the SAN Volume Controller system is optional and is
disabled by default. The user can choose to enable Challenge Handshake Authentication
Protocol (CHAP) authentication, which involves sharing a CHAP secret between the SAN
Volume Controller system and the host. The SAN Volume Controller as authenticator sends a
challenge message to the specific server (peer). The server responds with a value that is
checked by the SAN Volume Controller. If there is a match, the SAN Volume Controller
acknowledges the authentication. If not, the SAN Volume Controller will terminate the
connection and will not allow any I/O to volumes.
A CHAP secret can be assigned to each SAN Volume Controller host object. The host must
then use CHAP authentication to begin a communications session with a node in the system.
A CHAP secret can also be assigned to the system.
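As a hedged sketch, a per-host CHAP secret might be assigned as follows; the host object name and secret value are examples only, and the system-wide secret is set in a similar way with the system-level change command (parameter names should be verified against the CLI reference for your code level):
svctask chhost -chapsecret ITSOsecret01 WIN2K8HOST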
Volumes are mapped to hosts, and LUN masking is applied using the same methods that are
used for FC LUNs.
Because iSCSI can be used in networks where data security is a concern, the specification
allows for separate security methods. You can set up security, for example, through a method
such as IPSec, which is transparent for higher levels, such as iSCSI, because it is
implemented at the IP level. Details regarding securing iSCSI can be found in RFC3723,
Securing Block Storage Protocols over IP, which is available at this website:
http://tools.ietf.org/html/rfc3723
2.6.4 iSCSI multipathing
A multipathing driver means that the host can send commands over multiple paths to the SAN Volume Controller for the same volume. A fundamental multipathing difference exists between FC and iSCSI environments.
If the FC target and volumes that FC-attached hosts see go offline, for example, due to a problem in the target node, its ports, or the network, the hosts must use a separate SAN path to continue I/O. A multipathing driver is therefore always required on the host.
iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (and this action is the key difference) the host is reconnected to the same IP target, which reappears after a short period of time, and its volumes continue to be available for I/O. iSCSI therefore allows failover without host multipathing. To achieve this failover without host multipathing, the partner node in the I/O Group takes over the port IP addresses and iSCSI names of a failed node.
A host multipathing driver for iSCSI is required if you want these capabilities:
- Protecting a server from network link failures
- Protecting a server from network failures, if the server is connected through two separate networks
- Providing load balancing on the server's network links
2.7 Advanced Copy Services overview
Advanced Copy Services are a class of functionality of storage arrays and storage devices that allows various forms of block-level data duplication. Put simply, Advanced Copy Services allow you to make mirror images of part or all of your data, potentially between distant sites. This function has many benefits and uses, including, but not limited to, facilitating disaster recovery, building reporting instances to offload billing activities from production databases, building quality assurance systems at regular intervals for regression testing, offloading offline backups from production systems, and building test systems using production data.
SAN Volume Controller supports the following copy services:
- Synchronous remote copy (Metro Mirror)
- Asynchronous remote copy (Global Mirror)
- Asynchronous remote copy with Change Volumes (Global Mirror)
- Point-in-Time copy (FlashCopy)
- Data migration (Image Mode Migration and volume mirroring migration)
Copy services functions are implemented either within a single SAN Volume Controller system (FlashCopy and Image Mode Migration) or between SAN Volume Controller systems, or between SAN Volume Controller and Storwize systems (Metro Mirror and Global Mirror). To use the Metro Mirror and Global Mirror functions, you must have the remote-copy license installed on each side.
You can create partnerships with SAN Volume Controller and Storwize systems to allow
Metro Mirror and Global Mirror to operate between the two systems. To be able to create
these partnerships, both clustered systems must be at version 6.3.0 or later.
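As a sketch only (the remote system name and bandwidth value are hypothetical, and the command and its parameters should be verified against the CLI reference for your code level), a partnership might be created from each of the two systems as follows:
svctask mkpartnership -bandwidth 200 ITSO_SVC_B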
A clustered system is in one of two layers: the replication layer or the storage layer. The SAN
Volume Controller system is always in the replication layer. The Storwize system is in the
storage layer by default, but the system can be configured to be in the replication layer
instead.
Figure 2-12 on page 37 shows an example of the layers in a SAN Volume Controller and
Storwize V7000 clustered-system partnership.
Be aware: With the iSCSI implementation in the SAN Volume Controller, an IP address failover/failback between partner nodes of an I/O Group only takes place when a node goes offline because of a planned or unplanned node restart. When the partner node returns to online status, there is a delay of five minutes before the IP addresses and iSCSI names fail back.
Figure 2-12 Example configuration for replication between SAN Volume Controller and Storwize
systems
Within the SAN Volume Controller, the intracluster copy services functions (FlashCopy and Image Mode Migration) operate at the block level, while the intercluster functions (Global Mirror and Metro Mirror) operate at the volume layer. A volume is the container that is used to present storage to host systems. Operating at this layer allows the Advanced Copy Services functions to benefit from caching at the volume layer, helps facilitate the asynchronous function of Global Mirror, and lessens the effect of synchronous Metro Mirror.
Operating at the volume layer also allows Advanced Copy Services functions to operate
above and independently of the function or characteristics of the underlying disk subsystems
that are used to provide storage resources to a SAN Volume Controller system. Therefore, as
long as the physical storage is virtualized with a SAN Volume Controller or Storwize V7000
and the backing array is supported by the SAN Volume Controller or V7000, then you can use
disparate backing storage.
2.7.1 Synchronous/Asynchronous remote copy
Global Mirror and Metro Mirror are implemented at the volume layer within the SAN Volume
Controller. They are collectively referred to as remote copy. In general, the purpose of both
functions is to maintain two copies of data. Often, the two copies will be separated by
distance, but not necessarily. The remote copy can be maintained in one of two modes:
synchronous or asynchronous.
Metro Mirror is the IBM-branded term for synchronous remote copy function, and Global
Mirror is the IBM-branded term for the asynchronous remote copy function.
FlashCopy: Although FlashCopy operates at the block level, this level is the block level of the SAN Volume Controller, so the physical backing storage can be anything that the SAN Volume Controller supports. However, performance will be limited to the slowest performing storage that is involved in the FlashCopy.

Synchronous remote copy ensures that updates are physically committed (not merely held in volume cache) in both the primary and the secondary SAN Volume Controller clustered systems before the application considers the updates complete. Therefore, the secondary is fully up-to-date if it is needed in a failover.
However, the application is fully exposed to the latency and bandwidth limitations of the
communication link to the secondary. In a truly remote situation, this extra latency can have a
significantly adverse effect on application performance; hence, there is a limitation on the
distance of Metro Mirror of 300 kilometers (~186 miles). This distance induces latency of
approximately 5 microseconds per kilometer, which does not include the latency added by the
equipment in the path. The nature of synchronous remote copy is that latency for the distance
and the equipment in the path will be added directly to your application I/O response times.
Overall latency for a complete round-trip should not exceed 80 milliseconds.
Special configuration guidelines exist for SAN fabrics that are used for data replication. It is
necessary to consider the distance and available bandwidth of the intersite links. The SAN
Volume Controller Support Portal contains details regarding these guidelines:
http://www.ibm.com/support/entry/portal/Overview/Hardware/System_Storage/Storage_s
oftware/Storage_virtualization/SAN_Volume_Controller_%282145%29
See 8.7, "Metro Mirror" on page 411 for more details about the SAN Volume Controller's synchronous mirroring.
In asynchronous remote copy, the application is provided acknowledgement that the write is
complete prior to the write actually being committed (written to backing storage) at the
secondary. Thus, on a failover, certain updates (data) might be missing at the secondary.
The application must have an external mechanism for recovering the missing updates or
recovering to a consistent point in time (which is usually a few minutes in the past). This
mechanism can involve user intervention, but in most practical scenarios, it will need to be at
least partially automated.
Recovery on the secondary site involves assigning the Global Mirror targets on the SAN Volume Controller target system to one or more hosts (depending on your disaster recovery design), making those volumes visible to those hosts, and creating any required multipath device definitions.
The application must then be started and a recovery procedure to either a consistent point in
time or recovery of the missing updates must be performed. For this reason, the initial state of
Global Mirror targets is called crash consistent. This term might sound somewhat daunting,
but it merely means that the data on the volumes will appear to be in the same state as if an
application crash had occurred.
In asynchronous remote copy with cycling mode (Change Volumes), changes are tracked and
where needed copied to intermediate change volumes. Changes are transmitted to the
secondary site periodically. The secondary volumes are much further behind the primary
volume, and more data must be recovered in the event of a failover. Because the data transfer
can be smoothed over a longer time period, however, lower bandwidth is required to provide
an effective solution.
Since most applications, such as databases, have had mechanisms for dealing with this type
of data state for a long time, it is a fairly mundane operation (depending upon the application).
After this application recovery procedure is finished, the application will start normally.
Most clients will aim to automate the failover or recovery of the remote copy through failover
management software. SAN Volume Controller provides Simple Network Management
Protocol (SNMP) traps and interfaces to enable this automation. IBM Support for automation
is provided by IBM Tivoli Storage Productivity Center.
You can access the Tivoli documentation online at the IBM Tivoli Storage Productivity Center
information center:
http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/index.jsp?topic=%2Fcom.ibm.tpc_V5
2.doc%2Ftpc_kc_homepage.html
2.7.2 FlashCopy
FlashCopy is the IBM-branded name for Point-in-Time (sometimes called Time-Zero, or T0)
copy. This function makes a copy of the blocks on a source volume and can duplicate them on
1 to 256 target volumes.
FlashCopy works by creating one or two (for incremental operations) bitmaps to track
changes to the data on the source volume. This bitmap is also used to present an image of
the source data at the point in time that the copy was taken to target hosts while the actual
data is being copied. This capability ensures that copies appear to be instantaneous.
If your FlashCopy targets have existing content, it will be overwritten during the copy
operation. Also, the no copy (copy rate 0) option, where only changed data is copied,
overwrites existing content. After the copy operation has started, the target volume appears to
have the contents of the source volume as it existed at the point in time that the copy was
initiated. Although the physical copy of the data takes an amount of time that varies based on
system activity and configuration, the resulting data at the target appears as though the copy
was made instantaneously.
FlashCopy permits the management operations to be coordinated, via a grouping of
FlashCopy pairs, so that a common single point in time is chosen for copying target volumes
from their respective source volumes. This capability allows a consistent copy of data for an
application that spans multiple volumes.
SAN Volume Controller also permits source and target volumes for FlashCopy to be thin-provisioned volumes. FlashCopies to or from thin-provisioned volumes allow the duplication of data while consuming less space. How much space these volumes consume depends on the rate of change of the data. Typically, they are used in situations where the copy is needed only for a limited time. Over time, they have the potential to fill the physical space that they were allocated.

RPO: When planning your Recovery Point Objective (RPO), you need to account for the application recovery procedures, the length of time that they take, and the point to which the recovery procedures can roll back data. Although Global Mirror on a SAN Volume Controller can typically provide subsecond RPO times, the effective RPO time can be up to five minutes or longer, depending on the application behavior.

FlashCopy: When using the multiple target capability of FlashCopy, if an additional copy (C) is started while an existing copy (B) is in progress, C has a dependency on B. Therefore, if you terminate B, C becomes invalid.

Bitmap: In this context, bitmap refers to a special programming data structure that is used to compactly store Boolean values. Do not confuse this definition with the popular image file format.
Reverse FlashCopy enables target volumes to become restore points for the source volume
without breaking the FlashCopy relationship and without having to wait for the original copy
operation to complete. SAN Volume Controller supports multiple targets and thus multiple
rollback points.
In most practical scenarios, the FlashCopy functionality of the SAN Volume Controller is
integrated into a process or procedure that allows the benefits of the point-in-time copies to
be used to address business needs. IBM offers Tivoli Storage FlashCopy Manager for this
functionality. You can obtain more information about Tivoli Storage FlashCopy Manager at this
website:
http://www.ibm.com/software/products/en/tivostorflasmana
Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick
recovery of their applications and databases. You can read a detailed description of
FlashCopy copy services in Chapter 8, Advanced Copy Services on page 365.
2.7.3 Image mode migration and volume mirroring migration
There are two methods of Advanced Copy Services that are available outside of the licensed
Advanced Copy Services features: Image mode migration and volume mirroring migration.
The base software functionality of the SAN Volume Controller includes both of these
capabilities.
Image mode migration works by establishing a one-to-one static mapping of volumes and managed disks. This mapping allows the data on the managed disk to be presented directly through the volume layer and allows the data to be moved between volumes and the associated backing managed disks. This function provides a facility to use the SAN Volume Controller as a migration tool in situations where you otherwise have no recourse, such as migrating from Vendor A hardware to Vendor B hardware when the two systems have no other compatibility.
Volume mirroring migration is a clever use of the facility that the SAN Volume Controller offers
to mirror data on a volume between two sets of storage pools. Much like the logical volume
management portion of certain operating systems, the SAN Volume Controller can mirror
data transparently between two sets of physical hardware. You can use this feature to move
data between managed disk groups with no host I/O interruption by simply removing the
original copy after the mirroring is completed. This feature is much more limited than
FlashCopy and must not be used where FlashCopy is appropriate. Instead, use this function
as an infrequent-use, hardware-refresh aid, because you now have the ability to move
between your old storage system and new storage system without interruption.
Careful planning: When migrating by using volume mirroring migration, your I/O rate will be limited to the slower of the two managed disk groups involved, so it is imperative that you plan carefully to avoid affecting the live systems.

2.8 SAN Volume Controller clustered system overview

In simple terms, a clustered system (or system) is a collection of servers that together provide a set of resources to a client. The key point is that the client has no knowledge of the underlying physical hardware of the system. The client is isolated and protected from changes to the
physical hardware. This arrangement offers many benefits including, most significantly, high
availability.
Resources on the clustered system act as highly available versions of unclustered resources.
If a node (an individual computer) in the system is unavailable or too busy to respond to a
request for a resource, the request is passed transparently to another node that can process
the request. The clients are unaware of the exact locations of the resources that they use.
The SAN Volume Controller is a collection of up to eight nodes, which are added in pairs
known as I/O Groups. These nodes are managed as a set (system), and they present a single
point of control to the administrator for configuration and service activity.
The eight-node limit for a SAN Volume Controller system is a limitation that is imposed by the
microcode and is not a limit of the underlying architecture. Larger system configurations might
be available in the future.
Although the SAN Volume Controller code is based on a purpose-optimized Linux kernel, the
clustered system feature is not based on Linux clustering code. The clustered system
software within the SAN Volume Controller, that is, the event manager cluster framework, is
based on the outcome of the COMPASS research project. It is the key element that isolates
the SAN Volume Controller application from the underlying hardware nodes. The clustered
system software makes the code portable. It provides the means to keep the single instances of the SAN Volume Controller code that run on separate system nodes in sync. Therefore, restarting nodes during a code upgrade, adding new nodes, removing old nodes from a system, or node failures cannot affect the SAN Volume Controller's availability.
It is key for all active nodes of a system to know that they are members of the system.
Especially in situations, such as the split-brain scenario where single nodes lose contact with
other nodes, it is key to have a solid mechanism to decide which nodes form the active
system. A worst case scenario is a system that splits into two separate systems.
Within a SAN Volume Controller system, the voting set and a quorum disk are responsible for
the integrity of the system. If nodes are added to a system, they get added to the voting set. If
nodes are removed, they are removed quickly from the voting set. Over time, the voting set,
and thus the nodes in the system, can completely change so that the system has migrated
onto a completely separate set of nodes from the set on which it started.
The SAN Volume Controller clustered system implements a dynamic quorum. Following a
loss of nodes, if the system can continue to operate, it adjusts the quorum requirement so that
further node failure can be tolerated.
The node with the lowest Node Unique ID in a system becomes the boss node for the group of nodes, and it determines (from the quorum rules) whether the nodes can operate as the system. This node also presents a maximum of two cluster IP addresses on one or both of its Ethernet ports to allow access for system management.
2.8.1 Quorum disks
The system uses the quorum disk for two purposes: as a tiebreaker in the event of a SAN
fault (when exactly half of the nodes that were previously members of the system are present)
and to hold a copy of important system configuration data. Slightly over 256 MB is reserved
for this purpose on each quorum disk candidate. Only one active quorum disk exists in a
system; however, the system uses three MDisks as quorum disk candidates. The system
automatically selects the actual active quorum disk from the pool of assigned quorum disk
candidates.
If a tiebreaker condition occurs, the one-half portion of the system nodes, which is able to
reserve the quorum disk after the split has occurred, locks the disk and continues to operate.
The other half stops its operation. This design prevents both sides from becoming
inconsistent with each other.
When MDisks are added to the SAN Volume Controller system, the system checks each MDisk to see whether it can be used as a quorum disk. If the MDisk fulfills the requirements, the SAN Volume Controller assigns the first three MDisks that are added to the system as quorum candidates. One of these MDisks is selected as the active quorum disk.
You can list the quorum disk candidates and the active quorum disk in a system by using the svcinfo lsquorum command (a command sketch follows the list below).
When the set of quorum disk candidates has been chosen, it is fixed. However, a new quorum disk candidate can be chosen in one of these conditions:
- When the administrator requests that a specific MDisk become a quorum disk by using the svctask setquorum command
- When an MDisk that is a quorum disk is deleted from a storage pool
- When an MDisk that is a quorum disk changes to image mode
An offline MDisk will not be replaced as a quorum disk candidate.
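The following commands are a sketch only; the quorum index and MDisk name are hypothetical, and the setquorum syntax should be verified against the CLI reference for your code level:
svcinfo lsquorum
svctask setquorum -quorum 1 mdisk8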
For disaster recovery purposes, a system needs to be regarded as a single entity, so the
system and the quorum disk need to be colocated.
There are special considerations concerning the placement of the active quorum disk for a
stretched or split cluster and Split I/O Group configurations. The details are available at this
website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311
Quorum disk requirements: To be considered eligible as a quorum disk, a LUN must meet the following criteria:
- It must be presented by a disk subsystem that is supported to provide SAN Volume Controller quorum disks.
- It has been manually allowed to be a quorum disk candidate by using the svctask chcontroller -allow_quorum yes command.
- It must be in managed mode (no image mode disks).
- It must have sufficient free extents to hold the system state information, plus the stored configuration metadata.
- It must be visible to all of the nodes in the system.
Quorum disk placement: If possible, the SAN Volume Controller will place the quorum
candidates on separate disk subsystems. After the quorum disk has been selected,
however, no attempt is made to ensure that the other quorum candidates are presented
through separate disk subsystems.
Important: Verifying quorum disk placement and adjusting it to separate storage systems (if possible) reduces the dependency on a single storage system and can increase quorum disk availability significantly.
During the normal operation of the system, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a
node fails for any reason, the workload that is intended for it is taken over by another node
until the failed node has been restarted and readmitted into the system (which happens
automatically). If the microcode on a node becomes corrupted, resulting in a failure, the
workload is transferred to another node. The code on the failed node is repaired, and the
node is readmitted into the system (again, all automatically).
2.8.2 Split I/O Groups or split cluster
An I/O Group is formed by a pair of SAN Volume Controller nodes. These nodes act as
failover nodes for each other, and hold mirrored copies of cached volume writes. See 2.4.2,
I/O Groups on page 15 for more information. Normally, these nodes are physically located
within the same rack, in the same computer room. To provide protection against failures that
affect an entire location (for example, a power failure), you can split a single system between
two physical locations. With version 6.3, released in October 2011, SAN Volume Controller
began supporting Stretched Cluster configurations where nodes can be separated by a
distance of up to 300 km in specific configurations. In this configuration, special attention
must be given to the quorum disks to ensure a successful clustered system failover.
Generally, when the nodes in a system have been split across sites, the SAN Volume
Controller system must be configured in the following manner:
- Site 1 contains half of the SAN Volume Controller system nodes plus one quorum disk candidate.
- Site 2 contains half of the SAN Volume Controller system nodes plus one quorum disk candidate.
- Site 3 contains an active quorum disk.
This configuration ensures that a quorum disk is always available, even after a single-site failure. All internode communication between SAN Volume Controller node ports in the same system must not cross ISLs, which is also true for communication between the SAN Volume Controller and the back-end disk controllers. Therefore, the FC path between sites cannot use an inter-switch link (ISL); the remote node must have a direct path to the switch to which its partner and the other system nodes connect.
With SAN Volume Controller 6.3, there are significant enhancements for Split I/O Group in two different configurations:
- Without ISLs between SAN Volume Controller nodes (similar to the SAN Volume Controller 5.1 supported configuration). The distance can be extended to up to 40 km (24.8 miles). Active and passive wavelength division multiplexing (WDM) devices can be used between both sites.
- With ISLs between SAN Volume Controller nodes. The maximum distance is similar to Metro Mirror distances (300 km or 186 miles). The physical requirements are similar to Metro Mirror requirements, with ISL distance extension for active and passive WDM.
Important: Running a SAN Volume Controller system without a quorum disk can seriously
affect your operation. A lack of available quorum disks for storing metadata will prevent any
migration operation (including a forced MDisk delete).
Mirrored volumes can be taken offline if there is no quorum disk available. This behavior
occurs because the synchronization status for mirrored volumes is recorded on the
quorum disk.
Other SAN Volume Controller configuration rules also continue to apply. For example, Ethernet port eth0 on every SAN Volume Controller node, whether at the local or the remote site, must still be connected to the same subnet or subnets. For more details about split cluster configuration, see 3.3.7, "Split-cluster system configuration" on page 97.
2.8.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in 1 ms to 10 ms of response time (for an enterprise-class disk).
The 2145-CG8 nodes provide 24 GB of memory per node (optionally 48 GB with the second CPU card, which offers more processor power and memory for the Real-time Compression feature), that is, 48 GB (96 GB) per I/O Group, or 192 GB (384 GB) per SAN Volume Controller system. The SAN Volume Controller provides a flexible cache model, and the node's memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of the node's memory. Depending on the current I/O conditions on a node, the entire 24 GB of memory can be fully used as read cache.
Cache is allocated in 4 KB segments. A segment holds part of one track. A track is the unit of locking and destaging granularity in the cache. The cache virtual track size is 32 KB (eight segments). A track might be only partially populated with valid pages. The SAN Volume Controller coalesces writes up to a 256 KB track size if the writes reside in the same tracks prior to destage. For example, 4 KB might be written into a track, and another 4 KB might then be written to another location in the same track. Therefore, the blocks that are written from the SAN Volume Controller to the disk subsystem can be any size from 512 bytes up to 256 KB.
When data is written by the host, the preferred node within the I/O Group saves the data in its
cache. Before the cache returns completion to the host, the write must be mirrored to the
partner node, or copied into the cache of its partner node, for availability reasons. After having
a copy of the written data, the cache returns completion to the host. A volume that has not
received a write update during the last two minutes will automatically have all modified data
destaged to disk.
If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining
node empties all of its write cache and proceeds in operation mode, which is referred to as
write-through mode. A node operating in write-through mode writes data directly to the disk
subsystem before sending an I/O complete status message back to the host. Running in this
mode can degrade the performance of the specific I/O Group.
Write cache is partitioned by storage pool. This feature restricts the maximum amount of write
cache that a single storage pool can allocate in a system. Table 2-3 shows the upper limit of
write-cache data that a single storage pool in a system can occupy.
Table 2-3 Upper limit of write cache per storage pool

One storage pool: 100%
Two storage pools: 66%
Three storage pools: 40%
Four storage pools: 33%
More than four storage pools: 25%

For in-depth information about SAN Volume Controller cache partitioning, it is important to read IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
A SAN Volume Controller node treats part of its physical memory as non-volatile, which means that its contents are preserved across power losses and resets. Bitmaps for FlashCopy and Remote Mirroring relationships, the virtualization table, and the write cache are kept in the non-volatile memory.
In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node's internal disk drive, so that the contents can be recovered when external power is restored. The uninterruptible power supply units, which are delivered with each node's hardware, ensure that there is sufficient internal power to keep a node operational to perform this dump when the external power is removed. After dumping the content of the non-volatile part of the memory to disk, the SAN Volume Controller node shuts down.
2.8.4 Clustered system management
The SAN Volume Controller can be managed by one of the following interfaces:
- A text command-line interface (CLI) accessed through a Secure Shell (SSH) connection, for example, with PuTTY
- A web browser-based graphical user interface (GUI)
- Tivoli Storage Productivity Center
The GUI and a web server are installed in the SAN Volume Controller system nodes.
Therefore, any browser, if pointed at the system IP address, is able to access the
management GUI.
Management console
The management console for SAN Volume Controller was formerly the IBM System Storage Productivity Center (SSPC). This appliance is no longer needed, because the SAN Volume Controller can be reached through its internal management GUI.
2.8.5 IBM System Storage Productivity Center
IBM System Storage Productivity Center is based on server hardware (IBM System x-based)
and a set of preinstalled and optional software modules. Several of these preinstalled
modules provide base functionality only. Modules providing enhanced functionality can be
activated by installing separate licenses.
IBM System Storage Productivity Center contains the functions listed here:
IBM Tivoli Integrated Portal
IBM Tivoli Integrated Portal is a standards-based architecture for web administration. The
installation of Tivoli Integrated Portal is required to enable single sign-on (SSO) for Tivoli
Storage Productivity Center. Tivoli Storage Productivity Center now installs Tivoli
Integrated Portal along with Tivoli Storage Productivity Center.
IBM Tivoli Storage Productivity Center
IBM Tivoli Storage Productivity Center Basic Edition is preinstalled on the IBM System
Storage Productivity Center server. There are several other commercially available
products of Tivoli Storage Productivity Center that provide additional functionality beyond
Tivoli Storage Productivity Center Basic Edition. You can activate these packages by
adding the specific licenses to the preinstalled Basic Edition:
Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for
performance.
Tivoli Storage Productivity Center for Data allows you to collect and monitor file
systems and databases.
Tivoli Storage Productivity Center Standard Edition is a bundle that includes all of the
other packages, along with SAN planning tools that make use of information that is
collected from the Tivoli Storage Productivity Center components.
IBM Tivoli Storage Productivity Center for Replication
The functions of Tivoli Storage Productivity Center for Replication provide the
management of the IBM FlashCopy, Metro Mirror, and Global Mirror capabilities for the
IBM DS8000, IBM SAN Volume Controller, and other devices. This package can also be
activated by installing the specific licenses.
Web Browser to access the GUI
SSH Client (PuTTY)
DS Common Information Model (CIM) agents
Windows Server 2008 Enterprise Edition
Several base software packages that are required for Tivoli Productivity Center
Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage
Manager, can be installed on the IBM System Storage Productivity Center server by the
client.
Using Tivoli Storage Productivity Center or IBM System Director provides greater integration
points and launch in-context capabilities. Figure 2-13 provides an overview of the SAN
Volume Controller management components. We describe the details in Chapter 4, SAN
Volume Controller initial configuration on page 117. You can obtain further details about the
IBM System Storage Productivity Center in IBM System Storage Productivity Center Users
Guide Version 1 Release 5, SC27-2336. This guide is available at the following link:
http://pic.dhe.ibm.com/infocenter/tivihelp/v59r1/topic/com.ibm.sspc_v15.doc/fqz0_s
spc_usersguide_v15.pdf
More information is included in the IBM System Storage Productivity Center Introduction and
Planning Guide, SC23-8824. This guide is available at the following link:
http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/topic/com.ibm.sspc_v1401.do
c/fqz0_sspc_IPG_v1401.pdf
Figure 2-13 SAN Volume Controller management overview
2.9 User authentication
The SAN Volume Controller provides two methods of user authentication to control access to
the web-based management interface (GUI) and the CLI:
Local authentication is performed within the SAN Volume Controller system:
The available local CLI authentication methods are Secure Shell (ssh) key
authentication and, newly introduced with release SAN Volume Controller 6.3,
username and password. We explain the CLI setup in more detail in 4.4, Secure Shell
overview on page 138.
Local GUI authentication is done via user name and password. We discuss the GUI
setup in 4.3, Configuring the GUI on page 125.
Remote authentication means that the validation of a user's permission to access the SAN Volume Controller's management CLI/GUI is performed at a remote authentication server. That is, except for the superuser account, there is no need to administer local user accounts on the SAN Volume Controller. You can use an existing user management system in your environment to control SAN Volume Controller user access, implementing an SSO for the SAN Volume Controller.
2.9.1 Remote authentication via LDAP
Until SAN Volume Controller 6.2, the only supported remote authentication service was the
Tivoli Embedded Security Services, which is part of the Tivoli Integrated Portal. Beginning
with SAN Volume Controller 6.3, remote authentication via native LDAP was introduced. The
supported types of LDAP servers are IBM Tivoli Directory Server, Microsoft Active Directory
(MS AD), and OpenLDAP, for example, running on a Linux system.
Users authenticated by an LDAP server can log in to the SAN Volume Controller web-based
GUI and the CLI. Unlike remote authentication via Tivoli Integrated Portal, users do not need
to be configured locally for CLI access. An SSH key is not required for CLI login in this
scenario either. However, locally administered users can co-exist with remote authentication
enabled. The default administrative user superuser must be a local user. It cannot be deleted
or manipulated, except for the password and SSH key.
You can define multiple LDAP servers, if available, for availability reasons. Authentication requests are processed by the LDAP servers that are marked as preferred, unless the connections fail or a user is not found. Requests are distributed across all preferred servers in a round-robin fashion for load balancing.
A user that is authenticated remotely by an LDAP server is granted permissions on the SAN
Volume Controller according to the role that is assigned to the group of which it is a member.
That is, any SAN Volume Controller user group with its assigned role, for example,
CopyOperator, must exist with an identical name on the SAN Volume Controller system and
on the LDAP server, if users in that role are to be authenticated remotely.
You must follow these guidelines:
- Either native LDAP authentication or Tivoli Integrated Portal can be selected, but not both.
- If more than one LDAP server is defined, they all must be of the same type, for example, MS AD.
- The SAN Volume Controller user group must be enabled for remote authentication.
- The user group name must be identical in the SAN Volume Controller user group management and on the LDAP server, and it is case-sensitive.
- The LDAP server must transmit a group membership attribute for the user. The default attribute name for MS AD and OpenLDAP is memberOf. The default attribute name for Tivoli Directory Server is ibm-allGroups. For OpenLDAP implementations, it might be necessary to configure the memberOf overlay if it is not in place.
In the following example, we demonstrate LDAP user authentication using a Microsoft
Windows Server 2008 R2 domain controller acting as an LDAP server.
Follow these steps to configure remote authentication:
1. Configure the SAN Volume Controller for remote authentication by selecting Settings → Directory Services, as shown in Figure 2-14 on page 48.
Figure 2-14 Configure Remote Authentication
2. Click Configure Remote Authentication.
3. Select the authentication type, as shown in Figure 2-15. Select LDAP and click Next.
Figure 2-15 Select authentication type
4. You must configure several parameters in the Configure Remote Authentication window,
as shown in Figure 2-16 on page 49 and Figure 2-17 on page 50:
For LDAP Type, select Microsoft Active Directory. (For an OpenLDAP server, select
Other for the type of LDAP server.)
For Security, choose None. (If your LDAP server requires a secure connection, select Transport Layer Security; the LDAP server's certificate will be configured later.)
Click Advanced Settings to expand the bottom part of the window. Leave the User Name and Password fields empty if your LDAP server supports anonymous bind. For our MS AD server, we enter the credentials of an existing user on the LDAP server with permission to query the LDAP directory. You can enter this information either in the format of an email address, for example, [email protected], or in the distinguished name format, for example, cn=Administrator,cn=users,dc=itso,dc=corp. Note the common name portion cn=users for MS AD servers.
If your LDAP server uses attributes that differ from the predefined attributes, you can edit them here. You do not need to edit the attributes when MS AD is used as the LDAP service.
Figure 2-16 Configure Remote Authentication
Figure 2-17 Configure Remote Authentication Advanced Settings
5. Figure 2-18 shows the Configure Remote Authentication, where we configure the LDAP
server details:
Enter the IP Address of at least one LDAP server.
Even though it is marked as optional, it might be required to enter a Base DN in the
distinguished name format, which defines the starting point in the directory at which to
search for users, for example, dc=itso,dc=corp.
You can add additional LDAP servers by clicking the green plus (+) icon.
Check Preferred if you want to use preferred LDAP servers.
Click Finish to save the settings.
Figure 2-18 LDAP Servers configuration
Now that we have enabled and configured the SAN Volume Controller for Remote
Authentication, we work with the user groups. For remote authentication through LDAP, no
local SAN Volume Controller users have to be maintained, but the user groups have to be set
up properly. The existing built-in SAN Volume Controller user groups can be used, as well as
groups created in the SAN Volume Controller user management. However, using self-defined
groups might be advisable to avoid SAN Volume Controller default groups interfering with
already existing group names on the LDAP server. Any user group, whether built-in or
self-defined, has to be enabled for remote authentication.
Follow these steps to create a new user group:
1. Select Access → Users → New User Group. As shown in Figure 2-19, we create a new user group.
Figure 2-19 Create a new user group
2. In the window that is shown in Figure 2-19:
Enter a meaningful Group Name, for example, SVC_LDAP_CopyOperator, according to its intended role.
Select the desired Role by clicking Copy Operator.
To mark LDAP for Remote Authentication, select Enable for this group, and click Create.
You can modify these settings in a group's properties at any time. A CLI alternative is sketched after these steps.
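If you prefer the CLI, an equivalent group might be created with a command along these lines; the group name is an example, and the -remote flag for enabling remote authentication is an assumption to verify against the CLI reference for your code level:
svctask mkusergrp -name SVC_LDAP_CopyOperator -role CopyOperator -remote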
Next, we create a group with exactly the same name on the LDAP server, that is, in the Active
Directory Domain:
1. On the Domain Controller, launch the Active Directory Users and Computers
management console, and navigate in your domain structure to the entity containing the
user groups. Click the Create new user group icon that is highlighted in Figure 2-20 to
create a new group.
Figure 2-20 Create new user group on the LDAP server
2. Enter the exact same name (it is case-sensitive), SVC_LDAP_CopyOperator, in the Group
Name field, as shown in Figure 2-21. Select the correct Group scope for your environment
and select Security for Group type. Click OK.
Figure 2-21 Edit the group properties
3. Edit the user's properties, so that the user will be able to log in to the SAN Volume
Controller. Make the user a Member of the appropriate user group for the intended SAN
Volume Controller role, as shown in Figure 2-22, and click OK to save and apply the
settings.
Figure 2-22 Make the user a member of the appropriate group
At this point, we are ready for the authentication of the users for the SAN Volume Controller
through the remote server. To ensure that everything works properly, we will run a few tests to
verify the communication between the SAN Volume Controller and the configured LDAP
service:
1. Select Settings → Directory Services, and then select Global Actions → Test LDAP Connections, as shown in Figure 2-23.
Figure 2-23 LDAP Connections Test
2. Figure 2-24 shows the result of a successful connection test.
Figure 2-24 Successful LDAP connection test
3. Next, we test a real user authentication attempt. Select Settings → Directory Services. Select Global Actions → Test LDAP Authentication, as shown in Figure 2-25 on page 54.
Figure 2-25 Test LDAP Authentication
4. As shown in Figure 2-26, enter the User Credentials of a user that was defined on the
LDAP server, and click Test.
Figure 2-26 LDAP Authentication Test
5. The message, CMMVC7148I Task completed successfully, will display after a successful
test.
Both the LDAP connection test and the LDAP authentication test must complete successfully to ensure that LDAP authentication will work properly. If an error message points to user authentication problems during the LDAP authentication test, it might help to analyze the LDAP server's response outside of the SAN Volume Controller. You can use any native LDAP query tool, for example, the free LDAPBrowser tool, which is available at this website:
http://www.ldapbrowser.com/
For a pure MS AD environment, you can use the Microsoft Sysinternals ADExplorer tool,
which is available at this website:
http://technet.microsoft.com/en-us/sysinternals/bb963907
Assuming that the LDAP connection and the authentication test succeeded, users are able to
log in to the SAN Volume Controller GUI and CLI using their network credentials, for example,
their Microsoft Windows domain user name and password.
Figure 2-27 shows the Web GUI login window with the Windows domain credentials entered.
A user can log in with either the short name (that is, without the domain component) or with
the fully qualified user name in the form of an email address.
Figure 2-27 GUI login
After a successful login, the user name is displayed in a welcome message at the top of the
window, as shown in Figure 2-28 on page 55.
Figure 2-28 Welcome message after successful login
CLI login is possible with either the short user name or the fully qualified user name. The
lscurrentuser CLI command displays the user name and role of the currently logged-in
user.
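For example, a user can check the name and role that the system assigned after login; the
output shown here is only illustrative:
svcinfo lscurrentuser
name jdoe
role CopyOperator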
2.9.2 SAN Volume Controller user names
User names must be unique and can contain up to 256 printable ASCII characters:
- Forbidden characters are the single quotation mark ('), colon (:), percent symbol (%),
asterisk (*), comma (,), and double quotation marks (").
- A user name cannot begin or end with a blank.
Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden
characters, but passwords cannot begin or end with blanks.
2.9.3 SAN Volume Controller superuser
A special local user that is called the superuser always exists on every system. It cannot be
deleted. Its password is set by the user during clustered system initialization. The superuser
password can be reset from the node's front panel, and this reset function can be disabled.
However, disabling the reset function makes the system inaccessible if all the users forget
their passwords or lose their SSH keys.
To register an SSH key for the superuser to provide command-line access, select Service
Assistant → Configure CLI Access to assign a temporary key. However, the key will be lost
during a node restart. The permanent way to add the key is through the normal GUI, that is,
select User Management → superuser → Properties to register the SSH key for the
superuser. The superuser is always a member of user group 0, which has the most privileged
role within the SAN Volume Controller.
2.9.4 SAN Volume Controller Service Assistant Tool
SAN Volume Controller has a tool for performing service tasks on the system. In addition to
performing various service tasks from the front panel, you can also service a node through an
Ethernet connection using a web browser to access a GUI interface. The function is called the
Service Assistant Tool. It requires you to enter the superuser password during login.
2.9.5 SAN Volume Controller roles and user groups
Each user group is associated with a single role. The role for a user group cannot be
changed, but additional new user groups (with one of the defined roles) can be created.
User groups are used for local and remote authentication. Because the SAN Volume
Controller knows of five roles, by default, five user groups are defined in a SAN Volume
Controller system. See Table 2-4.
Table 2-4 User groups
The access rights for a user belonging to a specific user group are defined by the role that is
assigned to the user group. It is the role that defines what a user can or cannot do on a SAN
Volume Controller system.
Table 2-5 on page 57 shows the roles ordered (from the top) by the least privileged Monitor
role down to the most privileged SecurityAdmin role. The NasSystem role has no special user
group.
User group ID    User group       Role
0                SecurityAdmin    SecurityAdmin
1                Administrator    Administrator
2                CopyOperator     CopyOperator
3                Service          Service
4                Monitor          Monitor
Table 2-5 Commands permitted for each role
2.9.6 SAN Volume Controller local authentication
Local users are users that are managed entirely on the clustered system without the
intervention of a remote authentication service. Local users must have a password or an SSH
public key, or both. Key authentication is attempted first with the password as a fallback. The
password and the SSH key are used for command-line or file transfer (SecureCopy) access.
For GUI access, only the password is used.
A local user always belongs to only one user group. Figure 2-29 on page 58 shows an
overview of local authentication within the SAN Volume Controller.
Role            Commands allowed by role
Monitor         All svcinfo or informational commands, plus svctask finderr, dumperrlog,
                dumpinternallog, chcurrentuser, ping, svcconfig backup, and svqueryclock
Service         All commands allowed for the Monitor role, plus applysoftware, setlocale,
                addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk,
                clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats,
                and setsystemtime
CopyOperator    All commands allowed for the Monitor role, plus prestartfcconsistgrp,
                startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap,
                startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp,
                switchrcconsistgrp, chrcconsistgrp, startrcrelationship,
                stoprcrelationship, switchrcrelationship, chrcrelationship, and
                chpartnership
Administrator   All commands, except chauthservice, mkuser, rmuser, chuser, mkusergrp,
                rmusergrp, chusergrp, and setpwdreset
SecurityAdmin   All commands, except those commands that are allowed by the NasSystem
                role
NasSystem       svctask addmember, activatemember, and expelmember; create and delete
                filesystem VDisks
Local users: Be aware that local users are created for each SAN Volume Controller
system. Each user has a name, which must be unique across all users in one system.
If you want to allow access for a user on multiple systems, you have to define the user in
each system with the same name and the same privileges.
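For illustration, a local user might be created on each system with the same name and user
group by using the mkuser command; the user name and password here are placeholders:
svctask mkuser -name itsoadmin -usergrp Administrator -password Passw0rd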
Figure 2-29 Simplified overview of SAN Volume Controller local authentication
2.9.7 SAN Volume Controller remote authentication and single sign-on
You can configure a SAN Volume Controller system to use a remote authentication service.
Remote users are users that are managed by the remote authentication service and require
command-line or file-transfer access.
Remote users have to be defined in the SAN Volume Controller system only if command-line
access is required; no local user is required for GUI-only remote access. For command-line
access, the user's remote authentication flag must be set and a password must be defined
locally for the user.
Remote users cannot belong to any user group because the remote authentication service,
for example, an LDAP directory server, such as IBM Tivoli Directory Server or Microsoft
Active Directory, delivers the user group information.
Figure 2-30 on page 59 gives an overview of SAN Volume Controller remote authentication.
Figure 2-30 Simplified overview of SAN Volume Controller remote authentication
The authentication service that is supported by the SAN Volume Controller is the Tivoli
Embedded Security Services server component level 6.2.
The Tivoli Embedded Security Services server provides the following key features:
Tivoli Embedded Security Services isolates the SAN Volume Controller from the actual
directory protocol in use, which means that the SAN Volume Controller communicates
only with Tivoli Embedded Security Services to get its authentication information. The type
of protocol that is used to access the central directory and the kind of directory system
that is used are transparent to the SAN Volume Controller.
Tivoli Embedded Security Services provides a secure token facility that is used to enable
single sign-on (SSO). SSO means that users do not have to log in multiple times when
using what appears to them to be a single system. SSO is used within Tivoli Productivity
Center. When SAN Volume Controller access is launched from within Tivoli Productivity
Center, the user will not have to log in to the SAN Volume Controller, because the user has
already logged in to Tivoli Productivity Center.
Using a remote authentication service
Follow these steps to use SAN Volume Controller with a remote authentication service:
1. Configure the system with the location of the remote authentication server:
Change settings using the following command:
svctask chauthservice
View current settings using the following command:
svcinfo lscluster
SAN Volume Controller supports either an HTTP or HTTPS connection to the Tivoli
Embedded Security Services server. If the HTTP option is used, the user and password
information is transmitted in clear text over the IP network.
2. Configure user groups on the system matching those user groups that are used by the
authentication service. For each group of interest that is known to the authentication
service, a SAN Volume Controller user group must exist with the same name and the
remote setting enabled.
For example, you can have a group called sysadmins, whose members require the SAN
Volume Controller Administrator role. Configure this group using the following command:
svctask mkusergrp -name sysadmins -remote -role Administrator
If none of a user's groups match any of the SAN Volume Controller user groups, the user
is not permitted to access the system.
3. Configure users that do not require SSH access. Any SAN Volume Controller users that
will use the remote authentication service and do not require SSH access must be deleted
from the system. The superuser cannot be deleted; it is a local user and cannot use the
remote authentication service.
4. Configure users that require SSH access. Any SAN Volume Controller users that will use
the remote authentication service and require SSH access must have their remote setting
enabled and the same password set on the system and the authentication service. The
remote setting instructs the SAN Volume Controller to consult the authentication service
for group information after the SSH key authentication step to determine the user's role.
The need to configure the user's password on the system in addition to the authentication
service is due to a limitation in the Tivoli Embedded Security Services server software.
5. Configure the system time. For correct operation, both the SAN Volume Controller system
and the system running the Tivoli Embedded Security Services server must have the
exact same view of the current time. The easiest way is to have them both use the same
Network Time Protocol (NTP) server.
Failure to follow this step can lead to poor interactive performance of the SAN Volume
Controller user interface or incorrect user-role assignments.
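The following CLI sketch illustrates steps 4 and 5. The -remote parameter of the chuser
command and the -ntpip parameter of the chsystem command, as well as the user name and
NTP server address, are assumptions for illustration only; verify them against the CLI
reference for your code level:
svctask chuser -remote yes itsouser
svctask chsystem -ntpip 10.11.12.5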
Also, Tivoli Storage Productivity Center uses the Tivoli Integrated Portal infrastructure and its
underlying IBM WebSphere Application Server capabilities to make use of an LDAP registry
and enable SSO.
You can obtain more information about implementing SSO within Tivoli Storage Productivity
Center 4.1 in the chapter about LDAP authentication support and SSO in IBM Tivoli Storage
Productivity Center V4.1 Release Guide, SG24-7725, which is available at this website:
http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open
2.10 SAN Volume Controller hardware overview
The hardware nodes, as defined in the underlying COMPASS architecture, are based on Intel
processors with standard PCI-X adapters to interface with the SAN and the LAN.
The new SAN Volume Controller 2145-CG8 Storage Engine has the following key hardware
features:
- Intel Xeon 5600 Series six-core processor; a second processor card is optional
- 24 GB memory base per node, scalable up to 192 GB in total; optionally, you can double
the memory with the second processor card
- Four 2/4/8 Gbps auto-sensing FC ports; up to four additional FC ports are optional
- Up to four SSDs, enabling scale-out high-performance SSD support
- Optional 10 Gbps iSCSI/FCoE dual-port card
- Two redundant power supplies
- A 19-inch rack-mounted enclosure
- IBM Systems Director Active Energy Manager enablement
- One U high
The 2145-CG8 nodes can be integrated easily within existing SAN Volume Controller
clustered systems. The nodes can be intermixed in pairs within existing SAN Volume
Controller systems. Mixing node types in a system results in volume performance
characteristics that depend on the node type in the volume's I/O Group. The standard
nondisruptive clustered system upgrade process can be used to replace older engines with
new 2145-CG8 engines. See IBM SAN Volume Controller Software Installation and
Configuration Guide, GC27-2286, for more information about this topic.
See the following link for integration into existing clustered systems, compatibility, and
interoperability with installed nodes and uninterruptible power supplies:
http://www.ibm.com/support/docview.wss?uid=ssg1S1002999
The SAN Volume Controller 2145-CG8 ships with preloaded V7.2 software.
Figure 2-31 shows the front-side view of the SAN Volume Controller 2145-CG8 node.
Figure 2-31 SAN Volume Controller 2145-CG8 storage engine
Important: Since SAN Volume Controller V6.2 and with the 2145-CG8 hardware, the IBM
System Storage SAN Volume Controller Storage Engine offers 10 Gigabit Ethernet
connectivity. For more information about this topic, see this website:
http://www.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=
gpateam&supplier=897&letternum=ENUS111-083
This solution includes a Common Information Model (CIM) Agent to enable unified storage
management based on open standards for units that comply with CIM Agent standards.
Remember that several SAN Volume Controller features, such as iSCSI, are software
features and are therefore available on all node types running SAN Volume Controller V5.1 or
later.
2.10.1 Fibre Channel interfaces
The IBM SAN Volume Controller provides link speeds of 2/4/8 Gbps on SAN Volume
Controller 2145-CG8 nodes. The nodes come with a 4-port HBA. The FC ports on these node
types auto-negotiate the link speed that is used with the FC switch. The ports normally
operate at the maximum speed that is supported by both the SAN Volume Controller port and
the switch. However, if a large number of link errors occur, the ports might operate at a lower
speed than what is supported.
The actual port speed for each of the four ports can be displayed through the GUI, the CLI,
the node's front panel, and also by light-emitting diodes (LEDs) that are placed at the rear of
the node. For mirroring, an optional 4-port FC card is available.
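From the CLI, one quick way to check the negotiated speed of each port is the lsportfc
command; a minimal sketch, assuming the command is available at your code level:
svcinfo lsportfc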
For details, consult the node-specific SAN Volume Controller hardware installation guides:
IBM System Storage SAN Volume Controller Model 2145-CG8 Hardware Installation
Guide, GC27-3923
The SAN Volume Controller imposes no limit on the FC optical distance between SAN
Volume Controller nodes and host servers. FC standards, along with small form-factor
pluggable (SFP) optics capabilities and cable type, dictate the maximum FC distances that
are supported.
If longwave SFPs are used in the SAN Volume Controller nodes, the longest supported FC
link between the SAN Volume Controller and switch is 40 km (24.85 miles).
Table 2-6 shows the cable length that is supported with shortwave SFPs.
Table 2-6 Overview of supported cable length
Table 2-7 shows the applicable rules relating to the number of ISL hops allowed in a SAN
fabric between SAN Volume Controller nodes or the system.
Table 2-7 Number of supported ISL hops
FC speed            OM1 (M6)              OM2 (M5)             OM3 (M5E)            OM4 (M5F)
                    standard 62.5/125 µm  standard 50/125 µm   optimized 50/125 µm  optimized 50/125 µm
2 Gbps FC           150 m (492.1 ft)      300 m (984.3 ft)     500 m (1640.5 ft)    n/a
4 Gbps FC           70 m (229.7 ft)       150 m (492.1 ft)     380 m (1246.9 ft)    400 m (1312.34 ft)
8 Gbps FC limiting  20 m (68.10 ft)       50 m (164 ft)        150 m (492.1 ft)     190 m (623.36 ft)
16 Gbps FC          15 m (49.21 ft)       35 m (114.82 ft)     100 m (382.08 ft)    125 m (410.10 ft)
Between nodes in an I/O Group: 0 (connect to the same switch)
Between nodes in separate I/O Groups: 0 (connect to the same switch)
Between nodes and the disk subsystem: 1 (recommended: 0, connect to the same switch)
Between nodes and the host server: Maximum 3
2.10.2 LAN interfaces
The 2145-CG8 node has two 1 Gbps LAN ports available. Also, this node supports 10 Gbps
Ethernet ports that can only be used for iSCSI I/O.
The system configuration node can be accessed on either eth0 or eth1. The system can have
two IPv4 and two IPv6 addresses that are used for configuration purposes (CLI or CIM object
manager (CIMOM) access). The clustered system can therefore be managed by SSH clients
or GUIs on System Storage Productivity Centers on separate physical IP networks. This
capability provides redundancy in the event of a failure of one of these IP networks.
Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each
SAN Volume Controller node port. These IP addresses are independent of the system
configuration IP addresses. See Figure 2-11 on page 34 for an IP address overview.
2.10.3 FCoE interfaces
Version 6.4 also includes Fibre Channel over Ethernet (FCoE) support. FCoE is still in its
infancy, but 16 Gbit native Fibre Channel might be the last speed increase in mass production
use. After that, SANs and Ethernet networks will finally converge with 40 Gbit and beyond.
The various FCF requirements on the CEE FCoE infrastructure mean that FCoE host
attachment is supported. Disk and fabric attach are still via native Fibre Channel. The FCoE
support is, like most SAN Volume Controller features, a software upgrade. If you have SAN
Volume Controller with the 10 Gbit features, FCoE support is added with an upgrade to
version 6.4. SAN Volume Controller CG8 nodes can be upgraded (non-disruptively) to include
10 Gbit FCoE or 10 Gbit iSCSI support. The same 10 Gbit ports are iSCSI and FCoE
capable. For performance, the FCoE ports compare (with regard to transport speed) with the
native Fibre Channel ports (8 Gbit versus 10 Gbit), and recent enhancements to the iSCSI
support mean that there are similar performance levels with iSCSI and Fibre Channel.
2.11 Solid-state drives
Solid-state drives (SSDs), or more specifically, single-level cell (SLC) or multi-level cell
(MLC) NAND flash-based disks (for the sake of simplicity, they are referred to as SSDs
elsewhere in this book), can be used to overcome a growing problem that is known as the
memory or storage bottleneck.
2.11.1 Storage bottleneck problem
The memory or storage bottleneck describes the steadily growing gap between the time
required for a CPU to access data located in its cache/memory (typically in nanoseconds)
and data located on external storage (typically in milliseconds).
Although CPUs and cache/memory devices continually improve their performance, in
general, mechanical disks that are used as external storage do not. Figure 2-32 illustrates
these access time differences.
Figure 2-32 The memory/storage bottleneck
The actual times that are shown are not that important, but note the dramatic difference
between accessing data that is located in cache and data that is located on external disk.
We have added a second scale to Figure 2-32, which gives you an idea of how long it takes to
access the data in a scenario where a single CPU cycle takes 1 second. This scale gives you
an idea of the importance of future storage technologies closing or reducing the gap between
access times for data stored in cache/memory versus access times for data stored on an
external medium.
Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown a
remarkable performance regarding capacity growth, form factor/size reduction, price
decrease ($/GB), and reliability.
However, the number of I/Os that a disk can handle and the response time that it takes to
process a single I/O have not improved at the same rate, although they have certainly
improved. In actual environments, we can expect from today's enterprise-class FC or
serial-attached SCSI (SAS) disks up to 200 IOPS per disk with an average response time (a
latency) of approximately 6 ms per I/O.
To summarize, today's rotating disks continue to advance in capacity (several TBs), form
factor/footprint (8.89 cm (3.5 inches), 6.35 cm (2.5 inches), and 4.57 cm (1.8 inches)), and
price ($/GB), but they are not getting much faster.
The limiting factor is the number of revolutions per minute (RPM) that a disk can perform
(approximately 15,000). This factor defines the time that is required to access a specific data
block on a rotating device. Small improvements will likely occur in the future, but a big step,
such as doubling the RPM, if technically even possible, inevitably has an associated increase
in power consumption and price that will be an inhibitor.
2.11.2 Solid-state drive solution
SSDs can provide a solution for this dilemma. No rotating parts mean improved robustness
and lower power consumption. A remarkable improvement in I/O performance and a massive
reduction in the average I/O response times (latency) are the compelling reasons to use
SSDs in today's storage subsystems.
Enterprise-class SSDs typically deliver 85,000 read and 36,000 write IOPS with latencies of
typically 50 µs for reads and 800 µs for writes. Their form factors (6.35 cm (2.5 inches)/8.89
cm (3.5 inches)) and their interfaces (FC/SAS/Serial Advanced Technology Attachment
(SATA)) make them easy to integrate into existing disk shelves.
2.11.3 Solid-state drive market
The SSD storage market is rapidly evolving. The key differentiator among today's SSD
products that are available on the market is not the storage medium, but the logic in the disk
internal controllers. The top priorities in today's controller development are optimally handling
what is referred to as wear-out leveling, which defines the controller's capability to ensure a
device's durability, and closing the remarkable gap between read and write I/O performance.
Today's SSD technology is only a first step into the world of high-performance persistent
semiconductor storage. A group of the approximately 10 most promising technologies are
collectively referred to as Storage Class Memory (SCM).
Storage Class Memory
SCM promises a massive improvement in performance (IOPS), areal density, cost, and
energy efficiency compared to today's SSD technology. IBM Research is actively engaged in
these new technologies.
You can obtain details of nanoscale devices at this website:
http://researcher.watson.ibm.com/researcher/view_project.php?id=4284
You can obtain details of Storage Class Memory at this website:
http://tinyurl.com/plk7as
You can read a comprehensive and worthwhile overview of the SSD technology in a subset of
the well-known Spring 2010 and 2009 SNIA Technical Tutorials, which are available on the
SNIA website:
http://www.snia.org/education/tutorials/2010/spring#solid
When these technologies become a reality, they will fundamentally change the architecture of
today's storage infrastructures.
2.11.4 Solid-state drives and SAN Volume Controller
The IBM SAN Volume Controller supports using either internal or external SSDs.
Internal SSD
Certain SAN Volume Controller models support 6.35 cm (2.5-inch) SSDs as internal storage. A
maximum of four drives can be installed per node, and up to 32 drives can be installed in a
clustered system. These drives can be used to create RAID managed disks that, in turn, can
be used to create volumes.
Internal SSDs can be configured in the following two RAID levels (see the sketch after this
list):
- RAID 1/RAID 10: In this configuration, one half of the mirror is in each node of the I/O
Group, providing redundancy in case of a node failure.
- RAID 0: In this configuration, all the drives are assigned to the same node. This
configuration is intended to be used with VDisk Mirroring, because no redundancy is
provided in case of a node failure.
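As an illustrative sketch only, an internal SSD RAID 1 array might be created in an existing
storage pool with the mkarray command. The RAID level syntax, drive IDs, and pool name
shown here are assumptions; check the CLI reference for your code level:
svctask mkarray -level raid1 -drive 0:1 SSD_Pool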
External SSD
The SAN Volume Controller is able to manage SSDs in externally attached storage
controllers or enclosures.
The SSDs are configured as an array with a LUN and are presented to the SAN Volume
Controller as a normal MDisk. The solid-state MDisk tier then must be set with the chmdisk
-tier generic_ssd command or through the GUI.
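For example, the tier of such an MDisk could be set and then verified as follows; the MDisk
name is a placeholder:
svctask chmdisk -tier generic_ssd mdisk5
svcinfo lsmdisk mdisk5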
The SSD MDisks can then be placed into a single SSD tier storage pool. High-workload
volumes can be manually selected and placed into the pool to gain the performance benefits
of SSDs.
For a more effective use of SSDs, place the SSD MDisks into a multitiered storage pool
combined with HDD MDisks (generic_hdd tier). Then, with the Easy Tier function turned on,
high-workload extents are automatically detected and migrated onto the solid-state MDisks.
2.12 Easy Tier
Determining the amount of data activity in a SAN Volume Controller extent and when to move
the extent to an appropriate storage performance tier is usually too complex a task to manage
manually.
Easy Tier is a performance optimization function that overcomes this issue. It automatically
migrates, or moves, extents belonging to a volume from one MDisk storage tier to another
MDisk storage tier.
Easy Tier monitors the host I/O activity and latency on the extents of all volumes with the
Easy Tier function turned on in a multitier storage pool over a 24-hour period. It then creates
an extent migration plan based on this activity, and it will dynamically move high activity or hot
extents to a higher tier within the storage pool. It will also move extents whose activity has
dropped off or cooled from the high-tier MDisks back to a lower-tiered MDisk. Because this
migration works at the extent level and not at the volume level, it is often referred to as
sub-LUN migration.
The Easy Tier function can be turned on or off at the storage pool and volume level.
2.12.1 Evaluation mode
To experience the potential benefits of using Easy Tier in your environment before actually
installing expensive SSDs, you can turn on the Easy Tier function for a single-level storage
pool. Next, turn on the Easy Tier function for the volumes within that pool. Easy Tier will then
start monitoring activity on the volume extents in the pool.
Easy Tier will create a migration report every 24 hours on the number of extents that can be
moved if the pool were a multitiered storage pool. So, even though Easy Tier extent migration
is not possible within a single-tier pool, the Easy Tier statistical measurement function is
available.
The usage statistics file can be offloaded from the SAN Volume Controller configuration node
using the GUI (Settings → Support). Then, you can use the Storage Advisor Tool (STAT) to
create the statistics report. A web browser is used to view the output of the Storage Advisor
Tool. Contact your IBM representative or IBM Business Partner for more information about
the Storage Advisor Tool.
2.12.2 Automatic data placement mode
For Easy Tier to provide automatic extent migration, you need to have a storage pool that
contains MDisks with separate disk tiers, thus a multitiered storage pool. Then, you need to
set the -easytier parameter to on or auto for the storage pool and on for the volumes. The
volumes must be either striped or mirrored for Easy Tier to migrate extents. See Chapter 7,
Advanced features for storage efficiency on page 349 for more details about Easy Tier
operation and management.
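As a sketch, assuming the -easytier parameter that is described above and placeholder
object names, Easy Tier might be enabled on a multitiered storage pool and one of its
volumes as follows:
svctask chmdiskgrp -easytier on Multi_Tier_Pool
svctask chvdisk -easytier on Volume01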
2.13 What is new with SAN Volume Controller 7.2
This section highlights the new features that SAN Volume Controller 7.2 offers.
2.13.1 SAN Volume Controller 7.2 supported hardware list, device driver, and
firmware levels
With the SAN Volume Controller 7.2 release, as in every release, IBM offers functional
enhancements and new hardware that can be integrated into existing or new SAN Volume
Controller systems and also interoperability enhancements or new support for servers, SAN
switches, and disk subsystems. See the most current information at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
2.13.2 SAN Volume Controller 7.2.0 new features
The following list summarizes the new features:
- IP replication with integrated Bridgeworks SANSlide network optimization: This
technology provides a lower-cost option with up to 3x better network utilization.
- Real-time Compression enhancements: These enhancements provide up to 3x better
throughput for VMware vMotion and 35% reduced CPU utilization.
- IBM Storage Mobile Dashboard: Monitoring and health check for the Storwize family
systems from mobile devices.
- VMware 5.5 and VASA block support: This support maintains currency with the latest
VMware capabilities.
- Enhanced stretched cluster for SAN Volume Controller: This technology optimizes
inter-site networking and enables planned site failover.
IP replication starting with SAN Volume Controller Version 7.2
One of the most important new functions in the Storwize family is IP replication, which enables
the use of lower-cost Ethernet connections for remote mirroring. The capability is available
as a chargeable option on all Storwize family systems. The new function is transparent to
servers and applications (they do not know it is being used) in the same way that
traditional FC-based mirroring is. All remote mirroring modes (Metro Mirror, Global Mirror,
and Global Mirror with changed volumes) are supported. Configuration of the system is
straightforward: Storwize family systems normally find each other on the network and can
be selected from the GUI. IP replication includes Bridgeworks SANSlide network
optimization technology and is available at no additional charge (remote mirror is a
chargeable option but the price does not change with IP replication). Existing remote
mirror users have access to the new function at no additional charge.
Real-time Compression enhancements
Storwize family Real-time Compression is also being significantly enhanced. A new
compression algorithm delivers significant performance improvements at a minimal cost in
terms of compression (2-5 percentage points reduced compression). Once V7.2 software
is installed, this new algorithm is used automatically for all new compressed volumes and
for any new data that is written to existing compressed volumes. No user tasks are
required to take advantage of the new algorithm. The new algorithm delivers up to 3x
throughput when performing sequential writes, which is especially important for VMware
vMotion. The new algorithm also uses 35% less CPU for random workloads, which
enables compression to be used with more and more demanding workloads. In version
7.1, we enabled the use of Easy Tier with Real-time Compression, to see both improved
performance (with Easy Tier) and also greater efficiency. In this version, the method
Real-time Compression uses to position data is optimized so that hot and cold data is
kept segregated, which helps improve the efficiency of Easy Tier even more.
IBM Storage Mobile Dashboard
A new storage mobile dashboard application is available to monitor the performance and
health of Storwize family systems. The application works with all Storwize family systems
and one instance of the application can manage multiple systems. At this point, the
application is display only. It cannot alter system configurations. At the bottom, you can
see used capacity. The bar changes color as capacity usage increases. Above that, you
can see a graph of either recent latency (on the left) or IOPS (on the right). You can also
view alerts for the systems and there is a filter capability to show only specific alert types.
The application is available today for iOS devices from the Apple App Store. If you do not
have a Storwize family system, you can run it in demo mode to see what it looks like.
Search for IBM Storage Mobile Dashboard.
VMware 5.5 and VASA block support
The Storwize Family 7.2 software enables users to get more capability out of their VMware
environments by being a provider for the vSphere API for Storage Awareness.
Enhanced stretched cluster for SAN Volume Controller: Optimizes inter-site
networking and enables planned site failover
Before this release, stretched cluster configurations did not provide manual failover
capability, and data being sent across a long distance link had the potential to be sent
twice. The addition of site awareness in version 7.2 routes I/O traffic between SAN
Volume Controller nodes and storage controllers to optimize the data flow, and it polices
I/O traffic during a failure condition to allow for a manual cluster invocation to ensure
consistency. The use of stretched cluster continues to follow all the same hardware
installation guidelines as previously announced and found in the product documentation.
Use of enhanced stretched cluster is optional, and existing stretched cluster configurations
will continue to be supported. These configurations can be converted to the new
enhanced version with a few simple commands.
2.14 Useful SAN Volume Controller web links
The SAN Volume Controller Support Page is at the following website:
http://www.ibm.com/support/entry/portal/product/system_storage/storage_software/st
orage_virtualization/san_volume_controller_%282145%29?productContext=-1948454624
The SAN Volume Controller Home Page is at the following website:
http://www.ibm.com/systems/storage/software/virtualization/svc/
The SAN Volume Controller Interoperability Page is at the following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
SAN Volume Controller online documentation is at the following website:
http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp
IBM Redbooks publications about the SAN Volume Controller are available at the following
website:
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
IBM developerWorks is the premier web-based technical resource and professional network
for IT practitioners:
https://www.ibm.com/developerworks/community/blogs/storagevirtualization/tags/svc?
lang=en
Chapter 3. Planning and configuration
In this chapter, we describe the steps that are required when you plan the installation of an
IBM System Storage SAN Volume Controller in your storage network. We look at the
implications for your storage network and also discuss performance considerations.
3.1 General planning rules
To achieve the most benefit from the SAN Volume Controller, preinstallation planning must
include several important steps. These steps will ensure that the SAN Volume Controller
provides the best possible performance, reliability, and ease of management for your
application needs. Proper configuration also helps minimize downtime by avoiding changes to
the SAN Volume Controller and the storage area network (SAN) environment to meet future
growth needs.
Follow these steps when planning for the SAN Volume Controller:
1. Collect and document the number of hosts (application servers) to attach to the SAN
Volume Controller, the traffic profile activity (read or write, sequential or random), and the
performance requirements (I/O per second (IOPS)).
2. Collect and document the storage requirements and capacities:
The total back-end storage already present in the environment to be provisioned on the
SAN Volume Controller
The total back-end new storage to be provisioned on the SAN Volume Controller
The required virtual storage capacity that is used as a fully managed virtual disk
(volume) and used as a Space-Efficient volume
Important: At the time of writing this book, the statements we make are correct, but they
might change over time. Always verify any statements that have been made in this book
with the SAN Volume Controller supported hardware list, device driver, firmware and
recommended software levels at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
Note: Check the Pre-sale Technical and Delivery Assessment (TDA) document at this
website:
https://www.ibm.com/partnerworld/wps/servlet/mem/ContentHandler/salib_SA572/lc=
en_ALL_ZZ
A pre-sale TDA should be conducted before submitting a final proposal to a customer and
must be conducted before placing an order to ensure the configuration is correct and the
solution being proposed is valid. The preinstall System Assurance Planning Review
(SAPR) Package includes various files that are used in preparation for a SAN Volume
Controller preinstall TDA. A preinstall TDA should be conducted shortly after the order is
placed and before the equipment arrives at the customer location to ensure the customer's
site is ready for the delivery and responsibilities are documented with regard to customer's
and IBM or Business Partner roles in the implementation.
Tip: For comprehensive information about the topics that are discussed here, see IBM
System Storage SAN Volume Controller: Planning Guide, GA32-0551.
We also go into much more depth about these topics in SAN Volume Controller Best
Practices and Performance Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
The required storage capacity for local mirror copy (volume mirroring)
The required storage capacity for point-in-time copy (FlashCopy)
The required storage capacity for remote copy (Metro Mirror and Global Mirror)
The required storage capacity for compressed volumes
Per host: Storage capacity, the host logical unit number (LUN) quantity, and sizes
3. Define the local and remote SAN fabrics and clustered systems, if a remote copy
or a secondary site is needed.
4. Define the number of clustered systems and the number of pairs of nodes
(between one and four) for each system. Each pair of nodes (an I/O Group) is the
container for the volumes. The number of necessary I/O Groups depends on the overall
performance requirements.
5. Design the SAN according to the requirement for high availability and best performance.
Consider the total number of ports and the bandwidth that is needed between the host and
the SAN Volume Controller, the SAN Volume Controller and the disk subsystem, between
the SAN Volume Controller nodes, and for the inter-switch link (ISL) between the local and
remote fabric.
6. Design the iSCSI network according to the requirements for high availability and best
performance. Consider the total number of ports and bandwidth that is needed between
the host and the SAN Volume Controller.
7. Determine the SAN Volume Controller service IP address.
8. Determine the IP addresses for the SAN Volume Controller system and for the host that
connects through iSCSI.
9. Determine the IP addresses for IP replication.
10.Define a naming convention for the SAN Volume Controller nodes, host, and storage
subsystem.
11.Define the managed disks (MDisks) in the disk subsystem.
12.Define the storage pools. The storage pools depend on the disk subsystem in place and
the data migration requirements.
13.Plan the logical configuration of the volume within the I/O Groups and the storage pools to
optimize the I/O load between the hosts and the SAN Volume Controller.
14.Plan for the physical location of the equipment in the rack.
SAN Volume Controller planning can be categorized into two types:
Physical planning
Logical planning
We describe these planning types in more detail in the following sections.
Note: Check and carefully count the required ports for extended links. Especially in a
stretched cluster environment, you might need many of the expensive longwave gigabit
interface converters (GBICs).
3.2 Physical planning
You must consider several key factors when performing the physical planning of a SAN
Volume Controller installation. The physical site must have the following characteristics:
Power, cooling, and location requirements are present for the SAN Volume Controller and
the uninterruptible power supply units.
SAN Volume Controller nodes and their uninterruptible power supply units must be in the
same rack.
The SAN Volume Controller nodes that belong to the same I/O Group must be placed in
separate racks.
You must plan for two separate power sources if you have a redundant ac-power switch,
which is available as an optional feature.
A SAN Volume Controller node is one Electronic Industries Association (EIA) unit high.
Each uninterruptible power supply unit that comes with SAN Volume Controller V7.2 is one
EIA unit high.
Other hardware devices can be in the same SAN Volume Controller rack, such as IBM
Storwize V7000, IBM Storwize V3700, SAN switches, an Ethernet switch, and other devices.
You must consider the maximum power rating of the rack; do not exceed it. For more
information about the power requirements, see the following website:
http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp?topic=%2Fcom.ibm.storage.svc
.console.doc%2Fsvc_preparephysicalenvcg8_la1733.html
3.2.1 Preparing your uninterruptible power supply unit environment
Ensure that your physical site meets the installation requirements for the uninterruptible
power supply unit.
2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high, is included
with the node, and can operate only with the following node types:
SAN Volume Controller 2145-CG8
SAN Volume Controller 2145-CF8
SAN Volume Controller 2145-8A4
SAN Volume Controller 2145-8G4
SAN Volume Controller 2145-8F4
When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 - 240 V,
single phase.
Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external
protection.
3.2.2 Physical rules
The SAN Volume Controller must be installed in pairs to provide high availability, and each
node in the clustered system must be connected to a separate uninterruptible power supply
unit.
Be aware of the following considerations:
Each SAN Volume Controller node of an I/O Group must be connected to a separate
uninterruptible power supply unit.
Each uninterruptible power supply unit pair that supports a pair of nodes must be
connected to a separate power domain (if possible) to reduce the chances of input power
loss.
The uninterruptible power supply units, for safety reasons, must be installed in the lowest
positions in the rack. If necessary, move lighter units toward the top of the rack to make
way for the uninterruptible power supply units.
The power and serial connection from a node must be connected to the same
uninterruptible power supply unit; otherwise, the node will not start.
The 2145-CG8 (older version), 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 hardware
models must be connected to a 5115 uninterruptible power supply unit. They will not start
with a 5125 uninterruptible power supply unit. The 2145-CG8 uses the 8115
uninterruptible power supply unit.
Figure 3-1 on page 76 shows a power cabling example for the 2145-CG8.
Important: Do not share the SAN Volume Controller uninterruptible power supply unit with
any other devices.
Figure 3-1 2145-CG8 power cabling
You must follow the guidelines for Fibre Channel (FC) cable connections. Occasionally, the
introduction of a new SAN Volume Controller hardware model means that there are internal
changes. One example is the worldwide port name (WWPN) mapping in the port mapping.
The 2145-8A4, 2145-8G4, 2145-CF8, and 2145 CG8 have the same mapping.
Figure 3-2 on page 77 shows the WWPN mapping.
Figure 3-2 WWPN mapping
Figure 3-3 on page 78 shows a sample layout where nodes within each I/O Group have been
split between separate racks. This layout protects against power failures and other events that
only affect a single rack.
Figure 3-3 Sample rack layout
3.2.3 Cable connections
Create a cable connection table or documentation following your environment's
documentation procedure to track all of the connections that are required for the setup:
Nodes
Uninterruptible power supply unit
Ethernet
iSCSI / FCoE connections
FC ports
3.3 Logical planning
For logical planning, we cover these topics:
Management IP addressing plan
SAN zoning and SAN connections
iSCSI IP addressing plan
IP Mirroring
Back-end storage subsystem configuration
SAN Volume Controller system configuration
Split-cluster system configuration
Storage pool configuration
Volume configuration
Host mapping (LUN masking)
Advanced Copy Services functions
SAN boot support
Data migration from non-virtualized storage subsystems
SAN Volume Controller configuration backup procedure
3.3.1 Management IP addressing plan
For management, remember these rules:
In addition to an FC connection, each node has an Ethernet connection for configuration
and error reporting.
Each SAN Volume Controller clustered system needs at least one IP address for
management and one IP address per node to be used for service, with the new Service
Assistant feature available starting with SAN Volume Controller 6.1.
The service IP address is usable only from the non-configuration node or when the SAN
Volume Controller system is in service mode. Service mode is a disruptive operation. Both
IP addresses must be in the same IP subnet. See Example 3-1.
Example 3-1 Management IP address sample
management IP add. 10.11.12.120
service IP add. 10.11.12.121
Each node in a SAN Volume Controller clustered system needs to have at least one
Ethernet connection.
Starting with SAN Volume Controller 6.1, the system management is performed through an
embedded GUI running on the nodes. A separate console, such as the traditional SAN
Volume Controller Hardware Management Console (HMC) or IBM System Storage
Productivity Center (SSPC), is no longer required to access the management interface. To
access the management GUI, you direct a web browser to the system management IP
address.
The clustered system must first be created specifying either an IPv4 or an IPv6 system
address for port 1. After the clustered system is created, additional IP addresses can be
created on port 1 and port 2 until both ports have an IPv4 and an IPv6 address defined. This
design allows the system to be managed on separate networks, which provides redundancy
in the event of a network failure.
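As an illustrative sketch only, an additional management address might be configured on
port 2 with the chsystemip command. The parameter names and the addresses shown here
are assumptions; verify them against the CLI reference for your code level:
svctask chsystemip -port 2 -clusterip 10.11.13.120 -mask 255.255.255.0 -gw 10.11.13.1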
Figure 3-4 on page 80 shows the IP configuration possibilities.
Figure 3-4 IP configuration possibilities
Support for iSCSI provides one additional IPv4 and one additional IPv6 address for each
Ethernet port on every node. These IP addresses are independent of the clustered system
configuration IP addresses.
The SAN Volume Controller model 2145-CG8 can optionally have a serial-attached SCSI
(SAS) adapter with external ports disabled or a high speed 10 Gbps Ethernet adapter with
two ports. Two additional IPv4 or IPv6 addresses are required in both cases.
When accessing the SAN Volume Controller through the GUI or Secure Shell (SSH), choose
one of the available IP addresses to which to connect. No automatic failover capability is
available. If one network is down, use an IP address on the alternate network. Clients might
be able to use intelligence in domain name servers (DNS) to provide partial failover.
3.3.2 SAN zoning and SAN connections
SAN storage systems using the SAN Volume Controller can be configured with two, or up to
eight, SAN Volume Controller nodes, arranged in a SAN Volume Controller clustered system.
These SAN Volume Controller nodes are attached to the SAN fabric, along with disk
subsystems and host systems. The SAN fabric is zoned to allow the SAN Volume Controllers
to see each other's nodes and the disk subsystems, and for the hosts to see the SAN
Volume Controllers. The hosts are not able to directly see or operate LUNs on the disk
subsystems that are assigned to the SAN Volume Controller system. The SAN Volume
Controller nodes within a SAN Volume Controller system must be able to see each other and
all of the storage that is assigned to the SAN Volume Controller system.
The zoning capabilities of the SAN switch are used to create three distinct zones. SAN
Volume Controller 7.2 supports 2 Gbps, 4 Gbps, or 8 Gbps FC fabric, depending on the
hardware platform and on the switch where the SAN Volume Controller is connected. In an
environment where you have a fabric with multiple-speed switches, the preferred practice is to
connect the SAN Volume Controller and the disk subsystem to the switch operating at the
highest speed.
All SAN Volume Controller nodes in the SAN Volume Controller clustered system are
connected to the same SANs, and they present volumes to the hosts. These volumes are
created from storage pools that are composed of MDisks presented by the disk subsystems.
The fabric must have three distinct zones:
SAN Volume Controller clustered system zone: Create one zone per fabric with all of the
SAN Volume Controller ports cabled to this fabric to allow SAN Volume Controller
internode communication.
Host zones: Create a SAN Volume Controller host zone for each server accessing storage
from the SAN Volume Controller system.
Storage zone: Create one SAN Volume Controller storage zone for each storage
subsystem that is virtualized by the SAN Volume Controller.
Zoning considerations for Metro Mirror and Global Mirror
Ensure that you are familiar with the constraints on zoning a switch to support Metro Mirror
and Global Mirror partnerships. SAN configurations that use intracluster Metro Mirror and
Global Mirror relationships do not require additional switch zones.
SAN configurations that use intercluster Metro Mirror and Global Mirror relationships require
the following additional switch zoning considerations:
For each node in a clustered system, zone exactly two FC ports to exactly two FC ports
from each node in the partner clustered system.
If dual-redundant ISLs are available, split the two ports from each node evenly between
the two ISLs. That is, exactly one port from each node must be zoned across each ISL.
Local clustered system zoning continues to follow the standard requirement for all ports on
all nodes in a clustered system to be zoned to one another.
Configure your SAN so that FC traffic can be passed between the two clustered systems.
To configure the SAN this way, you can connect the clustered systems to the same SAN,
merge the SANs, or use routing technologies.
Configure zoning to allow all of the nodes in the local fabric to communicate with all of the
nodes in the remote fabric.
Important: Failure to follow these configuration rules exposes the clustered system to
the following condition and can result in the loss of host access to volumes.
If an intercluster link becomes severely and abruptly overloaded, the local FC fabric can
become congested to the extent that no FC ports on the local SAN Volume Controller
nodes are able to perform local intracluster heartbeat communication. This situation
can, in turn, result in the nodes experiencing lease expiry events. In a lease expiry
event, a node reboots to attempt to re-establish communication with the other nodes in
the clustered system. If the leases for all nodes expire simultaneously, a loss of host
access to volumes can occur for the duration of the reboot events.
Optionally, modify the zoning so that the hosts that are visible to the local clustered system
can recognize the remote clustered system. This capability allows a host to have access to
data in both the local and remote clustered systems.
Verify that clustered system A cannot recognize any of the back-end storage that is owned
by clustered system B. A clustered system cannot access logical units (LUs) that a host or
another clustered system can also access.
Figure 3-5 shows the SAN Volume Controller zoning topology.
Figure 3-5 SAN Volume Controller zoning topology
Figure 3-6 on page 83 shows an example of SAN Volume Controller, host, and storage
subsystem connections.
Figure 3-6 Example of SAN Volume Controller, host, and storage subsystem connections
You must also observe the following guidelines:
LUNs (MDisks) must have exclusive access to a single SAN Volume Controller clustered
system and cannot be shared between other SAN Volume Controller clustered systems or
hosts.
A storage controller can present LUNs to both the SAN Volume Controller (as MDisks) and
to other hosts in the SAN. However, in this case, it is better to avoid having the SAN Volume
Controller and the hosts share the same storage ports.
Mixed port speeds are not permitted for intracluster communication. All node ports within
a clustered system must be running at the same speed.
ISLs are not to be used for intracluster node communication or node-to-storage controller
access.
The switch configuration in a SAN Volume Controller fabric must comply with the switch
manufacturer's configuration rules, which can impose restrictions on the switch
configuration. For example, a switch manufacturer might limit the number of supported
switches in a SAN. Operation outside of the switch manufacturer's rules is not supported.
Host bus adapters (HBAs) in dissimilar hosts or dissimilar HBAs in the same host need to
be in separate zones. For example, IBM AIX and Microsoft Windows hosts need to be in
separate zones. In this case, dissimilar means that the hosts are running separate
operating systems or are using separate hardware platforms. Therefore, various levels of
the same operating system are regarded as similar. Note that this requirement is a SAN
interoperability issue, rather than a SAN Volume Controller requirement.
Host zones are to contain only one initiator (HBA) each, and as many SAN Volume
Controller node ports as you need, depending on the high availability and performance
that you want from your configuration.
You can use the lsfabric command to generate a report that displays the connectivity
between nodes and other controllers and hosts. This report is helpful for diagnosing SAN
problems.
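For example, the following hedged sketch shows how the lsfabric command can be used from the SAN Volume Controller CLI to review connectivity; the -delim parameter produces delimited output, and the optional host filter and the host object name are illustrative only:
# List all logins between SVC node ports, storage controllers, and hosts (colon-delimited)
lsfabric -delim :
# Optionally limit the report to the logins that one host object can see
lsfabric -host Host_AIX01 -delim :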
Zoning examples
Figure 3-7 shows a SAN Volume Controller clustered system zoning example.
Figure 3-7 SAN Volume Controller clustered system zoning example
Figure 3-8 on page 85 shows a storage subsystem zoning example.
Important: Be aware of the following considerations:
The use of ISLs for intracluster node communication can negatively affect the
availability of the system due to the high dependency on the quality of these links to
maintain heartbeat and other system management services. Therefore, we strongly
advise that you only use them as part of an interim configuration to facilitate SAN
migrations, and not as part of the architected solution.
The use of ISLs for SAN Volume Controller node to storage controller access can
lead to port congestion, which can negatively affect the performance and resiliency
of the SAN. Therefore, we strongly advise that you only use them as part of an
interim configuration to facilitate SAN migrations, and not as part of the architected
solution. With SAN Volume Controller 6.3, you can use ISLs between nodes, but
they must be in a dedicated SAN, virtual SAN (Cisco technology), or logical SAN
(Brocade technology).
The use of mixed port speeds for intercluster communication can lead to port
congestion, which can negatively affect the performance and resiliency of the SAN
and is therefore not supported.
(Figure 3-7 depicts two redundant fabrics, each with an ISL between its switches. The clustered system zone in Fabric 1, SVC-Cluster Zone 1, contains Fabric Domain ID and port members 11,0 - 11,1 - 11,2 - 11,3, and the zone in Fabric 2, SVC-Cluster Zone 2, contains members 12,0 - 12,1 - 12,2 - 12,3. Each cluster zone contains all SVC node ports on that fabric.)
Figure 3-8 Storage subsystem zoning example
Figure 3-9 shows a host zoning example.
Figure 3-9 Host zoning example
(Figure 3-8 depicts storage zones that contain all storage ports and all SVC ports in each fabric: SVC-Storwize V7000 Zone 1 with members 11,0 - 11,1 - 11,2 - 11,3 plus 11,8 and SVC-EMC Zone 1 with members 11,0 - 11,1 - 11,2 - 11,3 plus 11,9 in Fabric 1, and the corresponding zones 12,0 - 12,1 - 12,2 - 12,3 plus 12,8 or 12,9 in Fabric 2. Figure 3-9 depicts host zones that contain one IBM Power System port and one SVC port per SVC node: SVC-Power System Zone P1 with members 21,1 - 11,0 - 11,1 and SVC-Power System Zone P2 with members 22,1 - 12,2 - 12,3.)
3.3.3 iSCSI IP addressing plan
Since Version 6.3, the SAN Volume Controller supports host access through iSCSI (as an
alternative to FC), and the following considerations apply:
SAN Volume Controller uses the built-in Ethernet ports for iSCSI traffic. If the optional 10
Gbps Ethernet feature is installed, you can connect host systems through the two 10 Gbps
Ethernet ports per node.
All node types that can run SAN Volume Controller 6.1 or later can use the iSCSI
feature.
SAN Volume Controller supports the Challenge Handshake Authentication Protocol
(CHAP) authentication methods for iSCSI.
iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This
design reduces the need for multipathing support in the iSCSI host.
iSCSI IP addresses can be configured for one or more nodes.
Internet Storage Name Service (iSNS) addresses can be configured in the SAN Volume
Controller.
The iSCSI qualified name (IQN) for a SAN Volume Controller node is
iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the
clustered system name and the node name, it is important not to change these names
after iSCSI is deployed.
Each node can be given an iSCSI alias, as an alternative to the IQN.
The IQN of the host is added to a SAN Volume Controller host object in the same way that
you add FC WWPNs.
Host objects can have both WWPNs and IQNs.
Standard iSCSI host connection procedures can be used to discover and configure SAN
Volume Controller as an iSCSI target.
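As an illustration of these considerations, the following sketch (the IP addresses, node names, host name, IQN, CHAP secret, and port ID are examples only) assigns an iSCSI IPv4 address to Ethernet port 1 of each node in an I/O Group and creates a host object from the host initiator IQN:
# Assign an iSCSI IPv4 address to Ethernet port 1 of node1 and node2
cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 1
cfgportip -node node2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 1
# Create a host object from the host initiator IQN (added like an FC WWPN)
mkhost -name linuxhost01 -iscsiname iqn.1994-05.com.redhat:linuxhost01
# Optionally set a CHAP secret for the host
chhost -chapsecret mysecret linuxhost01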
Next, we explain several ways in which you can configure SAN Volume Controller 6.1 or later.
Figure 3-10 shows the use of IPv4 management and iSCSI addresses in the same subnet.
Figure 3-10 Use of IPv4 addresses
You can set up the equivalent configuration with only IPv6 addresses.
Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate
subnets.
Figure 3-11 IPv4 address plan with two subnets
Figure 3-12 shows the use of redundant networks.
Figure 3-12 Redundant networks
Figure 3-13 on page 88 shows the use of a redundant network and a third subnet for
management.
Figure 3-13 Redundant network with third subnet for management
Figure 3-14 shows the use of a redundant network for both iSCSI data and management.
Figure 3-14 Redundant network for iSCSI and management
Be aware of these considerations:
All of the examples are valid for IPv4 and IPv6 addresses.
It is valid to use IPv4 addresses on one port and IPv6 addresses on the other port.
It is valid to have separate subnet configurations for IPv4 and IPv6 addresses.
3.3.4 IP Mirroring
One of the most important new functions of Version 7.2 in the Storwize family is IP replication,
which enables the use of lower-cost Ethernet connections for remote mirroring. The capability
is available as a chargeable option (Metro or Global Mirror) on all Storwize family systems.
The new function is transparent to servers and applications in the same way that traditional
FC-based mirroring is. All remote mirroring modes (Metro Mirror, Global Mirror, and Global
Mirror with changed volumes) are supported. Configuration of the system is straightforward:
Storwize family systems can normally find each other on the network and can be selected
from the GUI. IP replication includes Bridgeworks SANSlide network optimization technology
and is available at no additional charge. Remote mirror is a chargeable option but the price
does not change with IP replication. Existing remote mirror users have access to the new
function at no additional charge.
IP connections that are used for replication can have long latency (the time to transmit a
signal from one end to the other), which can be caused by distance or by many hops between
switches and other appliances in the network. Traditional replication solutions transmit data,
wait for a response, then transmit more data, which can result in network usage as low as
20% (based on IBM measurements). The longer the latency, the worse this effect becomes.
Bridgeworks SANSlide technology is integrated with the IBM Storwize family, so it requires no separate
appliances, no additional cost, and no configuration overhead. It uses artificial intelligence to
transmit multiple data streams in parallel, adjusting automatically to changing network
environments and workloads. Because it does not use compression, it is independent of
application or data type. Most importantly, SANSlide improves network bandwidth usage by up
to three times, so customers might be able to deploy a less costly network infrastructure, or take
advantage of faster data transfer to speed replication cycles, improve remote data currency,
and enjoy faster recovery.
These are the key features of IP Mirroring:
IBM offers Remote Copy capability to entry-level and midrange customers who previously could
not use these features because of the prohibitive cost of FC-based replication.
Remote Copy modes supported: Metro Mirror, Global Mirror, and Global Mirror with Change Volumes.
Platforms supported: all platforms that support remote copy.
Configuration: automatic path configuration through discovery of a remote cluster. Any
Ethernet port (10 Gbps or 1 Gbps) can be configured for replication by using remote copy port groups.
Dedicated Ethernet ports for replication.
Security: CHAP-based authentication is supported.
Licensing: same as the existing remote copy licensing.
High availability: automatic failover support across redundant links.
Performance: a vendor-supplied IP connectivity solution with experience in dealing with low-bandwidth,
high-latency, long-distance IP links. Support for an 80 ms round-trip time on a 1 Gbps link.
Figure 3-15 shows a schematic view of how to connect two sites by using IP Mirroring.
Figure 3-15 IP Mirroring
The next two figures show configuration possibilities for how two sites can be
connected by using IP Mirroring. Figure 3-16 shows a configuration with single links.
Note: The limiting factor of the distance is the round-trip time. The maximum supported
round-trip time between sites is 80 milliseconds (ms) for a 1 Gbps link. For a 10 Gbps
link, the maximum supported round-trip time between sites is 10 milliseconds (ms).
Figure 3-16 Single Link configuration
The administrator must configure at least one port on each site to use with the link. Configuring
more than one port means that replication continues even if a node fails. In Figure 3-17, you
see a redundant IP configuration with two links.
Figure 3-17 Two Links with Active and Failover Ports
Replication Group setup for dual redundant links:
Replication Group 1: 4 IP addresses, each on a different node (green)
Replication Group 2: 4 IP addresses, each on a different node (orange)
There are two simultaneous IP replication sessions at any time.
Possible user configuration of each Ethernet port:
Not used for IP replication (default)
Used for IP replication, link 1
Used for IP replication, link 2
IP replication status for each Ethernet port:
Not used for IP replication
Active (solid box)
Standby (outline box)
Figure 3-18 displays the configuration of an IP partnership.
Figure 3-18 Configuration of an IP partnership
Terminology for IP Replication
This section lists terminology for IP Replication.
Discovery. This term refers to the process by which two SAN Volume Controller clusters
exchange information about their IP address configuration. For IP-based partnerships, only IP
addresses configured for Remote Copy are discovered. For example, the first discovery takes
place when the user runs the mkippartnership CLI command. Subsequent discoveries might take
place as a result of user activities (configuration changes) or as a result of hardware failures
(for example, node failure, port failure, and so on).
Remote Copy Port Group. This term indicates the set of local and remote Ethernet ports (on
local and partnered SAN Volume Controller systems) that can access each other via a
long-distance IP link. For a successful partnership to be established between two SAN Volume
Controller clusters, there must be at least two ports in the same remote copy port group, one
from the local cluster and one from the partner cluster. There can be more than two ports
from the same system in a group to allow for TCP connection failover in the event of
local/partnered node or port failure.
Remote Copy Port Group ID. This is a numeric value that indicates which group the port
belongs to. Zero is used to indicate that a port is not used for remote copy. In order for two
SAN Volume Controller clusters to form a partnership, both clusters must have at least one
port configured with the same Group ID and they must be accessible to each other.
RC login. This is a bidirectional full-duplex data path between two SAN Volume Controller
clusters that are Remote Copy partners. This path is between an IP address pair, one local
and one remote. An RC login carries Remote Copy traffic that consists of host writes,
background copy traffic during the initial synchronization within a relationship, periodic
updates in GMCV relationships, and so on.
Path Configuration. This is the act of setting up RC logins between two partnered SAN
Volume Controller systems. The selection of IP addresses to be used for RC logins is based
on certain rules that are specified in the requirements section. Most of those rules are driven
by constraints and requirements from the vendor-supplied link management library. A simple
algorithm is run by each SAN Volume Controller system to arrive at the list of RC logins that
must be established. Both SAN Volume Controller clusters (local and remote) are expected to
arrive at exactly the same IP address pairs for RC login creation even though they run the
algorithm independently.
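As a hedged sketch of this path configuration (the node names, port IDs, remote system IP address, and bandwidth values are examples, and the exact parameter set of cfgportip and mkippartnership should be verified against the 7.2 command reference), remote copy port groups are first assigned to Ethernet ports on both systems, and the partnership is then created:
# On each system, place one Ethernet port per node into remote copy port group 1
cfgportip -node node1 -remotecopy 1 1
cfgportip -node node2 -remotecopy 1 1
# From the local system, create the IPv4 partnership; discovery then runs
# against the remote system cluster IP address
mkippartnership -type ipv4 -clusterip 192.168.20.10 -linkbandwidthmbits 100 -backgroundcopyrate 50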
Best practices
This section lists best practices for IP replication.
Configure two physical links between sites for redundancy.
Configure Ethernet ports that are dedicated for Remote Copy. Do not allow iSCSI host
attach for these Ethernet ports.
Configure Replication Port group IDs on both nodes for each physical link to survive node
failover.
A minimum of four nodes is required for dual redundant links to work across node
failures. On a two-node system, if a node failure occurs, one link will be lost.
Do not zone two SAN Volume Controller systems together over FC/FCoE when an IP
partnership already exists.
Configure CHAP secret based authentication if required.
The maximum supported round-trip time between sites is 80 milliseconds (ms) for a
1 Gbps link.
The maximum supported round-trip time between sites is 10 milliseconds (ms) for a 10
Gbps link.
For IP partnerships, the recommended method of copying is Global Mirror with Change
Volumes. This method is recommended because of the performance benefits. Also, Global
Mirror and Metro Mirror might be more susceptible to the loss of synchronization.
The amount of inter-cluster heartbeat traffic is 1 megabit per second (Mbps) per link.
The minimum bandwidth requirement for the inter-cluster link is 10 Mbps. This requirement,
however, scales up with the amount of host I/O that you choose to use.
3.3.5 Back-end storage subsystem configuration
Back-end storage subsystem configuration planning must be applied to all storage controllers
that are attached to the SAN Volume Controller.
See the following website for a list of currently supported storage subsystems:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
Apply the following general guidelines for back-end storage subsystem configuration
planning:
In the SAN, storage controllers that are used by the SAN Volume Controller clustered
system must be connected through SAN switches. Direct connection between the SAN
Volume Controller and the storage controller is not supported.
Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN, but it is a preferred
practice. For example, canister A in the V3700 subsystem can be connected to SAN A only,
or to SAN A and SAN B. And, canister B in the V3700 subsystem can be connected to
SAN B only, or to SAN B and SAN A.
Split controller configurations are supported with certain rules and configuration
guidelines. See 3.3.7, Split-cluster system configuration on page 97 for more
information.
All SAN Volume Controller nodes in a SAN Volume Controller clustered system must be
able to see the same set of ports from each storage subsystem controller. Violating this
guideline causes the paths to become degraded. This degradation can occur as a result of
applying inappropriate zoning and LUN masking. This guideline has important implications
for a disk subsystem, such as DS3000, V3700, V5000, or V7000, which imposes
exclusivity rules regarding which HBA WWPNs a storage partition can be mapped to.
If you do not have a storage subsystem that supports the SAN Volume Controller round-robin
algorithm, make the number of MDisks per storage pool a multiple of the number of storage
ports that are available. This approach ensures sufficient bandwidth to the storage controller
and an even balance across storage controller ports.
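For example, the following hedged sketch (the controller port count, MDisk names, and pool name are illustrative) assumes a back-end controller that presents LUNs through four ports; eight MDisks, a multiple of four, are placed into one storage pool:
# Discover the new LUNs that the controller has mapped to the clustered system
detectmdisk
# Create a pool from eight MDisks (two per back-end storage port)
mkmdiskgrp -name DS_Pool01 -ext 256 -mdisk mdisk0:mdisk1:mdisk2:mdisk3:mdisk4:mdisk5:mdisk6:mdisk7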
In general, configure disk subsystems as though no SAN Volume Controller exists. However,
we suggest the following specific guidelines:
Disk drives:
Exercise caution with large disk drives so that you do not have too few spindles to
handle the load.
RAID 5 is suggested for most workloads.
MDisks within storage pools:
SAN Volume Controller 6.1 and later provide for better load distribution across paths
within storage pools.
In previous code levels, the path to MDisk assignment was made in a round-robin
fashion across all MDisks configured to the clustered system. With that method, no
attention is paid to how MDisks within storage pools are distributed across paths.
Therefore, it is possible and even likely to have certain paths that are more heavily
loaded than others.
This condition is more likely to occur with a smaller number of MDisks contained in the
storage pool. Starting with SAN Volume Controller 6.1, the code contains logic that
considers MDisks within storage pools. Therefore, the code more effectively distributes
their active paths that are based on the storage controller ports that are available.
The detectmdisk command must be run following the creation or modification (adding or
removing MDisks) of storage pools for the paths to be redistributed.
Array sizes:
An array size of 8+P or 4+P is suggested for the IBM DS4000 and DS5000 families, if
possible.
Use the DS4000 segment size of 128 KB or larger to help the sequential performance.
Upgrade to EXP810 drawers, if possible.
Create LUN sizes that are equal to the RAID array and rank size. If the array size is
greater than 2 TB and the disk subsystem does not support MDisks larger than 2 TB,
create the minimum number of LUNs of equal size.
An array size of 7+P is suggested for the V3700, V5000, and V7000 Storwize families.
When adding more disks to a subsystem, consider adding the new MDisks to existing
storage pools versus creating additional small storage pools.
Scripts are available to restripe volume extents evenly across all MDisks in the storage
pools, if required. Go to the following website and search for svctools:
https://www.ibm.com/developerworks/mydeveloperworks/groups/service/html/communityview?communityUuid=5cca19c3-f039-4e00-964a-c5934226abc1
Maximum of 1,024 worldwide node names (WWNNs) per cluster:
EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port. Each
WWNN appears as a separate controller to the SAN Volume Controller.
IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each WWNN appears
as a single controller with multiple ports/WWPNs, for a maximum of 16 ports/WWPNs
per WWNN.
DS8000 using four or eight of the 4-port host adapter (HA) cards:
Use ports 1 and 3 or ports 2 and 4 on each card (it does not matter for 8 Gb cards).
This setup provides 8 or 16 ports for SAN Volume Controller use.
Use eight ports minimum, up to 40 ranks.
Use 16 ports for 40 or more ranks. Sixteen is the maximum number of ports.
DS4000/DS5000 EMC CLARiiON/CX:
Both systems have the preferred controller architecture, and SAN Volume Controller
supports this configuration.
Use a minimum of four ports, and preferably eight or more ports, up to a maximum of
16 ports, so that more ports equate to more concurrent I/O that is driven by the SAN
Volume Controller.
Support is available for mapping controller A ports to Fabric A and controller B ports to
Fabric B or cross-connecting ports to both fabrics from both controllers. The
cross-connecting approach is preferred to avoid AVT/Trespass occurring if a fabric or
all paths to a fabric fail.
DS3400 subsystems:
Use a minimum of four ports.
Storwize family:
Use a minimum of four ports, and preferably eight ports.
IBM XIV requirements and restrictions:
The use of XIV extended functions, including snaps, thin provisioning, synchronous
replication (native copy services), and LUN expansion of LUNs presented to the SAN
Volume Controller is not supported.
A maximum of 511 LUNs from one XIV system can be mapped to a SAN Volume
Controller clustered system.
Full 15-module XIV recommendations (161 TB usable):
Use two interface host ports from each of the six interface modules.
Use ports 1 and 3 from each interface module and zone these 12 ports with all SAN
Volume Controller node ports.
Create 48 LUNs of equal size, each of which is a multiple of 17 GB. This creates
approximately 1632 GB if you are using the entire full frame XIV with the SAN Volume
Controller.
Map LUNs to the SAN Volume Controller as 48 MDisks, and add all of them to the
single XIV storage pool so that the SAN Volume Controller drives the I/O to four
MDisks and LUNs for each of the 12 XIV FC ports. This design provides a good queue
depth on the SAN Volume Controller to drive XIV adequately.
Six-module XIV recommendations (55 TB usable):
Use two interface host ports from each of the two active interface modules.
Use ports 1 and 3 from interface modules 4 and 5. (Interface module 6 is inactive.) Also
zone these four ports with all SAN Volume Controller node ports.
Create 16 LUNs of equal size, each of which is a multiple of 17 GB. This creates
approximately 1632 GB if you are using the entire XIV with the SAN Volume Controller.
Map the LUNs to the SAN Volume Controller as 16 MDisks, and add all of them to the
single XIV storage pool, so that the SAN Volume Controller drives I/O to four MDisks
and LUNs per each of the four XIV FC ports. This design provides a good queue depth
on the SAN Volume Controller to drive the XIV adequately.
Nine-module XIV recommendations (87 TB usable):
Use two interface host ports from each of the four active interface modules.
Use ports 1 and 3 from interface modules 4, 5, 7, and 8. (Interface modules 6 and 9 are
inactive.) Also, zone these eight ports with all of the SAN Volume Controller node ports.
Create 26 LUNs of equal size, each of which is a multiple of 17 GB. This creates
approximately 1632 GB if you are using the entire XIV with the SAN Volume
Controller.
Map the LUNs to the SAN Volume Controller as 26 MDisks, and map all of them to the
single XIV storage pool, so that the SAN Volume Controller drives I/O to three MDisks
and LUNs on each of the six ports and four MDisks and LUNs on the other two XIV FC
ports. This design provides a useful queue depth on SAN Volume Controller to drive
XIV adequately.
Configure XIV host connectivity for the SAN Volume Controller clustered system:
Create one host definition on XIV, and include all SAN Volume Controller node
WWPNs.
You can create clustered system host definitions (one per I/O Group), but the
preceding method is easier.
Map all LUNs to all SAN Volume Controller node WWPNs.
3.3.6 SAN Volume Controller clustered system configuration
To ensure high availability in SAN Volume Controller installations, consider the following
guidelines when you design a SAN with the SAN Volume Controller:
All nodes in a clustered system must be in the same LAN segment, because the nodes in
the clustered system must be able to assume the same clustered system or service IP
address. Make sure that the network configuration allows any of the nodes to use these IP
addresses. If you plan to use the second Ethernet port on each node, it is possible to have
two LAN segments. However, port 1 of every node must be in one LAN segment, and port
2 of every node must be in the other LAN segment.
To maintain application uptime in the unlikely event of an individual SAN Volume Controller
node failing, SAN Volume Controller nodes are always deployed in pairs (I/O Groups). If a
node fails or is removed from the configuration, the remaining node operates in a
degraded mode, but it is still a valid configuration. The remaining node operates in
write-through mode, meaning that the data is written directly to the disk subsystem (the
cache is disabled for the write).
The uninterruptible power supply unit must be in the same rack as the node to which it
provides power, and each uninterruptible power supply unit can only have one connected
node.
The FC SAN connections between the SAN Volume Controller node and the switches are
optical fiber. These connections can run at either 2 Gbps, 4 Gbps, or 8 Gbps, depending
on your SAN Volume Controller and switch hardware. The 2145-CG8, 2145-CF8,
2145-8A4, 2145-8G4, and 2145-8F4 SAN Volume Controller nodes auto-negotiate the
connection speed with the switch.
The SAN Volume Controller node ports must be connected to the FC fabric only. Direct
connections between the SAN Volume Controller and the host, or the disk subsystem, are
unsupported.
Two SAN Volume Controller clustered systems cannot have access to the same LUNs
within a disk subsystem. Configuring zoning so that two SAN Volume Controller clustered
systems have access to the same LUNs (MDisks) can, and will likely, result in data
corruption.
The two nodes within an I/O Group can be co-located (within the same set of racks) or can
be in separate racks and separate rooms. See 3.3.7, Split-cluster system configuration
on page 97 for more information about this topic.
The SAN Volume Controller uses three MDisks as quorum disks for the clustered system.
A preferred practice for redundancy is to have each quorum disk in a separate storage
subsystem, where possible. The current locations of the quorum disks can be displayed
using the lsquorum command and relocated using the chquorum command.
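For example (the MDisk name and quorum index below are illustrative, and the exact chquorum syntax should be verified against the command reference), the quorum assignment can be reviewed and moved as follows:
# Display the three quorum disk candidates and which one is active
lsquorum
# Move quorum index 2 to an MDisk on a different storage subsystem
chquorum -mdisk mdisk12 2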
3.3.7 Split-cluster system configuration
You can implement a split-cluster system configuration (also referred to as a Split I/O Group)
as a high-availability option.
SAN Volume Controller 7.2 supports two split-cluster system configurations:
No ISL configuration:
Passive wave division multiplexing (WDM) devices can be used between both sites.
No ISLs can be located between the SAN Volume Controller nodes (similar to SAN
Volume Controller 5.1-supported configurations).
The supported distance is up to 40 km (24.8 miles).
Figure 3-19 on page 98 shows an example of a split-cluster configuration with no ISL
configuration.
Figure 3-19 Split cluster with no ISL configuration
ISL configuration:
ISLs located between the SAN Volume Controller nodes
Maximum distance similar to Metro Mirror distances
Physical requirements similar to Metro Mirror requirements
ISL distance extension with active and passive WDM devices
Figure 3-20 shows an example of a split cluster with ISL configuration.
Figure 3-20 Split Cluster with ISL Configuration
Use the split-cluster system configuration with the volume mirroring option to realize an
availability benefit. After volume mirroring is configured, use the
lscontrollerdependentvdisks command to validate that the volume mirrors reside on
separate storage controllers. Having the volume mirrors reside on separate storage
controllers ensures that access to the volumes is maintained in the event of the loss of a
storage controller.
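For example, assuming a controller that is named controller0 (an illustrative name), the following command lists the volumes that would be affected if that controller were lost, which lets you confirm that each mirrored volume has a copy elsewhere:
# List the volumes that depend on the specified storage controller
lscontrollerdependentvdisks controller0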
When implementing a split-cluster system configuration, two of the three quorum disks can be
co-located in the same room where the SAN Volume Controller nodes are located. However,
the active quorum disk must reside in a separate room. This configuration ensures that a
quorum disk is always available, even after a single-site failure.
For split-cluster system configuration, configure the SAN Volume Controller in the following
manner:
Site 1: Half of the SAN Volume Controller clustered system nodes plus one quorum disk
candidate
Site 2: Half of the SAN Volume Controller clustered system nodes plus one quorum disk
candidate
Site 3: Active quorum disk
When a split-cluster configuration is used with volume mirroring, this configuration provides a
high-availability solution that is tolerant of a failure at a single site. If either the primary or
secondary site fails, the remaining sites can continue performing I/O operations.
See Appendix C, SAN Volume Controller Stretched Cluster on page 885 for more
information about split-cluster configurations.
3.3.8 Storage pool configuration
The storage pool is at the center of the many-to-many relationship between the MDisks and
the volumes. It acts as a container into which managed disks contribute chunks of physical
disk capacity, known as extents, and from which volumes are created.
MDisks in the SAN Volume Controller are LUNs assigned from the underlying disk
subsystems to the SAN Volume Controller and can be either managed or unmanaged. A
managed MDisk is an MDisk that is assigned to a storage pool:
A storage pool is a collection of MDisks. An MDisk can only be contained within a single
storage pool.
A SAN Volume Controller supports up to 128 storage pools.
The number of volumes that can be allocated from a storage pool is unlimited; however, an
I/O Group is limited to 2048 volumes, and the clustered system limit is 8192 volumes.
Volumes are associated with a single storage pool, except in cases where a volume is
being migrated or mirrored between storage pools.
The SAN Volume Controller supports extent sizes of 16, 32, 64, 128, 256, 512, 1024, 2048,
4096, and 8192 MB. Support for extent sizes 4096 and 8192 was added in SAN Volume
Controller 6.1. The extent size is a property of the storage pool and is set when the storage
pool is created. All MDisks in the storage pool have the same extent size, and all volumes that
are allocated from the storage pool have the same extent size. The extent size of a storage
pool cannot be changed. If another extent size is wanted, the storage pool must be deleted
and a new storage pool configured.
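Because the extent size is fixed at pool creation and volumes cannot be migrated between pools with different extent sizes, it can help to check the existing pools before creating a new one. A minimal hedged sketch (the pool name is an example):
# Check the extent size (extent_size, in MB) of the existing pools
lsmdiskgrp -delim :
# Create a new pool with a 256 MB extent size; this value cannot be changed later
mkmdiskgrp -name Pool_256MB -ext 256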
Table 3-1 on page 100 lists all of the extent sizes that are available in a SAN Volume
Controller.
Table 3-1 Extent size and maximum clustered system capacities
Extent size      Maximum clustered system capacity
16 MB            64 TB
32 MB            128 TB
64 MB            256 TB
128 MB           512 TB
256 MB           1 PB
512 MB           2 PB
1,024 MB         4 PB
2,048 MB         8 PB
4,096 MB         16 PB
8,192 MB         32 PB
Consider the following information about storage pools:
Maximum clustered system capacity is related to the extent size:
A 16 MB extent gives 64 TB, and the capacity doubles for each increment in extent size; for
example, 32 MB = 128 TB. We strongly advise a minimum extent size of 128 MB or 256 MB. The IBM
Storage Performance Council (SPC) benchmarks used a 256 MB extent.
Pick the extent size, and use that size for all storage pools.
You cannot migrate volumes between storage pools with separate extent sizes.
However, you can use volume mirroring to create copies between storage pools with
separate extent sizes.
Storage pool reliability, availability, and serviceability (RAS) considerations:
It might make sense to create multiple storage pools if you ensure that a host only gets
its volumes built from one of the storage pools. If the storage pool goes offline, it affects
only a subset of all the hosts using the SAN Volume Controller. However, creating
multiple storage pools can cause a high number of storage pools, approaching the
SAN Volume Controller limits.
If you do not isolate hosts to storage pools, create one large storage pool. Creating one
large storage pool assumes that the physical disks are all the same size, speed, and
RAID level.
The storage pool goes offline if an MDisk is unavailable, even if the MDisk has no data
on it. Do not put MDisks into a storage pool until they are needed.
Create at least one separate storage pool for all the image mode volumes.
Make sure that the LUNs that are given to the SAN Volume Controller have all
host-persistent reserves removed.
Storage pool performance considerations
It might make sense to create multiple storage pools if you are attempting to isolate
workloads to separate disk spindles. Storage pools with too few MDisks cause an MDisk
overload, so it is better to have more spindle counts in a storage pool to meet workload
requirements.
The storage pool and SAN Volume Controller cache relationship
The SAN Volume Controller employs cache partitioning to limit the potentially negative
effect that a poorly performing storage controller can have on the clustered system. The
partition allocation size is defined based on the number of configured storage pools. This
design protects against individual controller overloading and failures from consuming write
cache and degrading performance of the other storage pools in the clustered system. We
discuss more details in 2.8.3, Cache on page 44.
Table 3-2 shows the limit of the write-cache data.
Table 3-2 Limit of the cache data
Number of storage pools    Upper limit
1                          100%
2                          66%
3                          40%
4                          30%
5 or more                  25%
Consider the rule that no single partition can occupy more than its upper limit of cache
capacity with write data. These limits are upper limits, and they are the points at which the
SAN Volume Controller cache will start to limit incoming I/O rates for volumes that are
created from the storage pool. If a particular partition reaches this upper limit, the net
result is the same as a global cache resource that is full. That is, the host writes will be
serviced on a one-out-one-in basis, because the cache destages writes to the back-end
disks.
However, only writes that are targeted at the full partition are limited. All I/O destined for
other (non-limited) storage pools continues as normal. The read I/O requests for the
limited partition also continue normally. However, because the SAN Volume Controller is
destaging write data at a rate that is obviously greater than the controller can sustain
(otherwise the partition does not reach the upper limit), read response times are also likely
affected.
3.3.9 Virtual disk configuration
An individual virtual disk (volume) is a member of one storage pool and one I/O Group. When
creating a volume, you first identify the wanted performance, availability, and cost
requirements for that volume, and then select the storage pool accordingly:
The storage pool defines which MDisks provided by the disk subsystem make up the
volume.
The I/O Group (two nodes make an I/O Group) defines which SAN Volume Controller
nodes provide I/O access to the volume.
Perform volume allocation based on the following considerations:
Optimize performance between the hosts and the SAN Volume Controller by attempting to
distribute volumes evenly across available I/O Groups and nodes within the clustered
system.
Reach the level of performance, reliability, and capacity that you require by using the
storage pool that corresponds to your needs (you can access any storage pool from any
node). That is, choose the storage pool that fulfills the demands for your volumes with
respect to performance, reliability, and capacity.
I/O Group considerations:
When you create a volume, it is associated with one node of an I/O Group. By default,
every time that you create a new volume, it is associated with the next node using a
round-robin algorithm. You can specify a preferred access node, which is the node
through which you send I/O to the volume instead of using the round-robin algorithm. A
volume is defined for an I/O Group.
Even if you have eight paths for each volume, all I/O traffic flows only toward one node
(the preferred node). Therefore, only four paths are used by the IBM Subsystem Device
Driver (SDD). The other four paths are used only in the case of a failure of the preferred
node or when concurrent code upgrade is running.
Creating image mode volumes:
Use image mode volumes when an MDisk already has data on it, from a
non-virtualized disk subsystem. When an image mode volume is created, it directly
corresponds to the MDisk from which it is created. Therefore, volume logical block
address (LBA) x = MDisk LBA x. The capacity of image mode volumes defaults to the
capacity of the supplied MDisk.
When you create an image mode disk, the MDisk must have a mode of unmanaged
and therefore does not belong to any storage pool. A capacity of 0 is not allowed.
Image mode volumes can be created in sizes with a minimum granularity of 512 bytes,
and they must be at least one block (512 bytes) in size.
Important: There is no fixed relationship between I/O Groups and storage pools.
Creating managed mode volumes with sequential or striped policy
When creating a managed mode volume with sequential or striped policy, you must use a
number of MDisks containing extents that are free and of a size that is equal to or greater
than the size of the volume that you want to create. There might be sufficient extents
available on the MDisk, but a contiguous block large enough to satisfy the request might
not be available.
Thin-Provisioned volume considerations:
When creating the Thin-Provisioned volume, you need to understand the utilization
patterns by the applications or group users accessing this volume. You must consider
items such as the actual size of the data, the rate of creation of new data, modifying or
deleting existing data, and so on.
Two operating modes for Thin-Provisioned volumes are available:
Autoexpand volumes allocate storage from a storage pool on demand with minimal
required user intervention. However, a misbehaving application can cause a volume
to expand until it has consumed all of the storage in a storage pool.
Non-autoexpand volumes have a fixed amount of assigned storage. In this case, the
user must monitor the volume and assign additional capacity when required. A
misbehaving application can only cause the volume that it uses to fill up.
Depending on the initial size for the real capacity, the grain size and a warning level can
be set. If a volume goes offline, either through a lack of available physical storage for
autoexpand, or because a volume that is marked as non-autoexpand has not been
expanded in time, a danger exists of data being left in the cache until storage is made
available. This situation is not a data integrity or data loss issue, but you must not rely
on the SAN Volume Controller cache as a backup storage mechanism.
When you create a thin-provisioned volume, you can choose the grain size for
allocating space in 32 KB, 64 KB, 128 KB, or 256 KB chunks. The grain size that you
select affects the maximum virtual capacity for the thin-provisioned volume. The default
grain size is 256 KB, and is the strongly recommended option. If you select 32 KB for
the grain size, the volume size cannot exceed 260,000 GB. The grain size cannot be
changed after the thin-provisioned volume is created. Generally, smaller grain sizes
save space but require more metadata access, which can adversely affect
performance. If you are not going to use the thin-provisioned volume as a FlashCopy
source or target volume, use 256 KB to maximize performance. If you are going to use
the thin-provisioned volume as a FlashCopy source or target volume, specify the same
grain size for the volume and for the FlashCopy function.
Thin-provisioned volumes require more I/Os because of directory accesses. For truly
random workloads with 70% read and 30% write, a Thin-Provisioned volume requires
approximately one directory I/O for every user I/O.
The directory is two-way write-back-cached (just like the SAN Volume Controller
fastwrite cache), so certain applications perform better.
Thin-provisioned volumes require more processor processing, so the performance per
I/O Group can also be reduced.
Important:
Keep a warning level on the used capacity so that it provides adequate time to
respond and provision more physical capacity.
Warnings must not be ignored by an administrator.
Use the autoexpand feature of the Thin-Provisioned volumes.
A thin-provisioned volume feature called zero detect provides clients with the ability to
reclaim unused allocated disk space (zeros) when converting a fully allocated volume
to a Thin-Provisioned volume using volume mirroring.
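A minimal sketch of creating a thin-provisioned volume with the options described in the preceding considerations (the pool name, volume name, size, real capacity, and warning threshold are examples):
# Create a 100 GB thin-provisioned volume with 2% initial real capacity,
# autoexpand enabled, a 256 KB grain size, and a warning at 80% used capacity
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name tp_vol01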
Volume mirroring guidelines:
Create or identify two separate storage pools to allocate space for your mirrored
volume.
Allocate the storage pools containing the mirrors from separate storage controllers.
If possible, use a storage pool with MDisks that share the same characteristics.
Otherwise, the volume performance can be affected by the poorer performing MDisk.
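As a hedged sketch of these guidelines (the pool and volume names are examples), a mirrored volume can be created with one copy in each of two pools that are backed by separate storage controllers, or a second copy can be added to an existing volume:
# Create a volume with two copies, one in each storage pool
mkvdisk -mdiskgrp Pool_DS8K:Pool_V7000 -iogrp 0 -size 200 -unit gb -copies 2 -name mirr_vol01
# Or add a second copy to an existing volume
addvdiskcopy -mdiskgrp Pool_V7000 existing_vol01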
3.3.10 Host mapping (LUN masking)
For the host and application servers, the following guidelines apply:
Each SAN Volume Controller node presents a volume to the SAN through four ports.
Because two nodes are used in normal operations to provide redundant paths to the same
storage, a host with two HBAs can see multiple paths to each LUN that is presented by the
SAN Volume Controller. Use zoning to limit the pathing from a minimum of two paths to the
maximum that is available of eight paths, depending on the kind of high availability and
performance that you want to have in your configuration.
It is best to use zoning to limit the pathing to four paths. The hosts must run a multipathing
device driver to limit the pathing back to a single device. The multipathing driver supported
and delivered by SAN Volume Controller is the IBM Subsystem Device Driver (SDD).
Native multipath I/O (MPIO) drivers on selected hosts are supported. For operating
system-specific information about MPIO support, see this website:
http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html
You can find the actual version of the Subsystem Device Driver Device Specific Module
(SDDDSM) for IBM products at the following link:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000350
The number of paths to a volume from a host to the nodes in the I/O Group that owns the
volume must not exceed eight, even if eight is not the maximum number of paths
supported by the multipath driver (SDD supports up to 32). To restrict the number of paths
to a host volume, the fabrics must be zoned so that each host FC port is zoned to no more
than two ports from each SAN Volume Controller node in the I/O Group that owns the
volume.
If a host has multiple HBA ports, each port must be zoned to a separate set of SAN
Volume Controller ports to maximize high availability and performance.
To configure greater than 256 hosts, you must configure the host to I/O Group mappings
on the SAN Volume Controller. Each I/O Group can contain a maximum of 256 hosts, so it
is possible to create 1,024 host objects on an eight-node SAN Volume Controller clustered
system. Volumes can only be mapped to a host that is associated with the I/O Group to
which the volume belongs.
Port masking
You can use a port mask to control the node target ports that a host can access, which
satisfies two requirements:
As part of a security policy, to limit the set of WWPNs that are able to obtain access to
any volumes through a given SAN Volume Controller port
As part of a scheme to limit the number of logins with mapped volumes visible to a host
multipathing driver, such as SDD, and thus limit the number of host objects configured
without resorting to switch zoning
The port mask is an optional parameter of the mkhost and chhost commands. The port
mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all
ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value
is 1111 (all ports enabled).
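For example (the host name and WWPNs are illustrative), a host can be restricted to SAN Volume Controller ports 1 and 2 at creation time, and the mask can be changed later:
# Create a host that can log in only through node ports 1 and 2 (mask 0011)
mkhost -name aixhost01 -fcwwpn 10000000C9123456:10000000C9123457 -mask 0011
# Later, open all four node ports to the host
chhost -mask 1111 aixhost01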
The SAN Volume Controller supports connection to the Cisco MDS family and Brocade
family. See the following website for the latest support information:
http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html
3.3.11 Advanced Copy Services
The SAN Volume Controller offers these Advanced Copy Services:
FlashCopy
Metro Mirror
Global Mirror
Multipathing:
The following list suggests the number of paths per volume (n+1 redundancy):
With two HBA ports, zone the HBA ports to the SAN Volume Controller ports 1 to 2
for a total of four paths.
With four HBA ports, zone the HBA ports to the SAN Volume Controller ports 1 to 1
for a total of four paths.
Optional (n+2 redundancy):
With four HBA ports, zone the HBA ports to the SAN Volume Controller ports 1 to 2 for
a total of eight paths.
We use the term HBA port to describe the SCSI initiator. We use the term SAN Volume
Controller port to describe the SCSI target.
The maximum number of host paths per volume must not exceed eight.
Figure 3-21 Replication and storage layers
SAN Volume Controller Advanced Copy Services must apply the following guidelines.
FlashCopy guidelines
Consider these FlashCopy guidelines:
Identify each application that must have a FlashCopy function implemented for its volume.
FlashCopy is a relationship between volumes. Those volumes can belong to separate
storage pools and separate storage subsystems.
You can use FlashCopy for backup purposes by interacting with the Tivoli Storage
Manager Agent, or for cloning a particular environment.
Define which FlashCopy best fits your requirements: No copy, Full copy, Thin-Provisioned,
or Incremental.
Define which FlashCopy rate best fits your requirement in terms of the performance and
the amount of time to complete the FlashCopy. Table 3-3 shows the relationship of the
background copy rate value to the attempted number of grains to be split per second.
Define the grain size that you want to use. A grain is the unit of data that is represented by
a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer
FlashCopy elapsed time and a higher space usage in the FlashCopy target volume.
Layers: SAN Volume Controller 6.3 introduces a new property for the clustered system
that is called layer. This property is used when a copy services partnership exists between
a SAN Volume Controller and an IBM Storwize V7000. There are two layers: replication
and storage. All SAN Volume Controller clustered systems are replication layers and
cannot be changed. By default, the IBM Storwize V7000 is a storage layer and must be
changed with the CLI command chsystem before you use it to make any copy services
partnership with the SAN Volume Controller. Figure 3-21 shows an example for replication
and storage layer.
(Figure 3-21 depicts SVC clusters A and B and a Storwize V7000 cluster C in the replication layer, with partnerships among them, and a Storwize V7000 cluster D and a Storwize V3700 cluster E in the storage layer presenting volumes to the SVC.)
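For example, before a Storwize V7000 (such as cluster C in Figure 3-21) can form a copy services partnership with the SAN Volume Controller, its layer is changed on the Storwize system itself; a hedged sketch:
# Run on the Storwize V7000 CLI: move the system from the storage layer to the
# replication layer before creating the copy services partnership
chsystem -layer replication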
Smaller grain sizes can have the opposite effect. Remember that the data structure and
the source data location can modify those effects.
In an actual environment, check the results of your FlashCopy procedure in terms of the
data that is copied at every run and in terms of elapsed time, comparing them to the new
SAN Volume Controller FlashCopy results. Eventually, adapt the grains per second and the
copy rate parameters to fit your environment's requirements.
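For example (the volume and mapping names are illustrative), a FlashCopy mapping with a 50% background copy rate and a 256 KB grain size can be defined and started as follows:
# Define the FlashCopy mapping between the source and target volumes
mkfcmap -source vol01 -target vol01_fc -copyrate 50 -grainsize 256 -name fcmap01
# Prepare (flush the cache) and start the mapping
startfcmap -prep fcmap01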
Table 3-3 Grain splits per second
User percentage    Data copied per second    256 KB grains per second    64 KB grains per second
1 - 10             128 KB                    0.5                         2
11 - 20            256 KB                    1                           4
21 - 30            512 KB                    2                           8
31 - 40            1 MB                      4                           16
41 - 50            2 MB                      8                           32
51 - 60            4 MB                      16                          64
61 - 70            8 MB                      32                          128
71 - 80            16 MB                     64                          256
81 - 90            32 MB                     128                         512
91 - 100           64 MB                     256                         1024
Metro Mirror and Global Mirror guidelines
SAN Volume Controller supports both intracluster and intercluster Metro Mirror and Global
Mirror. From the intracluster point of view, any single clustered system is a reasonable
candidate for a Metro Mirror or Global Mirror operation. Intercluster operation, however, needs
at least two clustered systems that are separated by a number of moderately high-bandwidth
links.
Figure 3-22 shows a schematic of Metro Mirror connections.
Figure 3-22 Metro Mirror connections
Figure 3-22 contains two redundant fabrics. Part of each fabric exists at the local clustered
system and at the remote clustered system. No direct connection exists between the two
fabrics.
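After the intercluster zoning and partnership are in place, relationships are defined per volume. The following hedged sketch (the volume names and the remote system name are examples) creates a Metro Mirror relationship and a Global Mirror relationship and starts the initial copy:
# Metro Mirror (synchronous) relationship between local and remote volumes
mkrcrelationship -master vol01 -aux vol01_dr -cluster SVC_REMOTE -name mm_rel01
# Global Mirror (asynchronous) relationship; add the -global parameter
mkrcrelationship -master vol02 -aux vol02_dr -cluster SVC_REMOTE -global -name gm_rel01
# Start the initial background synchronization
startrcrelationship mm_rel01
startrcrelationship gm_rel01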
Technologies for extending the distance between two SAN Volume Controller clustered
systems can be broadly divided into two categories:
FC extenders
SAN multiprotocol routers
Due to the more complex interactions involved, IBM explicitly tests products of this class for
interoperability with the SAN Volume Controller. You can obtain the current list of supported
SAN routers in the supported hardware list on the SAN Volume Controller support website:
http://www.ibm.com/storage/support/2145
IBM has tested a number of FC extenders and SAN router technologies with the SAN Volume
Controller. You must plan, install, and test FC extenders and SAN router technologies with the
SAN Volume Controller so that the following requirements are met:
The round-trip latency between sites must not exceed 80 ms (40 ms one way). For Global
Mirror, this limit allows a distance between the primary and secondary sites of up to
8000 km (4970.96 miles) using a planning assumption of 100 km (62.13 miles) per 1 ms of
round-trip link latency.
The latency of long-distance links depends on the technology that is used to implement
them. A point-to-point dark fiber-based link will typically provide a round-trip latency of
1 ms per 100 km (62.13 miles) or better. Other technologies provide longer round-trip
latencies, which affects the maximum supported distance.
The configuration must be tested with the expected peak workloads.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for
SAN Volume Controller intercluster heartbeat traffic. The amount of traffic depends on
how many nodes are in each of the two clustered systems.
Figure 3-23 shows the amount of heartbeat traffic, in megabits per second, that is
generated by various sizes of clustered systems.
Figure 3-23 Amount of heartbeat traffic
These numbers represent the total traffic between the two clustered systems when no I/O
is taking place to mirrored volumes. Half of the data is sent by one clustered system, and
half of the data is sent by the other clustered system. The traffic is divided evenly over all
available intercluster links. Therefore, if you have two redundant links, half of this traffic is
sent over each link during fault-free operation.
The bandwidth between sites must, at the least, be sized to meet the peak workload
requirements, in addition to maintaining the maximum latency that has been specified
previously. You must evaluate the peak workload requirement by considering the average
write workload over a period of one minute or less, plus the required synchronization copy
bandwidth.
With no active synchronization copies and no write I/O to volumes in Metro Mirror or Global
Mirror relationships, the SAN Volume Controller protocols operate with the bandwidth that
is indicated in Figure 3-23. However, you can only determine the true bandwidth that is
required for the link by considering the peak write bandwidth to volumes participating in
Metro Mirror or Global Mirror relationships and adding it to the peak synchronization copy
bandwidth.
If the link between the sites is configured with redundancy so that it can tolerate single
failures, you must size the link so that the bandwidth and latency statements continue to
be true even during single failure conditions.
The configuration must be tested to simulate the failure of the primary site (to test the recovery
capabilities and procedures), including eventual failback to the primary site from the
secondary site.
The configuration must be tested to confirm that any failover mechanisms in the
intercluster links interoperate satisfactorily with the SAN Volume Controller.
The FC extender must be treated as a normal link.
The bandwidth and latency measurements must be made by, or on behalf of, the client.
They are not part of the standard installation of the SAN Volume Controller by IBM. Make
these measurements during installation, and record the measurements. Testing must be
repeated following any significant changes to the equipment that provides the intercluster
link.
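As a hedged example of provisioning for these requirements (the command form and parameter names should be checked against the 7.2 CLI reference, and the bandwidth values are illustrative), the FC partnership is created on both clustered systems with the background copy bandwidth that the link sizing allows:
# Run on each clustered system: partner with the remote system, cap the link at
# 2000 Mbps, and allow 50% of that bandwidth for background copy
mkfcpartnership -linkbandwidthmbits 2000 -backgroundcopyrate 50 SVC_REMOTE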
Global Mirror guidelines
Consider these guidelines:
When using SAN Volume Controller Global Mirror, all components in the SAN must be
capable of sustaining the workload that is generated by application hosts and the Global
Mirror background copy workload. Otherwise, Global Mirror can automatically stop your
relationships to protect your application hosts from increased response times. Therefore, it
is important to configure each component correctly.
Use a SAN performance monitoring tool, such as IBM Tivoli Storage Productivity Center,
which allows you to continuously monitor the SAN components for error conditions and
performance problems. This tool helps you detect potential issues before they affect your
disaster recovery solution.
The long-distance link between the two clustered systems must be provisioned to allow for
the peak application write workload to the Global Mirror source volumes, plus the
client-defined level of background copy.
The peak application write workload ideally must be determined by analyzing the SAN
Volume Controller performance statistics.
Statistics must be gathered over a typical application I/O workload cycle, which might be
days, weeks, or months, depending on the environment on which the SAN Volume
Controller is used. These statistics must be used to find the peak write workload that the
link must be able to support.
Characteristics of the link can change with use; for example, latency can increase as the
link is used to carry an increased bandwidth. The user must be aware of the link's behavior
in such situations and ensure that the link remains within the specified limits. If the
characteristics are not known, testing must be performed to gain confidence in the link's
suitability.
Users of Global Mirror must consider how to optimize the performance of the
long-distance link, which depends on the technology that is used to implement the link. For
example, when transmitting FC traffic over an IP link, it can be desirable to enable jumbo
frames to improve efficiency.
Using Global Mirror and Metro Mirror between the same two clustered systems is
supported.
Using Global Mirror and Metro Mirror between the SAN Volume Controller clustered
system and IBM Storwize systems with a minimum code level of 6.3 is supported.
It is supported for cache-disabled volumes to participate in a Global Mirror relationship;
however, it is not a preferred practice to do so.
The gmlinktolerance parameter of the remote copy partnership must be set to an
appropriate value. The default value is 300 seconds (five minutes), which is appropriate for
most clients.
During SAN maintenance, the user must choose to reduce the application I/O workload for
the duration of the maintenance (so that the degraded SAN components are capable of
the new workload); disable the gmlinktolerance feature; increase the gmlinktolerance
value (meaning that application hosts might see extended response times from Global
Mirror volumes); or stop the Global Mirror relationships.
If the gmlinktolerance value is increased for maintenance lasting x minutes, it must only be
reset to the normal value x minutes after the end of the maintenance activity.
If gmlinktolerance is disabled for the duration of the maintenance, it must be re-enabled
after the maintenance is complete.
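As an illustration only, the following CLI sketch shows one way to query and adjust this value for a maintenance window. The command names are taken from the standard SAN Volume Controller CLI, but verify the attribute names and valid ranges against the CLI reference for your code level:
svcinfo lssystem                        # display system properties, including the current gm_link_tolerance value
svctask chsystem -gmlinktolerance 0     # disable the gmlinktolerance feature for the maintenance window
svctask chsystem -gmlinktolerance 300   # restore the default of 300 seconds after maintenance is complete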
Global Mirror volumes must have their preferred nodes evenly distributed between the
nodes of the clustered systems. Each volume within an I/O Group has a preferred node
property that can be used to balance the I/O load between nodes in that group.
Figure 3-24 shows the correct relationship between volumes in a Metro Mirror or Global
Mirror solution.
Figure 3-24 Correct volume relationship
The capabilities of the storage controllers at the secondary clustered system must be
provisioned to allow for the peak application workload to the Global Mirror volumes, plus
the client-defined level of background copy, plus any other I/O being performed at the
secondary site. Otherwise, the performance of applications at the primary clustered system
can be limited by the performance of the back-end storage controllers at the secondary
clustered system, which reduces the amount of I/O that applications can perform to Global
Mirror volumes.
It is necessary to perform a complete review before using Serial Advanced Technology
Attachment (SATA) for Metro Mirror or Global Mirror secondary volumes. Using a slower
disk subsystem for the secondary volumes for high-performance primary volumes can
mean that the SAN Volume Controller cache might not be able to buffer all the writes, and
flushing cache writes to SATA might slow I/O at the production site.
Storage controllers must be configured to support the Global Mirror workload that is
required of them. You can dedicate storage controllers to only Global Mirror volumes;
configure the controller to guarantee sufficient Quality of Service (QoS) for the disks being
used by Global Mirror; or ensure that physical disks are not shared between Global Mirror
volumes and other I/O (for example, by not splitting an individual RAID array).
MDisks within a Global Mirror storage pool must be similar in their characteristics, for
example, RAID level, physical disk count, and disk speed. This requirement is true of all
storage pools, but it is particularly important to maintain performance when using Global
Mirror.
When a consistent relationship is stopped, for example, by a persistent I/O error on the
intercluster link, the relationship enters the consistent_stopped state. I/O at the primary
site continues, but the updates are not mirrored to the secondary site. Restarting the
relationship begins the process of synchronizing new data to the secondary disk. While
this synchronization is in progress, the relationship is in the inconsistent_copying state.
Therefore, the Global Mirror secondary volume is not in a usable state until the copy has
completed and the relationship has returned to a Consistent state. For this reason, it is
highly advisable to create a FlashCopy of the secondary volume before restarting the
relationship. When started, the FlashCopy provides a consistent copy of the data, even
while the Global Mirror relationship is copying. If the Global Mirror relationship does not
reach the Synchronized state (if, for example, the intercluster link experiences further
persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster
recovery purposes.
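As an illustration of this practice, the following CLI sketch creates and starts a FlashCopy mapping of the secondary volume before the remote copy relationship is restarted. The volume, mapping, and relationship names are placeholders, and the copy rate must be chosen to suit your environment:
svctask mkfcmap -source GM_SEC_VOL -target GM_SEC_PROTECT -name GM_SEC_MAP -copyrate 50
svctask startfcmap -prep GM_SEC_MAP     # prepare and start the FlashCopy of the secondary volume
svctask startrcrelationship GM_REL      # then restart the Global Mirror relationship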
If you plan to use a Fibre Channel over IP (FCIP) intercluster link, it is extremely important
to design and size the pipe correctly.
Example 3-2 shows a best-guess bandwidth sizing formula.
Example 3-2 WAN link calculation example
Amount of write data within 24 hours times 4 to allow for peaks
Translate into MB/s to determine WAN link needed
Example:
250 GB a day
250 GB * 4 = 1 TB
24 hours * 3600 secs/hr. = 86400 secs
1,000,000,000,000 / 86,400 = approximately 12 MB/s,
which means that an OC3 link or higher is needed (155 Mbps or higher)
If compression is available on routers or WAN communication devices, smaller pipelines
might be adequate. Note that workload is probably not evenly spread across 24 hours. If
there are extended periods of high data change rates, consider suspending Global Mirror
during that time frame.
If the network bandwidth is too small to handle the traffic, the application write I/O
response times might be elongated. For the SAN Volume Controller, Global Mirror must
support short-term peak write bandwidth requirements. Remember that SAN Volume
Controller Global Mirror is much more sensitive to a lack of bandwidth than the DS8000.
You must also consider the initial sync and re-sync workload. The Global Mirror
partnership's background copy rate must be set to a value that is appropriate to the link
and secondary back-end storage. The more bandwidth that you give to the sync and
re-sync operation, the less workload can be delivered by the SAN Volume Controller for
the regular data traffic.
Do not propose Global Mirror if the data change rate will exceed the communication
bandwidth or if the round-trip latency exceeds 80 - 120 ms. A greater than 80 ms
round-trip latency requires SCORE/RPQ submission.
3.3.12 SAN boot support
The SAN Volume Controller supports SAN boot or startup for AIX, Microsoft Windows Server,
and other operating systems. SAN boot support can change from time to time, so check the
following websites regularly:
http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html
http://www.ibm.com/systems/support/storage/ssic/interoperability.wss
3.3.13 Data migration from a non-virtualized storage subsystem
Data migration is an extremely important part of a SAN Volume Controller implementation.
Therefore, you must accurately prepare a data migration plan. You might need to migrate your
data for one of these reasons:
To redistribute workload within a clustered system across the disk subsystem
To move workload onto newly installed storage
To move workload off old or failing storage, ahead of decommissioning it
To move workload to rebalance a changed workload
To migrate data from an older disk subsystem to SAN Volume Controller-managed storage
To migrate data from one disk subsystem to another disk subsystem
Because multiple data migration methods are available, choose the method that best fits your
environment, your operating system platform, your kind of data, and your applications
service-level agreement (SLA).
We can define data migration as belonging to three groups:
Based on the operating system's Logical Volume Manager (LVM) or commands
Based on special data migration software
Based on the SAN Volume Controller data migration feature
With data migration, apply the following guidelines:
Choose which data migration method best fits your operating system platform, your kind of
data, and your SLA.
Check the interoperability matrix for the storage subsystem to which your data is being
migrated:
http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html
Choose where you want to place your data after migration in terms of the storage pools
that relate to a specific storage subsystem tier.
Check whether enough free space or extents are available in the target storage pool.
Decide if your data is critical and must be protected by a volume mirroring option or if it
must be replicated in a remote site for disaster recovery.
Prepare offline all of the zoning, LUN masking, and host mappings that you might need, to
minimize downtime during the migration.
Prepare a detailed operation plan so that you do not overlook anything at data migration
time.
Run a data backup before you start any data migration. Data backup must be part of the
regular data management process.
You might want to use the SAN Volume Controller as a data mover to migrate data from a
non-virtualized storage subsystem to another non-virtualized storage subsystem. In this
case, you might have to add additional checks that relate to the specific storage
subsystem to which you want to migrate. Be careful using slower disk subsystems for the
secondary volumes for high-performance primary volumes, because the SAN Volume
Controller cache might not be able to buffer all the writes and flushing cache writes to
SATA might slow I/O at the production site.
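When the SAN Volume Controller data migration feature itself is used, the move is typically one command per volume. The following sketch uses placeholder volume and storage pool names; verify that the target pool has enough free extents before you start:
svcinfo lsmdiskgrp TARGET_POOL          # confirm the free capacity in the target storage pool
svctask migratevdisk -vdisk VOL01 -mdiskgrp TARGET_POOL
svcinfo lsmigrate                       # monitor the progress of the running migrations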
3.3.14 SAN Volume Controller configuration backup procedure
Save the configuration externally when changes, such as adding new nodes, disk
subsystems, and so on, have been performed on the clustered system. Saving the
configuration is a crucial part of SAN Volume Controller management, and various methods
can be applied to back up your SAN Volume Controller configuration. The preferred practice
is to implement an automatic configuration backup by applying the configuration backup
command. We describe this command for the CLI in Chapter 9, SAN Volume Controller
operations using the command-line interface on page 471 and we describe the GUI
operation in Chapter 10, SAN Volume Controller operations using the GUI on page 627.
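For reference, a minimal backup sequence from the CLI looks similar to the following sketch. The cluster address and the workstation path are placeholders, and pscp is the secure copy client that is installed with PuTTY:
svcconfig backup                        # create the svc.config.backup.* files in /tmp on the configuration node
pscp -unsafe superuser@svcclusteripaddress:/tmp/svc.config.backup.* c:\svcbackup\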
3.4 Performance considerations
Although storage virtualization with the SAN Volume Controller improves flexibility and
provides simpler management of a storage infrastructure, it can also provide a substantial
performance advantage for various workloads. The SAN Volume Controller caching capability
and its ability to stripe volumes across multiple disk arrays are the reasons why performance
improvement is significant when implemented with midrange disk subsystems. This
technology is often only provided with high-end enterprise disk subsystems.
To ensure the wanted performance and capacity of your storage infrastructure, undertake a
performance and capacity analysis to reveal the business requirements of your storage
environment. When this analysis is done, you can use the guidelines in this chapter to design
a solution that meets the business requirements.
When discussing performance for a system, always identify the bottleneck and, therefore, the
limiting factor of a given system. You must also consider the component for whose workload
you identify a limiting factor. The component might not be the same component that is
identified as the limiting factor for other workloads.
When designing a storage infrastructure with SAN Volume Controller or implementing SAN
Volume Controller in an existing storage infrastructure, you must consider the performance
and capacity of the SAN, the disk subsystems, the SAN Volume Controller, and the known or
expected workload.
3.4.1 SAN
The SAN Volume Controller now has many models:
2145-8F4
2145-8G4
2145-8A4
2145-CF8
2145-CG8
All of these models can connect to 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps switches. From a
performance point of view, connecting the SAN Volume Controller to 8 Gbps or 16 Gbps
switches is better.
Correct zoning on the SAN switch brings security and performance together. Implement a
dual HBA approach at the host to access the SAN Volume Controller.
3.4.2 Disk subsystems
From a performance perspective, the following guidelines relate to connecting to a SAN
Volume Controller:
Tip: Technically, almost all storage controllers provide both striping (RAID 5 or RAID 10)
and a form of caching. The real benefit is the degree to which you can stripe the data
across all MDisks in a storage pool and therefore have the maximum number of active
spindles at one time. The caching is secondary. The SAN Volume Controller provides
additional caching to the caching that midrange controllers provide (usually a couple of
GB), whereas enterprise systems have much larger caches.
Connect all storage ports to the switch up to a maximum of 16, and zone them to all of the
SAN Volume Controller ports.
Zone all ports on the disk back-end storage to all ports on the SAN Volume Controller
nodes in a clustered system.
Also, ensure that you configure the storage subsystem LUN-masking settings to map all
LUNs that are used by the SAN Volume Controller to all the SAN Volume Controller
WWPNs in the clustered system.
The SAN Volume Controller is designed to handle large quantities of multiple paths from the
back-end storage.
In most cases, the SAN Volume Controller can improve performance, especially on mid-sized
to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk
systems, for these reasons:
The SAN Volume Controller can stripe across disk arrays, and it can stripe across the
entire set of supported physical disk resources.
The SAN Volume Controller has a 4 GB, 8 GB, or 24 GB cache (48 GB with the optional
processor card, 2145-CG8 only) in the last two models, 2145-CF8 and 2145-CG8,
and it has an advanced caching mechanism.
The SAN Volume Controller is capable of providing automated performance optimization
of hot spots through the use of solid-state drives (SSDs) and Easy Tier.
The SAN Volume Controller large cache and advanced cache management algorithms also
allow it to improve on the performance of many types of underlying disk technologies. The
SAN Volume Controller's capability to manage, in the background, the destaging operations
that are incurred by writes (in addition to still supporting full data integrity) has the potential to
be particularly important in achieving good database performance.
Depending on the size, age, and technology level of the disk storage system, the total cache
available in the SAN Volume Controller can be larger, smaller, or about the same as that
associated with the disk storage. Because hits to the cache can occur in either the upper
(SAN Volume Controller) or the lower (disk controller) level of the overall system, the system
as a whole can take advantage of the larger amount of cache wherever it is located. Thus, if
the storage control level of the cache has the greater capacity, expect hits to this cache to
occur, in addition to hits in the SAN Volume Controller cache.
Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SAN
Volume Controller cannot increase the throughput potential of the underlying disks in all
cases, because this increase depends on both the underlying storage technology and the
degree to which the workload exhibits hotspots or sensitivity to cache size or cache
algorithms.
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, explains the SAN Volume
Controller cache partitioning capability:
http://www.redbooks.ibm.com/abstracts/redp4426.html?Open
3.4.3 SAN Volume Controller
The SAN Volume Controller clustered system is scalable up to eight nodes, and the
performance is nearly linear when adding more nodes into a SAN Volume Controller
clustered system, until it becomes limited by other components in the storage infrastructure.
Although virtualization with the SAN Volume Controller provides a great deal of flexibility, it
does not diminish the necessity to have a SAN and disk subsystems that can deliver the
wanted performance. Essentially, SAN Volume Controller performance improvements are
gained by having as many MDisks as possible, therefore creating a greater level of concurrent
I/O to the back end without overloading a single disk or array.
Assuming that no bottlenecks exist in the SAN or on the disk subsystem, remember that you
must follow specific guidelines when you perform these tasks:
Creating a storage pool
Creating volumes
Connecting to or configuring hosts that must receive disk space from a SAN Volume
Controller clustered system
You can obtain more detailed information about performance and preferred practices for the
SAN Volume Controller in SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
3.4.4 Performance monitoring
Performance monitoring must be an integral part of the overall IT environment. For the SAN
Volume Controller, as for the other IBM storage subsystems, the official IBM tool to collect
performance statistics and supply a performance report is the IBM Tivoli Storage Productivity
Center.
You can obtain more information about using the IBM Tivoli Storage Productivity Center to
monitor your storage subsystem in SAN Storage Performance Management Using Tivoli
Storage Productivity Center, SG24-7364:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
See Chapter 10, SAN Volume Controller operations using the GUI on page 627, for detailed
information about collecting performance statistics.
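If you want to confirm that the clustered system is producing the statistics files that such tools consume, a hedged CLI sketch follows. The startstats command and its interval range should be verified against the CLI reference for your code level:
svctask startstats -interval 5          # write statistics samples every 5 minutes to /dumps/iostats on each node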
Chapter 4. SAN Volume Controller initial
configuration
In this chapter, we discuss the following topics:
Managing the cluster
SAN Volume Controller Hardware Management Console
SAN Volume Controller initial configuration steps
4.1 Managing the cluster
You can manage the SAN Volume Controller in many ways. The following methods are the
most common:
Using the SAN Volume Controller Management GUI
Using a PuTTY-based SAN Volume Controller command-line interface (CLI)
Using IBM Tivoli Storage Productivity Center (TPC)
Figure 4-1 shows the various ways to manage a SAN Volume Controller cluster.
Figure 4-1 SAN Volume Controller cluster management
Note that you have full management control of the SAN Volume Controller regardless of
which method you choose. IBM Tivoli Storage Productivity Center is a robust software
product with various functions that must be purchased separately.
If a SAN Volume Controller cluster is already installed in your environment, you might be
using the SAN Volume Controller Console (Hardware Management Console, or HMC). You can
still use it together with IBM Tivoli Storage Productivity Center. If you use the separately
purchased IBM System Storage Productivity Center (SSPC), which is no longer offered, you
can log in to your SAN Volume Controller from only one of them at a time.
If you decide to manage your SAN Volume Controller cluster with the SAN Volume Controller
CLI, it does not matter if you are using the SAN Volume Controller Console or IBM Tivoli
Storage Productivity Center server, because the SAN Volume Controller CLI is located on the
cluster and accessed through the Secure Shell (SSH), which can be installed anywhere.
4.1.1 Network requirements for SAN Volume Controller
To plan your installation, consider the TCP/IP address requirements of the IBM SAN Volume
Controller cluster and the requirements for the SAN Volume Controller cluster to access other
services. You must also plan the address allocation and the Ethernet router, gateway, and
firewall configuration to provide the required access and network security.
Figure 4-2 shows the TCP/IP ports and services that are used by the SAN Volume Controller.
Figure 4-2 TCP/IP ports
For more information about TCP/IP prerequisites, see Chapter 3, Planning and
configuration on page 71.
To assist you in starting a SAN Volume Controller initial configuration, Figure 4-3 shows a
common flowchart that covers all of the types of management.
Figure 4-3 SAN Volume Controller initial configuration flowchart
In the next sections, we describe each of the steps shown in Figure 4-3.
4.2 Setting up the SAN Volume Controller cluster
This section provides the step-by-step instructions that are needed to create the cluster. You
must create a cluster to use SAN Volume Controller virtualized storage. The first phase to
create a cluster is performed from the front panel of the SAN Volume Controller; see 4.2.3,
Initiating cluster from the front panel on page 123. The second phase is performed from a
web browser accessing the management GUI; see 4.3, Configuring the GUI on page 125.
4.2.1 Introducing the service panels
This section gives you an overview of the service panels that are available, depending on
your SAN Volume Controller nodes. Use Figure 4-4 as a reference for the SAN Volume
Controller 2145-8F2 and 2145-8F4 node model buttons to press in the following steps.
Figure 4-4 SAN Volume Controller 8F2 node and SAN Volume Controller 8F4 node front and operator
panel
Use Figure 4-5 for the SAN Volume Controller Node 2145-8G4 and 2145-8A4 models.
Figure 4-5 SAN Volume Controller 8G4 node front and operator panel
Use Figure 4-6 as a reference for the SAN Volume Controller Node 2145-CF8 model; the
figure shows the CF8 models front panel.
Figure 4-6 CF8 front panel
See Figure 4-7 for the SAN Volume Controller Node 2145-CG8 model.
Figure 4-7 SAN Volume Controller CG8 node front and operator panel
SAN Volume Controller V6.1 and later code levels introduce a new method for performing
service tasks. In addition to being able to perform service tasks from the front panel, you can
also service a node through an Ethernet connection using either a web browser or the CLI.
An additional Service IP address for each node canister is required. For more details, see
4.3.3, Configuring the Service IP Addresses on page 136 and 10.17, Service Assistant Tool
with the GUI on page 850.
4.2.2 Prerequisites
Ensure that the SAN Volume Controller nodes are physically installed and that Ethernet and
Fibre Channel (FC) connectivity has been correctly configured. For information about physical
connectivity to the SAN Volume Controller, see Chapter 3, Planning and configuration on
page 71.
Prior to configuring the cluster, ensure that the following information is available:
License
The license indicates whether the client is permitted to use FlashCopy, MetroMirror, or
both. It also indicates how much capacity the client is licensed to virtualize.
For IPv4 addressing:
Cluster IPv4 addresses: These addresses include one address for the cluster and
another address for the service address.
IPv4 subnet mask.
Gateway IPv4 address.
For IPv6 addressing:
Cluster IPv6 addresses: These addresses include one address for the cluster and
another address for the service address.
IPv6 prefix.
Gateway IPv6 address.
You perform the first step to create a cluster from the front panel of the SAN Volume
Controller. The second step is taken from a web browser accessing the management GUI.
4.2.3 Initiating cluster from the front panel
When the hardware is physically installed into racks, complete the following steps to initially
configure the cluster through the physical service panel. See 4.2.1, Introducing the service
panels on page 120. Follow these steps:
1. Choose any node that will be a member of the cluster that is being created.
2. Press and release the up or down button until Actions is displayed.
3. Press and release the select button.
4. Depending on whether you are creating a cluster with an IPv4 address or an IPv6
address, press and release the up or down button until either New Cluster IPv4? or New
Cluster IPv6? is displayed.
Figure 4-8 shows the various options for the cluster creation.
Figure 4-8 Cluster IPv4? and Cluster IPv6? options on the front panel display
Important: The IBM SAN Volume Controller must always be configured in a cluster. A
stand-alone configuration is not supported and can lead to data loss and degraded performance.
Nodes: To add additional nodes to your cluster, use a separate process after you have
successfully created and initialized the cluster on the selected node.
Important: If a timeout occurs when you enter the input for the fields during these
steps, you must begin again from step 2. All of the changes are lost, so be sure to have
all of the information available before beginning again.
If the New Cluster IPv4? or New Cluster IPv6? actions are displayed, move directly to
step 5.
If the New Cluster IPv4? or New Cluster IPv6? actions are not displayed, this node is
already a member of a cluster. Perform these steps:
a. Press and release the up or down button until Actions is displayed.
b. Press and release the select button to return to the Main Options menu.
c. Press and release the up or down button until Cluster: is displayed. The name of the
cluster to which the node belongs is displayed on line two of the panel.
In this case, there are two options:
If you want to delete this node from the cluster, follow these steps:
i. Press and release the up or down button until Actions is displayed.
ii. Press and release the select button.
iii. Press and release the up or down button until Remove Cluster? is displayed.
iv. Press and hold the up button.
v. Press and release the select button.
vi. Press and release the up or down button until Confirm remove? is displayed.
vii. Press and release the select button.
viii. Release the up button, which deletes the cluster information from the node.
ix. Go back to step 1 on page 123 and start again.
If you do not want this node to be removed from an existing cluster, review the situation
and determine the correct nodes to include in the new cluster.
5. Press and release the select button to create the new cluster.
6. Press and release the select button again to modify the IP.
7. Use the up or down navigation button to change the value of the first field of the
IP address to the value that has been chosen.
8. Use the right navigation button to move to the next field. Use the up or down navigation
button to change the value of this field.
9. Repeat step 7 for each of the remaining fields of the IP address.
10. When the last field of the IP address has been changed, press the select button.
11. Press the right arrow button:
For IPv4, IPv4 Subnet: is displayed.
For IPv6, IPv6 Prefix: is displayed.
12. Press the select button.
IPv4 and IPv6:
For IPv4, pressing and holding the up or down buttons will increment or decrease
the IP address field by units of 10. The field value rotates from 0 to 255 with the
down button, and from 255 to 0 with the up button.
For IPv6, the address and the gateway address consist of eight 4-digit hexadecimal
values. Enter the full address by working across a series of four panels to update
each of the 4-digit hexadecimal values that make up the IPv6 addresses. The
panels consist of eight fields, where each field is a 4-digit hexadecimal value.
13. Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were
changed. There is only a single field for IPv6 Prefix.
14. When the last field of IPv4 Subnet/IPv6 Prefix has been changed, press the select button.
15. Press the right navigation button:
For IPv4, IPv4 Gateway: is displayed.
For IPv6, IPv6 Gateway: is displayed.
16. Press the select button.
17. Change the fields for the appropriate gateway in the same way that the IPv4/IPv6 address
fields were changed.
18. When the changes to all of the Gateway fields have been made, press the select button.
19. To review the settings before creating the cluster, use the right and left buttons. Make any
necessary changes, use the right and left buttons to see Confirm Created?, and press
the select button.
20. After you complete this task, the following information is displayed on the service display
panel:
Cluster: is displayed on line one.
A temporary, system-assigned cluster name that is based on the IP address is
displayed on line two.
If the cluster is not created, Create Failed: is displayed on line one of the service display.
Line two contains an error code. See the error codes that are documented in the IBM
System Storage SAN Volume Controller: Service Guide, GC26-7901, to identify the
reason why the cluster creation failed and the corrective action to take.
When you have created the cluster from the front panel with the correct IP address format,
you can finish the cluster configuration by accessing the management GUI, completing the
Create Cluster wizard, and adding other nodes to the cluster.
4.3 Configuring the GUI
After you have performed the activities in 4.2, Setting up the SAN Volume Controller cluster
on page 120, complete the cluster setup by using the SAN Volume Controller Console. Follow
the steps that are described in 4.3.1, Completing the Create Cluster Wizard on page 126, to
create the cluster and complete the configuration.
Important: At this time, do not repeat this procedure to add other nodes to the cluster.
To add nodes to the cluster, follow the steps described in 9.10.2, Adding a node on
page 530 and in 10.12.3, Adding a node to the SAN Volume Controller clustered system
on page 791.
Important: Make sure that the SAN Volume Controller cluster IP address (svcclusterip)
can be reached successfully using a ping command from the network.
4.3.1 Completing the Create Cluster Wizard
You can easily access the management GUI by opening any supported web browser. Follow
these steps:
1. Open the Web GUI from the SSPC Console or from a supported web browser on any
workstation that can communicate with the cluster. We suggest that you use Firefox
Version 6 for your web browser:
Open a supported web browser and point to the IP address that you entered in step 7
on page 124:
http://svcclusteripaddress/
(Note that it redirects you to https://svcclusteripaddress/, which is the default for
access to the SAN Volume Controller cluster.)
Figure 4-9 shows the SAN Volume Controller 6.3 Welcome window.
Figure 4-9 Welcome window
2. Enter the default superuser password: passw0rd (with a zero) and click Continue, as
shown in Figure 4-10.
Figure 4-10 Login window
3. On the next page, read the license agreement carefully. To agree with it, select I agree
with the terms in the license agreement and click Next, as shown in Figure 4-11.
Figure 4-11 License Agreement window
4. On the Name, Date, and Time window (Figure 4-12), fill in the following details:
A cluster name (System Name): This name is case-sensitive and can consist of A to Z,
a to z, 0 to 9, and the underscore (_). It cannot start with a number. It has a minimum of
one character and a maximum of 60 characters.
A Time Zone: You can select the time zone for the cluster here.
A Date and a Time: Here, you can change the date and time of your cluster. If you use
a Network Time Protocol (NTP) server, enter the IP address of the NTP server by
selecting Set NTP Server IP Address.
Click Next to confirm your changes.
Figure 4-12 Name, Date, and Time window
5. The Change Date and Time Settings window opens for you to complete the updates on
the cluster; see Figure 4-13. When the task completes, click Close.
Figure 4-13 Change Date and Time Settings window
6. Next, the System License window opens, as shown in Figure 4-14. To continue, fill out the
fields for Virtualization Limit, FlashCopy Limit, Global and Metro Mirror Limit, and
Real-Time Compression Limit for the number of terabytes that are licensed. If you do not
have a license for any of these features, leave the value at 0. Click Next.
Figure 4-14 System License Settings
7. The Configure Email Event Notification window opens, as shown in Figure 4-15.
Figure 4-15 Configure Email Event Notification window
If you do not want to configure it or if you want to configure it later, click Next and go to
step 8 on page 132.
To ensure that your system continues to run smoothly, you can enable email event
notifications.
Email event notifications send messages about error, warning, or informational events and
inventory reports to an email address of local or remote support personnel. Ensure that all
the information is valid or that email notification is disabled.
If you want to configure email event notifications, click Configure Email Event
Notifications and a wizard begins. Enter the information:
a. On the first page, as shown in Figure 4-16, fill in the information that is required to
enable IBM Support personnel to contact this person to assist with problem resolution
(Contact Name, Email Reply Address, Machine Location, and Phone). Ensure that all
contact information is valid. Then, click Next.
Figure 4-16 Define Company Contact information
b. On the next page, as shown in Figure 4-17, configure at least one email server that is
used by your site and optionally, enable inventory reporting. Enter a valid IP address
and a server port for each added server. Ensure that the email servers are valid.
Inventory reports allow IBM service personnel to proactively notify you of any known
issues with your system. To activate the Inventory Service, click Enable inventory
reporting and choose a Reporting Interval in this window. Click Next.
Figure 4-17 Configure Email Servers and Inventory Reporting window
c. Next, as shown on Figure 4-18, you can configure email addresses to receive
notifications. It is a preferred practice to select Support for User Type, enter a user's
email address, and select Error for Event notifications. IBM service personnel will
notify this person if an error condition occurs on your system. Ensure that all email
addresses are valid. Click Next.
Figure 4-18 Configure Email Addresses window
d. The last window, Figure 4-19, is a summary of your Email Event Notification wizard.
Click Finish to complete the setup.
Figure 4-19 Email event notification Summary window
e. The wizard is now closed and the additional information has been added, as shown in
Figure 4-20. You can edit or discard your changes from this window. Then, click Next.
Figure 4-20 Configure Email Event Notification window with configuration information
8. Next, you can add available nodes to your cluster; see Figure 4-21.
Figure 4-21 Hardware window
Follow these steps:
a. To complete this operation, click an empty node position to view the candidate nodes.
b. For an empty slot, select the node that you want to add to your cluster using the
drop-down list. Change its name and click Add Node, as shown in Figure 4-22.
Figure 4-22 Add a node to the cluster
c. A pop-up window opens to inform you about the amount of time that is required to add
a node to the cluster; see Figure 4-23. If you want to add it, click OK.
Important: Remember that you must have at least two nodes per I/O Group. Add
your available nodes in sequence.
Figure 4-23 Warning message
d. The Add New Node window opens for you to complete the update on the cluster, as
shown on Figure 4-24. When the task is complete, click Close.
Figure 4-24 Add New Node window
When your node has been added to the cluster successfully, you see an updated view
of Figure 4-21, as shown in Figure 4-25.
Figure 4-25 Hardware window with a second node added to the cluster
9. Your cluster has been created successfully. However, you must complete several
remaining tasks before you use the cluster, such as changing the default superuser
password or defining an IP address for service. We guide you through these tasks in the
following sections.
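If you prefer the command line, the node addition that is performed in step 8 can also be done from the CLI. The following sketch assumes that the candidate node's panel name is already known; the panel name, node name, and I/O Group are placeholders:
svcinfo lsnodecandidate                 # list nodes that are visible on the SAN but not yet part of a cluster
svctask addnode -panelname 104603 -iogrp 0 -name SVCNode2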
4.3.2 Changing the default superuser password
Follow these steps to change the default superuser password:
1. Log in to the cluster using your web browser, and enter the user superuser and its default
password: passw0rd (with a zero), as shown in Figure 4-26. Then, click Login.
Figure 4-26 Login window
2. From the dynamic menu, select Access → Users, as shown in Figure 4-27.
Figure 4-27 Users windows
3. Right-click the superuser user, and select Properties, as shown in Figure 4-28.
Figure 4-28 Edit superuser settings window
4. For Password, click Change, as shown in Figure 4-29.
Figure 4-29 User Properties windows
5. Enter the new password twice and validate your change by clicking OK, as shown in
Figure 4-30.
Figure 4-30 Modifying the superuser password
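The same change can also be made from the CLI after you log in with SecurityAdmin authority. The following one-line sketch uses a placeholder password:
svctask chuser -password NewPassw0rd superuser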
4.3.3 Configuring the Service IP Addresses
Configuring this IP address is important, because it lets you access the Service Assistant
Tool. If there is an issue with a node, it allows you to view a detailed status and error
summary, and manage service actions on it. Follow these steps:
1. To configure the Service IP Addresses, select Configuration → Network, as shown in
Figure 4-31.
Figure 4-31 Network window
2. Select Service IP addresses, as shown in Figure 4-32.
Figure 4-32 Service IP Addresses window
3. Select a node and click the port to which you want to assign a service IP address. See
Figure 4-33.
Figure 4-33 Configure Service IP window
4. Depending on whether you have installed an IPv4 or an IPv6 cluster, enter the following
information:
For IPv4:
Type an IPv4 address in the IP Address field.
Type an IPv4 Subnet Mask.
Type an IPv4 gateway in the Gateway field.
For IPv6:
Select Show IPv6.
Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of
0 to 127.
Type an IPv6 address in the IP Address field.
Type an IPv6 gateway in the Gateway field.
5. Repeat steps 3 and 4 for each node in your cluster.
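The service IP addresses can also be set per node from the CLI. The following sketch uses placeholder addresses; verify the chserviceip syntax against the CLI reference for your code level:
svctask chserviceip -node node1 -ip 10.0.1.121 -mask 255.255.255.0 -gw 10.0.1.1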
4.3.4 Postrequisites
Perform the following steps to complete the SAN Volume Controller cluster configuration. We
explain all of these steps in greater detail in Chapter 9, SAN Volume Controller operations
using the command-line interface on page 471 and in Chapter 10, SAN Volume Controller
operations using the GUI on page 627.
1. Configure the SSH keys for the command-line user, as shown in 4.4, Secure Shell
overview on page 138.
2. Configure user authentication and authorization.
3. Set up event notifications and inventory reporting.
4. Create the storage pools.
5. Add an MDisk to the storage pool.
6. Identify and create volumes.
7. Create host objects and map the volumes to them.
8. Identify and configure the FlashCopy mappings and Metro Mirror relationship.
9. Back up configuration data.
4.4 Secure Shell overview
Secure Shell (SSH) key authentication has not been necessary for the GUI since SAN
Volume Controller 5.1. Also, it is not required for the SAN Volume Controller CLI. Beginning
with SAN Volume Controller 6.3, you can use password authentication, SSH key
authentication, or both for the SAN Volume Controller CLI. We explain SSH in the following
sections.
The connection is secured by means of a private key and a public key pair:
1. A public key and a private key are generated together as a pair.
2. A public key is uploaded to the SSH server (SAN Volume Controller cluster).
3. A private key identifies the client. The private key is checked against the public key during
the connection. The private key must be protected.
4. Also, the SSH server must identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.
SSH is the communication vehicle between the management system (the System Storage
Productivity Center or any workstation) and the SAN Volume Controller cluster.
The SSH client provides a secure environment from which to connect to a remote machine. It
uses the principles of public and private keys for authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administrative and service functions on the cluster. Each key pair is associated with a
Tip: If you choose not to create an SSH key pair, you can still access the SAN Volume
Controller cluster using the SAN Volume Controller CLI, as long as you have a user
password. You will be authenticated through the user name and password.
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
To use the CLI, an SSH client must be installed on that system; the SSH key pair must be
generated on the client system; and the client's SSH public key must be stored on the SAN
Volume Controller clusters.
You must pre-install the freeware implementation of SSH-2 for Microsoft Windows, which is
called PuTTY, on the System Storage Productivity Center or any other workstation. This
software provides the SSH client function for users who are logged in to the SAN Volume
Controller Console and who want to invoke the CLI to manage the SAN Volume Controller
cluster.
4.4.1 Generating public and private SSH key pairs using PuTTY
Perform the following steps to generate SSH keys on the SSH client system:
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 4-34), generate the keys:
a. Select SSH-2 RSA.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.
Figure 4-34 PuTTY Key Generator GUI
3. Move the cursor onto the blank area to generate the keys.
4. After the keys are generated, save them for later use:
a. Click Save public key, as shown in Figure 4-35.
Figure 4-35 Saving the public key
b. You are prompted for a name, for example, pubkey, and a location for the public key, for
example, C:\Support Utils\PuTTY. Click Save.
If another name and location are chosen, ensure that you maintain a record of the
name and location. You must specify the name and location of this SSH public key in
the steps in 4.4.2, Uploading the SSH public key to the SAN Volume Controller cluster
on page 141.
c. In the PuTTY Key Generator window, click Save private key.
To generate keys: The blank area is the large blank rectangle on the GUI inside the
section of the GUI labeled Key (Figure 4-34). Continue to move the mouse pointer over
the blank area until the progress bar reaches the far right. This action generates
random characters to create a unique key pair.
Tip: The PuTTY Key Generator saves the public key with no extension, by default.
Use the string pub in naming the public key, for example, pubkey, to differentiate the
SSH public key from the SSH private key easily.
d. You are prompted with a warning message, as shown in Figure 4-36. Click Yes to save
the private key without a passphrase.
Figure 4-36 Saving the private key without a passphrase
e. When prompted, enter a name, for example, icat, and a location for the private key, for
example, C:\Support Utils\PuTTY. Click Save.
We suggest that you use the default name icat.ppk, because in SAN Volume
Controller clusters running on versions prior to SAN Volume Controller 5.1, this key has
been used for icat application authentication and must have this default name.
5. Close the PuTTY Key Generator GUI.
6. Navigate to the directory, for example, C:\Support Utils\PuTTY, where the private key was
saved.
4.4.2 Uploading the SSH public key to the SAN Volume Controller cluster
After you have created your SSH key pair, you need to upload your SSH public key onto the
SAN Volume Controller cluster:
1. From your browser, enter https://svcclusteripaddress/.
Or, from the GUI interface, go to the Access management interface, as shown in
Figure 4-27. Select Users.
2. On the next window, as shown in Figure 4-37, select New User from the list to create a
new user, and then, click Go.
Private key extension: The PuTTY Key Generator saves the private key with the
PPK extension.
Figure 4-37 Create a new user
3. From the window to create a New User, as shown in Figure 4-38, type the Name (user ID)
that you want to create and type the password twice. Select the access level that you want
to assign to your user (remember that the Security Administrator (SecurityAdmin) is the
maximum access level). Select the location from which you want to upload the SSH Public
Key file that you have created for this user. Click OK.
Figure 4-38 Create user name and password
4. You have completed your user creation process and uploaded the user's SSH public key
that will be paired later with the user's private .ppk key, as described in 4.4.3, Configuring
the PuTTY session for the CLI on page 143. Figure 4-41 shows the successful upload of
the SSH admin key.
You have now completed the setup requirements for the SAN Volume Controller cluster using
the SAN Volume Controller cluster web interface.
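If CLI access is already available (for example, as superuser with password authentication), the same user creation can be scripted. The following sketch assumes that the public key file has already been copied to the configuration node; the user name, user group, and key path are placeholders, and the -keyfile parameter should be verified against the CLI reference for your code level:
svctask mkuser -name redbookuser -usergrp SecurityAdmin -keyfile /tmp/pubkey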
4.4.3 Configuring the PuTTY session for the CLI
Before the CLI can be used, you must configure the PuTTY session either by using the SSH
keys that were generated earlier in 4.4.1, Generating public and private SSH key pairs using
PuTTY on page 139, or by user name and password if you have configured the user without an SSH key.
Perform these steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center on a Microsoft Windows desktop, select
Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-39), from the Category pane on the left,
click Session, if it is not selected.
Figure 4-39 PuTTY Configuration window
3. In the right pane, under the Specify the destination you want to connect to section, select
SSH. Under the Close window on exit section, select Only on clean exit, which ensures
that if any connection errors occur, they will be displayed on the users window.
4. From the Category pane on the left, select Connection → SSH to display the PuTTY SSH
connection configuration window, as shown in Figure 4-40.
Tip: The items that you select in the Category pane affect the content that appears in
the right pane.
Figure 4-40 PuTTY SSH connection configuration window
5. In the right pane, for the Preferred SSH protocol version, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select
Connection → SSH → Auth.
7. On Figure 4-41, in the right pane, in the Private key file for authentication: field under the
Authentication parameters section, either browse to or type the fully qualified directory
path and file name of the SSH client private key file, for example, C:\Support
Utils\PuTTY\icat.PPK, that was created earlier.
You can skip the Connection → SSH → Auth part if you created the user only with
password authentication and no SSH key.
Figure 4-41 PuTTY Configuration: Private key file location for authentication
8. From the Category pane on the left side of the PuTTY Configuration window, click
Session.
9. In the right pane, follow these steps, as shown in Figure 4-42:
a. Under the Load, save, or delete a stored session section, select Default Settings,
and click Save.
b. For the Host name (or IP address) field, type the IP address of the SAN Volume
Controller cluster.
c. In the Saved Sessions field, type a name, for example, SVC, to associate with this
session.
d. Click Save.
Figure 4-42 PuTTY Configuration: Saving a session
You can now either close the PuTTY Configuration window or leave it open to continue.
4.4.4 Starting the PuTTY CLI session
The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the
session using these steps:
1. From the SAN Volume Controller Console desktop, open the PuTTY application by
selecting Start → Programs → PuTTY.
2. On the PuTTY Configuration window (Figure 4-43), select the session that was saved
earlier, in our example, ITSO-SVC1, and click Load.
3. Click Open.
Tips:
When you enter the Host name or IP address in PuTTY, type your SAN Volume
Controller user name followed by an At sign (@) followed by your host name or IP
address. This way, you do not need to enter your user name each time that you want to
access your SAN Volume Controller cluster. If you have not created an ssh key, you will
be prompted for the password that you set for the user.
Normally, output that comes from the SAN Volume Controller is wider than the default
PuTTY window size. Change your PuTTY window appearance to use a font with a
character size of 8.
To change, click the Appearance item in the Category tree, as shown in Figure 4-42,
and then, click Font. Choose a font with a character size of 8.
Figure 4-43 Open PuTTY command-line session
4. If this is the first time that you use the PuTTY application since you generated and
uploaded the SSH key pair, a PuTTY Security Alert window opens to prompt you to confirm
the SSH host key of the cluster, which is not yet cached, as shown in
Figure 4-44. Click Yes to cache the host key and invoke the CLI.
Figure 4-44 PuTTY Security Alert
5. As shown in Example 4-1, the private key that is used in this PuTTY session is now
authenticated against the public key that was uploaded to the SAN Volume Controller
cluster.
Example 4-1 Authenticating
Using username "admin".
Authenticating with public key "rsa-key-20100909"
IBM_2145:ITSO_SVC1:admin>
You have now completed the required tasks to configure the CLI for SAN Volume Controller
administration from the SAN Volume Controller Console. You can close the PuTTY session.
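For scripted use of the same credentials, the plink utility that is installed with PuTTY can run a single command non-interactively. The key path, user name, and cluster address in the following sketch are the placeholders from the earlier examples:
plink -i "C:\Support Utils\PuTTY\icat.ppk" admin@svcclusteripaddress svcinfo lssystem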
4.4.5 Configuring SSH for AIX clients
To configure SSH for AIX clients, follow these steps:
1. You must be able to reach the SAN Volume Controller cluster IP address successfully by
using the ping command from the AIX workstation from which cluster access is desired.
2. OpenSSL must be installed for OpenSSH to work. Install OpenSSH on the AIX client:
a. You can obtain the installation images at these websites:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully, because OpenSSL must be installed before using
SSH.
3. Generate an SSH key pair:
a. Run the cd command to go to the /.ssh directory.
b. Run the ssh-keygen -t rsa command.
c. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key
(//.ssh/id_rsa)
d. Pressing Enter will use the default file that is shown in parentheses. Otherwise, enter a
file name, for example, aixkey, and then press Enter.
e. The following prompt is displayed:
Enter a passphrase (empty for no passphrase)
When you use the CLI interactively, enter a passphrase because no other
authentication exists when connecting through the CLI. After typing the passphrase,
press Enter.
f. The following prompt is displayed:
Enter same passphrase again:
Type the passphrase again, and then, press Enter again.
g. A message is displayed indicating that the key pair has been created. The private key
file will have the name that was entered previously, for example, aixkey. The public key
file will have the name that was entered previously with an extension of .pub, for
example, aixkey.pub.
Using a passphrase: If you are generating an SSH key pair so that you can use the CLI interactively, use a passphrase so that you must authenticate every time that you connect to the cluster. It is possible to have a passphrase-protected key for scripted usage, but you will have to use the expect command or a similar command to have the passphrase passed into the ssh command.
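The following minimal sketch summarizes the key generation and a first CLI login from AIX. The key file name (aixkey), the user name (admin), and the cluster address (svccluster.example.com) are placeholders for illustration only; the connection succeeds only after the public key (aixkey.pub) has been associated with the SAN Volume Controller user.
# cd /.ssh
# ssh-keygen -t rsa -f aixkey
# ssh -i /.ssh/aixkey admin@svccluster.example.com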
4.5 Using IPv6
You can use IPv4, IPv6, or both in a dual-stack configuration. Migrating to (or from) IPv6 can be done remotely and is nondisruptive.
4.5.1 Migrating a cluster from IPv4 to IPv6
As a prerequisite, enable and configure IPv6 on your local workstation in advance. In our
case, we have configured an interface with IPv4 and IPv6 addresses on the System Storage
Productivity Center, as shown in Example 4-2.
Example 4-2 Output of ipconfig on the System Storage Productivity Center
C:\Documents and Settings\Administrator>ipconfig
Windows IP Configuration
Ethernet adapter IPv6:
Connection-specific DNS Suffix . :
IP Address. . . . . . . . . . . . : 10.0.1.115
Subnet Mask . . . . . . . . . . . : 255.255.255.0
IP Address. . . . . . . . . . . . : 2001:610::115
IP Address. . . . . . . . . . . . : fe80::214:5eff:fecd:9352%5
Default Gateway . . . . . . . . . :
To update a cluster, follow these steps:
1. Select Configuration → Network, as shown in Figure 4-45.
Figure 4-45 Network window
Using IPv6: To remotely access the SAN Volume Controller clusters running IPv6, you are
required to run a supported web browser and have IPv6 configured on your local
workstation.
2. Select Management IP Addresses, and then, click port 1 of one of the nodes, as shown
in Figure 4-46.
Figure 4-46 Management IP Addresses
3. In the window that is shown in Figure 4-47, follow these steps:
a. Select Show IPv6.
b. Type an IPv6 address in the IP Address field.
c. Type an IPv6 gateway in the Gateway field.
d. Type an IPv6 prefix in the Subnet Mask / Prefix field. The Prefix field can have a value
of 0 to 127.
e. Click OK.
Figure 4-47 Modifying the IP addresses: Adding IPv6 addresses
4. A confirmation window opens (Figure 4-48). Click Apply Changes.
Figure 4-48 Confirming the changes
5. The Change Management task is launched on the server, as shown in Figure 4-49. Click
Close when the task completes.
Figure 4-49 Change Management IP window
6. Test the IPv6 connectivity using the ping command from a cmd.exe session on your local
workstation, as shown in Example 4-3.
Example 4-3 Testing IPv6 connectivity to the SAN Volume Controller cluster
C:\Documents and Settings\Administrator>ping
2001:0610:0000:0000:0000:0000:0000:119
Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:
Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Ping statistics for 2001:610::119:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round-trip times in milliseconds:
Minimum = 0ms, Maximum = 3ms, Average = 0ms
7. Test the IPv6 connectivity to the cluster by using an IPv6-capable web browser that is supported by the SAN Volume Controller on your local workstation.
8. Remove the IPv4 addresses in the SAN Volume Controller GUI by accessing the same window, as shown in Figure 4-47, and validate this change by clicking OK.
4.5.2 Migrating a cluster from IPv6 to IPv4
The process of migrating a cluster from IPv6 to IPv4 is identical to the process that we
described in 4.5.1, Migrating a cluster from IPv4 to IPv6 on page 149, except that you add
IPv4 addresses and remove the IPv6 addresses.
Chapter 5. Host configuration
In this chapter, we describe the host configuration procedures that are required to attach
supported hosts to the IBM System Storage SAN Volume Controller.
5.1 Host attachment overview
The SAN Volume Controller supports a wide range of host types (both IBM and non-IBM),
therefore making it possible to consolidate storage in an open systems environment into a
common pool of storage. Then, you can use and manage the storage pool more efficiently as
a single entity from a central point on the SAN. We have discussed the benefits of storage
virtualization in more depth earlier in this book.
The ability to consolidate storage for attached open systems hosts provides the following
benefits:
Unified, easier storage management
Increased utilization rate of the installed storage capacity
Advanced Copy Services functions offered across storage systems from separate vendors
Only one kind of multipath driver to consider when attaching hosts
5.2 SAN Volume Controller setup
In the vast majority of SAN Volume Controller environments, where high performance and
high availability requirements exist, hosts are attached through a storage area network (SAN)
using the Fibre Channel (FC) Protocol (FCP). Even though other supported SAN configurations are available, for example, a single-fabric design, the preferred practice, and also the most common setup, is a SAN that consists of two independent fabrics. This design
provides redundant paths and prevents unwanted interference between fabrics in case an
incident affects one of the fabrics.
Starting with SAN Volume Controller 5.1, IP-based Small Computer System Interface (iSCSI)
connectivity was introduced to provide an alternative method to attach hosts through an
Ethernet local area network (LAN). However, any inter-node communication within the SAN
Volume Controller clustered system, between the SAN Volume Controller and its back-end
storage subsystems, and also between SAN Volume Controller clustered systems solely
takes place through FC. More information on SAN Volume Controller iSCSI connectivity is
available in 5.3, iSCSI on page 159.
Starting with SAN Volume Controller 6.4, Fibre Channel over Ethernet (FCoE) is
supported on models 2145-CG8 and newer. Only 10 GbE lossless Ethernet or faster is
supported.
Redundant paths to volumes can be provided for both SAN-attached and iSCSI-attached
hosts. Figure 5-1 on page 155 shows the types of attachment that are supported with the
SAN Volume Controller 7.2 release.
Figure 5-1 SAN Volume Controller host attachment overview
5.2.1 Fibre Channel and SAN setup overview
Host attachment to the SAN Volume Controller via FC must be made via a SAN fabric,
because direct attachment to the SAN Volume Controller nodes is not supported. For SAN
Volume Controller configurations, it is a preferred practice to use two redundant SAN fabrics.
Therefore, we advise that you have each host equipped with a minimum of two host bus
adapters (HBAs) or at least a dual-port HBA with each HBA connected to a SAN switch in
either fabric.
SAN Volume Controller imposes no particular limit on the actual distance between SAN
Volume Controller nodes and host servers. A server can therefore be attached to an edge
switch in a core-edge configuration, and the SAN Volume Controller cluster resides at the
core of the fabric.
For host attachment, SAN Volume Controller supports up to three inter-switch link (ISL) hops
in the fabric, which means that the server and the SAN Volume Controller can be separated by
up to five FC links, four of which can be 10 km long (6.2 miles) if longwave small form-factor
pluggables (SFPs) are used.
The SAN Volume Controller nodes themselves contain shortwave SFPs and must therefore
be within 300 m (0.186 miles) of the switch to which they attach. The configuration that is
shown in Figure 5-2 on page 156 is therefore supported.
Table 5-1 shows the fabric type that can be used for communicating between hosts, nodes,
and RAID storage systems. These fabric types can be used at the same time.
Table 5-1 SAN Volume Controller communication options
Communication type                   | Host to SAN Volume Controller | SAN Volume Controller to storage | SAN Volume Controller to SAN Volume Controller
Fibre Channel SAN (FC)               | Yes                           | Yes                              | Yes
iSCSI (1 Gbps or 10 Gbps Ethernet)   | Yes                           | No                               | No
FCoE (10 Gbps Ethernet)              | Yes                           | Yes                              | Yes
Figure 5-2 Example of host connectivity
In this figure, the optical distance between SAN Volume Controller Node 1 and Host 2 is
slightly over 40 km (24.85 miles).
To avoid latencies leading to degraded performance, we suggest that you avoid ISL hops
whenever possible. That is, in an optimal setup, the servers connect to the same SAN switch
as the SAN Volume Controller nodes.
Remember these limits when connecting host servers to a SAN Volume Controller:
Up to 256 hosts per I/O Group are supported, which results in a total of 1,024 hosts per
cluster.
Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a
host in each of these groups.
A total of 512 distinct, configured host worldwide port names (WWPNs) is supported per
I/O Group.
This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is
generated for each iSCSI name) associated with all of the hosts that are associated with a
single I/O Group.
Access from a server to a SAN Volume Controller cluster through the SAN fabric is defined by
means of switch zoning.
Consider these rules for zoning hosts with the SAN Volume Controller:
Homogeneous HBA port zones
Switch zones containing HBAs must contain HBAs from similar host types and similar
HBAs in the same host. For example, AIX and Microsoft Windows NT hosts must be in
separate zones, and QLogic and Emulex adapters must also be in separate zones.
HBA to SAN Volume Controller port zones
Place each host's HBA in a separate zone along with one or two SAN Volume Controller ports. If two ports are used, use one from each node in the I/O Group. Do not place more than two
SAN Volume Controller ports in a zone with an HBA, because this design results in more
than the recommended number of paths, as seen from the host multipath driver.
Maximum host paths per logical unit (LU)
For any volume, the number of paths through the SAN from the SAN Volume Controller
nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group
(four paths to each volume that is provided by this I/O Group) are sufficient.
Balanced host load across HBA ports
To obtain the best performance from a host with multiple ports, ensure that each host port
is zoned with a separate group of SAN Volume Controller ports.
Balanced host load across SAN Volume Controller ports
To obtain the best overall performance of the subsystem and to prevent overloading, the
workload to each SAN Volume Controller port must be equal. You can achieve this
balance by zoning approximately the same number of host ports to each SAN Volume
Controller port.
Figure 5-3 on page 158 shows an overview of a configuration where the servers each contain two single-port HBAs. The following characteristics relate to Figure 5-3 on page 158:
Distribute the attached hosts equally between two logical sets per I/O Group, if possible.
Connect hosts from each set to the same group of SAN Volume Controller ports. This port group includes exactly one port from each SAN Volume Controller node in the I/O Group. The zoning defines the correct connections.
Important: A configuration that breaches this rule is unsupported, because it can introduce instability to the environment.
Recommended number of paths per volume (n+1 redundancy):
With two HBA ports, zone the HBA ports to SAN Volume Controller ports 1-to-2 for a total of four paths.
With four HBA ports, zone the HBA ports to SAN Volume Controller ports 1-to-1 for a total of four paths.
Optional (n+2 redundancy):
With four HBA ports, zone the HBA ports to SAN Volume Controller ports 1-to-2 for a total of eight paths.
Terms: Here, we use the term HBA port to describe the SCSI initiator and the term SAN Volume Controller port to describe the SCSI target.
Important: The maximum number of host paths per LU must not exceed eight.
The port groups are defined in the following manner:
Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both
nodes, for example, N1/N2 of I/O Group zero.
Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both
nodes of an I/O Group.
You can create aliases for these port groups (per I/O Group):
Fabric A: IOGRP0_PG1 N1_P1;N2_P1,IOGRP0_PG2 N1_P3;N2_P3
Fabric B: IOGRP0_PG1 N1_P4;N2_P4,IOGRP0_PG2 N1_P2;N2_P2
Create host zones by always using the host port WWPN, plus the PG1 alias for hosts in
the first host set. Always use the host port WWPN, plus the PG2 alias for hosts from the
second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or
PG2 aliases from the specific I/O Groups to the host zone.
Using this schema provides four paths to one I/O Group for each host and helps to maintain
an equal distribution of host connections on the SAN Volume Controller ports. Figure 5-3
shows an overview of this host zoning schema.
Figure 5-3 Overview of four-path host zoning
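As an illustration of this schema, the following hedged sketch shows how the Fabric A aliases and one host zone might be defined on a Brocade Fabric OS switch. The switch type, the WWPNs, and the alias, zone, and configuration names are assumptions for illustration only; adapt them to your fabric, switch vendor, and naming conventions.
alicreate "IOGRP0_PG1", "50:05:07:68:01:40:11:aa; 50:05:07:68:01:40:22:aa"
alicreate "IOGRP0_PG2", "50:05:07:68:01:40:11:cc; 50:05:07:68:01:40:22:cc"
zonecreate "Z_HOST1_HBA1_IOGRP0_PG1", "10:00:00:00:c9:32:a8:65; IOGRP0_PG1"
cfgadd "ITSO_FABRIC_A", "Z_HOST1_HBA1_IOGRP0_PG1"
cfgenable "ITSO_FABRIC_A"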
When possible, use the minimum number of paths necessary to achieve a sufficient level of
redundancy. For the SAN Volume Controller environment, no more than four paths per I/O
Group are required to accomplish this layout.
Remember that all paths must be managed by the multipath driver on the host side. If we
assume a server is connected through four ports to the SAN Volume Controller, each volume
is seen through eight paths. With 125 volumes mapped to this server, the multipath driver has
to support handling up to 1,000 active paths (8 x 125).
You can find configuration and operational details about the IBM Subsystem Device Driver
(SDD) in the Multipath Subsystem Device Driver User's Guide, S7000303, at the following
website:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 5-4 on page 159. You can combine this schema with the
previous four-path zoning schema.
Figure 5-4 Overview of eight-path host zoning
5.3 iSCSI
The iSCSI protocol is a block-level protocol that encapsulates SCSI commands into TCP/IP
packets and, therefore, uses an existing IP network instead of requiring the FC HBAs and
SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. iSCSI connectivity is a
software feature that is provided by the SAN Volume Controller code.
iSCSI-attached hosts can either use a single network connection or multiple network
connections.
Important: iSCSI is supported only for host attachment to the SAN Volume Controller. iSCSI connections from the SAN Volume Controller to back-end storage are not supported.
Each SAN Volume Controller node is equipped with two onboard Ethernet network interface cards (NICs), which are capable of operating at a link speed of 10, 100, or 1000 Mbps. Both of these cards can be used to carry iSCSI traffic. Each node's NIC numbered 1 is used as the primary SAN Volume Controller cluster management port. For optimal performance, we advise that you use a 1 Gb Ethernet connection between the SAN Volume Controller and the iSCSI-attached hosts when using the SAN Volume Controller nodes' onboard NICs.
Starting with the SAN Volume Controller 2145-CG8, an optional 10 Gbps 2-port Ethernet
adapter (Feature Code (FC) 5700) is available. The required 10 Gbps shortwave SFPs are
available as FC 5711. If the 10 GbE option is installed, you cannot install any internal
solid-state drives (SSDs). The 10 GbE option is used solely for iSCSI traffic.
5.3.1 Initiators and targets
An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP
network to an iSCSI target. We refer to a single iSCSI initiator or iSCSI target as an iSCSI
node.
You can use several types of iSCSI initiators in host systems:
Software initiator: Available for most operating systems, for example, AIX, Linux, and
Windows
Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing
unit, which is also known as an iSCSI HBA
You can see the supported operating systems for iSCSI host attachment, as well as the
supported iSCSI HBAs, at the following web sites:
SAN Volume Controller v7.2 Support Matrix:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004450
SAN Volume Controller Information Center:
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
An iSCSI target refers to a storage resource that is located on an iSCSI server, or, to be more
precise, to one of potentially many instances of iSCSI nodes running on that server as a
target.
5.3.2 iSCSI nodes
One or more iSCSI nodes exist within a network entity. The iSCSI node is accessible through
one or more network portals. A network portal is a component of a network entity that has a
TCP/IP network address and can be used by an iSCSI node.
An iSCSI node is identified by its unique iSCSI name and is referred to as an iSCSI qualified
name (IQN). Remember that the purpose of this name is for the identification of the node only, not for the node's address. In iSCSI, the name is separated from the address. This separation allows multiple iSCSI nodes to use the same addresses or, as it is implemented in the SAN Volume Controller, the same iSCSI node to use multiple addresses.
5.3.3 iSCSI qualified name (IQN)
A SAN Volume Controller cluster can provide up to eight iSCSI targets, one per node. Each
SAN Volume Controller node has its own IQN, which by default will be in this form:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
An iSCSI host in SAN Volume Controller is defined by specifying its iSCSI initiator names.
The following example shows an IQN of a Windows server's iSCSI software initiator:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SAN Volume Controller, you must specify the host's initiator IQNs. You can read about host creation in detail in Chapter 9, SAN Volume
Controller operations using the command-line interface on page 471, and in Chapter 10,
SAN Volume Controller operations using the GUI on page 627.
An alias string can also be associated with an iSCSI node. The alias allows an organization to
associate a user-friendly string with the iSCSI name. However, the alias string is not a
substitute for the iSCSI name.
Figure 5-5 on page 161 shows an overview of the iSCSI implementation in the SAN Volume
Controller.
Figure 5-5 SAN Volume Controller iSCSI overview
A host accessing SAN Volume Controller volumes via iSCSI connectivity uses one or more
Ethernet adapters or iSCSI HBAs to connect to the Ethernet network.
Both onboard Ethernet ports of a SAN Volume Controller node can be configured for iSCSI. If iSCSI is used for host attachment, we advise that you dedicate Ethernet port one for SAN Volume Controller management and port two for iSCSI use. This way, port two can be connected to a separate network segment or virtual LAN (VLAN) for iSCSI, because SAN Volume Controller does not support the use of VLAN tagging to separate management and iSCSI traffic.
Note that Ethernet link aggregation (port trunking) or channel bonding for the SAN Volume Controller nodes' Ethernet ports is not supported for the 1 Gbps ports in this release.
For each SAN Volume Controller node, that is, for each instance of an iSCSI target node in
the SAN Volume Controller node, you can define two IPv4 and two IPv6 addresses or iSCSI
network portals.
5.3.4 iSCSI setup for the SAN Volume Controller and host server
You must perform the following procedure when setting up a host server for use as an iSCSI
initiator with SAN Volume Controller volumes. The specific steps vary depending on the
particular host type and operating system that you use.
To configure a host, first select a software-based iSCSI initiator or a hardware-based iSCSI
initiator. For example, the software-based iSCSI initiator can be a Linux or Windows iSCSI
software initiator, and the hardware-based iSCSI initiator can be an iSCSI HBA inside the
host server.
To set up your host server for use as an iSCSI software-based initiator with SAN Volume
Controller volumes, perform the following steps (the CLI is used in this example):
1. Set up your SAN Volume Controller cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the Ethernet ports on the nodes that are in
the I/O Groups that will use the iSCSI volumes.
b. Configure the node Ethernet ports on each SAN Volume Controller node in the
clustered system with the cfgportip command.
c. Verify that you have configured the node and the clustered system's Ethernet ports correctly by reviewing the output of the lsportip and lssystemip commands.
d. Use the mkvdisk command to create volumes on the SAN Volume Controller clustered
system.
e. Use the mkhost command to create a host object on the SAN Volume Controller. It defines the host's iSCSI initiator to which the volumes are to be mapped.
f. Use the mkvdiskhostmap command to map the volume to the host object in the SAN
Volume Controller.
2. Set up your host server:
a. Ensure that you have configured your IP interfaces on the server.
b. Make sure that your iSCSI HBA is ready to use, or install the software for the iSCSI
software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server
iSCSI initiator logs in to the SAN Volume Controller clustered system and discovers the
SAN Volume Controller volumes. The host then creates host devices for the volumes.
3. After the host devices are created, you can use them with your host applications.
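The following hedged sketch illustrates step 1 with SAN Volume Controller CLI commands. The IP addresses, the storage pool name (Pool0), the volume name, the host name, and the Ethernet port number (2) are examples only; substitute the values for your environment. The initiator IQN shown is the Windows example IQN used earlier in this chapter.
svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 2
svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 2
svcinfo lsportip -delim :
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 50 -unit gb -name iscsi_vol01
svctask mkhost -name iscsihost01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01
svctask mkvdiskhostmap -host iscsihost01 iscsi_vol01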
5.3.5 Volume discovery
Hosts can discover volumes through one of the following three mechanisms:
Internet Storage Name Service (iSNS)
The SAN Volume Controller can register itself with an iSNS name server; the IP address
of this server is set using the chsystem command. A host can then query the iSNS server
for available iSCSI targets.
Service Location Protocol (SLP)
The SAN Volume Controller node runs an SLP daemon, which responds to host requests.
This daemon reports the available services on the node. One service is the CIM object
manager (CIMOM), which runs on the configuration node; the iSCSI I/O service can now also be reported.
iSCSI Send Targets request
The host can also send a Send Targets request by using the iSCSI protocol to the iSCSI
TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI
targets before a discovery can be started.
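For example, on a Linux host that uses the open-iscsi software initiator, a Send Targets discovery and login might look like the following hedged sketch; the target portal address is a placeholder for one of the iSCSI IP addresses configured on the SAN Volume Controller node ports.
# iscsiadm -m discovery -t sendtargets -p 10.10.10.11:3260
# iscsiadm -m node --login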
5.3.6 Authentication
The authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct key is not provided by the host, the SAN Volume Controller does not allow it to perform I/O to volumes. Also, you can assign a CHAP secret to the cluster.
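A hedged sketch of setting CHAP secrets with the CLI follows; the host name and the secrets are placeholders, and you should confirm the parameters against the command reference for your code level.
svctask chhost -chapsecret itsohostsecret iscsihost01
svctask chsystem -chapsecret itsoclustersecret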
5.3.7 Target failover
A new feature with iSCSI is the option to move iSCSI target IP addresses between SAN Volume Controller nodes in an I/O Group. IP addresses will only be moved from one node to
its partner node if a node goes through a planned or unplanned restart. If the Ethernet link to
the SAN Volume Controller clustered system fails due to a cause outside of the SAN Volume
Controller (such as the cable being disconnected, the Ethernet router failing, and so on), the
SAN Volume Controller makes no attempt to fail over an IP address to restore IP access to
the cluster. To enable the validation of Ethernet access to the nodes, each node responds to ping at the standard one-per-second rate without frame loss.
The concept that is used for handling the iSCSI IP address failover is called a clustered Ethernet port. A clustered Ethernet port consists of one physical Ethernet port on
each node in the cluster. The clustered Ethernet port contains configuration settings that are
shared by all of these ports.
Figure 5-6 on page 164 shows an example of an iSCSI target node failover. It gives a
simplified overview of what happens during a planned or unplanned node restart in a SAN
Volume Controller I/O Group. This example refers to SAN Volume Controller nodes with no
optional 10GbE iSCSI adapter installed. The following numbered comments relate to the
numbers in Figure 5-6 on page 164:
1. During normal operation, one iSCSI target node instance is running on each SAN
Volume Controller node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target,
including the management addresses if the node acts as the configuration node, are
presented on the two ports (P1/P2) of a node.
2. During a restart of a SAN Volume Controller node (N1), the iSCSI target node, including all of its network portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management
(IPv4/IPv6) IP addresses (if N1 acted as the configuration node), will fail over to
Port1/Port2 of the partner node within the I/O Group, that is, node N2. An iSCSI initiator
running on a server will execute a reconnect to its iSCSI target, that is, the same IP
addresses presented now by a new node of the SAN Volume Controller cluster.
3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP
addresses) running on N2 will fail back to N1. Again, the iSCSI initiator running on a server
will execute a reconnect to its iSCSI target. The management addresses will not fail back.
N2 will remain in the role of the configuration node for this cluster.
Figure 5-6 iSCSI node failover scenario
5.3.8 Host failover
From a host perspective, a multipathing driver (MPIO) is not required to handle a SAN Volume Controller node failover. In the case of a SAN Volume Controller node restart,
the host simply reconnects to the IP addresses of the iSCSI target node that reappear after
several seconds on the ports of the partner node.
A host multipathing driver for iSCSI is required in these situations:
To protect a host from network link failures, including port failures on the SAN Volume
Controller nodes
To protect a host from an HBA failure (if two HBAs are in use)
To protect a host from network failures, if it is connected through two HBAs to two separate
networks
To provide load balancing across the server's HBAs and the network links
The commands for the configuration of the iSCSI IP addresses have been separated from the
configuration of the cluster IP addresses.
The following commands are new commands for managing iSCSI IP addresses:
The lsportip command lists the iSCSI IP addresses that are assigned for each port on
each node in the cluster.
The cfgportip command assigns an IP address to each node's Ethernet port for iSCSI
I/O.
The following commands are new commands for managing the cluster IP addresses:
The lssystemip command returns a list of the cluster management IP addresses that are
configured for each port.
The chsystemip command modifies the IP configuration parameters for the cluster.
For a detailed description about how to use these commands, see Chapter 9, SAN Volume
Controller operations using the command-line interface on page 471.
The parameters for remote services (ssh and web services) remain associated with the
cluster object. During a SAN Volume Controller code upgrade, the configuration settings for
the clustered system are applied to the node Ethernet port 1.
For iSCSI-based access, using redundant network connections and separating iSCSI traffic by using a dedicated network or virtual LAN (VLAN) prevents any NIC, switch, or target port failure from compromising the host server's access to the volumes.
Because both onboard Ethernet ports of a SAN Volume Controller node can be configured for iSCSI, we advise that you dedicate Ethernet port 1 for SAN Volume Controller management and port 2 for iSCSI usage. By using this approach, port 2 can be connected to a dedicated network segment or VLAN for iSCSI. Because SAN Volume Controller does not support the use of VLAN tagging to separate management and iSCSI traffic, you can assign the LAN switch port to a dedicated VLAN to separate SAN Volume Controller management and iSCSI traffic.
5.4 AIX-specific information
The following section details specific information that relates to the connection of AIX-based
hosts in a SAN Volume Controller environment.
5.4.1 Configuring the AIX host
The following list outlines the required steps to attach SAN Volume Controller volumes to an
AIX host:
1. Install the HBAs in the AIX host system.
2. Ensure that you have installed the correct operating systems and version levels on your
host, including any updates and authorized program analysis reports (APARs) for the
operating system.
3. Connect the AIX host system to the FC switches.
4. Configure the FC switch zoning.
5. Install the 2145 host attachment support package; see also 5.4.5, Installing the 2145 host
attachment support package on page 168.
6. Install and configure the Subsystem Device Driver Path Control Module (SDDPCM).
7. Perform the logical configuration on the SAN Volume Controller to define the host,
volumes, and host mapping.
AIX-specific information: In this section, the IBM System p information applies to all
AIX hosts that are listed on the SAN Volume Controller interoperability support website,
including IBM System i partitions and IBM JS blades.
8. Run cfgmgr to discover and configure the SAN Volume Controller volumes.
The following sections detail the current support information. It is vital that you regularly check
the listed websites for any updates.
5.4.2 Operating system versions and maintenance levels
At the time of writing this book, the SAN Volume Controller supports AIX levels from V4.3.3
through V7.1.
The SAN Volume Controller supports the following AIX levels:
AIX V4.3.3
AIX V5.1
AIX V5.2
AIX V5.3
AIX V6.1
AIX V7.1
For the latest information, and device driver support, always use the following website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
5.4.3 HBAs for IBM System p hosts
Ensure that your IBM System p AIX hosts contain the supported HBAs. See the following
website to obtain current interoperability information:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
5.4.4 Configuring fast fail and dynamic tracking
For hosts running AIX V5.2 or later operating systems, enable both fast fail and dynamic
tracking.
Perform the following steps to configure your host system to use the fast fail and dynamic
tracking attributes:
1. Issue the following command to set the FC SCSI I/O Controller Protocol Device to each
adapter:
chdev -l fscsi0 -a fc_err_recov=fast_fail
The preceding command was for adapter fscsi0. Example 5-1 shows the command for
both adapters on our test system running IBM AIX 5L V5.3.
Example 5-1 Enable fast fail
#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed
Important: The maximum number of FC ports that are supported in a single host (or
logical partition) is four. These ports can be four single-port adapters or two dual-port
adapters or a combination, as long as the maximum number of ports, which attach to the
SAN Volume Controller, does not exceed four.
2. Issue the following command to enable dynamic tracking for each FC device:
chdev -l fscsi0 -a dyntrk=yes
The preceding example command was for adapter fscsi0. Example 5-2 shows the
command for both adapters on our test system running AIX 5L V5.3.
Example 5-2 Enable dynamic tracking
#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed
Host adapter configuration settings
You can display the availability of installed host adapters by using the command that is shown
in Example 5-3.
Example 5-3 FC host adapter availability
#lsdev -Cc adapter |grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC
Adapter
You can display the WWPN, along with other attributes, including the firmware level, by using
the command that is shown in Example 5-4. Note that the WWPN is represented as the
Network Address.
Example 5-4 FC host adapter settings and WWPN
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Note: The fast fail and dynamic tracking attributes do not persist through an adapter
delete and reconfigure operation. Thus, if the adapters are deleted and then configured
back into the system, these attributes are lost and must be reapplied.
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
5.4.5 Installing the 2145 host attachment support package
To configure SAN Volume Controller volumes to an AIX host with the proper device type of
2145, you must install the 2145 host attachment support fileset prior to running cfgmgr.
Running cfgmgr prior to installing the host attachment support fileset results in the LUNs
being configured as Other SCSI Disk Drives and they will not be recognized by the
SDDPCM. To correct the device type, you must delete the hdisks by using rmdev -dl hdiskX and then rerun cfgmgr.
Perform the following steps to install the host attachment support package:
1. Access the following website:
http://www.ibm.com/servers/storage/support/software/sdd/downloading.html
2. Select Host Attachment for SDDPCM on AIX.
3. Download the appropriate host attachment package archive for your AIX version; the
fileset that is contained in the package is devices.fcp.disk.ibm.mpio.rte.
4. Follow the instructions that are provided on the website and the readme files to install the
script.
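Assuming that the downloaded package has been extracted into the current directory, the installation and verification might look like the following sketch:
# inutoc .
# installp -ac -d . devices.fcp.disk.ibm.mpio.rte
# lslpp -l devices.fcp.disk.ibm.mpio.rte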
5.4.6 Subsystem Device Driver Path Control Module
The Subsystem Device Driver Path Control Module (SDDPCM) is a loadable path control
module for supported storage devices to supply path management functions and error
recovery algorithms. When the supported storage devices are configured as Multipath I/O
(MPIO) devices, SDDPCM is loaded as part of the AIX MPIO Fibre Channel Protocol (FCP)
or AIX MPIO serial-attached SCSI (SAS) device driver during the configuration.
The AIX MPIO device driver automatically discovers, configures, and makes available all
storage device paths. SDDPCM then manages these paths to provide these functions:
High availability and load balancing of storage I/O
Automatic path-failover protection
Concurrent download of supported storage devices licensed machine code
Prevention of a single-point failure
The AIX MPIO device driver along with SDDPCM enhances the data availability and I/O load
balancing of the SAN Volume Controller volumes.
SDDPCM installation
Download the appropriate version of SDDPCM and install it using the standard AIX
installation procedure. The latest SDDPCM software versions are available at the following
website:
SDD: For AIX hosts, use the Subsystem Device Driver Path Control Module (SDDPCM) as
the multipath software over the existing Subsystem Device Driver (SDD). Although still
supported, a discussion of SDD is beyond the scope of this publication. For information
regarding SDD, see Multipath Subsystem Device Driver User's Guide, GC52-1309.
http://ibm.com/support/entry/portal/Downloads/Hardware/System_Storage/Storage_soft
ware/Other_software_products/System_Storage_Multipath_Subsystem_Device_Driver/
Check the driver readme file and make sure that your AIX system meets all prerequisites.
Example 5-5 shows the appropriate version of SDDPCM that is downloaded into the
/tmp/sddpcm directory. From here, we extract it and initiate the inutoc command, which
generates a dot.toc (.toc) file that is needed by the installp command prior to installing
SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX
host.
Example 5-5 Installing SDDPCM on AIX
# ls -l
total 3232
-rw-r----- 1 root system 1648640 Jul 15 13:24
devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r----- 271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r-- 1 root system 531 Jul 15 13:25 .toc
-rw-r----- 1 271001 449628 1638400 Oct 31 2007 devices.sddpcm.61.rte
-rw-r----- 1 root system 1648640 Jul 15 13:24
devices.sddpcm.61.rte.tar
# installp -ac -d . all
Example 5-6 shows the lslpp command that can be used to check the version of SDDPCM
that is currently installed.
Example 5-6 Checking SDDPCM device driver
# lslpp -l | grep sddpcm
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61
We describe how to enable the SDDPCM web interface in 5.12, Using SDDDSM, SDDPCM,
and SDD web interface on page 222.
5.4.7 Configuring the assigned volume using SDDPCM
We use an AIX host with host name Atlantic to demonstrate attaching SAN Volume Controller
volumes to an AIX host. Example 5-7 shows host configuration prior to configuring the SAN
Volume Controller volumes. The lspv output shows the existing hdisks, and the lsvg output
shows the existing volume group (VG).
Example 5-7 Status of AIX host system Atlantic
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
# lsvg
rootvg
Identifying the WWPNs of the host adapter ports
Example 5-8 shows how the lscfg command can be used to list the WWPNs for all installed adapters. We use these WWPNs later for mapping the SAN Volume Controller volumes.
Example 5-8 HBA information for host Atlantic
# lscfg -vl fcs* | egrep "fcs|Network"
fcs1 U0.1-P2-I4/Q1 FC Adapter
Network Address.............10000000C932A865
Physical Location: U0.1-P2-I4/Q1
fcs2 U0.1-P2-I5/Q1 FC Adapter
Network Address.............10000000C94C8C1C
Displaying the SAN Volume Controller configuration
You can use the SAN Volume Controller CLI to display the host configuration on the SAN
Volume Controller and to validate the physical access from the host to the SAN Volume
Controller. Example 5-9 shows the use of the lshost and lshostvdiskmap commands to
obtain the following information:
We confirm that a host definition has been properly defined for the host Atlantic.
The WWPNs listed in Example 5-8 are logged in with two logins each.
Atlantic has three volumes assigned to each WWPN, and the volume serial numbers are
listed.
Example 5-9 SAN Volume Controller definitions for host system Atlantic
IBM_2145:ITSO_SVC1:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active
IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Atlantic
id name SCSI_id vdisk_id vdisk_name
wwpn vdisk_UID
8 Atlantic 0 14 Atlantic0001
10000000C94C8C1C 6005076801A180E90800000000000060
8 Atlantic 1 22 Atlantic0002
10000000C94C8C1C 6005076801A180E90800000000000061
8 Atlantic 2 23 Atlantic0003
10000000C94C8C1C 6005076801A180E90800000000000062
IBM_2145:ITSO_SVC1:admin>
Discovering and configuring LUNs
The cfgmgr command performs the discovery of the new LUNs and configures them into AIX.
The following command probes the devices on the adapters individually:
# cfgmgr -l fcs1
# cfgmgr -l fcs2
The following command probes the devices sequentially across all installed adapters:
# cfgmgr -vS
The lsdev command lists the three newly configured hdisks that are represented as
MPIO FC 2145 devices, as shown in Example 5-10.
Example 5-10 Volumes from SAN Volume Controller
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 MPIO FC 2145
hdisk4 Available 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
Now, you can use the mkvg command to create a VG with the three newly configured hdisks,
as shown in Example 5-11.
Example 5-11 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2
The lspv output now shows the new VG label on each of the hdisks that were included in the
VGs, as seen in Example 5-12.
Example 5-12 Showing the vpath assignment into the VG
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
hdisk3 0009cdca28b589f5 itsoaixvg active
hdisk4 0009cdca28b87866 itsoaixvg1 active
hdisk5 0009cdca28b8ad5b itsoaixvg2 active
5.4.8 Using SDDPCM
You administer SDDPCM by using the pcmpath command. You use this command to perform all administrative functions, such as displaying and changing the path state.
The pcmpath query adapter command displays the current state of the adapters. In Example 5-13, we can see that both adapters show the optimal status of State=NORMAL and Mode=ACTIVE.
Example 5-13 SDDPCM commands that are used to check the availability of the adapters
# pcmpath query adapter
Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 fscsi1 NORMAL ACTIVE 407 0 6 6
1 fscsi2 NORMAL ACTIVE 425 0 6 6
The pcmpath query device command displays the current state of the devices. In Example 5-14, we can see the State and Mode of each path for each of the defined hdisks. All paths show the optimal status of State=OPEN and Mode=NORMAL.
Additionally, an asterisk (*) that is displayed next to a path indicates an inactive path that is configured to the non-preferred SAN Volume Controller node in the I/O Group.
Example 5-14 SDDPCM commands that are used to check the availability of the devices
# pcmpath query device
Total Devices : 3
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 152 0
1* fscsi1/path1 OPEN NORMAL 48 0
2* fscsi2/path2 OPEN NORMAL 48 0
3 fscsi2/path3 OPEN NORMAL 160 0
DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0* fscsi1/path0 OPEN NORMAL 37 0
1 fscsi1/path1 OPEN NORMAL 66 0
2 fscsi2/path2 OPEN NORMAL 71 0
3* fscsi2/path3 OPEN NORMAL 38 0
DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 66 0
1* fscsi1/path1 OPEN NORMAL 38 0
2* fscsi2/path2 OPEN NORMAL 38 0
3 fscsi2/path3 OPEN NORMAL 70 0
#
5.4.9 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg VG is created by using hdisk3. A JFS2 file system is then created in this VG and mounted on the /itsoaixvg mount point, as shown in Example 5-15.
Example 5-15 Host system new VG and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv00 jfs2log 1 1 1 closed/syncd N/A
fslv00 jfs2 384 384 1 closed/syncd /itsoaixvg
#
5.4.10 Expanding an AIX volume
AIX supports dynamic volume expansion starting with AIX 5L Version 5.2. This capability allows a volume's capacity to be increased by the storage subsystem while the volume is actively in use by the host and applications.
The following restrictions apply:
The volume cannot belong to a concurrent-capable VG.
The volume cannot belong to a FlashCopy, Metro Mirror, or Global Mirror relationship.
The following steps outline how to expand a volume on an AIX host, when the volume is on
the SAN Volume Controller:
1. Display the current size of the SAN Volume Controller volume using the SAN Volume
Controller CLI command lsvdisk <VDisk_name>. The capacity of the volume, as seen by
the host, is displayed in the capacity field of the lsvdisk output in GBs.
2. The corresponding AIX hdisk can be identified by matching the vdisk_UID from the
lsvdisk output with the SERIAL field of the pcmpath query device output.
3. Display the capacity that is currently configured in AIX using the lspv hdisk command.
The capacity is shown in the TOTAL PPs field in MBs.
4. To expand the capacity of the SAN Volume Controller volume, use the expandvdisksize
command.
5. After the capacity of the volume has been expanded, AIX needs to update its configured
capacity. To initiate the capacity update on AIX, use the chvg -g vg_name command,
where vg_name is the VG in which the expanded volume resides.
If AIX does not return any messages, it means that the command was successful and the
volume changes in this VG have been saved.
If AIX cannot see any changes in the volumes, it returns an explanatory message.
6. Display the new AIX-configured capacity using the lspv hdisk command. The capacity is
shown in the TOTAL PPs field in MBs.
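The following hedged sketch shows this sequence by using the volume, hdisk, and VG names from the preceding examples; the output is omitted, and the 20 GB value is an example only. Note that expandvdisksize adds the specified capacity to the current size of the volume.
IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk Atlantic0001
IBM_2145:ITSO_SVC1:admin>svctask expandvdisksize -size 20 -unit gb Atlantic0001
# lspv hdisk3
# chvg -g itsoaixvg
# lspv hdisk3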
5.4.11 Running SAN Volume Controller commands from an AIX host system
To issue CLI commands, you must install and prepare the SSH client system on the AIX host
system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also
need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for IBM Power
Systems. For AIX V4.3.3, the software is available from the AIX toolbox for Linux
applications at this website:
http://ibm.com/systems/power/software/aix/linux/toolbox/download.html
The AIX installation images from IBM developerWorks are available at this website:
http://sourceforge.net/projects/openssh-aix
Perform the following steps:
1. To generate the key files on AIX, issue the following command:
ssh-keygen -t rsa -f filename
The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. The value for
rsa2 is only rsa. For rsa1, the type must be rsa1. When creating the key to the SAN
Volume Controller, use type rsa2. The -f parameter specifies the file names of the private
and public keys on the AIX server (the public key has the extension .pub after the file
name).
2. Next, install the public key on the SAN Volume Controller by using the Master Console.
Copy the public key to the Master Console and install the key to the SAN Volume
Controller, as described in Chapter 4, SAN Volume Controller initial configuration on
page 117.
3. On the AIX server, make sure that the private key and the public key are in the .ssh
directory and in the home directory of the user.
4. To connect to the SAN Volume Controller and use a CLI session from the AIX host, issue
the following command:
ssh -l admin -i filename svc
5. You can also issue the commands directly on the AIX host, which is useful when making
scripts. To issue the commands directly on the AIX host, add the SAN Volume Controller
commands to the previous command. For example, to list the hosts that are defined on the
SAN Volume Controller, enter the following command:
ssh -l admin -i filename svc svcinfo lshost
In this command, -l admin is the user name that is used to log in to the SAN Volume
Controller, -i filename is the filename of the private key that is generated, and svc is the
host name or IP address of the SAN Volume Controller.
5.5 Windows-specific information
In the following sections, we detail specific information about the connection of
Windows-based hosts to the SAN Volume Controller environment.
5.5.1 Configuring Windows Server 2008 and 2012 hosts
This section provides an overview of the requirements for attaching the SAN Volume
Controller to a host running Windows Server 2008, Windows Server 2008 R2, or Windows
Server 2012. To make the Windows server capable of handling volumes that are presented by the SAN Volume Controller, you must install a multipath driver: the IBM Subsystem Device Driver Device Specific Module (SDDDSM).
Before you attach the SAN Volume Controller to your host, make sure that all of the following
requirements are fulfilled:
Check all prerequisites that are provided in section 2.0 of the SDDDSM readme file.
Check the LUN limitations for your host system. Ensure that there are enough FC
adapters installed in the server to handle the total number of LUNs that you want to attach.
5.5.2 Configuring Windows
To configure the Windows hosts, follow these steps:
1. Make sure that the latest OS service pack and hotfixes are applied to your Windows
server system.
2. Use the latest supported firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as shown in 5.5.4, Installing and
configuring the host adapter on page 176.
4. Connect the Windows Server FC host adapters to the switches.
5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 5.5.3, Hardware lists, device driver,
HBAs, and firmware levels on page 176.
7. Configure the HBA for hosts running Windows, as described in 5.5.4, Installing and
configuring the host adapter on page 176.
8. Check the HBA driver readme file for the required Windows registry settings, as described
in 5.5.3, Hardware lists, device driver, HBAs, and firmware levels on page 176.
9. Check the disk timeout on Windows Server, as described in 5.5.5, Changing the disk
timeout on the Windows Server on page 176.
10.Install and configure SDDDSM.
11.Restart the Windows Server host system.
12.Configure the host, volumes, and host mapping in the SAN Volume Controller.
13.Use Rescan disk in Computer Management of the Windows Server to discover the
volumes that were created on the SAN Volume Controller.
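As an alternative to the Computer Management GUI, you can trigger the same rescan from a command prompt by using the diskpart utility, as in this sketch:
C:\> diskpart
DISKPART> rescan
DISKPART> exit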
5.5.3 Hardware lists, device driver, HBAs, and firmware levels
The latest information about the supported hardware, device driver, and firmware is available
at this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
On this page, browse to the V7.2.x section, select the Supported Hardware, Device Driver,
Firmware and Recommended Software Levels link, and then search for Windows.
This website also provides the hardware list of supported HBAs and the driver levels for
Windows. Check the supported firmware and driver level for your HBA, and follow the
manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA.
Most manufacturers' driver readme files list the Windows registry parameters that must be set
for the HBA driver.
5.5.4 Installing and configuring the host adapter
Install the host adapters in your system. See the manufacturer's instructions for the
installation and configuration of the HBAs.
Also, check the documentation that is provided with the server system for guidelines about
installing FC HBAs in particular PCI or PCIe slots.
The detailed configuration settings that you must make for the various vendors' FC HBAs are
available in the SAN Volume Controller Information Center by selecting Installing → Host
attachment → Fibre Channel host attachments → Hosts running the Microsoft
Windows Server operating system.
5.5.5 Changing the disk timeout on the Windows Server
This section describes how to change the disk I/O timeout value on Windows Server 2008,
Windows Server 2008 R2, and Windows Server 2012 systems.
On your Windows Server hosts, change the disk I/O timeout value to 60 in the Windows
registry:
1. In Windows, click Start, and select Run.
2. In the dialog text box, type regedit and press Enter.
3. In the registry browsing tool, locate the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the
value to 60, as shown in Figure 5-7 on page 177.
Figure 5-7 Regedit
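If you prefer to script this change rather than use regedit, the same value can be queried and
set with the built-in reg utility from an elevated command prompt; this is a sketch only, and
the host is restarted later in the procedure (step 11 in 5.5.2):
reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f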
5.5.6 Installing the SDDDSM multipath driver on Windows
The following section shows how to install the SDDDSM driver on a Windows Server 2008 R2
host and Windows Server 2012.
Windows Server 2012, Windows Server 2008 (R2), and MPIO
Microsoft Multipath I/O (MPIO) is a generic multipath driver that is provided by
Microsoft, which, by itself, does not form a complete solution. It works in conjunction with
device-specific modules (DSMs), which usually are provided by the vendor of the storage
subsystem. This design allows the parallel operation of multiple vendors' storage systems on
the same host without interference, because each MPIO instance interacts only with the
storage system for which its DSM is provided.
MPIO is not installed with the Windows operating system by default. Instead, storage
vendors must package the MPIO drivers with their own DSMs. IBM Subsystem Device Driver
DSM (SDDDSM) is the IBM multipath I/O solution that is based on Microsoft MPIO
technology. It is a device-specific module that is designed specifically to support IBM storage
devices on Windows Server 2008 (R2) and Windows Server 2012 servers.
The intention of MPIO is to achieve better integration of multipath storage with the operating
system. It also allows the use of multipathing in the SAN infrastructure during the boot
process for SAN boot hosts.
Subsystem Device Driver Device Specific Module for the SVC
Subsystem Device Driver Device Specific Module (SDDDSM) installation is a package for the
SAN Volume Controller device for the Windows Server 2008 (R2), and Windows Server 2012
operating systems. Together with MPIO, SDDDSM is designed to support the multipath
configuration environments in the SAN Volume Controller. SDDDSM resides in a host system
along with the native disk device driver and provides the following functions:
Enhanced data availability
Dynamic I/O load-balancing across multiple paths
Automatic path failover protection
Enabled concurrent firmware upgrade for the storage system
Path-selection policies for the host system
No SDDDSM support exists for Windows Server 2000, because SDDDSM requires the
STORPORT version of the HBA device drivers. Table 5-2 lists the SDDDSM driver levels that
are supported at the time of writing this book.
Table 5-2 Currently supported SDDDSM driver levels

Windows operating system                                   SDD level
Windows Server 2012 (x64)                                  2.4.3.4-4
Windows Server 2008 R2 (x64)                               2.4.3.4-4
Windows Server 2008 (32-bit)/Windows Server 2008 (x64)     2.4.3.4-4
To check which levels are available, go to this website:
http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM
To download SDDDSM, go to this website:
http://ibm.com/support/docview.wss?uid=ssg1S4000350#SVC
After you download the appropriate archive (.zip file) from this URL, extract it to your local
hard drive and launch setup.exe to install SDDDSM. A command prompt window opens, as
shown in Figure 5-8. Confirm the installation by entering Y.
Figure 5-8 SDDDSM installation
After the setup has completed, enter Y again to confirm the reboot request, as shown in
Figure 5-9.
Figure 5-9 Reboot system after installation
After the reboot, the SDDDSM installation is complete. You can verify the installation
completion in Device Manager, because the SDDDSM device appears (Figure 5-10 on
page 179), and the SDDDSM tools are installed (Figure 5-11 on page 179).
Figure 5-10 SDDDSM installation
The SDDDSM tools are installed (Figure 5-11).
Figure 5-11 SDDDSM installation
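To confirm the installed driver level from the command line, you can also open the Subsystem
Device Driver DSM command prompt and run the following commands; this is a sketch, and the
reported levels depend on the package that you installed:
datapath query version
datapath query adapter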
5.5.7 Attaching the SAN Volume Controller volumes to Windows Server 2008
R2 and 2012
Create the volumes on the SAN Volume Controller, and map them to the Windows Server
2008 R2 or 2012 host.
In this example, we have mapped three SAN Volume Controller disks to the Windows Server
2008 R2 host named Diomede; see Example 5-16 on page 180.
Example 5-16 SAN Volume Controller host mapping to host Diomede
IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Diomede
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
0 Diomede 0 20 Diomede_0001 210000E08B0541BC
6005076801A180E9080000000000002B
0 Diomede 1 21 Diomede_0002 210000E08B0541BC
6005076801A180E9080000000000002C
0 Diomede 2 22 Diomede_0003 210000E08B0541BC
6005076801A180E9080000000000002D
Perform the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start, and click Run.
2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens.
3. Select Action, and click Rescan Disks (Figure 5-12).
Figure 5-12 Windows Server 2008 R2: Rescan disks
4. The SAN Volume Controller disks now appear in the Disk Management window
(Figure 5-13 on page 181).
Figure 5-13 Windows Server 2008 R2 Disk Management window
After you have assigned the SAN Volume Controller disks, they are also available in
Device Manager. The three assigned drives are represented by SDDDSM/MPIO as
IBM-2145 Multipath disk devices in the Device Manager (Figure 5-14).
Figure 5-14 Windows Server 2008 R2 Device Manager
5. To check that the disks are available, select Start → All Programs → Subsystem Device
Driver DSM, and click Subsystem Device Driver DSM (Figure 5-15 on page 182). The
SDDDSM Command Line Utility appears.
Figure 5-15 Windows Server 2008 R2 Subsystem Device Driver DSM utility
6. Enter the datapath query device command and press Enter (Example 5-17). This
command displays all of the disks and the available paths, including their states.
Example 5-17 Windows Server 2008 R2 SDDDSM command-line utility
Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation. All rights reserved.
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1429 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1456 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 1520 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 1517 0
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 27 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 1396 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 1459 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
C:\Program Files\IBM\SDDDSM>
7. Right-click the disk in Disk Management, and select Online to place the disk online
(Figure 5-16).
Figure 5-16 Windows Server 2008 R2: Place disk online
8. Repeat step 7 for all of your attached SAN Volume Controller disks.
9. Right-click one disk again, and select Initialize Disk (Figure 5-17).
Figure 5-17 Windows Server 2008 R2: Initialize Disk
10.Mark all of the disks that you want to initialize, and click OK (Figure 5-18 on page 184).
SAN zoning: When following the SAN zoning guidance, with one volume and a host that
has two HBAs, we get this result: (number of volumes) x (number of paths per I/O
Group per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths.
Figure 5-18 Windows Server 2008 R2: Initialize Disk
11.Right-click the unallocated disk space, and select New Simple Volume (Figure 5-19).
Figure 5-19 Windows Server 2008 R2: New Simple Volume
12.The New Simple Volume Wizard opens. Click Next.
13.Enter a disk size, and click Next (Figure 5-20).
Figure 5-20 Windows Server 2008 R2: New Simple Volume
14.Assign a drive letter, and click Next (Figure 5-21).
Figure 5-21 Windows Server 2008 R2: New Simple Volume
15.Enter a volume label, and click Next (Figure 5-22).
Figure 5-22 Windows Server 2008 R2: New Simple Volume
16.Click Finish, and repeat this step for every SAN Volume Controller disk on your host
system (Figure 5-23 on page 186).
Figure 5-23 Windows Server 2008 R2: Disk Management
5.5.8 Extending a Windows Server 2008 (R2) volume
Using SAN Volume Controller and Windows Server 2008 (R2) gives you the ability to extend
volumes while they are in use.
You can expand a volume in the SAN Volume Controller cluster, even if it is mapped to a host.
Certain operating systems, such as Windows Server since version 2000, can handle the
volumes being expanded even if the host has applications running.
A volume that is part of a FlashCopy, Metro Mirror, or Global Mirror mapping on the
SAN Volume Controller cannot be expanded unless the mapping is removed. Therefore, the
FlashCopy, Metro Mirror, or Global Mirror relationship on that volume must be stopped
before it is possible to expand the volume.
If the volume is part of a Microsoft Cluster Server (MSCS) cluster, Microsoft recommends that
you shut down all but one MSCS cluster node. Also, before expanding the volume, stop the
applications in the resource that accesses the volume. Applications running in other resources
can continue to run. After expanding the volume, start the applications and the resource, and
then restart the other nodes in the MSCS cluster.
Important:
If you want to expand a logical drive in an extended partition in Windows Server 2003,
apply the Hotfix from KB841650, which is available from the Microsoft Knowledge Base
at this website:
http://support.microsoft.com/kb/841650/
Use the updated Diskpart version for Windows Server 2003, which is available from the
Microsoft Knowledge Base at this website:
http://support.microsoft.com/kb/923076/
To expand a volume in use on a Windows Server host, use the Windows DiskPart utility.
To start DiskPart, select Start → Run, and enter DiskPart.
DiskPart was developed by Microsoft to ease the administration of storage on Windows hosts.
It is a command-line interface (CLI) that you can use to manage disks, partitions, and
volumes through scripts or direct input on the command line. You can list disks and volumes,
select them, get more detailed information about them, create partitions, extend
volumes, and more. For more information about DiskPart, see the Microsoft website:
http://www.microsoft.com
Further information about expanding partitions of a cluster-shared disk is available at the
following website:
http://support.microsoft.com/kb/304736
In the following discussion, we show an example of how to expand a volume from the SAN
Volume Controller on a Windows Server 2003 host.
To list a volume's size, use the svcinfo lsvdisk <VDisk_name> command. For the
Senegal_bas0001 volume, this command reports the following information before the volume is expanded.
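A minimal sketch of the relevant output fields follows (the full listing resembles Example 5-18;
only the capacity and vdisk_UID lines are shown here):
IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk Senegal_bas0001
...
capacity 10.0GB
...
vdisk_UID 6005076801A180E9080000000000000F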
Here, we can see that the capacity is 10 GB, and we can see the value of the vdisk_UID. To
see on which vpath this volume is located on the Windows Server 2003 host, we use the
datapath query device SDD command on the Windows host (Figure 5-24).
We can see that the serial 6005076801A180E9080000000000000F of Disk1 on the Windows
host (Figure 5-24) matches the volume ID of Senegal_bas0001.
To see the size of the volume on the Windows host, we use Disk Manager, as shown in
Figure 5-24.
Figure 5-24 Windows Server 2003: Disk Management
This window shows that the volume size is 10 GB. To expand the volume on the SAN Volume
Controller, we use the svctask expandvdisksize command to increase the capacity on the
volume. In this example, we expand the volume by 1 GB (Example 5-18 on page 188).
Example 5-18 svctask expandvdisksize command
IBM_2145:ITSO_SVC1:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO_SVC1:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
To check that the volume has been expanded, we use the svcinfo lsvdisk command. In
Example 5-18, we can see that the Senegal_bas0001 volume has been expanded to 11 GB
in capacity.
After performing a Disk Rescan in Windows, you can see the new unallocated space in
Windows Disk Management, as shown in Figure 5-25 on page 189.
Figure 5-25 Expanded volume in Disk Manager
This window shows that Disk1 now has 1 GB unallocated new capacity. To make this capacity
available for the file system, use the following commands, as shown in Example 5-19:
diskpart         Starts DiskPart in a DOS prompt
list volume      Shows all available volumes
select volume    Selects the volume to expand
detail volume    Displays details for the selected volume, including the unallocated capacity
extend           Extends the volume to the available unallocated space
Example 5-19 Using diskpart
C:\>diskpart
Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL
DISKPART> list volume
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ----- ---------- ------- --------- --------
Volume 0 C NTFS Partition 75 GB Healthy System
Volume 1 S SVC_Senegal NTFS Partition 10 GB Healthy
Volume 2 D DVD-ROM 0 B Healthy
DISKPART> select volume 1
Volume 1 is the selected volume.
DISKPART> detail volume
Disk ### Status Size Free Dyn Gpt
-------- ---------- ------- ------- --- ---
* Disk 1 Online 11 GB 1020 MB
Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
DISKPART> extend
DiskPart successfully extended the volume.
DISKPART> detail volume
Disk ### Status Size Free Dyn Gpt
-------- ---------- ------- ------- --- ---
* Disk 1 Online 11 GB 0 B
Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
After extending the volume, the detail volume command shows that there is no free capacity
on the volume anymore. The list volume command shows the file system size. The Disk
Management window also shows the new disk size; see Figure 5-26.
Figure 5-26 Disk Management after extending the disk
The preceding example uses a Windows basic disk. Dynamic disks can also be expanded
by expanding the underlying SAN Volume Controller volume; the new space appears as
unallocated space at the end of the disk.
In this case, you do not need to use the DiskPart tool. Instead, you can use Windows Disk
Management functions to allocate the new space. Expansion works irrespective of the volume
type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded
without stopping I/O, in most cases.
5.5.9 Removing a disk on Windows
To remove a disk from Windows when the disk is a SAN Volume Controller volume, we
follow the standard Windows procedure: ensure that there is no data on the disk that we want
to preserve, that no applications are using the disk, and that no I/O is going to the disk. After
completing this procedure, we remove the host mapping on the SAN Volume Controller. We
must ensure that we are removing the correct volume. To verify, we use SDD to locate the
serial number of the disk, and, on the SAN Volume Controller, we use lshostvdiskmap to find
the volume's name and number. We also check that the SDD serial number on the host
matches the UID on the SAN Volume Controller for the volume.
When the host mapping is removed, we perform a rescan for the disk. Disk Management on
the server then removes the disk, and the vpath goes into the CLOSE state on the server. We
can verify these actions by using the datapath query device SDD command, but the closed
vpath is removed only after the server is rebooted.
In the following sequence of examples, we show how to remove a SAN Volume Controller
volume from a Windows server. We show it on a Windows Server 2003 operating system, but
the steps also apply to Windows Server 2000 and Windows Server 2008.
Figure 5-24 on page 187 shows the Disk Manager before removing the disk.
We will remove Disk 1. To find the correct volume information, we find the Serial/UID number
using SDD (Example 5-20).
Example 5-20 Removing SAN Volume Controller disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1324 0
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 69 0
Important: Never try to convert your basic disk to a dynamic disk, or vice versa, without
backing up your data. This operation is disruptive to the data because of a change in the
position of the logical block address (LBA) on the disks.
Knowing the Serial/UID of the volume and that the host name is Senegal, we identify the host
mapping to remove by using the lshostvdiskmap command on the SAN Volume Controller.
Then, we remove the actual host mapping (Example 5-21).
Example 5-21 Finding and removing the host mapping
IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 0 7 Senegal_bas0001 210000E08B89B9C0
6005076801A180E9080000000000000F
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0
6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0
6005076801A180E90800000000000011
IBM_2145:ITSO_SVC1:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001
IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0
6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0
6005076801A180E90800000000000011
Here, we can see that the host mapping for the volume has been removed. On the server, we
then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1)
has been removed, as shown in Figure 5-27 on page 193.
Figure 5-27 Disk Management: Disk has been removed
SDDDSM also shows us that the status for all paths to Disk1 has changed to CLOSE,
because the disk is not available (Example 5-22 on page 194).
Example 5-22 SDD: Closed path
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 1324 0
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 124 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 72 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 134 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 82 0
The disk (Disk1) is now removed from the server. However, to remove the SDDDSM
information about the disk, you must reboot the server at a convenient time.
5.6 Using the SAN Volume Controller CLI from a Windows host
To issue CLI commands, we must install and prepare an SSH client on the Windows
host system.
We can install the PuTTY SSH client software on a Windows host by using the PuTTY
installation program. You can download PuTTY from the following website:
http://www.chiark.greenend.org.uk/~sgtatham/putty/
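PuTTY also includes the plink command-line tool, which can run a single SAN Volume
Controller command non-interactively, for example from a script; this is a sketch, and the key
file name and cluster IP address are example values only:
plink -l admin -i C:\keys\svc_privatekey.ppk 9.43.86.120 svcinfo lshost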
The following website offers SSH client alternatives for Windows:
http://www.openssh.com/windows.html
Cygwin software has an option to install an OpenSSH client. You can download Cygwin from
the following website:
http://www.cygwin.com/
For more information about the CLI, see Chapter 9, SAN Volume Controller operations using
the command-line interface on page 471.
5.7 Microsoft Volume Shadow Copy
The SAN Volume Controller provides support for the Microsoft Volume Shadow Copy Service
(VSS). The Microsoft Volume Shadow Copy Service can provide a point-in-time (shadow)
copy of a Windows host volume while the volume is mounted and the files are in use.
In this section, we discuss how to install support for the Microsoft Volume Shadow Copy
Service. The following operating system versions are supported:
Windows Server 2003 with SP2 (x86 and x86_64)
Windows Server 2008 with SP2 (x86 and x86_64)
Windows Server 2008 R2 with SP1
Windows Server 2012
The following components are used to provide support for the service:
SAN Volume Controller
IBM System Storage hardware provider, which is known as the IBM System Storage
Support for Microsoft Volume Shadow Copy Service (IBMVSS)
Microsoft Volume Shadow Copy Service
IBMVSS is installed on the Windows host.
To provide the point-in-time shadow copy, the components complete the following process:
1. A backup application on the Windows host initiates a snapshot backup.
2. The Volume Shadow Copy Service notifies IBMVSS that a copy is needed.
3. The SAN Volume Controller prepares the volume for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing
data on the host and flushes file system buffers to prepare for a copy.
5. The SAN Volume Controller creates the shadow copy using the FlashCopy Service.
6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can
resume and notifies the backup application that the backup was successful.
The Volume Shadow Copy Service maintains a free pool of volumes for use as a FlashCopy
target and a reserved pool of volumes. These pools are implemented as virtual host systems
on the SAN Volume Controller.
5.7.1 Installation overview
The steps for implementing IBMVSS must be completed in the correct sequence. Before you
begin, you must have experience with, or knowledge of, administering a Windows operating
system. You must also have experience with, or knowledge of, administering a SAN
Volume Controller.
You will need to complete the following tasks:
Verify that the system requirements are met.
Install IBMVSS.
Verify the installation.
Create a free pool of volumes and a reserved pool of volumes on the SAN Volume
Controller.
5.7.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install IBMVSS and
Virtual Disk Service software on the Windows operating system:
SAN Volume Controller with FlashCopy enabled
IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk
Service (VDS) software
5.7.3 Installing the IBM System Storage hardware provider
This section includes the steps to install the IBM System Storage hardware provider on a
Windows server. You must satisfy all of the system requirements before starting the
installation.
During the installation, you will be prompted to enter information about the SAN Volume
Controller Master Console, including the location of the truststore file. The truststore file is
generated during the installation of the Master Console. You must copy this file to a location
that is accessible to the IBM System Storage hardware provider on the Windows server.
When the installation is complete, the installation program might prompt you to restart the
system. Complete the following steps to install the IBM System Storage hardware provider on
the Windows server:
1. Download the installation archive from this IBM website and extract it to a directory on the
Windows server where you want to install IBMVSS:
http://ibm.com/support/docview.wss?uid=ssg1S4000833
2. Log in to the Windows server as an administrator, and navigate to the directory where the
installation files are located.
3. Run the installation program by double-clicking IBMVSSVDS.exe.
4. The Welcome window opens, as shown in Figure 5-28. Click Next to continue with the
installation.
Figure 5-28 IBM VSS/VSD installation: Welcome
5. Accept the license agreement on the next window. The Choose Destination Location
window opens (Figure 5-29). Click Next to accept the default directory where the setup
program will install the files, or click Change to select another directory.
Figure 5-29 IBM VSS/VSD installation: Choose Destination Location
6. Click Install to begin the installation (Figure 5-30).
Figure 5-30 IBM VSS/VSD installation
7. The next window asks you to select a CIM server, that is, the SAN Volume Controller.
Unlike older SAN Volume Controller versions, the CIM service is now provided by the config
node on the cluster IP address. Select the correct automatically discovered CIM server, or
select Enter CIM Server address manually, and click Next (Figure 5-31 on page 198).
Figure 5-31 IBM VSS/VSD installation: Select CIM Server
8. The Enter CIM Server Details window opens. Enter the following information in the fields
(Figure 5-32):
a. The CIM Server Address field is populated with the URL according to the CIM server
address that was chosen in the previous step.
b. In the CIM User field, type the user name that the IBMVSS software will use to gain
access to the SAN Volume Controller.
c. In the CIM Password field, type the password for the SAN Volume Controller user name
that was provided in the previous step. Click Next.
Figure 5-32 IBM VSS/VSD installation: Enter CIM Server Details
9. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to
restart the system (Figure 5-33 on page 199).
Figure 5-33 IBM VSS/VSD installation complete
5.7.4 Verifying the installation
Perform the following steps to verify the installation:
1. Select Start → All Programs → Administrative Tools → Services from the Windows
server start menu.
2. Ensure that the service named IBM System Storage Support for Microsoft Volume
Shadow Copy Service and Virtual Disk Service software appears, that its Status is
set to Started, and that its Startup Type is set to Automatic.
3. Open a command prompt window, and issue the following command:
vssadmin list providers
This command ensures that the service named IBM System Storage Support for Microsoft
Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider;
see Example 5-23.
Example 5-23 Microsoft Software Shadow copy provider
C:\Users\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001-2005 Microsoft Corp.
Provider name: 'Microsoft Software Shadow Copy provider 1.0'
Provider type: System
Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
Version: 1.0.0.7
Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
Provider type: Hardware
Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
Version: 4.2.1.0816
Additional information:
If these settings change after installation, you can use the ibmvcfg.exe tool to update
the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new
settings.
If you do not have the CIM Agent server, port, or user information, contact your CIM
Agent administrator.
If you are able to successfully perform all of these verification tasks, the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was
successfully installed on the Windows server.
5.7.5 Creating the free and reserved pools of volumes
The IBM System Storage hardware provider maintains a free pool of volumes and a reserved
pool of volumes. Because these objects do not exist on the SAN Volume Controller, the free
pool of volumes and the reserved pool of volumes are implemented as virtual host systems.
You must define these two virtual host systems on the SAN Volume Controller.
When a shadow copy is created, the IBM System Storage hardware provider selects a
volume in the free pool, assigns it to the reserved pool, and then removes it from the free
pool. This process protects the volume from being overwritten by other Volume Shadow Copy
Service users.
To successfully perform a Volume Shadow Copy Service operation, there must be enough
volumes mapped to the free pool. The volumes must be the same size as the source
volumes.
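For example, a volume intended for the free pool can be created with the same capacity as its
source volume before it is mapped to the VSS_FREE host in the steps that follow; this is a
sketch, and the storage pool name, size, and volume name are example values only:
IBM_2145:ITSO_SVC1:admin>svctask mkvdisk -mdiskgrp MDG_0_DS45 -iogrp 0 -size 10 -unit gb -name msvc0003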
Use the SAN Volume Controller Console or the SAN Volume Controller CLI to perform the
following steps:
1. Create a host for the free pool of volumes. You can use the default name VSS_FREE or
specify another name. Associate the host with the worldwide port name (WWPN)
5000000000000000 (15 zeros); see Example 5-24.
Example 5-24 Creating an mkhost for the free pool
IBM_2145:ITSO_SVC1:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created
2. Create a virtual host for the reserved pool of volumes. You can use the default name
VSS_RESERVED or specify another name. Associate the host with the WWPN
5000000000000001 (14 zeros); see Example 5-25.
Example 5-25 Creating an mkhost for the reserved pool
IBM_2145:ITSO_SVC1:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
Host, id [3], successfully created
3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be
mapped to any other hosts. If you already have volumes created for the free pool of
volumes, you must assign the volumes to the free pool.
4. Create host mappings between the volumes that were selected in step 3 and the
VSS_FREE host to add the volumes to the free pool. Alternatively, you can use the
ibmvcfg add command to add volumes to the free pool; see Example 5-26 on page 201.
Example 5-26 Host mappings
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created
5. Verify that the volumes have been mapped. If you do not use the default WWPNs
5000000000000000 and 5000000000000001, you must configure the IBM System
Storage hardware provider with the WWPNs. See Example 5-27.
Example 5-27 Verify hosts
IBM_2145:ITSO_SVC1:admin>svcinfo lshostvdiskmap VSS_FREE
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
2 VSS_FREE 0 10 msvc0001 5000000000000000
6005076801A180E90800000000000012
2 VSS_FREE 1 11 msvc0002 5000000000000000
6005076801A180E90800000000000013
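After the provider is installed and the free pool is populated, you can exercise a shadow copy
without a separate backup application by using the diskshadow utility that is included with
Windows Server 2008 and later; this is a sketch only, the S: drive letter is an example, and
whether the IBM hardware provider or the default Microsoft provider is selected depends on
your configuration:
C:\>diskshadow
DISKSHADOW> set context persistent
DISKSHADOW> add volume S:
DISKSHADOW> create
DISKSHADOW> list shadows all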
5.7.6 Changing the configuration parameters
You can change the parameters that you defined when you installed the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software. To do
so, use the ibmvcfg.exe utility, a command-line utility that is located in
the C:\Program Files\IBM\Hardware Provider for VSS-VDS directory. See Example 5-28.
Example 5-28 Using ibmvcfg.exe utility help
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
/h | /help | -? | /?
showcfg
listvols <all|free|unassigned>
add <volume serial number list> (separated by spaces)
rem <volume serial number list> (separated by spaces)
Configuration:
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <1 | 2> (only applies to ESS)
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set namespace <Namespace>
set targetSVC <svc_cluster_ip>
set backgroundCopy <0-100>
Table 5-3 lists the available commands.
Table 5-3 Available ibmvcfg.exe utility commands

ibmvcfg showcfg
   Lists the current settings.
   Example: ibmvcfg showcfg

ibmvcfg set username <username>
   Sets the user name to access the SAN Volume Controller Console.
   Example: ibmvcfg set username Dan

ibmvcfg set password <password>
   Sets the password of the user name that will access the SAN Volume Controller Console.
   Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
   Specifies the IP address of the SAN Volume Controller on which the volumes are located
   when volumes are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem
   commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and
   ibmvcfg rem commands.
   Example: set targetSVC 9.43.86.120

set backgroundCopy
   Sets the background copy rate for FlashCopy.
   Example: set backgroundCopy 80

ibmvcfg set usingSSL
   Specifies whether to use the Secure Sockets Layer (SSL) protocol to connect to the SAN
   Volume Controller Console.
   Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
   Specifies the SAN Volume Controller Console port number. The default value is 5999.
   Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
   Sets the name of the server where the SAN Volume Controller Console is installed.
   Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
   Specifies the namespace value that the Master Console uses. The default value is \root\ibm.
   Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
   Specifies the WWPN of the host. The default value is 5000000000000000. Modify this value
   only if there is a host already in your environment with a WWPN of 5000000000000000.
   Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg set vssReservedInitiator <WWPN>
   Specifies the WWPN of the host. The default value is 5000000000000001. Modify this value
   only if there is a host already in your environment with a WWPN of 5000000000000001.
   Example: ibmvcfg set vssReservedInitiator 5000000000000001

ibmvcfg listvols
   Lists all volumes, including information about the size, location, and host mappings.
   Example: ibmvcfg listvols

ibmvcfg listvols all
   Lists all volumes, including information about the size, location, and host mappings.
   Example: ibmvcfg listvols all

ibmvcfg listvols free
   Lists the volumes that are currently in the free pool.
   Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
   Lists the volumes that are currently not mapped to any hosts.
   Example: ibmvcfg listvols unassigned

ibmvcfg add -s ipaddress
   Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the
   IP address of the SAN Volume Controller where the volumes are located. The -s parameter
   overrides the default IP address that is set with the ibmvcfg set targetSVC command.
   Examples: ibmvcfg add vdisk12
             ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem -s ipaddress
   Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify
   the IP address of the SAN Volume Controller where the volumes are located. The -s parameter
   overrides the default IP address that is set with the ibmvcfg set targetSVC command.
   Examples: ibmvcfg rem vdisk12
             ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141
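As a minimal usage sketch (the cluster IP address, user name, and password here are example
values only, and the parameter keywords follow the help output in Example 5-28), a first-time
configuration and check might look like this:
ibmvcfg set targetSVC 9.43.86.120
ibmvcfg set user admin
ibmvcfg set password mypassword
ibmvcfg showcfg
ibmvcfg listvols free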
5.8 Specific Linux (on x86/x86_64) information
The following sections describe specific information that relates to the connection of Linux on
Intel-based hosts to the SAN Volume Controller environment.
5.8.1 Configuring the Linux host
Follow these steps to configure the Linux host:
1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.5.4, Installing and
configuring the host adapter on page 176.
3. Install the supported HBA driver/firmware and upgrade the kernel, if required.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning), if needed.
6. Set up multipathing, as described in 5.8.5, Multipathing in Linux on page 205.
7. Configure the host, volumes, and host mapping in the SAN Volume Controller.
8. Rescan for LUNs on the Linux server to discover the volumes that were created on the
SAN Volume Controller; a rescan sketch follows this list.
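A minimal sketch of such a rescan on a 2.6 kernel host follows; the host entries under
/sys/class/scsi_host depend on the installed HBAs:
# ask every SCSI host (including the FC HBAs) to scan for new LUNs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done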
5.8.2 Configuration information
The SAN Volume Controller supports hosts that run the following Linux distributions:
Red Hat Enterprise Linux
SUSE Linux Enterprise Server
For the latest information, always use the following website:
http://www.ibm.com/storage/support/2145
This website provides the hardware list for supported HBAs and device driver levels for Linux.
Check the supported firmware and driver level for your HBA, and follow the manufacturers
instructions to upgrade the firmware and driver levels for each type of HBA.
5.8.3 Disabling automatic Linux system updates
Many Linux distributions give you the ability to configure your systems for automatic system
updates. Red Hat provides this ability in the form of a program called up2date. Novell SUSE
provides the YaST Online Update utility. These features periodically query for updates that are
available for each host. You can configure them to install any new updates automatically that
they find.
Often, the automatic update process also upgrades the system to the latest kernel level. Old
hosts still running SDD must turn off the automatic update of kernel levels, because certain
drivers that are supplied by IBM, such as SDD, depend on a specific kernel and will cease to
function on a new kernel. Similarly, HBA drivers need to be compiled against specific kernels
to function optimally. By allowing automatic updates of the kernel, you risk affecting your host
systems unexpectedly.
5.8.4 Setting queue depth with QLogic HBAs
The queue depth is the number of I/O operations that can be run in parallel on a device.
Configure your host running the Linux operating system by using the formula that is specified
in 5.13, Calculating the queue depth on page 223.
Perform the following steps to set the maximum queue depth:
1. Add the following line to the /etc/modules.conf file:
For the 2.6 kernel (SUSE Linux Enterprise Server 9, or later, or Red Hat Enterprise
Linux 4, or later):
options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
2. Rebuild the RAM disk, which is associated with the kernel that is being used, by using one
of the following commands:
If you are running on a SUSE Linux Enterprise Server operating system, run the
mk_initrd command.
If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd
command, and then restart.
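For example, on a Red Hat Enterprise Linux host, assuming a calculated queue depth of 16
(an example value; derive yours from 5.13, Calculating the queue depth on page 223), the
entry and the rebuild step might look like this sketch:
# entry in /etc/modules.conf for the QLogic driver
options qla2xxx ql2xfailover=0 ql2xmaxqdepth=16
# rebuild the RAM disk for the running kernel, then restart the host
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)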
5.8.5 Multipathing in Linux
Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide
their own multipath support in the operating system. On older systems, it is necessary to
install the IBM SDD multipath driver. Installation and configuration instructions for SDD are
not provided here, because SDD should not be deployed on newly installed Linux hosts.
Device Mapper Multipath (DM-MPIO)
Red Hat Enterprise Linux 5 (RHEL5) and later and SUSE Linux Enterprise Server 10
(SLES10) and later provide their own multipath support for the operating system. Therefore,
you do not have to install an additional device driver. Always check whether your operating
system includes one of the supported multipath drivers. You can obtain this information in the
links that are provided in 5.8.2, Configuration information on page 204. In SLES10, the
multipath drivers and tools are installed by default. For RHEL5, though, the user has to
explicitly choose the multipath components during the operating system installation to install
them. Each of the attached SAN Volume Controller LUNs has a special device file in the Linux
/dev directory.
Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SAN
Volume Controller allows. The following website provides the most current information about
the maximum configuration for the SAN Volume Controller:
http://www.ibm.com/storage/support/2145
Creating and preparing DM-MPIO volumes for use
First, you have to start the MPIO daemon on your system. Run either the following SLES10
commands or the following RHEL5 commands on your host system:
Enable MPIO for SLES10 by running the following commands:
/etc/init.d/boot.multipath {start|stop}
/etc/init.d/multipathd
{start|stop|status|try-restart|restart|force-reload|reload|probe}
Enable MPIO for RHEL5 by running the following commands:
modprobe dm-multipath
modprobe dm-round-robin
service multipathd start
chkconfig multipathd on
Example 5-29 shows the commands that are issued on a Red Hat Enterprise Linux 5.1
operating system.
Example 5-29 Starting MPIO daemon on Red Hat Enterprise Linux
[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#
Follow these steps to enable multipathing for IBM devices:
1. Open the multipath.conf file and follow the instructions. The file is located in the /etc
directory. Example 5-30 shows editing using vi.
Example 5-30 Editing the multipath.conf file
[root@palau etc]# vi multipath.conf
2. Add the following entry to the multipath.conf file:
device {
vendor "IBM"
product "2145"
path_grouping_policy group_by_prio
prio_callout "/sbin/mpath_prio_alua /dev/%n"
}
3. Restart the multipath daemon; see Example 5-31.
Example 5-31 Stopping and starting the multipath daemon
[root@palau ~]# service multipathd stop
Stopping multipathd daemon: [ OK ]
[root@palau ~]# service multipathd start
Starting multipathd daemon: [ OK ]
Tip: Run insserv boot.multipath multipathd to automatically load the multipath
driver and multipathd daemon during start-up.
Note: You can download example multipath.conf files from the IBM Subsystem
Device Driver for Linux website:
http://ibm.com/support/docview.wss?uid=ssg1S4000107#DM
4. Type the multipath -dl command to see the MPIO configuration. You see two groups with
two paths each. All paths must have the state [active][ready], and one group shows
[enabled].
5. Use the fdisk command to create a partition on the SAN Volume Controller disk, as
shown in Example 5-32.
Example 5-32 fdisk
[root@palau scsi]# fdisk -l
Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 83 Linux
/dev/hda2 14 9730 78051802+ 8e Linux LVM
Disk /dev/sda: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sda doesn't contain a valid partition table
Disk /dev/sdb: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes
Disk /dev/sdh doesn't contain a valid partition table
Disk /dev/dm-2: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-3 doesn't contain a valid partition table
[root@palau scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF
disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now
6. Create a file system using the mkfs command (Example 5-33).
Example 5-33 mkfs command
[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#
7. Create a mount point, and mount the drive, as shown in Example 5-34.
Example 5-34 Mount point
[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
73608360 1970000 67838912 3% /
/dev/hda1 101086 15082 80785 16% /boot
tmpfs 967984 0 967984 0% /dev/shm
/dev/dm-2 4080064 73696 3799112 2% /svcdisk_0
5.9 VMware configuration information
This section explains the requirements and additional information for attaching the SAN
Volume Controller to a variety of guest host operating systems running on the VMware
operating system.
5.9.1 Configuring VMware hosts
To configure the VMware hosts, follow these steps:
1. Install the HBAs in your host system, as described in 5.9.3, HBAs for hosts running
VMware on page 210.
2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 5.9.4, VMware storage and zoning
guidance on page 210.
4. Install the VMware operating system (if not already installed) and check the HBA timeouts,
as described in 5.9.5, Setting the HBA timeout for failover in VMware on page 211.
5. Configure the host, volumes, and host mapping in the SAN Volume Controller, as
described in 5.9.7, Attaching VMware to volumes on page 212.
5.9.2 Operating system versions and maintenance levels
For the latest information about VMware support, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
At the time of writing this book, the following versions are supported:
ESX V5.x
ESX V4.x
5.9.3 HBAs for hosts running VMware
Ensure that the hosts that run the VMware operating system use the correct HBAs and
firmware levels. Install the host adapters in your system. See the manufacturer's
instructions for the installation and configuration of the HBAs.
For older ESX versions, you can see the supported HBAs at this IBM website:
http://ibm.com/storage/support/2145
In most cases, the supported HBA device drivers are already included in the ESX server build,
but for various newer storage adapters, you might be required to load additional ESX drivers.
Check the VMware HCL to determine whether you need to load a custom driver for your adapter:
http://www.vmware.com/resources/compatibility/search.php
After installing the HBAs, load the default configuration of your FC HBAs. You must use the
same model of HBA with the same firmware in one server. It is not supported to have Emulex
and QLogic HBAs that access the same target in one server.
SAN boot support
SAN boot of any guest operating system is supported under VMware. Because of the nature of VMware, SAN boot is effectively a requirement for any guest operating system: the guest operating system must reside on a SAN disk.
If you are unfamiliar with the VMware environment and the advantages of storing virtual
machines and application data on a SAN, it is useful to get an overview about VMware
products before continuing.
VMware documentation is available at this website:
http://www.vmware.com/support/pubs/
5.9.4 VMware storage and zoning guidance
The VMware ESX server can use a Virtual Machine File System (VMFS). VMFS is a file
system that is optimized to run multiple virtual machines as one workload to minimize disk
I/O. It also can handle concurrent access from multiple physical machines, because it
enforces the appropriate access controls. Therefore, multiple ESX hosts can share the same
set of LUNs.
Theoretically, you can run all of your virtual machines on one LUN. However, for performance
reasons in more complex scenarios, it can be better to load balance virtual machines over
separate HBAs, storages, or arrays.
If you run an ESX host with several virtual machines, it makes sense to separate the workloads onto different arrays. For example, you can use one slow array for Print and Active Directory Services guest operating systems without high I/O requirements, and a faster array for database guest operating systems.
Using fewer volumes has the following advantages:
More flexibility to create virtual machines without creating new space on the SAN Volume
Controller
More possibilities for taking VMware snapshots
Fewer volumes to manage
Using more and smaller volumes has the following advantages:
Separate I/O characteristics of the guest operating systems
More flexibility (the multipathing policy and disk shares are set per volume)
Microsoft Cluster Service requires its own volume for each cluster disk resource
You can obtain more documentation about designing your VMware infrastructure at one of
these websites:
http://www.vmware.com/vmtn/resources/
http://www.vmware.com/resources/techresources/1059
5.9.5 Setting the HBA timeout for failover in VMware
The timeout for failover for ESX hosts must be set to 30 seconds:
For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter. The
timeout value is:
2 x PortDownRetryCount + 5 seconds
Set the qlport_down_retry parameter to 14.
For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters must be set to 30 seconds.
To make these changes on your system, perform the following steps:
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
3. The file includes a section for every installed SCSI device.
4. Locate your SCSI adapters, and edit the previously described parameters.
5. Repeat this process for every installed HBA.
See Example 5-35.
Guidelines:
ESX server hosts that use shared storage for virtual machine failover or load balancing
must be in the same zone.
You can have only one VMFS datastore per volume.
Example 5-35 Setting the HBA timeout
[root@nile svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup
[root@nile svc]# vi /etc/vmware/esx.conf
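Depending on the ESX version, you might prefer to set these driver parameters with the esxcfg-module command instead of editing esx.conf directly. The following sketch is only an illustration; the module names (qla2xxx, lpfc820) are assumptions that vary by ESX release and HBA model, so list the loaded modules first with vmkload_mod -l:
# QLogic HBAs (module name is an assumption; check with: vmkload_mod -l)
esxcfg-module -s "qlport_down_retry=14" qla2xxx
# Emulex HBAs (module name is an assumption)
esxcfg-module -s "lpfc_linkdown_tmo=30 lpfc_nodev_tmo=30" lpfc820
# Reboot the host (or reload the module) for the new settings to take effect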
5.9.6 Multipathing in ESX
The VMware ESX server performs multipathing. You do not need to install an additional
multipathing driver, such as SDD.
5.9.7 Attaching VMware to volumes
First, we make sure that the VMware host is logged into the SAN Volume Controller. In our
examples, we use the VMware ESX server V3.5 and the host name Nile.
Enter the following command to check the status of the host:
svcinfo lshost <hostname>
Example 5-36 shows that the host Nile is logged into the SAN Volume Controller with two
HBAs.
Example 5-36 lshost Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
Then, we have to set the SCSI Controller Type in VMware. By default, the ESX server disables SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time. See Figure 5-34 on page 213.
However, in many configurations, such as high-availability configurations, the virtual machines must share the same VMFS file to share a disk.
Follow these steps to set the SCSI Controller Type in VMware:
1. Log in to your Infrastructure Client, shut down the virtual machine, right-click it, and select
Edit settings.
2. Highlight the SCSI Controller, and select one of the three available settings, depending on
your configuration:
None: Disks cannot be shared by other virtual machines.
Virtual: Disks can be shared by virtual machines on the same server.
Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.
Figure 5-34 Changing SCSI bus settings
3. Create your volumes on the SAN Volume Controller, and then, map them to the ESX
hosts.
For this configuration, we created one volume and mapped it to our ESX host, as shown in
Example 5-37.
Example 5-37 Mapped volume to ESX host Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 12 VMW_pool 210000E08B892BCD
60050768018301BF2800000000000010
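For reference, commands similar to the following sketch were used to create and map the volume; the storage pool name, volume size, and host name come from our example setup and are not prescriptive:
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp io_grp0 -size 60 -unit gb -name VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile VMW_pool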
ESX does not automatically scan for SAN changes (except when rebooting the entire ESX
server). If you made any changes to your SAN Volume Controller or SAN configuration,
perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.
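If you prefer the ESX service console, you can trigger the same rescan from the command line; this is only a sketch, and the vmhba number is an assumption that depends on your adapter numbering:
# rescan one HBA for new LUNs (repeat for each vmhba as needed)
esxcfg-rescan vmhba1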
Tips:
If you want to use features such as VMotion, the volumes that contain the VMFS datastore have to be visible to every ESX host that will be able to host the virtual machine.
In SAN Volume Controller, select Allow the virtual disks to be mapped even if they
are already mapped to a host.
The volume must have the same SCSI ID on each ESX host.
To configure a storage device to use it in VMware, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned volumes, and click the
Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a new storage pool, select click here to create a datastore or click Add storage
if the field does not appear (Figure 5-35).
Figure 5-35 VMware add datastore
5. The Add storage wizard will appear.
6. Select Create Disk/Lun, and click Next.
7. Select the SAN Volume Controller volume that you want to use for the datastore, and click
Next.
8. Review the disk layout and click Next.
9. Enter a datastore name and click Next.
10.Select a block size, enter the size of the new partition, and then, click Next.
11.Review your selections, and click Finish.
Now, the created VMFS datastore appears in the Storage window (Figure 5-36). You will see
the details for the highlighted datastore. Check whether all of the paths are available and that
the Path Selection is set to Round Robin.
Figure 5-36 VMware storage configuration
If not all of the paths are available, check your SAN and storage configuration. After fixing the
problem, select Refresh to perform a path rescan. The view will be updated to the new
configuration.
The preferred practice is to use the Round Robin Multipath Policy for SAN Volume Controller.
If you have to edit this policy, perform the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change.
5. Select Round Robin.
6. Click OK.
7. Click Close.
Now, your VMFS datastore has been created, and you can start using it for your guest
operating systems. Round Robin will distribute the I/O load across all available paths. If you
want to use a fixed path, the policy setting Fixed is supported, as well.
5.9.8 Volume naming in VMware
In the Virtual Infrastructure Client, a volume is displayed as a sequence of three or four
numbers, which are separated by colons (Figure 5-37) and are shown under the Device and
SAN Identifier columns:
<SCSI HBA>:<SCSI target>:<SCSI volume>:<disk partition>
The following definitions apply to the previous variables:
SCSI HBA: The number of the SCSI HBA (can change).
SCSI target: The number of the SCSI target (can change).
SCSI volume: The number of the volume (never changes).
disk partition: The number of the disk partition (never changes). If the last number is not displayed, the name stands for the entire volume.
Figure 5-37 Volume naming in VMware
5.9.9 Setting the Microsoft guest operating system timeout
For a Windows 2000 Server or Windows Server 2003 that is installed as a VMware guest
operating system, the disk timeout value must be set to 60 seconds.
We provide the instructions to perform this task in 5.5.5, Changing the disk timeout on the
Windows Server on page 176.
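For reference, the setting described in 5.5.5 typically corresponds to the TimeOutValue registry value. The following sketch shows one way to set it inside a Windows Server 2003 guest (this command is an illustration only; back up the registry before changing it):
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f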
5.9.10 Extending a VMFS volume
It is possible to extend VMFS volumes while virtual machines are running. First, you have to
extend the volume on the SAN Volume Controller, and then you are able to extend the VMFS
volume. Before performing these steps, perform a backup of your data.
Perform the following steps to extend a volume:
1. The volume can be expanded with the svctask expandvdisksize -size <size> -unit gb <vdisk_name> command; in Example 5-38, we expand the volume by 5 GB.
Example 5-38 Expanding a volume on the SAN Volume Controller
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked, and click OK.
After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume and click Properties.
10.Click Add Extent.
11.Select the new free space, and click Next.
12.Click Next.
13.Click Finish.
The VMFS volume has now been extended, and the new space is ready for use.
5.9.11 Removing a datastore from an ESX host
Before you remove a datastore from an ESX host, you have to migrate or delete all of the
virtual machines that reside on this datastore.
To remove it, perform the following steps:
1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Highlight the datastore that you want to remove.
7. Click Remove.
8. Read the warning, and if you are sure that you want to remove the datastore and delete all
of the data on it, click Yes.
9. Remove the host mapping on the SAN Volume Controller, or delete the volume (as shown
in Example 5-39).
10.In the VI Client, select Storage Adapters.
11.Click Rescan.
12.Make sure that the Scan for new Storage Devices check box is marked, and click OK.
13.After the scan completes, the disk disappears from the view.
Your datastore has been removed successfully from the system.
Example 5-39 Remove host mapping: Delete the volume
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk VMW_pool
5.10 Sun Solaris support information
For the latest information about supported software and driver levels, always see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
5.10.1 Operating system versions and maintenance levels
At the time of writing this book, Sun Solaris 8, Sun Solaris 9, and Sun Solaris 10 are
supported.
5.10.2 SDD dynamic pathing
Solaris supports dynamic pathing when you either add more paths to an existing volume, or
present a new volume to a host. No user intervention is required. SDD is aware of the
preferred paths that SAN Volume Controller sets per volume.
SDD will use a round-robin algorithm when failing over paths. That is, SDD will try the next
known preferred path. If this method fails and all preferred paths have been tried, it will use a
round-robin algorithm on the non-preferred paths until it finds a path that is available. If all
paths are unavailable, the volume will go offline. Therefore, it can take time to perform path
failover when multiple paths go offline.
SDD under Solaris performs load balancing across the preferred paths, where appropriate.
Veritas Volume Manager with dynamic multipathing
Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the
next available I/O path for I/O requests without action from the administrator. VM with DMP is
also informed when you repair or restore a connection, and when you add or remove devices
after the system has been fully booted (provided that the operating system recognizes the
devices correctly). The newer JNI HBA drivers support the host mapping of new volumes without rebooting the Solaris host.
Note the following support characteristics:
Veritas VM with DMP supports load balancing across multiple paths with the SAN Volume
Controller.
Veritas VM with DMP does not support preferred pathing with the SAN Volume Controller.
Coexistence with SDD and Veritas VM with DMP
Veritas Volume Manager with DMP will coexist in pass-through mode with SDD. DMP will
use the vpath devices that are provided by SDD.
OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA, and SFRAC V4.1/5.0, and Solaris with
Sun Cluster V3.1/3.2 are supported at the time of writing this book.
SAN boot support
Note the following support characteristics:
Boot from SAN is supported under Solaris 9 running Symantec Volume Manager.
Boot from SAN is not supported when SDD is used as the multipathing software.
5.11 Hewlett-Packard UNIX configuration information
For the latest information about Hewlett-Packard UNIX (HP-UX) support, see this website:
http://ibm.com/systems/storage/software/virtualization/svc/interop.html
5.11.1 Operating system versions and maintenance levels
At the time of writing this book, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).
5.11.2 Multipath solutions supported
At the time of writing this book, SDD V1.7.2.0 for HP-UX is supported. Multipathing Software
PV Link and Cluster Software Service Guard V11.14/11.16/11.17/11.18 are also supported.
However, in a cluster environment, we suggest that you use SDD.
SDD dynamic pathing
HP-UX supports dynamic pathing when you either add more paths to an existing volume, or
present a new volume to a host.
SDD is aware of the preferred paths that the SVC sets per volume. SDD will use a
round-robin algorithm when failing over paths. That is, it will try the next known preferred path.
If this method fails and all preferred paths have been tried, it will use a round-robin algorithm
on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the
volume will go offline. It can take time, therefore, to perform path failover when multiple paths
go offline.
SDD under HP-UX performs load balancing across the preferred paths where appropriate.
Physical volume links (PVLinks) dynamic pathing
Unlike SDD, PVLinks does not load balance and it is unaware of the preferred paths that the
SAN Volume Controller sets per volume. Therefore, we strongly suggest that you use SDD,
except when in a clustering environment or when using a SAN Volume Controller volume as
your boot disk.
When creating a VG, specify the primary path that you want HP-UX to use when accessing
the Physical Volume that is presented by the SAN Volume Controller. This path, and only this
path, will be used to access the PV as long as it is available, no matter what the SAN Volume Controller's preferred path is to that volume. Therefore, be careful when creating VGs so that the primary links to the PVs (and therefore the load) are balanced across HBAs, FC switches, SAN Volume Controller nodes, and so on.
When extending a VG to add alternate paths to the PVs, the order in which you add these paths is HP-UX's order of preference if the primary path becomes unavailable. Therefore,
when extending a VG, the first alternate path that you add must be from the same SAN
Volume Controller node as the primary path, to avoid unnecessary node failover due to an
HBA, FC link, or FC switch failure.
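The following HP-UX LVM sketch illustrates this ordering; the device special files (c#t#d#) and the volume group name are hypothetical and must be adapted to the paths that your host actually reports:
pvcreate /dev/rdsk/c10t0d1
mkdir /dev/vg_svc
mknod /dev/vg_svc/group c 64 0x010000      # unique minor number per volume group
vgcreate /dev/vg_svc /dev/dsk/c10t0d1      # primary path (for example, through SVC node A)
vgextend /dev/vg_svc /dev/dsk/c12t0d1      # first alternate path, also through node A
vgextend /dev/vg_svc /dev/dsk/c14t0d1      # additional alternate path, through node B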
5.11.3 Coexistence of SDD and PVLinks
If you want to multipath a volume with PVLinks while SDD is installed, you must make sure that SDD does not configure a vpath for that volume. To do so, put the serial numbers of any volumes that you want SDD to ignore in the /etc/vpathmanualexcl.cfg file. In the case of SAN boot, if you are booting from a SAN Volume Controller volume, SDD (from Version 1.6 onward) automatically ignores the boot volume.
SAN boot support
SAN boot is supported on HP-UX by using PVLinks as the multipathing software on the boot
device. You can use PVLinks or SDD to provide the multipathing support for the other devices
that are attached to the system.
5.11.4 Using a SAN Volume Controller volume as a cluster lock disk
ServiceGuard does not provide a way to specify alternate links to a cluster lock disk. When
using a SAN Volume Controller volume as your lock disk, if the path to
FIRST_CLUSTER_LOCK_PV becomes unavailable, the HP node will not be able to access
the lock disk if a 50-50 split in quorum occurs.
To ensure redundancy, when editing your Cluster Configuration ASCII file, make sure that the
variable FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node
in your cluster. For example, when configuring a two-node HP cluster, make sure that FIRST_CLUSTER_LOCK_PV on HP server A goes through a separate SAN Volume Controller node and a separate FC switch from the path used by FIRST_CLUSTER_LOCK_PV on HP server B.
5.11.5 Support for HP-UX with greater than eight LUNs
HP-UX will not recognize more than eight LUNs per port using the generic SCSI behavior.
To accommodate this behavior, the SAN Volume Controller supports a type that is
associated with a host. This type can be set using the svctask mkhost command and
modified using the svctask chhost command. For HP-UX hosts, set the type to hpux; the default type is generic.
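The following sketch shows how the host type might be set from the CLI; the host name is hypothetical and the WWPN is a placeholder, not a value from our setup:
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name HP_Server_A -hbawwpn <WWPN> -type hpux
IBM_2145:ITSO-CLS1:admin>svctask chhost -type hpux HP_Server_A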
When an initiator port, which is a member of a host of type HP-UX, accesses a SAN Volume
Controller, the SAN Volume Controller will behave in the following way:
Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.
When an inquiry command for any page is sent to LUN 0 using Peripheral Device
Addressing, it is reported as Peripheral Device Type 0Ch (controller).
When any command other than an inquiry is sent to LUN 0 using Peripheral Device
Addressing, the SAN Volume Controller will respond as an unmapped LUN 0 normally
responds.
When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (direct access device) if a LUN is mapped at LUN 0, or as 1Fh (unknown device type) if no LUN is mapped there.
When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device
Addressing, the Peripheral qualifier returned is 001b and the Peripheral Device type is 1Fh
(unknown or no device type). This response is in contrast to the behavior for generic hosts,
where peripheral Device Type 00h is returned.
5.12 Using SDDDSM, SDDPCM, and SDD web interface
After installing the SDDDSM or SDD driver, specific commands are available. To open a command window for SDDDSM or SDD, from the desktop, select Start → Programs → Subsystem Device Driver → Subsystem Device Driver Management.
The command documentation for the various operating systems is available in the Multipath Subsystem Device Driver User's Guide:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
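From that command window, the datapath commands documented in the guide can be used to check driver, adapter, and path status; the following lines are only a reminder of the most common queries (see the guide for the full syntax and output):
datapath query version
datapath query adapter
datapath query device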
It is also possible to configure SDDDSM to offer a web interface that provides basic
information. Before this configuration can work, you must configure the web interface. Sddsrv
does not bind to any TCP/IP port by default, but it allows port binding to be dynamically
enabled or disabled.
For all platforms except Linux, the multipath driver package ships an sddsrv.conf template file named sample_sddsrv.conf. On all UNIX platforms except Linux, the
sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, it is located
in the directory where SDDDSM was installed.
Create the sddsrv.conf file by copying the sample_sddsrv.conf file to a file named sddsrv.conf in the same directory. You can then dynamically change the port binding by modifying the parameters in the sddsrv.conf file and changing the values of Enableport and Loopbackbind to True.
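On a UNIX host, the configuration therefore amounts to something like the following sketch (on Windows, the files are in the SDDDSM installation directory; the parameter names are shown as they appear in the template file):
cp /etc/sample_sddsrv.conf /etc/sddsrv.conf
# then edit /etc/sddsrv.conf and set, for example:
#   enableport = true
#   loopbackbind = true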
Figure 5-38 shows the start window of the multipath driver web interface.
Figure 5-38 SDD web interface
5.13 Calculating the queue depth
The queue depth is the number of I/O operations that can be run in parallel on a device. It is
usually possible to set a limit on the queue depth on the SDD paths (or equivalent) or the
HBA. Ensure that you configure the servers to limit the queue depth on all of the paths to the
SAN Volume Controller disks in configurations that contain a large number of servers or
volumes.
You might have a number of servers in the configuration that are idle or do not initiate the
calculated quantity of I/O operations. In that case, you might not need to limit the queue
depth.
5.14 Further sources of information
For more information about host attachment and configuration to the SAN Volume Controller,
see the IBM System Storage SAN Volume Controller: Host Attachment User's Guide,
SC26-7905.
For more information about SDDDSM configuration, see the latest IBM System Storage Multipath Subsystem Device Driver User's Guide, which is available from this website:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
The IBM SAN Volume Controller Information Center provides comprehensive information
about host attachment and storage subsystem attachment, troubleshooting, and much more:
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
5.14.1 Publications containing SAN Volume Controller storage subsystem
attachment guidelines
It is beyond the scope of this document to describe the attachment to each subsystem that
the SAN Volume Controller supports. The following short list includes the material that we
found especially useful in the writing of this book, and in the field:
SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521,
describes in detail how you can optimize your back-end storage to maximize your
performance on the SAN Volume Controller:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
DS8000 Performance Monitoring and Tuning, SG24-7146, describes the guidelines and
procedures to make the most of the performance that is available from your DS8000
storage subsystem when attached to the SAN Volume Controller:
http://www.redbooks.ibm.com/abstracts/sg247146.html?Open
IBM Midrange System Storage Implementation and Best Practices Guide, SG24-6363,
explains how to connect and configure your storage for optimized performance on the
SAN Volume Controller:
http://www.redbooks.ibm.com/abstracts/sg246363.html?Open
IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659,
discusses specific considerations for attaching the XIV Storage System to a SAN Volume
Controller:
http://www.redbooks.ibm.com/abstracts/sg247659.html?Open
Chapter 6. Data migration
In this chapter, we explain how to migrate from a conventional storage infrastructure to a
virtualized storage infrastructure by using the IBM System Storage SAN Volume Controller.
We also explain how the SAN Volume Controller can be phased out of a virtualized storage
infrastructure, for example, after a trial period or after using the SAN Volume Controller as a
data migration tool.
We introduce and demonstrate SAN Volume Controller support for non-disruptive movement of volumes between SAN Volume Controller I/O groups, referred to as Non-disruptive Volume Move (NDVM) or multi-node volume access.
Finally, we describe how to migrate from a fully allocated volume to a thin-provisioned volume
by using the volume mirroring feature and the thin-provisioned volume together.
6.1 Migration overview
The SAN Volume Controller allows you to change the mapping of volume extents to managed
disk (MDisk) extents, without interrupting host access to the volume. This functionality is used
when performing volume migrations, and it applies to any volume that is defined on the SAN
Volume Controller.
This functionality can be used for these tasks:
Migrating data from older back-end storage to SAN Volume Controller-managed storage
Migrating data from one back-end controller to another back-end controller using the SAN
Volume Controller as a data block mover, and afterwards removing the SAN Volume
Controller from the SAN
Migrating data from managed mode back into image mode prior to removing the SAN
Volume Controller from a SAN
Redistributing volumes and, therefore, the workload within a SAN Volume Controller
clustered system across back-end storage:
Moving workload onto newly installed storage
Moving workload off of old or failing storage, ahead of decommissioning it
Moving workload to rebalance a changed workload
Migrating data from one SAN Volume Controller clustered system to another SAN Volume
Controller system
Moving a volume's I/O caching between SAN Volume Controller I/O groups to redistribute workload across the SAN Volume Controller clustered system
6.2 Migration operations
You can perform migration at either the volume or the extent level, depending on the purpose
of the migration. The following migration activities are supported:
Migrating extents within a storage pool, redistributing the extents of a given volume on the
MDisks within the same storage pool
Migrating extents off of an MDisk, which is removed from the storage pool, to other MDisks
in the same storage pool
Migrating a volume from one storage pool to another storage pool
Migrating a volume to change the virtualization type of the volume to image
Moving a volume between I/O Groups non-disruptively
6.2.1 Migrating multiple extents (within a storage pool)
You can migrate a number of volume extents at one time by using the migrateexts command.
For detailed information about the migrateexts command parameters, use the SAN Volume
Controller command-line interface help by typing this command:
help migrateexts
Or, see IBM System Storage SAN Volume Controller Command-Line Interface User's Guide,
GC27-2287.
When executed, this command migrates a given number of extents from the source MDisk,
where the extents of the specified volume reside, to a defined target MDisk that must be part
of the same storage pool.
You can specify a number of migration threads to be used in parallel (from 1 to 4).
If the type of the volume is image, the volume type transitions to striped when the first extent
is migrated. The MDisk access mode transitions from image to managed.
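A hedged example of the command follows; the MDisk names, volume name, and extent count are hypothetical and must be adapted to your configuration:
IBM_2145:ITSO-CLS1:admin>svctask migrateexts -source mdisk4 -target mdisk5 -exts 64 -threads 2 -vdisk Host_vol1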
6.2.2 Migrating extents off of an MDisk that is being deleted
When an MDisk is deleted from a storage pool using the rmmdisk -force command, any
extents on the MDisk being used by a volume are first migrated off the MDisk and onto other
MDisks in the storage pool prior to its deletion.
In this case, the extents that need to be migrated are moved onto the set of MDisks that are not being deleted. This also holds true if multiple MDisks are being removed from the storage pool at the same time.
If a volume uses one or more extents that need to be moved as a result of an rmmdisk
command, the virtualization type for that volume is set to striped (if it was previously
sequential or image).
If the MDisk is operating in image mode, the MDisk transitions to managed mode while the
extents are being migrated. Upon deletion, it transitions to unmanaged mode.
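A hedged example follows; the MDisk and storage pool names are illustrative only:
IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk mdisk3 -force MDG_DS45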
6.2.3 Migrating a volume between storage pools
An entire volume can be migrated from one storage pool to another storage pool by using the
migratevdisk command. A volume can be migrated between storage pools regardless of the
virtualization type (image, striped, or sequential), although it transitions to the virtualization
type of striped. The command varies, depending on the type of migration, as shown in
Table 6-1.
Table 6-1 Migration types and associated commands
Using the -force flag: If the -force flag is not used and if volumes occupy extents on one
or more of the MDisks that are specified, the command fails.
When the -force flag is used and if volumes occupy extents on one or more of the MDisks
that are specified, all extents on the MDisks will be migrated to the other MDisks in the
storage pool if there are enough free extents in the storage pool. The deletion of the
MDisks is postponed until all extents are migrated, which can take time. In the case where
there are insufficient free extents in the storage pool, the command fails.
Storage pool-to-storage pool type    Command
Managed to managed                   migratevdisk
Image to managed                     migratevdisk
Managed to image                     migratetoimage
Image to image                       migratetoimage
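For example, a managed-to-managed migration might look like the following sketch; the target storage pool name is hypothetical and the volume name comes from our earlier examples:
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS47 -threads 2 -vdisk VMW_pool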
Figure 6-1 shows a managed volume migration to another storage pool.
Figure 6-1 Managed volume migration to another storage pool
In Figure 6-1, we illustrate volume V3 migrating from Pool 2 to Pool 3.
Extents are allocated to the migrating volume from the set of MDisks in the target storage
pool, using the extent allocation algorithm.
The process can be prioritized by specifying the number of threads that will be used in parallel
(from 1 to 4) while migrating; using only one thread will put the least background load on the
system.
The offline rules apply to both storage pools. Therefore, referring back to Figure 6-1, if any of
the M4, M5, M6, or M7 MDisks go offline, the V3 volume goes offline. If the M4 MDisk goes
offline, V3 and V5 go offline, but V1, V2, V4, and V6 remain online.
If the type of the volume is image, the volume type transitions to striped when the first extent
is migrated. The MDisk access mode transitions from image to managed.
For the duration of the move, the volume is listed as being a member of the original storage
pool. For the purposes of configuration, the volume moves to the new storage pool
instantaneously at the end of the migration.
Rule: For the migration to be acceptable, the source storage pool and the destination
storage pool must have the same extent size. Note that volume mirroring can also be used
to migrate a volume between storage pools. You can use this method if the extent sizes of
the two pools are not the same.
6.2.4 Migrating the volume to image mode
The facility to migrate a volume to an image mode volume can be combined with the ability to
migrate between storage pools. The source for the migration can be a managed mode or an
image mode volume. This leads to four possibilities:
Migrate image mode-to-image mode within a storage pool.
Migrate managed mode-to-image mode within a storage pool.
Migrate image mode-to-image mode between storage pools.
Migrate managed mode-to-image mode between storage pools.
These conditions must apply to be able to migrate:
The destination MDisk must be greater than or equal to the size of the volume.
The MDisk that is specified as the target must be in an unmanaged state at the time that
the command is run.
If the migration is interrupted by a system recovery, the migration will resume after the
recovery completes.
If the migration involves moving between storage pools, the volume behaves as described
in 6.2.3, Migrating a volume between storage pools on page 227.
Regardless of the mode in which the volume starts, it is reported as being in managed mode
during the migration. Also, both of the MDisks involved are reported as being in image mode
during the migration. Upon completion of the command, the volume is classified as an image
mode volume.
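A hedged example of the command follows; the volume, target MDisk, and target storage pool names are hypothetical:
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk Host_vol1 -mdisk mdisk10 -mdiskgrp MDG_Image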
6.2.5 Non-disruptive Volume Move
One of the major enhancements introduced in SAN Volume Controller code version 6.4 is the
feature called Non-disruptive Volume Move (NDVM). In the previous versions of SAN Volume
Controller code, it was possible to migrate volumes between I/O groups, but this operation required I/O to be quiesced on all volumes being migrated. After the migration completed, the new I/O group had exclusive ownership of I/O access to the volume.
Non-disruptive Volume Move supports access to a single volume by all nodes in the clustered system. This feature introduces the distinction between access I/O groups and the caching I/O group. Although any node of the system can provide access to a volume, a single I/O group still controls the I/O caching for that volume. This dynamic balancing of the SAN Volume Controller workload is especially helpful when the natural growth of an environment's I/O demands forces storage administrators to expand hardware resources. With Non-disruptive Volume Move, you can rebalance the workload onto a new set of SAN Volume Controller nodes (an I/O group) without quiescing or interrupting application operations, and thus relieve the high utilization of the original I/O group.
Before moving volumes to a new I/O group on the SAN Volume Controller system, ensure the following:
The host has access to the new I/O group node ports through SAN zoning.
The host is assigned to the new I/O group at the SAN Volume Controller system level.
The host operating system and multipathing software support the NDVM feature.
For a list of supported systems, refer to Supported Hardware List, Device Driver, Firmware and Recommended Software Levels for SAN Volume Controller, available at:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004392
In this example, we want to move one of the AIX host volumes from the existing I/O group to the recently added pair of SAN Volume Controller nodes. To perform the Non-disruptive Volume Move using the SAN Volume Controller GUI, follow these steps:
1. Verify that the host is assigned to both the source and target I/O groups. Select Hosts from the left menu pane (Figure 6-2 on page 230) and confirm the # of I/O Groups column.
Figure 6-2 SAN Volume Controller Hosts I/O group assignment
2. Right-click the host and select Properties → Mapped Volumes tab. Verify the volumes and the caching I/O group ownership (Figure 6-3).
Figure 6-3 Caching I/O group ID
3. Now we will move lpar01_vol3 from the existing SAN Volume Controller I/O group 0 to the new I/O group 1. From the left menu pane, select Volumes to see all volumes, and optionally filter the output for the wanted results (Figure 6-4 on page 231).
Figure 6-4 Select and filter volumes
4. Right-click the volume lpar01_vol3 and, in the menu, select Move Volume to a New I/O Group. The wizard window is launched (Figure 6-5); click Next.
Figure 6-5 Move Volume to a New I/O Group wizard - Welcome
5. Select the new I/O group and, optionally, the preferred SAN Volume Controller node, or leave Automatic for the default node assignment. Click Apply and Next (Figure 6-6 on page 232).
6. The progress of the task is displayed in the task window, together with the SAN Volume Controller CLI command sequence that runs svctask movevdisk and svctask addvdiskaccess.
Figure 6-6 Move Volume to a New I/O Group wizard - Select New I/O Group and preferred node
7. The task completion window appears. Now the selected host must detect the new paths so that I/O processing can switch over to the new I/O group. Perform the path detection based on the operating system-specific procedures (Figure 6-7). Click Apply and Next.
Figure 6-7 Move Volume to a New I/O Group wizard - Detect New Paths
8. At this point, the SAN Volume Controller removes the old I/O group's access to the volume by calling the svctask rmvdiskaccess CLI command. After task completion, close the task window.
9. A confirmation with additional information about the I/O group move is displayed in the Move Volume to a New I/O Group wizard. Proceed to the Summary by clicking Next.
10.Review the summary information and click Finish. The volume has been successfully moved to a new I/O group without I/O disruption on the host side. To verify that the volume is now being cached by the new I/O group, check the Caching I/O Group column in the Volumes submenu (Figure 6-8 on page 233).
Figure 6-8 New caching I/O group
For SAN Volume Controller code version 6.4 and later, the CLI command svctask chvdisk is no longer supported for migrating a volume between I/O groups. Although it still modifies multiple properties of a volume, the new SAN Volume Controller CLI command svctask movevdisk is used for moving a volume between I/O groups.
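The CLI equivalent of the GUI wizard is, in outline, the following sequence; the volume and I/O group names are taken from our example:
IBM_2145:ITSO-CLS1:admin>svctask addvdiskaccess -iogrp io_grp1 lpar01_vol3
IBM_2145:ITSO-CLS1:admin>svctask movevdisk -iogrp io_grp1 lpar01_vol3
After the host has rediscovered its paths, access through the old I/O group is removed:
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskaccess -iogrp io_grp0 lpar01_vol3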
In certain conditions, you might still want to keep the volume accessible through multiple I/O groups. This is possible, but keep in mind that only a single I/O group can provide caching of the I/O to the volume. To modify access to a volume for additional I/O groups, use the SAN Volume Controller CLI commands addvdiskaccess or rmvdiskaccess. (See Chapter 9, SAN Volume Controller operations using the command-line interface on page 471 for the command-line reference.)
You can use the SAN Volume Controller GUI to modify additional I/O group access by selecting the volume, right-clicking and selecting Properties → Edit, and selecting the wanted I/O groups by checking the check boxes under the Accessible I/O Groups property (Figure 6-9).
Figure 6-9 Modifying Accessible I/O Groups for a volume
6.2.6 Monitoring the migration progress
To monitor the progress of ongoing migrations, use the CLI command:
svcinfo lsmigrate
To determine the extent allocation of MDisks and volumes, use the following commands.
To list the volume IDs and the corresponding number of extents that the volumes occupy
on the queried MDisk, use the following CLI command:
svcinfo lsmdiskextent <mdiskname | mdisk_id>
To list the MDisk IDs and the corresponding number of extents that the queried volumes
occupy on the listed MDisks, use the following CLI command:
svcinfo lsvdiskextent <vdiskname | vdisk_id>
Important: To change the caching I/O group for a volume, use the movevdisk command.
To list the number of available free extents on an MDisk, use the following CLI command:
svcinfo lsfreeextents <mdiskname | mdisk_id>
6.3 Functional overview of migration
This section describes a functional view of data migration.
6.3.1 Parallelism
You can perform several of the following activities in parallel.
Per system
A SAN Volume Controller system supports up to 32 active concurrent instances of members
of the set of migration activities:
Migrate multiple extents
Migrate between storage pools
Migrate off a deleted MDisk
Migrate to image mode
These high-level migration tasks operate by scheduling single extent migrations:
Up to 256 single extent migrations can run concurrently. This number is made up of single
extent migrates, which result from the operations previously listed.
The Migrate Multiple Extents and Migrate Between storage pools commands support a
flag that allows you to specify the number of parallel threads to use, between 1 and 4.
This parameter affects the number of extents that will be concurrently migrated for that
migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated
concurrently for that operation, subject to other resource constraints.
Per MDisk
The SAN Volume Controller supports up to four concurrent single extent migrates per MDisk.
This limit does not take into account whether the MDisk is the source or the destination. If
more than four single extent migrates are scheduled for a particular MDisk, further migrations
are queued pending the completion of one of the currently running migrations.
6.3.2 Error handling
If a medium error occurs on a read from the source; if the destination's medium error table is full; if an I/O error repeatedly occurs on a read from the source; or if the MDisks repeatedly go offline, the migration is suspended or stopped.
Important: After a migration has been started, there is no way for you to stop the
migration. The migration runs to completion unless it is stopped or suspended by an error
condition, or if the volume being migrated is deleted.
If you want the ability to start, suspend, or cancel a migration or control the rate of
migration, consider using the volume mirroring function or migrating volumes between
storage pools.
The migration will be suspended if any of the following conditions exist. Otherwise, it will be
stopped:
The migration is between storage pools and has progressed beyond the first extent.
These migrations are always suspended rather than stopped, because stopping a
migration in progress leaves a volume spanning storage pools, which is not a valid
configuration other than during a migration.
The migration is a Migrate to Image Mode (even if it is processing the first extent).
These migrations are always suspended rather than stopped, because stopping a
migration in progress leaves the volume in an inconsistent state.
A migration is waiting for a metadata checkpoint that has failed.
If a migration is stopped, and if any migrations are queued awaiting the use of the MDisk for
migration, these migrations now commence. However, if a migration is suspended, the
migration continues to use resources, and so, another migration is not started.
The SAN Volume Controller attempts to resume the migration if the error log entry is marked
as fixed using the CLI or the GUI. If the error condition no longer exists, the migration will
proceed. The migration might resume on a node other than the node that started the
migration.
6.3.3 Migration algorithm
This section describes the effect of the migration algorithm.
Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MB. In this
description, this unit is referred to as a chunk.
We describe the algorithm that is used to migrate an extent:
1. Pause (pause means to queue all new I/O requests in the virtualization layer in the SAN
Volume Controller and to wait for all outstanding requests to complete) all I/O on the
source MDisk on all nodes in the SAN Volume Controller system. The I/O to other extents
is unaffected.
2. Unpause (resume) I/O on all of the source MDisk extents apart from writes to the specific
chunk that is being migrated. Writes to the extent are mirrored to the source and
destination.
3. On the node that is performing the migration, for each 256 KB section of the chunk:
Synchronously read 256 KB from the source.
Synchronously write 256 KB to the target.
4. After the entire chunk has been copied to the destination, repeat the process for the next
chunk within the extent.
5. After the entire extent has been migrated, pause all I/O to the extent being migrated,
perform a checkpoint on the extent move to on-disk metadata, redirect all further reads to
the destination, and stop mirroring writes (writes only to destination).
6. If the checkpoint fails, the I/O is unpaused.
During the migration, the extent can be divided into three regions, as shown in Figure 6-10 on
page 237:
Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the
virtualization layer waiting for the chunk to be copied.
Reads to Region A are directed to the destination, because this data has already been
copied. Writes to Region A are written to both the source and the destination extent to
maintain the integrity of the source extent.
Reads and writes to Region C are directed to the source, because this region has yet to
be migrated.
The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack, such as cache
destages, are held back. If the back-end storage is operating with significant latency, it is
possible that this operation might take time (minutes) to complete, which can have an adverse effect on the overall performance of the SAN Volume Controller. To avoid this situation, if the
migration of a particular chunk is still active after one minute, the migration is paused for 30
seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the
migration of the chunk is resumed. This algorithm is repeated as many times as necessary to
complete the migration of the chunk.
Figure 6-10 Migrating an extent
The SAN Volume Controller guarantees read stability during data migrations even if the data
migration is stopped by a node reset or a system shutdown. This read stability is possible
because the SAN Volume Controller disallows writes on all nodes to the area being copied,
and upon a failure, the extent migration is restarted from the beginning. At the conclusion of
the operation, we will have these results:
Extents have been migrated in 16 MB chunks, one chunk at a time.
Chunks are either copied, in progress, or not copied.
When the extent is finished, its new location is saved.
Figure 6-11 shows the data migration and write operation relationship.
Figure 6-11 Migration and write operation relationship
6.4 Migrating data from an image mode volume
This section describes migrating data from an image mode volume to a fully managed
volume. This type of migration is used to take an existing host LUN and move it into the
virtualization environment as provided by the SAN Volume Controller system.
6.4.1 Image mode volume migration concept
First, we describe the concepts that are associated with this operation.
MDisk modes
There are three MDisk modes:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and has no metadata stored on it.
The SAN Volume Controller will not write to an MDisk that is in unmanaged mode except
when it attempts to change the mode of the MDisk to one of the other modes.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume
with no virtualization. Image mode volumes have a minimum size of one block (512 bytes)
and always occupy at least one extent. An image mode MDisk is associated with exactly
one volume.
Managed mode MDisk
Managed mode MDisks contribute extents to the pool of available extents in the storage
pool. Zero or more managed mode volumes might use these extents.
Transitions between the modes
The following state transitions can occur to an MDisk (see Figure 6-12 on page 239):
Unmanaged mode to managed mode
This transition occurs when an MDisk is added to a storage pool, which makes the MDisk
eligible for the allocation of data and metadata extents.
Managed mode to unmanaged mode
This transition occurs when an MDisk is removed from a storage pool.
Unmanaged mode to image mode
This transition occurs when an image mode MDisk is created on an MDisk that was
previously unmanaged. It also occurs when an MDisk is used as the target for a migration
to image mode.
Image mode to unmanaged mode
There are two distinct ways in which this transition can happen:
When an image mode volume is deleted. The MDisk that supported the volume
becomes unmanaged.
When an image mode volume is migrated in image mode to another MDisk, the MDisk
that is being migrated from remains in image mode until all data has been moved off of
it. It then transitions to unmanaged mode.
Image mode to managed mode
This transition occurs when the image mode volume that is using the MDisk is migrated
into managed mode.
Managed mode to image mode is impossible
There is no operation that will take an MDisk directly from managed mode to image mode.
You can achieve this transition by performing operations that convert the MDisk to
unmanaged mode and then to image mode.
Figure 6-12 Various states of a volume
Image mode volumes have the special property that the last extent in the volume can be a
partial extent. Managed mode disks do not have this property.
To perform any type of migration activity on an image mode volume, the image mode disk first
must be converted into a managed mode disk. If the image mode disk has a partial last
extent, this last extent in the image mode volume must be the first extent to be migrated. This
migration is handled as a special case.
After this special migration operation has occurred, the volume becomes a managed mode
volume and is treated in the same way as any other managed mode volume. If the image
mode disk does not have a partial last extent, no special processing is performed. The image
mode volume is simply changed into a managed mode volume and is treated in the same way
as any other managed mode volume.
After data is migrated off a partial extent, there is no way to migrate data back onto the partial
extent.
6.4.2 Migration tips
Several methods are available to migrate an image mode volume to a managed mode
volume:
If your image mode volume is in the same storage pool as the MDisks on which you want
to migrate the extents, you can perform one of these migrations:
Migrate a single extent. You have to migrate the last extent of the image mode volume
(number N-1).
Migrate multiple extents.
Migrate all of the in-use extents from an MDisk. Migrate extents off an MDisk that is
being deleted.
If you have two storage pools, one storage pool for the image mode volume, and one
storage pool for the managed mode volumes, you can migrate a volume from one storage
pool to another storage pool.
Have one storage pool for all the image mode volumes, and other storage pools for the
managed mode volumes, and use the migrate volume facility.
Be sure to verify that enough extents are available in the target storage pool.
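Before any of these methods can be used, the existing LUN is typically brought into the system as an image mode volume. A hedged sketch of that step follows; the storage pool, MDisk, and volume names are hypothetical:
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Image -iogrp io_grp0 -vtype image -mdisk mdisk10 -name legacy_vol1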
6.5 Data migration for Windows using the SAN Volume
Controller GUI
In this section, we move two LUNs from a Microsoft Windows Server 2008 server that is
currently attached to an LSI 3500 storage subsystem over to the SAN Volume Controller. The
migration examples include the following scenarios:
Moving a Microsoft server's SAN LUNs from a storage subsystem and virtualizing those
same LUNs through the SAN Volume Controller
Perform this activity when introducing the SAN Volume Controller into your environment.
This section shows that your host downtime is only a few minutes while you remap and
remask disks using your storage subsystem LUN management tool. We describe this step
in detail in 6.5.2, Adding the SAN Volume Controller between the host system and the LSI
3500 on page 244.
Migrating your image mode volume to a fully managed volume while your host is still
running and servicing your business application
Perform this activity if you are removing a storage subsystem from your SAN environment,
or if you want to move the data onto LUNs that are more appropriate for the type of data
stored on those LUNs, taking into account availability, performance, and redundancy. We
describe this step in 6.5.6, "Migrating the volume from image mode to image mode" on
page 267.
Migrating your volume to an image mode volume
Perform this activity if you are removing the SAN Volume Controller from your SAN
environment after a trial period. We describe this step in detail in 6.5.5, "Migrating a
volume from managed mode to image mode" on page 262.
Moving an image mode volume to another image mode volume
Use this procedure to migrate data from one storage subsystem to another storage
subsystem. We describe this step in detail in 6.6.6, "Migrating the volumes to image mode
volumes" on page 295.
You can use these activities individually or together to migrate your server's LUNs from one
storage subsystem to another storage subsystem using the SAN Volume Controller as your
migration tool.
The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SAN Volume Controller.
6.5.1 Windows Server 2008 host system connected directly to the LSI 3500
In our example configuration, we use a Windows Server 2008 host and an LSI 3500 storage
subsystem. The host has two LUNs (drives X and Y). The two LUNs are part of one LSI 3500
array. Before the migration, LUN masking is defined in the LSI 3500 to give the Windows
Server 2008 host system access to the two LSI 3500 volumes labeled X and Y (see
Figure 6-14 on page 242).
Figure 6-13 shows the starting zoning scenario.
Figure 6-13 Starting zoning scenario
Figure 6-14 on page 242 shows the two LUNs (drive X and Y).
Figure 6-14 Drives X and Y
Figure 6-15 shows the properties of one of the LSI 3500 disks using the Subsystem Device
Driver DSM (SDDDSM). The disk appears as an LSI INF-01-00 Multipath Disk Device.
Figure 6-15 Disk properties
6.5.2 Adding the SAN Volume Controller between the host system and the LSI
3500
Figure 6-16 shows the new environment with the SAN Volume Controller and a second
storage subsystem attached to the SAN. The second storage subsystem is not required to
migrate to the SAN Volume Controller, but in the following examples, we show that it is
possible to move data across storage subsystems without any host downtime.
Figure 6-16 Add SAN Volume Controller and second storage subsystem
To add the SAN Volume Controller between the host system and the LSI 3500 storage
subsystem, perform the following steps:
1. Check that you have installed the supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the LSI 3500. Mask the LUNs to the SAN Volume Controller,
and remove the masking for the host.
Figure 6-17 on page 245 shows the two LUNs with LUN IDs 10 and 11 remapped to the
SAN Volume Controller ITSOSVC1.
Figure 6-17 LUNs remapped
5. Log in to your SAN Volume Controller Console and open Pools and System Migration.
See Figure 6-18 on page 245.
Figure 6-18 Pools and System Migration
Important: To avoid potential data loss, back up all the data stored on your external
storage before using the wizard.
6. Click Start New Migration, which starts a wizard, as shown in Figure 6-19.
Figure 6-19 Start New Migration
7. Follow the Storage Migration Wizard, as shown in Figure 6-20 on page 246, and then click
Next.
Figure 6-20 Migration Wizard (Step 1 of 8)
8. Figure 6-21 on page 247 shows the Prepare Environment for Migration information; click
Next.
Figure 6-21 Migration Wizard: Preparing the environment for migration (Step 2 of 8)
9. Click Next to complete the storage mapping. See Figure 6-22.
Figure 6-22 Migration Wizard: Mapping storage (Step 3 of 8)
10.Figure 6-23 on page 248 shows device discovery. Click Close.
Figure 6-23 Discovering devices
11.Figure 6-24 shows the available MDisks for Migration.
Figure 6-24 Migration Wizard (Step 4 of 8)
12.Mark both MDisks for migrating, as shown in Figure 6-25 on page 249, and then click
Next.
Figure 6-25 Migration Wizard: Selecting disks for migration
13.Figure 6-26 shows the MDisk import process. During the import process, a new storage
pool is automatically created, in our case, MigrationPool_8192. You can see that the
command that is issued by the wizard creates an image mode volume with a one-to-one
mapping to mdisk4 and mdisk5. Click Close to continue.
Figure 6-26 Migration Wizard: MDisk import process (Step 5 of 8)
14.Now, we create a new host object to which we will map the volume later. Click Create
Host, as shown in Figure 6-27 on page 250.
Figure 6-27 Migration Wizard: Creating a new host
15.Figure 6-28 shows the empty fields that we need to complete to match our host
requirements.
Figure 6-28 Migration Wizard: Host information fields
16.Type the Host Name that you want to use for the host, add the Fibre Channel (FC) port,
and select a Host Type. In our case, the name is WIN2K8_01. Click Create Host, as
shown in Figure 6-29 on page 251.
Figure 6-29 Migration Wizard: Completed host information
17.Figure 6-30 on page 251 shows the progress of creating a host. Click Close.
Figure 6-30 Progress status: Creating a host
18.Figure 6-31 shows that the host was created successfully. Click Next to continue.
Figure 6-31 Migration Wizard: Host creation was successful
19.Figure 6-32 shows all the available volumes to map to a host.
Figure 6-32 Migration Wizard: Volumes available for mapping (Step 6 of 8)
20.Mark both volumes and click Map to Host, as shown in Figure 6-33 on page 253.
Figure 6-33 Migration Wizard: Mapping volumes to host
21.Modify Mapping by choosing the host using the drop-down menu, as shown in
Figure 6-34, and then click Next.
Figure 6-34 Migration Wizard: Modify Host Mappings
22.The rightmost side of Figure 6-35 on page 254 shows the volumes that can be marked to
map to your host. Mark both volumes and click Apply.
Figure 6-35 Migration Wizard: Volume mapping to host
23.Figure 6-36 shows the progress of the volume mapping to the host. Click Close when
finished.
Figure 6-36 Modify Mappings: Task completed
24.After the volume to host mapping task is completed, notice that beneath the column
heading Host Mappings, a host is shown as marked Yes; see Figure 6-37 on page 255.
Click Next.
Figure 6-37 Migration Wizard: Map Volumes to Hosts
25.Select the storage pool that you want to use for migration, in our case, DS35_pool1, as
shown in Figure 6-38, and click Next.
Figure 6-38 Migration Wizard: Selecting a storage pool to use for migration (Step 7 of 8)
26.Migration starts automatically by performing a volume copy, as shown in Figure 6-39 on
page 256.
Figure 6-39 Start Migration: Task completed
27.The window in Figure 6-40 opens, advising that migration has begun. Click Finish.
Figure 6-40 Migration Wizard: Data migration has begun (Step 8 of 8)
28.The window in Figure 6-41 on page 256 opens automatically to show the progress of the
migration.
Figure 6-41 Progress of migration process
29.Go to Volumes → Volumes by Host, as shown in Figure 6-42, to see all the volumes that
are served by the newly created host for this migration step.
Figure 6-42 Selecting to view volumes by host
30.Figure 6-43 shows all the volumes (copy0* and copy1) that are served by the newly
created host.
Figure 6-43 Volumes served by host
You can see in Figure 6-43 that the migrated volume is actually a mirrored volume with one
copy on the image mode pool and another copy in a managed mode storage pool. The
administrator can choose to leave the volume or split the initial copy from the mirror.
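If you decide not to keep both copies, you can split off or remove the image mode copy from the CLI after the copies are synchronized. The following is a minimal sketch that assumes a hypothetical volume named W2k8_Data whose image mode copy is copy 0; check lsvdiskcopy for the actual copy IDs in your configuration:
svcinfo lsvdiskcopy W2k8_Data (list the copies and confirm that they are synchronized)
svctask splitvdiskcopy -copy 0 -name W2k8_Data_old W2k8_Data (split the image mode copy into a separate volume)
svctask rmvdiskcopy -copy 0 W2k8_Data (alternatively, discard the image mode copy)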
6.5.3 Importing the migrated disks into an online Windows Server 2008 host
To import the migrated disks into an online Windows Server 2008 host, perform these steps:
1. Start the Windows Server 2008 host system again, and go to the Device Manager to see
the new disk properties that are changed to a 2145 Multipath Disk Device (Figure 6-44 on
page 258).
Figure 6-44 Device manager: See the new disk properties
2. Figure 6-45 shows the Disk Management window.
Figure 6-45 Migrated disks are available
3. Select Start → All Programs → Subsystem Device Driver DSM → Subsystem Device
Driver DSM to open the SDDDSM command-line utility; see Figure 6-46.
Figure 6-46 Subsystem Device Driver DSM CLI
4. Enter the datapath query device command to check whether all paths are available, as
planned in your SAN environment; see Example 6-1.
Example 6-1 The datapath query device command
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 2
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018D92083000000000000013
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port7 Bus0/Disk1 Part0 OPEN NORMAL 145 0
1 Scsi Port7 Bus0/Disk1 Part0 OPEN NORMAL 75 0
2 Scsi Port8 Bus0/Disk1 Part0 OPEN NORMAL 73 0
3 Scsi Port8 Bus0/Disk1 Part0 OPEN NORMAL 0 0
4 Scsi Port8 Bus0/Disk1 Part0 OPEN NORMAL 0 0
5 Scsi Port7 Bus0/Disk1 Part0 OPEN NORMAL 0 0
6 Scsi Port7 Bus0/Disk1 Part0 OPEN NORMAL 0 0
7 Scsi Port8 Bus0/Disk1 Part0 OPEN NORMAL 76 0
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018D92083000000000000014
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port7 Bus0/Disk2 Part0 OPEN NORMAL 0 0
1 Scsi Port7 Bus0/Disk2 Part0 OPEN NORMAL 0 0
2 Scsi Port8 Bus0/Disk2 Part0 OPEN NORMAL 0 0
3 Scsi Port8 Bus0/Disk2 Part0 OPEN NORMAL 94 0
4 Scsi Port8 Bus0/Disk2 Part0 OPEN NORMAL 77 0
5 Scsi Port7 Bus0/Disk2 Part0 OPEN NORMAL 76 0
6 Scsi Port8 Bus0/Disk2 Part0 OPEN NORMAL 0 0
7 Scsi Port7 Bus0/Disk2 Part0 OPEN NORMAL 68 0
C:\Program Files\IBM\SDDDSM>
6.5.4 Adding the SAN Volume Controller between the host and LSI 3500 using
the CLI
In this section, we only use CLI commands to add direct-attached storage to the SAN Volume
Controller's managed storage. To read about our preparation of the environment, see 6.5.1,
"Windows Server 2008 host system connected directly to the LSI 3500" on page 241.
Verifying the currently used storage pools
Verify the currently used storage pools on the SAN Volume Controller, as shown in
Example 6-2, to discover the free capacity of the storage pools.
Example 6-2 Storage pools free capacity
IBM_2145:SVC_ITSO2:ITSO_admin>svcinfo lsmdiskgrp -delim " "
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity
used_capacity real_capacity overallocation warning easy_tier easy_tier_status compression_active
compression_virtual_capacity compression_compressed_capacity compression_uncompressed_capacity
0 DS35_pool1 online 4 19 253.00GB 1024 89.00GB 200.00GB 160.00GB 160.86GB 79 80 auto inactive yes
20.00GB 0.31MB 0.00MB
1 MigrationPool_8192 online 2 2 30.00GB 8192 0 30.00GB 30.00GB 30.00GB 100 0 auto inactive no 0.00MB
0.00MB 0.00MB
IBM_2145:SVC_ITSO2:ITSO_admin>
Creating a storage pool
When we move the two LUNs to the SAN Volume Controller, we use them initially in image
mode. Therefore, we need a storage pool to hold those disks.
First, we add a new empty storage pool, in our case imagepool, for the import of the LUNs, as
shown in Example 6-3. It is better to have a separate pool in case a problem occurs during
the import. That way, the import process cannot affect the other storage pools.
Example 6-3 Adding a storage pool
IBM_2145:SVC_ITSO2:admin>svctask mkmdiskgrp -name imagepool -tier generic_hdd -easytier off -ext 256
MDisk Group, id [2], successfully created
IBM_2145:SVC_ITSO2:admin>
Verifying the creation of the new storage pool
Now, we verify whether the new storage pool has been added correctly, as shown in
Example 6-4 on page 261.
Example 6-4 Verifying the new storage pool
IBM_2145:SVC_ITSO2:ITSO_admin>svcinfo lsmdiskgrp -delim " "
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity
used_capacity real_capacity overallocation warning easy_tier easy_tier_status compression_active
compression_virtual_capacity compression_compressed_capacity compression_uncompressed_capacity
0 DS35_pool1 online 4 19 253.00GB 1024 89.00GB 200.00GB 160.00GB 160.86GB 79 80 auto inactive yes
20.00GB 0.31MB 0.00MB
1 MigrationPool_8192 online 2 2 30.00GB 8192 0 30.00GB 30.00GB 30.00GB 100 0 auto inactive no 0.00MB
0.00MB 0.00MB
2 imagepool online 0 0 0 256 0 0.00MB 0.00MB 0.00MB 0 0 off inactive no 0.00MB 0.00MB 0.00MB
IBM_2145:SVC_ITSO2:ITSO_admin>
Creating the image volumes
As shown in Example 6-5, we need to create two image volumes (image1 and image2) within
our storage pool imagepool. We need one for each MDisk to import LUNs from the storage
controller to the SAN Volume Controller.
Example 6-5 Creating the image volumes
IBM_2145:SVC_ITSO2:ITSO_admin>svctask mkvdisk -name image1 -iogrp 0 -mdiskgrp imagepool -vtype image
-mdisk mdisk7 -syncrate 80
Virtual Disk, id [17], successfully created
IBM_2145:SVC_ITSO2:ITSO_admin>svctask mkvdisk -name image2 -iogrp 0 -mdiskgrp imagepool -vtype image
-mdisk mdisk6 -syncrate 80
Virtual Disk, id [18], successfully created
IBM_2145:SVC_ITSO2:ITSO_admin>
Verifying the image volumes
Now, we check again whether the volumes are created within the storage pool imagepool, as
shown in Example 6-6.
Example 6-6 Verifying the image volumes
IBM_2145:SVC_ITSO2:ITSO_admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type
FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
fast_write_state se_copy_count RC_change compressed_copy_count
17 image1 0 io_grp0 online 2 imagepool 10.00GB image
60050768018D92083000000000000015 0 1 empty 0 no 0
18 image2 0 io_grp0 online 2 imagepool 20.00GB image
60050768018D92083000000000000016 0 1 empty 0 no 0
IBM_2145:SVC_ITSO2:ITSO_admin>
Creating the host
We check whether our host exists or if we need to create it, as shown in Example 6-7. In our
case, the server has already been created.
Example 6-7 Listing the host
IBM_2145:SVC_ITSO2:ITSO_admin>svcinfo lshost
id name port_count iogrp_count status
0 WIN2K8_01 2 4 online
IBM_2145:SVC_ITSO2:ITSO_admin>
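If the host had not already existed, it could have been created with the mkhost command. The following is a minimal sketch with hypothetical worldwide port names; substitute the WWPNs of your own host adapters:
svctask mkhost -name WIN2K8_01 -hbawwpn 210000E08B000001:210000E08B000002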
Mapping the image volumes to the host
Next, we map the image volumes to host WIN2K8_01, as shown in Example 6-8. This
mapping is also known as LUN masking.
Example 6-8 Mapping the volumes
IBM_2145:SVC_ITSO2:ITSO_admin>svctask mkvdiskhostmap -force -host WIN2K8_01 -scsi 2 image1
Virtual Disk to Host map, id [2], successfully created
IBM_2145:SVC_ITSO2:ITSO_admin>svctask mkvdiskhostmap -force -host WIN2K8_01 -scsi 3 image2
Virtual Disk to Host map, id [3], successfully created
IBM_2145:SVC_ITSO2:ITSO_admin>
Adding the image volumes to a storage pool
Add the image volumes to storage pool DS35_pool1, as shown in Example 6-9, to have them
mapped as fully allocated volumes that are managed by the SAN Volume Controller.
Example 6-9 Adding the volumes to the storage pool
IBM_2145:SVC_ITSO2:ITSO_admin>svctask addvdiskcopy -mdiskgrp DS35_pool1 image1
Vdisk [17] copy [1] successfully created
IBM_2145:SVC_ITSO2:ITSO_admin>svctask addvdiskcopy -mdiskgrp DS35_pool1 image2
Vdisk [18] copy [1] successfully created
IBM_2145:SVC_ITSO2:ITSO_admin>
Checking the status of the volumes
Both volumes now have a second copy, which is shown as type many in Example 6-10. Both
volumes are available to be used by the host.
Example 6-10 Status check
IBM_2145:SVC_ITSO2:ITSO_admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type
FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
fast_write_state se_copy_count RC_change compressed_copy_count
17 image1 0 io_grp0 online many many 10.00GB many
60050768018D92083000000000000015 0 2 empty 0 no 0
18 image2 0 io_grp0 online many many 20.00GB many
60050768018D92083000000000000016 0 2 empty 0 no 0
IBM_2145:SVC_ITSO2:ITSO_admin>
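The newly added copies synchronize in the background. Before relying on the managed mode copies, you can check the synchronization progress; a minimal sketch:
svcinfo lsvdisksyncprogress image1
svcinfo lsvdisksyncprogress image2
When the progress reaches 100, the image mode copy can be split or removed with splitvdiskcopy or rmvdiskcopy if it is no longer needed.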
6.5.5 Migrating a volume from managed mode to image mode
In this section, we migrate a managed volume to an image mode volume by performing these
steps:
1. We create an empty storage pool for each volume that we want to migrate to image mode.
These storage pools will host the target MDisk that we will map later to our server at the
end of the migration.
2. We go to Pools → MDisks by Pools to create a new pool from the drop-down menu, as
shown in Figure 6-47.
Figure 6-47 Selecting Pools
3. To create an empty storage pool for migration, you perform Step 1 and Step 2, as shown in
Figure 6-48 and Figure 6-49 on page 264.
4. Step 1 (Figure 6-48) prompts you for the pool name, extent size, and warning threshold.
After you enter the information, click Next.
Figure 6-48 Create Storage Pool (Step 1 of 2)
5. Step 2 prompts you to optionally select the MDisk to include in the storage pool
(Figure 6-49 on page 264). Click Create.
Figure 6-49 Create Storage Pool (Step 2 of 2)
6. Figure 6-50 reminds you that an empty storage pool has been created. Click Yes.
Figure 6-50 Reminder
7. Figure 6-51 on page 265 shows the progress status as the system creates a storage pool
for migration. Click Close to continue.
Figure 6-51 Create Storage Pool for Migration progress status
8. From the Volumes panel, select the volume that you want to migrate to image mode and
select Export to Image Mode from the drop-down menu, as shown in Figure 6-52.
Figure 6-52 Select volume
9. Select the MDisk onto which to migrate the volume, as shown in Figure 6-53 on page 266,
and then click Next.
Figure 6-53 Migrate to an Image Mode
10.Select a storage pool into which the image mode volume will be placed after the migration
completes, in our case, the For Migration storage pool. Click Finish; see Figure 6-54.
Figure 6-54 Select storage pool
11.The volume is exported to image mode and placed in the For Migration storage pool; see
Figure 6-55. Click Close.
Figure 6-55 Export Volume to image process
12.Navigate to the Pools → MDisks by Pools panel. Click the plus sign (+) to the left of the
name (expand icon). Note that mdisk8 is now an image mode MDisk, as shown in
Figure 6-56.
Figure 6-56 MDisk is in image mode
13.Repeat these steps for every volume that you want to migrate to an image mode volume.
14.Delete the image mode data from the SAN Volume Controller by using the procedure that
is described in 6.5.7, "Removing image mode data from the SAN Volume Controller" on
page 275.
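The same export can also be performed from the CLI with the migratetoimage command. The following is a minimal sketch, assuming a hypothetical volume W2k8_Data, an unmanaged target MDisk mdisk8, and the empty target storage pool For_Migration:
svctask migratetoimage -vdisk W2k8_Data -mdisk mdisk8 -mdiskgrp For_Migration
svcinfo lsmigrate (check the progress of the migration)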
6.5.6 Migrating the volume from image mode to image mode
Use the volume migration from image mode to image mode process to move image mode
volumes from one storage subsystem to another storage subsystem without going through
the SAN Volume Controller fully managed mode. The data stays available for the applications
during this migration. This procedure is nearly the same as the procedure that we described
in 6.5.5, "Migrating a volume from managed mode to image mode" on page 262.
In our example, we migrate the Windows server W2k8_Log volume to another disk subsystem
as an image mode volume. The second storage subsystem is an LSI 5100; a new LUN is
configured on the storage and mapped to the SAN Volume Controller system. The LUN is
available to the SAN Volume Controller as an unmanaged MDisk9, as shown in Figure 6-57
on page 268.
Figure 6-57 Unmanaged disk on a storage subsystem
To migrate the image mode volume to another image mode volume, perform the following
steps:
1. Mark the unmanaged MDisk9 and either click Actions or right-click it, and then select
Import from the list, as shown in Figure 6-58.
Figure 6-58 Import the unmanaged MDisk into SAN Volume Controller
2. The Import Wizard window opens, describing the process of importing the MDisk and
mapping an image mode volume to it, as shown in Figure 6-59. Enable the caching and
click Next.
Figure 6-59 Import Wizard (Step 1 of 2)
3. Select a temporary pool, because you do not want to migrate the volume into a SAN
Volume Controller managed volume pool. Select the extent size from the drop-down
menu and click Finish. See Figure 6-60.
Figure 6-60 Import Wizard (Step 2 of 2)
4. The import process starts, as shown in Figure 6-61, by creating a temporary storage pool
MigrationPool_1024 (1 GB) and an image volume. Click Close to continue.
Figure 6-61 Import of MDisk and creation of temporary storage pool MigrationPool_1024
5. As shown in Figure 6-62, there is now an image mode mdisk9, and the imported image
mode volume is named after the source controller name and SCSI ID.
Figure 6-62 Imported mdisk9 within the created storage pool
6. Now, create a new storage pool, Migration_Out, with the same extent size (1 GB) as the
automatically created storage pool MigrationPool_1024, for transferring the image mode
disk. Go to Pools → MDisks by Pools, as shown in Figure 6-63.
Figure 6-63 Pools
7. Click Create Pool to create an empty storage pool, as shown in Figure 6-64.
Figure 6-64 Create a new storage pool
8. Give your new storage pool the meaningful name Migration_Out and click the Advanced
Settings drop-down menu. Choose 1.0 GB as the extent size for your new storage pool,
as shown in Figure 6-65.
Figure 6-65 Creating an empty storage pool with extent size 1GB (Step 1 of 2)
9. Figure 6-66 shows a storage pool window without any disks. Click Create to continue to
create an empty storage pool.
Figure 6-66 No disks are displayed (Step 2 of 2)
10.The warning in Figure 6-67 pops up to remind you that an empty storage pool will be
created. Click Yes to continue.
Figure 6-67 Warning message: Creating an empty storage pool
11.Figure 6-68 shows the progress of creating the storage pool Migration_Out. Click Close to
continue.
Figure 6-68 Progress of storage pool creation
12.You have created the empty storage pool for the image to image migration. Go to
Volumes → Volumes by Pool, as shown in Figure 6-69 on page 272.
Figure 6-69 Storage pool has been created
13.In the left panel, select the storage pool of the imported disk called MigrationPool_1024.
Then, mark the image disk that you want to migrate out and select Actions. From the
drop-down menu, select Export to Image Mode, as shown in Figure 6-70.
Figure 6-70 Export to Image Mode
14.Select the target MDisk mdisk10 on the new disk controller to which you want to migrate.
Click Next, as shown in Figure 6-71.
Figure 6-71 Selecting the target MDisk (Step 1 of 2)
15.Select the target Migration_Out (empty) storage pool, as shown in Figure 6-72 on
page 274. Click Finish.
Figure 6-72 Selecting the target storage pool (Step 2 of 2)
16.Figure 6-73 shows the progress status of the Export Volume to Image process. Click
Close to continue.
Figure 6-73 Export Volume to Image progress status
17.Figure 6-74 on page 275 shows that the MDisk location has changed as expected to the
new storage pool Migration_Out.
Figure 6-74 Image disk migrated to new storage pool
18.Repeat these steps for all image mode volumes that you want to migrate.
19.If you want to remove the image mode data from the SAN Volume Controller, use the
procedure that is described in 6.5.7, "Removing image mode data from the SAN Volume
Controller" on page 275.
6.5.7 Removing image mode data from the SAN Volume Controller
If your data resides in an image mode volume inside the SAN Volume Controller, you can
remove the volume from the SAN Volume Controller, which allows you to free up the original
LUN for reuse. The preceding sections illustrate how to migrate data to an image mode
volume. Depending on your environment, you might have to follow these procedures before
deleting the image volume:
6.5.5, "Migrating a volume from managed mode to image mode" on page 262
6.5.6, "Migrating the volume from image mode to image mode" on page 267
To remove the image mode volume from the SAN Volume Controller, we delete the volume
(in the CLI, by using the svctask rmvdisk command).
If the command succeeds on an image mode volume, the underlying back-end storage
controller will be consistent with the data that a host might previously have read from the
image mode volume. That is, all fast write data will have been flushed to the underlying LUN.
Deleting an image mode volume causes the MDisk that is associated with the volume to be
ejected from the storage pool. The mode of the MDisk will be returned to unmanaged.
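As a minimal CLI sketch of this removal, assuming a hypothetical image mode volume image1 that is mapped to host WIN2K8_01:
svctask rmvdiskhostmap -host WIN2K8_01 image1 (remove the volume-to-host mapping)
svctask rmvdisk image1 (delete the image mode volume; its MDisk returns to unmanaged mode)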
As shown in Example 6-1 on page 259, the SAN disks currently reside on the SAN Volume
Controller 2145 device.
Check that you have installed the supported device drivers on your host system.
Image mode volumes only: This situation only applies to image mode volumes. If you
delete a normal volume, all of the data will also be deleted.
To switch back to the storage subsystem, perform the following steps:
1. Shut down your host system.
2. Open the Volumes by Host window to see which volumes are currently mapped to your
host, as shown in Figure 6-75 on page 276.
Figure 6-75 Volume by host mapping
3. Check your host and select your volume. Then, right-click to show the drop-down menu
and select Unmap all Hosts, as shown in Figure 6-76.
Figure 6-76 Unmap volume from host
4. Verify your unmap process, as shown in Figure 6-77, and click Unmap.
Figure 6-77 Verifying your unmapping process
5. Repeat steps 3 and 4 for every image mode volume that you want to remove from the SAN
Volume Controller.
6. Edit the LUN masking on your storage subsystem. Remove the SAN Volume Controller
from the LUN masking, and add the host to the masking.
7. Power on your host system.
6.5.8 Mapping the free disks back to the Windows Server 2008 host
To detect and map the disks that have been freed from SAN Volume Controller management,
perform the following steps on the Windows Server 2008 host:
1. Using your LSI 3500 Storage Manager interface, now remap the two LUNs that were
MDisks back to your Windows Server 2008 server.
2. Open your Device Manager window. Figure 6-78 shows that the LUNs are now back to an
LSI INF 01-00 type.
Figure 6-78 LSI INF 01-00 type
3. Open your Disk Management window. The disks appear, as shown in Figure 6-79. You
might need to reactivate each disk by right-clicking it.
Figure 6-79 Windows Server 2008 Disk Management
6.6 Migrating Linux SAN disks to SAN Volume Controller disks
In this section, we move the two LUNs from a Linux server that is currently booting directly off
of our DS4000 storage subsystem over to the SAN Volume Controller. We then manage those
LUNs with the SAN Volume Controller and move them between other managed disks. Finally,
we move them back to image mode disks so that those LUNs can be masked and mapped
back to the Linux server directly.
Using this example can help you to perform any of the following activities in your environment:
Move a Linux server's SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SAN Volume Controller.
Perform this activity first when introducing the SAN Volume Controller into your
environment. This section shows that your host downtime is only a few minutes while you
remap and remask disks using your storage subsystem LUN management tool. We
describe this step in detail in 6.6.2, "Preparing your SAN Volume Controller to virtualize
disks" on page 282.
Move data between storage subsystems while your Linux server is still running and
servicing your business application.
Perform this activity if you are removing a storage subsystem from your SAN environment.
Or, perform this activity if you want to move the data onto LUNs that are more appropriate
for the type of data that is stored on those LUNs, taking availability, performance, and
redundancy into consideration. We describe this step in 6.6.4, "Migrating the image mode
volumes to managed MDisks" on page 289.
Move your Linux server's LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the Linux server.
We describe this step in 6.6.5, "Preparing to migrate from the SAN Volume Controller" on
page 292.
You can use these three activities individually, or together, to migrate your Linux server's
LUNs from one storage subsystem to another storage subsystem using the SAN Volume
Controller as your migration tool. You can also use only the first or last activity on its own to
introduce the SAN Volume Controller into, or remove it from, your environment.
The only downtime that is required for these activities is the time that it takes to remask and
remap the LUNs between the storage subsystems and your SAN Volume Controller.
In Figure 6-80, we show our Linux environment.
Figure 6-80 Linux SAN environment
Figure 6-80 shows our Linux server connected to our SAN infrastructure. It has two LUNs that
are masked directly to it from our storage subsystem:
The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise
Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem.
The operating system identifies it as /dev/mapper/VolGroup00-LogVol00.
Linux sees this LUN as our /dev/sda disk.
SCSI LUN ID 0: To successfully boot a host off of the SAN, you must have assigned the
LUN as SCSI LUN ID 0.
We have also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB in size, and it
is mounted in the /data folder on the /dev/dm-2 disk.
Example 6-11 shows our disks that attach directly to the Linux hosts.
Example 6-11 Directly attached disks
[root@Palau data]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1971344 7601400 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau data]#
Our Linux server represents a typical SAN environment with a host directly using LUNs that
were created on a SAN storage subsystem, as shown in Figure 6-80 on page 280:
The Linux server's host bus adapter (HBA) cards are zoned so that they are in the Green
Zone with our storage subsystem.
The two LUNs that have been defined on the storage subsystem, using LUN masking, are
directly available to our Linux server.
6.6.1 Connecting the SAN Volume Controller to your SAN fabric
This section describes the steps that you take to introduce the SAN Volume Controller into
your SAN environment. Although this section only summarizes these activities, you can
introduce the SAN Volume Controller into your SAN environment without any downtime to any
host or application that also uses your storage area network.
If you have a SAN Volume Controller that is already connected, skip to 6.6.2, "Preparing your
SAN Volume Controller to virtualize disks" on page 282.
Connecting the SAN Volume Controller to your SAN fabric requires that you perform these
tasks:
1. Assemble your SAN Volume Controller components (nodes, uninterruptible power supply
units, and redundant ac power switches). Cable the SAN Volume Controller correctly,
power on the SAN Volume Controller, and verify that the SAN Volume Controller is visible
on your SAN. We describe these tasks in much greater detail in Chapter 3, "Planning and
configuration" on page 71.
2. Create and configure your SAN Volume Controller system.
3. Create these additional zones:
A SAN Volume Controller node zone (our Black Zone in Figure 6-81 on page 282)
A storage zone (our Red Zone)
A host zone (our Blue Zone)
For more detailed information about how to configure the zones correctly, see Chapter 3,
"Planning and configuration" on page 71.
Figure 6-81 on page 282 shows our environment.
Figure 6-81 SAN environment with attached SAN Volume Controller
6.6.2 Preparing your SAN Volume Controller to virtualize disks
This section describes the preparation tasks that we performed before taking our Linux server
offline. These activities are all nondisruptive. They do not affect your SAN fabric or your
existing SAN Volume Controller configuration (if you already have a production SAN Volume
Controller).
Creating a storage pool
When we move the two Linux LUNs to the SAN Volume Controller, we use them initially in
image mode. Therefore, we need a storage pool to hold those disks.
First, we need to create an empty storage pool for each of the disks, using the commands in
Example 6-12.
Example 6-12 Create an empty storage pool
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Pool1 -ext 512
MDisk Group, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity
virtual_capacity used_capacity real_capacity overallocation warning easy_tier
easy_tier_status
2 Palau_Pool1 online 0 0 0 512 0
0.00MB 0.00MB 0.00MB 0 0 auto
inactive
3 Palau_Pool2 online 0 0 0 512 0
0.00MB 0.00MB 0.00MB 0 0 auto
inactive
IBM_2145:ITSO-CLS1:admin>
Creating your host definition
If you have prepared your zones correctly, the SAN Volume Controller can see the Linux
server's HBAs on the fabric (our host only had one HBA).
The svcinfo lshbaportcandidate command on the SAN Volume Controller lists all of the
worldwide names (WWNs), which have not yet been allocated to a host, that the SAN Volume
Controller can see on the SAN fabric. Example 6-13 shows the output of the nodes that it
found on our SAN fabric. (If the port did not show up, a zone configuration problem exists.)
Example 6-13 Display HBA port candidates
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>
If you do not know the WWN of your Linux server, you can look at which WWNs are currently
configured on your storage subsystem for this host. Figure 6-82 shows our configured ports
on an IBM DS4700 storage subsystem.
Figure 6-82 Display port WWNs
After verifying that the SAN Volume Controller can see our host (Palau), we create the host
entry and assign the WWN to this entry. Example 6-14 shows these commands.
Example 6-14 Create the host entry
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn
210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
IBM_2145:ITSO-CLS1:admin>
Verifying that we can see our storage subsystem
If we set up our zoning correctly, the SAN Volume Controller can see the storage subsystem
with the svcinfo lscontroller command (Example 6-15).
Example 6-15 Discover the storage controller
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS1:admin>
You can rename the storage subsystem to a more meaningful name with the svctask
chcontroller -name command. If you have multiple storage subsystems that connect to your
SAN fabric, renaming the storage subsystems makes it considerably easier to identify them.
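For example, the following minimal sketch renames the two controllers that are listed in Example 6-15; substitute your own controller IDs and preferred names:
svctask chcontroller -name ITSO_DS4500 0
svctask chcontroller -name ITSO_DS4700 1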
Getting the disk serial numbers
To help avoid the risk of creating the wrong volumes from all of the available, unmanaged
MDisks (in case the SAN Volume Controller sees many available, unmanaged MDisks), we
get the LUN serial numbers from our storage subsystem administration tool (Storage
Manager).
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are
shown in the following figures. Figure 6-83 on page 285 shows the disk serial number
SAN_Boot_palau.
Figure 6-83 Obtaining the disk serial number - SAN_Boot_palau
Figure 6-84 shows the disk serial number Palau_data.
Figure 6-84 Obtaining the disk serial number - Palau_data
Before we move the LUNs to the SAN Volume Controller, we must configure the host
multipath settings for the SAN Volume Controller. Add the device entry shown in
Example 6-17 to your multipath.conf file, and then restart the multipathd service, as shown in
Example 6-16.
Example 6-16 Edit the multipath.conf file
[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon: [ OK ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon: [ OK ]
[root@Palau ~]#
Example 6-17 Data to add to the multipath.conf file
# SVC
device {
vendor "IBM"
product "2145CF8"
path_grouping_policy group_by_serial
}
We are now ready to move the ownership of the disks to the SAN Volume Controller, discover
them as MDisks, and give them back to the host as volumes.
6.6.3 Moving the LUNs to the SAN Volume Controller
In this step, we move the LUNs that are assigned to the Linux server and reassign them to the
SAN Volume Controller.
Our Linux server has two LUNs: one LUN is for our boot disk and operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.
If we only wanted to move the LUN that holds our application and data files, we do not have to
reboot the host. The only requirement is that we unmount the file system and vary off the
volume group (VG) to ensure data integrity during the reassignment.
The following steps are required, because we intend to move both LUNs at the same time:
1. Confirm that the multipath.conf file is configured for the SAN Volume Controller.
2. Shut down the host.
If you are only moving the LUNs that contain the application and data, follow this
procedure instead:
a. Stop the applications that use the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are a logical volume manager (LVM) volume, deactivate that VG with
the vgchange -a n VOLUMEGROUP_NAME command.
d. If possible, also unload your HBA driver using the rmmod DRIVER_MODULE command.
This command removes the SCSI definitions from the kernel (we will reload this
module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem
to rescan for new disks without requiring you to unload the HBA driver; however, we do
not provide those details here.
3. Using Storage Manager (our storage subsystem management tool), we can unmap and
unmask the disks from the Linux server and remap and remask the disks to the SAN
Volume Controller.
4. From the SAN Volume Controller, discover the new disks with the svctask detectmdisk
command. The disks will be discovered and named mdiskN, where N is the next available
MDisk number (starting from 0). Example 6-18 shows the commands that we used to
discover our MDisks and to verify that we have the correct MDisks.
Example 6-18 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 mdisk26 online unmanaged 12.0GB 0000000000000008
DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
generic_hdd
27 mdisk27 online unmanaged 5.0GB 0000000000000009
DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
generic_hdd
IBM_2145:ITSO-CLS1:admin>
5. After we have verified that we have the correct MDisks, we rename them to avoid
confusion in the future when we perform other MDisk-related tasks (Example 6-19).
Example 6-19 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity
ctrl_LUN_# controller_name UID
tier
26 md_palauS online unmanaged 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online unmanaged 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>
LUN IDs: Even though we are using boot from SAN, you can map the boot disk to the
SAN Volume Controller with any LUN ID. The LUN ID does not have to be 0 until later,
when we configure the volume-to-host mapping in the SAN Volume Controller.
Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
task display) with the serial number that you recorded earlier (in Figure 6-83 and
Figure 6-84 on page 285).
6. We create our image mode volumes with the svctask mkvdisk command and the -vtype
image option (Example 6-20). This command virtualizes the disks in the exact same layout
as though they were not virtualized.
Example 6-20 Create the image mode volumes
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool1 -iogrp 0 -vtype
image -mdisk md_palauS -name palau_SANB
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Pool2 -iogrp 0 -vtype
image -mdisk md_palauD -name palau_Data
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 md_palauS online image 2 Palau_Pool1 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count
29 palau_SANB 0 io_grp0 online 4
Palau_Pool1 12.0GB image
60050768018301BF280000000000002B 0 1 empty
0
30 palau_Data 0 io_grp0 online 4
Palau_Pool2 5.0GB image
60050768018301BF280000000000002C 0 1 empty
0
7. Map the new image mode volumes to the host (Example 6-21).
Example 6-21 Map the volumes to the host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
id name SCSI_id vdisk_id vdisk_name
wwpn vdisk_UID
0 Palau 0 29 palau_SANB
210000E08B89C1CD 60050768018301BF280000000000002B
Important: Make sure that you map the boot volume with SCSI ID 0 to your host. The
host must be able to identify the boot volume during the boot process.
0 Palau 1 30 palau_Data
210000E08B89C1CD 60050768018301BF280000000000002C
IBM_2145:ITSO-CLS1:admin>
8. Power on your host server and enter your FC HBA BIOS before booting the operating
system, and make sure that you change the boot configuration so that it points to the SAN
Volume Controller. In our example, we performed the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the SAN Volume Controller 2145
LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system.
If you only moved the application LUN to the SAN Volume Controller and left your Linux
server running, you only need to follow these steps to see the new volume:
a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and
cannot) unload your HBA driver, you can issue commands to the kernel to rescan the
SCSI bus to see the new volumes (these details are beyond the scope of this book).
b. Check your syslog, and verify that the kernel found the new volumes. On Red Hat
Enterprise Linux, the syslog is stored in the /var/log/messages file.
c. If your application and data are on an LVM volume, rediscover the VG and then run the
vgchange -a y VOLUME_GROUP command to activate the VG.
10.Mount your file systems with the mount /MOUNT_POINT command (Example 6-22). The df
output shows us that all of the disks are available again.
Example 6-22 Mount data disk
[root@Palau data]# mount /dev/dm-2 /data
[root@Palau data]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1938056 7634688 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau data]#
11.You are now ready to start your application.
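Before starting the application, you can optionally confirm on the Linux host that the multipath devices are now presented by the SAN Volume Controller (product ID 2145) rather than by the original storage subsystem; a minimal sketch:
multipath -ll    # the vendor and product fields should now show IBM,2145
df -h /data      # confirm that the data file system is mounted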
6.6.4 Migrating the image mode volumes to managed MDisks
While the Linux server is still running, and while our file systems are in use, we migrate the
image mode volumes onto striped volumes, with the extents being spread over the other
three MDisks. In our example, the three new LUNs are located on a DS4500 storage
subsystem, so we will also move to another storage subsystem in this example.
FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process has completed before starting your application.
Preparing MDisks for striped mode volumes
From our second storage subsystem, we have performed these tasks:
Created and allocated three new LUNs to the SAN Volume Controller
Discovered them as MDisks
Renamed these LUNs to more meaningful names
Created a new storage pool
Placed all of these MDisks into this storage pool
You can see the output of our commands in Example 6-23.
Example 6-23 Create a new storage pool
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512
MDisk Group, id [8], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 md_palauS online image 2 Palau_Pool1 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
28 mdisk28 online unmanaged 8.0GB
0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd
29 mdisk29 online unmanaged 8.0GB
0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd
30 mdisk30 online unmanaged 8.0GB
0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
tier
26 md_palauS online image 2 Palau_Pool1 12.0GB
0000000000000008 DS4700
600a0b800026b2820000428f48739bca00000000000000000000000000000000 generic_hdd
27 md_palauD online image 3 Palau_Pool2 5.0GB
0000000000000009 DS4700
600a0b800026b282000042f84873c7e100000000000000000000000000000000 generic_hdd
28 palau-md1 online unmanaged 8 MD_palauVD 8.0GB
0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000 generic_hdd
29 palau-md2 online unmanaged 8 MD_palauVD 8.0GB
0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000 generic_hdd
30 palau-md3 online unmanaged 8 MD_palauVD 8.0GB
0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000 generic_hdd
IBM_2145:ITSO-CLS1:admin>
Migrating the volumes
We are now ready to migrate the image mode volumes onto striped volumes in the
MD_palauVD storage pool with the svctask migratevdisk command (Example 6-24).
Our Linux server remains operational while the migration is in progress.
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-24. Listing the storage pool with the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pools is slowly increasing as those extents are
moved to the new storage pool.
Example 6-24 Migrating image mode volumes to striped volumes
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp
MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp
MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>
After this task has completed, Example 6-25 shows that the volumes are now spread over
three MDisks.
Example 6-25 Migration complete
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>
Our migration to striped volumes on another storage subsystem (DS4500) is now complete.
The original image mode MDisks (md_palauS and md_palauD) can now be removed from the
SAN Volume Controller, and those LUNs can be removed from the storage subsystem.
If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can
remove it from our SAN fabric.
6.6.5 Preparing to migrate from the SAN Volume Controller
Before we move the Linux server's LUNs from being accessed through the SAN Volume
Controller as volumes to being accessed directly from the storage subsystem, we must
convert the volumes into image mode volumes.
You might want to perform this activity for any one of these reasons:
You purchased a new storage subsystem, and you were using SAN Volume Controller as
a tool to migrate from your old storage subsystem to this new storage subsystem.
You used the SAN Volume Controller to FlashCopy or Metro Mirror a volume onto another
volume, and you no longer need that host connected to the SAN Volume Controller.
You want to move a host, which is currently connected to the SAN Volume Controller, and
its data to a site where no SAN Volume Controller exists.
Changes to your environment no longer require this host to use the SAN Volume
Controller.
We can perform other preparation activities before we have to shut down the host and
reconfigure the LUN masking and mapping. This section covers those activities.
If you are moving the data to a new storage subsystem, it is assumed that the storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 6-85 on
page 293.
Figure 6-85 Environment with SAN Volume Controller
Making fabric zone changes
The first step is to set up the SAN configuration so that all of the zones are created. You must
add the new storage subsystem to the Red Zone so that the SAN Volume Controller can
communicate with it directly.
We also need a Green Zone for our host to use when we are ready for it to directly access the
disk after it has been removed from the SAN Volume Controller.
It is assumed that you have created the necessary zones, and after your zone configuration is
set up correctly, the SAN Volume Controller sees the new storage subsystem controller using
the svcinfo lscontroller command, as shown in Example 6-26.
Example 6-26 Check controller name
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
product_id_high
0 controller0 IBM 1814
FAStT
IBM_2145:ITSO-CLS1:admin>
It is also a good idea to rename the new storage subsystem's controller to a more useful
name, which can be done with the svctask chcontroller -name command, as shown in
Example 6-27 on page 294.
Example 6-27 Rename controller
IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO-4700 0
IBM_2145:ITSO-CLS1:admin>
Also, verify that the controller name was changed as intended, as shown in Example 6-28.
Example 6-28 Recheck controller name
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
product_id_high
0 ITSO-4700 IBM 1814
FAStT
IBM_2145:ITSO-CLS1:admin>
Creating new LUNs
On our storage subsystem, we created two LUNs and masked the LUNs so that the SAN
Volume Controller can see them. Eventually, we will give these two LUNs directly to the host,
removing the volumes that the host currently has. To check that the SAN Volume Controller
can use these two LUNs, issue the svctask detectmdisk command, as shown in
Example 6-29.
Example 6-29 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
0 mdisk0 online managed
600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8
MD_palauVD 8.0GB 0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8
MD_palauVD 8.0GB 0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8
MD_palauVD 8.0GB 0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged
6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged
12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Even though the MDisks will not stay in the SAN Volume Controller for long, we suggest that
you rename them to more meaningful names so that they do not get confused with other
MDisks that are used by other activities.
We also create a storage pool to hold our new MDisks, as shown in Example 6-30 on
page 295.
Example 6-30 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning easy_tier easy_tier_status
8 MD_palauVD online 3 2
24.0GB 512 7.0GB 17.00GB 17.00GB
17.00GB 70 0 auto inactive
9 MDG_Palauivd online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0 auto inactive
IBM_2145:ITSO-CLS1:admin>
Our SAN Volume Controller environment is now ready for the volume migration to image
mode volumes.
6.6.6 Migrating the volumes to image mode volumes
While our Linux server is still running, we migrate the managed volumes onto the new MDisks
using image mode volumes. The command to perform this action is the svctask
migratetoimage command, which is shown in Example 6-31.
Example 6-31 Migrate the volumes to image mode volumes
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_SANB -mdisk
mdpalau_ivd -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk
mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
28 palau-md1 online managed 8
MD_palauVD 8.0GB 0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8
MD_palauVD 8.0GB 0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8
MD_palauVD 8.0GB 0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image 8
MD_palauVD 6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online image 8
MD_palauVD 12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>
During the migration, our Linux server is unaware that its data is being physically moved
between storage subsystems.
After the migration has completed, the image mode volumes are ready to be removed from
the Linux server, and the real LUNs can be mapped and masked directly to the host by using
the storage subsystem's management tool.
6.6.7 Removing the LUNs from the SAN Volume Controller
The next step requires downtime on the Linux server, because we will remap and remask the
disks so that the host sees them directly through the Green Zone, as shown in Figure 6-85 on
page 293.
Our Linux server has two LUNs: one LUN is our boot disk and holds operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.
If we only want to move the LUN that holds our application and data files, we can move that
LUN without rebooting the host. The only requirement is that we unmount the file system and
vary off the VG to ensure data integrity during the reassignment.
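As a minimal illustration of that detach sequence (the /data mount point matches Example 6-34 in this scenario, but the volume group name datavg is a hypothetical placeholder):
# unmount the data file system
umount /data
# deactivate its volume group before the LUN is moved (only if it is an LVM volume)
vgchange -a n datavg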
When you intend to move both LUNs at the same time, perform these steps:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host.
If you are only moving the LUNs that contain the application and data, you can follow this
procedure instead:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG with the vgchange -a n
VOLUMEGROUP_NAME command.
Before you start: Moving LUNs to another storage subsystem might need an additional
entry in the multipath.conf file. Check with the storage subsystem vendor to see which
content you must add to the file. You might be able to install and modify the file ahead of
time.
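Purely as an illustration of the kind of stanza that a vendor might ask you to add (the values here are hypothetical placeholders, not settings for any particular subsystem; always use the exact entry that your storage subsystem vendor documents):
devices {
    device {
        vendor                  "VENDOR_STRING"
        product                 "PRODUCT_STRING"
        path_grouping_policy    group_by_prio
        failback                immediate
    }
}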
d. If you can, unload your HBA driver by using the rmmod DRIVER_MODULE command. This
command removes the SCSI definitions from the kernel (we will reload this module and
rediscover the disks later). It is also possible to tell the Linux SCSI subsystem to rescan
for new disks without unloading the HBA driver; a brief sketch of that approach appears
at the end of this section.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command
(Example 6-32). To double-check that you have removed the volumes, use the svcinfo
lshostvdiskmap command, which shows that these disks are no longer mapped to the
Linux server.
Example 6-32 Remove the volumes from the host
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:admin>
4. Remove the volumes from the SAN Volume Controller by using the svctask rmvdisk
command. This step makes them unmanaged, as seen in Example 6-33.
Example 6-33 Remove the volumes from the SAN Volume Controller
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
31 mdpalau_ivd1 online unmanaged
6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
Cached data: When you run the svctask rmvdisk command, the SAN Volume
Controller will first double-check that there is no outstanding dirty cached data for the
volume that is being removed. If there is still uncommitted cached data, the command
fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You will have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SAN Volume Controller will automatically destage uncommitted cached data two
minutes after the last write activity for the volume. How much data there is to destage,
and how busy the I/O subsystem is, determine how long this command takes to
complete.
You can check if the volume has uncommitted data in the cache by using the command
svcinfo lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This
attribute has the following meanings:
empty No modified data exists in the cache.
not_empty Modified data might exist in the cache.
corrupt Modified data might have existed in the cache, but the data has been
lost.
32 mdpalau_ivd online unmanaged
12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask
the disks from the SAN Volume Controller, and remap and remask them to the Linux server.
6. Power on your host server and enter your FC HBA BIOS before booting the OS. Make
sure that you change the boot configuration so that it points to the storage subsystem LUN.
In our example, we performed the following steps on a QLogic HBA:
Pressed Ctrl+Q to enter the HBA BIOS
Opened Configuration Settings
Opened Selectable Boot Settings
Changed the entry from the SAN Volume Controller to the storage subsystem LUN with
SCSI ID 0
Exited the menu and saved the changes
7. We now restart the Linux server.
If all of the zoning, LUN masking, and mapping were done successfully, the Linux server
boots as though nothing has happened.
However, if you only moved the application and data LUN, and left your Linux server
running, you must follow these steps to see the new volume:
a. Load your HBA driver with the modprobe DRIVER_NAME command.
If you did not (and cannot) unload your HBA driver, you can issue commands to the
kernel to rescan the SCSI bus to see the new volumes (a brief sketch of this approach
appears at the end of this section).
b. Check your syslog and verify that the kernel found the new volumes. On Red Hat
Enterprise Linux, the syslog is written to the /var/log/messages file.
c. If your application and data are on an LVM volume, run the vgscan command to
rediscover the VG, and then, run the vgchange -a y VOLUME_GROUP command to
activate the VG.
Important: If one of the disks is used to boot your Linux server, you must make sure
that it is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds that
disk during its initialization.
Important: This step is the last step that you can perform and still safely back out
everything that you have done so far.
Up to this point, you can reverse all of the actions that you have performed so far to get
the server back online without data loss:
Remap and remask the LUNs back to the SAN Volume Controller.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
8. Mount your file systems with the mount /MOUNT_POINT command (Example 6-34). The df
output shows that all of the disks are available again.
Example 6-34 File system after migration
[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
10093752 1938124 7634620 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau ~]#
9. You are ready to start your application.
10.Finally, to make sure that the MDisks are removed from the SAN Volume Controller, run
the svctask detectmdisk command. The MDisks first will be discovered as offline, and
then, they will automatically be removed when the SAN Volume Controller determines that
there are no volumes associated with these MDisks.
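As noted earlier in this procedure, it is possible to rescan the SCSI bus for new disks without unloading and reloading the HBA driver. The exact method depends on your kernel and HBA driver; the following is only a minimal sketch (the host numbers are examples, and VOLUME_GROUP is a placeholder):
# ask each FC HBA to rescan its bus (repeat for every hostN entry under /sys/class/scsi_host)
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
# rediscover and reactivate the LVM volume group, if one is used
vgscan
vgchange -a y VOLUME_GROUP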
6.7 Migrating ESX SAN disks to SAN Volume Controller disks
In this section, we move the two LUNs from our VMware ESX server to the SAN Volume
Controller. The ESX operating system is installed locally on the host, but the two SAN disks,
which hold the virtual machines, are attached through the SAN.
We then manage those LUNs with the SAN Volume Controller, move them between other
managed disks, and finally move them back to image mode disks so that those LUNs can
then be masked and mapped back to the VMware ESX server directly.
This example can help you perform any one of the following activities in your environment:
Move your ESX server's data LUNs (the VMware VMFS file systems where your virtual
machines might be stored), which are accessed directly from a storage subsystem, to
virtualized disks under the control of the SAN Volume Controller.
Move LUNs between storage subsystems while your VMware virtual machines are still
running.
You can perform this activity to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, taking into account availability, performance,
and redundancy. We describe this step in 6.7.4, Migrating the image mode volumes on
page 309.
Move your VMware ESX server's LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the server.
This step starts in 6.7.5, Preparing to migrate from the SAN Volume Controller on
page 312.
You can use these activities individually, or together, to migrate your VMware ESX server's
LUNs from one storage subsystem to another storage subsystem, using the SAN Volume
Controller as your migration tool. If you do not use all three activities, you can introduce the
SAN Volume Controller in your environment, or move the data between your storage
subsystems.
The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SAN Volume Controller.
In Figure 6-86, we show our starting SAN environment.
Figure 6-86 ESX environment before migration
Figure 6-86 shows our ESX server connected to the SAN infrastructure. It has two LUNs that
are masked directly to it from our storage subsystem.
Our ESX server represents a typical SAN environment with a host directly using LUNs that
were created on a SAN storage subsystem, as shown in Figure 6-86:
The ESX server's HBA cards are zoned so that they are in the Green Zone with our
storage subsystem.
The two LUNs that are defined on the storage subsystem are made directly available to
our ESX server through LUN masking.
6.7.1 Connecting the SAN Volume Controller to your SAN fabric
This section describes the steps that are needed to introduce the SAN Volume Controller into
your SAN environment. Although we only summarize these activities here, you can introduce
the SAN Volume Controller into your SAN environment without any downtime to any host or
application that also uses your storage area network.
If you have a SAN Volume Controller already connected, skip to the instructions that are given
in 6.7.2, Preparing your SAN Volume Controller to virtualize disks on page 302.
You must perform these tasks to connect the SAN Volume Controller to your SAN fabric:
Assemble your SAN Volume Controller components (nodes, uninterruptible power supply
unit, and redundant ac power switches), cable the SAN Volume Controller correctly, power
on the SAN Volume Controller, and verify that the SAN Volume Controller is visible on your
SAN.
Create and configure your SAN Volume Controller system.
Create these additional zones:
A SAN Volume Controller node zone (the Black Zone in our diagram on Example 6-57
on page 325)
A storage zone (our Red Zone)
A host zone (our Blue Zone)
For more detailed information about how to configure the zones in the correct way, see
Chapter 3, Planning and configuration on page 71.
Figure 6-87 shows the environment that we set up.
Important: Be extremely careful when connecting the SAN Volume Controller to your
storage area network, because this action requires you to connect cables to your SAN
switches and to alter your switch zone configuration. Performing these activities incorrectly
can render your SAN inoperable, so make sure that you fully understand the effect of your
actions.
Figure 6-87 SAN environment with SAN Volume Controller attached
6.7.2 Preparing your SAN Volume Controller to virtualize disks
This section describes the preparatory tasks that we perform before taking our ESX server or
virtual machines offline. These tasks are all nondisruptive activities, which do not affect your
SAN fabric or your existing SAN Volume Controller configuration (if you already have a
production SAN Volume Controller in place).
Creating a storage pool
When we move the two ESX LUNs to the SAN Volume Controller, they first are used in image
mode, and therefore, we need a storage pool to hold those disks.
We create an empty storage pool for these disks by using the command that is shown in
Example 6-35. Our MDG_Nile_VM storage pool holds both VMware data LUNs.
Example 6-35 Creating an empty storage pool
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created
Creating the host definition
If you prepared the zones correctly, the SAN Volume Controller can see the ESX server's
HBAs on the fabric (our host only had one HBA).
First, we get the WWNs for our ESX server's HBA, because we have many hosts that are
connected to our SAN fabric and in the Blue Zone. We want to make sure that we have the
correct WWNs to reduce our ESX server's downtime.
Log in to your VMware Management Console as root, navigate to Configuration, and select
Storage Adapter. The storage adapters are shown on the right side of this window and
display all of the necessary information. Figure 6-88 shows our WWNs, which are
210000E08B89B8C0 and 210000E08B892BCD.
Figure 6-88 Obtain your WWN using the VMware Management Console
Use the svcinfo lshbaportcandidate command on the SAN Volume Controller to list all of
the WWNs that the SAN Volume Controller can see on the SAN fabric and that have not yet
been allocated to a host. Example 6-36 on page 303 shows the WWPNs that it found on our
SAN fabric. (If a port does not show up, a zone configuration problem exists.)
Example 6-36 Add the host to the SAN Volume Controller
IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>
After verifying that the SAN Volume Controller can see our host, we create the host entry and
assign the WWN to this entry. Example 6-37 shows these commands.
Example 6-37 Create the host entry
IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn
210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>
Verifying that you can see your storage subsystem
If our zoning has been performed correctly, the SAN Volume Controller can also see the
storage subsystem with the svcinfo lscontroller command (Example 6-38).
Example 6-38 Available storage controllers
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id
product_id_low product_id_high
0 DS4500 IBM
1742-900
1 DS4700 IBM
1814 FAStT
Getting your disk serial numbers
To help avoid the risk of creating the wrong volumes from all of the available, unmanaged
MDisks (in case the SAN Volume Controller sees many available, unmanaged MDisks), we
get the LUN serial numbers from our storage subsystem administration tool (Storage
Manager).
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive, and choose Properties. Figure 6-89 and
Figure 6-90 show our serial numbers. Figure 6-89 shows the serial number of the VM_W2k3 logical drive.
Figure 6-89 Obtaining disk serial number VM_W2k3
Figure 6-90 shows the serial number of the VM_SLES logical drive.
Figure 6-90 Obtaining disk serial number VM_SLES
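When the MDisks are discovered later (after svctask detectmdisk in 6.7.3), one way to confirm the match is to display the detailed view of a candidate MDisk and compare its UID field with the serial number that is recorded here. This is only an illustrative check; the MDisk name is an example:
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk mdisk21
The detailed output includes a UID line that you can compare with the serial number obtained from Storage Manager (see also the Important box in 6.7.3).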
We are ready to move the ownership of the disks to the SAN Volume Controller, discover
them as MDisks, and give them back to the host as volumes.
6.7.3 Moving the LUNs to the SAN Volume Controller
In this step, we move the LUNs that are assigned to the ESX server and reassign them to the
SAN Volume Controller.
Our ESX server has two LUNs, as shown in Figure 6-91.
Figure 6-91 VMware LUNs
The virtual machines are located on these LUNs. Therefore, to move these LUNs under the
control of the SAN Volume Controller, we do not need to reboot the entire ESX server, but we
do have to stop and suspend all VMware guests that are using these LUNs.
Moving VMware guest LUNs
To move the VMware LUNs to the SAN Volume Controller, perform the following steps:
1. Using Storage Manager, we have identified the LUN number that has been presented to
the ESX server. Record which LUN had which LUN number; see Figure 6-92.
Figure 6-92 Identify LUN numbers in IBM DS4000 Storage Manager
2. Identify all of the VMware guests that are using this LUN and shut them down. One way to
identify them is to highlight the virtual machine and open the Summary tab. The datastore
that is used is displayed under Datastore. Figure 6-93 on page 307 shows a Linux virtual
machine using the datastore named SLES_Costa_Rica.
Figure 6-93 Identify the LUNs that are used by virtual machines
3. If you have several ESX hosts, also check the other ESX hosts to make sure that no guest
operating system is running and using this datastore.
4. Repeat steps 1 to 3 for every datastore that you want to migrate.
5. After the guests are suspended, we use Storage Manager (our storage subsystem
management tool) to unmap and unmask the disks from the ESX server and to remap and
remask the disks to the SAN Volume Controller.
6. From the SAN Volume Controller, discover the new disks with the svctask detectmdisk
command. The disks will be discovered and named as mdiskN, where N is the next
available MDisk number (starting from 0). Example 6-39 shows the commands that we
used to discover our MDisks and to verify that we have the correct MDisks.
Example 6-39 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
21 mdisk21 online unmanaged
60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged
70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
7. After we have verified that we have the correct MDisks, we rename them to avoid
confusion in the future when we perform other MDisk-related tasks; see Example 6-40.
Example 6-40 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
21 ESX_SLES online unmanaged
60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged
70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
8. We create our image mode volumes with the svctask mkvdisk command; see
Example 6-41. Using the parameter -vtype image ensures that it will create image mode
volumes, which means that the virtualized disks will have the exact same layout as though
they were not virtualized.
Example 6-41 Create the image mode volumes
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype
image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype
image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>
9. Finally, we can map the new image mode volumes to the host. Use the same SCSI LUN
IDs as on the storage subsystem for the mapping; see Example 6-42.
Example 6-42 Map the volumes to the host
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD
60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD
60050768018301BF2800000000000029
Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk
command task display) with the serial number that you obtained earlier (in Figure 6-89
and Figure 6-90 on page 305).
10.Using the VMware Management Console, rescan to discover the new volumes. Open the
Configuration tab, select Storage Adapters, and click Rescan. During the rescan, you
might receive geometry errors when ESX discovers that the old disk has disappeared. Your
volumes will appear with the new vmhba devices.
11.We are now ready to restart the VMware guests.
At this point, you have migrated the VMware LUNs successfully to the SAN Volume
Controller.
6.7.4 Migrating the image mode volumes
While the VMware server and its virtual machines are still running, we migrate the image
mode volumes onto striped volumes, with the extents being spread over three other MDisks.
Preparing MDisks for striped mode volumes
In this example, we migrate the image mode volumes to striped volumes and move the data to
another storage subsystem in one step.
Adding a new storage subsystem to the SAN Volume Controller
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 6-94.
Figure 6-94 ESX SAN Volume Controller SAN environment
Make fabric zone changes
The first step is to set up the SAN configuration so that all of the zones are created. Add the
new storage subsystem to the Red Zone so that the SAN Volume Controller can talk to it
directly.
We also need a Green Zone for our host to use when we are ready for it to directly access the
disk, after it has been removed from the SAN Volume Controller.
We assume that you have created the necessary zones.
In our environment, we have performed these tasks:
Created three LUNs on another storage subsystem and mapped them to the SAN Volume
Controller
Discovered them as MDisks
Created a new storage pool
Renamed these LUNs to more meaningful names
Put all these MDisks into this storage pool
You can see the output of our commands in Example 6-43.
Example 6-43 Create a new storage pool
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
21 ESX_SLES online image 3
MDG_Nile_VM 60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3
MDG_Nile_VM 70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged
55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged
55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged
55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
21 ESX_SLES online image 3
MDG_Nile_VM 60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3
MDG_Nile_VM 70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
Migrating the volumes
At this point, we are ready to migrate the image mode volumes onto striped volumes in the
new storage pool (MDG_ESX_VD) with the svctask migratevdisk command
(Example 6-44). While the migration is running, our VMware ESX server and our VMware
guests continue to run.
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-44. Listing the storage pool with the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pool is slowly increasing as those extents are
moved to the new storage pool.
Example 6-44 Migrating image mode volumes to striped volumes
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp
MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp
MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 MDG_Nile_VM online 2 2
130.0GB 512 1.0GB 130.00GB 130.00GB
130.00GB 100 0
4 MDG_ESX_VD online 3 0
165.0GB 512 35.0GB 0.00MB 0.00MB
0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>
If you compare the svcinfo lsmdiskgrp output after the migration, as shown in
Example 6-45, you can see that all of the virtual capacity has now been moved from the old
storage pool (MDG_Nile_VM) to the new storage pool (MDG_ESX_VD). The mdisk_count
column shows that the capacity is now spread over three MDisks.
Example 6-45 List MDisk group
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 MDG_Nile_VM online 2 0
130.0GB 512 130.0GB 0.00MB 0.00MB
0.00MB 0 0
4 MDG_ESX_VD online 3 2
165.0GB 512 35.0GB 130.00GB 130.00GB
130.00GB 78 0
IBM_2145:ITSO-CLS1:admin>
The migration onto striped volumes is now complete. You can remove the original MDisks
from the SAN Volume Controller and remove these LUNs from the storage subsystem.
If these LUNs are the last LUNs that were used on our storage subsystem, we can remove it
from our SAN fabric.
6.7.5 Preparing to migrate from the SAN Volume Controller
Before we move the ESX server's LUNs from being accessed through the SAN Volume
Controller as volumes to being accessed directly from the storage subsystem, we need to
convert the volumes into image mode volumes.
You might want to perform this activity for any one of these reasons:
You purchased a new storage subsystem, and you were using SAN Volume Controller as
a tool to migrate from your old storage subsystem to this new storage subsystem.
You used SAN Volume Controller to FlashCopy or Metro Mirror a volume onto another
volume, and you no longer need that host connected to the SAN Volume Controller.
You want to move a host, and its data, that currently is connected to the SAN Volume
Controller to a site where there is no SAN Volume Controller.
Changes to your environment no longer require this host to use the SAN Volume
Controller.
There are also other preparatory activities that we can perform before we shut down the host
and reconfigure the LUN masking and mapping. This section describes those activities. In our
example, we will move volumes that are located on a DS4500 to image mode volumes that
are located on a DS4700.
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as described in Adding a new
storage subsystem to the SAN Volume Controller on page 309 and Make fabric zone
changes on page 310.
Creating new LUNs
On our storage subsystem, we create two LUNs and mask the LUNs so that the SAN Volume
Controller can see them. These two LUNs eventually will be given directly to the host,
removing the volumes that it currently has. To check that the SAN Volume Controller can use
them, issue the svctask detectmdisk command, as shown in Example 6-46.
Example 6-46 Discover the new MDisks
IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged
120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged
100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000
Even though the MDisks will not stay in the SAN Volume Controller for long, we suggest that
you rename them to more meaningful names so that they do not get confused with other
MDisks that are being used by other activities. We also create a storage pool to hold our
new MDisks. Example 6-47 shows these tasks.
Example 6-47 Rename the MDisks
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
4 MDG_ESX_VD online 3 2
165.0GB 512 35.0GB 130.00GB 130.00GB
130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
IBM_2145:ITSO-CLS1:admin>
Our SAN Volume Controller environment is ready for the volume migration to image mode
volumes.
6.7.6 Migrating the managed volumes to image mode volumes
While our ESX server is still running, we migrate the managed volumes onto the new MDisks
using image mode volumes. The command to perform this action is the svctask
migratetoimage command, which is shown in Example 6-48.
Example 6-48 Migrate the volumes to image mode volumes
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk
ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk
ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5
MDG_IVD_ESX 120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5
MDG_IVD_ESX 100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
During the migration, our ESX server is unaware that its data is being physically moved
between storage subsystems. The server continues to run, and we can continue to use the
virtual machines that are running on it.
You can check the migration status with the svcinfo lsmigrate command, as shown in
Example 6-49 on page 316.
Example 6-49 The svcinfo lsmigrate command and output
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 2
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>
After the migration has completed, the image mode volumes are ready to be removed from
the ESX server, and the real LUNs can be mapped and masked directly to the host using the
storage subsystem's management tool.
6.7.7 Removing the LUNs from the SAN Volume Controller
Your ESX server's configuration determines the order in which your LUNs are removed from the
control of the SAN Volume Controller, and whether you need to reboot the ESX server and
suspend the VMware guests.
In our example, we have moved the virtual machine disks. Therefore, to remove these LUNs
from the control of the SAN Volume Controller, we must stop or suspend all of the VMware
guests that are using these LUNs. We must perform the following steps:
1. Check which SCSI LUN IDs are assigned to the migrated disks by using the svcinfo
lshostvdiskmap command, as shown in Example 6-50. Compare the vdisk_UID values in the
svcinfo lshostvdiskmap and svcinfo lsvdisk output to match each volume with its SCSI LUN ID.
Example 6-50 Note the SCSI LUN IDs
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name
wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD
210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD
210000E08B892BCD 60050768018301BF2800000000000029
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count
0 vdisk_A 0 io_grp0 online
2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online
4 MDG_ESX_VD 70.0GB striped
60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online
4 MDG_ESX_VD 60.0GB striped
60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:admin>
2. Shut down or suspend all guests that are using the LUNs. You can use the same method
that is used in Moving VMware guest LUNs on page 306 to identify the guests that are
using these LUNs.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command
(Example 6-51). To double-check that the volumes have been removed, use the svcinfo
lshostvdiskmap command, which shows that these volumes are no longer mapped to the
ESX server.
Example 6-51 Remove the volumes from the host
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD
4. Remove the volumes from the SAN Volume Controller by using the svctask rmvdisk
command, which makes the MDisks unmanaged, as shown in Example 6-52.
Example 6-52 Remove the volumes from the SAN Volume Controller
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
Cached data: When you run the svctask rmvdisk command, the SAN Volume
Controller first double-checks that there is no outstanding dirty cached data for the
volume that is being removed. If there is still uncommitted cached data, the command
fails with this error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SAN Volume Controller will automatically destage uncommitted cached data two
minutes after the last write activity for the volume. How much data there is to destage,
and how busy the I/O subsystem is, determine how long this command takes to
complete.
You can check if the volume has uncommitted data in the cache by using the svcinfo
lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This
attribute has the following meanings:
empty No modified data exists in the cache.
not_empty Modified data might exist in the cache.
corrupt Modified data might have existed in the cache, but the data has been
lost.
26 ESX_IVD_SLES online unmanaged
120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged
100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask
the disks from the SAN Volume Controller, and remap and remask them to the ESX server.
Remember that in
Example 6-50 on page 316, we recorded the SCSI LUN IDs. To map your LUNs on the
storage subsystem, use the same SCSI LUN IDs that you used in the SAN Volume
Controller.
6. Using the VMware Management Console, rescan to discover the new volume. Figure 6-95
on page 319 shows the view before the rescan. Figure 6-96 on page 319 shows the view
after the rescan. Note that the size of the LUN has changed, because we have moved to
another LUN on another storage subsystem.
Important: This is the last step that you can perform and still safely back out of
everything you have done so far.
Up to this point, you can reverse all of the actions that you have performed so far to get
the server back online without data loss:
Remap and remask the LUNs back to the SAN Volume Controller.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
Figure 6-95 Before adapter rescan
Figure 6-96 After adapter rescan
During the rescan, you might receive geometry errors when ESX discovers that the old disk
has disappeared. Your volume will appear with a new vmhba address, and VMware will
recognize it as our VMWARE-GUESTS disk.
7. We are now ready to restart the VMware guests.
8. Finally, to make sure that the MDisks are removed from the SAN Volume Controller, run
the svctask detectmdisk command. The MDisks are discovered as offline and then
automatically removed when the SAN Volume Controller determines that there are no
volumes associated with these MDisks.
6.8 Migrating AIX SAN disks to SAN Volume Controller volumes
In this section, we describe how to move the two LUNs that an AIX server accesses directly
from our DS4000 storage subsystem over to the SAN Volume Controller.
We manage those LUNs with the SAN Volume Controller, move them between other
managed disks, and then finally move them back to image mode disks so that those LUNs
can then be masked and mapped back to the AIX server directly.
Using this example can help you to perform any of the following activities in your environment:
Move an AIX server's SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SAN Volume Controller, which is the first activity that you perform when
introducing the SAN Volume Controller into your environment.
This section shows that your host downtime is only a few minutes while you remap and
remask disks using your storage subsystem LUN management tool. This step starts in
6.8.2, Preparing your SAN Volume Controller to virtualize disks on page 323.
Move data between storage subsystems while your AIX server is still running and
servicing your business application.
You can perform this activity if you are removing a storage subsystem from your SAN
environment and you want to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, taking into account availability, performance,
and redundancy. We describe this step in 6.8.4, Migrating image mode volumes to
volumes on page 330.
Move your AIX server's LUNs back to image mode volumes, so that they can be remapped
and remasked directly back to the AIX server.
This step starts in 6.8.5, Preparing to migrate from the SAN Volume Controller on
page 332.
Use these activities individually or together to migrate your AIX server's LUNs from one
storage subsystem to another storage subsystem by using the SAN Volume Controller as
your migration tool. If you do not use all three activities, you can introduce or remove the SAN
Volume Controller from your environment.
The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SAN Volume Controller.
We show our AIX environment in Figure 6-97.
Figure 6-97 AIX SAN environment
Figure 6-97 shows our AIX server connected to our SAN infrastructure. It has two LUNs
(hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.
The hdisk3 disk makes up the itsoaixvg volume group, and the hdisk4 disk makes up the
itsoaixvg1 volume group, as shown in Example 6-53 on page 322.
Example 6-53 AIX SAN configuration
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02 1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
hdisk3 0009cdda0a4c0dd5 itsoaixvg active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active
#
Our AIX server represents a typical SAN environment with a host directly using LUNs that
were created on a SAN storage subsystem, as shown in Figure 6-97 on page 321:
The AIX server's HBA cards are zoned so that they are in the Green (dotted line) Zone
with our storage subsystem.
The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem. Using
LUN masking, they are directly available to our AIX server.
6.8.1 Connecting the SAN Volume Controller to your SAN fabric
This section describes the steps to take to introduce the SAN Volume Controller into your
SAN environment. Although this section only summarizes these activities, you can
accomplish this task without any downtime to any host or application that also uses your
storage area network.
If you have a SAN Volume Controller already connected, skip to 6.8.2, Preparing your SAN
Volume Controller to virtualize disks on page 323.
Connecting the SAN Volume Controller to your SAN fabric requires that you perform these
tasks:
Assemble your SAN Volume Controller components (nodes, uninterruptible power supply
unit, and redundant ac power switches), cable the SAN Volume Controller correctly, power
on the SAN Volume Controller, and verify that the SAN Volume Controller is visible on your
SAN.
Create and configure your SAN Volume Controller system.
Create these additional zones:
A SAN Volume Controller node zone (our Black Zone in Example 6-66 on page 332)
A storage zone (our Red Zone)
A host zone (our Blue Zone)
Important: Be extremely careful when connecting the SAN Volume Controller to your
storage area network, because this action requires you to connect cables to your SAN
switches and to alter your switch zone configuration. Performing these activities incorrectly
can render your SAN inoperable, so make sure that you fully understand the effect of your
actions.
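As an illustration only, the switch-side zone definitions for the SAN Volume Controller node zone might look similar to the following sketch on a Brocade Fibre Channel switch. The alias names, zone name, configuration name, and WWPNs are hypothetical placeholders rather than values from our lab; adapt them to your own fabric, and use the equivalent commands if your switches are from another vendor.
alicreate "SVC_N1_P1", "50:05:07:68:01:40:AA:01"
alicreate "SVC_N2_P1", "50:05:07:68:01:40:AA:02"
zonecreate "Black_SVC_Nodes", "SVC_N1_P1; SVC_N2_P1"
cfgadd "ITSO_Fabric_Cfg", "Black_SVC_Nodes"
cfgenable "ITSO_Fabric_Cfg"
Repeat the alicreate, zonecreate, and cfgadd steps for the storage (Red) and host (Blue) zones before you enable the updated configuration.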
Figure 6-98 on page 323 shows our environment.
Figure 6-98 SAN environment with SAN Volume Controller attached
6.8.2 Preparing your SAN Volume Controller to virtualize disks
This section describes the preparatory tasks that we perform before taking our AIX server
offline. These tasks are all nondisruptive activities and do not affect your SAN fabric or your
existing SAN Volume Controller configuration (if you already have a production SAN Volume
Controller in place).
Creating a storage pool
When we move the two AIX LUNs to the SAN Volume Controller, they are first used in image
mode; therefore, we must create an empty storage pool to hold those disks, using the
commands in Example 6-54. We name the storage pool that holds our LUNs aix_imgmdg.
Example 6-54 Create empty mdiskgroup
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size
free_capacity virtual_capacity used_capacity real_capacity overallocation
warning
7 aix_imgmdg online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
IBM_2145:ITSO-CLS2:admin>
Creating our host definition
If you have prepared the zones correctly, the SAN Volume Controller can see the AIX server's
HBAs on the fabric (our host has two HBAs).
First, we get the WWNs for our AIX server's HBAs, because we have many hosts that are
connected to our SAN fabric and in the Blue Zone. We want to make sure that we have the
correct WWNs to reduce our AIX server's downtime. Example 6-55 shows the commands to
get the WWNs; our host has the WWNs 10000000C932A7FB and 10000000C932A800.
Example 6-55 Discover your WWN
#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
##
The svcinfo lshbaportcandidate command on the SAN Volume Controller lists all of the
WWNs that the SAN Volume Controller can see on the SAN fabric and that have not yet been
allocated to a host. Example 6-56 shows the ports that it found in our SAN fabric. (If a port
does not show up, a zone configuration problem exists.)
Example 6-56 Add the host to the SAN Volume Controller
IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
IBM_2145:ITSO-CLS2:admin>
After verifying that the SAN Volume Controller can see our host (Kanaga), we create the host
entry and assign the WWN to this entry, as shown with the commands in Example 6-57.
Example 6-57 Create the host entry
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn
10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:admin>
Verifying that we can see our storage subsystem
If we performed the zoning correctly, the SAN Volume Controller can see the storage
subsystem with the svcinfo lscontroller command (Example 6-58).
Example 6-58 Discover the storage controller
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814
IBM_2145:ITSO-CLS2:admin>
Getting the disk serial numbers
To avoid creating the wrong volumes when many available, unmanaged MDisks are visible
to the SAN Volume Controller, we obtain the LUN serial numbers from our storage subsystem
administration tool (Storage Manager).
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager will provide the LUN
serial numbers. Right-click your logical drive and choose Properties. The following figures
show our serial numbers. Figure 6-99 on page 327 shows disk serial number kanage_lun0.
Names: The svctask chcontroller command enables you to change the discovered
storage subsystem name in the SAN Volume Controller. In complex SANs, we suggest that
you rename your storage subsystem to a more meaningful name.
Figure 6-99 Obtaining disk serial number kanage_lun0
Figure 6-100 shows disk serial number kanage_Lun1.
Figure 6-100 Obtaining disk serial number kanage_Lun1
We are ready to move the ownership of the disks to the SAN Volume Controller, discover
them as MDisks, and give them back to the host as volumes.
6.8.3 Moving the LUNs to the SAN Volume Controller
In this step, we move the LUNs that are assigned to the AIX server and reassign them to the
SAN Volume Controller.
Because we only want to move the LUNs that hold our application and data files, we can
move them without rebooting the host. The only requirement is that we unmount the file
systems and vary off the VGs to ensure data integrity after the reassignment.
Because we intend to move both LUNs at the same time, the following steps are required:
1. Confirm that the SDD is installed.
2. Unmount and vary off the VGs:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG with the varyoffvg
VOLUMEGROUP_NAME command.
Example 6-59 shows the commands that we ran on Kanaga.
Example 6-59 AIX command sequence
#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg
3. Using Storage Manager (our storage subsystem management tool), we can unmap and
unmask the disks from the AIX server and remap and remask the disks to the SAN
Volume Controller.
4. From the SAN Volume Controller, discover the new disks with the svctask detectmdisk
command. The disks will be discovered and named mdiskN, where N is the next available
mdisk number (starting from 0). Example 6-60 shows the commands that we used to
discover our MDisks and to verify that we have the correct MDisks.
Example 6-60 Discover the new MDisks
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 mdisk24 online unmanaged
5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
Before you start: Moving LUNs to the SAN Volume Controller requires that the
Subsystem Device Driver (SDD) is installed on the AIX server. You can install the SDD
ahead of time; however, the installation might require an outage of your host.
25 mdisk25 online unmanaged
8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
5. After we have verified that we have the correct MDisks, we rename them to avoid
confusion in the future when we perform other MDisk-related tasks (Example 6-61).
Example 6-61 Rename the MDisks
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online unmanaged
5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged
8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
6. We create our image mode volumes with the svctask mkvdisk command and the option
-vtype image (Example 6-62). This command virtualizes the disks in the exact same
layout as though they were not virtualized.
Example 6-62 Create the image mode volumes
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype
image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype
image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:admin>
7. Finally, we can map the new image mode volumes to the host (Example 6-63).
Example 6-63 Map the volumes to the host
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>
Important: Match your discovered MDisk serial numbers (the UID on the svcinfo
lsmdisk command task display) with the serial number that you discovered earlier (in
Figure 6-99 and Figure 6-100 on page 327).
FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process has completed before starting your application.
Now, we are ready to perform the following steps to put the image mode volumes online (a
brief AIX command sketch follows this list):
1. Remove the old disk definitions, if you have not done so already.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the VG. Then, run the
varyonvg VOLUME_GROUP command to activate the VG.
4. Mount your file systems with the mount /MOUNT_POINT command.
5. You are ready to start your application.
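As a minimal sketch of these steps, assuming that the volume group definitions are still known to the system, the AIX command sequence might look similar to the following example. The mount point /itsoaixfs is illustrative; use lspv to confirm which newly discovered devices carry the itsoaixvg and itsoaixvg1 volume groups before you activate them.
#cfgmgr -vs
#lspv
#varyonvg itsoaixvg
#varyonvg itsoaixvg1
#mount /itsoaixfs
If a volume group is no longer known to the system, an importvg -y itsoaixvg command against the appropriate device might be needed before the varyonvg command.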
6.8.4 Migrating image mode volumes to volumes
While the AIX server is still running and our file systems are in use, we migrate the image
mode volumes onto striped volumes, with the extents being spread over three other MDisks.
Preparing MDisks for striped mode volumes
From our storage subsystem, we performed these tasks:
Created and allocated three LUNs to the SAN Volume Controller
Discovered them as MDisks
Renamed these LUNs to more meaningful names
Created a new storage pool
Put all these MDisks into this storage pool
You can see the output of our commands in Example 6-64.
Example 6-64 Create a new storage pool
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged
6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged
6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged
6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
Migrating the volumes
We are ready to migrate the image mode volumes onto striped volumes with the svctask
migratevdisk command (Example 6-24 on page 291).
While the migration is running, our AIX server is still running and we can continue accessing
the files.
To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 6-65. Listing the storage pool with the svcinfo lsmdiskgrp command
shows that the free capacity on the old storage pool is slowly increasing while those extents
are moved to the new storage pool.
Example 6-65 Migrating image mode volumes to striped volumes
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>
After this task has completed, Example 6-66 on page 332 shows that the volumes are spread
over three MDisks in the aix_vd storage pool. The old storage pool is empty.
Example 6-66 Migration complete
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>
Our migration to the SAN Volume Controller is complete. You can remove the original MDisks
from the SAN Volume Controller, and you can remove these LUNs from the storage
subsystem.
If these LUNs were the last LUNs in use on our storage subsystem, we can also remove that
storage subsystem from our SAN fabric.
6.8.5 Preparing to migrate from the SAN Volume Controller
Before we change the AIX server's LUNs from being accessed by the SAN Volume Controller
as volumes to being accessed directly from the storage subsystem, we need to convert the
volumes into image mode volumes.
You can perform this activity for one of these reasons:
You purchased a new storage subsystem, and you were using the SAN Volume Controller
as a tool to migrate from your old storage subsystem to this new storage subsystem.
You used the SAN Volume Controller to FlashCopy or Metro Mirror a volume onto another
volume, and you no longer need that host connected to the SAN Volume Controller.
You want to move a host, and its data, that is currently connected to the SAN Volume
Controller to a site where there is no SAN Volume Controller.
Changes to your environment no longer require this host to use the SAN Volume
Controller.
There are other preparatory activities to be performed before we shut down the host and
reconfigure the LUN masking and mapping. This section covers those activities.
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as shown in Figure 6-101.
Figure 6-101 Environment with SAN Volume Controller
Making fabric zone changes
The first step is to set up the SAN configuration so that all of the zones are created. Add the
new storage subsystem to the Red Zone, so that the SAN Volume Controller can
communicate with it directly.
Create a Green Zone for our host to use when we are ready for it to access the disk directly,
after it has been removed from the SAN Volume Controller.
It is assumed that you have created the necessary zones.
After your zone configuration is set up correctly, the SAN Volume Controller sees the new
storage subsystem's controller by using the svcinfo lscontroller command, as shown in
Example 6-67 on page 334. It is also useful to rename the controller to a more meaningful
name with the svctask chcontroller -name command; a hedged example follows Example 6-67.
Example 6-67 Discovering the new storage subsystem
IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS2:admin>
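For example, a rename of the target controller might look similar to the following sketch; the new name is illustrative, and controller ID 0 corresponds to the DS4500 in our output.
IBM_2145:ITSO-CLS2:admin>svctask chcontroller -name ITSO_DS4500 0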
Creating new LUNs
On our storage subsystem, we created two LUNs and masked them so that the SAN Volume
Controller can see them. We will eventually give these LUNs directly to the host, removing the
volumes that it currently has. To check that the SAN Volume Controller can use the LUNs,
issue the svctask detectmdisk command, as shown in Example 6-68.
In our example, we use two 10 GB LUNs that are located on the DS4500 subsystem. Thus, we
migrate back to image mode volumes and move to another storage subsystem in a single
step. We have already deleted the old LUNs on the DS4700 storage subsystem, which is why
they appear offline here.
Example 6-68 Discover the new MDisks
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX offline managed 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29 online unmanaged
10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30 online unmanaged
10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
Even though the MDisks will not stay in the SAN Volume Controller for long, we suggest that
you rename them to more meaningful names so that they are not confused with MDisks that
are used by other activities. Also, we create the storage pool to hold our new MDisks, as
shown in Example 6-69 on page 335.
Example 6-69 Rename the MDisks
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
6 aix_vd online 3 2
18.0GB 512 5.0GB 13.00GB 13.00GB
13.00GB 72 0
7 aix_imgmdg offline 2 0
13.0GB 512 13.0GB 0.00MB 0.00MB
0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>
At this point, our SAN Volume Controller environment is ready for the volume migration to
image mode volumes.
6.8.6 Migrating the managed volumes
While our AIX server is still running, we migrate the managed volumes onto the new MDisks
using image mode volumes. The command to perform this action is the svctask
migratetoimage command, which is shown in Example 6-70.
Example 6-70 Migrate the volumes to image mode volumes
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG
-mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1
-mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX offline managed 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG online image 3
KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online image 3
KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>
During the migration, our AIX server is unaware that its data is being moved physically
between storage subsystems.
After the migration is complete, the image mode volumes are ready to be removed from the
AIX server, and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem's tool.
6.8.7 Removing the LUNs from the SAN Volume Controller
The next step will require downtime while we remap and remask the disks so that the host
sees them directly through the Green Zone.
Because our LUNs only hold data files, and because we use a unique VG, we can remap and
remask the disks without rebooting the host. The only requirement is that we unmount the file
system and vary off the VG to ensure data integrity after the reassignment.
Follow these required steps to remove the SAN Volume Controller:
1. Confirm that the correct device driver for the new storage subsystem is loaded. Because
we are moving to a DS4500, we can continue to use the SDD.
2. Shut down any applications and unmount the file systems:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG with the varyoffvg
VOLUMEGROUP_NAME command.
Before you start: Moving LUNs to another storage system might require a driver other than
the SDD. Check with the storage subsystem's vendor to see which driver you will need. You
might be able to install this driver ahead of time.
3. Remove the volumes from the host by using the svctask rmvdiskhostmap command
(Example 6-71). To double-check that you have removed the volumes, use the svcinfo
lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX
server.
Example 6-71 Remove the volumes from the host
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>
4. Remove the volumes from the SAN Volume Controller by using the svctask rmvdisk
command, which will make the MDisks unmanaged, as shown in Example 6-72.
Example 6-72 Remove the volumes from the SAN Volume Controller
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
29 AIX_MIG online unmanaged
10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged
10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>
Cached data: When you run the svctask rmvdisk command, the SAN Volume
Controller first double-checks that there is no outstanding dirty cached data for the
volume that is being removed. If uncommitted cached data still exists, the command
fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You will have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SAN Volume Controller will automatically destage uncommitted cached data two
minutes after the last write activity for the volume. How much data there is to destage,
and how busy the I/O subsystem is, determine how long this command takes to
complete.
You can check whether the volume has uncommitted data in the cache by using the
svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute.
This attribute has the following meanings:
empty No modified data exists in the cache.
not_empty Modified data might exist in the cache.
corrupt Modified data might have existed in the cache, but any modified data
has been lost.
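For example, before you retry the command, you can display one of our volumes and look at the fast_write_state attribute in the output:
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk IVD_Kanaga
When the attribute reports empty, the cached data has been destaged and the svctask rmvdisk command can be issued again.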
5. Using Storage Manager (our storage subsystem management tool), unmap and unmask
the disks from the SAN Volume Controller back to the AIX server.
We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and
mapping were done correctly, our AIX server sees the LUNs as though nothing has happened:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disk.
3. Remove the references to all of the old disks. Example 6-73 shows the removal using SDD
and Example 6-74 on page 339 shows the removal using SDDPCM.
Example 6-73 Remove references to old paths using SDD
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk5 Defined 1Z-08-02 SAN volume Controller Device
hdisk6 Defined 1Z-08-02 SAN volume Controller Device
hdisk7 Defined 1D-08-02 SAN volume Controller Device
hdisk8 Defined 1D-08-02 SAN volume Controller Device
hdisk10 Defined 1Z-08-02 SAN volume Controller Device
hdisk11 Defined 1Z-08-02 SAN volume Controller Device
hdisk12 Defined 1D-08-02 SAN volume Controller Device
hdisk13 Defined 1D-08-02 SAN volume Controller Device
vpath0 Defined Data Path Optimizer Pseudo Device Driver
vpath1 Defined Data Path Optimizer Pseudo Device Driver
vpath2 Defined Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done
vpath0 deleted
Important: This step is the last step that you can perform and still safely back out of
everything you have done so far.
Up to this point, you can reverse all of the actions that you have performed so far to get
the server back online without data loss:
Remap and remask the LUNs back to the SAN Volume Controller.
Run the svctask detectmdisk command to rediscover the MDisks.
Recreate the volumes with the svctask mkvdisk command.
Remap the volumes back to the server with the svctask mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
#
Example 6-74 Remove references to old paths using SDDPCM
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined 1D-08-02 MPIO FC 2145
hdisk4 Defined 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02 MPIO FC 2145
4. If your application and data are on an LVM volume, rediscover the VG and then run the
varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems with the mount /MOUNT_POINT command.
6. You are ready to start your application.
Finally, to make sure that the MDisks are removed from the SAN Volume Controller, run the
svctask detectmdisk command. The MDisks will first be discovered as offline. Then, they will
automatically be removed after the SAN Volume Controller determines that there are no
volumes that are associated with these MDisks.
6.9 Using SAN Volume Controller for storage migration
The primary use of the SAN Volume Controller is not as a storage migration tool. However,
the advanced capabilities of the SAN Volume Controller enable us to use the SAN Volume
Controller as a storage migration tool. Therefore, you can add the SAN Volume Controller
temporarily to your SAN environment to copy the data from one storage subsystem to another
storage subsystem. The SAN Volume Controller enables you to copy image mode volumes
directly from one subsystem to another subsystem while host I/O is running. The only
downtime that is required is when the SAN Volume Controller is added to and removed from
your SAN environment.
To use the SAN Volume Controller for migration purposes only, perform the following steps:
1. Add the SAN Volume Controller to your SAN environment.
2. Prepare the SAN Volume Controller.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SAN Volume Controller between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the
host.
8. Remove the SAN Volume Controller from your SAN.
9. Mount the LUNs, or start the host again.
10.The migration is complete.
As you can see, extremely little downtime is required. If you prepare everything correctly, you
can reduce your downtime to a few minutes. The copy process is handled by the SAN
Volume Controller, so it does not burden the host while the migration progresses.
To use the SAN Volume Controller for storage migrations, perform the steps that are
described in the following sections:
6.5.2, Adding the SAN Volume Controller between the host system and the LSI 3500 on
page 244
6.5.6, Migrating the volume from image mode to image mode on page 267
6.5.7, Removing image mode data from the SAN Volume Controller on page 275
6.10 Using volume mirroring and thin-provisioned volumes
together
In this section, we show that you can use the volume mirroring feature and thin-provisioned
volumes together to move data from a fully allocated volume to a thin-provisioned volume.
6.10.1 Zero detect feature
The zero detect feature for thin-provisioned volumes enables clients to reclaim unused
allocated disk space (zeros) when converting a fully allocated volume to a thin-provisioned
volume using volume mirroring.
To migrate from a fully allocated volume to a thin-provisioned volume, perform these steps:
1. Add the target thin-provisioned copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.
By using this feature, clients can free up managed disk space easily and make better use of
their storage, without needing to purchase any additional function for the SAN Volume
Controller.
Volume mirroring and thin-provisioned volume functions are included in the base virtualization
license. Clients with thin-provisioned storage on an existing storage system can migrate their
data under SAN Volume Controller management using thin-provisioned volumes without
having to allocate additional storage space.
Zero detect only works if the disk actually contains zeros. An uninitialized disk can contain
anything, unless the disk has been formatted (for example, using the -fmtdisk flag on the
mkvdisk command).
Figure 6-102 on page 341 shows the thin-provisioned volume zero detect concept.
Figure 6-102 The thin-provisioned volume zero detect feature
Figure 6-103 shows the thin-provisioned volume organization.
Figure 6-103 The thin-provisioned volume organization
As shown in Figure 6-103 on page 341, a thin-provisioned volume has these components:
Used capacity
This term specifies the portion of real capacity that is being used to store data. For
non-thin-provisioned copies, this value is the same as the volume capacity. If the volume
copy is thin-provisioned, the value increases from zero to the real capacity value as more
of the volume is written to.
Real capacity
This capacity is the real allocated space in the storage pool. In a thin-provisioned volume,
this value can differ from the total capacity.
Free capacity
This value specifies the difference between the real capacity and the used capacity
values. If the volume copy has been configured with the -autoexpand option, the SAN
Volume Controller automatically expands the real capacity as the used capacity grows,
so that this amount of free (contingency) capacity is maintained.
Grains
This value is the smallest unit into which the allocated space can be divided.
Metadata
This value is allocated in the real capacity, and it tracks the used capacity, real capacity,
and free capacity.
6.10.2 Volume mirroring with thin-provisioned volumes
In this section, we show an example of using the volume mirror feature with thin-provisioned
volumes:
1. We create a fully allocated volume of 15 GB named VD_Full, as shown in Example 6-75.
Example 6-75 VD_Full creation example
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk
0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
.
.
vdisk_UID 60050768018401BF280000000000000B
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
2. We then add a thin-provisioned volume copy with the volume mirroring option by using the
addvdiskcopy command and the autoexpand parameter, as shown in Example 6-76.
Example 6-76 addvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype
striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
VDisk [2] copy [1] successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
sync_rate 50
copy_count 2
copy_id 0
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
As you can see in Example 6-76, VD_Full now has a copy_id 1 whose used_capacity is
0.41 MB, which corresponds to the metadata, because the disk contains only zeros.
The real_capacity is 323.57 MB, which corresponds to the -rsize 2% value that is specified in
the addvdiskcopy command. The free_capacity is 323.17 MB, which equals the real
capacity minus the used capacity.
Because zeros that are written to the disk are not stored, the thin-provisioned copy does not
consume space. Example 6-77 shows that the thin-provisioned copy still does not consume
space even when the copies are in sync.
Example 6-77 Thin-provisioned volume display
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress
estimated_completion_time
2 VD_Full 0 100
2 VD_Full 1 100
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
3. We can split the volume mirror or remove one of the copies, keeping the thin-provisioned
copy as our valid copy by using the splitvdiskcopy command or the rmvdiskcopy
command:
If you need the copy as a thin-provisioned clone, we suggest that you use the
splitvdiskcopy command, because it generates a new volume that you can map to any
server that you want.
If you are migrating from a fully allocated volume to a thin-provisioned volume without any
effect on the server operations, we suggest that you use the rmvdiskcopy command to
remove the fully allocated copy. In this case, the original volume name is kept and the
volume remains mapped to the same server.
Example 6-78 shows the splitvdiskcopy command.
Example 6-78 splitvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online
0 MDG_DS47 15.00GB striped
60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online
1 MDG_DS83 15.00GB striped
60050768018401BF280000000000000D 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
Example 6-79 shows the rmvdiskcopy command.
Example 6-79 rmvdiskcopy command
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online
1 MDG_DS83 15.00GB striped
60050768018401BF280000000000000B 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
Chapter 7. Advanced features for storage
efficiency
In this chapter, we introduce the basic concepts of dynamic data relocation and storage
optimization features. The IBM SAN Volume Controller family of products offers the software
functions for storage efficiency. Those include:
Easy Tier
Thin provisioning
Real-time compression
In the following text, we provide only a basic technical overview of each feature and its
benefits. More details about planning and configuration are available in these IBM Redbooks
publications:
Easy Tier
Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
IBM System Storage SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521
IBM DS8000 Easy Tier, REDP-4667 (this concept is similar to SAN Volume Controller
Easy Tier)
Thin Provisioning
Thin Provisioning in an IBM SAN or IP SAN Enterprise Environment, REDP-4265
DS8000 Thin Provisioning, REDP-4554 (similar concept to SAN Volume Controller
Thin Provision)
Real-Time Compression
Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859
Implementing IBM Real-time Compression in SAN Volume Controller and IBM
Storwize V7000, TIPS1083
Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
7.1 Introduction
In modern and complex application environments, the increasing and often unpredictable
demands for storage capacity and performance lead to issues of planning and optimization of
storage resources.
Consider the following typical storage management issues:
Usually when a storage system is implemented, only a portion of the configurable physical
capacity is deployed. When the storage system runs out of the installed capacity and more
capacity is requested, a hardware upgrade is implemented to add new physical resources
to the storage system. It is difficult to configure this new physical capacity so that the
overall storage resources remain evenly spread. Typically, the new capacity is allocated to
fulfill only new storage requests. The existing storage allocations do not benefit from the
new physical resources. Similarly, the new storage requests do not benefit from the
existing resources: only new resources are used.
In a complex production environment, it is not always possible to optimize storage
allocation for performance. The unpredictable rate of storage growth and the fluctuations
in throughput requirements, measured in I/O operations per second (IOPS), often lead to inadequate
performance. Furthermore, the tendency to use even larger volumes to simplify storage
management works against the granularity of storage allocation, and a cost-efficient
storage tiering solution becomes difficult to achieve. With the introduction of high
performing, but expensive, technologies such as solid-state drives (SSD), this challenge
becomes even more important.
The move to larger and larger physical disk drive capacities means that previous access
densities that were achieved with low-capacity drives can no longer be sustained.
Any business has applications that are more critical than others, and there is a need for
specific application optimization. Therefore, there is a need for the ability to relocate
specific application data to faster storage media.
While more and more servers are purchased with local SSDs attached for better
application response time, the data distribution across these direct-attached SSDs and
external storage arrays must be carefully addressed. An integrated and automated
approach is crucial to achieve performance improvement without compromise to data
consistency, especially in a disaster recovery situation.
All of these issues deal with data placement and relocation capabilities or data volume
reduction. Most of these challenges can be managed by keeping spare resources available
and by moving data with data mobility tools or operating system features (such as host-level
mirroring) to optimize storage configurations. However, all of these corrective actions are
expensive in terms of hardware resources, labor, and service availability.
Relocating data dynamically among the physical storage resources, or effectively reducing
the amount of data, transparently to the attached host systems, is becoming increasingly
important.
7.2 Easy Tier
In today's storage market, solid-state drives (SSDs) are emerging as an attractive alternative
to hard disk drives (HDDs). Because of their low response times, high throughput, and
energy-efficient IOPS characteristics, SSDs have the potential to allow your storage
infrastructure to achieve significant savings in operational costs. However, the acquisition
cost per GB for SSDs is currently much higher than for HDDs. SSD performance depends
heavily on workload characteristics, so SSDs need to be used together with HDDs. It is
critical to choose the right mix of drives and the right data placement to achieve optimal
performance at low cost. Maximum value can be derived by placing hot data with high I/O
density and low response time requirements on SSDs, while targeting HDDs for cooler data
that is accessed more sequentially and at lower rates.
Easy Tier automates the placement of data among different storage tiers, and it can be
enabled for internal and external storage. This no-charge IBM SAN Volume Controller
feature boosts your storage infrastructure performance through a combined software,
server, and storage solution.
7.2.1 Easy Tier concepts
Easy Tier is a no charge feature of IBM SAN Volume Controller that brings the enterprise
storage functions (originally available on IBM DS8000 and IBM XIV enterprise class storage
systems) to the midrange segment. It enables automated subvolume data placement
throughout different storage tiers to intelligently align the system with current workload
requirements and to optimize the usage of SSDs. This functionality includes the ability to
automatically and nondisruptively relocate data (at the extent level) from one tier to another
tier in either direction to achieve the best available storage performance for your workload in
your environment. Easy Tier reduces the I/O latency for hot spots, but it does not replace
storage cache. Both Easy Tier and storage cache solve a similar access latency workload
problem, but these two methods weigh differently in the algorithmic construction based on
locality of reference, recency, and frequency. Because Easy Tier monitors I/O performance
from the device end (after cache), it can pick up the performance issues that cache cannot
solve and complement the overall storage system performance.
In general, a storage environment's I/O is monitored at the volume level, and the entire
volume is always placed inside one appropriate storage tier. Monitoring I/O statistics on
single extents, moving them manually to an appropriate storage tier, and reacting to
workload changes would be too complex to do by hand.
The SSDs are treated no differently by the SAN Volume Controller than hard disk drives
(HDDs) regarding RAID arrays or MDisks.
The individual SSDs in the storage that is managed by the SAN Volume Controller are
combined into an array, usually in RAID 10 or RAID 5 format. It is unlikely that RAID6 SSD
arrays will be used due to the double parity overhead, with two logical SSDs used for parity
only. A LUN is created on the array and is then presented to the SAN Volume Controller as a
normal managed disk (MDisk).
As is the case for HDDs, the SSD RAID array format helps to protect against individual SSD
failures. Depending on your requirements, you can achieve more high availability protection
above the RAID level by using volume mirroring.
Easy Tier is a performance optimization function, as it automatically migrates, or moves,
extents belonging to a volume between different storage tiers (Figure 7-1). As this migration
works at the extent level, it is often referred to as sub-LUN migration.
Figure 7-1 Easy Tier
You can enable Easy Tier on a volume basis. It monitors the I/O activity and
latency of the extents on all Easy Tier enabled volumes over a 24-hour period. Based on the
performance log, it creates an extent migration plan and dynamically moves high activity or
hot extents to a higher disk tier within the same storage pool, as well as moving extents
whose activity has dropped off, or cooled, from higher disk tier MDisks back to a lower tier
MDisk.
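The Easy Tier setting can be changed explicitly at the storage pool and volume level from the CLI. The following lines are a hedged sketch only; the pool and volume names are illustrative, not from our lab configuration.
IBM_2145:ITSO-CLS2:admin>svctask chmdiskgrp -easytier on MultiTier_Pool
IBM_2145:ITSO-CLS2:admin>svctask chvdisk -easytier on VDisk_Prod01
You can verify the resulting settings afterward with the svcinfo lsmdiskgrp and svcinfo lsvdisk commands.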
7.2.2 Disk tiers
The MDisks (LUNs) presented to the SAN Volume Controller cluster are likely to have
different performance attributes because of the type of disk or RAID array that they reside on.
The MDisks can be on 15 K RPM Fibre Channel or SAS disk, Near-line SAS or SATA, or even
SSDs.
Thus, a storage tier attribute is assigned to each MDisk. The default is generic_hdd. With
SAN Volume Controller V6.1, a new disk tier attribute is available for SSDs and is known as
generic_ssd.
Keep in mind that the SAN Volume Controller does not automatically detect SSD MDisks.
Instead, all external MDisks are initially put into the generic_hdd tier by default. Then the
administrator must manually change the SSD tier to generic_ssd by using the command-line
interface (CLI) or GUI.
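From the CLI, assigning the SSD tier to an MDisk might look similar to the following sketch; the MDisk name is illustrative.
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -tier generic_ssd mdisk10
The detailed svcinfo lsmdisk view of that MDisk can then be used to confirm that its tier attribute has changed to generic_ssd.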
Single tier storage pools
Figure 7-2 on page 353 shows a scenario in which a single storage pool is populated with
MDisks that are presented by an external storage controller. In this solution, the striped or
mirrored volume can be measured by Easy Tier, but no action to optimize the performance
occurs.
Figure 7-2 Single tier storage pool with striped volume
MDisks that are used in a single-tier storage pool should have the same hardware
characteristics, for example, the same RAID type, RAID array size, disk type, disk
revolutions per minute (RPM), and controller performance characteristics.
Multitier storage pools
A multitier storage pool has a mix of MDisks with more than one type of disk tier attribute, for
example, a storage pool that contains a mix of generic_hdd and generic_ssd MDisks.
Figure 7-3 on page 353 shows a scenario in which a storage pool is populated with two
different MDisk types: one belonging to an SSD array and one belonging to an HDD array.
Although this example shows RAID 5 arrays, other RAID types can be used.
Figure 7-3 Multitier storage pool with striped volume
Adding SSD to the pool means that additional space is also now available for new volumes or
volume expansion.
7.2.3 Easy Tier process
The Easy Tier function has four main processes:
I/O Monitoring
This process operates continuously and monitors volumes for host I/O activity. It collects
performance statistics for each extent and derives averages for a rolling 24-hour period of
I/O activity.
Easy Tier makes allowances for large block I/Os and thus considers only I/Os of up
to 64 KB as migration candidates.
This process is efficient and adds negligible processing overhead to the SAN Volume
Controller nodes.
Data Placement Advisor
The Data Placement Advisor uses workload statistics to make a cost benefit decision as to
which extents are to be candidates for migration to a higher performance (SSD) tier.
This process also identifies extents that need to be migrated back to a lower (HDD) tier.
Data Migration Planner
By using the extents that were previously identified, the Data Migration Planner step builds
the extent migration plan for the storage pool.
Data Migrator
This process involves the actual movement or migration of the volume's extents up to, or
down from, the higher disk tier. The extent migration rate is capped so that a maximum of up
to 30 MBps is migrated, which equates to approximately 3 TB per day migrated between
disk tiers.
When it relocates volume extents, Easy Tier performs these actions:
It attempts to migrate the most active volume extents up to SSD first.
To ensure that a free extent is available, you might need to first migrate a less frequently
accessed extent back to the HDD.
A previous migration plan and any queued extents that are not yet relocated are
abandoned.
7.2.4 Easy Tier operating modes
Easy Tier has three main operating modes:
Off mode
Evaluation or measurement only mode
Automatic Data Placement or extent migration mode
Easy Tier off mode
With Easy Tier turned off, no statistics are recorded and no extent migration occurs.
Attention: Image mode and sequential volumes are not candidates for Easy Tier
automatic data placement.
Evaluation or measurement only mode
Easy Tier Evaluation or measurement only mode collects usage statistics for each extent in a
single tier storage pool where the Easy Tier value is set to on for both the volume and the
pool. This collection is typically done for a single-tier pool that contains only HDDs so that the
benefits of adding SSDs to the pool can be evaluated before any major hardware acquisition.
A dpa_heat.nodeid.yymmdd.hhmmss.data statistics summary file is created in the /dumps
directory of the SAN Volume Controller nodes. This file can be offloaded from the SAN
Volume Controller nodes with PSCP -load or by using the GUI as shown in the IBM System
Storage SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521
IBM Redbooks publication. A web browser is used to view the report that is created by the
tool.
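As a hedged example, the statistics file might be offloaded from a Windows management workstation with PuTTY Secure Copy; the saved PuTTY session name ITSO_SVC1, the user ID, the cluster address, the file name, and the target directory shown here are placeholders for your own values:
pscp -load ITSO_SVC1 admin@cluster_ip:/dumps/dpa_heat.node1.140327.120000.data c:\temp\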
Automatic Data Placement or extent migration mode
In Automatic Data Placement or extent migration operating mode, the storage pool parameter
-easytier on or auto must be set, and the volumes in the pool must have -easytier on. The
storage pool must also contain MDisks with different disk tiers, thus being a multitiered
storage pool.
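As an illustration, these settings can be applied from the CLI; the pool name Multi_Tier_Pool and the volume name vdisk0 in this sketch are example names only:
# Enable Easy Tier at the storage pool level (on, or leave the default auto)
svctask chmdiskgrp -easytier on Multi_Tier_Pool
# Enable Easy Tier for a volume in that pool
svctask chvdisk -easytier on vdisk0
# Confirm the Easy Tier settings in the detailed views
svcinfo lsmdiskgrp Multi_Tier_Pool
svcinfo lsvdisk vdisk0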
Dynamic data movement is transparent to the host server and application users of the data,
other than providing improved performance. Extents are automatically migrated as explained
in Implementation rules.
The statistic summary file is also created in this mode. This file can be offloaded for input to
the advisor tool. The tool produces a report on the extents that are moved to SSD and a
prediction of performance improvement that can be gained if more SSD arrays are available.
7.2.5 Implementation considerations
No Easy Tier license is required for the SAN Volume Controller. Easy Tier comes as part of
the V6.1 code. For Easy Tier to migrate extents, you need to have disk storage available that
has different tiers, for example a mix of SSD and HDD.
Implementation rules
Keep in mind the following implementation and operation rules when you use the IBM System
Storage Easy Tier function on the SAN Volume Controller:
Easy Tier automatic data placement is not supported on image mode or sequential
volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on
such volumes unless you convert image or sequential volume copies to striped volumes.
Automatic data placement and extent I/O activity monitors are supported on each copy of
a mirrored volume. Easy Tier works with each copy independently of the other copy.
Options: The Easy Tier function can be turned on or off at the storage pool level and at the
volume level.
Volume mirroring consideration: Volume mirroring can have different workload
characteristics on each copy of the data because reads are normally directed to the
primary copy and writes occur to both. Thus, the number of extents that Easy Tier
migrates to the SSD tier might be different for each copy.
If possible, the SAN Volume Controller creates new volumes or volume expansions by
using extents from MDisks from the HDD tier. However, it uses extents from MDisks from
the SSD tier if necessary.
When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier
automatic data placement mode is no longer active on that volume. Automatic data
placement is also turned off while a volume is being migrated even if it is between pools that
both have Easy Tier automatic data placement enabled. Automatic data placement for the
volume is re-enabled when the migration is complete.
Limitations
When you use IBM System Storage Easy Tier on the SAN Volume Controller, Easy Tier has
the following limitations:
Removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use
are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from the other tier are used.
Migrating extents
When Easy Tier automatic data placement is enabled for a volume, you cannot use the
svctask migrateexts CLI command on that volume.
Migrating a volume to another storage pool
When the SAN Volume Controller migrates a volume to a new storage pool, Easy Tier
automatic data placement between the two tiers is temporarily suspended. After the volume
is migrated to its new storage pool, Easy Tier automatic data placement between the
generic SSD tier and the generic HDD tier resumes for the moved volume, if appropriate.
When the SAN Volume Controller migrates a volume from one storage pool to another, it
attempts to migrate each extent to an extent in the new storage pool from the same tier as
the original extent. In several cases, such as where a target tier is unavailable, the other tier
is used. For example, the generic SSD tier might be unavailable in the new storage pool.
Migrating a volume to image mode.
Easy Tier automatic data placement does not support image mode. When a volume with
Easy Tier automatic data placement mode active is migrated to image mode, Easy Tier
automatic data placement mode is no longer active on that volume.
Image mode and sequential volumes cannot be candidates for automatic data placement,
however, Easy Tier supports evaluation mode for image mode volumes.
Best practices:
Always set the storage pool -easytier value to on instead of the default value auto.
This setting makes it easier to turn on evaluation mode for existing single tier pools, and no
further changes are needed when you move to multitier pools. For more information about
the mix of pool and volume settings, see the IBM System Storage SAN Volume Controller
Best Practices and Performance Guidelines, SG24-7521.
Using Easy Tier can make it more appropriate to use smaller storage pool extent sizes.
7.2.6 More information
Detailed planning and configuration considerations, best practices, and descriptions of the
monitoring and measurement tools are available in the IBM System Storage SAN Volume
Controller Best Practices and Performance Guidelines, SG24-7521, and Implementing IBM
Easy Tier with IBM Real-time Compression, TIPS1072, IBM Redbooks publications.
7.3 Thin provisioning
Thin provisioning, in a shared storage environment, is a method for optimizing utilization of
available storage. It relies on allocating blocks of data on demand rather than the traditional
method of allocating all the blocks up front. This methodology eliminates almost all white
space, which helps avoid the poor utilization rates (often as low as 10%) that occur in the
traditional storage allocation method, where large pools of storage capacity are allocated to
individual servers but remain unused (not written to).
Thin provisioning presents more storage space to the hosts or servers that are connected to
the storage system than is actually available on the storage system. The IBM SAN Volume
Controller has this capability for both Fibre Channel and iSCSI provisioned volumes.
An example of thin provisioning is when a storage system contains 5000 GB of usable
storage capacity, but the storage administrator has mapped volumes of 500 GB each to 15
hosts. In this example, the storage administrator makes 7500 GB of storage space visible to
the hosts even though the storage system has only 5000 GB of usable space (Figure 7-4). If
all 15 hosts immediately use all 500 GB provisioned to them, there would be a problem. The
storage administrator has to monitor the system and add storage as needed.
Figure 7-4 Concept of thin provisioning
You can think of thin provisioning as being similar to the way airlines sell more tickets for a
flight than there are physical seats, assuming that some passengers will not show up at
check-in. They do not assign actual seats at the time of sale, which avoids each client having
a claim on a specific seat number. The same concept applies here: thin provisioning is the
airline's overbooking practice, the IBM SAN Volume Controller is the plane, and its volumes
are the seats. The storage administrator (the airline ticketing system) has to closely monitor
the allocation process and set proper thresholds.
7.3.1 Configuring a thin-provisioned volume
Volumes can be configured as thin-provisioned or fully allocated. Thin-provisioned volumes
are created with real and virtual capacities. You can still create volumes by using a striped,
sequential, or image mode virtualization policy, just as you can with any other volume.
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the
capacity of the volume that is reported to other IBM System Storage SAN Volume Controller
components (such as FlashCopy or remote copy) and to the hosts.
A directory maps the virtual address space to the real address space. The directory and the
user data share the real capacity.
Thin-provisioned volumes come in two operating modes: autoexpand and non-autoexpand.
You can switch the mode at any time. If you select the autoexpand feature, the SAN Volume
Controller automatically adds a fixed amount of additional real capacity to the thin volume as
required. Therefore, the autoexpand feature attempts to maintain a fixed amount of unused
real capacity for the volume. This amount is known as the contingency capacity. The
contingency capacity is initially set to the real capacity that is assigned when the volume is
created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
A volume that is created without the autoexpand feature, and thus has a zero contingency
capacity, goes offline as soon as the real capacity is used and needs to expand.
Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity.
The real capacity can be manually expanded to more than the maximum that is required by
the current virtual capacity, and the contingency capacity is recalculated.
A thin-provisioned volume can be converted nondisruptively to a fully allocated volume, or
vice versa, by using the volume mirroring function. For example, you can add a
thin-provisioned copy to a fully allocated primary volume and then remove the fully allocated
copy from the volume after they are synchronized.
The fully allocated to thin-provisioned migration procedure uses a zero-detection algorithm so
that grains that contain all zeros do not cause any real capacity to be used.
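As a hedged sketch of this conversion from the CLI, a thin-provisioned copy is added to the fully allocated volume and the original copy is removed after synchronization; the pool name Pool0, the volume name vdisk0, and the copy ID 0 are example values:
# Add a thin-provisioned copy (2% initial real capacity, autoexpand) to the volume
svctask addvdiskcopy -mdiskgrp Pool0 -rsize 2% -autoexpand vdisk0
# Wait until the copies are synchronized
svcinfo lsvdisksyncprogress vdisk0
# Remove the original fully allocated copy (copy ID 0 in this example)
svctask rmvdiskcopy -copy 0 vdisk0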
Space allocation
When a thin-provisioned volume is initially created, a small amount of the real capacity is
used for initial metadata. Write I/Os to the grains of the thin volume (that were not previously
written to) cause grains of the real capacity to be used to store metadata and user data. Write
I/Os to the grains (that were previously written to) update the grain where data was previously
written.
Warning threshold: Enable the warning threshold (by using email or an SNMP trap) when
working with thin-provisioned volumes, on the volume, and on the storage pool side,
especially when you do not use the autoexpand mode. Otherwise, the thin volume goes
offline if it runs out of space.
Tip: Consider using thin-provisioned volumes as targets in the FlashCopy relationships.
Grain definition: The grain is defined when the volume is created and can be 32 KB,
64 KB, 128 KB, or 256 KB.
Smaller granularities can save more space, but they have larger directories. When you use
thin-provisioning with FlashCopy, specify the same grain size for both the thin-provisioned
volume and FlashCopy.
To create a thin-provisioned volume, choose the Create Volume option from the Volumes
section of the dynamic menu and select Thin Provision (Figure 7-5 on page 359). Enter the
required capacity and volume name.
Figure 7-5 Thin-provisioned volume creation
In the Advanced menu of this wizard, you can set virtual and real capacity, warning
thresholds, and grain size (Figure 7-6).
Figure 7-6 Advanced options
For the complete configuration procedure for thin-provisioned volumes, follow the guidance in
7.3.1, Configuring a thin-provisioned volume on page 358.
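The same kind of volume can also be created from the CLI with the mkvdisk command. The following single command is a minimal sketch with example names and sizes, assuming a storage pool named Pool0:
# 100 GB thin-provisioned volume: 2% real capacity, autoexpand, 256 KB grain, warning at 80%
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name thin_vol_01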
7.3.2 Performance considerations
Thin-provisioned volumes require more I/Os because of the directory accesses:
For truly random workloads, a thin-provisioned volume requires approximately one
directory I/O for every user I/O so that performance is 50% of a normal volume.
The directory is a two-way write-back cache (similar to the SAN Volume Controller fast-write
cache), so certain applications perform better.
Thin-provisioned volumes require more CPU processing so that the performance per I/O
group is lower.
Use the striping policy to spread thin-provisioned volumes across many storage pools.
Thin-provisioned volumes save capacity only if the host server does not write to the whole
volume. Whether a thin-provisioned volume works well partly depends on how the file
system allocates space:
Some file systems (for example, New Technology File System (NTFS)) write across the whole
volume before they overwrite deleted files. Other file systems reuse space in preference
to allocating new space.
File system problems can be moderated by tools, such as defrag, or by managing
storage by using host Logical Volume Managers (LVM).
The thin-provisioned volume also depends on how applications use the file system. For
example, some applications delete log files only when the file system is nearly full.
There is no single recommendation for thin-provisioned volumes. As explained previously,
the performance of thin-provisioned volumes depends on how they are used in the particular
environment. For the best performance, use fully allocated volumes instead of
thin-provisioned volumes.
7.3.3 Limitations of virtual capacity
A couple of factors (extent and grain size) limit the virtual capacity of thin-provisioned volumes
beyond the factors that limit the capacity of regular volumes. Table 7-1 shows the maximum
thin-provisioned volume virtual capacities for an extent size.
Table 7-1 Maximum thin volume virtual capacities for an extent size
Extent size in MB    Maximum volume real capacity in GB    Maximum thin virtual capacity in GB
16                   2,048                                 2,000
32                   4,096                                 4,000
64                   8,192                                 8,000
128                  16,384                                16,000
256                  32,768                                32,000
512                  65,536                                65,000
1024                 131,072                               130,000
2048                 262,144                               260,000
4096                 524,288                               520,000
8192                 1,048,576                             1,040,000
Important: Do not use thin-provisioned volumes where high I/O performance is required.
Table 7-2 shows the maximum thin-provisioned volume virtual capacities for a grain size.
Table 7-2 Maximum thin volume virtual capacities for a grain size
Grain size in KB    Maximum thin virtual capacity in GB
32                  260,000
64                  520,000
128                 1,040,000
256                 2,080,000
For more information and detailed performance considerations for configuring thin
provisioning, see the IBM System Storage SAN Volume Controller Best Practices and
Performance Guidelines, SG24-7521 Redbooks publication.
7.4 Real-time Compression
The IBM Real-time Compression software embedded in IBM System Storage SAN Volume
Controller and IBM Storwize V7000 solution addresses the requirements of primary storage
data reduction, including performance. It does so by using a purpose-built technology called
real-time compression (RACE engine). It offers the following benefits:
Compression for active primary data: IBM Real-time Compression can be used with active
primary data. Therefore, it supports workloads that are not candidates for compression in
other solutions. The solution supports online compression of existing data. It allows
storage administrators to regain free disk space in an existing storage system without
requiring administrators and users to clean up or archive data. This configuration
significantly enhances the value of existing storage assets, and the benefits to the
business are immediate. The capital expense of upgrading or expanding the storage
system is delayed.
Compression for replicated/mirrored data: Remote volume copies can be compressed in
addition to the volumes at the primary storage tier. This process reduces storage
requirements in Metro Mirror and Global Mirror destination volumes as well.
No changes to the existing environment are required: IBM Real-time Compression is part
of the storage system. It was designed with transparency in mind so that it can be
implemented without changes to applications, hosts, networks, fabrics, or external storage
systems. The solution is not apparent to hosts, so users and applications continue to work
as-is. Compression occurs within the Storwize V7000 or SAN Volume Controller system
itself.
Overall savings in operational expenses: More data is stored in a rack space, so fewer
storage expansion enclosures are required to store a data set. This reduced rack space
has the following benefits:
Reduced power and cooling requirements: More data is stored in a system, therefore
requiring less power and cooling per gigabyte or used capacity.
Reduced software licensing for additional functions in the system: More data stored per
enclosure reduces the overall spending on licensing.
Disk space savings are immediate: The space reduction occurs when the host writes the
data. This process is unlike other compression solutions in which some or all of the
reduction is realized only after a post-process compression batch job is run.
7.4.1 Real-time compression concepts
The RACE technology is based on over 40 patents that are not primarily about compression.
Instead, they define how to make industry standard Lempel-Ziv (LZ) compression of primary
storage operate in real-time and allow random access. The primary intellectual property
behind this is the RACE engine. At a high level, the IBM RACE component compresses data
written into the storage system dynamically. This compression occurs transparently, so Fibre
Channel and iSCSI connected hosts are not aware of the compression. RACE is an in-line
compression technology, meaning that each host write is compressed as it passes through
the SAN Volume Controller software to the disks. This has a clear benefit over other
compression technologies that are post-processing based. These technologies do not
provide immediate capacity savings, and therefore are not a good fit for primary storage
workloads such as databases and active data set applications.
When a host sends a write request, it is acknowledged by the write cache of the system, and
then staged to the storage pool. As part of its staging, it passes through the compression
engine, and is then stored in compressed format onto the storage pool. Writes are therefore
acknowledged immediately after they are received by the write cache, with compression occurring as
part of the staging to internal or external physical storage.
Capacity is saved when the data is written by the host because the host writes are smaller
when written to the storage pool.
IBM Real-time Compression is a self-tuning solution, similar to the Storwize V7000 and SAN
Volume Controller system itself. It is adapted to the workload that runs on the system at any
particular moment.
Random Access Compression Engine
The IBM patented Random Access Compression Engine inverts the traditional approach
to compression. It uses variable-size chunks for the input and fixed-size chunks for the
output. This method enables an efficient and consistent way to index the compressed data
because it is stored in fixed-size containers.
RACE technology is implemented into the SAN Volume Controller and Storwize V7000 thin
provisioning layer, and is an organic part of the stack. The SAN Volume Controller and
Storwize V7000 software stack is shown in Figure 7-7 on page 363. Compression is
transparently integrated with existing system management design. All of the SAN Volume
Controller and Storwize V7000 advanced features are supported on compressed volumes.
You can create, delete, migrate, map (assign), and unmap (unassign) a compressed volume
as though it were a fully allocated volume. This compression method provides non-disruptive
conversion between compressed and uncompressed volumes. This conversion provides a
uniform user-experience and eliminates the need for special procedures when dealing with
compressed volumes.
Tip: Implementing compression in SAN Volume Controller provides the same benefits
to externally virtualized storage systems.
When a host sends a write request to Storwize V7000 or SAN Volume Controller, it reaches
the cache layer. The host is immediately sent an acknowledgement of its I/Os. When the
cache layer de-stages to the RACE, the I/Os are sent to the thin-provisioning layer. They are
then sent to RACE and, if necessary, the original host write or writes. The metadata that holds
the index of the compressed volume is updated if needed, and is compressed as well.
Figure 7-7 RACE integration in IBM SAN Volume Controller
When a host sends a read request to SAN Volume Controller, it reaches the cache layer:
If there is a cache hit, the SAN Volume Controller cache replies to the host with the
requested data.
If there is a cache miss, the SAN Volume Controller cache sends an internal read request
to the thin-provisioning layer and then to RACE.
7.4.2 Configuring compressed volumes
To use compression on the SAN Volume Controller and Storwize V7000, licensing is required.
With the SAN Volume Controller, real-time compression is licensed by capacity, per terabyte
of virtual data. In the Storwize V7000, real-time compression is licensed per enclosure.
The configuration is similar to generic volumes and transparent to users. From the Volumes in
the dynamic menu, choose Create Volume and select Compressed (Figure 7-8 on
page 364). Choose the wanted storage pool and enter the required capacity and volume
name.
The summary at the bottom of the wizard provides information about the allocated (virtual)
capacity and the real capacity that the data consumes on this volume. In our example, we
defined a 15 GB volume, but the real capacity is only 307 MB (because no data has been
written by the host yet).
Figure 7-8 Configuring compressed volume
When the compressed volume is configured, you can directly map it to the host or do so later
on request. For more details about definition of compressed volumes, see 7.4.2, Configuring
compressed volumes on page 363.
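For reference, a compressed volume can also be created from the CLI by adding the -compressed flag to a thin-provisioned volume definition; the pool and volume names in this sketch are examples only:
# Create a 15 GB compressed volume (compressed volumes are thin-provisioned internally)
svctask mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 15 -unit gb -rsize 2% -autoexpand -compressed -name comp_vol_01
# Display the virtual, real, and used capacities of the new volume
svcinfo lsvdisk comp_vol_01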
7.4.3 Differences from IBM Real-time Compression Appliances
Although the underlying technology of the IBM Real-time Compression Appliances (RtCA)
and the built-in Real-time Compression in SAN Volume Controller and Storwize V7000 is the
same, there are some notable differences:
Appliance-based versus internal: IBM Real-time Compression Appliance is
implemented as an add-on to an NAS storage system, whereas SAN Volume Controller
and Storwize V7000 compression is integral to the system.
File-based versus block-based: Real-time Compression Appliance compresses data
written to files, whereas SAN Volume Controller/Storwize V7000 compression is for data
written to block devices. However, this distinction makes no practical difference to the user.
Block devices typically contain file systems, so the SAN Volume Controller and Storwize
V7000 compression is applicable to files as well.
Exports versus volumes: Real-time Compression Appliance configuration of which data to
compress is based on exports and shares, whereas in SAN Volume Controller and
Storwize V7000 the configuration is per volume.
Supported external storage systems: the Real-time Compression Appliance support
matrix is focused on the major NAS storage systems. SAN Volume Controller and
Storwize V7000 support the major SAN storage systems.
For more information about real-time compression and its deployment in IBM SAN Volume
Controller, see the Real-time Compression in SAN Volume Controller and Storwize V7000,
REDP-4859 Redbooks publication.
Chapter 8. Advanced Copy Services
Before proceeding in this chapter, review the content of the Advanced Copy Services
Overview in 2.7, Advanced Copy Services overview on page 36, where we first describe
these functions at a high-level view.
In this chapter, we will discuss in detail the Advanced Copy Services functions that are
available for the IBM System Storage SAN Volume Controller. The majority of these functions
are also available for the IBM Storwize product family.
In this chapter, we also describe the native IP Replication, which is introduced in 2.13.2, SAN
Volume Controller 7.2.0 new features on page 67.
In Chapter 9, SAN Volume Controller operations using the command-line interface on
page 471, we explain how to use the command-line interface (CLI) and Advanced Copy
Services.
In Chapter 10, SAN Volume Controller operations using the GUI on page 627, we explain
how to use the GUI and Advanced Copy Services.
8.1 FlashCopy
The FlashCopy function of the IBM System Storage SAN Volume Controller provides the
capability to perform a point-in-time copy of one or more volumes. In this section, we
describe the inner workings of FlashCopy and provide details of its configuration and use.
You can use FlashCopy to help you solve critical and challenging business needs that require
duplication of data of your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and cache and is therefore transparent to the host.
While the FlashCopy operation is performed, the source volume is frozen briefly to initialize
the FlashCopy bitmap and then I/O is allowed to resume. Although several FlashCopy options
require the data to be copied from the source to the target in the background, which can take
some time to complete, the resulting data on the target volume is presented so that the copy
appears to have completed immediately. This process is done through the use of a bitmap (or
bit array), which keeps track of changes to the data after the FlashCopy is initiated and an
indirection layer, which allows data to be read from the source volume transparently.
8.1.1 Business requirements for FlashCopy
When deciding if FlashCopy will address your challenges, you need to adopt a combined
business and technical view of the problems that you need to solve. First, determine the
needs from a business perspective. Then, determine if FlashCopy can address the technical
needs of those business requirements.
The business applications for FlashCopy are wide ranging. Common use cases for
FlashCopy include, but are not limited to these examples:
Rapidly creating consistent backups of dynamically changing data
Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
Rapidly creating copies of production datasets for application development and testing
Rapidly creating copies of production datasets for auditing purposes and data mining
Rapidly creating copies of production datasets for quality assurance
Regardless of your business needs, FlashCopy within the SAN Volume Controller is
extremely flexible and has a broad feature set, making it applicable to many scenarios.
8.1.2 Backup improvements with FlashCopy
FlashCopy does not reduce the time that it takes to perform a backup to traditional backup
infrastructure. However, it can be used to minimize and, under certain conditions, eliminate
application downtime that is associated with performing backups or transfer the resource
consumption of performing intensive backups from production systems. After the FlashCopy
is performed, the resulting image of the data can be backed up to tape, as if it were the source
system. After the copy to tape has been completed, the image data is redundant and the
target volumes can be discarded. For time-limited applications, such as these examples, no
Important: Because FlashCopy operates at the block level, below the host operating
system and cache, those levels do need to be flushed for consistent FlashCopies.
copy or incremental FlashCopy is used most often. Using these methods puts less load on
your infrastructure.
Usually when FlashCopy is used for backup purposes, the target data is managed as
read-only at the operating system level. This approach provides extra security by ensuring
that your target data has not been modified and remains true to the source.
8.1.3 Restore with FlashCopy
FlashCopy has the ability to perform a restore from any existing FlashCopy mapping.
Therefore, you can restore (or copy) from the target to the source of your regular FlashCopy
relationships. (It might be easier to think of this method as reversing the direction of the
FlashCopy mappings.) This capability has several benefits:
There is no need to worry about pairing mistakes; you trigger a restore.
The restore appears instantaneous.
You can maintain a pristine image of your data while restoring what was the primary data.
This approach can be used for a variety of applications, such as recovering your production
database application after an errant batch process caused extensive damage.
In addition to the restore option, which copies the entire target volume to the source volume,
the target can be used to perform a restore of individual files. To do that, make the target
available on a host. We suggest that you do not make the target available to the source host,
because seeing duplicate disks causes problems for most host operating systems. Copy the
files to the source through normal host data copy methods for your environment.
8.1.4 Moving and migrating data with FlashCopy
FlashCopy can be used to facilitate the movement or migration of data between hosts while
minimizing downtime for applications. FlashCopy will allow application data to be copied from
source volumes to new target volumes while applications remain online. After the volumes are
fully copied and synchronized, the application can be brought down and then immediately
brought back up on the new server accessing the new FlashCopy target volumes.
This method differs from the other migration methods, which are discussed later in this
chapter. This method, using FlashCopy, is typically faster and more efficient from a labor
perspective than the other methods.
Common uses for this capability are host and back-end storage hardware refreshes.
8.1.5 Application testing with FlashCopy
It is often important to test a new version of an application or operating system using actual
production data. This testing ensures the highest quality possible for your environment.
FlashCopy makes this type of testing easy to accomplish without putting the production data
at risk or requiring downtime to create a constant copy. You simply create a FlashCopy of your
source and use that for your testing. This copy is a duplicate of your production data down to
Preferred practices: While restoring from a FlashCopy is significantly quicker than a
traditional tape media restore, you must not use restoring from a FlashCopy as a substitute
for good archiving practices. Instead, keep one to several iterations of your FlashCopies so
that you can near-instantly recover your data from the most recent history and keep your
long-term archive as appropriate for your business.
the block level, so that even physical disk identifiers will be copied. Therefore, it is impossible
for your applications to tell the difference.
8.1.6 Host and application considerations to ensure FlashCopy integrity
Because FlashCopy is at the block level, it is necessary to understand the interaction
between your application and the host operating system. From a logical standpoint, it is
perhaps easiest to think of these objects as layers that sit on top of one another. The
application is the topmost layer, and beneath it is the operating system layer. Both of these
layers have various levels and methods of caching data to provide better speed. Because the
SAN Volume Controller and thus FlashCopy sit below these layers, they are not aware of the
cache at the application or operating system layers. In order to ensure the integrity of the copy
that is made, it is necessary to flush the host operating system and application cache for any
outstanding reads or writes prior to performing the FlashCopy operation. Failing to flush the
host operating system and application cache will produce what is referred to as a crash
consistent copy. The resulting copy will require the same type of recovery procedure, such as
log replay and filesystem checks, that is required following a host crash. FlashCopies that are
crash consistent are usually able to be used following file system and application recovery
procedures.
Various operating systems and applications provide facilities to stop I/O operations and
ensure that all data is flushed from host cache. If these facilities are available, they can be
used to prepare before starting a FlashCopy operation. When this type of facility is not
available, the host cache must be flushed manually by quiescing the application and
unmounting the filesystem or drives.
8.1.7 FlashCopy attributes
The FlashCopy function in SAN Volume Controller possesses the following attributes:
The target is the time-zero copy of the source, which is known as FlashCopy mapping
targets.
FlashCopy produces an exact copy of the source volume, including any metadata that was
written by the host operating system, logical volume manager, and applications.
The source volume and target volume are available (almost) immediately following the
FlashCopy operation.
The source and target volumes must be the same virtual size.
The source and target volumes must be on the same SAN Volume Controller clustered
system.
The source and target volumes do not need to be in the same I/O Group or storage pool.
The storage pool extent sizes can differ between the source and target.
The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
Preferred practice: From a practical standpoint, when you have an application that is
backed by a database and you want to make a FlashCopy of that applications data, it is
sufficient in most cases to use the write-suspend method that is available in most modern
databases, because the database maintains strict control over I/O. This approach is in
contrast to flushing data from both the application and the backing database, which is always
the preferred method because it is safer. However, the write-suspend method can be used
when such facilities do not exist or when your environment is time sensitive.
The target volumes can be the source volumes for other FlashCopy relationships
(cascaded FlashCopy).
Consistency Groups are supported to enable FlashCopy across multiple volumes.
Up to 127 Consistency Groups are supported for FlashCopy.
The target volume can be updated independently of the source volume.
Bitmaps governing I/O redirection (I/O indirection layer) are maintained in both nodes of
the SAN Volume Controller I/O Group to prevent a single point of failure.
FlashCopy mapping and Consistency Groups can be automatically withdrawn after the
completion of the background copy.
Thin-provisioned FlashCopy will only consume disk space when updates are made to the
source or target data and not for the entire capacity of a volume copy.
FlashCopy licensing is based on the virtual capacity of the source volumes.
Incremental FlashCopy copies all of the data for the first FlashCopy and then only the
changes for all subsequent FlashCopy copies. Incremental FlashCopy can substantially
reduce the time that is required to recreate an independent image.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original
copy operation to complete.
The maximum number of supported FlashCopy mappings is 8,192 per SAN Volume
Controller system.
The size of the source and target volumes cannot be altered (increased or decreased)
while a FlashCopy mapping is defined.
8.2 Reverse FlashCopy
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original copy
operation to complete. It supports multiple targets (up to 256) and thus multiple rollback
points.
A key advantage of the SAN Volume Controller Multiple Target Reverse FlashCopy function is
that the reverse FlashCopy does not destroy the original target, thus allowing processes using
the target, such as a tape backup, to continue uninterrupted.
SAN Volume Controller also provides the ability to create an optional copy of the source
volume to be made prior to starting the reverse copy operation. This ability to restore back to
the original source data can be useful for diagnostic purposes.
This list shows the required steps to restore from an on-disk backup:
1. Optional: Create a new target volume (volume Z) and use FlashCopy to copy the
production volume (volume X) onto the new target for later problem analysis.
2. Create a new FlashCopy map with the backup to be restored (volume Y) or (volume W) as
the source volume and volume X as the target volume, if this map does not already exist.
3. Start the FlashCopy map (volume Y to volume X) with the -restore option to copy the
backup data onto the production disk. If the -restore option is specified and no
FlashCopy mapping exists, the command is ignored, preserving your data integrity.
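The following CLI sketch illustrates steps 1 through 3 with example mapping names; volume_X is the production volume, volume_Y the existing backup target, and volume_Z the optional analysis copy:
# Optional: preserve the current production image for later problem analysis
svctask mkfcmap -source volume_X -target volume_Z -name analysis_map
svctask startfcmap -prep analysis_map
# Create the restore mapping, if it does not already exist, and start it with -restore
svctask mkfcmap -source volume_Y -target volume_X -name restore_map
svctask startfcmap -prep -restore restore_map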
The production disk is instantly available with the backup data. Figure 8-1 on page 370 shows
an example of Reverse FlashCopy.
Figure 8-1 Reverse FlashCopy
Note that regardless of whether the initial FlashCopy map (volume X to volume Y) is
incremental, the Reverse FlashCopy operation only copies the modified data.
Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and
adding them to a new reverse Consistency Group. Consistency Groups cannot contain
more than one FlashCopy map with the same target volume.
8.2.1 FlashCopy and Tivoli Storage FlashCopy Manager
The management of many large FlashCopy relationships and Consistency Groups is a
complex task without a form of automation for assistance.
IBM Tivoli Storage FlashCopy Manager provides fast application-aware backups and restores
exploiting advanced point-in-time image technologies in the SAN Volume Controller. In
addition, it provides an optional integration with IBM Tivoli Storage Manager, for the long-term
storage of snapshots. Figure 8-2 on page 371 shows the integration of Tivoli Storage
Manager and FlashCopy Manager from a conceptual level.
Figure 8-2 Tivoli Storage Manager for Advanced Copy Services features
Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for
Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli
FlashCopy Manager, you can coordinate and automate host preparation steps before issuing
FlashCopy start commands to ensure that a consistent backup of the application is made.
You can put databases into hot backup mode and flush file-system cache prior to starting the
FlashCopy.
FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy,
and provides a simple interface to perform the reverse operation.
Figure 8-3 on page 372 shows the FlashCopy Manager feature.
Figure 8-3 Tivoli Storage Manager FlashCopy Manager features
IBM Tivoli FlashCopy Manager V4.1, released December 2013, adds additional support for
VMware 5.5 and vSphere environments with Site Recovery Manager (SRM), along with
instant restore for Virtual Machine File System (VMFS) datastores. This release also
integrates with IBM Tivoli Storage Manager for virtual environments, and it allows backup of
point-in-time images into the Tivoli Storage Manager infrastructure for long-term storage.
The addition of VMware vSphere brings support and application awareness for FlashCopy
Manager up to the following list:
Microsoft Exchange and Microsoft SQL Server, including SQL Server 2012 Availability
Groups
IBM DB2 and Oracle databases, for use either with or without SAP environments
IBM GPFS software snapshots for DB2 pureScale
Other applications can be supported through script customization
If you want to learn more about IBM Tivoli FlashCopy Manager, visit the following link,
because describing IBM Tivoli FlashCopy Manager in detail is beyond the scope of this
document:
http://www.ibm.com/software/products/en/tivostorflasmana/
8.3 FlashCopy functional overview
FlashCopy works by defining a FlashCopy mapping that consists of one source volume
together with one target volume. Multiple FlashCopy mappings (source-to-target
relationships) can be defined, and point-in-time consistency can be maintained across
multiple individual mappings using Consistency Groups. See Consistency Group with
Multiple Target FlashCopy on page 377 for more information about this topic.
Before you start a FlashCopy (regardless of the type and options specified), you must issue a
prestartfcmap or prestartfcconsistgrp, which puts the SAN Volume Controller cache into
write-through mode, providing a flushing of the I/O currently bound for your volume. After
FlashCopy is started, an effective copy of a source volume to a target volume has been
created. The content of the source volume is presented immediately on the target volume and
the original content of the target volume is lost. This FlashCopy operation is also referred to
as a time-zero copy (T0).
Immediately following the FlashCopy operation, both the source and target volumes are
available for use. The FlashCopy operation creates a bitmap that is referenced and
maintained to direct I/O requests within the source and target relationship. This bitmap is
updated to reflect the active block locations as data is copied in the background from the
source to target and updates are made to the source.
For more details about background copy, see 8.4.5, Grains and the FlashCopy bitmap on
page 378. Figure 8-4 illustrates the redirection of the host I/O toward the source volume and
the target volume.
Figure 8-4 Redirection of host I/O
8.4 Implementing SAN Volume Controller FlashCopy
In the following section, we describe how FlashCopy is implemented in the SAN Volume
Controller.
8.4.1 FlashCopy mappings
FlashCopy occurs between a source volume and a target volume. The source and target
volumes must be the same size. The minimum granularity that SAN Volume Controller
supports for FlashCopy is an entire volume; it is not possible to use FlashCopy to copy only
part of a volume.
The source and target volumes must belong to the same SAN Volume Controller system, but
they do not have to be in the same I/O Group or storage pool. FlashCopy associates a source
volume to a target volume through FlashCopy mapping.
To become members of a FlashCopy mapping, source and target volumes must be the same
size. Volumes that are members of a FlashCopy mapping cannot have their size increased or
decreased while they are members of the FlashCopy mapping.
A FlashCopy mapping is the act of creating a relationship between a source volume and a
target volume. FlashCopy mappings can be either stand-alone or a member of a Consistency
Group. You can perform the actions of preparing, starting, or stopping FlashCopy on either a
stand-alone mapping or a Consistency Group.
Figure 8-5 illustrates the concept of FlashCopy mapping.
Figure 8-5 FlashCopy mapping
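As a brief hedged example, a stand-alone mapping is typically created, prepared, and started from the CLI as follows; the volume and mapping names are illustrative:
# Create a FlashCopy mapping between two volumes of the same size
svctask mkfcmap -source db_vol -target db_vol_copy -name db_fcmap -copyrate 50
# Prepare the mapping (flushes cache and puts the source into write-through mode)
svctask prestartfcmap db_fcmap
# Start the mapping when it reaches the prepared state
svctask startfcmap db_fcmap
# Check the background copy progress
svcinfo lsfcmapprogress db_fcmap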
8.4.2 Multiple Target FlashCopy
The SAN Volume Controller supports up to 256 target volumes from a single source volume.
Each copy is managed by a unique mapping. In general, each mapping acts independently
and is not affected by other mappings sharing the same source volume. Figure 8-6 on
page 374 illustrates the Multiple Target FlashCopy implementation.
Figure 8-6 Multiple Target FlashCopy implementation
Important: As with any point-in-time copy technology, you will be bound by operating
system and application requirements for interdependent data, as well as the restriction to
an entire volume.
Important: Prior copies must be complete for independent FlashCopy mappings.
Figure 8-6 shows four targets and mappings that are taken from a single source, along with
their interdependencies. In this example, Target 1 is the oldest (as measured from the time
that it was started) through to Target 4, which is the newest. The ordering is important
because of the way in which the data is copied when multiple target volumes are defined and
because of the dependency chain that results.
A write to the source volume does not cause its data to be copied to all of the targets. Instead,
it is copied to the newest target volume only (Target 4 in Figure 8-6). The older targets will
refer to new targets first before referring to the source.
From the point of view of an intermediate target disk (neither the oldest nor the newest), it
treats the set of newer target volumes and the true source volume as a type of composite
source.
It treats all older volumes as a kind of target (and behaves like a source to them). If the
mapping for an intermediate target volume shows 100% progress, its target volume contains
a complete set of data. In this case, mappings treat the set of newer target volumes, up to and
including the 100% progress target, as a form of composite source. A dependency
relationship exists between a particular target and all newer targets (up to and including a
target that shows 100% progress) that share the same source until all data has been copied
to this target and all older targets.
You can read more about Multiple Target FlashCopy in 8.4.6, Interaction and dependency
between Multiple Target FlashCopy mappings on page 379.
8.4.3 Consistency Groups
Consistency Groups address the requirement to preserve point-in-time data consistency
across multiple volumes for applications having related data that spans multiple volumes. For
these volumes, Consistency Groups maintain the integrity of the FlashCopy by ensuring that
dependent writes are executed in the application's intended sequence.
When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy
Consistency Group, which performs the operation on all FlashCopy mappings that are
contained within the Consistency Group at the same time.
Figure 8-7 on page 376 illustrates a Consistency Group consisting of two FlashCopy
mappings.
Figure 8-7 FlashCopy Consistency Group
Dependent writes
To illustrate why it is crucial to use Consistency Groups when a data set spans multiple
volumes, consider the following typical sequence of writes for a database update transaction:
1. A write is executed to update the database log, indicating that a database update is about
to be performed.
2. A second write is executed to perform the actual update to the database.
3. A third write is executed to update the database log, indicating that the database update
has completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before starting the next step. However, if the database log (updates 1 and 3) and the
database itself (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur prior to the FlashCopy of the database log. This sequence can
result in the target volumes seeing writes (1) and (3) but not (2), because the FlashCopy of
the database volume occurred before the write was completed.
In this case, if the database was restarted using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction had completed
successfully. In fact, it had not, because the FlashCopy of the volume with the database file
was started (the bitmap was created) before the write had completed to the volume.
Therefore, the transaction is lost and the integrity of the database is in question.
To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, it is necessary to perform a FlashCopy operation on multiple volumes as an
atomic operation. To accomplish this method, the SAN Volume Controller supports the
concept of Consistency Groups.
A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings, which is the
maximum number of FlashCopy mappings supported by the SAN Volume Controller system.
Important: After an individual FlashCopy mapping has been added to a Consistency
Group, it can only be managed as part of the group. Operations, such as prepare, start,
and stop, are no longer allowed on the individual mapping.
FlashCopy commands can then be issued to the FlashCopy Consistency Group and therefore
simultaneously for all of the FlashCopy mappings that are defined in the Consistency Group.
For example, when issuing a FlashCopy start command to the Consistency Group, all of the
FlashCopy mappings in the Consistency Group are started at the same time, resulting in a
point-in-time copy that is consistent across all of the FlashCopy mappings that are contained
in the Consistency Group.
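A minimal CLI sketch of this flow, with example names (db_cg, db_data, db_log), might look like the following:
# Create the Consistency Group and add two mappings to it
svctask mkfcconsistgrp -name db_cg
svctask mkfcmap -source db_data -target db_data_copy -consistgrp db_cg -name db_data_map
svctask mkfcmap -source db_log -target db_log_copy -consistgrp db_cg -name db_log_map
# Prepare and start all mappings in the group as one point-in-time operation
svctask prestartfcconsistgrp db_cg
svctask startfcconsistgrp db_cg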
Consistency Group with Multiple Target FlashCopy
It is important to note that a Consistency Group aggregates FlashCopy mappings, not
volumes. Thus, where a source volume has multiple FlashCopy mappings, they can be in the
same or separate Consistency Groups.
If a particular volume is the source volume for multiple FlashCopy mappings, you might want
to create separate Consistency Groups to separate each mapping of the same source
volume. Regardless of whether the source volume with multiple target volumes is in the same
consistency group or in separate consistency groups, the resulting FlashCopy produces
multiple identical copies of the source data.
Maximum configurations
Table 8-1 lists the FlashCopy properties and maximum configurations.
Table 8-1 FlashCopy properties and maximum configurations
FlashCopy property                          Maximum     Comment
FlashCopy targets per source                256         This maximum is the maximum number of FlashCopy mappings that can exist with the same source volume.
FlashCopy mappings per system               4,096       The number of mappings is no longer limited by the number of volumes in the system, so the FlashCopy component limit applies.
FlashCopy Consistency Groups per system     127         This maximum is an arbitrary limit that is policed by the software.
FlashCopy volume capacity per I/O Group     1,024 TB    This maximum is a limit on the quantity of FlashCopy mappings using bitmap space from this I/O Group. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no Metro and Global Mirror bitmap space. The default is 40 TB.
FlashCopy mappings per Consistency Group    512         This limit is due to the time that is taken to prepare a Consistency Group with a large number of mappings.
8.4.4 FlashCopy indirection layer
The FlashCopy indirection layer governs the I/O to both the source and target volumes when
a FlashCopy mapping is started, which is done using a FlashCopy bitmap. The purpose of the
FlashCopy indirection layer is to enable both the source and target volumes for read and write
I/O immediately after the FlashCopy has been started.
To illustrate how the FlashCopy indirection layer works, we examine what happens when a
FlashCopy mapping is prepared and subsequently started.
When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write cache to the source volume or volumes that are part of a Consistency
Group.
2. Put cache into write-through mode on the source volumes.
3. Discard cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (creating the
FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source volumes and target
volumes.
6. Enable cache on both the source volumes and target volumes.
FlashCopy provides the semantics of a point-in-time copy using the indirection layer, which
intercepts I/O that is directed at either the source or target volumes. The act of starting a
FlashCopy mapping causes this indirection layer to become active in the I/O path, which
occurs automatically across all FlashCopy mappings in the Consistency Group. The
indirection layer then determines how each I/O is to be routed based on the following factors:
The volume and the logical block address (LBA) to which the I/O is addressed
Its direction (read or write)
The state of an internal data structure, the FlashCopy bitmap
The indirection layer allows the I/O to go through to the underlying volume; redirects the I/O
from the target volume to the source volume; or queues the I/O while it arranges for data to be
copied from the source volume to the target volume. To explain in more detail which action is
applied for each I/O, we first look at the FlashCopy bitmap.
8.4.5 Grains and the FlashCopy bitmap
When data is copied between volumes, it is copied in units of address space known as
grains. Grains are units of data grouped together to optimize the use of the bitmap that keeps
track of changes to the data between the source and target volume. You have the option of
using 64 KB or 256 KB grain sizes; 256 KB is the default. The FlashCopy bitmap contains one
bit for each grain and is used to keep track of whether the source grain has been copied to
the target. Note that the 64 KB grain size consumes bitmap space at a rate of four times the
default 256 KB size.
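The grain size is selected when a mapping is created and cannot be changed afterward. As a hedged illustration (hypothetical volume and mapping names), a mapping that uses the smaller 64 KB grain size might be created as follows:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source VOL_APP1 -target VOL_APP1_T -name FCMAP_APP1 -grainsize 64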
The FlashCopy bitmap dictates read and write behavior for both the source and target
volumes.
Source reads
Reads are performed from the source volume, which is the same as for non-FlashCopy
volumes.
Source writes
Writes to the source will cause the grain to be copied to the target if it has not already been
copied, the bitmap to be updated, and then the write will be performed to the source.
Target reads
Reads are performed from the target if the grain has already been copied. Otherwise, the
read is performed from the source and no copy is performed.
Target writes
Writes to the target will cause the grain to be copied from the source to the target unless the
entire grain is being updated on the target. In this case, the target will be marked as split with
the source (if there is no I/O error during the write) and the write will go directly to the target.
The FlashCopy indirection layer algorithm
Imagine the FlashCopy indirection layer as the I/O traffic director when a FlashCopy mapping
is active. The I/O is intercepted and handled according to whether it is directed at the source
volume or at the target volume, depending on the nature of the I/O (read or write) and the
state of the grain (whether it has been copied).
In Figure 8-8, we illustrate how the background copy runs while I/Os are handled according to
the indirection layer algorithm.
Figure 8-8 I/O processing with FlashCopy
8.4.6 Interaction and dependency between Multiple Target FlashCopy
mappings
Figure 8-9 on page 380 represents a set of four FlashCopy mappings that share a common
source. The FlashCopy mappings will target volumes Target 0, Target 1, Target 2, and
Target 3.
Figure 8-9 Interactions between MTFC mappings
Target 0 is not dependent on a source, because it has completed copying. Target 0 has two
dependent mappings (Target 1 and Target 2).
Target 1 is dependent upon Target 0. It will remain dependent until all of Target 1 has been
copied. Target 2 is dependent on it, because Target 2 is 20% copy complete. After all of
Target 1 has been copied, it can then move to the idle_copied state.
Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of
Target 2 has been copied. No target is dependent on Target 2, so when all of the data has
been copied to Target 2, it can move to the idle_copied state.
Target 3 has actually completed copying, so it is not dependent on any other maps.
Target writes with Multiple Target FlashCopy
A write to an intermediate or newest target volume must consider the state of the grain within
its own mapping, and the state of the grain of the next oldest mapping:
If the grain of the next oldest mapping has not been copied yet, it must be copied before
the write is allowed to proceed to preserve the contents of the next oldest mapping. The
data that is written to the next oldest mapping comes from a target or source.
If the grain in the target being written has not yet been copied, the grain is copied from the
oldest already copied grain in the mappings that are newer than the target, or the source if
none are already copied. After this copy has been done, the write can be applied to the
target.
Target reads with Multiple Target FlashCopy
If the grain being read has already been copied from the source to the target, the read simply
returns data from the target being read. If the grain has not been copied, each of the newer
mappings is examined in turn and the read is performed from the first copy found. If none are
found, the read is performed from the source.
Stopping the copy process
When a stop command is issued to a mapping that contains a target that has dependent
mappings, the mapping will enter the stopping state and begin copying all grains that are
uniquely held on the target volume of the mapping being stopped to the next oldest mapping
that is in the Copying state. The mapping will remain in the stopping state until all grains have
been copied and then enter the stopped state.
For example, if the mapping that is associated with Target 0 was issued a stopfcmap or
stopfcconsistgrp command, Target 0 enters the Stopping state while a process copies the
data of Target 0 to Target 1. After all of the data has been copied, Target 0 enters the Stopped
state, and Target 1 is no longer dependent upon Target 0; however, Target 2 remains dependent upon Target 1.

Note about stopping the copy process: The stopping copy process can be ongoing for several mappings sharing the same source at the same time. At the completion of this process, the mapping automatically makes an asynchronous state transition to the Stopped state, or to the idle_copied state if the mapping was in the Copying state with progress = 100%.
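As a hedged CLI sketch (hypothetical names), an individual mapping or a whole Consistency Group is stopped as follows; the dependent-data copy described above then runs automatically while the mapping is in the Stopping state:

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap FCMAP_T0
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG_DB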
8.4.7 Summary of the FlashCopy indirection layer algorithm
Table 8-2 summarizes the indirection layer algorithm.
Table 8-2 Summary table of the FlashCopy indirection layer algorithm

Source volume, grain not yet copied:
- Read: Read from the source volume.
- Write: Copy the grain to the most recently started target for this source, then write to the source.
Source volume, grain already copied:
- Read: Read from the source volume.
- Write: Write to the source volume.
Target volume, grain not yet copied:
- Read: If any newer targets exist for this source in which this grain has already been copied, read from the oldest of these targets. Otherwise, read from the source.
- Write: Hold the write. Check the dependency target volumes to see whether the grain has been copied. If the grain is not already copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.
Target volume, grain already copied:
- Read: Read from the target volume.
- Write: Write to the target volume.
8.4.8 Interaction with the cache
This copy-on-write process introduces significant latency into write operations. To isolate the
active application from this additional latency, the FlashCopy indirection layer is placed
logically beneath the cache. Therefore, the additional latency introduced by the copy-on-write
process is only encountered by the internal cache destage operation and not by the
application.
In Figure 8-10, we illustrate the logical placement of the FlashCopy indirection layer.
Figure 8-10 Logical placement of the FlashCopy indirection layer
8.4.9 FlashCopy and image mode volumes
FlashCopy can be used with image mode volumes. Because the source and target volumes
must be exactly the same size, when creating a FlashCopy mapping you must create a target
volume with the exact same size as the image mode volume. To accomplish this task, use the
svcinfo lsvdisk -bytes volumeName command. The size in bytes is then used to create the
volume to use in the FlashCopy mapping. This method provides an exact number of bytes,
because image mode volumes might not line up one-to-one on other measurement unit
boundaries. In Example 8-1, we list the size of the Image_volume_A volume. Subsequently,
the volume_A_copy volume is created, specifying the same size.
Example 8-1 Listing the size of a volume in bytes and creating a volume of equal size
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_volume_A
id 8
name Image_volume_A
IO_group_id 0
IO_group_name io_grp0
status online
storage_pool_id 2
storage_pool_name Storage_Pool_Image
capacity 36.0GB
type image
.
.
.
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name volume_A_copy
-mdiskgrp Storage_Pool_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created
8.4.10 FlashCopy mapping events
In this section, we describe the events that modify the states of a FlashCopy. We describe the
mapping events in Table 8-3.
Table 8-3 Mapping events
Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize commands to modify the size of the volume. See 9.6.10, Expanding a volume on page 505 and 9.6.16, Shrinking a volume on page 509 for more information. Remember that these actions must be performed before a mapping is created.
You can use an image mode volume as either a FlashCopy source volume or target
volume.
Overview of a FlashCopy sequence of events:
1. Associate the source data set with a target location (one or more source and target
volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target
volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush the cache for the source.
b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.
Create: A new FlashCopy mapping is created between the specified source volume and the specified target volume. The operation fails if any one of the following conditions is true:
- The source volume is already a member of 256 FlashCopy mappings.
- The node has insufficient bitmap memory.
- The source and target volumes are different sizes.

Prepare: The prestartfcmap or prestartfcconsistgrp command is directed to either a Consistency Group for FlashCopy mappings that are members of a normal Consistency Group or to the mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or prestartfcconsistgrp command places the FlashCopy mapping into the Preparing state.
Important: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target volume, because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have changed logically during the act of preparing to start the FlashCopy mapping.

Flush done: The FlashCopy mapping automatically moves from the Preparing state to the Prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start: When all of the FlashCopy mappings in a Consistency Group are in the Prepared state, the FlashCopy mappings can be started. To preserve the cross-volume Consistency Group, the start of all of the FlashCopy mappings in the Consistency Group must be synchronized correctly with respect to I/Os that are directed at the volumes by using the startfcmap or startfcconsistgrp command. The following actions occur during the execution of the startfcmap or startfcconsistgrp command:
- New reads and writes to all source volumes in the Consistency Group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
- After all FlashCopy mappings in the Consistency Group are paused, the internal cluster state is set to allow FlashCopy operations.
- After the cluster state is set for all FlashCopy mappings in the Consistency Group, read and write operations continue on the source volumes.
- The target volumes are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target volumes.

Modify: The following FlashCopy mapping properties can be modified:
- FlashCopy mapping name
- Clean rate
- Consistency group
- Copy rate (for background copy or stopping copy priority)
- Automatic deletion of the mapping when the background copy is complete

Stop: There are two separate mechanisms by which a FlashCopy mapping can be stopped:
- You have issued a command.
- An I/O error has occurred.

Delete: This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the Stopped state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the Stopped state.

Copy complete: After all of the source data has been copied to the target and there are no dependent mappings, the state is set to Copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is deleted automatically. If this option is not specified, the FlashCopy mapping is not deleted automatically and can be reactivated by preparing and starting again.

Bitmap online/offline: The node has failed.
8.4.11 FlashCopy mapping states
In this section, we describe the states of a FlashCopy mapping in more detail.
Idle_or_copied
The source and target volumes act as independent volumes even if a mapping exists between
the two. Read and write caching is enabled for both the source and the target volumes.
If the mapping is incremental and the background copy is complete, the mapping only records
the differences between the source and target volumes. If the connection to both nodes in the
I/O group that the mapping is assigned to is lost, the source and target volumes are offline.
Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.
Prepared
The mapping is ready to start. The target volume is online, but is not accessible. The target
volume cannot perform read or write caching. Read and write caching is failed by the SCSI
front end as a hardware error. If the mapping is incremental and a previous mapping has
completed, the mapping only records the differences between the source and target volumes.
If the connection to both nodes in the I/O group that the mapping is assigned to is lost, the
source and target volumes go offline.
Preparing
The target volume is online, but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error. Any
changed write data for the source volume is flushed from the cache. Any read or write data for
the target volume is discarded from the cache. If the mapping is incremental and a previous
mapping has completed, the mapping records only the differences between the source and
target volumes. If the connection to both nodes in the I/O group that the mapping is assigned
to is lost, the source and target volumes go offline.
Performing the cache flush that is required as part of the startfcmap or startfcconsistgrp
command causes I/Os to be delayed waiting for the cache flush to complete. To overcome this
problem, SAN Volume Controller FlashCopy supports the prestartfcmap or
prestartfcconsistgrp commands, which prepare for a FlashCopy start while still allowing
I/Os to continue to the source volume.
In the Preparing state, the FlashCopy mapping is prepared by the following steps:
1. Flushing any modified write data that is associated with the source volume from the cache.
Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode, so that subsequent
writes wait until data has been written to disk before completing the write command that
is received from the host.
3. Discarding any read or write data that is associated with the target volume from the cache.
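As mentioned above, the prepare step can be driven explicitly from the CLI before the start, so that the cache flush happens while host I/O continues. The following is a minimal sketch for a stand-alone mapping; the mapping name is hypothetical, and the middle command simply lets you confirm that the mapping has reached the prepared state:

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap FCMAP_APP1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap FCMAP_APP1
IBM_2145:ITSO-CLS1:admin>svctask startfcmap FCMAP_APP1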
Stopped
The mapping is stopped because either you issued a stop command or an I/O error occurred.
The target volume is offline and its data is lost. To access the target volume, you must restart
or delete the mapping. The source volume is accessible and the read and write cache is
enabled. If the mapping is incremental, the mapping is recording write operations to the
source volume. If the connection to both nodes in the I/O group that the mapping is assigned
to is lost, the source and target volumes go offline.
Stopping
The mapping is in the process of copying data to another mapping.
If the background copy process is complete, the target volume is online while the stopping
copy process completes.
If the background copy process is not complete, data is discarded from the target volume
cache. The target volume is offline while the stopping copy process runs.
The source volume is accessible for I/O operations.
Suspended
The mapping started, but it did not complete. Access to the metadata is lost, which causes
both the source and target volume to go offline. When access to the metadata is restored, the
mapping returns to the copying or stopping state and the source and target volumes return
online. The background copy process resumes. Any data that has not been flushed and has
been written to the source or target volume before the suspension, is in cache until the
mapping leaves the suspended state.
Summary of FlashCopy mapping states
Table 8-4 on page 386 lists the various FlashCopy mapping states and the corresponding
states of the source and target volumes.
Table 8-4 FlashCopy mapping state summary

Idling/Copied: Source online, write-back cache; Target online, write-back cache.
Copying: Source online, write-back cache; Target online, write-back cache.
Stopped: Source online, write-back cache; Target offline, cache state N/A.
Stopping: Source online, write-back cache; Target online if the copy is complete, offline if the copy is not complete, cache state N/A.
Suspended: Source offline, write-back cache; Target offline, cache state N/A.
Preparing: Source online, write-through cache; Target online but not accessible, cache state N/A.
Prepared: Source online, write-through cache; Target online but not accessible, cache state N/A.
8.4.12 Thin-provisioned FlashCopy
FlashCopy source and target volumes can be thin-provisioned.
Either source or target thin-provisioned
The most common configuration is a fully allocated source and a thin-provisioned target. This
configuration allows the target to consume a smaller amount of real storage than the source.
With this configuration, only use the NOCOPY (background copy rate = 0%) option. Although
the COPY option is supported, this option creates a fully allocated target and therefore
defeats the purpose of thin provisioning.
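As a hedged CLI sketch (hypothetical names, sizes, and pool), a thin-provisioned target and a NOCOPY mapping might be created as follows; -rsize sets the initially allocated real capacity, -autoexpand lets it grow as changes accumulate, and -copyrate 0 disables the background copy:

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Pool_SAS -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -name VOL_PROD_T
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source VOL_PROD -target VOL_PROD_T -name FCMAP_PROD -copyrate 0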
Source and target both thin-provisioned
When both the source and target volumes are thin-provisioned, only the data that is allocated
to the source will be copied to the target. In this configuration, the background copy option will
have no effect.
Thin-provisioned incremental FlashCopy
The implementation of thin-provisioned volumes does not preclude the use of incremental
FlashCopy on the same volumes. It does not make sense to have a fully allocated source
volume and then use incremental FlashCopy, which is always a full copy the first time, to copy
this fully allocated source volume to a thin-provisioned target volume. However, this action is
not prohibited.
Consider this optional configuration:
A thin-provisioned source volume can be copied incrementally using FlashCopy to a
thin-provisioned target volume. Whenever the FlashCopy is performed, only data that has
been modified is recopied to the target. Note that if space is allocated on the target
because of I/O to the target volume, this space will not be reclaimed with subsequent
FlashCopy operations.
A fully allocated source volume can be copied incrementally using FlashCopy to another
fully allocated volume at the same time as it is being copied to multiple thin-provisioned
targets (taken at separate points in time). This combination allows a single full backup to
be kept for recovery purposes and separates the backup workload from the production
workload. At the same time, it allows older thin-provisioned backups to be retained.
8.4.13 Background copy
With FlashCopy background copy enabled, the source volume data will be copied to the
corresponding target volume. With the FlashCopy background copy disabled, only data that
changed on the source volume will be copied to the target volume.
The benefit of using a FlashCopy mapping with background copy enabled is that the target
volume becomes a real clone (independent from the source volume) of the FlashCopy
mapping source volume after the copy is complete. When the background copy function is not
performed, the target volume only remains a valid copy of the source data while the
FlashCopy mapping remains in place.
The background copy rate is a property of a FlashCopy mapping that is defined as a value
between 0 and 100. The background copy rate can be defined and changed dynamically for
individual FlashCopy mappings. A value of 0 disables the background copy.
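For example, the background copy rate of an existing mapping can be changed on the fly with a command along these lines (the mapping name is hypothetical):

IBM_2145:ITSO-CLS1:admin>svctask chfcmap -copyrate 80 FCMAP_PROD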
Table 8-5 shows the relationship of the background copy rate value to the attempted number
of grains to be copied per second.
Performance: The best performance is obtained when the grain size of the
thin-provisioned volume is the same as the grain size of the FlashCopy mapping.
Table 8-5 Background copy rate

Value     Data copied per second   Grains per second (256 KB grain)   Grains per second (64 KB grain)
1 - 10    128 KB                   0.5                                2
11 - 20   256 KB                   1                                  4
21 - 30   512 KB                   2                                  8
31 - 40   1 MB                     4                                  16
41 - 50   2 MB                     8                                  32
51 - 60   4 MB                     16                                 64
61 - 70   8 MB                     32                                 128
71 - 80   16 MB                    64                                 256
81 - 90   32 MB                    128                                512
91 - 100  64 MB                    256                                1024
The grains per second numbers represent the maximum number of grains that the SAN
Volume Controller copies per second, assuming that the bandwidth to the managed disks
(MDisks) can accommodate this rate.
If the SAN Volume Controller is unable to achieve these copy rates because of insufficient
bandwidth from the SAN Volume Controller nodes to the MDisks, the background copy I/O
contends for resources on an equal basis with the I/O that is arriving from the hosts. Both
background copy I/O and I/O that is arriving from the hosts tend to see an increase in latency
and a consequential reduction in throughput. Both background copy and foreground I/O
continue to make forward progress, and do not stop, hang, or cause the node to fail. The
background copy is performed by both nodes of the I/O Group in which the source volume
resides.
8.4.14 Synthesis
The FlashCopy functionality in SAN Volume Controller simply creates copy volumes. All of the
data in the source volume is copied to the destination volume, including operating system,
logical volume manager, and application metadata.

Synthesis: Certain operating systems are unable to use FlashCopy without an additional step, which is termed synthesis. In summary, synthesis performs a type of transformation on the operating system metadata on the target volume so that the operating system can use the disk.
8.4.15 Serialization of I/O by FlashCopy
In general, the FlashCopy function in the SAN Volume Controller introduces no explicit
serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and
target volumes.
However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared and the mappings are derived from a particular
source volume. The lock is used in the following modes under the following conditions:
The lock is held in shared mode for the duration of a read from the target volume, which
touches a grain that has not been copied from the source.
The lock is held in exclusive mode while a grain is being copied from the source to the
target.
If the lock is held in shared mode, and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.
If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.
Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in either
shared or exclusive mode must wait for it to be freed.
8.4.16 Event handling
When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not
affect the handling or reporting of events for error conditions encountered in the I/O path.
Event handling and reporting are only affected by FlashCopy when a FlashCopy mapping is
copying or stopping or, in other words, actively moving data.
We describe these scenarios in the following sections.
Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes making up the I/O Group of the source volume. When a
node fails, the copy of the bitmap for all FlashCopy mappings whose source volume is a
member of the failing node's I/O Group becomes inaccessible. FlashCopy continues with a
single copy of the FlashCopy bitmap being stored as non-volatile in the remaining node in the
source I/O Group. The system metadata is updated to indicate that the missing node no
longer holds a current bitmap. When the failing node recovers, or a replacement node is
added to the I/O Group, the bitmap redundancy will be restored.
Path failure (Path Offline state)
In a fully functioning system, all of the nodes have a software representation of every volume
in the system within their application hierarchy.
Because the storage area network (SAN) that links the SAN Volume Controller nodes to each
other and to the MDisks is made up of many independent links, it is possible for a subset of
the nodes to be temporarily isolated from several of the MDisks. When this situation happens,
the managed disks are said to be Path Offline on certain nodes.
When an MDisk enters the Path Offline state on a SAN Volume Controller node, all of the
volumes that have extents on the MDisk also become Path Offline. Again, this situation
happens only on the affected nodes. When a volume is Path Offline on a particular SAN
Volume Controller node, the host access to that volume through the node will fail with the
SCSI check condition indicating Offline.
Other nodes: Other nodes might see the managed disks as Online, because their
connection to the managed disks is still functioning.
Path Offline for the source volume
If a FlashCopy mapping is in the Copying state and the source volume goes Path Offline, this
Path Offline state is propagated to all target volumes up to but not including the target volume
for the newest mapping that is 100% copied but remains in the Copying state. If no mappings
are 100% copied, all of the target volumes are taken offline. Again, note that Path Offline is a
state that exists on a per-node basis. Other nodes might not be affected. If the source volume
comes Online, the target and source volumes are brought back Online.
Path Offline for the target volume
If a target volume goes Path Offline but the source volume is still Online, and if there are any
dependent mappings, those target volumes will also go Path Offline. The source volume will
remain Online.
8.4.17 Asynchronous notifications
FlashCopy raises informational event log entries for certain mapping and Consistency Group
state transitions.
These state transitions occur as a result of configuration events that complete
asynchronously, and the informational events can be used to generate Simple Network
Management Protocol (SNMP) traps to notify the user. Other configuration events complete
synchronously, and no informational events are logged as a result of these events:
PREPARE_COMPLETED: This state transition is logged when the FlashCopy mapping or
Consistency Group enters the Prepared state as a result of a user request to prepare. The
user can now start (or stop) the mapping or Consistency Group.
COPY_COMPLETED: This state transition is logged when the FlashCopy mapping or
Consistency Group enters the Idle_or_copied state when it was previously in the Copying
or Stopping state. This state transition indicates that the target disk now contains a
complete copy and no longer depends on the source.
STOP_COMPLETED: This state transition is logged when the FlashCopy mapping or
Consistency Group has entered the Stopped state as a result of a user request to stop. It
will be logged after the automatic copy process has completed. This state transition
includes mappings where no copying needed to be performed. This state transition differs
from the event that is logged when a mapping or group enters the Stopped state as a
result of an I/O error.
8.4.18 Interoperation with Metro Mirror and Global Mirror
FlashCopy can work together with Metro Mirror and Global Mirror to provide better protection
of the data. For example, we can perform a Metro Mirror copy to duplicate data from Site_A to
Site_B and then perform a daily FlashCopy to back up the data to another location.
Table 8-6 lists which combinations of FlashCopy and remote copy are supported. In the table,
remote copy refers to Metro Mirror and Global Mirror.
Table 8-6 FlashCopy and remote copy interaction

FlashCopy source:
- Remote copy primary site: Supported.
- Remote copy secondary site: Supported. Latency: When the FlashCopy relationship is in the Preparing and Prepared states, the cache at the remote copy secondary site operates in write-through mode. This process adds additional latency to the already latent remote copy relationship.

FlashCopy target:
- Remote copy primary site: This is a supported combination. It has several restrictions: 1) Issuing a stop -force might cause the remote copy relationship to need to be fully resynchronized. 2) The code level must be 6.2.x or higher. 3) The I/O Group must be the same.
- Remote copy secondary site: This is a supported combination with the major restriction that the FlashCopy mapping cannot be copying, stopping, or suspended. Otherwise, the restrictions are the same as at the remote copy primary site.
8.4.19 FlashCopy presets
The SAN Volume Controller GUI interface provides three FlashCopy presets (Snapshot,
Clone, and Backup) to simplify the more common FlashCopy operations.
Note that although these presets meet the majority of FlashCopy requirements, they do not
provide support for all possible FlashCopy options. If more specialized options are required
that are not supported by the presets, they must be performed using CLI commands.
In the following section, we describe the three preset options and their use cases.
Snapshot
This preset creates a copy-on-write point-in-time copy. The snapshot is not intended to be an
independent copy. Instead, it is used to maintain a view of the production data at the time that
the snapshot is created. Therefore, the snapshot holds only the data from regions of the
production volume that have changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.
Snapshot uses these preset parameters:
No background copy
Incremental: No
Delete after completion: No
Cleaning rate: No
The target pool is the primary copy source pool.
Use case
The user wants to produce a copy of a volume without affecting the availability of the volume.
The user does not anticipate a large number of changes to be made to the source or target
volume; a significant proportion of the volumes will not be changed. By ensuring that only
changes require a copy of data to be made, the total amount of disk space that is required for
the copy is significantly reduced and therefore allows for many Snapshot copies to be used in
the environment.
Snapshots are therefore useful for providing protection against corruption or similar issues
with the validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing, including what-if
modeling based on production data, without requiring a full copy of the data to be provisioned.
Clone
The clone preset creates an exact replica of the volume, which can be changed without
affecting the original volume. After the copy completes, the mapping that was created by the
preset is automatically deleted.
Clone preset parameters:
Background copy rate: 50
Incremental: No
Delete after completion: Yes
Cleaning rate: 50
The target pool is the primary copy source pool.
Use case
Users want a copy of the volume that they can modify without affecting the original volume.
After the clone is established, there is no expectation that it will be refreshed or that there will
be any further need to reference the original production data again. If the source is
thin-provisioned, the target will be thin-provisioned for the auto-create target.
Backup
The backup preset creates a point-in-time replica of the production data. After the copy
completes, the backup view can be refreshed from the production data, with minimal copying
of data from the production volume to the backup volume.
Backup preset parameters:
Background Copy rate: 50
Incremental: Yes
Delete after completion: No
Cleaning rate: 50
The target pool is the primary copy source pool.
Use case
The user wants to create a copy of the volume that can be used as a backup in the event that
the source becomes unavailable, as in the case of the loss of the underlying physical
controller. The user plans to periodically update the secondary copy and does not want to
suffer the overhead of creating a completely new copy each time (and incremental FlashCopy
times are faster than full copy, which helps to reduce the window where the new backup is not
yet fully effective). If the source is thin-provisioned, the target will be thin-provisioned on this
option for the auto-create target.
Another use case, which the preset name does not suggest, is to create and maintain (periodically refresh) an independent image that can be subjected to intensive I/O (for example, data mining) without affecting the source volume's performance.
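A stand-alone mapping with the same characteristics as the Backup preset can also be created from the CLI. The following is a minimal, hedged sketch with hypothetical volume and mapping names; the incremental option is what allows later refreshes to recopy only the changed grains:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source VOL_PROD -target VOL_PROD_BKP -name FCMAP_BKP -copyrate 50 -cleanrate 50 -incremental

Subsequent refreshes only need to prepare and start the same mapping again.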
8.5 Volume mirroring and migration options
Volume mirroring is a simple RAID 1-type function that is designed to allow a volume to
remain online even when the storage pool backing it becomes inaccessible. Volume mirroring
is designed to protect the volume from storage infrastructure failures by providing the ability to
seamlessly mirror between storage pools.
Volume mirroring is provided by a specific volume mirroring function in the I/O stack, and it
cannot be manipulated like a FlashCopy or other types of copy volumes. This feature does,
however, provide migration functionality, which can be obtained by splitting the mirrored copy
from the source or by using the migrate to function. Volume mirroring does not have the
ability to control back-end storage mirroring or replication.
With volume mirroring, host I/O completes when both copies are written. Prior to 6.3.0, this
feature took a copy offline when it had an I/O time-out, and then resynchronized with the
online copy after it recovered. With 6.3.0, this feature has been enhanced with a tunable
latency tolerance. This tolerance is designed to provide an option to give preference to losing
the redundancy between the two copies. This tunable time-out value is either Latency or
Redundancy.
The Latency tuning option, which is set with svctask chvdisk -mirrorwritepriority latency,
is the default. This behavior was available in releases prior to 6.3.0. It prioritizes host I/O
latency, which yields a preference to host I/O over availability.
However, you might have a need in your environment to give preference to Redundancy when
availability is more important than I/O response time. Use svctask chvdisk -mirror
writepriority redundancy.
Regardless of which option you choose, volume mirroring can provide extra protection for
your environment.
Migration offers several options:
Export to Image mode: This option allows you to move storage from managed mode to
image mode, which is useful if you are using the SAN Volume Controller as a migration
device. For example, vendor A's product cannot communicate with vendor B's product, but
you need to migrate existing data from vendor A to vendor B. Using Export to image
mode allows you to migrate data using Copy Services functions and then return control to
the native array, while maintaining access to the hosts.
Import to Image mode: This option allows you to import an existing storage MDisk or
logical unit number (LUN) with its existing data from an external storage system, without
putting metadata on it, so that the existing data remains intact. After you have imported it,
all copy services functions can be used to migrate the storage to other locations, while the
data remains accessible to your hosts.
Volume migration using volume mirroring and then using Split into New Volume: This
option allows you to use the available RAID 1 functionality. You create two copies of data
that initially have a set relationship (one primary and one secondary) but then break the
relationship (both primary and no relationship) to make them independent copies of data.
You can use this to migrate data between storage pools and devices. You might use this
option if you want to move volumes to multiple storage pools. Note that you can only mirror one volume at a time; a CLI sketch of this method is shown after this list.
Volume migration using Move to Another Pool: This option allows any volume to be moved
between storage pools without any interruption to the host access. This option is
effectively a quicker version of the Volume Mirroring and Split into New Volume option.
You might use this option if you want to move volumes in a single step or you do not have
a volume mirror copy already.
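As a hedged CLI sketch (hypothetical volume and pool names), the mirror-and-split method adds a second copy in the target pool, waits for it to synchronize, and then splits it off as an independent volume; the simpler single-step move can be done with migratevdisk. Verify the new copy's ID with lsvdiskcopy before splitting, because the ID shown here is only an assumption:

IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp Pool_TARGET -vtype striped VOL_APP1
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress VOL_APP1
IBM_2145:ITSO-CLS1:admin>svctask splitvdiskcopy -copy 1 -name VOL_APP1_NEW VOL_APP1
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp Pool_TARGET -vdisk VOL_APP2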
Managing Volume Mirror and migration with the GUI
Refer to section 10.7, Working with volumes on page 673 for a detailed description of volume mirroring and migration operations using the SAN Volume Controller GUI.
Migration: While these migration methods do not disrupt access, you will need to take a
brief outage to install the host drivers for your SAN Volume Controller. See IBM System
Storage SAN Volume Controller Host Attachment User's Guide, SC26-7905, for more
detail. Make sure to consult the revision of the document that applies for your SAN Volume
Controller.
8.6 Native IP replication
Before we describe Remote Copy features which benefit from use of multiple SAN Volume
Controller systems, it is important to highlight the new partnership option introduced with the
7.2 version of SAN Volume Controller code: native IP replication.
8.6.1 Native IP Replication Technology
Remote Mirroring over IP communication is now supported on IBM SAN Volume Controller
and Storwize Family systems using Ethernet communication links. SAN Volume Controller IP
replication uses innovative Bridgeworks SANSlide technology to optimize network bandwidth
and utilization. This new function enables the use of lower speed and lower cost networking
infrastructure for data replication. Bridgeworks SANSlide technology integrated into IBM SAN
Volume Controller and Storwize Family Software uses artificial intelligence to help optimize
network bandwidth utilization and adapt to changing workload and network conditions. This
technology can improve remote mirroring network bandwidth utilization up to three times,
which may enable clients to deploy a less costly network infrastructure or speed remote
replication cycles to enhance disaster recovery effectiveness.
In a typical Ethernet network data flow, the data transfer slows down over time. This slowdown is caused by the latency of waiting for acknowledgement of each set of packets that is sent: the next packet set cannot be sent until the previous one has been acknowledged (Figure 8-11 on page 395).
Figure 8-11 Typical Ethernet network data flow
Note: When creating remote partnerships between IBM SAN Volume Controller and Storwize
Family systems, consider these rules:
A SAN Volume Controller is always in the Replication layer.
By default, Storwize V7000 is in the Storage layer.
A system can only form partnerships with systems in the same layer.
A SAN Volume Controller can virtualize a Storwize V7000 only if the Storwize V7000 is
in Storage layer.
With version 6.4, a Storwize V7000 in Replication layer can virtualize a Storwize V7000
in Storage layer.
Refer to chapter 2.7, Advanced Copy Services overview on page 36 for more information
about storage layers.
With the Bridgeworks SANSlide technology, this typical behavior can be largely eliminated by enhancing the parallelism of the data flow, using multiple virtual connections (VCs) that share the same IP links and addresses. The artificial intelligence engine dynamically adjusts the number of VCs, the receive window size, and the packet size as appropriate to maintain optimum performance. While waiting for the ACK of one VC, it sends more packets across other VCs. If packets are lost from any VC, the data is automatically retransmitted (Figure 8-12).
Figure 8-12 Optimized network data flow using Bridgeworks SANSlide technology
For more information about Bridgeworks SANSlide technology, refer to IBM Storwize V7000
and SANSlide Implementation, REDP-5023.
With native IP partnership, the following Copy Services features are supported:
Metro Mirror
Referred to as synchronous replication, Metro Mirror provides a consistent copy of a source
virtual disk on a target virtual disk. Data is written to the target virtual disk synchronously after
it is written to the source virtual disk, so that the copy is continuously updated.
Global Mirror and Global Mirror Change Volumes
Referred to as asynchronous replication, Global Mirror provides a consistent copy of a source
virtual disk on a target virtual disk. Data is written to the target virtual disk asynchronously, so
that the copy is continuously updated, but the copy might not contain the last few updates in
the event that a disaster recovery operation is performed. An added extension to Global
Mirror is Global Mirror with Change Volumes. Global Mirror with Change Volumes is the
preferred method for use with native IP replication.
8.6.2 IP partnership limitations
The following prerequisites and assumptions must be considered before an IP partnership
between two SAN Volume Controller systems can be established:
SAN Volume Controller systems are successfully installed with the latest IBM SAN Volume
Controller 7.2.0 code levels.
SAN Volume Controller systems have the necessary licenses that allow remote copy
partnerships to be configured between two systems. No separate license is required to
enable IP partnership.
The storage and SAN configurations are properly in place, along with the infrastructure to support SAN Volume Controller systems in remote copy partnerships over IP links. The two systems must be able to ping each other and perform the discovery.
Chapter 8. Advanced Copy Services 397
Draft Document for Review March 27, 2014 3:03 pm 7933 08 Advanced Copy Services Matus_final.fm
The maximum number of partnerships between the local and remote systems, including both IP and FC partnerships, is limited to the current supported maximum of three partnerships (four systems total).
In 7.2.0, only a single partnership over IP is supported.
A system can have simultaneous partnerships over FC and IP but with separate systems.
FC zones between two systems must be removed before configuring an IP partnership.
IP partnerships are supported on both 10 Gbps links and 1 Gbps links. However, the
intermix of both on a single link is not currently supported.
The maximum supported round-trip time is 80 milliseconds (ms) for 1 Gbps links.
The maximum supported round-trip time is 10 ms for 10 Gbps links.
The minimum supported link bandwidth is 10 Mbps.
The inter-cluster heartbeat traffic consumes 1 Mbps per link.
Only nodes from two I/O groups can have ports configured for an IP partnership.
Migrations of remote copy relationships directly from Fibre Channel-based partnerships to
IP partnerships are not supported.
IP partnerships between the two systems can be either over IPv4 or IPv6 only and not
both.
VLAN tagging of the IP addresses configured for remote copy is currently not supported.
SAN Volume Controller allows two system IPs (management IPs) to be configured. If
planning to use the second system IP for IP partnerships, both the system IPs should be
in the same subnet.
An added layer of security is provided by means of Challenge Handshake Authentication
Protocol (CHAP) authentication.
TCP ports 3260 and 3265 are used for IP partnership communications and therefore
these ports must be opened in firewalls between the systems.
The maximum throughput is currently restricted based on use of 1 Gbps or 10 Gbps ports:
One 1 Gbps port could transfer up to 110 Megabytes per second
Two 1 Gbps ports could transfer up to 220 Megabytes per second
One 10 Gbps port could transfer up to 190 Megabytes per second
Two 10 Gbps ports could transfer up to 280 Megabytes per second
8.6.3 IP partnership and SAN Volume Controller terminology
The IP partnership terminology and abbreviations used are provided in Table 8-7 on
page 398:
Note: The definition of the Bandwidth setting when creating partnerships has changed:
Previously, the bandwidth setting defaulted to 50 MBps, and it was the maximum transfer rate from the primary site to the secondary site for initial synchronization and resynchronization of volumes.
The link bandwidth setting is now configured in Mbps, not MBps, and you set it to a value that the communication link can actually sustain, or to what is actually allocated for replication. The background copy rate setting is now a percentage of the link bandwidth, and it determines the bandwidth that is available for initial synchronization and resynchronization, or for Global Mirror with Change Volumes.
Table 8-7 Terminology

Remote copy group or remote copy port group: A number that groups a set of IP addresses that are connected to the same physical link. Therefore, only IP addresses that are part of the same remote copy group can form remote copy connections with the partner system. Group 0 contains ports that are not configured for remote copy, group 1 contains ports that belong to remote copy port group 1, and group 2 contains ports that belong to remote copy port group 2. Note: Each IP address can be shared for iSCSI host attach and remote copy functionality. Therefore, appropriate settings must be applied to each IP address.
IP partnership: Two SAN Volume Controller systems that are partnered to perform remote copy over native IP links.
FC partnership: Two SAN Volume Controller systems that are partnered to perform remote copy over native Fibre Channel links.
Failover: Failure of a node within an I/O group causes virtual disk access to continue through the surviving node. The IP addresses fail over to the surviving node in the I/O group. When the configuration node of the system fails, management IPs also fail over to an alternate node.
Failback: When the failed node rejoins the system, all failed-over IP addresses are failed back from the surviving node to the rejoined node, and virtual disk access is restored through this node.
linkbandwidthmbits: Aggregate bandwidth of all physical links between two sites in Mbps.
IP partnership or partnership over native IP links: These terms are used to describe the IP partnership feature.

8.6.4 States of IP partnership
The following table lists and describes the different partnership states in IP partnership.

Table 8-8 States of IP partnership

Partially_Configured_Local (systems connected: no; active remote copy I/O: no): This state indicates that the initial discovery has been completed.
Fully_Configured (systems connected: yes; active remote copy I/O: yes): Discovery has successfully completed between the two systems, and the two systems can establish remote copy relationships.
Fully_Configured_Stopped (systems connected: yes; active remote copy I/O: yes): The partnership is stopped on the system.
Fully_Configured_Remote_Stopped (systems connected: yes; active remote copy I/O: no): The partnership is stopped on the remote system.
Not_Present (systems connected: yes; active remote copy I/O: no): The two systems cannot communicate with each other. This state is also seen when data paths between the two systems have not been established.
Fully_Configured_Exceeded (systems connected: yes; active remote copy I/O: no): There are too many systems in the network, and the partnership from the local system to the remote system has been disabled.
Fully_Configured_Excluded (systems connected: no; active remote copy I/O: no): The connection is excluded because of too many problems, or either system is unable to support the I/O workload for the Metro Mirror and Global Mirror relationships.
To establish two systems in IP partnerships, you need to perform the following steps:
1. The administrator configures the CHAP secret on both the systems. Note that this is not a
mandatory step and users can choose to not configure the CHAP secret.
2. If required, the administrator configures the system IP addresses on both local and remote
systems, so that they can discover each other over the network.
3. The administrator then configures the SAN Volume Controller ports on each node in both systems by using the svctask cfgportip command (see the sketch after these steps):
a. Configure the IP addresses for remote copy data.
b. Add the IP addresses to the respective remote copy port group.
c. Define whether host access over iSCSI is allowed on these ports.
4. The administrator then establishes the partnership with the remote system from the local system, where the partnership state transitions to the Partially_Configured_Local state.
5. The administrator then establishes the partnership from the remote system with the local system. If successful, the partnership state transitions to the Fully_Configured state, which implies that the partnership over the IP network has been successfully established. The partnership state momentarily remains in the Not_Present state before transitioning to Fully_Configured.
6. The administrator then creates Metro Mirror, Global Mirror, and Global Mirror with Change
Volume relationships.
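The following CLI sketch illustrates steps 3 and 4 on one system. The node IDs, port numbers, IP addresses, and bandwidth values are hypothetical, and the exact parameter names (in particular for assigning the remote copy port group) should be verified against the CLI reference for your code level:

IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -remotecopy 1 1
IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 -remotecopy 1 1
IBM_2145:ITSO-CLS1:admin>svctask mkippartnership -type ipv4 -clusterip 10.20.20.11 -linkbandwidthmbits 100 -backgroundcopyrate 50
IBM_2145:ITSO-CLS1:admin>svcinfo lspartnership

The same commands are then repeated from the remote system (step 5), pointing -clusterip at the local system.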
Partnership consideration: When creating the partnership, there is no master/auxiliary status defined or implied. The partnership is equal, and the concepts of master/auxiliary and primary/secondary apply only to volume relationships, not to system partnerships.

8.6.5 Remote Copy Groups
This section discusses remote copy groups (or remote copy port groups) and the different ways in which the links between the two remote systems can be configured. The two SAN Volume Controller systems can be connected to each other over one or, at most, two links. To
address the requirement to let SAN Volume Controller know about the physical links between
2 sites, the concept of remote copy port groups has been introduced.
SAN Volume Controller IP addresses that are connected to the same physical link are
designated with identical remote copy port groups. SAN Volume Controller supports three
remote copy groups: 0, 1, and 2. SAN Volume Controller IP addresses are by default in
remote copy port group 0. Ports in port group 0 are not considered for creating Remote
Copy data paths between 2 systems. In order for partnerships to be established over IP links
directly, IP ports need to be either configured in remote copy group 1 if there is a single
inter-site link or both 1 and 2 if there are two inter-site links.
You can assign one IPv4 address and one IPv6 address to each Ethernet port on SAN
Volume Controller platforms. Each of these IP addresses can be shared between iSCSI host
attach and IP partnership. The user must configure the required IP address (IPv4 or IPv6) on
an Ethernet port with a remote copy port group. The administrator might want to use IPv6
addresses for remote copy operations and use IPv4 addresses on that same port for iSCSI
host attach. In that case, for the two systems to establish the IP partnership, both must have IPv6 addresses configured.
Administrators can choose to dedicate an Ethernet port to IP partnership only. In that case,
host access must be explicitly disabled for that IP address and for any other IP address that
is configured on that Ethernet port.
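As an illustration only, assigning an IP address to remote copy port group 1 with host access disabled might look like the following command. The node name, port ID, and addresses are hypothetical, and the -remotecopy and -host parameters are assumptions about the cfgportip syntax at this code level; verify them against the command reference.
svctask cfgportip -node node1 -ip 192.168.10.10 -mask 255.255.255.0 -gw 192.168.10.1 -remotecopy 1 -host no 1
svcinfo lsportip 1   (verify the remote copy port group and host settings for the port)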
8.6.6 Supported configurations
Supported configurations for IP partnership, in the first release, are described in this section.
1. Two 2-node systems in IP partnership over a single inter-site link
Figure 8-13 Single link with only one remote copy port group configured in each system
As detailed in Figure 8-13, there are two systems, System A and System B. A single
remote copy port group 1 is created on Node A1 on System A and on Node B2 on
System B (the administrator could have chosen to configure the remote copy port group on
Node B1 on System B instead of Node B2), because there is only a single inter-site link to
facilitate the IP partnership traffic. At any given time, only the IP addresses configured in
remote copy port group 1 on the nodes in System A and System B participate in
establishing data paths between the two systems after the IP partnerships are created. Note
that in this configuration there are no failover ports configured on the partner node in the
same I/O group.
Note: To establish an IP partnership, each SAN Volume Controller node must have
only a single remote copy port group configured, that is, either 1 or 2. The remaining IP
addresses must be in remote copy port group 0.
This configuration has the following characteristics:
In this configuration, only one node in each system has a remote copy port group
configured, and there are no failover ports configured.
If Node A1 in System A or Node B2 in System B fails for any reason, the IP partnership
stops and enters the not_present state until the failed node has recovered.
After the nodes recover, the IP ports fail back, the IP partnership recovers, and the
partnership state returns to fully_configured.
If the inter-site system link fails, the IP partnerships transition to the not_present state.
This configuration is not recommended because it is not resilient to node failures.
2. Two 2-node systems in IP partnership over a single inter-site link (with failover ports
configured)
Figure 8-14 A single inter-site link, only one remote copy group on each system and nodes with
failover ports configured
As detailed in Figure 8-14, there are two systems: System A and System B. A single
remote copy port group 1 is configured on two Ethernet ports, one on Node A1 and one on
Node A2 on System A, and similarly on Node B1 and Node B2 on System B. Note that even
though two ports are configured for remote copy port group 1, only one Ethernet port in
each remote copy port group actively participates in the IP partnership process. This
selection is determined by a path configuration algorithm that is designed to choose data
paths between the two systems to optimize performance. The other port on the partner node
in the I/O group behaves as a standby port that is used in the event of a node failure. If
Node A1 fails in System A, because a failover port is configured on Ethernet Port 2 of
Node A2, the IP partnership continues servicing replication I/O from Ethernet Port 2.
However, it might take some time for the discovery and path configuration logic to
re-establish paths after the failover, and this can cause partnerships to transition to the
not_present state for that time period.
The particular IP port that is actively participating in the IP partnership is reported in the
svcinfo lsportip output (reported as used).
This configuration has the following characteristics:
In this configuration, each node in the I/O group has the same remote copy port group
configured. However, only one port in that remote copy port group is active at any given
point in time.
If Node A1 in System A or Node B2 in System B fails for any reason, the IP partnerships
trigger rediscovery and continue servicing the I/O from the failover port.
The discovery mechanism that is triggered by the failover might introduce a delay,
wherein the partnerships momentarily transition to the not_present state and then
recover.
3. Two 4-node systems in IP partnership over a single inter-site link (with failover ports
configured) (see Figure 8-15)
Figure 8-15 Multinode systems (two I/O groups on each system) single inter-site link with only one
remote copy port group configured on each node in both the systems
As detailed in Figure 8-15, there are two 4-node systems, System A and System B. A
single remote copy port group 1 is configured on nodes A1, A2, A3, and A4 on System A,
Site A, and similarly on nodes B1, B2, B3, and B4 on System B, Site B. Note that even
though four ports are configured for remote copy group 1, only one Ethernet port in each
remote copy port group on each system actively participates in the IP partnership process.
The selection of ports is determined by a path configuration algorithm. The other ports play
the role of standby ports.
If Node A1 fails in System A, the IP partnership selects one of the remaining ports
configured with remote copy port group 1 from any of the nodes in either of the two I/O
groups in System A. However, it might again take some time (generally tens of seconds) for
the discovery and path configuration logic to re-establish paths after the failover, and this
can cause partnerships to transition to the not_present state. This causes remote copy
relationships to stop, and the administrator might need to check the issues in the event log
and manually start the relationships or remote copy consistency groups if they do not
recover automatically. The particular IP port that is actively participating in the IP
partnership process is reported in the svcinfo lsportip view (reported as used).
This configuration has the following characteristics:
In this configuration, each node in both I/O groups has the remote copy port group
configured. However, only one port in that remote copy port group remains active and
participates in the IP partnership.
If Node A1 in System A or Node B2 in System B fails for any reason, the IP partnerships
trigger discovery and continue servicing the I/O from the failover port.
The discovery mechanism that is triggered by the failover might introduce a delay,
wherein the partnerships momentarily transition to the not_present state and then
recover.
The bandwidth of the single link is completely utilized.
4. Eight-node system in IP partnership with four node system over single inter-site link
(Figure 8-16 on page 404)
Figure 8-16 Multinode systems (8-node system at site A and 4-node system at site B) single
inter-site link with only one remote copy port group configured on each node in both the systems
As detailed in Figure 8-16, there is an eight-node system, System A in Site A, and a
four-node system, System B in Site B. A single remote copy port group 1 is configured on
nodes A1, A2, A5, and A6 on System A at Site A and, similarly, a single remote copy port
group 1 is configured on nodes B1, B2, B3, and B4 on System B. Note that in System A,
even though there are four I/O groups (eight nodes), a maximum of two I/O groups can be
configured for IP partnerships. If Node A1 fails in System A, the IP partnership continues
using one of the ports configured in the remote copy port group from any of the nodes in
either of the two I/O groups in System A. However, it might again take some time for the
discovery and path configuration logic to re-establish paths after the failover, and this might
cause partnerships to transition to the not_present state. This can cause remote copy
relationships to stop, and the administrator then needs to start them manually if the
relationships do not recover automatically. The particular IP port that is actively participating
in the IP partnership process is reported in the svcinfo lsportip output (reported as used).
This configuration has the following characteristics:
In this configuration, each node in both of the I/O groups identified for IP replication has
the remote copy port group configured. However, only one port in that remote copy port
group remains active and participates in IP replication.
If Node A1 in System A or Node B2 in System B fails for any reason, the IP partnerships
trigger discovery and continue servicing the I/O from the failover ports.
The discovery mechanism that is triggered by the failover might introduce a delay,
wherein the partnerships momentarily transition to the not_present state and then
recover.
The bandwidth of the single link is completely utilized.
5. Two 2-node systems with two inter-site links (Figure 8-17)
Figure 8-17 Dual links with two remote copy groups on each system configured
As detailed in Figure 8-17, two remote copy port groups, 1 and 2, are configured on the
nodes in System A and System B because two inter-site links are available. In this
configuration, the failover ports are not configured on partner nodes in the I/O group.
Instead, the ports on both nodes are maintained in different remote copy port groups, and
both remain active and participate in the IP partnership by using both links. However, if
either of the nodes in the I/O group fails, for example, if Node A1 on System A fails, the IP
partnership continues only from the available IP port configured in remote copy port group
2. The effective bandwidth of the two links is therefore reduced to 50%; only the bandwidth
of a single link is available until the failure is resolved.
This configuration has the following characteristics:
In this configuration, there are two inter-site links and hence two remote copy port
groups are configured.
Each node has only one IP port in remote copy port group 1 or 2.
Both the IP ports in the two remote copy port groups participate simultaneously in IP
partnerships. So both the links are used.
During a node failure or link failure, the IP partnership traffic continues from the other
available link and port group. Hence, if two links of 10 Mbps each are available, giving
20 Mbps of effective link bandwidth, during a failure the bandwidth is reduced to 10 Mbps.
After the node failure or link failure is resolved and failback happens, the entire
bandwidth of both the links will be available as before.
6. Two 4-node systems in IP partnership with dual inter-site links (Figure 8-18)
Figure 8-18 Multinode systems (two I/O groups on each system) with dual inter-site links between
the two systems
As detailed in Figure 8-18, there are two 4-node systems, System A and System B. This
configuration is an extension of Configuration 5 to a multinode, multi-I/O group
environment. In this configuration, there are two I/O groups, and each node in an I/O group
has a single port configured in either remote copy port group 1 or 2. Note that even though
two ports are configured in remote copy port groups 1 and 2 on each system, only one IP
port in each remote copy port group on each system actively participates in the IP
partnership. The other ports configured in the same remote copy port group act as standby
ports in the event of a failure. Which port in a configured remote copy port group
participates in the IP partnership at any given moment is determined by a path configuration
algorithm.
In this configuration, if Node A1 fails in System A, IP partnership traffic continues from
Node A2 (that is, remote copy port group 2). At the same time, the failover also causes
discovery in remote copy port group 1, so the IP partnership traffic then continues from
Node A3, on which remote copy port group 1 is configured. The particular IP port that is
actively participating in the IP partnership process is reported in the svcinfo lsportip
output (reported as used).
This configuration has the following characteristics:
In this configuration, each node has a remote copy port group, either 1 or 2, configured
in its I/O group. However, only one port in each remote copy port group remains active
and participates in the IP partnership.
Only a single port from each configured remote copy port group participates in the IP
partnership at a time, so both links are used.
During a node failure, or a port failure on a node actively participating in the IP
partnership, because another port in the same remote copy port group is available in the
system but in a different I/O group, the IP partnership continues from the alternate port.
The pathing algorithm can initiate discovery of an available port in the affected remote
copy port group in the second I/O group, and pathing is re-established, restoring the
total bandwidth; that is, both links are available to support the IP partnership.
7. Eight-node system in IP partnership with a four-node system over dual inter-site links
(Figure 8-19)
Figure 8-19 Multinode systems (two I/O groups on each system) with dual inter-site links between
the two systems
As detailed in Figure 8-19, there is an eight-node System A in Site A and a four-node
System B in Site B. Because a maximum of two I/O groups in IP partnership is supported
in a system, in System A, even though there are four I/O groups (eight nodes), nodes from
only two I/O groups are configured with remote copy port groups. The remaining I/O
groups can be configured for remote copy partnerships over Fibre Channel. In this
configuration, there are two links, and two I/O groups are configured with remote copy port
groups 1 and 2 on each node, but path selection is managed by an internal algorithm; it
therefore depends entirely on the pathing algorithm which of the nodes actively participate
in the IP partnership. So, even if Node A5 and Node A6 are configured with remote copy
port groups properly, it might happen that active IP partnership traffic on both links is driven
from Node A1 and Node A2 only. If Node A1 fails in System A for any reason, IP
partnership traffic continues from Node A2 (that is, remote copy port group 2), and the
failover also causes IP partnership traffic to continue from Node A5, on which remote copy
port group 1 is configured. The particular IP port that is actively participating in the IP
partnership process is reported in the svcinfo lsportip output (reported as used).
This configuration has the following characteristics:
In this configuration, there are two I/O groups, with the nodes in those I/O groups
configured in two remote copy port groups because there are two inter-site links for
participating in the IP partnership. However, only one port in a particular remote copy port
group remains active and participates in the IP partnership.
One port from each remote copy port group participates in the IP partnership
simultaneously, so both links are used.
In the event of a node or port failure on a node actively participating in the IP partnership,
because another port with the same remote copy port group is available on an alternate
node in the system, the remote copy data path is established from that port.
The path selection algorithm initiates discovery of an available port in the affected
remote copy port group in the alternate I/O group, and paths are re-established,
restoring the total bandwidth across both links.
The remaining I/O groups can be in remote copy partnerships with other systems.
8. An example of unsupported configuration for single inter-site link (Figure 8-20)
Figure 8-20 Two node systems with single inter-site link and remote copy port groups configured on
each port and each node as failover ports to support IP partnership
This configuration, as detailed in Figure 8-20, is similar to Configuration 2 but differs
because each node now has the same remote copy port group configured on more than
one IP port.
Note that on any given node, only one port at any given time can participate in the IP
partnership, and configuring multiple ports in the same remote copy group on the same
node is not supported.
9. An example of an unsupported configuration for dual inter-site link (Figure 8-21)
Figure 8-21 Dual Links with two Remote Copy Port Groups with failover Port Groups configured on
each System.
This configuration, as detailed in Figure 8-21, is similar to Configuration 5 but differs
because each node now has two ports configured with remote copy port groups. In this
configuration, the path selection algorithm can select a path in a manner that, at times,
causes partnerships to transition to the not_present state and then recover. This is a
configuration restriction, and this configuration is not recommended until the restriction is
lifted in a future release.
10.Example deployment for Configuration 2 (2 on page 401) with dedicated inter-site link
(Figure 8-22)
Figure 8-22 Example Deployment
Figure 8-22 represents a typical deployment scenario based on Configuration 2, Two
2-node systems in IP partnership over a single inter-site link (with failover ports
configured) (2 on page 401). In this configuration, one port on each node in System A
and System B is configured in remote copy group 1 to establish the IP partnership and
support remote copy relationships. A dedicated inter-site link is used for IP partnership
traffic, and iSCSI host attach is disabled on those ports.
Configuration steps:
a. Configure the system IP addresses properly so that they can be reached over the
inter-site link.
b. Qualify whether the partnerships are to be created over IPv4 or IPv6, assign IP
addresses accordingly, and open firewall ports 3260 and 3265.
c. Configure IP ports for remote copy, on both the systems:
i. Remote copy group 1
ii. Host as no
iii. Assign IP address.
d. Check that the maximum transmission unit (MTU) levels across the network meet the
requirements. The default MTU is 1500 on the SAN Volume Controller.
e. Establish IP partnerships from both the systems.
f. After the partnerships are in the fully_configured state, you can create the remote
copy relationships. (A command-level sketch of these deployment steps follows this list.)
11.Example deployment for Configuration 5 (5 on page 405), ports shared with host access
(Figure 8-23)
Figure 8-23 Example deployment
Figure 8-23 represents a typical deployment scenario based on Configuration 5, Two
2-node systems with two inter-site links, detailed earlier. In this configuration, the IP ports
are shared by iSCSI hosts and the IP partnership.
Configuration steps:
a. Configure the system IP addresses properly so that they can be reached over the
inter-site link.
b. Qualify whether the partnerships are to be created over IPv4 or IPv6, assign IP
addresses accordingly, and open firewall ports 3260 and 3265.
c. Configure IP ports for remote copy, on System A1:
i. Node 1:
Port 1, remote copy port group 1.
Host as yes
Assign IP address.
ii. Node 2:
Port 4, Remote Copy Port Group 2.
Host as yes
Assign IP address
d. Configure IP ports for remote copy on System B1:
i. Node 1:
Port 1, remote copy port group 1.
Host as yes
Assign IP address
ii. Node 2:
Port 4, remote copy port group 2.
Host as yes
Assign IP address
e. Check that the MTU levels across the network meet the requirements. The default
MTU is 1500 on the SAN Volume Controller.
f. Establish IP partnerships from both the systems.
g. After the partnerships are in the fully_configured state you can create the remote
copy relationships.
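The two deployment examples above translate into a small set of CLI commands. The following outline is a hedged sketch only: the object names and addresses are hypothetical, and the cfgportip parameters for the remote copy port group and host access (-remotecopy and -host), as well as the mkippartnership parameters, are assumptions to check against the command reference for your code level.
svctask cfgportip -node node1 -ip 10.1.1.11 -mask 255.255.255.0 -gw 10.1.1.1 -remotecopy 1 -host no 1   (repeat for each planned port on each system; use -host yes where the port is shared with iSCSI hosts)
svctask mkippartnership -type ipv4 -clusterip 10.2.2.11 -linkbandwidthmbits 100   (run on both systems, each pointing at the other system IP)
svcinfo lspartnership   (wait until the state is fully_configured)
svctask mkrcrelationship -master Prod_Vol01 -aux DR_Vol01 -cluster ITSO_SVC_B -name Rel_Vol01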
8.6.7 Setting up SAN Volume Controller system IP partnership using the SAN
Volume Controller GUI
Refer to 10.9.3, Creating the IP partnership between two remote SAN Volume Controller
systems on page 755 for steps on how to create IP partnership between SAN Volume
Controller systems using the SAN Volume Controller GUI.
8.7 Metro Mirror
In the following topics, we describe the Metro Mirror copy service, which is a synchronous
remote copy function. Metro Mirror in the SAN Volume Controller is similar to Metro Mirror in
the IBM System Storage DS family, at a functional level, but the implementation differs.
The SAN Volume Controller provides a single point of control when enabling Metro Mirror in
your networks, regardless of the disk subsystems that are used, so long as those disk
subsystems are supported by the SAN Volume Controller.
The general application of Metro Mirror is to maintain two real-time synchronized copies of a
disk. Often, two copies are geographically dispersed between two SAN Volume Controller
systems, although it is possible to use Metro Mirror within a single system (within an I/O
Group). If the master copy fails, you can enable an auxiliary copy for I/O operation.
A typical application of this function is to set up a dual-site solution using two SAN Volume
Controller systems. The first site is considered the primary or production site, and the second
site is considered the backup site or failover site, which is activated when a failure at the first
site is detected.
Tips: Note that intracluster Metro Mirror will consume more resources within the system
when compared to an intercluster Metro Mirror relationship, where resource allocation is
shared between the systems. Licensing must also be doubled, because both source and
target are within the same system.
Use intercluster Metro Mirror when possible.
8.7.1 Metro Mirror overview
Metro Mirror establishes a synchronous relationship between two volumes of equal size. The
volumes in a Metro Mirror relationship are referred to as the master (primary) volume and the
auxiliary (secondary) volume. Traditional FC Metro Mirror is primarily used in a metropolitan
area or geographical area, up to a maximum distance of 300 km (186.4 miles) to provide
synchronous replication of data. With synchronous copies, host applications write to the
master volume, but they do not receive confirmation that the write operation has completed
until the data is written to the auxiliary volume. This action ensures that both the volumes
have identical data when the copy completes. After the initial copy completes, the Metro
Mirror function maintains a fully synchronized copy of the source data at the target site at all
times.
Metro Mirror has the following characteristics:
Zero RPO
Synchronous
Production application performance impacted by round-trip latency
Remember that increased distance will directly affect host I/O performance, because the
writes are synchronous. Use the requirements for application performance when selecting
your Metro Mirror auxiliary location.
Consistency Groups can be used to maintain data integrity for dependent writes, similar to
FlashCopy Consistency Groups and Global Mirror Consistency Groups, which will be
discussed later.
The SAN Volume Controller provides both intracluster and intercluster Metro Mirror.
Intracluster Metro Mirror
Intracluster Metro Mirror performs the intracluster copying of a volume, in which both volumes
belong to the same system and I/O Group within the system. Because it is within the same
I/O Group, there must be sufficient bitmap space within the I/O Group for both sets of
volumes, as well as licensing on the system.
Intercluster Metro Mirror
Intercluster Metro Mirror performs intercluster copying of a volume, in which one volume
belongs to a system and the other volume belongs to a separate system.
Two SAN Volume Controller systems must be defined in a SAN Volume Controller
partnership, which must be performed on both SAN Volume Controller systems to establish a
fully functional Metro Mirror partnership.
Using standard single-mode connections, the supported distance between two SAN Volume
Controller systems in a Metro Mirror partnership is 10 km (6.2 miles), although greater
distances can be achieved by using extenders. For extended distance solutions, contact your
IBM representative.
Important: Performing Metro Mirror across I/O Groups within a system is not supported.
Limit: When a local fabric and a remote fabric are connected together for Metro Mirror
purposes, the inter-switch link (ISL) hop count between a local node and a remote node
cannot exceed seven.
8.7.2 Remote copy techniques
In this section, we describe the differences between synchronous remote copy and
asynchronous remote copy.
Synchronous remote copy
Metro Mirror is a fully synchronous remote copy technique that ensures, as long as writes to
the auxiliary volumes are possible, that writes are committed at both the master and auxiliary
volumes before write completion has been acknowledged to the host.
Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master volume to the auxiliary volume to fail. In that case, Metro Mirror suspends writes to
the auxiliary volume and allows I/O to the master volume to continue, to avoid affecting the
operation of the master volumes.
Figure 8-24 on page 413 illustrates how a write to the master volume is mirrored to the cache
of the auxiliary volume before an acknowledgement of the write is sent back to the host that
issued the write. This process ensures that the auxiliary is synchronized in real time, in case it
is needed in a failover situation.
However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, using traditional Fibre Channel Metro Mirror has distance
limitations, based on your performance requirements. The SAN Volume Controller does not
support more than 300 km (186.4 miles).
Figure 8-24 Write on volume in Metro Mirror relationship
8.7.3 Metro Mirror features
SAN Volume Controller Metro Mirror supports the following features:
Synchronous remote copy of volumes dispersed over metropolitan distances.
The SAN Volume Controller implements Metro Mirror relationships between volume pairs,
with each volume in a pair managed by a SAN Volume Controller system or IBM Storwize
V7000 system (requires 6.3.0).
The SAN Volume Controller supports intracluster Metro Mirror, where both volumes
belong to the same system (and I/O Group).
The SAN Volume Controller supports intercluster Metro Mirror, where each volume
belongs to a separate SAN Volume Controller system. You can configure a specific SAN
Volume Controller system for partnership with another system. All intercluster Metro Mirror
processing takes place between two SAN Volume Controller systems that are configured
in a partnership.
Intercluster and intracluster Metro Mirror can be used concurrently.
The SAN Volume Controller does not require that a control network or fabric is installed to
manage Metro Mirror. For intercluster Metro Mirror, the SAN Volume Controller maintains
a control link between two systems. This control link is used to control the state and
coordinate updates at either end. The control link is implemented on top of the same Fibre
Channel (FC) fabric connection that the SAN Volume Controller uses for Metro Mirror I/O.
The SAN Volume Controller implements a configuration model that maintains the Metro
Mirror configuration and state through major events, such as failover, recovery, and
resynchronization, to minimize user configuration action through these events.
The SAN Volume Controller allows the resynchronization of changed data so that write
failures occurring on either the master or auxiliary volumes do not require a complete
resynchronization of the relationship.
8.7.4 Multiple SAN Volume Controller System Mirroring
Each SAN Volume Controller system can maintain up to three partner system relationships,
allowing as many as four systems to be directly associated with each other. This SAN Volume
Controller partnership capability enables the implementation of disaster recovery (DR)
solutions.
Figure 8-25 on page 415 shows an example of a Multiple System Mirroring configuration.
Note: For restrictions and limitations of native IP replication see 8.6.2, IP partnership
limitations on page 396
Figure 8-25 Multiple System Mirroring configuration example
Software-level restrictions for Multiple System Mirroring:
A partnership between a system running 6.1.0 and a system running a version earlier
than 4.3.1 is not supported.
Systems in a partnership where one system is running 6.1.0 and the other system is
running 4.3.1 cannot participate in additional partnerships with other systems.
Systems that are all running either 6.1.0 or 5.1.0 can participate in up to three system
partnerships.
To use an IBM Storwize V7000 as a system partner, the Storwize V7000 must have
6.3.0 or newer code and be configured to operate in the replication layer. Layer settings
are available only on the Storwize V7000 (a command sketch follows this list).
To use native IP replication between SAN Volume Controller systems, both systems
must be at SAN Volume Controller code 7.2 or higher. A maximum of one IP
partnership is allowed per SAN Volume Controller system.
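If the layer needs to be changed on the Storwize V7000, it is done with the chsystem command. The form shown below is an assumption to verify against the Storwize V7000 documentation; the layer can typically be changed only while no partnerships or remote copy relationships are defined.
svctask chsystem -layer replication   (run on the Storwize V7000)
svcinfo lssystem   (confirm that the layer attribute now shows replication)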
Object name length: SAN Volume Controller 6.1 supports object names up to 63
characters. Previous levels only supported up to 15 characters.
When SAN Volume Controller 6.1 systems are partnered with 4.3.1 and 5.1.0 systems,
various object names are truncated at 15 characters when displayed from 4.3.1 and 5.1.0
systems.
Supported Multiple System Mirroring topologies
Multiple System Mirroring allows for various partnership topologies as illustrated in the
following examples in Figure 8-26:
Example: A-B, A-C, and A-D
Figure 8-26 SAN Volume Controller star topology
Figure 8-26 shows four systems in a star topology, with system A at the center. System A can
be a central DR site for the three other locations.
Using a star topology, you can migrate applications by using a process such as the one
described in the following example:
1. Suspend the application at A.
2. Remove the A-B relationship.
3. Create the A-C relationship (or alternatively, the B-C relationship).
4. Synchronize to system C, and ensure that A-C is established.
Figure 8-27 shows an example of a triangle topology.
Example: A-B, A-C, and B-C
Figure 8-27 SAN Volume Controller triangle topology
Figure 8-28 shows an example of a SAN Volume Controller fully connected topology.
Example: A-B, A-C, A-D, B-C, B-D, and C-D
Figure 8-28 SAN Volume Controller fully connected topology
Figure 8-28 is a fully connected mesh where every system has a partnership to each of the
three other systems. This topology allows volumes to be replicated between any pair of
systems.
Example: A-B, B-C, and C-D
Figure 8-29 shows a daisy-chain topology.
Figure 8-29 SAN Volume Controller daisy-chain topology
Note that although systems can have up to three partnerships, a volume can be part of only
one remote copy relationship, for example, A-B.
System partnership intermix: All of the preceding topologies are valid for the intermix of
the IBM Storwize V7000 with the SAN Volume Controller, as long as the V7000 is set to the
replication layer and running 6.3.0 code.
8.7.5 Importance of write ordering
Many applications that use block storage must survive failures, such as the loss of power or a
software crash, without losing the data that existed prior to the failure. Because many
applications need to perform large numbers of update operations in parallel with storage,
maintaining write ordering is key to ensuring the correct operation of applications following a
disruption.
An application that performs a high volume of database updates is usually designed with the
concept of dependent writes. With dependent writes, it is important to ensure that an earlier
write has completed before a later write is started. Reversing the order of dependent writes
can undermine an application's algorithms and can lead to problems, such as detected or
undetected data corruption.
See 8.4.3, Consistency Groups on page 375 for more information regarding dependent
writes.
Metro Mirror Consistency Groups
A Metro Mirror Consistency Group can contain an arbitrary number of relationships up to the
maximum number of Metro Mirror relationships supported by the SAN Volume Controller
system. Metro Mirror commands can be issued to a Metro Mirror Consistency Group and
therefore simultaneously for all Metro Mirror relationships defined within that Consistency
Group, or to a single Metro Mirror relationship that is not part of a Metro Mirror Consistency
Group. For example, when issuing a Metro Mirror startrcconsistgrp command to the
Consistency Group, all of the Metro Mirror relationships in the Consistency Group are started
at the same time.
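As a hedged illustration, a Consistency Group like the one in Figure 8-30 might be created, populated, and started with commands similar to the following; the group, relationship, and remote system names are hypothetical.
svctask mkrcconsistgrp -name CG_MM_1 -cluster ITSO_SVC_B
svctask chrcrelationship -consistgrp CG_MM_1 MM_Relationship_1
svctask chrcrelationship -consistgrp CG_MM_1 MM_Relationship_2
svctask startrcconsistgrp CG_MM_1   (starts all relationships in the group at the same time)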
Figure 8-30 on page 419 illustrates the concept of Metro Mirror Consistency Groups.
Because the MM_Relationship 1 and 2 are part of the Consistency Group, they can be
handled as one entity. The stand-alone MM_Relationship 3 is handled separately.
Figure 8-30 Metro Mirror Consistency Group
Certain uses of Metro Mirror require the manipulation of more than one relationship. Metro
Mirror Consistency Groups can provide the ability to group relationships so that they are
manipulated in unison.
Consider the following points:
Metro Mirror relationships can be part of a Consistency Group, or they can be stand-alone
and therefore handled as single instances.
A Consistency Group can contain zero or more relationships. An empty Consistency
Group, with zero relationships in it, has little purpose until it is assigned its first
relationship, except that it has a name.
All relationships in a Consistency Group must have corresponding master and auxiliary
volumes.
Although it is possible to use Consistency Groups to manipulate sets of relationships that do
not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a Consistency Group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
Consistency Group.
For example, consider the case of two applications that are completely independent, yet they
are placed into a single Consistency Group. In the event of an error, there is a loss of
synchronization, and a background copy process is required to recover synchronization.
While this process is in progress, Metro Mirror rejects attempts to enable access to the
auxiliary volumes of either application.
If one application finishes its background copy more quickly than the other application, Metro
Mirror still refuses to grant access to its auxiliary volumes even though it is safe in this case.
The Metro Mirror policy is to refuse access to the entire Consistency Group if any part of it is
inconsistent.
Stand-alone relationships and Consistency Groups share a common configuration and state
model. All of the relationships in a non-empty Consistency Group have the same state as the
Consistency Group.
8.7.6 Remote copy intercluster communication
With traditional Fibre Channel (FC or FCoE), the intercluster communication between
systems in a Metro Mirror and Global Mirror partnership is performed over the SAN. In the
following section, we provide details about this communication path.
For details on intercluster communication between systems in an IP partnership, refer to
8.6.4, States of IP partnership on page 398.
Zoning
The SAN Volume Controller node ports on each SAN Volume Controller system must be able
to communicate with each other for the partnership creation to be performed. Switch zoning is
critical to facilitating intercluster communication. See Chapter 3, Planning and configuration
on page 71 for critical information regarding proper zoning for intercluster communication.
Intercluster communication channels
When a SAN Volume Controller system partnership has been defined on a pair of systems,
additional intercluster communication channels are established:
A single control channel, which is used to exchange and coordinate configuration
information
I/O channels between each of these nodes in the systems
These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and are repaired to maintain operation where possible. If communication between
SAN Volume Controller systems is interrupted or lost, an event is logged (and consequently,
the Metro Mirror and Global Mirror relationships will stop).
Intercluster links
All SAN Volume Controller nodes maintain a database of other devices that are visible on the
fabric. This database is updated as devices appear and disappear.
Devices that advertise themselves as SAN Volume Controller nodes are categorized
according to the SAN Volume Controller system to which they belong. SAN Volume Controller
nodes that belong to the same system establish communication channels between
themselves and begin to exchange messages to implement clustering and the functional
protocols of SAN Volume Controller.
Nodes that are in separate systems do not exchange messages after initial discovery is
complete, unless they have been configured together to perform a remote copy relationship.
The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
Alerts: You can configure the SAN Volume Controller to raise Simple Network
Management Protocol (SNMP) traps to the enterprise monitoring system to alert on events
indicating that an interruption in internode communication has occurred.
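A minimal sketch of defining such an SNMP destination is shown below. The IP address and community string are hypothetical, and the mksnmpserver parameters are assumptions to verify against the command reference for your code level.
svctask mksnmpserver -ip 192.168.1.50 -community public -error on -warning on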
If the designated node fails (or all of its logins to the remote system fail), a new node is
chosen to carry control traffic. This node change causes the I/O to pause, but it does not put
the relationships in a ConsistentStopped state.
8.7.7 Metro Mirror attributes
The Metro Mirror function in SAN Volume Controller possesses the following attributes:
A SAN Volume Controller system partnership is created between two SAN Volume
Controller systems or a SAN Volume Controller System and IBM Storwize V7000
operating in the replication layer (for intercluster Metro Mirror).
A Metro Mirror relationship is created between two volumes of the same size.
To manage multiple Metro Mirror relationships as one entity, relationships can be made
part of a Metro Mirror Consistency Group, which ensures data consistency across multiple
Metro Mirror relationships and provides ease of management.
When a Metro Mirror relationship is started, and when the background copy has
completed, the relationship becomes consistent and synchronized.
After the relationship is synchronized, the auxiliary volume holds a copy of the production
data at the primary, which can be used for DR.
To access the auxiliary volume, the Metro Mirror relationship must be stopped with the
access option enabled before write I/O will be allowed to the auxiliary.
The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.
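For example, to make the auxiliary volume usable at the recovery site, the relationship is stopped with access enabled and the auxiliary is then mapped to the recovery host. The relationship, volume, and host names below are hypothetical, and mkvdiskhostmap is shown as the usual mapping command rather than a step prescribed by this section.
svctask stoprcrelationship -access MM_Rel_1
svctask mkvdiskhostmap -host DR_Host_1 MM_Aux_Vol_1   (the auxiliary volume is now available for host I/O)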
8.7.8 Methods of synchronization
This section describes two methods that can be used to establish a synchronized
relationship.
Full synchronization after creation
The full synchronization after creation method is the default method. It is the simplest in that it
requires no administrative activity apart from issuing the necessary commands. However, in
certain environments, the available bandwidth can make this method unsuitable.
Use this command sequence for a single relationship:
Run mkrcrelationship without specifying the -sync option.
Run startrcrelationship without specifying the -clean option.
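A minimal sketch of this default sequence, using hypothetical volume, relationship, and remote system names:
svctask mkrcrelationship -master MM_Master_Vol -aux MM_Aux_Vol -cluster ITSO_SVC_B -name MM_Rel_1
svctask startrcrelationship MM_Rel_1   (the background copy then brings the auxiliary into synchronization)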
Synchronized before creation
In this method, the administrator must ensure that the master and auxiliary volumes contain
identical data before creating the relationship:
Both disks are created with the security delete feature to make all data zero.
A complete tape image (or other method of moving data) is copied from one disk to the
other disk.
With this technique, do not allow I/O on the master or auxiliary before the relationship is
established.
Then, the administrator must run these commands:
Run mkrcrelationship with the -sync flag.
Run startrcrelationship without the -clean flag.
Important: Failure to perform these steps correctly can cause Metro Mirror to report the
relationship as consistent when it is not, therefore creating a data loss or data integrity
exposure for hosts accessing data on the auxiliary volume.
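The equivalent sketch for this method adds the -sync flag at creation time, again with hypothetical names:
svctask mkrcrelationship -master MM_Master_Vol -aux MM_Aux_Vol -cluster ITSO_SVC_B -name MM_Rel_1 -sync
svctask startrcrelationship MM_Rel_1   (no background copy is required because the volumes are declared identical)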
8.7.9 Metro Mirror states and events
In this section, we describe the various states of a Metro Mirror relationship and the
conditions that cause them to change.
In Figure 8-31, the Metro Mirror relationship state diagram shows an overview of the states
that can apply to a Metro Mirror relationship in a connected state.
Figure 8-31 Metro Mirror mapping state diagram
When creating the Metro Mirror relationship, you can specify if the auxiliary volume is already
in sync with the master volume, and the background copy process is then skipped. This
capability is especially useful when creating Metro Mirror relationships for volumes that have
been created with the format option.
The step identifiers that are shown in Figure 8-31 are described here:
Step 1:
a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror
relationship enters the ConsistentStopped state.
b. The Metro Mirror relationship is created without specifying that the master and auxiliary
volumes are in sync, and the Metro Mirror relationship enters the InconsistentStopped
state.
Step 2:
a. When starting a Metro Mirror relationship in the ConsistentStopped state, the Metro
Mirror relationship enters the ConsistentSynchronized state, provided that no updates
(write I/O) have been performed on the master volume while in the ConsistentStopped
state. Otherwise, the -force option must be specified, and the Metro Mirror relationship
then enters the InconsistentCopying state while the background copy is started.
b. When starting a Metro Mirror relationship in the InconsistentStopped state, the Metro
Mirror relationship enters the InconsistentCopying state, while the background copy is
started.
Step 3
When the background copy completes, the Metro Mirror relationship transitions from the
InconsistentCopying state to the ConsistentSynchronized state.
Step 4:
a. When stopping a Metro Mirror relationship in the ConsistentSynchronized state,
specifying the -access option, which enables write I/O on the auxiliary volume, the
Metro Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume, when the Metro Mirror relationship is in the
ConsistentStopped state, issue the command svctask stoprcrelationship specifying
the -access option, and the Metro Mirror relationship enters the Idling state.
Step 5:
a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the
-primary argument to set the copy direction. Given that no write I/O has been
performed (to either the master or auxiliary volume) while in the Idling state, the Metro
Mirror relationship enters the ConsistentSynchronized state.
b. If write I/O has been performed to either the master or auxiliary volume, the -force
option must be specified, and the Metro Mirror relationship then enters the
InconsistentCopying state, while the background copy is started.
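As a hedged illustration of steps 4 and 5, the following sequence enables write access to the auxiliary and later restarts the relationship with a chosen copy direction; the relationship name is hypothetical, and -force is required only if write I/O occurred while the relationship was in the Idling state.
svctask stoprcrelationship -access MM_Rel_1   (the relationship enters the Idling state and the auxiliary accepts write I/O)
svctask startrcrelationship -primary aux -force MM_Rel_1   (restarts the relationship with the auxiliary as the copy source)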
Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an
error), a state transition is applied:
For example, the Metro Mirror relationships in the ConsistentSynchronized state enter the
ConsistentStopped state, and the Metro Mirror relationships in the InconsistentCopying
state enter the InconsistentStopped state.
If the connection is broken between the SAN Volume Controller systems in a partnership,
all (intercluster) Metro Mirror relationships enter a Disconnected state. For further
information, see Connected versus disconnected on page 424.
State overview
In the following sections, we provide an overview of the various Metro Mirror states.
Common states: Stand-alone relationships and Consistency Groups share a common
configuration and state model. All Metro Mirror relationships in a Consistency Group that is
not empty have the same state as the Consistency Group.
Connected versus disconnected
Under certain error scenarios (for example, a power failure at one site causing one complete
system to disappear), communications between two systems in a Metro Mirror relationship
can be lost. Alternatively, the fabric connection between the two systems might fail, leaving
the two systems running but unable to communicate with each other.
When the two systems can communicate, the systems and the relationships spanning them
are described as connected. When they cannot communicate, the systems and the
relationships spanning them are described as disconnected.
In this state, both systems are left with fragmented relationships and will be limited regarding
the configuration commands that can be performed. The disconnected relationships are
portrayed as having a changed state. The new states describe what is known about the
relationship and what configuration commands are permitted.
When the systems can communicate again, the relationships become connected again.
Metro Mirror automatically reconciles the two state fragments, taking into account any
configuration or other event that took place while the relationship was disconnected. As a
result, the relationship can either return to the state that it was in when it became
disconnected or enter a new state.
Relationships that are configured between volumes in the same SAN Volume Controller
system (intracluster) will never be described as being in a disconnected state.
Consistent versus inconsistent
Relationships that contain volumes that are operating as secondaries can be described as
being consistent or inconsistent. Consistency Groups that contain relationships can also be
described as being consistent or inconsistent. The consistent or inconsistent property
describes the relationship of the data on the auxiliary to the data on the master volume. It can
be considered a property of the auxiliary volume itself.
An auxiliary volume is described as consistent if it contains data that might have been read by
a host system from the master if power had failed at an imaginary point in time while I/O was
in progress, and power was later restored. This imaginary point in time is defined as the
recovery point. The requirements for consistency are expressed with respect to activity at the
master up to the recovery point:
The auxiliary volume contains the data from all of the writes to the master for which the
host received successful completion and that data had not been overwritten by a
subsequent write (before the recovery point).
For writes for which the host did not receive a successful completion (that is, it received
bad completion or no completion at all), and the host subsequently performed a read from
the master of that data and that read returned successful completion and no later write
was sent (before the recovery point), the auxiliary contains the same data as that returned
by the read from the master.
From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred).
If an application is designed to cope with unexpected power failure, this guarantee of
consistency means that the application will be able to use the auxiliary and begin operation
just as though it had been restarted after the hypothetical power failure. Again, maintaining
the application write ordering is the key property of consistency.
See 8.4.3, Consistency Groups on page 375 for more information regarding dependent
writes.
If a relationship, or set of relationships, is inconsistent and an attempt is made to start an
application using the data in the secondaries, a number of outcomes are possible:
The application might decide that the data is corrupt and crash or exit with an event code.
The application might fail to detect that the data is corrupt and return erroneous data.
The application might work without a problem.
Because of the risk of data corruption, and in particular undetected data corruption, Metro
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.
Consistency as a concept can be applied to a single relationship or a set of relationships in a
Consistency Group. Write ordering is a concept that an application can maintain across a
number of disks accessed through multiple systems; therefore, consistency must operate
across all those disks.
When deciding how to use Consistency Groups, the administrator must consider the scope of
an application's data, taking into account all of the interdependent systems that communicate
and exchange information.
If two programs or systems communicate and store details as a result of the information
exchanged, either of the following approaches must be taken:
All of the data accessed by the group of systems must be placed into a single Consistency
Group.
The systems must be recovered independently (each within its own Consistency Group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistent versus synchronized
A copy that is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the master and auxiliary volumes only differ in regions where writes are
outstanding from the host.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in time in the past. Write I/O might have continued to a
master and not have been copied to the auxiliary. This state arises when it becomes
impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between systems when writing to the auxiliary.
When communication is lost for an extended period of time, Metro Mirror tracks the changes
that occurred on the master, but not the order of such changes or the details of such changes
(write data). When communication is restored, it is impossible to synchronize the auxiliary
without sending write data to the auxiliary out of order and, therefore, losing consistency.
Two policies can be used to cope with this situation:
Make a point-in-time copy of the consistent auxiliary before allowing the auxiliary to
become inconsistent. In the event of a disaster before consistency is achieved again, the
point-in-time copy target provides a consistent, although out-of-date, image.
Accept the loss of consistency and the loss of a useful auxiliary, while synchronizing the
auxiliary.
Detailed states
In the following sections, we describe the states that are portrayed to the user, for either
Consistency Groups or relationships. We also describe additional information that is available
in each state. The major states are designed to provide guidance about the available
configuration commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for either read or write I/O. A copy process needs
to be started to make the auxiliary consistent.
This state is entered when the relationship or Consistency Group was InconsistentCopying
and has either suffered a persistent error or received a stop command that has caused the
copy process to stop.
A start command causes the relationship or Consistency Group to move to the
InconsistentCopying state. A stop command is accepted, but has no effect.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for either read or write I/O.
This state is entered after a start command is issued to an InconsistentStopped relationship
or a Consistency Group. It is also entered when a forced start is issued to an Idling or
ConsistentStopped relationship or Consistency Group.
In this state, a background copy process runs that copies data from the master to the auxiliary
volume.
In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.
A persistent error or stop command places the relationship or Consistency Group into an
InconsistentStopped state. A start command is accepted but has no effect.
If the background copy process completes on a stand-alone relationship, or on all
relationships for a Consistency Group, the relationship or Consistency Group transitions to
the ConsistentSynchronized state.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date with respect to the master.
This state can arise when a relationship was in a ConsistentSynchronized state and suffers
an error that forces a Consistency Freeze. It can also arise when a relationship is created with
a CreateConsistentFlag set to TRUE.
Normally, following an I/O error, subsequent write activity causes updates to the master, and the auxiliary is no longer synchronized (the synchronized attribute is set to false). In this case, to reestablish
synchronization, consistency must be given up for a period. You must use a start command
with the -force option to acknowledge this condition, and the relationship or Consistency
Group transitions to InconsistentCopying. Enter this command only after all outstanding
events have been repaired.
In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can enter a switch command that moves the relationship or Consistency Group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.
If the relationship or Consistency Group becomes disconnected, the auxiliary transitions to
ConsistentDisconnected. The master transitions to IdlingDisconnected.
An informational status log is generated whenever a relationship or Consistency Group enters
the ConsistentStopped state with a status of Online. You can configure this event to generate
an SNMP trap that can be used to trigger automation or manual intervention to issue a start
command following a loss of synchronization.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O, and the auxiliary volume is accessible for read-only I/O.
Writes that are sent to the master volume are sent to both the master and auxiliary volumes.
Either successful completion must be received for both writes, the write must be failed to the
host, or a state must transition out of the ConsistentSynchronized state before a write is
completed to the host.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
A switch command leaves the relationship in the ConsistentSynchronized state, but it
reverses the master and auxiliary roles.
A start command is accepted, but it has no effect.
If the relationship or Consistency Group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role.
Consequently, both master and auxiliary volumes are accessible for write I/O.
In this state, the relationship or Consistency Group accepts a start command. Metro Mirror
maintains a record of regions on each disk that received write I/O while idling. This record is
used to determine what areas need to be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship has received write I/O, which is
indicated by the Synchronized status. If the start command leads to loss of consistency, you
must specify the -force parameter.
Following a start command, the relationship or Consistency Group transitions to
ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is
a loss of consistency.
Also, while in this state, the relationship or Consistency Group accepts a -clean option on the
start command. If the relationship or Consistency Group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The volumes or disks in this half of the relationship
or Consistency Group are all in the master role and accept read or write I/O.
The priority in this state is to recover the link to restore the relationship or consistency.
No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or Consistency Group, which depends on these factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.
While IdlingDisconnected, if a write I/O is received that causes the loss of synchronization
(synchronized attribute transitions from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an event is raised to notify you of
the condition. This same event will also be raised when this condition occurs for the
ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship
or Consistency Group are all in the auxiliary role and do not accept read or write I/O.
No configuration activity, except for deletes, is permitted until the relationship becomes
connected again.
When the relationship or Consistency Group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either condition is true:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop command while disconnected.
In either case, the relationship or Consistency Group becomes InconsistentStopped.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship
or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O.
This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary
side of a relationship becomes disconnected.
In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time that it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
system.
A stop command with the -access flag set to true transitions the relationship or Consistency
Group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume and is used as part of a DR scenario.
When the relationship or Consistency Group becomes connected again, the relationship or
Consistency Group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. These conditions must be true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the master while disconnected.
Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has
no relationships and no other state information to show.
It is entered when a Consistency Group is first created. It is exited when the first relationship
is added to the Consistency Group, at which point, the state of the relationship becomes the
state of the Consistency Group.
Background copy
Metro Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy takes place on relationships that are in the
InconsistentCopying state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
all of the nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
For intracluster relationships, each node is assigned a static quota of 25 MBps.
8.7.10 Practical use of Metro Mirror
The master volume is the production volume, and updates to this copy are mirrored in real
time to the auxiliary volume. The contents of the auxiliary volume that existed when the
relationship was created are destroyed.
While the Metro Mirror relationship is active, the auxiliary volume is not accessible for host
application write I/O at any time. The SAN Volume Controller allows read-only access to the
auxiliary volume when it contains a consistent image. This time period is only intended to
allow boot time operating system discovery to complete without error, so that any hosts at the
secondary site can be ready to start up the applications with minimum delay, if required.
Switching copy direction: The copy direction for a Metro Mirror relationship can be switched so that the auxiliary volume becomes the master, and the master volume becomes the auxiliary, much like the FlashCopy restore option.
For example, many operating systems must read logical block address (LBA) zero to
configure a logical unit. Although read access is allowed at the auxiliary, in practice the data on the auxiliary volumes cannot be read by a host, because most operating systems write a
dirty bit to the file system when it is mounted. Because this write operation is not allowed on
the auxiliary volume, the volume cannot be mounted.
This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads that are performed at the auxiliary and
later write I/Os that are performed at the master.
To enable access to the auxiliary volume for host operations, you must stop the Metro Mirror
relationship by specifying the -access parameter.
While access to the auxiliary volume for host operations is enabled, the host must be
instructed to mount the volume and related tasks before the application can be started, or
instructed to perform a recovery process.
For example, the Metro Mirror requirement to enable the auxiliary copy for access
differentiates it from third-party mirroring software on the host, which aims to emulate a
single, reliable disk regardless of what system is accessing it. Metro Mirror retains the
property that there are two volumes in existence, but it suppresses one volume while the copy
is being maintained.
Using an auxiliary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks that must be performed on the host to establish operation on the auxiliary copy are substantial. The goal is to make this process rapid (much faster when compared to recovering from a backup copy) but not seamless.
The failover process can be automated through failover management software. The SAN
Volume Controller provides SNMP traps and programming (or scripting) for the CLI to enable
this automation.
8.7.11 Valid combinations of FlashCopy, Metro Mirror, and Global Mirror
Table 8-9 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions
that are valid for a single volume.
Table 8-9 Valid combinations for a single volume

FlashCopy            Metro Mirror or Global Mirror master   Metro Mirror or Global Mirror auxiliary
FlashCopy source     Supported                              Supported
FlashCopy target     Not supported                          Not supported

8.7.12 Metro Mirror configuration limits
Table 8-10 lists the Metro Mirror configuration limits.

Table 8-10 Metro Mirror configuration limits

Parameter                                                     Value
Number of Metro Mirror Consistency Groups per system          256
Number of Metro Mirror relationships per system               8,192
Number of Metro Mirror relationships per Consistency Group    8,192
Total volume size per I/O Group                               There is a per I/O Group limit of 1,024 TB on the quantity of master and auxiliary volume address spaces that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration will consume all 512 MB of bitmap space for the I/O Group and allow no FlashCopy bitmap space.

Note: Do not use volumes larger than 2 TB in Global Mirror with Change Volumes relationships.
8.8 Metro Mirror commands
For comprehensive details about Metro Mirror commands, see IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287.
The command set for Metro Mirror contains two broad groups:
Commands to create, delete, and manipulate relationships and Consistency Groups
Commands to cause state changes
Where a configuration command affects more than one system, Metro Mirror performs the
work to coordinate configuration activity between the systems. Certain configuration
commands can only be performed when the systems are connected and fail with no effect
when they are disconnected.
Other configuration commands are permitted even though the systems are disconnected. The
state is reconciled automatically by Metro Mirror when the systems become connected again.
For any given command, with one exception, a single system actually receives the command from the administrator. This design is significant for defining the context of a create relationship (mkrcrelationship) or create Consistency Group (mkrcconsistgrp) command: the system that receives the command is called the local system.
The exception that is mentioned previously is the command that sets two systems into a Metro Mirror partnership. The mkfcpartnership or mkippartnership command must be issued to both the local and remote systems.
The commands here are described as an abstract command set and are implemented by
either method:
CLI that can be used for scripting and automation
GUI that can be used for one-off tasks
8.8.1 Listing available SAN Volume Controller system partners
To list the SAN Volume Controller systems that are available to form a system partnership, use the svcinfo lspartnershipcandidate command.
svcinfo lspartnershipcandidate
Use the svcinfo lspartnershipcandidate command to list the systems that are available for
setting up a two-system partnership. This command is a prerequisite for creating Metro Mirror
relationships.
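For example, a hypothetical check from the local system might look like the following; the command takes no mandatory parameters, and each correctly zoned (or IP-reachable) remote system appears in the output with its ID and name:
svcinfo lspartnershipcandidate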
8.8.2 Creating the SAN Volume Controller system partnership
To create a SAN Volume Controller system partnership, use the svctask mkfcpartnership command for Fibre Channel (FC or FCoE) connections, or the svctask mkippartnership command for IP-based connections.
svctask mkfcpartnership
Use the svctask mkfcpartnership command to establish a one-way Metro Mirror partnership between the local system and a remote system. Alternatively, use svctask mkippartnership to create an IP-based partnership.
To establish a fully functional Metro Mirror partnership, you must issue this command to both systems. This step is a prerequisite to creating Metro Mirror relationships between volumes on the SAN Volume Controller systems.
Note: This command is not supported on IP partnerships. Use mkippartnership for IP connections.
When creating the partnership, you can specify the bandwidth to be used by the background
copy process between the local and the remote SAN Volume Controller system. The
bandwidth must be set to a value that is less than or equal to the bandwidth that can be
sustained by the intercluster link.
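As an illustrative sketch only, assuming two hypothetical systems named ITSO_SVC1 (local) and ITSO_SVC2 (remote), a two-way FC partnership might be created as shown next. The -linkbandwidthmbits and -backgroundcopyrate parameter names are assumptions based on the V7.2 command set; verify them, and appropriate values for your link, in the CLI guide before use:
On ITSO_SVC1: svctask mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_SVC2
On ITSO_SVC2: svctask mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 ITSO_SVC1
Until the command has been issued on both systems, the partnership remains only partially configured.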
Background copy bandwidth effect on foreground I/O latency
The background copy bandwidth determines the rate at which the background copy for the
SAN Volume Controller will be attempted. The background copy bandwidth can affect the
foreground I/O latency in one of three ways:
The following results can occur if the background copy bandwidth is set too high for the
Metro Mirror intercluster link capacity:
The background copy I/Os can back up on the Metro Mirror intercluster link.
There is a delay in the synchronous auxiliary writes of foreground I/Os.
The foreground I/O latency will increase as perceived by applications.
If the background copy bandwidth is set too high for the storage at the primary site, the
background copy read I/Os overload the master storage and delay foreground I/Os.
If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the auxiliary overload the secondary storage and again delay
the synchronous auxiliary writes of foreground I/Os.
To set the background copy bandwidth optimally, make sure that you consider all three resources (the master storage, the intercluster link bandwidth, and the secondary storage). Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. This provisioning can be done by calculation (as previously described) or, alternatively, by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then backing off to allow for peaks in workload and a safety margin.
svctask chpartnership
If you need to change the bandwidth that is available for background copy in a SAN Volume Controller system partnership, you can use the svctask chpartnership command to specify the new bandwidth.
8.8.3 Creating a Metro Mirror Consistency Group
To create a Metro Mirror Consistency Group, use the svctask mkrcconsistgrp command.
svctask mkrcconsistgrp
The svctask mkrcconsistgrp command is used to create a new empty Metro Mirror
Consistency Group.
The Metro Mirror Consistency Group name must be unique across all of the Consistency
Groups that are known to the systems owning this Consistency Group. If the Consistency
Group involves two systems, the systems must be in communication throughout the creation
process.
The new Consistency Group does not contain any relationships and will be in the Empty
state. Metro Mirror relationships can be added to the group either upon creation or afterward
by using the svctask chrcrelationship command.
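For example, assuming a hypothetical remote system named ITSO_SVC2 and a placeholder group name of CG_W2K3_MM, an empty intercluster Consistency Group might be created as follows (omit -cluster for an intracluster group):
svctask mkrcconsistgrp -cluster ITSO_SVC2 -name CG_W2K3_MM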
8.8.4 Creating a Metro Mirror relationship
To create a Metro Mirror relationship, use the command svctask mkrcrelationship.
svctask mkrcrelationship
Use the svctask mkrcrelationship command to create a new Metro Mirror relationship. This
relationship persists until it is deleted.
The auxiliary volume must be equal in size to the master volume or the command will fail, and
if both volumes are in the same system, they must both be in the same I/O Group. The master
and auxiliary volume cannot be in an existing relationship and cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When creating the Metro Mirror relationship, you can add it to an already existing Consistency
Group, or it can be a stand-alone Metro Mirror relationship if no Consistency Group is
specified.
To check whether the master or auxiliary volumes comply with the prerequisites to participate
in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.
svcinfo lsrcrelationshipcandidate
Use the svcinfo lsrcrelationshipcandidate command to list available volumes that are
eligible for a Metro Mirror relationship.
When issuing the command, you can specify the source volume name and secondary system
to list candidates that comply with prerequisites to create a Metro Mirror relationship. If the
command is issued with no flags, all volumes that are not disallowed by another configuration
state, such as being a FlashCopy target, are listed.
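As a sketch under assumed names only (MM_Vol_M as the master volume, MM_Vol_A as the auxiliary volume on remote system ITSO_SVC2, and the CG_W2K3_MM group from the previous example), the candidate check and relationship creation might look like this:
svcinfo lsrcrelationshipcandidate
svctask mkrcrelationship -master MM_Vol_M -aux MM_Vol_A -cluster ITSO_SVC2 -consistgrp CG_W2K3_MM -name MM_Rel_1
Omitting the -consistgrp parameter creates a stand-alone relationship instead.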
8.8.5 Changing a Metro Mirror relationship
To modify the properties of a Metro Mirror relationship, use the command svctask
chrcrelationship.
svctask chrcrelationship
Use the svctask chrcrelationship command to modify the following properties of a Metro
Mirror relationship:
Change the name of a Metro Mirror relationship.
Add a relationship to a group.
Remove a relationship from a group using the -force flag.
Adding a Metro Mirror relationship: When adding a Metro Mirror relationship to a Consistency Group that is not empty, the relationship must have the same state and copy direction as the group to be added to it.
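For illustration, using the hypothetical relationship MM_Rel_1 and group CG_W2K3_MM from the previous examples, a rename followed by an addition to the group might look like this:
svctask chrcrelationship -name MM_Rel_1_new MM_Rel_1
svctask chrcrelationship -consistgrp CG_W2K3_MM MM_Rel_1_new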
8.8.6 Changing a Metro Mirror Consistency Group
To change the name of a Metro Mirror Consistency Group, use the svctask chrcconsistgrp
command.
svctask chrcconsistgrp
Use the svctask chrcconsistgrp command to change the name of a Metro Mirror
Consistency Group.
8.8.7 Starting a Metro Mirror relationship
To start a stand-alone Metro Mirror relationship, use the svctask startrcrelationship
command.
svctask startrcrelationship
Use the svctask startrcrelationship command to start the copy process of a Metro Mirror
relationship.
When issuing the command, you can set the copy direction if it is undefined, and optionally,
you can mark the auxiliary volume of the relationship as clean. The command fails if it is used
to attempt to start a relationship that is part of a Consistency Group.
This command can only be issued to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
either by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force flag when restarting the relationship. This situation can arise if,
for example, the relationship was stopped, and then further writes were performed on the
original master of the relationship. The use of the -force flag here is a reminder that the data
on the auxiliary will become inconsistent while resynchronization (background copying)
occurs, and therefore, the data is not usable for DR purposes before the background copy has
completed.
In the Idling state, you must specify the master volume to indicate the copy direction. In other
connected states, you can provide the -primary argument, but it must match the existing
setting.
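For example, starting the hypothetical stand-alone relationship MM_Rel_1 from the Idling state, and restarting it after a stop where resynchronization is required, might look like this (names and direction are placeholders):
svctask startrcrelationship -primary master MM_Rel_1
svctask startrcrelationship -primary master -force MM_Rel_1
The second form is the one that acknowledges that the auxiliary becomes temporarily unusable for DR while the background copy runs.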
8.8.8 Stopping a Metro Mirror relationship
To stop a stand-alone Metro Mirror relationship, use the svctask stoprcrelationship
command.
svctask stoprcrelationship
Use the svctask stoprcrelationship command to stop the copy process for a relationship.
You can also use it to enable write access to a consistent auxiliary volume by specifying the
-access flag.
This command applies to a stand-alone relationship. It is rejected if it is addressed to a
relationship that is part of a Consistency Group. You can issue this command to stop a
relationship that is copying from master to auxiliary.
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue an svctask startrcrelationship command. Write activity is no longer copied
from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized
state, this command causes a consistency freeze.
When a relationship is in a consistent state (that is, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
parameter with the stoprcrelationship command to enable write access to the auxiliary
volume.
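For example, stopping the hypothetical relationship MM_Rel_1, and stopping it with write access enabled to a consistent auxiliary volume, might look like this:
svctask stoprcrelationship MM_Rel_1
svctask stoprcrelationship -access MM_Rel_1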
8.8.9 Starting a Metro Mirror Consistency Group
To start a Metro Mirror Consistency Group, use the svctask startrcconsistgrp command.
Use the svctask startrcconsistgrp command to start a Metro Mirror Consistency Group.
This command can only be issued to a Consistency Group that is connected.
For a Consistency Group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped either by a stop command or by an I/O error.
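For example, starting the hypothetical group CG_W2K3_MM from the Idling state with the copy running from the master might look like this; add -force only if the start leads to a loss of consistency:
svctask startrcconsistgrp -primary master CG_W2K3_MM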
8.8.10 Stopping a Metro Mirror Consistency Group
To stop a Metro Mirror Consistency Group, use the svctask stoprcconsistgrp command.
svctask stoprcconsistgrp
Use the svctask stoprcconsistgrp command to stop the copy process for a Metro Mirror
Consistency Group. It can also be used to enable write access to the auxiliary volumes in the
group if the group is in a consistent state.
If the Consistency Group is in an inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the master to the auxiliary volumes belonging to the relationships in the group.
For a Consistency Group in the ConsistentSynchronized state, this command causes a
consistency freeze.
When a Consistency Group is in a consistent state (for example, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
argument with the svctask stoprcconsistgrp command to enable write access to the
auxiliary volumes within that group.
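For example, freezing the hypothetical group CG_W2K3_MM, or stopping it with write access to its auxiliary volumes enabled, might look like this:
svctask stoprcconsistgrp CG_W2K3_MM
svctask stoprcconsistgrp -access CG_W2K3_MM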
8.8.11 Deleting a Metro Mirror relationship
To delete a Metro Mirror relationship, use the svctask rmrcrelationship command.
svctask rmrcrelationship
Use the svctask rmrcrelationship command to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two volumes. It does
not affect the volumes themselves.
If the relationship is disconnected at the time that the command is issued, the relationship is
only deleted on the system on which the command is being run. When the systems
reconnect, then the relationship is automatically deleted on the other system.
Alternatively, if the systems are disconnected, and you still want to remove the relationship on
both systems, you can issue the rmrcrelationship command independently on both of the
systems.
If you delete an inconsistent relationship, the auxiliary volume becomes accessible even
though it is still inconsistent. This situation is the one case in which Metro Mirror does not
inhibit access to inconsistent data.
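For example, deleting the hypothetical relationship MM_Rel_1 might look like this; if the systems are disconnected, run the same command independently on each system:
svctask rmrcrelationship MM_Rel_1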
8.8.12 Deleting a Metro Mirror Consistency Group
To delete a Metro Mirror Consistency Group, use the svctask rmrcconsistgrp command.
svctask rmrcconsistgrp
Use the svctask rmrcconsistgrp command to delete a Metro Mirror Consistency Group. This
command deletes the specified Consistency Group. You can issue this command for any
existing Consistency Group.
If the Consistency Group is disconnected at the time that the command is issued, the
Consistency Group is only deleted on the system on which the command is being run. When
the systems reconnect, the Consistency Group is automatically deleted on the other system.
Alternatively, if the systems are disconnected, and you still want to remove the Consistency
Group on both systems, you can issue the svctask rmrcconsistgrp command separately on
both of the systems.
If the Consistency Group is not empty, the relationships within it are removed from the
Consistency Group before the group is deleted. These relationships then become
stand-alone relationships. The state of these relationships is not changed by the action of
removing them from the Consistency Group.
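For example, deleting the hypothetical group CG_W2K3_MM might look like this; depending on the code level, a -force flag might be required when the group still contains relationships, so check the CLI guide:
svctask rmrcconsistgrp CG_W2K3_MM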
8.8.13 Reversing a Metro Mirror relationship
To reverse a Metro Mirror relationship, use the svctask switchrcrelationship command.
svctask switchrcrelationship
Use the svctask switchrcrelationship command to reverse the roles of the master and
auxiliary volumes when a stand-alone relationship is in a consistent state. When issuing the
command, the desired master is specified.
8.8.14 Reversing a Metro Mirror Consistency Group
To reverse a Metro Mirror Consistency Group, use the svctask switchrcconsistgrp
command.
svctask switchrcconsistgrp
Use the svctask switchrcconsistgrp command to reverse the roles of the master and
auxiliary volumes when a Consistency Group is in a consistent state. This change is applied
to all of the relationships in the Consistency Group, and when issuing the command, the
desired master is specified.
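For example, making the auxiliary the new master for the hypothetical relationship MM_Rel_1 or for the group CG_W2K3_MM might look like this; the -primary parameter names the side that acts as the master after the switch:
svctask switchrcrelationship -primary aux MM_Rel_1
svctask switchrcconsistgrp -primary aux CG_W2K3_MM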
8.8.15 Background copy
Metro Mirror paces the rate at which the background copy is performed by the appropriate
relationships. The background copy takes place on relationships that are in the
InconsistentCopying state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
the nodes that are performing background copy for one of the eligible relationships. This
allocation is made without regard for the number of disks for which the node is responsible.
Each node in turn divides its allocation evenly between the multiple relationships performing a
background copy.
For intracluster relationships, each node is assigned a static quota of 25 MBps.
8.9 Global Mirror
In the following topics, we describe the Global Mirror copy service, which is an asynchronous
remote copy service. It provides and maintains a consistent mirrored copy of a source volume
to a target volume.
Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The
volumes in a Global Mirror relationship are referred to as the master (source) volume and the
auxiliary (target) volume, which is the same as Metro Mirror.
Consistency Groups can be used to maintain data integrity for dependent writes, similar to
FlashCopy Consistency Groups.
Bandwidth limit: The SVC partnership bandwidth limit is specified in megabytes per
second and only applies during the initial copy or resynchronization. This number is
independent of whatever transport method you use to get data between locations.
Global Mirror writes data to the auxiliary volume asynchronously, which means that host
writes to the master volume will provide the host with confirmation that the write is complete
prior to the I/O completing on the auxiliary volume.
Global Mirror has the following characteristics:
Near-zero RPO
Asynchronous
Production application performance impacted by I/O sequencing preparation time
8.9.1 Intracluster Global Mirror
Although Global Mirror is available for intracluster, it has no functional value for production
use. Intracluster Metro Mirror provides the same capability with less overhead. However,
leaving this functionality in place simplifies testing and allows for client experimentation and
testing (for example, to validate server failover on a single test system). Note that, as with intracluster Metro Mirror, you must consider the increase in the license requirement, because both the source and target volumes exist on the same SAN Volume Controller system.
8.9.2 Intercluster Global Mirror
Intercluster Global Mirror operations require a pair of SAN Volume Controller systems
connected by a number of intercluster links. The two SAN Volume Controller systems must be
defined in a SAN Volume Controller system partnership to establish a fully functional Global
Mirror relationship.
8.9.3 Asynchronous remote copy
Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, the
write operations are completed on the primary site and the write acknowledgement is sent to
the host before it is received at the secondary site. An update of this write operation is sent to
the secondary site at a later stage, which provides the capability to perform remote copy over
distances exceeding the limitations of synchronous remote copy.
The Global Mirror function provides the same function as Metro Mirror remote copy, but over
long-distance links with higher latency, without requiring the hosts to wait for the full round-trip
delay of the long-distance link.
Figure 8-32 shows that a write operation to the master volume is acknowledged back to the
host issuing the write before the write operation is mirrored to the cache for the auxiliary
volume.
Note: SAN Volume Controller, Storwize V7000, IBM Flex System V7000, and Storwize
V7000 Unified systems running V6.x or V7.x do not support the use of intracluster Global
Mirror functionality. This restriction may be lifted in a future software release.
Limit: When a local fabric and a remote fabric are connected together for Global Mirror
purposes, the ISL hop count between a local node and a remote node must not exceed
seven hops.
Figure 8-32 Global Mirror write sequence
The Global Mirror algorithms maintain a consistent image on the auxiliary at all times. They
achieve this consistent image by identifying sets of I/Os that are active concurrently at the
master, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary. As a result, Global Mirror maintains the features of Write Ordering
and Read Stability that are described in this chapter.
The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system, and it is therefore not subject to the
latency of the long-distance link. These two elements of the protocol ensure that the
throughput of the total system can be grown by increasing system size, while maintaining
consistency across a growing data set.
In SAN Volume Controller code V7.2, these algorithms were enhanced to further optimize Global Mirror behavior and latency. As stated before, Global Mirror write I/O from the production system to a secondary SAN Volume Controller system requires serialization and sequence-tagging before it is sent across the network to the remote site (to maintain a write-order consistent copy of the data). In pre-7.2 code versions, sequence-tagged Global Mirror writes on the secondary system were processed without parallelism, and the management of write I/O sequencing imposed additional latency on write I/Os. As a result, environments with high Global Mirror throughput could experience performance impacts on the primary system during peak I/O periods. SAN Volume Controller code V7.2 allows more parallelism in processing and managing Global Mirror writes on the secondary system by using these methods:
Nodes on the secondary system store replication writes in a new redundant non-volatile cache.
Cache content details are shared between nodes.
Cache content details are batched together to make node-to-node latency less of an issue.
Nodes intelligently apply these batches in parallel as soon as possible.
Nodes internally manage and optimize Global Mirror secondary write I/O processing.
Note: The V7.2 Global Mirror enhancements require no changes in administration and management, but before you upgrade to SAN Volume Controller V7.2, you must stop all Global Mirror relationships. The proper checks are provided in the latest svcupgradetest utility.
In a failover scenario, where the secondary site needs to become the master source of data,
certain updates might be missing at the secondary site. Therefore, any applications that will
use this data must have an external mechanism for recovering the missing updates and
reapplying them, for example, a transaction log replay.
8.9.4 SAN Volume Controller Global Mirror features
SAN Volume Controller Global Mirror supports the following features:
Asynchronous remote copy of volumes dispersed over metropolitan-scale distances is
supported.
The SAN Volume Controller implements the Global Mirror relationship between a volume
pair, with each volume in the pair being managed by a SAN Volume Controller system.
The SAN Volume Controller supports intercluster Global Mirror, where each volume
belongs to its separate SAN Volume Controller system. A given SAN Volume Controller
system can be configured for partnership with between one and three other systems. For
IP partnership restrictions refer to 8.6.2, IP partnership limitations on page 396.
The SAN Volume Controller does not require a control network or fabric to be installed to
manage Global Mirror. For intercluster Global Mirror, the SAN Volume Controller maintains
a control link between the two systems. This control link is used to control the state and to
coordinate the updates at either end. The control link is implemented on top of the same
FC fabric connection that the SAN Volume Controller uses for Global Mirror I/O.
The SAN Volume Controller implements a configuration model that maintains the Global
Mirror configuration and state through major events, such as failover, recovery, and
resynchronization, to minimize user configuration action through these events.
The SAN Volume Controller implements flexible resynchronization support, enabling it to
resynchronize volume pairs that have experienced write I/Os to both disks and to
resynchronize only those regions that are known to have changed.
An optional feature for Global Mirror permits a delay simulation to be applied on writes that
are sent to auxiliary volumes.
As of SAN Volume Controller 6.3.0 and above, Global Mirror source and target volumes
can be associated with Change Volumes.
Colliding writes
Prior to V4.3.1, the Global Mirror algorithm required that only a single write be active on any given 512-byte LBA of a volume. If a further write is received from a host while the auxiliary
write is still active, even though the master write might have completed, the new host write will
be delayed until the auxiliary write is complete. This restriction is needed in case a series of
writes to the auxiliary have to be retried (called reconstruction). Conceptually, the data for
reconstruction comes from the master volume.
If multiple writes are allowed to be applied to the master for a given sector, only the most
recent write will get the correct data during reconstruction, and if reconstruction is interrupted
for any reason, the intermediate state of the auxiliary is inconsistent.
Applications that deliver such write activity will not achieve the performance that Global Mirror
is intended to support. A volume statistic is maintained about the frequency of these
collisions. From V4.3.1 onward, an attempt is made to allow multiple writes to a single
location to be outstanding in the Global Mirror algorithm. There is still a need for master writes
to be serialized, and the intermediate states of the master data must be kept in a non-volatile
journal while the writes are outstanding to maintain the correct write ordering during
reconstruction. Reconstruction must never overwrite data on the auxiliary with an earlier
version. The volume statistic that is monitoring colliding writes is now limited to those writes
that are not affected by this change.
Figure 8-33 shows a colliding write sequence example.
Figure 8-33 Colliding writes example
These numbers correspond to the numbers in Figure 8-33:
(1) The first write is performed from the host to LBA X.
(2) The host is provided acknowledgment that the write has completed even though the
mirrored write to the auxiliary volume has not yet completed.
(1) and (2) occur asynchronously with the first write.
(3) The second write is performed from the host also to LBA X. If this write occurs prior to
(2), the write will be written to the journal file.
(4) The host is provided acknowledgment that the second write is complete.
Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that
are sent to auxiliary volumes. This feature allows you to perform testing that detects colliding
writes. Therefore, you can use this feature to test an application before the full deployment of
the feature. The feature can be enabled separately for each of the intracluster or intercluster
Global Mirrors. You specify the delay setting by using the chsystem command and view the
delay by using the lssystem command. The gm_intra_cluster_delay_simulation field
expresses the amount of time that intracluster auxiliary I/Os are delayed. The
gm_inter_cluster_delay_simulation field expresses the amount of time that intercluster
auxiliary I/Os are delayed. A value of zero (0) disables the feature.
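As a sketch only, enabling and then disabling a 20-unit intercluster delay might look like the following. The -gminterdelaysimulation parameter name is an assumption derived from the lssystem field name above; verify it against the CLI guide for your code level, and check the resulting gm_inter_cluster_delay_simulation value in the lssystem output:
svctask chsystem -gminterdelaysimulation 20
svcinfo lssystem
svctask chsystem -gminterdelaysimulation 0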
Multiple System Mirroring
The rules for a Global Mirror Multiple System Mirroring environment are the same as the rules
in an Metro Mirror environment; see 8.7.4, Multiple SAN Volume Controller System Mirroring
on page 414.
Tip: If you are experiencing repeated problems with the delay on your link, make sure that
the delay simulator was properly disabled.
8.9.5 Global Mirror relationship between master and auxiliary volumes
When creating a Global Mirror relationship, the master volume is initially assigned as the
master, and the auxiliary volume is initially assigned as the auxiliary. This design implies that
the initial copy direction is mirroring the master volume to the auxiliary volume. After the initial
synchronization is complete, the copy direction can be changed, if appropriate.
In the most common applications of Global Mirror, the master volume contains the production
copy of the data and is used by the host application. The auxiliary volume contains the
mirrored copy of the data and is used for failover in DR scenarios. Due to the nature of
consistency requirements and SCSI protocol standards, the auxiliary or target volume cannot
be actively in use while the Global Mirror relationship is actively copying data.
8.9.6 Using Change Volumes with Global Mirror
Global Mirror is designed to achieve a recovery point objective (RPO) as low as possible, so
that data is as up-to-date as possible. This design places several strict requirements on your
infrastructure, and in certain situations, with low network link quality or congested or
overloaded hosts, you might be affected by multiple 1920 congestion errors.
Congestion errors happen in three primary situations:
Congestion at the source site via the host or network.
Congestion on the network link or network path.
Congestion at the target site via the host or network.
With 6.3.0, Global Mirror receives new functionality that is designed to address a few
conditions, which negatively affect certain Global Mirror implementations:
The estimation of the bandwidth requirements tends to be complex.
It is often difficult to guarantee that the latency and bandwidth requirements can be met.
Congested hosts on either the source or target site can cause disruption.
Congested network links can cause disruption with only intermittent peaks.
In order to address these issues, Change Volumes have been added as an option for Global
Mirror relationships. Change Volumes use the FlashCopy functionality, but they cannot be
manipulated as FlashCopy volumes, because they are special purpose only. Change
Volumes provide the ability to replicate point-in-time images on a cycling period. The default
is 300 seconds. Your change rate only needs to include the condition of the data at the
point-in-time that the image was taken, instead of all the updates during the period. Using this
function can provide significant reductions in replication volume.
Global Mirror with Change Volumes has the following characteristics:
Larger RPO
Point-in-time copies
Asynchronous
Possible system performance overhead because point-in-time copies are created locally
Figure 8-34 is a diagram of a simple Global Mirror relationship without Change Volumes.
Additional considerations:
A volume can only be part of one Global Mirror relationship at a time.
As of SAN Volume Controller 6.2.0.0, a volume that is a FlashCopy target can be part of
a Global Mirror relationship.
Figure 8-34 Global Mirror without Change Volumes
With Change Volumes, this environment looks like Figure 8-35.
Figure 8-35 Global Mirror with Change Volumes
With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary Change Volume. The mapping is updated on the cycling period (60 seconds to one
day). The primary Change Volume is then replicated to the secondary Global Mirror volume at
the target site, which is then captured in another Change Volume on the target site. This
approach provides an always consistent image at the target site and protects your data from
being inconsistent during resynchronization.
We look more closely at how Change Volumes might save you replication traffic. See
Figure 8-36 on page 444.
Figure 8-36 Global Mirror I/O replication without Change Volumes
In Figure 8-36, you can see a number of I/Os on the Source and the same number on the
Target, and in the same order. Assuming that this data is the same set of data being updated
repeatedly, this approach results in wasted network traffic. The I/O can be completed much
more efficiently, as shown in Figure 8-37.
Figure 8-37 Global Mirror I/O with Change Volumes
In Figure 8-37, the same data is being updated repeatedly, so Change Volumes demonstrate
significant I/O transmission savings, by needing to only send I/O number 16, which was the
last I/O before the cycling period.
You can adjust the cycling period with the chrcrelationship -cycleperiodseconds
<60-86400> command from the CLI. If a copy does not complete in the cycle period, the next
cycle will not start until the prior cycle has completed. For this reason, using Change Volumes
gives you two possibilities for RPO:
If your replication completes in the cycling period, your RPO is twice the cycling period.
If your replication does not complete within the cycling period, your RPO is twice the
completion time. The next cycling period will start immediately after the prior cycling
period is finished.
Carefully consider your business requirements versus the performance of Global Mirror with
Change Volumes. Global Mirror with Change Volumes increases the intercluster traffic for
more frequent cycling periods. So, selecting the shortest cycle periods possible is not always
the answer. In most cases, the default meets requirements and performs reasonably well.
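For example, lengthening the cycling period of a hypothetical Global Mirror with Change Volumes relationship named GMCV_Rel_1 from the default 300 seconds to 600 seconds might look like this:
svctask chrcrelationship -cycleperiodseconds 600 GMCV_Rel_1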
Important: When making your Global Mirror volumes with Change Volumes, make sure
that you remember to select the Change Volume on the auxiliary (target) site. Failure to do
so will leave you exposed during a resynchronization operation.
8.9.7 Importance of write ordering
Many applications that use block storage have a requirement to survive failures, such as loss
of power or a software crash, and to not lose data that existed prior to the failure. Because
many applications must perform large numbers of update operations in parallel to that block
storage, maintaining write ordering is key to ensuring the correct operation of applications
following a disruption.
An application that performs a high volume of database updates is usually designed with the
concept of dependent writes. With dependent writes, it is important to ensure that an earlier
write has completed before a later write is started. Reversing or performing the order of writes
differently than the application intended can undermine the application's algorithms and can
lead to problems, such as detected or undetected data corruption.
The SAN Volume Controller Global Mirror implementation operates in a manner that is
designed to keep a consistent image at the secondary site at all times. The SAN Volume
Controller Global Mirror implementation uses very complex algorithms that operate to identify
sets of data and number those sets of data in sequence. The data is then applied at the
secondary site in the defined sequence.
Operating in this manner ensures that as long as the relationship is in a
consistent_synchronized state, your Global Mirror target data will be at least crash consistent
and allow for quick recovery via your application crash recovery facilities.
See 8.4.3, Consistency Groups on page 375 for more information regarding dependent
writes.
8.9.8 Global Mirror Consistency Groups
Global Mirror Consistency Groups address the issue of dependent writes across volumes,
where the objective is to preserve data consistency across multiple Global Mirror volumes.
Consistency Groups ensure a consistent data set, because applications have relational data
spanning across multiple volumes.
A Global Mirror Consistency Group can contain an arbitrary number of relationships up to the
maximum number of Global Mirror relationships that is supported by the SAN Volume
Controller system. Global Mirror commands can be issued to a Global Mirror Consistency
Group, and therefore simultaneously for all Global Mirror relationships that are defined within
that Consistency Group. Alternatively, Global Mirror commands can be issued to a single Global Mirror relationship if that relationship is not part of a Global Mirror Consistency Group.
For example, when issuing a Global Mirror start command to the Consistency Group, all of
the Global Mirror relationships in the Consistency Group are started at the same time.
Figure 8-38 on page 446 illustrates the concept of Global Mirror Consistency Groups.
Because GM_Relationship 1 and GM_Relationship 2 are part of the Consistency Group, they
can be handled as one entity. The stand-alone GM_Relationship 3 is handled separately.
Important: The GUI for 6.3.0 will automatically create Change Volumes for you. However,
it is a limitation of this initial release that they are fully provisioned volumes. To save space,
create thin-provisioned volumes in advance and use the existing volume option to select
your Change Volumes.
Figure 8-38 Global Mirror Consistency Group
Certain uses of Global Mirror require the manipulation of more than one relationship. Global
Mirror Consistency Groups can provide the ability to group relationships so that they are
manipulated in unison. Global Mirror relationships within a Consistency Group can be in any
form:
Global Mirror relationships can be part of a Consistency Group, or they can be
stand-alone and therefore handled as single instances.
A Consistency Group can contain zero (0) or more relationships. An empty Consistency
Group, with zero relationships in it, has little purpose until it is assigned its first
relationship, except that it has a name.
All of the relationships in a Consistency Group must have matching master and auxiliary systems.
Although it is possible to use Consistency Groups to manipulate sets of relationships that do
not need to satisfy these strict rules, such manipulation can lead to undesired side effects.
The rules behind a Consistency Group mean that certain configuration commands are
prohibited. These specific configuration commands are not prohibited if the relationship is not
part of a Consistency Group.
For example, consider the case of two applications that are completely independent, yet they
are placed into a single Consistency Group. If a loss of synchronization occurs, and a
background copy process is required to recover synchronization, while this process is in
progress, Global Mirror rejects attempts to enable access to the auxiliary volumes of either
application.
If one application finishes its background copy before the other, Global Mirror still refuses to
grant access to its auxiliary volume. Even though it is safe in this case, Global Mirror policy
refuses access to the entire Consistency Group if any part of it is inconsistent.
Stand-alone relationships and Consistency Groups share a common configuration and state
model. All of the relationships in a Consistency Group that is not empty have the same state
as the Consistency Group.
8.9.9 Distribution of work among nodes
For the best performance, Global Mirror volumes must have their preferred nodes evenly
distributed among the nodes of the systems. Each volume within an I/O Group has a
preferred node property that can be used to balance the I/O load between nodes in that
group. Global Mirror also uses this property to route I/O between systems.
8.9.10 Background copy performance
Background copy resources for intercluster remote copy are available within two nodes of an
I/O Group to perform background copy at a maximum of 200 MBps (each piece of data that is
read and each piece of data that is written) total. The background copy performance is
subject to sufficient RAID controller bandwidth. Performance is also subject to other potential
bottlenecks, such as the intercluster fabric, and possible contention from host I/O for the SAN
Volume Controller bandwidth resources.
Background copy I/O will be scheduled to avoid bursts of activity that might have an adverse
effect on system behavior. An entire grain of tracks on one volume will be processed at
around the same time but not as a single I/O. Double buffering is used to try to take
advantage of sequential performance within a grain. However, the next grain within the
volume might not be scheduled for a while. Multiple grains might be copied simultaneously
and might be enough to satisfy the requested rate, unless the available resources cannot
sustain the requested rate.
Background copy proceeds from the low LBA to the high LBA in sequence to avoid conflicts
with FlashCopy, which operates in the opposite direction. Background copy is not expected to conflict unduly with sequential application workloads, because the copy process tends to switch between disks frequently.
8.9.11 Thin-provisioned background copy
Metro Mirror and Global Mirror relationships will preserve the space-efficiency of the master.
Conceptually, the background copy process detects an unallocated region of the master and
sends a special zero buffer to the auxiliary. If the auxiliary volume is thin-provisioned, and
the region is unallocated, the special buffer prevents a write and, therefore, an allocation. If
the auxiliary volume is not thin-provisioned or the region in question is an allocated region of
a thin-provisioned volume, a buffer of real zeros is synthesized on the auxiliary and written
as normal.
8.10 Global Mirror process
There are several steps in the Global Mirror process:
1. A SAN Volume Controller system partnership is created between two SAN Volume
Controller systems (for intercluster Global Mirror).
2. A Global Mirror relationship is created between two volumes of the same size.
3. To manage multiple Global Mirror relationships as one entity, the relationships can be
made part of a Global Mirror Consistency Group to ensure data consistency across
multiple Global Mirror relationships, or simply for ease of management.
4. The Global Mirror relationship is started, and when the background copy has completed,
the relationship is consistent and synchronized.
5. When synchronized, the auxiliary volume holds a copy of the production data at the
master that can be used for DR.
6. To access the auxiliary volume, the Global Mirror relationship must be stopped with the
access option enabled, before write I/O is submitted to the auxiliary.
7. The remote host server is mapped to the auxiliary volume, and the disk is available for I/O.
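The following commands are a minimal sketch of this sequence; the system, volume, and
Consistency Group names are hypothetical, and the individual commands are described in
more detail in 8.11, Global Mirror commands:
# Step 1: create the partnership (run the command on each system)
svctask mkfcpartnership -bandwidth 50 ITSO_SVC_B   # issued on ITSO_SVC_A
svctask mkfcpartnership -bandwidth 50 ITSO_SVC_A   # issued on ITSO_SVC_B
# Steps 2 and 3: create a Consistency Group and a Global Mirror relationship in it
svctask mkrcconsistgrp -cluster ITSO_SVC_B -name CG_GM_1
svctask mkrcrelationship -master GM_Vol1 -aux GM_Vol1_aux -cluster ITSO_SVC_B -consistgrp CG_GM_1 -global
# Step 4: start the background copy; when it completes, the relationship is synchronized (step 5)
svctask startrcconsistgrp CG_GM_1
# Step 6: stop with access enabled before using the auxiliary volumes for DR
svctask stoprcconsistgrp -access CG_GM_1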
8.10.1 Methods of synchronization
This section describes two methods that can be used to establish a relationship.
Full synchronization after creation
Full synchronization after creation is the default method. It is the simplest method, and it
requires no administrative activity apart from issuing the necessary commands. However, in
certain environments, the bandwidth that is available makes this method unsuitable.
Use this sequence for a single relationship:
A new relationship is created (mkrcrelationship is issued) without specifying the -sync
flag.
A new relationship is started (startrcrelationship is issued) without the -clean flag.
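For example (the volume, system, and relationship names are hypothetical), the default
method reduces to the following two commands:
svctask mkrcrelationship -master GM_Vol1 -aux GM_Vol1_aux -cluster ITSO_SVC_B -global -name GM_REL1
svctask startrcrelationship GM_REL1
# Without -sync, the relationship starts in the InconsistentStopped state and a
# full background copy runs after the start command.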
Synchronized before creation
In this method, the administrator must ensure that the master and auxiliary volumes contain
identical data before creating the relationship. There are two ways to ensure that the master
and auxiliary volumes contain identical data:
Both disks are created with the security delete (-fmtdisk) feature to make all data zero.
A complete tape image (or other method of moving data) is copied from one disk to the
other disk.
With this technique, do not allow I/O on either the master or auxiliary before the relationship is
established.
Then, the administrator must ensure that the commands are issued:
A new relationship is created (mkrcrelationship is issued) with the -sync flag.
A new relationship is started (startrcrelationship is issued) without the -clean flag.
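A minimal sketch of this method, assuming the hypothetical names from the previous example
and that both volumes already contain identical data:
svctask mkrcrelationship -master GM_Vol1 -aux GM_Vol1_aux -cluster ITSO_SVC_B -global -sync -name GM_REL1
svctask startrcrelationship GM_REL1
# With -sync, no full background copy is performed; Global Mirror trusts the
# administrator's assertion that the two volumes are already identical.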
Important: Failure to perform these steps correctly can cause Global Mirror to report the
relationship as consistent when it is not, therefore creating a data loss or data integrity
exposure for hosts accessing data on the auxiliary volume.
8.10.2 Global Mirror states and events
In this section, we explain the states of a Global Mirror relationship and the series of events
that modify these states.
Figure 8-39 shows an overview of the states that apply to a Global Mirror relationship in the
connected state.
Figure 8-39 Global Mirror state diagram
When creating the Global Mirror relationship, you can specify whether the auxiliary volume is
already in sync with the master volume, and the background copy process is then skipped.
This capability is especially useful when creating Global Mirror relationships for volumes that
have been created with the format option.
The following steps explain the Global Mirror state diagram (these numbers correspond to the
numbers in Figure 8-39):
Step 1:
a. The Global Mirror relationship is created with the -sync option, and the Global Mirror
relationship enters the ConsistentStopped state.
b. The Global Mirror relationship is created without specifying that the master and
auxiliary volumes are in sync, and the Global Mirror relationship enters the
InconsistentStopped state.
Step 2:
a. When starting a Global Mirror relationship in the ConsistentStopped state, it enters the
ConsistentSynchronized state. This state implies that no updates (write I/O) have been
performed on the master volume while in the ConsistentStopped state. Otherwise, you
must specify the -force option, and the Global Mirror relationship then enters the
InconsistentCopying state while the background copy is started.
b. When starting a Global Mirror relationship in the InconsistentStopped state, the Global
Mirror relationship enters the InconsistentCopying state while the background copy is
started.
Step 3:
a. When the background copy completes, the Global Mirror relationship transitions from
the InconsistentCopying state to the ConsistentSynchronized state.
Step 4:
a. When stopping a Global Mirror relationship in the ConsistentSynchronized state, where
specifying the -access option enables write I/O on the auxiliary volume, the Global
Mirror relationship enters the Idling state.
b. To enable write I/O on the auxiliary volume, when the Global Mirror relationship is in
the ConsistentStopped state, issue the command svctask stoprcrelationship,
specifying the -access option, and the Global Mirror relationship enters the Idling state.
Step 5:
a. When starting a Global Mirror relationship that is in the Idling state, you must specify
the -primary argument to set the copy direction. Because no write I/O has been
performed (to either the master or auxiliary volume) while in the Idling state, the Global
Mirror relationship enters the ConsistentSynchronized state.
b. In case write I/O has been performed to either the master or the auxiliary volume, you
must specify the -force option. The Global Mirror relationship then enters the
InconsistentCopying state, while the background copy is started.
If the Global Mirror relationship is intentionally stopped or experiences an error, a state
transition is applied. For example, Global Mirror relationships in the ConsistentSynchronized
state enter the ConsistentStopped state, and Global Mirror relationships in the
InconsistentCopying state enter the InconsistentStopped state.
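You can observe these transitions from the CLI (the relationship name is hypothetical); the
state field of the lsrcrelationship output reports the states that are described in this section in
lowercase, underscore-separated form:
svcinfo lsrcrelationship GM_REL1
# The state field shows, for example, inconsistent_copying while the background
# copy runs and consistent_synchronized after it completes.
svctask stoprcrelationship GM_REL1
# The relationship moves from consistent_synchronized to consistent_stopped.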
In a case where the connection is broken between the SAN Volume Controller systems in a
partnership, all of the (intercluster) Global Mirror relationships enter a Disconnected state. For
further information, see Connected versus disconnected on page 450.
State overview
The SAN Volume Controller-defined concepts of state are key to understanding the
configuration concepts. We explain them in more detail here.
Connected versus disconnected
This distinction can arise when a Global Mirror relationship is created with the two volumes in
separate systems.
Under certain error scenarios, communications between the two systems might be lost. For
example, power might fail, causing one complete system to disappear. Alternatively, the fabric
connection between the two systems might fail, leaving the two systems running but unable to
communicate with each other.
Common configuration and state model: Stand-alone relationships and Consistency
Groups share a common configuration and state model. All of the Global Mirror
relationships in a Consistency Group that is not empty have the same state as the
Consistency Group.
When the two systems can communicate, the systems and the relationships spanning them
are described as connected. When they cannot communicate, the systems and the
relationships spanning them are described as disconnected.
In this scenario, each system is left with half of the relationship, and each system has only a
portion of the information that was available to it before. Only a subset of the normal
configuration activity is available.
The disconnected relationships are portrayed as having a changed state. The new states
describe what is known about the relationship and which configuration commands are
permitted.
When the systems can communicate again, the relationships become connected again.
Global Mirror automatically reconciles the two state fragments, taking into account any
configuration activity or other event that took place while the relationship was disconnected.
As a result, the relationship can either return to the state that it was in when it became
disconnected or it can enter another connected state.
Relationships that are configured between volumes in the same SAN Volume Controller
system (intracluster) will never be described as being in a disconnected state.
Consistent versus inconsistent
Relationships or Consistency Groups that contain relationships can be described as being
consistent or inconsistent. The consistent or inconsistent property describes the state of the
data on the auxiliary volume in relation to the data on the master volume. Consider the
consistent or inconsistent property to be a property of the auxiliary volume.
An auxiliary volume is described as consistent if it contains data that might have been read by
a host system from the master if power had failed at an imaginary point in time while I/O was
in progress, and power was later restored. This imaginary point in time is defined as the
recovery point. The requirements for consistency are expressed with respect to activity at the
master up to the recovery point:
The auxiliary volume contains the data from all writes to the master for which the host had
received successful completion and that data has not been overwritten by a subsequent
write (before the recovery point).
The writes are on the auxiliary and the host did not receive successful completion for
these writes (that is, the host received bad completion or no completion at all), and the
host subsequently performed a read from the master of that data. If that read returned
successful completion and no later write was sent (before the recovery point), the auxiliary
contains the same data as the data that was returned by the read from the master.
From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred).
If an application is designed to cope with an unexpected power failure, this guarantee of
consistency means that the application will be able to use the auxiliary and begin operation
just as though it had been restarted after the hypothetical power failure.
Again, the application depends on the key properties of consistency:
Write ordering
Read stability for the correct operation at the auxiliary
If a relationship, or a set of relationships, is inconsistent and if an attempt is made to start an
application using the data in the secondaries, a number of outcomes are possible:
The application might decide that the data is corrupt and crash or exit with an error code.
The application might fail to detect that the data is corrupt and return erroneous data.
The application might work without a problem.
Because of the risk of data corruption, and, in particular, undetected data corruption, Global
Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.
You can apply consistency as a concept to a single relationship or to a set of relationships in
a Consistency Group. Write ordering is a concept that an application can maintain across a
number of disks that are accessed through multiple systems, and therefore, consistency must
operate across all of those disks.
When deciding how to use Consistency Groups, the administrator must consider the scope of
an application's data, taking into account all of the interdependent systems that communicate
and exchange information.
If two programs or systems communicate and store details as a result of the information that
is exchanged, take one of the following approaches:
All of the data that is accessed by the group of systems must be placed into a single
Consistency Group.
The systems must be recovered independently (each within its own Consistency Group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistent versus synchronized
A copy that is consistent and up-to-date is described as synchronized. In a synchronized
relationship, the master and auxiliary volumes only differ in the regions where writes are
outstanding from the host.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at an earlier point in time. Write I/O might have continued to a
master and not have been copied to the auxiliary. This state arises when it becomes
impossible to keep up-to-date and maintain consistency. An example is a loss of
communication between systems when writing to the auxiliary.
When communication is lost for an extended period of time, Global Mirror tracks the changes
that occur on the master volumes, but not the order of these changes, or the details of these
changes (write data). When communication is restored, it is impossible to make the auxiliary
synchronized without sending write data to the auxiliary out of order and, therefore, losing
consistency.
You can use two policies to cope with this situation:
Make a point-in-time copy of the consistent auxiliary before allowing the auxiliary to
become inconsistent. In the event of a disaster, before consistency is achieved again, the
point-in-time copy target provides a consistent, though out-of-date, image.
Accept the loss of consistency, and the loss of a useful auxiliary, while making it
synchronized.
Detailed states
In the following sections, we describe the states that are portrayed to the user, for either
Consistency Groups or relationships. We also explain extra information that is available in
each state. We describe the major states to provide guidance about the available
configuration commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is inaccessible for either read or write I/O. A copy process needs to
be started to make the auxiliary consistent.
This state is entered when the relationship or Consistency Group was InconsistentCopying
and has either suffered a persistent error or received a stop command that has caused the
copy process to stop.
A start command causes the relationship or Consistency Group to move to the
InconsistentCopying state. A stop command is accepted, but has no effect.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is inaccessible for either read or write I/O.
This state is entered after a start command is issued to an InconsistentStopped relationship
or Consistency Group. It is also entered when a forced start is issued to an Idling or
ConsistentStopped relationship or Consistency Group.
In this state, a background copy process runs, which copies data from the master to the
auxiliary volume.
In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.
A persistent error or a stop command places the relationship or Consistency Group into the
InconsistentStopped state. A start command is accepted, but it has no effect.
If the background copy process completes on a stand-alone relationship, or on all
relationships for a Consistency Group, the relationship or Consistency Group transitions to
the ConsistentSynchronized state.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date with respect to the master.
This state can arise when a relationship is in the ConsistentSynchronized state and
experiences an error that forces a Consistency Freeze. It can also arise when a relationship is
created with a CreateConsistentFlag set to true.
Normally, following an I/O error, subsequent write activity causes updates to the master, and
the auxiliary is no longer synchronized (the synchronized attribute is set to false). In this case, to reestablish
synchronization, consistency must be given up for a period. A start command with the
-force option must be used to acknowledge this situation, and the relationship or
Consistency Group transitions to InconsistentCopying. Issue this command only after all of
the outstanding events are repaired.
In the unusual case where the master and auxiliary are still synchronized (perhaps following a
user stop, and no further write I/O was received), a start command takes the relationship to
ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch
command is permitted that moves the relationship or Consistency Group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to ConsistentDisconnected. The master side transitions to IdlingDisconnected.
An informational status log is generated every time that a relationship or Consistency Group
enters the ConsistentStopped state with a status of Online. This event can be configured to
raise an SNMP trap, which provides a trigger to automation software to consider issuing a
start command following a loss of synchronization.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O. The auxiliary volume is accessible for read-only I/O.
Writes that are sent to the master volume are sent to both the master and auxiliary volumes.
Before a write is completed to the host, either successful completion must be received for
both copies of the write, the write must be failed to the host, or the relationship must
transition out of the ConsistentSynchronized state.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
A switch command leaves the relationship in the ConsistentSynchronized state, but reverses
the master and auxiliary roles.
A start command is accepted, but has no effect.
If the relationship or Consistency Group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both the master and auxiliary disks are operating in the master
role. Consequently, both the master and auxiliary disks are accessible for write I/O.
In this state, the relationship or Consistency Group accepts a start command. Global Mirror
maintains a record of regions on each disk that received write I/O while Idling. This record is
used to determine what areas need to be copied following a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship has received write I/O, which is
indicated by the synchronized status. If the start command leads to the loss of consistency,
you must specify a -force parameter.
Following a start command, the relationship or Consistency Group transitions to
ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is
a loss of consistency.
Also, while in this state, the relationship or Consistency Group accepts a -clean option on the
start command. If the relationship or Consistency Group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The volume or disks in this half of the relationship
or Consistency Group are all in the master role and accept read or write I/O.
The major priority in this state is to recover the link and reconnect the relationship or
Consistency Group.
No configuration activity is possible (except for deletes or stops) until the relationship is
reconnected. At that point, the relationship transitions to a connected state. The exact
connected state that is entered depends on the state of the other half of the relationship or
Consistency Group, which depends on these factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.
While IdlingDisconnected, if a write I/O is received that causes loss of synchronization
(synchronized attribute transitions from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an event is raised. This same event
will also be raised when this condition occurs for the ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The volumes in this half of the relationship
or Consistency Group are all in the auxiliary role and do not accept read or write I/O.
No configuration activity, except for deletes, is permitted until the relationship reconnects.
When the relationship or Consistency Group reconnects, the relationship becomes
InconsistentCopying automatically unless either of these conditions exist:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop while disconnected.
In either case, the relationship or Consistency Group becomes InconsistentStopped.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The volumes in this half of the relationship
or Consistency Group are all in the auxiliary role and accept read I/O but not write I/O.
This state is entered from ConsistentSynchronized or ConsistentStopped when the auxiliary
side of a relationship becomes disconnected.
In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which
is the point in time that Consistency was frozen. When entered from ConsistentStopped, it
retains the time that it had in that state. When entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
system.
A stop command with the -access flag set to true transitions the relationship or Consistency
Group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume and is used as part of a DR scenario.
When the relationship or Consistency Group reconnects, the relationship or Consistency
Group becomes ConsistentSynchronized only if this state does not lead to a loss of
consistency. This state does not lead to a loss of consistency provided that these conditions
are true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the master while disconnected.
Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
Empty
This state only applies to Consistency Groups. It is the state of a Consistency Group that has
no relationships and no other state information to show.
It is entered when a Consistency Group is first created. It is exited when the first relationship
is added to the Consistency Group, at which point, the state of the relationship becomes the
state of the Consistency Group.
8.10.3 Practical use of Global Mirror
Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The
volumes in a Global Mirror relationship are referred to as the master (primary) volume and the
auxiliary (secondary) volume. The relationship between the two copies is asymmetric.
The master volume is the production volume, and updates to this copy are mirrored to the
auxiliary volume. The contents of the auxiliary volume that existed before the relationship was
created are lost.
While the Global Mirror relationship is active, the auxiliary copy (volume) is inaccessible for
host application write I/O at any time. The SAN Volume Controller allows read-only access to
the auxiliary volume when it contains a consistent image. This read-only access is only
intended to allow the boot-time operating system discovery to complete without error, so that
any hosts at the secondary site can be ready to start up the applications with minimal delay, if
required.
For example, many operating systems need to read LBA 0 (zero) to configure a logical unit.
Although read access is allowed on the auxiliary, in practice the data on the auxiliary volumes
cannot be read by a host, because most operating systems write a dirty bit to the file system
when it is mounted. Because this write operation is not allowed on the auxiliary volume, the
volume cannot be mounted.
This access is only provided where consistency can be guaranteed. However, there is no way
in which coherency can be maintained between reads that are performed at the auxiliary and
later write I/Os that are performed at the master.
To enable access to the auxiliary volume for host operations, you must stop the Global Mirror
relationship by specifying the -access parameter.
While access to the auxiliary volume for host operations is enabled, you must instruct the host
to mount the volume and other related tasks, before the application can be started or
instructed to perform a recovery process.
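As a minimal sketch of the failover steps at the secondary site (the relationship name is
hypothetical):
# Stop the relationship and enable write access to the auxiliary volume
svctask stoprcrelationship -access GM_REL1
# The host at the secondary site can now mount the auxiliary volume and start the
# application or its recovery procedure.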
Using an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks to be performed on the host that is involved in establishing
operation on the auxiliary copy are substantial. The goal is to make this failover rapid (much
faster than recovering from a backup copy), but it is not seamless.
Switching the copy direction: The copy direction for a Global Mirror relationship can be
switched so that the auxiliary volume becomes the master and the master volume
becomes the auxiliary, much like the restore option for FlashCopy.
You can automate the failover process by using failover management software. The SAN
Volume Controller provides SNMP traps and programming (or scripting) for the CLI to enable
this automation.
Table 8-9 on page 430 outlines the combinations of FlashCopy and Metro Mirror or Global
Mirror functions that are valid for a volume.
8.10.4 Global Mirror configuration limits
Table 8-11 lists the Global Mirror configuration limits.
Table 8-11 Global Mirror configuration limits

Parameter                                                    Value
Number of Metro Mirror Consistency Groups per system         256
Number of Metro Mirror relationships per system              8,192 (based on the maximum
                                                             number of volumes per system)
Number of Metro Mirror relationships per Consistency Group   8,192
Total volume size per I/O Group                              A per I/O Group limit of 1,024 TB
                                                             exists on the quantity of master and
                                                             auxiliary volume address spaces that
                                                             can participate in Metro Mirror and
                                                             Global Mirror relationships. This
                                                             maximum configuration will consume
                                                             512 MB of bitmap space for the I/O
                                                             Group and allow 10 MB of space for
                                                             all remaining copy services features.
8.11 Global Mirror commands
Here, we summarize several of the most important Global Mirror commands. For complete
details about all of the Global Mirror commands, see IBM System Storage SAN Volume
Controller: Command-Line Interface User's Guide, GC27-2287.
The command set for Global Mirror contains two broad groups:
Commands to create, delete, and manipulate relationships and Consistency Groups
Commands that cause state changes
If a configuration command affects more than one system, Global Mirror performs the work to
coordinate configuration activity between the systems. Certain configuration commands can
only be performed when the systems are connected, and those commands fail with no effect
when the systems are disconnected.
Other configuration commands are permitted even though the systems are disconnected. The
state is reconciled automatically by Global Mirror when the systems are reconnected.
For any given command, with one exception, a single system actually receives the command
from the administrator. This action is significant for defining the context for a
CreateRelationship (mkrcrelationship) command or a CreateConsistencyGroup
(mkrcconsistgrp) command, in which case, the system receiving the command is called the
local system.
The exception is the command that sets systems into a Global Mirror partnership. The
administrator must issue the mkfcpartnership command to both the local system and to the
remote system.
We describe the commands here as an abstract command set. You can implement these
commands in one of two ways:
Through a CLI, which can be used for scripting and automation
Through a GUI, which can be used for one-off tasks
8.11.1 Listing the available SAN Volume Controller system partners
To create a SAN Volume Controller system partnership, we use the svcinfo
lsclustercandidate command.
svcinfo lsclustercandidate
Use the svcinfo lsclustercandidate command to list the systems that are available for
setting up a two-system partnership. This command is a prerequisite for creating Global
Mirror relationships.
To display the characteristics of the system, use the svcinfo lscluster command,
specifying the name of the system.
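For example, on the local system (the system name is hypothetical):
svcinfo lsclustercandidate
# Lists the remote systems that are visible on the fabric and available for a partnership
svcinfo lscluster ITSO_SVC_A
# Shows the detailed properties of the named system, including the Global Mirror
# settings that are described in the next section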
svctask chcluster
There are four parameters for Global Mirror in the command:
-gmlinktolerance link_tolerance
This parameter specifies the maximum period of time that the system will tolerate delay
before stopping Global Mirror relationships. Specify values between 60 and 86,400
seconds in increments of 10 seconds. The default value is 300. Do not change this value
except under the direction of IBM Support.
-relationshipbandwidthlimit cluster_relationship_bandwidth_limit
This parameter controls the maximum rate at which any one remote copy relationship can
synchronize. The default value for the relationship bandwidth limit is 25 MBps, but this
value can now be specified between 1 MBps and 1,000 MBps. Note that the partnership
overall limit is controlled by the chpartnership -bandwidth command and must be set on
each involved system accordingly.
Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host's performance. The
limit must never be higher than the maximum that is supported by the infrastructure
connecting the remote sites, regardless of the compression rates that you might
achieve.
-gminterdelaysimulation inter_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intercluster copying
to an auxiliary volume) is delayed. This parameter permits you to test performance
implications before deploying Global Mirror and obtaining a long-distance link. Specify a
value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use
this argument to test each intercluster Global Mirror relationship separately.
-gmintradelaysimulation intra_cluster_delay_simulation
This parameter specifies the number of milliseconds that I/O activity (intracluster copying
to an auxiliary volume) is delayed. This parameter permits you to test performance
implications before deploying Global Mirror and obtaining a long-distance link. Specify a
value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use
this argument to test each intracluster Global Mirror relationship separately.
Use the svctask chcluster command to adjust these values; see the following example:
svctask chcluster -gmlinktolerance 300
You can view all of these parameter values with the svcinfo lscluster <clustername>
command.
gmlinktolerance
We need to focus on the gmlinktolerance parameter in particular. If poor response extends
past the specified tolerance, a 1920 event is logged and one or more Global Mirror
relationships are automatically stopped, which protects the application hosts at the primary
site. During normal operations, application hosts experience a minimal effect from the
response times, because the Global Mirror feature uses asynchronous replication.
However, if Global Mirror operations experience degraded response times from the
secondary system for an extended period of time, I/O operations begin to queue at the
primary system. This queue results in an extended response time to application hosts. In this
situation, the gmlinktolerance feature stops Global Mirror relationships, and the application
hosts' response time returns to normal. After a 1920 event has occurred, the Global Mirror
relationships are no longer in the consistent_synchronized state until you fix the cause of
the event and restart your Global Mirror relationships. For this reason, ensure that you
monitor the system to track when these 1920 events occur.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero).
However, the gmlinktolerance feature cannot protect applications from extended response
times if it is disabled. It might be appropriate to disable the gmlinktolerance feature under the
following circumstances:
During SAN maintenance windows where degraded performance is expected from SAN
components and application hosts can withstand extended response times from Global
Mirror volumes.
During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the Global Mirror relationships. For
example, if you test using an I/O generator, which is configured to stress the back-end
storage, the gmlinktolerance feature might detect the high latency and stop the Global
Mirror relationships. Disabling the gmlinktolerance feature prevents this result at the risk of
exposing the test host to extended response times.
We suggest using a script to monitor the Global Mirror status periodically. Example 8-2 shows
an example of a script in ksh to check the Global Mirror status.
Example 8-2 Script example
[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/sh
#
# Description
#
# GM_STATUS        Global Mirror status variable
# HOSTsvcNAME      SVC system IP address
# PARA_TEST        Consistent synchronized variable
# PARA_TESTSTOPIN  Inconsistent stopped variable
# PARA_TESTSTOP    Consistent stopped variable
# IDCONS           Consistency Group ID variable
# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0
# Start program: loop forever when no argument is given
if [[ $1 == "" ]]
then
  CICLI="true"
fi
while $CICLI
do
  # Read the state of the Consistency Group (eighth field of the lsrcconsistgrp output)
  GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
  echo "`date` Global Mirror STATUS <$GM_STATUS> " >> $FLOG
  if [[ $GM_STATUS = $PARA_TEST ]]
  then
    sleep 600
  else
    sleep 600
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
    then
      # Try to restart the stopped Consistency Group
      ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
      TESTEX=`echo $?`
      echo "`date` Global Mirror RESTARTED.......... with RC=$TESTEX " >> $FLOG
    fi
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
    then
      echo "`date` ERROR Global Mirror restart failed <$GM_STATUS>"
    else
      echo "`date` Global Mirror restarted <$GM_STATUS>"
    fi
    sleep 600
  fi
  ((VAR+=1))
done
The script in Example 8-2 on page 459 performs these functions:
Check the Global Mirror status every 600 seconds.
If the status is ConsistentSynchronized, wait another 600 seconds and test again.
If the status is ConsistentStopped or InconsistentStopped, wait another 600 seconds and
then try to restart Global Mirror. If the status remains ConsistentStopped or
InconsistentStopped, it is likely that an associated 1920 event exists, which means that we
might have a performance problem.
Waiting 600 seconds before restarting Global Mirror can give the SAN Volume Controller
enough time to deliver the high workload that is requested by the server. Because Global
Mirror has been stopped for 10 minutes (600 seconds), the auxiliary copy is now out of
date by this amount of time and must be resynchronized.
Sample script: The script that is described in Example 8-2 on page 459 is supplied as is.
A 1920 event indicates that one or more of the SAN components are unable to provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).
If 1920 events are occurring, it might be necessary to use a performance monitoring and
analysis tool, such as the IBM Tivoli Storage Productivity Center, to assist in identifying and
resolving the problem.
8.11.2 Creating a SAN Volume Controller system partnership
To create a SAN Volume Controller system partnership, use the svctask mkfcpartnership
command for traditional Fibre Channel (FC or FCoE) connections, or the svctask
mkippartnership command for IP-based connections.
svctask mkfcpartnership
Use the svctask mkfcpartnership command to establish a one-way Global Mirror
partnership between the local system and a remote system. Alternatively, use svctask
mkippartnership to create an IP-based partnership.
To establish a fully functional Global Mirror partnership, you must issue this command on both
systems. This step is a prerequisite for creating Global Mirror relationships between volumes
on the SAN Volume Controller systems.
When creating the partnership, you can specify the bandwidth to be used by the background
copy process between the local SAN Volume Controller system and the remote SAN Volume
Controller system, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth
must be set to a value that is less than or equal to the bandwidth that can be sustained by the
intercluster link.
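A minimal example, assuming two systems that are named ITSO_SVC_A and ITSO_SVC_B
and a 40 MBps background copy allowance:
svctask mkfcpartnership -bandwidth 40 ITSO_SVC_B   # issued on ITSO_SVC_A
svctask mkfcpartnership -bandwidth 40 ITSO_SVC_A   # issued on ITSO_SVC_B
# For an IP-based partnership, issue svctask mkippartnership with the
# corresponding IP parameters instead.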
Background copy bandwidth effect on foreground I/O latency
The background copy bandwidth determines the rate at which the background copy will be
attempted for Global Mirror. The background copy bandwidth can affect foreground I/O
latency in one of three ways:
The following result can occur if the background copy bandwidth is set too high compared
to the Global Mirror intercluster link capacity:
The background copy I/Os can back up on the Global Mirror intercluster link.
There is a delay in the synchronous auxiliary writes of foreground I/Os.
The foreground I/O latency will increase as perceived by applications.
If the background copy bandwidth is set too high for the storage at the primary site,
background copy read I/Os overload the primary storage and delay foreground I/Os.
If the background copy bandwidth is set too high for the storage at the secondary site,
background copy writes at the secondary overload the secondary storage and again delay
the synchronous secondary writes of foreground I/Os.
To set the background copy bandwidth optimally, make sure that you consider all three
resources (the primary storage, the intercluster link bandwidth, and the secondary storage).
Provision the most restrictive of these three resources between the background copy
bandwidth and the peak foreground I/O workload. Perform this provisioning by calculation or,
alternatively, by determining experimentally how much background copy can be allowed
before the foreground I/O latency becomes unacceptable and then reducing the background
copy to accommodate peaks in workload and an additional safety margin.
svctask chpartnership
To change the bandwidth that is available for background copy in a SAN Volume Controller
system partnership, use the svctask chpartnership command to specify the new bandwidth.
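For example, to lower the background copy bandwidth of an existing partnership to 30 MBps
(the remote system name is hypothetical):
svctask chpartnership -bandwidth 30 ITSO_SVC_B
# Repeat the equivalent change on the remote system if the new limit is to apply
# in both copy directions.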
8.11.3 Creating a Global Mirror Consistency Group
To create a Global Mirror Consistency Group, use the svctask mkrcconsistgrp command.
svctask mkrcconsistgrp
Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror
Consistency Group.
The Global Mirror Consistency Group name must be unique across all Consistency Groups
that are known to the systems owning this Consistency Group. If the Consistency Group
involves two systems, the systems must be in communication throughout the creation
process.
The new Consistency Group does not contain any relationships and will be in the Empty
state. You can add Global Mirror relationships to the group, either upon creation or afterward,
by using the svctask chrcrelationship command.
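For example (the group, relationship, and system names are hypothetical):
# Create an empty Consistency Group that spans the local system and ITSO_SVC_B
svctask mkrcconsistgrp -cluster ITSO_SVC_B -name CG_GM_1
# Add an existing stand-alone Global Mirror relationship to the new group
svctask chrcrelationship -consistgrp CG_GM_1 GM_REL1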
8.11.4 Creating a Global Mirror relationship
To create a Global Mirror relationship, use the svctask mkrcrelationship command.
svctask mkrcrelationship
Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This
relationship persists until it is deleted.
The auxiliary volume must be equal in size to the master volume or the command will fail, and
if both volumes are in the same system, they must both be in the same I/O Group. The master
and auxiliary volume cannot be in an existing relationship, and they cannot be the target of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When creating the Global Mirror relationship, you can add it to a Consistency Group that
already exists, or it can be a stand-alone Global Mirror relationship if no Consistency Group is
specified.
Optional parameter: If you do not use the -global optional parameter, a Metro Mirror
relationship will be created instead of a Global Mirror relationship.
To check whether the master or auxiliary volumes comply with the prerequisites to participate
in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as
shown in the next section.
svcinfo lsrcrelationshipcandidate
Use the svcinfo lsrcrelationshipcandidate command to list the available volumes that are
eligible to form a Global Mirror relationship.
When issuing the command, you can specify the master volume name and auxiliary system
to list the candidates that comply with the prerequisites to create a Global Mirror relationship.
If the command is issued with no parameters, all volumes that are not disallowed by another
configuration state, such as being a FlashCopy target, are listed.
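For example, issued with no parameters (the output volume names depend on your
configuration):
svcinfo lsrcrelationshipcandidate
# Lists all volumes that are not excluded by another configuration state; add the
# master volume name and the auxiliary system, as described above, to narrow the
# list to candidates of the correct size on the correct system.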
8.11.5 Changing a Global Mirror relationship
To modify the properties of a Global Mirror relationship, use the svctask chrcrelationship
command.
svctask chrcrelationship
Use the svctask chrcrelationship command to modify the following properties of a Global
Mirror relationship:
Change the name of a Global Mirror relationship.
Add a relationship to a group.
Remove a relationship from a group using the -force flag.
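For example (the relationship and group names are hypothetical):
# Rename a relationship
svctask chrcrelationship -name GM_DB_REL GM_REL1
# Add the renamed relationship to an existing Consistency Group
svctask chrcrelationship -consistgrp CG_GM_1 GM_DB_REL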
8.11.6 Changing a Global Mirror Consistency Group
To change the name of a Global Mirror Consistency Group, use the following command.
svctask chrcconsistgrp
Use the svctask chrcconsistgrp command to change the name of a Global Mirror
Consistency Group.
8.11.7 Starting a Global Mirror relationship
To start a stand-alone Global Mirror relationship, use the following command.
svctask startrcrelationship
Use the svctask startrcrelationship command to start the copy process of a Global Mirror
relationship.
When issuing the command, you can set the copy direction if it is undefined, and, optionally,
you can mark the auxiliary volume of the relationship as clean. The command fails if it is used
as an attempt to start a relationship that is already a part of a Consistency Group.
Adding a Global Mirror relationship: When adding a Global Mirror relationship to a
Consistency Group that is not empty, the relationship must have the same state and copy
direction as the group to be added to it.
You can only issue this command to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
either by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when restarting the relationship. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship. The use of the -force parameter here is a reminder
that the data on the auxiliary will become inconsistent while resynchronization (background
copying) takes place and, therefore, is unusable for DR purposes before the background copy
has completed.
In the Idling state, you must specify the master volume to indicate the copy direction. In other
connected states, you can provide the -primary argument, but it must match the existing
setting.
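For example, to restart a hypothetical relationship GM_REL1 that is idling, setting the copy
direction from the master and accepting the temporary loss of consistency:
svctask startrcrelationship -primary master -force GM_REL1
# -primary master sets the copy direction; -force acknowledges that the auxiliary
# becomes inconsistent and unusable for DR until the background copy completes.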
8.11.8 Stopping a Global Mirror relationship
To stop a stand-alone Global Mirror relationship, use the svctask stoprcrelationship
command.
svctask stoprcrelationship
Use the svctask stoprcrelationship command to stop the copy process for a relationship.
You can also use this command to enable write access to a consistent auxiliary volume by
specifying the -access parameter.
This command applies to a stand-alone relationship. It is rejected if it is addressed to a
relationship that is part of a Consistency Group. You can issue this command to stop a
relationship that is copying from master to auxiliary.
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue an svctask startrcrelationship command. Write activity is no longer copied
from the master to the auxiliary volume. For a relationship in the ConsistentSynchronized
state, this command causes a Consistency Freeze.
When a relationship is in a consistent state (that is, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
parameter with the svctask stoprcrelationship command to enable write access to the
auxiliary volume.
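For example (the relationship name is hypothetical):
# Stop the copy process; the auxiliary volume remains read-only
svctask stoprcrelationship GM_REL1
# Stop the copy process and enable write access to the auxiliary volume
svctask stoprcrelationship -access GM_REL1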
8.11.9 Starting a Global Mirror Consistency Group
To start a Global Mirror Consistency Group, use the svctask startrcconsistgrp command.
svctask startrcconsistgrp
Use the svctask startrcconsistgrp command to start a Global Mirror Consistency Group.
You can only issue this command to a Consistency Group that is connected.
For a Consistency Group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped either by a stop command or by an I/O error.
8.11.10 Stopping a Global Mirror Consistency Group
To stop a Global Mirror Consistency Group, use the svctask stoprcconsistgrp command.
svctask stoprcconsistgrp
Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror
Consistency Group. You can also use this command to enable write access to the auxiliary
volumes in the group if the group is in a consistent state.
If the Consistency Group is in an inconsistent state, any copy operation stops and does not
resume until you issue the svctask startrcconsistgrp command. Write activity is no longer
copied from the master to the auxiliary volumes that belong to the relationships in the group.
For a Consistency Group in the ConsistentSynchronized state, this command causes a
Consistency Freeze.
When a Consistency Group is in a consistent state (for example, in the ConsistentStopped,
ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access
parameter with the svctask stoprcconsistgrp command to enable write access to the
auxiliary volumes within that group.
8.11.11 Deleting a Global Mirror relationship
To delete a Global Mirror relationship, use the svctask rmrcrelationship command.
svctask rmrcrelationship
Use the svctask rmrcrelationship command to delete the relationship that is specified.
Deleting a relationship only deletes the logical relationship between the two volumes. It does
not affect the volumes themselves.
If the relationship is disconnected at the time that the command is issued, the relationship is
only deleted on the system on which the command is being run. When the systems
reconnect, the relationship is automatically deleted on the other system.
Alternatively, if the systems are disconnected, and you still want to remove the relationship on
both systems, you can issue the rmrcrelationship command independently on both of the
systems.
A relationship cannot be deleted if it is part of a Consistency Group. You must first remove the
relationship from the Consistency Group.
If you delete an inconsistent relationship, the auxiliary volume becomes accessible even
though it is still inconsistent. This situation is the one case in which Global Mirror does not
inhibit access to inconsistent data.
8.11.12 Deleting a Global Mirror Consistency Group
To delete a Global Mirror Consistency Group, use the svctask rmrcconsistgrp command.
svctask rmrcconsistgrp
Use the svctask rmrcconsistgrp command to delete a Global Mirror Consistency Group.
This command deletes the specified Consistency Group. You can issue this command for any
existing Consistency Group.
If the Consistency Group is disconnected at the time that the command is issued, the
Consistency Group is only deleted on the system on which the command is being run. When
the systems reconnect, the Consistency Group is automatically deleted on the other system.
Alternatively, if the systems are disconnected, and you still want to remove the Consistency
Group on both systems, you can issue the svctask rmrcconsistgrp command separately on
both of the systems.
If the Consistency Group is not empty, the relationships within it are removed from the
Consistency Group before the group is deleted. These relationships then become
stand-alone relationships. The state of these relationships is not changed by the action of
removing them from the Consistency Group.
8.11.13 Reversing a Global Mirror relationship
To reverse a Global Mirror relationship, use the svctask switchrcrelationship command.
svctask switchrcrelationship
Use the svctask switchrcrelationship command to reverse the roles of the master volume
and the auxiliary volume when a stand-alone relationship is in a consistent state. When
issuing the command, the desired master needs to be specified.
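For example, to make the auxiliary volume of a hypothetical relationship GM_REL1 the new
master:
svctask switchrcrelationship -primary aux GM_REL1
# After the switch, the former auxiliary volume services host write I/O and the
# former master becomes the auxiliary.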
8.11.14 Reversing a Global Mirror Consistency Group
To reverse a Global Mirror Consistency Group, use the svctask switchrcconsistgrp
command.
svctask switchrcconsistgrp
Use the svctask switchrcconsistgrp command to reverse the roles of the master volume
and the auxiliary volume when a Consistency Group is in a consistent state. This change is
applied to all of the relationships in the Consistency Group. When issuing the command, the
desired master needs to be specified.
8.12 Troubleshooting remote copy
Remote copy (Global Mirror and Metro Mirror) has two primary error codes that will be
displayed: 1920 or 1720. A 1920 is a congestion error. This error means that either the source,
the link between the source and target, or the target was unable to keep up with the
requested rate of demand. A 1720 error is a heartbeat or system partnership communication
error. This error often is more serious, because failing communication between your system
partners involves extended diagnostic time.
8.12.1 1920 error
A 1920 error (event ID 050010) can have several triggers, such as the following probable
causes:
Primary 2145 system or SAN fabric problem (10%)
Primary 2145 system or SAN fabric configuration (10%)
Secondary 2145 system or SAN fabric problem (15%)
Secondary 2145 system or SAN fabric configuration (25%)
Intercluster link problem (15%)
Intercluster link configuration (25%)
In practice, the most often overlooked cause is latency. Global Mirror has a round-trip-time
tolerance limit of 80 milliseconds. A message sent from your source SAN Volume Controller
system to your target SAN Volume Controller system and the accompanying
acknowledgement must have a total time of 80 milliseconds, or 40 milliseconds each way (for
V4.1.1.x and up).
Important: For 4.1.0.x and earlier, this limit was 68 milliseconds, or 34 milliseconds one
way, for Fibre Channel (FC) extenders; for SAN routers, it was 20 milliseconds round trip,
or 10 milliseconds one way. Make sure to use the correct values for the correct versions.
The primary component of your round-trip time is the physical distance between sites. For
every 1,000 kilometers (621.36 miles), you will observe a 5 millisecond delay. This delay does
not include the time that is added by equipment in the path. Every device will add a varying
amount of time, depending on the device, but a good rule is 25 microseconds for pure
hardware devices. For software-based functions (such as compression that is implemented in
software), the added delay tends to be much higher (usually in the millisecond plus range.)
We look at an example of a physical delay.
Company A has a production site that is 1,900 kilometers (1,180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A employs a SAN FC router at each site to
provide Fibre Channel over IP (FCIP) to encapsulate the FC traffic between sites. Now, there
are seven devices, and 1900 kilometers (1,180.6 miles) of distance delay. All the devices are
adding 200 microseconds of delay each way. The distance adds 9.5 milliseconds each way,
for a total of 19 milliseconds. Combined with the device latency, that is 19.4 milliseconds of
physical latency minimum, which is under the 80 millisecond limit of Global Mirror, until you
realize that this number is the best case number. The link quality and bandwidth play a large
role. Your network provider will likely guarantee a latency maximum on your network link; be
sure to stay as far beneath the Global Mirror round-trip-time (RTT) limit as possible. You can
easily double or triple the expected physical latency with a lower quality or lower bandwidth
network link. Then, you are within the range of exceeding the limit if high I/O occurs that
exceeds the existing bandwidth capacity.
When you get a 1920 event, always check the latency first. Remember that the FCIP routing
layer can introduce latency if it is not properly configured. If your network provider reports a
much lower latency, you might have a problem at your FCIP routing layer. Most FCIP routing
devices have built-in tools to allow you to check the RTT. When checking latency, remember
that TCP/IP routing devices (including FCIP routers) report RTT or round-trip-time using
standard 64-byte ping packets.
In Figure 8-40 on page 468, you can see why the effective transit time must only be measured
using packets that are large enough to hold an FC frame, or 2,148 bytes (2,112 bytes of
payload and 36 bytes of header). Allow overhead to be safe, because various switch vendors
have optional features that might increase this size. After you have verified your latency using
the proper packet size, proceed with normal hardware troubleshooting.
Important: For 4.1.0.x and earlier, this limit was 68 milliseconds, or 34 milliseconds one
way, for Fibre Channel (FC) extenders; for SAN routers, it was 20 milliseconds round trip, or
10 milliseconds one way. Make sure to use the correct values for the correct versions.
Before we proceed, we look at the second largest component of your RTT, which is
serialization delay. Serialization delay is the amount of time that is required to move a packet
of data of a specific size across a network link of a given bandwidth. The time required to
move a specific amount of data decreases as the data transmission rate increases. Look
again at Figure 8-40 on page 468 and notice the orders of magnitude of difference between
the link bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is
insufficient. Never use a TCP/IP ping to measure RTT for FCIP traffic.
Figure 8-40 The effect of packet size (in bytes) versus the link size
Figure 8-40 compares the amount of time in microseconds that is required to transmit a
packet across network links of varying bandwidth capacity. Three packet sizes are used:
64 bytes: The size of the common ping packet
1,500 bytes: The size of the standard TCP/IP packet
2,148 bytes: The size of an FC frame
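To make the serialization-delay comparison concrete, the following Python sketch computes the time on the wire for these three packet sizes; the link speeds used here are our own illustrative assumptions, not values taken from the figure:
# Serialization delay: time to clock a packet of a given size onto a link.
PACKET_SIZES_BYTES = {"ping packet": 64, "TCP/IP packet": 1500, "FC frame": 2148}
LINK_SPEEDS_BPS = {                      # assumed example link speeds
    "T1 (1.544 Mbps)": 1_544_000,
    "100 Mbps": 100_000_000,
    "1 Gbps": 1_000_000_000,
}

for link_name, bps in LINK_SPEEDS_BPS.items():
    for packet_name, size_bytes in PACKET_SIZES_BYTES.items():
        microseconds = size_bytes * 8 / bps * 1_000_000
        print(f"{link_name:>16}  {packet_name:>13} ({size_bytes:>4} bytes): {microseconds:9.1f} us")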
Finally, remember that your path maximum transmission unit (MTU) affects the delay that is
incurred to get a packet from one location to another location. An MTU that is too small can
cause fragmentation, and an MTU that is too large can cause excessive retransmission when a
packet is lost.
8.12.2 1720 error
The 1720 error (event ID 050020) is the other problem that remote copy might encounter. The
amount of bandwidth that is needed for system-to-system communications varies based on
the number of nodes. It is important that it is not zero. When a partner on either side stops
communication, you will see a 1720 appear in your error log. According to the product
documentation, there are no likely field-replaceable unit breakages or other causes.
The source of this error is most often a fabric problem or a problem in the network path
between your partners. When you receive this error, if your fabric has more than 64 host bus
adapter (HBA) ports zoned, check whether your fabric configuration zones more than one HBA
port for each node per I/O Group to any host. The recommended zoning configuration for
fabrics is one port for each node per I/O Group that is associated with the host. For fabrics
with 64 or more host ports, this recommendation becomes a rule. You must follow this zoning
rule, or the configuration is technically unsupported.
Improper zoning will lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer in IBM Tivoli Storage Productivity
Center and comparing it against your sample interval will reveal potential SAN congestion. If a
zero buffer credit timer is above 2% of the total time of the sample interval, it is likely to cause
problems.
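As a simple illustration of that 2% rule of thumb, the following Python sketch (with made-up sample figures) computes the ratio of zero buffer credit time to the sample interval:
# Hypothetical figures: 45 seconds of zero buffer credits in a 30-minute sample.
zero_buffer_credit_seconds = 45.0
sample_interval_seconds = 30 * 60

percentage = 100.0 * zero_buffer_credit_seconds / sample_interval_seconds
print(f"zero buffer credit time = {percentage:.1f}% of the sample interval")
if percentage > 2.0:
    print("Above the 2% rule of thumb: investigate possible SAN congestion.")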
Next, always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences indicate a larger problem.
If you receive multiple 1720 errors, recheck your network connection and then check the
system partnership information to verify its status and settings. Then, proceed to perform
diagnostics for every piece of equipment in the path between the two systems. It often helps to
have a diagram showing the path of your replication from both logical and physical
configuration viewpoints.
If your investigations fail to resolve your remote copy problems, contact your IBM support
representative for more complete analysis.
Chapter 9. SAN Volume Controller operations using the command-line interface
In this chapter, we describe operational management. We use the command-line interface
(CLI) to demonstrate both normal and advanced operations.
You can use either the CLI or GUI to manage IBM System Storage SAN Volume Controller
operations. We use the CLI in this chapter. You can script these operations, and we think it is
easier to create the documentation for the scripts using the CLI.
This chapter assumes a fully functional SAN Volume Controller environment.
9.1 Normal operations using CLI
In the following topics, we describe the commands that best represent normal operational
commands.
9.1.1 Command syntax and online help
Two major command sets are available:
The svcinfo command set allows you to query the various components within the SAN
Volume Controller environment.
The svctask command set allows you to make changes to the various components within
the SAN Volume Controller.
When the command syntax is shown, you will see certain parameters in square brackets, for
example [parameter]. These brackets indicate that the parameter is optional in most if not all
instances. Any information that is not in square brackets is required information. You can view
the syntax of a command by entering one of the following commands:
svcinfo -? Shows a complete list of informational commands
svctask -? Shows a complete list of task commands
svcinfo commandname -? Shows the syntax of informational commands
svctask commandname -? Shows the syntax of task commands
svcinfo commandname -filtervalue? Shows the filters that you can use to reduce the output
of the informational commands
If you look at the syntax of a command by typing svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
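For instance, to review the syntax of the lsmdisk information command, the filters that it accepts, and the syntax of the mkvdisk task command, you can enter commands of the following form (the command names are only examples, and the output is omitted here):
IBM_2145:ITSO_SVC1:admin>svcinfo lsmdisk -?
IBM_2145:ITSO_SVC1:admin>svcinfo lsmdisk -filtervalue?
IBM_2145:ITSO_SVC1:admin>svctask mkvdisk -?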
Using shortcuts
You can use the shortcuts command to display a list of display or execution commands. This
command produces an alphabetical list of the actions that are supported. The command
parameter must be svcinfo for display commands or svctask for execution commands. The
model parameter allows for different shortcuts on different platforms: 2145 or 2076. The
syntax is:
<command> shortcuts <model>
See Example 9-1 on page 473 (lines have been removed from the command output for
brevity).
Command prefix changes: The svctask and svcinfo command prefixes are no longer
needed when issuing a command. If you have existing scripts that use those prefixes, they
will continue to function. You do not need to change your scripts.
Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask
commandname -h command.
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, Backspace, and Delete keys to
edit commands before you resubmit them.
Example 9-1 shortcuts command
IBM_2145:ITSO_SVC1:admin>svctask shortcuts 2145
addcontrolenclosure
addhostiogrp
addhostport
addmdisk
addnode
addvdiskcopy
applydrivesoftware
applysoftware
cancellivedump
cfgportip
chhost
chiogrp
chldap
chldapserver
chlicense
chmdisk
chmdiskgrp
chnode
chnodehw
chpartnership
chquorum
chrcconsistgrp
mkemailserver
mkemailuser
mkfcconsistgrp
mkfcmap
mkhost
mkldapserver
mkmdiskgrp
mkpartnership
mkrcconsistgrp
mkrcrelationship
mksnmpserver
mksyslogserver
mkuser
mkusergrp
mkvdisk
mkvdiskhostmap
rmmdisk
rmmdiskgrp
rmnode
rmpartnership
rmportip
rmrcconsistgrp
triggerlivedump
writesernum
Using reverse-i-search
If you work on your SAN Volume Controller with the same PuTTy session for many hours and
enter many commands, scrolling back to find your previous or similar commands can be a
time-intensive task. In this case, using the reverse-i-search command can help you quickly
and easily find any command that you have already issued in the history of your commands
by using the Ctrl+R keys. Ctrl+R will allow you to interactively search through the command
history as you type commands. Pressing Ctrl+R at an empty command prompt will give you a
prompt, as shown in Example 9-2.
Example 9-2 Using reverse-i-search
IBM_2145:ITSO_SVC1:admin>lsiogrp
id name node_count vdisk_count host_count
0 io_grp0 2 10 8
1 io_grp1 2 10 8
2 io_grp2 0 0 0
3 io_grp3 0 0 0
4 recovery_io_grp 0 0 0
(reverse-i-search)`i': lsiogrp
As shown in Example 9-2, we had executed an lsiogrp command. By pressing Ctrl+R and
typing i, the command that we needed was recalled from history.
9.2 New commands
The following commands are introduced in version 7.2.0.0. They are mainly available for
stretched cluster configuration and IP replication. Example 9-3 provides the list of these
commands.
Example 9-3 New commands and topics for enhanced stretched clustered system
Multiple drive firmware download:
lsdriveupgradeprogress
Enhanced stretched cluster:
chsite
lssite
override quorum
IP replication:
mkfcpartnership
mkippartnership
TMS configuration:
lsportib
lsportibcandidate
SAN Volume Controller 7.2.0.0 also changes and adds some attributes and variables of
already existing commands. See the command reference or help for details.
9.3 Working with managed disks and disk controller systems
This section details the various configuration and administrative tasks that you can perform
on the managed disks (MDisks) within the SAN Volume Controller environment and the tasks
that you can perform at a disk controller level.
9.3.1 Viewing disk controller details
Use the lscontroller command to display summary information about all available back-end
storage systems.
To display more detailed information about a specific controller, run the command again and
append the controller name or ID, for example, controller ID 2, as shown in Example 9-4.
Example 9-4 lscontroller command
IBM_2145:ITSO_SVC1:admin>lscontroller 2
id 2
controller_name DS3500
WWNN 20080080E51B09E8
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id LSI
product_id_low INF-01-0
product_id_high 0
product_revision 0770
ctrl_s/n b Ns M
allow_quorum yes
WWPN 20680080E51B09E8
path_count 12
max_path_count 24
WWPN 20690080E51B09E8
path_count 8
max_path_count 20
WWPN 20580080E51B09E8
path_count 12
max_path_count 12
WWPN 20590080E51B09E8
path_count 8
max_path_count 20
IBM_2145:ITSO_SVC1:admin>
9.3.2 Renaming a controller
Use the chcontroller command to change the name of a storage controller. To verify the
change, run the lscontroller command. Example 9-5 shows both of these commands.
Example 9-5 chcontroller command
IBM_2145:ITSO_SVC1:admin>chcontroller -name ITSO-DS3500 DS3500
IBM_2145:ITSO_SVC1:admin>lscontroller
id controller_name ctrl_s/n vendor_id product_id_low
product_id_high
0 ITSO-DS5000 LSI INF-01-0
0
2 ITSO-DS3500 b Ns M LSI INF-01-0
0
IBM_2145:ITSO_SVC1:admin>
This command renames the controller named DS3500 to ITSO-DS3500.
9.3.3 Discovery status
Use the lsdiscoverystatus command, as shown in Example 9-6, to determine if a discovery
operation is in progress. The output of this command is a status of active or inactive.
Example 9-6 lsdiscoverystatus command
IBM_2145:ITSO_SVC1:admin>lsdiscoverystatus
id scope IO_group_id IO_group_name status
0 fc_fabric inactive
This command displays the state of all discoveries in the clustered system. During discovery,
the system updates the drive and MDisk records. You must wait until the discovery has
finished and is inactive before you attempt to use the system. This command displays one of
the following results:
active: There is a discovery operation in progress at the time that the command is issued.
inactive: There are no discovery operations in progress at the time that the command is
issued.
9.3.4 Discovering MDisks
In general, the clustered system detects the MDisks automatically when they appear in the
network. However, certain Fibre Channel (FC) controllers do not send the required Small
Computer System Interface (SCSI) primitives that are necessary to automatically discover the
new MDisks.
If new storage has been attached and the clustered system has not detected it, it might be
necessary to run this command before the system can detect the new MDisks.
Use the detectmdisk command to scan for newly added MDisks (Example 9-7).
Example 9-7 detectmdisk
IBM_2145:ITSO_SVC1:admin>detectmdisk
To check whether any newly added MDisks were successfully detected, run the lsmdisk
command and look for new unmanaged MDisks.
If the disks do not appear, check that the disk is appropriately assigned to the SAN Volume
Controller in the disk subsystem, and that the zones are set up properly.
Choosing a new name: The chcontroller command specifies the new name first. You
can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 63 characters in length. However, the new name cannot
start with a number, dash, or the word controller (because this prefix is reserved for SAN
Volume Controller assignment only).
Discovery process: If you have assigned a large number of logical unit numbers (LUNs)
to your SAN Volume Controller, the discovery process can take time. Check several times
by using the lsmdisk command to see if all the expected MDisks are present.
When all the disks allocated to the SAN Volume Controller are seen from the SAN Volume
Controller system, the following procedure is a useful way to verify which MDisks are
unmanaged and ready to be added to the storage pool.
Perform the following steps to display MDisks:
1. Enter the lsmdiskcandidate command, as shown in Example 9-8. This command displays
all detected MDisks that are not currently part of a storage pool.
Example 9-8 lsmdiskcandidate command
IBM_2145:ITSO_SVC1:admin>lsmdiskcandidate
id
0
1
2
.
.
Alternatively, you can list all MDisks (managed or unmanaged) by issuing the lsmdisk
command, as shown in Example 9-9.
Example 9-9 lsmdisk command
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
0 mdisk0 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000000 ITSO-DS3500
60080e50001b0b62000007b04e731e4d00000000000000000000000000000000 generic_hdd
1 mdisk1 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000001 ITSO-DS3500
60080e50001b0b62000007b24e731e6000000000000000000000000000000000 generic_hdd
2 mdisk2 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000002 ITSO-DS3500
60080e50001b09e8000006f44e731bdc00000000000000000000000000000000 generic_hdd
3 mdisk3 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000003 ITSO-DS3500
60080e50001b0b62000007b44e731e8400000000000000000000000000000000 generic_hdd
4 mdisk4 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000004 ITSO-DS3500
60080e50001b09e8000006f64e731bff00000000000000000000000000000000 generic_hdd
5 mdisk5 online managed 1 STGPool_DS3500-2 128.0GB 0000000000000005 ITSO-DS3500
60080e50001b0b62000007b64e731ea900000000000000000000000000000000 generic_hdd
6 mdisk6 online unmanaged 10.0GB 0000000000000006 ITSO-DS3500
60080e50001b09e80000085f4e7d60dd00000000000000000000000000000000 generic_hdd
From this output, you can see additional information, such as the current status, about
each MDisk. For the purpose of our current task, we are only interested in the unmanaged
disks, because they are candidates for a storage pool.
2. If not all the MDisks that you expected are visible, rescan the available Fibre Channel (FC)
network by entering the detectmdisk command, as shown in Example 9-10.
Example 9-10 detectmdisk
IBM_2145:ITSO_SVC1:admin>detectmdisk
3. If you run the lsmdiskcandidate command again and your MDisk or MDisks are still not
visible, check that the LUNs from your subsystem have been properly assigned to the
SAN Volume Controller and that appropriate zoning is in place (for example, the SAN
Volume Controller can see the disk subsystem). See Chapter 3, Planning and
configuration on page 71 for details about setting up your storage area network (SAN)
fabric.
Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.
9.3.5 Viewing MDisk information
When viewing information about the MDisks (managed or unmanaged), we can use the
lsmdisk command to display overall summary information about all available managed disks.
To display more detailed information about a specific MDisk, run the command again and
append the MDisk name or ID (for example, mdisk0).
The overview command is lsmdisk -delim : (specifying a delimiter character), as shown in Example 9-11.
The summary for an individual MDisk is lsmdisk (name/ID of the MDisk from which you want
the information), as shown in Example 9-12 on page 478.
Example 9-11 lsmdisk command
IBM_2145:ITSO_SVC1:admin>lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie
r
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001
b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001
b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001
b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001
b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001
b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001
b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7
d60dd00000000000000000000000000000000:generic_hdd
Example 9-12 shows a summary for a single MDisk.
Example 9-12 Usage of the command lsmdisk (ID)
IBM_2145:ITSO_SVC1:admin>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd
9.3.6 Renaming an MDisk
Use the chmdisk command to change the name of an MDisk. When using the command, be
aware that the new name comes first and then the ID/name of the MDisk being renamed. Use
this format: chmdisk -name (new name) (current ID/name). Use the lsmdisk command to
verify the change. Example 9-13 shows the chmdisk command.
Example 9-13 chmdisk command
IBM_2145:ITSO_SVC1:admin>chmdisk -name mdisk_0 mdisk0
This command renamed the MDisk named mdisk0 to mdisk_0.
9.3.7 Including an MDisk
If a significant number of errors occur on an MDisk, the SAN Volume Controller automatically
excludes it. These errors can result from a hardware problem, a SAN problem, or poorly
planned maintenance. If it is a hardware fault, you can receive a Simple Network
Management Protocol (SNMP) alert about the state of the disk subsystem (before the disk
was excluded), and you can undertake preventive maintenance. If not, the hosts that were
using virtual disks (VDisks), which used the excluded MDisk, now have I/O errors.
By running the lsmdisk command, you can see that mdisk0 is excluded in Example 9-14.
Example 9-14 lsmdisk command: Excluded MDisk
IBM_2145:ITSO_SVC1:admin>lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie
r
0:mdisk0:excluded:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e500
01b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001
b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001
b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001
b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001
b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001
b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7
d60dd00000000000000000000000000000000:generic_hdd
The chmdisk command: The chmdisk command specifies the new name first. You can
use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 63 characters in length. However, the new name cannot
start with a number, dash, or the word mdisk (because this prefix is reserved for SAN
Volume Controller assignment only).
After taking the necessary corrective action to repair the MDisk (for example, replace the
failed disk, repair the SAN zones, and so on), we need to include the MDisk again. We issue
the includemdisk command (Example 9-15), because the SAN Volume Controller system
does not include the MDisk automatically.
Example 9-15 includemdisk
IBM_2145:ITSO_SVC1:admin>includemdisk mdisk0
Running the lsmdisk command again shows that mdisk0 is online again; see Example 9-16.
Example 9-16 lsmdisk command: Verifying that MDisk is included
IBM_2145:ITSO_SVC1:admin>lsmdisk -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie
r
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001
b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001
b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001
b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001
b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001
b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001
b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:ITSO-DS3500:60080e50001b09e80000085f4e7
d60dd00000000000000000000000000000000:generic_hdd
9.3.8 Adding MDisks to a storage pool
If you created an empty storage pool, or if you simply want to assign additional MDisks to an
already configured storage pool, you can use the addmdisk command to populate the storage pool
(Example 9-17).
Example 9-17 addmdisk command
IBM_2145:ITSO_SVC1:admin>addmdisk -mdisk mdisk6 STGPool_Multi_Tier
You can only add unmanaged MDisks to a storage pool. This command adds the MDisk
named mdisk6 to the storage pool named STGPool_Multi_Tier.
Important: Do not add this MDisk to a storage pool if you want to create an image mode
volume from the MDisk that you are adding. As soon as you add an MDisk to a storage
pool, it becomes managed, and extent mapping is not necessarily one-to-one anymore.
9.3.9 Showing MDisks in a storage pool
Use the lsmdisk -filtervalue command, as shown in Example 9-18, to see which MDisks
are part of a specific storage pool. This command shows all the MDisks that belong to the
storage pool named STGPool_DS3500-1.
Example 9-18 lsmdisk -filtervalue: MDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue mdisk_grp_name=STGPool_DS3500-1
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
0 mdisk0 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000000 DS3500
60080e50001b0b62000007b04e731e4d00000000000000000000000000000000 generic_hdd
1 mdisk1 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000001 DS3500
60080e50001b0b62000007b24e731e6000000000000000000000000000000000 generic_hdd
2 mdisk2 online managed 0 STGPool_DS3500-1 128.0GB 0000000000000002 DS3500
60080e50001b09e8000006f44e731bdc00000000000000000000000000000000 generic_hdd
With this command, you can also use a wildcard to list all the MDisks present in the storage
pools whose names match STGPool_*, where the asterisk (*) is the wildcard.
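For example, a command of the following form (the quoting and the pool name pattern are illustrative) lists the MDisks in every storage pool whose name begins with STGPool_:
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue "mdisk_grp_name=STGPool_*"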
9.3.10 Working with a storage pool
Before we can create any volumes on the SAN Volume Controller clustered system, we need
to virtualize the allocated storage that is assigned to the SAN Volume Controller. After the
back-end volumes are presented to the SAN Volume Controller as managed disks, we cannot
start using them until they are members of a storage pool. Therefore, one of our first operations is
to create a storage pool where we can place our MDisks.
This section describes the operations using MDisks and the storage pool. It explains the tasks
that we can perform at the storage pool level.
9.3.11 Creating a storage pool
After a successful login to the CLI interface of the SAN Volume Controller, we create the
storage pool.
Using the mkmdiskgrp command, create a storage pool, as shown in Example 9-19.
Example 9-19 mkmdiskgrp
IBM_2145:ITSO_SVC1:admin>mkmdiskgrp -name STGPool_Multi_Tier -ext 256
MDisk Group, id [3], successfully created
This command creates a storage pool called STGPool_Multi_Tier. The extent size that is
used within this group is 256 MB. We have not added any MDisks to the storage pool yet, so
it is an empty storage pool.
You can add unmanaged MDisks and create the storage pool in the same command. Use the
command mkmdiskgrp with the -mdisk parameter and enter the IDs or names of the MDisks,
which adds the MDisks immediately after the storage pool is created.
Prior to the creation of the storage pool, enter the lsmdisk command, as shown in
Example 9-20. This command lists all of the available MDisks that are seen by the SAN
Volume Controller system.
Example 9-20 Listing available MDisks
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie
r
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001
b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001
b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001
b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001
b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001
b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001
b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:unmanaged:::10.0GB:0000000000000006:DS3500:60080e50001b09e80000085f4e7d60dd
00000000000000000000000000000000:generic_hdd
8:mdisk7:online:unmanaged:::10.0GB:00
00000000000008:DS3500:60080e50001b09e8000008614e7d8a2c00000000000000000000000000000000:gene
ric_hdd
Using the same command as before (mkmdiskgrp) and knowing the IDs of the unmanaged
MDisks, we can create a storage pool and add multiple MDisks to it at the same time, as
shown in Example 9-21.
Example 9-21 Creating a storage pool and adding available MDisks
IBM_2145:ITSO_SVC1:admin>mkmdiskgrp -name STGPool_DS5000 -ext 256 -mdisk 6:8
MDisk Group, id [2], successfully created
This command creates a storage pool called STGPool_DS5000. The extent size that is used
within this group is 256 MB, and two MDisks (6 and 8) are added to the storage pool.
By running the lsmdisk command, you now see these MDisks as managed and as part of a
storage pool, as shown in Example 9-22.
Example 9-22 lsmdisk command
IBM_2145:ITSO_SVC1:admin>lsmdisk -filtervalue controller_name=ITSO-DS3500 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID:tie
r
0:mdisk0:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000000:ITSO-DS3500:60080e50001
b0b62000007b04e731e4d00000000000000000000000000000000:generic_hdd
1:mdisk1:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000001:ITSO-DS3500:60080e50001
b0b62000007b24e731e6000000000000000000000000000000000:generic_hdd
2:mdisk2:online:managed:0:STGPool_DS3500-1:128.0GB:0000000000000002:ITSO-DS3500:60080e50001
b09e8000006f44e731bdc00000000000000000000000000000000:generic_hdd
3:mdisk3:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000003:ITSO-DS3500:60080e50001
b0b62000007b44e731e8400000000000000000000000000000000:generic_hdd
4:mdisk4:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000004:ITSO-DS3500:60080e50001
b09e8000006f64e731bff00000000000000000000000000000000:generic_hdd
5:mdisk5:online:managed:1:STGPool_DS3500-2:128.0GB:0000000000000005:ITSO-DS3500:60080e50001
b0b62000007b64e731ea900000000000000000000000000000000:generic_hdd
6:mdisk6:online:managed:2:STGPool_DS3500-2:10.0GB:0000000000000006:ITSO-DS3500:60080e50001b
09e80000085f4e7d60dd00000000000000000000000000000000:generic_hdd
7:mdisk7:online:managed:3:STGPool_Multi_Tier:10.0GB:0000000000000007:ITSO-DS3500:60080e5000
1b0b620000091f4e7d8c9400000000000000000000000000000000:generic_hdd
8:mdisk8:online:managed:2:STGPool_DS3500-2:10.0GB:0000000000000008:ITSO-DS3500:60080e50001b
09e8000008614e7d8a2c00000000000000000000000000000000:generic_hdd
9:mdisk9:online:managed:3:STGPool_Multi_Tier:10.0GB:0000000000000009:ITSO-DS3500:60080e5000
1b0b62000009214e7d928000000000000000000000000000000000:generic_hdd
Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a
-name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the
SAN Volume Controller internally. If you do not enter the -mdisk parameter, an empty
storage pool is created.
If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the
underscore. The name can be between one and 63 characters in length, but it cannot start
with a number or the word MDiskgrp (because this prefix is reserved for SAN Volume
Controller assignment only).
At this point, you have completed the tasks that are required to create a new storage pool.
9.3.12 Viewing storage pool information
Use the lsmdiskgrp command, as shown in Example 9-23 on page 483, to display
information about the storage pools that are defined in the SAN Volume Controller.
Example 9-23 lsmdiskgrp command
IBM_2145:ITSO_SVC1:admin>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_capacity:
used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_status
0:STGPool_DS3500-1:online:3:11:382.50GB:256:62.50GB:320.00GB:320.00GB:320.00GB:83:0:auto:in
active
1:STGPool_DS3500-2:online:3:11:384.00GB:256:262.00GB:122.00GB:122.00GB:122.00GB:31:0:auto:i
nactive
2:STGPool_DS5000-1:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive
3:STGPool_Multi_Tier:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:inactive
9.3.13 Renaming a storage pool
Use the chmdiskgrp command to change the name of a storage pool. To verify the change,
run the lsmdiskgrp command. Example 9-24 shows both of these commands.
Example 9-24 chmdiskgrp command
IBM_2145:ITSO_SVC1:admin>chmdiskgrp -name STGPool_DS3500-2_new 1
IBM_2145:ITSO_SVC1:admin>lsmdiskgrp -delim :
id:name:status:mdisk_count:vdisk_count:capacity:extent_size:free_capacity:virtual_
capacity:used_capacity:real_capacity:overallocation:warning:easy_tier:easy_tier_st
atus
0:STGPool_DS3500-1:online:3:11:382.50GB:256:62.50GB:320.00GB:320.00GB:320.00GB:83:
0:auto:inactive
1:STGPool_DS3500-2_new:online:3:11:384.00GB:256:262.00GB:122.00GB:122.00GB:122.00G
B:31:0:auto:inactive
2:STGPool_DS5000-1:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:in
active
3:STGPool_Multi_Tier:online:2:0:20.00GB:256:20.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:
inactive
This command renamed the storage pool STGPool_DS3500-2 to STGPool_DS3500-2_new as
shown.
9.3.14 Deleting a storage pool
Use the rmmdiskgrp command to remove a storage pool from the SAN Volume Controller
system configuration (Example 9-25).
Example 9-25 rmmdiskgrp
IBM_2145:ITSO_SVC1:admin>rmmdiskgrp STGPool_DS3500-2_new
This command removes storage pool STGPool_DS3500-2_new from the SAN Volume
Controller system configuration.
9.3.15 Removing MDisks from a storage pool
Use the rmmdisk command to remove an MDisk from a storage pool (Example 9-26).
Example 9-26 rmmdisk command
IBM_2145:ITSO_SVC1:admin>rmmdisk -mdisk 8 -force 2
This command removes the MDisk with ID 8 from the storage pool with ID 2. The -force flag
is set, because there are volumes using this storage pool.
9.4 Working with hosts
In this section, we explain the tasks that you can perform at a host level. When we create a
host in our SAN Volume Controller system, we need to define the connection method.
Changing the storage pool: The chmdiskgrp command specifies the new name first. You
can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 63 characters in length. However, the new name cannot
start with a number, dash, or the word mdiskgrp (because this prefix is reserved for SAN
Volume Controller assignment only).
Removing a storage pool from the SAN Volume Controller system configuration: If
there are MDisks within the storage pool, you must use the -force flag to remove the
storage pool from the SAN Volume Controller system configuration, for example:
rmmdiskgrp STGPool_DS3500-2_new -force
Ensure that you definitely want to use this flag, because it destroys all mapping information
and data held on the volumes, which cannot be recovered.
Sufficient space: The removal only takes place if there is sufficient space to migrate the
volumes data to other extents on other MDisks that remain in the storage pool. After you
remove the MDisk from the storage pool, it takes time to change the mode from managed
to unmanaged depending on the size of the MDisk that you are removing.
Starting with SAN Volume Controller 5.1, we can now define our host as iSCSI-attached or
FC-attached.
9.4.1 Creating a Fibre Channel-attached host
In the following sections, we illustrate how to create an FC-attached host under various
circumstances.
Host is powered on, connected, and zoned to the SAN Volume Controller
When you create your host on the SAN Volume Controller, it is good practice to check
whether the host bus adapter (HBA) worldwide port names (WWPNs) of the server are visible
to the SAN Volume Controller. By checking, you ensure that zoning is done and that the
correct WWPN will be used. Issue the lshbaportcandidate command, as shown in
Example 9-27.
Example 9-27 lshbaportcandidate command
IBM_2145:ITSO_SVC1:admin>lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
After you know that the WWPNs that are displayed match your host (use host or SAN switch
utilities to verify), use the mkhost command to create your host.
The command to create a host is shown in Example 9-28.
Example 9-28 mkhost
IBM_2145:ITSO_SVC1:admin>mkhost -name Almaden -hbawwpn
210000E08B89C1CD:210000E08B054CAA
Host, id [2], successfully created
This command creates a host called Almaden using WWPN 21:00:00:E0:8B:89:C1:CD and
21:00:00:E0:8B:05:4C:AA.
Name: If you do not provide the -name parameter, the SAN Volume Controller
automatically generates the name hostx (where x is the ID sequence number that is
assigned by the SAN Volume Controller internally).
You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the
underscore (_). The name can be between one and 63 characters in length. However, the
name cannot start with a number, dash, or the word host (because this prefix is reserved
for SAN Volume Controller assignment only).
Ports: You can define from one up to eight ports per host, or you can use the addport
command, which we show in 9.4.5, Adding ports to a defined host on page 489.
Host is not powered on or not connected to the SAN
If you want to create a host on the SAN Volume Controller without seeing your target WWPN
by using the lshbaportcandidate command, add the -force flag to your mkhost command, as
shown in Example 9-29. This option is more open to human error than if you choose the
WWPN from a list, but it is typically used when many host definitions are created at the same
time, such as through a script.
In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create
the host, regardless of whether they are connected, as shown in Example 9-29.
Example 9-29 mkhost -force
IBM_2145:ITSO_SVC1:admin>mkhost -name Almaden -hbawwpn
210000E08B89C1CD:210000E08B054CAA -force
Host, id [2], successfully created
This command forces the creation of a host called Almaden using WWPN
210000E08B89C1CD:210000E08B054CAA.
9.4.2 Creating an iSCSI-attached host
Now, we can create host definitions to a host that is not connected to the SAN but that has
LAN access to our SAN Volume Controller nodes. Before we create the host definition, we
configure our SAN Volume Controller systems to use the new iSCSI connection method. We
describe additional information about configuring your nodes to use iSCSI in 9.9.3, iSCSI
configuration on page 524.
The iSCSI functionality allows the host to access volumes through the SAN Volume Controller
without being attached to the SAN. Back-end storage and node-to-node communication still
need the FC network to communicate, but the host does not necessarily need to be
connected to the SAN.
When we create a host that is going to use iSCSI as a communication method, iSCSI initiator
software must be installed on the host to initiate the communication between the SAN Volume
Controller and the host. This installation creates an iSCSI qualified name (IQN) identifier that
is needed before we create our host.
Before we start, we check our server's IQN address. We are running Windows Server 2008.
We select Start → Programs → Administrative Tools, and we select iSCSI Initiator. In our
example, our IQN, as shown in Figure 9-1, is:
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
WWPNs: WWPNs are not case sensitive in the CLI.
Figure 9-1 IQN from the iSCSI initiator tool
We create the host by issuing the mkhost command, as shown in Example 9-30. When the
command completes successfully, we display our newly created host.
Example 9-30 mkhost command
IBM_2145:ITSO_SVC1:admin>mkhost -name Baldur -iogrp 0 -iscsiname
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created
IBM_2145:ITSO_SVC1:admin>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline
It is important to know that when the host is initially configured, the default authentication
method is set to no authentication and no Challenge Handshake Authentication Protocol
(CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SAN
Volume Controller system, use the chhost command with the chapsecret parameter.
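For example, a command similar to the following sets a CHAP secret for the host that we defined; the secret value shown is only a placeholder:
IBM_2145:ITSO_SVC1:admin>chhost -chapsecret passw0rd Baldur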
We have now created our host definition. We map a volume to our new iSCSI server, as
shown in Example 9-31. We have already created the volume, as shown in 9.6.1, Creating a
volume on page 492. In our scenario, our volume's ID is 21 and the host name is Baldur. We
map it to our iSCSI host.
Example 9-31 Mapping a volume to the iSCSI host
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Baldur 21
Virtual Disk to Host map, id [0], successfully created
After the volume has been mapped to the host, we display the host information again, as
shown in Example 9-32.
Example 9-32 lshost
IBM_2145:ITSO_SVC1:admin>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 1
state online
If you need to display a CHAP secret for an already defined server, use the lsiscsiauth
command. The lsiscsiauth command lists the Challenge Handshake Authentication
Protocol (CHAP) secret configured for authenticating an entity to the SAN Volume Controller
system.
9.4.3 Modifying a host
Use the chhost command to change the name of a host. To verify the change, run the lshost
command. Example 9-33 shows both of these commands.
Example 9-33 chhost command
IBM_2145:ITSO_SVC1:admin>chhost -name Angola Guinea
IBM_2145:ITSO_SVC1:admin>lshost
id name port_count iogrp_count
0 Palau 2 4
1 Nile 2 1
2 Kanaga 2 1
3 Siam 2 2
4 Angola 1 4
This command renamed the host from Guinea to Angola.
Tip: FC hosts and iSCSI hosts are handled in the same way operationally after they have
been created.
Host name: The chhost command specifies the new name first. You can use letters A to Z
and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be
between one and 63 characters in length. However, it cannot start with a number, dash, or
the word host (because this prefix is reserved for SAN Volume Controller assignment
only).
9.4.4 Deleting a host
Use the rmhost command to delete a host from the SAN Volume Controller configuration. If
your host is still mapped to volumes and you use the -force flag, the host and all the
mappings with it are deleted. The volumes are not deleted, only the mappings to them.
The command that is shown in Example 9-34 deletes the host called Angola from the SAN
Volume Controller configuration.
Example 9-34 rmhost Angola
IBM_2145:ITSO_SVC1:admin>rmhost Angola
9.4.5 Adding ports to a defined host
If you add an HBA or a network interface controller (NIC) to a server that is already defined
within the SAN Volume Controller, you can use the addhostport command to add the new
port definitions to your host configuration.
If your host is currently connected through SAN with FC and if the WWPN is already zoned to
the SAN Volume Controller system, issue the lshbaportcandidate command, as shown in
Example 9-35, to compare with the information that you have from the server administrator.
Example 9-35 lshbaportcandidate
IBM_2145:ITSO_SVC1:admin>lshbaportcandidate
id
210000E08B054CAA
If the WWPN matches your information (use host or SAN switch utilities to verify), use the
addhostport command to add the port to the host.
Example 9-36 shows the command to add a host port.
Example 9-36 addhostport
IBM_2145:ITSO_SVC1:admin>addhostport -hbawwpn 210000E08B054CAA Palau
This command adds the WWPN of 210000E08B054CAA to the Palau host.
Hosts that require the -type parameter: If you use Hewlett-Packard UNIX (HP-UX), you
use the -type option. See IBM System Storage Open Software Family SAN Volume
Controller: Host Attachment Guide, SC26-7563, for more information about the hosts that
require the -type parameter.
Deleting a host: If there are any volumes that are assigned to the host, you must use the
-force flag, for example, rmhost -force Angola.
Adding multiple ports: You can add multiple ports all at one time by using the separator
or colon (:) between WWPNs, for example:
addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau
If the new HBA is not connected or zoned, the lshbaportcandidate command does not
display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs
and use the -force flag to add the port, as shown in Example 9-37.
Example 9-37 addhostport
IBM_2145:ITSO_SVC1:admin>addhostport -hbawwpn 210000E08B054CAA -force Palau
This command forces the addition of the WWPN named 210000E08B054CAA to the host called
Palau.
If you run the lshost command again, you see your host with an updated port count of 2 in
Example 9-38.
Example 9-38 lshost command: Port count
IBM_2145:ITSO_SVC1:admin>lshost
id name port_count iogrp_count
0 Palau 2 4
1 ITSO_W2008 1 4
2 Thor 3 1
3 Frigg 1 1
4 Baldur 1 1
If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN
ID before you add the port. Unlike FC-attached hosts, you cannot check for available
candidates with iSCSI.
After you have acquired the additional iSCSI IQN, use the addhostport command, as shown
in Example 9-39.
Example 9-39 Adding an iSCSI port to an already configured host
IBM_2145:ITSO_SVC1:admin>addhostport -iscsiname iqn.1991-05.com.microsoft:baldur 4
9.4.6 Deleting ports
If you make a mistake when adding a port, or if you remove an HBA from a server that is
already defined within the SAN Volume Controller, you can use the rmhostport command to
remove WWPN definitions from an existing host.
Before you remove the WWPN, be sure that it is the correct WWPN by issuing the lshost
command, as shown in Example 9-40.
Example 9-40 lshost command
IBM_2145:ITSO_SVC1:admin>lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
WWPNs: WWPNs are not case sensitive within the CLI.
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline
When you know the WWPN or iSCSI IQN, use the rmhostport command to delete a host
port, as shown in Example 9-41.
Example 9-41 rmhostport
For removing WWPN
IBM_2145:ITSO_SVC1:admin>rmhostport -hbawwpn 210000E08B89C1CD Palau
and for removing iSCSI IQN
IBM_2145:ITSO_SVC1:admin>rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur
Baldur
This command removes the WWPN of 210000E08B89C1CD from the Palau host and the iSCSI
IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.
Removing multiple ports: You can remove multiple ports at one time by using the separator
or colon (:) between the port names, for example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
9.5 Working with the Ethernet port for iSCSI
In this section, we describe the commands that are useful for setting, changing, and
displaying the SAN Volume Controller Ethernet port for iSCSI configuration.
Example 9-42 shows the lsportip command listing the iSCSI IP addresses assigned for
each port on each node in the system.
Example 9-42 lsportip command
IBM_2145:ITSO_SVC1:admin>lsportip
id node_id node_name IP_address mask
gateway IP_address_6 prefix_6 gateway_6 MAC
duplex state speed failover
1 1 node1
00:1a:64:95:2f:cc Full unconfigured 1Gb/s no
1 1 node1
00:1a:64:95:2f:cc Full unconfigured 1Gb/s yes
2 1 node1 10.44.36.64 255.255.255.0
10.44.36.254 00:1a:64:95:2f:ce Full
online 1Gb/s no
2 1 node1
00:1a:64:95:2f:ce Full online 1Gb/s yes
1 2 node2
00:1a:64:95:3f:4c Full unconfigured 1Gb/s no
1 2 node2
00:1a:64:95:3f:4c Full unconfigured 1Gb/s yes
2 2 node2 10.44.36.65 255.255.255.0
10.44.36.254 00:1a:64:95:3f:4e Full
online 1Gb/s no
2 2 node2
00:1a:64:95:3f:4e Full online 1Gb/s yes
1 3 node3
00:21:5e:41:53:18 Full unconfigured 1Gb/s no
1 3 node3
00:21:5e:41:53:18 Full unconfigured 1Gb/s yes
2 3 node3 10.44.36.60 255.255.255.0
10.44.36.254 00:21:5e:41:53:1a
Full online 1Gb/s no
2 3 node3
00:21:5e:41:53:1a Full online 1Gb/s yes
1 4 node4
00:21:5e:41:56:8c Full unconfigured 1Gb/s no
1 4 node4
00:21:5e:41:56:8c Full unconfigured 1Gb/s yes
2 4 node4 10.44.36.63 255.255.255.0
10.44.36.254 00:21:5e:41:56:8e Full
online 1Gb/s no
2 4 node4
00:21:5e:41:56:8e Full online 1Gb/s yes
Example 9-43 shows how the cfgportip command assigns an IP address to each node's
Ethernet port for iSCSI I/O.
Example 9-43 cfgportip command
IBM_2145:ITSO_SVC1:admin>cfgportip -node 4 -ip 10.44.36.63 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC1:admin>cfgportip -node 1 -ip 10.44.36.64 -gw 10.44.36.254 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC1:admin>cfgportip -node 2 -ip 10.44.36.65 -gw 10.44.36.254 -mask 255.255.255.0 2
9.6 Working with volumes
In this section, we describe the various configuration and administrative tasks that can be
performed on the volume within the SAN Volume Controller environment.
9.6.1 Creating a volume
The mkvdisk command creates sequential, striped, or image mode volume objects. When
they are mapped to a host object, these objects are seen as disk drives with which the host
can perform I/O operations.
When creating a volume, you must enter several parameters at the CLI. There are both
mandatory and optional parameters.
See the full command string and detailed information in the Command-Line Interface User's
Guide, SC27-2287.
Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.
When you are ready to create a volume, you must know the following information before you
start creating the volume:
In which storage pool the volume is going to have its extents
From which I/O Group the volume will be accessed
Which SAN Volume Controller node will be the preferred node for the volume
Size of the volume
Name of the volume
Type of the volume
Whether this volume will be managed by Easy Tier to optimize its performance
When you are ready to create your striped volume, use the mkvdisk command (we discuss
sequential and image mode volumes later). In Example 9-44, this command creates a 10 GB
striped volume with volume ID 20 within the storage pool STGPool_DS3500-2 and assigns it to
the io_grp0 I/O Group. Its preferred node will be node 1.
Example 9-44 mkvdisk command
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0 -node 1
-size 10 -unit gb -name Tiger
Virtual Disk, id [20], successfully created
To verify the results, use the lsvdisk command, as shown in Example 9-45.
Example 9-45 lsvdisk command
IBM_2145:ITSO_SVC1:admin>lsvdisk 20
id 20
name Tiger
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000016
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name STGPool_DS3500-2
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
At this point, you have completed the required tasks to create a volume.
9.6.2 Volume information
Use the lsvdisk command to display summary information about all volumes that are
defined within the SAN Volume Controller environment. To display more detailed information
about a specific volume, run the command again and append the volume name parameter or
the volume ID.
Example 9-46 shows both of these commands.
Example 9-46 lsvdisk command
IBM_2145:ITSO_SVC1:admin>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type
FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
fast_write_state se_copy_count RC_change
0 Volume_A 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
0 GMREL1 6005076801AF813F1000000000000031 0 1 empty 0
0 no
1 Volume_B 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
1 GMREL2 6005076801AF813F1000000000000032 0 1 empty 0
0 no
2 Volume_C 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
2 GMREL3 6005076801AF813F1000000000000033 0 1 empty 0
0 no
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_A
id 0
name Volume_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id 0
RC_name GMREL1
vdisk_UID 6005076801AF813F1000000000000031
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
9.6.3 Creating a thin-provisioned volume
Example 9-47 shows how to create a thin-provisioned volume. In addition to the normal
parameters, you must use the following parameters:
-rsize This parameter makes the volume a thin-provisioned volume;
otherwise, the volume is fully allocated.
-autoexpand This parameter specifies that thin-provisioned volume copies
automatically expand their real capacities by allocating new extents
from their storage pool.
-grainsize This parameter sets the grain size (in KB) for a thin-provisioned
volume.
Example 9-47 Usage of the command mkvdisk
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp 0 -vtype striped -size
10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [21], successfully created
This command creates a space-efficient 10 GB volume. The volume belongs to the storage
pool named STGPool_DS3500-2 and is owned by the io_grp0 I/O Group. The real capacity
automatically expands until the volume size of 10 GB is reached. The grain size is set to 32 KB,
which is the default.
9.6.4 Creating a volume in image mode
This virtualization type allows an image mode volume to be created when an MDisk already
has data on it, perhaps from a pre-virtualized subsystem. When an image mode volume is
created, it directly corresponds to the previously unmanaged MDisk from which it was
created. Therefore, with the exception of thin-provisioned image mode volumes, the volume's
logical block address (LBA) x equals MDisk LBA x.
You can use this command to bring a non-virtualized disk under the control of the clustered
system. After it is under the control of the clustered system, you can migrate the volume from
the single managed disk.
As soon as the first MDisk extent has been migrated, the volume is no longer an image mode
volume. You can add an image mode volume to an already populated storage pool with other
types of volume, such as a striped or sequential volume.
You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode volume.
Disk size: When using the -rsize parameter, you have the following options: disk_size,
disk_size_percentage, and auto.
Specify the disk_size_percentage value using an integer, or an integer immediately
followed by the percent (%) symbol.
Specify the units for a disk_size integer using the -unit parameter; the default is MB.
The -rsize value can be greater than, equal to, or less than the size of the volume.
The auto option creates a volume copy that uses the entire size of the MDisk. If you
specify the -rsize auto option, you must also specify the -vtype image option.
An entry of 1 GB uses 1,024 MB.
Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). That
is, the minimum size that can be specified for an image mode volume must be the same as
the storage pool extent size to which it is added, with a minimum of 16 MB.
Capacity: If you create a mirrored volume from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks,
and the remaining space on the larger MDisk is inaccessible.
If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.
Use the mkvdisk command to create an image mode volume, as shown in Example 9-48.
Example 9-48 mkvdisk (image mode)
IBM_2145:ITSO_SVC1:admin>mkvdisk -mdiskgrp STGPool_DS3500-1 -iogrp 0 -mdisk mdisk10 -vtype
image -name Image_Volume_A
Virtual Disk, id [22], successfully created
This command creates an image mode volume called Image_Volume_A using the mdisk10
MDisk. The volume belongs to the storage pool STGPool_DS3500-1 and is owned by the
io_grp0 I/O Group.
If we run the lsvdisk command again, notice that the volume named Image_Volume_A has a
type of image, as shown in Example 9-49.
Example 9-49 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue type=image
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
fast_write_state se_copy_count RC_change
22 Image_Volume_A 0 io_grp0 online 0 STGPool_DS3500-1 10.00GB
image 6005076801AF813F1000000000000018 0 1
empty 0 no
9.6.5 Adding a mirrored volume copy
You can create a mirrored copy of a volume, which keeps the volume accessible even when the
MDisk on which it depends becomes unavailable. You can create a copy of a volume in a
separate storage pool or as an image mode copy of the volume. Copies increase the availability
of data; however, they are not separate objects. You can create or change mirrored copies
only from the volume.
In addition, you can use volume mirroring as an alternative method of migrating volumes
between storage pools.
For example, if you have a non-mirrored volume in one storage pool and want to migrate that
volume to another storage pool, you can add a new copy of the volume and specify the
second storage pool. After the copies are synchronized, you can delete the copy on the first
storage pool. The volume is copied to the second storage pool while remaining online during
the copy.
To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds
a copy of the chosen volume to the selected storage pool, which changes a non-mirrored
volume into a mirrored volume.
In the following scenario, we show creating a mirrored volume from one storage pool to
another storage pool.
As you can see in Example 9-50, the volume has a copy with copy_id 0.
Example 9-50 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_no_mirror
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
In Example 9-51, we add the volume copy mirror by using the addvdiskcopy command.
Example 9-51 addvdiskcopy
IBM_2145:ITSO_SVC1:admin>addvdiskcopy -mdiskgrp STGPool_DS5000-1 -vtype striped -unit gb
Volume_no_mirror
Vdisk [23] copy [1] successfully created
During the synchronization process, you can see the status by using the
lsvdisksyncprogress command. As shown in Example 9-52, the first time that the status is
checked, the synchronization progress is at 48%, and the estimated completion time is
reported as 110926203918 (in YYMMDDHHMMSS format). The second time that the command
is run, the progress status is at 100%, and the synchronization is complete.
Example 9-52 Synchronization
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
23 Volume_no_mirror 1 48 110926203918
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
vdisk_id vdisk_name copy_id progress estimated_completion_time
23 Volume_no_mirror 1 100
As you can see in Example 9-53, the new mirrored volume copy (copy_id 1) has been added
and can be seen by using the lsvdisk command.
Example 9-53 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk 23
id 23
name Volume_no_mirror
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 1.00GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F1000000000000019
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
While adding a volume copy mirror, you can define the new copy with parameters that differ
from those of the existing copy. Therefore, you can define a thin-provisioned volume copy for
a fully allocated volume, and vice versa, which is one way to migrate a non-thin-provisioned
volume to a thin-provisioned volume.
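The following sketch outlines this migration approach for an assumed volume named
Volume_thick and an assumed target pool; the parameter values are illustrative only and are
not taken from our configuration. A thin-provisioned copy is added, the synchronization is
monitored, and the original fully allocated copy (copy 0) is then removed:
IBM_2145:ITSO_SVC1:admin>addvdiskcopy -mdiskgrp STGPool_DS5000-1 -rsize 2% -autoexpand -grainsize 32 Volume_thick
IBM_2145:ITSO_SVC1:admin>lsvdisksyncprogress
IBM_2145:ITSO_SVC1:admin>rmvdiskcopy -copy 0 Volume_thick
Wait for lsvdisksyncprogress to report 100% before removing copy 0; otherwise, you would be
removing the only synchronized copy.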
Now, we can change the name of the volume just mirrored from Volume_no_mirror to
Volume_mirrored, as shown in Example 9-54.
Volume copy mirror parameters: To change the parameters of a volume copy mirror, you
must delete the volume copy and redefine it with the new values.
Example 9-54 Volume name changes
IBM_2145:ITSO_SVC1:admin>chvdisk -name Volume_mirrored Volume_no_mirror
9.6.6 Splitting a mirrored volume
The splitvdiskcopy command creates a new volume in the specified I/O Group from a copy
of the specified volume. If the copy that you are splitting is not synchronized, you must use the
-force parameter. The command fails if you are attempting to remove the only synchronized
copy. To avoid this failure, wait for the copy to synchronize, or split the unsynchronized copy
from the volume by using the -force parameter. You can run the command when either
volume copy is offline.
Example 9-55 shows the splitvdiskcopy command, which is used to split a mirrored volume.
It creates a new volume, Volume_new from Volume_mirrored.
Example 9-55 Split volume
IBM_2145:ITSO_SVC1:admin>splitvdiskcopy -copy 1 -iogrp 0 -name Volume_new Volume_mirrored
Virtual Disk, id [24], successfully created
As you can see in Example 9-56, the new volume named Volume_new has been created as an
independent volume.
Example 9-56 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_new
id 24
name Volume_new
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001A
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
By issuing the command in Example 9-55 on page 501, Volume_mirrored no longer has its
mirrored copy, and a new volume is created automatically.
9.6.7 Modifying a volume
The chvdisk command modifies a single property of a volume, and only one property can be
modified at a time. So, changing the name and modifying the I/O Group require two
invocations of the command.
You can specify a new name or label. The new name can be used subsequently to reference
the volume. The I/O Group with which this volume is associated can be changed. Note that
changing the I/O Group with which this volume is associated requires a flush of the cache
within the nodes in the current I/O Group to ensure that all data is written to disk. I/O must be
suspended at the host level before performing this operation.
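As a minimal sketch of this one-property-per-invocation behavior, the following hypothetical
sequence renames a volume and then sets its throttling rate; the volume names are
placeholders and are not part of our lab configuration:
IBM_2145:ITSO_SVC1:admin>chvdisk -name volume_D_new volume_D
IBM_2145:ITSO_SVC1:admin>chvdisk -rate 40 -unitmb volume_D_new
Each command changes exactly one property; both changes cannot be combined into a single
chvdisk invocation.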
9.6.8 I/O governing
You can set a limit on the number of I/O operations accepted for a volume. The limit is set in
terms of I/Os per second or MB per second. By default, no I/O governing rate is set when a
volume is created.
Tips: If the volume has a mapping to any hosts, it is not possible to move the volume to an
I/O Group that does not include any of those hosts.
This operation will fail if there is not enough space to allocate bitmaps for a mirrored
volume in the target I/O Group.
If the -force parameter is used and the system is unable to destage all write data from the
cache, the contents of the volume are corrupted by the loss of the cached data.
If the -force parameter is used to move a volume that has out-of-sync copies, a full
resynchronization is required.
Base the choice between I/O and MB as the I/O governing throttle on the disk access profile
of the application. Database applications generally issue large amounts of I/O, but they only
transfer a relatively small amount of data. In this case, setting an I/O governing throttle based
on MB per second does not achieve much. It is better to use an I/Os-per-second throttle.
At the other extreme, a streaming video application generally issues a small amount of I/O,
but it transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle based on I/Os per second does not achieve much, so it is better to use an
MB per second throttle.
An example of the chvdisk command is shown in Example 9-57.
Example 9-57 chvdisk
IBM_2145:ITSO_SVC1:admin>chvdisk -rate 20 -unitmb volume_7
IBM_2145:ITSO_SVC1:admin>chvdisk -warning 85% volume_7
The first command changes the volume throttling of volume_7 to 20 MBps. The second
command changes the thin-provisioned volume warning to 85%. To verify the changes, issue
the lsvdisk command, as shown in Example 9-58.
Example 9-58 lsvdisk command: Verifying throttling
IBM_2145:ITSO_SVC1:admin>lsvdisk volume_7
id 1
name volume_7
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001F
virtual_disk_throttling (MB) 20
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of
the lsvdisk command) does not mean that zero I/Os per second (or MB per second) can
be achieved. It means that no throttle is set.
New name first: The chvdisk command specifies the new name first. The name can
consist of letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). It
can be between one and 63 characters in length. However, it cannot start with a number,
the dash, or the word vdisk (because this prefix is reserved for SAN Volume Controller
assignment only).
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 85
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity
2.02GB
9.6.9 Deleting a volume
When executing this command on an existing fully managed mode volume, any data that
remained on it will be lost. The extents that made up this volume will be returned to the pool of
free extents available in the storage pool.
If any remote copy, FlashCopy, or host mappings still exist for this volume, the delete fails
unless the -force flag is specified. This flag ensures the deletion of the volume and any
volume to host mappings and copy mappings.
If the volume is currently the subject of a migrate to image mode, the delete fails unless the
-force flag is specified. This flag halts the migration and then deletes the volume.
If the command succeeds (without the -force flag) for an image mode volume, the underlying
back-end controller logical unit will be consistent with the data that a host might previously
have read from the image mode volume. That is, all fast write data has been flushed to the
underlying LUN. If the -force flag is used, there is no guarantee.
If there is any non-destaged data in the fast write cache for this volume, the deletion of the
volume fails unless the -force flag is specified, in which case any non-destaged data in the
fast write cache is discarded.
Use the rmvdisk command to delete a volume from your SAN Volume Controller
configuration, as shown in Example 9-59.
Example 9-59 rmvdisk
IBM_2145:ITSO_SVC1:admin>rmvdisk volume_A
This command deletes the volume_A volume from the SAN Volume Controller configuration.
If the volume is assigned to a host, you need to use the -force flag to delete the volume
(Example 9-60).
Example 9-60 rmvdisk (-force)
IBM_2145:ITSO_SVC1:admin>rmvdisk -force volume_A
9.6.10 Expanding a volume
Expanding a volume presents a larger capacity disk to your operating system. Although this
expansion can be easily performed using the SAN Volume Controller, you must ensure that
your operating systems support expansion before using this function.
Assuming that your operating systems support it, you can use the expandvdisksize command
to increase the capacity of a given volume.
Example 9-61 shows a sample of this command.
Example 9-61 expandvdisksize
IBM_2145:ITSO_SVC1:admin>expandvdisksize -size 5 -unit gb volume_C
This command expands the volume_C volume, which was 35 GB before, by another 5 GB to
give it a total size of 40 GB.
To expand a thin-provisioned volume, you can use the -rsize option, as shown in
Example 9-62 on page 505. This command changes the real size of the volume_B volume to a
real capacity of 55 GB. The capacity of the volume remains unchanged.
Example 9-62 lsvdisk
IBM_2145:ITSO_SVC1:admin>lsvdisk volume_B
id 26
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 50.02GB
free_capacity 50.02GB
overallocation 199
autoexpand on
warning 80
grainsize 32
se_copy yes
IBM_2145:ITSO_SVC1:admin>expandvdisksize -rsize 5 -unit gb volume_B
IBM_2145:ITSO_SVC1:admin>lsvdisk volume_B
id 26
name volume_B
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 55.02GB
free_capacity 55.02GB
overallocation 181
autoexpand on
warning 80
grainsize 32
se_copy yes
9.6.11 Assigning a volume to a host
Use the mkvdiskhostmap command to map a volume to a host. When executed, this command
creates a new mapping between the volume and the specified host, which essentially
presents this volume to the host as though the disk was directly attached to the host. It is only
after this command is executed that the host can perform I/O to the volume. Optionally, a
SCSI LUN ID can be assigned to the mapping.
When the HBA on the host scans for devices that are attached to it, it discovers all of the
volumes that are mapped to its FC ports. When the devices are found, each one is allocated
an identifier (SCSI LUN ID).
For example, the first disk found is generally SCSI LUN 1, and so on. You can control the
order in which the HBA discovers volumes by assigning the SCSI LUN ID as required. If you
do not specify a SCSI LUN ID, the system automatically assigns the next available SCSI LUN
ID, given any mappings that already exist with that host.
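For illustration, the following hypothetical mapping assigns an explicit SCSI LUN ID by using
the -scsi parameter; the host and volume names here are placeholders only:
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Host_X -scsi 3 volume_X
If -scsi is omitted, the next available SCSI LUN ID for that host is used, as described above.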
Using the volume and host definition that we created in the previous sections, we assign
volumes to hosts that are ready for their use. We use the mkvdiskhostmap command (see
Example 9-63).
Example 9-63 mkvdiskhostmap
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Almaden volume_B
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Almaden volume_C
Virtual Disk to Host map, id [1], successfully
created
This command displays volume_B and volume_C that are assigned to host Almaden, as shown
in Example 9-64.
Example 9-64 lshostvdiskmap -delim command
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim :
id:name:SCSI_id:vdisk_id:vdisk_name:vdisk_UID
2:Almaden:0:26:volume_B:6005076801AF813F1000000000000020
2:Almaden:1:27:volume_C:6005076801AF813F1000000000000021
Important: If a volume is expanded, its type will become striped even if it was previously
sequential or in image mode. If there are not enough extents to expand your volume to the
specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents
Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs, for
example:
Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
Volume 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering Volumes 1 and 2,
because there is no SCSI LUN mapped with ID 3.
It is not possible to map a volume to a host more than once on separate LUNs
(Example 9-65).
Example 9-65 mkvdiskhostmap
IBM_2145:ITSO_SVC1:admin>mkvdiskhostmap -host Siam volume_A
Virtual Disk to Host map, id [0], successfully created
This command maps the volume called volume_A to the host called Siam.
At this point, you have completed all tasks that are required to assign a volume to an attached
host.
9.6.12 Showing volumes to host mapping
Use the lshostvdiskmap command to show which volumes are assigned to a specific host
(Example 9-66).
Example 9-66 lshostvdiskmap
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,volume_A,210000E08B18FF8A,60050768018301BF280000000000000C
From this command, you can see that the host Siam has only one assigned volume called
volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is presented
to the host. If no host is specified, all defined host to volume mappings will be returned.
Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can
help assign a specific LUN ID to a volume that is to be associated with a given host. The
default (if nothing is specified) is to increment based on what is already assigned to the
host.
Important: Ensure that the SCSI LUN ID allocation is contiguous.
Specifying the flag before the host name: Although the -delim flag normally comes at
the end of the command string, in this case, you must specify this flag before the host
name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.
9.6.13 Deleting a volume to host mapping
When deleting a volume mapping, you are not deleting the volume itself, only the connection
from the host to the volume. If you mapped a volume to a host by mistake, or you simply want
to reassign the volume to another host, use the rmvdiskhostmap command to unmap a
volume from a host (Example 9-67).
Example 9-67 rmvdiskhostmap
IBM_2145:ITSO_SVC1:admin>rmvdiskhostmap -host Tiger volume_D
This command unmaps the volume called volume_D from the host called Tiger.
9.6.14 Migrating a volume
From time to time, you might want to migrate volumes from one set of MDisks to another set
of MDisks to decommission an old disk subsystem, to have better balanced performance
across your virtualized environment, or simply to migrate data into the SAN Volume Controller
environment transparently using image mode.
You can obtain further information about migration in Chapter 6, Data migration on
page 225.
As you can see from the parameters that are shown in Example 9-68 on page 508, before
you can migrate your volume, you must know the name of the volume that you want to migrate
and the name of the storage pool to which you want to migrate. To discover the names, run
the lsvdisk and lsmdiskgrp commands.
After you know these details, you can issue the migratevdisk command, as shown in
Example 9-68.
Example 9-68 migratevdisk
IBM_2145:ITSO_SVC1:admin>migratevdisk -mdiskgrp STGPool_DS5000-1 -vdisk volume_C
This command moves volume_C to the storage pool named STGPool_DS5000-1.
You can run the lsmigrate command at any time to see the status of the migration process
(Example 9-69).
Example 9-69 lsmigrate command
IBM_2145:ITSO_SVC1:admin>lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO_SVC1:admin>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
Important: After migration is started, it continues until completion unless it is stopped or
suspended by an error condition or unless the volume being migrated is deleted.
Tips: If insufficient extents are available within your target storage pool, you receive an
error message. Make sure that the source MDisk group and target MDisk group have the
same extent size.
The optional -threads parameter allows you to assign a priority to the migration process.
The default is 4, which is the highest priority setting. However, if you want the process to
take a lower priority than other types of I/O, you can specify 3, 2, or 1.
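As a hedged example of lowering the migration priority with the -threads parameter that is
described in the tip above (the values shown are illustrative only), the migration in
Example 9-68 could be started as follows:
IBM_2145:ITSO_SVC1:admin>migratevdisk -mdiskgrp STGPool_DS5000-1 -threads 2 -vdisk volume_C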
9.6.15 Migrating a fully managed volume to an image mode volume
Migrating a fully managed volume to an image mode volume allows the SAN Volume
Controller to be removed from the data path, which might be useful where the SAN Volume
Controller is used as a data mover appliance. You can use the migratetoimage command.
To migrate a fully managed volume to an image mode volume, the following rules apply:
The destination MDisk must be greater than or equal to the size of the volume.
The MDisk that is specified as the target must be in an unmanaged state.
Regardless of the mode in which the volume starts, it is reported as being in managed mode
during the migration.
Both of the MDisks that are involved are reported as being in image mode during
the migration.
If the migration is interrupted by a system recovery or a cache problem, the migration
resumes after the recovery completes.
Example 9-70 shows an example of the command.
Example 9-70 migratetoimage
IBM_2145:ITSO_SVC1:admin>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp
STGPool_IMAGE
In this example, you migrate the data from volume_A onto mdisk10, and the MDisk is placed
into the STGPool_IMAGE storage pool.
9.6.16 Shrinking a volume
The shrinkvdisksize command reduces the capacity that is allocated to the particular
volume by the amount that you specify. You cannot shrink the real size of a thin-provisioned
volume to less than its used size. All capacities, including changes, must be in multiples of
512 bytes. An entire extent is reserved even if it is only partially used. The default capacity
units are MBs.
Progress: The progress is given as percent complete. If you receive no more replies, it
means that the process has finished.
You can use this command to shrink the physical capacity that is allocated to a particular
volume by the specified amount. You also can use this command to shrink the virtual capacity
of a thin-provisioned volume without altering the physical capacity that is assigned to the
volume:
For a non-thin-provisioned volume, use the -size parameter.
For a thin-provisioned volume's real capacity, use the -rsize parameter.
For a thin-provisioned volume's virtual capacity, use the -size parameter.
When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.
The system arbitrarily reduces the capacity of the volume by removing a partial extent, one
extent, or multiple extents from those extents that are allocated to the volume. You cannot
control which extents are removed, and therefore you cannot assume that it is unused space
that is removed.
Note that image mode volumes cannot be reduced in size. Instead, they must first be
migrated to fully managed mode. To run the shrinkvdisksize command on a mirrored
volume, all copies of the volume must be synchronized.
Assuming that your operating system supports it, you can use the shrinkvdisksize command
to decrease the capacity of a given volume.
Example 9-71 shows an example of this command.
Example 9-71 shrinkvdisksize
IBM_2145:ITSO_SVC1:admin>shrinkvdisksize -size 44 -unit gb volume_D
This command shrinks a volume called volume_D from a total size of 80 GB, by 44 GB, to a
new total size of 36 GB.
9.6.17 Showing a volume on an MDisk
Use the lsmdiskmember command to display information about the volumes that are using
space on a specific MDisk, as shown in Example 9-72.
Example 9-72 lsmdiskmember command
IBM_2145:ITSO_SVC1:admin>lsmdiskmember mdisk8
id copy_id
24 0
27 0
Important:
If the volume contains data, do not shrink the disk.
Certain operating systems or file systems use the outer edge of the disk for
performance reasons. This command can shrink a FlashCopy target volume to the
same capacity as the source.
Before you shrink a volume, validate that the volume is not mapped to any host objects.
If the volume is mapped, data is displayed. You can determine the exact capacity of the
source or master volume by issuing the svcinfo lsvdisk -bytes vdiskname command.
Shrink the volume by the required amount by issuing the shrinkvdisksize -size
disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.
This command displays a list of all of the volume IDs that correspond to the volume copies
that use mdisk8.
To correlate the IDs that are displayed in this output to volume names, we can run the
lsvdisk command, which we discuss in more detail in 9.6, Working with volumes on
page 492.
9.6.18 Showing which volumes are using a storage pool
Use the lsvdisk -filtervalue command, as shown in Example 9-73, to see which volumes
are part of a specific storage pool. This command shows all of the volumes that are part of the
storage pool named STGPool_DS3500-2.
Example 9-73 lsvdisk -filtervalue: VDisks in the MDG
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_cha
nge
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,0,no
9,W2K3_SRV2_VOL03,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
00000000000000A,0,1,empty,0,0,no
10,W2K3_SRV2_VOL04,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000B,0,1,empty,0,0,no
11,W2K3_SRV2_VOL05,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000C,0,1,empty,0,0,no
12,W2K3_SRV2_VOL06,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000D,0,1,empty,0,0,no
16,AIX_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,20.00GB,striped,,,,,6005076801AF813F1
000000000000011,0,1,empty,0,0,no
9.6.19 Showing which MDisks are used by a specific volume
Use the lsvdiskmember command, as shown in Example 9-74, to show from which MDisks a
specific volume's extents come.
Example 9-74 lsvdiskmember command
IBM_2145:ITSO_SVC1:admin>lsvdiskmember 0
id
4
5
6
7
If you want to know more about these MDisks, you can run the lsmdisk command, as
explained in 9.2, New commands on page 474 (using the ID that is displayed in
Example 9-74 rather than the name).
9.6.20 Showing from which storage pool a volume has its extents
Use the lsvdisk command, as shown in Example 9-75, to show to which storage pool a
specific volume belongs.
Example 9-75 lsvdisk command: Storage pool name
IBM_2145:ITSO_SVC1:admin>lsvdisk Volume_D
id 25
name Volume_D
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AF813F100000000000001E
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB
To learn more about these storage pools, you can run the lsmdiskgrp command, as explained
in 9.3.10, Working with a storage pool on page 481.
9.6.21 Showing the host to which the volume is mapped
To show the hosts to which a specific volume has been assigned, run the lsvdiskhostmap
command, as shown in Example 9-76.
Example 9-76 lsvdiskhostmap command
IBM_2145:ITSO_SVC1:admin>lsvdiskhostmap -delim , volume_B
id,name,SCSI_id,host_id,host_name,vdisk_UID
26,volume_B,0,2,Almaden,6005076801AF813F1000000000000020
This command shows the host or hosts to which the volume_B volume was mapped. It is
normal to see duplicate entries, because there are multiple paths between the clustered
system and the host. To be sure that the operating system on the host sees the disk only
once, you must install and configure a multipath software application, such as the IBM
Subsystem Device Driver (SDD).
9.6.22 Showing the volume to which the host is mapped
To show the volume to which a specific host has been assigned, run the lshostvdiskmap
command, as shown in Example 9-77.
Example 9-77 lshostvdiskmap command example
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004
This command shows which volumes are mapped to the host called Almaden.
9.6.23 Tracing a volume from a host back to its physical disk
In many cases, you must verify exactly which physical disk is presented to the host; for
example, from which storage pool a specific volume comes. However, from the host side, it is
not possible for the server administrator to see, through the GUI alone, on which physical
disks the volumes are running.
Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, in this case, you must specify this flag before the volume name.
Otherwise, the command does not return any data.
Instead, you must enter the command (listed in Example 9-78) from your multipath command
prompt. Follow these steps:
1. On your host, run the datapath query device command. You see a long disk serial
number for each vpath device, as shown in Example 9-78.
Example 9-78 datapath query device
DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 20 0
1 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 2343 0
DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 2335 0
1 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0
DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 2331 0
1 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2. Run the lshostvdiskmap command to return a list of all assigned volumes
(Example 9-79).
Example 9-79 lshostvdiskmap
IBM_2145:ITSO_SVC1:admin>lshostvdiskmap -delim , Almaden
id,name,SCSI_id,vdisk_id,vdisk_name,vdisk_UID
2,Almaden,0,26,volume_B,60050768018301BF2800000000000005
2,Almaden,1,27,volume_A,60050768018301BF2800000000000004
2,Almaden,2,28,volume_C,60050768018301BF2800000000000006
Look for the disk serial number that matches your datapath query device output. This
host was defined in our SAN Volume Controller as Almaden.
3. Run the lsvdiskmember vdiskname command for a list of the MDisk or MDisks that make
up the specified volume (Example 9-80).
Example 9-80 lsvdiskmember
IBM_2145:ITSO_SVC1:admin>lsvdiskmember volume_E
id
0
1
2
3
4
10
11
13
15
16
17
State: In Example 9-78, the state of each path is OPEN. Sometimes, you will see the
state CLOSED. This state does not necessarily indicate a problem, because it might be a
result of the path's processing stage.
4. Query the MDisks with the lsmdisk mdiskID command to find their controller and LUN
number information, as shown in Example 9-81. The output displays the controller name and
the controller LUN ID to help you to track back to a LUN within the disk subsystem (provided
that you gave your controller a unique name, such as a serial number).
Example 9-81 lsmdisk command
IBM_2145:ITSO_SVC1:admin>lsmdisk 0
id 0
name mdisk0
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
capacity 128.0GB
quorum_index 1
block_size 512
controller_name ITSO-DS3500
ctrl_type 4
ctrl_WWNN 20080080E51B09E8
controller_id 2
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000000
UID 60080e50001b0b62000007b04e731e4d00000000000000000000000000000000
preferred_WWPN 20580080E51B09E8
active_WWPN 20580080E51B09E8
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier generic_hdd
9.7 Scripting under the CLI for SAN Volume Controller task
automation
Command prefix changes: The svctask and svcinfo command prefixes are no longer
necessary when issuing a command. If you have existing scripts that use those prefixes,
they will continue to function. You do not need to change the scripts.
Scripting works well for automating regular operational jobs. You can
use any available shell to develop scripts. Scripting enhances the productivity of SAN Volume
Controller administrators and the integration of their storage virtualization environment. You
can create your own customized scripts to automate a large number of tasks, schedule them
for a variety of times, and run them through the CLI.
We suggest that you keep scripts as simple as possible in large SAN environments
where scripted commands are used, because fallback, documentation, and the verification of
a successful script prior to execution are harder to manage there.
In this section, we present an overview of how to automate various tasks by creating scripts
using the SAN Volume Controller CLI.
9.7.1 Scripting structure
When creating scripts to automate the tasks on the SAN Volume Controller, use the structure
that is illustrated in Figure 9-2 on page 516.
Figure 9-2 Scripting structure for SAN Volume Controller task automation
Creating a Secure Shell connection to the SAN Volume Controller
When creating a connection to the SAN Volume Controller from a script, you
must have access to a private key that corresponds to a public key that has been previously
uploaded to the SAN Volume Controller.
The key is used to establish the SSH connection that is needed to use the CLI on the SAN
Volume Controller. If the SSH keypair is generated without a passphrase, you can connect
without the need for special scripting to pass in the passphrase.
Secure Shell Key: Starting with SAN Volume Controller 6.3, using a Secure Shell (SSH)
key is optional; you can use a user ID and password to access the system. However, for
security reasons, we suggest the use of an SSH key. We provide a sample of its use.
(Figure 9-2 shows the flow: create an SSH connection to the SVC, run the command or
commands, and perform logging, triggered by scheduled or manual activation.)
On UNIX systems, you can use the ssh command to create an SSH connection with the SAN
Volume Controller. On Windows systems, you can use a utility called plink.exe, which is
provided with the PuTTY tool, to create an SSH connection with the SAN Volume Controller.
In the following examples, we use plink to create the SSH connection to the SAN Volume
Controller.
Executing the commands
When using the CLI, see the IBM System Storage SAN Volume Controller Command-Line
Interface User's Guide, GC27-2287, to obtain the correct syntax and a detailed explanation of
each command. You can download this guide for each SAN Volume Controller code level from
the SAN Volume Controller documentation page at this website:
http://www.ibm.com/support/entry/portal/support?brandind=System%20Storage~Storage%20software~Storage%20virtualization
When using the CLI, not all commands provide a response to determine the status of the
invoked command. Therefore, always create checks that can be logged for monitoring and
troubleshooting purposes.
Connecting to the SVC using a predefined SSH connection
The easiest way to create an SSH connection to the SAN Volume Controller is when plink
can call a predefined PuTTY session.
Define a session, including this information:
The auto-login user name, set to your SAN Volume Controller admin user name (for
example, admin). Set this parameter under the Connection → Data category, as shown in
Figure 9-3.
Figure 9-3 Auto-login configuration
The private key for authentication (for example, icat.ppk). This key is the private key that
you have already created. Set this parameter under the Connection → SSH → Auth
category, as shown in Figure 9-4.
7933 09 CLI Operations Libor.fm Draft Document for Review March 27, 2014 3:03 pm
518 Implementing the IBM System Storage SAN Volume Controller V7.2
Figure 9-4 An ssh private key configuration
The IP address of the SAN Volume Controller clustered system. Set this parameter under
the Session category, as shown in Figure 9-5.
Figure 9-5 IP address
Enter this information:
A session name. Our example uses ITSO_SVC1.
Our PuTTY version is 0.63.
To use this predefined PuTTY session, use the following syntax:
plink ITSO_SVC1
If a predefined PuTTY session is not used, use this syntax:
plink admin@<your cluster ip add> -i "C:\DirectoryPath\KeyName.PPK"
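The following minimal sketch of a Windows batch file ties these pieces together in the
structure of Figure 9-2: it connects through the predefined ITSO_SVC1 PuTTY session, runs
one CLI command, and logs the output and return code so that success can be verified later.
The log path and the chosen command are assumptions for illustration only:
@echo off
rem Sketch: run one SVC CLI command through the predefined PuTTY session and log it
set LOG=C:\scripts\svc_report.log
echo ===== %date% %time% ===== >> %LOG%
rem List all volumes in a parseable form; replace with the command to automate
plink ITSO_SVC1 "lsvdisk -delim ," >> %LOG% 2>&1
rem Record the return code because not every CLI command prints a confirmation
echo plink returned %errorlevel% >> %LOG%
A scheduled task (or a cron job when using ssh on UNIX) can then run this script at the
required times.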
IBM provides a suite of scripting tools based on Perl. You can download these scripting tools
from this website:
http://www.alphaworks.ibm.com/tech/svctools
9.8 SAN Volume Controller advanced operations using the CLI
In the following sections, we describe the commands that we think best represent advanced
operations.
9.8.1 Command syntax
Two major command sets are available:
The svcinfo command set allows you to query the various components within the SAN
Volume Controller environment.
The svctask command set allows you to make changes to the various components within
the SAN Volume Controller.
When the command syntax is shown, you see several parameters in square brackets, for
example, [parameter], which indicate that the parameter is optional in most if not all
instances. Any parameter that is not in square brackets is required information. You can view
the syntax of a command by entering one of the following commands:
svcinfo -? Shows a complete list of information commands.
svctask -? Shows a complete list of task commands.
svcinfo commandname -? Shows the syntax of information commands.
svctask commandname -? Shows the syntax of task commands.
svcinfo commandname -filtervalue? Shows which filters you can use to reduce the output
of the information commands.
If you look at the syntax of a command by typing svcinfo commandname -?, you often see
-filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
9.8.2 Organizing on-window content
Sometimes the output of a command can be long and difficult to read in the window. In cases
where you need information about a subset of the total number of available items, you can
use filtering to reduce the output to a more manageable size.
Important command prefix changes: The svctask and svcinfo command prefixes are
no longer necessary when issuing a command. If you have existing scripts that use those
prefixes, they will continue to function. You do not need to change the scripts.
Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname
-h.
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were issued recently. Then, you can use the left and right, Backspace, and Delete keys to
edit commands before you resubmit them.
Filtering
To reduce the output that is displayed by a command, you can specify a number of filters,
depending on which command you are running. To see which filters are available, type the
command followed by the -filtervalue? flag, as shown in Example 9-82.
Example 9-82 lsvdisk -filtervalue? command
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue?
Filters for this view are :
name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count
fast_write_state
se_copy_count
filesystem
preferred_node_id
mirror_write_priority
RC_flash
When you know the filters, you can be more selective in generating output:
Multiple filters can be combined to create specific searches.
You can use an asterisk (*) as a wildcard when using names.
When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.
For example, if we issue the lsvdisk command with no filters but with the -delim parameter,
we see the output that is shown in Example 9-83.
Example 9-83 lsvdisk command: No filters
IBM_2145:ITSO_SVC1:admin>lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_cha
nge
0,ESXI_SRV1_VOL01,1,io_grp1,online,many,many,100.00GB,many,,,,,6005076801AF813F100000000000
0014,0,2,empty,0,no
1,volume_7,0,io_grp0,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F10000000
0000001F,0,1,empty,1,no
2,W2K3_SRV1_VOL02,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000003,0,1,empty,0,no
3,W2K3_SRV1_VOL03,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000004,0,1,empty,0,no
4,W2K3_SRV1_VOL04,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000005,0,1,empty,0,no
5,W2K3_SRV1_VOL05,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000006,0,1,empty,0,no
6,W2K3_SRV1_VOL06,1,io_grp1,online,0,STGPool_DS3500-1,10.00GB,striped,,,,,6005076801AF813F1
000000000000007,0,1,empty,0,no
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,no
If we now add a filter to our lsvdisk command (mdisk_grp_name) we can reduce the output, as
shown in Example 9-84 on page 521.
Example 9-84 lsvdisk command: With a filter
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count RC_change
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,no
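As a further illustration (the name pattern is chosen only as an example), a wildcard in a
name filter can also narrow the output:
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue name=W2K3_SRV2* -delim ,
This form would list only the volumes whose names start with W2K3_SRV2.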
9.9 Managing the clustered system using the CLI
In these sections, we demonstrate how to perform system administration.
9.9.1 Viewing clustered system properties
Use the lssystem command to display summary information about the clustered system, as
shown in Example 9-85.
Example 9-85 lssystem command
IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
Tip: The -delim parameter truncates the content in the window and separates data fields
with the character that you specify instead of wrapping text over multiple lines. This
parameter is normally used in cases where you need to get reports during script execution.
Changes since SAN Volume Controller 6.3:
The svcinfo lscluster command is changed to lssystem.
The svctask chcluster command is changed to chsystem, and several optional
parameters have moved to new commands. For example, to change the IP address of
the system, you can now use the chsystemip command. All the old commands are
maintained for compatibility reasons.
bandwidth
total_mdisk_capacity 836.5GB
space_in_mdisk_grps 786.5GB
space_allocated_to_vdisks 434.02GB
total_free_space 402.5GB
total_vdiskcopy_capacity 442.00GB
total_used_capacity 432.00GB
total_overallocation 52
total_vdisk_capacity 341.00GB
total_allocated_extent_capacity 435.75GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address 69.50.219.51
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 786.50GB
tier_free_capacity 352.25GB
has_nas_key no
layer appliance
Use the lssystemstats command to display the most recent values of all node statistics
across all nodes in a clustered system, as shown in Example 9-86.
Example 9-86 lssystemstats command
IBM_2145:ITSO_SVC1:admin>lssystemstats
stat_name stat_current stat_peak stat_peak_time
cpu_pc 1 1 110927162859
fc_mb 0 0 110927162859
fc_io 7091 7314 110927162524
sas_mb 0 0 110927162859
sas_io 0 0 110927162859
iscsi_mb 0 0 110927162859
iscsi_io 0 0 110927162859
write_cache_pc 0 0 110927162859
total_cache_pc 0 0 110927162859
vdisk_mb 0 0 110927162859
vdisk_io 0 0 110927162859
vdisk_ms 0 0 110927162859
mdisk_mb 0 0 110927162859
mdisk_io 0 0 110927162859
mdisk_ms 0 0 110927162859
drive_mb 0 0 110927162859
drive_io 0 0 110927162859
drive_ms 0 0 110927162859
vdisk_r_mb 0 0 110927162859
vdisk_r_io 0 0 110927162859
vdisk_r_ms 0 0 110927162859
vdisk_w_mb 0 0 110927162859
vdisk_w_io 0 0 110927162859
vdisk_w_ms 0 0 110927162859
mdisk_r_mb 0 0 110927162859
mdisk_r_io 0 0 110927162859
mdisk_r_ms 0 0 110927162859
mdisk_w_mb 0 0 110927162859
mdisk_w_io 0 0 110927162859
mdisk_w_ms 0 0 110927162859
drive_r_mb 0 0 110927162859
drive_r_io 0 0 110927162859
drive_r_ms 0 0 110927162859
drive_w_mb 0 0 110927162859
drive_w_io 0 0 110927162859
drive_w_ms 0 0 110927162859
9.9.2 Changing system settings
Use the chsystem command to change the settings of the system. This command modifies
specific features of a clustered system. You can change multiple features by issuing a single
command.
All command parameters are optional; however, you must specify at least one parameter.
Important considerations:
Starting with SAN Volume Controller 6.3, the svctask chcluster command is changed
to chsystem, and several optional parameters have moved to new commands. For
example, to change the IP address of the system, you can now use chsystemip
command. All the old commands are maintained for script compatibility reasons.
Changing the speed on a running system breaks I/O service to the attached hosts.
Before changing the fabric speed, stop the I/O from the active hosts and force these
hosts to flush any cached data by unmounting volumes (for UNIX host types) or by
removing drive letters (for Windows host types). You might need to reboot specific hosts
to detect the new fabric speed.
Example 9-87 shows configuring the Network Time Protocol (NTP) IP address.
Example 9-87 chsystem command
IBM_2145:ITSO_SVC1:admin>chsystem -ntpip 10.200.80.1
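Because all chsystem parameters are optional, several settings can be combined in one
invocation. The following line is a minimal sketch only; it assumes that -gmlinktolerance
(which corresponds to the gm_link_tolerance field shown by lssystem) is accepted by
chsystem on your code level:
IBM_2145:ITSO_SVC1:admin>chsystem -ntpip 10.200.80.1 -gmlinktolerance 300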
9.9.3 iSCSI configuration
SAN Volume Controller 5.1 introduced the IP-based Small Computer System Interface
(iSCSI) as a supported method of communication between the SAN Volume Controller and
hosts. All back-end storage and intracluster communication still use FC and the SAN, so
iSCSI cannot be used for that type of communication.
We describe in detail how iSCSI works in 2.6, iSCSI overview on page 31. In this section, we
show how we configured our system for use with iSCSI.
We configured our nodes to use the primary and secondary Ethernet ports for iSCSI; these
same ports also carry the clustered system IP. Configuring the nodes for iSCSI did not affect
our clustered system IP. The clustered system IP is changed separately, as shown in 9.9.2,
Changing system settings on page 523.
It is important to know that the relationship between IP addresses and physical connections
does not have to be one-to-one. A four-to-one (4:1) relationship is possible, consisting of two
IPv4 addresses plus two IPv6 addresses (four in total) on one physical connection per port
per node.
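As an illustration of this capability, the following sketch adds an IPv6 address to a port that
already carries an IPv4 address. It assumes the IPv6 forms of the cfgportip parameters
(-ip_6, -gw_6, and -prefix_6) and uses example addresses only:
IBM_2145:ITSO_SVC1:admin>cfgportip -node 1 -ip_6 fd09:5030:beef:cafe::88 -gw_6 fd09:5030:beef:cafe::1 -prefix_6 64 2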
There are two ways to perform iSCSI authentication or Challenge Handshake Authentication
Protocol (CHAP), either for the whole clustered system or per host connection. Example 9-88
shows configuring CHAP for the whole clustered system.
Example 9-88 Setting a CHAP secret for the entire clustered system to passw0rd
IBM_2145:ITSO_SVC1:admin>chsystem -iscsiauthmethod chap -chapsecret passw0rd
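To set CHAP per host connection instead of for the whole clustered system, a hedged sketch
follows. It assumes a host object named Kanaga and that the chhost command accepts the
-chapsecret parameter on your code level:
IBM_2145:ITSO_SVC1:admin>chhost -chapsecret passw0rd Kanaga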
In our scenario, we have a clustered system IP of 9.64.210.64, which is not affected during
our configuration of the node IP addresses.
We start by listing our ports using the lsportip command (not shown). We see that we have
two ports per node with which to work. Both ports can have two IP addresses that can be
used for iSCSI.
We configure the secondary port in both nodes in our I/O Group, as shown in Example 9-89.
Example 9-89 Configuring secondary Ethernet port on SAN Volume Controller nodes
IBM_2145:ITSO_SVC1:admin>cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2
IBM_2145:ITSO_SVC1:admin>cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2
While both nodes are online, each node is available to iSCSI hosts on the IP address that we
configured. iSCSI failover between nodes is enabled automatically. Therefore, if a node goes
offline for any reason, its partner node in the I/O Group becomes available on the failed
node's port IP address. This design ensures that hosts can continue to perform I/O. The
lsportip command displays the port IP addresses that are currently active on each node.
Tip: When reconfiguring IP ports, be aware that already configured iSCSI connections will
need to reconnect if changes are made to the IP addresses of the nodes.
9.9.4 Modifying IP addresses
We can use both IP ports of the nodes. However, all IP information is required the first time
that you configure a second port, because port 1 on the system must always have one stack
fully configured.
There are now two active system ports on the configuration node. If the clustered system IP
address is changed, the open command-line shell closes during the processing of the
command, and you must reconnect to the new IP address if you were connected through that
port. If a node cannot rejoin the clustered system, you can bring the node up in service mode.
In this mode, the node can be accessed as a stand-alone node using the service IP address.
We discuss the service IP address in more detail in 9.20, Working with the Service Assistant
menu on page 623.
List the IP addresses of the clustered system by issuing the lssystemip command, as shown
in Example 9-90.
Example 9-90 lssystemip command
IBM_2145:ITSO_SVC1:admin>lssystemip
cluster_id cluster_name location port_id IP_address subnet_mask gateway
IP_address_6 prefix_6 gateway_6
000002006BE04FC4 ITSO_SVC1 local 1 10.18.228.81 255.255.255.0 10.18.228.1
fd09:5030:beef:cafe:0000:0000:0000:0083 64 fd09:5030:beef:cafe:0000:0000:0000:0001
000002006BE04FC4 ITSO_SVC1 local 2
000002006AC03A42 ITSO_SVC2 remote 1 10.18.228.82 255.255.255.0 10.18.228.1
000002006AC03A42 ITSO_SVC2 remote 2
0000020060A06FB8 ITSO_SVC3 remote 1 10.18.228.83 255.255.255.0 10.18.228.83
fdee:beeb:beeb:0000:0000:0000:0000:0083 48 fdee:beeb:beeb:0000:0000:0000:0000:0083
0000020060A06FB8 ITSO_SVC3 remote 2
Modify the IP address by issuing the chsystemip command. You can either specify a static IP
address or have the system assign a dynamic IP address, as shown in Example 9-91.
Example 9-91 chsystemip -systemip
IBM_2145:ITSO_SVC1:admin>chsystemip -systemip 10.20.133.5 -gw 10.20.135.1 -mask
255.255.255.0 -port 1
This command changes the current IP address of the clustered system to 10.20.133.5.
List the service IP addresses of the clustered system by issuing the lsserviceip command.
Important: If you specify a new system IP address, the existing communication with the
system through the CLI is broken and the PuTTY application automatically closes. You
must relaunch the PuTTY application and point to the new IP address, but your SSH key
still works.
9.9.5 Supported IP address formats
Table 9-1 lists the supported IP address formats.
Table 9-1   ip_address_list formats
IP type                                                     ip_address_list format
IPv4 (no port set, SAN Volume Controller uses default)     1.2.3.4
IPv4 with specific port                                     1.2.3.4:22
Full IPv6, default port                                     1234:1234:0001:0123:1234:1234:1234:1234
Full IPv6, default port, leading zeros suppressed           1234:1234:1:123:1234:1234:1234:1234
Full IPv6 with port                                         [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23
Zero-compressed IPv6, default port                          2002::4ff6
Zero-compressed IPv6 with port                              [2002::4ff6]:23
At this point, we have completed the tasks that are required to change the IP addresses of the
clustered system.
9.9.6 Setting the clustered system time zone and time
Use the -timezone parameter to specify the numeric ID of the time zone that you want to set.
Issue the lstimezones command to list the time zones that are available on the system; this
command displays a list of valid time zone settings.
Setting the clustered system time zone
Perform the following steps to set the clustered system time zone and time:
1. Find out for which time zone your system is currently configured. Enter the showtimezone
command, as shown in Example 9-92.
Example 9-92 showtimezone command
IBM_2145:ITSO_SVC1:admin>showtimezone
id timezone
522 UTC
2. To find the time zone code that is associated with your time zone, enter the lstimezones
command, as shown in Example 9-93. A truncated list is provided for this example. If this
setting is correct (for example, 522 UTC), go to Step 4. If not, continue with Step 3.
Example 9-93 lstimezones command
IBM_2145:ITSO_SVC1:admin>lstimezones
id timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.
3. Now that you know which time zone code is correct for you, set the time zone by issuing
the settimezone command (Example 9-94 on page 527).
Example 9-94 settimezone command
IBM_2145:ITSO_SVC1:admin>settimezone -timezone 520
Tip: If you have changed the time zone, you must clear the event log dump directory before
you can view the event log through the web application.
4. Set the system time by issuing the setclustertime command (Example 9-95).
Example 9-95 setclustertime command
IBM_2145:ITSO_SVC1:admin>setclustertime -time 061718402008
The format of the time is MMDDHHmmYYYY.
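For example, the value 061718402008 that is used in Example 9-95 sets the clustered system
time to 18:40 on June 17, 2008.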
You have completed the necessary tasks to set the clustered system time zone and time.
9.9.7 Starting statistics collection
Statistics are collected at the end of each sampling period (as specified by the -interval
parameter). These statistics are written to a file. A new file is created at the end of each
sampling period. Separate files are created for MDisks, volumes, and node statistics.
Use the startstats command to start the collection of statistics, as shown in Example 9-96.
Example 9-96 startstats command
IBM_2145:ITSO_SVC1:admin>startstats -interval 15
The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts
statistics collection and gathers data at 15-minute intervals.
Statistics collection: To verify that the statistics collection is set, display the system
properties again, as shown in Example 9-97.
Example 9-97 Statistics collection status and frequency
IBM_2145:ITSO_SVC1:admin>lssystem
statistics_status on
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --
SAN Volume Controller 6.3: Starting with SAN Volume Controller 6.3, the command
svctask stopstats has been removed. You cannot disable the statistics collection.
At this point, we have completed the required tasks to start the statistics collection on the
clustered system.
9.9.8 Determining the status of a copy operation
Use the lscopystatus command, as shown in Example 9-98 on page 528, to determine if a
file copy operation is in progress. Only one file copy operation can be performed at a time.
The output of this command is a status of active or inactive.
Example 9-98 lscopystatus command
IBM_2145:ITSO_SVC1:admin>lscopystatus
status
inactive
9.9.9 Shutting down a clustered system
If all input power to a SAN Volume Controller system is to be removed for more than a few
minutes (for example, if the machine room power is to be shut down for maintenance), it is
important to shut down the clustered system before removing the power. If the input power is
removed from the uninterruptible power supply units without first shutting down the system
and the uninterruptible power supply units, the uninterruptible power supply units remain
operational and eventually become drained of power.
When input power is restored to the uninterruptible power supply units, they start to recharge.
However, the SAN Volume Controller does not permit any I/O activity to be performed to the
volumes until the uninterruptible power supply units are charged enough to enable all of the
data on the SAN Volume Controller nodes to be destaged in the event of a subsequent
unexpected power loss. Recharging the uninterruptible power supply can take as long as two
hours.
Shutting down the clustered system prior to removing input power to the uninterruptible power
supply units prevents the battery power from being drained. It also makes it possible for I/O
activity to be resumed as soon as input power is restored.
Important: Before shutting down a clustered system, ensure that all I/O operations that are
destined for this system are stopped, because you will lose all access to all volumes being
provided by this system. Failure to do so can result in failed I/O operations being reported
to the host operating systems. Begin the process of quiescing all I/O to the system by
stopping the applications on the hosts that are using the volumes provided by the clustered
system.
You can use the following procedure to shut down the system:
1. Use the stopsystem command to shut down your SAN Volume Controller system
(Example 9-99).
Example 9-99 stopsystem command
IBM_2145:ITSO_SVC1:admin>stopsystem
Are you sure that you want to continue with the shut down?
This command shuts down the SAN Volume Controller clustered system. All data is
flushed to disk before the power is removed. At this point, you lose administrative contact
with your system, and the PuTTY application automatically closes.
2. You will be presented with the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you have stopped all FlashCopy mappings, Metro Mirror (remote copy)
relationships, data migration operations, and forced deletions before continuing. Entering
y (or yes) to this message executes the command; entering anything else cancels it. In
either case, no feedback is displayed.
3. We have completed the tasks that are required to shut down the system. To shut down the
uninterruptible power supply units, press the power-on button on the front panel of each
uninterruptible power supply unit.
Restarting the system: To restart the clustered system, you must first restart the
uninterruptible power supply units by pressing the power button on their front panels.
Then, press the power-on button on the service panel of one of the nodes within the
system. After the node is fully booted (for example, displaying Cluster: on line 1 and
the cluster name on line 2 of the panel), you can start the other nodes in the same way.
As soon as all of the nodes are fully booted, you can reestablish administrative contact
using PuTTY, and your system will be fully operational again.
9.10 Nodes
In this section, we describe the tasks that can be performed at an individual node level.
9.10.1 Viewing node details
Use the lsnode command to view the summary information about the nodes that are defined
within the SAN Volume Controller environment. To view more details about a specific node,
append the node name (for example, SVC1N1) to the command.
Example 9-100 shows both of these commands.
Tip: The -delim parameter truncates the content in the window and separates data fields
with the specified delimiter character (a comma in this example) as opposed to wrapping
text over multiple lines.
Example 9-100 lsnode command
IBM_2145:ITSO_SVC1:admin>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,h
ardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC1N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC1:admin>lsnode SVC1N1
id 1
name SVC1N1
UPS_serial_number 1000739004
WWNN 50050768010027E2
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name SVC1N2
config_node no
UPS_unique_id 10000000000027E2
port_id 50050768014027E2
port_status active
port_speed 2Gb
port_id 50050768013027E2
port_status active
port_speed 2Gb
port_id 50050768011027E2
port_status active
port_speed 2Gb
port_id 50050768012027E2
port_status active
port_speed 2Gb
hardware 8G4
iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.svc1n1
iscsi_alias
failover_active no
failover_name SVC1N2
failover_iscsi_name iqn.1986-03.com.ibm:2145.itsosvc1.svc1n2
failover_iscsi_alias
panel_name 108283
enclosure_id
canister_id
enclosure_serial_number
service_IP_address 10.18.228.101
service_gateway 10.18.228.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6
9.10.2 Adding a node
After clustered system creation is completed through the service panel (the front panel of one
of the SAN Volume Controller nodes) and the system web interface, only one node (the
configuration node) is set up.
To have a fully functional SAN Volume Controller system, you must add a second node to the
configuration. To add a node to a clustered system, gather the necessary information, as
explained in these steps:
1. Before you can add a node, you must know which unconfigured nodes are available as
candidates. Issue the lsnodecandidate command (Example 9-101).
Example 9-101 lsnodecandidate command
IBM_2145:ITSO_SVC1:admin>lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id hardware
50050768010037E5 104643 1000739007 10000000000037E5
8G4
2. You must specify to which I/O Group you are adding the node. If you enter the lsnode
command, you can easily identify the I/O Group ID of the group to which you are adding
your node, as shown in Example 9-102 on page 531.
Example 9-102 lsnode command
IBM_2145:ITSO_SVC1:admin>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS
_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,
enclosure_serial_number
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,i
qn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,
3. Now that we know the available nodes, we can use the addnode command to add the node
to the SAN Volume Controller clustered system configuration. Example 9-103 shows the
command to add a node to the SAN Volume Controller system.
Example 9-103 addnode (wwnodename) command
IBM_2145:ITSO_SVC1:admin>addnode -wwnodename 50050768010037E5 -iogrp io_grp1
Node, id [5], successfully added
This command adds the candidate node with the wwnodename of 50050768010037E5 to
the I/O Group called io_grp1.
We used the -wwnodename parameter (50050768010037E5). However, we can also use the
-panelname parameter (104643) instead, as shown in Example 9-104. If standing in front of
the node, it is easier to read the panel name than it is to get the worldwide node name
(WWNN).
Example 9-104 addnode (panelname) command
IBM_2145:ITSO_SVC1:admin>addnode -panelname 104643 -name SVC1N3 -iogrp io_grp1
We also used the optional -name parameter (SVC1N3). If you do not provide the -name
parameter, the SAN Volume Controller automatically generates the name nodex (where x
is the ID sequence number that is assigned internally by the SAN Volume Controller).
Name: If you want to provide a name, you can use letters A to Z and a to z, numbers 0
to 9, the dash (-), and the underscore (_). The name can be between one and 63
characters in length. However, the name cannot start with a number, dash, or the word
node (because this prefix is reserved for SAN Volume Controller assignment only).
Tip: The node that you want to add must have a separate uninterruptible power supply
unit serial number from the uninterruptible power supply unit on the first node.
4. If the addnode command returns no information, check that your second node is powered
on and that the zones are correctly defined; preexisting system configuration data might
also be stored in the node. If you are sure that this node is not part of another active SAN
Volume Controller system, you can use the service panel to delete the existing system
information. After this action is complete, reissue the lsnodecandidate command and you
will see the node listed.
9.10.3 Renaming a node
Use the chnode command to rename a node within the SAN Volume Controller system
configuration, as shown in Example 9-105.
Example 9-105 chnode -name command
IBM_2145:ITSO_SVC1:admin>chnode -name ITSO_SVC1_SVC1N3 4
This command renames node ID 4 to ITSO_SVC1_SVC1N3.
Name: The chnode command specifies the new name first. You can use letters A to Z and
a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one
and 63 characters in length. However, the name cannot start with a number, dash, or the
word node (because this prefix is reserved for SAN Volume Controller assignment only).
9.10.4 Deleting a node
Use the rmnode command to remove a node from the SAN Volume Controller clustered
system configuration (Example 9-106).
Example 9-106 rmnode command
IBM_2145:ITSO_SVC1:admin>rmnode SVC1N2
This command removes SVC1N2 from the SAN Volume Controller clustered system.
Important: If this node is the last node in an I/O Group, and there are volumes still
assigned to the I/O Group, the node is not deleted from the clustered system.
If this node is the last node in the system, and the I/O Group has no volumes remaining,
the clustered system is destroyed and all virtualization information is lost. Any data that is
still required must be backed up or migrated prior to destroying the system.
Because SVC1N2 was also the configuration node, the SAN Volume Controller transfers the
configuration node responsibilities to a surviving node, within the I/O Group. Unfortunately,
the PuTTY session cannot be dynamically passed to the surviving node. Therefore, the
PuTTY application loses communication and closes automatically.
We must restart the PuTTY application to establish a secure session with the new
configuration node.
9.10.5 Shutting down a node
On occasion, it can be necessary to shut down a single node within the clustered system to
perform tasks, such as scheduled maintenance, while leaving the SAN Volume Controller
environment up and running.
Use the stopcluster -node command, as shown in Example 9-107, to shut down a single
node.
Example 9-107 stopcluster -node command
IBM_2145:ITSO_SVC1:admin>stopcluster -node SVC1N3
Are you sure that you want to continue with the shut down?
This command shuts down node SVC1N3 in a graceful manner. When this node has been shut
down, the other node in the I/O Group will destage the contents of its cache and will go into
write-through mode until the node is powered up and rejoins the clustered system.
If this node is the last node in an I/O Group, all access to the volumes in the I/O Group will be
lost. Verify that you want to shut down this node before executing this command; in that case,
you must also specify the -force flag.
Important: There is no need to stop FlashCopy mappings, remote copy relationships, and
data migration operations. The other node will handle these activities, but be aware that the
system has a single point of failure now.
Restart: To restart the node manually, press the power-on button that is located on the
service panel of the node.
By reissuing the lsnode command (Example 9-108), we can see that the node is now offline.
Example 9-108 lsnode command
IBM_2145:ITSO_SVC1:admin>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,h
ardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,enclosure_serial_number
1,SVC1N1,1000739004,50050768010027E2,online,0,io_grp0,no,10000000000027E2,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n1,,108283,,,
2,SVC1N2,1000739005,5005076801005034,online,0,io_grp0,yes,1000000000005034,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n2,,110711,,,
3,SVC1N4,1000739006,500507680100505C,online,1,io_grp1,no,20400001C3240004,8G4,iqn.1986-03.c
om.ibm:2145.itsosvc1.svc1n4,,110775,,,
4,SVC1N3,1000739007,50050768010037E5,offline,1,io_grp1,no,10000000000037E5,8G4,iqn.1986-03.
com.ibm:2145.itsosvc1.svc1n3,,104643,,,
IBM_2145:ITSO_SVC1:admin>lsnode SVC1N3
CMMVC5782E The object specified is offline.
At this point, we have completed the tasks that are required to view, add, delete, rename, and
shut down a node within a SAN Volume Controller environment.
9.11 I/O Groups
In this section, we explain the tasks that you can perform at an I/O Group level.
9.11.1 Viewing I/O Group details
Use the lsiogrp command, as shown in Example 9-109, to view information about the I/O
Groups that are defined within the SAN Volume Controller environment.
Example 9-109 I/O Group details
IBM_2145:ITSO_SVC1:admin>lsiogrp
id name node_count vdisk_count host_count
0 io_grp0 2 24 9
1 io_grp1 2 22 9
2 io_grp2 0 0 1
3 io_grp3 0 0 1
4 recovery_io_grp 0 0 0
As shown, the SAN Volume Controller predefines five I/O Groups. In a four-node clustered
system (similar to our example), only two I/O Groups are actually in use. The other I/O
Groups (io_grp2 and io_grp3) are for a six-node or eight-node clustered system.
The recovery I/O Group is a temporary home for volumes when all nodes in the I/O Group
that normally owns them have suffered multiple failures. This design allows us to move the
volumes to the recovery I/O Group and then into a working I/O Group. Note that while
temporarily assigned to the recovery I/O Group, I/O access is not possible.
9.11.2 Renaming an I/O Group
Use the chiogrp command to rename an I/O Group (Example 9-110).
Example 9-110 chiogrp command
IBM_2145:ITSO_SVC1:admin>chiogrp -name io_grpA io_grp1
This command renames the I/O Group io_grp1 to io_grpA.
Name: The chiogrp command specifies the new name first.
If you want to provide a name, you can use letters A to Z, letters a to z, numbers 0 to 9, the
dash (-), and the underscore (_). The name can be between one and 63 characters in
length. However, the name cannot start with a number, dash, or the word iogrp (because
this prefix is reserved for SAN Volume Controller assignment only).
To see whether the renaming was successful, issue the lsiogrp command again to see the
change.
At this point, we have completed the tasks that are required to rename an I/O Group.
9.11.3 Adding and removing hostiogrp
Mapping host objects to specific I/O Groups enables you to reach the maximum number of
hosts that is supported by a SAN Volume Controller clustered system. Use the addhostiogrp
command to map a specific host to a specific I/O Group, as shown in Example 9-111.
Example 9-111 addhostiogrp command
IBM_2145:ITSO_SVC1:admin>addhostiogrp -iogrp 1 Kanaga
The addhostiogrp command uses these parameters:
-iogrp iogrp_list -iogrpall
Specify a list of one or more I/O Groups that must be mapped to the host. This parameter
is mutually exclusive with the -iogrpall option. The -iogrpall option specifies that all
the I/O Groups must be mapped to the specified host. This parameter is mutually
exclusive with -iogrp.
host_id_or_name
Identify the host, either by ID or name, to which the I/O Groups must be mapped.
Use the rmhostiogrp command to unmap a specific host to a specific I/O Group, as shown in
Example 9-112.
Example 9-112 rmhostiogrp command
IBM_2145:ITSO_SVC1:admin>rmhostiogrp -iogrp 0 Kanaga
The rmhostiogrp command uses these parameters:
-iogrp iogrp_list -iogrpall
Specify a list of one or more I/O Groups that must be unmapped from the host. This
parameter is mutually exclusive with the -iogrpall option. The -iogrpall option specifies
that all of the I/O Groups must be unmapped from the specified host. This parameter is
mutually exclusive with -iogrp.
-force
If the removal of a host to I/O Group mapping will result in the loss of volume to host
mappings, the command fails if the -force flag is not used. The -force flag, however,
overrides this behavior and forces the deletion of the host to I/O Group mapping.
host_id_or_name
Identify the host, either by ID or name, from which the I/O Groups must be unmapped.
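For example, if unmapping host Kanaga from I/O Group 0 would remove volume-to-host
mappings, the command fails unless -force is specified. A sketch of the forced form
(illustration only):
IBM_2145:ITSO_SVC1:admin>rmhostiogrp -force -iogrp 0 Kanaga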
9.11.4 Listing I/O Groups
To list all of the I/O Groups that are mapped to the specified host and vice versa, use the
lshostiogrp command, specifying the host name Kanaga, as shown in Example 9-113.
Example 9-113 lshostiogrp command
IBM_2145:ITSO_SVC1:admin>lshostiogrp Kanaga
id name
1 io_grp1
To list all of the host objects that are mapped to the specified I/O Group, use the lsiogrphost
command, as shown in Example 9-114.
Example 9-114 lsiogrphost command
IBM_2145:ITSO_SVC1:admin> lsiogrphost io_grp1
id name
1 Nile
2 Kanaga
3 Siam
In Example 9-114, io_grp1 is the I/O Group name.
9.12 Managing authentication
In the following sections, we illustrate authentication administration.
9.12.1 Managing users using the CLI
Here, we demonstrate how to operate and manage authentication by using the CLI. All users
must now be a member of a predefined user group. You can list those groups by using the
lsusergrp command, as shown in Example 9-115.
Example 9-115 lsusergrp command
IBM_2145:ITSO_SVC1:admin>lsusergrp
id name role remote
0 SecurityAdmin SecurityAdmin no
1 Administrator Administrator no
2 CopyOperator CopyOperator no
3 Service Service no
4 Monitor Monitor no
Example 9-116 is a simple example of creating a user. User John is added to the user group
Monitor with the password m0nitor.
Example 9-116 mkuser called John with password m0nitor
IBM_2145:ITSO_SVC1:admin>mkuser -name John -usergrp Monitor -password m0nitor
User, id [6], successfully created
Local users are users that are not authenticated by a remote authentication server. Remote
users are users that are authenticated by a remote central registry server.
The user groups already have a defined authority role, as listed in Table 9-2.
Table 9-2   Authority roles
User group: Security admin
Role: All commands
User: Superusers
User group: Administrator
Role: All commands except: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
User: Administrators that control the SAN Volume Controller
User group: Copy operator
Role: All display commands and the following commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
User: For users that control all of the copy functionality of the cluster
User group: Service
Role: All display commands and the following commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
User: For users that perform service maintenance and other hardware tasks on the system
User group: Monitor
Role: All display commands and the following commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser, and svcconfig: backup
User: For users that only need view access
9.12.2 Managing user roles and groups
Role-based security commands are used to restrict the administrative abilities of a user. We
cannot create new user roles, but we can create new user groups and assign a predefined
role to our group.
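For example, the following sketch creates a user group with the predefined Administrator
role; the group name ITSO_Admins is an arbitrary example and not part of our configuration:
IBM_2145:ITSO_SVC1:admin>mkusergrp -name ITSO_Admins -role Administrator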
As of SAN Volume Controller 6.3, you can connect to the clustered system using the same
user name with which you log into a SAN Volume Controller GUI.
To view the user groups and their associated roles on your system, use the lsusergrp
command, as shown in Example 9-117.
Example 9-117 lsusergrp command
IBM_2145:ITSO_SVC1:admin>lsusergrp
id name role remote
0 SecurityAdmin SecurityAdmin no
1 Administrator Administrator no
2 CopyOperator CopyOperator no
3 Service Service no
4 Monitor Monitor no
To view our currently defined users and the user groups to which they belong, we use the
lsuser command, as shown in Example 9-118.
Example 9-118 lsuser command
IBM_2145:ITSO_SVC1:admin>lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,admin,yes,yes,no,0,SecurityAdmin
2,Torben,yes,no,no,0,SecurityAdmin
3,Massimo,yes,no,no,1,Administrator
4,Christian,yes,no,no,1,Administrator
5,Alejandro,yes,no,no,1,Administrator
6,John,yes,no,no,4,Monitor
9.12.3 Changing a user
To change user passwords, issue the chuser command.
The chuser command allows you to modify a user that has already been created. You can
rename a user, assign a new password (if you are logged on with administrative privileges),
and move a user from one user group to another. Be aware, however, that a user can be a
member of only one group at a time.
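The following lines are a minimal sketch of both operations, using the user John that was
created earlier and assuming that chuser accepts the -password and -usergrp parameters
on your code level:
IBM_2145:ITSO_SVC1:admin>chuser -password newpassw0rd John
IBM_2145:ITSO_SVC1:admin>chuser -usergrp CopyOperator John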
9.12.4 Audit log command
The audit log is extremely helpful in showing which commands have been entered on a
system. Most action commands that are issued by the old or new CLI are recorded in the
audit log:
The native GUI performs actions by using the CLI programs.
The SAN Volume Controller Console performs actions by issuing Common Information
Model (CIM) commands to the CIM object manager (CIMOM), which then runs the CLI
programs.
Actions performed by using both the native GUI and the SAN Volume Controller Console are
recorded in the audit log.
Certain commands are not audited:
dumpconfig
cpdumps
cleardumps
finderr
dumperrlog
dumpinternallog
svcservicetask dumperrlog
svcservicetask finderror
The audit log contains approximately 1 MB of data, which can contain about 6,000 average
length commands. When this log is full, the system copies it to a new file in the /dumps/audit
directory on the config node and resets the in-memory audit log.
To display entries from the audit log, use the catauditlog -first 5 command to return a list
of five in-memory audit log entries, as shown in Example 9-119.
Example 9-119 catauditlog command
IBM_2145:ITSO_SVC1:admin>catauditlog -first 5
audit_seq_no timestamp cluster_user ssh_ip_address result res_obj_id action_cmd
459 110928150506 admin 10.18.228.173 0 6 svctask mkuser
-name John -usergrp Monitor -password '######'
460 110928160353 admin 10.18.228.173 0 7 svctask mkmdiskgrp
-name DS5000-2 -ext 256
461 110928160535 admin 10.18.228.173 0 1 svctask mkhost
-name hostone -hbawwpn 210100E08B251DD4 -force -mask 1001
462 110928160755 admin 10.18.228.173 0 1 svctask mkvdisk
-iogrp 0 -mdiskgrp 3 -size 10 -unit gb -vtype striped -autoexpand -grainsize 32 -rsize 20%
463 110928160817 admin 10.18.228.173 0 svctask rmvdisk 1
If you need to dump the contents of the in-memory audit log to a file on the current
configuration node, use the dumpauditlog command. This command does not provide any
feedback; it only provides the prompt. To obtain a list of the audit log dumps, use the lsdumps
command, as shown in Example 9-120.
Example 9-120 lsdumps command
IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0 dump.110711.110914.182844
1 svc.config.cron.bak_108283
2 sel.110711.trc
3 endd.trc
4 rtc.race_mq_log.txt.110711.trc
5 dump.110711.110920.102530
6 ethernet.110711.trc
7 svc.config.cron.bak_110711
8 svc.config.cron.xml_110711
9 svc.config.cron.log_110711
10 svc.config.cron.sh_110711
11 110711.trc
9.13 Managing Copy Services
In the following sections, we illustrate how to manage Copy Services.
9.13.1 FlashCopy operations
In this section, we use a scenario to illustrate how to use commands with PuTTY to perform
FlashCopy. See the IBM System Storage Open Software Family SAN Volume Controller:
Command-Line Interface Users Guide, GC27-2287, for information about other commands.
Scenario description
We use the following scenario in both the CLI section and the GUI section. In the following
scenario, we want to FlashCopy the following volumes:
DB_Source Database files
Log_Source Database log files
App_Source Application files
We create Consistency Groups to handle the FlashCopy of DB_Source and Log_Source,
because data integrity must be kept on DB_Source and Log_Source.
In our scenario, the application files are independent of the database, so we create a single
FlashCopy mapping for App_Source. We will make two FlashCopy targets for DB_Source and
Log_Source and therefore, two Consistency Groups. Figure 9-6 on page 540 shows the
scenario.
Figure 9-6 FlashCopy scenario
9.13.2 Setting up FlashCopy
We have already created the source and target volumes. The source and target volumes are
identical in size, which is a requirement of the FlashCopy function:
DB_Source, DB_Target1, and DB_Target2
Log_Source, Log_Target1, and Log_Target2
App_Source and App_Target1
To set up the FlashCopy, we perform the following steps:
1. Create two FlashCopy Consistency Groups:
FCCG1
FCCG2
2. Create FlashCopy mappings for Source volumes:
DB_Source FlashCopy to DB_Target1, the mapping name is DB_Map1
DB_Source FlashCopy to DB_Target2, the mapping name is DB_Map2
Log_Source FlashCopy to Log_Target1, the mapping name is Log_Map1
Log_Source FlashCopy to Log_Target2, the mapping name is Log_Map2
App_Source FlashCopy to App_Target1, the mapping name is App_Map1
All mappings use a background copy rate (copy rate) of 50.
9.13.3 Creating a FlashCopy Consistency Group
To create a FlashCopy Consistency Group, we use the command mkfcconsistgrp to create a
new Consistency Group. The ID of the new group is returned. If you have created several
FlashCopy mappings for a group of volumes that contain elements of data for the same
application, it might be convenient to assign these mappings to a single FlashCopy
Consistency Group. Then, you can issue a single prepare or start command for the whole
group so that, for example, all files for a particular database are copied at the same time.
In Example 9-121, the FCCG1 and FCCG2 Consistency Groups are created to hold the
FlashCopy maps of DB and Log. This step is extremely important for FlashCopy on database
applications, because it helps to maintain data integrity during FlashCopy.
Example 9-121 Creating two FlashCopy Consistency Groups
IBM_2145:ITSO_SVC3:admin>mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created
In Example 9-122, we checked the status of the Consistency Groups. Each Consistency
Group has a status of empty.
Example 9-122 Checking the status
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name status
1 FCCG1 empty
2 FCCG2 empty
If you want to change the name of a Consistency Group, you can use the chfcconsistgrp
command. Type chfcconsistgrp -h for help with this command.
9.13.4 Creating a FlashCopy mapping
To create a FlashCopy mapping, we use the mkfcmap command. This command creates a
new FlashCopy mapping, which maps a source volume to a target volume to prepare for
subsequent copying.
When executed, this command creates a new FlashCopy mapping logical object. This
mapping persists until it is deleted. The mapping specifies the source and destination
volumes. The destination must be identical in size to the source or the mapping will fail. Issue
the lsvdisk -bytes command to find the exact size of the source volume for which you want
to create a target disk of the same size.
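For example, to display the exact size of the DB_Source volume in bytes before creating its
target:
IBM_2145:ITSO_SVC3:admin>lsvdisk -bytes DB_Source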
In a single mapping, source and destination cannot be on the same volume. A mapping is
triggered at the point in time when the copy is required. The mapping can optionally be given
a name and assigned to a Consistency Group. These groups of mappings can be triggered at
the same time, enabling multiple volumes to be copied at the same time, which creates a
consistent copy of multiple disks. A consistent copy of multiple disks is required for database
products in which the database and log files reside on separate disks.
If no Consistency Group is defined, the mapping is assigned to the default group 0, which is a
special group that cannot be started as a whole. Mappings in this group can only be started
on an individual basis.
The background copy rate specifies the priority that must be given to completing the copy. If 0
is specified, the copy will not proceed in the background. The default is 50.
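If you want a different priority, the rate can be set when the mapping is created. The following
line is a sketch only (our scenario keeps the default of 50); it assumes the -copyrate
parameter of mkfcmap:
IBM_2145:ITSO_SVC3:admin>mkfcmap -source DB_Source -target DB_Target1 -name DB_Map1 -consistgrp FCCG1 -copyrate 80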
In Example 9-123, the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
is created.
Example 9-123 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO_SVC3:admin>mkfcmap -source DB_Source -target DB_Target1 -name
DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Log_Source -target Log_Target1 -name
Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source App_Source -target App_Target1 -name
App_Map1
FlashCopy Mapping, id [2], successfully created
Example 9-124 shows the command to create a second FlashCopy mapping for volume
DB_Source and Log_Source.
Example 9-124 Create additional FlashCopy mappings
IBM_2145:ITSO_SVC3:admin>mkfcmap -source DB_Source -target DB_Target2 -name
DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Log_Source -target Log_Target2 -name
Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created
Example 9-125 shows the result of these FlashCopy mappings. The status of the mapping is
idle_or_copied.
Example 9-125 Check the result of Multiple Target FlashCopy mappings
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name
group_id group_name status progress copy_rate clean_progress incremental
partner_FC_id partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1
FCCG1 idle_or_copied 0 50 100 off
no no
1 Log_Map1 6 Log_Source 7 Log_Target1 1
FCCG1 idle_or_copied 0 50 100 off
no no
2 App_Map1 9 App_Source 10 App_Target1
idle_or_copied 0 50 100 off
no no
3 DB_Map2 3 DB_Source 5 DB_Target2 2
FCCG2 idle_or_copied 0 50 100 off
no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2
FCCG2 idle_or_copied 0 50 100 off
no no
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied
Tip: There is a parameter to delete FlashCopy mappings automatically after the completion
of a background copy (when the mapping gets to the idle_or_copied state). Use the
mkfcmap -autodelete parameter when creating the mapping. This option does not delete a
mapping in a cascade with dependent mappings, because such a mapping cannot get to
the idle_or_copied state in this situation.
If you want to change the FlashCopy mapping, you can use the chfcmap command. Type
chfcmap -h to get help with this command.
9.13.5 Preparing (pre-triggering) the FlashCopy mapping
At this point, the mapping has been created, but the cache still accepts data for the source
volumes. You can only trigger the mapping when the cache does not contain any data for
FlashCopy source volumes. You must issue a prestartfcmap command to prepare a
FlashCopy mapping to start. This command tells the SAN Volume Controller to flush the
cache of any content for the source volume and to pass through any further write data for this
volume.
When the prestartfcmap command is executed, the mapping enters the Preparing state.
After the preparation is complete, it changes to the Prepared state. At this point, the mapping
is ready for triggering. Preparing and the subsequent triggering are usually performed on a
Consistency Group basis. Only mappings belonging to Consistency Group 0 can be prepared
on their own, because Consistency Group 0 is a special group that contains the FlashCopy
mappings that do not belong to any Consistency Group. A FlashCopy must be prepared
before it can be triggered.
In our scenario, App_Map1 is not in a Consistency Group. In Example 9-126, we show how to
initialize the preparation for App_Map1.
Another option is that you add the -prep parameter to the startfcmap command, which first
prepares the mapping and then starts the FlashCopy.
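For example, to prepare and start App_Map1 in a single step:
IBM_2145:ITSO_SVC3:admin>startfcmap -prep App_Map1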
In the example, we also show how to check the status of the current FlashCopy mapping. The
status of App_Map1 is prepared.
Example 9-126 Prepare a FlashCopy without a Consistency Group
IBM_2145:ITSO_SVC3:admin>prestartfcmap App_Map1
IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 0
clean_rate 50
incremental off
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
9.13.6 Preparing (pre-triggering) the FlashCopy Consistency Group
We use the prestartfcconsistsgrp command to prepare a FlashCopy Consistency Group.
As with 9.13.5, Preparing (pre-triggering) the FlashCopy mapping on page 543, this
command flushes the cache of any data that is destined for the source volume and forces the
cache into the write-through mode until the mapping is started. The difference is that this
command prepares a group of mappings (at a Consistency Group level) instead of one
mapping.
When you have assigned several mappings to a FlashCopy Consistency Group, you only
have to issue a single prepare command for the whole group to prepare all of the mappings at
one time.
Example 9-127 shows how we prepare the Consistency Groups for DB and Log and check the
result. After the command has executed all of the FlashCopy maps that we have, all of them
are in the prepared status and all the Consistency Groups are in the prepared status, too.
Now, we are ready to start the FlashCopy.
Example 9-127 Prepare a FlashCopy Consistency Group
IBM_2145:ITSO_SVC3:admin>prestartfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>prestartfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name status
1 FCCG1 prepared
2 FCCG2 prepared
9.13.7 Starting (triggering) FlashCopy mappings
The startfcmap command is used to start a single FlashCopy mapping. When invoked, a
point-in-time copy of the source volume is created on the target volume.
When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy
proceeds depends on the background copy rate attribute of the mapping. If the mapping is set
to 0 (NOCOPY), only data that is subsequently updated on the source will be copied to the
destination. We suggest that you use this scenario as a backup copy while the mapping exists
in the Copying state. If the copy is stopped, the destination is unusable.
If you want to end up with a duplicate copy of the source at the destination, set the
background copy rate greater than 0. This way, the system copies all of the data (even
unchanged data) to the destination and eventually reaches the idle_or_copied state. After this
data is copied, you can delete the mapping and have a usable point-in-time copy of the
source at the destination.
In Example 9-128, after the FlashCopy is started, App_Map1 changes to copying status.
Example 9-128 Start App_Map1
IBM_2145:ITSO_SVC3:admin>startfcmap App_Map1
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 DB_Map1 3 DB_Source 4 DB_Target1 1
FCCG1 prepared 0 50 0 off
no no
1 Log_Map1 6 Log_Source 7 Log_Target1 1
FCCG1 prepared 0 50 0 off
no no
2 App_Map1 9 App_Source 10 App_Target1
copying 0 50 100 off no
110929113407 no
3 DB_Map2 3 DB_Source 5 DB_Target2 2
FCCG2 prepared 0 50 0 off
no no
4 Log_Map2 6 Log_Source 8 Log_Target2 2
FCCG2 prepared 0 50 0 off
no no
IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status copying
progress 0
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
9.13.8 Starting (triggering) FlashCopy Consistency Group
We execute the startfcconsistgrp command, as shown in Example 9-129, and afterward
the database can be resumed. We have created two point-in-time consistent copies of the DB
and Log volumes. After the execution, the Consistency Group and the FlashCopy maps are all
in the copying status.
Example 9-129 Start FlashCopy Consistency Group
IBM_2145:ITSO_SVC3:admin>startfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>startfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO_SVC3:admin>
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name status
1 FCCG1 copying
2 FCCG2 copying
9.13.9 Monitoring the FlashCopy progress
To monitor the background copy progress of the FlashCopy mappings, we issue the
lsfcmapprogress command for each FlashCopy mapping.
Alternatively, you can also query the copy progress by using the lsfcmap command. As
shown in Example 9-130, DB_Map1 reports that the background copy is 23% complete,
Log_Map1 reports 41% complete, DB_Map2 reports 5% complete, and Log_Map2 reports 4%
complete.
Example 9-130 Monitoring the background copy progress
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress DB_Map1
id progress
0 23
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress Log_Map1
id progress
1 41
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress Log_Map2
id progress
4 4
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress DB_Map2
id progress
3 5
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress App_Map1
id progress
2 10
When the background copy has completed, the FlashCopy mapping enters the
idle_or_copied state. When all FlashCopy mappings in a Consistency Group enter this status,
the Consistency Group will be at idle_or_copied status.
When in this state, the FlashCopy mapping can be deleted and the target disk can be used
independently if, for example, another target disk is to be used for the next FlashCopy of the
particular source volume.
9.13.10 Stopping the FlashCopy mapping
The stopfcmap command is used to stop a FlashCopy mapping. This command allows you to
stop an active (copying) or suspended mapping. When executed, this command stops a
single FlashCopy mapping.
When a FlashCopy mapping is stopped, the target volume becomes invalid and is set offline
by the SAN Volume Controller. The FlashCopy mapping needs to be prepared again or
retriggered to bring the target volume online again.
Important: Only stop a FlashCopy mapping when the data on the target volume is not in
use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and is set offline by the SAN Volume
Controller, if the mapping is in the Copying state and progress=100.
Tip: In a Multiple Target FlashCopy environment, if you want to stop a mapping or group,
consider whether you want to keep any of the dependent mappings. If not, issue the stop
command with the -force parameter, which will stop all of the dependent maps and
negate the need for the stopping copy process to run.
Example 9-131 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has
changed to idle_or_copied.
Example 9-131 Stop App_Map1 FlashCopy
IBM_2145:ITSO_SVC3:admin>stopfcmap App_Map1
IBM_2145:ITSO_SVC3:admin>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
9.13.11 Stopping the FlashCopy Consistency Group
The stopfcconsistgrp command is used to stop any active FlashCopy Consistency Group. It
stops all mappings in the Consistency Group. When a FlashCopy Consistency Group is
stopped, the target volumes of any mappings that are not 100% copied become invalid and
are set offline by the SAN Volume Controller. The FlashCopy Consistency Group needs to be
prepared again and restarted to bring the target volumes online again.
Important: Only stop a FlashCopy mapping when the data on the target volume is not in
use, or when you want to modify the FlashCopy Consistency Group. When a Consistency
Group is stopped, the target volume might become invalid and set offline by the SAN
Volume Controller, depending on the state of the mapping.
As shown in Example 9-132, we stop the FCCG1 and FCCG2 Consistency Groups. The status of
the two Consistency Groups has changed to stopped. Most of the FlashCopy mapping
relationships now have the status of stopped. As you can see, several of them have already
completed the copy operation and are now in a status of idle_or_copied.
Example 9-132 Stop FCCG1 and FCCG2 Consistency Groups
IBM_2145:ITSO_SVC3:admin>stopfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>stopfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied
IBM_2145:ITSO_SVC3:admin>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_
name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,res
toring,start_time,rc_controlled
0,DB_Map1,3,DB_Source,4,DB_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,
no
Important: Only stop a FlashCopy Consistency Group when the data on the target volumes
is not in use, or when you want to modify the Consistency Group. When a Consistency
Group is stopped, the target volumes might become invalid and be set offline by the SAN
Volume Controller, depending on the state of each mapping.
1,Log_Map1,6,Log_Source,7,Log_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,1109291138
06,no
2,App_Map1,9,App_Source,10,App_Target1,,,idle_or_copied,100,50,100,off,,,no,110929113407,no
3,DB_Map2,3,DB_Source,5,DB_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,
no
4,Log_Map2,6,Log_Source,8,Log_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,1109291138
06,no
9.13.12 Deleting the FlashCopy mapping
To delete a FlashCopy mapping, use the rmfcmap command. When executed, the command
attempts to delete the specified FlashCopy mapping. If the FlashCopy mapping is in the
stopped state, the command fails unless the -force flag is specified. If the mapping is active
(copying), it must first be stopped before it can be deleted.
Deleting a mapping only deletes the logical relationship between the two volumes. However,
if the -force flag is used to delete a mapping whose background copy has not completed,
the data on the target volume is rendered inconsistent.
As shown in Example 9-133, we delete App_Map1.
Example 9-133 Delete App_Map1
IBM_2145:ITSO_SVC3:admin>rmfcmap App_Map1
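If a mapping is in the stopped state (for example, it was stopped before its background copy
completed), the -force flag is required. The following line is a minimal sketch of such a forced
deletion, reusing the App_Map1 name purely for illustration; remember that forcing the deletion
of an incomplete mapping leaves the data on the target volume inconsistent:
IBM_2145:ITSO_SVC3:admin>rmfcmap -force App_Map1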
9.13.13 Deleting the FlashCopy Consistency Group
The rmfcconsistgrp command is used to delete a FlashCopy Consistency Group. When
executed, this command deletes the specified Consistency Group. If there are mappings that
are members of the group, the command fails unless the -force flag is specified.
If you want to delete all of the mappings in the Consistency Group as well, first delete the
mappings and then delete the Consistency Group.
As shown in Example 9-134, we delete all of the maps and Consistency Groups and then
check the result.
Example 9-134 Remove fcmaps and fcconsistgrp
IBM_2145:ITSO_SVC3:admin>rmfcmap DB_Map1
IBM_2145:ITSO_SVC3:admin>rmfcmap DB_Map2
IBM_2145:ITSO_SVC3:admin>rmfcmap Log_Map1
IBM_2145:ITSO_SVC3:admin>rmfcmap Log_Map2
IBM_2145:ITSO_SVC3:admin>rmfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:admin>rmfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:admin>lsfcconsistgrp
IBM_2145:ITSO_SVC3:admin>lsfcmap
IBM_2145:ITSO_SVC3:admin>
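If a Consistency Group still contains mappings, the rmfcconsistgrp command fails unless the
-force flag is used. The following line is a minimal sketch of forcing such a deletion (the group
name FCCG1 is reused purely for illustration); with -force, the member mappings are removed
from the group and become stand-alone mappings rather than being deleted:
IBM_2145:ITSO_SVC3:admin>rmfcconsistgrp -force FCCG1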
Tip: If you want to use the target volume as a normal volume, monitor the background copy
progress until it is complete (100% copied), and then delete the FlashCopy mapping.
Another option is to set the -autodelete option when creating the FlashCopy mapping.
9.13.14 Migrating a volume to a thin-provisioned volume
Use the following scenario to migrate a volume to a thin-provisioned volume:
1. Create a thin-provisioned (space-efficient) target volume of exactly the same size as the
volume that you want to migrate.
Example 9-135 shows the details of a volume with ID 11. It has been created as a
thin-provisioned volume with the same size as the App_Source volume.
Example 9-135 lsvdisk 11 command
IBM_2145:ITSO_SVC3:admin>lsvdisk 11
id 11
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE00000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 4629
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 221.17MB
2. Define a FlashCopy mapping in which the non-thin-provisioned volume is the source and
the thin-provisioned volume is the target. Specify a copy rate as high as possible and
activate the -autodelete option for the mapping. See Example 9-136.
Example 9-136 mkfcmap
IBM_2145:ITSO_SVC3:admin>mkfcmap -source App_Source -target App_Source_SE -name
MigrtoThinProv -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO_SVC3:admin>lsfcmap 0
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
3. Run the prestartfcmap command and the lsfcmap MigrtoThinProv command, as shown
in Example 9-137.
Example 9-137 prestartfcmap
IBM_2145:ITSO_SVC3:admin>prestartfcmap MigrtoThinProv
IBM_2145:ITSO_SVC3:admin>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 0
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
4. Run the startfcmap command, as shown in Example 9-138.
Example 9-138 startfcmap command
IBM_2145:ITSO_SVC3:admin>startfcmap MigrtoThinProv
5. Monitor the copy process using the lsfcmapprogress command, as shown in
Example 9-139.
Example 9-139 lsfcmapprogress command
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress MigrtoThinProv
id progress
0 67
6. After the background copy completes, the FlashCopy mapping is deleted automatically
because the -autodelete option is set. In Example 9-140, the mapping is still copying; after
completion, a further lsfcmapprogress command fails because the mapping no longer exists.
Example 9-140 lsfcmap command
IBM_2145:ITSO_SVC3:admin>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 67
copy_rate 100
start_time 110929135848
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
IBM_2145:ITSO_SVC3:admin>lsfcmapprogress MigrtoThinProv
CMMVC5804E The action failed because an object that was specified in the command does
not exist.
IBM_2145:ITSO_SVC3:admin>
An independent copy of the source volume (App_Source) now exists on the thin-provisioned
volume (App_Source_SE), and the migration is complete. Example 9-141 shows the details of
the App_Source volume after the migration.
Example 9-141 lsvdisk App_Source
IBM_2145:ITSO_SVC3:admin>lsvdisk App_Source
id 9
name App_Source
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000009
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
To migrate a thin-provisioned volume to a fully allocated volume, you can follow the same
scenario, using a fully allocated volume as the FlashCopy target.
9.13.15 Reverse FlashCopy
You can create a reverse FlashCopy mapping without having to remove the original
FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning.
In Example 9-142 on page 554, FCMAP_1 is the forward FlashCopy mapping, and FCMAP_rev_1
is the corresponding reverse FlashCopy mapping. We also have a cascaded mapping,
FCMAP_2, whose source is the target volume of FCMAP_1 and whose target is a separate volume
named Volume_FC_T1.
In our example, after creating this environment, we started FCMAP_1 and, later, FCMAP_2. We
then started FCMAP_rev_1 without specifying the -restore parameter, to show the message
that is issued if you omit it:
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy
mappings.
When starting a reverse FlashCopy mapping, you must use the -restore option to indicate
that you want to overwrite the data on the source disk of the forward mapping.
Example 9-142 Reverse FlashCopy
IBM_2145:ITSO_SVC3:admin>lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity
type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count
copy_count fast_write_state se_copy_count RC_change
3 Volume_FC_S 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000003 0 1
empty 0 0 no
4 Volume_FC_T_S1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000004 0 1
empty 0 0 no
5 Volume_FC_T1 0 io_grp0 online 1 Multi_Tier_Pool 10.00GB
striped 60050768018281BEE000000000000005 0 1
empty 0 0 no
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_S -target Volume_FC_T_S1 -name FCMAP_1
-copyrate 50
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_S -name
FCMAP_rev_1 -copyrate 50
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO_SVC3:admin>mkfcmap -source Volume_FC_T_S1 -target Volume_FC_T1 -name FCMAP_2
-copyrate 50
Real size: Regardless of what you define as the real size of the target thin-provisioned
volume, the real size will be at least the capacity of the source volume.
FlashCopy Mapping, id [2], successfully created
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
idle_or_copied 0 50 100 off 1 FCMAP_rev_1
no no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
idle_or_copied 0 50 100 off
no no
IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_1
IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_2
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1 copying 0
50 100 off 1 FCMAP_rev_1 no
no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 4 50 100 off
no 110929143739 no
IBM_2145:ITSO_SVC3:admin>startfcmap -prep FCMAP_rev_1
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.
IBM_2145:ITSO_SVC3:admin>startfcmap -prep -restore FCMAP_rev_1
IBM_2145:ITSO_SVC3:admin>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
copying 43 100 56 off 1 FCMAP_rev_1 no
110929151911 no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
copying 56 100 43 off 0 FCMAP_1 yes
110929152030 no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 37 100 100 off no
110929151926 no
As you can see in Example 9-142 on page 554, FCMAP_rev_1 shows a restoring value of yes
while the FlashCopy mapping is copying. After the copy finishes, the restoring field
changes to no.
9.13.16 Split-stopping of FlashCopy maps
The stopfcmap command has a -split option. This option allows the source volume of a map
that is 100% complete to be removed from the head of a cascade when the map is stopped.
For example, if we have four volumes in a cascade (A → B → C → D), and the map A → B is
100% complete, using the stopfcmap -split mapAB command results in mapAB becoming
idle_or_copied and the remaining cascade becoming B → C → D.
Without the -split option, volume A remains at the head of the cascade (A → C → D).
Consider this sequence of steps:
1. The user takes a backup using the mapping A → B. A is the production volume; B is a backup.
2. At a later point, the user experiences corruption on A and so reverses the mapping to
B → A.
3. The user then takes another backup from the production disk A, resulting in the cascade
B → A → C.
Stopping A → B without the -split option results in the cascade B → C. Note that the backup
disk B is now at the head of this cascade.
When the user next wants to take a backup to B, the user can still start the mapping A → B
(using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).
Stopping A → B with the -split option results in the cascade A → C instead. This does not
cause the same problem, because the production disk A, not the backup disk B, is at the
head of the cascade.
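The following line is a minimal sketch of this split-stop, reusing the hypothetical mapping name
mapAB from the cascade discussion above; it assumes that the mapping has already reached
100% progress:
IBM_2145:ITSO_SVC3:admin>stopfcmap -split mapAB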
9.14 Metro Mirror operation
In the following scenario, we set up an intercluster Metro Mirror relationship between the SAN
Volume Controller system ITSO_SVC1 at the primary site and the SAN Volume Controller system
ITSO_SVC4 at the secondary site. Table 9-3 shows the details of the volumes.
Table 9-3 Volume details
Content of volume      Volumes at primary site   Volumes at secondary site
Database files         MM_DB_Pri                 MM_DB_Sec
Database log files     MM_DBLog_Pri              MM_DBLog_Sec
Application files      MM_App_Pri                MM_App_Sec
Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a
CG_W2K3_MM Consistency Group is created to handle the Metro Mirror relationships for them.
Intercluster example: This example is for intercluster operations only.
If you want to set up intracluster operations, we highlight those parts of the following
procedure that you do not need to perform.
Because, in this scenario, application files are independent of the database, a stand-alone
Metro Mirror relationship is created for the MM_App_Pri volume. Figure 9-7 on page 557
illustrates the Metro Mirror setup.
Figure 9-7 Metro Mirror scenario
9.14.1 Setting up Metro Mirror
In the following section, we assume that the source and target volumes have already been
created and that the inter-switch links (ISLs) and zoning are in place, enabling the SAN
Volume Controller clustered systems to communicate.
To set up the Metro Mirror, perform the following steps:
1. Create a SAN Volume Controller partnership between ITSO_SVC1 and ITSO_SVC4 on both
of the SAN Volume Controller clustered systems.
2. Create a Metro Mirror Consistency Group:
Name: CG_W2K3_MM
3. Create the Metro Mirror relationship for MM_DB_Pri:
Master: MM_DB_Pri
Auxiliary: MM_DB_Sec
Auxiliary SAN Volume Controller system: ITSO_SVC4
Name: MMREL1
Consistency Group: CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_DBLog_Pri:
Master: MM_DBLog_Pri
Auxiliary: MM_DBLog_Sec
Auxiliary SAN Volume Controller system: ITSO_SVC4
Name: MMREL2
Consistency Group: CG_W2K3_MM
5. Create the Metro Mirror relationship for MM_App_Pri:
Master: MM_App_Pri
Auxiliary: MM_App_Sec
Auxiliary SAN Volume Controller system: ITSO_SVC4
Name: MMREL3
In the following section, we perform each step by using the CLI.
9.14.2 Creating a SAN Volume Controller partnership between ITSO_SVC1 and
ITSO_SVC4
We create the SAN Volume Controller partnership on both systems.
Preverification
To verify that both systems can communicate with each other, use the
lspartnershipcandidate command.
As shown in Example 9-143, ITSO_SVC4 is an eligible SAN Volume Controller system
candidate at ITSO_SVC1 for the SAN Volume Controller system partnership, and vice versa.
Therefore, both systems communicate with each other.
Example 9-143 Listing the available SAN Volume Controller systems for partnership
IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
000002006AC03A42 no ITSO_SVC2
0000020060A06FB8 no ITSO_SVC3
00000200A0C006B2 no ITSO-Storwize-V7000-2
IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate
id configured name
000002006AC03A42 no ITSO_SVC2
0000020060A06FB8 no ITSO_SVC3
00000200A0C006B2 no ITSO-Storwize-V7000-2
000002006BE04FC4 no ITSO_SVC1
Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform
the next step; instead, go to 9.14.3, Creating a Metro Mirror Consistency Group on
page 561.
Example 9-144 on page 559 shows the output of the lspartnership and lssystem commands
before the Metro Mirror partnership is set up. We show this output so that you can compare it
with the same output after the partnership has been created.
As of SAN Volume Controller 6.3, you can create a partnership between a SAN Volume
Controller system and an IBM Storwize V7000 system. To create this partnership, you must
change the layer parameter on the IBM Storwize V7000 system from storage to replication
by using the chsystem command. This parameter cannot be changed on the SAN Volume
Controller system; it is fixed to the value appliance, as shown in Example 9-144 on page 559.
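For reference, the following line is a minimal sketch of that change. It must be run on the IBM
Storwize V7000 system (not on the SAN Volume Controller); the prompt and system name
shown here are illustrative only:
IBM_2076:ITSO-Storwize-V7000-2:admin>chsystem -layer replication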
Example 9-144 Pre-verification of system configuration
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
total_mdisk_capacity 766.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 0.00MB
total_free_space 766.5GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 1.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 766.50GB
has_nas_key no
layer appliance
IBM_2145:ITSO_SVC4:admin>lssystem
id 0000020061C06FCA
name ITSO_SVC4
location local
partnership
bandwidth
total_mdisk_capacity 768.0GB
space_in_mdisk_grps 0
space_allocated_to_vdisks 0.00MB
total_free_space 768.0GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 0.00MB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.84:443
id_alias 0000020061C06FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer appliance
Partnership between clustered systems
In Example 9-145, a partnership is created between ITSO_SVC1 and ITSO_SVC4, specifying 50
MBps bandwidth to be used for the background copy.
To check the status of the newly created partnership, issue the lspartnership command.
Notice that the new partnership is only partially configured; it remains partially configured
until the mkpartnership command is also run on the other system.
Example 9-145 Creating the partnership from ITSO_SVC1 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
In Example 9-146, the partnership is created from ITSO_SVC4 back to ITSO_SVC1,
specifying a bandwidth of 50 MBps for the background copy.
After creating the partnership, verify that the partnership is fully configured on both systems
by reissuing the lspartnership command.
Example 9-146 Creating the partnership from ITSO_SVC4 to ITSO_SVC1 and verifying it
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
9.14.3 Creating a Metro Mirror Consistency Group
In Example 9-147, we create the Metro Mirror Consistency Group using the mkrcconsistgrp
command. This Consistency Group will be used for the Metro Mirror relationships of the
database volumes named MM_DB_Pri and MM_DBLog_Pri. The Consistency Group is named
CG_W2K3_MM.
Example 9-147 Creating the Metro Mirror Consistency Group CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id
aux_cluster_name primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC1 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none
9.14.4 Creating the Metro Mirror relationships
In Example 9-148, we create the Metro Mirror relationships MMREL1 and MMREL2 for MM_DB_Pri
and MM_DBLog_Pri. Also, we make them members of the Metro Mirror Consistency Group
CG_W2K3_MM. We use the lsvdisk command to list all of the volumes in the ITSO_SVC1 system.
We then use the lsrcrelationshipcandidate command to show the volumes in the
ITSO_SVC4 system.
By using this command, we check the possible auxiliary candidates for MM_DB_Pri. After
identifying the candidate volumes, we use the mkrcrelationship command to create the
Metro Mirror relationship.
To verify the newly created Metro Mirror relationships, list them with the lsrcrelationship
command.
Example 9-148 Creating Metro Mirror relationships MMREL1 and MMREL2
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue name=MM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
RC_change
0 MM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000031 0 1 empty 0 0
no
1 MM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000032 0 1 empty 0 0
no
2 MM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000033 0 1 empty 0 0
no
IBM_2145:ITSO_SVC1:admin>lsrcrelationshipcandidate
id vdisk_name
0 MM_DB_Pri
1 MM_DBLog_Pri
2 MM_App_Pri
IBM_2145:ITSO_SVC1:admin>lsrcrelationshipcandidate -aux ITSO_SVC4 -master MM_DB_Pri
id vdisk_name
0 MM_DB_Sec
1 MM_DBLog_Sec
2 MM_App_Sec
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO_SVC4 -consistgrp
CG_W2K3_MM -name MMREL1
RC Relationship, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO_SVC4 -consistgrp
CG_W2K3_MM -name MMREL2
RC Relationship, id [3], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id
aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 MMREL1 000002006BE04FC4 ITSO_SVC1 0 MM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 MM_DB_Sec master 0 CG_W2K3_MM
inconsistent_stopped 50 0 metro none
3 MMREL2 000002006BE04FC4 ITSO_SVC1 3 MM_Log_Pri
0000020061C06FCA ITSO_SVC4 3 MM_Log_Sec master 0
CG_W2K3_MM inconsistent_stopped 50 0 metro none
9.14.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
In Example 9-149, we create the stand-alone Metro Mirror relationship MMREL3 for MM_App_Pri.
After it is created, we check the status of this Metro Mirror relationship.
Notice that the state of MMREL3 is consistent_stopped. MMREL3 is in this state because it was
created with the -sync option. The -sync option indicates that the secondary (auxiliary)
volume is already synchronized with the primary (master) volume, so the initial background
synchronization is skipped, even though the volumes are not actually synchronized in this
scenario. We created this relationship with the -sync option purely to illustrate the case of
master and auxiliary volumes that are pre-synchronized before the relationship is set up.
MMREL1 and MMREL2 are in the inconsistent_stopped state because they were not created with
the -sync option; their auxiliary volumes still need to be synchronized with their primary
volumes.
Example 9-149 Creating a stand-alone relationship and verifying it
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster
ITSO_SVC4 -name MMREL3
RC Relationship, id [2], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship 2
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
Tip: The -sync option is only used when the target volume has already mirrored all of the
data from the source volume. By using this option, there is no initial background copy
between the primary volume and the secondary volume.
9.14.6 Starting Metro Mirror
Now that the Metro Mirror Consistency Group and relationships are in place, we are ready to
use Metro Mirror relationships in our environment.
When implementing Metro Mirror, the goal is to reach a consistent and synchronized state
that can provide redundancy for a data set if a failure occurs that affects the production site.
In the following section, we show how to stop and start stand-alone Metro Mirror relationships
and Consistency Groups.
Starting a stand-alone Metro Mirror relationship
In Example 9-150, we start a stand-alone Metro Mirror relationship named MMREL3. Because
the Metro Mirror relationship was in the Consistent stopped state and no updates have been
made to the primary volume, the relationship quickly enters the Consistent synchronized
state.
Example 9-150 Starting the stand-alone Metro Mirror relationship
IBM_2145:ITSO_SVC1:admin>startrcrelationship MMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.14.7 Starting a Metro Mirror Consistency Group
In Example 9-151, we start the Metro Mirror Consistency Group CG_W2K3_MM. Because the
Consistency Group was in the Inconsistent stopped state, it enters the Inconsistent copying
state until the background copy has completed for all of the relationships in the Consistency
Group.
Upon completion of the background copy, it enters the Consistent synchronized state.
Example 9-151 Starting the Metro Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name
primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC1 0000020061C06FCA ITSO_SVC4
master inconsistent_copying 2 metro none
9.14.8 Monitoring the background copy progress
To monitor the background copy progress, we can use the lsrcrelationship command. This
command shows all of the defined Metro Mirror relationships if it is used without any
arguments. In the command output, progress indicates the current background copy
progress. Our Metro Mirror relationship is shown in Example 9-152.
Example 9-152 Monitoring the background copy progress example
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL1
id 0
name MMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 81
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL2
id 3
name MMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 3
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020061C06FCA
Using SNMP traps: Setting up SNMP traps for the SAN Volume Controller enables
automatic notification when Metro Mirror Consistency Groups or relationships change
state.
aux_cluster_name ITSO_SVC4
aux_vdisk_id 3
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 82
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
When all Metro Mirror relationships have completed the background copy, the Consistency
Group enters the Consistent synchronized state, as shown in Example 9-153.
Example 9-153 Listing the Metro Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
9.14.9 Stopping and restarting Metro Mirror
Now that the Metro Mirror Consistency Group and relationships are running, in this section
and in the following sections, we describe how to stop, restart, and change the direction of the
stand-alone Metro Mirror relationships and the Consistency Group.
9.14.10 Stopping a stand-alone Metro Mirror relationship
Example 9-154 shows how to stop the stand-alone Metro Mirror relationship, while enabling
access (write I/O) to both the primary and secondary volumes. It also shows the relationship
entering the Idling state.
Example 9-154 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary
IBM_2145:ITSO_SVC1:admin>stoprcrelationship -access MMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.14.11 Stopping a Metro Mirror Consistency Group
Example 9-155 shows how to stop the Metro Mirror Consistency Group without specifying the
-access flag. The Consistency Group enters the Consistent stopped state.
Example 9-155 Stopping a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
If, afterward, we want to enable access (write I/O) to the secondary volumes, we reissue the
stoprcconsistgrp command, specifying the -access flag. The Consistency Group transitions
to the Idling state, as shown in Example 9-156.
Example 9-156 Stopping a Metro Mirror Consistency Group and enabling access to the secondary
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
9.14.12 Restarting a Metro Mirror relationship in the Idling state
When restarting a Metro Mirror relationship in the Idling state, we must specify the copy
direction.
If any updates have been performed on either the master or the auxiliary volume, consistency
will be compromised. Therefore, we must issue the command with the -force flag to restart a
relationship, as shown in Example 9-157.
Example 9-157 Restarting a Metro Mirror relationship after updates in the Idling state
IBM_2145:ITSO_SVC1:admin>startrcrelationship -primary master -force MMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.14.13 Restarting a Metro Mirror Consistency Group in the Idling state
When restarting a Metro Mirror Consistency Group in the Idling state, we must specify the
copy direction.
If any updates have been performed on either the master or the auxiliary volume in any of the
Metro Mirror relationships in the Consistency Group, the consistency is compromised.
Therefore, we must use the -force flag to restart the Consistency Group. If the -force flag is
not used, the command fails.
In Example 9-158, we change the copy direction by specifying the auxiliary volumes to
become the primaries.
Example 9-158 Restarting a Metro Mirror Consistency Group while changing the copy direction
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp -force -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
9.14.14 Changing the copy direction for Metro Mirror
In this section, we show how to change the copy direction of the stand-alone Metro Mirror
relationship and the Consistency Group.
9.14.15 Switching the copy direction for a Metro Mirror relationship
When a Metro Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship using the switchrcrelationship command, specifying the
primary volume. If the specified volume is already a primary when you issue this command,
the command has no effect.
In Example 9-159, we change the copy direction for the stand-alone Metro Mirror relationship
by specifying the auxiliary volume to become the primary.
Example 9-159 Switching the copy direction for a Metro Mirror relationship
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the volume that transitions from the primary to the secondary, because all of the I/O will
be inhibited to that volume when it becomes the secondary. Therefore, careful planning is
required prior to using the switchrcrelationship command.
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.14.16 Switching the copy direction for a Metro Mirror Consistency Group
When a Metro Mirror Consistency Group is in the Consistent synchronized state, we can
change the copy direction for the Consistency Group by using the switchrcconsistgrp
command and specifying the primary volume.
If the specified volume is already a primary when you issue this command, the command has
no effect.
In Example 9-160, we change the copy direction for the Metro Mirror Consistency Group by
specifying the auxiliary volume to become the primary volume.
Example 9-160 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
IBM_2145:ITSO_SVC1:admin>switchrcconsistgrp -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the volume that transitions from primary to secondary, because all of the I/O will be
inhibited when that volume becomes the secondary. Therefore, careful planning is required
prior to using the switchrcconsistgrp command.
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
9.14.17 Creating a SAN Volume Controller partnership among many clustered
systems
Starting with SAN Volume Controller 5.1, you can have a clustered system partnership
among many SAN Volume Controller systems. This capability allows you to create four
configurations using a maximum of four connected systems:
Star configuration
Triangle configuration
Fully connected configuration
Daisy-chain configuration
In this section, we describe how to configure the SAN Volume Controller system partnership
for each configuration.
In our scenarios, we configure the SAN Volume Controller partnership by referring to the
clustered systems as A, B, C, and D:
ITSO_SVC1 = A
ITSO_SVC2 = B
ITSO_SVC3 = C
ITSO_SVC4 = D
Example 9-161 shows the available systems for a partnership, obtained by using the
lspartnershipcandidate command on each system.
Example 9-161 Available clustered systems
IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
0000020060A06FB8 no ITSO_SVC3
000002006AC03A42 no ITSO_SVC2
IBM_2145:ITSO_SVC2:admin>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
000002006BE04FC4 no ITSO_SVC1
0000020060A06FB8 no ITSO_SVC3
Important: To have a supported and working configuration, all SAN Volume Controller
systems must be at level 5.1 or higher.
IBM_2145:ITSO_SVC3:admin>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC1
0000020061C06FCA no ITSO_SVC4
000002006AC03A42 no ITSO_SVC2
IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC1
0000020060A06FB8 no ITSO_SVC3
000002006AC03A42 no ITSO_SVC2
9.14.18 Star configuration partnership
Figure 9-8 shows the star configuration.
Figure 9-8 Star configuration
Example 9-162 shows the sequence of mkpartnership commands to execute to create a star
configuration.
Example 9-162 Creating a star configuration using the mkpartnership command
From ITSO_SVC1 to multiple systems
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4
From ITSO_SVC2 to ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
From ITSO_SVC3 to ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1
From ITSO_SVC4 to ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1
From ITSO_SVC1
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote fully_configured 50
From ITSO_SVC2
IBM_2145:ITSO_SVC2:admin>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
From ITSO_SVC3
IBM_2145:ITSO_SVC3:admin>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
From ITSO_SVC4
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
After the SAN Volume Controller partnership has been configured, you can configure any
rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one
relationship.
Triangle configuration
Figure 9-9 shows the triangle configuration.
Figure 9-9 Triangle configuration
Example 9-163 shows the sequence of mkpartnership commands to execute to create a
triangle configuration.
Example 9-163 Creating a triangle configuration
From ITSO_SVC1 to ITSO_SVC2 and ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
From ITSO_SVC2 to ITSO_SVC1 and ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
From ITSO_SVC3 to ITSO_SVC1 and ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
After the SAN Volume Controller partnership has been configured, you can configure any
rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one
relationship.
Fully connected configuration
Figure 9-10 shows the fully connected configuration.
Figure 9-10 Fully connected configuration
Example 9-164 on page 576 shows the sequence of mkpartnership commands to execute to
create a fully connected configuration.
Example 9-164 Creating a fully connected configuration
From ITSO_SVC1 to ITSO_SVC2, ITSO_SVC3 and ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote partially_configured_local 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
From ITSO_SVC2 to ITSO_SVC1, ITSO_SVC3 and ITSO_SVC4
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC2:admin>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
From ITSO_SVC3 to ITSO_SVC1, ITSO_SVC2 and ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
From ITSO_SVC4 to ITSO_SVC1, ITSO_SVC2 and ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
After the SAN Volume Controller partnership has been configured, you can configure any
rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one
relationship.
Daisy-chain configuration
Figure 9-11 on page 577 shows the daisy-chain configuration.
Figure 9-11 Daisy-chain configuration
Example 9-165 shows the sequence of mkpartnership commands to execute to create a
daisy-chain configuration.
Example 9-165 Creating a daisy-chain configuration
From ITSO_SVC1 to ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
000002006AC03A42 ITSO_SVC2 remote partially_configured_local 50
From ITSO_SVC2 to ITSO_SVC1 and ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC1
IBM_2145:ITSO_SVC2:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC2:admin>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC2 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
From ITSO_SVC3 to ITSO_SVC2 and ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC2
IBM_2145:ITSO_SVC3:admin>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC3:admin>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006AC03A42 ITSO_SVC2 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
From ITSO_SVC4 to ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 50 ITSO_SVC3
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
After the SAN Volume Controller partnership has been configured, you can configure any
rcrelationship or rcconsistgrp that you need. Make sure that a single volume is only in one
relationship.
9.15 Global Mirror operation
In the following scenario, we set up an intercluster Global Mirror relationship between the
SAN Volume Controller system ITSO_SVC1 at the primary site and the SAN Volume Controller
system ITSO_SVC4 at the secondary site.
Table 9-4 shows the details of the volumes.
Table 9-4 Details of volumes for Global Mirror relationship scenario
Content of volume      Volumes at primary site   Volumes at secondary site
Database files         GM_DB_Pri                 GM_DB_Sec
Database log files     GM_DBLog_Pri              GM_DBLog_Sec
Application files      GM_App_Pri                GM_App_Sec
Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a
Consistency Group to handle the Global Mirror relationships for them. Because, in this scenario,
the application files are independent of the database, we create a stand-alone Global Mirror
relationship for GM_App_Pri. Figure 9-12 illustrates the Global Mirror relationship setup.
Intercluster example: This example is for an intercluster Global Mirror operation only. If
you want to set up an intracluster operation, we highlight those parts in the following
procedure that you do not need to perform.
Figure 9-12 Global Mirror scenario
9.15.1 Setting up Global Mirror
In the following section, we assume that the source and target volumes have already been
created and that the ISLs and zoning are in place, enabling the SAN Volume Controller
systems to communicate.
To set up the Global Mirror, perform the following steps:
1. Create a SAN Volume Controller partnership between ITSO_SVC1 and ITSO_SVC4 on both
SAN Volume Controller clustered systems.
Bandwidth: 100 MBps
2. Create a Global Mirror Consistency Group.
Name: CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
Master: GM_DB_Pri
Auxiliary: GM_DB_Sec
Auxiliary SAN Volume Controller system: ITSO_SVC4
Name: GMREL1
Consistency Group: CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
Master: GM_DBLog_Pri
Auxiliary: GM_DBLog_Sec
Auxiliary SAN Volume Controller system: ITSO_SVC4
Name: GMREL2
Consistency Group: CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri:
Master: GM_App_Pri
Auxiliary: GM_App_Sec
Auxiliary SAN Volume Controller system: ITSO_SVC4
Name: GMREL3
In the following sections, we perform each step by using the CLI.
9.15.2 Creating a SAN Volume Controller partnership between ITSO_SVC1 and
ITSO_SVC4
We create a SAN Volume Controller partnership between these clustered systems.
Preverification
To verify that both clustered systems can communicate with each other, use the
lspartnershipcandidate command. Example 9-166 confirms that our clustered systems can
communicate: ITSO_SVC4 is listed as an eligible candidate for a SAN Volume Controller system
partnership at ITSO_SVC1, and vice versa.
Example 9-166 Listing the available SAN Volume Controller systems for partnership
IBM_2145:ITSO_SVC1:admin>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
IBM_2145:ITSO_SVC4:admin>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC1
In Example 9-167, we show the output of the lspartnership command before setting up the
SAN Volume Controller systems partnership for Global Mirror. We show this output for
comparison after we have set up the SAN Volume Controller partnership.
Example 9-167 Pre-verification of system configuration
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
Partnership between systems
In Example 9-168, we create the partnership from ITSO_SVC1 to ITSO_SVC4, specifying a 100
MBps bandwidth to use for the background copy. To verify the status of the newly created
partnership, we issue the lspartnership command. Notice that the new partnership is only
partially configured. It will remain partially configured until we run the mkpartnership
command on the other clustered system.
Intracluster Global Mirror: If you are creating an intracluster Global Mirror, do not perform
the next step. Instead, go to 9.15.3, Changing link tolerance and system delay simulation
on page 581.
Example 9-168 Creating the partnership from ITSO_SVC1 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC1:admin>mkpartnership -bandwidth 100 ITSO_SVC4
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 100
In Example 9-169, we create the partnership from ITSO_SVC4 back to ITSO_SVC1, specifying a
100 MBps bandwidth to be used for the background copy. After creating the partnership,
verify that the partnership is fully configured by reissuing the lspartnership command.
Example 9-169 Creating the partnership from ITSO_SVC4 to ITSO_SVC1 and verifying it
IBM_2145:ITSO_SVC4:admin>mkpartnership -bandwidth 100 ITSO_SVC1
IBM_2145:ITSO_SVC4:admin>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 100
IBM_2145:ITSO_SVC1:admin>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC1 local
0000020061C06FCA ITSO_SVC4 remote fully_configured 100
9.15.3 Changing link tolerance and system delay simulation
The gm_link_tolerance parameter defines the sensitivity of the SAN Volume Controller to
inter-link overload conditions. The value is the number of seconds of continuous link
difficulties that will be tolerated before the SAN Volume Controller will stop the remote copy
relationships to prevent affecting host I/O at the primary site. To change the value, use the
following command:
chsystem -gmlinktolerance link_tolerance
The link_tolerance value is between 60 and 86,400 seconds in increments of 10 seconds.
The default value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.
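For illustration only (the values are taken from the limits that are described above), the following commands restore the default tolerance and then disable the feature completely:
IBM_2145:ITSO_SVC1:admin>chsystem -gmlinktolerance 300
IBM_2145:ITSO_SVC1:admin>chsystem -gmlinktolerance 0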
Intercluster and intracluster delay simulation
This Global Mirror feature permits a simulation of a delayed write to a remote volume. This
feature allows testing to be performed that detects colliding writes, and you can use this
feature to test an application before the full deployment of the Global Mirror feature. The delay
simulation can be enabled separately for each intracluster or intercluster Global Mirror. To
enable this feature, run the following command either for the intracluster or intercluster
simulation:
For intercluster:
chsystem -gminterdelaysimulation <inter_cluster_delay_simulation>
Important: We strongly suggest that you use the default value. If the link is overloaded for
a period, which affects host I/O at the primary site, the relationships will be stopped to
protect those hosts.
For intracluster:
chsystem -gmintradelaysimulation <intra_cluster_delay_simulation>
The inter_cluster_delay_simulation and intra_cluster_delay_simulation values specify the
number of milliseconds that secondary I/Os (that is, writes copied from a primary volume to a
secondary volume) are delayed for intercluster and intracluster relationships, respectively. You
can set a value of 0 to 100 milliseconds in 1 millisecond increments. A value of zero (0) disables the feature.
To check the current settings for the delay simulation, use the following command:
lssystem
In Example 9-170, we show the modification of the delay simulation value and a change of
the Global Mirror link tolerance parameters. We also show the changed values of the Global
Mirror link tolerance and delay simulation parameters.
Example 9-170 Delay simulation and link tolerance modification
IBM_2145:ITSO_SVC1:admin>chsystem -gminterdelaysimulation 20
IBM_2145:ITSO_SVC1:admin>chsystem -gmintradelaysimulation 40
IBM_2145:ITSO_SVC1:admin>chsystem -gmlinktolerance 200
IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
total_mdisk_capacity 866.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 30.00GB
total_free_space 836.5GB
total_vdiskcopy_capacity 30.00GB
total_used_capacity 30.00GB
total_overallocation 3
total_vdisk_capacity 30.00GB
total_allocated_extent_capacity 31.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance
9.15.4 Creating a Global Mirror Consistency Group
In Example 9-171, we create the Global Mirror Consistency Group using the mkrcconsistgrp
command. We will use this Consistency Group for the Global Mirror relationships for the
database volumes. The Consistency Group is named CG_W2K3_GM.
Example 9-171 Creating the Global Mirror Consistency Group CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>mkrcconsistgrp -cluster ITSO_SVC4 -name CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name
primary state relationship_count copy_type cycling_mode
0 CG_W2K3_GM 000002006BE04FC4 ITSO_SVC1 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none
9.15.5 Creating Global Mirror relationships
In Example 9-172, we create the GMREL1 and GMREL2 Global Mirror relationships for the
GM_DB_Pri and GM_DBLog_Pri volumes. We also make them members of the CG_W2K3_GM
Global Mirror Consistency Group.
We use the lsvdisk command to list all of the volumes in the ITSO_SVC1 system and, then,
use the lsrcrelationshipcandidate command to show the possible candidate volumes for
GM_DB_Pri in ITSO_SVC4.
After checking all of these conditions, we use the mkrcrelationship command to create the
Global Mirror relationship. To verify the newly created Global Mirror relationships, we list them
with the lsrcrelationship command.
Example 9-172 Creating GMREL1 and GMREL2 Global Mirror relationships
IBM_2145:ITSO_SVC1:admin>lsvdisk -filtervalue name=GM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state se_copy_count
RC_change
0 GM_DB_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000031 0 1 empty 0 0
no
1 GM_DBLog_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000032 0 1 empty 0 0
no
2 GM_App_Pri 0 io_grp0 online 0 Pool_DS3500-1 10.00GB striped
6005076801AF813F1000000000000033 0 1 empty 0 0
no
IBM_2145:ITSO_SVC1:admin>lsrcrelationshipcandidate -aux ITSO_SVC4 -master GM_DB_Pri
id vdisk_name
0 GM_DB_Sec
1 GM_DBLog_Sec
2 GM_App_Sec
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO_SVC4 -consistgrp
CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [0], successfully created
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO_SVC4
-consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [1], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id
aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 GMREL1 000002006BE04FC4 ITSO_SVC1 0 GM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 GM_DB_Sec master 0 CG_W2K3_GM
inconsistent_stopped 50 0 global none
1 GMREL2 000002006BE04FC4 ITSO_SVC1 1 GM_DBLog_Pri 0000020061C06FCA
ITSO_SVC4 1 GM_DBLog_Sec master 0 CG_W2K3_GM
inconsistent_stopped 50 0 global none
9.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri
In Example 9-173, we create the stand-alone Global Mirror relationship GMREL3 for
GM_App_Pri. After it is created, we will check the status of each of our Global Mirror
relationships.
Notice that the status of GMREL3 is consistent_stopped, because it was created with the -sync
option. The -sync option indicates that the secondary (auxiliary) volume is already
synchronized with the primary (master) volume. The initial background synchronization is
skipped when this option is used.
GMREL1 and GMREL2 are in the inconsistent_stopped state, because they were not created with
the -sync option, so their auxiliary volumes need to be synchronized with their primary
volumes.
Example 9-173 Creating a stand-alone Global Mirror relationship and verifying it
IBM_2145:ITSO_SVC1:admin>mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO_SVC4 -sync -name
GMREL3 -global
RC Relationship, id [2], successfully created
IBM_2145:ITSO_SVC1:admin>lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_
name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority
:progress:copy_type:cycling_mode
0:GMREL1:000002006BE04FC4:ITSO_SVC1:0:GM_DB_Pri:0000020061C06FCA:ITSO_SVC4:0:GM_DB_Sec:master:0:CG_W2K3_GM:
inconsistent_copying:50:73:global:none
1:GMREL2:000002006BE04FC4:ITSO_SVC1:1:GM_DBLog_Pri:0000020061C06FCA:ITSO_SVC4:1:GM_DBLog_Sec:master:0:CG_W2
K3_GM:inconsistent_copying:50:75:global:none
2:GMREL3:000002006BE04FC4:ITSO_SVC1:2:GM_App_Pri:0000020061C06FCA:ITSO_SVC4:2:GM_App_Sec:master:::consisten
t_stopped:50:100:global:none
9.15.7 Starting Global Mirror
Now that we have created the Global Mirror Consistency Group and relationships, we are
ready to use the Global Mirror relationships in our environment.
When implementing Global Mirror, the goal is to reach a consistent and synchronized state
that can provide redundancy in case a hardware failure occurs that affects the SAN at the
production site.
In this section, we show how to start the stand-alone Global Mirror relationships and the
Consistency Group.
9.15.8 Starting a stand-alone Global Mirror relationship
In Example 9-174, we start the stand-alone Global Mirror relationship named GMREL3.
Because the Global Mirror relationship was in the Consistent stopped state and no updates
have been made to the primary volume, the relationship quickly enters the Consistent
synchronized state.
Example 9-174 Starting the stand-alone Global Mirror relationship
IBM_2145:ITSO_SVC1:admin>startrcrelationship GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.15.9 Starting a Global Mirror Consistency Group
In Example 9-175 on page 586, we start the CG_W2K3_GM Global Mirror Consistency Group.
Because the Consistency Group was in the Inconsistent stopped state, it enters the
Inconsistent copying state until the background copy has completed for all of the relationships
that are in the Consistency Group.
Upon completion of the background copy, the CG_W2K3_GM Global Mirror Consistency Group
enters the Consistent synchronized state.
Example 9-175 Starting the Global Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp 0
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.15.10 Monitoring the background copy progress
To monitor the background copy progress, use the lsrcrelationship command. This
command shows us all of the defined Global Mirror relationships if it is used without any
parameters. In the command output, progress indicates the current background copy
progress. Example 9-176 shows our Global Mirror relationships.
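If you only need the copy progress and not the full relationship details, a shorter query of the following form can be used (we assume here that the lsrcrelationshipprogress command is available at this code level):
IBM_2145:ITSO_SVC1:admin>lsrcrelationshipprogress GMREL1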
Using SNMP traps: Setting up SNMP traps for the SAN Volume Controller enables
automatic notification when Global Mirror Consistency Groups or relationships change
state.
Example 9-176 Monitoring background copy progress example
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 76
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
When all of the Global Mirror relationships complete the background copy, the Consistency
Group enters the Consistent synchronized state, as shown in Example 9-177.
Example 9-177 Listing the Global Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.15.11 Stopping and restarting Global Mirror
Now that the Global Mirror Consistency Group and relationships are running, we describe
how to stop, restart, and change the direction of the stand-alone Global Mirror relationships
and the Consistency Group.
First, we show how to stop and restart the stand-alone Global Mirror relationships and the
Consistency Group.
9.15.12 Stopping a stand-alone Global Mirror relationship
In Example 9-178, we stop the stand-alone Global Mirror relationship while enabling access
(write I/O) to both the primary and the secondary volume. As a result, the relationship enters
the Idling state.
Example 9-178 Stopping the stand-alone Global Mirror relationship
IBM_2145:ITSO_SVC1:admin>stoprcrelationship -access GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.15.13 Stopping a Global Mirror Consistency Group
In Example 9-179, we stop the Global Mirror Consistency Group without specifying the
-access parameter. Therefore, the Consistency Group enters the Consistent stopped state.
Example 9-179 Stopping a Global Mirror Consistency Group without specifying -access
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
If, afterwards, we want to enable access (write I/O) for the secondary volume, we can reissue
the stoprcconsistgrp command specifying the -access parameter. The Consistency Group
transits to the Idling state, as shown in Example 9-180.
Example 9-180 Stopping a Global Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp -access CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.15.14 Restarting a Global Mirror relationship in the Idling state
When restarting a Global Mirror relationship in the Idling state, we must specify the copy
direction.
If any updates have been performed on either the master or the auxiliary volume, consistency
will be compromised. Therefore, we must specify the -force parameter to restart the
relationship; if the -force parameter is not used, the command fails. In Example 9-181, we
restart the relationship with the -force parameter and specify the master volume as the primary.
Example 9-181 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSO_SVC1:admin>startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.15.15 Restarting a Global Mirror Consistency Group in the Idling state
When restarting a Global Mirror Consistency Group in the Idling state, we must specify the
copy direction.
If any updates have been performed on either the master or the auxiliary volume in any of the
Global Mirror relationships in the Consistency Group, consistency will be compromised.
Therefore, we must issue the -force parameter to start the relationship. If the -force
parameter is not used, the command will fail.
In Example 9-182, we restart the Consistency Group and change the copy direction by
specifying the auxiliary volumes to become the primaries.
Example 9-182 Restarting a Global Mirror relationship while changing the copy direction
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.15.16 Changing the direction for Global Mirror
In this section, we show how to change the copy direction of the stand-alone Global Mirror
relationships and the Consistency Group.
9.15.17 Switching the copy direction for a Global Mirror relationship
When a Global Mirror relationship is in the Consistent synchronized state, we can change the
copy direction for the relationship by using the switchrcrelationship command and
specifying the primary volume.
If the volume that is specified as the primary when issuing this command is already a primary,
the command has no effect.
In Example 9-183 on page 591, we change the copy direction for the stand-alone Global
Mirror relationship, specifying the auxiliary volume to become the primary.
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the volume that transits from primary to secondary, because all I/O will be inhibited to
that volume when it becomes the secondary. Therefore, careful planning is required prior
to using the switchrcrelationship command.
Example 9-183 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>switchrcrelationship -primary aux GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
9.15.18 Switching the copy direction for a Global Mirror Consistency Group
When a Global Mirror Consistency Group is in the Consistent synchronized state, we can
change the copy direction for the relationship by using the switchrcconsistgrp command
and specifying the primary volume. If the volume that is specified as the primary when issuing
this command is already a primary, the command has no effect.
In Example 9-184, we change the copy direction for the Global Mirror Consistency Group,
specifying the auxiliary to become the primary.
Example 9-184 Switching the copy direction for a Global Mirror Consistency Group
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
Important: When the copy direction is switched, it is crucial that there is no outstanding I/O
to the volume that transits from primary to secondary, because all I/O will be inhibited when
that volume becomes the secondary. Therefore, careful planning is required prior to using
the switchrcconsistgrp command.
9.15.19 Changing a Global Mirror relationship to the cycling mode
Starting with SAN Volume Controller 6.3, Global Mirror can operate with or without cycling.
When operating without cycling, write operations are applied to the secondary volume as
soon as possible after they are applied to the primary volume. The secondary volume is
generally less than
1 second behind the primary volume, which minimizes the amount of data that must be
recovered in the event of a failover. However, this capability requires that a high-bandwidth
link is provisioned between the two sites.
When Global Mirror operates in cycling mode, changes are tracked and, where needed,
copied to intermediate Change Volumes. Changes are transmitted to the secondary site
periodically. The secondary volumes are much further behind the primary volume, and more
data must be recovered in the event of a failover. Because the data transfer can be smoothed
over a longer time period, however, lower bandwidth is required to provide an effective
solution.
A Global Mirror relationship consists of two volumes: primary and secondary. With SAN
Volume Controller 6.3, each of these volumes can be associated to a Change Volume.
Change Volumes are used to record changes to the remote copy volume. A FlashCopy
relationship exists between the remote copy volume and the Change Volume. This
relationship cannot be manipulated as a normal FlashCopy relationship. Most commands will
fail by design, because this relationship is an internal relationship.
Cycling mode transmits a series of FlashCopy images from the primary to the secondary, and
it is enabled by using the svctask chrcrelationship -cyclingmode multi command.
The primary Change Volume stores changes to be sent to the secondary volume, and the
secondary Change Volume is used to maintain a consistent image at the secondary volume.
Every x seconds, the primary FlashCopy mapping is started automatically, where x is the
cycling period and is configurable. Data is then copied to the secondary volume from the
primary Change Volume. The secondary FlashCopy mapping is started if resynchronization is
needed. Therefore, there is always a consistent copy at the secondary volume. The cycling
period is configurable, and the default value is 300 seconds.
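As a sketch only (the parameter name is assumed from the cycle_period_seconds property that appears in the lsrcrelationship output), a longer cycling period of 10 minutes could be set on a relationship as follows:
IBM_2145:ITSO_SVC1:admin>chrcrelationship -cycleperiodseconds 600 GMREL3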
The recovery point objective (RPO) depends on how long the FlashCopy takes to complete. If
the FlashCopy completes within the cycling time, the maximum RPO = 2 x the cycling time;
otherwise, the RPO = 2 x the copy completion time.
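For example, with the default cycling period of 300 seconds, if each copy cycle completes within those 300 seconds, the maximum RPO is approximately 600 seconds (10 minutes). If a cycle instead takes 450 seconds to complete, the maximum RPO grows to approximately 900 seconds.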
You can estimate the current RPO using the new freeze_time rcrelationship property. It is
the time of the last consistent image that is present at the secondary. Figure 9-13 on
page 595 shows the cycling mode with Change Volumes.
Change Volume requirements
Follow these rules for the Change Volume:
The Change Volume can be a thin-provisioned volume.
It must be the same size as the primary and secondary volumes.
The Change Volume must be in the same I/O Group as the primary and secondary
volumes.
It cannot be used for the user's remote copy or FlashCopy mappings.
You must have a Change Volume for both the primary and secondary volumes.
You cannot manipulate it like a normal FlashCopy mapping.
In this section, we show how to change the cycling mode of the stand-alone Global Mirror
relationship (GMREL3) and the Consistency Group CG_W2K3_GM Global Mirror relationships
(GMREL1 and GMREL2).
Figure 9-13 Global Mirror with Change Volumes
We assume that the source and target volumes have already been created and that the ISLs
and zoning are in place, enabling the SAN Volume Controller systems to communicate. We
also assume that the Global Mirror relationship has been already established.
To change the Global Mirror to cycling mode with Change Volumes, perform the following
steps:
1. Create thin-provisioned Change Volumes for the primary and secondary volumes at both
sites.
2. Stop the stand-alone relationship GMREL3 to change the cycling mode at the primary site.
3. Set the cycling mode on the stand-alone relationship GMREL3 at the primary site.
4. Set the Change Volume on the master volume relationship GMREL3 at the primary site.
5. Set the Change Volume on the auxiliary volume relationship GMREL3 at the secondary site.
6. Start the stand-alone relationship GMREL3 in cycling mode at the primary site.
7. Stop the Consistency Group CG_W2K3_GM to change the cycling mode at the primary site.
8. Set the cycling mode on the Consistency Group at the primary site.
9. Set the Change Volume on the master volume relationship GMREL1 of the Consistency
Group CG_W2K3_GM at the primary site.
10.Set the Change Volume on the auxiliary volume relationship GMREL1 at the secondary site.
11.Set the Change Volume on the master volume relationship GMREL2 of the Consistency
Group CG_W2K3_GM at the primary site.
12.Set the Change Volume on the auxiliary volume relationship GMREL2 at the secondary site.
13.Start the Consistency Group CG_W2K3_GM in the cycling mode at the primary site.
9.15.20 Creating the thin-provisioned Change Volumes
We start the setup by creating thin-provisioned Change Volumes for the primary and
secondary volumes at both sites, as shown in Example 9-185.
Example 9-185 Creating the thin-provisioned volumes for Global Mirror cycling mode
IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20%
-autoexpand -grainsize 32 -name GM_DB_Pri_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20%
-autoexpand -grainsize 32 -name GM_DBLog_Pri_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC1:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20%
-autoexpand -grainsize 32 -name GM_App_Pri_CHANGE_VOL
Virtual Disk, id [5], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20%
-autoexpand -grainsize 32 -name GM_DB_Sec_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20%
-autoexpand -grainsize 32 -name GM_DBLog_Sec_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC4:admin>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb -rsize 20%
-autoexpand -grainsize 32 -name GM_App_Sec_CHANGE_VOL
Virtual Disk, id [5], successfully created
9.15.21 Stopping the stand-alone remote copy relationship
We now display the remote copy relationships to ensure that they are in sync, and then we
stop the stand-alone relationship GMREL3, as shown in Example 9-186.
Example 9-186 Stopping the remote copy stand-alone relationship
IBM_2145:ITSO_SVC1:admin>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name
aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary
consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 GMREL1 000002006BE04FC4 ITSO_SVC1 0 GM_DB_Pri
0000020061C06FCA ITSO_SVC4 0 GM_DB_Sec aux 0
CG_W2K3_GM consistent_synchronized 50 global
none
1 GMREL2 000002006BE04FC4 ITSO_SVC1 1 GM_DBLog_Pri
0000020061C06FCA ITSO_SVC4 1 GM_DBLog_Sec aux 0
CG_W2K3_GM consistent_synchronized 50 global
none
2 GMREL3 000002006BE04FC4 ITSO_SVC1 2 GM_App_Pri
0000020061C06FCA ITSO_SVC4 2 GM_App_Sec aux
consistent_synchronized 50 global none
IBM_2145:ITSO_SVC1:admin>stoprcrelationship GMREL3
9.15.22 Setting the cycling mode on the stand-alone remote copy relationship
In Example 9-187, we set the cycling mode on the relationship using the chrcrelationship
command. Note that the cyclingmode and masterchange parameters cannot be entered in the
same command.
Example 9-187 Setting the cycling mode
IBM_2145:ITSO_SVC1:admin>chrcrelationship -cyclingmode multi GMREL3
9.15.23 Setting the Change Volume on the master volume
In Example 9-188, we set the Change Volume for the primary volume. A display shows the
name of the master Change Volume.
Example 9-188 Setting the Change Volume
IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_App_Pri_CHANGE_VOL GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
9.15.24 Setting the Change Volume on the auxiliary volume
In Example 9-189, we set the Change Volume on the auxiliary volume in the secondary site.
From the display, we can see the name of the volume.
Example 9-189 Setting the Change Volume on the auxiliary volume
IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_App_Sec_CHANGE_VOL 2
IBM_2145:ITSO_SVC4:admin>
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
9.15.25 Starting the stand-alone relationship in the cycling mode
In Example 9-190, we start the stand-alone relationship GMREL3. After a few minutes, we
check the freeze_time parameter to see how it changes.
Example 9-190 Starting the stand-alone relationship in the cycling mode
IBM_2145:ITSO_SVC1:admin>startrcrelationship GMREL3
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/37/20
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/42/25
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
9.15.26 Stopping the Consistency Group to change the cycling mode
In Example 9-191, we stop the Consistency Group, which contains two relationships. The
Consistency Group must be stopped before Global Mirror can be changed to cycling mode.
The display shows that the state of the Consistency Group changes to consistent_stopped.
Example 9-191 Stopping the Consistency Group to change the cycling mode
IBM_2145:ITSO_SVC1:admin>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.15.27 Setting the cycling mode on the Consistency Group
In Example 9-192, we change the cycling mode of the Consistency Group CG_W2K3_GM. To
change it, we need to stop the Consistency Group; otherwise, the command fails.
Example 9-192 Setting the Global Mirror cycling mode on the Consistency Group
IBM_2145:ITSO_SVC1:admin>chrcconsistgrp -cyclingmode multi CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.15.28 Setting the Change Volume on the master volume relationships of the
Consistency Group
In Example 9-193 on page 600, we change both of the relationships of the Consistency
Group to add the Change Volumes to the primary volumes. The displays show the names of the
master Change Volumes.
Example 9-193 Setting the Change Volume on the master volume
IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_DB_Pri_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC1:admin>
IBM_2145:ITSO_SVC1:admin>chrcrelationship -masterchange GM_DBLog_Pri_CHANGE_VOL GMREL2
IBM_2145:ITSO_SVC1:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
9.15.29 Setting the Change Volumes on the auxiliary volumes
In Example 9-194, we change both of the relationships of the Consistency Group to add the
Change Volumes to the secondary volumes. The display shows the names of the auxiliary
Change Volumes.
Example 9-194 Setting the Change Volumes on the auxiliary volumes
IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_DB_Sec_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id 3
aux_change_vdisk_name GM_DB_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC4:admin>chrcrelationship -auxchange GM_DBLog_Sec_CHANGE_VOL GMREL2
IBM_2145:ITSO_SVC4:admin>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id 4
aux_change_vdisk_name GM_DBLog_Sec_CHANGE_VOL
9.15.30 Starting the Consistency Group CG_W2K3_GM in the cycling mode
In Example 9-195, we start the Consistency Group in the cycling mode. Looking at the
freeze_time field in the two displays, you can see that the Consistency Group has been started
in the cycling mode and is taking periodic consistency images.
Example 9-195 Starting the Consistency Group with cycling mode
IBM_2145:ITSO_SVC1:admin>startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/02/33
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC1:admin>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC1
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/07/42
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
9.16 Service and maintenance
In this section, we describe the various service and maintenance tasks that you can execute
within the SAN Volume Controller environment.
9.16.1 Upgrading software
In this section, we explain how to upgrade the SAN Volume Controller software.
Package numbering and version
The version of a software upgrade package is expressed as four positive integers that are
separated by periods, for example, 7.2.0.0. Each software package is given a unique version
number.
Check the recommended software levels at this website:
http://www.ibm.com/systems/storage/software/virtualization/svc/index.html
SAN Volume Controller software upgrade test utility
The SAN Volume Controller Software Upgrade Test Utility, which resides on the Master
Console, checks the software levels in the system against the recommended levels, which are
documented on the support website. You will be informed if the software levels are current or
if you need to download and install newer levels. You can download the utility and installation
instructions from this link:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
After the software file has been uploaded to the system (to the /home/admin/upgrade
directory), you can select the software and apply it to the system by using either the
management GUI or the applysoftware CLI command. When a new code level is applied, it is
automatically installed on all of the nodes within the system.
The underlying command-line tool runs the sw_preinstall script. This script checks the
validity of the upgrade file and whether it can be applied over the current level. If the upgrade
file is unsuitable, the pre-install script deletes the files, which prevents the buildup of invalid
files on the system.
Precaution before you perform the upgrade
Software installation is normally considered to be a client's task. The SAN Volume Controller
supports concurrent software upgrade: you can perform the software upgrade concurrently
with user I/O operations and certain management activities. However, only a limited set of CLI
commands are operational from the time that the install command starts until the upgrade
operation has either completed successfully or been backed out. Certain commands fail with a
message indicating that a software upgrade is in progress.
Before you upgrade the SAN Volume Controller software, ensure that all I/O paths between all
hosts and SANs are working; otherwise, the applications might experience I/O failures during
the software upgrade. You can check the paths by using the Subsystem Device Driver (SDD)
datapath query commands. Example 9-196 shows the output.
Example 9-196 Query adapter
#datapath query adapter
Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 fscsi0 NORMAL ACTIVE 1445 0 4 4
1 fscsi1 NORMAL ACTIVE 1888 0 4 4
Important: The support for migration from 6.3.x.x to 7.2.x.x is limited. Check with your
service representative for recommended steps.
#datapath query device
Total Devices : 2
DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 OPEN NORMAL 0 0
1 fscsi1/hdisk7 OPEN NORMAL 972 0
DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk4 OPEN NORMAL 784 0
1 fscsi1/hdisk8 OPEN NORMAL 0 0
Verify that your uninterruptible power supply unit configuration is also set up correctly (even if
your system is running without problems). Specifically, make sure that the following conditions
are true:
Your uninterruptible power supply units are all getting their power from an external source,
and they are not daisy chained. Make sure that each uninterruptible power supply unit is
not supplying power to another node's uninterruptible power supply unit.
The power cable and the serial cable that come from each node go back to the same
uninterruptible power supply unit. If the cables are crossed and go back to separate
uninterruptible power supply units, then during the upgrade, while one node is shut down,
another node might also be shut down mistakenly.
You must also ensure that all I/O paths are working for each host that runs I/O operations to
the SAN during the software upgrade. You can check the I/O paths by using the datapath
query commands.
You do not need to check for hosts that have no active I/O operations to the SAN during the
software upgrade.
Upgrade procedure
To upgrade the SAN Volume Controller system software, perform the following steps:
1. Before starting the upgrade, you must back up the configuration (see 9.17, Backing up
the SAN Volume Controller system configuration on page 619) and save the backup
config file in a safe place.
2. Before starting to transfer the software code to the clustered system, clear the previously
uploaded upgrade files in the /home/admin/upgrade SAN Volume Controller system
directory, as shown in Example 9-197.
Write-through mode: During a software upgrade, there are periods when not all of the
nodes in the system are operational, and as a result, the cache operates in write-through
mode. Note that write-through mode has an effect on the throughput, latency, and
bandwidth aspects of performance.
Important: Do not share the SAN Volume Controller uninterruptible power supply unit with
any other devices.
Example 9-197 cleardumps -prefix /home/admin/upgrade command
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /home/admin/upgrade
IBM_2145:ITSO_SVC1:admin>
3. Save the data collection for support diagnosis in case of problems, as shown in
Example 9-198.
Example 9-198 svc_snap -c command
IBM_2145:ITSO_SVC1:admin>svc_snap -c
Collecting system information...
Creating Config Backup
Dumping error log...
Creating
Snap data collected in /dumps/snap.110711.111003.111031.tgz
4. List the dump that was generated by the previous command, as shown in Example 9-199.
Example 9-199 lsdumps command
IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0 svc.config.cron.bak_108283
1 sel.110711.trc
2 rtc.race_mq_log.txt.110711.trc
3 ethernet.110711.trc
4 svc.config.cron.bak_110711
5 svc.config.cron.xml_110711
6 svc.config.cron.log_110711
7 svc.config.cron.sh_110711
8 svc.config.backup.bak_110711
9 svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz
5. Save the generated dump in a safe place using the pscp command, as shown in
Example 9-200.
Note: The pscp command will not work if you have not uploaded your PuTTY SSH
private key or if you are not using the user ID and password with the PuTTY Pageant
agent, as shown in Figure 9-14.
Figure 9-14 Pageant example
Example 9-200 pscp -load command
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1
[email protected]:/dumps/snap
.110711.111003.111031.tgz c:snap.110711.111003.111031.tgz
snap.110711.111003.111031 | 4999 kB | 4999.8 kB/s | ETA: 00:00:00 |
100%
6. Upload the new software package using PuTTY Secure Copy. Enter the command, as
shown in Example 9-201.
Example 9-201 pscp -load command
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1 c:\IBM2145_INSTALL_7.2.0.0.
[email protected]:/home/admin/upgrade
110926.tgz.gpg | 353712 kB | 11053.5 kB/s | ETA: 00:00:00 | 100%
7. Upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure
Copy. Enter the command, as shown in Example 9-202.
Example 9-202 Upload utility
C:\>pscp -load ITSO_SVC1 IBM2145_INSTALL_svcupgradetest_7.2
[email protected]:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%
8. Verify that the packages were successfully delivered through the PuTTY command-line
application by entering the lsdumps command, as shown in Example 9-203.
Example 9-203 lsdumps command
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /home/admin/upgrade
id filename
0 IBM2145_INSTALL_7.2.0.0.
1 IBM2145_INSTALL_svcupgradetest_7.2
9. Now that the packages are uploaded, install the SAN Volume Controller Software Upgrade
Test Utility, as shown in Example 9-204.
Example 9-204 applysoftware command
IBM_2145:ITSO_SVC1:admin>applysoftware -file IBM2145_INSTALL_svcupgradetest_7.2
CMMVC6227I The package installed successfully.
10.Using the following command, test the upgrade for known issues that might prevent a
software upgrade from completing successfully, as shown in Example 9-205.
Example 9-205 svcupgradetest command
IBM_2145:ITSO_SVC1:admin>svcupgradetest -v 7.2.0.0
svcupgradetest version 7.2 Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the svcupgradetest command produces any errors, troubleshoot the
errors using the maintenance procedures before continuing.
11.Use the applysoftware command to apply the software upgrade, as shown in
Example 9-206.
Example 9-206 Applysoftware upgrade command example
IBM_2145:ITSO_SVC1:admin>applysoftware -file IBM2145_INSTALL_7.2.0.0
While the upgrade runs, you can check the status as shown in Example 9-207.
Example 9-207 Checking the update status
IBM_2145:ITSO_SVC1:admin>lssoftwareupgradestatus
status
upgrading
12.The new code is distributed and applied to each node in the SAN Volume Controller
system. After installation, each node is automatically restarted one at a time. If a node
does not restart automatically during the upgrade, you must repair it manually.

Performance: During this process, both your CLI and GUI vary from sluggish (slow) to
unresponsive. The important thing is that I/O to the hosts can continue throughout this
process.
13.Eventually, both nodes display Cluster: on line one on the SAN Volume Controller front
panel and the name of your system on line two of the panel. Be prepared for a wait (in our
case, we waited approximately 40 minutes).
14.To verify that the upgrade was successful, you can perform either of the following options:
You can run the lssystem and lsnodevpd commands, as shown in Example 9-208. (We
truncated the lssystem and lsnodevpd information for this example.)
Example 9-208 lssystem and lsnodevpd commands
IBM_2145:ITSO_SVC1:admin>lssystem
id 000002006BE04FC4
name ITSO_SVC1
location local
partnership
bandwidth
.
.
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.2.0.0 (build 86.6.1310161200)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
gm_max_host_delay 5
.
.
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance
IBM_2145:ITSO_SVC1:admin>lsnodevpd 1
id 1
system board: 23 fields
part_number 31P1090
.
.
software: 4 fields
id 1
node_name SVC1N1
WWNN 0x50050768010027e2
code_level 7.2.0.0 (build 86.6.1310161200)
Or you can check whether the code installation has completed without error by copying
the log to your management workstation, as explained in the next section. Open the
event log in WordPad and search for the "Software Install completed" message.
At this point, you have completed the required tasks to upgrade the SAN Volume Controller
software.
9.16.2 Running the maintenance procedures
Use the finderr command to generate a list of any unfixed errors in the system. This
command analyzes the last generated log that resides in the /dumps/elogs/ directory on the
system.
To generate a new log before analyzing unfixed errors, run the dumperrlog command
(Example 9-209).
Example 9-209 dumperrlog command
IBM_2145:ITSO_SVC1:admin>dumperrlog
This command generates an errlog_timestamp file, such as errlog_110711_111003_090500,
where:
errlog is part of the default prefix for all event log files.
110711 is the panel name of the current configuration node.
111003 is the date (YYMMDD).
090500 is the time (HHMMSS).
You can add the -prefix parameter to your command to change the default prefix of errlog
to something else (Example 9-210).
Example 9-210 dumperrlog -prefix command
IBM_2145:ITSO_SVC1:admin>dumperrlog -prefix ITSO_SVC1_errlog
This command creates a file called ITSO_SVC1_errlog_110711_111003_141111.
To see the file name, enter the following command (Example 9-211).
Example 9-211 lsdumps command
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/elogs
id filename
0 errlog_110711_111003_111056
1 testerrorlog_110711_111003_135358
2 ITSO_SVC1_errlog_110711_111003_141111
After you generate your event log, you can issue the finderr command to scan the event log
for any unfixed events, as shown in Example 9-212.
Example 9-212 finderr command
IBM_2145:ITSO_SVC1:admin>finderr
Highest priority unfixed error code is [1550]
As you can see, we have one unfixed event on our system. To learn more about this unfixed
event, we download the event log onto our personal computer and examine it in more detail.
We use PuTTY Secure Copy to copy the file from the system to our local management
workstation, as shown in Example 9-213.
Example 9-213 pscp command: Copy event logs off of the SVC
In W2K3: Start -> Run -> cmd
C:\Program Files (x86)\PuTTY>pscp -load ITSO_SVC1 admin@10.18.228.81:/dumps/elogs/ITSO_SVC1_errlog_110711_111003_141111 c:\ITSO_SVC1_errlog_110711_111003_141111
ITSO_SVC1_errlog_110711_1 | 6 kB | 6.8 kB/s | ETA: 00:00:00 | 100%
C:\Program Files (x86)\PuTTY>
Maximum number of event log dump files: A maximum of ten event log dump files per
node will be kept on the system. When the eleventh dump is made, the oldest existing
dump file for that node will be overwritten. Note that the directory might also hold log files
that are retrieved from other nodes. These files are not counted.
The SAN Volume Controller will delete the oldest file (when necessary) for this node to
maintain the maximum number of files. The SAN Volume Controller will not delete files
from other nodes unless you issue the cleardumps command.
To use the Run option, you must know where your pscp.exe file is located. In our case, it is in
the C:\Program Files (x86)\PuTTY folder.
This command copies the file called ITSO_SVC1_errlog_110711_111003_141111 to the C:\
directory on our local workstation and calls the file ITSO_SVC1_errlog_110711_111003_141111.
Open the file in WordPad. (Notepad does not format the window as well.) You will see
information that is similar to the information that is shown in Example 9-214. (We truncated
this list for the purposes of this example.)
Example 9-214 errlog in WordPad
//-------------------
// Error Log Entries
//-------------------
Error Log Entry 0
Node Identifier : SVC1N2
Object Type : node
Object ID : 2
Copy ID :
Sequence Number : 101
Root Sequence Number : 101
First Error Timestamp : Mon Oct 3 10:50:13 2011
: Epoch + 1317664213
Last Error Timestamp : Mon Oct 3 10:50:13 2011
: Epoch + 1317664213
Error Count : 1
Error ID : 980221 : Error log cleared
Error Code :
Status Flag : SNMP trap raised
Type Flag : INFORMATION
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
By scrolling through the list, or by searching for the term unfixed, you can find more detail about
the problem. You might see more entries in the error log that have a status of unfixed.
After rectifying the problem, you can mark the event as fixed in the log by issuing the
cherrstate command against its sequence number; see Example 9-215.
Example 9-215 cherrstate command
IBM_2145:ITSO_SVC1:admin>cherrstate -sequencenumber 106
If you accidentally mark the wrong event as fixed, you can mark it as unfixed again by
entering the same command and appending the -unfix flag to the end, as shown in
Example 9-216.
Example 9-216 -unfix flag
IBM_2145:ITSO_SVC1:admin>cherrstate -sequencenumber 106 -unfix
9.16.3 Setting up SNMP notification
To set up event notification, use the mksnmpserver command. Example 9-217 shows an
example of the mksnmpserver command.
Example 9-217 mksnmpserver command
IBM_2145:ITSO_SVC1:admin>mksnmpserver -error on -warning on -info on -ip
9.43.86.160 -community SVC
SNMP Server id [0] successfully created
This command sends error, warning, and informational event notifications to the SNMP
manager with the IP address 9.43.86.160, using the SNMP community name SVC.
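To check the SNMP notification settings that are now in effect, you can list the configured
SNMP servers. The following sketch assumes the lssnmpserver command, the listing
counterpart of mksnmpserver (output omitted):
IBM_2145:ITSO_SVC1:admin>lssnmpserver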
9.16.4 Setting the syslog event notification
In addition to email and SNMP traps, the SAN Volume Controller supports sending event
notifications to a defined syslog server.
The syslog protocol is a client-server standard for forwarding log messages from a sender to
a receiver on an IP network. You can use syslog to integrate log messages from various types
of systems into a central repository. You can configure the SAN Volume Controller to send
information to up to six syslog servers.
You use the mksyslogserver command to configure the SAN Volume Controller using the
CLI, as shown in Example 9-218 on page 612.
Using this command with the -h parameter gives you information about all of the available
options. In our example, we only configure the SAN Volume Controller to use the default
values for our syslog server.
Example 9-218 Configuring the syslog
IBM_2145:ITSO_SVC1:admin>mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [0] successfully created
After we have configured our syslog server, we can display the current syslog server
configurations in our system, as shown in Example 9-219.
Example 9-219 lssyslogserver command
IBM_2145:ITSO_SVC1:admin>lssyslogserver
id name IP_address facility error warning info
0 Syslogserv1 10.64.210.231 0 on on on
1 Syslogserv1 10.64.210.231 0 on on on
9.16.5 Configuring error notification using an email server
The SAN Volume Controller can use an email server to send event notification and inventory
emails to email users. It can transmit any combination of error, warning, and informational
notification types. The SAN Volume Controller supports up to six email servers to provide
redundant access to the external email network. The SAN Volume Controller uses the email
servers in sequence until the email is successfully sent from the SAN Volume Controller.
The attempt is successful when the SAN Volume Controller gets a positive acknowledgement
from an email server that the email has been received by the server.
If no port is specified, port 25 is the default port, as shown in Example 9-220.
Example 9-220 The mkemailserver command syntax
IBM_2145:ITSO_SVC1:admin>mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO_SVC1:admin>lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25
We can configure an email user that will receive email notifications from the SAN Volume
Controller system. We can define up to 12 users to receive emails from our SAN Volume Controller.
Using the lsemailuser command, we can verify which user is already registered and what
type of information is sent to that user, as shown in Example 9-221 on page 613.
Example 9-221 lsemailuser command
IBM_2145:ITSO_SVC1:admin>lsemailuser
id name address
user_type error warning info inventory
0 IBM_Support_Center [email protected]
support on off off on
We can also create a new user, as shown in Example 9-222, for a SAN administrator.
Example 9-222 mkemailuser command
IBM_2145:ITSO_SVC1:admin>mkemailuser -address [email protected] -error on -warning
on -info on -inventory on
User, id [0], successfully created

Important: Before the SAN Volume Controller can start sending emails, we must run the
startemail command, which enables this service.
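After the email servers and users are defined, the notification service can be enabled from
the CLI. The following is a minimal sketch; we assume that no additional parameters are
required for our configuration:
IBM_2145:ITSO_SVC1:admin>startemail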
9.16.6 Analyzing the event log
The following types of events are logged in the event log. An event is an occurrence of
significance to a task or system. Events can include the completion or failure of an operation,
a user action, or a change in the state of a process.
Node event codes now have two classifications:
Critical events: Critical events put the node into the service state and prevent the node
from joining the system. The critical events are numbered 500 - 699.
Non-critical events: Non-critical events are partial hardware faults, for example, one
power-supply unit (PSU) failed in the 2145-CF8. The non-critical events are numbered
800 - 899.

Deleting a node: Deleting a node from a system will cause the node to enter the
service state, as well.
To display the event log, use the lseventlog command, as shown in the following output.
IBM_2145:ITSO_SVC1:admin>lseventlog -count 2
sequence_number last_timestamp object_type object_id object_name copy_id status fixed
event_id error_code description
102 111003105018 cluster ITSO_SVC1 message no
981004 FC discovery occurred, no configuration changes were detected
103 111003111036 cluster ITSO_SVC1 message no
981004 FC discovery occurred, no configuration changes were detected
IBM_2145:ITSO_SVC1:admin>lseventlog 103
sequence_number 103
first_timestamp 111003111036
first_timestamp_epoch 1317665436
last_timestamp 111003111036
last_timestamp_epoch 1317665436
object_type cluster
object_id
object_name ITSO_SVC1
copy_id
reporting_node_id 1
reporting_node_name SVC1N1
root_sequence_number
event_count 1
status message
fixed no
auto_fixed no
notification_type informational
event_id 981004
event_id_text FC discovery occurred, no configuration changes were detected
error_code
error_code_text
sense1 01 01 00 00 7E 0B 00 00 04 02 00 00 01 00 01 00
sense2 00 00 00 00 10 00 00 00 08 00 08 00 00 00 00 00
sense3 00 00 00 00 00 00 00 00 F2 FF 01 00 00 00 00 00
sense4 0E 00 00 00 FC FF FF FF 03 00 00 00 07 00 00 00
sense5 00 00 06 00 00 00 00 00 00 00 00 00 00 00 00 00
sense6 00 00 00 00 03 00 00 00 00 00 00 00 00 00 00 00
sense7 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
sense8 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
These commands allow you to view the most recently generated events; you can specify the
-count parameter to define how many events to display. Use the method that is
described in 9.16.2, Running the maintenance procedures on page 609 to upload and
analyze the event log in more detail.
To clear the event log, you can issue the clearerrlog command, as shown in Example 9-223.
Example 9-223 clearerrlog command
IBM_2145:ITSO_SVC1:admin>clearerrlog
Do you really want to clear the log? y
Using the -force flag will stop any confirmation requests from appearing. When executed,
this command clears all of the entries from the event log. This process proceeds even if there
are unfixed errors in the log. It also clears any status events that are in the log.

Important: This command is a destructive command for the event log. Only use this
command when you have either rebuilt the system, or when you have fixed a major
problem that has caused many entries in the event log that you do not want to fix manually.
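For example, to clear the event log without being prompted for confirmation, you can add the
-force flag, as in the following sketch:
IBM_2145:ITSO_SVC1:admin>clearerrlog -force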
9.16.7 License settings
To change the licensing feature settings, use the chlicense command.
Before you change the licensing, you can display the licenses that you already have by
issuing the lslicense command, as shown in Example 9-224.
Example 9-224 lslicense command
IBM_2145:ITSO_SVC1:admin>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 500
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off
The current license settings for the system are displayed in the viewing license settings log
window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror,
Global Mirror, or Virtualization features. The license settings log window also shows the
storage capacity that is licensed for virtualization. Typically, the license settings log contains
entries, because feature options must be set as part of the web-based system creation
process.
Consider, for example, that you have purchased an additional 5 TB of licensing for the Metro
Mirror and Global Mirror feature from your actual 20 TB license. Example 9-225 shows the
command that you enter.
Example 9-225 chlicense command
IBM_2145:ITSO_SVC1:admin>chlicense -remote 25
To turn a feature off, add 0 TB as the capacity for the feature that you want to disable.
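For illustration only (this step is not part of our scenario), the following sketch would disable
the FlashCopy license; it assumes that the FlashCopy capacity is controlled by a -flash
parameter, which corresponds to the license_flash field in the lslicense output:
IBM_2145:ITSO_SVC1:admin>chlicense -flash 0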
To verify that the changes that you have made are reflected in your SAN Volume Controller
configuration, you can issue the lslicense command (see Example 9-226).
Example 9-226 lslicense command: Verifying changes
IBM_2145:ITSO_SVC1:admin>lslicense
used_flash 0.00
used_remote 0.03
used_virtualization 0.75
license_flash 500
license_remote 25
license_virtualization 500
license_physical_disks 0
license_physical_flash off
license_physical_remote off
9.16.8 Listing dumps
Starting with SAN Volume Controller 6.3, a new command is available to list the dumps that
were generated over a period of time. You can use lsdumps with the -prefix parameter to
return a list of dumps in the appropriate directory. The command produces a list of the files in
the specified directory on the specified node. If no node is specified, the config node is used.
If no -prefix is set, the files in the /dumps directory are listed.
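For example, to list the files on a specific node rather than on the configuration node, you
can append the node name, as in the following sketch (the SVC1N2 node name is taken from
our lab configuration):
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps SVC1N2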
Error or event dump
The dumps that are contained in the /dumps/elogs directory are dumps of the contents of the
event log at the time that the dump was taken. You create an error or event log dump by using
the dumperrlog command. This command dumps the contents of the error or event log to the
/dumps/elogs directory.
If you do not supply a file name prefix, the system uses the default errlog_ file name prefix.
The full, default file name is errlog_NNNNNN_YYMMDD_HHMMSS. In this file name, NNNNNN is the
node front panel name. If the command is used with the -prefix option, the value that is
entered for the -prefix is used instead of errlog.
The lsdumps -prefix command lists all of the dumps in the /dumps/elogs directory
(Example 9-227).
Example 9-227 lsdumps -prefix /dumps/elogs
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/elogs
id filename
0 errlog_110711_111003_111056
1 testerrorlog_110711_111003_135358
2 ITSO_SVC1_errlog_110711_111003_141111
3 ITSO_SVC1_errlog_110711_111003_141620
4 errlog_110711_111003_154759
Featurization log dump
The dumps that are contained in the /dumps/feature directory are dumps of the featurization
log. A featurization log dump is created by using the dumpinternallog command. This
command dumps the contents of the featurization log to the /dumps/feature directory to a file
called feature.txt. Only one of these files exists, so every time that the dumpinternallog
command is run, this file is overwritten.
The lsdumps -prefix /dumps/feature command lists all of the dumps in the /dumps/feature
directory (Example 9-228).
Example 9-228 lsdumps with -prefix /dumps/feature command
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/feature
id filename
0 feature.txt
I/O trace dump
Dumps that are contained in the /dumps/iotrace directory are dumps of I/O trace data. The
type of data that is traced depends on the options that are specified by the settrace
command. The collection of the I/O trace data is started by using the starttrace command.
The I/O trace data collection is stopped when the stoptrace command is used. When the
trace is stopped, the data is written to the file.
The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name,
and prefix is the value that is entered by the user for the -filename parameter in the
settrace command.
The command to list all of the dumps in the /dumps/iotrace directory is lsdumps -prefix
/dumps/iotrace (Example 9-229).
Example 9-229 lsdumps with -prefix /dumps/iotrace command
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/iotrace
id iotrace_filename
0 tracedump_104643_080624_172208
1 iotrace_104643_080624_172451
I/O statistics dump
The dumps that are contained in the /dumps/iostats directory are the dumps of the I/O
statistics for the disks on the cluster. You create an I/O statistics dump by using the
startstats command. As part of this command, you can specify a time interval at which you
want the statistics to be written to the file (the default is 15 minutes). Every time that the time
interval is encountered, the I/O statistics that are collected up to that point are written to a file
in the /dumps/iostats directory.
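For example, to collect statistics every five minutes instead of the default 15 minutes, you
can specify the interval when you start the collection. The following is a sketch that assumes
the -interval parameter, with the value given in minutes:
IBM_2145:ITSO_SVC1:admin>startstats -interval 5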
The file names that are used for storing I/O statistics dumps take the form
Nm_stats_NNNNNN_YYMMDD_HHMMSS, Nv_stats_NNNNNN_YYMMDD_HHMMSS, Nn_stats_NNNNNN_YYMMDD_HHMMSS,
or Nd_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether the statistics are for MDisks, volumes,
nodes, or drives (as you can see in Example 9-230). In these file names, NNNNNN is the node
front panel name.
The command to list all of the dumps that are in the /dumps/iostats directory is lsdumps
-prefix /dumps/iostats (Example 9-230).
Example 9-230 lsdumps -prefix /dumps/iostats command
IBM_2145:ITSO_SVC1:admin>lsdumps -prefix /dumps/iostats
id filename
0 Nm_stats_110711_111003_125706
1 Nn_stats_110711_111003_125706
2 Nv_stats_110711_111003_125706
3 Nd_stats_110711_111003_125706
4 Nv_stats_110711_111003_131204
5 Nd_stats_110711_111003_131204
6 Nn_stats_110711_111003_131204
........
Software dump
The lsdumps command lists the contents of the /dumps directory. The general debug
information, software, application dumps, and live dumps are copied into this directory.
Example 9-231 shows the command.
Example 9-231 lsdumps command without prefix
IBM_2145:ITSO_SVC1:admin>lsdumps
id filename
0 svc.config.cron.bak_108283
1 sel.110711.trc
2 rtc.race_mq_log.txt.110711.trc
3 ethernet.110711.trc
4 svc.config.cron.bak_110711
5 svc.config.cron.xml_110711
6 svc.config.cron.log_110711
7 svc.config.cron.sh_110711
8 svc.config.backup.bak_110711
9 svc.config.backup.tmp.xml
10 110711.trc
11 svc.config.backup.xml_110711
12 svc.config.backup.now.xml
13 snap.110711.111003.111031.tgz
Other node dumps
The lsdumps command can accept a node identifier as input (for example, append the node
name to the end of any of the node dump commands). If this identifier is not specified, the list
of files on the current configuration node is displayed. If the node identifier is specified, the list
of files on that node is displayed.
However, files can only be copied from the current configuration node (using PuTTY Secure
Copy). Therefore, you must issue the cpdumps command to copy the files from a
non-configuration node to the current configuration node. Subsequently, you can copy them to
the management workstation using PuTTY Secure Copy.
For example, suppose that you discover a dump file and want to copy it to your management
workstation for further analysis. In this case, you must first copy the file to your current
configuration node.
To copy dumps from other nodes to the configuration node, use the cpdumps command.
In addition to the directory, you can specify a file filter. For example, if you specified
/dumps/elogs/*.txt, all of the files, in the /dumps/elogs directory, that end in .txt are copied.
Example 9-232 shows an example of the cpdumps command.
Wildcards: The following rules apply to the use of wildcards with the SAN Volume
Controller CLI:
The wildcard character is an asterisk (*).
The command can contain a maximum of one wildcard.
When you use a wildcard, you must surround the filter entry with double quotation
marks (""), for example:
>cleardumps -prefix "/dumps/elogs/*.txt"
Example 9-232 cpdumps command
IBM_2145:ITSO_SVC1:admin>cpdumps -prefix /dumps/configs n4
Now that you have copied the configuration dump file from Node n4 to your configuration
node, you can use PuTTY Secure Copy to copy the file to your management workstation for
further analysis.
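If you want to copy only a subset of the files from another node, you can combine cpdumps
with a wildcard filter, following the wildcard rules described earlier. For example, the following
sketch copies only the .txt files from the /dumps/elogs directory on node n4:
IBM_2145:ITSO_SVC1:admin>cpdumps -prefix "/dumps/elogs/*.txt" n4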
To clear the dumps, you can run the cleardumps command. Again, you can append the node
name if you want to clear dumps off of a node other than the current configuration node (the
default for the cleardumps command).
The commands in Example 9-233 clear all logs or dumps from the SAN Volume Controller
Node SVC1N2.
Example 9-233 cleardumps command
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/iostats SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/iotrace SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/feature SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/config SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /dumps/elog SVC1N2
IBM_2145:ITSO_SVC1:admin>cleardumps -prefix /home/admin/upgrade SVC1N2
9.17 Backing up the SAN Volume Controller system
configuration
You can back up your system configuration by using the Backing Up a Cluster Configuration
window or the CLI svcconfig command. In this section, we describe the overall procedure for
backing up your system configuration and the conditions that must be satisfied to perform a
successful backup.
The backup command extracts configuration data from the system and saves it to the
svc.config.backup.xml file in the /tmp directory. This process also produces an
svc.config.backup.sh file. You can study this file to see what other commands were issued
to extract information.
An svc.config.backup.log log is also produced. You can study this log for the details of what
was done and when it was done. This log also includes information about the other
commands that were issued.
Any pre-existing svc.config.backup.xml file is archived as the svc.config.backup.bak file.
The system only keeps one archive. We strongly suggest that you immediately move the .XML
file and related KEY files (see the following limitations) off the system for archiving. Then,
erase the files from the /tmp directory using the svcconfig clear -all command.
We further advise that you change all of the objects having default names to non-default
names. Otherwise, a warning is produced for objects with default names.
Also, the object with the default name is restored with its original name with an _r appended.
The prefix _(underscore) is reserved for backup and restore command usage. Do not use this
prefix in any object names.
9.17.1 Prerequisites
You must have the following prerequisites in place:
All nodes must be online.
No object name can begin with an underscore.
All objects must have non-default names, that is, names that are not assigned by the SAN
Volume Controller.
Although we advise that objects have non-default names at the time that the backup is taken,
this prerequisite is not mandatory. Objects with default names are renamed when they are
restored.
Example 9-234 shows an example of the svcconfig backup command.
Example 9-234 svcconfig backup command
IBM_2145:ITSO_SVC1:admin>svcconfig backup
..................
CMMVC6130W Cluster ITSO_SVC4 with inter-cluster partnership fully_configured will
not be restored
..................................................................................
.......
CMMVC6155I SVCCONFIG processing completed successfully
As you can see in Example 9-234 on page 620, we received a CMMVC6130W Cluster
ITSO_SVC4 with inter-cluster partnership fully_configured will not be restored
message. This message indicates that individual systems in a multisystem environment will
need to be backed up individually.
In the event that recovery is required, recovery will only be performed on the system where
the recovery commands are executed.
Example 9-235 shows the pscp command.
Example 9-235 pscp command
C:\Program Files\PuTTY>pscp -load ITSO_SVC1
admin@10.18.228.81:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%
The following scenario illustrates the value of the configuration backup:
1. Use the svcconfig command to create a backup file on the clustered system that contains
details about the current system configuration.
Important: The tool backs up logical configuration data only, not client data. It does not
replace a traditional data backup and restore tool. Instead, this tool supplements a
traditional data backup and restore tool with a way to back up and restore the client's
configuration.
To provide a complete backup and disaster recovery solution, you must back up both user
(non-configuration) data and configuration (non-user) data. After the restoration of the SAN
Volume Controller configuration, you must fully restore user (non-configuration) data to the
system's disks.
2. Store the backup configuration on a form of tertiary storage. You must copy the backup file
from the clustered system or it becomes lost if the system crashes.
3. If a sufficiently severe failure occurs, the system might be lost. Both the configuration data
(for example, the system definitions of hosts, I/O Groups, MDGs, and MDisks) and the
application data on the virtualized disks are lost.
In this scenario, it is assumed that the application data can be restored from normal client
backup procedures. However, before you can perform this restoration, you must reinstate
the system as it was configured at the time of the failure. Therefore, you restore the same
MDGs, I/O Groups, host definitions, and volumes that existed prior to the failure. Then,
you can copy the application data back onto these volumes and resume operations.
4. Recover the hardware: hosts, SAN Volume Controllers, disk controller systems, disks, and
SAN fabric. The hardware and SAN fabric must physically be the same as the hardware
and SAN fabric that were used before the failure.
5. Re-initialize the clustered system with the configuration node; the other nodes will be
recovered when restoring the configuration.
6. Restore your clustered system configuration using the backup configuration file that was
generated prior to the failure.
7. Restore the data on your volumes using your preferred restoration solution or with help
from IBM Support.
8. Resume normal operations.
9.18 Restoring the SAN Volume Controller clustered system
configuration
Important: It is extremely important that you always consult IBM Support before you
restore the SAN Volume Controller clustered system configuration from the backup. IBM
Support can assist you in analyzing the root cause of why the system configuration was
lost.

After the svcconfig restore -execute command is started, consider any prior user data on
the volumes destroyed. The user data must be recovered through your usual application data
backup and restore process.
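The restore itself is a two-phase operation. The following sketch shows the typical sequence
(run it only under the direction of IBM Support; command output is omitted): a prepare phase
that validates the backup file against the current system, followed by an execute phase that
re-creates the configuration:
IBM_2145:ITSO_SVC1:admin>svcconfig restore -prepare
IBM_2145:ITSO_SVC1:admin>svcconfig restore -execute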
See the IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line
Interface User's Guide, GC27-2287, for more information about this topic.
For a detailed description of the SAN Volume Controller configuration backup and restore
functions, see the IBM TotalStorage Open Software Family SAN Volume Controller:
Configuration Guide, GC27-2286.
9.18.1 Deleting the configuration backup
We describe in detail the tasks that you can perform to delete the configuration backup that is
stored in the configuration file directory on the system. Never clear this configuration without
having a backup of your configuration that is stored in a separate, secure place.
When using the clear command, you erase the files in the /tmp directory. This command
does not clear the running configuration or prevent the system from working, but it clears
all of the configuration backup files that are stored in the /tmp directory; see
Example 9-236.
Example 9-236 svcconfig clear command
IBM_2145:ITSO_SVC1:admin>svcconfig clear -all
.
CMMVC6155I SVCCONFIG processing completed successfully
9.19 Working with the SAN Volume Controller Quorum MDisks
In this section, we show how to list and change the SAN Volume Controller system Quorum
Managed Disks (MDisks).
9.19.1 Listing the SAN Volume Controller Quorum MDisks
To list SAN Volume Controller system Quorum MDisks and view their numbers and status,
issue the lsquorum command, as shown in Example 9-237.
For more information about SAN Volume Controller Quorum Disk planning and configuration,
see Chapter 3, Planning and configuration on page 71.
Example 9-237 lsquorum command and detail
IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 1 mdisk1 2 ITSO-DS3500 no mdisk no
1 online 0 mdisk0 2 ITSO-DS3500 yes mdisk no
2 online 3 mdisk3 2 ITSO-DS3500 no mdisk no
IBM_2145:ITSO_SVC1:admin>lsquorum 1
quorum_index 1
status online
id 0
name mdisk0
controller_id 2
controller_name ITSO-DS3500
active yes
object_type mdisk
override no
9.19.2 Changing the SAN Volume Controller Quorum Disks
To move one of your SAN Volume Controller Quorum MDisks from one MDisk to another, or
from one storage subsystem to another, use the chquorum command, as shown in
Example 9-238.
Example 9-238 chquorum command
IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 1 mdisk1 2 ITSO-DS3500 no mdisk no
1 online 0 mdisk0 2 ITSO-DS3500 yes mdisk no
2 online 3 mdisk3 2 ITSO-DS3500 no mdisk no
IBM_2145:ITSO_SVC1:admin>chquorum -mdisk 9 2
IBM_2145:ITSO_SVC1:admin>lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 1 mdisk1 2 ITSO-DS3500 no mdisk no
1 online 0 mdisk0 2 ITSO-DS3500 yes mdisk no
2 online 9 mdisk9 3 ITSO-DS5000 no mdisk no
As you can see in Example 9-238, the quorum index 2 has been moved from MDisk3 on the
ITSO-DS3500 controller to MDisk9 on the ITSO-DS5000 controller.
9.20 Working with the Service Assistant menu
SAN Volume Controller V6.1 introduced a new method for performing service tasks on the
system. In addition to being able to perform service tasks from the front panel, you can now
also service a node through an Ethernet connection using either a web browser or the CLI.
The web browser runs a new service application called the Service Assistant. Service
Assistant offers almost all of the function that was previously available through the front panel.
Now, the function is available from the Ethernet connection with an interface that is easier to
use and that you can use remotely from the system.
9.20.1 SAN Volume Controller CLI Service Assistant menu
A set of commands relating to the new method for performing service tasks on the system
has been introduced.
Two major command sets are available:
The sainfo command set allows you to query the various components within the SAN
Volume Controller environment.
The satask command set allows you to make changes to the various components within
the SAN Volume Controller.
When the command syntax is shown, you will see certain parameters in square brackets, for
example [parameter], indicating that the parameter is optional in most if not all instances.
Any information that is not in square brackets is required information. You can view the syntax
of a command by entering one of the following commands:
sainfo -?: Shows a complete list of information commands
satask -?: Shows a complete list of task commands
sainfo commandname -?: Shows the syntax of information commands
satask commandname -?: Shows the syntax of task commands
Example 9-239 shows the two new sets of commands that are introduced with Service
Assistant.
Example 9-239 sainfo and satask commands
IBM_2145:ITSO_SVC1:admin>sainfo -h
The following actions are available with this command :
lscmdstatus
lsfiles
lsservicenodes
lsservicerecommendation
lsservicestatus
IBM_2145:ITSO_SVC1:admin>satask -h
The following actions are available with this command :
chenclosurevpd
chnodeled
chserviceip
chwwnn
cpfiles
installsoftware
leavecluster
mkcluster
rescuenode
setlocale
setpacedccu
settempsshkey
snap
startservice
stopnode
stopservice
t3recovery

Important: You must use the sainfo and satask command sets under the direction of IBM
Support. The incorrect use of these commands can lead to unexpected results.
9.21 SAN troubleshooting and data collection
When we encounter a SAN issue, the SAN Volume Controller is often extremely helpful in
troubleshooting the SAN, because the SAN Volume Controller is at the center of the
environment through which the communication travels.
SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, contains a
detailed description of how to troubleshoot and collect data from the SAN Volume Controller:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Use the lsfabric command regularly to obtain a complete picture of the devices that are
connected and visible from the SAN Volume Controller cluster through the SAN. The
lsfabric command generates a report that displays the FC connectivity between nodes,
controllers, and hosts.
Example 9-240 shows the output of an lsfabric command.
Example 9-240 lsfabric command
IBM_2145:ITSO_SVC1:admin>lsfabric
remote_wwpn remote_nportid id node_name local_wwpn local_port local_nportid
state name cluster_name type
5005076801405034 030A00 1 SVC1N1 50050768014027E2 1 030800
active SVC1N2 ITSO_SVC1 node
5005076801405034 030A00 1 SVC1N1 50050768011027E2 3 030900
active SVC1N2 ITSO_SVC1 node
5005076801305034 040A00 1 SVC1N1 50050768013027E2 2 040800
active SVC1N2 ITSO_SVC1 node
5005076801305034 040A00 1 SVC1N1 50050768012027E2 4 040900
active SVC1N2 ITSO_SVC1 node
50050768012027E2 040900 2 SVC1N2 5005076801305034 2 040A00
active SVC1N1 ITSO_SVC1 node
50050768012027E2 040900 2 SVC1N2 5005076801205034 4 040B00
active SVC1N1 ITSO_SVC1 node
500507680120505C 040F00 1 SVC1N1 50050768013027E2 2 040800
active SVC4N2 ITSO_SVC4 node
500507680120505C 040F00 1 SVC1N1 50050768012027E2 4 040900
active SVC4N2 ITSO_SVC4 node
500507680120505C 040F00 2 SVC1N2 5005076801305034 2 040A00
active SVC4N2 ITSO_SVC4 node
500507680120505C 040F00 2 SVC1N2 5005076801205034 4 040B00
active SVC4N2 ITSO_SVC4 node
50050768013027E2 040800 2 SVC1N2 5005076801305034 2 040A00
active SVC1N1 ITSO_SVC1 node
....
Above and below rows have been removed for brevity
....
20690080E51B09E8 041900 1 SVC1N1 50050768013027E2 2 040800
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 1 SVC1N1 50050768012027E2 4 040900
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 2 SVC1N2 5005076801305034 2 040A00
inactive ITSO-DS3500 controller
20690080E51B09E8 041900 2 SVC1N2 5005076801205034 4 040B00
inactive ITSO-DS3500 controller
50050768013037DC 041400 1 SVC1N1 50050768013027E2 2 040800
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 1 SVC1N1 50050768012027E2 4 040900
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 2 SVC1N2 5005076801305034 2 040A00
active ITSOSVC3N1 ITSO_SVC3 node
50050768013037DC 041400 2 SVC1N2 5005076801205034 4 040B00
active ITSOSVC3N1 ITSO_SVC3 node
5005076801101D1C 031500 1 SVC1N1 50050768014027E2 1 030800
active ITSOSVC3N2 ITSO_SVC3 node
5005076801101D1C 031500 1 SVC1N1 50050768011027E2 3 030900
active ITSOSVC3N2 ITSO_SVC3 node
5005076801101D1C 031500 2 SVC1N2 5005076801405034 1 030A00
active ITSOSVC3N2 ITSO_SVC3 node
.....
Above and below rows have been removed for brevity
.....
5005076801201D22 021300 1 SVC1N1 50050768013027E2 2 040800
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 1 SVC1N1 50050768012027E2 4 040900
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 2 SVC1N2 5005076801305034 2 040A00
active SVC2N2 ITSO_SVC2 node
5005076801201D22 021300 2 SVC1N2 5005076801205034 4 040B00
active SVC2N2 ITSO_SVC2 node
50050768011037DC 011513 1 SVC1N1 50050768014027E2 1 030800
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 1 SVC1N1 50050768011027E2 3 030900
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 2 SVC1N2 5005076801405034 1 030A00
active ITSOSVC3N1 ITSO_SVC3 node
50050768011037DC 011513 2 SVC1N2 5005076801105034 3 030B00
active ITSOSVC3N1 ITSO_SVC3 node
5005076801301D22 021200 1 SVC1N1 50050768013027E2 2 040800
active SVC2N2 ITSO_SVC2 node
5005076801301D22 021200 1 SVC1N1 50050768012027E2 4 040900
active SVC2N2 ITSO_SVC2 node
....
Above and below rows have been removed for brevity
....
For more detail about the lsfabric command, see the IBM System Storage SAN Volume
Controller and Storwize V7000 Command-Line Interface User's Guide Version 6.3.0,
GC27-2287.
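Because the lsfabric report is wide, it can be easier to post-process when you request
delimited output. The following sketch assumes the -delim parameter that the SAN Volume
Controller ls commands generally accept; it prints the same report with fields separated by
a colon:
IBM_2145:ITSO_SVC1:admin>lsfabric -delim :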
9.22 T3 recovery process
A procedure, which is known as T3 recovery, has been tested and used in select cases where
a system has been completely destroyed. (One example is simultaneously pulling power
cords from all nodes to their uninterruptible power supply units. In this case, all nodes boot up
to node error 578 when the power is restored.)
This procedure, in certain circumstances, is able to recover most user data. However, it is not
to be used by the client or IBM service representative without the direct involvement from IBM
Level 3 technical support. This procedure is not published, but we refer to it here only to
indicate that the loss of a system can be recoverable without total data loss. However, it
requires a restoration of application data from the backup. T3 recovery is an extremely
sensitive procedure that is only to be used as a last resort, and it cannot recover any data that
was destaged from cache at the time of the total system failure.
Chapter 10. SAN Volume Controller
operations using the GUI
In this chapter, we illustrate IBM System Storage SAN Volume Controller operational
management and system administration using the SAN Volume Controller graphical user
interface (GUI). The SAN Volume Controller management GUI is an easy-to-use tool that
helps you to monitor, manage, and configure your system.
The information is divided into normal operations and advanced operations. We explain the
basic configuration procedures that are required to get your SAN Volume Controller
environment running as quickly as possible using its GUI.
In Chapter 2, IBM System Storage SAN Volume Controller on page 9, we describe the SAN
Volume Controller concepts in greater depth. In this chapter we focus on the operational
aspects.
10.1 SAN Volume Controller normal operations using the GUI
In this section, we discuss several of the operations that we have defined as normal,
day-to-day activities.
It is possible for many users to be logged into the GUI at any given time. However, no locking
mechanism exists, so be aware that if two users change the same object at the same time,
the last action entered from the GUI is the one that will take effect.
10.1.1 Introduction to SAN Volume Controller normal operations using the GUI
The SAN Volume Controller Overview panel is an important user interface and throughout
this chapter we refer to it as SAN Volume Controller Overview panel or just Overview panel. In
the later sections of this chapter, we expect users to be able to navigate to it without us
explaining the procedure each time.
Figure 10-1 SAN Volume Controller GUI Overview panel
Dynamic menu
From any page inside the SAN Volume Controller GUI, you always have access to the
dynamic menu. The SAN Volume Controller GUI dynamic menu is located on the left hand
side of the SAN Volume Controller GUI screen. To navigate using this menu, move the mouse
cursor over the various icons and choose a page that you want to display, as shown in
Figure 10-2 on page 629.
Important: Data entries made through the GUI are case sensitive.
Figure 10-2 The dynamic menu in the left column of the SAN Volume Controller GUI
The SAN Volume Controller dynamic menu consists of multiple panels. These panels group
together common configuration and administration objects, and present individual
administrative objects to the SAN Volume Controller GUI users (see Figure 10-3).
Figure 10-3 SAN Volume Controller GUI Panels
A non-dynamic version of the menu exists for slow network connections. To access the
non-dynamic menu, select Low graphics mode, as shown in Figure 10-4.
Figure 10-4 The SAN Volume Controller GUI Login panel
Figure 10-5 on page 630 shows the non-dynamic version of the menu.
Figure 10-5 Non-dynamic menu in the left column
In this case, the Volumes menu is located in the upper part of the page. There is a pull-down
menu for navigating between Volume selection items. For example, in Figure 10-5, Volumes,
Volumes by Pool, and Volumes by Host are submenus (pull-down menus) for the Volumes
menu.
Both the dynamic and non-dynamic version of the SAN Volume Controller GUI menu contain
the following panels (see Figure 10-3 on page 629):
Home
Monitoring
Pools
Volumes
Hosts
Copy Services
Access
Settings
Persistent state notification status areas
A control panel is available in the bottom part of the window. This dashboard is divided into
three status areas and it provides information about your system. These persistent state
notification widgets are reduced by default, as shown in Figure 10-6.
Figure 10-6 Control panel view
Next, we describe each status area.
Health status area
The rightmost area of the control panel provides information or alerts about internal and
external connectivity; see Figure 10-7 on page 631.
Figure 10-7 Health Status area
If there are non-critical issues for your system nodes, external storage controllers, or remote
partnerships, a new status area pops up next to the Health Status widget, as shown in
Figure 10-8.
Figure 10-8 Controller path Status Alert
You can fix the error by clicking the Status Alerts bar, which directs you to the Events panel
fix procedures.
In the case of a critical system connectivity error, the Health Status bar turns red to alert the
system administrator that immediate action is required (Figure 10-9 on page 632).
Figure 10-9 External storage connectivity loss
Storage allocation area
The leftmost area provides information about the storage allocation, as shown in
Figure 10-10.
Figure 10-10 Storage allocation area
The following information is displayed in this window. To view all of the information, you need
to use the up and down arrows:
Allocated capacity
Virtual capacity
Compression ratio
Long running tasks area
The middle area provides information about the running tasks, as shown in Figure 10-11.
Information such as volume migration, MDisk removal, image mode migration, extend
migration, FlashCopy, Metro Mirror and Global Mirror, volume formatting, space-efficient copy
repair, volume copy verification, and volume copy synchronization, are displayed in this
window. It also shows the time estimated for the task completion.
Figure 10-11 Long running tasks area
By clicking within this area, you can also display information about recently completed tasks,
as shown in Figure 10-12 on page 633.
Figure 10-12 Recently Completed Tasks information
10.1.2 Organizing based on window content
The following sections describe several windows within the SAN Volume Controller GUI
where you can perform filtering (to minimize the amount of data that is shown on the window)
and sorting and reorganizing (to organize the content on the window). In this section, we
provide a brief overview of these functions.
Table filtering
In most pages, in the upper-left corner of the window, there is an icon to filter the elements,
which is useful if the list of object entries is too long.
Perform these steps to use search filtering:
1. Click the filter icon and open the search box in the upper-left corner of the window, as
shown in Figure 10-13.
Figure 10-13 Show filter row icon
2. Enter the text string that you want to filter and press Enter.
3. This function enables you to filter your table based on the column names. In this example,
a volume list is displayed containing names that include ESX somewhere in the name. ESX
is highlighted in amber, as shown in Figure 10-14 on page 634. Note that the search
option is not case sensitive.
Figure 10-14 Show filter row
4. You can remove this filtered view by clicking the reset filter icon, as shown in Figure 10-15.
Figure 10-15 Reset the filtered view
Table information
In the table view, you are able to add or remove the additional information in the tables that
are available on most pages.
For example, on the Volumes page, we add a column to our table:
1. We can either right-click any column headers of the table, or use the icon at the left corner
of the table header; see Figure 10-16. A menu with all of the available columns appears.
Figure 10-16 Add or remove details in a table
Filtering: This filtering option is available in most pages.
2. Select the column that you want to add (or remove) from this table. In our example, we
added the volume ID column, as shown on the left in Figure 10-17.
Figure 10-17 Table with an added ID column
3. You can repeat this process several times to create custom tables that meet your
requirements.
4. You can always get back to the default table view by selecting Restore Default View in the
column selection menu (Figure 10-18).
Figure 10-18 Restore default table view
Reorganizing columns in tables
You are able to move columns by left-clicking the mouse and moving the column, as shown in
Figure 10-19 on page 636.
Figure 10-19 Reorganizing table columns
Sorting
Regardless of whether you use the filter options, you can sort the displayed data by clicking
a column header, as shown in Figure 10-20. In this example, we sort the table by volume ID.
Figure 10-20 Selecting the ID column to sort using this field
After we click the volume ID column, the table is sorted by volume ID, as shown in
Figure 10-21 on page 637.
Figure 10-21 Table sort by volume ID
10.1.3 Help
To access online help, move the mouse cursor over the question mark (?) icon in the
upper-right corner of any panel, and select the context based help topic as shown in
Figure 10-22. Depending on the panel you are currently working with, the help will display its
context item.
Figure 10-22 Help link
By clicking the Information Center icon, you will be directed to the public SAN Volume
Controller Information Center which provides all information about the SAN Volume Controller
clustered system (see Figure 10-23).
Sorting: By clicking a column, you can sort this table based on that column in ascending
or descending order.
Figure 10-23 SAN Volume Controller Information Center
10.2 Working with external disk controllers
In this section, we describe the various configuration and administrative tasks that you can
perform on external disk controllers within the SAN Volume Controller environment.
10.2.1 Viewing the disk controller details
Perform the following steps to view information about a back-end disk controller that is being
used by the SAN Volume Controller environment:
1. Select Pools in the dynamic menu and then select External Storage.
2. The External Storage panel that is shown in Figure 10-24 opens.
For more detailed information about a specific controller and its MDisks, click the plus (+)
button next to the controller name icon.
Figure 10-24 Disk controller systems
10.2.2 Naming a disk controller
After presenting a new storage system to the SAN Volume Controller clustered system,
perform the following steps to name it for the ease of identification by the storage
administrators:
1. Right-click the newly presented controller default name. Select Rename and insert the
name you want to associate with this storage system (Figure 10-25).
Figure 10-25 Renaming a storage system
2. Type the new name that you want to assign to the controller, and click Rename, as shown
in Figure 10-26.
Figure 10-26 Changing the name for a storage system
3. A task is launched to change the name of this storage system. When it completes, you can
close this window.
4. The new name of your controller is displayed on the External Storage panel.
Controller name: The name can consist of the letters A to Z and a to z, the numbers 0
to 9, the dash (-), and the underscore (_) character. The name can be between one and
63 characters in length. However, the name cannot start with a number, dash, or
underscore.
10.2.3 Discovering MDisks from the external panel
You can discover managed disks (MDisks) from the External Storage panel.
Perform the following steps to discover new MDisks:
1. Ensure that no controller is highlighted, and then click Actions.
2. Click Detect MDisks to discover the MDisks from this controller, as shown in
Figure 10-27.
Figure 10-27 Detect MDisks action
3. The Discover devices task runs.
4. When the task completes, click Close and see the newly available MDisks.
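CLI alternative: The same discovery can also be started from the command-line interface; the following commands are illustrative examples that run the discovery and then list any unmanaged MDisks:
svctask detectmdisk
svcinfo lsmdisk -filtervalue mode=unmanaged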
10.3 Working with storage pools
In this section, we describe the tasks that can be performed with the storage pools.
10.3.1 Viewing storage pool information
We perform each of the following tasks from the Pools panel (Figure 10-28 on page 641). To
access this panel, from the SAN Volume Controller Overview panel, move the mouse cursor
over the Pools selection and then click Volumes by Pools.
Figure 10-28 Viewing the storage pools
You can add information (new columns) to the table, as explained in Table information on
page 634.
To retrieve more detailed information about a specific storage pool, select any storage pool in
the left column. The upper-right corner of the panel, which is shown in Figure 10-29 on
page 641, contains the following information about this pool:
Status
Number of MDisks
Number of volume copies
Whether Easy Tier is active on this pool
Volume allocation
Used capacity
Capacity
Figure 10-29 Detailed information about a pool
Change the view to MDisks by Pools. Select the pool with which you want to work, and click
the Plus sign (+) icon, which is the expand button. This panel displays the MDisks that are
present in this storage pool, as shown in Figure 10-30.
Figure 10-30 MDisks that are present in a storage pool
10.3.2 Discovering MDisks
Perform the following steps to discover newly assigned MDisks:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over the Pools
selection, and then click MDisks by Pools.
2. Ensure that no storage pool is highlighted, and then click Actions.
3. Click Detect MDisks, as shown in Figure 10-31.
Figure 10-31 Detect MDisks action
4. The Discover Device window is displayed.
5. Click Close to see the newly discovered MDisks.
10.3.3 Creating storage pools
Perform the following steps to create a storage pool:
1. From the SAN Volume Controller Overview panel, move the cursor over the Pools
selection and then click MDisks by Pools.
The MDisks by Pools panel opens. On this page, click Create Pool, as shown in
Figure 10-32 on page 642.
Figure 10-32 Selecting the option to create a new storage pool
2. The Create Storage Pools wizard opens.
3. On this first page, complete the following elements, as shown in Figure 10-33 on
page 643:
a. You can specify a name for the storage pool. If you do not provide a name, the SAN
Volume Controller automatically generates the name mdiskgrpx, where x is the ID
sequence number that is assigned by the SAN Volume Controller internally.
b. You can also change the icon that is associated with this storage pool, as shown in
Figure 10-33 on page 643.
c. If you expand the Advanced Settings box, you can specify this information:
Extent Size (by default, it is 1 GB)
Warning threshold to send a warning to the event log when the capacity is first
exceeded (by default, it is 80%)
d. Click Next.
Figure 10-33 Create Storage Pool window (step 1 of 2)
4. On this page (Figure 10-34), you can specify the MDisks that you want to associate with
the new storage pool. Follow these steps:
a. Select the MDisks that you want to add to this storage pool.
b. Click Finish to complete the creation.
Storage pool name: You can use the letters A to Z and a to z, the numbers 0 to 9,
and the underscore (_) character. The name can be between one and 63 characters
in length and is case sensitive, but it cannot start with a number or the word
MDiskgrp because this prefix is reserved for SAN Volume Controller assignment
only.
Tip: To add multiple MDisks, hold down Ctrl and use your mouse to select the
entries that you want to add.
Figure 10-34 Create Storage Pool window (step 2 of 2)
5. Close the task completion window. In the Storage Pools panel (Figure 10-35), the new
storage pool is displayed.
Figure 10-35 A new storage pool was added successfully
At this point, you have completed the tasks that are required to create a storage pool.
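CLI alternative: A storage pool can also be created from the command-line interface; the pool name, extent size (in MB), and warning threshold in the following example are illustrative values only:
svctask mkmdiskgrp -name STGPool_DS3500-1 -ext 1024 -warning 80%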
10.3.4 Renaming a storage pool
To rename a storage pool, perform the following steps:
1. Select the storage pool that you want to rename, and then, click Actions → Rename, as
shown in Figure 10-36.
Figure 10-36 Renaming a storage pool
2. Type the new name that you want to assign to the storage pool and press Enter
(Figure 10-37 on page 645).
Figure 10-37 Changing the name for a storage pool
3. A task is launched to change the name of this pool. When it is completed, you can close
this window.
4. From the Storage Pools panel, the new storage pool name is displayed.
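CLI alternative: An illustrative equivalent command for renaming a pool is shown below (both pool names are examples only):
svctask chmdiskgrp -name STGPool_DS3500_NEW STGPool_DS3500-1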
10.3.5 Deleting a storage pool
To delete a storage pool, perform the following steps:
1. Select the storage pool that you want to delete, and then, click Actions → Delete Pool
(Figure 10-38).
Storage pool name: The name can consist of the letters A to Z and a to z, the
numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be
between one and 63 characters in length. However, the name cannot start with a
number, dash, or underscore.
Figure 10-38 Delete Pool option
2. In the Delete Pool window, click Delete to confirm that you want to delete the storage pool
(Figure 10-39). If there are MDisks and volumes within the storage pool that you are
deleting, you must select the Delete all volumes, host mappings, and MDisks that are
associated with this pool option.
Figure 10-39 Deleting a pool
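CLI alternative: An illustrative equivalent command is shown below; with -force, the pool is deleted together with its volumes and MDisk associations, so use it with extreme care (the pool name is an example only):
svctask rmmdiskgrp -force STGPool_DS3500-1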
10.3.6 Adding or removing MDisks from a storage pool
For information about adding MDisks to a storage pool, see 10.4.4, Assigning MDisks to a
storage pool on page 651. For information about removing MDisks from a storage pool, see
10.4.5, Unassigning MDisks from a storage pool on page 652.
10.3.7 Showing the volumes that are associated with a storage pool
To show the volumes that are associated with a storage pool, click Volumes and then click
Volumes by Pool. For more information about this feature, see 10.7, Working with volumes
on page 673.
Important: If you delete a storage pool by using the Delete all volumes, host
mappings, and MDisks that are associated with this pool option, and volumes were
associated with that storage pool, you will lose the data on your volumes. The volumes
are deleted before the storage pool is deleted. If you want to save your data, you must
migrate or mirror the volumes to another storage pool before you delete the storage
pool that is containing the volumes.
10.4 Working with managed disks
This section describes the various configuration and administrative tasks that you can
perform on the managed disks (MDisks) within the SAN Volume Controller environment.
10.4.1 MDisk information
From the SAN Volume Controller Overview panel, move the cursor over the Pools selection
and click MDisks by Pools. The MDisks panel opens, as shown in Figure 10-40 on page 647.
Click the plus (+) icon (expand button) for one or more storage pools to see the MDisks that
belong to a certain pool.
Figure 10-40 Viewing Managed Disks panel
To retrieve more detailed information about a specific MDisk, perform the following steps:
1. In the MDisks panel, from the expanded view of a storage pool, select an MDisk.
2. Click Properties (Figure 10-41).
Figure 10-41 MDisks menu
3. For the selected MDisk, an overview is displayed showing its parameters and dependent
volumes; see Figure 10-42 on page 648.
Detailed MDisk information: To obtain all of the information about the MDisk, select
Show Details, as shown in Figure 10-42 on page 648.
Figure 10-42 MDisk Details page
4. Clicking the Dependent Volumes tab displays information about the volumes that reside
on this MDisk, as shown in Figure 10-43. We discuss the volume panel in more detail in
10.7, Working with volumes on page 673.
Figure 10-43 Dependent volumes for an MDisk
5. Click Close to return to the previous window.
10.4.2 Renaming an MDisk
Perform the following steps to rename an MDisk that is managed by the SAN Volume
Controller clustered system:
1. In the MDisk panel that is shown in Figure 10-41 on page 647, select the MDisk that you
want to rename.
2. Click Actions → Rename (Figure 10-44).
3. You can select multiple MDisks to rename by holding down Ctrl while selecting the MDisks
that you want to rename.
Figure 10-44 Rename MDisk action
4. In the Rename MDisk window (Figure 10-45), type the new name that you want to assign
to the MDisk and click Rename.
Figure 10-45 Renaming an MDisk
Alternative: You can right-click this MDisk, as shown in Figure 10-41 on page 647, and
select Rename from the list.
MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9,
the dash (-), and the underscore (_) character. The name can be between one and 63
characters in length.
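CLI alternative: An illustrative equivalent command for renaming an MDisk is shown below (both names are examples only):
svctask chmdisk -name mdisk_ds3500_01 mdisk3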
10.4.3 Discovering MDisks
Perform the following steps to discover newly assigned MDisks:
1. In the dynamic menu, move the cursor over Pools and click MDisks by Pool.
2. Ensure that no storage pool is selected, and then click Actions.
3. Click Detect MDisks, as shown in Figure 10-46.
Figure 10-46 Detect MDisks action
The Discover Devices window is displayed.
4. When the task is completed, click Close.
5. Newly assigned MDisks are displayed in the Unassigned MDisks window as Unmanaged.
See Figure 10-47.
Figure 10-47 mdisk2 and mdisk3 are newly discovered unmanaged disks
Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers
(LUNs) from your subsystem are properly assigned to the SAN Volume Controller (for
example, using storage partitioning with a DS5000 or LUN masking with DS8000). Also,
check that appropriate zoning is in place (for example, the SAN Volume Controller can
see the disk subsystem).
10.4.4 Assigning MDisks to a storage pool
If you created an empty storage pool, or if you assign additional MDisks to your SAN
Volume Controller environment later, you can add MDisks to existing storage pools by
performing the following steps:
1. From the MDisks by Pools panel, select the unmanaged MDisk that you want to add to a
storage pool.
2. Click Actions → Assign to Pool (Figure 10-48).
Figure 10-48 Actions: Assign to Pool
3. From the Add MDisk to Pool window, select to which pool you want to add this MDisk, and
then, click Add to Pool, as shown in Figure 10-49.
Figure 10-49 Adding an MDisk to an existing storage pool
Important: You can only add unmanaged MDisks to a storage pool.
Alternative: You can also access the Assign to Pool action by right-clicking an
unmanaged MDisk.
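CLI alternative: The following command is an illustrative equivalent for adding unmanaged MDisks to an existing pool (the MDisk and pool names are examples only):
svctask addmdisk -mdisk mdisk2:mdisk3 STGPool_DS3500-1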
10.4.5 Unassigning MDisks from a storage pool
To unassign an MDisk from a storage pool, perform the following steps:
1. Select the MDisk that you want to unassign from a storage pool.
2. Click Actions → Unassign from Pool (Figure 10-50).
Figure 10-50 Actions: Unassign from Pool
3. From the Remove MDisk from Pool window (Figure 10-50), you must verify the
number of MDisks that you want to remove from this pool. This verification was added
to protect against the accidental deletion of data.
If volumes are using the MDisks that you are removing from the storage pool, you must
select the Remove the MDisk from the storage pool even if it has data on it. The
system migrates the data to other MDisks in the pool option to confirm the removal of
the MDisk.
4. Click Delete, as shown in Figure 10-51.
Figure 10-51 Unassigning an MDisk from an existing storage pool
Alternative: You can also access the Unassign from Pool action by right-clicking a
managed MDisk.
An error message is displayed, as shown in Figure 10-52, if there is insufficient space to
migrate the volume data to other extents on other MDisks in that storage pool.
Figure 10-52 Unassign MDisk error message
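CLI alternative: An illustrative equivalent command for removing an MDisk from a pool is shown below; with -force, the data that resides on the MDisk is migrated to other MDisks in the pool first (the names are examples only):
svctask rmmdisk -mdisk mdisk2 -force STGPool_DS3500-1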
10.4.6 Including an excluded MDisk
If a significant number of errors occur on an MDisk, the SAN Volume Controller automatically
excludes it. These errors can result from a hardware problem, a storage area network (SAN)
zoning problem, or poorly planned maintenance. If the cause is a hardware fault, you receive
Simple Network Management Protocol (SNMP) alerts that describe the state of the hardware
(before the disk was excluded) and the preventive maintenance procedure to perform.
Otherwise, the hosts that use volumes on the excluded MDisk experience I/O errors or even
loss of access to their data.
After you take the necessary corrective action to repair the MDisk (for example, replace the
failed disk on the back-end controller RAID array or repair the SAN zones), you can instruct
the SAN Volume Controller to include the MDisk again.
Perform the following steps to include an excluded MDisk:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over the Pools
selection in the dynamic menu, and then click the MDisks by Pools panel.
2. Select the MDisk that you want to include again.
3. Click Actions → Include Excluded MDisk.
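CLI alternative: An illustrative equivalent command is shown below (the MDisk name is an example only):
svctask includemdisk mdisk5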
10.4.7 Activating Easy Tier
To use Easy Tier, you need a true multitier storage pool that contains both generic HDD and SSD MDisks.
Alternative: You can also include an excluded MDisk by right-clicking an MDisk and
selecting Include Excluded MDisk from the list.
MDisks, after they are detected, have a default disk tier of generic_hdd, which is shown as
Hard Disk Drive in Figure 10-53.
Figure 10-53 Default disk tier
The Easy Tier feature is still inactive for the storage pool because we do not yet have a true
multitier pool. To activate Easy Tier for the pool, we have to assign the solid-state drive (SSD)
MDisks to their correct generic_ssd tier.
To set an MDisk as SSD on a storage pool, perform the following steps:
1. Select the MDisk.
2. Click Actions → Select Tier, as shown in Figure 10-54.
Figure 10-54 Select Tier option
Note: It is possible to activate Easy Tier on any storage pool within the SAN Volume
Controller system by using the SAN Volume Controller CLI (command-line interface). Even if
the pool does not contain SSDs, the SAN Volume Controller collects heat data statistics for
individual vdisk extents to enable further hot-extent analysis (for example, by using the STAT
tool).
Easy Tier: For more detailed information about Easy Tier, see Chapter 7, Advanced
features for storage efficiency on page 349.
SSDs: Repeat this action for each of your SSD MDisks.
Alternative: You can also access the Select Tier action by right-clicking an MDisk.
3. In the Select MDisk Tier window, as shown in Figure 10-55 on page 655, select
Solid-State Drive using the drop-down list, and then, click OK.
Figure 10-55 Select MDisk Tier window
4. Now assign the SSD MDisk to the Easy Tier pool: Right-click the MDisk name, select
Assign to Pool, select the Easy Tier pool, and click Add to Pool.
5. The Easy Tier feature is now activated in this multidisk tier pool (HDD and SSD), as shown
in Figure 10-56.
Figure 10-56 Easy Tier activated on a storage pool
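CLI alternative: The tier of an MDisk can also be changed from the command-line interface; the MDisk name in the following example is illustrative only:
svctask chmdisk -tier generic_ssd mdisk7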
10.5 Migration
See Chapter 6, Data migration on page 225 for a comprehensive description of data
migration.
10.6 Working with hosts
In this section, we describe the various configuration and administrative tasks that you can
perform on the host object that is connected to your SAN Volume Controller.
A host system is a computer that is connected to the SAN Volume Controller through a
Fibre Channel (FC) interface, Fibre Channel over Ethernet (FCoE), or an Internet Protocol
(IP) network.
A host object is a logical object in the SAN Volume Controller that represents a list of
worldwide port names (WWPNs) and a list of IP-based Small Computer System Interface
(iSCSI) names that identify the interfaces that the host system uses to communicate with the
SAN Volume Controller. iSCSI names can be either iSCSI-qualified names (IQN) or extended
unique identifiers (EUI).
Host configuration: For more details about connecting hosts to a SAN Volume Controller
in a SAN environment, see Chapter 5, Host configuration on page 153.
A typical configuration has one host object for each host system that is attached to the SAN
Volume Controller. If a cluster of hosts accesses the same storage, you can add host bus
adapter (HBA) ports from several hosts to one host object to make a configuration simpler. A
host object can have both WWPNs and iSCSI names.
There are four ways to visualize and manage your SAN Volume Controller host objects from
the SAN Volume Controller GUI Hosts menu selection:
1. By using the Hosts panel, as shown in Figure 10-57.
Figure 10-57 Hosts panel
2. By using the Ports by Host panel, as shown in Figure 10-58.
Figure 10-58 Ports by Host panel
3. By using the Host Mapping panel, as shown in Figure 10-59.
Figure 10-59 Host Mapping panel
4. By using the Volumes by Hosts panel, as shown in Figure 10-60 on page 657.
Figure 10-60 Volumes by Hosts
10.6.1 Host information
To access the Hosts panel from the SAN Volume Controller Overview panel that is shown in
Figure 10-1 on page 628, move the mouse cursor over the Hosts selection of the dynamic
menu and click Hosts.
You can add information (new columns) to the table in the Hosts panel, as shown in
Figure 10-16 on page 634. See Table information on page 634.
To retrieve more information about a specific host, perform the following steps:
1. Select a host in the table.
2. Click Actions → Properties (Figure 10-61).
Figure 10-61 Actions: Host Properties
3. For a given host in the Overview window, you are presented with information, as shown in
Figure 10-62 on page 658.
Important: Several actions on the hosts are specific to the Ports by Host or the Host
Mapping panel, but all of these actions and others are accessible from the Hosts panel. For
this reason, all actions on hosts are performed from the Hosts panel.
Alternative: You can also access the Properties action by right-clicking a host.
Figure 10-62 Host Details: Overview
4. On the Mapped Volumes tab (Figure 10-63), you can see the volumes that are mapped to
this host.
Figure 10-63 Host Details: Mapped volumes
Show Details option: To obtain more information about the hosts, select Show Details
(Figure 10-62).
5. The Port Definitions tab (Figure 10-64) displays attachment information, such as the
WWPNs that are defined for this host or the iSCSI IQN that is defined for this host.
Figure 10-64 Host Details: Port Definitions tab
When you finish viewing the details, click Close to return to the previous window.
10.6.2 Creating a host
There are two types of host connections: Fibre Channel (which includes FC and FCoE) and
iSCSI. In this section, we describe both of these methods:
For FC hosts, see the steps in Fibre Channel-attached hosts.
For iSCSI hosts, see the steps in iSCSI-attached hosts on page 661.
Fibre Channel-attached hosts
To create a new host that uses the FC connection type, perform the following steps:
1. Go to the Hosts panel from the SAN Volume Controller Overview panel on Figure 10-1 on
page 628, move the cursor over Hosts selection and click Hosts (Figure 10-56 on
page 655).
2. Click Create Host, as shown in Figure 10-65.
Figure 10-65 Create Host action
3. Select Fibre-Channel Host from the two available connection types (Figure 10-66 on
page 660).
Note: Fibre Channel over Ethernet hosts are listed under the FC Hosts add menu
in the SAN Volume Controller GUI; see Figure 10-66 on page 660.
Figure 10-66 Create Fibre Channel Host window
4. In the Create Hosts window (Figure 10-67 on page 661), type a name for your host (Host
Name).
5. Fibre Channel Ports section: Use the drop-down list to select the WWPNs that correspond
to your HBA or HBAs, and click Add Port to List in the Fibre Channel Ports window. To
add additional ports, repeat this action.
If your WWPNs do not display, click Rescan to rediscover the available WWPNs that are
new since the last scan.
6. Advanced Settings section: If you need to modify the I/O Group or Host Type, you must
select Advanced to access these Advanced Settings, as shown in Figure 10-67 on
page 661. Perform these tasks:
Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these hosts, select HP/UX (to have
more than eight LUNs supported for HP/UX machines) or TPGS for Sun hosts using
MPxIO.
Host name: If you do not provide a name, the SAN Volume Controller automatically
generates the name hostx (where x is the ID sequence number that is assigned by the
SAN Volume Controller internally). If you want to provide a name, you can use the
letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The host
name can be between one and 63 characters in length.
Deleting an FC port: If you added the wrong FC port, you can delete it from the list by
clicking the red X.
WWPN still not displayed: In certain cases, your WWPNs still might not be displayed,
even though you are sure that your adapter is functioning (for example, you see the
WWPN in the switch name server) and your SAN zones are correctly set up. To rectify
this situation, type the WWPN of your HBA or HBAs into the drop-down list and click
Add Port to List. It will be displayed as unverified.
Figure 10-67 Creating a new Fibre Channel host
7. Click Create Host, as shown in Figure 10-67. This action brings you back to the Hosts
panel (Figure 10-68) where you can see the newly added FC host.
Figure 10-68 New Fibre Channel host
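CLI alternative: An illustrative equivalent command for creating an FC host object is shown below (the host name and WWPNs are examples only):
svctask mkhost -name ITSO_W2K8 -hbawwpn 210000E08B054CAA:210000E08B054CAB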
iSCSI-attached hosts
To create a new host that uses the iSCSI connection type, perform the following steps:
1. To go to the Hosts panel from the SAN Volume Controller Overview panel on Figure 10-1
on page 628, move the cursor over the Hosts selection and click Hosts.
2. Click Create Host, as shown in Figure 10-65 on page 659.
3. Select iSCSI Host from the types of host connections (Figure 10-69 on page 662).
Figure 10-69 Create iSCSI Host window
4. In the Create Hosts window (Figure 10-70 on page 663), type a name for your host (Host
Name).
5. iSCSI ports section: Enter the iSCSI initiator or IQN as an iSCSI port, and then, click Add
Port to List. This IQN is obtained from the server and generally has the same purpose as
the WWPN. To add additional ports, repeat this action.
If needed, select Use CHAP authentication (all ports) and enter the Challenge
Handshake Authentication Protocol (CHAP) secret, as shown in Figure 10-70 on
page 663.
The CHAP secret is the authentication method that is used to restrict access so that other
iSCSI hosts cannot use the same connection. You can set the CHAP secret for the whole
system under the system properties or for each host definition. The CHAP secret must be
identical on the server and in the system or host definition. You can create an iSCSI host
definition without using CHAP.
6. Advanced Settings section: If you need to modify the I/O Group or Host Type, you must
select Advanced to access these settings as shown in Figure 10-67 on page 661.
Perform these tasks:
Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these types, select HP/UX (to have
more than eight LUNs supported for HP/UX machines) or TPGS for Sun hosts using
MPxIO.
Host name: If you do not provide a name, the SAN Volume Controller automatically
generates the name hostx (where x is the ID sequence number that is assigned by the
SAN Volume Controller internally). If you want to provide a name, you can use the
letters A to Z and a to z, the numbers 0 to 9, and the underscore. The host name can be
between one and 63 characters in length.
Deleting an iSCSI port: If you add the wrong iSCSI port, you can delete it from the list
by clicking the red X.
Figure 10-70 Creating a new iSCSI host
7. Click Create Host, as shown in Figure 10-70. This action brings you back to the Hosts
panel (Figure 10-71) where you can see the newly added iSCSI host.
Figure 10-71 Create host results
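CLI alternative: An illustrative equivalent command for creating an iSCSI host object is shown below (the host name and IQN are examples only):
svctask mkhost -name ITSO_LINUX1 -iscsiname iqn.1994-05.com.redhat:linux1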
10.6.3 Renaming a host
Perform the following steps to rename a host:
1. Select the host that you want to rename in the table.
2. Click Actions → Rename (Figure 10-72 on page 664).
Figure 10-72 Rename Action
3. In the Rename Host window, type the new name that you want to assign and click
Rename (Figure 10-73).
Figure 10-73 Renaming a host
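CLI alternative: An illustrative equivalent command for renaming a host is shown below (both host names are examples only):
svctask chhost -name ITSO_W2K8_NEW ITSO_W2K8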
10.6.4 Modifying a host
To modify a host, perform the following steps:
1. Select the host that you want to modify in the table.
2. Click Actions → Properties (Figure 10-74 on page 665).
Alternatives: There are two other ways to rename a host. You can right-click a host
and select Rename from the list, or use the method that is described in 10.6.4,
Modifying a host on page 664.
Host name: If you do not provide a name, the SAN Volume Controller automatically
generates the name hostx (where x is the ID sequence number that is assigned by the
SAN Volume Controller internally).
If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0
to 9, and the underscore (_) character. The host name can be between one and 63
characters in length.
Figure 10-74 Host Properties
3. In the Overview tab (Figure 10-75 on page 666), click Edit to modify the parameters
for this host. You can modify the following parameters:
Host Name
Host Type: The default type is Generic. Use Generic for all hosts, unless you use
Hewlett-Packard UNIX (HP-UX) or Sun. For these, select HP/UX (to have more than
eight LUNs supported for HP/UX machines) or TPGS for Sun hosts using MPxIO.
Advanced Settings: If you need to modify the I/O Group or iSCSI CHAP Secret (in case
you want to convert it to an iSCSI host), you must select Show Details to access these
settings, as shown in Figure 10-75 on page 666.
Alternative: You can also right-click a host and select Properties from the list.
Host name: If you do not provide a name, the SAN Volume Controller automatically
generates the name hostx (where x is the ID sequence number that is assigned by
the SAN Volume Controller internally). If you want to provide a name, you can use
the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character.
The host name can be between one and 63 characters in length.
Figure 10-75 Modifying a host
4. Save the changes by clicking Save.
5. You can close the Host Details window by clicking Close.
10.6.5 Deleting a host
To delete a host, perform the following steps:
1. Select the host or hosts that you want to delete in the table.
2. Click Actions → Delete (Figure 10-76).
Figure 10-76 Delete action
3. The Delete Host window opens, as shown in Figure 10-77. In the Verify the number of
hosts that you are deleting field, enter the number of hosts that you want to remove. This
verification has been added to help you avoid inadvertently deleting the wrong hosts.
If you still have volumes that are associated with the host, and you are sure that you want
to delete the host even though these volumes will no longer be accessible, select the
Delete the host even if volumes are mapped to them. These volumes will no longer be
accessible to the hosts option.
Alternative: You can also right-click a host and select Delete from the list.
4. Click Delete to complete the operation (Figure 10-77).
Figure 10-77 Deleting a host
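CLI alternative: An illustrative equivalent command is shown below; with -force, the host is removed even if volume mappings still exist, so use it carefully (the host name is an example only):
svctask rmhost -force ITSO_W2K8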
10.6.6 Adding ports
If you add an HBA or a network interface controller (NIC) to a server that is already defined
within the SAN Volume Controller, you can simply add additional ports to your host definition
by performing the steps that are described in this section.
To add a port to a host, perform the following steps:
1. Select the host in the table.
2. Click Actions → Properties (Figure 10-78).
Figure 10-78 Host Properties
3. On the Properties window, click Port Definitions.
4. Click Add and select the type of port that you want to add to your host (Fibre Channel Port
or iSCSI Port), as shown in Figure 10-79 on page 668. In this example, we selected Fibre
Channel Port.
Best practice: A host can have FC and iSCSI ports defined, but it is better to avoid using
them at the same time.
Alternative: You can also right-click a host and select Properties from the list.
Figure 10-79 Adding a Fibre Channel Port or an iSCSI Port
5. In the Add Fibre Channel Ports window (Figure 10-80), use the drop-down list to select the
WWPNs that correspond to your HBA or HBAs and click Add Port to List in the Fibre
Channel Ports window. To add additional ports, repeat this action.
If your WWPNs are not displayed, click Rescan to rediscover any available WWPNs that
are new since the last scan.
6. To finish, click Add Ports to Host.
Figure 10-80 Add Fibre Channel Ports
7. This action takes you back to the Port Definitions tab (Figure 10-81 on page 669) where
you can see the newly added ports.
Deleting a port: If you added the wrong Fibre Channel port, you can delete it from the
list by clicking the red X.
Alternative: In certain cases, your WWPNs might still not be displayed, even though
you are sure that your adapter is functioning (for example, you see the WWPN in the
switch name server) and your SAN zones are correctly set up. To rectify this situation,
type the WWPN of your HBA or HBAs into the drop-down list and click Add Port to List.
The port will be displayed as unverified.
Figure 10-81 Port Definitions tab updated
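CLI alternative: Illustrative equivalent commands for adding an FC port or an iSCSI port to an existing host definition are shown below (the WWPN, IQN, and host names are examples only):
svctask addhostport -hbawwpn 210000E08B054CAC ITSO_W2K8
svctask addhostport -iscsiname iqn.1994-05.com.redhat:linux1 ITSO_LINUX1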
10.6.7 Deleting ports
To delete a port from a host, perform the following steps:
1. Select the host in the table.
2. Click Actions → Properties (Figure 10-82).
Figure 10-82 Host Properties
3. On the opened window, click the Port Definitions tab.
4. Select the port or ports that you want to remove.
5. Click Delete Port (Figure 10-83).
Figure 10-83 Port Definitions tab: Delete Port
iSCSI ports: The procedure is exactly the same for iSCSI ports, except that you add
iSCSI names (IQNs) instead of WWPNs.
Tip: You can also right-click a host and select Properties from the list.
6. In the Delete Port window (Figure 10-84), in the Verify the number of ports to delete field,
you need to enter the number of ports that you want to remove. This verification has been
added to avoid inadvertently deleting the wrong ports.
Figure 10-84 Delete Port window
7. Click Delete to remove the port or ports.
8. This action brings you back to the Port Definitions window.
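CLI alternative: An illustrative equivalent command for removing an FC port from a host is shown below (the WWPN and host name are examples only):
svctask rmhostport -hbawwpn 210000E08B054CAC ITSO_W2K8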
10.6.8 Creating or modifying the host mapping
To modify the host mapping, perform the following steps:
1. Select the host in the table.
2. Click Actions → Modify Mappings (Figure 10-85).
Figure 10-85 Modify Mappings action
3. On the Modify Host Mappings window (Figure 10-86 on page 671), select the volume or
volumes that you want to map to this host and move each volume to the table on the right
by clicking the right arrow (>>). If you need to remove them, select the volume and click
the left arrow (<<).
Tip: You can also right-click a host and select Modify Mappings from the list.
Figure 10-86 Modify Host Mappings window: Adding volumes to a host
In the table on the right, you can edit the SCSI ID by selecting a mapping that is
highlighted in yellow, which indicates a new mapping. Click Edit SCSI ID (Figure 10-86).
In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-87).
Figure 10-87 Modify Host Mappings window: Edit SCSI ID
4. After you have added all the volumes that you want to map to this host, click Map
Volumes or Apply to create the host mapping relationships.
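CLI alternative: An illustrative equivalent command for mapping a volume to a host with a specific SCSI ID is shown below (the host, SCSI ID, and volume names are examples only):
svctask mkvdiskhostmap -host ITSO_W2K8 -scsi 0 Volume_W2K8_01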
10.6.9 Deleting a host mapping
To delete a host mapping, perform the following steps:
1. Select the host in the table.
2. Click Actions → Modify Mappings (Figure 10-88 on page 672).
Changing a SCSI ID: You can only change the SCSI ID on new mappings. To edit the
SCSI ID of an existing mapping, you must unmap the volume and recreate the mapping to
the volume.
Figure 10-88 Modify Mappings
3. Select the host mapping or mappings that you want to remove.
4. Click the left arrow (<<) in the middle after you have selected the volumes that you want
to remove. Then, click Apply or Map Volumes to complete the Modify Host Mapping
actions (Figure 10-89).
Figure 10-89 Modify Host Mappings: Unmap a volume
10.6.10 Deleting all host mappings for a given host
To delete all host mappings for a given host, perform the following steps:
1. Select the host in the table.
2. Click Actions → Unmap All Volumes (Figure 10-90 on page 673).
Tip: You can also right-click a host and select Modify Mappings from the list.
Figure 10-90 Unmap All Volumes option
3. From the Unmap from Host window (Figure 10-91), in the Verify the number of mappings
that this operation affects field, enter the number of mappings that you want to remove.
This verification helps you to avoid deleting the wrong hosts unintentionally.
Figure 10-91 Unmap from Host window
4. Click Unmap to remove the host mapping or mappings. This action brings you back to the
Hosts panel.
10.7 Working with volumes
In this section, we describe the tasks that you can perform at a volume level.
There are several ways to visualize and manage your volumes:
You can use the Volumes panel, as shown in Figure 10-92 on page 674.
Tip: You can also right-click a host and select Unmap All Volumes from the list.
Figure 10-92 Volumes panel
Or, you can use the Volumes by Pool panel, as shown in Figure 10-93.
Figure 10-93 Volumes by Pool panel
Or, you can use the Volumes by Host panel, as shown in Figure 10-94 on page 675.
Figure 10-94 Volumes by Host panel
10.7.1 Volume information
To access the Volumes panel from the SAN Volume Controller Overview panel on Figure 10-1
on page 628, move the cursor over the Volumes selection and click Volumes (Figure 10-92
on page 674).
You can add information (new columns) to the table in the Volumes panel, as shown in
Figure 10-16 on page 634. See Table information on page 634.
To retrieve more information about a specific volume, perform the following steps:
1. Select a volume in the table.
2. Click Actions → Properties (Figure 10-95 on page 676).
Important: Several actions on volumes are specific to the Volumes by Pool panel or to the
Volumes by Host panel. However, all of these actions and others are accessible from the
Volumes panel. We execute all of the actions in the following sections from the Volumes
panel.
Figure 10-95 Volume Properties action
3. The Overview tab shows information about a given volume (Figure 10-96).
Figure 10-96 Volume properties: Overview tab
4. The Host Maps tab (Figure 10-97 on page 677) displays the hosts that are mapped with
this volume.
Tip: You can also access the Properties action by right-clicking a volume name.
Figure 10-97 Volume properties: Volume mapped to this host
5. The Member MDisks tab (Figure 10-98) displays the used MDisks for this volume. You can
perform actions on the MDisks, such as removing them from a pool, adding them to a tier,
renaming them, showing their dependent volumes, or seeing their properties.
Figure 10-98 Volume properties: Member MDisks
6. When you finish viewing the details, click Close to return to the Volumes panel.
10.7.2 Creating a volume
To create a new volume or volumes, perform the following steps:
1. Go to the SAN Volume Controller Overview panel on Figure 10-1 on page 628, move the
cursor over the Volumes selection and click Volumes.
2. Click Create Volume (Figure 10-99).
Figure 10-99 Create Volume action
3. Select one of the following presets, as shown in Figure 10-100:
Generic. Create volumes that use a fully allocated (thick) amount of capacity from the
selected storage pool.
Thin Provision. Create volumes whose capacity is virtual (as seen by the host), but which
consume only the real capacity that is written by the host application. The virtual
capacity of a thin-provisioned volume is typically significantly larger than its real
capacity.
Mirror. Create volumes with two physical copies that provide data protection. Each
copy can belong to a separate storage pool to protect data from storage failures.
Thin Mirror. Create volumes with two physical copies to protect data from failures while
consuming only the real capacity that is written by the host application.
Compressed. Create volumes whose data is compressed as it is written to disk, saving
additional space.
4. After selecting a preset, in our example, Generic, you must select the storage pool on
which the data will be striped (Figure 10-100).
Figure 10-100 Create Volume: Select preset and the storage pool on which the data will be striped
5. After you select the storage pool, the window is updated automatically. You must
specify a volume quantity, name, and capacity, as shown in Figure 10-101 on page 679:
Enter volume quantity. This allows you to create multiple volumes at the same time,
using an automatic sequential numbering suffix.
Enter a name if you want to create a single volume, or a naming prefix if you want to
create multiple volumes.
You can change the preset: For our example, we chose the Generic preset.
However, whichever preset you choose, you can reconsider your decision
later by customizing the volume through the Advanced option.
Enter the size of the volume that you want to create and select the capacity unit of
measurement (bytes, KB, MB, GB, or TB) from the list.
An updated summary automatically appears in the bottom of the window to show the
amount of space that will be used and the amount of free space that remains in the pool.
Figure 10-101 Create Volume: Volume Details
Various optional actions are available from this window:
You can modify the storage pool by clicking Edit. In this case, you can select another
storage pool.
You can create additional volumes by clicking the up and down arrow in the quantity
box.
Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore (_) character. The volume name can be between one and 63 characters in
length.
Tip: An entry of 1 GB uses 1,024 MB.
Naming: When you create more than one volume, the wizard does not ask you for a
name for each volume to be created. Instead, the name that you use here becomes
the prefix and a number, starting at zero, is appended to this prefix as each volume
is created. You can modify a starting suffix to any numeric value that you prefer
(whole non-negative numbers). Modifying the ending value increases or decreases
the volume quantity based on the whole number count.
6. You can activate and customize advanced features, such as thin-provisioning or mirroring,
depending on the preset that you have selected. To access these settings, click
Advanced:
On the Characteristics tab (Figure 10-102), you can set the following options:
General: Format the new volume by selecting the Format Before Use check box
(formatting writes zeros to the volume before it can be used; that is, it writes zeros
to its MDisk extents).
Locality: Choose a caching I/O Group and then select a preferred node. You can
leave the default values for SAN Volume Controller auto-balance. After you select a
caching I/O group, you can also add additional I/O groups as Accessible I/O groups.
For more information about caching and accessible I/O groups refer to chapter
6.2.5, Non-disruptive Volume Move on page 229.
OpenVMS only: Enter the user-defined identifier (UDID) for OpenVMS. You only
need to complete this field for the OpenVMS system.
Figure 10-102 Create Volume: Advanced Settings, Characteristics
On the Capacity Management tab (Figure 10-103 on page 681), after you activate thin
provisioning by selecting the Thin-Provisioned radio button, you can set the following
options:
Real Capacity: Type the real size that you want to allocate. This size is the
percentage of the virtual capacity or a specific number in GB of the disk space that
will actually be allocated.
Automatically Expand: Select auto expand, which allows the real disk size to grow
as required.
UDID: Each OpenVMS fibre-attached volume requires a user-defined identifier
or unit device identifier (UDID). A UDID is a non-negative integer that is used
when an OpenVMS device name is created. To recognize volumes, OpenVMS
issues a UDID value, which is a unique numerical number.
Warning Threshold: Type a percentage of the virtual volume capacity for threshold
warning. It will generate a warning message when the used disk capacity on the
space-efficient copy first exceeds the specified threshold.
Thin-Provisioned Grain Size: Select the grain size: 32 KB, 64 KB, 128 KB, or
256 KB. Smaller grain sizes save space while larger grain sizes produce better
performance. Try to match the FlashCopy grain size if the volume will be used for
FlashCopy.
Figure 10-103 Create Volume: Advanced Settings, Capacity Management,
Thin-Provisioned Volumes
On the Capacity Management tab, after you activate compression by selecting the
Compressed radio button, you create a compressed volume by using the Real-time
Compression feature (Figure 10-104 on page 682). Like thin-provisioned volumes,
compressed volumes have virtual, real, and used capacities.
Important: If you selected the Thin Provision or Thin Mirror preset on the first page
(Figure 10-101 on page 679), the Enable Thin Provisioning check box is already
selected and the following parameter preset values are pre-filled:
Real: 2% of Virtual Capacity
Automatically Expand: Selected
Warning Threshold: Selected with a value of 80% of Virtual Capacity
Thin-Provisioned Grain Size: 256 KB
Note: Compressed volumes and uncompressed volumes should not be mixed in the
same pool.
Figure 10-104 Create Volume: Advanced Settings, Capacity Management, Compressed
Volumes
For more details about the Real-time Compression feature, see Real-time
Compression in SAN Volume Controller and Storwize V7000, REDP-4859 and
Implementing IBM Real-time Compression in SAN Volume Controller and IBM
Storwize V7000, TIPS1083.
On the Mirroring tab (Figure 10-105), after you activate mirroring by selecting the
Create Mirrored Copy check box, you can set the following option:
Mirror Sync Rate: Enter the Mirror Synchronization rate. It is the I/O governing rate
in a percentage that determines how quickly copies are synchronized. A zero value
disables synchronization.
Important: If you activate this feature from the Advanced menu, you must select
a secondary pool on the main window (Figure 10-101 on page 679). The primary
pool will be used as the primary and preferred copy for read operations. The
secondary pool will be used as the secondary copy.
Figure 10-105 Create Volume: Advanced Settings, Mirroring
7. After you have set all of the advanced settings, click OK to return to the main menu
(Figure 10-101 on page 679).
8. Then, you can choose to only create the volumes by clicking Create, or to create and map
the volumes by selecting Create and Map to Host:
If you select to only create the volumes, you will be returned to the Volumes panel. You
see that your volumes have been created but not mapped (Figure 10-106). You can
map them later.
Figure 10-106 Volumes created without mapping
If you want to create and map the volumes on the volume creation window, after the
task finishes, click Continue and another window opens. In the Modify Host Mappings
window, select the I/O Group and host to which you want to map these volumes by
using the drop-down list (Figure 10-107 on page 684), and you will be automatically
directed to the host mapping table.
Important: If you selected the Mirror or Thin Mirror preset on the first page
(Figure 10-101 on page 679), the Create Mirrored Copy check box is already
selected and the Mirror Sync Rate preset is pre-filled with 80% of Maximum.
Figure 10-107 Modify Host Mappings: Select the host to which to map your volumes
In the Modify Host Mappings window, verify the mapping. If you want to modify the
mapping, select the volume or volumes that you want to map to a host and move each
of them to the table on the right by using the right arrow (>>), as shown in
Figure 10-108. If you need to remove the mappings, use the left arrow (<<).
Figure 10-108 Modify Host Mappings: Host mapping table
In the table on the right, you can edit the SCSI ID by selecting a mapping that is
highlighted in yellow, which indicates a new mapping. Next, click Edit SCSI ID.
In the Edit SCSI ID window, change the SCSI ID and then click OK (Figure 10-109 on
page 685).
Changing the SCSI ID: You can only change the SCSI ID on new mappings. To edit
the SCSI ID of an existing mapping, you must unmap the volume and recreate the
mapping to the volume.
Figure 10-109 Modify Host Mappings: Edit SCSI ID
After you have added all of the volumes that you want to map to this host, click Map
Volumes or Apply to create the host mapping relationships and finalize the volume
creation. You return to the main Volumes panel. You can see that your volumes have
been created and mapped, as shown in Figure 10-110.
Figure 10-110 Volumes have been created and mapped to host
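CLI alternative: Volumes can also be created from the command-line interface. The following commands are illustrative examples of creating a generic volume and a thin-provisioned volume (the pool, I/O group, names, and sizes are examples only):
svctask mkvdisk -mdiskgrp STGPool_DS3500-1 -iogrp 0 -size 10 -unit gb -name Volume_W2K8_01
svctask mkvdisk -mdiskgrp STGPool_DS3500-1 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -name Volume_Thin_01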
10.7.3 Renaming a volume
Perform the following steps to rename a volume:
1. Select the volume that you want to rename in the table.
2. Click Actions → Rename (Figure 10-111 on page 686).
Figure 10-111 Rename action
3. In the Rename Volume window, type the new name that you want to assign to the volume,
and click Rename (Figure 10-112).
Figure 10-112 Renaming a volume
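CLI alternative: An illustrative equivalent command for renaming a volume is shown below (both volume names are examples only):
svctask chvdisk -name Volume_W2K8_01_NEW Volume_W2K8_01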
10.7.4 Modifying a volume
To modify a volume, perform the following steps:
1. Select the volume that you want to modify in the table.
2. Click Actions → Properties (Figure 10-113 on page 687).
Tip: There are two other ways to rename a volume. You can right-click a volume and
select Rename from the list, or you can use the method that is explained in section
10.7.4, Modifying a volume.
Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore (_) character. The volume name can be between one and 63 characters in
length.
Figure 10-113 Properties action
3. In the Overview tab, click Edit to modify parameters for this volume (Figure 10-114 on
page 688).
From this window, you can modify the following parameters:
Volume Name: You can modify the volume name.
Accessible I/O Group: You can select an additional I/O Group from the list to give that
I/O Group access to this volume.
Mirror Sync Rate: Change the Mirror Sync rate. It is the I/O governing rate in a
percentage that determines how quickly copies are synchronized. A zero value
disables synchronization.
Cache Mode: Change the caching policy of a volume. Caching policy can be set to
Enabled (read-write caching enabled), Disabled (no caching enabled) or Read Only
(only read caching enabled).
OpenVMS: Enter the UDID (OpenVMS). This field needs to be completed only for an
OpenVMS system.
Tip: You can also right-click a volume and select Properties from the list.
Volume name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore. The volume name can be between one and 63 characters in length.
UDID: Each OpenVMS fibre-attached volume requires a user-defined identifier or
unit device identifier (UDID). A UDID is a non-negative integer that is used when an
OpenVMS device name is created. To recognize volumes, OpenVMS issues a UDID
value, which is a unique numerical number.
Figure 10-114 Modify a volume
4. Save the changes by clicking Save.
5. You can close the Volume Details window by clicking Close.
10.7.5 Modifying thin-provisioned or compressed volume properties
For thin-provisioned or compressed volumes, in addition to the properties that you can modify
by following the instructions in section 10.7.4, Modifying a volume on page 686, there are
other properties that are specific to thin provisioning or compression that you can modify by
performing the following steps:
1. Depending on whether the volume is non-mirrored or mirrored, use one of the following
actions:
For a non-mirrored volume: Select the volume. Click Actions → Volume Copy
Actions → Thin Provisioned (or Compressed) → Edit Properties, as shown in
Figure 10-115 on page 689.
Note: In the following sections we demonstrate GUI operations using a thin-provisioned
volume as an example. However, the same actions apply to a compressed volume preset.
Figure 10-115 Non-mirrored volume: Thin-provisioned edit properties
For a mirrored volume: Select the thin-provisioned or compressed copy of the mirrored
volume that you want to modify.
Click Actions, and then click Thin Provisioned (or Compressed) → Edit Properties, as
shown in Figure 10-116.
Figure 10-116 Mirrored volume: Thin-provisioned Edit Properties action
2. The Edit Properties - volumename (Copy #), (where volumename is the volume that you
selected in the previous step) window opens (Figure 10-117 on page 690). On this
window, you can modify the following volume characteristics:
Warning Threshold: Type a percentage. This function will generate a warning when the
used disk capacity on the thin-provisioned or compressed copy first exceeds the
specified threshold.
Tip: You can also right-click the volume and select Volume Copy Actions → Thin
Provisioned (or Compressed) → Edit Properties from the list.
Tip: You can also right-click the thin-provisioned copy and select Thin
Provisioned → Edit Properties from the list.
Enable Autoexpand: Autoexpand allows the real disk size to grow automatically as required.
Figure 10-117 Edit thin-provisioned properties window
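CLI: The warning threshold and autoexpand setting of a thin-provisioned or compressed copy can also be changed with the chvdisk command. This is a sketch only, assuming a volume named VOL01 whose thin-provisioned copy is copy 1 (both names are placeholders):
svctask chvdisk -warning 85% -copy 1 VOL01     (change the warning threshold of copy 1)
svctask chvdisk -autoexpand on -copy 1 VOL01   (enable autoexpand for copy 1)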
10.7.6 Deleting a volume
To delete a volume, perform the following steps:
1. In the table, select the volume or volumes that you want to delete.
2. Click Actions → Delete (Figure 10-118).
Figure 10-118 Delete a volume action
3. The Delete Volume window opens, as shown in Figure 10-119 on page 691. In the Verify
the number of volumes that you are deleting field, enter a value for the number of
volumes that you want to remove. This verification helps you to avoid deleting the wrong
volumes.
GUI: You also can modify the real size of your thin-provisioned or compressed volume by
using the GUI, depending on your needs. See 10.7.12, Shrinking the real capacity of a
thin-provisioned or compressed volume on page 700 or 10.7.13, Expanding the real
capacity of a thin-provisioned or compressed volume on page 702.
Alternative: You can also right-click a volume and select Delete from the list.
If the volume is still mapped to a host or is used in FlashCopy or remote-copy relationships, and you definitely want to delete it, select Delete the volume even if it has host mappings or is used in FlashCopy mappings or remote-copy relationships. Click Delete to complete the operation (Figure 10-119).
Figure 10-119 Delete Volume
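CLI: The equivalent CLI action is the rmvdisk command. A sketch with a placeholder volume name:
svctask rmvdisk VOL01
svctask rmvdisk -force VOL01   (forces the deletion even if host mappings or copy relationships exist)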
10.7.7 Creating or modifying the host mapping
To create or modify a host mapping, perform the following steps:
1. Select the volume in the table.
2. Click Actions → Map to Host (Figure 10-120 on page 692).
Important: Deleting a volume is a destructive action for user data residing in that
volume.
Note: You can similarly delete a mirror copy of a mirrored volume. For information about
deleting a mirrored copy, see 10.7.16, Deleting a mirrored copy from a volume mirror
on page 709.
Tip: You can also right-click a volume and select Map to Host from the list.
Figure 10-120 Map to Host action
3. On the Modify Host Mappings window, select the caching I/O group and the host to which you want to map this volume by using the drop-down lists. You are then directed automatically to the host mapping table (Figure 10-121).
Figure 10-121 Select the I/O group and host to which you want to map your volume
4. On the Modify Host Mappings table window, verify the mapping. If you want to modify it,
select the volume or volumes that you want to map to a host and move each of them to the
table on the right by using the right arrow (>>) in the middle, as shown in Figure 10-122 on
page 693. If you need to remove them, use the left arrow (<<).
Figure 10-122 Modify Host Mappings window: Adding volumes to a host
In the table on the right, you can edit the SCSI ID. Select a mapping that is highlighted in
yellow, which indicates a new mapping, and click Edit SCSI ID (as shown in
Figure 10-123).
In the Edit SCSI ID window, change the SCSI ID and then click OK.
Figure 10-123 Modify Host Mappings window: Edit SCSI ID
5. After you have added all of the volumes that you want to map to this host, click the Map Volumes or Apply button. You will return to the main Volumes panel.
Changing the SCSI ID: You can change the SCSI ID only on new mappings. To edit an existing mapping's SCSI ID, you must unmap the volume and re-create the mapping to the volume.
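CLI: A host mapping can also be created with the mkvdiskhostmap command. A sketch only; the host name, volume name, and SCSI ID are placeholders:
svctask mkvdiskhostmap -host ESX_HOST01 -scsi 2 VOL01
The -scsi parameter is optional; if it is omitted, the next available SCSI ID for that host is used.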
10.7.8 Deleting a host mapping
To delete a host mapping to a volume, perform the following steps:
1. Select the volume in the table.
2. Click Actions → Properties (Figure 10-124).
Figure 10-124 Volume Properties
3. On the Properties window, click the Host Maps tab (Figure 10-125).
Figure 10-125 Volume Details: Host Maps tab
Important: Before deleting a host mapping, make sure that the host is no longer using the volume. Unmapping a volume from a host does not destroy the volume contents. However, unmapping a volume has the same effect as powering off the computer without first performing a clean shutdown; therefore, the data on the volume might end up in an inconsistent state. Also, any running application that was using the disk begins to receive I/O errors and might not recover until the application or server is restarted.
Tip: You can also right-click a volume and select Properties from the list.
Alternative: You also can access this window by selecting the volume in the table and
clicking View Mapped Hosts in the Actions menu (Figure 10-126 on page 695).
Figure 10-126 View Mapped Hosts
4. Select the host mapping or mappings that you want to remove.
5. Click Unmap from Host (Figure 10-127).
Figure 10-127 Volume details: Unmap from Host action
In the Unmap Host window (Figure 10-128), in the Verify the number of hosts that this
action affects field, enter a value for the number of host objects that you want to remove.
This verification helps you to avoid deleting the wrong host objects.
Figure 10-128 Volume details: Unmap Host
6. Click Unmap to remove the host mapping or mappings. This action returns you to the Host Maps window. Click Refresh to verify the results of the unmapping action (Figure 10-129).
Figure 10-129 Volume details: volume unmapping verification
7. Click Close to return to the Volumes panel.
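CLI: The same unmapping can be done with the CLI. A sketch with placeholder names:
svcinfo lsvdiskhostmap VOL01                    (list the hosts that the volume is mapped to)
svctask rmvdiskhostmap -host ESX_HOST01 VOL01   (remove the mapping to one host)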
10.7.9 Deleting all host mappings for a given volume
To delete all host mappings for a given volume, perform the following steps:
1. Select the volume in the table.
2. Click Actions → Unmap All Hosts (Figure 10-130).
Figure 10-130 Unmap All Hosts from Actions menu
3. In the Unmap from Hosts window (Figure 10-131 on page 697), in the Verify the number
of mappings that this operation affects field, enter the number of host objects that you
want to remove. This verification has been added to help you to avoid deleting the wrong
host objects.
Tip: You can also right-click a volume and select Unmap All Hosts from the list.
Figure 10-131 Unmap from Hosts window
4. Click Unmap to remove the host mapping or mappings. This action returns you to the All
Volumes panel.
10.7.10 Shrinking a volume
The method that the SAN Volume Controller uses to shrink a volume is to remove the
required number of extents from the end of the volume. Depending on where the data actually
resides on the volume, this action can be data destructive. For example, you might have a
volume that consists of 128 extents (0 to 127) of 16 MB (2 GB capacity), and you want to
decrease the capacity to 64 extents (1 GB capacity).
In this case, the SAN Volume Controller simply removes extents 64 to 127. Depending on the
operating system, there is no easy way to ensure that your data resides entirely on extents 0
through 63, so be aware that you might lose data.
Although shrinking a volume is easy using the SAN Volume Controller, you must ensure that
your operating system supports shrinking, either natively or by using third-party tools, before
using this function.
In addition, it is a best practice to always have a consistent latest backup before you execute
this task.
Shrinking a volume is useful under the following circumstances:
Reducing the size of a candidate target volume of a copy relationship to make it the same size as the source
Releasing space from a volume to free extents in the storage pool, provided that you no longer use that space and take precautions to protect the remaining data
Assuming your operating system supports it, perform the following steps to shrink a volume:
1. Perform any necessary steps on your host to ensure that you are not using the space that
you are about to remove.
Important: For thin-provisioned or compressed volumes, using this method to shrink a
volume results in shrinking its virtual capacity. To shrink its real capacity, see the
information that is provided in 10.7.12, Shrinking the real capacity of a thin-provisioned or
compressed volume on page 700.
2. In the volume table, select the volume that you want to shrink.
3. Click Actions → Shrink (Figure 10-132).
Figure 10-132 Shrink volume action
4. The Shrink Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens. See Figure 10-133.
You can either enter how much you want to shrink the volume by using the Shrink By field
or you can directly enter the final size that you want to use for the volume by using the
Final Size field. The other field will be computed automatically. For example, if you have a
10 GB volume and you want it to become 6 GB, you can specify 4 GB in the Shrink By field
or you can directly specify 6 GB in Final Size field, as shown in Figure 10-133.
5. When you are finished, click Shrink, and the changes become visible on your host.
Figure 10-133 Shrinking a volume
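CLI: The equivalent CLI command is shrinkvdisksize. A sketch that shrinks a placeholder volume by 4 GB:
svctask shrinkvdisksize -size 4 -unit gb VOL01
The command takes the amount to remove rather than a final size, so calculate the difference first.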
10.7.11 Expanding a volume
Expanding a volume presents a larger capacity disk to your operating system. Although you
can expand a volume easily using the SAN Volume Controller, you must ensure that your
Tip: You can also right-click a volume and select Shrink from the list.
operating system is prepared for it and supports the volume expansion before you use this
function.
Dynamic expansion of a volume is only supported when the volume is in use by one of the
following operating systems:
AIX 5L V5.2 and higher
Microsoft Windows Server 2000, Windows Server 2003, Windows Server 2008 and
Windows Server 2012 for basic disks
Microsoft Windows Server 2000, Windows Server 2003 with a hot fix from Microsoft
(Q327020) for dynamic disks, Windows Server 2008 and Windows Server 2012
If your operating system supports it, perform the following steps to expand a volume:
1. Select the volume in the table.
2. Click Actions → Expand (Figure 10-134).
Figure 10-134 Expand volume action
3. The Expand Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens; see Figure 10-135 on page 700.
You can either enter how much you want to enlarge the volume by using the Expand By
field, or you can directly enter the final size that you want to use for the volume by using
the Final Size field. The other field will be computed automatically.
For example, if you have a 6 GB volume and you want it to become 10 GB, you can specify
4 GB in the Expand By field or you can directly specify 10 GB in the Final Size field, as
shown in Figure 10-135 on page 700.
Important: For thin-provisioned volumes, using this method results in expanding its virtual
capacity. If you want to expand its real capacity, see 10.7.13, Expanding the real capacity
of a thin-provisioned or compressed volume on page 702.
Tip: You can also right-click a volume and select Expand from the list.
4. When you are finished, click Expand (see Figure 10-135).
Figure 10-135 Expanding a volume
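CLI: The equivalent CLI command is expandvdisksize, which also takes the amount to add rather than the final size. A sketch with a placeholder volume name:
svctask expandvdisksize -size 4 -unit gb VOL01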
10.7.12 Shrinking the real capacity of a thin-provisioned or compressed
volume
From a host's perspective, shrinking the virtual capacity of a volume (see 10.7.10, Shrinking a volume on page 697) affects host access; see that section for the effects. The real capacity shrinkage of a volume, which is described in this section, is transparent to the hosts.
To shrink the real capacity of a thin-provisioned or compressed volume, perform the following
steps:
1. Depending on the case, use one of the following actions:
For a non-mirrored volume: Select the thin-provisioned or compressed volume, and click Actions → Volume Copy Actions → Thin provisioned (or Compressed) → Shrink, as shown in Figure 10-136 on page 701.
Volume expansion notes:
No support exists for the expansion of image mode volumes.
If there are insufficient extents to expand your volume to the specified size, you
receive an error message.
If you use volume mirroring, all copies must be synchronized before expanding.
Note: In the following sections we demonstrate real capacity operations using a
thin-provisioned volume as an example. However, the same actions apply to a compressed
volume preset.
Figure 10-136 Non-mirrored volume: Thin-provisioned Shrink action
For a mirrored volume: Select the thin-provisioned or compressed copy of the mirrored volume that you want to modify, and click Actions → Thin provisioned (or Compressed) → Shrink, as shown in Figure 10-137.
Figure 10-137 Mirrored volume: Thin-provisioned Shrink action
2. The Shrink Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens; see Figure 10-138 on page 702.
You can either enter the amount by which you want to shrink the volume by using the Shrink By field, or you can directly enter the final real capacity that you want to use for the volume by using the Final Real Capacity field. The other field will be computed automatically.
Tip: You can also right-click the volume and select Volume Copy Actions → Thin provisioned (or Compressed) → Shrink from the list.
Tip: You can also right-click the thin-provisioned or compressed mirrored copy and select Thin provisioned (or Compressed) → Shrink from the list.
For example, if you have a current real capacity equal to 221.3 MB and you want a final
real size that is equal to 100 MB, you can specify 121.3 MB in the Shrink By field, or you
can directly specify 100 MB in the Final Real Capacity field, as shown in Figure 10-138.
3. When you are finished, click Shrink (Figure 10-138) and the changes will become visible
on your host.
Figure 10-138 Shrink Volume real capacity window
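CLI: On the CLI, the real capacity is shrunk with the -rsize parameter of shrinkvdisksize. A sketch only; the volume name and copy ID are placeholders:
svctask shrinkvdisksize -rsize 121 -unit mb -copy 1 VOL01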
10.7.13 Expanding the real capacity of a thin-provisioned or compressed
volume
From a host's perspective, expanding the virtual capacity of a volume (see 10.7.11, Expanding a volume on page 698) affects host access; see that section for the effects. The real capacity expansion of a volume, which is described in this section, is transparent to the hosts.
To expand the real size of a thin-provisioned or compressed volume, perform the following
steps:
1. Depending on the case, use one of the following actions:
For a non-mirrored volume: Select the thin-provisioned or compressed volume, and click Actions → Volume Copy Actions → Thin provisioned (or Compressed) → Expand (Figure 10-139 on page 703).
Figure 10-139 Non-mirrored volume: Thin-provisioned Expand action
For a mirrored volume: Select the thin-provisioned or compressed copy of the mirrored volume that you want to modify, and click Actions → Thin provisioned (or Compressed) → Expand (Figure 10-140).
Figure 10-140 Mirrored volume: Thin provisioned Expand action
2. The Expand Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens (Figure 10-141 on page 704).
You can either enter the amount by which you want to expand the volume by using the Expand By field, or you can directly enter the final real capacity that you want to use for the volume by using the Final Size field. The other field will be computed automatically.
Tip: You can also right-click the volume and select Volume Copy Actions → Thin provisioned (or Compressed) → Expand from the list.
Tip: You can also right-click the thin-provisioned or compressed copy and select Thin provisioned (or Compressed) → Expand from the list.
For example, if you have a current real capacity equal to 100 MB and you want a final real
size equal to 1000 MB, you can specify 900 MB in the Expand By field or you can directly
specify 1000 MB in the Final Size field, as shown in Figure 10-141 on page 704.
3. When you are finished, click Expand (Figure 10-141), and the changes become visible on
your host.
Figure 10-141 Expand real capacity window
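CLI: Similarly, the real capacity is expanded with the -rsize parameter of expandvdisksize. A sketch with placeholder names:
svctask expandvdisksize -rsize 900 -unit mb -copy 1 VOL01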
10.7.14 Migrating a volume
To migrate a volume, perform the following steps:
1. In the table, select the volume that you want to migrate.
2. Click Actions → Migrate to Another Pool (Figure 10-142).
Figure 10-142 Migrate to Another Pool action
Tip: You can also right-click a volume and select Migrate to Another Pool from the list.
3. The Migrate Volume Copy window opens (Figure 10-143 on page 705). Select the storage
pool to which you want to reassign the volume. You will only be presented with a list of
storage pools with the same extent size.
4. When you have finished making your selections, click Migrate to begin the migration
process.
Figure 10-143 Migrate Volume Copy window
5. You can check the migration by using the Running Tasks status area (Figure 10-144).
Figure 10-144 Running Tasks status area
To expand this area, click the icon, and then click Migration. Figure 10-145 on
page 706 shows a detailed view of the running tasks.
Important: After a migration starts, you cannot stop it. Migration continues until it is
complete unless it is stopped or suspended by an error condition, or the volume that is
being migrated is deleted.
Figure 10-145 Long Running Task: Volume migration
6. When the migration is finished, the volume will be part of the new pool.
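CLI: The same migration can be started with migratevdisk and monitored with lsmigrate. A sketch only; the volume and pool names are placeholders:
svctask migratevdisk -vdisk VOL01 -mdiskgrp POOL_DS8K
svcinfo lsmigrate
The target pool must have the same extent size as the source pool, just as in the GUI.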
10.7.15 Adding a mirrored copy to an existing volume
You can add a mirrored copy to an existing volume, which will give you two copies of the
underlying disk extents.
You can use a volume mirror for any operation for which you can use a volume. It is
transparent to higher-level operations, such as Metro Mirror, Global Mirror, or FlashCopy.
Creating a volume mirror from an existing volume is not restricted to the same storage pool, so it is an ideal method to use to protect your data from a disk system or an array failure. If one copy of the mirror fails, the other copy continues to provide data access. When the failed copy is repaired, the copies automatically resynchronize.
You can also use a volume mirror as an alternative migration tool, where you can synchronize the mirror before splitting off the original side of the mirror. The volume stays online, and it can be used normally, while the data is being synchronized. The copies can also have different structures (striped, image, sequential, or space-efficient) and different extent sizes.
To create a mirror copy from within the Volumes panel, perform the following steps:
1. Select the volume in the table.
2. Click Actions → Volume Copy Actions → Add Mirrored Copy (Figure 10-146 on page 707).
Tip: You can also create a new mirrored volume by selecting the Mirror or Thin Mirror
preset during the volume creation, as shown in Figure 10-100 on page 678.
Figure 10-146 Add Mirrored Copy actions
3. The Add Volume Copy - volumename window (where volumename is the volume that you
selected in the previous step) opens (Figure 10-147 on page 708). You can perform the
following steps separately or in combination:
Select the storage pool in which you want to put the copy. To maintain higher availability, choose a separate storage pool.
Select the Thin Provisioned volume type by using the radio button to make the copy space-efficient.
The following parameters are used for this thin-provisioned copy:
Real Size: 2% of Virtual Capacity
Enable Autoexpand: Active
Warning Threshold: 80% of Virtual Capacity
Thin-Provisioned Grain Size: 256 KB
4. Click Add Copy (Figure 10-147 on page 708).
Tip: You can also right-click a volume and select Volume Copy Actions → Add Mirrored Copy from the list.
Changing options: You can only change Real Size, Enable Autoexpand, and
Warning Threshold after the thin-provisioned volume copy has been added.
For information about modifying the real size of your thin-provisioned volume, see
10.7.12, Shrinking the real capacity of a thin-provisioned or compressed volume
on page 700 and 10.7.13, Expanding the real capacity of a thin-provisioned or
compressed volume on page 702.
For information about modifying the Auto expand and Warning Threshold of your
thin-provisioned volume, see 10.7.5, Modifying thin-provisioned or compressed
volume properties on page 688.
Figure 10-147 Add Copy to volume window
5. You can check the synchronization progress by using the Running Tasks status area (see Figure 10-148). To expand this status area, click the icon and click Volume Synchronization. Figure 10-148 shows a detailed view of the running tasks.
Figure 10-148 Running Task: Volume Synchronization
6. When synchronization is finished, the volume will be part of the new pool (Figure 10-149
on page 709).
Mirror Sync rate: You can change the Mirror Sync Rate (the default is 50%) by modifying the volume properties. For more information, see 10.7.4, Modifying a volume on page 686.
Figure 10-149 Mirrored volume
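CLI: A mirrored copy can also be added with the addvdiskcopy command. A sketch that adds a thin-provisioned copy with the same defaults that the GUI preset uses; the pool and volume names are placeholders:
svctask addvdiskcopy -mdiskgrp POOL_V7000 -rsize 2% -autoexpand -grainsize 256 -warning 80% VOL01
svcinfo lsvdisksyncprogress VOL01   (shows the synchronization progress of the new copy)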
10.7.16 Deleting a mirrored copy from a volume mirror
To remove a volume copy, perform the following steps:
1. Select the volume copy that you want to remove in the table, and click Actions → Delete this Copy (Figure 10-150).
Figure 10-150 Delete this Copy action
2. The Warning window opens (Figure 10-151 on page 710). Click YES to confirm your
choice.
Primary copy: As shown in Figure 10-149, the primary copy is identified with an
asterisk (*). In this example, Copy 0 is the primary copy.
Tip: You can also right-click a volume and select Delete this Copy from the list.
Figure 10-151 Warning window
3. The copy is now deleted.
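CLI: The equivalent CLI command is rmvdiskcopy. A sketch with placeholder names, removing copy 1 of the volume:
svctask rmvdiskcopy -copy 1 VOL01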
10.7.17 Splitting a volume copy
To split off a synchronized volume copy to a new volume, perform the following steps:
1. In the table, select the volume copy that you want to split, and click Actions → Split into New Volume (Figure 10-152).
Figure 10-152 Split into New Volume action
2. The Split Volume Copy window opens (Figure 10-153 on page 711). In this window, type a
name for the new volume.
Removing a primary copy: If you try to remove the primary copy before it has been
synchronized with the other copy, you will receive the message: The command failed
because the copy specified is the only synchronized copy. You must wait until the end of
the synchronization to be able to remove this copy.
Tip: You can also right-click a volume and select Split into New Volume from the list.
3. Click Split Volume Copy (Figure 10-153).
Figure 10-153 Split Volume Copy window
4. This new volume is now available to be mapped to a host.
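CLI: A synchronized copy can also be split off on the CLI with splitvdiskcopy. A sketch with placeholder names:
svctask splitvdiskcopy -copy 1 -name VOL01_SPLIT VOL01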
10.7.18 Validating volume copies
To validate the copies of a mirrored volume, perform the following steps:
1. Select a copy of this volume in the table, and click Actions → Validate Volume Copies (Figure 10-154).
Figure 10-154 Validate Volume Copies actions
Volume name: If you do not provide a name, the SAN Volume Controller automatically
generates the name vdiskx (where x is the ID sequence number that is assigned by the
SAN Volume Controller internally). If you want to provide a name, you can use the
letters A to Z and a to z, the numbers 0 to 9, and the underscore. The volume name can be between one and 63 characters in length.
Important: After you split a volume mirror, you cannot resynchronize or recombine them.
You must create a new volume copy.
2. The Validate Volume Copies window opens (Figure 10-155). In this window, select one of
the following options:
Generate Event of Differences: Use this option if you only want to verify that the
mirrored volume copies are identical. If a difference is found, the command stops and
logs an error that includes the logical block address (LBA) and the length of the first
difference.
You can use this option, starting at a different LBA each time, to count the number of differences on a volume.
Overwrite Differences: Use this option to overwrite the content from the primary volume
copy to the other volume copy. The command corrects any differing sectors by copying
the sectors from the primary copy to the copies being compared. Upon completion, the
command process logs an event, which indicates the number of differences that were
corrected.
Use this option if you are sure that either the primary volume copy data is correct, or
that your host applications can handle incorrect data.
Return Media Error to Host: Use this option to convert sectors on all volume copies that
contain different contents into virtual medium errors. Upon completion, the command
logs an event, which indicates the number of differences that were found, the number
of differences that were converted into medium errors, and the number of differences
that were not converted.
Use this option if you are unsure what the correct data is, and you do not want an
incorrect version of the data to be used.
Figure 10-155 Validate Volume Copies
3. Click Validate (Figure 10-155).
4. The volume is now checked.
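CLI: The three options correspond to the parameters of the repairvdiskcopy command. A sketch with a placeholder volume name; run only one of the following at a time:
svctask repairvdiskcopy -validate VOL01   (log an event for differences)
svctask repairvdiskcopy -resync VOL01     (overwrite differences from the primary copy)
svctask repairvdiskcopy -medium VOL01     (convert differing sectors to virtual medium errors)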
10.7.19 Migrating to a thin-provisioned volume using volume mirroring
To migrate to a thin-provisioned volume, perform the following steps:
1. Select the volume in the table.
2. Click Actions → Volume Copy Actions → Add Mirrored Copy (Figure 10-156 on page 713).
Figure 10-156 Add Mirrored Copy actions
3. The Add Volume Copy - volumename window (where volumename is the volume that you
selected in the previous step) opens (Figure 10-157 on page 714). You can perform the
following steps separately or in combination:
Select the storage pool in which you want to put the copy. To maintain higher availability, choose a separate storage pool.
Select the Thin Provisioned volume type to make the copy space-efficient.
The following parameters are used for this thin-provisioned copy:
Real Size: 2% of Virtual Capacity
Autoexpand: Active
Warning Threshold: 80% of Virtual Capacity
Thin-Provisioned Grain Size: 256 KB
4. Click Add Copy (Figure 10-157 on page 714).
Tip: You can also right-click a volume and select Volume Copy Actions → Add Mirrored Copy from the list.
Changing options: You can change the Real Size, Autoexpand, and Warning
Threshold after the volume copy has been added in the GUI. For the
Thin-Provisioned Grain Size, you need to use the command-line interface (CLI).
Figure 10-157 Add Volume Copy window
5. You can check the migration by using the Running Tasks status area menu, as shown in
Figure 10-144 on page 705.
To expand this status area, click the icon and click Volume Synchronization.
Figure 10-158 shows the detailed view of the running tasks.
Figure 10-158 Running Tasks status area: Volume Synchronization
6. When the synchronization is finished, select the non-thin-provisioned copy that you want to remove in the table. Select Actions → Delete this Copy (Figure 10-159 on page 715).
Mirror Sync Rate: You can change the Mirror Sync Rate (the default is 50%) by modifying the volume properties. For more information, see 10.7.4, Modifying a volume on page 686.
Figure 10-159 Delete this Copy window
7. The Warning window opens (Figure 10-160). Click YES to confirm your choice.
Figure 10-160 Warning window
8. When the copy is deleted, your thin-provisioned volume is ready for use.
At this point, you have completed the required tasks to manage volumes within a SAN Volume
Controller environment.
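CLI: The whole migration to a thin-provisioned volume can also be scripted. A sketch only, assuming a fully allocated volume VOL01 whose original is copy 0 and whose new thin-provisioned copy becomes copy 1 (all names and IDs are placeholders):
svctask addvdiskcopy -mdiskgrp POOL_THIN -rsize 2% -autoexpand -grainsize 256 VOL01
svcinfo lsvdisksyncprogress VOL01          (wait until the progress reaches 100)
svctask rmvdiskcopy -copy 0 VOL01          (remove the original fully allocated copy)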
10.7.20 Creating a volume in image mode
See Chapter 6, Data migration on page 225 for the required steps to create a volume in
image mode.
Tip: You can also right-click a volume and select Delete this Copy from the list.
Tip: If you try to remove the primary copy before it has been synchronized with the
other copy, you will receive the following message: The command failed because the
copy specified is the only synchronized copy. You must wait until the end of the
synchronization to be able to remove this copy.
10.7.21 Migrating a volume to an image mode volume
See Chapter 6, Data migration on page 225 for the required steps to migrate a volume to an
image mode volume.
10.7.22 Creating an image mode mirrored volume
See Chapter 6, Data migration on page 225 for the required steps to create an image mode
mirrored volume.
10.8 Copy Services: Managing FlashCopy
Working with FlashCopy by using the GUI is often easier if you have only a small number of mappings. When working with many mappings, however, use the CLI to execute your commands.
In this section, we describe the tasks that you can perform at a FlashCopy level. There are three ways to visualize and manage your FlashCopy mappings:
By using the FlashCopy panel from the SAN Volume Controller dynamic menu Copy
Services selection (Figure 10-161)
In its basic mode, the IBM FlashCopy function copies the contents of a source volume to a
target volume. Any data that existed on the target volume is lost and is replaced by the
copied data.
Figure 10-161 FlashCopy panel
Copy Services: See Chapter 8, Advanced Copy Services on page 365 for more
information about the functionality of Copy Services in the SAN Volume Controller
environment.
By using the Consistency Groups panel (Figure 10-162)
A Consistency Group is a container for mappings. You can add many mappings to a
Consistency Group.
Figure 10-162 Consistency Groups panel
By using the FlashCopy Mappings panel (Figure 10-163)
A FlashCopy mapping defines the relationship between a source volume and a target
volume.
Figure 10-163 FlashCopy Mappings panel
10.8.1 Creating a FlashCopy mapping
In this section, we create FlashCopy mappings for volumes with their respective targets.
To perform this action, follow these steps:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over Copy
Services and click FlashCopy. The FlashCopy panel opens (Figure 10-164).
Figure 10-164 FlashCopy panel
2. Select the volume for which you want to create the FlashCopy relationship (Figure 10-165
on page 718).
Figure 10-165 FlashCopy mapping: Select the volume (or volumes)
Depending on whether you have already created the target volumes for your FlashCopy
mappings, there are two options:
If you have already created the target volumes, see Using existing target volumes on
page 718.
If you want the SAN Volume Controller to create the target volumes for you, see Creating
new target volumes on page 722.
Using existing target volumes
Follow these steps to use existing target volumes for the FlashCopy mappings:
1. Select the target volume that you want to use, and click Actions → Advanced FlashCopy → Use existing target volumes (Figure 10-166).
Figure 10-166 Use existing target volumes
Multiple FlashCopy mappings: To create multiple FlashCopy mappings at one time,
select multiple volumes by holding down Ctrl and using the mouse to select the entries
that you want.
2. The Create FlashCopy Mapping window opens (see Figure 10-167). In this window, you
have to create the relationship between the source volume (the disk that is copied) and the
target volume (the disk that receives the copy). A mapping can be created between any
two volumes inside a SAN Volume Controller clustered system. Select a source volume
and a target volume for your FlashCopy mapping, and then click Add. If you need to
create other copies, repeat this action.
Figure 10-167 Create FlashCopy Mapping using existing target volume
To remove a relationship that you have created, use the remove icon (Figure 10-168).
3. Click Next after you have created all of the relationships that you want to create
(Figure 10-168).
Figure 10-168 Create FlashCopy Mapping
Important: The source and target volumes must be of equal size. So, for a given
source volume, only targets of the same size are visible in the list box.
Volumes: The volumes do not have to be in the same I/O Group or storage pool.
4. On the next window, select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, or Backup) to simplify the more common FlashCopy operations (Figure 10-169).
We describe the presets and their use cases:
Snapshot Creates a copy-on-write point-in-time copy.
Clone Creates an exact replica of the source volume on a target volume. The
copy can be changed without affecting the original volume.
Backup Creates a FlashCopy mapping that can be used to recover data or
objects if the system experiences data loss. These backups can be
copied multiple times from source and target volumes.
Figure 10-169 Create FlashCopy Mapping window
For whichever preset you select, you can customize various advanced options. You can
access these settings by clicking Advanced Settings (Figure 10-170 on page 721).
If you prefer not to customize these settings, go directly to step 5 on page 721.
You can customize the following options, as shown in Figure 10-170 on page 721:
Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
Incremental: This option copies only the parts of the source or target volumes that have
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
Delete mapping after completion: This option automatically deletes a FlashCopy
mapping after the background copy is completed. Do not use this option when the
background copy rate is set to zero (0).
Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping has not completed, the target volume is offline while the
mapping is stopping.
Incremental FlashCopy mapping: Even if the type of the FlashCopy mapping is
incremental, the first copy process copies all of the data from the source volume to
the target volume.
Figure 10-170 Create FlashCopy Mapping Advanced Settings
Once you are done with your modifications, click Next.
5. If you want to include this FlashCopy mapping in a Consistency Group, in the window that is shown in Figure 10-171, select Yes, add the mappings to a consistency group, and select the Consistency Group from the drop-down list box.
Figure 10-171 Add the mappings to a Consistency Group
If you do not want to include this FlashCopy mapping in a Consistency Group, select No,
do not add the mappings to a consistency group (Figure 10-172).
Figure 10-172 Do not add the mappings to a Consistency Group
Then, click Finish as shown in Figure 10-171 on page 721 and Figure 10-172 on
page 721.
6. Check the result of this FlashCopy mapping (Figure 10-173). For each FlashCopy mapping relationship that has been created, a mapping name is automatically generated starting with fcmapX, where X is the next available number.
If needed, you can rename these mappings. See 10.8.11, Renaming a FlashCopy
mapping on page 741, for more information about this topic.
Figure 10-173 Flash Copy Mapping
At this point, the FlashCopy mapping is now ready for use.
Creating new target volumes
Perform the following steps to create new target volumes for FlashCopy mapping:
1. If you have not created a target volume for this source volume, click Actions → Advanced FlashCopy → Create new target volumes (Figure 10-174 on page 723).
Target volume naming: If the target volume does not exist, it will be created with a
name based on its source volume and a generated number at the end. An example is
source_volume_name_XX, where XX is a number that was generated dynamically.
Figure 10-174 Create new target volumes action
2. On the Create FlashCopy Mapping window (Figure 10-175), you need to select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, or Backup) to simplify the more common FlashCopy operations.
The presets and their use cases are described here:
Snapshot Creates a copy-on-write point-in-time copy.
Clone Creates an exact replica of the source volume on a target volume. The
copy can be changed without affecting the original volume.
Backup Creates a FlashCopy mapping that can be used to recover data or
objects if the system experiences data loss. These backups can be
copied multiple times from source and target volumes. See
Figure 10-175.
Figure 10-175 Create FlashCopy Mapping window
Whichever preset you select, you can customize various advanced options. To access
these settings, click Advanced Settings (Figure 10-176 on page 724).
If you prefer not to customize these settings, go directly to step 3 on page 724.
You can customize the following options, as shown in Figure 10-176 on page 724:
Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
Incremental: This option copies only the parts of the source or target volumes that have
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
Delete mapping after completion: This option automatically deletes a FlashCopy
mapping after the background copy is completed. Do not use this option when the
background copy rate is set to 0.
Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping has not completed, the target volume is offline while the
mapping is stopping.
Figure 10-176 Create FlashCopy Mapping Advanced Settings
3. If you want to include this FlashCopy mapping in a Consistency Group, in the next window
(Figure 10-177 on page 725), select Yes, add the mappings to a consistency group.
Select the Consistency Group in the drop-down list box.
If you do not want to include this FlashCopy mapping in a Consistency Group, select No,
do not add the mappings to a consistency group.
Choose whichever option you prefer, and click Next (Figure 10-177 on page 725).
Incremental FlashCopy mapping: Even if the type of the FlashCopy mapping is
incremental, the first copy process copies all of the data from the source to the
target volume.
Figure 10-177 Add the mappings to a Consistency Group
4. The next window shows the volume capacity management dialog. Choose from the four options, depending on what capacity preset you want to use with your target volume (Figure 10-178). Here you can decide whether the target volume is generic, thin-provisioned, or compressed, or whether it inherits its capacity properties from the source volume.
Figure 10-178 Create FlashCopy mapping, capacity management
If you select thin-provisioned as your target volume, you can set up these parameters
(Figure 10-179 on page 726):
Real Capacity: Type the real size that you want to allocate. This size is the amount of disk space that will be allocated. It can be either a percentage of the virtual size or a specific size in GB.
Automatically Expand: Select auto expand, which allows the real disk size to grow
as required.
Warning Threshold: Type a percentage or select a specific size for the usage
threshold warning. This function will generate a warning when the used disk
capacity on the space-efficient copy first exceeds the specified threshold.
Figure 10-179 Create FlashCopy mapping, capacity management, Thin-Provisioning
Similarly, if you want to use the compression preset, you can configure the real capacity, auto-expand, and warning threshold on the target volume (Figure 10-180).
Figure 10-180 Create FlashCopy mapping, capacity management, Compression
5. In the next window (Figure 10-181 on page 727), select the storage pool that is used to
automatically create new targets. You can choose to use the same storage pool that is
used by the source volume, or you can select a storage pool from a list. In that case, select
one storage pool and then click Next.
Figure 10-181 Select the storage pool
6. Check the result of this FlashCopy mapping, as shown in Figure 10-182. For each FlashCopy mapping relationship created, a mapping name is automatically generated starting with fcmapX, where X is the next available number. If needed, you can rename these mappings; see 10.8.11, Renaming a FlashCopy mapping on page 741.
Figure 10-182 FlashCopy mapping
At this point, the FlashCopy mapping is ready for use.
Tip: You can invoke FlashCopy from the SAN Volume Controller GUI, but using the SAN
Volume Controller GUI might be impractical if you plan to handle a large number of
FlashCopy mappings or Consistency Groups periodically, or at varying times. In these
cases, creating a script by using the CLI might be more convenient.
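CLI: As a starting point for such a script, a FlashCopy mapping between two existing volumes can be created and started with two commands. A sketch only; the mapping, source, and target names are placeholders:
svctask mkfcmap -source VOL01 -target VOL01_COPY -name FCMAP_VOL01 -copyrate 50
svctask startfcmap -prep FCMAP_VOL01
The -prep flag prepares the mapping (flushes the cache) before it is started.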
10.8.2 Creating and starting a snapshot preset with a single click
To create and start a snapshot with one click, perform the following steps:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services in
the dynamic menu, and click FlashCopy.
2. Select the volume that you want to snapshot.
3. Click Actions → Create Snapshot (Figure 10-183).
Figure 10-183 Create Snapshot option
4. A volume is created as a target volume for this snapshot in the same pool as the source
volume. The FlashCopy mapping is created, and it is started.
You can check the FlashCopy progress in the Progress column or in the Running Tasks status area, as shown in Figure 10-184 on page 729.
Snapshot preset: The snapshot creates a point-in-time view of production data. The
snapshot is not intended to be an independent copy, but instead it is used to maintain a
view of the production data at the time that the snapshot is created. Therefore, the
snapshot holds only the data from regions of the production volume that have changed
since the snapshot was created. Because the snapshot preset uses thin provisioning, only
the capacity that is required for the changes is used.
Snapshot uses these preset parameters:
No background copy.
Incremental: No
Delete after completion: No
Cleaning rate: No
The target pool is the primary copy source pool.
Figure 10-184 Snapshot created and started
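CLI: A snapshot-style mapping corresponds to a mapping with a background copy rate of zero onto a thin-provisioned target. A sketch only; the volume and mapping names are placeholders, and the target volume is assumed to exist already:
svctask mkfcmap -source VOL01 -target VOL01_SNAP -name SNAP_VOL01 -copyrate 0
svctask startfcmap -prep SNAP_VOL01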
10.8.3 Creating and starting a clone preset with a single click
To create and start a clone with one click, perform these steps:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services in
the dynamic menu and click FlashCopy.
2. Select the volume that you want to clone.
3. Click Actions → Create Clone (Figure 10-185 on page 730).
Clone preset: The clone preset creates an exact replica of the volume, which can be
changed without affecting the original volume. After the copy completes, the mapping that
was created by the preset is automatically deleted.
Clone preset parameters:
Background copy rate: 50
Incremental: No
Delete after completion: Yes
Cleaning rate: 50
The target pool is the primary copy source pool.
Figure 10-185 Create Clone option
4. A volume is created as a target volume for this clone in the same pool as the source
volume. The FlashCopy mapping is created and started. You can check the FlashCopy
progress in the Progress column or in the Running Tasks Status column. After the
completion of the FlashCopy clone, the mapping is removed and the new cloned volume
becomes available (Figure 10-186).
Figure 10-186 Clone created and FlashCopy relationship removed
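CLI: A clone-style mapping uses a background copy and deletes itself when the copy completes. A sketch with placeholder names that mirrors the preset parameters listed above:
svctask mkfcmap -source VOL01 -target VOL01_CLONE -name CLONE_VOL01 -copyrate 50 -cleanrate 50 -autodelete
svctask startfcmap -prep CLONE_VOL01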
10.8.4 Creating and starting a backup preset with a single click
Backup preset: The backup preset creates a point-in-time replica of the production data.
After the copy completes, the backup view can be refreshed from the production data, with
minimal copying of data from the production volume to the backup volume.
Backup preset parameters:
Background Copy rate: 50
Incremental: Yes
Delete after completion: No
Cleaning rate: 50
The target pool is the primary copy source pool.
To create and start a backup with one click, perform these steps:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services in
the dynamic menu and then click FlashCopy.
2. Select the volume that you want to back up.
3. Click Actions → Create Backup (Figure 10-187).
Figure 10-187 Create Backup option
4. A volume is created as a target volume for this backup in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column or in the Running Tasks
Status column (Figure 10-188).
Figure 10-188 Backup created and started
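CLI: A backup-style mapping is an incremental mapping that is kept after the copy completes, so that later refreshes copy only the changed grains. A sketch with placeholder names:
svctask mkfcmap -source VOL01 -target VOL01_BKP -name BKP_VOL01 -copyrate 50 -cleanrate 50 -incremental
svctask startfcmap -prep BKP_VOL01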
10.8.5 Creating a FlashCopy Consistency Group
To create a FlashCopy Consistency Group in the SAN Volume Controller GUI, perform these
steps:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and then click Consistency Groups. The Consistency Groups panel opens
(Figure 10-189 on page 732).
Figure 10-189 Consistency Group panel
2. Click Create Consistency Group (Figure 10-190).
Figure 10-190 Create a FlashCopy Consistency Group
3. Enter the desired FlashCopy Consistency Group name and click Create (Figure 10-191).
Figure 10-191 Create Consistency Group window
4. Figure 10-192 on page 733 shows the result.
Consistency Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_) character. The name can be between one and 63 characters in length.
Figure 10-192 View Consistency Group
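CLI: The equivalent CLI command is mkfcconsistgrp. A sketch with a placeholder group name:
svctask mkfcconsistgrp -name FCCG_ITSO
svcinfo lsfcconsistgrp   (lists the FlashCopy Consistency Groups and their states)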
10.8.6 Creating FlashCopy mappings in a Consistency Group
In this section, we create FlashCopy mappings for volumes with their respective targets. The
source and target volumes were created prior to this operation.
To perform this action, follow these steps:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and then click Consistency Groups. The Consistency Groups panel opens, as shown in
Figure 10-189 on page 732.
2. Select the Consistency Group (see Figure 10-193) in which you want to create the FlashCopy mapping.
If you prefer not to create a FlashCopy mapping in a Consistency Group, select Not in a Group in the list.
Figure 10-193 Consistency Group selection
3. If you select a new Consistency Group, click Actions → Create FlashCopy Mapping (Figure 10-194).
Figure 10-194 Create FlashCopy Mapping action for a Consistency Group
4. If you did not select a Consistency Group, click Create FlashCopy Mapping
(Figure 10-195).
Figure 10-195 Create FlashCopy Mapping
5. The Create FlashCopy Mapping window opens (Figure 10-196). In this window, you must
create the relationships between the source volumes (the volumes that are copied) and
the target volumes (the volumes that receive the copy). A mapping can be created
between any two volumes in a clustered system.
Figure 10-196 Create FlashCopy Mapping
6. Select a volume in the Source Volume column using the drop-down list box. Then, select a
volume in the Target Volume column using the drop-down list box. Click Add, as shown in
Figure 10-196. Repeat this action to create other relationships.
To remove a relationship that has been created, use the remove icon.
Consistency Groups: If no Consistency Group is defined, the mapping is a
stand-alone mapping. It can be prepared and started without affecting other mappings.
All mappings in the same Consistency Group must have the same status to maintain
the consistency of the group.
Important: The source and target volumes must be of equal size.
Tip: The volumes do not have to be in the same I/O Group or storage pool.
7. Click Next after all the relationships that you want to create are shown (Figure 10-197).
Figure 10-197 Create FlashCopy Mapping with the relationships that have been created
8. In the next window, you need to select one FlashCopy preset. The GUI provides three presets (Snapshot, Clone, or Backup) to simplify the more common FlashCopy operations (Figure 10-198). We describe the presets and their use cases:
Snapshot This preset creates a copy-on-write point-in-time copy.
Clone This preset creates an exact replica of the source volume on a target
volume. The copy can be changed without affecting the original volume.
Backup This preset creates a FlashCopy mapping that can be used to recover
data or objects if the system experiences data loss. These backups can
be copied multiple times from the source and target volumes.
Figure 10-198 Create FlashCopy Mapping window
Whichever preset you select, you can customize various advanced options. To access
these settings, click Advanced Settings.
If you prefer not to customize these settings, go directly to step 9.
You can customize the following options, as shown in Figure 10-199 on page 736:
Important: The source and target volumes must be of equal size. So for a given source
volume, only the targets with the appropriate size are shown.
Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
Incremental: This option copies only the parts of the source or target volumes that have
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
Delete after completion: This option automatically deletes a FlashCopy mapping after
the background copy is completed. Do not use this option when the background copy
rate is set to zero (0).
Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping has not completed, the target volume is offline while the
mapping is stopping.
Figure 10-199 Create FlashCopy Mapping Advanced Settings
9. If you do not want to add these FlashCopy mappings to a Consistency Group (see step 3 on page 733), you must confirm your choice by selecting No, do not add the mappings to a consistency group (Figure 10-200).
Figure 10-200 Do not add the mappings to a consistency group
10. Click Finish, as shown in Figure 10-201 on page 737.
Incremental copies: Even if the type of the FlashCopy mapping is incremental, the
first copy process copies all of the data from the source to the target volume.
11. Check the result of this FlashCopy mapping in the Consistency Groups window, as shown in Figure 10-201.
For each FlashCopy mapping relationship that you have created, a mapping name is automatically generated starting with fcmapX, where X is the next available number. If needed, you can rename these mappings (see 10.8.11, Renaming a FlashCopy mapping on page 741).
Figure 10-201 Create FlashCopy mappings result
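CLI: Mappings can also be created directly in a Consistency Group on the CLI, and the whole group can then be started together. A sketch only; the volume, target, and group names are placeholders:
svctask mkfcmap -source VOL01 -target VOL01_COPY -consistgrp FCCG_ITSO -copyrate 50
svctask mkfcmap -source VOL02 -target VOL02_COPY -consistgrp FCCG_ITSO -copyrate 50
svctask startfcconsistgrp -prep FCCG_ITSO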
10.8.7 Showing related volumes
Perform the following steps to show related volumes for a given FlashCopy mapping:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services in
the dynamic menu and then click either the FlashCopy, Consistency Groups, or FlashCopy
Mappings panel.
2. Select the volume (from the FlashCopy panel only) or the FlashCopy mapping for which you want to show the related volumes.
3. Click Actions → Show Related Volumes (Figure 10-202).
Figure 10-202 Show Related Volumes
Tip: You can invoke FlashCopy from the SAN Volume Controller GUI, but using the SAN
Volume Controller GUI might be impractical if you plan to handle a large number of
FlashCopy mappings or Consistency Groups periodically, or at varying times. In this case,
creating a script by using the operating system shell CLI might be more convenient.
Tip: You can also right-click a FlashCopy mapping and select Show Related Volumes from the list.
In the Related Volumes window (Figure 10-203), you can see the related mapping for a
given volume. If you click one of these volumes, you can see its properties. For more
information about volume properties, see 10.7.1, Volume information on page 675.
Figure 10-203 Related Volumes
4. Click Close to close this window.
10.8.8 Moving a FlashCopy mapping to a Consistency Group
Perform the following steps to move a FlashCopy mapping to a Consistency Group:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services in
the dynamic menu and then click either the FlashCopy, Consistency Groups, or FlashCopy
Mappings panel.
2. Select the FlashCopy mapping that you want to move to a Consistency Group or the
FlashCopy mapping for which you want to change the Consistency Group.
3. Click Actions → Move to Consistency Group (Figure 10-204).
Figure 10-204 Move to Consistency Group action
4. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency
Group for this FlashCopy mapping by using the drop-down list box (Figure 10-205 on
page 739).
Tip: You can also right-click a FlashCopy mapping and select Move to Consistency
Group from the list.
Figure 10-205 Move FlashCopy mapping to Consistency Group
5. Click Move to Consistency Group to confirm your changes.
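The equivalent CLI action is a single svctask chfcmap call; the mapping name fcmap0 and
the Consistency Group name FCCG_1 are placeholders:
# Move an existing FlashCopy mapping into a Consistency Group.
svctask chfcmap -consistgrp FCCG_1 fcmap0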
10.8.9 Removing a FlashCopy mapping from a Consistency Group
Perform the following steps to remove a FlashCopy mapping from a Consistency Group:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services in
the dynamic menu and then click either the FlashCopy, Consistency Groups, or FlashCopy
Mappings panel.
2. Select the FlashCopy mapping that you want to remove from a Consistency Group.
3. Click Actions → Remove from Consistency Group (Figure 10-206).
Figure 10-206 Remove from Consistency Group action
4. In the Remove FlashCopy Mapping from Consistency Group window, click Remove
(Figure 10-207 on page 740).
Tip: You can also right-click a FlashCopy mapping and select Remove from
Consistency Group from the list.
Figure 10-207 Remove FlashCopy Mapping from Consistency Group
10.8.10 Modifying a FlashCopy mapping
Perform the following steps to modify a FlashCopy mapping:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services in
the dynamic menu and then click either the FlashCopy, Consistency Groups, or FlashCopy
Mappings panel.
2. Select the FlashCopy mapping that you want to modify in the table.
3. Click Actions → Edit Properties (Figure 10-208).
Figure 10-208 Edit Properties
4. In the Edit FlashCopy Mapping window, you can modify the following parameters for a
selected FlashCopy mapping, as shown in Figure 10-209 on page 741:
Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping has not completed, the target volume is offline while the
mapping is stopping.
Tip: You can also right-click a FlashCopy mapping and select Edit Properties from the
list.
Figure 10-209 Edit FlashCopy Mapping
5. Click Save to confirm your changes.
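From the CLI, the same properties can be changed with svctask chfcmap; the mapping name
and the rates shown are placeholders:
# Change the background copy rate and the cleaning rate of an existing mapping.
svctask chfcmap -copyrate 80 fcmap0
svctask chfcmap -cleanrate 60 fcmap0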
10.8.11 Renaming a FlashCopy mapping
Perform the following steps to rename a FlashCopy mapping:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and then click either Consistency Groups or FlashCopy Mappings.
2. In the table, select the FlashCopy mapping that you want to rename.
3. Click Actions → Rename Mapping (Figure 10-210).
Figure 10-210 Rename Mapping action
4. In the Rename Mapping window, type the new name that you want to assign to the
FlashCopy mapping and click Rename (Figure 10-211 on page 742).
FlashCopy name: You can use the letters A to Z and a to z, the numbers 0 to 9, and
the underscore (_) character. The mapping name can be between one and 63
characters in length.
Tip: You can also right-click a FlashCopy mapping and select Rename from the list.
Figure 10-211 Renaming a FlashCopy mapping
10.8.12 Renaming a Consistency Group
To rename a Consistency Group, perform the following steps:
1. From the SAN Volume Controller Overview panel, move the cursor over the Copy
Services menu and then click Consistency Group.
2. From the left panel, select the Consistency Group that you want to rename. Then, select
Actions → Rename (Figure 10-212).
Figure 10-212 Renaming a Consistency Group
3. Type the new name that you want to assign to the Consistency Group and click Rename
(Figure 10-213 on page 743).
Consistency Group name: The name can consist of the letters A to Z and a to z, the
numbers 0 to 9, the dash (-), and the underscore (_) character. The name can be
between one and 63 characters in length. However, the name cannot start with a
number, the dash, or the underscore.
Figure 10-213 Changing the name for a Consistency Group
4. From the Consistency Group panel, the new Consistency Group name is displayed.
10.8.13 Deleting a FlashCopy mapping
Perform the following steps to delete a FlashCopy mapping:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and then click the FlashCopy, Consistency Groups, or FlashCopy Mappings icon.
2. In the table, select the FlashCopy mapping that you want to delete.
3. Click Actions → Delete Mapping (Figure 10-214).
Figure 10-214 Delete Mapping action
4. The Delete FlashCopy Mapping window opens, as shown in Figure 10-215 on page 744.
In the Verify the number of FlashCopy mappings that you are deleting field, enter the
number of mappings that you want to delete. This verification has been added to help you
avoid deleting the wrong mappings.
Selecting multiple FlashCopy mappings: To select multiple FlashCopy mappings,
hold down Ctrl and use the mouse to select the entries. This capability is only available
in the Consistency Groups and FlashCopy Mappings panels.
Tip: You can also right-click a FlashCopy mapping and select Delete Mapping from the
list.
If you still have target volumes that are inconsistent with the source volumes and you
definitely want to delete these FlashCopy mappings, select Delete the FlashCopy
mapping even when the data on the target volume is inconsistent, or if the target
volume has other dependencies.
Click Delete to complete the operation (Figure 10-215).
Figure 10-215 Delete FlashCopy Mapping
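The CLI equivalent is svctask rmfcmap; the mapping name is a placeholder:
# Delete a FlashCopy mapping. Add -force to delete it even when the data on the
# target volume is still inconsistent with the source.
svctask rmfcmap -force fcmap0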
10.8.14 Deleting a FlashCopy Consistency Group
Perform the following steps to delete a FlashCopy Consistency Group:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and then click the Consistency Groups panel.
2. Select the FlashCopy Consistency Group that you want to delete.
3. Click Actions → Delete (Figure 10-216).
Figure 10-216 Delete Consistency Group action
4. The Warning window opens (Figure 10-217 on page 745). Click Yes to complete the
operation.
Important: Deleting a Consistency Group does not delete the FlashCopy mappings.
Figure 10-217 Warning window
10.8.15 Starting the FlashCopy copy process
When the FlashCopy mapping is created, the copy process can be started. Only mappings
that are not members of a Consistency Group can be started individually. Follow these steps:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and then select the FlashCopy Mappings panel.
2. Choose the FlashCopy mapping that you want to start in the table.
3. Click Actions → Start (Figure 10-218) to start the FlashCopy process.
Figure 10-218 Start the FlashCopy process action
4. You can check the FlashCopy progress in the Progress column of the table
(Figure 10-219) or in the Running Tasks status area.
Figure 10-219 Checking FlashCopy progress
5. After the task completes, the FlashCopy mapping status is in a Copied state
(Figure 10-220 on page 746).
Tip: You can also right-click a FlashCopy mapping and select Start from the list.
Figure 10-220 Example of copied FlashCopy
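From the CLI, the same mapping can be prepared, started, and monitored as follows (fcmap0
is a placeholder name):
# Prepare (flush the cache for) and start the FlashCopy mapping, then check its progress.
svctask startfcmap -prep fcmap0
svcinfo lsfcmapprogress fcmap0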
10.8.16 Stopping the FlashCopy copy process
When a FlashCopy copy process is stopped, the target volume becomes invalid and is set
offline by the SAN Volume Controller. The FlashCopy mapping copy must be retriggered to
bring the target volume online again.
Perform the following steps to stop a FlashCopy copy process:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and select the FlashCopy Mappings panel.
2. Choose the FlashCopy mapping that you want to stop.
3. Click Actions → Stop (Figure 10-221) to stop the FlashCopy copy process.
Figure 10-221 Stopping the FlashCopy copy process
4. Notice that the FlashCopy Mapping status has now changed to Stopped (Figure 10-222).
Figure 10-222 FlashCopy Mapping status
Important: Only stop a FlashCopy copy process when the data on the target volume is
useless, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and is set offline by the SAN Volume
Controller.
10.8.17 Starting a FlashCopy Consistency Group copy process
All of the mappings in a Consistency Group will be brought to the same state. To start the
FlashCopy Consistency Group copy process, perform these steps:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and select the Consistency Groups panel.
2. Choose the consistency group that you want to start (Figure 10-223).
Figure 10-223 FlashCopy Consistency Groups window
3. Click Actions → Start (Figure 10-224) to start the FlashCopy Consistency Group copy
process.
Figure 10-224 Start FlashCopy Consistency Group copy process
4. You can check the FlashCopy consistency group copy progress in the Progress column
(Figure 10-225) or in the Running Tasks status area.
Figure 10-225 Checking FlashCopy Consistency Group copy progress
5. After the task completes, the FlashCopy status is in a Copied state (Figure 10-226 on
page 748).
Figure 10-226 Copied FlashCopy Consistency Group
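The CLI equivalent starts every mapping in the group with a single command; FCCG_1 is a
placeholder group name:
# Prepare and start all mappings in the Consistency Group, then display its state.
svctask startfcconsistgrp -prep FCCG_1
svcinfo lsfcconsistgrp FCCG_1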
10.8.18 Stopping the FlashCopy Consistency Group copy process
When a FlashCopy Consistency Group copy process is stopped, the target volumes become
invalid and are set offline by the SAN Volume Controller. The FlashCopy mapping or
Consistency Group must be prepared again or retriggered to bring the target volumes online
again.
Perform the following steps to stop a FlashCopy Consistency Group copy process:
1. From the SAN Volume Controller Overview panel, move the cursor over Copy Services
and select the Consistency Groups panel.
2. In the table, select the FlashCopy Consistency Group that you want to stop.
3. Click Actions → Stop (Figure 10-227) to stop the FlashCopy Consistency Group copy process.
Figure 10-227 FlashCopy Consistency Group copy process Stop action
4. Notice that the FlashCopy Consistency Group status has changed to Stopped (Figure 10-228).
Figure 10-228 FlashCopy Consistency Group status
Important: Only stop a FlashCopy Consistency Group copy process when the data on the
target volume is useless, or if you want to modify the FlashCopy mapping. When a
FlashCopy Consistency Group copy process is stopped, the target volumes become invalid
and are set offline by the SAN Volume Controller, as shown in Figure 10-229 on page 749.
4. The target volumes are now shown as Offline in the Volumes list (Figure 10-229).
Figure 10-229 FlashCopy Target volumes offline
10.8.19 Migrating between a fully allocated volume and a Space-Efficient
volume
If you want to migrate from a fully allocated volume to a Space-Efficient volume, follow the
same procedure that is described in 10.8.1, Creating a FlashCopy mapping on page 717.
However, make sure that you either select a Space-Efficient volume that has already been
created as your target volume, or create one. You can use this same method to migrate from
a Space-Efficient volume to a fully allocated volume.
Create a FlashCopy mapping with the fully allocated volume as the source and the
Space-Efficient volume as the target.
10.8.20 Reversing and splitting a FlashCopy mapping
You can now perform a reverse FlashCopy mapping without having to remove the original
FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning.
Figure 10-230 on page 750 shows an example of a reverse FlashCopy relationship.
You can start a FlashCopy mapping whose target is the source of another FlashCopy
mapping.
Important: The copy process overwrites all of the data on the target volume. You must
back up all of the data before you start the copy process.
Figure 10-230 Related Volumes
This capability enables you to reverse the direction of a FlashCopy map without having to
remove existing maps, and without losing the data from the target, as shown in
Figure 10-231.
Figure 10-231 Reverse FlashCopy
10.9 Copy Services: Managing remote copy
It is often easier to manage Metro Mirror or Global Mirror by using the GUI, as long as you
have a small number of mappings. When you work with many mappings, use the CLI to
execute your commands.
In this section, we describe the tasks that you can perform at a remote copy level.
There are two panels to use to visualize and manage your remote copies:
The Remote Copy panel, as shown in Figure 10-232 on page 751
The Metro Mirror and Global Mirror Copy Services features enable you to set up a
relationship between two volumes, so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same SAN Volume
Controller clustered system or on two separate SAN Volume Controller systems. To
access the Remote Copy panel, move the cursor over the Copy Services selection and
click Remote Copy.
For more information: See Chapter 8, Advanced Copy Services on page 365 for more
information about the functionality of Copy Services in the SAN Volume Controller
environment.
Figure 10-232 Remote Copy panel
The Partnerships panel, as shown in Figure 10-233
Partnerships can be used to create a disaster recovery environment, or to migrate data
between systems that are in separate locations. Partnerships define an association
between a local system and a remote system. To access the Partnerships panel, move the
cursor over Copy Services selection and click Partnerships.
Figure 10-233 Partnerships panel
10.9.1 System partnership
Using Fibre Channel or FCoE, you are not limited to a one-to-one system partnership.
You can have a system partnership among multiple SAN Volume Controller clustered
systems, which allows you to create four types of configurations, using a maximum of four
connected SAN Volume Controller systems:
Star configuration, as shown in Figure 10-234 on page 752
Figure 10-234 Star configuration
Triangle configuration, as shown in Figure 10-235
Figure 10-235 Triangle configuration
Fully connected configuration, as shown in Figure 10-236 on page 753
Figure 10-236 Fully connected configuration
Daisy-chain configuration, as shown in Figure 10-237
Figure 10-237 Daisy-chain configuration
10.9.2 Creating the fibre channel partnership between two remote SAN
Volume Controller systems
We perform this operation to create the partnership on both SAN Volume Controller systems
using fibre channel.
To create a fibre channel partnership between the SAN Volume Controller systems using the
GUI, follow these steps:
1. From the SAN Volume Controller Overview panel, roll the mouse cursor over Copy
Services and click Partnerships. The Partnerships panel opens, as shown in
Figure 10-238 on page 754.
Important: All SAN Volume Controller clustered systems must be at level 5.1 or higher.
Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform
this next step to create the SAN Volume Controller clustered system Metro Mirror
partnership. Instead, go to 10.9.4, Creating stand-alone remote copy relationships on
page 757.
Figure 10-238 Partnerships panel
2. Click Create Partnership to create a new partnership with another SAN Volume
Controller system, as shown in Figure 10-239.
Figure 10-239 Create partnership
3. On the Create Partnership window (Figure 10-240), complete the following elements:
Select Fibre Channel partnership type
Select an available system in the drop-down list box. If there is no candidate, you will
receive the following error message: This system does not have any candidates.
Enter a link bandwidth (Mbps) that is used by the background copy process between
the systems in the partnership. Set this value so that it is less than or equal to the
bandwidth that can be sustained by the communication link between the systems. The
link must be able to sustain any host requests and the rate of background copy.
Enter the Background Copy rate.
Click OK to confirm the partnership relationship.
Figure 10-240 Create Partnership window
4. As shown in Figure 10-241 on page 755, our partnership is in the Partially Configured
state, because we have only performed the work on one side of the partnership so far.
Figure 10-241 Viewing system partnerships
To fully configure the partnership between both systems, we must perform the same steps
on the other SAN Volume Controller system in the partnership. For simplicity and brevity,
only the two most significant windows are shown when the partnership is fully configured.
5. Launching the SAN Volume Controller GUI for SVC_ITSO2, we select SVC_ITSO1 for the
system partnership. We specify the available bandwidth for the background copy, again
200 Mbps, and then click OK to create.
Now that both sides of the SAN Volume Controller system partnership are defined, the
resulting windows, which are shown in Figure 10-242 and Figure 10-243, confirm that our
remote system partnership is now in the Fully Configured state. Figure 10-242 shows the
remote system SVC_ITSO2 from the local system SVC_ITSO1.
Figure 10-242 System SVC_ITSO1: Fully configured remote partnership
Figure 10-243 shows the remote system SVC_ITSO1 from the local system SVC_ITSO2.
Figure 10-243 System SVC_ITSO2: Fully configured remote partnership
10.9.3 Creating the IP partnership between two remote SAN Volume Controller
systems
For more information about this feature, refer to 1.3, What is new in SAN Volume Controller V
7.2.0 on page 5 and Chapter 8, Advanced Copy Services on page 365.
To create an IP partnership between the SAN Volume Controller systems using the GUI,
follow these steps:
1. From the SAN Volume Controller Overview panel, roll the mouse cursor over Copy
Services and click Partnerships. The Partnerships panel opens, as shown in
Figure 10-238 on page 754.
2. Click Create Partnership to create a new partnership with another SAN Volume
Controller system, as shown in Figure 10-239.
3. On the Create Partnership window (Figure 10-244 on page 756), complete the following
elements:
Select the IP partnership type
Enter the IP address of the remote partner system
Select the link bandwidth, in megabits per second (Mbps), that is used by the
background copy process between the systems in the partnership. Set this value so
that it is less than or equal to the bandwidth that can be sustained by the
communication link between the systems. The link must be able to sustain any host
requests and the rate of the background copy.
Select the Background copy rate
Optionally, enable CHAP authentication by providing a CHAP secret
Figure 10-244 Create Partnership window
4. As shown in Figure 10-245, our partnership is in the Partially Configured state, because
we have only performed the work on one side of the partnership so far.
Figure 10-245 Viewing system partnerships
To fully configure the partnership between both systems, we must perform the same steps on
the other SAN Volume Controller system in the partnership. For simplicity and brevity, only the
two most significant windows are shown when the partnership is fully configured.
5. Launching the SAN Volume Controller GUI for SVC_ITSO2, we select SVC_ITSO1 for the
system partnership. We specify the available bandwidth for the background copy, again
100 Mbps, and then click OK to create.
Now that both sides of the SAN Volume Controller system partnership are defined, the
resulting windows, which are shown in Figure 10-246 and Figure 10-247 on page 757,
confirm that our remote system partnership is now in the Fully Configured state.
Figure 10-246 shows the remote system SVC_ITSO2 from the local system SVC_ITSO1.
Figure 10-246 System SVC_ITSO1: Fully configured remote partnership
Figure 10-247 on page 757 shows the remote system SVC_ITSO1 from the local system
SVC_ITSO2.
Figure 10-247 System SVC_ITSO2: Fully configured remote partnership
Note: The Bandwidth setting definition when creating IP partnerships has changed.
Previously, the bandwidth setting defaulted to 50 MBps and was the maximum transfer
rate from the primary to the secondary site for initial synchronization and resynchronization
of volumes. The link bandwidth setting is now configured in Mbits, not MBytes, and you set
it to a value that the communication link can sustain, or to what is actually allocated for
replication. The Background copy rate setting is now a percentage of the link bandwidth,
and it determines the bandwidth that is available for initial synchronization and
resynchronization, or for Global Mirror with Change Volumes.
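The CLI equivalent of this wizard is svctask mkippartnership, which must be run on both
systems with the partner's address. The IP address below is a placeholder, and the exact
parameter names should be verified against the V7.2 CLI reference:
# Create the IP partnership to the remote system (repeat on the partner system).
svctask mkippartnership -type ipv4 -clusterip 10.10.10.20 -linkbandwidthmbits 100 -backgroundcopyrate 50
# List the partnership and its state (Partially or Fully Configured).
svcinfo lspartnership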
10.9.4 Creating stand-alone remote copy relationships
In this section, we create remote copy mappings for volumes with their respective remote
targets. The source and target volumes have been created prior to this operation on both
systems.
To perform this action, follow these steps:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Click New Relationship, as shown in Figure 10-248.
Figure 10-248 New relationship action
3. In the New Relationship window, select the type of relationship that you want to create
(Figure 10-249 on page 758):
Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can either be located on the same system or
on another system.
Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously, so that the copy is continuously updated,
but the copy might not contain the last few updates in the event that a disaster recovery
operation is performed.
Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume. Changes can
then be copied to the remote system asynchronously. The FlashCopy relationship
exists between the remote copy volume and the Change Volume.
FlashCopy mapping with Change Volume is for internal use. The user cannot
manipulate it like a normal FlashCopy mapping. Most svctask *fcmap commands will
fail.
Then, click Next.
Figure 10-249 Select the type of relationship that you want to create
4. In the next window, select the location of the auxiliary volumes, as shown in
Figure 10-250:
On this system, which means that the volumes are local
On another system, which means that you select the remote system from the
drop-down list
After you make a selection, click Next.
Figure 10-250 Specifying the location of the auxiliary volumes
5. In the New Relationship window that is shown in Figure 10-251 on page 759, you can
create new relationships. Select a master volume in the Master drop-down list, and then,
select an auxiliary volume in the Auxiliary drop-down list for this master, and click Add. If
needed, repeat this action to create other relationships.
Figure 10-251 Create the relationships between the master and auxiliary volumes
To remove a relationship that has been created, click the delete icon next to it, as shown in Figure 10-251.
After all the relationships that you want to create are shown, click Next.
6. Select if the volumes are already synchronized, as shown in Figure 10-252, and then, click
Next.
Figure 10-252 Volumes synchronized
7. Finally, on the last window, select if you want to start to copy the data, as shown in
Figure 10-253. Then, click Finish.
Figure 10-253 Synchronize now
The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that their status is Inconsistent Copying. You can check the copying progress
in the Running Tasks status area, as shown in Figure 10-254 on page 760.
Important: The master and auxiliary volumes must be of equal size. So for a given
source volume, only the targets with the appropriate size are shown in the list box.
Figure 10-254 Remote Copy panel with an inconsistent copying status
After the copy is finished, the relationships status changes to Consistent synchronized.
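On the CLI, a stand-alone relationship is created and started with two commands. The
volume and relationship names are placeholders; add -global for Global Mirror and -sync if
the volumes are already synchronized:
# Create a Metro Mirror relationship to the remote system and start the initial copy.
svctask mkrcrelationship -master VOL_PRI -aux VOL_SEC -cluster SVC_ITSO2 -name REL_1
svctask startrcrelationship REL_1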
10.9.5 Creating a Consistency Group
To create a Consistency Group, follow these steps:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Click New Consistency Group (Figure 10-255).
Figure 10-255 New Consistency Group action
3. Enter a name for the Consistency Group, and then, click Next (Figure 10-256).
Figure 10-256 Enter a Consistency Group name
Consistency Group name: If you do not provide a name, the SAN Volume Controller
automatically generates the name rccstgrpX, where X is the ID sequence number that
is assigned by the SAN Volume Controller internally. You can use the letters A to Z and
a to z, the numbers 0 to 9, and the underscore (_) character. The Consistency Group
name can be between 1 and 15 characters in length.
4. In the next window, select where the auxiliary volumes are located, as shown in
Figure 10-257:
On this system, which means that the volumes are local
On another system, which means that you select the remote system in the drop-down
list
After you make a selection, click Next.
Figure 10-257 Auxiliary volumes location
5. Select if you want to add relationships to this group, as shown in Figure 10-258. There are
two options:
If you answer Yes, click Next to continue the wizard, and go to step 6.
If you answer No, click Finish to create an empty Consistency Group that can be used
later.
Figure 10-258 Add relationships to this group
6. Select the type of relationship that you want to create (Figure 10-259 on page 762):
Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can either be located on the same system or
on another system.
Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated,
but the copy might not contain the last few updates in the event that a disaster recovery
operation is performed.
Global Mirror With Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume.
Changes can then be copied to the remote system asynchronously. The FlashCopy
relationship exists between the remote copy volume and the Change Volume.
FlashCopy mapping with Change Volumes is for internal use. The user cannot
manipulate this type of mapping like a normal FlashCopy mapping.
Most svctask *fcmap commands will fail.
Click Next.
Figure 10-259 Select the type of relationship that you want to create
7. As shown in Figure 10-260, you can optionally select existing relationships to add to the
group, and then click Next.
Figure 10-260 Select existing relationships to add to the group
Note: To select multiple relationships, hold down Ctrl and use your mouse to select the
entries that you want to include.
8. In the window that is shown in Figure 10-261 on page 763, you can create new
relationships. Select a volume in the Master drop-down list box, and then, select a volume
in the Auxiliary drop-down list box for this master. Click Add, as shown in Figure 10-261.
Repeat this action to create other relationships, if needed.
To remove a relationship that has been created, click the delete icon next to it, as shown in Figure 10-261.
After all the relationships that you want to create are displayed, click Next.
Figure 10-261 Create relationships between the master and auxiliary volumes
9. Select if the volumes are already synchronized, as shown in Figure 10-262, and then, click
Next.
Figure 10-262 Volumes synchronized
10.Finally, on the last window, select if you want to start to copy the data, as shown in
Figure 10-263, and then click Finish.
Figure 10-263 Synchronize now
Important: The Master and Auxiliary volumes must be of equal size. So for a given
source volume, only the targets with the appropriate size are displayed.
11.The relationships are visible in the Remote copy panel. If you selected to copy the data,
you can see that the status of the relationships is Inconsistent Copying. You can check the
copying progress in the Running Tasks status area, as shown in Figure 10-264 on
page 764.
Figure 10-264 Consistency Group created with relationship in copying status
After the copies are completed, the relationships and the Consistency Group change to the
Consistent synchronized status.
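The CLI equivalent creates the group, adds existing relationships to it, and then starts the
whole group. The group and relationship names are placeholders:
# Create a remote copy Consistency Group that spans the partnership.
svctask mkrcconsistgrp -name RCCG_1 -cluster SVC_ITSO2
# Add an existing relationship to the group, then start the group.
svctask chrcrelationship -consistgrp RCCG_1 REL_1
svctask startrcconsistgrp RCCG_1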
10.9.6 Renaming a Consistency Group
To rename a Consistency Group, perform the following steps:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Select the Consistency Group that you want to rename in the panel. Then, select
Actions Rename, as shown in Figure 10-265.
Figure 10-265 Renaming a Consistency Group
3. Type the new name that you want to assign to the Consistency Group, and click Rename
(Figure 10-266).
Figure 10-266 Changing the name for a Consistency Group
4. From the Remote Copy panel, the new Consistency Group name is displayed.
10.9.7 Renaming a remote copy relationship
Perform the following steps to rename a remote copy relationship:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. In the table, select the remote copy relationship that you want to rename. Click
Actions → Rename (Figure 10-267).
Figure 10-267 Rename remote copy relationship action
3. In the Rename Relationship window, type the new name that you want to assign to the
relationship, and click Rename (Figure 10-268 on page 766).
Consistency Group name: The Consistency Group name can consist of the letters A
to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_) character. The
name can be between one and 15 characters in length. However, the name cannot
start with a number, the dash, or the underscore.
Tip: You can also right-click a remote copy relationship and select Rename from the
list.
Figure 10-268 Renaming a remote copy relationship
10.9.8 Moving a stand-alone remote copy relationship to a Consistency Group
Perform the following steps to move a remote copy relationship to a Consistency Group:
1. From the SAN Volume Controller Overview panel, click Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. Select the relationship that you want to move to the Consistency Group.
4. Click Actions → Add to Consistency Group, as shown in Figure 10-269.
Figure 10-269 Adding to Consistency Group action
5. In the Add Relationship to Consistency Group window, select the Consistency Group for
this remote copy relationship using the drop-down list box (Figure 10-270 on page 767).
Click Add to Consistency Group to confirm your changes.
Remote copy relationship name: You can use the letters A to Z and a to z, the
numbers 0 to 9, and the underscore (_) character. The remote copy name can be
between one and 15 characters in length.
Tip: You can also right-click a remote copy relationship and select Add to Consistency
Group from the list.
Figure 10-270 Adding a relationship to a Consistency Group
10.9.9 Removing a remote copy relationship from a Consistency Group
Perform the following steps to remove a remote copy relationship from a Consistency Group:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Select a Consistency Group.
3. Select the remote copy relationship that you want to remove from the Consistency Group.
4. Click Actions → Remove from Consistency Group (Figure 10-271).
Figure 10-271 Remove from Consistency Group action
5. In the Remove Relationship From Consistency Group window, click Remove
(Figure 10-272).
Figure 10-272 Remove relationship from Consistency Group
Tip: You can also right-click a remote copy relationship and select Remove from
Consistency Group from the list.
10.9.10 Starting a remote copy relationship
When a remote copy relationship is created, the remote copy process can be started. Only
relationships that are not members of a Consistency Group, or the only relationship in a
Consistency Group, can be started individually.
Perform the following steps to start a remote copy relationship:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to start.
4. Click Actions → Start (Figure 10-273) to start the remote copy process.
Figure 10-273 Start action
5. If the relationship was not consistent, you can check the remote copy progress in the
Running Tasks status area (Figure 10-274).
Figure 10-274 Checking remote copy synchronization progress
6. After the task is completed, the remote copy relationship status has a Consistent
Synchronized state (Figure 10-275).
Tip: You can also right-click a relationship and select Start from the list.
Figure 10-275 Consistent synchronized remote copy relationship
10.9.11 Starting a remote copy Consistency Group
All of the mappings in a Consistency Group will be brought to the same state. To start the
remote copy Consistency Group, follow these steps:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Select the Consistency Group that you want to start (Figure 10-276).
Figure 10-276 Remote copy Consistency Groups view
3. Click Actions → Start (Figure 10-277) to start the remote copy Consistency Group.
Figure 10-277 Start action
4. You can check the remote copy Consistency Group progress, as shown in Figure 10-278.
Figure 10-278 Checking remote copy Consistency Group progress
5. After the task is completed, the Consistency Group and all its relationship statuses are in a
Consistent Synchronized state (Figure 10-279).
Figure 10-279 Consistent Synchronized Consistency Group
10.9.12 Switching the copy direction for a remote copy relationship
When a remote copy relationship is in the Consistent Synchronized state, the copy direction
for the relationship can be changed. Only relationships that are not a member of a
Consistency Group, or the only relationship in a Consistency Group, can be switched
individually. Such relationships can be switched from master to auxiliary or from auxiliary to
master, depending on the case.
Perform the following steps to switch a remote copy relationship:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to switch.
4. Click Actions → Switch (Figure 10-280) to switch the copy direction of the relationship.
Figure 10-280 Switch copy direction action
5. A Warning window opens (Figure 10-281 on page 771). A confirmation is needed to
switch the remote copy relationship direction. As shown in Figure 10-281 on page 771, the
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that transitions from primary to secondary, because all of the I/O will be
inhibited to that volume when it becomes the secondary. Therefore, careful planning is
required prior to switching the copy direction for a remote copy relationship.
Tip: You can also right-click a relationship and select Switch from the list.
remote copy is switched from the master volume to the auxiliary volume. Click Yes to
confirm your choice.
Figure 10-281 Warning window
6. The copy direction is now switched, as shown in Figure 10-282. The auxiliary volume is
now accessible and indicated as the primary volume. The auxiliary volume is now
synchronized to the master volume.
Figure 10-282 Checking remote copy synchronization direction
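The equivalent CLI action, with REL_1 as a placeholder relationship name, is:
# Make the auxiliary volume the primary (writable) copy of the relationship.
svctask switchrcrelationship -primary aux REL_1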
10.9.13 Switching the copy direction for a Consistency Group
When a Consistency Group is in the Consistent Synchronized state, the copy direction for this
Consistency Group can be changed.
Perform the following steps to switch a Consistency Group:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Select the Consistency Group that you want to switch.
3. Click Actions → Switch (Figure 10-283 on page 772) to switch the copy direction of the
Consistency Group.
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that transitions from primary to secondary, because all of the I/O will be
inhibited to that volume when it becomes the secondary. Therefore, careful planning is
required prior to switching the copy direction for a Consistency Group.
Tip: You can also right-click a relationship and select Switch from the list.
Figure 10-283 Switch action
4. A Warning window opens (Figure 10-284). A confirmation is needed to switch the
Consistency Group direction. In the example that is shown in Figure 10-284, the
Consistency Group is switched from the master group to the auxiliary group. Click Yes to
confirm your choice.
Figure 10-284 Warning window for SVC_ITSO2
5. The remote copy direction is now switched, as shown in Figure 10-285. The auxiliary
volume is now accessible and indicated as primary volume.
Figure 10-285 Checking Consistency Group synchronization direction
10.9.14 Stopping a remote copy relationship
After it is started, the remote copy process can be stopped, if needed. Only relationships that
are not a member of a Consistency Group, or the only relationship in a Consistency Group,
can be stopped individually. You can also use this command to enable write access to a
consistent secondary volume.
Perform the following steps to stop a remote copy relationship:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. Expand the Not in a Group column.
3. In the table, select the remote copy relationship that you want to stop.
4. Click Actions → Stop (Figure 10-286 on page 773) to stop the remote copy process.
Figure 10-286 Stop action
5. The Stop Remote Copy Relationship window opens (Figure 10-287). To allow secondary
read/write access, select Allow secondary read/write access, and then, click Stop
Relationship to confirm your choice.
Figure 10-287 Stop Remote Copy Relationship window
6. The new relationship status can be checked, as shown in Figure 10-288. The relationship
is now Consistent Stopped.
Figure 10-288 Checking remote copy synchronization status
Tip: You can also right-click a relationship and select Stop from the list.
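On the CLI, the -access flag of svctask stoprcrelationship corresponds to the Allow
secondary read/write access option; REL_1 is a placeholder:
# Stop the relationship and enable write access to the secondary volume.
svctask stoprcrelationship -access REL_1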
10.9.15 Stopping a Consistency Group
After it is started, the Consistency Group can be stopped, if necessary. You can also use this
command to enable write access to consistent secondary volumes.
Perform the following steps to stop a Consistency Group:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. In the table, select the Consistency Group that you want to stop.
3. Click Actions → Stop (Figure 10-289) to stop the remote copy Consistency Group.
Figure 10-289 Stop action
4. The Stop Remote Copy Consistency Group window opens (Figure 10-290). To allow
secondary read/write access, select Allow secondary read/write access, and then, click
Stop Consistency Group to confirm your choice.
Figure 10-290 Stop Remote Copy Consistency Group window
5. The new relationship status can be checked, as shown in Figure 10-291. The relationship
is now Consistent Stopped.
Tip: You can also right-click a relationship and select Stop from the list.
Figure 10-291 Checking remote copy synchronization status
10.9.16 Deleting stand-alone remote copy relationships
Perform the following steps to delete a stand-alone remote copy mapping:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. In the table, select the remote copy relationship that you want to delete.
3. Click Actions → Delete Relationship (Figure 10-292).
Figure 10-292 Delete Relationship action
4. The Delete Relationship window opens (Figure 10-293). In the Verify the number of
relationships that you are deleting field, enter the number of relationships that you want to
delete. This verification has been added to help you avoid deleting the wrong
relationships.
Click Delete to complete the operation (Figure 10-293).
Multiple remote copy mappings: To select multiple remote copy mappings, hold down
Ctrl and use your mouse to select the entries that you want.
Tip: You can also right-click a remote copy mapping and select Delete Relationship
from the list.
Figure 10-293 Delete remote copy relationship
10.9.17 Deleting a Consistency Group
Perform the following steps to delete a Consistency Group:
1. From the SAN Volume Controller Overview panel, select Copy Services → Remote Copy.
2. In the left column, select the Consistency Group that you want to delete.
3. Click Actions → Delete (Figure 10-294).
Figure 10-294 Delete Consistency Group action
4. A Warning window opens, as shown in Figure 10-295. Click Yes to complete the
operation.
Figure 10-295 Confirmation message
Important: Deleting a Consistency Group does not delete its remote copy mappings.
10.10 Managing the SAN Volume Controller clustered system
using the GUI
This section explains the various configuration and administrative tasks that you can perform
on the SAN Volume Controller clustered system.
10.10.1 System Status information
From the System panel, perform the following steps to display the system and node
information:
1. On the SAN Volume Controller Overview panel, move the mouse cursor over the
Monitoring selection and click System.
2. The System Status panel (Figure 10-296 on page 777) opens.
Figure 10-296 System Status panel
By simply moving the mouse over the tower in the left part of the panel, you are able to
view the global storage usage, as shown in Figure 10-297. Using this method, you can
monitor the Physical Capacity and the Allocated Capacity of your SAN Volume Controller
system. You can change between Allocation View and Compression View to see the
capacity utilization and space savings of the Real-time Compression feature (Figure 10-298
on page 778).
Figure 10-297 Physical Capacity information: Allocation view
Figure 10-298 Physical Capacity information: Compression View
10.10.2 View I/O Groups and their associated nodes
The right side of the System Status panel shows an overview of the SAN Volume Controller
system with its I/O Groups and their associated nodes. In this dynamic illustration, the node
status can be checked by using a color code that represents its status (Figure 10-299).
Figure 10-299 System view with node status
You can click an individual node to see its vital product data (VPD), as shown in
Figure 10-300 on page 779.
Figure 10-300 System view with node VPD
VPD tab: The amount of information in the vital product data (VPD) tab is extensive, so we
do not describe it in this section. For the list of these elements, see the IBM System
Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287, and
search for the nodevpd command.
10.10.3 View SAN Volume Controller clustered system properties
1. From the System Status panel, to obtain information about the system, click the system
name, as shown in Figure 10-301.
Figure 10-301 General system information
2. When you click the Info tab, the following information is displayed:
General information:
Name
ID
Location
Capacity information:
Total MDisk Capacity
Capacity in Pools
Capacity Allocated to Volumes
Total Free Capacity
Total Volume Capacity
Total Volume Copy Capacity
Total Used Capacity
Total Overallocation
Total Drive Raw Capacity
Compressed Volumes information:
Virtual Capacity
Allocated Capacity
Storage Efficiency Savings
10.10.4 Renaming a SAN Volume Controller clustered system
From the System Status panel, perform the following steps to rename the system:
1. Click the SAN Volume Controller system name, as shown in Figure 10-301 on page 780.
2. Click the Manage tab.
3. Specify a new name for the system, as shown in Figure 10-302.
System name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore (_) character. The clustered system name can be between one and 63
characters in length.
Figure 10-302 Manage tab: Change system name
4. Click Save.
5. A Warning window opens, as shown in Figure 10-303. If you are using the iSCSI protocol,
changing either the system name or the iSCSI Qualified Name (IQN) also changes the
iSCSI Qualified Name (IQN) of all of the nodes in the system and might require the
reconfiguration of all iSCSI-attached hosts. This reconfiguration might be required,
because the IQN for each node is generated using the system and node names.
Figure 10-303 System rename Warning window
6. Click Yes to confirm that you want to change the system name.
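The same rename can be performed from the CLI with svctask chsystem; the new name is a
placeholder, and the same iSCSI IQN considerations apply:
# Rename the clustered system (this also changes the IQN of every node).
svctask chsystem -name ITSO_SVC_NEW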
10.10.5 Shutting down a SAN Volume Controller clustered system
If all input power to a SAN Volume Controller clustered system is removed for more than a
few minutes (for example, if the machine room power is shut down for maintenance), it is
important that you shut down the system before you remove the power. Shutting down the
system while it is still connected to the main power ensures that the uninterruptible power
supply unit's batteries will still be fully charged when the power is restored.
If you remove the main power while the system is still running, the uninterruptible power
supply unit will detect the loss of power and will instruct the nodes to shut down. This
shutdown can take several minutes to complete. Although the uninterruptible power supply
unit has sufficient power to perform the shutdown, you will unnecessarily drain the
uninterruptible power supply unit's batteries.
When power is restored, the SAN Volume Controller nodes will start. However, one of the first
checks that is performed by the SAN Volume Controller node is to ensure that the
uninterruptible power supply unit's batteries have sufficient power to survive another power
failure, therefore enabling the node to perform a clean shutdown. (You do not want the
uninterruptible power supply unit to run out of power when the node's shutdown activities
have not yet completed.) If the uninterruptible power supply unit's batteries are not sufficiently
charged, the node will not start.
Be aware that it can take up to three hours to charge the batteries sufficiently for a node to
start.
SAN Volume Controller uninterruptible power supply units are designed to survive at least two
power failures in a short time. After that, the nodes will refuse to start until the batteries have
sufficient power (to survive another immediate power failure).
If, during your maintenance activities, the uninterruptible power supply unit detected power
and then detected a loss of power multiple times (thus the nodes start and shut down more
than one time in a short time frame), you might find that you have unknowingly drained the
uninterruptible power supply unit's batteries. You will have to wait until they are charged
sufficiently before the nodes will start.
From the System Status panel, perform the following steps to shut down your system:
1. Click the system name, as shown in Figure 10-304.
Important: When a node shuts down due to loss of power, the node will dump the cache to
an internal hard drive so that the cached data can be retrieved when the system starts.
With 8F2/8G4 nodes, the cache is 8 GB. With CF8/CG8 nodes, the cache is 24 GB. So, it
can take several minutes to dump to the internal drive.
Important: Before shutting down a system, quiesce all I/O operations that are directed to
this system, because you will lose access to all of the volumes that are serviced by this
clustered system. Failure to do so might result in failed I/O operations being reported to
your host operating systems.
There is no need to quiesce all I/O operations if you are only shutting down one SAN
Volume Controller node.
Begin the process of quiescing all I/O activity to the system by stopping the applications on
your hosts that are using the volumes that are provided by the system.
If you are unsure which hosts are using the volumes that are provided by the SAN Volume
Controller system, follow the procedure that is explained in 9.6.21, Showing the host to
which the volume is mapped on page 513, and repeat this procedure for all volumes.
Figure 10-304 General system information
2. Click the Manage tab, and then, click Shut Down System, as shown in Figure 10-305 on
page 783.
Figure 10-305 Manage tab: Shut Down System
3. The Confirm System Shutdown window (Figure 10-306) opens. You will receive a
message asking you to confirm whether you want to shut down the system. Ensure that
you have stopped all FlashCopy mappings, remote copy relationships, data migration
operations, and forced deletions before continuing. Enter Yes and click OK to begin the
shutdown process.
Important: At this point, you will lose all administrative contact with your system.
Figure 10-306 Shutting down the system confirmation window
You have now completed the required tasks to shut down the system. At this point, you can
shut down the uninterruptible power supply units by pressing the power buttons on their front
panels.
10.10.6 Upgrading software
From the System Status panel, perform the following steps to upgrade the software of your
SAN Volume Controller clustered system:
1. Click the system name, as shown in Figure 10-304 on page 783.
2. Click the Manage tab, and then click Upgrade System, as shown in Figure 10-307.
Tip: When you shut down the system, it does not automatically start. You must manually
start the SAN Volume Controller nodes. If the system shuts down because the
uninterruptible power supply unit has detected a loss of power, it will automatically restart
when the uninterruptible power supply unit detects that the power has been restored (and
the batteries have sufficient power to survive another immediate power failure).
Restarting the SAN Volume Controller system: To start the SAN Volume Controller
system, you must first start the uninterruptible power supply units by pressing the power
buttons on their front panels. After they are on, go to the service panel of one of the nodes
within your SAN Volume Controller clustered system and press the power-on button,
releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and the
system name on line 2 of the SAN Volume Controller front panel), you can start the other
nodes in the same way. As soon as all nodes are fully booted and you have reestablished
administrative contact using the GUI, your system is fully operational again.
Figure 10-307 Manage tab: Upgrade software link
3. Follow the instructions that are provided in 10.15.12, Upgrading software on page 842.
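For reference, a software upgrade can also be started from the CLI by using the applysoftware
command that is listed in Table 10-1 on page 812. This is only a sketch; it assumes that the
upgrade package was already copied to the system, the file name is a placeholder, and the
status command can differ by release:

# Start the upgrade by using a package that was already uploaded (file name is an example)
svctask applysoftware -file IBM2145_INSTALL_7.2.0.0
# Check the progress of the upgrade
svcinfo lssoftwareupgradestatus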
10.11 Managing I/O Groups
In the following sections, we illustrate how to manage I/O Groups.
10.11.1 Viewing I/O Group properties
From the System Details panel, you can see the I/O Group properties:
1. Move the mouse cursor over the Monitoring selection on the dynamic menu and click
System Details.
2. Click an I/O Group, as shown in Figure 10-308 on page 785.
Figure 10-308 I/O Group information
3. In the table you can view the following information:
General information:
Name
ID
Number of Nodes
Number of Hosts
Number of Volumes
Memory information:
FlashCopy
Global Mirror and Metro Mirror
Volume mirroring
RAID
10.11.2 Modifying I/O Group properties
From the System Details panel, perform the following steps to modify I/O Group settings:
1. Select an I/O Group which you want to manage and click Actions as shown in
Figure 10-309 on page 786. You can choose to rename the I/O Group, or Modify Memory
of an I/O Group.
Figure 10-309 Modifying I/O Group properties
2. You can modify the following information:
The I/O Group name
The amount of memory for the following features (Figure 10-310)
FlashCopy (default 20 MB - maximum 512 MB)
Global Mirror and Metro Mirror (default 20 MB - maximum 512 MB)
Volume mirroring (default 20 MB - maximum 512 MB)
RAID (default 40 MB - maximum 512 MB)
I/O Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and
the underscore (_) character. The I/O Group name can be between one and 63
characters in length.
Figure 10-310 Modify I/O Group memory properties window
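The same settings can be changed from the CLI with the chiogrp command. The following
sketch is illustrative only; the feature keyword, the memory size, and the I/O Group names are
examples, so verify the parameters against the CLI reference for your release:

# Increase the FlashCopy bitmap memory of io_grp0 to 64 MB (values are examples)
svctask chiogrp -feature flash -size 64 io_grp0
# Rename an I/O Group (new name is an example)
svctask chiogrp -name io_grpA io_grp0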
10.12 Managing nodes
In this section, we show how to manage SAN Volume Controller nodes.
10.12.1 Viewing node properties
From the System Details panel, you can inspect the SAN Volume Controller node properties.
Follow these steps:
1. Move the mouse cursor over the Monitoring selection on the dynamic menu and click
System Details.
2. Expand the I/O group selection by clicking the plus (+) button. Click on a node, as shown
in Figure 10-311.
Important: For volume mirroring, Copy Services (FlashCopy, Metro Mirror, and
Global Mirror), and RAID operations, memory is traded against memory that is
available to the cache. The amount of memory can be decreased or increased. The
maximum combined memory size across all features is 552 MB.
Figure 10-311 Node information
3. In the table, you can view the following information:
General information:
Name
ID
Status
Model
WWNN
I/O Group
Redundancy information:
Configuration node
Failover Partner node
iSCSI information:
iSCSI Name (IQN)
iSCSI Alias
Failover iSCSI Name
Failover iSCSI Alias
If iSCSI Failover is active
Uninterruptible power supply information:
Serial Number
Unique ID
Fibre Channel Port information:
WWPNs
Status
Speed
Type
4. For viewing the node hardware properties, expand the node selection by clicking the plus
(+) button. Click the Hardware icon as shown in Figure 10-312:
Figure 10-312 Node Hardware properties
5. In the Hardware table, you can view following information:
General information:
Name
ID
Status
I/O Group
Detected Hardware information:
Model
Same as configured
Valid
Memory information:
Configured amount
Detected amount
Valid
CPU information:
Socket number
Configured CPU
Detected CPU
Valid
Adapters information:
Location
Configured adapter
Detected adapter
Valid
Additional I/O Ports information
10.12.2 Renaming a node
From the System Details panel, perform the following steps to rename a node:
1. Click on the node, as shown in Figure 10-313.
Figure 10-313 Node information window
2. Click Actions → Rename Node (Figure 10-314 on page 790).
Figure 10-314 Rename node action
3. Specify a new name for the node and click Rename, as shown in Figure 10-315.
Figure 10-315 Rename Node, Enter Node name
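The equivalent CLI action is a single chnode command. This is only a sketch; the node ID and
the new name are examples:

# Rename the node with ID 1 to SVC_Node1 (values are examples)
svctask chnode -name SVC_Node1 1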
10.12.3 Adding a node to the SAN Volume Controller clustered system
To complete this operation, perform the following steps:
1. On the System Details panel, click an empty node position to view the candidate nodes,
as shown in Figure 10-316 on page 791.
Figure 10-316 Add node window
2. Select the node that you want to add to your system using the drop-down list box. Change
its name, if needed, and click Add Node, as shown in Figure 10-317.
Figure 10-317 Add a node to the system
Node name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore (_) character. The node name can be between one and 63 characters in
length.
Important: Remember that you need to have at least two nodes in an I/O Group. Add
your available nodes in sequence.
3. As shown in Figure 10-318 on page 792, a window opens to inform you about the time that
is required to add a node to the system.
Figure 10-318 Warning message
4. If you want to add it, click Yes.
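From the CLI, a candidate node is added with the addnode command, which is listed in
Table 10-1 on page 812. The sketch below assumes that the candidate is identified by its front
panel ID; the panel ID, node name, and I/O Group are examples, so verify the flags for your
code level:

# List the candidate nodes that can be added to the system
svcinfo lsnodecandidate
# Add the candidate with panel ID 151234 to io_grp0 (values are examples)
svctask addnode -panelname 151234 -name SVC_Node2 -iogrp io_grp0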
10.12.4 Removing a node from the SAN Volume Controller clustered system
From the System Details panel, perform the following steps to remove a node:
1. Select a node and click Actions → Remove Node, as shown in Figure 10-319.
Figure 10-319 Remove node from SAN Volume Controller clustered system action
2. A Warning window opens, as shown in Figure 10-320 on page 793.
By default, the cache is flushed before the node is deleted to prevent data loss if a failure
occurs on the other node in the I/O Group.
In certain circumstances, such as when the system is already degraded, you can take the
specified node offline immediately without flushing the cache. To do so, select Bypass check
for volumes that will go offline, which removes the node immediately without flushing its
cache or ensuring that data loss does not occur.
Important: When a node is added to a system, it displays a state of adding and a yellow
color. It can take as long as 30 minutes to add the node to the system, particularly if the
software version of the node has changed.
Figure 10-320 Warning window: Removing a node
3. Click Yes to confirm the removal of the node. See the System Details panel to verify a
node removal (Figure 10-321).
Figure 10-321 System Details panel, SAN Volume Controller node removed
If this node is the last node in the system, the warning message differs, as shown in
Figure 10-322 on page 794. Before you delete the last node in the system, ensure that you
want to destroy the system. Removing the last node destroys the system. The user
interface and any open CLI sessions are lost.
Figure 10-322 Warning window for the last node removal
4. If you want to remove the last node, click OK. The node now is a candidate to be added
back into this system or into another system.
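The CLI equivalent is the rmnode command, which is also listed in Table 10-1 on page 812. The
node name below is an example, and the same cache-flush and last-node considerations apply:

# Remove a node from the clustered system (node name is an example)
svctask rmnode SVC_Node2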
10.13 Troubleshooting
The events that are detected by the system are saved in a system event log. When an entry is
made in this event log, the condition is analyzed and classified to help you diagnose
problems.
10.13.1 Events panel
In the Monitoring actions selection, the Events panel (Figure 10-323) displays the event
conditions that require action, and it displays the procedures to diagnose and fix them.
To access this panel, perform the following action from the SAN Volume Controller Overview
panel:
Move the mouse cursor over the Monitoring selection in the dynamic menu and select Events.
Figure 10-323 Monitoring, Events selection
The highest-priority event is indicated, along with information about how long ago the event
occurred. It is important to note that if an event is reported, you must select the event and run
a fix procedure.
Event properties
To retrieve the properties and sense data about a specific event, perform the following steps:
1. Select an event in the table.
2. Click Actions → Properties (Figure 10-324).
Figure 10-324 Event properties action
3. The Properties and Sense Data for Event sequence_number window (where
sequence_number is the sequence number of the event that you selected in the previous
step) opens, as shown in Figure 10-325 on page 796.
Tip: You can also obtain access to the Properties action by right-clicking an event.
Figure 10-325 Properties and Sense Data for Event window
4. Click Close to return to the Recommended Actions panel.
Running the fix procedure
To run a procedure to fix an event, perform the following steps:
1. Select an event in the table.
2. Click Actions → Run Fix Procedure (Figure 10-326 on page 797).
Tip: From the Properties and Sense Data for Event window, you can use Previous and
Next in the lower-left corner of the window to navigate between events.
Tip: You can also click Run Fix at the top of the panel (see Figure 10-326 on
page 797) to solve the most critical event.
Figure 10-326 Run Fix Procedure action
3. The Directed Maintenance Procedure window opens, as shown in Figure 10-327. Follow
the steps in the wizard to fix the event.
Figure 10-327 Directed Maintenance Procedure wizard
4. Click Cancel to return to the Recommended Actions panel.
Tip: You can also obtain access to the Run Fix Procedure action by right-clicking an
event.
Sequence of steps: We do not describe all of the possible steps here, because the
steps that are involved depend on the specific event.
10.13.2 Event Log
On the Events panel (Figure 10-328), you can choose to display the SAN Volume Controller
event log by either Recommended Actions, Unfixed Messages and Alerts, or all events.
To access this panel, from the SAN Volume Controller Overview panel that is shown in
Figure 10-1 on page 628, move the mouse cursor over the Monitoring selection in the
dynamic menu and click Events. Then, in the upper-left corner of the panel, select either
Recommended actions, Unfixed messages and alerts, or Show all.
Figure 10-328 SAN Volume Controller Event Log
Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem.
Other alerts also require an action, but they do not have a fix procedure. Messages are fixed
when you acknowledge reading them.
Filtering events
You can filter events in various ways. Filtering can be based on event status (see Basic
filtering), or over a period of time (see Time filtering on page 799). You can also search the
event log for a specific text string using table filtering, as described in 10.1.2, Organizing
based on window content on page 633.
Certain events require a specific number of occurrences within 25 hours before they are
displayed as unfixed. If they do not reach this threshold within 25 hours, they are flagged as
expired. Monitoring events are beneath the coalesce threshold and are usually transient.
You can also sort events by time or error code. When you sort by error code, the most serious
events (those events with the lowest numbers) are displayed first.
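The event log can also be queried from the CLI, which is convenient for scripting. The sketch
below uses the lseventlog command; the filter parameters that are shown are assumptions, so
confirm them in the command reference for your code level:

# Show unfixed alerts, ordered by date (parameters are assumptions)
svcinfo lseventlog -alert yes -fixed no -order date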
Basic filtering
You can filter the Event Log display in one of three ways by using the drop-down menu in the
upper-right corner of the panel (see Figure 10-329 on page 799):
Display all unfixed alerts and messages: Recommended actions (events requiring your
attention)
Display all alerts and messages: Unfixed messages and alerts
Display all event alerts, messages, monitoring, and expired events: Show all (include the
events that are under the threshold)
Figure 10-329 Filter Event Log display
Time filtering
There are two ways to perform time filtering: by selecting a start date and time, and end date
and time; and by selecting an event and showing the entries within a certain period of time of
this event. In this section, we demonstrate both methods:
By selecting a start date and time, and an end date and time
To use this time frame filter, perform the following steps:
a. Click Actions → Filter by Date (Figure 10-330).
Figure 10-330 Filter by Date action
b. The Date/Time Filter window opens (Figure 10-331 on page 800). From this window,
select a start date and time and an end date and time.
Tip: You can also obtain access to the Filter by Date action by right-clicking an
event.
Figure 10-331 Date/Time Filter window
c. Click Filter and Close. Your panel is now filtered based on the time frame.
To disable this time frame filter, click Actions → Reset Date Filter (Figure 10-332).
Figure 10-332 Reset Date Filter action
By selecting an event and showing the entries within a certain period of time of this event
To use this time frame filter, perform the following steps:
a. Select an event in the table.
b. Click Actions → Show entries within, select minutes, hours, or days, and select a
value (Figure 10-333 on page 801).
Figure 10-333 Show entries within a certain amount of time after this event
c. Now, your window is filtered based on the time frame (Figure 10-334).
Figure 10-334 Time frame filtering
To disable this time frame filter, click Actions → Reset Date Filter (Figure 10-335).
Figure 10-335 Reset Date Filter action
Event properties
To retrieve the properties and sense about a specific event, perform the following steps:
1. Select an event in the table.
2. Click Actions → Properties (Figure 10-336 on page 802).
Tip: You can also access the Show entries within action by right-clicking an event.
Figure 10-336 Event properties action
3. The Properties and Sense Data for Event sequence_number (where sequence_number is
the sequence number of the event that you selected in the previous step) window opens,
as shown in Figure 10-337 on page 803.
Tip: You can also access the Properties action by right-clicking an event.
Figure 10-337 Properties and Sense Data for Event window
4. Click Close to return to the Event Log.
Marking an event as fixed
To mark one or more events as fixed, perform the following steps:
1. Select one or more entries in the table.
2. Click Actions → Mark as fixed (Figure 10-338 on page 804).
Tip: From the Properties and Sense Data for Event window, you can use the Previous
and Next buttons to navigate between events.
Tip: To select multiple events, hold down Ctrl and use the mouse to select the entries
that you want to select.
Figure 10-338 Mark as Fixed action
3. The Warning window opens (Figure 10-339).
Figure 10-339 Warning window
4. Click Yes to confirm your choice.
Exporting Event log entries
You can export event log entries into a comma-separated values (CSV) file for further
processing and enhanced filtering with external applications. You can export either the full
event log or a filtered result, based on your requirements. To export event log entries,
perform the following steps:
1. From the Events panel, show and sort or filter the table to provide the results you want to
export into a CSV file.
2. Click the save (floppy disk) icon and save the file to your workstation (Figure 10-340 on
page 805).
Tip: You can also access the Mark as Fixed action by right-clicking an event.
Figure 10-340 Export Event log to a CSV file
3. You can view the file using Notepad or another program (Figure 10-341).
Figure 10-341 Viewing the CSV file in Notepad
10.13.3 Running the fix procedure
To run a procedure to fix an alert, perform the following steps:
1. In the table, select an alert with a four-digit error code.
2. Click Actions → Run Fix Procedure (Figure 10-342 on page 806).
Alerts: Several alerts have a four-digit error code and a fix procedure that helps you fix the
problem. We describe these steps in this section. Other alerts also require action, but they
do not have a fix procedure. Messages are fixed when you acknowledge them.
Figure 10-342 Run Fix Procedure action
3. The Directed Maintenance Procedure window opens (Figure 10-343). You must follow the
wizard and its steps to fix the event.
Figure 10-343 Directed Maintenance Procedure wizard
4. Click Cancel to return to the Event Log window.
Tip: You can also access the Run Fix Procedure action by right-clicking an alert.
Wizard: We do not describe all the various steps, because they depend on the specific
alert.
Clearing the log
To clear the logs, perform the following steps:
1. Click Actions → Clear Log (Figure 10-344).
Figure 10-344 Clear log
2. A Warning window opens (Figure 10-345). From this window, you must confirm that you
want to clear all entries from the error log.
Figure 10-345 Warning window
3. Click Yes to confirm your choice.
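The same result can be achieved from the CLI with the clearerrlog command, which is listed in
Table 10-1 on page 812. This is only a sketch; the -force flag is an assumption that suppresses
the confirmation prompt, so verify it before use:

# Clear every entry from the event log (flag is an assumption)
svctask clearerrlog -force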
10.13.4 Support panel
From the support panel that is shown in Figure 10-346 on page 808, you can download a
support package that contains log files and information that can be sent to support personnel
to help troubleshoot the system. You can either download individual log files or download
statesaves, which are dumps or livedumps of system data.
Figure 10-346 Support panel
Downloading the support package
To download the support package, perform the following steps:
1. Click Download Support Package (Figure 10-347).
Figure 10-347 Download Support Packages
2. A Download Support Packages window opens (Figure 10-348 on page 809).
From this window, select which kind of logs you want to download:
Standard logs
These logs contain the most recent logs that have been collected for the system.
These logs are the most commonly used by support to diagnose and solve problems.
Standard logs plus one existing statesave
These logs contain the standard logs for the system and the most recent statesaves
from any of the nodes in the system. Statesaves are also known as dumps or
livedumps.
Standard logs plus most recent statesave from each node
These logs contain the standard logs for the system and the most recent statesaves
from each node in the system. Statesaves are also known as dumps or livedumps.
Standard logs plus new statesaves
This option generates a new statesave (livedump) on each node in the system and
packages the statesaves with the most recent logs.
Figure 10-348 Download Support Package window
3. Click Download to confirm your choice (Figure 10-348).
4. Finally, select where you want to save these logs (Figure 10-349).
Figure 10-349 Save the log file on your personal workstation
Download individual packages
To download packages manually, perform the following tasks:
1. Activate the individual log files view by clicking Show full log listing (Figure 10-350).
Figure 10-350 Show full log listing link
Duration varies: Depending on your choice, this action can take several minutes to
complete.
2. On the detailed view, select the node from which you want to download the logs by using
the drop-down menu in the upper-left corner of the panel (Figure 10-351).
Figure 10-351 Node selection
3. Select the package or packages that you want to download (Figure 10-352).
Figure 10-352 Selecting individual packages
4. Click Actions → Download (Figure 10-353).
Figure 10-353 Download packages
Tip: To select multiple packages, hold down Ctrl and use the mouse to select the
entries that you want to include.
Tip: You can also access the Download action by right-clicking a package.
5. Finally, select where you want to save these logs on your personal computer.
CIMOM logging level
Select this option to include the Common Information Model (CIM) object manager (CIMOM)
tracing components and logging details.
To change the CIMOM logging level to either high, medium, or low, use the drop-down menu
in the upper-right corner of the panel, as shown in Figure 10-354.
Figure 10-354 Change the CIMOM logging level
10.14 User Management
Users are managed from within the Access selection of the dynamic menu in the SAN
Volume Controller GUI, as shown in Figure 10-355.
Figure 10-355 Users panel
Each user account has a name, role, and password assigned to it, which differs from the
Secure Shell (SSH) key-based role approach that is used by the CLI. Note that starting in 6.3,
you can access the CLI with a password and no SSH key.
We describe the authentication in detail in 2.9, User authentication on page 47.
Tip: You can also delete packages by clicking Delete in the Actions menu.
Maximum logging level: The maximum logging level can have a significant effect on the
performance of the CIMOM interface.
The role-based security feature organizes the SAN Volume Controller administrative functions
into groups, which are known as roles, so that permissions to execute the various functions
can be granted differently to the separate administrative users. There are four major roles and
one special role. Table 10-1 lists the user roles.
Table 10-1 Authority roles

Security Admin
Allowed commands: All commands.
Users: Superusers.

Administrator
Allowed commands: All commands except the following svctask commands: chauthservice,
mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset.
Users: Administrators that control the SAN Volume Controller.

Copy Operator
Allowed commands: All svcinfo commands and the following svctask commands:
prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap,
startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp,
chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship,
chrcrelationship, and chpartnership.
Users: Users that control all copy functionality of the cluster.

Service
Allowed commands: All svcinfo commands and the following svctask commands:
applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk,
includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats,
and settime.
Users: Users that perform service maintenance and other hardware tasks on the cluster.

Monitor
Allowed commands: All svcinfo commands, the following svctask commands: finderr,
dumperrlog, dumpinternallog, and chcurrentuser, and the svcconfig command backup.
Users: Users that only need view access.

The superuser is a built-in account that has the Security Admin role permissions. You cannot
change the permissions of or delete the superuser account; you can only change its password.
You can also change this password manually on the front panels of the clustered system
nodes.

An audit log keeps track of actions that are issued through the management GUI or the CLI.
For more information about this topic, see 10.14.9, Audit log information on page 823.
10.14.1 Creating a user
Perform the following steps to create a user:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over the
Access selection in the dynamic menu and click Users.
2. On the Users panel, click Create User (Figure 10-356).
Figure 10-356 Create new user
3. The Create User window opens (Figure 10-357).
Figure 10-357 Create User window
Enter a new user name in the Name field.
Authentication Mode section
There are two types of authentication that are available in this section:
Local: The authentication method is located on the system. Users must be part of a user
group that authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 10-1 on page 812) to which you want this user to belong.
Remote: Remote authentication allows users of the SAN Volume Controller clustered
system to authenticate to the system by using an external authentication service, which
can be either IBM Tivoli Integrated Portal or a supported LDAP service. Ensure that the
remote authentication service is supported by the SAN Volume Controller clustered
system. For more information about remote user authentication, see 2.9, User
authentication on page 47.
Local Credentials section
There are two types of local credentials that can be configured in this section, depending on
your needs:
Password Authentication: The password authenticates users to the management GUI.
Enter the password in the Password field. Verify the password.
SSH Public/Private Key Authentication: The SSH Public Key authenticates users to the
CLI. Use Browse to locate and upload the SSH Public Key. If you have not created an SSH
key pair, you can still access the SAN Volume Controller system by using your user name
and password.
4. To create the new user, click Create (Figure 10-357 on page 813).
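A user can also be created from the CLI with the mkuser command, which appears in Table 10-1
on page 812. The sketch below creates a locally authenticated user; the user name, group,
password, and key file are placeholders, and the flag names should be checked against your
release:

# Create a local user in the Monitor group (values are examples)
svctask mkuser -name john_monitor -usergrp Monitor -password Passw0rd
# Optionally associate an SSH public key with the user (file name is an example)
svctask chuser -keyfile /tmp/john_id_rsa.pub john_monitor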
10.14.2 Modifying the user properties
Perform the following steps to change the user properties:
1. From the SAN Volume Controller Overview panel, move the cursor over the Access
selection in the dynamic menu, and then click the Users panel.
2. In the left column, select a User Group.
3. Select a user.
4. Click Actions → Properties (Figure 10-358 on page 815).
User name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore (_) character. The user name can be between one and 256 characters in
length.
Password: The password can be between 6 and 64 characters in length, and it cannot
begin or end with a space.
Tip: You can also change user properties by right-clicking a user and selecting
Properties from the list.
Figure 10-358 User Properties action
5. The User Properties window opens (Figure 10-359).
Figure 10-359 User Properties window
From this window, you can change the authentication mode and the local credentials. For
the authentication mode, choose the type of authentication:
Local: The authentication method is located on the system. Users must be part of a
user group which authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 10-1 on page 812) that you want the user to be part of.
Remote: Remote authentication allows users of the SAN Volume Controller clustered
system to authenticate to the system by using an external authentication service, which
can be either IBM Tivoli Integrated Portal or a supported LDAP service. Ensure that the
remote authentication service is supported by the SAN Volume Controller clustered
system. For more information about remote user authentication, see 2.9, User
authentication on page 47.
For the local credentials, there are two types of local credentials that can be configured in
this section, depending on your needs:
Password authentication: The password authenticates users to the management GUI.
You need to enter the password in the Password field. Verify the password.
SSH Public/Private Key authentication: The SSH Key authenticates users to the CLI.
Use Browse to locate and upload the SSH Public Key.
6. To confirm the changes, click OK (see Figure 10-359 on page 815).
10.14.3 Removing a user password
Perform the following steps to remove a user password:
1. From the SAN Volume Controller Overview panel, move the cursor over the Access
selection in the dynamic menu, and then click the Users panel.
2. Select the user.
3. Click Actions → Remove Password, as shown in Figure 10-360.
Figure 10-360 Remove Password action
4. The Warning window opens (Figure 10-361 on page 817). Click Yes to complete the
operation.
Password: The password can be between 6 and 64 characters in length and it
cannot begin or end with a space.
Important: To be able to remove the password for a given user, the SSH Public Key must
be defined. Otherwise, this action is not available.
Tip: You can also remove the password by right-clicking a user and selecting Remove
Password from the list.
Figure 10-361 Warning window
10.14.4 Removing a user SSH Public Key
Perform the following steps to remove a user SSH public key:
1. From the SAN Volume Controller Overview panel, click Access, and then click the Users
panel.
2. Select the user.
3. Click Actions → Remove SSH Key, as shown in Figure 10-362.
Figure 10-362 Remove SSH Key action
4. The Warning window opens (Figure 10-363 on page 817). Click Yes to complete the
operation.
Figure 10-363 Warning window
Important: To be able to remove the SSH Public Key for a given user, the password must
be defined. Otherwise, this action is not available.
Tip: You can also remove the SSH Public Key by right-clicking a user and selecting
Remove SSH Key from the list.
10.14.5 Deleting a user
Perform the following steps to delete a user:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over the
Access selection, and then click the Users panel.
2. Select the user.
3. Click Actions → Delete, as shown in Figure 10-364.
Figure 10-364 Delete user action
10.14.6 Creating a user group
Five user groups are created by default on the SAN Volume Controller. If needed, you can
create additional user groups.
Perform the following steps to create a user group:
1. From the SAN Volume Controller Overview panel, move the cursor over the Access
selection on the dynamic menu, and then click Users.
2. Click Create User Group, as shown in Figure 10-365 on page 819.
Important: To select multiple users to delete, hold down Ctrl and use the mouse to
select the entries that you want to delete.
Tip: You can also delete a user by right-clicking the user and selecting Delete from the
list.
Figure 10-365 Creating User Group
3. The Create User Group window opens (Figure 10-366).
Figure 10-366 Create User Group window
Enter a name for the group in the Group Name field.
Select a role for the group: Monitor, Copy Operator, Service, Administrator, or
Security Administrator. See Table 10-1 on page 812 for more information about these
roles.
4. To create the group, click Create (Figure 10-366).
5. You can verify the creation in the Users panel (Figure 10-367 on page 820).
Group name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore (_) character. The group name can be between one and 63 characters in
length.
Figure 10-367 Verify user group creation
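A corresponding CLI sketch uses the mkusergrp command; the group name is an example and
the role keyword is an assumption to verify for your release:

# Create a user group with the Copy Operator role (values are examples)
svctask mkusergrp -name ITSO_CopyOps -role CopyOperator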
10.14.7 Modifying the user group properties
Perform the following steps to change the user group properties:
1. From the SAN Volume Controller Overview panel, move the cursor over the Access
selection on the dynamic menu, and then click Users.
2. In the left column, select the User Group.
3. Click Actions → Properties, as shown in Figure 10-368.
Figure 10-368 Modify User Group Properties
4. The User Group Properties window opens (Figure 10-369 on page 821).
Important: For preset user groups (SecurityAdmin, Administrator, CopyOperator, Service,
and Monitor), you cannot change the respective roles.
Figure 10-369 User Group Properties window
From this window, you can change the role. Select one of the following roles: Monitor, Copy
Operator, Service, Administrator, or Security Administrator. See Table 10-1 on page 812
for more information about these roles.
5. To confirm the changes, click OK (Figure 10-369).
10.14.8 Deleting a user group
Perform the following steps to delete a user group:
1. From the SAN Volume Controller Overview panel, move the cursor over the Access
selection on the dynamic menu, and then click Users.
2. In the left column, select the User Group.
3. Click Actions → Delete (Figure 10-370 on page 822).
Important: You cannot delete the following preset user groups: SecurityAdmin,
Administrator, CopyOperator, Service, or Monitor.
Figure 10-370 Delete User Group action
4. There are two options:
If you do not have any users in this group, the Delete User Group window opens, as
shown in Figure 10-371. Click Delete to complete the operation.
Figure 10-371 Delete User Group window
If you have users in this group, the Delete User Group window opens, as shown in
Figure 10-372. The users of this group will be moved to the Monitor user group.
Figure 10-372 Delete User Group window
10.14.9 Audit log information
An audit log tracks actions that are issued through the management GUI or the CLI. You can
use the audit log to monitor the user activity on your SAN Volume Controller clustered system.
To view the audit log, from the SAN Volume Controller Overview panel, move the cursor over
the Access selection on the dynamic menu, and then click the Audit Log panel, as shown in
Figure 10-373.
The audit log entries provide the following types of information:
The time and date when the action or command was issued on the system
The name of the user who performed the action or command
The IP address of the system where the action or command was issued
The parameters that were issued with the command
The results of the command or action
The sequence number
The object identifier that is associated with the command or action
Figure 10-373 Audit Log entries
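The audit log can also be listed from the CLI, which is useful when you want to archive it. The
following sketch uses the catauditlog command; the -first parameter is an assumption that
limits the output to the most recent entries:

# Show the 20 most recent audit log entries (parameter is an assumption)
svcinfo catauditlog -first 20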
Time filtering
There are two ways to perform time filtering: by selecting a start date and time and an end
date and time; and by selecting an event and showing the entries within a certain period of
time of this event. In this section, we demonstrate both methods:
By selecting a start date and time and an end date and time
To use this time frame filter, perform the following steps:
Click Actions → Filter by Date (Figure 10-374).
Figure 10-374 Audit log time filter
The Date/Time Filter window opens (Figure 10-375). From this window, select a start
date and time and an end date and time.
Figure 10-375 Date/Time Filter window
Click Filter and Close. Your panel is now filtered based on its time frame.
To disable this time frame filter, click Actions → Reset Date Filter (Figure 10-376).
Figure 10-376 Reset Date Filter action
By selecting an entry and showing the entries within a certain period of time of this event
To use this time frame filter, perform the following steps:
Select an entry in the table.
Click Actions → Show entries within. Select minutes, hours, or days. Then, select a
value (Figure 10-377 on page 825).
Tip: You can also access the Filter by Date action by right-clicking an entry.
Figure 10-377 Show entries within action
Your panel is now filtered based on the time frame (Figure 10-378).
Figure 10-378 Time frame filtering
To disable this time frame filter, click Actions → Reset Date Filter (Figure 10-379).
Figure 10-379 Reset Date Filter action
10.15 Configuration
In this section, we describe how to configure various properties of the SAN Volume Controller
system.
Tip: You can also access the Show entries within action by right-clicking an entry.
10.15.1 Configuring the Network
With the SAN Volume Controller, you can use both IP ports of each node; each node has two
active IP ports. We describe them in further detail in 2.6.1, Use of IP addresses and
Ethernet ports on page 33.
Management IP addresses
In this section, we discuss the modification of management IP addresses.
Management IP addresses can be defined for the system. The system supports one to four IP
addresses. You can assign these addresses to two Ethernet ports and their backup ports.
Multiple ports and IP addresses provide redundancy for the system in the event of connection
interruptions.
At any point in time, the system has an active management interface. Ethernet Port 1 must
always be configured, and the use of Port 2 is optional. Configuring both ports provides
redundancy for the Ethernet connections. If you have configured both ports and you cannot
connect through one IP address, attempt to access the system through the alternate IP
address. Both IPv4 and IPv6 address formats are supported. Ethernet ports can have either
IPv4 addresses or IPv6 addresses, or both.
Perform the following steps to modify the system IP addresses of your SAN Volume Controller
configuration:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over the
Settings selection and click Network.
2. In the left column, select Management IP Addresses.
3. The Management IP Addresses window opens (Figure 10-380).
Important: If you specify a new system management IP address, the existing
communication with the system through the GUI is lost. You need to relaunch the SAN
Volume Controller Application from the browser. You must use the new IP address to
reconnect to the management GUI. When you reconnect, accept the new site certificate.
Although modifying the IP address of the system is quite simple, it requires reconfiguration
of other items within the SAN Volume Controller environment, including reconfiguring
the central administration GUI by adding the system again with its new IP address.
Figure 10-380 Modifying the management IP address
4. Click a port to configure the system's management IP address. Notice that you can
configure both ports on the SAN Volume Controller node (Figure 10-381 on page 827).
Figure 10-381 Modifying the management IP addresses
5. Depending on whether you select to configure an IPv4 or IPv6 system, the information that
you enter varies:
For IPv4:
Type an IPv4 address in the IP Address field.
Type an IPv4 gateway in the Gateway field.
Type an IPv4 subnet mask.
For IPv6:
Select Show IPv6.
Type an IPv6 prefix in the IPv6 network Prefix field. The Prefix field can have a value of
0 to 127.
Type an IPv6 address in the IP Address field.
Type an IPv6 gateway in the Gateway field.
6. After you enter the information, click OK to confirm the modification (Figure 10-381).
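For completeness, the management IP address can also be changed from the CLI. The following
line is only a sketch; the chsystemip command and its parameters are assumptions based on
recent code levels, the addresses are examples, and the GUI session is lost as soon as the
address changes:

# Set a new IPv4 management address on Ethernet port 1 (values are examples)
svctask chsystemip -clusterip 10.18.228.140 -gw 10.18.228.1 -mask 255.255.255.0 -port 1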
10.15.2 Configuring the service IP addresses
The service IP address is used to access the Service Assistant Tool, which you can use to
perform service-related actions on the node. All nodes in the system have separate service
addresses. A node that is operating in the service state does not operate as a member of the
clustered system.
Configuring this service IP is important, because it will let you access the Service Assistant
Tool. In the case of an issue with a node, you can view a detailed status and error summary,
and you can perform service actions on that node.
Perform the following steps to modify the service IP addresses of your SAN Volume Controller
configuration:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over Settings
selection and click Network.
2. In the left column, select Service IP Addresses (Figure 10-382 on page 828).
Figure 10-382 Service IP Addresses window
3. Select one node, and click the port to which you want to assign a service IP address
(Figure 10-383).
Figure 10-383 Configure Service IP window
4. Depending on whether you installed an IPv4 system or IPv6 system, the information that
you enter varies:
For IPv4:
Type an IPv4 address in the IP Address field.
Type an IPv4 gateway in the Gateway field.
Type an IPv4 subnet mask.
For IPv6:
Select Show IPv6.
Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of
0 to 127.
Type an IPv6 address in the IP Address field.
Type an IPv6 gateway in the Gateway field.
5. After the information is filled in, click OK to confirm the modification.
6. Repeat steps 3, 4, and 5 for each node of your system.
10.15.3 Configuring Ethernet ports
The SAN Volume Controller clustered system now allows you to configure node Ethernet port
properties and assign additional IP addresses that can be used for iSCSI connections, host
attachment, and long-distance remote copy features (Figure 10-384).
Figure 10-384 Network, Ethernet Ports configuration
To configure SAN Volume Controller Ethernet ports for use with these features, perform the
following actions:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over the
Settings selection and click Network.
2. In the left column, select Ethernet Ports (Figure 10-384).
3. Select the node and port that you want to modify and click Actions → Modify, as shown in
Figure 10-385.
Figure 10-385 Modify Ethernet Ports
4. On the Modify Port portnumber on Node nodenumber (where portnumber is the Ethernet
port ID and nodenumber is the ID of the SAN Volume Controller node) window, you can
modify the settings that are shown in Figure 10-386 on page 831:
Figure 10-386 Modify port settings
IP Address, Subnet mask and Gateway: set the IP address for use with iSCSI
connections, host attachment, and long-distance remote copy
iSCSI hosts: enable the Ethernet port for use with iSCSI connections
Remote copy: enable the Ethernet port for use with long-distance remote copy and
assign Remote copy Group.
5. After you make your selection, click OK to save the changes and repeat this step for other
Ethernet ports.
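The node Ethernet port IP addresses that are used for iSCSI can also be assigned from the CLI.
The sketch below uses the cfgportip command; the node ID, port ID, and addresses are
examples, and the remote copy options that were introduced with V7.2 are omitted because
their exact flags should be checked in the command reference:

# Assign an iSCSI IPv4 address to Ethernet port 2 of node 1 (values are examples)
svctask cfgportip -node 1 -ip 10.18.229.21 -mask 255.255.255.0 -gw 10.18.229.1 2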
10.15.4 iSCSI configuration
From the iSCSI panel, you can configure settings for the system to attach to iSCSI-attached
hosts, as shown in Figure 10-387 on page 832.
Note: The Ethernet port is used to connect to a remote system over a
long-distance IP connection. For a partnership to be established successfully
between two systems, the remote copy group must contain at least two ports, one
from the local system and the other from the remote system. For more details about
long-distance remote copy over an IP connection, see Chapter 8, Advanced Copy
Services on page 365.
Figure 10-387 iSCSI Configuration
The following parameters can be updated:
System Name
It is important to set the system name correctly, because it is part of the iSCSI qualified
name (IQN) for the node.
To change the system name, click the system name and specify the new name.
iSCSI Aliases (optional)
An iSCSI alias is a user-defined name that identifies the node to the host.
Perform the following steps to change an iSCSI alias:
Click an iSCSI alias.
Specify a name for it.
Each node has a unique iSCSI name that is associated with two IP addresses. After the
host has initiated the iSCSI connection to a target node, this IQN from the target node will
be visible in the iSCSI configuration tool on the host.
iSNS and CHAP
You can specify the IP address for the iSCSI Storage Name Service (iSNS). Host systems
use the iSNS server to manage iSCSI targets and for iSCSI discovery.
You can also enable Challenge Handshake Authentication Protocol (CHAP) to
authenticate the system and iSCSI-attached hosts with the specified shared secret.
Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.
System name: You can use the letters A to Z and a to z, the numbers 0 to 9, and the
underscore (_) character. The name can be between one and 63 characters in length.
The CHAP secret is the authentication method that is used to restrict other iSCSI hosts
from using the same connection. You can set the CHAP secret for the whole system
under the system properties or for each host definition. The CHAP secret must be identical
on the server and in the system or host definition. You can create an iSCSI host definition
without using a CHAP secret.
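As noted earlier in this section, the system name and the node name form part of each node
IQN. As an illustration, a SAN Volume Controller node IQN typically takes the following form;
treat the exact layout, including the lowercase conversion, as an assumption and confirm it on
your own nodes:

iqn.1986-03.com.ibm:2145.<system_name>.<node_name>

For example, a system named ITSO_SVC1 with a node named node1 might present
iqn.1986-03.com.ibm:2145.itsosvc1.node1.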
10.15.5 Fibre Channel information
As shown in Figure 10-388, you can use the Fibre Channel Connectivity panel to display the
FC connectivity between nodes and other storage systems and hosts that attach through the
FC network. You can filter by selecting one of the following fields:
All nodes, storage systems, and hosts
Systems
Nodes
Storage systems
Hosts
Figure 10-388 Fibre Channel
10.15.6 Event notifications
The SAN Volume Controller can use Simple Network Management Protocol (SNMP) traps,
syslog messages, and Call Home email to notify you and the IBM Support Center when
significant events are detected. Any combination of these notification methods can be used
simultaneously.
Notifications are normally sent immediately after an event is raised. However, there are
events that can occur because of service actions that are being performed. If a recommended
service action is active, notifications about these events are sent only if the events are still
unfixed when the service action completes.
10.15.7 Email notifications
The Call Home feature transmits operational and event-related data to you and IBM through a
Simple Mail Transfer Protocol (SMTP) server connection in the form of an event notification
email. When configured, this function alerts IBM service personnel about hardware failures
and potentially serious configuration or environmental issues.
Perform the following steps to configure email event notifications:
1. From the SAN Volume Controller Overview panel, move the mouse cursor over the
Settings selection and click Event Notifications.
2. In the left column, select Email.
3. Click Enable Email Event Notification (Figure 10-389).
Figure 10-389 Email Event Notification
4. A wizard opens (Figure 10-390). In the Configure Support Notifications window, you must
first define the system location information (Company Name, Street Address, City, State or
Province, Postal Code, and Country or Region) for where the system is located. Click Next
after you have provided this information.
Figure 10-390 Define System Location
5. On the next window, you must enter contact information to enable IBM Support personnel
to contact this person to assist with problem resolution (Contact Name, Email reply
Address, Machine Location, and Phone numbers). Ensure that all contact information is
valid and click Next. See Figure 10-391.
Figure 10-391 Define Company Contact information
6. On the next page (Figure 10-392), configure at least one email server that is used by your
site and optionally enable inventory reporting. Enter a valid IP address and a server port
for each server that is added. Ensure that the email servers are valid.
Inventory reports allow IBM service personnel to proactively notify you of any known
issues with your system. To activate the inventory reporting function, enable the inventory
reporting and choose a reporting interval in this window.
Figure 10-392 Configure Email Servers and Inventory Reporting window
7. Next (Figure 10-393 on page 836), you can configure the email addresses that receive
notifications. It is advisable to configure a support user email address with the error event
notification type enabled so that IBM service personnel are notified if an error condition
occurs on your system. Ensure that all email addresses are valid.
Figure 10-393 Configure Support Notifications window
8. The last window (Figure 10-394) displays a summary of your Email Event Notification
wizard. Click Finish to complete the setup.
Figure 10-394 Email Event Notification Summary
9. The wizard is now closed. Additional information has been added to the panel, as shown
in Figure 10-395 on page 837. You can edit or disable email notification from this window.
Figure 10-395 Email Event Notification window configured
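Call Home email settings can also be configured from the CLI. The commands and flags in this
sketch (mkemailserver, mkemailuser, and startemail) are assumptions drawn from recent code
levels, and every value is an example, so verify the syntax before use:

# Define the SMTP server that is used for email notifications (values are examples)
svctask mkemailserver -ip 10.18.228.60 -port 25
# Add a local recipient for error notifications (address is an example)
svctask mkemailuser -address storage.admin@example.com -usertype local -error on
# Activate the email notification function
svctask startemail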
10.15.8 SNMP notifications
Simple Network Management Protocol (SNMP) is a standard protocol for managing networks
and exchanging messages. The system can send SNMP messages that notify personnel
about an event. You can use an SNMP manager to view the SNMP messages that are sent by
the SAN Volume Controller.
You can configure an SNMP server to receive various informational, error, or warning
notifications by entering the following information (see Figure 10-396 on page 838):
IP Address
The address for the SNMP server
Server Port
The remote port number for the SNMP server. The remote port number must be a value
between 1 and 65535.
Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong.
Event notifications:
Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.
Select Info if you want the user to receive messages about expected events. No action
is required for these events.
Important: Navigate to Recommended Actions to run the fix procedures on these
notifications.
Figure 10-396 SNMP configuration
To remove an SNMP Server, click the minus (-) button.
To add another SNMP Server, click the plus (+) button.
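SNMP servers can also be defined from the CLI. The following sketch uses the mksnmpserver
command; the parameters that are shown are assumptions to verify against your code level,
and the address and community are examples:

# Define an SNMP server that receives error and warning notifications (values are examples)
svctask mksnmpserver -ip 10.18.228.50 -community public -error on -warning on -info off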
Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be either IPv4 or IPv6. The system can send
syslog messages that notify personnel about an event.
You can configure a syslog server to receive log messages from various systems and store
them in a central repository by entering the following information (see Figure 10-397 on
page 839):
IP Address
The address for the syslog server.
Facility
The facility determines the format for the syslog messages and can be used to determine
the source of the message.
Message Format
The message format depends on the facility.
The system can transmit syslog messages in two formats:
The concise message format provides standard detail about the event.
The expanded format provides more details about the event.
Event Notifications:
Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine if any
corrective action is necessary.
Important: Navigate to Recommended Actions to run the fix procedures on these
notifications.
Select Info if you want the user to receive messages about expected events. No action
is required for these events.
Figure 10-397 Syslog configuration
To remove a syslog server, click the minus (-) button.
To add another syslog server, click the plus (+) button.
The syslog messages can be sent in either concise message format or expanded message
format.
Example 10-1 shows a compact format syslog message.
Example 10-1 Compact syslog message example
IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100
Example 10-2 shows an expanded format syslog message.
Example 10-2 Full format syslog message example
IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2
#NodeID=2 #MachineType=21454F2#SerialNumber=1234567 #SoftwareVersion=5.1.0.0
(build 8.14.0805280000)#FRU=fan 24P1118, system board 24P1234
#AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000#Additional
Data(64-127)=000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000
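A syslog server can also be defined from the CLI. The following is a minimal sketch, with a placeholder IP address and facility value:
svctask mksyslogserver -ip 9.10.11.14 -facility 0 -error on -warning on -info off
svcinfo lssyslogserver
The facility value (0 - 7) selects the syslog facility and, with it, the message format that is described above; lssyslogserver lists the syslog servers that are currently defined.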
10.15.9 Using the General panel
Use the General panel to change time and date settings, work with licensing options,
download configuration settings, download software upgrade packages, and change
management GUI preferences.
10.15.10 Date and time
Perform the following steps to configure date and time settings:
1. From the SAN Volume Controller Overview panel, move the cursor over Settings and click
General.
2. In the left column, select Date and Time (Figure 10-398 on page 840).
Figure 10-398 Date and Time window
3. From this panel, you can modify the following information:
Time zone
Select a time zone for your system using the drop-down list.
Date and time
Two options are available:
If you are not using a Network Time Protocol (NTP) server, select Set Date and
Time, and then manually enter the date and time for your system, as shown in
Figure 10-399. You can also click Use Browser Settings to automatically adjust the
date and time of your SAN Volume Controller system with your local workstation
date and time.
Figure 10-399 Set Date and Time window
If you are using a Network Time Protocol (NTP) server, select Set NTP Server IP
Address and then enter the IP address of the NTP server, as shown in
Figure 10-400.
Figure 10-400 Set NTP Server IP Address window
4. Finally, click Save.
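The date and time settings can also be changed from the CLI. The following commands are a minimal sketch; the time zone ID and NTP server address are placeholders (use lstimezones to find the ID for your location):
svcinfo lstimezones
svctask settimezone -timezone 520
svctask chsystem -ntpip 9.10.11.15
If you do not use an NTP server, you can instead set the time manually with the svctask setsystemtime command.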
10.15.11 Licensing
Perform the following steps to configure the licensing settings:
1. From the SAN Volume Controller Overview panel, move the cursor over Settings and click
General.
2. In the left column, select Licensing (Figure 10-401).
Figure 10-401 Licensing window
3. In the Select Your License section, you can choose between two licensing options for your
SAN Volume Controller system:
Standard Edition: Select the number of terabytes that are available for your license for
virtualization and for Copy Services functions for this license option.
Entry Edition: This type of licensing is based on the number of the physical disks that
you are virtualizing and whether you selected to license the FlashCopy function, the
Metro Mirror and Global Mirror function, or both of these functions.
4. Set the licensing options for the SAN Volume Controller for the following elements:
Virtualization Limit
Enter the capacity of the storage that will be virtualized by this system.
FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.
Global and Metro Mirror Limit
Enter the capacity that is available for Metro Mirror and Global Mirror relationships.
Important: The Used capacity for FlashCopy mapping is the sum of all of the volumes that are the source volumes of a FlashCopy mapping.
Important: The Used capacity for Global Mirror and Metro Mirror is the sum of the capacities of all of the volumes that are in a Metro Mirror or Global Mirror relationship; both master volumes and auxiliary volumes are included.
Real-time Compression Limit
Enter the total number of terabytes of virtual capacity that are licensed for compression.
Virtualization Limit (Entry Edition only)
Enter the total number of physical drives that you are authorized to virtualize.
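The license settings can also be updated from the CLI with the chlicense command. The following is a minimal sketch with placeholder capacities in terabytes:
svctask chlicense -virtualization 50
svctask chlicense -flash 20 -remote 20 -compression 10
svcinfo lslicense
The lslicense command shows the currently configured limits together with the used capacity values that are described in the notes above.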
10.15.12 Upgrading software
See 10.16, Upgrading SAN Volume Controller software on page 843, for information about
this topic.
10.15.13 Setting GUI preferences
Perform the following steps to configure the GUI preferences:
1. From the SAN Volume Controller Overview window, move the cursor over Settings and
click General.
2. In the left column, select GUI Preferences (Figure 10-402).
Figure 10-402 GUI Preferences window
3. You can configure the following elements:
Refresh GUI Objects
This action causes the GUI to refresh every one of its views. It clears the GUI cache.
The GUI will look up every object again.
Restore Default Browser Preferences
This action deletes all GUI preferences that are stored in the browser and restores the
default preferences.
Information Center
You can change the URL of the SAN Volume Controller Information Center (Figure 10-403).
Important: This is an IBM Support only action button.
Figure 10-403 Change SAN Volume Controller Infocenter URL address
10.16 Upgrading SAN Volume Controller software
In this section, we explain the operations to upgrade your SAN Volume Controller software
from 6.4.1 to the new 7.2 version.
The format for the software upgrade package name ends in four positive integers separated
by dots. For example, a software upgrade package might have the name
IBM_2145_INSTALL_7.2.0.0.
10.16.1 Precautions before the upgrade
Take the following precautions before attempting an upgrade.
During the upgrade, each node in your SAN Volume Controller clustered system will be
automatically shut down and restarted by the upgrade process. Because each node in an I/O
Group provides an alternate path to volumes, use the Subsystem Device Driver (SDD) to
make sure that all I/O paths between all hosts and SANs work.
If you have not performed this check, certain hosts might lose connectivity to their volumes
and experience I/O errors when the SAN Volume Controller node that provides that access is
shut down during the upgrade process.
You can check the I/O paths by using SDD datapath query commands.
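For example, on a host that uses SDD, commands similar to the following can confirm that every volume still has all of its expected paths before you start the upgrade (the exact command names and output vary by operating system and SDD or SDDPCM level; SDDPCM uses pcmpath instead of datapath):
datapath query adapter
datapath query device
Verify that each device reports paths through both nodes of its I/O Group and that no paths are in a failed state.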
Double-check that your uninterruptible power supply unit power configuration is also set up
correctly (even if your system runs without problems). Specifically, double-check these areas:
Ensure that your uninterruptible power supply units all get their power from an external source, and that they are not daisy-chained. Make sure that each uninterruptible power supply unit does not supply power to another node's uninterruptible power supply unit.
Ensure that the power cable and the serial cable that come from the back of each node go back to the same uninterruptible power supply unit. Make sure that the cables do not cross and go back to separate uninterruptible power supply units. If the cables are crossed, as one node is shut down during the upgrade, another node might also be mistakenly shut down.
Important: Before attempting any SAN Volume Controller code update, read and
understand the SAN Volume Controller concurrent compatibility and code cross-reference
matrix. Go to the following site and click the link for Latest SAN Volume Controller code:
http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001707
10.16.2 SAN Volume Controller software upgrade test utility
The SAN Volume Controller software upgrade test utility is a SAN Volume Controller software
utility that checks for known issues that can cause problems during a SAN Volume Controller
software upgrade. It is available from the following location:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
You can use the svcupgradetest utility to check for known issues that might cause problems
during a SAN Volume Controller software upgrade.
The software upgrade test utility can be downloaded in advance of the upgrade process, or it
can be downloaded and run directly during the software upgrade, as guided by the upgrade
wizard.
You can run the utility multiple times on the same SAN Volume Controller system to perform a
readiness check in preparation for a software upgrade. We strongly advise running this utility
for a final time immediately prior to applying the SAN Volume Controller upgrade to ensure
that there have not been any new releases of the utility since it was originally downloaded.
The installation and use of this utility are nondisruptive and do not require restarting any SAN
Volume Controller nodes, so there is no interruption to host I/O. The utility is only installed on
the current configuration node.
System administrators must continue to check whether the version of code that they plan to
install is the latest version. You can obtain the latest information at this website:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code
This utility is intended to supplement rather than duplicate the existing tests that are
performed by the SAN Volume Controller upgrade procedure (for example, checking for
unfixed errors in the error log).
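The test utility can also be installed and run from the CLI. The following is a hedged sketch, assuming that the utility package has already been copied to the clustered system (for example, with pscp); the utility file name is illustrative:
svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_20.1
svcupgradetest -v 7.2.0.0
The -v parameter specifies the SAN Volume Controller version to which you intend to upgrade; the utility reports any issues that it finds for that target level.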
10.16.3 Upgrade procedure
To upgrade the SAN Volume Controller system software, perform the following steps:
1. With a supported web browser, navigate to your system IP address using the following URL. The SAN Volume Controller GUI login window then opens, as shown in Figure 10-404.
http://<your system ip address>
Figure 10-404 SAN Volume Controller GUI login window
2. Log in with your administrator user ID and password; the SAN Volume Controller
management home page opens. Go to the Settings → General menu, and click Upgrade Software.
3. The window that is shown in Figure 10-405 on page 845 opens.
Figure 10-405 Upgrade Software
From the window that is shown in Figure 10-405, you can click the following options:
Check for updates: Use this option to check, on the IBM website, whether there is a
SAN Volume Controller software version available that is newer than the version that
you have installed on your SAN Volume Controller. You need an Internet connection to
perform this check.
Launch Upgrade Wizard: Use this option to launch the software upgrade process.
4. Click Launch Upgrade Wizard to start the upgrade process. The window that is shown in
Figure 10-406 opens.
Figure 10-406 Upgrade Package Test utility
From the window that is shown in Figure 10-406, you can download the upgrade test utility
from the IBM website, or you can browse and upload the upgrade test utility from the
location where you saved it, as shown in Figure 10-407.
Figure 10-407 Upgrade test utility
5. When the upgrade test utility has been uploaded, the window that is shown in
Figure 10-408 opens.
Figure 10-408 Upload upgrade test utility completed
6. When you click Next (Figure 10-408), the upgrade test utility is installed. The window that
is shown in Figure 10-409 opens.
Figure 10-409 Upgrade test utility applied
7. Click Close (Figure 10-409). The window that is shown in Figure 10-410 opens. From this
window, you can run your upgrade test utility for the level that you need. Enter the version
to which you want to upgrade, and the upgrade test utility checks the system to ensure
that it is ready for an upgrade to this version.
Figure 10-410 Run upgrade test utility
8. Click Next (Figure 10-410). The window that is shown in Figure 10-411 on page 847
opens.
At this point, the upgrade test utility runs. You will see the suggested actions (if any actions
are needed) or the window that is shown in Figure 10-411 on page 847.
Figure 10-411 Upgrade test utility result
9. Close the window and click Next to start the SAN Volume Controller software upload
procedure. The window that is shown in Figure 10-412 opens.
Figure 10-412 Upgrade Package
10.From the window that is shown in Figure 10-412, you can download the SAN Volume
Controller software upgrade package directly from the IBM website, or you can browse
and upload the software upgrade package from the location where you saved it, as shown
in Figure 10-413.
Figure 10-413 Upload SAN Volume Controller software upgrade package
11.Click Open (Figure 10-413 on page 848), and you see the windows that are shown in
Figure 10-414 on page 848 and Figure 10-415 on page 848.
Figure 10-414 Uploading SAN Volume Controller software package
12.Figure 10-415 shows that the SAN Volume Controller package upload has completed.
Figure 10-415 Uploading SAN Volume Controller software package complete
13.Click Next and you see the window that is shown in Figure 10-416.
Figure 10-416 System ready for the upgrade
14.When you click Finish (Figure 10-416), the SAN Volume Controller software upgrade
starts. The window that is shown in Figure 10-417 opens.
Figure 10-417 Upgrading a node
15.When you click Close (Figure 10-417), the warning message appears that is shown in
Figure 10-418 on page 849.
Figure 10-418 Warning message
16.When you click OK (Figure 10-418), you have completed upgrading the SAN Volume
Controller software. Now, you see the window that is shown in Figure 10-419.
Figure 10-419 Upgrade in progress
17.After a few minutes, the window that is shown in Figure 10-420 opens, showing that the
first node has been upgraded.
Figure 10-420 First node is upgraded
18.Now, the process will install the new SAN Volume Controller software version on the
remaining node in the system. You can check the upgrade status, as shown in
Figure 10-420.
19.After all the nodes have been rebooted, you have completed the SAN Volume Controller
software upgrade task.
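The same upgrade can also be driven from the CLI. The following is a hedged sketch, assuming that the software package is first copied to the /home/admin/upgrade directory on the configuration node (for example, with pscp) and using the package name from this chapter:
pscp -load ITSO-SVC3 IBM_2145_INSTALL_7.2.0.0 superuser@<your system ip address>:/home/admin/upgrade
svctask applysoftware -file IBM_2145_INSTALL_7.2.0.0
svcinfo lssoftwareupgradestatus
The lssoftwareupgradestatus command reports the progress of the upgrade while the nodes are updated one at a time.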
10.17 Service Assistant Tool with the GUI
SAN Volume Controller code V6.1 introduced a new method for performing service tasks on
the system. In addition to being able to perform service tasks from the SAN Volume Controller
node front panel, you can now also service a node through an Ethernet connection using
either a web browser or command-line interface. The web browser runs a new service
application that is known as the Service Assistant Tool. Almost all of the functions that were previously possible through the front panel are now available from the Ethernet connection,
offering the benefits of an easier-to-use interface that can be used remotely from the
clustered system.
In this section, we describe useful tasks that you can perform with the new Service Assistant
Tool application using a web browser GUI.
To be able to use the SAN Volume Controller Service Assistant Tool application with the SAN
Volume Controller GUI, you must first have a service IP address that is configured for each
node of your system. For more information about how to set the SAN Volume Controller
service IP address, see 4.3.3, Configuring the Service IP Addresses on page 136.
With a supported web browser, open the following URL. The Service Assistant Tool login
window (Figure 10-421) opens:
https://<your service ip address>/service/
Important: We do not describe certain actions, because those actions must be run under
the direction of IBM Support. Do not try to perform actions of this kind without IBM Support.
Figure 10-421 Service Assistant Tool login page
Log in with your superuser password to reach the Service Assistant Tool Home page
(Figure 10-422 on page 851).
Figure 10-422 Service Assistant Tool Home page
From the Service Assistant Tool Home page (Figure 10-422), you can obtain an overview of
your SAN Volume Controller system and the node status.
You can view a detailed status and error summary and manage service actions for the current
node. The current node is the node on which service-related actions are performed. The
connected node displays the Service Assistant Tool and provides the interface for working
with other nodes on the system. To manage a different node, select the radio button on the
left of your node panel name, and the details for the selected node will be shown.
Using the pull-down menu in the Service Assistant Tool Home page, you can select which
action you want to execute in the selected node (Figure 10-423 on page 852).
Figure 10-423 Service Assistant Tool Home page: Available actions
As shown in Figure 10-423, for the selected node, you can perform these service actions:
Enter Service State
Power off
Restart
Reboot
10.17.1 Placing a SAN Volume Controller node into the service state
To place a node into a service state, select the node where the action will be performed from
the Service Assistant Tool Home page. From the pull-down menu, select Enter Service
State, and then, click GO (Figure 10-423).
A confirmation window opens (Figure 10-424 on page 853). Click OK.
Figure 10-424 Service State confirmation window
At this point, the information window opens (Figure 10-425). Wait until the node is available,
and then, click OK.
Figure 10-425 Action completed window
Now, you are returned to the Service Assistant Tool Home page. You can see the status of the
node that was just placed into the service state (Figure 10-426 on page 854). Also, note node error 690, which indicates that the node is held in the service state.
Figure 10-426 Node in the Service State
Now, you have the following choices from the Service Assistant Tool Home page pull-down
menu, as shown in Figure 10-427:
Hold Service State
Exit Service State
Power off
Restart
Reboot
Figure 10-427 Available actions
10.17.2 Exiting the service state
To take a SAN Volume Controller node out of the service state, select the node where the
action will be performed from the Service Assistant Tool Home page. From the pull-down
menu, select Exit Service State, and then, click GO (Figure 10-428).
Figure 10-428 Exit Service State action
A confirmation window opens (Figure 10-429). Then, click OK.
Figure 10-429 Confirmation window
The information window for your action opens (Figure 10-425 on page 853). Wait until the
node is available, and then, click OK.
When the node is available, the window that is shown in Figure 10-430 opens.
Figure 10-430 Exiting from the Service State
You might see a Starting status for the node; in that case, the event that is shown in the Error column is simply a regular message. Click Refresh until you can see that your node is active
and that no event is displayed in the Error column.
In our example, we used the Exit from Service State action from the Service Assistant Tool
Home page, but it is also possible to exit from a service state using the Restart action.
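The same service-state actions are also available from the service CLI by connecting with SSH to the node service IP address as the superuser. The following is a minimal sketch; the panel name is a placeholder for the node that you want to act on:
sainfo lsservicenodes
satask startservice <node_panel_name>
satask stopservice <node_panel_name>
The startservice command places the node into the service state, and stopservice releases it again, which corresponds to the Enter Service State and Exit Service State actions in the GUI.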
10.17.3 Rebooting a SAN Volume Controller node
To reboot a node, select the node where the action will be performed from the Service
Assistant Tool Home page. From the pull-down menu, select Reboot, and then, click GO
(Figure 10-431).
Figure 10-431 Reboot node action
A confirmation window opens (Figure 10-432 on page 856).
Figure 10-432 Confirmation window
On the next confirmation window, wait until the operation completes successfully, and then,
click OK (Figure 10-433).
Figure 10-433 Operation completed
From the Service Assistant Tool Home page, notice that the node that you have just rebooted
has disappeared (Figure 10-434). This node will still be visible in an offline state from the GUI
or from the SAN Volume Controller CLI.
Figure 10-434 The rebooted node has disappeared from the Service Assistant Tool
The node that you have just rebooted has to complete its reboot before it becomes visible again. Normally, a
node reboot takes about 14 minutes.
10.17.4 Collect Logs page
With the Service Assistant Tool application, you can create and download a package of log
and trace files, or you can download existing log files from the node.
The support package, which is also called SNAP, can be used by support personnel to
understand problems on the system. Unless advised by support, collect the latest statesaves.
Figure 10-435 shows the Service Assistant Tool page that you use to collect logs.
Figure 10-435 Collect Logs page
To create a support package with the last statesaves, select With latest statesave and click
Create and Download. The page that is shown in Figure 10-436 opens.
Figure 10-436 Generating support package page
Save the support package, and click OK (Figure 10-437 on page 858).
Figure 10-437 Save support package page
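A support package can also be generated from the service CLI. The following is a minimal sketch:
satask snap
The resulting snap file is written to the /dumps directory on the node and can then be downloaded from the Collect Logs page or copied off with secure copy tools such as pscp.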
10.17.5 Manage System page
On this page, you can see system configuration data for the current node (Figure 10-438).
Figure 10-438 Manage System page
10.17.6 Recover System page
You can recover the entire SAN Volume Controller system using the recovery procedure,
which is also known as T3 recovery, if system data has been lost from all nodes. The recovery
procedure recreates the system using the saved configuration data. However, it might not be
able to restore all volume data. This action cannot be performed on an active node.
To recover the SAN Volume Controller system, the node must either be a candidate node or in
the service state. Before attempting the system recovery procedure, investigate the cause of
the system failure and attempt to resolve those issues using other service procedures.
The system recovery procedure is a two-stage process:
1. Click Prepare for Recovery to search the system for the most recent backup file and
quorum drive. If this step is successful, the Recover System panel displays the details of
the backup file and quorum drive that were found. Verify that the dates and times for the
file and drive are the most recent.
Important: We do not describe this procedure, because the recover system action must
be run under the direction and guidance of IBM Support. Do not attempt this action unless
you have been asked to perform it by IBM Support. We include this description simply to
make you aware that there is a recover system process.
2. If you are satisfied with these files, click Recover to recreate the system. If the backup file
or quorum drive is not suitable for the recovery, exit the task by selecting a separate menu
option.
Figure 10-439 shows the Recover System page.
Figure 10-439 Recover System page
10.17.7 Reinstall software
You can either install a package from the support site or rescue the software from another
node that is visible on the fabric. When the node is added to a SAN Volume Controller
clustered system, the software level on the node is updated to match the level of the system
software. This action cannot be performed on an active node.
To reinstall the software, the node must either be a candidate node or in the service state.
During the reinstallation, the node becomes unavailable. If the connected node and the
current node are the same, the connection to the Service Assistant Tool might be lost.
Figure 10-440 on page 860 shows the Re-install Software page. On this page, clicking Check
for software updates directs you to the IBM website, where you can obtain any available
update for the SAN Volume Controller software, as shown in the following link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code
Important: If the connected node and the current node are the same, the connection to
the node can be lost.
Figure 10-440 Re-install Software page
10.17.8 Upgrade Manually page
During a standard upgrade procedure, the system upgrades each of the nodes systematically.
The standard upgrade procedure is the recommended method for upgrading software on
nodes. However, to provide more flexibility in the upgrade process, you can also upgrade
each node individually.
When upgrading the software manually, you remove a node from the system, upgrade the
software on the node, and return the node to the system. You repeat this process for the
remaining nodes until the last node is removed from the system. At this point the remaining
nodes switch to running the new software. When the last node is returned to the system, it
upgrades and runs the new level of software. This action cannot be performed on an active
node. To upgrade software manually, the nodes must either be candidate nodes or in service
state. During this procedure, every node must be upgraded to the same software level. You
cannot interrupt the upgrade and switch to installing a different software level. During the
upgrade, the node becomes unavailable.
Figure 10-441 on page 861 shows the Upgrade Manually page.
Important: We do not describe this procedure, because the reinstallation of software
action must be run under the direction of IBM support. Do not try to perform this action
unless you are guided by IBM support.
Note: If the connected node and the current node are the same, the connection to the
service assistant is lost.
Figure 10-441 Upgrade Manually page
10.17.9 Modify WWNN page
You can verify that the worldwide node name (WWNN) for the node is consistent. Only
change the WWNN if directed to do so in the service procedures. This action cannot be
performed on an active node. To modify the WWNN, the node must either be a candidate
node or in the service state.
Figure 10-442 shows the Modify WWNN page.
Figure 10-442 Modify WWNN page
10.17.10 Change Service IP page
You can set the service IP address that is assigned to Ethernet port 1 for the current node.
This IP address is used to access the Service Assistant Tool and the service command line.
All nodes in the system have different service IP addresses.
Important: Only change the WWNN if directed to do so in the service procedures.
If the connected node and the current node are the same, the connection to the Service
Assistant Tool might be lost. To regain access to the Service Assistant Tool, log in to the
Service Assistant Tool using the new service IP address.
Figure 10-443 shows the Change Service IP page.
Figure 10-443 Change Service IP page
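The service IP address can also be changed from the service CLI. The following is a minimal sketch with placeholder addresses:
satask chserviceip -serviceip 10.18.228.101 -gw 10.18.228.1 -mask 255.255.255.0
Remember that, as noted above, changing the address of the node that you are connected to drops the connection; log in again using the new service IP address.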
10.17.11 Configure CLI Access page
Use this panel if a valid superuser SSH key is not available for either a node that is currently
in the service state or a candidate node. You can use the SSH key to temporarily gain access
to the CLI or to use secure copy tools, such as Secure Copy Protocol (SCP). The key is
removed when the node is restarted or rejoins a SAN Volume Controller clustered system.
Figure 10-444 shows the Configure CLI Access page.
Figure 10-444 Configure CLI Access page
10.17.12 Restart Service page
With the Service Assistant Tool application, you can restart any of the following services:
CIMOM
Web Server (Tomcat)
Easy Tier
Service Location Protocol Daemon (SLPD)
Secure Shell Daemon (SSHD)
Figure 10-445 on page 863 shows the Restart Service page.
Figure 10-445 Restart Service page
Appendix A. Performance data and statistics
gathering
In this appendix, we provide a brief overview of the performance analysis capabilities of the
IBM System Storage SAN Volume Controller 7.2. We also describe a method that you can
use to collect and process SAN Volume Controller performance statistics.
It is beyond the scope of this book to provide an in-depth understanding of performance
statistics or explain how to interpret them. For a more comprehensive look at the performance
of the IBM System Storage SAN Volume Controller, see SAN Volume Controller Best
Practices and Performance Guidelines, SG24-7521, which is available at this site:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
SAN Volume Controller performance overview
Although storage virtualization with SAN Volume Controller provides many administrative
benefits, it can also provide a substantial increase in performance for various workloads. The
caching capability of the SAN Volume Controller and its ability to stripe volumes across
multiple disk arrays can provide a significant performance improvement over what can
otherwise be achieved when using midrange disk subsystems.
To ensure that the desired performance levels of your system are maintained, monitor
performance periodically to provide visibility to potential problems that exist or are developing
so that they can be addressed in a timely manner.
Performance considerations
When designing a SAN Volume Controller storage infrastructure or maintaining an existing
infrastructure, you need to consider many factors in terms of their potential effect on
performance. These factors include but are not limited to: dissimilar workloads competing for
the same resources; overloaded resources; insufficient resources; poorly performing resources; and similar performance constraints.
Remember the following high-level rules when designing your SAN and SAN Volume
Controller layout:
Host-to-SAN Volume Controller inter-switch link (ISL) oversubscription: This area is the
most significant I/O load across ISLs. The recommendation is to maintain a maximum of
7-to-1 oversubscription. Going higher is possible, but it tends to lead to I/O bottlenecks.
This suggestion also assumes a core-edge design, where the hosts are on the edge and
the SAN Volume Controller is on the core.
Storage-to-SAN Volume Controller ISL oversubscription: This area is the second most
significant I/O load across ISLs. The maximum oversubscription is 7-to-1. Going higher is
not supported. Again, this suggestion assumes a multiple-switch SAN fabric design.
Node-to-node ISL oversubscription: This area is the least significant load of the three
possible oversubscription bottlenecks. In standard setups, this load can be ignored; while
it is not entirely negligible, it does not contribute significantly to ISL load. However, it is
mentioned here regarding the split-cluster capability that was made available with 6.3.0.
When running in this manner, the number of ISL links becomes much more important. As
with the Storage-to-SAN Volume Controller ISL oversubscription, this load also has a
requirement for a maximum of 7-to-1 oversubscription. Exercise caution and careful
planning when you determine the number of ISLs to implement. If you need additional
assistance, we recommend that you contact your IBM representative and request
technical assistance.
ISL trunking/port channeling: For the best performance and availability, we highly
recommend that you use ISL trunking/port channeling. Independent ISL links can easily
become overloaded and turn into performance bottlenecks. Bonded or trunked ISLs
automatically share load and provide better redundancy in the case of a failure.
Number of paths per host multipath device: The maximum supported number of paths per
multipath device that is visible on the host is eight. Although the Subsystem Device Driver
Path Control Module (SDDPCM), related products, and most vendor multipathing software
can support more paths, the SAN Volume Controller expects a maximum of eight paths. In general, using more than eight paths provides no performance benefit and can degrade performance. Although the
SAN Volume Controller can work with more than eight paths, this design is technically
unsupported.
Do not intermix dissimilar array types or sizes: Although the SAN Volume Controller
supports an intermix of differing storage within storage pools, it is best to always use the
same array model, same RAID mode, same RAID size (RAID 5 6+P+S does not mix well
with RAID 6 14+2), and same drive speeds.
Rules and guidelines are no substitution for monitoring performance. Monitoring performance
can both provide a validation that design expectations are met and identify opportunities for
improvement.
SAN Volume Controller performance perspectives
The SAN Volume Controller is a combination product that consists of software and hardware.
The software was developed by the IBM Research Group and was designed to run on
commodity hardware (mass-produced Intel-based CPUs with mass-produced expansion
cards), while providing distributed cache and a scalable cluster architecture. One of the main goals of this design was to be able to take advantage of hardware refreshes. Currently, the SAN Volume
Controller cluster is scalable up to eight nodes and these nodes can be swapped for newer
hardware while online. This capability provides a great investment value, because the nodes
are relatively inexpensive and a node swap can be done online. This capability provides an
instant performance boost with no license changes. Newer nodes, such as CF8 or CG8
models, which dramatically increased cache from 8 GB to 24 GB per node, provide an extra
benefit on top of the typical refresh cycle. The following link provides the node
replacement/swap and node addition instructions:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437
The performance is near linear when adding additional nodes into the cluster until
performance eventually becomes limited by the attached components. Also, while
virtualization with the SAN Volume Controller provides significant flexibility in terms of the
components that are used, it does not diminish the necessity of designing the system around
the components so that it can deliver the desired level of performance. The key item for
planning is your SAN layout. Switch vendors have slightly different planning requirements, but
the end goal is that you always want to maximize the bandwidth that is available to the SAN
Volume Controller ports. The SAN Volume Controller is one of the few devices capable of
driving ports to their limits on average, so it is imperative that you put significant thought into
planning the SAN layout.
Essentially, SAN Volume Controller performance improvements are gained by spreading the
workload across a greater number of back-end resources and additional caching that are
provided by the SAN Volume Controller cluster. Eventually, however, the performance of
individual resources becomes the limiting factor.
Performance monitoring
In this section, we highlight several performance monitoring techniques.
Collecting performance statistics
The SAN Volume Controller is constantly collecting performance statistics. By default, statistics files are created at 5-minute intervals; prior to 4.3.0, the default was 15-minute intervals, with a supported range of 15 - 60 minutes. The collection interval can be
changed using the svctask startstats command. The statistics files (VDisk, MDisk, and
Node) are saved at the end of the sampling interval and a maximum of 16 files (each) are
stored before they are overlaid in a rotating log fashion. This design provides statistics for the
most recent 80-minute period if using the default five-minute sampling interval. The SAN Volume Controller supports user-defined sampling intervals from 1 to 60 minutes. The
maximum space that is required for a performance statistics file is 1,153,482 bytes. There can
be up to 128 (16 per each of the three types across eight nodes) different files across eight
SAN Volume Controller nodes. This design makes the total space requirement a maximum of
147,645,694 bytes for all performance statistics from all nodes in a SAN Volume Controller
cluster. Make note of this maximum when you are in time-critical situations. The required size
is not otherwise important, because the SAN Volume Controller node hardware can easily accommodate these files.
You can define the sampling interval by using the svctask startstats -interval 2
command to collect statistics at 2-minute intervals; see 9.9.7, Starting statistics collection on
page 527.
Since SAN Volume Controller 5.1.0, cluster-level statistics are no longer supported. Instead,
use the per node statistics that are collected. The sampling of the internal performance
counters is coordinated across the cluster so that when a sample is taken, all nodes sample
their internal counters at the same time. It is important to collect all files from all nodes for a
complete analysis. Tools, such as Tivoli Storage Productivity Center, perform this intensive
data collection for you.
Statistics file naming
The files that are generated are written to the /dumps/iostats/ directory. The file name is in
the following formats:
Nm_stats_<node_frontpanel_id>_<date>_<time> for MDisk statistics
Nv_stats_<node_frontpanel_id>_<date>_<time> for VDisks statistics
Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics
Nd_stats_<node_frontpanel_id>_<date>_<time> for disk drive statistics, not used for SAN
Volume Controller
The node_frontpanel_id is the front panel ID of the node on which the statistics were collected. The date is in
the form <yymmdd> and the time is in the form <hhmmss>. The following example shows an
MDisk statistics file name:
Nm_stats_104603_111003_094739
Example A-1 shows typical MDisk, volume, node, and disk drive statistics file names.
Example A-1 File names of per node statistics
IBM_2145:ITSO_SVC3:superuser>svcinfo lsiostatsdumps
id iostat_filename
0 Nn_stats_104603_111003_094739
1 Nd_stats_104603_111003_094739
2 Nv_stats_104603_111003_094739
3 Nm_stats_104603_111003_094739
4 Nn_stats_104603_111003_100238
5 Nv_stats_104603_111003_100238
6 Nm_stats_104603_111003_100238
Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens within the SAN Volume Controller, they shorten the amount of time
that the historical data is available on the SAN Volume Controller. For example, instead of
an 80-minute period of data with the default five-minute interval, if you adjust to 2-minute
intervals, you have a 32-minute period instead.
7 Nd_stats_104603_111003_100238
8 Nm_stats_104603_111003_101736
9 Nv_stats_104603_111003_101736
10 Nd_stats_104603_111003_101736
11 Nn_stats_104603_111003_101736
12 Nn_stats_104603_111003_103235
13 Nm_stats_104603_111003_103235
14 Nv_stats_104603_111003_103235
15 Nd_stats_104603_111003_103235
16 Nn_stats_104603_111003_104734
The performance statistics files are in .xml format. They can be manipulated using various
tools and techniques. An example of a tool that you can use to analyze these files is the SAN
Volume Controller Performance Monitor (svcmon).
You can obtain this tool from the following website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177
Figure A-1 shows an example of the type of chart that you can produce using the SAN
Volume Controller performance statistics.
Tip: The performance statistics files can be copied from the SAN Volume Controller nodes
to a local drive on your workstation using the pscp.exe (included with PuTTY) from an
MS-DOS command line, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load ITSO-SVC3
[email protected]:/dumps/iostats/* c:\statsfiles
Use the -load parameter to specify the session that is defined in PuTTY.
Specify the -unsafe parameter when you use wildcards.
svcmon tool: The svcmon tool is not an officially supported tool. It is provided on an as is
basis.
Figure A-1 Spreadsheet example
Real-time performance monitoring
Starting with Version 6.2.0, SAN Volume Controller supports real-time performance
monitoring. Real-time performance statistics provide short-term status information for the
SAN Volume Controller. The statistics are shown as graphs in the management GUI or can be
viewed from the command-line interface (CLI). With system-level statistics, you can quickly
view the CPU utilization and the bandwidth of volumes, interfaces, and MDisks. Each graph
displays the current bandwidth in either megabytes per second (MBps) or I/Os per second
(IOPS), as well as a view of bandwidth over time. Each node collects various performance statistics, mostly at 5-second intervals, and the statistics are available from the config
node in a clustered environment. This information can help you determine the performance
effect of a specific node. As with system statistics, node statistics help you to evaluate
whether the node is operating within normal performance metrics.
Real-time performance monitoring gathers the following system-level performance statistics:
CPU utilization
Port utilization and I/O rates
Volume and MDisk I/O rates
Bandwidth
Latency
Real-time statistics are not a configurable option and cannot be disabled.
Real-time performance monitoring with the CLI
The following commands are available for monitoring the statistics through the CLI:
lsnodestats and lssystemstats. Next, we show you examples of how to use them.
The lsnodestats command provides performance statistics for the nodes that are part of a
clustered system, as shown in Example A-2 (the output is truncated and shows only part of the available statistics). You can also specify a node name in the command to limit the output to a specific node, as shown in the short example after the column descriptions.
Example A-2 lsnodestats command output
IBM_2145:ITSO_SVC3:superuser>lsnodestats
node_id node_name stat_name stat_current stat_peak stat_peak_time
1 ITSOSVC3N1 cpu_pc 1 2 111003154220
1 ITSOSVC3N1 fc_mb 0 9 111003154220
1 ITSOSVC3N1 fc_io 1724 1799 111003153930
...
2 ITSOSVC3N2 cpu_pc 1 1 111003154246
2 ITSOSVC3N2 fc_mb 0 0 111003154246
2 ITSOSVC3N2 fc_io 1689 1770 111003153857
...
The previous example shows statistics for the two node members of cluster ITSO_SVC3:
nodes ITSOSVC3N1 and ITSOSVC3N2. For each node, the following columns are displayed:
stat_name: Provides the name of the statistic field
stat_current: The current value of the statistic field
stat_peak: The peak value of the statistic field in the last 5 minutes
stat_peak_time: The time that the peak occurred
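To limit the output to a single node, supply the node name (or ID) as an argument, as in this short example that uses the node name from Example A-2:
IBM_2145:ITSO_SVC3:superuser>lsnodestats ITSOSVC3N1
The columns are the same as in Example A-2, but only the statistics for the specified node are listed.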
In contrast, the lssystemstats command lists the same set of statistics as the lsnodestats command, but represents all nodes in the cluster. The values for these
statistics are calculated from the node statistics values in the following way:
Bandwidth: Sum of the bandwidth of all nodes
Latency: Average latency for the cluster, which is calculated using data from the whole
cluster, not an average of the single node values
IOPS: Total IOPS of all nodes
CPU percentage: Average CPU percentage of all nodes
Example A-3 shows the resulting output of the lssystemstats command.
Example A-3 lssystemstats command output
IBM_2145:ITSO_SVC3:superuser>lssystemstats
stat_name stat_current stat_peak stat_peak_time
cpu_pc 1 1 111003160859
fc_mb 0 0 111003160859
fc_io 1291 1420 111003160504
...
Table A-1 on page 871 has a brief description of each of the statistics presented by the
lssystemstats and lsnodestats commands.
Table A-1 lssystemstats and lsnodestats statistics field name descriptions
Field name Unit Description
cpu_pc Percentage Utilization of node CPUs
fc_mb MBps Fibre Channel bandwidth
fc_io IOPS Fibre Channel throughput
sas_mb MBps SAS bandwidth
sas_io IOPS SAS throughput
iscsi_mb MBps IP-based Small Computer System Interface (iSCSI)
bandwidth
iscsi_io IOPS iSCSI throughput
write_cache_pc Percentage Write cache fullness. Updated every ten seconds.
total_cache_pc Percentage Total cache fullness. Updated every ten seconds.
vdisk_mb MBps Total VDisk bandwidth
vdisk_io IOPS Total VDisk throughput
vdisk_ms Milliseconds Average VDisk latency
mdisk_mb MBps MDisk (SAN and RAID) bandwidth
mdisk_io IOPS MDisk (SAN and RAID) throughput
mdisk_ms Milliseconds Average MDisk latency
drive_mb MBps Drive bandwidth
drive_io IOPS Drive throughput
drive_ms Milliseconds Average drive latency
vdisk_w_mb MBps VDisk write bandwidth
vdisk_w_io IOPS VDisk write throughput
vdisk_w_ms Milliseconds Average VDisk write latency
mdisk_w_mb MBps MDisk (SAN and RAID) write bandwidth
mdisk_w_io IOPS MDisk (SAN and RAID) write throughput
mdisk_w_ms Milliseconds Average MDisk write latency
drive_w_mb MBps Drive write bandwidth
drive_w_io IOPS Drive write throughput
drive_w_ms Milliseconds Average drive write latency
vdisk_r_mb MBps VDisk read bandwidth
vdisk_r_io IOPS VDisk read throughput
vdisk_r_ms Milliseconds Average VDisk read latency
mdisk_r_mb MBps MDisk (SAN and RAID) read bandwidth
mdisk_r_io IOPS MDisk (SAN and RAID) read throughput
mdisk_r_ms Milliseconds Average MDisk read latency
drive_r_mb MBps Drive read bandwidth
drive_r_io IOPS Drive read throughput
drive_r_ms Milliseconds Average drive read latency
Real-time performance monitoring with the GUI
The real-time statistics are also available from the SAN Volume Controller GUI. Navigate to
Monitoring → Performance, as shown in Figure A-2, to open the Performance monitoring window.
Figure A-2 SAN Volume Controller Monitoring menu
The Performance monitoring window, as shown in Figure A-3 on page 874, is divided into
four sections that provide utilization views for the following resources:
CPU Utilization
Shows the overall CPU usage %
Volumes
Shows the overall volume utilization with the following fields:
Read
Write
Read latency
Write latency
Interfaces
Shows the overall statistics for each of the available interfaces:
Fibre Channel
iSCSI
SAS
MDisks
Shows the following overall statistics for the MDisks:
Read
Write
Read latency
Write latency
Figure A-3 Performance monitoring window
You can also select to view performance statistics for each of the available nodes of the
system, as shown in Figure A-4.
Figure A-4 Select system node
It is also possible to change the metric between MBps and IOPS (Figure A-5).
Figure A-5 Changing to MBps
On any of these views, you can select any point in time with your cursor to know the exact
value and when it occurred. As soon as you place your cursor over the timeline, it becomes a
dotted line with the various values gathered (Figure A-6).
Figure A-6 Detailed resource utilization
For each of the resources, there are various values that you can view by selecting the check
box next to a value. For example, for the MDisks view, as shown in Figure A-7, the four
available fields are selected: Read, Write, Read latency, and Write latency.
Figure A-7 Detailed resource utilization
Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, using .xml files is a less
practical and less user-friendly method to analyze the SAN Volume Controller performance
statistics. Tivoli Storage Productivity Center for Disk is the supported IBM tool to collect and
analyze SAN Volume Controller performance statistics.
Tivoli Storage Productivity Center for Disk comes preinstalled on the System Storage
Productivity Center Console and can be made available by activating the license.
For more information about using Tivoli Storage Productivity Center to monitor your storage
subsystem, see SAN Storage Performance Management Using Tivoli Storage Productivity
Center, SG24-7364, which is available at the following website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
New SAN Volume Controller port quality statistics: Tivoli Storage Productivity Center
for Disk Version 4.2.1 supports the new SAN Volume Controller port quality statistics that
are provided in SAN Volume Controller Versions 4.3 and later.
Monitoring these metrics, in addition to the performance metrics, can help you to maintain
a stable SAN environment.
Appendix B. Terminology
In this appendix, we define the terms that relate to the IBM System Storage SAN Volume
Controller and that are commonly used in this book.
To see the complete set of terms that relate to the SAN Volume Controller, see the Glossary
section of the SAN Volume Controller Information Center, which is available at the following
website:
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
Commonly encountered terms
This book includes the following SAN Volume Controller-related terminology.
Auto Data Placement Mode
Auto Data Placement Mode is an Easy Tier operating mode in which the host activity on all the volume extents in a pool is measured, a migration plan is created, and then automatic
extent migration is performed.
Back-end
See front-end and back-end.
Channel extender
A channel extender is a device that is used for long-distance communication connecting other
SAN fabric components. Generally, channel extenders can involve protocol conversion to
asynchronous transfer mode (ATM), Internet Protocol (IP), or another long-distance
communication protocol.
Clustered system (SAN Volume Controller)
A clustered system, which was known as a cluster, is a group of up to eight SAN Volume
Controller nodes that presents a single configuration, management, and service interface to
the user.
Cold extent
A cold extent is an extent of a volume that does not get any performance benefit if moved
from a hard disk drive (HDD) to a solid-state drive (SSD). A cold extent also refers to an
extent that needs to be migrated onto an HDD if it currently resides on an SSD.
Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets
that are maintained with the same time reference so that all copies are consistent in time. A
Consistency Group can be managed as a single entity.
Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the copy
relationship was created. The Copied state indicates that the copy process is complete, and
the target disk has no further dependence on the source disk. The time of the last trigger
event is normally displayed with this status.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages the information that describes the
cluster configuration, and it provides a focal point for configuration commands. If the
configuration node fails, another node in the cluster transparently assumes that role.
Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN
provides all of the connectivity of the redundant SAN, but without the 100% redundancy. SAN
Volume Controller nodes are typically connected to a redundant SAN that is made up of two
counterpart SANs. A counterpart SAN is often called a SAN fabric.
Disk tier
It is likely that the MDisks (LUNs) that are presented to the SAN Volume Controller cluster
have different performance attributes due to the type of disk or RAID array on which they
reside. The MDisks can be on 15K RPM Fibre Channel (FC) or serial-attached SCSI (SAS)
disk, Nearline SAS or Serial Advanced Technology Attachment (SATA), or even SSDs. Thus,
a storage tier attribute is assigned to each MDisk, and the default is generic_hdd. SAN
Volume Controller 6.1 introduced a new disk tier attribute for SSDs, which is known as
generic_ssd.
Directed Maintenance Procedures
The fix procedures, which are also known as Directed Maintenance Procedures (DMPs),
ensure that you fix any outstanding errors in the error log. To do so, from the Monitoring
panel, click Events. The Next Recommended Action is displayed at the top of the Events
window. Select Run This Fix Procedure, and follow the instructions.
Easy Tier
Easy Tier is a volume performance function within the SAN Volume Controller that provides
automatic data placement of a volume's extents in a multitiered storage pool. The pool normally contains a mix of SSDs and HDDs. Easy Tier measures host I/O activity on the volume's extents and migrates hot extents onto the SSDs to ensure the maximum
performance.
Evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the volume extents in a pool is measured only. No automatic extent migration is performed.
Event (error)
An event is an occurrence of significance to a task or system. Events can include the
completion or failure of an operation, user action, or a change in the state of a process. Prior
to SAN Volume Controller V6.1, this situation was known as an error.
Event code
An event code is a value that is used to identify an event condition to a user. This value might
map to one or more event IDs or to values that are presented on the service panel. This value
is used to report error conditions to IBM and to provide an entry point into the service guide.
Event ID
An event ID is a value that is used to identify a unique error condition that was detected by the
SAN Volume Controller. An event ID is used internally in the cluster to identify the error.
Excluded
The excluded condition is a status condition. It describes an MDisk that the SAN Volume
Controller decided is no longer sufficiently reliable to be managed by the cluster. The user
must issue a command to include the MDisk in the cluster-managed storage.
Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The size of the extent can range from 16 MB to 8 GB.
FC port logins
FC port logins refer to the number of hosts that can see any one SAN Volume Controller node
port. The SAN Volume Controller has a maximum limit on the number of FC logins that are allowed per node port.
Front-end and back-end
The SAN Volume Controller takes MDisks to create pools of capacity from which volumes are
created and presented to application servers (hosts). The MDisks reside in the controllers at the back end of the SAN Volume Controller (SAN Volume Controller to back-end controller zones). The volumes that are presented to the hosts reside at the front end of the SAN Volume Controller (SAN Volume Controller to host zones).
Field replaceable units
Field replaceable units (FRUs) are individual parts that are replaced entirely when any one of the unit's components fails. They are held as spares by the IBM service organization.
Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap
(64 KB/256 KB) in the SAN Volume Controller. It is also the unit by which the real size of a thin-provisioned volume is extended (32 KB, 64 KB, 128 KB, or 256 KB).
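Because the FlashCopy bitmap uses one bit per grain, you can estimate the bitmap size that a given volume and grain size imply. The following minimal Python sketch only illustrates that arithmetic; the helper name and sample values are ours, not part of the product:

# Illustrative only: one bit per grain, eight grains per bitmap byte.
def flashcopy_bitmap_bytes(volume_bytes, grain_bytes=256 * 1024):
    grains = (volume_bytes + grain_bytes - 1) // grain_bytes   # round up to whole grains
    return (grains + 7) // 8                                   # bits to bytes, rounded up

# Example: a 2 TiB volume with a 256 KB grain implies roughly 1 MiB of bitmap space.
print(flashcopy_bitmap_bytes(2 * 1024**4))   # 1048576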
Host bus adapter (HBA)
A host bus adapter (HBA) is an interface card that connects a server to the SAN environment
via its internal bus system, for example, PCI Express.
Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or iSCSI host
names for LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to
volumes. The intent is to have a one-to-one relationship between hosts and host IDs,
although this relationship cannot be policed.
Host mapping
Host mapping refers to the process of controlling which hosts have access to specific
volumes within a cluster (it is equivalent to LUN masking). Prior to SAN Volume Controller
V6.1, this process was known as VDisk-to-Host mapping.
Hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if moved
from an HDD onto an SSD.
Internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in
enclosures and in nodes that are part of the SAN Volume Controller cluster.
iSCSI qualified name (IQN)
IQN refers to special names that identify both iSCSI initiators and targets. One of the three
name formats that iSCSI provides is IQN. The format is iqn.yyyy-mm.{reversed domain name}. For example, the default for a SAN Volume Controller node is:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
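As a small illustration only (the helper and the sample names are hypothetical, and any case or character restrictions that apply to real IQNs are not covered here), the default format can be assembled like this:

def default_svc_iqn(cluster_name, node_name):
    # Substitutes the cluster and node names into the default format shown above.
    return "iqn.1986-03.com.ibm:2145.{}.{}".format(cluster_name, node_name)

print(default_svc_iqn("itsosvccl1", "node1"))   # iqn.1986-03.com.ibm:2145.itsosvccl1.node1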
Internet storage name service (iSNS)
iSNS refers to the Internet storage name service (iSNS) protocol that is used by a host
system to manage iSCSI targets and the automated iSCSI discovery, management, and
configuration of iSCSI and FC devices. It was defined in Request for Comments (RFC) 4171.
Image mode
The image mode is an access mode that establishes a one-to-one mapping of extents in an
existing LUN or (image mode) MDisk with the extents in a volume.
I/O Group
Each pair of SAN Volume Controller cluster nodes is known as an input/output (I/O) group. An
I/O Group has a set of volumes associated with it that are presented to host systems. Each
SAN Volume Controller node is associated with exactly one I/O Group. The nodes in an I/O
Group provide a failover and failback function for each other.
Inter-switch link (ISL) hop
An inter-switch link (ISL) is a connection between two switches and is counted as one ISL
hop. The number of hops is always counted on the shortest route between two N-ports
(device connections). In a SAN Volume Controller environment, the number of ISL hops is
counted on the shortest route between the pair of nodes farthest apart. The SAN Volume
Controller recommends maximum hops for certain fabric paths.
Local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect
the components (nodes, hosts, and switches) of the local cluster together.
Local and remote fabric interconnect
The local fabric interconnect and the remote fabric interconnect are the SAN components that
are used to connect the local and remote fabrics. Depending on the distance between
the two fabrics, they can be single-mode optical fibers that are driven by long wave (LW)
gigabit interface converters (GBICs) or small form-factor pluggables (SFPs), or more
sophisticated components, such as channel extenders or special SFP modules that are used
to extend the distance between SAN components.
Logical unit (LU) and logical unit number (LUN)
LUN is defined by the SCSI standards as a logical unit number. A LUN identifies a logical unit (LU), which is an entity that exhibits disk-like behavior, for example, a volume or an MDisk.
Managed disk (MDisk)
An MDisk is a SCSI disk that is presented by a RAID controller and that is managed by the
SAN Volume Controller. The MDisk is not visible to host systems on the SAN.
Managed Disk Group (storage pool)
See storage pool.
Master Console (MC)
The Master Console is a SAN Volume Controller term that refers to the System Storage
Productivity Center server that runs optional software and assists in the management of the
SAN Volume Controller.
Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary
physical copy is known within the SAN Volume Controller as copy 0, and the secondary
copy is known within the SAN Volume Controller as copy 1.
Node
A SAN Volume Controller node is a hardware entity that provides virtualization, cache, and
copy services for the cluster. SAN Volume Controller nodes are deployed in pairs called I/O
Groups. One node in a clustered system is designated as the configuration node.
Oversubscription
The term oversubscription refers to the ratio of the sum of the traffic on the initiator N-port
connections to the traffic on the most heavily loaded ISL, where more than one connection is used between these switches.
specific workload applied equally from all initiators and sent equally to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.
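To make the ratio concrete, the following sketch applies the definition to illustrative numbers; the helper name and values are ours and do not represent a sizing rule:

def oversubscription(initiator_gbps, isl_gbps):
    # Sum of initiator N-port traffic divided by the traffic on the most heavily loaded ISL.
    return sum(initiator_gbps) / max(isl_gbps)

# Example: four initiators driving 4 Gbps each over a single 8 Gbps ISL gives a ratio of 2.0.
print(oversubscription([4, 4, 4, 4], [8]))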
Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The
preparing phase flushes a volume's data from cache in preparation for the FlashCopy
operation.
Real-time Compression
Real-time Compression is an IBM integrated software function for storage space efficiency. The Random Access Compression Engine (RACE) compresses data on volumes in real time with minimal impact on performance.
Reliability, availability, and serviceability (RAS)
RAS stands for reliability, availability, and serviceability.
Redundant Array of Independent Disks (RAID)
RAID stands for Redundant Array of Independent Disks: two or more physical disk drives are combined in an array in a certain way, incorporating a RAID level for either failure protection or better performance. The most common RAID levels are 0, 1, 5, 6, and 10.
RAID 0
RAID 0 is a data striping technique that is used across an array, and no data protection is
provided.
RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or more identical
copies of data are maintained on separate mirrored disks.
RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Thus, two identical
copies of striped data exist; there is no parity.
RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity drive. The
parity check data is distributed across all the disks of the array.
RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with
different algorithms. This level, therefore, can continue to process read and write requests to all of the array's virtual disks in the presence of two concurrent disk failures.
Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so
no matter what component fails, data traffic continues. Connectivity between the devices within the SAN is maintained, although possibly with degraded performance, when an error occurs. A redundant SAN design is normally achieved by splitting the SAN into two
independent counterpart SANs (two SAN fabrics), so that if one path of the counterpart SAN
is destroyed, the other counterpart SAN path keeps functioning.
Remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that
connect the components (nodes, hosts, and switches) of the remote cluster together. There
can be significant distances between the components in the local cluster and those
components in the remote cluster.
Storage area network (SAN)
SAN stands for storage area network.
SAN Volume Controller
The IBM System Storage SAN Volume Controller is an appliance that is designed for
attachment to various host computer systems. The SAN Volume Controller performs
block-level virtualization of disk storage.
Small Computer Systems Interface (SCSI)
SCSI stands for Small Computer Systems Interface.
Service Location Protocol
The Service Location Protocol (SLP) is an Internet service discovery protocol that allows
computers and other devices to find services in a local area network (LAN) without prior
configuration. It was defined in RFC 2608.
Solid-state disk (SSD)
A solid-state disk (SSD) is a disk that is made from solid-state memory and thus has no
moving parts. Most SSDs use NAND-based flash memory technology. It is defined to the
SAN Volume Controller as a disk tier generic_ssd.
Storage pool (Managed Disk group)
A storage pool is a collection of storage capacity that is made up of MDisks, which provides
the pool of storage capacity for a specific set of volumes. A storage pool can contain more
than one tier of disk, which is known as a multitier storage pool, and is a prerequisite of Easy
Tier automatic data placement. Prior to SAN Volume Controller V6.1, this storage pool was
known as a Managed Disk Group (MDG).
System Storage Productivity Center
IBM System Storage Productivity Center (SSPC) is a hardware server on which many
software products are preinstalled. The required storage management products are activated
or enabled through licenses. The SSPC can be used to manage the SAN Volume Controller
and DS8000 products.
Thin provisioning
Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with
a logical capacity size that is larger than the actual physical capacity that is assigned to that
pool or volume. Thus, a thin-provisioned volume is a volume with a virtual capacity that differs
from its real capacity. Prior to SAN Volume Controller V6.1, this thin-provisioned volume was
known as space efficient.
Volume
A volume is a SAN Volume Controller logical device that appears to host systems that are
attached to the SAN as a SCSI disk. Each volume is associated with exactly one I/O Group. A
volume has a preferred node within the I/O Group. Prior to SAN Volume Controller 6.1, this
volume was known as a VDisk or virtual disk.
Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes
have two copies. Non-mirrored volumes have one copy.
Appendix C. SAN Volume Controller Stretched
Cluster
In this appendix, we briefly describe the IBM System Storage SAN Volume Controller
Stretched Cluster (formerly known as Split I/O Group). We also explain the term Enhanced
Stretched Cluster.
We do not provide deep technical details or implementation guidelines as those are fully
covered in IBM SAN and SVC Enhanced Stretched Cluster and VMware Solution
Implementation, SG24-8211.
For more information about Enhanced Stretched Cluster prerequisites, visit:
http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp
Stretched cluster overview
Starting with SAN Volume Controller 5.1, IBM introduced General Availability (GA) support
for SAN Volume Controller node distribution across two independent locations up to 10 km
(6.2 miles) apart.
With SAN Volume Controller 6.3, IBM offers significant enhancements for a Split I/O Group in
one of two configurations:
No inter-switch link (ISL) configuration:
Passive Wavelength Division Multiplexing (WDM) devices can be used between both
sites.
No ISLs can be used between SAN Volume Controller nodes (similar to the SAN
Volume Controller 5.1 supported configuration).
The distance can be extended up to 40 km (24.8 miles).
ISL configuration:
ISLs are allowed between SAN Volume Controller nodes (not allowed with releases
earlier than 6.3).
The maximum distance is similar to Metro Mirror (MM) distances.
The physical requirements are similar to MM requirements.
ISL distance extension is allowed with active and passive WDM devices.
This chapter explores the characteristics of both configurations.
SAN Volume Controller 7.1 further extends the options for the Enhanced Stretched Cluster with automatic selection of quorum disks and placement of one quorum disk in each of the three sites. Users can still manually select quorum disks in each of the three sites if they prefer.
Non-ISL configuration
In a non-ISL configuration, each SAN Volume Controller I/O Group consists of two
independent SAN Volume Controller nodes. In contrast to a standard SAN Volume Controller
environment, nodes from the same I/O Group are not placed close together; instead, they are distributed across two sites. If a node fails, the other node in the same I/O Group takes over
the workload, which is standard in a SAN Volume Controller environment.
Volume mirroring provides a consistent data copy in both sites. If one storage subsystem fails,
the remaining subsystem processes the I/O requests. The combination of SAN Volume
Controller node distribution in two independent data centers and a copy of data in two
independent data centers creates a new level of availability: Stretched Cluster.
Even if all SAN Volume Controller nodes and the storage system in a single site fail, the other SAN Volume Controller nodes take over the server load by using the remaining storage
subsystems. The volume ID, behavior, and assignment to the server are still the same. No
server reboot, no failover scripts, and thus no script maintenance are required.
However, you must consider that a Stretched Cluster typically requires a special setup and
might exhibit substantially reduced performance. In a Stretched Cluster environment, the
SAN Volume Controller nodes from the same I/O Group reside in two sites. A third quorum
location is required for handling split brain scenarios.
Figure C-1 on page 887 shows an example of a non-ISL Stretched Cluster configuration as it
is supported in SAN Volume Controller V5.1.
Figure C-1 Standard SAN Volume Controller 5.1 environment using volume mirroring
The Stretched Cluster uses the SAN Volume Controller volume mirroring functionality. Volume
mirroring allows creation of one volume with two copies of MDisk extents; there are not two
volumes with the same data on them. The two data copies can be in different MDisk groups.
Thus, volume mirroring can minimize the effect on volume availability if one set of MDisks
goes offline. The resynchronization between both copies after recovering from a failure is
incremental; SAN Volume Controller starts the resynchronization process automatically.
Like a standard volume, each mirrored volume is owned by one I/O Group with a preferred
node. Thus, the mirrored volume goes offline if the whole I/O Group goes offline. The
preferred node performs all I/O operations, which means reads as well as writes. The
preferred node can be set manually.
The quorum disk keeps the status of the mirrored volume. The last status (in sync or not in
sync) and the definition of the primary and secondary volume copy are saved there. Thus, an
active quorum disk is required for volume mirroring. To ensure data consistency, the SAN
Volume Controller disables all mirrored volumes if there is no access to any quorum disk
candidate any longer. Therefore, quorum disk placement is an important topic with volume
mirroring and Stretched Cluster configuration.
The following list provides preferred practices for Stretched Cluster:
Drive read I/O to the local storage system.
For distances less than 10 km (6.2 miles), drive the read I/O to the faster of the two disk
subsystems if they are not identical.
Take long distance links into account.
The preferred node must stay at the same site as the server accessing the volume.
The volume mirroring primary copy must stay at the same site as the server accessing the volume to avoid any potential latency effect where a longer-distance solution is implemented.
In many cases, no independent third site is available. It is possible to use an already existing
building or computer room from the two main sites to create a third, independent failure
domain. There are several considerations:
The third failure domain needs an independent power supply or uninterruptible power
supply. If the hosting site fails, the third failure domain must continue to operate.
A separate storage controller for the active SAN Volume Controller quorum disk is
required. Otherwise, the SAN Volume Controller loses multiple quorum disk candidates at
the same time if a single storage subsystem fails.
Each site (failure domain) must be placed in a different location in case of fire.
Fibre Channel (FC) cabling must not go through another site (failure domain). Otherwise,
a fire in one failure domain might destroy the links (and break access) to the SAN Volume
Controller quorum disk.
As shown in Figure C-1 on page 887, the setup is similar to a standard SAN Volume
Controller environment, but the nodes are distributed to two sites.
SAN Volume Controller nodes and data are equally distributed across two separate sites with
independent power sources, which are named as separate failure domains (Failure Domain 1
and Failure Domain 2). The quorum disk is located in a third site with a separate power
source (Failure Domain 3).
Each I/O Group requires four dedicated fiber-optic links between site 1 and site 2.
If the non-ISL configuration is implemented over a 10 km (6.2 mile) distance, passive WDM
devices (without power) can be used to pool multiple fiber-optic links with different
wavelengths in one or two connections between both sites. SFPs with different wavelengths
(colored SFPs, that is, SFPs that are used in Coarse Wavelength Division Multiplexing (CWDM) devices) are required here.
The maximum distance between both major sites is limited to 40 km (24.8 miles).
To prevent the risk of burst traffic (due to a lack of buffer-to-buffer credits), the link speed must be limited. The link speed depends on the cable length between the
nodes in the same I/O Group, as shown in Table C-1.
Table C-1 SAN Volume Controller code level lengths and speed

SAN Volume Controller code level    Minimum length          Maximum length           Maximum link speed
>= SAN Volume Controller 5.1        >= 0 km                 <= 10 km (6.2 miles)     8 Gbps
>= SAN Volume Controller 6.3        >= 10 km (6.2 miles)    <= 20 km (12.4 miles)    4 Gbps
>= SAN Volume Controller 6.3        >= 20 km (12.4 miles)   <= 40 km (24.8 miles)    2 Gbps

The quorum disk at the third site must be FC-attached. Fibre Channel over IP (FCIP) can be used if the round-trip delay time to the third site is always less than 80 ms, which is 40 ms in each direction.
Figure C-2 on page 889 shows a detailed diagram where passive WDMs are used to extend
the links between site 1 and site 2.
Figure C-2 Connection with passive WDMs
For best performance, the server in site 1 must access the volumes in site 1 (preferred node and
primary copy in site 1). SAN Volume Controller volume mirroring copies the data to storage 1
and storage 2. A similar setup must be implemented for the servers in site 2 with access to
the SAN Volume Controller node in site 2.
The configuration that is shown in Figure C-2 covers several failover cases:
Power off FC switch 1: FC switch 2 takes over the load and routes I/O to SAN Volume
Controller node 1 and SAN Volume Controller node 2.
Power off SAN Volume Controller node 1: SAN Volume Controller node 2 takes over the
load and serves the volumes to the server. SAN Volume Controller node 2 changes the
cache mode to write-through to avoid data loss in case SAN Volume Controller node 2
fails as well.
Power off storage 1: The SAN Volume Controller waits a short time (15 - 30 seconds),
pauses volume copies on storage 1, and continues I/O operations using the remaining
volume copies on storage 2.
Power off site 1: The server has no access to the local switches any longer, causing
access loss. You optionally can avoid this access loss by using additional fiber-optic links
between site 1 and site 2 for server access.
Of course, the same scenarios are valid for site 2. And, similar scenarios apply in a mixed
failure environment, for example, failure of switch 1, SAN Volume Controller node 2, and
storage 2. No manual failover/failback activities are required, because the SAN Volume
Controller performs the failover/failback operation.
Using AIX Live Partition Mobility or VMware VMotion can increase the number of use cases significantly. Online system migrations, including running virtual machines and applications, are possible, and they are a convenient way to handle maintenance operations.
Advantages
Business continuity solution distributed across two independent data centers
Configuration similar to a standard SAN Volume Controller clustered system
Limited hardware effort: Passive WDM devices can be used, but are not required
Requirements
Four independent fiber-optic links for each I/O Group between both data centers required
LW SFPs with support over long distance required for direct connection to remote storage
area network (SAN)
Optional usage of passive WDM devices
Passive WDM device: No power required for operation
Colored SFPs required to make different wavelength available
Colored SFPs must be supported by the switch vendor
Two independent fiber-optic links between site 1 and site 2 recommended
Third site for quorum disk placement required
Quorum disk storage system must use FC for attachment with similar requirements, such
as Metro Mirror storage (80 ms round-trip delay time, which is 40 ms in each direction)
Bandwidth reduction
Buffer credits, which are also called buffer-to-buffer (BB) credits, are used as a flow control
method by FC technology and represent the number of frames that a port can store.
Thus, buffer-to-buffer credits are necessary to keep multiple FC frames in flight in parallel. An appropriate number of buffer-to-buffer credits is required for optimal performance. The number of buffer credits that is needed to achieve maximum performance over a given distance
depends on the speed of the link:
1 buffer credit = 2 km (1.2 miles) at 1 Gbit/s
1 buffer credit = 1 km (.62 miles) at 2 Gbit/s
1 buffer credit = 0.5 km (.3 miles) at 4 Gbit/s
1 buffer credit = 0.25 km (.15 miles) at 8 Gbit/s
The preceding guidelines give the minimum numbers. The performance drops if there are not
enough buffer credits, according to the link distance and link speed, as shown in Table C-2 on
page 891.
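The per-speed guidelines translate into a simple rule of thumb: the required credits grow with both distance and link speed. The following minimal Python sketch applies that guideline; the helper is ours for illustration, not a SAN Volume Controller tool:

import math

# Kilometres covered by one buffer-to-buffer credit, per the guideline above.
KM_PER_CREDIT = {1: 2.0, 2: 1.0, 4: 0.5, 8: 0.25}   # key: link speed in Gbit/s

def min_bb_credits(distance_km, speed_gbps):
    # Minimum credits needed to keep the link busy over the given distance.
    return math.ceil(distance_km / KM_PER_CREDIT[speed_gbps])

# Example: 10 km at 8 Gbit/s needs 40 credits, which is why the 41 credits of a
# 2145-CF8 HBA are described as sufficient for that distance.
print(min_bb_credits(10, 8))   # 40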
Table C-2 FC link speed, buffer-to-buffer credits, and distance

FC link speed   B2B credits for 10 km (6.2 miles)   Distance with eight credits
1 Gbit/s        5                                   16 km (9.9 miles)
2 Gbit/s        10                                  8 km (4.9 miles)
4 Gbit/s        20                                  4 km (2.4 miles)
8 Gbit/s        40                                  2 km (1.2 miles)
The number of buffer-to-buffer credits provided by a SAN Volume Controller FC host bus
adapter (HBA) is limited. An HBA of a 2145-CF8 node provides 41 buffer credits, which is
sufficient for a 10 km (6.2 miles) distance at 8 Gbit/s. The SAN Volume Controller adapters in
all earlier models provide only eight buffer credits, which is enough only for a 4 km (2.4 miles)
distance with a 4 Gbit/s link speed. These numbers are determined by the hardware of the
HBA and cannot be changed. We suggest that you use 2145-CF8 or CG8 nodes for distances
longer than 4 km (2.4 miles) to provide enough buffer-to-buffer credits at a reasonable FC
speed.
ISL configuration
Where a distance beyond 40 km (24.8 miles) between site 1 and site 2 is required, a different configuration must be applied. The setup is quite similar to a standard SAN Volume Controller environment, but the nodes are allowed to communicate over a long distance by using ISLs between both sites, with active or passive WDM and a different SAN
configuration. Figure C-3 shows a detailed diagram that relates to a configuration with
active/passive WDM.
Figure C-3 Connection with active/passive WDM and ISL
The Stretched Cluster configuration that is shown in Figure C-3 supports distances of up to
300 km (186.4 miles), which is the same as the recommended distance for Metro Mirror.
Technically, the SAN Volume Controller will tolerate a round-trip delay of up to 80 ms between
nodes. Cache mirroring traffic rather than Metro Mirror traffic is sent across the inter-site link
and data is mirrored to back-end storage using volume mirroring.
Data is written by the preferred node to both the local and remote storage. The SCSI write
protocol results in two round-trips. This latency is hidden from the application by the write
cache.
The Stretched Cluster is often used to move the workload between servers at separate sites.
VMotion or the equivalent can be used to move applications between servers; therefore,
applications no longer necessarily issue I/O requests to the local SAN Volume Controller
nodes.
SCSI write commands from hosts to remote SAN Volume Controller nodes result in an
additional two round-trips worth of latency that is visible to the application. For Stretched
Cluster configurations in a long distance environment, it is advisable to use the local site for
host I/O. Certain switches and distance extenders use extra buffers and proprietary protocols
to eliminate one of the round-trips worth of latency for SCSI write commands.
These devices are already supported for use with the SAN Volume Controller. They do not
benefit or affect inter-node communication, but they benefit the host to remote SAN Volume
Controller I/Os and SAN Volume Controller to remote storage controller I/Os.
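As a rough, assumption-based illustration of why the number of round-trips matters, the following sketch estimates the added write latency over distance. It assumes roughly 0.005 ms of one-way propagation delay per kilometre of fiber, which is a common planning figure and not a value taken from this book:

ONE_WAY_MS_PER_KM = 0.005   # assumed fiber propagation delay (not from this book)

def scsi_write_penalty_ms(distance_km, round_trips=2):
    # Added latency for a SCSI write that crosses the inter-site link round_trips times.
    return round_trips * 2 * distance_km * ONE_WAY_MS_PER_KM

# Example: at 100 km, the two round-trips of the SCSI write protocol add about 2 ms;
# a host writing to the remote node sees roughly two more round-trips on top of that.
print(scsi_write_penalty_ms(100))       # 2.0
print(scsi_write_penalty_ms(100, 4))    # 4.0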
Requirements
A Stretched Cluster with ISL configuration must meet the following requirements:
Four independent, extended SAN fabrics, as shown in Figure C-3 on page 891, named Public SAN1, Public SAN2, Private SAN1, and Private SAN2. Each Public or Private SAN can be created with a dedicated FC switch or director, or as a virtual SAN in a Cisco or Brocade FC switch or director.
Two ports per SAN Volume Controller node attach to the private SANs.
Two ports per SAN Volume Controller node attach to the public SANs.
SAN Volume Controller volume mirroring exists between site 1 and site 2.
Hosts and storage attach to the public SANs.
The third site quorum attaches to the public SANs.
Figure C-4 on page 893 shows the possible configurations with a virtual SAN.
Figure C-4 ISL configuration with a virtual SAN
Figure C-5 shows the possible configurations with a physical SAN.
Figure C-5 ISL configuration with a physical SAN
Use a third site to house a quorum disk. Connections to the third site can be via FCIP
because of the distance (no FCIP or FC switches were shown in the previous layouts for
simplicity). In many cases, no independent third site is available.
It is possible to use an already existing building from the two main sites to create a third,
independent failure domain, but you have several considerations:
The third failure domain needs an independent power supply or uninterruptible power supply. If the hosting site fails, the third failure domain must continue to operate.
Each site (failure domain) needs to be placed in separate fire compartments.
FC cabling must not go through another site (failure domain). Otherwise, a fire in one
failure domain destroys the links (and breaks the access) to the SAN Volume Controller
quorum disk.
Applying these considerations, the SAN Volume Controller clustered system can be
protected, although two failure domains are in the same building. Consider an IBM
Advanced Technical Support (ATS) review or a request for price quotation (RPQ)/Solution for Compliance in a Regulated Environment (SCORE) process to review the proposed configuration.
The storage system that provides the quorum disk at the third site must support extended
quorum disks. Storage systems that provide extended quorum support are listed at the
following website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003907
Four active/passive WDMs, two per site, are needed to extend the public and private SANs over a distance.
Place independent storage systems at the primary and secondary sites, and use volume
mirroring to mirror the host data between storage systems at the two sites.
The SAN Volume Controller nodes that are in the same I/O Group must be located in two
remote sites.
More information
More information about the SAN Volume Controller Stretched Cluster and Enhanced Stretched Cluster, including planning, implementation, configuration steps, and troubleshooting, is available in the following materials:
IBM SAN and SVC Enhanced Stretched Cluster and VMware Solution Implementation,
SG24-8211
IBM SAN Volume Controller Stretched Cluster with PowerVM and PowerHA, SG24-8142
IBM System Storage SAN Volume Controller Best Practices and Performance Guidelines,
SG24-7521
http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
Introduction to Storage Area Networks, SG24-5470
IBM System Storage: Implementing an IBM SAN, SG24-6116
DS4000 Best Practices and Performance Tuning Guide, SG24-6363
IBM System Storage Business Continuity: Part 1 Planning Guide, SG24-6547
IBM System Storage Business Continuity: Part 2 Solutions Guide, SG24-6548
Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
DS8000 Performance Monitoring and Tuning, SG24-7146
SAN Storage Performance Management Using Tivoli Storage Productivity Center,
SG24-7364
Using the SVC for Business Continuity, SG24-7371
SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659
IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide,
GA22-1052
IBM System Storage SAN Volume Controller: Service Guide, GC26-7901
IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation
Guide, GC27-2219
IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation
Guide, GC27-2220
IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware
Installation Guide, GC27-2221
IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286
IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, GC27-2287
IBM System Storage Master Console: Installation and User's Guide, GC30-4090
Multipath Subsystem Device Driver User's Guide, GC52-1309
IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation
Guide, GC52-1356
IBM System Storage Productivity Center Software Installation and User's Guide, SC23-8823
IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824
Subsystem Device Driver User's Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide,
SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Service Guide,
SC26-7542
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developer's Reference, SC26-7545
IBM System Storage Open Software Family SAN Volume Controller: Host Attachment
Guide, SC26-7563
IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336
IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096
IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905
IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337
Online resources
These websites are also relevant as further information sources:
IBM System Storage home page
http://www.storage.ibm.com
SAN Volume Controller supported platform
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
Download site for Windows Secure Shell (SSH) freeware
http://www.chiark.greenend.org.uk/~sgtatham/putty
IBM site to download SSH for AIX
http://oss.software.ibm.com/developerworks/projects/openssh
Open source site for SSH for Windows and Mac
http://www.openssh.com/windows.html
Cygwin Linux-like environment for Windows
http://www.cygwin.com
IBM Tivoli Storage Area Network Manager site
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
Microsoft Knowledge Base Article 131658
http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
Microsoft Knowledge Base Article 149927
http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
Sysinternals home page
http://www.sysinternals.com
Subsystem Device Driver download site
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
IBM System Storage Virtualization home page
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
SVC support page
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
SVC online documentation
http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
IBM Redbooks publications about SVC
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.com/services
SG24-7933-02 ISBN
INTERNATIONAL
TECHNICAL
SUPPORT
ORGANIZATION
BUILDING TECHNICAL
INFORMATION BASED ON
PRACTICAL EXPERIENCE
IBM Redbooks are developed
by the IBM International
Technical Support
Organization. Experts from
IBM, Customers and Partners
from around the world create
timely technical information
based on realistic scenarios.
Specific recommendations
are provided to help you
implement IT solutions more
effectively in your
environment.
For more information:
ibm.com/redbooks