IBM Redbooks
IBM Storage DS8A00 Architecture and Implementation Guide
December 2024
SG24-8559-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
This edition applies to DS8000 G10 systems with Licensed Machine Code (LMC) 7.10.0 (bundle version
10.0.xxx.x), referred to as Release 10.0.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
6.2.5 Updating the embedded IBM Copy Services Manager . . . . . . . . . . . . . . . . . . . . 167
6.3 Web User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6.3.1 Logging in to the HMC WUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6.3.2 IBM ESSNI server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.4 Management Console activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.4.1 Management Console planning tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
6.4.2 Planning for Licensed Internal Code upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . 173
6.4.3 Time synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.4.4 Monitoring DS8A00 with the Management Console . . . . . . . . . . . . . . . . . . . . . . 174
6.4.5 Event notification through syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.4.6 Call Home and remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.5 Management Console network settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.5.1 Configuring the Management Console Network . . . . . . . . . . . . . . . . . . . . . . . . . 176
6.5.2 Private networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.6 User management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.6.1 Password policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
6.6.2 Remote authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
6.6.3 Service Management Console User Management . . . . . . . . . . . . . . . . . . . . . . . 180
6.6.4 Service Management Console LDAP authentication . . . . . . . . . . . . . . . . . . . . . 185
6.6.5 Multifactor authentication (MFA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.7 Secondary Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.7.1 Management Console redundancy benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
10.3.3 Creating the ranks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
10.3.4 Creating the extent pools and assigning ranks to the extent pools. . . . . . . . . . 341
10.3.5 Creating the FB volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
10.3.6 Creating the volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
10.3.7 Creating host connections and clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
10.3.8 Mapping open system host disks to storage unit volumes . . . . . . . . . . . . . . . . 359
10.4 DS8A00 storage configuration for the CKD volumes . . . . . . . . . . . . . . . . . . . . . . . . 359
10.4.1 Disk classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
10.4.2 Creating the arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
10.4.3 Creating the ranks and extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
10.4.4 Creating the extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
10.4.5 Logical control unit creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
10.4.6 Creating the CKD volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
10.4.7 Resource groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10.4.8 IBM Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10.5 Metrics with DS CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10.5.1 Overview of metrics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
10.5.2 Offloading performance data and other parameters . . . . . . . . . . . . . . . . . . . . . 379
10.6 Private network security commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
10.7 Copy Services commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
10.8 Earlier DS CLI commands and scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
10.9 For more information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks

The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX® IBM FlashSystem® PowerPC®
Db2® IBM Research® PowerVM®
DS8000® IBM Security® RACF®
Easy Tier® IBM Services® Redbooks®
Enterprise Storage Server® IBM Spectrum® Redbooks (logo) ®
FICON® IBM Sterling® Sterling™
FlashCopy® IBM Z® z/Architecture®
GDPS® IBM z13® z/OS®
Guardium® IBM z16™ z/VM®
HyperSwap® Parallel Sysplex® z13®
IBM® POWER® z15®
IBM Cloud® Power Architecture® z16™
IBM Cloud Pak® Power8® zEnterprise®
IBM FlashCore® Power9® zSystems™
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Red Hat and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United
States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, VMware vSphere, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or
its subsidiaries in the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
Preface

This IBM® Redbooks® publication describes the concepts, architecture, and implementation
of the IBM DS8A00 family. The book provides reference information to assist readers who
need to plan for, install, and configure the IBM DS8000® G10 systems. This edition applies to
DS8000 G10 systems with Licensed Machine Code (LMC) 7.10.0 (bundle version 10.0.xxx.x),
referred to as Release 10.0.
The DS8000 G10 systems are exclusively all-flash, and two models, each in its own class, are currently
offered:
IBM DS8A50: Agility Class
The Agility Class consolidates all your mission-critical workloads for IBM Z®,
IBM LinuxONE, IBM Power Systems, and distributed environments under a single all-flash
storage solution.
IBM DS8A10: Flexibility Class
The Flexibility Class reduces complexity and addresses various workloads at the lowest
DS8000 family entry cost.
The DS8A00 architecture relies on powerful IBM Power9+ processor-based servers that
manage the cache to streamline disk input/output (I/O), which maximizes performance and
throughput. These capabilities are further enhanced by High-Performance Flash Enclosures
(HPFE) Gen-3.
Like its predecessors, the DS8A00 models support advanced disaster recovery (DR)
solutions, business continuity solutions, and thin provisioning. In addition to several
performance enhancements, they offer compression.
The predecessor DS8900F models are described in IBM Storage DS8900F Architecture and
Implementation, SG24-8456, and IBM DS8910F Model 993 Rack-Mounted Storage System,
REDP-5566.
Authors
This book was produced by a team of specialists from around the world.
Vasfi Gucer leads projects for the IBM Redbooks® team, leveraging his 20+ years of
experience in systems management, networking, and software. A prolific writer and global
IBM instructor, his focus has shifted to storage and cloud computing in the past eight years.
Vasfi holds multiple certifications, including IBM Certified Senior IT Specialist, PMP, ITIL V2
Manager, and ITIL V3 Expert.
Sherry Brunson joined IBM in March of 1985 and worked as a large system IBM service
representative before becoming a Top Gun in 1990. Sherry is a Top Gun in the Eastern US for
all storage products, Power systems, and IBM Z systems. She has supported and
implemented DS8000 and Scale Out Network Attached Storage products globally, and has
developed and taught educational classes. She has also taught IBM Z classes in the
United States.
Nielson “Nino” de Carvalho is a Level 2 certified IT specialist in IBM Switzerland with over a
decade of experience in IBM Mainframe computing. He specializes in IBM Z, LinuxONE,
Jeff Cook is a DS8000 Subject Matter Expert (SME), and leads the Tucson, Arizona DS8000
Product Engineering team. He has been with IBM for 45 years, providing implementation and
technical support to customers and service representatives in complex enterprise
environments. For the past 26 years, he has specialized in IBM direct access storage device
products, specifically the DS8000 and former enterprise storage systems.
Michael Frankenberg is a Certified IT Specialist in Germany and joined IBM in 1995. With
more than twenty-five years of experience in high-end storage, he works in Technical Sales
Support for EMEA. His areas of expertise include performance analysis, establishing high
availability and disaster recovery solutions, and the implementation of storage systems. He
supports the introduction of new products and provides advice for business partners,
Technical Sales, and customers. He holds a degree in Electrical Engineering / Information
Technology from University of Applied Sciences Bochum, Germany.
Carsten Haag is a DS8000 Subject Matter Expert (SME) and leads the EMEA DS8000
support team. He joined IBM in 1998 and has been working with IBM High End Storage
Systems for over 20 years. He holds a Diploma (MSc) degree in microelectronics from the
Technical University of Darmstadt.
Peter Kimmel is a Senior Platform Engineer for Enterprise Storage in the IBM EMEA Client
Engineering team in Frankfurt, Germany. He joined IBM Storage in 1999, and since then has
worked with all DS8000 generations, with a focus on architecture and performance. Peter has
co-authored several DS8000 IBM publications. He holds a Diploma (MSc) degree in physics
from the University of Kaiserslautern.
Radoslav Neshev is a DS8000 Subject Matter Expert (SME) and a recent addition to the
DS8000 development field support team, specializing in storage hardware. He joined IBM in
2016 as part of the newly formed DS8000 Remote Support team in Bulgaria, working closely
with clients to troubleshoot and resolve any hardware and software issues related to the
product. With a background in networking, prior to joining IBM, Radoslav worked as a
Level 3 technical support engineer and technical auditing specialist for one of the leading
internet service providers in the UK.
Connie Riggins is a DS8000 Copy Services and Copy Services Manager Subject Matter
Expert with the DS8000 Product Engineering group. She started working at IBM in 2015.
Prior to joining IBM, starting in 1991, Connie worked at Amdahl Corp. as a Systems Engineer
and later at Softek Storage Solutions as Product Manager of TDMF for z/OS.
Mike Stenson is the team lead for the DS8000 development field screen team. He has over
20 years of engineering and development experience in storage, server, and networking
environments. He joined IBM in 2005 and has worked on every generation of the DS8000. He
has co-authored several IBM publications.
Robert Tondini is a mainframe storage Subject Matter Expert based in IBM Australia and
New Zealand. He has 28 years of experience in mainframe storage implementation,
management, and capacity and performance sizing, with a focus on designing and exploiting high
availability, disaster recovery, and cyber resiliency solutions. He co-authored several
IBM Redbooks publications and workshops for DS8000 systems.
Brian Rinaldi, John Bernatz, Cheryl Friauf, Randy Blea, Carl Brown, Andy Benbenek
IBM USA
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
All models of this DS8000 generation belong to the IBM DS8A00 high-performance model
family:
IBM DS8A50: Agility Class
IBM DS8A10: Flexibility Class
The DS8A00 models support the most demanding business applications with their
exceptional all-around performance and data throughput. Some models are shown in
Figure 1-2 on page 3.
The DS8000 offers features that clients expect from a high-end storage system:
High performance
High capacity
High availability
Security and encryption
Cost efficiency
Energy efficiency
Scalability
Business continuity and data-protection functions
The DS8A00 is an all-flash storage system that is equipped with encryption-capable flash
drives. High-density storage enclosures offer a considerable reduction in the footprint and
energy consumption. It can be equipped with compression-capable drives, which reduce this
footprint even further.
The DS8000 includes a power design that is based on intelligent Power Distribution Units
(iPDUs), and non-volatile dual inline memory modules (NVDIMMs) to store data in case of a
power outage. The iPDUs allow the DS8A00 to achieve the highest energy efficiency in the
DS8000 series. The DS8A00 comes with Titanium-rated power supplies, which comply with the latest
regulations for enhanced energy efficiency. All models are available for both three-phase
and single-phase power attachment.
Figure 1-3 on page 4 shows the various components within the base frame. The expansion
frame, which is an option for the larger model, enables clients to add more capacity to the
storage system within the same footprint.
(Figure 1-3 callouts: High-Performance Flash Enclosure pairs, the keyboard/display drawer, and the two Hardware Management Consoles.)
I/O enclosures are attached to the Power9+ processor-based servers with Peripheral
Component Interconnect Express (PCIe) Generation-4 cables. The I/O enclosure has six
PCIe adapter slots for host/device adapters, and two slots for zHyperLink ports.
These racked models also have an integrated keyboard and display that can be accessed
from the front of the rack. A pair of 1U Hardware Management Consoles (HMCs) is installed
in the base rack management enclosure. The height of the rack is 42U for all units.
Note: For more information about the IBM Z synergy features, see IBM DS8900F and
IBM Z Synergy, REDP-5186.
A CPC is also referred to as a storage server. For more information, see Chapter 4, “IBM
DS8A00 reliability, availability, and serviceability” on page 103.
For more information, see Chapter 2, “IBM DS8A00 hardware components and architecture”
on page 39.
HPFE Gen3
The HPFE flash RAID adapters are installed in pairs and split across an I/O enclosure pair.
They occupy the respective top PCIe slots according to the adapter pair plugging order.
HPFE drive enclosures are also installed in pairs, and connected to the corresponding flash
RAID adapter pair over eight PCIe Gen4 cables for high bandwidth and redundancy. Each
drive enclosure can contain up to twenty-four 2.5-inch NVMe flash drives or modules. The
flash devices are installed in groups of 16, and split evenly across the two drive enclosures in
the pair.
Each flash RAID adapter pair with its HPFE pair delivers a throughput of around 20 GBps,
and, for small-block reads, more than one million IOPS.
For more information, see Chapter 2, “IBM DS8A00 hardware components and architecture”
on page 39.
Drive options
Flash drives use less power than HDDs, and their IOPS count per drive can be up to 100
times higher with a 10 times shorter response time. Drive vendors also offer flash drives with
various specifications, such as enterprise versus consumer, single-level versus multi-level
cell, endurance, and DRAM cache, and each generation is a little more advanced.
All flash drives in the DS8000 G10 are encryption-capable and always encrypt data internally.
Fully enabling encryption is optional, and requires either using local key management or
at least two external key servers.
Easy Tier
The DS8000 can use Easy Tier to automatically balance data placement on disk drives to
avoid hot spots on flash arrays. Easy Tier can place data on a storage array that is less
loaded and best suits the access frequency of the data. For certain drive combinations, such
as between the capacity tiers of the industry-standard drives, data can be moved
nondisruptively to a higher or lower tier.
For more information about Easy Tier, see IBM DS8000 Easy Tier, REDP-4667.
Host adapters
The DS8A00 offers 32 Gbps host adapters. They have four ports each, and each port can be
independently configured for either FCP or FICON. Each port independently auto-negotiates
to a 32 Gbps, 16 Gbps, or 8 Gbps link speed.
For more information, see Chapter 2, “IBM DS8A00 hardware components and architecture”
on page 39.
You can also encrypt data before it is transmitted to the cloud when using the TCT feature.
For more information, see IBM DS8000 Encryption for Data at Rest, Transparent Cloud
Tiering, and Endpoint Security, REDP-4500.
The available drive options provide industry-class capacity and performance to address a
wide range of business requirements. The DS8000 storage arrays can be configured as
RAID 6 or RAID 10. RAID 6 is the default and preferred setting for the DS8A00. As of the time
of this writing, a drive extent pool consists of either FCM only, or industry-standard drives only.
For more information, see 2.2, “DS8A00 configurations and models” on page 42.
The DS8A00 supports over 70 platforms. For the list of supported platforms, see the IBM
System Storage Interoperation Center (SSIC) for IBM Storage Enterprise Disk/DS8A00.
This support of heterogeneous environments and attachments, with the flexibility to partition
the DS8000 storage capacity among the attached environments, can help support storage
consolidation requirements and dynamic environments.
Tip: Copy Services (CS) are currently supported for LUN sizes of up to 4 TB.
The maximum CKD volume size is 1,182,006 cylinders (1 TB), which can greatly reduce the
number of volumes that are managed. This large CKD volume type is called a 3390 Model A.
It is referred to as an Extended Address Volume (EAV).
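As a sketch only, the following DS CLI commands show how such a 3390 Model A volume might be created. The pool, volume ID, and name are placeholders, the logical control unit for the target LSS must already exist (created with mklcu), and the exact parameters should be verified against the DS CLI help for your release:

Example (illustrative): creating an EAV volume with the DS CLI
   dscli> mkckdvol -extpool P2 -cap 1182006 -datatype 3390-A -name ITSO_EAV 9000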
OpenStack
The DS8000 supports the OpenStack cloud management software for business-critical
private, hybrid, and public cloud deployments. The DS8A00 supports features in the
OpenStack environment, such as volume replication and volume retype. The Cinder driver for
DS8000 is open source in the OpenStack community. The /etc/cinder.conf file can be
directly edited for the DS8000 back-end information.
For more information about the DS8000 and OpenStack, see Using IBM DS8000 in an
OpenStack Environment, REDP-5220.
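As an illustration only, a DS8000 back-end stanza in /etc/cinder.conf might look like the following sketch. The driver and proxy paths and the san_* options are listed here as assumptions based on the generic Cinder settings for the IBM storage driver; the back-end name, address, and credentials are placeholders. See REDP-5220 and the OpenStack Cinder documentation for the authoritative option names for your release.

Example (illustrative): DS8000 back end in /etc/cinder.conf
   [ds8k_backend]
   volume_backend_name = ds8k_backend
   volume_driver = cinder.volume.drivers.ibm.ibm_storage.ibm_storage.IBMStorageDriver
   proxy = cinder.volume.drivers.ibm.ibm_storage.ds8k_proxy.DS8KProxy
   san_ip = 192.0.2.10
   san_login = csadmin
   san_password = ********
   connection_type = fibre_channel

Remember to add the back-end name to the enabled_backends option in the [DEFAULT] section.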
The DS8000 offers persistent storage for mission-critical containers with support for
IBM Cloud Pak® solutions to enhance and extend the functionality of Red Hat OpenShift.
The DS8000 supports the Container Storage Interface (CSI) specification. IBM released an
open-source CSI driver for IBM storage that allows dynamic provisioning of storage volumes
for containers on Kubernetes and IBM Red Hat OpenShift Container Platform (OCP).
The CSI driver for IBM block storage systems enables container orchestrators such as
Kubernetes to manage the lifecycle of persistent storage. An official operator is available to
deploy and manage the IBM block storage CSI driver.
For more information about CSI, see IBM block storage CSI driver documentation.
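As a hedged sketch of how dynamic provisioning is typically consumed, a StorageClass that references the IBM block storage CSI provisioner might look like the following; the provisioner name, the pool parameter, and the secret references are assumptions or placeholders, so check the IBM block storage CSI driver documentation for the exact keys.

Example (illustrative): StorageClass for the IBM block storage CSI driver
   apiVersion: storage.k8s.io/v1
   kind: StorageClass
   metadata:
     name: ds8000-block
   provisioner: block.csi.ibm.com
   parameters:
     pool: P0
     csi.storage.k8s.io/provisioner-secret-name: ds8000-credentials
     csi.storage.k8s.io/provisioner-secret-namespace: default
   reclaimPolicy: Delete

A PersistentVolumeClaim that names this StorageClass then triggers dynamic creation of a DS8000 volume and its mapping to the worker node.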
For more information about the RESTful API, see DS8900F/DS8880 4.0 RESTful API Guide,
SC27-9823, or Exploring the DS8870 RESTful API Implementation, REDP-5187.
Data in the FCM Compression Tier is always stored thin and compressed. New or enhanced
system commands, GUI views, warning thresholds, and alerts are available to monitor and
manage the available capacities. When you use over-provisioning, define a policy to handle
these alerts, and let IBM help you plan appropriately for the right capacities and procedures.
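For example, the DS CLI pool queries can be used to track how much capacity is allocated and how much remains. This is a generic sketch, and the exact output columns depend on the code release:

Example (illustrative): checking pool capacity with the DS CLI
   dscli> lsextpool -l
   dscli> showextpool P0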
To meet the challenges of cybersecurity, the Safeguarded Copy function, which is based on
the FlashCopy technology, can create and retain hundreds of PTCs for protection against
logical data corruption. In PPRC-replicated DS8000 pairs, you can also incrementally restore
a recovered Safeguarded Copy to a production copy of the data. With additional integration
into a full Cyber Vault solution, you can automate and accelerate detection of and response
to a cyberattack. For more information, see IBM Storage DS8000 Safeguarded
Copy, REDP-5506, and Getting Started with IBM Z Cyber Vault, SG24-8511.
For data protection and availability needs, the DS8000 provides Metro Mirror (MM), Global
Mirror (GM), Global Copy (GC), and Metro/Global Mirror (MGM), which are Remote Mirror and
Remote Copy functions. These functions are also available and
are fully interoperable with previous models of the DS8000 family. These functions provide
storage mirroring and copying over large distances for DR or availability purposes.
CS scope limiting is the ability to specify policy-based limitations on CS requests. For more
information about CS, see IBM DS8000 Copy Services, SG24-8367.
(The table comparing the 1-year base warranty and the IBM Expert Care options is not reproduced in full here. It covers 24×7 hardware maintenance with same-day IBM On-site Repair (IOR), Support Line (24×7 remote technical support), Predictive Support, IBM Storage Insights (SI or SI Pro), and Media Retention as an optional add-on through Service Pac.)
Physical installation of the DS8000 is performed by IBM by using the installation procedure for
this system. The client is responsible for installation planning, for retrieval and installation of
the feature activation codes, and for the logical configuration and its execution.
The storage system HMC is the focal point for maintenance and service operations. Two
HMCs are inside each DS8000 Base rack, and they continually monitor the state of the
system. HMCs notify IBM, and they can be configured to notify you when service is required.
The HMC is also the interface for Call Home and remote support, which can be configured to
meet client requirements. It is possible to allow one or more of the following configurations:
Call Home on error (machine-detected)
Remote connection for a few days (client-initiated)
Remote problem log collection (service-initiated)
You can also select IBM Remote Support Center (RSC) so that IBM Support can access your
systems remotely.
For customers who choose Expert Care Advanced or the base warranty service,
Customer-controlled Code Load is the default method for performing concurrent microcode
updates:
Microcode bundles are downloaded and activated by the customer by using the standard
DS Storage Manager GUI.
The download defaults to the current recommended bundle, or an alternative compatible
bundle can be chosen.
Health checks are run before the download, and again before activation to ensure that the
system is in good health.
If a problem is encountered anywhere in the process, a ticket is opened automatically with
IBM Support, and the ticket number is provided in the GUI for reference.
After the problem is corrected, the code load can be restarted.
Customers who want to have an IBM Systems Service Representative (IBM SSR) perform
Onsite Code Load can purchase Feature Code #AHY2 with Expert Care Premium, or Feature
Code #AHY3 with Expert Care Advanced. Customers selecting Expert Care Advanced but still opting
for RCL can select Feature Code #AHY4.
With all of these components, the DS8A00 is positioned at the top of the high-performance
category.
SARC, AMP, and IWC play complementary roles. Although SARC carefully divides the cache
between the RANDOM and the SEQ lists to maximize the overall hit ratio, AMP manages
the contents of the SEQ list to maximize the throughput that is obtained for sequential
workloads. IWC manages the write cache and decides the order and rate to destage to disk.
Given the capacity increase of the flash modules, the price drop of capacity-optimized flash
compared to high-RPM HDDs, and given their savings on energy consumption and space,
most clients decide on an all-flash array storage strategy. Several types and tiers of flash are
available. Also, within the higher durability range of enterprise flash, flash modules with
different performance characteristics or extra functionality are offered, to cover both more
cost-economical large-capacity use cases and performance-intensive workloads with
heavy access densities.
For all these scenarios, the DS8A00 offers different tiers of drives, including two
performance classes of industry-standard drives and another tier based on IBM FlashCore®
Modules (FCMs), which feature hardware compression and encryption. Each FCM has extra
processing power and DRAM to perform the compression task. Also, caching and
internal SLC/QLC tiering help to keep latency at the same low levels.
All flash devices now come with the NVM Express (NVMe) protocol, running on the PCIe
hardware interfaces. NVMe was developed for SSD flash, and NAND flash is used in these
devices. Compared to earlier SAS standards, NVMe allows much higher I/O
parallelism and more transactions and IOPS per drive.
Considering the RAID implementation that is always needed, with the flash drives and the
specific architecture that is used in the HPFEs, much higher IOPS densities (IOPS per GB)
are possible than with ordinary solid-state drives (SSDs). The transition to PCIe Gen-4 for the
HPFE and drive backend has significantly enhanced performance, resulting in
approximately double the typical per-HPFE and per-array throughput compared to the
previous DS8000 generation.
The flash drives use the flash RAID adapters in the I/O enclosures, and PCIe connections to
the processor complexes. The high-performance flash drives that are available with the
Capacity Tier 1 are high-IOPS class enterprise storage devices that are targeted at the most
I/O-intensive workload applications, which need high-level, fast-access storage. The
high-capacity flash drive types of Capacity Tier 2 have a more economical acquisition cost
point. For many workloads, the Tier 2 drives can often fulfill standard enterprise workload
requirements as a single tier. The Compression Tier gives some extra capacity advantage
while maintaining a comparable excellent level of low latency. With its always-on hardware
For more information about performance on IBM Z, see IBM DS8900F and IBM Z Synergy,
REDP-5186.
Base frame
The DS8A00 has two available base frame models. The model numbers, DS8A50 and
DS8A10, depend on the hardware configuration for each. In this chapter, the DS8A00 family
name and the model numbers are used interchangeably. Table 2-1 lists each of the frame models.
Each base frame is equipped with dual Hardware Management Consoles (HMCs). To
increase the storage capacity and connectivity, an expansion frame can be added to any
DS8A50 model A05.
For more information about the base frame configuration, see 2.2.3, “DS8A00 base frames”
on page 45.
Expansion frame
The DS8A50 supports one optional expansion frame, which provides space for extra storage
capacity, and also supports up to two additional I/O enclosure pairs.
With this model, you can place the expansion frame a maximum of 20 meters away from the
base frame. To use this feature, use the optical Peripheral Component Interconnect Express
(PCIe) I/O Bay interconnect. The Copper PCIe I/O Bay interconnect is used when the
expansion frame is physically next to the base frame. For more information about the
expansion frame connections, see 2.2.4, “DS8A00 expansion frame” on page 48.
All DS8A00 system memory and processor upgrades can be performed concurrently.
Figure 2-1 on page 41 shows the front view of both Power9 HMCs.
The characteristics for CPCs for each model type are listed in Table 2-2.
(Table 2-2 is not reproduced in full here; the listed system memory options include 512 GB, 2048 GB, and 3584 GB.)
Both CPCs in a DS8A00 system share the system workload. The CPCs are redundant, and
either CPC can fail over to the other for scheduled maintenance, for upgrade tasks, or if a
failure occurs. The CPCs are identified as CPC 0 and CPC 1. A logical partition (LPAR) in
each CPC runs the AIX V7.x operating system (OS) and storage-specific Licensed Internal
Code (LIC). This LPAR is called the storage node. The storage servers are identified as
Node 0 and Node 1 or server0 and server1.
The main variations between models are the combinations of CPCs, I/O enclosures, storage
enclosures, and flash drives. System memory, processors, storage capacity, and host
attachment upgrades from the smallest to the largest configuration can be performed
concurrently.
The DS8A00 storage systems use machine type 5341. Warranty and services are offered as
part of Expert Care. Options range from a 1-year base warranty to a 5-year Expert Care
Advanced or Premium, where Premium is the default option in eConfig.
a. For more information, see 2.4.2, “I/O enclosure adapters” on page 58.
b. For more information, see Getting Started with IBM zHyperLink for z/OS, REDP-5493.
Note: The DS8A00 hardware uses iPDUs, non-volatile dual inline memory modules
(NVDIMMs) and Backup Power Modules (BPMs).
Figure 2-3 DS8A10 model A01 system min. and max. configuration
Table 2-4 lists the hardware and the minimum and maximum configuration options for the
DS8A10 model A01.
a. For more information, see 2.4.2, “I/O enclosure adapters” on page 58.
b. For more information, see Getting Started with IBM zHyperLink for z/OS, REDP-5493.
Note: An intermix of shortwave (SW) and longwave (LW) adapters is also allowed. For high
availability, IBM recommends installing host adapters in pairs. The 32 Gbps host adapter
uses the same technology as in DS8900F systems, but with a different form factor.
For more information about I/O enclosures and I/O adapters, see 2.4, “I/O enclosures and
adapters” on page 56.
A DS8A50 system supports the installation of an expansion frame without any additional
features.
Note: An intermix of shortwave (SW) and longwave (LW) adapters is also allowed. For high
availability, IBM recommends installing host adapters in pairs. The 32 Gbps host adapter
uses the same technology that is used in DS8900F systems, but with a different form factor.
– Device adapter:
• One device adapter pair is required for each HPFE Gen3 pair.
• Each I/O enclosure pair supports up to two DA pairs.
• DA pairs are connected to HPFE Gen3 pairs through redundant NVMe over PCIe
Gen4 connections.
– zHyperLink connections to IBM Z hosts:
• Supports direct connectivity to IBM Z at distances up to 150 m.
• zHyperLink adapters should be installed in pairs in two different I/O enclosures.
• zHyperLink cables are available in lengths of 40 m, or 150 m. For other lengths, see
IBM Z Connectivity Handbook, SG24-5444, or contact your optical cable vendor.
For more information, see 2.4, “I/O enclosures and adapters” on page 56.
To ease floor planning for future expansions, an available optical PCIe cable allows a distance
up to 20 m. The cable set contains optical cables and transceivers. One cable set is required
for each installed I/O enclosure pair in the expansion frame.
As shown in Figure 2-5 on page 50, this extension makes the positioning of an expansion
frame more flexible, especially for future capacity expansion. An extra rack side cover pair is
available if needed.
Figure 2-6 shows CPC front view for the DS8A50 systems.
In the DS8A10 configuration, the CPCs are IBM Power 9009-22G servers, which are
populated with two 8-core processors with 5 cores used per processor, which makes 10 active cores per CPC.
Figure 2-7 shows the CPC front view as configured in the DS8A10 system.
For more information about the server hardware that is used in the DS8A10 and DS8A50, see
IBM Power Systems S922, S914, and S924 Technical Overview and Introduction Featuring
PCIe Gen 4 Technology, REDP-5595.
In the DS8A00, processor core and system memory configurations dictate the hardware that
can be installed in the storage system. Processors and memory can be upgraded
concurrently as required to support storage system hardware upgrades. The supported
maximum system hardware components depend on the total processor and system memory
configuration.
Figure 2-8 shows the supported components for the DS8A00 processor and memory options.
NVS values are typically 1/16th of the installed system memory, except for the smallest systems
with 192 GB of system memory, where only 8 GB (that is, 4 GB per CPC) is used as NVS, and
for the largest systems with 3.5 TB of memory, where NVS remains at 192 GB.
Figure 2-8 Supported components for the DS8A00 processor and memory options
Each CPC contains half of the total system memory. All memory that is installed in each CPC
is accessible to all processors in that CPC. The absolute addresses that are assigned to the
memory are common across all processors in the CPC. The set of processors is referred to
as a symmetric multiprocessor (SMP) system.
The Power9 processor that is used in the DS8A00 operates in simultaneous multithreading
(SMT) mode, which runs multiple instruction streams in parallel. The number of simultaneous
instruction streams varies according to processor and LIC level. SMT mode enables the
Power9 processor to maximize the throughput of the processor cores by processing multiple
concurrent threads on each processor core.
The DS8A00 configuration options are based on the total installed memory, which in turn
depends on the number of installed and active processor cores:
In the DS8A50, you can install a memory upgrade from 512 GB per server to 1024 GB per
server or to 1792 GB per server nondisruptively.
In the DS8A10, you can install a memory upgrade from 128 GB per server to 256 GB per
server.
Caching is a fundamental technique for reducing I/O latency. Like other modern caches, the
DS8A00 system contains volatile memory (RDIMM) that is used as a read/write cache, and
NVDIMM that is used for a persistent memory write cache. (A portion of the NVDIMM
capacity is also used for read/write cache.) The NVDIMM technology eliminates the need for
the large backup battery sets that were used in previous generations of the DS8000. If power is
lost for more than 20 ms, the system shuts down, but power is maintained to the NVDIMMs, and data in
the NVS partition is hardened to onboard NAND flash.
NVS scales according to the processor memory that is installed, which also helps to optimize
performance. NVS is typically 1/16th of the installed CPC memory, with a minimum of 16 GB
and a maximum of 192 GB.
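As a simple worked example of the 1/16th rule (the exact values for a given configuration are listed in the product documentation): a CPC with 1024 GB of installed memory has 1024 / 16 = 64 GB of NVS, which falls within the stated 16 GB minimum and 192 GB maximum.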
The FSP controls power and cooling for the CPC. The FSP performs predictive failure
analysis (PFA) for installed processor hardware, and performs recovery actions for processor
or memory errors. The FSP monitors the operation of the firmware during the boot process,
and can monitor the OS for loss of control and take corrective actions.
A pair of optional adapters is available for TCT as a chargeable Feature Code. Each adapter
provides two 10 Gbps small form-factor pluggable plus (SFP+) optical ports for short-distance connections.
Figure 2-9 shows the location codes of the CPCs in DS8A50 systems. Figure 2-10 shows the
location codes of the CPC in a DS8A10 system.
Figure 2-9 Location codes of the CPC in DS8A50 systems in the rear
Figure 2-10 Location codes of the CPC in the DS8A10 system in the rear
Figure 2-11 shows the PCIe adapter locations in the DS8A50 CPC. Figure 2-12 shows the
PCIe adapter locations in the DS8A10 CPC.
The I/O enclosures are PCIe Gen4-capable, and are attached to the CPCs with 8-lane PCIe
Gen4 cables. The I/O enclosures have six PCIe adapter slots, plus two CXP connectors.
DS8A50 CPCs have up to six 1-port and one 2-port PCIe Gen4 adapters that provide
connectivity to the I/O enclosures.
DS8A10 CPCs have up to four 1-port PCIe Gen4 adapters that provide connectivity.
One or two I/O enclosure pairs can be installed in the base frame of the DS8A00 and also in
the E05 expansion frame. Each I/O enclosure can have up to four host adapters. A maximum
of 16 host adapter ports are supported in a single I/O enclosure. The I/O enclosure has up to
two zHyperLink adapters. For more information about zHyperLink availability for DS8A00
models, see Table 2-5 on page 48.
Figure 2-13 on page 57 shows the DS8A50 CPC to I/O enclosure connectivity.
Figure 2-15 shows the PCIe connections of a DS8A50. One CPC is on the left side, and one CPC
is on the right side.
If a failure occurs in one or more I/O enclosures, any of the remaining enclosures can be used
to maintain communication between the servers.
The I/O bay can contain up to four host adapters and two zHyperLink (zHL) adapters that
provide attachment to host systems. Also, the I/O bay can contain up to two device adapters to
provide attachment to the High-Performance Flash Enclosure (HPFE) Gen3. Each I/O bay
has two PCIe Gen4 x8 CXP connectors on the I/O bay for the internal PCIe fabric
connections to CPC 0 and CPC 1.
Both 32 Gbps Short Wave and 32 Gbps Long Wave host adapters are available. Both have
four ports, and the adapters can auto-negotiate their data transfer rate down to 8 Gbps
full duplex. Both host adapters support the FCP or FICON protocol.
The 32 Gbps host adapter supports both IBM Fibre Channel Endpoint Security authentication
and line-rate encryption.
For more information, see IBM Fibre Channel Endpoint Security for IBM DS8900F and IBM Z,
SG24-8455.
The DS8A50 configurations support a maximum of 16 host adapters in the base frame and 16
extra host adapters in the model E05 expansion frame. The DS8A10 supports a maximum of
16 host adapters.
Host adapters are installed in slots 3, 4, 5, and 6 of the I/O enclosure. Figure 2-16 shows the
locations for the host adapters in the DS8A00 I/O enclosure. The system supports an intermix
of both adapter types up to the maximum number of ports, as shown in Table 2-6.
Optimum availability: To obtain optimum availability and performance, one host adapter
must be installed in each available I/O enclosure before a second host adapter is installed
in the same enclosure.
The host adapter locations and installation order for the four I/O enclosures in the base frame
are the same for the I/O enclosures in the expansion frame.
(The host adapter plug-order table, which covers I/O enclosure slots C1–C6 for four I/O enclosures (models A05, A01, and E05), is not reproduced in full here.)
The DS8A00 uses the FCP to transmit Small Computer System Interface (SCSI) traffic inside
FC frames. It also uses FC to transmit FICON traffic for IBM Z I/O. Each of the ports on a
DS8A00 host adapter can be configured for FCP or FICON, but a single port cannot be
configured for both concurrently. The port topology can be changed by using the DS GUI or
DS CLI.
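For example, a port personality can be checked and changed with the DS CLI along these lines. The port IDs are placeholders, and the topology keywords should be verified in the DS CLI help for your release:

Example (illustrative): setting the port topology with the DS CLI
   dscli> lsioport -l
   dscli> setioport -topology ficon I0010
   dscli> setioport -topology scsi-fcp I0011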
zHyperlink adapters
zHyperLink adapters are attached through a point-to-point connection to IBM Z servers that run
the z/OS operating system. They provide a low-latency interface to a z/OS LPAR that allows
synchronous I/O, which means that the CPU waits for a response from the DS8A00. For more
information about zHyperLink, see Getting Started with IBM zHyperLink for z/OS,
REDP-5493.
(The adapter plug-order table, which maps the installation order of adapter pairs to I/O bays 0–7 and their card slots, is not reproduced in full here.)
Device adapters
Device adapters (DA) provide redundant NVMe access to the internal storage devices and
support RAID6 and RAID10. Each DA manages a pair of HPFE Gen3 enclosures. The
adapters are always installed as a pair. Logical configuration is then balanced across the DA
pair for load-balancing and the highest throughput.
The DAs are installed in the I/O enclosures and are connected to the CPCs through the PCIe
Gen4 network. The DAs are responsible for managing and monitoring the RAID arrays
(RAID6 / RAID10) in the HPFE enclosures. The DAs provide remarkable performance
because of a high-function and high-performance NVMe design. To ensure maximum data
integrity, the adapter supports metadata creation and checking.
HPFE Gen3 enclosures are always installed in pairs. Each enclosure pair supports 16, 32, or
48 flash drives. In Figure 2-17, a pair of HPFE Gen3 enclosures is shown.
Each HPFE Gen3 pair is connected to a redundant pair of devices adapters. The PCIe Gen 4
DAs are installed in the DS8A00 I/O enclosures.
The DS8A50 configuration can support up to four HPFE Gen3 pairs in the base frame, and up
to four HPFE Gen3 pairs in the expansion frame for a total of eight HPFE Gen3 pairs, with a
maximum of 384 flash drives. The DS8A10 configuration supports up to four HPFE Gen3
pairs, with a maximum of 192 flash drives.
Storage-enclosure fillers
Storage-enclosure fillers occupy empty drive slots in the storage enclosures. The fillers help
ensure consistent airflow through an enclosure. For HPFE Gen3, one filler feature provides a
set of 16 fillers.
For FCMs, Easy Tier auto mode must be enabled to allow extent management.
FCMs support space reclamation on a track boundary (56 KB CKD / 64 KB Open) instead of
an extent boundary.
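As a sketch only, the Easy Tier settings of the storage image can be reviewed and changed with the DS CLI. The parameter shown here (-etautomode) and its values are an assumption and should be confirmed in the DS CLI help for your release, and the storage image ID is a placeholder:

Example (illustrative): checking and enabling Easy Tier automatic mode
   dscli> showsi
   dscli> chsi -etautomode all IBM.2107-75XXXX1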
Note: With DS8A00 Release 10.0, no intermix of Industry-Standard NVMe and IBM FCMs
in a High-Performance Flash Enclosure or in a storage pool is supported.
The IBM FlashCore Modules are available in 4.8 TB, 9.6 TB, and 19.2 TB capacities; see
Table 2-11. For more information about FCMs, see IBM FlashCore Module Product Guide.
Note: For all drive types, RAID 6 is the default in the DS GUI and DS CLI, but RAID 10 is
optional.
Table 2-12 Maximum usable and provisioned capacity based on system cache size

Cache less than or equal to 512 GB:
– Large extents: maximum usable size FB 8192 TiB / CKD 7304 TiB; maximum provisioned size FB 8192 TiB / CKD 7304 TiB
– Small extents: maximum usable size FB 1024 TiB / CKD 1102 TiB; maximum provisioned size FB 2048 TiB / CKD 1826 TiB

Cache greater than 512 GB:
– Large extents: maximum usable size FB 32768 TiB / CKD 29216 TiB; maximum provisioned size FB ~32768 TiB / CKD ~29216 TiB
– Small extents: maximum usable size FB 4096 TiB / CKD 4410 TiB; maximum provisioned size FB ~8192 TiB / CKD ~7304 TiB
Table 2-13 shows the maximum number of flash drives and maximum raw storage capacity
for the different models.
The iPDUs are available in single or three-phase power configurations in all models. Each
iPDU has one AC power connector and a dedicated inline power cord. Output power is
provided by 12 C13 power outlets with circuit breaker protection.
iPDUs are installed in pairs. Each DS8A00 rack has a minimum of one iPDU pair. For models
A05, a second pair can be installed in the base frame to provide power for more I/O and
storage enclosures.
DS8A00 can tolerate a power line disturbance (PLD) of up to 20 ms. If the PLD exceeds this
threshold on both sides of the power domain, the system initiates an orderly shutdown, and
data in write cache is saved to flash memory in the NVDIMMs. The NVDIMMs remain
functional, even if the system has a complete power outage. For more information about
NVDIMMs, see 2.6.3, “Backup Power Modules and NVDIMM” on page 68.
The iPDUs are managed by using the black and gray internal private networks. Each of the
outlets can be individually monitored, and powered on or off. The iPDUs support Simple
Network Management Protocol (SNMP), telnet, and a web interface.
DS8A00 HMCs can control and monitor system power by communicating with the network
interfaces of the iPDUs and CPCs.
Figure 2-19 on page 66 shows redundant Ethernet connections (black and gray network) of
the iPDUs and CPCs for a DS8A50, which is equivalent to the network configuration on a
DS8A10. The red connection for both rack power control hubs (Ethernet switches) provides
backup power from the other rack control hub in the event of local power failure.
Adding a model E05 expansion frame to a DS8A50 system also adds another iPDU pair in
that frame, which requires Ethernet connections to the rack control hub in the base frame.
The redundant power supplies in the CPCs, I/O enclosures, HPFE Gen3 enclosures, and the
HMC are connected across both power domains. The left power supplies connect to the
green domain, and the right power supplies connect to the yellow domain.
For full redundancy, each power domain must be connected to separate power distribution
systems that are fed by independent building power sources or service entrances.
During normal operation, the NVDIMMs behave like any other DRAM, but when a power
outage or other system failure occurs, the NVS partition contents are hardened in NAND flash
storage. This NAND flash storage is with the DRAM chips on the NVDIMM module. The
content is encrypted when written to the flash storage to prevent unauthorized access to the
contents. Storing the write cache data in flash chips replaces the need for a data dump, which
was used on earlier DS8000 models to harden NVS data to disk. Figure 2-21 shows a
symbolic view of an NVDIMM module.
BPMs connect directly to the NVDIMM modules to provide power during the DRAM to flash
operation. They are specific nickel-based hybrid energy storage modules with a high-power
discharge and fast charge times of 3–15 minutes. When system power is restored, the
NVDIMMs move the preserved data from flash back to DRAM to be destaged to the storage
system arrays during initial microcode load (IML).
The size of a BPM is smaller than a standard 2.5-inch disk drive module (DDM) and fits into
one of the free CPC disk drive bays. A maximum of two BPMs are installed per CPC.
With the BPMs connected directly to the NVDIMMs, the DRAM to flash operation functions
independently without the need for any power that is provided by the CPC.
The NVDIMM capability is in addition to the data protection concept of storing the write cache
NVS on the alternative node. For more information, see 4.2, “CPC failover and failback” on
page 110.
Note: The DS8A00 is designed for efficient air flow and to be compliant with hot and cold
aisle data center configurations.
Figure 2-23 on page 71 shows locations of both HMCs, the keyboard and monitor drawer and
the Ethernet switches (rack power control hubs) in a DS8A10 base frame.
The storage administrator runs all DS8A00 logical configuration tasks by using the Storage
Management GUI or DS CLI. All client communications to the storage system are through the
HMCs.
Clients that use the DS8A00 advanced functions, such as MM or FlashCopy, communicate to
the storage system with IBM Copy Services Manager.
The HMCs provide connectivity between the storage system and external Encryption Key
Manager (EKM) servers.
HMCs also provide remote Call Home and remote support connectivity.
For more information about the HMC, see Chapter 6, “IBM DS8A00 Management Console
planning and setup” on page 161.
Each HMC also uses two designated Ethernet interfaces for the internal black (eth0) and gray
(eth3) networks.
The black and gray networks provide fully redundant communication between the HMCs and
CPCs. These networks cannot be accessed externally, and no external connections are
allowed. External customer network connections for both HMCs are provided at the rear of
the base rack.
Important: The internal Ethernet switches (Rack Power Control Hubs) are for the DS8A00
private networks only. Do not connect an external network (or any other equipment) to the
black or gray network switches.
The basis for virtualization begins with the physical drives, which are mounted in storage
enclosures and connected to the internal storage servers. DS8A00 uses the
High-Performance Flash Enclosure (HPFE) Gen3 storage enclosures exclusively. To learn
more about the drive options and their connectivity to the internal storage servers, see 2.5,
“Flash drive enclosures” on page 62.
Note: The DS GUI streamlines the process to configure arrays, ranks, and extent pool
pairs in a single operation. For more granular control, the DS CLI can be used to manage
these entities individually. For more information, see Chapter 10, “IBM DS8A00 Storage
Management Command-line Interface” on page 319.
3.3.2 Arrays
An array is created from one array site. When an array is created, its RAID level, array type,
and array configuration are defined. This process is also called defining an array. In all
IBM DS8000 series implementations, one array is always defined as using one array site.
Each HPFE Gen3 pair can contain up to six array sites. The first set of 16 flash drives creates
two 8-drive array sites. RAID 6 arrays are created by default on each array site. RAID 10 is
optional for all flash drive sizes (RAID 5 is no longer supported).
During logical configuration, RAID 6 arrays with the required number of spares are created.
Each HPFE Gen3 pair has two global spares that are created from the first increment of 16
flash drives. The first two arrays to be created from these array sites are 5+P+Q+S.
Subsequent RAID 6 arrays in the same HPFE Gen3 Pair are 6+P+Q.
For more information about the sparing algorithm, see 4.5.10, “Spare creation” on page 128.
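A minimal DS CLI sketch of this step follows; the site and array IDs are examples only, and the GUI performs the equivalent actions automatically when pools are created:

Example (illustrative): creating RAID 6 arrays from array sites with the DS CLI
   dscli> lsarraysite
   dscli> mkarray -raidtype 6 -arsite S1
   dscli> mkarray -raidtype 6 -arsite S2
   dscli> lsarray -l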
Figure 3-1 shows the creation of a RAID 6 array with one spare, which is also called a
5+P+Q+S array. It has a capacity of five drives for data, two drives for double distributed
parity, and a spare drive. According to the RAID 6 rules, parities are distributed across all
seven drives in this example.
Depending on the selected RAID level and sparing requirements, four types of arrays are
possible, as shown in Figure 3-2.
Tip: Larger drives have a longer rebuild time. Only RAID 6 can recover from a double drive
failure during a rebuild, by using the additional parity data. RAID 6 is the best choice for
systems that require high availability (HA), and is the default in DS8A00.
Encryption group
All drives that are offered in the DS8A00 are Full Disk Encryption (FDE)-capable to secure all
logical volume data at rest. In the DS8A00, the Encryption Authorization license is included in
the Base Function (BF) license group.
If you plan to use encryption for data at rest, you must define an encryption group before any
ranks are created. The DS8A00 supports only one encryption group. All ranks must be in this
encryption group. The encryption group is an attribute of a rank. Therefore, your choice is to
encrypt everything or nothing. If you want to enable encryption later (create an encryption
group), all logical configuration must be deleted and re-created, and volume data restored.
For more information, see IBM DS8000 Encryption for Data at Rest, Transparent Cloud
Tiering, and Endpoint Security (DS8000 Release 9.1), REDP-4500.
Defining ranks
When a new rank is defined, its name is chosen by the DS GUI or Data Storage
Command-line Interface (DS CLI), for example, R1, R2, or R3. The rank is then associated
with an array.
Important: In all DS8000 series implementations, a rank is defined as using only one
array. Therefore, rank and array can be treated as synonyms.
An FB rank features an extent size of either 1 GB (more precisely a gibibyte (GiB), which is a binary gigabyte that is equal to 2³⁰ bytes), called large extents, or an extent size of 16 mebibytes (MiB), called small extents.
IBM Z users or administrators typically do not deal with gigabytes or gibibytes. Instead,
storage is defined in terms of the original 3390 volume sizes. A 3390 Model 3 is three times
the size of a Model 1. A Model 1 features 1113 cylinders, which are about 0.946 GB.
The large extent size for CKD ranks is 1113 cylinders, which corresponds to a 3390 Model 1. The small extent size for CKD ranks is 21 cylinders, which corresponds to the z/OS allocation unit for EAV volumes larger than 65,520 cylinders. For such volumes, z/OS changes the addressing mode and allocates storage in 21-cylinder units.
For example, the DS8A00 theoretically supports a minimum CKD volume size of one cylinder,
but the volume still claims one full extent of 1113 cylinders if large extents are used or 21
cylinders for small extents. So, 1112 cylinders are wasted if large extents are used.
Note: In the DS8A00 firmware, all volumes have a common metadata structure. All
volumes have the metadata structure of ESE volumes, whether the volumes are
thin-provisioned or fully provisioned. ESE is described in 3.4.4, “Volume allocation and
metadata” on page 91.
No rank or array affinity to an internal server (central processor complex (CPC)) is predefined.
The affinity of the rank (and its associated array) to a server is determined when it is assigned
to an extent pool. One or more ranks with the same extent type (FB or CKD) can be assigned
to an extent pool.
Important: Because a rank is formatted to have small or large extents, the first rank that is
assigned to an extent pool determines whether the extent pool is a pool of all small or all
large extents. You cannot have a pool with a mixture of small and large extents. You cannot
change the extent size of an extent pool. In addition, FCM (FlashCore Module) capacity cannot be intermixed with Independent Storage capacity within an extent pool.
If you want Easy Tier to automatically optimize rank utilization, configure more than one rank in an extent pool. A rank can be assigned to only one extent pool. There can be as many extent pools as there are ranks, but for most systems, a single pair of extent pools for each rank type (FB or CKD) provides the best overall performance.
Easy Tier moves data within an extent pool to optimize the placement of the data within the
pools.
Storage pool striping can enhance performance significantly. However, in the unlikely event
that a whole RAID array fails, the loss of the associated rank affects the entire extent pool
because data is striped across all ranks in the pool. For data protection, consider mirroring
your data to another DS8000 family storage system.
As with ranks, extent pools are also assigned to encryption group 0 or 1, where group 0 is
non-encrypted, and group 1 is encrypted. The DS8A00 supports only one encryption group,
and all extent pools must use the same encryption setting that is used for the ranks.
A minimum of two extent pools must be configured to balance the capacity and workload
between the two servers. One extent pool is assigned to internal server 0. The other extent
pool is assigned to internal server 1. In a system with both FB and CKD volumes, four extent
pools provide one FB pool for each server and one CKD pool for each server.
In DS8A00, small extents are the default when creating extent pools. If you plan to use large extents for ESE volumes, you must create an additional pool pair with large extents. Small and large extents cannot be in the same pool.
Figure 3-3 shows an example of a mixed environment that features CKD and FB extent pools.
Figure 3-3 Mixed CKD and FB extent pools with 16 MiB and 1 GiB extent ranks assigned to server 0 and server 1
When merging extent pools, the logical volumes remain accessible to the host systems.
Dynamic extent pool merge can be used for the following reasons:
Consolidation of two extent pools with the equivalent storage type (Independent Storage or FlashCore Module), equivalent formatting (FB or CKD), and equivalent extent size,
into one extent pool. If you create a pool that contains more extents, logical volumes can
be distributed over a greater number of ranks, which improves overall performance in the
presence of skewed workloads. Newly created volumes in the merged extent pool allocate
capacity as specified by the selected extent allocation algorithm. Logical volumes that
existed in either the source or the target extent pool can be redistributed over the set of
ranks in the merged extent pool by using the Migrate Volume function.
Note: Migrating volumes from one FCM pool to another FCM pool is not supported with
DS8A00 Release 10.0.
Consolidating Independent Storage extent pools with different storage tiers to create a
merged extent pool with a mix of storage drive technologies. This type of an extent pool is
called a multitiered pool, and it is a prerequisite for using the Easy Tier automatic mode
feature.
Figure 3-4 Merging two extent pools into one
Important: Volume migration (or DVR) within the same extent pool is not supported in
multitiered pools. Easy Tier automatic mode rebalances the volumes’ extents within the
multitiered extent pool automatically based on I/O activity.
Dynamic extent pool merge is allowed only among extent pools with the same internal server
affinity or rank group. Additionally, the dynamic extent pool merge is not allowed in the
following circumstances:
If source and target pools have different storage types (Independent Storage and FlashCore Module)
If source and target pools have different format types (Fixed Block and Count Key Data)
If source and target pools have different extent sizes
If you selected an extent pool that contains volumes that are being moved
If the combined extent pools include 2 PB or more of ESE effective (virtual) capacity
For more information about Easy Tier, see IBM DS8000 Easy Tier (Updated for DS8000
R9.0), REDP-4667.
Fixed-Block LUNs
A logical volume that is composed of FB extents is called a LUN. An FB LUN is composed of one or more 1 GiB (2³⁰ bytes) large extents or one or more 16 MiB small extents from one FB extent pool. A LUN cannot span multiple extent pools, but a LUN can have extents from multiple ranks within the same extent pool. You can construct LUNs up to 16 TiB (16 x 2⁴⁰ bytes, or 2⁴⁴ bytes) when you use large extents.
Important: DS8000 CS does not support FB logical volumes larger than 4 TiB. Do not
create a LUN that is larger than 4 TiB if you want to use CS for the LUN unless the LUN is
integrated as Managed Disks in an IBM SAN Volume Controller (SVC), and is using
IBM Spectrum® Virtualize CS.
LUNs can be provisioned (allocated) in binary GiB (2³⁰ bytes), decimal GB (10⁹ bytes), or 512-byte or 520-byte blocks. However, the usable (physical) capacity that is provisioned (allocated) is a multiple of 1 GiB. For small extents, it is a multiple of 16 MiB. Therefore, it is a good idea to use LUN sizes that are a multiple of a gibibyte or a multiple of 16 MiB. If you define a LUN with a size that is not a multiple of 1 GiB (for example, 25.5 GiB), the LUN size is 25.5 GiB. However, with large extents, 26 GiB are physically provisioned (allocated), of which 0.5 GiB of the physical storage is unusable. When you want to specify a LUN size that is not a multiple of 1 GiB, specify the number of blocks. A 16 MiB extent has 32,768 blocks.
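For example (simple arithmetic, not output from a live system), a 25.5 GiB LUN can be specified as a block count as follows:
25.5 GiB × 2,097,152 blocks per GiB (2³⁰ / 512) = 53,477,376 blocks
With small extents, 25.5 GiB corresponds to exactly 1,632 extents of 16 MiB, so no capacity is left unusable in that case.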
Figure 3-5 Creation of an FB LUN from 1 GB extents (a 2.9 GB LUN occupies three extents, leaving 100 MB of its last extent unused)
An FB LUN must be managed by a logical subsystem (LSS). One LSS can manage up to 256
LUNs. The LSSs are created and managed by the DS8A00, as required. A total of 255 LSSs
can be created in the DS8A00.
IBM i LUNs can use the unprotected attribute, in which case the DS8A00 reports that the
LUN is not RAID-protected. Selecting either the protected or unprotected attribute does not
affect the RAID protection that is used by the DS8A00 on the open volume.
IBM i LUNs display a 520-byte block to the host. The operating system (OS) uses eight of
these bytes, so the usable space is still 512 bytes like other Small Computer System Interface
(SCSI) LUNs. The capacities that are quoted for the IBM i LUNs are in terms of the 512-byte
block capacity, and they are expressed in GB (10⁹ bytes). Convert these capacities to GiB (2³⁰ bytes) when you consider the effective usage of extents that are 1 GiB (2³⁰ bytes).
Important: The DS8A00 supports IBM i variable volume (LUN) sizes in addition to fixed
volume sizes.
IBM i volume enhancement adds flexibility for volume sizes and can optimize the DS8A00
capacity usage for IBM i environments.
The DS8A00 supports IBM i variable volume data types A50, which is an unprotected variable
size volume, and A99, which is a protected variable size volume. For more information, see
Table 3-1. IBM i variable volumes can be dynamically expanded.
Example 3-1 demonstrates the creation of both a protected and an unprotected IBM i variable
size volume by using the DS CLI.
Example 3-1 Creating IBM i variable-size unprotected and protected volumes
dscli> mkfbvol -os400 050 -extpool P4 -name itso_iVarUnProt1 -cap 10 5413
CMUC00025I mkfbvol: FB volume 5413 successfully created.
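For the protected variable-size volume, the command might look like the following sketch, in which the 099 data type value, extent pool, volume name, and volume ID are illustrative assumptions to verify against the mkfbvol description in the DS CLI reference:
dscli> mkfbvol -os400 099 -extpool P5 -name itso_iVarProt1 -cap 10 5414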
Note: IBM i fixed volume sizes continue to be supported in current DS8A00 code levels.
Consider the best option for your environment between fixed and variable-size volumes.
T10 Data Integrity Field support
A T10 DIF-capable LUN uses 520-byte sectors instead of the common 512-byte sector size.
Eight bytes are added to the standard 512-byte data field. The 8-byte DIF consists of 2 bytes
of CRC data, a 4-byte Reference Tag (to protect against misdirected writes), and a 2-byte
Application Tag for applications that might use it.
On a write, the DIF is generated by the HBA, which is based on the block data and LBA. The
DIF field is added to the end of the data block, and the data is sent through the fabric to the
storage target. The storage system validates the CRC and Reference Tag and, if correct,
stores the data block and DIF on the physical media. If the CRC does not match the data, the
data was corrupted during the write. The write operation is returned to the host with a write
error code. The host records the error and retransmits the data to the target. In this way, data
corruption is detected immediately on a write, and the corrupted data is never committed to
the physical media.
On a read, the DIF is returned with the data block to the host, which validates the CRC and Reference Tags. This validation adds a small amount of latency to each I/O, which might affect overall response time for small-block transfers (less than 4 KB I/Os).
The DS8A00 supports the T10 DIF standard for FB volumes that are accessed by the Fibre
Channel Protocol (FCP) channels that are used by Linux on IBM Z or AIX. You can define
LUNs with an option to instruct the DS8A00 to use the CRC-16 T10 DIF algorithm to store the
data.
You can also create T10 DIF-capable LUNs for OSs that do not yet support this feature
(except for IBM i). Active protection is available for Linux on IBM Z and for AIX on Power
Systems servers. For other distributed OSs, read their documentation.
When you create an FB LUN by running the DS CLI command mkfbvol, add the option
-t10dif. If you query a LUN with the showfbvol command, the data type is FB 512T instead
of the standard FB 512 type.
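As an illustration (the volume ID, extent pool, capacity, and name are placeholders), the commands might look like the following sketch:
dscli> mkfbvol -extpool P1 -cap 100 -name t10dif_vol -t10dif 2100
dscli> showfbvol 2100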
Important: Because the DS8A00 internally always uses 520-byte sectors to support IBM i volumes, there is no difference in capacity consumption between standard and T10 DIF-capable volumes.
Target LUN: When FlashCopy for a T10 DIF LUN is used, the target LUN must also be a
T10 DIF-type LUN. This restriction does not apply to mirroring.
Before a CKD volume can be created, a Logical Control Unit (LCU) must be defined that
provides up to 256 possible addresses that can be used for CKD volumes. Up to 255 LCUs
can be defined. For more information about LCUs, see 3.4.5, “Logical subsystems” on
page 95.
On a DS8A00, you can define CKD volumes with up to 1,182,006 cylinders, or about 1 TB.
This volume capacity is called an EAV, and it is supported by the 3390 Model A.
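A minimal DS CLI sketch for defining an LCU and then an EAV volume in it might look like the following lines. The LCU ID, subsystem ID, extent pool, and volume ID are placeholders, and the options shown are assumptions to verify against the DS CLI reference:
dscli> mklcu -qty 1 -id 10 -ss 2310
dscli> mkckdvol -extpool P2 -cap 1182006 -datatype 3390-A -name itso_eav 1000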
A CKD volume cannot span multiple extent pools, but a volume can have extents from
different ranks in the same extent pool. You can also stripe a volume across the ranks. For
more information, see “Storage pool striping: Extent rotation” on page 87.
Figure 3-6 shows an example of how a logical volume is provisioned (allocated) with a CKD
volume.
Figure 3-6 Allocation of a CKD logical volume (a 1000-cylinder volume occupies one 1113-cylinder extent, leaving 113 cylinders unused)
Classically, to start an I/O to a base volume, z/OS can select any alias address only from the
same LCU as the base address to perform the I/O. With SuperPAV, the OS can use alias
addresses from other LCUs to perform an I/O for a base address.
The restriction is that the LCU of the alias address must belong to the same DS8A00 internal server. In other words, if the base address is in an even-numbered LCU, the alias address that z/OS selects must also be in an even-numbered LCU, and likewise for odd-numbered LCUs. In addition, the LCU of the base volume and the LCU of the alias volume must be in the same path group. z/OS prefers alias addresses from the same LCU as the base address, but if no alias address is free, z/OS looks for free alias addresses in LCUs of the same Alias Management Group.
An Alias Management Group is all the LCUs that have affinity to the same DS8A00 internal
server and have the same paths to the DS8A00. SMF can provide reports at the Alias
Management Group level.
Initially, each alias address must be assigned to a base address. Therefore, it is not possible
to define an LCU with only alias addresses.
As with PAV and HyperPAV, SuperPAV must be enabled. SuperPAV is enabled by the
HYPERPAV=XPAV statement in the IECIOSxx parmlib member or by the SETIOS HYPERPAV=XPAV
command. The D M=DEV(address) and the D M=CU(address) display commands show whether
XPAV is enabled or not. With the D M=CU(address) command, you can check whether aliases
from other LCUs are being used (Example 3-2).
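For reference, the relevant statements and commands look like the following sketch, in which the device and control unit numbers are placeholders (this sketch does not reproduce the output that is shown in Example 3-2):
IECIOSxx parmlib member:  HYPERPAV=XPAV
SETIOS HYPERPAV=XPAV
D M=DEV(9000)
D M=CU(9000)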
With cross-LCU HyperPAV, which is called SuperPAV, the number of defined alias addresses can be further reduced while the pool of available alias addresses to handle I/O bursts to volumes is increased.
This construction method of using fixed extents to form a logical volume in the DS8A00 allows
flexibility in the management of the logical volumes. You can delete LUNs or CKD volumes,
resize LUNs or volumes, and reuse the extents of those LUNs to create other LUNs or
volumes, including ones of different sizes. One logical volume can be removed without
affecting the other logical volumes that are defined on the same extent pool.
The extents are cleaned after you delete a LUN or CKD volume. The reformatting of the
extents is a background process, and it can take time until these extents are available for
reallocation.
Two extent allocation methods (EAMs) are available for the DS8000: Storage pool striping
(rotate extents) and rotate volumes.
Note: Although storage pool striping was previously the preferred EAM, it is now a better choice to let Easy Tier manage the storage pool extents. This chapter describes rotate extents for the sake of completeness, but it is now mostly irrelevant.
The DS8A00 maintains a sequence of ranks. The first rank in the list is randomly picked at
each power-on of the storage system. The DS8A00 tracks the rank in which the last allocation
started. The allocation of the first extent for the next volume starts from the next rank in that
sequence.
If more than one volume is created in one operation, the allocation for each volume starts in another rank. When several volumes are provisioned (allocated), the allocations rotate through the ranks, as shown in Figure 3-9.
You might want to consider this allocation method if you prefer to manage performance manually. The workload of one volume goes to one rank, which makes the identification of performance bottlenecks easier. However, by putting all the volume's data onto one rank, you might introduce a bottleneck, depending on your actual workload.
However, as previously stated, Easy Tier is the preferred choice for managing the storage
pool extents.
In a mixed-tier extent pool that contains different tiers of ranks, the storage pool striping EAM
is used independently of the requested EAM, and the EAM is set to managed.
Easy Tier's default allocation order prioritizes the highest-performance drive classes. In the case of a DS8A00, this means Flash Tier 0, followed by Flash Tier 1 and Flash Tier 2. The system allocates new data to the fastest available tier to optimize performance.
There is a GUI and a CLI option for the whole Storage Facility to change the allocation
preference. The two options are High Performance and High Utilization. With the High
Utilization setting, the machine populates drive classes in this order: Flash Tier 1, Flash Tier
2, and then Flash Tier 0. The chsi command can be used to switch the ETTierOrder
parameter between High Performance and High Utilization.
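As an illustration, the DS CLI command might look like the following line, in which the keyword value and the storage image ID are assumptions to verify against the chsi description for your code level:
dscli> chsi -ettierorder highutil IBM.2107-75ABCD1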
When you create striped volumes and non-striped volumes in an extent pool, a rank might be
filled before the others. A full rank is skipped when you create striped volumes.
By using striped volumes, you distribute the I/O load of a LUN or CKD volume to more than
one set of eight drives, which can enhance performance for a logical volume. In particular,
OSs that do not include a volume manager with striping capability benefit most from this
allocation method.
Small extents can increase the parallelism of sequential writes. Although the system stays within one rank until 1 GiB is written with large extents, with small extents it jumps to the next rank after 16 MiB. This configuration uses more drives in parallel when performing sequential writes.
Important: If you must add capacity to an extent pool because it is nearly full, it is better to
add several ranks concurrently, not just one rank. This method allows new volumes to be
striped across the newly added ranks.
With the Easy Tier manual mode facility, if the extent pool is a single-tier pool, the user can
request an extent pool merge followed by a volume relocation with striping to run the same
function. For a multitiered managed extent pool, extents are automatically relocated over
time, according to performance needs. For more information, see IBM DS8000 Easy Tier
(Updated for DS8000 R9.0), REDP-4667.
Rotate volume EAM: The rotate volume EAM is not allowed if one extent pool is
composed of flash drives and configured for effective (virtual) capacity.
A logical volume includes the attribute of being striped across the ranks or not. If the volume
was created as striped across the ranks of the extent pool, the extents that are used to
increase the size of the volume are striped. If a volume was created without striping, the
system tries to allocate the additional extents within the same rank that the volume was
created from originally.
Because most OSs have no means of moving data from the end of the physical drive off to
unused space at the beginning of the drive, and because of the risk of data corruption, IBM
does not support shrinking a volume. The DS CLI and DS GUI interfaces cannot reduce the
size of a volume.
DVR allows data that is stored on a logical volume to be migrated from its allocated storage to
newly allocated storage while the logical volume remains accessible to attached hosts.
The user can request DVR by using the Migrate Volume function that is available through the DS GUI or DS CLI. DVR allows the user to specify a target extent pool and an EAM. The target extent pool can be a different extent pool from the one where the volume is, or the same extent pool, but only if it is a single-tier pool. However, the target extent pool must be managed by the same DS8A00 internal server.
Important: DVR in the same extent pool is not allowed in a managed pool. In managed
extent pools, Easy Tier automatic mode automatically relocates extents within the ranks to
allow performance rebalancing.
You can move volumes only among pools of the same drive type, same extent size, and
same formatting. However, DVR is not supported between two FCM pools in R10.0.
Each logical volume has a configuration state. To begin a volume migration, the logical
volume initially must be in the normal configuration state.
More functions are associated with volume migration that allow the user to pause, resume, or
cancel a volume migration. Any or all logical volumes can be requested to be migrated at any
time if available capacity is sufficient to support the reallocation of the migrating logical
volumes in their specified target extent pool. For more information, see IBM DS8000 Easy
Tier (Updated for DS8000 R9.0), REDP-4667.
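From the DS CLI, a volume migration request and a later cancellation might look like the following sketch. The volume ID and target pool are placeholders, and the managefbvol actions shown are assumptions to verify against the DS CLI reference (manageckdvol provides the equivalent function for CKD volumes):
dscli> managefbvol -action migstart -extpool P2 2100
dscli> managefbvol -action migcancel 2100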
The metadata is allocated in the storage pool when volumes are created, and the space that
is used by metadata is referred to as the pool overhead. Pool overhead means that the
amount of space that can be provisioned (allocated) by volumes is variable and depends on
both the number of volumes and the logical capacity of these volumes.
For storage pools with large extents, metadata is also allocated as large extents (1 GiB for FB
pools or 1113 cylinders for CKD pools). Large extents that are allocated for metadata are
subdivided into 16 MiB subextents, which are also referred to as metadata extents, for FB
volumes, or 21 cylinders for CKD. For extent pools with small extents, metadata extents are
also small extents. Sixty-four metadata subextents are in each large metadata extent for FB,
and 53 metadata subextents are in each large metadata extent for CKD.
For each FB volume that is provisioned (allocated), an initial 16 MiB metadata subextent or
metadata small extent is allocated, and an extra 16 MiB metadata subextent or metadata
small extent is allocated for every 10 GiB of provisioned (allocated) capacity or portion of
provisioned capacity.
For each CKD volume that is provisioned (allocated), an initial 21 cylinders metadata
subextent or metadata small extent is allocated, and an extra 21 cylinders metadata
subextent or metadata small extent is allocated for every 11130 cylinders (or ten 3390 Model
1) of allocated capacity or portion of allocated capacity.
For example, a 3390-3 (that is, 3339 cylinders or about 3 GB) or 3390-9 (that is, 10,017
cylinders or 10 GB) volume takes two metadata extents (one metadata extent for the volume
and another metadata extent for any portion of the first 10 GB). A 128 GB FB volume takes 14
metadata extents (one metadata extent for the volume and another 13 metadata extents to
account for the 128 GB).
Metadata extents with free space can be used for metadata by any volume in the extent pool.
When you use metadata extents and user extents within an extent pool, some planning and
calculations are required, especially in a mainframe environment where thousands of
volumes are defined and the whole capacity is provisioned (allocated) during the initial
configuration. You must calculate the capacity that is used up by the metadata to get the
capacity that can be used for user data. This calculation is important only when fully
provisioned volumes are used. Thin-provisioned volumes use no space when created; only
metadata and space are used when data is written.
For extent pools with small extents, the number of available user data extents can be
estimated as follows:
For extent pools with regular 1 GiB extents where the details of the volume configuration are
not known, you can estimate the number of metadata extents based on many volumes only.
The calculations are performed as shown:
FB pool overhead = (number of volumes × 2 + total volume extents/10)/64 and rounded up
to the nearest integer
CKD pool overhead = (number of volumes × 2 + total volume extents/10)/53 and rounded
up to the nearest integer
The formulas overestimate the space that is used by the metadata by a small amount because they assume wasted space on every volume. However, the precise size of each volume does not need to be known.
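As a worked example of these formulas (the volume and extent counts are illustrative), consider a pool with 1,000 volumes and 100,000 total volume extents:
FB pool overhead = (1000 × 2 + 100000/10)/64 = 12000/64 = 187.5, rounded up to 188 extents
CKD pool overhead = (1000 × 2 + 100000/10)/53 = 12000/53 = 226.4, rounded up to 227 extents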
Note: Volumes configured using FlashCore Module (FCM) drives must be thin provisioned.
Space for a thin-provisioned volume is allocated when a write occurs. More precisely, it is
allocated when a destage from the cache occurs and insufficient free space is left on the
currently allocated extent.
Therefore, thin provisioning allows a volume to exist that is larger than the usable (physical)
capacity in the extent pool to which it belongs. This approach allows the “host” to work with
the volume at its defined capacity, even though insufficient usable (physical) space might exist
to fill the volume with data.
The assumption is that either the volume is never filled, or as the DS8A00 runs low on raw
capacity, more is added. This approach also assumes that the DS8A00 is not at its maximum
raw capacity.
Note: If thin provisioning is used, the metadata is allocated for the entire volume (effective
provisioned capacity) when the volume is created, not when extents are used. The
DS8A00 creates thin-provisioned volumes by default.
With the mixture of thin-provisioned (ESE) and fully provisioned (non-ESE) volumes in an
extent pool, a method is needed to dedicate part of the extent-pool storage capacity for ESE
user data usage, and to limit the ESE user data usage within the extent pool.
Also, you must be able to detect when the available storage space within the extent pool for
ESE volumes is running out of space.
ESE capacity controls provide extent pool attributes to limit the maximum extent pool storage
that is available for ESE user data usage. These controls also ensure that a proportion of the
extent pool storage is available for ESE user data usage.
Capacity controls exist for an extent pool and also for a repository, if it is defined. There are
system-defined warning thresholds at 15% and 0% free capacity left, and you can set your
own user-defined threshold for the whole extent pool or for the ESE repository. Thresholds for
an extent pool are set by running the DS CLI chextpool -threshold (or mkextpool)
command. Thresholds for a repository are set by running the chsestg -repcapthreshold (or
mksestg) command.
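For illustration, the pool and repository thresholds might be set as follows, where the pool ID and the 20% value are placeholders:
dscli> chextpool -threshold 20 P1
dscli> chsestg -repcapthreshold 20 P1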
The threshold exceeded status refers to the user-defined threshold. The extent pool or repository status shows one of the following three values:
0: The percentage of available capacity is greater than the extent pool / repository
threshold.
1: The percentage of available capacity is greater than zero but less than or equal to the
extent pool / repository threshold.
10: The percentage of available capacity is zero.
A Simple Network Management Protocol (SNMP) trap is associated with the extent pool /
repository capacity controls, and it notifies you when the extent usage in the pool exceeds a
user-defined threshold and when the extent pool is out of extents for user data.
When the size of the extent pool remains fixed or when it increases, the available usable
(physical) capacity remains greater than or equal to the provisioned (allocated) capacity.
However, a reduction in the size of the extent pool can cause the available usable (physical)
capacity to become less than the provisioned (allocated) capacity in certain cases.
For example, if the user requests that one of the ranks in an extent pool is depopulated, the data on that rank is moved to the remaining ranks in the pool. This process causes the rank to become unassigned and to be removed from the pool. Inspect the limits and thresholds on the extent pool after any changes to its size to ensure that the specified values are still consistent with your intentions.
A new attribute (-opratiolimit) is available when creating or modifying extent pools to add
operational limits. Example 3-3 provides an example of creating and modifying an extent pool
with a defined operational limit that cannot be exceeded.
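As a hedged sketch of such commands (the pool name, rank group, pool ID, and the ratio value of 3 are placeholders):
dscli> mkextpool -rankgrp 0 -stgtype fb -opratiolimit 3 ITSO_FB_0
dscli> chextpool -opratiolimit 3 P1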
Setting an overprovisioning ratio limit results in the following changes to system behavior to
prevent an extent pool from exceeding the overprovisioning ratio:
Prevent volume creation, expansion, or migration.
Prevent rank depopulation.
Prevent pool merge.
Prevent turning on Easy Tier space reservation.
For more information, see IBM DS8880 Thin Provisioning (Updated for Release 8.5),
REDP-5343.
All even-numbered LSSs (X’00’, X’02’, X’04’, up to X’FE’) are handled by internal server 0,
and all odd-numbered LSSs (X’01’, X’03’, X’05’, up to X’FD’) are handled by internal server 1.
LSS X’FF’ is reserved. This configuration allows both servers to handle host commands to the
volumes in the DS8A00 if the configuration takes advantage of this capability. If either server
is not available, the remaining operational server handles all LSSs. LSSs are also placed in
address groups of 16 LSSs, except for the last group that has 15 LSSs. The first address
group is 00–0F, and so on, until the last group, which is F0–FE.
Because the LSSs manage volumes, an individual LSS must manage the same type of
volumes. An address group must also manage the same type of volumes. The first volume
(either FB or CKD) that is assigned to an LSS in any address group sets that group to
manage those types of volumes. For more information, see “Address groups” on page 97.
Volumes are created in extent pools that are associated with either internal server 0 or 1.
Extent pools are also formatted to support either FB or CKD volumes. Therefore, volumes in
any internal server 0 extent pools can be managed by any even-numbered LSS if the LSS
and extent pool match the volume type. Volumes in any internal server 1 extent pools can be
managed by any odd-numbered LSS if the LSS and extent pool match the volume type.
Volumes also have an identifier 00 - FF. The first volume that is assigned to an LSS has an
identifier of 00. The second volume is 01, and so on, up to FF if 256 volumes are assigned to
the LSS.
Conversely, for CKD volumes, the LCU is significant. The LCU must be defined in a
configuration that is called the input/output configuration data set (IOCDS) on the host. The
LCU definition includes a control unit address (CUADD). This CUADD must match the LCU ID
in the DS8A00. A device definition for each volume, which has a unit address (UA) that is
included, is also included in the IOCDS. This UA must match the volume ID of the device. The
host must include the CUADD and UA in the “frame” that is sent to the DS8A00 when it wants
to run an I/O operation on the volume. This method is how the DS8A00 knows on which volume to run the operation.
For both FB and CKD volumes, when the “frame” that is sent from the host arrives at a host
adapter port in the DS8A00, the adapter checks the LSS or LCU identifier to know which
internal server to pass the request to inside the DS8A00. For more information about host
access to volumes, see 3.4.6, “Volume access” on page 97.
FB LSSs are created automatically when the first FB logical volume on the LSS is created. FB
LSSs are deleted automatically when the last FB logical volume on the LSS is deleted. CKD
LCUs require user parameters to be specified and must be created before the first CKD
logical volume can be created on the LCU. They must be deleted manually after the last CKD
logical volume on the LCU is deleted.
Certain management actions in Metro Mirror (MM), Global Mirror (GM), or Global Copy (GC)
operate at the LSS level. For example, the freezing of pairs to preserve data consistency
across all pairs in case a problem occurs with one of the pairs is performed at the LSS level.
The option to put all or most of the volumes of a certain application in one LSS makes the
management of remote copy operations easier, as shown in Figure 3-11.
Figure 3-11 Grouping of volumes in LSSs (for example, LSS X'17' for DB2 volumes and LSS X'18' for DB2-test volumes)
Address groups
All devices in an LSS must be CKD or FB. This restriction goes even further. LSSs are
grouped into address groups of 16 LSSs. LSSs are numbered X’ab’, where a is the address
group and b denotes an LSS within the address group. For example, X’10’–X’1F’ are LSSs in
address group 1.
All LSSs within one address group must be of the same type (CKD or FB). The first LSS that
is defined in an address group sets the type of that address group.
Figure 3-12 Fixed-block LSSs, their relationship to extent pools, and their affinity to server 0 and server 1
The LUN identification X’gabb’ is composed of the address group X’g’, the LSS number within
the address group X’a’, and the ID of the LUN within the LSS X’bb’. For example, FB LUN
X’2101’ denotes the second (X’01’) LUN in LSS X’21’ of address group 2.
An extent pool can have volumes that are managed by multiple address groups. The example
in Figure 3-12 shows only one address group that is used with each extent pool.
Each host attachment can be associated with a volume group to define the LUNs that the
host is allowed to access. Multiple host attachments can share the volume group. The host
attachment can also specify a port mask that controls the DS8A00 I/O ports that the host
HBA is allowed to log in to. Whichever ports the HBA logs in to, it sees the same volume
group that is defined on the host attachment that is associated with this HBA.
The maximum number of host attachments on a DS8A00 is 8,192. This host definition is
required only for open systems hosts. Any IBM Z servers can access any CKD volume in a
DS8A00 if its IOCDS is correct.
Volume group
A volume group is a named construct that defines a set of logical volumes. A volume group is
required only for FB volumes. When a volume group is used with CKD hosts, a default volume
group contains all CKD volumes. Any CKD host that logs in to a Fibre Channel connection
(IBM FICON) I/O port has access to the volumes in this volume group. CKD logical volumes
are automatically added to this volume group when they are created and are automatically
removed from this volume group when they are deleted.
When a host attachment object is used with open systems hosts, a host attachment object
that identifies the HBA is linked to a specific volume group. Define the volume group by
indicating the FB volumes that are to be placed in the volume group. Logical volumes can be
added to or removed from any volume group dynamically.
Important: Volume group management is available only with the DS CLI. In the DS GUI,
users define hosts and assign volumes to hosts. A volume group is defined in the
background. No volume group object can be defined in the DS GUI.
Two types of volume groups are used with open systems hosts. The type determines how the
logical volume number is converted to a host addressable LUN_ID in the Fibre Channel (FC)
SCSI interface. A SCSI map volume group type is used with FC SCSI host types that poll for
LUNs by walking the address range on the SCSI interface. This type of volume group can map any FB logical volume numbers to 256 LUN IDs whose first 2 bytes are in the range X’0000’–X’00FF’ and whose last 6 bytes are zeros.
A SCSI mask volume group type is used with FC SCSI host types that use the Report LUNs
command to determine the LUN IDs that are accessible. This type of volume group can allow
any FB logical volume numbers to be accessed by the host where the mask is a bitmap that
specifies the LUNs that are accessible. For this volume group type, the logical volume number
X’abcd’ is mapped to LUN_ID X’40ab40cd00000000’. The volume group type also controls
whether 512-byte block LUNs or 520-byte block LUNs can be configured in the volume group.
When a host attachment is associated with a volume group, the host attachment contains
attributes that define the logical block size and the Address Discovery Method (LUN Polling or
Report LUNs) that is used by the host HBA. These attributes must be consistent with the
volume group type of the volume group that is assigned to the host attachment. This
consistency ensures that HBAs that share a volume group have a consistent interpretation of
the volume group definition and have access to a consistent set of logical volume types.
FB logical volumes can be defined in one or more volume groups. This definition allows a
LUN to be shared by host HBAs that are configured to separate volume groups. An FB logical
volume is automatically removed from all volume groups when it is deleted.
Figure 3-13 shows the relationships between host attachments and volume groups. Host
AIXprod1 has two HBAs, which are grouped in one host attachment, and both HBAs are
granted access to volume group DB2-1. Most of the volumes in volume group DB2-1 are also
in volume group DB2-2, which is accessed by the system AIXprod2.
However, one volume in each group is not shared in Figure 3-13. The system in the lower-left
part of the figure has four HBAs, and they are divided into two distinct host attachments. One
HBA can access volumes that are shared with AIXprod1 and AIXprod2. The other HBAs have
access to a volume group that is called docs.
Figure 3-13 Host attachments and volume groups
When working with Open Systems clusters for defining volumes, use the Create Cluster
function of the DS Storage Manager GUI to easily define volumes. In general, the GUI hides
the complexity of certain DS8000 internal definition levels like volume groups, array sites, and
ranks. It helps save time by directly processing these definitions internally in the background
without presenting them to the administrator.
Figure 3-14 Concepts of virtualization (drives, array sites, arrays, ranks, extent pools, volumes, LSSs, and address groups)
This virtualization concept provides a high degree of flexibility. Logical volumes can be
dynamically created, deleted, and resized. They can also be grouped logically to simplify
storage management. Large LUNs and CKD volumes reduce the total number of volumes,
which contributes to the reduction of management effort.
Tip: The DS GUI helps save administration steps by handling some of these virtualization
levels in the background and automatically processing them for the administrator.
However, the CPCs are also redundant so that if either one fails, the system switches to the
remaining CPC and continues to run without any host I/O interruption. This section looks at
the RAS features of the CPCs, including the hardware, the operating system (OS), and the
interconnections.
The AIX OS uses PHYP services to manage the Translation Control Entry (TCE) tables. The
OS communicates the desired I/O bus address to logical mapping, and the PHYP returns the
I/O bus address to physical mapping within the specific TCE table. The PHYP needs a
dedicated memory region for the TCE tables to convert the I/O address to the partition
memory address. The PHYP then can monitor direct memory access (DMA) transfers to the
Peripheral Component Interconnect Express (PCIe) adapters.
The remainder of this section describes the RAS features of the Power9 processor. These
features and abilities apply to the DS8A00. You can read more about the Power9 and
processor configuration from the DS8A00 architecture point of view in 2.3.1, “IBM Power9
processor-based CPCs” on page 51.
With the instruction retry function, when an error is encountered in the core in caches and
certain logic functions, the Power9 processor first automatically retries the instruction. If the
source of the error was truly transient, the instruction succeeds and the system can continue
normal operation.
The L2 and L3 caches in the Power9 processor are protected with double-bit detect single-bit
correct error correction code (ECC). Single-bit errors are corrected before they are forwarded
to the processor, and then they are written back to L2 or L3.
In addition, the caches maintain a cache line delete capability. A threshold of correctable
errors that is detected on a cache line can result in purging the data in the cache line and
removing the cache line from further operation without requiring a restart. An ECC
uncorrectable error that is detected in the cache can also trigger a purge and delete of the
cache line.
For most faults, a good FFDC design means that the root cause is detected automatically
without intervention by an IBM Systems Service Representative (IBM SSR). Pertinent error
data that relates to the fault is captured and saved for analysis. In hardware, FFDC data is
collected from the fault isolation registers and the associated logic. In firmware, this data
consists of return codes, function calls, and other items.
FFDC check stations are carefully positioned within the server logic and data paths to ensure
that potential errors can be identified quickly and accurately tracked to a field-replaceable unit
(FRU).
This proactive diagnostic strategy is an improvement over the classic, less accurate restart
and diagnose service approach.
Redundant components
High opportunity components (those components that most affect system availability) are
protected with redundancy and the ability to be repaired concurrently.
Self-healing
For a system to be self-healing, it must be able to recover from a failing component by
detecting and isolating the failed component. The system is then able to take the component
offline, fix, or isolate it, and then reintroduce the fixed or replaced component into service
without any application disruption. Self-healing technology includes the following examples:
Chipkill, which is an enhancement that enables a system to sustain the failure of an entire
DRAM chip. The system can continue indefinitely in this state with no performance
degradation until the failed dual inline memory module (DIMM) can be replaced.
Single-bit error correction by using ECC without reaching error thresholds for main, L2,
and L3 cache memory.
L2 and L3 cache line delete capability, which provides more self-healing.
The memory bus between processors and the memory uses CRC with retry. The design also
includes a spare data lane so that if a persistent single data error exists, the faulty bit can be
“self-healed.” The Power9 busses between processors also have a spare data lane that can
be substituted for a failing one to “self-heal” the single bit errors.
A rank of four ISDIMMs contains enough DRAMs to provide 64 bits of data at a time, with enough check bits to correct for the failure of a single DRAM module after the bad DRAM is detected, and then to correct an additional faulty bit.
The ability to correct an entire DRAM is what IBM traditionally called Chipkill correction.
Correcting this kind of fault is essential in protecting against a memory outage and should be
considered as a minimum error correction for any modern server design.
The Power9 processors that are used in DS8A00 are designed internally for ISDIMMs without
an external buffer chip. The ECC checking is at the 64-bit level, so Chipkill protection is
provided with x4 DIMMs plus some additional sub Chipkill level error checking after a Chipkill
event.
The memory DIMMs also use hardware scrubbing and thresholding to determine when spare memory capacity within each bank of memory must be used to replace modules that exceeded their error-count threshold. Hardware scrubbing is the process of reading the
contents of the memory during idle time and checking and correcting any single-bit errors that
accumulated by passing the data through the ECC logic. This function is a hardware function
on the memory controller chip, and does not influence normal system memory performance.
The ability to use hardware accelerated scrubbing to refresh memory that might have
experienced soft errors is a given. The memory bus interface is also important. The direct
bus-attach memory that is used in the scale-out servers supports RAS features in that design,
including register clock driver (RCD) parity error detection and retry.
Fault masking
If corrections and retries succeed and do not exceed threshold limits, the system remains
operational with full resources, and no external administrative intervention is required.
Figure 4-1 shows the redundant PCIe fabric design for XC communication in the DS8A00 and
depicts the single-chip modules (SCMs) (SCM #0 and SCM#1) in each CPC. If the I/O
enclosure that is used as the XC communication path fails, the system automatically uses an
available alternative I/O enclosure for XC communication.
Figure 4-1 DS8A00 XC communication through the PCIe fabric and I/O enclosures
Voltage monitoring provides a warning and an orderly system shutdown when the voltage is
out of the operational specification range.
More monitoring support can be found by running the DS CLI showsu command and viewing the added Energy Report (ER) fields (ER Test Mode, ER Recorded, ER Power Usage, ER Inlet Temp, ER I/O Usage, and ER Data Usage), as shown in Example 4-1.
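For reference, a minimal invocation might look like the following line, in which the storage unit ID is a placeholder:
dscli> showsu IBM.2107-75ABCD0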
All DS8A00 configurations use 1/16th of system memory for NVS, except for the largest DS8A50
Model A05 with 3.5 TB of total system memory, which uses 192 GB of NVS. NVS contains
write data until the data is destaged from cache to the drives. NVS data is protected and kept
by non-volatile dual inline memory module (NVDIMM) technology, where the data is moved
from DRAM to a flash memory on the NVDIMM modules if the DS8A00 experiences a
complete loss of input AC power.
When a write is sent to a volume and both the nodes are operational, the write data is placed
into the cache memory of the owning node and into the NVS of the other CPC. The NVS copy
of the write data is accessed only if a write failure occurs and the cache memory is empty or
possibly invalid. Otherwise, the NVS copy of the write data is discarded after the destaging
from cache to the drives is complete.
The location of write data when both CPCs are operational is shown in Figure 4-2 on
page 111, which shows how the cache memory of node 0 in CPC0 is used for all logical
volumes that are members of the even logical subsystems (LSSs). Likewise, the cache
memory of node 1 in CPC1 supports all logical volumes that are members of odd LSSs. For
every write that is placed into cache, a copy is placed into the NVS memory that is in the
alternate node. Therefore, the following normal flow of data for a write when both CPCs are
operational is used:
1. Data is written to cache memory in the owning node. At the same time, data is written to
the NVS memory of the alternate node.
2. The write operation is reported to the attached host as complete.
3. The write data is destaged from the cache memory to a drive array.
4. The write data is discarded from the NVS memory of the alternate node.
Figure 4-2 NVS write data when both CPCs are operational
Under normal operation, both DS8A00 nodes are actively processing I/O requests. The
following sections describe the failover and failback procedures that occur between the CPCs
when an abnormal condition affects one of them.
4.2.2 Failover
In the example that is shown in Figure 4-3, CPC0 failed. CPC1 must take over all of the CPC0
functions. All storage arrays are accessible by both CPCs.
Figure 4-3 CPC0 failover to CPC1
1. Node 1 destages the contents of its NVS to the drive arrays.
2. The NVS and cache of node 1 are divided in two portions, one for the odd LSSs and one
for the even LSSs.
3. Node 1 begins processing the I/O for all the LSSs, taking over for node 0.
This entire process is known as a failover. After failover, the DS8A00 operates as shown in
Figure 4-3 on page 111. Node 1 now owns all the LSSs, which means all reads and writes are
serviced by node 1. The NVS inside node 1 is now used for both odd and even LSSs. The
entire failover process is transparent to the attached hosts.
The DS8A00 can continue to operate in this state indefinitely. No functions are lost, but the
redundancy is lost, and performance is decreased because of the reduced system cache.
Any critical failure in the working CPC renders the DS8A00 unable to serve I/O for the arrays,
so the IBM Support team begins work immediately to determine the scope of the failure and
build an action plan to restore the failed CPC to an operational state.
4.2.3 Failback
The failback process is initiated when the DS8A00 determines that the failed CPC has been
repaired and is ready to resume operations. The DS8A00 then automatically starts the
resume action, or if manual intervention is required, an IBM SSR or remote support engineer
will initiate the failback process.
This example, in which CPC0 failed, assumes that CPC0 was repaired and resumed. The failback begins with node 1 in CPC1 starting to use the NVS in node 0 in CPC0 again, and
the ownership of the even LSSs being transferred back to node 0. Normal I/O processing,
with both CPCs operational, then resumes. Just like the failover process, the failback process
is transparent to the attached hosts.
In general, recovery actions (failover or failback) on the DS8A00 do not affect I/O operation
latency by more than 8 seconds.
If you require real-time response in this area, contact IBM to determine the latest information
about how to manage your storage to meet your requirements.
During normal operation, the DS8A00 preserves write data by storing a duplicate copy in the
NVS of the alternative CPC. To ensure that write data is not lost during a power failure event,
the DS8A00 stores the NVS contents on non-volatile DIMMs (NVDIMMs). Each CPC contains
up to four NVDIMMs with dedicated Backup Power Modules (BPMs). The NVDIMMs act as
regular DRAM during normal operation. During AC power loss, the BPMs provide power to
the NVDIMM modules until they have moved all modified data (NVS) to integrated flash
memory. The NVDIMM save process is autonomous, and requires nothing from the CPC.
Important: DS8A00 can tolerate a power line disturbance (PLD) for up to 20 ms. A PLD
that exceeds 20 ms on both power domains initiates an emergency shutdown.
The following sections describe the steps that occur when AC input power is lost to both
power domains.
Power loss
When a wall power loss condition occurs, the following events occur:
1. All host adapter I/O is blocked.
2. Each NVDIMM begins copying its NVS data to the internal flash partition.
3. The system powers off without waiting for the NVDIMM copy operation.
4. The copy process continues and completes independently of the storage system's power.
Power restored
When power is restored, the DS8A00 must be powered on manually unless the remote power
control mode is set to automatic.
Note: Be careful if you decide to set the remote power control mode to automatic. With this setting, the DS8A00 is powered on automatically as soon as input power is restored.
For more information about how to set power control on the DS8A00 system, see IBM
Documentation.
The DS8A00 I/O enclosures use adapters with PCIe connections. The adapters in the I/O
enclosures are concurrently replaceable. Each slot can be independently powered off for
installation, replacement, or removal of an adapter.
In addition, each I/O enclosure has 2N power and cooling redundancy in the form of two
PSUs with integrated fans, and four double-stacked enclosure cooling fans. The PSUs and
enclosure fans can be replaced concurrently without disruption to the I/O enclosure.
Important: For host connectivity, hosts that access the DS8A00 must have at least two
connections to I/O ports on separate host adapters in separate I/O enclosures.
A more robust design is shown in Figure 4-5 on page 115, in which the host is attached to
separate FC host adapters in separate I/O enclosures. This configuration is also important
because during a LIC update, a host adapter port might need to be taken offline. This
configuration allows host I/O to survive a hardware failure on any component on either path.
Figure 4-4 A host that is connected through a single HBA to host adapters in separate I/O enclosures
Figure 4-5 A host that is connected through two HBAs to host adapters in separate I/O enclosures
A logic or power failure in a switch or director can interrupt communication between hosts and
the DS8A00. Provide more than one switch or director to ensure continued availability.
Configure ports from two separate host adapters in two separate I/O enclosures to go through
each of two directors. The complete failure of either director leaves the paths that are
configured to the alternative director still available.
When data is read, the DIF is checked before the data leaves the DS8A00 and again when
the data is received by the host system. Previously, it was possible to ensure the data integrity
within the storage system only with ECC. However, T10 DIF can now check end-to-end data
integrity through the SAN. Checking is done by hardware, so no performance impact occurs.
For more information about T10 DIF implementation in the DS8A00, see “T10 Data Integrity
Field support” on page 84.
To provide more proactive system diagnosis information about SAN fabric systems, the Read
Diagnostic Parameters (RDP) function, which complies with industry standards, is
implemented on the DS8A00. This function provides host software with the capability to
perform predictive failure analysis (PFA) on degraded SAN links before they fail.
When troubleshooting SAN errors, the IBM SSR can run a wrap test on a single host adapter
port without taking the entire adapter offline.
Multipathing software
Each attached host OS requires multipathing software to manage multiple paths to the same
device, and to provide redundant routes for host I/O requests. When a failure occurs on one
path to a logical device, the multipathing software on the attached host can identify the failed
path and route the I/O requests for the logical device to alternative paths. Furthermore, it can
likely detect when the path is restored. The multipathing software that is used varies by
attached host OS and environment, as described in the following sections.
Open systems
In most open systems environments, multipathing is available at the OS level. The Subsystem
Device Driver (SDD), which was provided and maintained by IBM for several OSs, is now an
obsolete approach for a multipathing solution.
For AIX OS, the DS8000 is supported through the AIX multipath I/O (MPIO) framework, which
is included in the base AIX OS. Use the base AIX Multipath I/O Path Control Module
(AIXPCM) support instead of the old SDDPCM.
For multipathing under Microsoft Windows, the DS8000 is supported by the native Microsoft
MPIO stack by using Microsoft Device Specific Module (MSDSM). Existing environments that
rely on the old SDDDSM should be moved to the native OS driver.
Note: To move existing SDDPCM and SDDDSM implementations, see the following
resources:
How To Migrate SDDPCM to AIXPCM
Migrating from SDDDSM to Microsoft MSDSM - SVC/Storwize
For all newer versions of RHEL and SUSE Linux Enterprise Server, the native Linux
multipathing driver, Device-Mapper Multipath (DM Multipath), is used.
Also, on the VMware vSphere ESXi server, the VMware Native Multipathing Plug-in (NMP) is
the supported multipathing solution.
For more information about the multipathing software that might be required for various OSs,
see the IBM System Storage Interoperation Center (SSIC).
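As a quick reference, path status can typically be verified with the native tools of each OS, for example with commands like the following sketch (device names are placeholders, and options vary by OS level):
AIX (AIXPCM):         lspath -l hdisk2
Linux (DM Multipath): multipath -ll
Windows (MSDSM):      mpclaim -s -d
VMware ESXi (NMP):    esxcli storage nmp device list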
IBM Z
In the IBM Z environment, a best practice is to provide multiple paths from each host to a
storage system. Typically, four or eight paths are configured. The channels in each host that
can access each logical control unit (LCU) in the DS8A00 are defined in the hardware
configuration definition (HCD) or input/output configuration data set (IOCDS) for that host.
Dynamic Path Selection (DPS) allows the channel subsystem to select any available
(non-busy) path to start an operation to the disk subsystem. Dynamic Path Reconnect (DPR)
allows the DS8A00 to select any available path to a host to reconnect and resume a
disconnected operation, for example, to transfer data after disconnection because of a cache
miss.
These functions are part of the IBM z/Architecture®, and are managed by the channel
subsystem on the host and the DS8A00.
A physical FICON path is established when the DS8A00 port sees light on the fiber, for
example, a cable is plugged in to a DS8A00 host adapter, a processor or the DS8A00 is
powered on, or a path is configured online by z/OS. Logical paths are established through the
port between the host and some or all of the LCUs in the DS8A00, as controlled by the HCD
definition for that host. This configuration happens for each physical path between an IBM Z
host and the DS8A00. Multiple system images can be in a CPU. Logical paths are established
for each system image. The DS8A00 then knows the paths that can be used to communicate
between each LCU and each host.
CUIR is available for the DS8A00 when it operates in the z/OS and IBM z/VM environments.
CUIR provides automatic channel path vary on and vary off actions to minimize manual
operator intervention during selected DS8A00 service actions.
CUIR also allows the DS8A00 to request that all attached system images set all paths that
are required for a particular service action to the offline state. System images with the correct
level of software support respond to such requests by varying off the affected paths, and then
either notifying the DS8A00 that the paths are offline or indicating that they cannot take the
paths offline. CUIR reduces manual operator intervention and the possibility of human error during
maintenance actions and reduces the time that is required for the maintenance. This function
is useful in environments in which many z/OS or z/VM systems are attached to a DS8A00.
The metadata check is independent of the DS8A00 T10 DIF support for FB volumes. For
more information about T10 DIF implementation in the DS8000, see “T10 Data Integrity Field
support” on page 84.
The HMC is the DS8A00 management focal point. If no HMC is operational, it is impossible to
run maintenance, modifications to the logical configuration, or Copy Services (CS) tasks,
such as the establishment of FlashCopy backups, Metro Mirror (MM) or Global Mirror (GM),
by using the DS Command-line Interface (DS CLI), Storage Management GUI, or IBM Copy
Services Manager. The implementation of a secondary HMC provides a redundant
management focal point and is especially important if CS or Encryption Key Manager (EKM)
are used.
The DS8A00 CPCs have an OS (AIX) and Licensed Machine Code (LMC) that can be
updated. As IBM continues to develop and improve the DS8A00, new releases of firmware
and LMC become available that offer improvements in function and reliability. For more
information about LIC updates, see Chapter 11, “Licensed Machine Code” on page 385.
Call Home
Call Home is the capability of the DS8A00 to notify the client and IBM Support to report a
problem. Call Home is configured in the HMC at installation time. Call Home to IBM Support
is done over the customer network through a secure protocol. Customer notifications can also
be configured as email (SMTP) or Simple Network Management Protocol (SNMP) alerts. An
example of an email notification output is shown in Example 4-2.
For more information about planning the connections that are needed for HMC installations,
see Chapter 6, “IBM DS8A00 Management Console planning and setup” on page 161.
For more information about setting up SNMP notifications, see Chapter 12, “Monitoring and
support” on page 403.
Remote support
Remote support provides the ability of IBM Support personnel to remotely access the
DS8A00. This capability can be configured at the HMC, and access is through Assist On-site
(AOS) or by IBM Remote Support Center (RSC).
For more information about remote support operations, see Chapter 12, “Monitoring and
support” on page 403.
For more information about AOS, see IBM Assist On-site for Storage Overview, REDP-4889.
Note: Due to the added resiliency of RAID 6, RAID 5 is no longer supported on DS8A00
Models.
IBM Storage Modeler (StorM) is an easy-to-use web tool that is available only to IBM
personnel and Business Partners to help with capacity planning for physical and usable
capacities that are based on installation drive capacities and quantities in intended RAID
configurations.
RAID 6 is the default when creating new arrays by using the DS Storage Manager GUI.
For the latest information about supported RAID configurations and to request an RPQ /
SCORE, contact your IBM SSR.
Each flash drive has two separate connections to the enclosure backplane. This configuration
allows a flash drive to be simultaneously attached to both NVMe expander switches. If either
ESM is removed from the enclosure, the NVMe expander switch in the remaining ESM retains
the ability to communicate with all the flash drives and both flash RAID controllers in the DA
pair. Similarly, each DA has a path to each switch, so it can also tolerate the loss of a single
path. If both paths from one DA fail, it cannot access the switches. However, the partner DA
retains connectivity to all drives in the enclosure pair.
For more information about the drive subsystem of the DS8A00, see 2.5, “Flash drive
enclosures” on page 62.
The arrays are balanced between the flash enclosures to provide redundancy and
performance. Both flash RAID controllers can access all arrays within the DA pair. Each
controller in a DA pair is installed in different I/O enclosures, and each has allegiance to a
different CPC.
The DS8A00 introduces a dual-active design that allows each node to connect directly to both
adapters in a pair. This setup boosts performance and fault tolerance by ensuring that if one
adapter fails, I/O can be redirected locally without needing to switch arrays or change roles.
This design not only reduces disruptions during software updates but also enhances how
adapter failures are managed, especially in single CEC mode.
If ECC detects correctable bad bits, the bits are corrected immediately. This ability reduces
the possibility of multiple bad bits accumulating in a block beyond the ability of ECC to correct
them. If a block contains data that is beyond ECC’s ability to correct, RAID is used to
regenerate the data and write a new copy onto a spare block or cell of the flash drive. This
scrubbing process applies to flash drives that are array members and spares.
Data scrubbing can proactively relocate data, which reduces the probability of data reread
impact. Data scrubbing does this relocation before errors increase to a level beyond error
correction capabilities.
Important: RAID 6 is now the default and preferred setting for the DS8A00. RAID 10
continues to be an option for all-flash drive types.
The DS8A00 uses the idea of rotating parity, which means that no single drive in an array is
dedicated to holding parity data (which would make that drive active in every I/O operation).
Instead, the drives in an array rotate between holding data stripes and holding parity stripes,
which balances the activity level of all drives in the array.
Spare drives
An HPFE Gen3 pair in a DS8A00 can contain up to six array sites. Each array site contains
eight flash drives, and the HPFE Gen3 pair has two spare flash drives for each enclosure pair.
The first two array sites on a flash RAID controller (DA) pair each have a spare assigned; the
remaining array sites have no spare assigned if all flash drives are the same capacity. The
number of required spare drives per flash enclosure pair applies to all available
RAID levels.
Note: RAID 6 is the default and preferred array configuration in DS8A00 systems.
RAID 6 reduces the risk of data loss by roughly a factor of 1,000 compared to RAID 5. RAID 6
allows more fault tolerance by using a second independent distributed parity scheme (dual
parity). Data is striped on a block level across a set of drives while the second set of parity is
calculated and written across all the drives, and allows reconstruction of the data even when
two drives fail. The striping is shown in Figure 4-8 on page 125.
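As a generic illustration of dual parity (this is standard RAID 6 theory, not a description of the
DS8A00 internal encoding), the two parity values for a stripe with data blocks D1 through Dn
can be expressed as follows:
P = D1 XOR D2 XOR ... XOR Dn
Q = (g1 x D1) XOR (g2 x D2) XOR ... XOR (gn x Dn), where the gi coefficients and the
multiplications are defined over a Galois field
Because P and Q are computed independently, any two missing blocks in a stripe (two data
blocks, one data block and one parity block, or both parity blocks) can be reconstructed from
the surviving blocks.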
RAID 6 is best used with large-capacity drives because they have a longer rebuild time. One
risk is that longer rebuild times increase the possibility that a second drive error occurs during
the rebuild window.
When RAID 6 is sized correctly for the I/O demand, it is a considerable reliability
enhancement, as shown in Figure 4-8.
During the rebuilding of the data on the new drive, the DA can still handle I/O requests from
the connected hosts to the affected array. Performance degradation might occur during the
reconstruction because DAs and path resources are used to do the rebuild. Because of the
dual-path architecture of the DS8A00, this effect is minimal. Additionally, any read requests
for data on the failing drive require data to be read from the other drives in the array, and then
the DA reconstructs the data.
Any subsequent failure during the reconstruction within the same array (second drive failure,
second coincident medium errors, or a drive failure and a medium error) can be recovered
without data loss.
Performance of the RAID 6 array returns to normal when the data reconstruction on the spare
drive is complete. The rebuild time varies, depending on the capacity of the failed drive and
the workload on the array and the DA. The completion time is comparable to a RAID 5 rebuild,
but slower than rebuilding a RAID 10 array after a single drive failure.
RAID 10 offers faster data reads and writes than RAID 6 because it does not need to manage
parity. However, with half of the drives in the group used for data and the other half mirroring
that data, RAID 10 arrays have less usable capacity than RAID 6 arrays.
RAID 10 is commonly used for workloads that require the highest performance from the drive
subsystem. With RAID 6, each front-end random write I/O might theoretically lead to six
back-end I/Os, including the parity updates (RAID penalty), but this number is only two for
RAID 10 (not counting cache optimizations). A typical use case for RAID 10 is for workloads
with a high random-write ratio. Either member in the mirrored pair can respond to the read
requests.
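As a simplified worked example (ignoring cache hits, coalescing, and other destage
optimizations), assume 1,000 random small-block front-end write IOPS:
RAID 6: 1,000 x 6 = 6,000 back-end I/Os (for each write: read the old data, read the old P and
Q parity, then write the new data and the new P and Q parity)
RAID 10: 1,000 x 2 = 2,000 back-end I/Os (write the data block and its mirror)
The DS8A00 write cache reduces the effective penalty in practice, but the relative difference
between the two RAID levels remains.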
Performance of the RAID 10 array returns to normal when the data copy onto the spare drive
completes. The time that is taken for rebuild can vary, depending on the capacity of the failed
drive and the workload on the array and the DA.
Compared to RAID 6, RAID 10 rebuild completion time is faster because rebuilding a RAID 6
array requires several reads on the remaining stripe units plus two parity operations for each
write. In contrast, a RAID 10 configuration requires only one read and one write (essentially, a
direct copy).
The suspect drive and the new member-spare are set up in a temporary RAID 1 association,
allowing the troubled drive to be duplicated onto the spare rather than running a full RAID
reconstruction (rebuild) from data and parity. The new member-spare is then made a regular
member of the array and the suspect drive is rejected from the RAID array. The array never
goes through an n-1 stage in which it might suffer a complete failure if other drives in this
array encounter errors. The result saves substantial time and provides a new level of
availability that is not available in other RAID products.
Smart Rebuild is not applicable in all situations, so it is not always used. Smart Rebuild runs
only for healthy RAID arrays. If two drives with errors are in a RAID 6 configuration, or if the
drive mechanism failed to the point that it cannot accept any I/O, the standard RAID rebuild
procedure is used for the RAID array. If communications across a drive fabric are
compromised, such as an NVMe path link error that causes the drive to be bypassed,
standard RAID rebuild procedures are used because the suspect drive is not available for a
one-to-one copy with a spare. If Smart Rebuild is not possible or cannot complete, a standard
RAID rebuild occurs.
Drive error patterns are continuously analyzed as part of the scheduled tasks that are run by
the DS8A00 LIC. Drive firmware is optimized to report predictive errors to the DA. At any time,
when certain drive errors (following specific criteria) reach a specified threshold, the RAS LIC
component starts Smart Rebuild within the hour. This enhanced technique, when it is
combined with a more frequent schedule, leads to considerably faster identification of drives
showing signs of imminent failure.
Smart Rebuild is also used to proactively rebalance member and spare drive distribution
between the paths in a DA pair. Also, if a DA pair has a mix of different capacity flash drives, a
larger spare might, sometimes, be taken by a smaller drive array. Smart Rebuild corrects this
situation after the failing drives are replaced, and returns the larger drive to the spare pool.
DS8000 Release 9.1 code provided an enhancement of the rebuild process by avoiding the
rebuild of areas that are not mapped to logical volumes.
This process is performed by running a status command to the drives to determine whether
the parity stripe is unmapped. This process prevents unnecessary writes (P/E cycles) of
zeroed data to the target drive in a rebuild, allowing faster rebuild for partially allocated RAID
arrays.
IBM SSRs and remote support can manually initiate a Smart Rebuild if needed, such as when
two drives in an array are logging temporary media errors.
A minimum of one spare is created for each array site that is assigned to an array until the
following conditions are met:
A minimum of two spares per DA pair exist.
A minimum of two spares for the largest capacity array site on the DA pair exist.
Spare rebalancing
The DS8A00 implements a spare rebalancing technique for spare drives. When a drive fails
and a hot spare is taken, it becomes a member of that array. When the failed drive is repaired,
DS8A00 LIC might choose to allow the hot spare to remain where it was moved. However, it
can instead choose to move the spare to a more optimum position. This migration is
performed to better balance the spares across the two dual flash enclosure paths to provide
the optimum spare location based on drive capacity and spare availability.
In a flash drive intermix on a DA pair, it is possible to rebuild the contents of a smaller flash
drive onto a larger spare drive. When the failed flash drive is replaced with a new drive, the
DS8A00 LIC moves the data back onto the recently replaced drive.
When this process completes, the smaller flash drive rejoins the array, and the larger drive
becomes a spare again.
Hot-pluggable drives
Replacing a failed flash drive does not affect the operation of the DS8A00 system because
the drives are fully hot-pluggable. Each drive plugs into an NVMe expander switch, so no path
break is associated with the removal or replacement of a drive. In addition, no potentially
disruptive loop initialization process occurs.
All power and cooling components that constitute the DS8A00 power subsystem are fully
redundant. The key element that allows this high level of redundancy is a dual power domain
configuration that is formed of iPDU pairs. Dual PSUs in all major components provide a 2N
redundancy for the system.
Combined with the NVDIMMs and the BPMs, which preserve the NVS write cache, the design
protects the storage system in an input power failure.
The BPMs in each of the CPCs provide power to complete the movement of write data from
cache memory to non-volatile flash storage if an input power loss occurs in both power
domains (as described in 4.2.4, “NVS and power outages” on page 113).
The CPCs, I/O enclosures, and flash enclosures in the frame all feature duplicated PSUs.
The iPDUs are firmware upgradeable, and they are controlled and managed by the HMCs
through their Ethernet interfaces.
For more information, see 2.6.1, “Intelligent Power Distribution Units” on page 64.
iPDUs support high or low voltage three-phase, and low-voltage single-phase input power.
The correct power cables must be used. For more information about power cord Feature
Codes, see IBM Storage DS8000 10.0 Introduction and Planning Guide, G10-I53-00.
The Backup Power Modules (BPMs) provide the power for this emergency copy process of
the NVDIMMs. They are firmware upgradeable. The condition of the BPMs is continually
monitored by the CPC FSP. The BPMs have fast charge times that ensure that an empty BPM
is charged and fully operational during the IML phase of the system when the storage system
powers on so that no SPoF occurs. For more information, see 2.6.3, “Backup Power Modules
and NVDIMM” on page 68.
The DS8A00 BPMs have a 5-year lifetime. If a BPM must be replaced, the containing CPC
must be set to service mode and shut down, which starts a failover of all operations to the
other CPC. Because of the high resilience of the system, the remaining CPC keeps the whole
storage facility operable and in production servicing all I/Os. As a best practice, replacement
should be done in a scheduled service window to avoid reduced performance and
redundancy during peak workload hours. As the BPM is monitored, sufficient warning is given
to schedule the service action.
Each flash drive enclosure power supply plugs into two separate iPDUs, which must each be
supplied by redundant independent customer power feeds.
Figure 4-10 shows the power control settings window of the Storage Management GUI.
Figure 4-10 DS8A00 modify power control settings from the Storage Management GUI
Note: The Ethernet switches that are used internally in DS8A00 are for private network
communication only. No external connection to the private networks is allowed. Client
connectivity to the DS8A00 is allowed only through the provided external customer HMC
Ethernet connectors (eth2 and eth1) at the rear of the base frame.
Storage system frames with this optional seismic kit include hardware at the bottom of the
frame that secures it to the floor. Depending on the flooring in your environment (specifically,
non-raised floors), installation of the required floor mounting hardware might be disruptive.
This kit must be special-ordered for the DS8A00. The kit is not available for the
rack-mountable DS8A10 model A00. For more information, contact your IBM SSR.
The storage system also overwrites areas that are not normally accessible and are used only
internally by the drive.
NVDIMM
The NVDIMMs are cleared by applying a single-pass overwrite in accordance with NIST
SP-800-88R1. This process is run in parallel on both CPCs.
Process overview
The SDO process is summarized in these steps:
1. After the logical configuration is removed, SDO is started from the primary HMC.
2. The primary HMC performs a dual cluster restart.
3. The crypto-erase and format of the flash drives is started.
4. The overwrite of the CPC hard disk drives (HDDs) is started in parallel (with each other
and with the preceding steps).
5. The overwrite of the secondary HMC is started in parallel.
6. Both CPCs are restarted and the NVDIMMs are cleared.
7. After the overwrite of the CPC and secondary HMC HDDs is complete, the primary HMC
HDD is overwritten.
8. The certificate is generated.
Secure Data Overwrite Service for the IBM System Storage DS8A00
Certificate of Completion
2. IBM performed such Secure Data Overwrite Service as set forth herein
In all cases, the successful completion of all erasure commands is a prerequisite for successful erasure.
Flash modules (shown as 2.5" FLASH-FDE) were PURGED in accordance with NIST SP-800-88R1 for flash-based media, by
issuing the sanitize command, which performs a crypto erase followed by a block overwrite.
NVDIMM NAND Flash blocks were CLEARED in accordance with NIST SP-800-88R1 for flash-based media, by applying a
single overwrite pattern of 0x00. After the blocks were cleared, the data was read back to verify that the contents
were erased. The overwrite and verification were performed by using vendor-provided tools and methods.
CPC drives were CLEARED in accordance with NIST SP-800-88R1 for magnetic disks, by applying a single-pass overwrite
pattern of 0x00. Random samples, the first two sectors, and the last 10000 sectors were read back and verified to
match the data written.
HMC flash-based media drives were NOT securely erased. These devices do not contain customer data, but the partition
containing all trace data and diagnostic dumps was overwritten with a single-pass overwrite pattern of 0x00.
Scope
==================
This report covers the secure data overwrite service that is performed on the
DS8A00 storage system with the serial number 75NHxxx
The Drive Types Table provides information about each drive type that is installed on the DS8000 system.
a) Drive Type: This identifies the drive as a solid-state, full disk encryption drive, that is, 2.5"
FLASH-FDE.
b) Drive block type: This identifies that the drive block consists of 528 bytes.
c) Drive Capacity: This identifies the specified drive type's capacity in GB.
This section covers the devices that are used to store customer data (and associated metadata) both of which are
subject to erasure.
Disk Type - All these devices are flash memory-based and are labeled as FLASH-FDE
Disk Serial# - Manufacturer assigned serial number visible on the device case
WWNN - Device WWNN
Drive Location - Rack, Enclosure, and slot where the device is installed.
Overwrite Status - The success or failure of the overwrite operation
Sector Defect Count - Always zero for these devices.
This section covers the devices on the processors that are used to store the operating system,
configuration data and trace data on the Central Processor Complex (CPC) servers.
--------------------------------------------------------------------------------
| Processor | hdisk | CPC Drive | Overwrite | Completion |
| Complex # | Number | Serial Number | Status | Date |
--------------------------------------------------------------------------------
| CPC 0 | hdisk0 | WAE1045Q | Successful | 2021/03/09 19:11:46 |
| CPC 0 | hdisk1 | WAE10N39 | Successful | 2021/03/09 20:10:31 |
| CPC 1 | hdisk0 | 0TJ5SJLP | Successful | 2021/03/09 19:13:16 |
| CPC 1 | hdisk1 | WAE104DZ | Successful | 2021/03/09 20:12:32 |
--------------------------------------------------------------------------------
This section covers the drives on the Management Consoles (HMCs).
As noted above, these devices were NOT erased; only the partition containing logs
and dumps was deleted.
HMC Type - Indicates whether this is the first or optional second HMC
HMC Drive Serial Number - Manufacturer assigned serial number visible on the device case
Overwrite Status - The success or failure of the overwrite operation
Completion Date - Completion timestamp
HMC Drive Type - Always SSD for these systems
--------------------------------------------------------------------------------------------------------
| HMC Type | HMC Drive Serial Number | SDO Results | Completion Date | HMC Drive Type |
| | | | | Hard Disk Drive/ |
| | | | | SSD |
--------------------------------------------------------------------------------------------------------
| First Management | N/A | Successful | 25/09/24-06:33:51 | SSD |
| Console | | | | |
--------------------------------------------------------------------------------------------------------
|Secondary Management| N/A | Successful | 25/09/24-03:49:39 | SSD |
| Console | | | | |
--------------------------------------------------------------------------------------------------------
This section covers the devices that are used to store customer data when the system goes through an emergency
power off, and the devices are subject to erasure.
NVDIMM Location Code - Central Processor Complex (CPC) and slot where the device is installed
Serial Number - Manufacturer assigned serial number visible on the device
NVDIMM Capacity - Device capacity in GB
Overwrite Status - The success or failure of the overwrite operation
Completion Date - Completion timestamp
For more information about the configuration and installation process, see IBM Storage
DS8000 10.0 Introduction and Planning Guide, G10-I53-00.
Important: The IBM DS8A50 system can support an expansion frame that can be
installed adjacent to the base frame or 20 meters away from it. (Feature Code AB01 is
needed.)
Consider location suitability, floor loading, access constraints, elevators, and doorways.
Analyze power requirements, such as redundancy and using an uninterruptible power
supply (UPS).
Examine environmental requirements, such as adequate cooling capacity.
Full Disk Encryption (FDE) drives are a standard feature for the DS8A00. If encryption
activation is required, consider either local or external key management. If external, then
consider the location and connection needs for the external key servers, such as IBM
Security Key Lifecycle Manager or Gemalto SafeNet KeySecure servers.
Consider the integration of Lightweight Directory Access Protocol (LDAP) to allow
centralized user ID and password management. LDAP can be configured from the Storage
Management GUI, as described in 6.6.2, “Remote authentication” on page 179.
Call Home through a Secure Sockets Layer (SSL) installation to provide a continued
secure connection to the IBM Support center.
Consider connecting to IBM Storage Insights that can help you predict and prevent
storage problems before they impact your business.
Plan for logical configuration, Copy Services (CS), and staff education. For more
information, see Chapter 8, “DS8000 G10 configuration flow” on page 213.
IBM Services® also can apply or modify your logical configuration, which is a fee-based
service.
5.1.2 Participants
A project manager must coordinate the many tasks that are necessary for a successful
installation. Installation requires close cooperation with the user community, IT support staff,
and technical resources that are responsible for floor space, power, and cooling.
A storage administrator must also coordinate requirements from the user applications and
systems to build a storage plan for the installation. This plan is needed to configure the
storage after the initial hardware installation is complete.
The following people must be briefed and engaged in the planning process for the physical
installation:
Systems and storage administrators
Installation planning engineer
Building engineer for floor loading, air conditioning, and electrical considerations
Security engineers for AOS, LDAP, key servers, and encryption
Administrator and operator for monitoring and handling considerations
IBM Systems Service Representative (IBM SSR) or IBM Business Partner
Table 5-1 lists the final packaged dimensions and maximum packaged weight of the DS8A00
storage unit ship group. The maximum packaged weight is the maximum weight of the frame
plus the packaging weight.
IBM DS8A10 model A01: Height 2.22 m (87.7 in.), Width 1.0 m (39.4 in.), Depth 1.50 m (59.1 in.);
maximum packaged weight 924 kg (2037 lb)
DS8A50 model A05: Height 2.22 m (87.7 in.), Width 1.0 m (39.4 in.), Depth 1.50 m (59.1 in.);
maximum packaged weight 924 kg (2037 lb)
Expansion Frame model E05: Height 2.22 m (87.7 in.), Width 1.0 m (39.4 in.), Depth 1.50 m (59.1 in.);
maximum packaged weight 603 kg (1330 lb)
For the maximum weight of the various DS8A00 models, see Table 5-1.
Important: Verify with the building engineer or other appropriate personnel to ensure that
the floor loading is correctly considered.
For more information about floor loading and weight distribution, see IBM Storage DS8000
(10th Generation) 10.0 Introduction and Planning Guide, SC27-9560-09.
Power connectors
Each DS8A00 base and expansion frame features redundant intelligent Power Distribution
Unit (iPDU) rack systems. The base frame can have 2–4 power cords, and the expansion
frame has two power cords.
Attach the power cords to each frame to separate AC power distribution systems. For more
information about power connectors and power cords, see IBM Storage DS8000 (10th
Generation) 10.0 Introduction and Planning Guide, SC27-9560-09.
Input voltage
When you plan the power requirements of the storage system, consider the input voltage
requirements. Table 5-3 and Table 5-4 show the DS8A00 input voltages and frequencies.
Table 5-5 Power consumption and environmental information (fully equipped frames)
The values represent data that was obtained from the following configured systems:
Models A01, A05, and E01 that contained four fully configured High Performance Flash
Enclosure Gen3 pairs and 16 Fibre Channel adapters.
All frames and configurations were used in single-phase mode and did not exceed 8 kVA.
DS8A00 cooling
Air circulation for the DS8A00 is provided by the various fans that are installed throughout the
frame. All of the fans in the DS8A00 system direct air flow from the front of the frame to the
rear of the frame. No air exhausts out of the top of the frame.
The use of such directional air flow allows cool aisles to the front and hot aisles to the rear of
the systems, as shown in Figure 5-4.
The operating temperature for the DS8A00 is 16–32 °C (60–90 °F) at relative humidity limits
of 20%–80% and optimum at 45%.
Table 5-6 shows the minimum and maximum numbers of host adapters that are supported by
the DS8A00.
The FC and FICON shortwave (SW) host adapter, when it is used with 50 μm multi-mode
fiber cable, supports point-to-point distances. For more information about cable limits, see
Table 5-7.
The FC and FICON longwave (LW) host adapter, when it is used with 9 μm single-mode fiber
cable, extends the point-to-point distance to 10 km (6.2 miles).
Different fiber optic cables with various lengths can be ordered for each FC adapter port.
Table 5-8 lists the fiber optic cable features for the FCP/FICON adapters.
For more information about host types, models, adapters, and operating systems (OSs) that
are supported by the DS8A00, see the IBM System Storage Interoperation Center (SSIC) for
DS8000.
zHyperLink does not replace zHPF. It works in cooperation with it to reduce the workload on
zHPF. zHyperLink provides a new PCIe connection. The physical number of current zHPF
connections is not reduced by zHyperLink.
On the DS8A00, the number of zHyperLink ports that can be installed varies, depending on
the number of cores per CPC that are available and the number of I/O bay enclosures.
The number of zHyperLinks that can be installed based on the number of cores available is
listed in Table 5-9.
Each zHyperLink connection requires a zHyperLink I/O adapter to connect the zHyperLink
cable to the storage system, as shown in Table 5-10 and Table 5-11.
With host adapters that are configured as FC protocols, the DS8A00 provides the following
configuration capabilities:
A maximum of 128 FC ports.
A maximum of 509 logins per FC port, which includes host ports and Peer-to-Peer Remote
Copy (PPRC) target and initiator ports.
Access to 63750 logical unit numbers (LUNs) for each target (one target for each host
adapter), depending on the host type.
Either switched-fabric (FC-SW), or point-to-point topologies.
The adapters do not support arbitrated loop topology at any speed.
Note: The IBM z16, z15®, z14, and z13 servers support 32,000 devices for each FICON
host channel. The IBM zEnterprise® EC12 and IBM zEnterprise BC12 servers support
24,000 devices for each FICON host channel. Earlier IBM Z servers support 16,384
devices for each FICON host channel. To fully access 65,280 devices, it is necessary to
connect multiple FICON host channels to the storage system. You can access the devices
through an FC switch or FICON director to a single storage system.
Note: IBM z16 supports 32 GFC host adapters and FICON Express32S Channels. The
support for 32 GFC host adapters provides twice the read/write bandwidth compared to 16
GFC host adapters on previous models, thus taking full advantage of 32 GFC host
adapters on the DS8A00.
Better performance for copy services can be obtained by using dedicated host ports for
remote copy links, and other path optimization. For more information, see IBM DS8900F
Performance Best Practices and Monitoring, SG24-8501.
Note: DS8000 has a set of internal parameters that are known as pokeables, which are
sometimes referred to as product switches. These internal parameters are set to
provide the best behavior in most typical environments. In special cases, like
intercontinental distances or when bandwidth is low, some internal tuning might be
required to adjust those internal controls to keep Global Mirror (GM) as efficient as it is
in more common environments. Pokeable values can be displayed by a GUI or by Copy
Services Manager, but they can be changed only by IBM Support. For more
information, see DS8000 Global Mirror Best Practices, REDP-5246.
The z and x:xx values are unique combinations for each system and each SFI that are based
on a machine’s serial number. Use the DS CLI command lssi to determine the SFI WWNN,
as shown in Example 5-1.
However, the DS8A00 WWPN is a child of the SFI WWNN, where the WWPN inserts the z
and x:xx values from SFI WWNN. It also includes the YY:Y from the logical port naming,
which is derived from where the host adapter is physically installed. Use the DS CLI
command lsioport to determine the SFI WWPN, as shown in Example 5-3.
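For orientation only, the two commands can be run in sequence; the device ID below is a
hypothetical placeholder, and the actual output format is shown in Example 5-1 and
Example 5-3:
dscli> lssi
dscli> lsioport -dev IBM.2107-75XXXXX
The lssi output includes the storage image WWNN, and lsioport lists each installed I/O port
with its WWPN.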
You can also determine the host adapter port WWPN by completing the following steps:
1. Connect to the HMC IP address by using a web browser:
https://<HMC IP address>
2. Select Actions.
3. Select Modify Fibre Channel Port Protocols.
The default view may show protocols and the state only. The view can be customized to
display the port WWPN and the frame.
4. Click Actions, and then select Customize Columns to include the WWPN and frame in
the view. You receive the full list of each installed I/O port with its WWPN and its physical
location, as shown in Figure 5-7 on page 153.
Consider the following network and communications requirements when you plan the location
and interoperability of your storage systems:
HMC network access (one IP per HMC).
Remote support connection.
SAN connectivity.
An IBM Security Guardium Key Lifecycle Manager connection if encryption, end-point
security, or TCT is activated, or an LDAP connection if LDAP is implemented.
For more information about physical network connectivity, see IBM Storage DS8000 10.0
Introduction and Planning Guide, G10-I53-00.
A dual Ethernet connection is available for client access. The two HMCs provide redundant
management access to enable continuous availability access for encryption key servers and
other advanced functions.
The HMC can be connected to the client network for the following tasks:
Remote management of your system by using the DS CLI
Remote management by using the DS Storage Management GUI by opening a browser to
the network address of the HMC:
https://<HMC IP address>
Note: Users can also control a second Ethernet adapter within the HMCs.
For more information about HMC planning, see Chapter 6, “IBM DS8A00 Management
Console planning and setup” on page 161.
Important: The DS8A00 uses 172.16.y.z and 172.17.y.z private network addresses. If the
client network uses the same addresses, the IBM SSR can reconfigure the private
networks to use another address range option.
IBM Spectrum Control simplifies storage management by providing the following benefits:
Centralizing the management of heterogeneous storage network resources with IBM
storage management software
Providing greater synergy between storage management software and IBM storage
devices
Reducing the number of servers that are required to manage your software infrastructure
Migrating from basic device management to storage management applications that
provide higher-level functions
IBM Storage Insights is offered free of charge to customers who own IBM block storage
systems. It is an IBM Cloud storage service that monitors IBM block storage. It provides
single-pane views of IBM block storage systems, such as the Operations dashboard and the
Notifications dashboard.
With the information that is provided, such as diagnostic event information, key capacity and
performance information, and the streamlined support experience, you can quickly
assess the health of your storage environment and get help with resolving issues.
On the Advisor window, IBM Storage Insights provides recommendations about the remedial
steps that can be taken to manage risks and resolve issues that might impact your storage
services. For a brief illustration of IBM Storage Insights features, see 12.11, “Using IBM
Storage Insights” on page 428.
A DS CLI script file is a text file that contains one or more DS CLI commands and can be
run as a single command. DS CLI can also be used to manage other functions for a
storage system, including managing security settings, querying point-in-time performance
information or the status of physical resources, and exporting audit logs.
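As a hypothetical sketch (the file names, the HMC address, and the commands that are listed
in the script are placeholders, not a prescribed configuration), a script file and its invocation
might look like the following example.
Contents of a script file that is named listconfig.cli:
lsarraysite -l
lsarray -l
lsrank -l
lsfbvol
Invocation from the workstation command line:
dscli -hmc1 <HMC IP address> -user admin -pwfile ds8k.pwfile -script listconfig.cli
All commands in the script run in a single DS CLI session, which is convenient for repeatable
reporting or configuration tasks.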
The DS CLI client can be installed on a workstation, and can support multiple OSs. The DS
CLI client can access the DS8A00 over the client’s network. For more information about
hardware and software requirements for the DS CLI, see IBM DS8000 Series Command-Line
Interface User’s Guide, SC27-9562.
Embedded AOS
The preferred remote support connectivity method for IBM is through Transport Layer
Security (TLS) for the Management Console (MC) to IBM communication. DS8A00 uses an
embedded AOS server solution. Embedded AOS is a secure and fast broadband form of
remote access.
For more information, see Chapter 6, “IBM DS8A00 Management Console planning and
setup” on page 161 and Chapter 12, “Monitoring and support” on page 403.
A SAN allows your host bus adapter (HBA) host ports to have physical access to multiple host
adapter ports on the storage system. Zoning can be implemented to limit the access (and
provide access security) of host ports to the storage system.
Shared access to a storage system host adapter port is possible from hosts that support a
combination of HBA types and OSs.
Important: A SAN administrator must verify periodically that the SAN is working correctly
before any new devices are installed. SAN bandwidth must also be evaluated to ensure
that it can handle the new workload.
With a DS8A00 system, you can choose among IBM Security Guardium Key Lifecycle
Manager, Gemalto SafeNet KeySecure, and Thales Vormetric Data Security Manager for data
at rest and TCT encryption. IBM Fibre Channel Endpoint Security encryption requires IBM
Security Guardium Key Lifecycle Manager. You cannot mix IBM Security Guardium Key
Lifecycle Manager and SafeNet or Vormetric key servers. For more information, see IBM
DS8000 Encryption for Data at Rest, Transparent Cloud Tiering, and Endpoint Security
(DS8000 Release 9.2), REDP-4500.
Important: Clients must acquire an IBM Security Guardium Key Lifecycle Manager license
to use the Guardium Key Lifecycle Manager software.
The licensing for IBM Security Guardium Key Lifecycle Manager includes both an
installation license for the Guardium Key Lifecycle Manager management software and
licensing for the encrypting drives.
The DS8000 series is supported with current IBM Security Guardium Key Lifecycle Manager
V4.x versions. This version also uses a connection between the HMC and the key server,
which complies with the National Institute of Standards and Technology (NIST) SP 800-131A
standard.
You are advised to upgrade to the latest version of the IBM Security Guardium Key Lifecycle
Manager.
Two network ports must be opened on a firewall to allow the DS8A00 connection and to
obtain an administration management interface to the IBM Security Guardium Key Lifecycle
Manager server. These ports are defined by the IBM Security Guardium Key Lifecycle
Manager administrator.
For more information, see the following IBM publications for IBM Security Guardium Key
Lifecycle Manager:
LDAP authentication can be configured through the Storage Management GUI, as described
in 6.6.2, “Remote authentication” on page 179.
Plan the distance between the primary and auxiliary storage systems carefully to correctly
acquire fiber optic cables of the necessary length that are required. If necessary, the CS
solution can include hardware, such as channel extenders or dense wavelength division
multiplexing (DWDM).
For more information, see IBM DS8000 Copy Services: Updated for IBM DS8000 Release
9.1, SG24-8367.
For more information about the DS8000 sparing concepts, see 4.5.10, “Spare creation” on
page 128.
For the effective capacity of one rank in the various possible configurations, see IBM Storage
DS8000 10.0 Introduction and Planning Guide, G10-I53-00.
Important: When you review the effective capacity, consider the following points:
Effective capacities are in decimal gigabytes (GB). 1 GB is 1,000,000,000 bytes.
Although drive sets contain 16 drives, arrays use only eight drives. The effective
capacity assumes that you have two arrays for each set of 16 disk drives.
The IBM Storage Modeller tool can help you determine the raw and net storage capacities
and the numbers for the required extents for each available type of RAID. IBM Storage
Modeller is available only for IBM employees and IBM Business Partners.
Flash drives in HPFE Gen3 are ordered in sets of 16 within an enclosure pair. There are three
sets of 16 drives in an HPFE Gen3 enclosure pair.
For the latest information about supported RAID configurations and requesting an RPQ or
SCORE, contact your IBM SSR.
Note: The DFSMSdss Compression tool utility is not included in the base z/OS 3.1 and
z/OS 2.5 releases. Therefore, all applicable APARs and PTFs must be installed.
Run the SMP/E REPORT MISSINGFIX for the following function fix categories:
IBM.Function.DiskCompression
Enables or exploits hardware compression on the IBM DS8A00 series devices.
IBM.Function.zHyperLink
Fixes for the zHyperLink function.
IBM.Function.DFSMSCloudStorage
Fixes for Transparent Cloud Tiering Function.
For more information about the SMP/E APPLY or REPORT MISSINGFIX commands, see
SMP/E for z/OS Commands.
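A hypothetical SMP/E control statement for such a check follows (the target zone name
ZOSTGT is a placeholder for your own target zone):
SET BOUNDARY(GLOBAL).
REPORT MISSINGFIX ZONES(ZOSTGT)
       FIXCAT(IBM.Function.DiskCompression,
              IBM.Function.zHyperLink,
              IBM.Function.DFSMSCloudStorage).
The resulting report lists any PTFs in those fix categories that are not yet applied to the
specified zone.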
The HMC does not process any of the data from hosts. It is not even in the path that the data
takes from a host to the storage. The HMC is a configuration and management station for the
whole DS8A00 system.
To enhance security by ensuring that only trusted software is run, starting with DS8000
release 10, the HMC uses Secure Boot.
The HMC, which is the focal point for DS8A00 management, includes the following functions:
DS8A00 power control
Storage provisioning
Storage system health monitoring
Storage system performance monitoring
Copy Services (CS) monitoring
Embedded IBM Copy Services Manager
Interface for onsite service personnel
Collection of diagnostic and Call Home data
Problem management and alerting
Remote support access capability
Storage management through the DS GUI
Connection to IBM Security Guardium Key Lifecycle Manager or other supported external
key manager for encryption management functions, if required
Connection to an external IBM Copy Services Manager or IBM Spectrum Control
Interface for Licensed Internal Code (LIC) and other firmware updates
Both of the 1U primary and secondary HMCs are mounted in the bottom of the main rack.
The primary HMC is installed above the secondary HMC. A 1U keyboard and display tray is
also available.
The HMC connects to the customer network and provides access to functions that can be
used to manage the DS8A00. Management functions include logical configuration, problem
notification, Call Home for service, remote service, and CS management.
These management functions can be performed from the DS GUI, Data Storage
Command-Line Interface (DS CLI), or other storage management software that supports the
DS8A00.
The HMC provides connectivity between the DS8000 and Encryption Key Manager (EKM)
servers (Security Guardium Key Lifecycle Manager), and also provides the functions for
remote call home and remote support connectivity.
The HMCs are equipped with Ethernet connections for the client’s network. For more
information, see 6.1.2, “Private and Management Ethernet networks” on page 163.
To provide continuous availability to the HMC functions, the DS8A00 includes a second HMC.
The secondary HMC is needed for redundancy, such as for encryption management or CS
management functions. For more information about the secondary HMC, see 6.7, “Secondary
Management Console” on page 187.
Use the DS CLI command lshmc to show the HMC types, whether both HMCs are online, and
their amount of disk capacity and memory, as shown in Example 6-1.
Each central processor complex (CPC) flexible service processor (FSP) and each CPC
logical partition (LPAR) network are connected to both RCHs. Each of these components
(FSP and LPAR) uses their own designated interface for the black network and another
interface for the gray network.
Each HMC also uses two designated Ethernet interfaces for the internal black (eth0) and gray
(eth3) networks.
Additionally, an HMC contains two Ethernet interfaces (eth1 and eth2) for the customer
network connection to allow management functions to be started over the network. These
adapters can be used to connect the HMC to two separate customer networks, usually for
separating internet traffic (call home and remote access) from storage management tasks
(DS GUI, DS CLI, and IBM Copy Services Manager).
Figure 6-1 on page 164 and Table 6-1 on page 164 show these internal and external network
connections.
Important: The Rack Control Hubs that are shown in Figure 6-2 on page 165 (the black
and gray private networks) are for DS8A00 internal communication only. Do not connect
these ports directly to your network. There is no connection between the customer network
interfaces and the isolated black and gray networks.
An HMC communicates with these iPDUs over the Ethernet network, and it manages and
monitors the system power state, iPDU configuration, system AC power on and off, iPDU
firmware updates, iPDU health checks and errors, and power usage reporting.
The iPDUs’ network interfaces are also connected to the black and gray Rack Control Hubs.
They are distributed over the black and gray networks, which means iPDUs that belong to one
power domain (usually on the left side of the rear of the rack) connect to a gray network RCH
and the iPDUs that belong to the other power domain (usually on the right side of the rear of
the rack) connect to the black network RCH.
There are two 10-port Rack Control Hubs. One is for the black network, and one is for the gray
network. The 1U space that is required is already reserved at the bottom of the base rack.
Note: The DS Open API with IBM System Storage Common Information Model (CIM)
agent is no longer supported. The removal of the CIM Agent simplifies network security
because fewer open ports are required.
For more information, see 9.2.1, “Accessing the DS GUI” on page 223.
Note: The DS Storage Management GUI also provides a built-in DS CLI. Look for the
console icon on the lower left of the browser window after logging in.
For more information about DS CLI usage and configuration, see Chapter 10, “IBM DS8A00
Storage Management Command-line Interface” on page 319. For a complete list of DS CLI
commands, see IBM DS8000 Series: Command-Line Interface User’s Guide, SC27-9562.
This feature removes the requirement for an external server to host IBM Copy Services
Manager, which provides savings on infrastructure costs and operating system (OS)
licensing. Administration costs are also reduced because the embedded IBM Copy Services
Manager instance is upgraded through the DS8A00 code maintenance schedule.
Important: Avoid configuring the primary HMC and the secondary HMC of the same
storage system as the active and standby IBM Copy Services Manager servers within a CS
environment.
Important: Updating the HMC embedded IBM Copy Services Manager must be done
exclusively through the IBM DS CLI tool that is installed on the workstation, laptop, or
server.
Update IBM Copy Services Manager on the HMC by completing the following steps:
1. Verify the current level of the DS CLI.
2. Verify the current level of IBM Copy Services Manager on the HMC.
3. Download selected releases of DS CLI, if necessary, and IBM Copy Services Manager
from IBM Fix Central.
4. Update the DS CLI, if needed.
5. Update IBM Copy Services Manager on the HMC.
The DS8000 Code Recommendation page provides a link to the DS8A00 code bundle
information page, as shown in Figure 6-3 and Figure 6-4.
Verifying the current level of IBM Copy Services Manager on the HMC
To verify the current IBM Copy Services Manager release that is installed on a DS8000 HMC,
run the lssoftware DS CLI command:
Example 6-2 on page 168 shows an example where the IBM Copy Services Manager release
on both HMCs is 6.3.12.0.
Complete the following steps. Assume that IBM Copy Services Manager 6.3.0 is the release
to be installed.
1. On the IBM Fix Central page, select IBM Copy Services Manager as the product,
6.3.12.0 as the installed version, and Linux PPC as the platform. Figure 6-5 shows a
summary of selected options.
Figure 6-5 Selected IBM Copy Services Manager Version for HMC
Note: The HMC OS is Linux PPC. Ensure that the correct platform is selected.
2. Be sure to download the correct Linux-ppcle release. Figure 6-6 shows the correct
package type selected. Check the Release Notes, and if there is a newer fix pack file, you
can use it instead.
Updating IBM Copy Services Manager on the HMC by using the DS CLI
Update the IBM Copy Services Manager on each HMC. In a dual HMC environment, update
one IBM Copy Services Manager instance at a time.
Note: If your IBM Copy Services Manager installation has active CS sessions, you must
follow best practices while applying maintenance to an active management server.
Note: The Active and Standby servers must be updated concurrently, to the same code
level. Failure to do so results in the inability to reconnect Active and Standby CSM servers.
The DS CLI command that is used for the IBM Copy Services Manager update is
installsoftware. You can find more information about the command in IBM Documentation.
Table 6-3 describes the parameters that are necessary for the installsoftware command.
Note: Ensure that no spaces are included in the path that you specify for the location of the
software package and certificate file.
Note: In addition to the standard 1751 port, DS CLI also uses port 1755 (TCP protocol) to
transfer the IBM Copy Services Manager installation file to the HMC. That port must be
open on any physical or software firewall standing between the workstation where DS CLI
is installed and the DS8000 HMCs.
To effectively run the command, you must use a DS8000 user ID that is part of the
Administrator role (for example, the default admin user ID).
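As a hypothetical sketch only (the parameter names follow Table 6-3, and the file names,
paths, and HMC selection value are illustrative; verify the exact syntax in IBM Documentation
before use), the update of the first HMC might be started as follows:
dscli> installsoftware -type csm -loc /downloads/csm-setup-6.3.x-linux-ppcle.bin -certloc /downloads/csm-setup-6.3.x-linux-ppcle.crt -server 1
After the update of the first HMC completes, lssoftware reports one instance at the new level,
as the following output shows.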
dscli> lssoftware
Type Version Status
====================================
CSM V6.3.0.6-a20210622-1237 Running
CSM V6.2.9.1-a20200804-1704 Running
The next step is to update IBM Copy Services Manager on HMC2, as shown in Example 6-4.
dscli> lssoftware
Type Version Status
====================================
CSM V6.3.0.6-a20210622-1237 Running
CSM V6.3.0.6-a20210622-1237 Running
3. If you are successfully logged in, you see the HMC window, in which you can select
Status Overview to see the status of the DS8A00. Other areas of interest are shown in
Figure 6-10 on page 172.
Because the HMC web UI is mainly a services interface used by IBM service engineers, it is
not covered here. For more information, see the Help menu.
Maintenance windows
The LIC update of the DS8A00 is a nondisruptive action. Scheduling a maintenance
window with added time for contingency is still a best practice. Also, plan for sufficient time
to confirm that all environment prerequisites are met before the upgrade begins.
For more information about LIC upgrades, see Chapter 11, “Licensed Machine Code” on
page 385.
Important: For correct error analysis, the date and time information must be synchronized
on all components in the DS8A00 environment. These components include the DS8A00
HMC, the attached hosts, IBM Spectrum Control, and DS CLI workstations.
Up to eight external syslog servers can be configured, with varying ports if required. Events
that are forwarded include user login and logout, all commands that are issued by using the
GUI or DS CLI while the user is logged in, and remote access events. Events are sent from
Facility 19 (local3 in standard syslog numbering) and are logged as level 6 (informational).
Call Home is the capability of the HMC to contact the IBM Support Center to report a
serviceable event. Remote support is the capability of the IBM SSR to connect to the HMC to
perform service tasks remotely. Depending on the setup of the client's environment, an IBM
SSR can connect to the HMC to perform detailed problem analysis. The IBM SSR can view error logs
and problem logs and start trace or memory dump retrievals.
Remote support can be configured by using the embedded Assist On-site (AOS) or Remote
Support Console. The setup of the remote support environment is performed by the IBM SSR
during the initial installation. For more information, see Chapter 12, “Monitoring and support”
on page 403.
This activity includes the configuration of the private (internal) and management (customer)
network with IPv6 or IPv4, hostname, DNS, NTP, routing, and remote support settings.
When you change the internal private network, you do not need to configure each individual
network interface. Instead, each change that you make changes both the black and gray
networks at once.
Note: Only the customer management network interfaces eth2 and eth1 are shown and
can be configured in the Network Settings dialog because the internal private black and
gray networks with interfaces eth0 and eth3 are used for the running system. The eth0 and
eth3 interfaces can be changed only by opening an IBM support request.
If the default address range cannot be used because it conflicts with another network, you
can instead specify one of three optional addresses ranges. Table 6-4 shows the possible
options that can be chosen during installation.
Note: Changing the internal private network range on the storage system facility can be
done in concurrent mode, but requires special care. For that reason, an IBM service
request must be opened before making such a change.
To manage the DS GUI and DS CLI credentials, you can use the DS CLI or the DS GUI. An
administrator user ID is preconfigured during the installation of the DS8A00 and this user ID
uses the following defaults:
User ID: admin
Password: admin
The password of the admin user ID must be changed before it can be used. The GUI forces
you to change the password when you first log in. By using the DS CLI, you can log in, but
you cannot run other commands until you issue a command to change the password.
After you issue that command, you can run other commands.
Recommendation: Do not set the value of the chpass command to 0 because this setting
indicates that passwords never expire and unlimited login attempts are allowed.
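As an illustrative sketch (the -expire and -fail parameter names are assumptions here; check
the chpass command reference for the exact options that your DS CLI level supports), the
password rules could be set as follows:
dscli> chpass -expire 90 -fail 5
This would cause passwords to expire after 90 days and lock an account after five invalid
login attempts.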
If access is denied for the admin user, for example, because of the number of invalid login
attempts, the administrator can use the security recovery utility tool on the HMC to reset the
password to the default value. The detailed procedure is described in the Help Contents,
which can be accessed from the DS GUI.
Important: Upgrading an existing storage system to the latest code release does not
change the default password rules that were previously in effect. Existing default values are retained to prevent
disruption. The user might opt to use the new defaults by running the chpass -reset
command. The command resets all default values to the new defaults immediately.
The password for each user account is forced to adhere to the following rules:
Passwords must contain at least one character from at least two of the following groups:
alphabetic, numeric, and punctuation.
The range for minimum password length is 6–64 characters. The default minimum
password length is 8 characters.
Passwords cannot contain the user’s ID.
Passwords are case-sensitive.
The length of the password is determined by the administrator.
Initial passwords on new user accounts are expired.
Passwords that are reset by an administrator are expired.
Users must change expired passwords at the next logon.
The remote authentication setup can be found in the Storage Manager GUI. Go to the
Access menu and select Remote Authentication. From there, click Configure Remote
Authentication. The installation is guided by the Remote Authentication wizard.
Figure 6-14 shows the window that opens directly after the Welcome window. After you
complete all the steps of the wizard, the DS8000 is enabled and configured for remote
authentication.
The following prerequisites must be met to complete the Remote Authentication wizard:
Access to create users and groups on your remote authentication server.
A primary LDAP repository URI is required.
A secondary LDAP repository URI is optional.
A User search base (only for Direct LDAP).
For more information about LDAP-based authentication and configuration, see LDAP
Authentication for IBM Storage DS8000 Systems: Updated for DS8000 Release 9.3.2,
REDP-5460.
In the HMC Management section of the WUI, two options are available:
Managed User Profiles and Access
Configure LDAP
Important: Do not delete the last user ID in a role. For more information about removing
user IDs for a role, see Table 6-6 on page 181.
There are three predefined user roles that are related to the Customer, Service, and
Engineering user IDs, as shown in Table 6-5 on page 181.
For example, the esshmcpe user ID requires the IBM proprietary challenge/response key for remote access.
The roles, access, and properties for each user ID are described in Table 6-6.
A new window opens that lists the user IDs and profiles for the defined console users, as
shown in Figure 6-17.
User IDs PE, CE, and customer are specifically for DS8A00 use. Ignore the other profiles.
Note: Do not change the user ID PE because it uses the remote challenge/response login
process, which is logged and audited.
The user ID root cannot log in to the WUI. The user IDs hscroot and essbase cannot
access HMC functions externally. Do not use them.
Do not create user IDs with a Task Role beginning with “hmc”.
2. Click Add. The Add User window opens, as shown in Figure 6-19.
Only those roles that are outlined by the boxes are valid Task Roles.
3. Complete the following fields:
a. Under Description, define a user or use HMC User as an example.
b. Passwords must adhere to the DS8A00 password policies. For more information, see 6.6.1, “Password policies” on page 178.
c. Choose the type of Authentication that you want.
The User Profiles are updated and list the new user ID. As an example, user ID IBM_RSC was
created and is shown in Figure 6-21 and Figure 6-22 on page 185.
MFA enables the DS8000 storage system user to configure remote authentication with RSA
SecurID Authentication Manager or with direct LDAP+RSA SecurID Authentication Manager.
It supports a PIN (something that the user knows) and a token (something that the user has)
as factors of authentication.
You can also configure IBM RACF® to use MFA by implementing IBM Z Multi-Factor
Authentication (IBM Z MFA).
For more information, see LDAP Authentication for IBM Storage DS8000 Systems: Updated
for DS8000 Release 9.3.2, REDP-5460.
Important: The primary and secondary HMCs are not available to be used as
general-purpose computing resources.
When a configuration or CS command is run, the DS CLI or DS GUI sends the command to
the first HMC. If the first HMC is unavailable, it automatically sends the command to the
second HMC instead. Typically, you do not need to reissue the command.
Any changes that are made by using one HMC are instantly reflected in the other HMC. No
host data is cached within the HMC, so no cache coherency issues occur.
The licensed functions are bundled into groups, as shown in Table 7-1.
The grouping of licensed functions facilitates ordering. The licenses establish the extent of
IBM authorization for the use of the licensed functions on an IBM Storage DS8000.
Licensed functions enable the operating system and functions of the storage system. Some
features, such as the operating system, are always enabled, and other functions are optional.
Licensed functions are purchased as 5341 machine function authorizations for billing
purposes.
All maintenance and support falls under Expert Care, which defines the support duration (1,
2, 3, 4, or 5 years) and the service level (Advanced or Premium). When purchasing an IBM
DS8000 (machine type 5341), the inclusion of Expert Care is mandatory. For more
information, see 7.4, “Expert Care” on page 211.
All DS8A00 models are sold with a 1-year warranty. This warranty is extended by Expert Care to between 2 and 5 years. The machine type remains 5341 in all cases.
Each licensed function authorization is associated with a fixed 1-year function authorization (9031-FFA).
The licensed function indicator feature numbers enable the technical activation of the
function, subject to a feature activation code that is made available by IBM, which must be
applied by the client.
Licensed functions are activated and enforced with a defined license scope. License scope
refers to the type of storage and the type of servers that the function can be used with. For
example, the zsS licenses are available only with the CKD (z/FICON) scope.
The BFs are mandatory and must always be configured with a scope of ALL, which covers both mainframe and open systems. Also, to configure CKD volumes, Feature Code BACx is required.
With CS, if these services are used for only mainframe or only open systems, the scope can be restricted to either CKD or FB. However, most clients want to configure CS with the scope ALL.
Typically, the DS8000 10th-generation licensed functions are enabled, authorized, managed,
activated, and enforced based upon the physical capacity contained in a 5341 system.
For each group of licensed functions, specific Feature Code numbers indicate the licensed
capacity.
The following features are available after the license bundle is activated:
MM is a synchronous way to perform remote replication. GM enables asynchronous
replication, which is useful for longer distances and lower bandwidth.
MGM enables cascaded 3-site replication, which combines synchronous mirroring to an
intermediate site with asynchronous mirroring from that intermediate site to a third site at a
long distance.
Combinations with other CS features, like MM, GM or FlashCopy, are possible and often
needed.
Hardware drive compression, as performed by the IBM FlashCore Modules (FCM), is not listed as a separate sublicense; it is included with the respective base function group. Therefore, this function is available as soon as FCM drive hardware is purchased.
For more information about these features, see the IBM Storage DS8000 10.0 Introduction
and Planning Guide, G10-I53-00.
Important: With the CS license bundle, order subcapacity licensing, which is less than the total physical raw capacity, only when a steady remote connection for the DS8000 is available and a Storage Insights installation will be active to monitor usage.
By using a remote connection for Call Home, and when Storage Insights is in place, the CS license can be based on the usable capacity of the volumes that will potentially be in CS relationships.
Note: The CS license goes by the capacity of all volumes that are involved in at least one
CS relationship. The CS license is based on the provisioned capacity of volumes and not
on raw capacity. If overprovisioning (or compression) is used on the DS8000 with a
significant number of CS functions, the CS license needs to be equal only to the total raw
physical capacity. This situation is true even if the logical volume capacity of volumes in CS
is greater.
For example, with overprovisioning, if the total rank raw capacity of a DS8000 is 100 TB but
200 TB of thin-provisioning volumes are in MM, only a 100 TB of CS license is needed.
When the IBM FlashCore Modules (FCM) are used, compression is always on. Hence, for these drives, ordering the CS license based on the nominal raw drive capacity is usually recommended.
Tip (also for industry-standard drives): If you want to ensure future use of
overprovisioning, use the raw capacity of the system for CS to ensure coverage.
For FlashCopy volumes, you must count the source plus target volumes as provisioned
capacity. Several examples are shown as follows.
Drive features
The BF is based on the raw (decimal terabyte) capacity of the drives. The pricing is based on
the drive performance, capacity, and other characteristics that provide more flexible and
optimal price and performance configurations.
To calculate the raw (gross) physical capacity, multiply, for each drive set, the number of drives by their individual capacity. For example, an industry-standard drive set of sixteen 3.84 TB drives has a raw physical capacity of 61.44 TB, and an FCM compression tier drive set of sixteen 19.2 TB drives has a raw physical capacity of 307.2 TB.
Important: Check with an IBM Systems Service Representative (IBM SSR) or go to the
IBM website for an up-to-date list of available drive types.
Ordering granularity
You order the license bundles by capacity in terabytes, but not in single-terabyte increments; the granularity is slightly larger. For example, below 100 TB of total raw capacity, the granularity increment for an upgrade is 10 TB. With larger total capacities, the granularity is larger. For more information, see Table 7-2.
https://www.ibm.com/docs/en/search/ds8000?type=announcement
IBM employees and IBM Business Partners who are looking for more details can go to the IBM Federated Catalog and search by using the keywords DS8A10, DS8A50, or, for the advanced functions, DS8A00.
Before you connect to the ESS website to obtain your feature activation codes, ensure that
you have the following items:
The IBM License Function Authorization documents. If you are activating codes for a new
storage unit, these documents are included in the shipment of the storage unit. If you are
activating codes for an existing storage unit, IBM sends the documents to you in an
envelope.
A USB memory device that can be used for downloading your activation codes if you
cannot access the DS Storage Manager from the system that you are using to access the
ESS website. Instead of downloading the activation codes in softcopy format, you can print
the activation codes and manually enter them by using the DS Storage Manager GUI or
the DS CLI. However, this process is slow and error-prone because the activation keys are
32-character strings.
You can obtain the required information by using the DS Storage Management GUI or the
DS CLI. If you use the Storage Management GUI, you can obtain and apply your activation
keys at the same time. These options are described next.
Note: Before you begin this task, resolve any current DS8000 problems that might
exist. You can contact IBM Support to help you resolve these problems.
4. After you are offered the option to give your system another nickname, begin the guided procedure to acquire and activate your feature activation keys by selecting and completing the Activate Licensed Functions routine, as shown in Figure 7-2.
Note: You can download the keys and save the XML file to the folder that is shown here,
or you can copy the license keys from the IBM ESS website.
5. Use the help function to get more information about where and how to obtain the license keys, and how to apply the XML file, as shown in Figure 7-3 on page 199.
6. After the keys are imported, all generated license keys become visible. Click Activate to start the activation process, as shown in Figure 7-4.
7. The next windows document the activation of the keys, as shown in Figure 7-5.
9. Click Summary in the System Setup wizard to view the list of licensed functions or feature keys that are installed on your DS8000, as shown in Figure 7-7. Click Finish to exit the wizard.
Important: The ESS website usually expects the serial number of the Storage Unit, which ends with 0, not the serial number of the Storage Image, which ends with 1.
Figure 7-9 Properties window showing the machine signature and MTM
The following activation activities are disruptive and require an initial machine load or
restart of the affected image:
Removal of a DS8000 licensed function to deactivate the function.
A lateral change or reduction in the license scope. A lateral change is defined as
changing the license scope from FB to CKD or from CKD to FB. A reduction is
defined as changing the license scope from all physical capacity (ALL) to only FB or
only CKD capacity.
Note: Before you begin this task, you must resolve any current DS8000 problems that
exist. You can contact IBM Support for help with resolving these problems.
12.Click Activate to enter and activate your licensed keys, as shown in Figure 7-4 on
page 199.
13.Wait for the activation process to complete and select Licensed Functions to show the
list of activated features.
dscli> lssi
Name ID Storage Unit Model WWNN State ESSNet
===================================================================================
ds8k-g10-01 IBM.2107-78HAL91 IBM.2107-78HAL90 A05 5005076301234567 Online Enabled
dscli> showsi
Date/Time: 25 November 2024 19:56:17 CET IBM DSCLI Version: 7.10.0.882 DS: -
Name ds8k-g10-01
desc Saw Shark
ID IBM.2107-78HAL91
Storage Unit IBM.2107-78HAL90
Model A05
WWNN 5005076301234567
Signature abcd-ef12-3456-7890
State Online
ESSNet Enabled
Volume Group V0
os400Serial 567
NVS Memory 64.0 GB
Cache Memory 832.0 GB
Processor Memory 1015.6 GB
MTS IBM.5341-78HAL90
numegsupported 16
ETAutoMode tiered
ETMonitor automode
IOPMmode Disabled
ETCCMode -
ETHMTMode Enabled
ETSRMode Enabled
ETTierOrder High performance
ETAutoModeAccel Disabled
Figure 7-10 Obtaining DS8000 information by using the DS CLI
Note: The showsi command can take the storage facility image (SFI) serial number as a
possible argument. The SFI serial number is identical to the storage unit serial number,
except that the SFI serial number ends in 1 instead of 0 (zero).
Table 7-3 documents this information, which is entered at the ESS website to retrieve the activation codes. It includes the machine type and the machine signature.
3. Click DS8000. The DS8000 machine information window opens, as shown in Figure 7-13.
Figure 7-14 DS8000 information: Select model and enter serial number
5. The View machine summary window shows the total purchased licenses and the number
of licenses that are currently assigned. When you assign licenses for the first time, the
Assigned field might show 0.0 TB, or the results might show a yellow warning message for
unassigned license scopes as shown in Figure 7-15.
6. Figure 7-16 shows a DS8950F model where licenses are already assigned.
Figure 7-17 ESS View machine summary: New DS8A00 with unassigned scopes
8. If it is not done yet, use the Actions menu functions Assign function authorization and Manage activations to specify the details for the respective license group. Managing activations is shown in Figure 7-18 on page 208.
For each license type and storage image, enter the following information:
– Select the license scope from the list box:
• FB (Fixed-Block data)
• CKD
• All
– Type the capacity value (in TB) to assign to the storage image. The capacity values are
expressed in decimal terabytes. The sum of the storage image capacity values for a
license cannot exceed the total license value.
– Ensure any Unassigned scopes (as shown in Figure 7-18 on page 208) are all
assigned. Unassigned scopes can be assigned, for example, to an All scope.
9. When you are done, you can either copy each of the generated activation codes or click Download all codes, which saves the activation codes in an XML file that you can import into the DS8000.
10.This XML file, or the individual keys, can then be applied in the DS8000 GUI, as shown in Figure 7-3 on page 199 and the following figures.
Important: In most situations, the ESS website and application can locate your 9031
licensed function authorization record when you enter the DS8000 5341 serial number, or
serial number and signature. However, if the 9031 licensed function authorization record is
not attached to the 5341 record, you must assign it to the 5341 record by using the Assign
function authorization link in the ESS application. In this case, you need the 9031 serial
number (which you can find on the License Function Authorization document).
To apply activation codes by using the DS CLI, complete the following steps:
1. Run the showsi command to obtain the machine signature and serial number, as shown in the following example:
dscli> showsi
Date/Time: 25 November 2024 20:01:45 CET IBM DSCLI Version: 7.10.0.882 DS: -
Name ds8k-r9-01
desc Sand Shark
ID IBM.2107-78HAL91
Storage Unit IBM.2107-78HAL90
Model 998
WWNN 5005076301234567
Signature abcd-ef12-3456-7890
State Online
ESSNet Enabled
Volume Group V0
os400Serial 567
NVS Memory 127.5 GB
Cache Memory 4168.4 GB
Processor Memory 4343.4 GB
MTS IBM.5341-78HAL90
numegsupported 16
ETAutoMode tiered
ETMonitor automode
IOPMmode Disabled
ETCCMode -
ETHMTMode Enabled
ETSRMode Enabled
ETTierOrder High performance
ETAutoModeAccel Disabled
2. Obtain your license activation codes from the IBM ESS website, as described in 7.2.2,
“Obtaining the activation codes” on page 204.
3. Enter the applykey command at the DS CLI, as shown in the following examples. The -file parameter specifies the key file. The second parameter specifies the storage image.
dscli> applykey -file c:\5341_78XXXX0.xml IBM.2107-78XXXX1
Or you can apply individual keys by running the following command:
dscli> applykey -key f190-1234-1234-1234-1234-5678-1234-5678 IBM.2107-78XXXX1
CMUC00199I applykey: License Machine Code successfully applied to storage image
IBM.2107-78XXXX1.
4. Verify that the keys were activated for your storage unit by running the lskey command, as
shown in Figure 7-20 on page 210.
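As a minimal sketch that uses the same example storage image ID, the activated licenses can be listed with the lskey command:
dscli> lskey IBM.2107-78XXXX1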
The BF license bundle must be installed before ranks can be formatted for FB (open
systems). The zsS license bundle must be installed before ranks can be formatted for CKD
(mainframe).
Tip: Because the BF license must be ordered for the full physical capacity anyway, and because the CS license can be ordered for only those volumes that are in CS relationships, configure both the BF and CS bundles with scope “ALL” from the beginning.
The Technical Account Manager is a new role that combines the previous roles of Technical
Sales Manager and Technical Advisor. The TAM acts as the single point of contact for the
client. They set up a welcome call, schedule monthly activity reports, advise on code
currency, help schedule code upgrades, facilitate the initial installation, help with Call Home
and remote support setup, and perform other activities.
Enhanced Support Time targets an incident response time of 30 minutes or less for Severity 1
and 2 incidents in the United States and selected countries.
With Predictive Support, IBM proactively notifies customers of possible problems to prevent
issues from escalating or causing an impact. Predictive Support leverages statistics and
performance metrics from IBM Storage Insights. For more information about IBM Storage
Insights, see 12.11, “Using IBM Storage Insights” on page 428.
Table 7-5 shows the Feature Codes for each of the available options.
The feature codes for Expert Care might differ from region to region. For a full listing, see the
relevant IBM Hardware Announcement for your region.
Specific options are also available regarding contact and resolution times, including 4-hour,
6-hour, 8-hour, 12-hour, 24-hour, 48-hour, or 72-hour committed fix time, each with a
corresponding feature code. For more information, contact your IBM Sales Representative.
Note: This planning information applies to all DS8000 G10 models: A01, A05, and E05.
The purpose of the configuration worksheets is to enable a smooth installation of the DS8000
G10 system by ensuring that the necessary information is available to the IBM Support
Representative during system installation. It is a best practice to present the completed
worksheets to the IBM Support Representative before the delivery of the DS8000 G10
system.
The completed customization worksheets specify the initial setup for the following items:
Company information: Provide important company and contact information. This
information is required to ensure that IBM support personnel can reach the appropriate
contact person or persons in your organization, or send a technician to service your
system as quickly as possible during a critical event.
Hardware Management Console (HMC) network: Provide the IP address and local area
network (LAN) settings. This information is required to establish connectivity to the
Management Console (MC).
Remote support, including Call Home: Provide information to configure Remote Support
and Call Home. This information helps to ensure timely support for critical serviceable
events on the system.
Notification: Provide information to receive Simple Network Management Protocol (SNMP)
traps and email notifications. This information is required if you want to be notified about
serviceable events.
Power control: Provide your preferences for the power mode on the system.
Control switch: Provide information to set up the control switches on the system. This
information is helpful if you want to customize settings that affect host connectivity for
IBM i and IBM Z hosts.
Important: Resource groups offer an enhanced security capability that supports the
hosting of multiple customers with CS requirements. It also supports a single client with
requirements to isolate the data of multiple operating system (OS) environments. For
more information, see IBM DS8000 Copy Services, SG24-8367.
For more information about the capabilities of certain user roles, see User roles or use the
DS GUI Help function.
The DS8000 G10 provides a storage administrator with the ability to create custom user roles with a fully customized set of permissions by using the DS GUI or DS CLI. This set of permissions helps to ensure that the authorization level of each user on the system exactly matches their job role in the company so that the security of the system is more robust against internal attacks or mistakes.
You can also consider using a Lightweight Directory Access Protocol (LDAP) server for
authenticating IBM DS8000 users. You can now take advantage of the IBM Copy Services
Manager and its LDAP client that comes preinstalled on the DS8000 G10 HMC. For more
information about remote authentication and LDAP for the DS8000 G10, see LDAP
Authentication for IBM Storage DS8000 Systems, REDP-5460.
For more information, including considerations and best practices for DS8000 encryption, see 5.3.6, “Key manager servers for encryption” on page 156 and IBM DS8000 Encryption for Data at Rest, Transparent Cloud Tiering, and Endpoint Security, REDP-4500.
For more information about encryption license considerations, see “Encryption activation
review planning” on page 156.
Note: The details for implementing and managing security requirements are provided in
IBM DS8870 and NIST SP 800-131a Compliance, REDP-5069.
If you perform logical configuration by using the DS CLI, the following steps provide a high-level overview of the configuration flow; a brief DS CLI sketch follows the list. For more detailed information about using and performing logical configuration with the DS CLI, see Chapter 10, “IBM DS8A00 Storage Management Command-line Interface” on page 319.
2. Create arrays: Configure the installed flash drives as a redundant array of independent
disks (RAID) 6, which is the default and preferred RAID configuration for the DS8000 G10.
3. Create ranks: Assign each array as a Fixed-Block (FB) rank or a Count Key Data (CKD)
rank.
4. Create extent pools: Define extent pools, associate each one with Server 0 or Server 1,
and assign at least one rank to each extent pool. To take advantage of storage pool
striping, you must assign multiple ranks to an extent pool.
Important: If you plan to use IBM Easy Tier (in particular, in automatic mode),
select the All pools option to receive all of the benefits of Easy Tier data
management. IBM Easy Tier is required to be enabled with the new thin-provisioned
NVMe flash drives.
Note: You can modify the Pool capacity (Logical Capacity) and Compression capacity
(Physical Capacity) thresholds from their default value of 85%.
6. Configure the FC ports: Define the topology of the FC ports. The port type can be Fibre
Channel Protocol (FCP) or Fibre Channel connection (IBM FICON).
7. Create the volume groups for open systems: Create volume groups where FB volumes are
assigned.
8. Create the host connections for open systems: Define open systems hosts and their FC
host bus adapter (HBA) worldwide port names (WWPNs). Assign volume groups to the
host connections.
9. Create the open systems volumes: Create striped open systems FB volumes and assign
them to one or more volume groups.
10.Create the IBM Z logical control units (LCUs): Define their type and other attributes, such
as subsystem identifiers (SSIDs).
11.Create the striped IBM Z volumes: Create IBM Z CKD base volumes and parallel access
volumes (PAV) aliases for them.
Note: Avoid intermixing host I/O with CS I/O on the same ports for performance
reasons.
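The following minimal DS CLI sketch illustrates the open systems part of this flow, from array creation through volume creation. It is an outline only: the array site, rank, pool, and volume identifiers (S1, A0, R0, P0, and 1000-1003) are illustrative placeholders, and defaults can differ by release, so verify each command with the DS CLI help before use.
# Create a RAID 6 array from an array site, then a fixed-block rank on that array
dscli> mkarray -raidtype 6 -arsite S1
dscli> mkrank -array A0 -stgtype fb
# Create an extent pool on rank group 0 (server 0) and assign the rank to it
dscli> mkextpool -rankgrp 0 -stgtype fb ITSO_FB_0
dscli> chrank -extpool P0 R0
# Create four FB volumes with IDs 1000-1003 and a capacity of 100 (GiB by default);
# volume groups and host connections are sketched in 9.7.4, "Creating FB host attachments"
dscli> mkfbvol -extpool P0 -cap 100 1000-1003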
The DS GUI was designed and developed with three major objectives:
Speed: A graphical interface that is fast and responsive.
Simplicity: A simplified and intuitive design that can drastically reduce the time that is
required to perform functions with the system, which reduces the total cost of ownership
(TCO).
Commonality: Use of common graphics, widgets, terminology, and metaphors that
facilitate the management of multiple IBM storage products and software products. The
DS GUI was introduced with Release 10.0 to provide a consistent graphical experience
and make it easier to switch between other products like IBM FlashSystem®, IBM
Spectrum Virtualize, or IBM Spectrum Control and IBM Storage Insights.
Based on these objectives, following the initial setup of the storage system, a system
administrator can use the DS GUI to complete the logical configuration and then prepare the
system for I/O. After the initial setup is complete, the system administrator can perform
routine management and maintenance tasks with minimal effort, including the monitoring of
performance, capacity, and other internal functions.
Logical storage configuration is streamlined in the DS GUI for ease of use. The conceptual
approach of array site, array, and ranks is streamlined into a single resource, which is
referred to as an array (or managed array). The storage system automatically manages flash
adapter pairs and balances arrays and spares across the two processor nodes without user
intervention.
Creating usable storage volumes for your hosts is equally simplified in the DS GUI. The
system can automatically balance volume capacity over a pool pair. If custom options are required for your workload, you can override the defaults in the DS GUI and customize the configuration to your workload needs.
Configuring connections to hosts is also easy. Host ports are updated automatically and host
mapping is allowed at volume creation.
The overall storage system status can be viewed at any time from the dashboard window. The
dashboard presents a view of the overall system performance when a system administrator
first logs in, giving a picture of the system status. This window also contains a “Hardware View” and a “System Health View” that display the status and attributes of all hardware elements on the system.
Additionally, functions that include user access, licensed function activation, setup of
encryption, IBM Fibre Channel Endpoint Security, and remote authentication, and modifying
the power or Fibre Channel (FC) port protocols are available to the system administrator.
All functions that are performed in the DS GUI can be scripted by using the DS Command-Line Interface (DS CLI), which is described in Chapter 10, “IBM DS8A00 Storage Management Command-line Interface” on page 319.
For any specific requirements on your browser, see DS8000 Storage Management GUI web
browser support and configuration.
On a new storage system, the user must log on as the administrator. The password is already expired, so the user must change it at the first login.
Figure 9-2 IBM Copy Services Manager window started from the DS GUI
The wizard guides the admin user through the following tasks:
1. Set the system name.
2. Activate the licensed functions.
3. Provide a summary of actions.
2. Click Next. The Licensed Functions window opens. Click Activate Licensed Functions.
3. The Activate Licensed Functions window opens. Keys for licensed functions that are
purchased for this storage system must be retrieved by using the Machine Type, Serial
Number, and Machine Signature. The keys can be stored in a flat file or an XML file.
Licensed function keys are downloaded from the IBM Entitled Systems Support (ESS)
website.
4. When the license keys are entered, click Activate to enable the functions, as illustrated in
Figure 9-6.
Note: The Summary shows licenses for basic functions. The list might include some
advanced functions such as Copy Services (CS), Z Synergy Services (zsS), and IBM
Copy Services Manager (CSM) on the HMC if the corresponding licenses were
activated.
Another option is to click Settings → Network → Fibre Channel Ports, select a specific
port from the list, and select Actions to set the ports, as shown in Figure 9-9.
You can also configure the port topology from the System view or from the System Health
overview, as shown in Figure 9-10 on page 229.
You can also perform this configuration during logical configuration. For more information, see
9.7.4, “Creating FB host attachments” on page 264, and 9.14, “Monitoring system health” on
page 289.
Note: Different users might have a limited view of the Dashboard when logging in,
depending on their role. Most of the material that is documented here describes what the
Administrator role sees.
Figure 9-11 on page 230 presents a high-level overview of the System window and all the
objects that can be accessed from this window.
Note: The menu items and actions that are shown in the DS GUI depend on the role of the
user that is logged in, and they can vary for each user. For more information about user
roles, click Help at the upper right of the DS GUI window and search for user role.
This initial view of the system provides access to a wealth of information in a single view. The
following items are included in this initial view:
Dashboard icon:
– Click the Dashboard icon from anywhere in the DS GUI to return to the system
Dashboard. The Dashboard provides an overall view of the DS8000 system.
– At the top, the serial number of the DS8000 system you are working on is shown.
– Actions menu:
• Rename
Change the name of the DS8000 G10 storage system.
• Modify Fibre Channel Port Protocols
Select the protocol that is used by FC ports to connect the storage system to a host
or to another storage system. The user can also display the properties of the
selected port, which opens the same view that you get when you select Settings →
Network → Fibre Channel Ports.
• Power Off/On
Initiate a power-off or power-on of the DS8000 storage system.
• Performance
Shows graphs that display performance metrics for I/O operations per second
(IOPS), Latency, Bandwidth, and Caching.
• Properties
This view displays the system properties that are shown in Table 9-1 on page 231.
System name
Current state
Product type
Machine Signature (string of characters that identifies a DS8000 storage system), which is
used to retrieve license keys from the IBM ESS website
Hardware component summary (such as processor type, total subsystem memory, raw
data storage capacity, or number of FC ports)
– Export reports
Every user role, including Monitor, can download reports such as the System Summary
comma-separated values (CSV) report, Performance Summary CSV report, Easy Tier
Summary report, and FC Connectivity report. Previously, exporting reports was limited
to the Storage Administrator role.
To select multiple options to be saved in a compressed CSV file, click the Download
icon. The options include the System Summary, Performance Summary, Easy Tier
Summary, and the FC Connectivity report.
Note: The System Capacity section in the System Summary CSV is composed of
consolidated data. The System Capacity, Used IBM Z (Count Key Data (CKD))
Capacity, and Used Open Systems (Fixed-Block (FB)) Capacity sections in the
System Summary CSV are now combined into one section that is called System
Capacity. All sections are now shown with the column headers listed even if there
are no items in the list. For example, the FlashCopy section is shown even if no
FlashCopy relationships are present on the system.
The FC connectivity report is a compressed file that contains a report that shows one
row for every connected path between the DS8000 and a host system, a switch, or
another DS8000. It also shows the status of these paths and their security settings.
– Alerts
Highlights events that you should investigate with priority.
– Suggested tasks
If there are any suggested tasks, they are indicated here.
– Help icon
Clicking this icon opens a drop-down menu that includes the following options:
• The first option in this menu is context-sensitive, and it depends on the DS GUI window that you are currently in. Clicking this option opens a separate browser tab with help for that window.
Access menu:
– Users
Only users with the administrator role can access this menu. This menu opens the
Users window. A system administrator can use this menu to perform the following
actions:
• Create user accounts
• Set a user account role
• Set temporary passwords (to be reset at first use by the new user account)
• Modify an existing user account role
• Reset an existing user account password
• Disconnect a user account
• Determine a user account connection (DS CLI or GUI)
• Remove user accounts
– Roles:
A storage or security administrator can set up user roles in the GUI or CLI with a fully
customizable set of permissions to ensure that the authorization level of each user
account matches their job role. This helps to ensure that the security of the system is
more robust against internal attacks or mistakes. The following actions can be taken
with roles:
• Create custom user roles
• Modify remote mappings
• Delete roles
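As a hedged DS CLI sketch of the same user administration tasks (the user name, the temporary password, and the monitor group are illustrative only; creating fully custom roles might require the GUI on your release):
dscli> mkuser -group monitor -pw Temp_Pw123 itso_user
dscli> chuser -group admin itso_user
dscli> lsuser
dscli> rmuser itso_user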
In IBM Documentation, you can discover introductory information about the DS8A00
architecture, features, and advanced functions. You can also learn about the available
management interfaces and tools, and troubleshooting and support.
You can obtain more information about using the DS GUI for common tasks:
Logically configuring the storage system for open systems and IBM Z attachment.
Managing user access.
Attaching host systems.
IBM Documentation also provides external links to more information about IBM storage systems and other related online documentation.
Ethernet Network
The network settings for both HMCs are configured by IBM Support personnel during system installation. To modify the HMC network information after installation, click Settings → Network → Ethernet Network, as shown in Figure 9-14 on page 238.
Figure 9-15 on page 239 shows the FC ports window with all available options that are listed.
Note: Exporting the FC port information does not produce the comprehensive report that is
available in the FC connectivity report.
Use these settings to configure the security settings for your DS8A00 system.
Data-at-rest encryption
To enable data-at-rest encryption, select the Settings icon from the DS GUI navigation menu
on the left. Click Security to open the Security window, and click the Data at Rest
Encryption tab, as shown in Figure 9-16.
Important: If you plan to activate Data-at-Rest Encryption for the storage system, ensure
that the encryption license is activated and the encryption group is configured before you
begin any logical configuration on the system. After the pools are created, you cannot
disable or enable encryption.
If you use the Local Key Management feature (included in the base license), then the
DS8A00 manages the key group. Local Key Management can be set up only by using the
DS CLI. For more information, see IBM DS8000 Encryption for Data at Rest, Transparent
Cloud Tiering, and Endpoint Security, REDP-4500.
To enable IBM Fibre Channel Endpoint Security, select the Settings icon from the DS GUI
navigation menu on the left. Click Security to open the Security window, and click Configure
IBM Fibre Channel Endpoint Security, as shown in Figure 9-17. This action starts a wizard
that guides you through the process.
To implement password rules, complete these steps (see Figure 9-18 on page 241):
1. From the system window, click the Settings icon.
2. Click Security to open the Security window.
3. Click the Local Password Rules tab.
Communications Certificate
The Communications Certificate tab of the Security Settings window can be used to assign
or create an encryption certificate for each HMC with HTTPS connections to the storage
system. You can also create Certificate Signing Requests (CSRs), import existing certificates,
create self-signed certificates, and view the certificate information for each HMC, as shown in
Figure 9-19.
The Create Certificate Signing Request button is used to generate a CSR that is sent to a
certificate authority (CA) for verification. As shown in Figure 9-20 on page 242, the necessary
information to include in the CSR are the HMC Fully Qualified Domain Name (FQDN),
organization details, the length of time that the certificate must be valid, and an email
address.
After the CSR file is created, you can download that file for processing with your trusted CA.
Licensed functions
You can display all the installed licensed functions and activate new function keys from this
menu, as shown in Figure 9-21 on page 243.
For more information about Easy Tier settings, see IBM DS8000 Easy Tier, REDP-4667.
Note: To take advantage of zHyperLink in DS8000, ensure that CUIR support (under
IBM Z) is enabled.
For more information, see Getting Started with IBM zHyperLink for z/OS, REDP-5493.
The Advanced settings window is shown in Figure 9-25 and includes the following settings:
Power control mode
You can determine how to control the power supply to the storage system. From the
System window, click the Settings icon. Click System to open the System window. Click
the Advanced tab to open the window to manage Power control mode (as shown in
Figure 9-25 on page 245). The following options are available:
– Automatic. Control the power supply to the storage system through the external wall
switch.
– Manual. Control the power supply to the storage system by using the Power Off action
on the System window.
Function settings
The Resource Group Checking option is available in the Function Settings section. It
allows a storage administrator to specify which users can perform certain logical
configuration actions, such as create or delete volumes in a pool.
Service Access
The following options are available in the Service Access section:
– DS Service GUI Access
Allows authorized IBM SSRs to access the DS Service GUI.
– SSH Service Access
Allows authorized IBM SSRs to access the Secure Shell (SSH) CLI on the HMC.
IBM i
Enter the IBM i serial number suffix to avoid duplicate logical unit number (LUN) IDs for an
IBM i (AS/400) host. Restart the storage system to assign the new serial number.
Call Home
The DS8A00 uses the Call Home feature to report serviceable events to IBM. To ensure
timely action from IBM Support personnel for these events, it is important to enable and
properly configure Call Home on the system.
When enabling Call Home for the first time, you must accept the Agreement for Service
Program when presented. Enter your Company Information, Administrator Information,
and System Information details. Finally, after completing the setup, you can test the Call
Home feature by clicking Test, as shown in Figure 9-26.
Important: All DS8000 storage systems must have the Domain Name System (DNS) enabled on both HMCs, or both HMCs must be able to access esupport.ibm.com through a firewall or proxy. Otherwise, Call Home does not work.
Syslog
The Syslog window displays the Syslog servers that are configured to receive logs from the
DS8A00 system. A user with an administrator role can define, modify, or remove up to eight
Syslog target servers. Each Syslog server must use the same TLS certificate. Events such as
user login and logout, commands that are issued by an authorized user by using the DS GUI
or DS CLI, and remote access events are forwarded to the Syslog servers. Additionally,
events in the RAS audit log and Product Field Engineer (PFE) actions are also forwarded to
the Syslog servers. Messages from the DS8A00 are sent by using facility code 10 and
severity level 6.
Note: A DS8A00 server must use TLS for its communications with the Syslog server. To configure TLS, the customer must generate their own trusted certificate for the DS8A00 Syslog process with the CA, and then import the trusted CA file, the signed machine certificate file (in this case, for the HMC Syslog process), and the key file, as shown in Figure 9-28.
For more information about setting up the Syslog server with TLS, see Encrypting Syslog Traffic with TLS (SSL).
This process involves external entities such as your trusted CA and potentially the use
of the openssl command to retrieve the Syslog server generated key if it is not already
provided by the CA.
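A generic sketch of that step follows, assuming a 2048-bit RSA key and a standard CSR workflow; the file names match the fields that are described next, and your CA's process might differ.
# Generate a private key and a certificate signing request for the HMC Syslog process
openssl req -new -newkey rsa:2048 -nodes -keyout key.pem -out syslog.csr
# Send syslog.csr to the trusted CA; the CA returns the signed certificate (cert.pem)
# and the CA certificate (ca.pem), which are then imported as shown in Figure 9-28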
The files that are entered into the fields that are shown in Figure 9-28 are:
CA Certificate (ca.pem)
HMC Signed Certificate (cert.pem)
HMC Key (key.pem)
3. To enable TLS, in the Certificates area, click Enable TLS, as shown in Figure 9-28 on
page 249.
4. In the Enable TLS window, browse for the following certificate files on your local machine:
– The CA certificate file (Example: ca.pem).
– The Syslog communications certificate file, which is signed by the CA. (Example:
hmc.pem).
– The extracted Private Key file, which is the private key for the storage system.
(Example: key.pem).
5. Click Enable to complete the TLS configuration.
6. To add a Syslog server, click Add Syslog Server, as shown in Figure 9-29, and provide
the following parameters:
– IP Address: The IP address of the external Syslog server.
– Port: The TCP port for the external Syslog server (the default is 514).
7. After you review the details, click Add to create the Syslog server entry.
8. After the required Syslog servers are created, you can Modify, Test, Activate,
Deactivate, and Remove a selected Syslog server, as shown in Figure 9-30 on page 250.
Update System
You can perform a System Health Check on the storage system. In addition, you can view and
manage the microcode levels that control all storage system functions.
Figure 9-31 IBM Remote Support Center settings and full configuration
You can configure the RSC access to stay open continuously, close 2 hours after RSC logs
off, or keep it closed. You can require IBM service to use an access code for remote support
connections with the HMC on your storage system. Click Generate to generate an access
code or enter your own access code. The access code is case-sensitive and must be fewer
than 16 characters.
Assist On-site
If AOS is used for an IBM Support connection to the HMC, you can Start, Stop, or Restart
the AOS service from the GUI, as shown in Figure 9-32 on page 251.
To configure AOS, click Show Full Configuration and enter the required settings, as shown
in Figure 9-32.
Troubleshooting
Use the Troubleshooting tab to perform actions that resolve common issues with your
storage system:
Restart HMCs
If there are connectivity issues with the storage management software (DS GUI, DS CLI,
IBM Copy Services Manager, or IBM Spectrum Control), click Restart HMC. You can also
use this feature to restart an HMC after you modify the settings of the HMC.
Refresh GUI Cache
If there are inconsistencies between what is displayed in the DS GUI and the DS CLI or
IBM Spectrum Control, click Refresh GUI Cache.
Reset Communications Path
To restart the web servers and communication paths that are used by IBM ESSNI, click
Reset Communications Path.
GUI Preferences
Use the GUI Preferences tab that is shown in Figure 9-34 to set the following options for the
DS GUI:
Login Message
With an administrator role, you can enter a message that is displayed when users log in to
either the DS GUI or the DS CLI.
General GUI settings
On the General tab of the GUI Preferences window, you can set the default logout time for
the DS GUI.
When the storage pools are created, arrays are first assigned to the pools, and then volumes
are created in the pools. FB volumes are connected through host ports to an open system
host. CKD volumes require LSSs to be created so that they can be accessed by an IBM Z
host.
Pools must be created in pairs to balance the storage workload. Each pool in the pool pair is
controlled by a processor node (either Node 0 or Node 1). Balancing the workload helps to
prevent one node from performing most of the work and results in more efficient I/O
processing, which can improve overall system performance. Both pools in the pair must be
formatted for the same storage type, either FB or CKD storage. Multiple pools can be created
to isolate workloads.
When you create a pool pair, all available arrays can be assigned to the pools, or the choice
can be made to manually assign them later. If the arrays are assigned automatically, the
system balances them across both pools so that the workload is distributed evenly across
both nodes. Automatic assignment also ensures that spares and device adapter (DA) pairs
are distributed equally between the pools.
If the storage connects to an IBM Z host, you must create the LSSs before you create the
CKD volumes.
It is possible to create a set of volumes that share characteristics, such as capacity and
storage type, in a pool pair. The system automatically balances the capacity in the volume
sets across both pools. If the pools are managed by Easy Tier, the capacity in the volumes is
automatically distributed among the arrays. If the pools are not managed by Easy Tier, it is
possible to choose to use the rotate capacity allocation method, which stripes capacity
across the arrays.
When you plan your configuration with the DS8A00, consider that all volumes, including standard provisioned volumes, use metadata capacity when they are created, which reduces the usable capacity. The 1 GiB (gibibyte) extents that are allocated for metadata are subdivided into 16 mebibyte (MiB) subextents. The metadata capacity of each volume that is created affects the configuration planning.
If the volumes must connect to an IBM Z host, the next steps of the configuration process are
completed on the host. For more information about logically configuring storage for IBM Z,
see 9.8, “Logical configuration for Count Key Data volumes” on page 273.
If the volumes connect to an open system host, map the volumes to the host, and then add
host ports to the host and map them to FC ports on the storage system.
FB volumes can accept I/O only from the host ports of hosts that are mapped to the volumes.
Host ports are zoned to communicate only with certain FC ports on the storage system.
Zoning is configured either within the storage system by using FC port masking, or on the
SAN. Zoning ensures that the workload is spread correctly over FC ports and that certain
workloads are isolated from one another.
Note: Deleting a pool that contains volumes is possible in the GUI. A warning is displayed, and the user must enter a confirmation code that is presented by the DS8A00 to confirm the deletion. A “force deletion” option is also available. For more information, see Figure 9-84 on page 286.
Note: If the requirement is to create a single pool, see “Creating a single pool” on
page 258.
2. Click the Create Pool Pair tab, as shown in Figure 9-36. The Create Pool Pair window
opens.
Note: By default, arrays within the DS8A00 are created using the virtualization
technology of Redundant Array of Independent Disks (RAID). When an array is
assigned to a pool, the choice of RAID 6 or RAID 10 is available.
3. Specify the pool pair parameters, as shown in Figure 9-37 on page 256:
– Storage type: Ensure that Open Systems (FB) is selected.
– Name prefix: Add the pool pair name prefix. A suffix ID sequence number is added
during the creation process.
4. Select whether the pair will be Compressed. Compression is important:
– If no compression is selected, the industry-standard NVMe disks are used. Standard,
full-provisioned, volumes can be created. No arrays are automatically assigned.
– If compression is selected, FCMs are used. Only ESE volumes can be created. Arrays
are automatically assigned during pool pair creation. See Figure 9-66 on page 274.
5. When the pool pair parameters are correctly specified, click Create to proceed.
Figure 9-38 shows a pool pair that is created.
6. Assign arrays to the pool pair. Click Unassigned Arrays to display them, right-click the array to be assigned, and then click Assign, as shown in Figure 9-39 on page 256.
2. Select the target pool from the drop-down list, and the RAID level that you want.
3. Select the Redistribute checkbox to redistribute all existing volumes across the pool,
including the new array.
4. Click Assign.
Note: In a pool that is managed by Easy Tier, redistributing volumes across the pool is
automatic. This redistribution is called Dynamic Pool Reconfiguration. For more
information, see IBM DS8000 Easy Tier, REDP-4667.
4. Assign one or more arrays to the single pool, as shown in Figure 9-45.
2. Selecting one of the first two options opens a view that lists all the volumes or pools on the
system. Figure 9-47 shows the Volumes by Pool view.
3. From this view, click Create Volumes. The Create Volumes dialog opens listing various
operating system platforms, as shown in Figure 9-48 on page 261.
Important: Starting with the DS8000 G10, Standard volumes are available only when pools are made up of industry-standard NVMe drives. Pools that are made up of IBM FlashCore Modules (FCM) support ESE volumes only.
A user with the Administrator or Storage Administrator role can, when creating new volumes, assign the address range to the volumes in the Advanced section, as shown in Figure 9-50 on page 263. It is possible to specify the volumes by using the T10 Data Integrity Field (DIF)/Protection Information. After you specify the volume set that you want to create, click Save. Then, you can either create another volume set by selecting ⊕ New Volume Set or, when all the volume sets are specified, click Create to create them all at once.
Tips:
By providing a target host or host cluster, you can create volumes and map them to
the host in one step.
Selecting the suitable range of addresses for the new volume set is important from
the Copy Services planning point of view and the CPC preferred path affinity. After
you create a volume, you cannot change its address.
When FlashCopy is used on FB volumes, the source and the target volumes must
have the same protection type, that is, they must both use T10-DIF or standard.
Note: Dynamic Volume Expansion (DVE) of IBM i 050 and 099 volume types in increments of 1–2000 GB is available. The IBM i hosts must be at software level IBM i 7.4 or later.
Optionally, you can map the volumes to a defined IBM i host in this step too.
Further volume sets can be prepared and saved before you create all the defined volume sets at once.
The DS8A00 offers 4-port, 32 Gbps encryption-capable PCIe Gen3 FCP/FICON host
adapters (referred to in the DS GUI as 32 Gbps). Each port can be independently configured
to one of the following topologies:
FCP: Also known as FC-switched fabric (which is also called switched point-to-point) for
open system host attachment, and for Metro Mirror (MM), Global Copy (GC), Global Mirror
(GM), and Metro/Global Mirror (MGM) connectivity
FICON: To connect to IBM Z hosts
To set the FC port topology for open system hosts, complete the following steps:
1. From the DS GUI left navigation menu, click Settings → Network and select Fibre
Channel Ports to open the Fibre Channel Ports window (Figure 9-52 on page 265).
2. Select the port to modify. Multiple ports can be selected by using the Shift or Ctrl key.
3. From the Actions tab, click Modify Protocol to open the Modify Protocol window, as
shown in Figure 9-53.
4. Choose from the available protocols to modify the selected host adapter port or ports. For
open system hosts attachment, select SCSI FCP (Small Computer System Interface
(SCSI) FCP).
5. Click Modify to perform the action.
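A hedged DS CLI equivalent of the same change follows; the port ID I0030 is a placeholder, so list the installed ports first to identify the correct IDs.
dscli> lsioport
dscli> setioport -topology scsifcp I0030
For a port that connects to an IBM Z host, the topology value is ficon instead of scsifcp.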
For reference, a host port is the FC port of the Host Bus Adapter (HBA) FC adapter that is
installed on the host system. It connects to the FC port of the host adapter that is installed on
the DS8A00.
Creating clusters
To configure a cluster object, complete these steps:
1. Click the Hosts icon from the DS GUI navigation pane on the left.
2. Select Hosts from the menu, as shown in Figure 9-54.
3. The Hosts window opens, as shown in Figure 9-55. Click Create Cluster.
4. The Create Cluster window opens, as shown in Figure 9-55. Specify the name of the
cluster, and click Create.
5. The Add Hosts window opens (Figure 9-57 on page 268). Specify the following items:
– Name. The user-defined name for the host to add.
– Type. The operating system (OS) of the host to add.
– Host port (WWPN). Optionally, provide the WWPN of the host port. If the host port
logged in to the system, it can be selected from the Host Port (WWPN) list as shown in
Figure 9-56.
3. The Assign Host window opens. From the drop-down list, select the cluster to which to add
the host.
4. Click Assign to complete the action.
Note: When there are multiple FC connections to the DS8A00 from a host, you should
use native multipathing software that is provided by the host OS to manage these
paths.
Figure 9-61 Host properties displaying the Fibre Channel port mask
If the storage administrator wants to restrict the FC ports that can communicate with the host,
FC port masking must be defined. Modify the FC port mask to allow or disallow host
communication to and from one or more ports on the system.
The properties of the selected host now reflect the number of FC ports that have access as
shown in Figure 9-63.
3. The Map Volume to Host or Cluster window opens. Select the host or cluster from the list
of configured stand-alone hosts and clusters, as shown in Figure 9-64, and then click
Map.
Note: When mapping volumes to a cluster, volumes that are mapped to the cluster are
public volumes that are seen by all hosts in the cluster. Volumes that are mapped to a
single host in a cluster are private volumes.
It is the responsibility of the system administrator to ensure that the correct clustering
software is implemented to ensure data integrity when a volume is mapped to more
than one host.
Figure 9-65 on page 273 shows a mixture of public and private volumes that are mapped to a
cluster.
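For reference, the equivalent host attachment and volume mapping can be sketched in the DS CLI. This is a hedged outline only: the WWPN, the host profile string, the volume IDs, and the volume group ID V10 are illustrative placeholders.
# Create a volume group that contains the public volumes, then a host connection
# that uses it (the profile string must match the host operating system)
dscli> mkvolgrp -type scsimask -volume 1000-1003 ITSO_CLUSTER_VG
dscli> mkhostconnect -wwname 10000000C9A1B2C3 -profile "IBM pSeries - AIX" -volgrp V10 aix_node1_p0
# Map more volumes later by adding them to the volume group
dscli> chvolgrp -action add -volume 1004-1007 V10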
Figure 9-66 CKD pool pair creation and assigning arrays to the pool pair
8. Click Create.
Important: The CKD LSSs cannot be created in an address group that already
contains FB LSSs. The address groups are identified by the first digit in the two-digit
LSS ID.
9. When the pool pair creation is complete, the arrays are assigned to the pool pair as shown
in Figure 9-67 on page 275. The DS GUI configures the selected arrays for CKD storage
and distributes them evenly between the two pools.
Note: With the added resiliency of RAID 6 and RAID 10, RAID 5 is no longer supported on
DS8A00 models.
For more information about the supported drive types and available RAID levels for DS8A00
models, see Chapter 4, “IBM DS8A00 reliability, availability, and serviceability” on page 103.
Figure 9-68 shows the Arrays by Pool window, which shows how to assign the arrays.
Note: You can create LSS ranges, exact volume address ranges, and aliases in one step.
For an example, see 9.8.4, “Creating CKD volumes” on page 279.
The DS8000 LSS emulates a CKD Logical Control Unit image (LCU). A CKD LSS must be
created before CKD volumes can be associated to the LSS.
4. The Create CKD LSSs window opens, as shown in Figure 9-70 on page 277. Enter the
required information. After you enter the values for the LSS range, SubSystem Identifier
(SSID) prefix, and LSS type, click Create. The Need Help icon shows information about
how the unique SSID for each LSS is determined based on the SSID prefix that is
provided.
Note: The CKD LSSs cannot be created in an address group that already contains FB
LSSs. The address groups are identified by the first digit in the two-digit LSS ID.
Important: This consideration applies to an IBM Z environment where the SSIDs were previously defined in input/output definition files (IODFs) and might differ from the SSIDs that are automatically generated by the Storage Management GUI. Be careful when changing SSIDs because they must be unique in an IBM Z environment.
Important: Use caution when changing SSIDs that are in a Copy Services relationship. If a
change must be done, then the entire relationship must be broken down including the
PPRC paths.
SSIDs must be unique across all DS8000 systems participating in copy services
relationships.
Note: Occasionally, the DS8A00 GUI view does not immediately update after
modifications are made. After you modify the SSID, if the view is not updated, refresh
the GUI cache to reflect the change by clicking Settings → Support →
Troubleshooting → Refresh GUI cache. For more information, see “Troubleshooting”
on page 251.
Note: The storage administrator can create configurations that specify new LSS
ranges, exact volume address ranges, and aliases in one step.
4. Determine the LSS range for the volumes that you want to create.
5. Determine the name prefix and the quantity of volumes to create for each LSS.
Enter a prefix name and capacity for each group. The capacity can be specified in three
ways:
– Device: Select one of these choices from the list: 3380-2, 3380-3, 3390-1, 3390-3,
3390-9, 3390-27, 3390-54, or 3390-A (extended address volume (EAV)). These device
types have a fixed capacity that is based on the number of cylinders of each model. A
3390 disk volume contains 56,664 bytes for each track, 15 tracks for each cylinder, and
849,960 bytes for each cylinder. The most common 3390 model capacities are shown:
• 3390-1 = 1113 cylinders
• 3390-3 = 3339 cylinders
• 3390-9 = 10017 cylinders
• 3390-27 = 30051 cylinders
• 3390-54 = 60102 cylinders
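As a quick check of these numbers, a 3390-9 volume with 10017 cylinders corresponds to
10017 x 849,960 = 8,514,049,320 bytes, or approximately 8.5 GB.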
Note: Pools with FCM storage support only Thin Provisioned (ESE) volumes. To
configure “Standard” volumes, select or create a pool with Industry-standard NVMe
drives.
7. After completing the details, either define another volume set directly by using the ⊕
symbol, or click Save and validate the complete volume preset information. To create
another volume set, click ⊕ New Volume Set. Otherwise, click Create to create all the
volume definitions in one operation.
In the first version of PAV, the disk controller assigns an alias to a unit control block (UCB)
statically (static PAV). In the second version of PAV processing, Workload Manager (WLM)
reassigns aliases to different UCBs from time to time (dynamic PAV). HyperPAV simplifies
operational procedures further by removing the static alias-to-base bindings and associating
PAV alias devices with a pool. SuperPAV mode additionally supports borrowing aliases from
neighboring LSSs, which requires fewer aliases to be defined per system.
The restriction for configuring PAVs is that the total number of base and alias addresses for
each LSS cannot exceed 256 (00–FF). These addresses must be defined in the IODF so that
they match the correct type, base, or alias.
Typically, when you configure PAVs in the IODF, the base addresses start at 00 and increment
toward FF. Alias addresses are typically configured to start at FF and decrement (decrease)
toward 00. A storage administrator might configure only 16 or 32 aliases for each LSS.
However, no restrictions exist other than the total of 256 addresses that are available per LSS
(bases and aliases).
The DS GUI configures aliases in this manner, starting at FF and descending. The storage
administrator can either configure a number of aliases against the LSS, in which case those
aliases are assigned to the lowest base volume address in the LSS, or define any number of
aliases for a specific base address. For more information about PAVs, see IBM DS8000 and
IBM Z Synergy, REDP-5186.
3. Select Manage Aliases to open the Aliases for LSS xx (where xx = 00 - FE) dialog box.
Click Create Aliases to open the dialog box that is shown in Figure 9-75. Enter the
number of aliases to create. The example in Figure 9-75 shows 32 aliases being created
for LSS 80.
4. Click Create. The aliases are created for LSS 80, as shown in Figure 9-76.
Figure 9-77 Thirty-two aliases against the lowest base volume address
6. To display the aliases, select the base volume with those aliases that are assigned to it
and then click Action → Manage Aliases.
A list with the addresses of all aliases that are assigned to the base volume is displayed,
as shown in Figure 9-78.
Figure 9-78 List of aliases with their alias IDs starting at F5 and in ascending order
Note: The alias IDs start at F5 and are in ascending order, as shown in Figure 9-78.
The five aliases for a single base volume address (ITSO_CKD_8006) are created starting at
address DF and ending at DB; the list displays them in ascending order, as shown in
Figure 9-80. (Aliases E0–FF were created before.)
Figure 9-80 List of five aliases that are created for a single base address
To set the FC port protocols of the FC ports that the host uses to communicate with the
DS8A00, complete these steps:
1. Select Settings → Network → Fibre Channel Ports.
2. Select one or multiple ports to modify and select Actions → Modify Fibre Channel Port
Protocols.
3. The Modify Protocol for the selected Ports window opens. Select the FICON protocol.
4. Click Modify to set the topology for the selected FC ports, as shown in Figure 9-81.
The following example shows the steps that are required to expand an IBM i volume of type
099 (Figure 9-82 on page 285):
1. Go to any Volumes view, such as Volumes → Volumes by Pool.
2. Select the volume and select Actions → Expand. You can also open the Actions menu
by selecting the volume and right-clicking it.
3. The Expand Volume dialog opens. Enter the new capacity for the volume and click
Expand.
4. A warning might appear to inform you that certain OSs do not support this action, and it
asks for confirmation to continue the action. Verify that the OS of the host to which the
volume is mapped supports the operation, and click Yes.
A task window opens and is updated with progress on the task until it is completed.
The storage administrator can also Expand the Safeguarded capacity, as shown in
Figure 9-83. For more information, see IBM Storage DS8000 Safeguarded Copy,
REDP-5506.
You can instruct the GUI to force the deletion of volumes that are in use by selecting the
optional checkbox in the Delete Volumes dialog box, as shown by 1 in Figure 9-85 on page
287.
Important: This setting does not apply to volumes that are in a Safeguarded Copy
relationship.
The following example shows the steps that are needed to delete an FB volume that is in use:
1. Go to the hosts-centric Volumes view by selecting Volumes → Volumes by Host.
2. Select the volume to be deleted, and then select Actions → Delete, as shown by 2 in
Figure 9-85 on page 287.
There are three drive classes available, as noted previously. You cannot change the drive
class of a device. Figure 9-87 on page 289 shows examples of two of them: “A” denotes
assignment of Performance Flash Tier 1, and “B” shows assignment of Compression Flash Tier.
For more information about the settings that are available to configure Easy Tier, see “Easy
Tier settings” on page 243.
For more information about Easy Tier, see IBM DS8000 Easy Tier, REDP-4667.
This section provides more information about these hardware components, including how to
see more information about them from the system window.
Processor nodes
Two processor nodes exist that are named ID 0 and ID 1. Each node consists of a CPC and
the Licensed Internal Code (LIC) that runs on it. You can also display the system health
overview by clicking the System Health View icon, as shown in Figure 9-90.
Here are the node attributes that are shown in Figure 9-91:
ID: The node identifier, which is node 0 or node 1.
State: The current state of the node is shown:
– Online: The node is operating.
– Initializing: The node is starting or not yet operational.
– Service required: The node is online, but it requires service. A Call Home was initiated
to IBM Hardware Support.
– Service in progress: The node is being serviced.
– Drive service required: One or more drives that are online require service. A call home
was initiated.
– Offline: The node is offline and non-operational. A Call Home was initiated.
Release: The version of the Licensed Machine Code (LMC) or hardware bundle that is
installed on the node.
Processor: The type and configuration of the processor that is on the node.
Memory: The amount of raw system memory that is installed in the node.
Location Code: Logical location of the processor node.
Storage enclosures
A storage enclosure is a specialized chassis that houses and powers the flash drives in the
DS8A00 storage system. The storage enclosure also provides the mechanism to allow the
drives to communicate with one or more host systems. All enclosures in the DS8A00 are
High-Performance Flash Enclosures (HPFEs). These enclosures contain flash drives, which
are Peripheral Component Interconnect Express (PCIe)-connected to the I/O enclosures.
To view detailed information about the enclosures that are installed in the system, select
Storage Enclosures, as shown in Figure 9-90 on page 291. Some of the attributes of the
storage enclosure are shown in Figure 9-93.
A drive is a data storage device. From the GUI perspective, a drive can be Performance Flash
Tier 1, Capacity Flash Tier 2, or Compression FlashCore Module drive. To see the data
storage devices and their attributes from the Hardware view, click a storage enclosure when
the magnifying glass pointer appears (Figure 9-88 on page 290). This action shows the
storage enclosure and installed storage devices in more detail (Figure 9-94). You can also
select Drives from the System Health Overview to display information for all installed drives.
This action displays the I/O enclosure adapter view (rear of the enclosure).
To see the attributes for all installed DAs or host adapters in the System Health overview,
select Device Adapters or Host Adapters from System Health Overview (Figure 9-90 on
page 291) to open the dialog that is shown in Figure 9-95.
The attributes for the I/O enclosure are described in the following list:
ID: The enclosure ID.
State: The current state of the I/O enclosure is one of the following states:
– Online: Operational and normal.
– Offline: Service is required. A service request to IBM was generated.
– Service: The enclosure is being serviced.
Location Code: Logical location code of the I/O enclosure.
Host adapter: Number of host adapters that are installed.
DA: Number of DAs that are installed.
The FC ports are ports on a host adapter that connect the DS8A00 to hosts or another
storage system either directly or through SAN switches/directors.
9.14.2 Viewing components health and state from the system views
The Hardware view and System Health overview are useful tools in the DS GUI to visualize
the state and health of hardware components in the system. Two sample scenarios are
illustrated here:
Failed drives: Figure 9-97 on page 297 shows two failed drives in one of the storage
enclosures of the system. Clicking this enclosure from the Hardware view provides a
detailed view of the enclosure with all the drives, including the failed drives. Hovering your
cursor over the failed drives provides more information about them.
The same information can be obtained by clicking the Show System Health Overview
icon, as shown in Figure 9-97.
The System Health Overview opens and shows the failed state of the drives, as shown in
Figure 9-98.
Failed processor node: Figure 9-99 on page 298 shows the Hardware view with a failed
processor node. Hovering your cursor over the node provides more information about this
component.
The same information can be obtained from the System Health Overview, as shown in
Figure 9-100.
The Events table updates continuously so that you can monitor events in real time and track
events historically.
To access the Events window, click Monitoring → Events. The Events window can also be
displayed by clicking the Event Status icon, as shown in Figure 9-101.
The events can be exported as a CSV file by selecting Export Table on the Events window.
The Export Table action creates a CSV file of the events that are displayed in the Events table
with detailed descriptions.
System Summary
This data includes CSV offload of the system summary (full hardware and logical
configuration) by exporting individual tables (Volumes, I/O Ports, Users, and Volumes by
LSS).
Figure 9-103 illustrates the formatting that occurs when you import the data in XLS format.
Data is presented as one or more lines per port, depending on the number of logins. This
data illustrates four ports on one adapter, which are split into three captures for
presentation.
Local Port (column headings):
Local Port ID, FC_ID, Local Port WWPN, Local Port WWNN, Local Port Security Capability, Local Port Security Config, Local Port Logins, Security Capable Logins, Authentication Only Logins, Encrypted Logins
Attached Port:
Attached Port WWPN Attached Port WWNN Attached Port Interface ID Attached Port Type Attached Port Model Manufacturer Attached Port SN
0x202E8894715EC810 0x10008894715EC810 0x002E 8960 F64 IBM 0000010550HA
0x5005076306009339 0x5005076306FFD339 0x0002 2107 996 IBM 0000000DMC01
0x5005076306005339 0x5005076306FFD339 0x0001 2107 996 IBM 0000000DMC01
0x10000000C9CED91B 0x20000000C9CED91B Unknown Unknown Unknown Unknown Unknown
Remote Port:
Remote Port WWPN Remote Port WWNN Remote Port FC_ID PRLI Complete Remote Port Login Type Remote Port Security State Remote Port Security Config Remote Port Interface ID Remote Port Type Remote Port Model Remote Port SN System Name
0x5005076306041339 0x5005076306FFD339 0x010600 Yes Mirroring secondary Not capable Disabled 0x0040 2107 996 0000000DMC01 Unknown
0x5005076306009339 0x5005076306FFD339 0x000001 Yes Mirroring primary and secondary Not capable Disabled 0x0002 2107 996 0000000DMC01 Unknown
0x5005076306005339 0x5005076306FFD339 0x000002 Yes Mirroring primary and secondary Not capable Disabled 0x0001 2107 996 0000000DMC01 Unknown
0x10000000C9CED91B 0x20000000C9CED91B 0x000002 Yes FCP host Not capable Disabled Unknown Unknown Unknown Unknown Unknown
The audit log is an unalterable record of all actions and commands that were initiated by
users on the system through the DS GUI, DS CLI, DS Network Interface (DSNI), or
IBM Spectrum Control. The audit log does not include commands that were received from
host systems or actions that were completed automatically by the storage system. The audit
log is downloaded as a compressed text file.
You can create your own performance graphs for the storage system, pools, volumes, and FC
ports. You can use predefined graphs and compare performance statistics for multiple pools,
up to six volumes at a time, or FC ports.
To learn how to obtain statistics from the DS CLI, see 10.5, “Metrics with DS CLI” on
page 372.
Important: All the listed performance statistics are averaged over 1 minute. The
performance graphs cover data that is collected for the last 7 days. For long-term
performance statistics, use IBM Spectrum Control or IBM Storage Insights.
You can use these performance metrics to define your own graphs. To add the custom graph
to the Favorites menu, click the star icon, as shown in Figure 9-105 on page 302. You can
also export the sample data that is used to create the performance graphs into a CSV file by
clicking the Save icon, as shown in Figure 9-107.
For detailed performance analysis, you can define more detailed statistics and graphs, which
can help identify and isolate problems. You can perform the following actions:
Define your own performance graphs on demand.
Add defined graphs to the Favorites menu.
Pin defined graphs to the toolbar.
Set defined graphs as a default in the Performance window.
Rename or delete your graphs. You cannot delete predefined graphs.
Change the time range of displayed graphs.
To create a graph of a pool’s performance, see Figure 9-109 on page 306, which shows how
to create a chart, and then complete the following steps:
1. Select Pool from the resources to monitor.
2. Select the pool name to monitor.
3. Select the metrics that you want.
Figure 9-112 shows the metric options that are available for the selected pool.
Figure 9-114 Example of Easy Tier settings for total Data Movement
Figure 9-115 Easy Tier pool level workload reports (Total Data Movement)
An example report of Easy Tier Data Activity for a pool is shown in Figure 9-117.
Note: The Performance action on the Volume, Host, and LSS resources is also available
in all pages where they are shown. For any Volume, Host, or LSS, select Performance
from the Action menu and then click the metric that you want to monitor. The performance
window for the selected resource and metric opens. Figure 9-124 shows the performance
actions and metrics that are available for a volume on the Volume by LSS page.
The following list shows all of the statistics that are available for port error checking:
Total errors: The total number of errors that were detected on the FC port.
Error frame: An FC frame was received that was not consistent with the FCP.
Link failure: FC connectivity with the port was broken. This type of error can occur when
the system that is connected to the port is restarted, replaced, or serviced, and the FC
cable that is connected to the port is temporarily disconnected. It can also indicate a faulty
connector or cable. Link failures result in degraded performance of the FC port until the
failure is fixed.
Loss of sync: A synchronization loss error was detected on the FC link. This type of error
can occur when the system that is connected to the port is restarted, replaced, or
serviced, and the FC cable that is connected to the port is temporarily disconnected. It can
also indicate a faulty connector or cable. If a synchronization loss error persists, it can
result in a link failure error.
Loss of signal: A loss of signal was detected on the FC link. This type of error can occur
when the system that is connected to the port is replaced or serviced, and the FC cable
that is connected to the port is temporarily disconnected. It can also indicate a faulty
connector or cable. If a loss of signal error persists, it can result in a link failure error.
Cyclic redundancy check (CRC) error: An FC frame was received with CRC errors. This
type of error is often fixed when the frame is retransmitted. This type of error is often
recoverable and it does not degrade system performance unless the error persists and the
data cannot be relayed after retransmission.
Primitive sequence protocol error: A primitive sequence protocol error was detected. A
primitive sequence is an ordered set that is transmitted and repeated continuously to
indicate specific conditions within the port. The set might also indicate conditions that are
encountered by the receiver logic of the port. This type of error occurs when an
unexpected primitive sequence is received.
Note: This chapter illustrates only a few essential commands. For a list of all commands
and their parameters, see Command-line interface.
The following list highlights a few of the functions that you can perform with the DS CLI:
Manage user IDs and passwords that can be used with DS GUI, DS CLI, and Hardware
Management Console (HMC).
Install activation keys for licensed features.
Manage storage complexes and units.
Configure and manage storage facility images (SFIs).
Create and delete redundant array of independent disks (RAID) arrays, ranks, and extent
pools.
Create and delete logical volumes.
Manage the host access to volumes.
Check the current Copy Services (CS) configuration that is used by the storage unit.
Create, modify, or delete CS configuration settings.
Integrate Lightweight Directory Access Protocol (LDAP) policy usage and configuration.
Implement encryption functions.
Single installation: In almost all cases, you can use a single installation of the current
version of the DS CLI for all of your system needs. However, it is not possible to test every
version of DS CLI with every Licensed Machine Code (LMC) level. Therefore, an
occasional problem might occur despite every effort to maintain that level of compatibility.
If you suspect a version incompatibility problem, install the DS CLI version that
corresponds to the LMC level that is installed on your system. You can have more than one
version of DS CLI installed on your system, each in its own directory.
Important: For more information about supported OSs, specific preinstallation concerns,
and installation file locations, see IBM Documentation.
The installation can be performed through a shell, such as the bash or Korn shell, through the
Windows command prompt, or through a GUI. If the installation is performed from a shell, it
can run as a silent installation by using a profile file. The installation process also installs
software that allows the DS CLI to be uninstalled when the DS CLI is no longer required.
This chapter focuses on the DS CLI that is installed on an external server and interacting with
the DS8A00. Alternatively, the DS CLI embedded in the DS GUI can be used, and it can be
accessed from the DS GUI dashboard by using the DS CLI option in the lower left corner of
the dashboard.
After you ensure that Java 8 or later is installed, complete one of the following actions to
correct the Java virtual machine Not Found error:
Run the DS CLI installer again from the console, and provide the path to the Java virtual
machine (JVM) by using the LAX_VM option. The following examples represent paths to
the correct version of Java:
– For a Windows system, specify the following path:
dsclisetup.exe LAX_VM "C:\Program Files\java-whatever\jre\bin\java.exe"
Note: Due to a space in the Program Files directory name, add quotation marks
around the directory name.
Note: If you use the LAX_VM argument, the installer attempts to use whatever JVM
that you specify, even if it is an unsupported version. If an unsupported version is
specified, the installation might complete successfully, but the DS CLI might not run
and return an Unsupported Class Version Error message. You must ensure that
you specify a supported version.
For instances where Java is already set up, the installation starts with running the
dsclisetup.exe program that is found in your installation media, as shown in Figure 10-1.
The DS CLI runs under UNIX System Services for z/OS, and has a separate FMID HIWN63A.
You can also install the DS CLI separately from IBM Copy Services Manager.
For more information, see IBM DS CLI on z/OS Program Directory, GI13-3563. You can use
the order number (GI13-3563) to search for it at IBM Publications Center.
After the installation is done, the first thing to do is to access your UNIX System Services for
z/OS. This process can vary from installation to installation. Ask your z/OS system
programmer how to access it.
Tip: Set your Time Sharing Option (TSO) REGION SIZE to 512 MB to allow the DS CLI to
run.
===> omvs
Figure 10-2 OMVS command to start the z/OS UNIX Shell
The default installation path for the z/OS DS CLI is /opt/IBM/CSMDSCLI. To run the DS CLI,
change your working directory to the installation path by issuing the following command, as
shown in Figure 10-3:
cd /opt/IBM/CSMDSCLI
IBM
Licensed Material - Property of IBM
...
GSA ADP Schedule Contract with IBM Corp.
-----------------------------------------------------------------------
Business Notice:
IBM's internal systems must only be used for conducting IBM's
business or for purposes authorized by IBM management.
-----------------------------------------------------------------------
===> cd /opt/IBM/CSMDSCLI
INPUT
ESC=¢ 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr
8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
Figure 10-3 The cd /opt/IBM/CSMDSCLI command
IBM
Licensed Material - Property of IBM
...
GSA ADP Schedule Contract with IBM Corp.
$ cd /opt/IBM/CSMDSCLI
$
===> ./dscli
INPUT
ESC=¢ 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr
8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
Figure 10-4 The ./dscli command
If you change your mind and decide to quit here, instead of typing ./dscli, press F2 to
activate the SubCmd, as shown in Figure 10-4 (2=SubCmd). The OMVS Subcommand line is
displayed, and you can issue a quit command.
As shown in Figure 10-5, the message CEE5210S The signal SIGHUP was received followed
by *** appears. Press Enter to quit OMVS.
IBM
Licensed Material - Property of IBM
...
IBM is a registered trademark of the IBM Corp.
...
-----------------------------------------------------------------------
$ cd /opt/IBM/CSMDSCLI
$
OMVS Subcommand ==> quit
SUBCOMMAND
ESC= 1=Help 2=SubCmd 3=Return 4=Top 5=Bottom 6=TSO 7=BackScr
8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
-------------------------------------------------------
CEE5210S The signal SIGHUP was received.
***
Figure 10-5 Sequence to leave the DS CLI
Business Notice:
IBM's internal systems must only be used for conducting IBM's
business or for purposes authorized by IBM management.
-----------------------------------------------------------------------
$ cd /opt/IBM/CSMDSCLI
$ ./dscli
Enter the primary management console IP address: <enter-your-machine-ip-address>
Enter the secondary management console IP address:
Enter your username: <enter-your-user-name-as-defined-on-the-machine>
Enter your password: <enter-your-user-password-to-access-the-machine>
dscli> ver -l
...
dscli>
===>
INPUT
ESC=¢ 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr 8=Scroll
9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
The command that you run on DS CLI on z/OS has the same syntax as in other platforms.
Some examples of those commands are shown in Figure 10-7.
dscli> lssi
Name ID Storage Unit Model WWNN State ESSNet
========================================================================================
IBM.2107-75xxxx IBM.2107-75xxxx IBM.2107-75xxxxx A50 5005076303FFD13E Online Enabled
dscli> lsckdvol -lcu EF
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
===========================================================================================
ITSO_EF00 EF00 Online Normal Normal 3390-A CKD Base - P1 262668
ITSO_EF01 EF01 Online Normal Normal 3390-9 CKD Base - P1 10017
dscli> mkckdvol -dev IBM.2107-75xxxxx -cap 3339 -datatype 3390 -eam rotateexts -name ITSO_#h -extpool P1
EF02-EF02
CMUC00021I mkckdvol: CKD Volume EF02 successfully created.
dscli> lsckdvol -lcu EF
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
===========================================================================================
ITSO_EF00 EF00 Online Normal Normal 3390-A CKD Base - P1 262668
ITSO_EF01 EF01 Online Normal Normal 3390-9 CKD Base - P1 10017
ITSO_EF02 EF02 Online Normal Normal 3390-3 CKD Base - P1 3339
dscli> rmckdvol EF02
CMUC00023W rmckdvol: The alias volumes associated with a CKD base volume are automatically deleted
before deletion of the CKD base volume. Are you sure you want to delete CKD volume EF02? [y/n]: y
CMUC00024I rmckdvol: CKD volume EF02 successfully deleted.
dscli>
===>
Example 10-3 shows a JCL to run several commands in a row, each in single-shot mode.
Example 10-4 shows an example of the ver command, where the customer uses an earlier
DS CLI version.
The installsoftware command is used to install a new version of IBM Copy Services
Manager software on the HMC, as shown in Example 10-6.
The default user ID is admin and the password is admin. The system forces you to change the
password at the first login. If you forget the admin password, a reset can be performed that
resets the admin password to the default value.
The following commands are used to manage user IDs by using the DS CLI:
mkuser
A user account that can be used with the DS CLI and the DS GUI is created by using this
command. Example 10-7 shows the creation of a user that is called John, which is in the
op_storage group. The temporary password of the user is passw0rd. The user must
change the temporary password during the first login.
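A sketch of the command form that Example 10-7 describes; the user name, group, and
temporary password shown here follow the description above:
dscli> mkuser -group op_storage -pw passw0rd John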
chuser
Use this command to change the password or group (or both) of an existing user ID. It can
also be used to unlock a user ID that was locked by exceeding the allowable login retry
count. The administrator can also use this command to lock a user ID. In Example 10-9,
we unlock the user, change the password, and change the group membership for a user
that is called JohnDoe. The user must use the chpass command the next time that they log
in.
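A sketch of the kind of combined chuser command that Example 10-9 describes; the new
password and group here are illustrative placeholders, and the exact flags can be confirmed
with help chuser:
dscli> chuser -unlock -pw tempw0rd -group op_storage JohnDoe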
lsuser
By using this command, a list of all user IDs can be generated. Example 10-10 shows a
list of the defined users, including the administrator account.
dscli> lsuser -l
Name Group State Scope
=================================================
admin admin active *
engineering ibm_engineering active *
John op_storage active PUBLIC
secadmin secadmin active *
service ibm_service active *
suser ibm_service active PUBLIC
showuser
The account details of a user ID can be displayed by using this command. Example 10-11
lists the details of the user JohnDoe.
chpass
By using this command, you can change two password policies: Password expiration (in
days) and the number of failed logins that are allowed. Example 10-13 shows changing
the expiration to 365 days and five failed login attempts.
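A sketch of the corresponding command with the values described above; confirm the
parameter names with help chpass:
dscli> chpass -expire 365 -fail 5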
showpass
The properties for passwords (Password Expiration days and Failed Logins Allowed) are
listed by using this command. Example 10-14 shows that passwords are set to expire in
90 days and that four login attempts are allowed before a user ID is locked.
If you create one or more profiles to contain your preferred settings, you do not need to
specify this information every time that you use the DS CLI. When you start the DS CLI, you
can specify a profile name by using the dscli command. You can override the values of the
profile by specifying a different parameter value for the dscli command.
Default profile file: The default profile file that you created when you installed the DS CLI
might be replaced every time that you install a new version of the DS CLI. It is a best
practice to open the default profile and then save it under a new file name. You can then create
multiple profiles and reference the relevant profile file by using the -cfg parameter.
The following example uses a different profile when it starts the DS CLI:
dscli -cfg newprofile.profile (or whatever name you gave to the new profile)
These profile files can be specified by using the DS CLI command parameter -cfg
<profile_name>. If the profile name is not specified, the default profile of the user is used. If a
profile of a user does not exist, the system default profile is used.
Two default profiles: If two default profiles are called dscli.profile, one profile in the
default system’s directory and one profile in your personal directory, your personal profile is
loaded.
Default newline delimiter: The default newline delimiter is a UNIX delimiter, which can
render text in Notepad as one long line. Use a text editor that correctly interprets
UNIX line endings.
devid: IBM.2107-75xxxxx
hmc1: 10.0.0.1
username: admin
pwfile: c:\mydir\75xxxxx\pwfile.txt
Adding the serial number by using the devid parameter and adding the HMC IP address
by using the hmc1 parameter are suggested. These additions help you to avoid mistakes
when you use more profiles, and you do not need to specify this parameter for certain
dscli commands that require it. Additionally, if you specify the dscli profile for CS usage,
the remotedevid parameter is suggested for the same reasons. To determine the ID of a
storage system, use the lssi CLI command.
Add the username and an encrypted password file by using the managepwfile command. A
password file that is generated by using the managepwfile command is placed at
user_home_directory/dscli/profile/security/security.dat. Specify the location of the
password file with the pwfile parameter.
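A sketch of how such a password file might be created; the user name and file path are
placeholders, and the exact parameter set varies by DS CLI version, so verify it with
help managepwfile:
dscli> managepwfile -action add -name admin -pw <password> -pwfile c:\mydir\75xxxxx\pwfile.txt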
Important: Be careful if you add multiple devid and HMC entries. Uncomment (remove
the number sign (#)) one entry at a time. If multiple hmc1 or devid entries exist, the DS
CLI uses the entry that is closest to the bottom of the profile.
Alternatively, you can modify the following lines in the dscli.profile (or any profile) file:
# Management Console/Node IP Addresses
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:10.0.0.1
hmc2:10.0.0.5
After these changes are made and the profile is saved, the DS CLI automatically
communicates through HMC2 if HMC1 becomes unreachable. By using this change, you can
perform configuration and run CS commands with full redundancy.
Two HMCs: If you specify only one HMC in a DS CLI command (or profile), any changes
that you make to users are still replicated onto the other HMC.
In Example 10-18 on page 334, lsrank is the command name, -dev and -l are command
parameters, IBM.2107-75xxxxx is the subparameter of the -dev parameter, and R1, R2,
and R3 are the rank IDs that are passed to the command.
You must supply the login information and the command that you want to process at the same
time. To use the single-shot mode, complete the following steps:
1. At the OS shell prompt, enter one of the following commands:
– dscli -hmc1 <hostname or ip address> -user <adm user> -passwd <pwd> <command>
– dscli -cfg <dscli profile> -pwfile <security file> <command>
Important: For security reasons, avoid embedding the username and password into
the profile. Instead, use the -pwfile command.
Important: When you are typing the command, you can use the hostname or the IP
address of the HMC. When a command is run in single-shot mode, the user must be
authenticated. The authentication process can take a considerable amount of time.
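For example, assuming a profile named DS8A00.profile and a password file named
pwfile.txt (both placeholder names), a single-shot invocation that lists the storage image
and then exits might look like this:
dscli -cfg DS8A00.profile -pwfile pwfile.txt lssi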
The interactive command mode provides a history function that simplifies repeating or
checking earlier command usage.
Interactive mode: In interactive mode for long outputs, the message Press Enter To
Continue appears. The number of rows can be specified in the profile file. Optionally, you
can turn off the paging feature in the DS CLI profile file by using the paging:off parameter.
Example 10-20 shows the interactive command mode by using the profile DS8A00.profile.
dscli> lsarraysite -l
Date/Time: 20 September 2024 5:42:24 PM IBM DSCLI Version: 7.10.0.767 DS: IBM.2107-78xxxxx
arsite DA Pair dkcap (10^9B) diskrpm State Array diskclass encrypt
===============================================================================
S1 0 4800.0 65000 Unassigned - FlashCoreModule supported
S2 0 4800.0 65000 Unassigned - FlashCoreModule supported
S3 0 4800.0 65000 Unassigned - FlashCoreModule supported
S4 0 4800.0 65000 Unassigned - FlashCoreModule supported
dscli> lssi
Name ID Storage Unit Model WWNN State ESSNet
===================================================================================
ds8k-g10-01 IBM.2107-78xxxxx IBM.2107-78xxxxx A05 5005076303FFF0DD Online Enabled
Example 10-21 shows the contents of a DS CLI script file. The file contains only DS CLI
commands, although comments can be placed in the file by using a number sign (#). Empty
lines are also allowed. One advantage of using this method is that scripts that are written in
this format can be used by the DS CLI on any OS on which you can install the DS CLI, as well
as in the embedded DS CLI in the DS GUI. Only one authentication process is needed to run
all of the script commands.
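A minimal sketch of what such a script file might contain; these are ordinary DS CLI list
commands and are not the contents of Example 10-21:
# List the storage image and configuration
lssi
lsextpool
lsfbvol -extpool P2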
Example 10-22 shows starting the DS CLI by using the -script parameter and specifying a
profile and script path and name that contains the commands from Example 10-21 on
page 335.
Important: The DS CLI script can contain only DS CLI commands. Using shell commands
results in a process failure.
The return codes/messages that are used by the DS CLI are listed in Messages.
Click the Command-line interface tab to access user assistance. You can also get user
assistance when using the DS CLI program by running the help command. The following
examples of usage are included:
help Lists all the available DS CLI commands.
help -s Lists all the DS CLI commands with brief descriptions of each
command.
help -l Lists all the DS CLI commands with their syntax information.
To obtain information about a specific DS CLI command, enter the command name as a
parameter of the help command. The following examples of usage are included:
help <command name> Provides a detailed description of the specified command.
help -s <command name> Provides a brief description of the specified command.
Man pages
A man page is available for every DS CLI command. Man pages are most commonly seen in
UNIX OSs, and provide information about command capabilities. This information can be
displayed by issuing the relevant command followed by the -h, -help, or -? flags.
The following possible topologies for each I/O port are available:
Small Computer System Interface - Fibre Channel Protocol (SCSI-FCP): Fibre Channel
(FC)-switched fabric, which is also called switched point-to-point. This port type is also
used for mirroring.
Fibre Channel connection (IBM FICON): This port type is for IBM Z system hosts only.
The Security field in the lsioport output (Example 10-23) indicates the status of the
IBM Fibre Channel Endpoint Security feature (In-flight data encryption over FICON links). For
more information, see IBM Fibre Channel Endpoint Security for IBM DS8000 and IBM Z,
SG24-8455.
If added to the setioport command, the -force parameter allows a topology change to an
online I/O port even if a topology is set. Example 10-24 shows setting I/O ports without and
with the -force option to the FICON topology, and then checking the results.
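A sketch of the kind of commands involved; the port IDs I0010 and I0011 are placeholders:
dscli> setioport -topology ficon I0010
dscli> setioport -topology ficon -force I0011
dscli> lsioport -l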
To monitor the status for each I/O port, see 10.5, “Metrics with DS CLI” on page 372.
Important: For more information about the current drive choices and RAID capacities, see
IBM Documentation.
Important: One rank is assigned to one array. An array is made of only one array site. An
array site contains eight drives. There is a one to one relationship among array sites,
arrays, and ranks.
You can issue the mkarray command to create arrays, as shown in Example 10-26. The
example uses one array site to create a single RAID 6 array. If you want to create a RAID 10
array, change the -raidtype parameter to 10.
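A sketch of the command form; array site S1 is taken from the earlier lsarraysite output,
and -raidtype 10 would be used for a RAID 10 array instead:
dscli> mkarray -raidtype 6 -arsite S1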
You can now see the arrays that were created by using the lsarray -l command, as shown
in Example 10-27.
Example 10-28 Listing array sites of the FlashCoreModule disk class by using the lsarraysite command
dscli> lsarraysite -l -diskclass FlashCoreModule
arsite DA Pair dkcap (10^9B) diskrpm State Array diskclass encrypt
===============================================================================
S1 0 4800.0 65000 Assigned A0 FlashCoreModule supported
S2 0 4800.0 65000 Assigned A1 FlashCoreModule supported
RAID types
RAID 6 and RAID 10 are supported RAID types for DS8A00 systems. RAID 5 is not
supported with DS8A00. You receive an alert and error message if you try to create RAID 5.
Example 10-29 shows an attempt to create RAID 5.
Example 10-29 Alert and error message when attempting to create RAID 5 array
dscli> mkarray -raidtype 5 -arsite S3
CMUC00536W mkarray: The use of RAID 6 over RAID 5 is highly recommended for
increased reliability. Please acknowledge that you understand the risks associated
with RAID 5.: Are you sure you want to accept the disclaimer above? [Y/N]: y
CMUN81522E mkarray: RAID 5 is not supported by the device adapter pair.
If you enabled data-at-rest encryption with a local or external key manager, you must include
the -keygrp flag with the corresponding encryption key group ID when you create ranks with
the mkrank command. The ID is included in the lskeygrp command output.
After all the ranks are created, use the lsrank command to display the following information:
List of all created ranks with associated ID.
The rank group 0 or 1. Each group has server affinity (0 for server0 or 1 for server1). Rank
group ID is displayed only when a rank is assigned to a specific extent pool.
The RAID type.
The format of the rank (fb or ckd).
Example 10-30 shows the lskeygrp and mkrank commands and the result of a successful
lsrank command.
Example 10-30 Creating and listing ranks by using the mkrank and lsrank commands
dscli> lskeygrp
ID state reckeystate reckeydate datakeydate keyprotocol type name
========================================================================
1 accessible disabled - 09/18/2024 LOCAL DAR DAR_1
In this output, the encryption key group ID is 1, which must be specified with the
-keygrp parameter of the mkrank command.
dscli> lsarray
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
====================================================================
A0 Assigned Normal 6 (5+P+Q+S) S1 R0 0 4800.0
A1 Assigned Normal 6 (5+P+Q+S) S2 R1 0 4800.0
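The mkrank invocations themselves are not captured above; a minimal sketch, using the array
IDs and key group from the preceding output (the -stgtype value depends on whether FB or
CKD ranks are needed), might be:
dscli> mkrank -array A0 -stgtype fb -keygrp 1
dscli> mkrank -array A1 -stgtype fb -keygrp 1
dscli> lsrank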
When defining a rank, you can also specify the extent size. You can have ranks and extent
pools with large 1 gibibyte (GiB) FB extents or small 16 mebibyte (MiB) FB extents. The
extent unit is specified by the -extsize parameter of the mkrank command. The first rank that
is added to an extent pool determines the extent size of the extent pool.
10.3.4 Creating the extent pools and assigning ranks to the extent pools
The next step is to create the extent pools. Remember the following points when you create
the extent pools:
Each extent pool includes an associated rank group that is specified by the -rankgrp
parameter, which defines the extent pool’s server affinity (0 for server0 or 1 for server1).
The extent pool type is specified by the -stgtype parameter. FB is used for Fixed-Block
format.
The number of extent pools can range from one to the number of existing ranks. However,
to associate ranks with both servers, you need at least two extent pools.
For easier management, create empty extent pools that relate to the type of storage or the
planned usage for that pool. For example, create an extent pool pair for FB open systems
environment and create an extent pool pair for the CKD environment.
When an extent pool is created, the system automatically assigns it an extent pool ID, which
is a decimal number that starts from 0, preceded by the letter P. The ID that was assigned to
an extent pool is shown in the CMUC00000I message, which is displayed in response to a
successful mkextpool command.
Extent pools that are associated with rank group 0 receive an even ID number. Extent pools
that are associated with rank group 1 receive an odd ID number. The extent pool ID is used
when you refer to the extent pool in subsequent DS CLI commands. Therefore, it is a best
practice to note the ID.
The mkextpool command forces you to name the extent pools. To do so, complete the
following steps (Example 10-31 on page 342):
1. Create empty extent pools by using the mkextpool command. The -keygrp parameter must
be used because encryption is enabled (see Example 10-30 on page 340 for more
information about finding the encryption key group ID value).
2. List the extent pools to obtain their IDs by using the lsextpool command.
3. Assign a rank to an extent pool by using the chrank command.
Example 10-31 Creating an extent pool by using mkextpool, lsextpool, and chrank
dscli> mkextpool -rankgrp 0 -keygrp 1 -stgtype fb FB_0
CMUC00000I mkextpool: Extent pool P0 successfully created.
dscli> mkextpool -rankgrp 1 -keygrp 1 -stgtype fb FB_1
CMUC00000I mkextpool: Extent Pool P1 successfully created.
dscli> lsextpool
First half of the command output:
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
=========================================================================================
FB_0 P0 fb 0 full 0 100 0 0 0
FB_1 P1 fb 1 full 0 100 0 0 0
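The chrank step from the list above is not captured in this output; a minimal sketch of
assigning the ranks to the new pools might be:
dscli> chrank -extpool P0 R0
dscli> chrank -extpool P1 R1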
After a rank is assigned to an extent pool, use the lsrank command to display rank
assignments to extent pools. In Example 10-32, you can see that rank R0 is assigned to
extpool P0 and rank R1 to extent pool P1.
Example 10-32 Displaying the ranks after a rank is assigned to an extent pool
dscli> lsrank
ID Group State datastate Array RAIDtype extpoolID stgtype
===========================================================
R0 0 Normal Normal A0 6 P0 fb
R1 1 Normal Normal A1 6 P1 fb
Example 10-33 List rank command showing the extent size, capacity, and compression
dscli> lsrank -l
First half of the command output:
ID Group State datastate Array RAIDtype extpoolID extpoolnam stgtype exts usedexts keygrp marray extsize
============================================================================================================
R0 0 Normal Normal A0 6 P0 FB_0 fb 6464054 0 1 MA1 16MiB
R1 1 Normal Normal A1 6 P1 FB_1 fb 6464054 0 1 MA2 16MiB
Note: Only thin provisioned ESE volumes can be created in extent pools with FlashCore
Modules (FCM) disks.
Extent pools with standard NVMe disks support both fully provisioned standard and thin
provisioned ESE volumes.
The last parameter is the volume_ID, which can be a range or a single entry. The four-digit
entry is based on LL and VV. LL (00–FE) equals the logical subsystem (LSS) that the volume
belongs to, and VV (00–FF) equals the volume number on the LSS. Therefore, the DS8A00
can support 255 LSSs, and each LSS can support a maximum of 256 volumes.
Example 10-34 shows the creation of eight volumes, each with a capacity of 10 GiB. The first
four volumes are assigned to rank group 0, and are assigned to LSS 20 with volume numbers
00 - 03. The second four volumes are assigned to rank group 1, and are assigned to LSS 21
with volume numbers of 00 - 03.
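A sketch that is consistent with this description (Example 10-34 itself is on page 343); the
pool IDs, capacities, names, and volume ranges follow the text above:
dscli> mkfbvol -extpool P2 -cap 10 -name fb_0_#h 2000-2003
dscli> mkfbvol -extpool P3 -cap 10 -name fb_1_#h 2100-2103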
Looking closely at the mkfbvol command that is used in Example 10-34 on page 343, you see
that volumes 2000 - 2003 are in extpool P2. That extent pool is attached to rank group 0,
which means server 0. Rank group 0 can contain only even-numbered LSSs, which means
that volumes in that extent pool must belong to an even-numbered LSS. The first two digits of
the volume serial number are the LSS number. So, in this case, volumes 2000–2003 are in
LSS 20.
For volumes 2100–2103 in extpool P3 in Example 10-34 on page 343, the first two digits of
the volume serial number are 21 (an odd number), which signifies that they belong to rank
group 1. The -cap parameter determines the size. However, because the -type parameter
was not used, the default type is GiB or ds, which is a binary size of 2^30 bytes.
Therefore, these volumes are 10 GiB binary, which equates to 10,737,418,240 bytes. If you
used the -type ess parameter, the volumes are decimally sized, and they are a minimum of
10,000,000,000 bytes in size.
Example 10-34 on page 343 named the volumes by using the naming scheme fb_0_#h,
where #h means that you are using the hexadecimal volume number as part of the volume
name. This naming convention is shown in Example 10-35, where you list the volumes that
you created by using the lsfbvol command. You then list the extent pools to see how much
space is left after the volume is created.
Example 10-35 Checking the machine after the volumes are created by using lsfbvol and lsextpool
dscli> lsfbvol
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
================================================================================================================
fb_0_2000 2000 Online Normal Normal 2107-900 FB 512 P2 10.0 - 20971520
fb_0_2001 2001 Online Normal Normal 2107-900 FB 512 P2 10.0 - 20971520
fb_0_2002 2002 Online Normal Normal 2107-900 FB 512 P2 10.0 - 20971520
fb_0_2003 2003 Online Normal Normal 2107-900 FB 512 P2 10.0 - 20971520
fb_1_2100 2100 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
fb_1_2101 2101 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
fb_1_2102 2102 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
fb_1_2103 2103 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
=========================================================================================
FB_0 P2 fb 0 below 7180 0 459531 64 4
FB_1 P3 fb 1 below 8625 0 552015 64 4
Resource group: You can configure a volume to belong to a certain resource group by
using the -resgrp <RG_ID> flag in the mkfbvol command. For more information, see IBM
System Storage DS8000 Copy Services Scope Management and Resource Groups,
REDP-4758.
You configure T10 DIF by adding the -t10dif parameter to the mkfbvol command. It is
possible to create T10 DIF-capable volumes, use them as standard volumes, and enable the
protection later without configuration changes.
You can also specify that you want the extents of the volume that you create to be evenly
distributed across all ranks within the extent pool. This allocation method is called rotate
extents. The storage pool striping spreads the I/O of a logical unit number (LUN) to multiple
ranks, which improves performance and greatly reduces hot spots.
The extent allocation method (EAM) is specified by the -eam rotateexts or -eam rotatevols
option of the mkfbvol command, as shown in Example 10-36.
Default allocation policy: For DS8A00, the default allocation policy is rotate extents.
The showfbvol command with the -rank option (Example 10-37 on page 346) shows that the
volume that you created is distributed across two ranks. It also shows how many extents on
each rank were allocated for this volume. Compared to the previous examples, the extent pool
P2 now consists of two ranks, R2 and R3.
The largest LUN size is 16 TiB. CS is not supported for LUN sizes larger than 4 TiB.
New capacity: The new capacity must be larger than the previous capacity. You cannot
shrink the volume.
Because the original volume included the rotateexts attribute, the other extents are also
striped, as shown in Example 10-39. See both examples and check the difference.
Important: Before you can expand a volume, you must delete all CS relationships for that
volume.
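Assuming the expansion is performed with the chfbvol command and its -cap parameter, a
minimal sketch follows; the volume ID 2000 and the new 20 GiB capacity are placeholders:
dscli> chfbvol -cap 20 2000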
The extent size is defined by the extent pools where the volume is created.
More DS CLI commands are available to control and protect the space in an extent pool with
thin-provisioned volumes. One of these commands is the mksestg command, which reserves
space for thin-provisioned volumes. For more information about thin-provisioning, see IBM
DS8000 Thin Provisioning (Updated for Release 9.3), REDP-5343-02.
Deleting volumes
FB volumes can be deleted by using the rmfbvol command. The command includes options
to prevent the accidental deletion of volumes that are in use. An FB volume is considered to
be in use if it is participating in a CS relationship or if the volume received any I/O operation in
the previous 5 minutes.
Volume deletion is controlled by the -safe and -force parameters (they cannot be specified
at the same time) in the following manner:
If none of the parameters are specified, the system performs checks to see whether the
specified volumes are in use. Volumes that are not in use are deleted and the volumes that
are in use are not deleted.
If the -safe parameter is specified and if any of the specified volumes are assigned to a
user-defined volume group, the command fails without deleting any volumes.
The -force parameter deletes the specified volumes without checking whether they are in
use.
Example 10-41 shows the creation of volumes 2200 and 2201, and then the assignment of
volume 2200 to a volume group. An attempt to delete both volumes with the -safe option fails
because the volume 2200 is assigned to volume group V0. You can delete volume 2201 by
using the -safe option because the volume is not assigned to a volume group. Volume 2200
is not in use, so you can delete it by not specifying either parameter.
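A sketch of the command sequence that Example 10-41 describes: the first command fails
because volume 2200 is in volume group V0, the second deletes volume 2201, and the third
deletes volume 2200 without specifying either parameter:
dscli> rmfbvol -safe 2200-2201
dscli> rmfbvol -safe 2201
dscli> rmfbvol 2200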
The command includes options to prevent the accidental reinitialization of volumes that are in
use. An FB volume is considered to be in use if it is participating in a CS relationship or if the
volume received any I/O operation in the previous 5 minutes. All data is lost when this
command is used.
Note: There are two ways to create volume groups and map the volumes to the hosts.
Volume groups can be created manually in single steps or automatically. The automatic
method is done by using the mkhost and chhost commands, and it is the recommended
method for mapping volumes to host systems.
The following sections describe both ways. Understand that the manual way exists only for
compatibility with earlier versions and must be applied with care so that all the steps are
done and the configuration is fully reflected in the DS GUI results and views.
Example 10-43 Listing the host types by running the lshosttype command
dscli> lshosttype -type scsimask
HostType Profile AddrDiscovery LBS
===========================================================================
Hp HP - HP/UX reportLUN 512
SVC SAN Volume Controller reportLUN 512
SanFsAIX IBM pSeries - AIX/SanFS reportLUN 512
pSeries IBM pSeries - AIX reportLUN 512
pSeriesPowerswap IBM pSeries - AIX with Powerswap support reportLUN 512
zLinux IBM zSeries - zLinux reportLUN 512
dscli> lshosttype -type scsimap256
HostType Profile AddrDiscovery LBS
=================================================
AppleOSX Apple - OSX LUNPolling 512
Fujitsu Fujitsu - Solaris LUNPolling 512
HpTru64 HP - Tru64 LUNPolling 512
HpVms HP - Open VMS LUNPolling 512
Linux Linux Server LUNPolling 512
Novell Novell LUNPolling 512
SGI SGI - IRIX LUNPolling 512
SanFsLinux - Linux/SanFS LUNPolling 512
Sun SUN - Solaris LUNPolling 512
VMWare VMWare LUNPolling 512
Windows Windows Server LUNPolling 512
iLinux IBM iSeries - iLinux LUNPolling 512
nSeries IBM N series Gateway LUNPolling 512
pLinux IBM pSeries - pLinux LUNPolling 512
Example 10-44 Creating a volume group by using mkvolgrp and displaying it by using lsvolgrp
dscli> mkvolgrp -type scsimask -volume 2000-2002,2100-2102 AIX_VG_01
CMUC00030I mkvolgrp: Volume group V1 successfully created.
dscli> lsvolgrp -l -type scsimask
Name ID Type
============================
v0 V0 SCSI Mask
AIX_VG_01 V1 SCSI Mask
pE950_042 V2 SCSI Mask
pE950_048 V3 SCSI Mask
pseries_cluster V7 SCSI Mask
dscli> showvolgrp V1
Name AIX_VG_01
ID V1
Type SCSI Mask
Vols 2000 2001 2002 2100 2101 2102
You might also want to add or remove volumes to this volume group later. To add or remove
volumes, use the chvolgrp command with the -action parameter.
Example 10-45 shows adding volume 2003 to volume group V1, displaying the results, and
then removing the volume.
Important: Not all OSs can manage a volume removal. To determine the safest way to
remove a volume from a host, see your OS documentation.
You can use a set of cluster commands (mkcluster, lscluster, showcluster, and rmcluster)
to create clusters to group hosts that have the same set of volume mappings, map or unmap
volumes directly to these clusters, or both. These commands were added to provide
consistency between the GUI and DS CLI. For more information, see “Creating open system
clusters and hosts” on page 265.
Clusters are grouped hosts that must share volume access with each other. A cluster usually
contains several hosts. Single hosts can exist without a cluster. Clusters are created with the
mkcluster command. This command does not need many parameters and is there only to
organize hosts.
The mkhost command now has two generic host types that are available: Linux Server and
Windows Server. These types were created to simplify and remove confusion when
configuring these host types. You must define the host type first by running the mkhost
command, as shown in Example 10-46.
Example 10-46 Creating generic host types Linux Server and Windows Server
dscli> mkcluster cluster_1
CMUC00538I mkcluster: The cluster cluster_1 is successfully created.
Usage: mkhost [ { -help|-h|-? } ] [-v on|off] [-bnr on|off] [-dev storage_image_ID] -type
AIX|AIX with PowerSwap|HP OpenVMS|HP-UX|IBM i AS/400|iLinux|Linux RHEL|Linux SUSE|Linux
Server|N series Gateway|Novell|pLinux|SAN Volume Controller|Solaris|VMware|Windows
2003|Windows 2008|Windows 2012|Windows Server|zLinux [-hostport wwpn1[,wwpn2,...]]
[-cluster cluster_name] Host_Name | -
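A concrete invocation that is consistent with this usage might look like the following sketch;
the host name, WWPN, and cluster name are illustrative placeholders:
dscli> mkhost -type "Linux Server" -hostport 10000000C9AABB01 -cluster cluster_1 linux_host_1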
More commands are also available: chhost, lshost, showhost, and rmhost. These
commands were added to provide consistency between the DS GUI and DS CLI.
Example 10-47 provides examples of the commands.
To link the logical hostname with a real physical connection, host ports must be assigned to
the host using the mkhostport command. This task can also be done with the mkhost
command specifying the -hostport wwpn1[,wwpn2,...] option during host creation to save
the additional configuration step.
To see a list of unassigned worldwide port names (WWPNs), which are already logged in to
the storage system and represent the physical HBA ports of the hosts, run the lshostport
command. Specifying -unknown shows all the free ports and -login shows all the logged-in
ports. It takes a while until the storage system displays the new logged-in ports. It is also
possible to add them manually, and they do not need to be logged in to create the host
configuration and volume mappings.
The last step is to assign volumes to the host or cluster to allow host access to the volumes.
The volumes that are assigned to the host are seen only by the host, and the volumes that
are assigned to the cluster can be seen by all hosts inside the cluster. The volume groups for
the cluster and the host are generated automatically by running the chhost/chcluster
-action map command. The automatically created volume groups have the same name as
the host or cluster itself but are different objects.
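As an illustrative sketch only, mapping volumes to a host might take a form like the following;
the -volume parameter and the names shown here are assumptions that should be verified
with help chhost:
dscli> chhost -action map -volume 2000-2002 linux_host_1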
Example 10-50 shows some change and removal commands. When a host is unassigned from
a cluster, the host keeps the cluster volume mappings, and the cluster keeps them as well.
The automatically created volume groups are removed only if the host is removed or if you run
the rmvolgrp command.
Example 10-51 shows the creation of a host connection that represents two HBA ports in this
AIX host. Use the -hosttype parameter to include the host type that you used in
Example 10-43 on page 350. Allocate it to volume group V1. If the storage area network
(SAN) zoning is correct, the host can see the LUNs in volume group V1.
The option in the mkhostconnect command to restrict access only to certain I/O ports is also
available by using the -ioport parameter. Restricting access in this way is unnecessary. If
you want to restrict access for certain hosts to certain I/O ports on the DS8A00, perform
zoning on your SAN switch.
The mkhostconnect command normally is sufficient to allow the specified host ports to access
the volumes. The command works, but it is not reflected in the modernized GUI
interface. The modernized GUI interface introduced host and cluster grouping for easier
management of groups of hosts with many host ports. If no host or cluster is assigned to the
created connection, the GUI still shows the ports as unassigned host ports with mapped
volumes. Figure 10-8 shows the results in the DS GUI.
Figure 10-8 Volumes that are mapped to a host port without a host
The lshostconnect -l command in Example 10-52 shows that the relationship between the
volume group and a host connection was not built up. The assigned host is missing in the last
column and portgrp 0 is used, which is not recommended because it is the default port group
for new host ports. There is no host that is created yet for the AIX connection in our example.
The first column does not show the hostname: It is a symbolic name for the connection for
better recognition. The ID field makes the connection unique.
Note: As a best practice, do not use host port group ID 0. A port group ID ties together a
group of SCSI host port objects that access a common volume group. If the port group
value is set to zero, the host port is not associated with any port group. Zero is used by
default for ports that are not grouped yet.
If you want to use a single command to change the assigned volume group of several host
connections at the same time, assign the host connections to a unique port group. Then, run
the managehostconnect command. This command changes the assigned volume group for all
host connections that are assigned to a particular port group.
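A minimal sketch, assuming that the target volume group is V1, that the port group number is 1, and that the port group is passed as the final positional argument; verify the exact syntax with the managehostconnect help:
dscli> managehostconnect -volgrp V1 1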
Example 10-53 shows the steps to finish the configuration of the mapping. A host must be
created and then assigned to the existing connections A and B and to the volume group V1
relationship.
Note: Using the mkvolgrp and mkhostconnect commands for storage partitioning to map
volumes to hosts is not the preferred method. It is available for compatibility with earlier and
existing volume groups. It is better to use the mkhost command from the beginning to
assign hosts, host ports, volume groups, and volume mappings together. It combines the
needed functions in one command, makes sure that no step is forgotten, and reduces the
number of steps that are needed.
You log on to this host and start DS CLI. It does not matter which HMC you connect to when
you use the DS CLI. Then, run the lshostvol command.
Important: The lshostvol command communicates only with the OS of the host on which
the DS CLI is installed. You cannot run this command on one host to see the attached
disks of another host.
Note: The Subsystem Device Driver Path Control Module (SDDPCM) (a multipath solution
on AIX) and Subsystem Device Driver Device Specific Module (SDDDSM) (a multipath
solution on Windows) are no longer developed for DS8000. Instead, use an OS-native
solution such as AIX Multipath I/O Path Control Module (AIXPCM) (the AIX default
multipath solution) or Microsoft Device Specific Module (MSDSM) (the Windows default
multipath solution). These solutions are fully supported on open systems.
Unlike with FB volumes, you do not need to create volume groups or host connects for CKD
volumes. If I/O ports in FICON mode exist, access to CKD volumes by FICON hosts is
granted automatically, and follows the specifications in the input/output definition file (IODF).
If you enabled Data-at-Rest encryption with a local or external key manager, then when you
create ranks, you must include the -keygrp flag in the mkrank command with the
corresponding encryption key group ID, which is included in the lskeygrp command output.
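The following sketch illustrates the flow; the array ID and key group ID are taken from the examples in this chapter and are illustrative only:
dscli> lskeygrp
dscli> mkrank -array A0 -stgtype ckd -keygrp 1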
After all the ranks are created, use the lsrank command to display the following information:
List of all created ranks with associated ID.
The rank group 0 or 1. Each group has server affinity (0 for server0 or 1 for server1). Rank
group ID is displayed only when a rank is assigned to a specific extent pool.
The RAID type.
Example 10-55 shows the process of creating ranks by using the lskeygrp, mkrank, lsrank,
mkextpool, chrank, and lsextpool commands.
In the preceding output, the encryption key group ID is 1. This value must be specified with
the -keygrp parameter of the mkrank command.
dscli> lsrank
ID Group State datastate Array RAIDtype extpoolID stgtype
=============================================================
R0 - Unassigned Normal A0 6 - ckd
R1 - Unassigned Normal A1 6 - ckd
dscli> lsarray
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B)
When you define a rank, you can also specify the extent size. You can have ranks and extent
pools with large 1113 cylinder CKD extents, or small 21 cylinder CKD extents. The extent unit
is specified with the -extsize parameter of the mkrank command. The first rank that is added
to an extent pool determines the extent size of the entire extent pool. Example 10-56 shows
CKD ranks with small and large extent sizes.
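As a sketch only, small-extent and large-extent CKD ranks might be created as follows; the exact -extsize value tokens (shown here as 21cyl and 1113cyl) are assumptions, so confirm them with the mkrank help:
dscli> mkrank -array A2 -stgtype ckd -keygrp 1 -extsize 21cyl
dscli> mkrank -array A3 -stgtype ckd -keygrp 1 -extsize 1113cyl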
For easier management, create empty extent pools that relate to the type of storage or the
planned usage for that pool. For example, create an extent pool pair for the CKD environment.
When an extent pool is created, the system automatically assigns it an extent pool ID, which
is a decimal number that starts from 0, preceded by the letter P. The ID that was assigned to
an extent pool is shown in the CMUC00000I message, which is displayed in response to a
successful mkextpool command.
Extent pools that are associated with rank group 0 receive an even ID number. Extent pools
that are associated with rank group 1 receive an odd ID number. The extent pool ID is used
when you refer to the extent pool in subsequent DS CLI commands. Therefore, it is a best
practice to note the ID.
The mkextpool command forces you to name the extent pools. To do so, complete the
following steps (see Example 10-57 on page 362 and the command sketch that follows these
steps):
1. Create empty extent pools by using the mkextpool command. The -keygrp parameter must
be used because encryption is enabled (see Example 10-30 on page 340 for more
information about finding the encryption group ID value).
2. List the extent pools to obtain their IDs.
3. Attach a rank to an empty extent pool by using the chrank command.
4. List the extent pools again by using lsextpool and note the change in the capacity of the
extent pool after assigning ranks into the extent pools.
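The following command sketch summarizes these steps; the pool name, rank ID, and key group ID are illustrative:
dscli> mkextpool -rankgrp 0 -stgtype ckd -keygrp 1 CKD_pool_0
dscli> lsextpool
dscli> chrank -extpool P0 R0
dscli> lsextpool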
After a rank is assigned to an extent pool, use the lsrank command to display rank assignments
to extent pools. In Example 10-58, you can see that rank R0 is assigned to extent pool P0 and
rank R1 to extent pool P1.
Example 10-58 Displaying the ranks after a rank is assigned to an extent pool
dscli> lsrank
ID Group State datastate Array RAIDtype extpoolID stgtype
===========================================================
R0 0 Normal Normal A0 6 P0 ckd
R1 1 Normal Normal A1 6 P1 ckd
To display more information about ranks, use the lsrank -l command, as shown in
Example 10-59. In addition to the extent size, the output includes more information about
capacity and compression. In Example 10-59, compressionType has the value compression
because these ranks use FlashCore Module drives, and the compressionRatio value
changes as volumes are created and used.
Example 10-60 Trying to create CKD volumes without creating an LCU first
dscli> mkckdvol -extpool p2 -cap 262668 -name CKD_EAV1_#h C200
CMUN02282E mkckdvol: C200: Unable to create CKD logical volume: CKD volumes require a CKD
logical subsystem.
To create the LCUs, run the mklcu command. The command uses the following format:
mklcu -qty XX -id XX -ss XXXX
Note: For the z/OS hardware definition and Copy Services (Metro Mirror or Global Mirror),
the subsystem identifier (SSID) that is specified with the -ss parameter must be unique for
all connected storage systems.
To display the LCUs that you created, run the lslcu command.
Example 10-61 shows the creation of two LCUs by running the mklcu command and then
listing the created LCUs by running the lslcu command. By default, the LCUs that were
created are the 3990-6 type.
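A minimal sketch of such a creation follows; the SSID value is illustrative and must be chosen according to your z/OS hardware definition:
dscli> mklcu -qty 2 -id BC -ss FF10
dscli> lslcu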
Because two LCUs were created by using the parameter -qty 2, the first LCU, which is ID BC
(an even number), is in address group 0, which equates to rank group 0. The second LCU,
which is ID BD (an odd number), is in address group 1, which equates to rank group 1. By
placing the LCUs into both address groups, performance can be maximized by spreading the
workload across both servers in the DS8A00.
Note: Only thin provisioned ESE volumes can be created in extent pools with FlashCore
Modules (FCM) disks.
Extent pools with standard NVMe disks support both fully provisioned standard and thin
provisioned ESE volumes.
The CKD volumes can be created by using the mkckdvol command, as shown in the following
example:
mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -name CKD_EAV1_#h BC06
With CKD volumes, the capacity is expressed in cylinders or as mod1 (Model 1) extents (1113
cylinders). To use the capacity more efficiently and avoid wasting space, use volume
capacities that are a multiple of 1113 cylinders.
The DS8A00 supports EAV volumes up to 1,182,006 cylinders. The EAV device type is called
3390 Model A and must be specified in the mkckdvol command by using the -datatype 3390-A
parameter.
Important: For 3390-A volumes, the size can be specified as 1 - 65,520 in increments of 1,
and from 65,667, which is the next multiple of 1113, to 1,182,006 in increments of 1113.
The last parameter in the command is the volume_ID. This value determines the LCU that the
volume belongs to and the unit address (UA) for the volume. Both of these values must match
the control unit address (CUADD) and device definition in the input/output configuration data
set (IOCDS) that an IBM Z system server uses to access the volume.
The volume_ID has a format of LLVV. LL (00 - FE) equals the LCU to which the volume
belongs, and VV (00 - FF) equals the offset of the volume within the LCU. Each volume in an
LCU must have a unique VV in the range 00 - FF.
You can create only CKD volumes in LCUs that you already created. Volumes in
even-numbered LCUs must be created from an extent pool that belongs to rank group 0.
Volumes in odd-numbered LCUs must be created from an extent pool in rank group 1. With
one mkckdvol command, you can define volumes for only one LCU.
Important: You can configure a volume to belong to a certain resource group by using the
-resgrp <RG_ID> flag in the mkckdvol command. For more information, see IBM System
Storage DS8000 Copy Services Scope Management and Resource Groups, REDP-4758.
More DS CLI commands are available to control and protect the space in an extent pool for
thin-provisioned volumes. One of these commands, the mksestg command, reserves space
for thin-provisioned volumes. For more information about thin-provisioning, see IBM DS8000
Thin Provisioning (Updated for Release 9.3), REDP-5343-02.
You can also specify that you want the extents of the volume to be evenly distributed across
all ranks within the extent pool. This allocation method is called rotate extents.
The extent allocation method is specified with the -eam parameter of the mkckdvol command
by using the rotateexts or rotatevols option (Example 10-64).
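As a sketch, assuming the small-extent pool P3 and an illustrative volume ID and capacity (a multiple of 1113 cylinders), a striped volume might be created as follows:
dscli> mkckdvol -extpool P3 -cap 10017 -name ckd_rot_#h -eam rotateexts BD07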
The showckdvol command with the -rank option (Example 10-65) shows that the volume that
was created is distributed across two ranks. It also displays how many extents on each rank
were allocated for this volume. In this example, the pool P3 uses small 21-cylinder CKD
extents.
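For example, assuming the illustrative volume BD07 from the previous sketch:
dscli> showckdvol -rank BD07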
Because the original volume used the rotateexts attribute, the additional extents are also
striped, as shown in Example 10-67. In this example, the pool P3 is using small 21-cylinder
CKD extents.
Important: Before you can expand a volume, you first must delete all Copy Services
relationships for that volume. Also, you cannot specify both -cap and -datatype in the
same chckdvol command.
It is possible to expand a 3390 Model 9 volume to a 3390 Model A. Expand the volume by
specifying new capacity for an existing Model 9 volume. When you increase the size of a
3390-9 volume beyond 65,520 cylinders, its device type automatically changes to 3390-A.
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size.
Data can be at risk if the host does not support this action. Are you sure that you want to
resize the volume? [Y/N]: y
CMUC00022I chckdvol: CKD Volume BD01 successfully modified.
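A minimal sketch of issuing such an expansion, assuming volume BD01 and a target capacity of 262668 cylinders (both illustrative):
dscli> chckdvol -cap 262668 BD01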
You cannot reduce the size of a volume. If you try to reduce the size, an error message is
displayed.
The command includes a capability to prevent the accidental deletion of volumes that are in
use. A CKD volume is considered in use if it participates in a Copy Services relationship, or if
the IBM Z system path mask indicates that the volume is in a grouped state or online to any
host system.
If the -force parameter is not specified with the command, volumes that are in use are not
deleted. If multiple volumes are specified and some of them are in use and some are not, only
the volumes that are not in use are deleted.
If the -force parameter is specified in the command, the volumes are deleted without
checking to see whether they are in use.
Note: You cannot delete the volume even with the -force parameter if the volume is in an
active Safeguarded Copy relationship (GDPS or CSM). You must first terminate the
Safeguarded Copy relationship before you can delete a volume with -force.
Example 10-69 shows an attempt to delete two volumes, BD02 and BD03. Volume BD02 is
online on a host. Volume BD03 is not online on any host and not in a copy services
relationship. The rmckdvol BD02-BD03 command deletes only volume BD03, which is offline.
To delete volume BD02, use the -force parameter.
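A sketch of the two invocations that are described above:
dscli> rmckdvol BD02-BD03
dscli> rmckdvol -force BD02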
The command includes options to prevent the accidental reinitialization of volumes that are in
use. A CKD volume is considered to be in use if it is participating in a Copy Services
relationship or if the IBM Z system path mask indicates that the volume is in a grouped state
or online to any host system. All data is lost when this command is used.
For more information about resource groups, see IBM System Storage DS8000 Copy
Services Scope Management and Resource Groups, REDP-4758.
Easy Tier Heat Map Transfer (HMT) allows the transfer of Easy Tier heat maps from primary
to auxiliary storage sites.
For more information about Easy Tier, see the following publication:
IBM DS8000 Easy Tier (Updated for DS8000 R9.0), REDP-5667
Performance metrics: All performance metrics are an accumulation starting from the
most recent counter-wrap or counter-reset. The performance counters are reset on the
following occurrences:
When the storage unit is turned on.
When a server fails and the failover and fallback sequence is run.
Example 10-72 shows an example of the showckdvol command. This command displays the
detailed properties for an individual volume and includes a -metrics parameter that returns
the performance counter values for a specific volume ID.
Example 10-73 shows an example of the output of the showrank command. This command
generates two types of reports. One report displays the detailed properties of a specified
rank, and the other report displays the performance metrics of a specified rank by using the
-metrics parameter.
Example 10-74 shows an example of the showioport command. This command shows the
properties of a specified I/O port and the performance metrics by using the -metrics
parameter. Monitoring the I/O ports is one of the most important tasks of the system
administrator. The I/O port is where the HBAs, SAN, and DS8A00 exchange information. If
one of these components has problems because of hardware or configuration issues, all of
the other components are affected.
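The -metrics parameter is used in the same way for all three commands; the object IDs shown here are taken from the examples in this chapter:
dscli> showckdvol -metrics BD01
dscli> showrank -metrics R0
dscli> showioport -metrics I0032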
The output of the showioport command includes several metric counters. For example, the
%UtilizeCPU metric for the CPU utilization of the HBA and the CurrentSpeed that the port
uses might be useful information.
Example 10-75 on page 378 shows the many important metrics that are returned by the
command. It provides the performance counters of the port and the FC link error counters.
The FC link error counters are used to determine the health of the overall communication.
Example 10-75 Full output for the showioport -rdp Ixxxx command for a specific I/O port
dscli> showioport -rdp I0032
ID I0032
WWPN 5005076309039462
Attached WWPN 200D00051EF0EC72
Physical Type FC-FS-3
Link Failure Error 0
Loss of sync Error 0
Loss of Signal Error 0
Primitive Sequence Error 0
Invalid Transmission Word Error 0
CRC Error 0
FEC Status Inactive
Uncorrected Blocks -
Corrected Blocks -
Port Speed Capabilities 8GFC 16GFC 32GFC
Port Operating Speed 32GFC
Advertised B-B Credit 20
Attached Port B-B Credit 180
Nominal RTT Link Latency Unknown
Connector Type SFP+
Tx Type Short Wave Laser
Transceiver Temperature 39.9 C [Operating Range -128 - +128 C]
Tx Bias Current 6.5 mAmps [Operating Range 0 - 131 mAmps]
Transceiver Supply Voltage 3364.4 mV [Operating Range 0 - 3600 mVolts]
Rx Power 448.8 uW(-3.5 dBm) [Operating Range 0 - 6550 uW]
Tx Power 681.3 uW(-1.7 dBm) [Operating Range 0 - 6550 uW]
Last SFP Read time 10/12/2024 09:48:04 CEST
======SFP Parameters Alarm Levels=======
Element High Warning Low Warning High Alarm Low Alarm
========================================================================
Transceiver Temperature 0 0 0 0
Tx Bias Current 0 0 0 0
Transceiver Supply Voltage 0 0 0 0
Tx Power 0 0 0 0
Rx Power 0 0 0 0
============================Attached Port=============================
ID N/A
WWPN 200D00051EF0EC72
Attached WWPN 5005076309039462
Physical Type FC-FS-3
Link Failure Error 0
Loss of sync Error 3
Loss of Signal Error 2
Primitive Sequence Error 0
Invalid Transmission Word Error 0
CRC Error 0
FEC Status Inactive
Uncorrected Blocks -
Corrected Blocks -
Port Speed Capabilities 1GFC 2GFC 4GFC 8GFC
Port Operating Speed 32GFC
Advertised B-B Credit 20
Attached Port B-B Credit 180
Nominal RTT Link Latency Unknown
Connector Type SFP+
Tx Type Short Wave Laser
Transceiver Temperature 39.0 C [Operating Range -128 - +128 C]
Tx Bias Current 9.0 mAmps [Operating Range 0 - 131 mAmps]
Transceiver Supply Voltage 3281.1 mV [Operating Range 0 - 3600 mVolts]
Rx Power 690.2 uW(-1.6 dBm) [Operating Range 0 - 6550 uW]
Tx Power 479.8 uW(-3.2 dBm) [Operating Range 0 - 6550 uW]
Last SFP Read time 10/12/2024 08:40:30 CEST
The result of the command in Example 10-76 is a .csv file with detailed information. For more
information, see Figure 10-9 on page 380.
The following security commands are available to manage remote service access settings:
chaccess
Use the chaccess command to change the following HMC settings:
– Enable and disable the HMC command-line shell access
– Enable and disable the HMC WUI access
– Enable and disable Assist On-site (AOS) or IBM Remote Support Center (RSC) access
Important:
This command affects service access only and does not change access to the
system by using the DS CLI or DS Storage Manager.
Only users with administrator authority can access this command.
The following commands enable the TLS protocol for secure syslog traffic. TLS must be
enabled before the syslog servers are configured. If you specify TLS, all syslog server
configurations use the same protocol and certificates.
mksyslogserver
Example 10-78 shows the new DS CLI command mksyslogserver, which configures a
syslog server as TLS-enabled. The certificate authority (CA) certificate, HMC certificate,
and HMC private key locations are required when configuring the first syslog server.
lssyslogserver -l
The lssyslogserver -l command displays the list of all syslog servers and their attributes,
as shown in Example 10-79.
Important: For more information about security issues and overall security management
to implement NIST 800-131a compliance, see IBM DS8870 and NIST SP 800-131a
Compliance, REDP-5069.
The following DS CLI commands specify a custom certificate for communication between the
external encryption key servers (typically IBM Guardium Key Lifecycle Manager) and the
storage system:
managekeygrp -action importcert
These commands are not described in this chapter. For more information, see Copy Services
commands.
Even though the command may no longer be referenced in the help pages of the DS CLI, the
command is still supported, as shown in Example 10-83.
Some new commands and their older equivalents are shown in Table 10-1.
In addition, the Licensed Internal Code (LIC) and internal operating system (OS) that run on
the Hardware Management Consoles (HMCs) and each central processor complex (CPC)
can be updated. As IBM continues to develop the DS8A00, new features are released
through new LIC levels.
When IBM releases a new LIC for the DS8A00, it is released in the form of a bundle. The term
bundle is used because a new code release can include updates for various DS8A00
components. These updates are tested together, and then the various code packages are
bundled together into one unified release. Components within the bundle each include their
own revision levels.
For more information about a DS8A00 cross-reference table of code bundles, see DS8A00
Code Bundle Information.
The cross-reference table shows the levels of code for released bundles. The cross-reference
information is updated as new code bundles are released.
In addition to keeping your LIC current, maintain a current version of the Data Storage
Command-line Interface (DS CLI).
The DS8A00 continues to use the same naming convention for bundles as former
DS8000 generations: PR.MM.FFF.EEEE.
The 10.0 in Example 11-1 stands for Release 10.0 without a Service Pack.
If DS CLI is used, you can obtain the CLI and LMC code level information by using the ver
command, as shown in Example 11-2. The ver command uses the following optional
parameters and displays the versions of the CLI, Storage Manager, and LMC (a short sketch
of typical invocations follows the list):
-s (optional) The -s parameter displays the version of the CLI program. You cannot
use the -s and -l parameters together.
-l (optional) The -l parameter displays the versions of the CLI, Storage Manager,
and LMC. You cannot use the -l and -s parameters together.
-cli (optional) Displays the version of the CLI program. Version numbers are in the
format version.release.modification.fixlevel.
-stgmgr (optional) Displays the version of the Storage Manager.
This ID is not for the GUI (Storage Manager GUI). This ID relates to
HMC code bundle information.
-lmc (optional) Displays the version of the LMC.
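The following sketch shows typical invocations; the output format depends on the installed DS CLI level:
dscli> ver -s
dscli> ver -l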
The Bundle version (Release) can also be retrieved from the DS Storage Manager by clicking
Actions → Properties from the Dashboard window, as shown in Figure 11-1 on page 388.
Important: The LMC is usually provided and installed by IBM Remote Support
Personnel or by an IBM Systems Service Representative (IBM SSR). The customer can
also manage the entire process from the DS8000 Storage Manager GUI. During the
planning phase, customers must read the following document to know whether any
prerequisites must be considered:
Customer "Must-Read" Information for DS8000 Code Updates
The code is either updated remotely or locally at the HMC by an IBM SSR. Upgrading the
code remotely can be done by IBM through Remote Code Load (RCL) or by the client through
the DS8000 Storage Manager GUI. RCL is the default method. If the client wants the code to
be updated by the local IBM SSR onsite, then the Feature Code for remote code exception
must be ordered with the system.
Note: When the customer does not opt for Expert Care Premium, Customer Code Load is
the default on the DS8A00 system.
Other than the actions of acquiring the microcode, the process of distribution and activation is
the same.
The Code Distribution and Activation (CDA) software preinstall is the method that is used to
run the concurrent code load (CCL) distribution. By using the CDA software preinstall, the
IBM SSR performs every non-impacting CCL step for loading code by inserting the physical
media into the primary HMC or by running a network acquisition of the code level that is
needed. The IBM SSR can also download the bundle from IBM Fix Central on the HMC.
After the CDA software preinstallation starts, the following steps occur automatically:
1. The release bundle is downloaded from either the physical media or network to the
Primary HMC (HMC1) hard disk drive (HDD).
2. The release bundle is copied from HMC1 to the Secondary HMC (HMC2).
3. The HMCs receive any code update-specific fixes.
4. Code updates are distributed to the logical partition (LPAR) and staged on an alternative
base operating system (BOS) repository.
5. Scheduled precheck scans are performed until the distributed code is activated by the
user. After 30 days without activation, the code expires and is automatically removed from
the alternative BOS.
Anytime after the software preinstallation completes, when the user logs in to the primary
HMC, the user is guided automatically to correct any serviceable events that might be open,
update the HMC, and activate the previously distributed code on the storage facility. The
overall process is also known as CCL.
Although the microcode installation process might seem complex, it does not require
significant user intervention. IBM Remote Support Personnel normally start the CDA process
and then monitor its progress by using the HMC. The customer’s experience with upgrading
the code of the DS8000 is the same.
Important: For the DS8A00 models, DS CLI should be maintained at a current level.
Matching the version to the storage facility is not required if the DS CLI version is at a
higher level. The higher level can be used to support all other IBM DS8000 models in the
environment. For more information, see the release notes or speak to your IBM SSR.
Important: The default setting for this feature is off, but it can be enabled in the Storage
Manager GUI. For more information, contact your IBM SSR.
To enable this feature, log in to the Storage Manager GUI, select Settings → System →
Advanced, and select Automatic code management, as shown in Figure 11-2 on
page 391.
To address this situation, an HMC Code Image Server function is available. With the HMC
Code Image Server function, a single HMC in the customer data center can acquire code
from IBM Fix Central. One HMC sends those images to other HMCs by using the client
Ethernet network. The advantage of this approach is that there is no need to download the
image from IBM Fix Central multiple times, and the code bundles can be copied locally by
using that download.
The HMC Code Image Server function works with bundle images and other updates, such as
ICS images. HMC Recovery Images are also transferred if they are available on the source HMC.
Figure 11-3 on page 392 shows the relevant menu options in the Updates menu of the
service console Web User Interface (WUI).
At the site where the code bundle was acquired and downloaded, the HMC Code Image
Server function must be enabled. The target site then uses the “Remote File Download”
function to copy the available code bundles to a local repository. All images on the source
HMC are copied to the target HMC.
This process copies only the update image files to the local /extra/BundleImage/ directory
on the target HMC. Then, the normal acquisition step still must be performed, and the local
directory on the target HMC must be selected, as shown on Figure 11-4.
After the acquisition is complete, the normal code load process proceeds with the CDA
software preinstallation.
Figure 11-4 on page 392 also shows that it is possible to acquire a bundle from the storage
system LPAR. Every IBM Fix Central acquired image is copied to the HMC and the LPARs of
the storage system and then imported into the HMC library. Because there is a copy on the
LPAR, the partner HMC can now use the LPAR as a source for the acquisition step. This
action can be done on both HMCs on the same storage system because only these HMCs
have access to the internal network to copy the files from the LPARs. Copying from the LPARs
does not require using the HMC Code Image Server function.
RCL is a trusted process where an IBM Remote Support engineer securely connects to a
DS8000 system, enables the remote acquisition, and performs the distribution and activation
of LIC bundles and ICS images.
The RCL process is concurrent, that is, it can be run without interruptions to business
operations. This process consists of the following steps, as illustrated in Figure 11-5.
The following steps are used for the Remote Code Load process:
1. IBM Remote Support Personnel work with IBM Technical Account Managers (TAMs) to plan
the microcode update and ensure that the client’s environment is considered during the
planning phase.
2. When an RCL is agreed on and scheduled, IBM Remote Support Personnel in the
IBM Support Center initiate a session with the target HMC.
Notes:
Code bundles are pulled to the HMC. They are not pushed.
IP addresses might change in the future. For this reason, it is strongly recommended to
use host names for firewall configuration instead of IP addresses.
Note: Customer Code Load runs the same background processes as the RCL.
There is a 30-day countdown between these two parts of the upgrade in which the client can
decide when to proceed with the second part. During the 30 days, the system stores the
downloaded code bundle, and it can be activated at any point.
Important: The storage system must be in a healthy hardware status to avoid issues
during the code upgrade process. For that reason, at any point in time, the client can select
the option Health Check to confirm whether the system is ready for the upgrade.
2. After the code level is selected, the process downloads the new code bundle to the
DS8000 Hardware Management Console, and then distributes the separate firmware
packages to each internal component in the DS8000 system.
3. The user can monitor the process until completion. After the download completes, click
Close Status, as shown in Figure 11-8 on page 396.
4. After closing the window, the option Health Check and Activate is available. Also, you can
see and confirm which code level was downloaded, and how many days are left before the
code expires, as shown in Figure 11-9.
Note: After completing this step, the Health Check option still is available, but it is not
mandatory to select it before proceeding with the code activation. A health check is still
performed before and after the activation when the user selects Health Check and
Activate.
5. To activate the code, select Health Check and Activate. A new attention message
appears in the Storage Manager GUI and notifies you about the actions that are about to
be performed, as shown in Figure 11-10 on page 397. To start, click Yes.
6. The code activation progress can be tracked in the Storage Manager GUI until the end.
After it completes, it displays a message confirming that the activation is complete (see
Figure 11-11).
When the problem is resolved, the Code Load can be resumed by going back to step 5 on
page 396 and selecting Health Check and Activate.
Note: If the option Health Check and Activate is not available, the Code Load needs
to be restarted by going back to step 1 on page 395 to download the same Code
Bundle that failed the activation.
Best practice: Many clients with multiple DS8000 systems follow the update schedule that
is detailed in this chapter. In this schedule, the HMC is updated a day or two before the rest
of the bundle is applied. If a large gap exists between the present and destination level of
bundles, certain DS CLI commands (especially DS CLI commands that relate to IBM Copy
Services (CS)) might not be able to be run until the SFI is updated to the same level of the
HMC. Your IBM SSR or Technical Account Manager can help you in this situation.
Before you update the CPC OS and LIC, a pre-verification test is run to ensure that no
conditions exist that prohibit a successful code load. The HMC code update installs the latest
version of the pre-verification test. Then, the newest test can be run.
If problems are detected, one or two days are available before the scheduled code installation
window date to correct them. This procedure is shown in the following example:
Thursday:
a. Acquire the new code bundle and send it to the HMCs.
The average code load time varies depending on the hardware that is installed, but 2.5–4
hours is normal. Always speak with your IBM SSR or Technical Account Manager about
proposed code load schedules.
Additionally, check multipathing drivers and storage area network (SAN) switch firmware
levels for their current levels at regular intervals.
This fast update means that single path hosts, hosts that boot from SAN, and hosts that do
not have multipathing software do not need to be shut down during the update. They can
keep operating during the host adapter update because the update is so fast. Also, no
Subsystem Device Driver (SDD) path management is necessary.
An interactive host adapter update can also be enabled if you want to control the host paths
manually. If so, before the host adapters are updated, a notification is sent and a confirmation
is needed.
You can then take the corresponding host paths offline and switch to other available paths.
This function is usually enabled by default. For more information about how to enable this
function, contact your IBM SSR.
CUIR allows the DS8A00 to request that all attached system images set all paths that are
required for a particular service action to the offline state. System images with the correct
level of software support respond to these requests by varying off the affected paths. The
image then notifies the DS8A00 subsystem that the paths are offline or that it cannot take the
paths offline. CUIR reduces manual operator intervention and the possibility of human error
during maintenance actions.
CUIR also reduces the time that is required for the maintenance window. This feature is
useful in environments in which many systems are attached to a DS8A00.
Loading the microcode can also be performed entirely by the client. The microcode is
downloaded from IBM Fix Central, and the client can perform all the steps by using the
DS8000 Storage Manager GUI. For more information, see 11.2.2, “Customer Code Load” on
page 394.
The microcode can be loaded by an IBM SSR onsite as well. To review and arrange the
required services, contact your IBM SSR or your IBM Technical Account Manager.
The CCL process includes an autonomic recovery function, which means that it is tolerant to
temporary non-critical errors that might surface during the activation. During the CCL, if an
error is posted, the LIC automatically analyzes the error and evaluates whether CCL can
continue. If it cannot, the LIC suspends the CCL and calls for service. The DS8A00 system
can continue with the code update with tolerable errors. After the code update completes,
your IBM SSR works to resolve any of the problems that were generated during the code
update at a convenient time so that DS8A00 clients can schedule the code update in a
controlled manner.
During an update, the system runs with reduced redundancy because certain components
are undergoing a firmware update. DS8000 development always strives to reduce the
firmware activation time, which improves system redundancy by shortening the exposure to
non-redundant periods. In addition, firmware distribution time is minimized because fewer
components are involved in the code update.
The CCL duration of the DS8000 family continues to advance with the introduction of new
technology. With the latest DS8A00 firmware, the LIC preinstall can be arranged before your
code update service window performs the code activation, distribution, and HMC update. The
activation times of various components are greatly reduced.
11.7 Summary
IBM might release changes to the DS8A00 LMC. These changes might include code fixes
and feature updates that relate to the DS8A00.
These updates and the information about them are documented in the DS8A00 Code
cross-reference website. You can find this information for a specific bundle under the Bundle
Release Note information section in DS8000 Code Recommendation.
The chapter also describes the outbound (Call Home and support data offload) and inbound
(code download and remote support) communications for the IBM DS8000 family.
SNMP alert traps provide information about problems that the storage unit detects. You or the
service provider must correct the problems that the traps detect.
The DS8A00 does not include an installed SNMP agent that can respond to SNMP polling.
The default Community Name parameter is set to public.
The management server that is configured to receive the SNMP traps receives all of the
generic trap 6 and specific trap 3 messages, which are sent in parallel with the call home to
IBM.
To configure SNMP for the DS8A00, first get the destination address for the SNMP trap and
information about the port on which the trap daemon listens.
Standard port: The standard port for SNMP traps is port 162.
The Management Information Base (MIB) file is in the snmp subdirectory on the Data Storage
Command-line Interface (DS CLI) installation CD or the DS CLI installation CD image. The
image is available at IBM Fix Central.
A serviceable event is posted as a generic trap 6, specific trap 3 message. The specific trap 3
is the only event that is sent for serviceable events and hardware service-related actions
(data offload and remote secure connection). For reporting CS events, generic trap 6 and
specific traps 100, 101, 102, 200, 202, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220,
225, or 226 are sent.
Note: Consistency group traps (200 and 201) must be prioritized above all other traps.
They must be surfaced in less than 2 seconds from the real-time incident.
The SNMP trap is sent in parallel with a call home for service to IBM and email notification (if
configured).
For open events in the event log, a trap is sent every 8 hours until the event is closed.
This chapter describes only the messages and the circumstances when traps are sent by the
DS8A00. For more information about these functions and terms, see IBM DS8000 Copy
Services: Updated for IBM DS8000 Release 9.1, SG24-8367.
If one or several links (but not all links) are interrupted, a trap 100 (Example 12-2) is posted.
Trap 100 indicates that the redundancy is degraded. The reference code (RC) column in the
trap represents the return code for the interruption of the link.
Example 12-2 Trap 100: Remote Mirror and Remote Copy links degraded
PPRC Links Degraded
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-A01 75LRV51 12
SEC: IBM 2107-A05 78NFT51 24
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 15
2: FIBRE 0213 XXXXXX 0140 XXXXXX OK
If all of the links are interrupted, a trap 101 (Example 12-3) is posted. This event indicates that
no communication between the primary and the secondary system is possible.
Example 12-3 Trap 101: Remote Mirror and Remote Copy links are inoperable
PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-A01 75-LRV51 10
SEC: IBM 2107-A05 78-NFT51 20
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 17
2: FIBRE 0213 XXXXXX 0140 XXXXXX 17
Example 12-4 Trap 102: Remote Mirror and Remote Copy links are operational
PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-A01 75-LRV51 21
SEC: IBM 2107-A05 78-NFT51 11
Path: Type PP PLink SP SLink RC
1: FIBRE 0010 XXXXXX 0143 XXXXXX OK
2: FIBRE 0140 XXXXXX 0213 XXXXXX OK
Example 12-5 Trap 200: LSS pair consistency group Remote Mirror and Remote Copy pair error
LSS-Pair Consistency Group PPRC-Pair Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI: IBM 2107-A01 75-LRV51 84 08
SEC: IBM 2107-A05 78-NFT51 54 84
Trap 202, as shown in Example 12-6, is sent if a Remote Copy pair goes into a suspend state.
The trap contains the serial number (SerialNm) of the primary and secondary machine, the
LSS (LS), and the logical device (LD). To avoid SNMP trap flooding, the number of SNMP traps
for the LSS is throttled. The complete suspended pair information is represented in the
summary.
The last row of the trap represents the Suspended state for all pairs in the reporting LSS. The
suspended pair information contains a hexadecimal string of 64 characters. When this hex
string is converted to binary, each bit represents a single device: if the bit is 1, the device is
suspended; otherwise, the device is still in full duplex mode.
Example 12-6 Trap 202: Primary Remote Mirror and Remote Copy devices on LSS suspended due to
an error
Example 12-7 Trap 210: Global Mirror initial consistency group successfully formed
Asynchronous PPRC Initial Consistency Group Successfully Formed
UNIT: Mnf Type-Mod SerialNm
IBM 2107-A01 75-LRV51
Session ID: 4002
Trap 211, as shown in Example 12-8, is sent if the GM setup is in a severe error state in which
no attempts are made to form a consistency group.
Trap 212, as shown in Example 12-9, is sent when a consistency group cannot be created in
a GM relationship for one of the following reasons:
Volumes were taken out of a copy session.
The Remote Copy link bandwidth might not be sufficient.
The Fibre Channel (FC) link between the primary and secondary system is not available.
Example 12-9 Trap 212: Global Mirror consistency group failure - Retry is attempted
Asynchronous PPRC Consistency Group Failure - Retry will be attempted
UNIT: Mnf Type-Mod SerialNm
IBM 2107-A05 78-NFT51
Session ID: 4002
Example 12-10 Trap 213: Global Mirror consistency group successful recovery
Asynchronous PPRC Consistency Group Successful Recovery
UNIT: Mnf Type-Mod SerialNm
IBM 2107-A05 78-NFT51
Session ID: 4002
Trap 214, as shown in Example 12-11, is sent if a GM session is ended by using the DS CLI
rmgmir command or the corresponding GUI function.
Example 12-12 Trap 215: Global Mirror FlashCopy at remote site unsuccessful
Asynchronous PPRC FlashCopy at Remote Site Unsuccessful
A UNIT: Mnf Type-Mod SerialNm
IBM 2107-A01 75-LRV51
Session ID: 4002
Trap 216, as shown in Example 12-13, is sent if a GM master cannot end the GC relationship
at one of its subordinates. This error might occur if the master is ended by using the rmgmir
command but the master cannot end the copy relationship on the subordinate.
You might need to run a rmgmir command against the subordinate to prevent any interference
with other GM sessions.
Trap 218, as shown in Example 12-15, is sent if a GM exceeded the allowed threshold for
failed consistency group formation attempts.
Example 12-15 Trap 218: Global Mirror number of consistency group failures exceeds threshold
Global Mirror number of consistency group failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-A05 78-NFT51
Session ID: 4002
Example 12-16 Trap 219: Global Mirror first successful consistency group after prior failures
Global Mirror first successful consistency group after prior failures
UNIT: Mnf Type-Mod SerialNm
IBM 2107-A05 78-NFT51
Session ID: 4002
Example 12-17 Trap 220: Global Mirror number of FlashCopy commit failures exceeds threshold
Global Mirror number of FlashCopy commit failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-A05 78-NFT51
Session ID: 4002
Trap 225, as shown in Example 12-18, is sent when a GM operation paused on the
consistency group boundary.
Example 12-18 Trap 225: Global Mirror paused on the consistency group boundary
Global Mirror operation has paused on the consistency group boundary
UNIT: Mnf Type-Mod SerialNm
IBM 2107-A05 78-NFT01
Session ID: 4002
Trap 226, in Example 12-19, is sent when a GM operation failed to unsuspend one or more
GC members.
03 The host system sent a command to the primary volume of a Remote Mirror and
Remote Copy volume pair to suspend copy operations. The host system might
specify an immediate suspension or a suspension after the copy completes and the
volume pair reaches a full duplex state.
04 The host system sent a command to suspend the copy operations on the secondary
volume. During the suspension, the primary volume of the volume pair can still
accept updates, but updates are not copied to the secondary volume. The
out-of-sync tracks that are created between the volume pair are recorded in the
change recording feature of the primary volume.
05 Copy operations between the Remote Mirror and Remote Copy volume pair were
suspended by a primary storage unit secondary device status command. This
system resource code can be returned only by the secondary volume.
06 Copy operations between the Remote Mirror and Remote Copy volume pair were
suspended because of internal conditions in the storage unit. This system resource
code can be returned by the control unit of the primary volume or the secondary
volume.
07 Copy operations between the Remote Mirror and Remote Copy volume pair were
suspended when the auxiliary storage unit notified the primary storage unit of a
state change transition to the simplex state. The specified volume pair between the
storage units is no longer in a copy relationship.
09 The Remote Mirror and Remote Copy volume pair was suspended when the
primary or auxiliary storage unit was restarted or when the power was restored. The
paths to the auxiliary storage unit might not be available if the primary storage
unit was turned off. If the auxiliary storage unit was turned off, the paths between
the storage units are restored automatically, if possible. After the paths are restored,
run the mkpprc command to resynchronize the specified volume pairs. Depending
on the state of the volume pairs, you might need to run the rmpprc command to
delete the volume pairs and run an mkpprc command to reestablish the volume
pairs.
0A The Remote Mirror and Remote Copy pair was suspended because the host issued
a command to freeze the Remote Mirror and Remote Copy group. This system
resource code can be returned only if a primary volume was queried.
Example 12-20 Trap 221: Space-efficient repository or overprovisioned volume reached a warning
Space Efficient Repository or Over-provisioned Volume has reached a warning
watermark
Unit: Mnf Type-Mod SerialNm
IBM 2107-A05 78-NFT51
Volume Type: repository
Reason Code: 1
Extent Pool ID: f2
Percentage Full: 100%
Example 12-21 Trap 223: Extent pool capacity reached a warning threshold
Extent Pool Capacity Threshold Reached
UNIT: Mnf Type-Mod SerialNm
IBM 2107-A05 78-NFT51
Extent Pool ID: P1
Limit: 95%
Threshold: 95%
Status: 0
The network management server that is configured on the HMC receives all of the generic
trap 6, specific trap 3 messages, which are sent in parallel with any events that call home to
IBM.
The SNMP alerts can contain a combination of a generic and a specific alert trap. The Traps
list outlines the explanations for each of the possible combinations of generic and specific
alert traps. The format of the SNMP traps, the list, and the errors that are reported by SNMP
are available in the Generic and specific alert traps section of the IBM Storage DS8000 (10th
Generation) Documentation.
SNMP alert traps provide information about problems that the storage unit detects. You or the
IBM SSR must perform corrective action for the related problems.
Note: To configure the operation-related traps, use the DS CLI, as shown in 12.3.3, “SNMP
configuration with the DS CLI” on page 417.
1. Log in to the Service Management section on the HMC, as shown in Figure 12-1.
4. To verify the successful setup of your environment, create a Test Event on your DS8A00
MC by selecting the IP address and Test SNMP Trap, as shown in Figure 12-4.
Check the SNMP server for the successful reception of the test trap.
5. The test generates the Service Reference Code BEB20010, and the SNMP server
receives the SNMP trap notification, as shown in Figure 12-5 on page 416.
Errors occurring after configuring the SNMP traps on the HMC are sent to the SNMP server,
as shown in Example 12-23.
PMH=TS01xxxxxxx
Reporting HMC Hostname=ds8k-r10-xxxx.mainz.de.ibm.com."
dscli> showsp
Name IbmStoragePlex
desc -
acct -
SNMP Enabled
SNMPadd 10.10.10.1,10.10.10.2
emailnotify Disabled
emailaddr -
emailrelay Disabled
emailrelayaddr -
emailrelayhost -
numkssupported 4
The Management Information Base (MIB) file that is delivered with the latest DS8A00 DS CLI CD is
compatible with all previous levels of DS8A00 Licensed Internal Code (LIC) and previous
generations of the DS8000 Product Family. Therefore, ensure that you load the latest
MIB file that is available.
The benefits of remote support are that IBM Support can respond quickly to events that are
reported by you or the system.
The following features can be enabled in the DS8A00 for remote support:
Call Home support (outbound remote support):
– Reporting problems to IBM
– Sending heartbeat information
– Offloading data
Remote service (inbound) remote support
IBM Support accesses the DS8A00 HMC through a network-based connection.
During the installation and planning phase, complete the remote support worksheets and
supply them to the IBM SSR at the time of the installation.
Although the MC is based on a Linux operating system (OS), IBM disabled or removed all
unnecessary services, processes, and IDs, including standard internet services, such as
Telnet (the Telnet server is disabled on the HMC), File Transfer Protocol (FTP), r commands
(Berkeley r-commands and Remote Procedure Call (RPC) commands), and RPC programs.
Call Home
Call Home is the capability of the MC to report serviceable events to IBM. The MC also
transmits machine-reported product data (MRPD) information to IBM through Call Home. The
MRPD information includes installed hardware, configurations, and features. Call Home is
configured by the IBM SSR during the installation of the DS8A00 by using the customer
worksheets. A test call home is placed after the installation to register the machine and verify
the Call Home function.
The heartbeat can be scheduled every 1–7 days based on the client’s preference. When a
scheduled heartbeat fails to transmit, a service call is created for an IBM SSR with an action
plan to verify that the Call Home function is working. The DS8A00 uses an internet connection
through Transport Layer Security (TLS), which is also known as Secure Sockets Layer (SSL),
for Call Home functions.
The entire bundle is collected together in a PEPackage. A DS8A00 PEPackage can be large,
often exceeding 100 MB. In certain cases, more than one PEPackage might be needed to
diagnose a problem correctly. In certain cases, the IBM Support Center might need an extra
memory dump that is internally created by the DS8A00 or manually created through the
intervention of an operator.
OnDemand Data Dump: The OnDemand Data Dump (ODD) provides a mechanism that
allows the collection of debug data for error scenarios. With ODD, IBM can collect data with
no impact to the host I/O after an initial error occurs. ODD can be generated by using the
DS CLI command diagsi -action odd and then offloaded.
The MC is a focal point for gathering and storing all of the data packages. Therefore, the MC
must be accessible if a service action requires the information. The data packages must be
offloaded from the MC and sent to IBM for analysis. The offload is performed over the
internet through a TLS connection.
When the internet is selected as the outbound connectivity method, the MC uses a TLS
connection over the internet to connect to IBM. For more information about IBM TLS remote
support, planning, and worksheets, see IBM Storage DS8000 10.0 Introduction and Planning
Guide, G10-I53-00.
Note: The offloadfile command cannot be run from the embedded DS CLI window.
Having inbound access enabled can greatly reduce the problem resolution time because
there is no need to wait for the IBM SSR to arrive onsite to gather problem data and upload it
to IBM. With the DS8A00, the following inbound connectivity options are available to the client:
External Assist On-site (AOS) Gateway
Embedded remote access feature
The remote support access connection cannot be used to send support data to IBM.
The support data offload always uses the Call Home feature.
IBM Support encourages you to use AOS as your remote access method.
The remote access connection is secured with TLS 1.3, but TLS 1.2 is also supported. In
addition, a mechanism is implemented so that the HMC communicates only as an outbound
connection, but you must specifically allow IBM to connect to the HMC. You can compare this
function to a modem that picks up incoming calls. The DS8A00 documentation refers to this
situation as an unattended service.
For more information, see 12.8.4, “Support access management through the DS CLI and DS
GUI” on page 422.
When you prefer to have a centralized access point for IBM Support, then an AOS Gateway
might be the correct solution. With the AOS Gateway, you install the AOS software externally
to a DS8A00 HMC. You must install the AOS software on a system that you provide and
maintain. IBM Support provides only the AOS software package. Through port-forwarding on
an AOS Gateway, you can configure remote access to one or more DS8A00 systems or other
IBM storage systems.
A simple AOS connection to the DS8000 is shown in Figure 12-6. For more information about
AOS, prerequisites, and installation, see IBM Assist On-site for Storage Overview,
REDP-4889.
In addition, your firewall must allow outbound traffic from the HMC to the AOS infrastructure.
The inbound remote support worksheet provides information about the required firewall
changes.
For more information about AOS, see IBM Assist On-site for Storage Overview, REDP-4889.
Access to the DS8000 by using RSC is controlled by using either the DS GUI or DS CLI. For
more information about RSC, contact your IBM SSR.
Figure 12-7 Controlling service access through the DS Storage Manager GUI
lsaccess: This command displays the access settings of the primary and backup MCs:
lsaccess [-hmc 1|2|all]
See the output in Example 12-27.
Important: The hmc1 value specifies the primary HMC, and the hmc2 value specifies the
secondary HMC, regardless of how -hmc 1 and -hmc 2 were specified during DS CLI start.
A DS CLI connection might succeed even if a user inadvertently specifies a primary HMC
by using -hmc 2 and the secondary backup HMC by using -hmc 1 at DS CLI start.
This on-demand audit log mechanism is sufficient for client security requirements for HMC
remote access notification.
In addition to the audit log, email notifications and SNMP traps can also be configured at the
MC to send notifications in a remote support connection.
1. The HMC IBM Electronic Customer Care client initiates a call home, and the request is
sent through the customer proxy to the IBM Electronic Customer Care server.
2. The IBM Electronic Customer Care server always presents the default certificate “X”.
3. The customer proxy presents the customer-provided certificate “Y” (instead of the default
certificate “X”) to the client.
4. The client has a matching certificate (“Y”) in the truststore, so the proxy allows
communication.
Note: The client still has the original default certificate “X” in the truststore, which is
used if Call Home is not configured with customer proxy.
5. The communication continues with the session certificates through the customer proxy.
A similar communication path takes place between the HMC IBM AOS client and the IBM
AOS server. The only difference is that the customer-provided certificate “Y” is stored in a
property file rather than in a truststore.
Figure 12-11 shows an overview of the communication path for AOS with customer proxy
configured and using a certificate that is provided by the customer.
To install a customer-provided certificate for the AOS connection, from the DS GUI access
Settings → Support → Assist On-Site → select Configure HTTP proxy → click Install
TLS certificate. If initially the setting is not visible, click Show full configuration at the
bottom of the page. Figure 12-13 shows these settings in the DS GUI. For detailed
configuration steps, see Assist On-Site.
The DS CLI offloadauditlog command enables clients to offload the audit logs to a directory of their choice on the DS CLI workstation, as shown in Example 12-28.
The audit log can also be exported from the Events window in the DS GUI by clicking the Download icon and selecting Export Audit Log, as shown in Figure 12-14.
The downloaded audit log is a text file that provides information about when a remote access session started and ended, and about the remote authority level that was applied. A portion of the downloaded file is shown in Example 12-29.
Example 12-29 Audit log entries that relate to a remote support event
MST,,1,IBM.2107-78KNG40,N,8036,Authority_to_root,Challenge Key = 'Fy31@C37'; Authority_upgrade_to_root,,,
U,2024/09/18 12:09:49:000 MST,customer,1,IBM.2107-78KNG40,N,8020,WUI_session_started,,,,
The challenge key that is presented to the IBM SSR is part of a two-factor authentication method that is enforced on the MC. It is a token that is shown to the IBM SSR who connects to the DS8A00. The IBM SSR must enter the challenge key into an IBM internal system to generate a response key, which is then provided to the HMC. The response key acts as a one-time authorization to the features of the HMC. The challenge and response keys change each time a remote connection is made.
The challenge-response process must be repeated if the SSR needs higher privileges to
access the MC command-line environment. No direct user login and no root login are
available on a DS8A00.
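The following generic challenge-response sketch (Python) is for illustration only; it is not the algorithm that the HMC and the IBM internal system use. It shows the underlying idea: the response is derived from the challenge with a secret that the challenge key alone does not reveal, so observing the challenge grants no access.

import hashlib
import hmac
import secrets

SHARED_SECRET = b"placeholder-secret"  # hypothetical secret for this sketch only

def issue_challenge() -> str:
    # MC side: present a short random token to the IBM SSR.
    return secrets.token_hex(4)

def compute_response(challenge: str) -> str:
    # IBM internal system: derive a one-time response from the challenge.
    digest = hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]

def verify(challenge: str, response: str) -> bool:
    # MC side: grant the one-time authorization only if the response matches.
    return hmac.compare_digest(response, compute_response(challenge))

challenge = issue_challenge()
response = compute_response(challenge)
print("Challenge:", challenge, "- response accepted:", verify(challenge, response))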
Entries are added to the audit file only after an operation completes, when all information about the request and its completion status is known. A single entry logs both the request and the response information. It is possible, though unlikely, that an operation does not complete because of a timeout; in this case, no entry is made in the log.
Audit logs are automatically trimmed by the subsystem on a first-in, first-out (FIFO) basis so that they do not use more than 50 MB of disk storage.
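Because the offloaded audit log is plain comma-separated text, entries that relate to remote support can be filtered with a short script. The following minimal sketch (Python) assumes the layout that is shown in Example 12-29; the file name is a placeholder.

# Message names taken from Example 12-29: Authority_to_root indicates that a
# challenge key was issued, and WUI_session_started indicates a WUI login.
AUDIT_LOG = "ds8000_audit_log.txt"  # placeholder for the offloaded file
REMOTE_SUPPORT_MARKERS = ("Authority_to_root", "WUI_session_started")

with open(AUDIT_LOG, encoding="utf-8", errors="replace") as log:
    for line in log:
        if any(marker in line for marker in REMOTE_SUPPORT_MARKERS):
            # Each entry is a comma-separated record; print it unchanged so the
            # timestamp, user, and storage image ID remain visible.
            print(line.rstrip())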
By combining features such as Call Home, data collectors, a streamlined ticketing process, and proactive support, problems are resolved faster, and the stability, capacity, and performance of the DS8A00 can be managed more efficiently.
If a problem occurs, the unified support experience helps you receive assistance promptly by enabling the following tasks:
Open IBM Support tickets for a resource and automatically add a log package to the ticket.
Update tickets with a new log package.
View the ticket history of open and closed tickets for a device.
A lightweight data collector is installed in your data center to stream performance, capacity,
asset, and configuration metadata to your IBM Cloud instance.
The metadata flows in one direction, that is, from your data center to IBM Cloud over HTTPS.
In IBM Cloud, your metadata is protected by physical, organizational, access, and security
controls.
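The following minimal sketch (Python) illustrates only the direction of this flow: an outbound HTTPS request from the data center that carries a small piece of metadata. It is not the IBM Storage Insights data collector; the URL and the payload are placeholders.

import json
import urllib.request

ENDPOINT = "https://insights.example.com/metadata"  # placeholder URL
payload = json.dumps({
    "system": "IBM.2107-78KNG40",   # storage image ID, as in Example 12-29
    "metric": "capacity_used_gib",  # illustrative metric name
    "value": 1024,
}).encode("utf-8")

request = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The collector only opens outbound connections; nothing in the data center
# listens for inbound requests from IBM Cloud.
with urllib.request.urlopen(request) as response:
    print("Upload status:", response.status)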
Figure 12-16 on page 431 shows a system overview where you can access ticket details and the actions that you can take to manage tickets.
An example of an IBM Storage Insights Pro resources view is the detailed volume information
that is shown in Figure 12-17 on page 431.
By using IBM Storage Insights, IBM Remote Support personnel can collect log packages from
a device. By default, this feature is not enabled. After the device is added, you must enable
the option for IBM Support to collect logs from the device.
To enable this option for IBM Support, select Configuration → Settings → IBM Support
Log Permission, and then select Edit (Figure 12-18 on page 432).
Figure 12-21 shows the process of collecting the needed ticket information, which includes
the details of the DS8000 storage system.
4. Click Next. In the window that is shown in Figure 12-23, select the severity of the ticket.
6. Review and verify the details of the IBM ticket that you are about to open, as shown in
Figure 12-25. Provide the name of the contact person, along with a valid contact phone
number and email address.
3. In the window that is shown in Figure 12-27, you can either select one of the open tickets
for this machine or type in the ticket number manually.
6. In the window that is shown in Figure 12-29, you see a summary of the update that is about to be added to the ticket. To complete the process, click Update Ticket.
To set alert policies, select Configuration → Alert Policies, as shown in Figure 12-30.
Many predefined default policies are available, but you can also create a policy by selecting Create Policy. This action opens a window where you define the policy name and select the policy type and the type of storage system, as shown in Figure 12-31. Click Create to create the policy. Then, define the alert definitions and save the changes, as shown in Figure 12-32 on page 439.
To work with the application, you must first register for IBM Call Home Connect Cloud. After you can access IBM Call Home Connect Cloud at the following website, register each of your IBM assets by providing its machine type, model, serial number, customer number, and country code:
https://www.ibm.com/support/call-home-connect/cloud/
After you complete this task for each device, the devices are visible in the mobile application.
Figure 12-33 IBM Call Home Connect Anywhere: iOS mobile device showing DS8000s and a DS8A50 ticket
The publications that are listed in this section are considered suitable for a more detailed
description of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only.
Best Practices for DS8000 and z/OS HyperSwap with Copy Services Manager,
SG24-8431
DS8000 Cascading FlashCopy Design and Scenarios, REDP-5463
DS8000 Global Mirror Best Practices, REDP-5246
Exploring the DS8870 RESTful API Implementation, REDP-5187
Getting Started with IBM zHyperLink for z/OS, REDP-5493
Getting Started with IBM Z Cyber Vault, SG24-8511
Getting started with z/OS Container Extensions and Docker, SG24-8457
IBM Assist On-site for Storage Overview, REDP-4889
IBM DS8000 Copy Services, SG24-8367
IBM DS8000 Easy Tier, REDP-4667
IBM DS8000 Encryption for Data at Rest, Transparent Cloud Tiering, and Endpoint
Security, REDP-4500
IBM DS8000 and IBM Z Synergy, REDP-5186
IBM DS8000 Safeguarded Copy, REDP-5506
IBM DS8000 and Transparent Cloud Tiering, SG24-8381
IBM DS8870 and NIST SP 800-131a Compliance, REDP-5069
IBM DS8000 Thin Provisioning, REDP-5343
IBM DS8900F Performance Best Practices and Monitoring, SG24-8501
IBM DS8900F Product Guide Release 9.3, REDP-5554
IBM DS8910F Model 993 Rack-Mounted Storage System, REDP-5566
IBM Fibre Channel Endpoint Security for IBM DS8900F and IBM Z, SG24-8455
IBM FlashCore Module (FCM) Product Guide, REDP-5725
IBM Power Systems S922, S914, and S924 Technical Overview and Introduction
Featuring PCIe Gen 4 Technology, REDP-5595
IBM Storage DS8900F Architecture and Implementation, SG24-8456
IBM System Storage DS8000: Remote Pair FlashCopy (Preserve Mirror), REDP-4504
IBM System Storage DS8000: z/OS Distributed Data Backup, REDP-4701
IBM z16 Technical Introduction, SG24-8950
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
IBM Storage DS8000 10.0 Introduction and Planning Guide, G10-I53-00
https://www.ibm.com/docs/en/SSY1BJD_10.0/G10-I53-00.pdf
IBM DS8000 Series Command-Line Interface User’s Guide, SC27-9562
IBM Storage DS8000 10.0 Host Systems Attachment Guide, SC27-9563
https://www.ibm.com/docs/en/SSY1BJD_10.0/sc27956302.pdf
RESTful API Guide, SC27-9823
Online resources
These websites are also relevant as further information sources:
DS8000 IBM Documentation:
https://www.ibm.com/docs/en/ds8000-10th-generation/10.0
DS8000 Series Copy Services Fibre Channel Extension Support Matrix:
https://www.ibm.com/support/pages/ds8000-series-copy-services-fibre-channel-extension-support-matrix
IBM Fix Central:
https://www.ibm.com/support/fixcentral/swg/selectFixes?parent=Enterprise%20Storage%20Servers&product=ibm/Storage_Disk/DS8A00&release=All&platform=All&function=all
IBM System Storage Interoperation Center (SSIC) for DS8000:
https://www.ibm.com/systems/support/storage/ssic/
IBM DS8000 Code Recommendations:
https://www.ibm.com/support/pages/ds8000-code-recommendation
DS8000 Code Bundle Information (includes Release Notes):
https://www.ibm.com/support/pages/node/7174030
DS8000 Host Adapter Configuration Guidelines:
https://www.ibm.com/support/pages/ds8000-host-adapter-configuration-guidelines
SG24-8559-00
ISBN 0738461903
Printed in U.S.A.
ibm.com/redbooks