
IBM DS8880

Version 8 Release 2.3

Introduction and Planning Guide

IBM

GC27-8525-11
Note
Before using this information and the product it supports, read the information in “Safety and environmental notices” on
page 201 and “Notices” on page 199.

This edition applies to version 8, release 2, modification 3 of IBM DS8000 and to all subsequent releases and
modifications until otherwise indicated in new editions.
This edition replaces GC27-8525-09.
© Copyright IBM Corporation 2004, 2017.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

About this book . . . v
Who should use this book . . . v
Conventions and terminology . . . v
Publications and related information . . . v
   IBM Publications Center . . . ix
Sending comments . . . ix

Summary of changes . . . xi

Chapter 1. Overview . . . 1
Machine types overview . . . 3
Hardware . . . 4
   System types . . . 6
   Storage enclosures . . . 20
   Management console . . . 21
   Ethernet switches . . . 21
   Processor nodes . . . 21
   I/O enclosures . . . 21
   Power . . . 22
Functional overview . . . 22
Logical configuration . . . 26
   Logical configuration with DS8000 Storage Management GUI . . . 26
   Logical configuration with DS CLI . . . 28
   RAID implementation . . . 30
   Logical subsystems . . . 32
   Allocation methods . . . 32
Management interfaces . . . 33
   DS8000 Storage Management GUI . . . 34
   DS command-line interface . . . 34
   DS Open Application Programming Interface . . . 35
   RESTful API . . . 35
   IBM Storage Mobile Dashboard . . . 36
   IBM Spectrum Control . . . 36
   IBM Copy Services Manager . . . 36
DS8000 Storage Management GUI supported web browsers . . . 37

Chapter 2. Hardware features . . . 39
Storage complexes . . . 43
Management console . . . 43
Hardware specifics . . . 44
   Storage system structure . . . 44
   Disk drives, flash drives, and flash cards . . . 44
   Drive maintenance policy . . . 45
   Host attachment overview . . . 45
Subsystem device driver for open-systems . . . 47
I/O load balancing . . . 48
Storage consolidation . . . 48
Count key data . . . 49
Fixed block . . . 49
T10 DIF support . . . 49
Logical volumes . . . 50
Allocation, deletion, and modification of volumes . . . 50
LUN calculation . . . 51
Extended address volumes for CKD . . . 52
Quick initialization . . . 53

Chapter 3. Data management features . . . 55
| Transparent cloud tiering . . . 55
Dynamic volume expansion . . . 55
Count key data and fixed block volume deletion prevention . . . 56
Thin provisioning . . . 56
   Extent Space Efficient (ESE) capacity controls for thin provisioning . . . 56
IBM Easy Tier . . . 57
   Easy Tier: automatic mode . . . 59
   Easy Tier: manual mode . . . 67
   Volume data monitoring . . . 69
   Easy Tier Heat Map Transfer Utility . . . 70
   Migration process management . . . 72
   Storage Tier Advisor tool . . . 73
   Easy Tier reporting improvements . . . 74
   Easy Tier considerations and limitations . . . 74
VMware vStorage API for Array Integration support . . . 75
Performance for IBM z Systems . . . 76
Copy Services . . . 78
   Disaster recovery through Copy Services . . . 87
   Resource groups for Copy Services scope limiting . . . 88
Comparison of Copy Services features . . . 89
I/O Priority Manager . . . 90
Securing data . . . 91

Chapter 4. Planning the physical configuration . . . 93
Configuration controls . . . 93
Determining physical configuration features . . . 93
Management console features . . . 94
   Primary and secondary management consoles . . . 94
   Configuration rules for management consoles . . . 95
Storage features . . . 95
   Storage enclosures and drives . . . 95
   Storage-enclosure fillers . . . 98
   Device adapters and flash RAID adapters . . . 98
   Drive cables . . . 99
   Configuration rules for storage features . . . 100
   Physical and effective capacity . . . 102
I/O adapter features . . . 108
   I/O enclosures . . . 108
   Fibre Channel (SCSI-FCP and FICON) host adapters and cables . . . 108
   Configuration rules for I/O adapter features . . . 110
Processor complex features . . . 115
   Feature codes for processor licenses . . . 116
Processor memory features . . . 116
   Feature codes for system memory . . . 116
Power features . . . 117
   Power cords . . . 117
   Input voltage . . . 118
   Direct-current uninterruptible-power supply . . . 118
   Configuration rules for power features . . . 119
Other configuration features . . . 119
   Extended power line disturbance . . . 119
   BSMI certificate (Taiwan) . . . 120
   Shipping weight reduction . . . 120

Chapter 5. Planning use of licensed functions . . . 121
Licensed function indicators . . . 121
License scope . . . 121
Ordering licensed functions . . . 122
Rules for ordering licensed functions . . . 123
Base Function license . . . 124
   Database Protection . . . 125
   Encryption Authorization . . . 125
   IBM Easy Tier . . . 126
   I/O Priority Manager . . . 126
   Operating environment license . . . 126
   Thin provisioning . . . 126
z-synergy Services license . . . 127
   High Performance FICON for z Systems . . . 127
   IBM HyperPAV . . . 128
   Parallel Access Volumes . . . 128
   z/OS Distributed Data Backup . . . 128
Copy Services license . . . 128
   Remote mirror and copy functions . . . 129
   FlashCopy function (point-in-time copy) . . . 129
   z/OS Global Mirror . . . 130
   z/OS Metro/Global Mirror Incremental Resync . . . 130
Copy Services Manager on the Hardware Management Console license . . . 130

Chapter 6. Meeting delivery and installation requirements . . . 131
Delivery requirements . . . 131
   Acclimation . . . 131
   Shipment weights and dimensions . . . 131
   Receiving delivery . . . 133
Installation site requirements . . . 134
   Planning for floor and space requirements . . . 134
   Planning for power requirements . . . 156
   Planning for environmental requirements . . . 164
   Planning for safety . . . 170
   Planning for network and communications requirements . . . 170

Chapter 7. Planning your storage complex setup . . . 173
Company information . . . 173
Management console network settings . . . 173
Remote support settings . . . 174
Notification settings . . . 175
Power control settings . . . 175
Control switch settings . . . 175
Customization worksheets . . . 177

Chapter 8. Planning data migration . . . 179
Selecting a data migration method . . . 180

Chapter 9. Planning for security . . . 183
Planning for data encryption . . . 183
   Planning for encryption-key servers . . . 183
   Planning for key lifecycle managers . . . 184
   Planning for full-disk encryption activation . . . 184
Planning for user accounts and passwords . . . 185
   Managing secure user accounts . . . 185
   Managing secure service accounts . . . 185
Planning for NIST SP 800-131A security conformance . . . 186

Chapter 10. License activation and management . . . 189
Planning your licensed functions . . . 189
Activation of licensed functions . . . 190
   Activating licensed functions . . . 190
Scenarios for managing licensing . . . 191
   Adding storage to your machine . . . 191
   Managing a licensed feature . . . 192

Appendix A. Accessibility features for IBM DS8000 . . . 193

Appendix B. Warranty information . . . 195

Appendix C. IBM equipment and documents . . . 197
Installation components . . . 197
Customer components . . . 198
Service components . . . 198

Notices . . . 199
Trademarks . . . 200
Homologation statement . . . 201
Safety and environmental notices . . . 201
Safety notices and labels . . . 201

Index . . . 211

About this book
This book describes how to plan for a new installation of DS8880. It includes
information about planning requirements and considerations, customization
guidance, and configuration worksheets.

Who should use this book


This book is intended for personnel who are involved in planning, including IT
facilities managers and individuals responsible for power, cooling, wiring,
network, and general site environmental planning and setup.

Conventions and terminology


Different typefaces are used in this guide to show emphasis, and various notices
are used to highlight key information.

The following typefaces are used to show emphasis:

Bold
   Text in bold represents menu items.
bold monospace
   Text in bold monospace represents command names.
Italics
   Text in italics is used to emphasize a word. In command syntax, it is
   used for variables for which you supply actual values, such as a
   default directory or the name of a system.
Monospace
   Text in monospace identifies the data or commands that you type,
   samples of command output, examples of program code or messages from
   the system, or names of command flags, parameters, arguments, and
   name-value pairs.

These notices are used to highlight key information:

Note
   These notices provide important tips, guidance, or advice.
Important
   These notices provide information or advice that might help you avoid
   inconvenient or difficult situations.
Attention
   These notices indicate possible damage to programs, devices, or data.
   An attention notice is placed before the instruction or situation in
   which damage can occur.

Publications and related information


Product guides, other IBM® publications, and websites contain information that
relates to the IBM DS8000® series.

To view a PDF file, you need Adobe Reader. You can download it at no charge
from the Adobe website (get.adobe.com/reader/).



Online documentation

The IBM DS8000 series online product documentation (www.ibm.com/support/
knowledgecenter/ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/
f2c_securitybp.html) contains all of the information that is required to
install, configure, and manage DS8000 storage systems. The online
documentation is updated between product releases to provide the most current
documentation.

Publications

You can order or download individual publications (including previous
versions) that have an order number from the IBM Publications Center website
(www.ibm.com/shop/publications/order/). Publications without an order number
are available on the documentation CD or can be downloaded online.

Table 1. DS8000 series product publications

DS8880 Introduction and Planning Guide
   This publication provides an overview of the product and technical
   concepts for DS8880. It also describes the ordering features and how to
   plan for an installation and initial configuration of the storage system.
   Order numbers: V8.2.3 GC27-8525-11; V8.2.1 GC27-8525-09; V8.2.0
   GC27-8525-07; V8.1.1 GC27-8525-06; V8.1.0 GC27-8525-05; V8.0.1
   GC27-8525-04 and GC27-8525-03; V8.0.0 GC27-8525-02
DS8870 Introduction and Planning Guide
   This publication provides an overview of the product and technical
   concepts for DS8870. It also describes the ordering features and how to
   plan for an installation and initial configuration of the storage system.
   Order numbers: V7.5.0 GC27-4209-11; V7.4.0 GC27-4209-10; V7.3.0
   GC27-4209-09; V7.2.0 GC27-4209-08; V7.1.0 GC27-4209-05; V7.0.0
   GC27-4209-02
DS8800 and DS8700 Introduction and Planning Guide
   This publication provides an overview of the product and technical
   concepts for DS8800 and DS8700. It also describes ordering features and
   how to plan for an installation and initial configuration of the storage
   system.
   Order numbers: V6.3.0 GC27-2297-09; V6.2.0 GC27-2297-07
Host Systems Attachment Guide
   This publication provides information about attaching hosts to the
   storage system. You can use various host attachments to consolidate
   storage capacity and workloads for open systems and IBM z Systems hosts.
   Order numbers: V8.0.0 SC27-8527-00; V7.5.0 GC27-4210-04; V7.4.0
   GC27-4210-03; V7.2.0 GC27-4210-02; V7.1.0 GC27-4210-01; V7.0.0
   GC27-4210-00; V6.3.0 GC27-2298-02
IBM Storage System Multipath Subsystem Device Driver User's Guide
   This publication provides information regarding the installation and use
   of the Subsystem Device Driver (SDD), Subsystem Device Driver Path
   Control Module (SDDPCM), and Subsystem Device Driver Device Specific
   Module (SDDDSM) on open systems hosts.
   Order number: Download
Command-Line Interface User's Guide
   This publication describes how to use the DS8000 command-line interface
   (DS CLI) to manage DS8000 configuration and Copy Services relationships,
   and write customized scripts for a host system. It also includes a
   complete list of CLI commands with descriptions and example usage.
   Order numbers: V8.2.3 SC27-8526-05; V8.2.2 SC27-8526-04; V8.2.0
   SC27-8526-03; V8.1.1 SC27-8526-02; V8.1.0 SC27-8526-01; V8.0.0
   SC27-8526-00; V7.5.0 GC27-4212-06; V7.4.0 GC27-4212-04; V7.3.0
   GC27-4212-03; V7.2.0 GC27-4212-02; V7.1.0 GC27-4212-01; V7.0.0
   GC27-4212-00; V6.3.0 GC53-1127-07
Application Programming Interface Reference
   This publication provides reference information for the DS8000 Open
   application programming interface (DS Open API) and instructions for
   installing the Common Information Model Agent, which implements the API.
   Order numbers: V7.3.0 GC27-4211-03; V7.2.0 GC27-4211-02; V7.1.0
   GC27-4211-01; V7.0.0 GC35-0516-10; V6.3.0 GC35-0516-10
RESTful API Guide
   This publication provides an overview of the Representational State
   Transfer (RESTful) API, which provides a platform-independent means by
   which to initiate create, read, update, and delete operations in the
   DS8000 and supporting storage devices.
   Order numbers: V1.1 SC27-8502-01; V1.0 SC27-8502-00

Table 2. DS8000 series warranty, notices, and licensing publications

Warranty Information for DS8000 series
   http://www.ibm.com/support/docview.wss?uid=ssg1S7005239
IBM Safety Notices
   Search for G229-9054 on the IBM Publications Center website
IBM Systems Environmental Notices
   http://ibm.co/1fBgWFI
International Agreement for Acquisition of Software Maintenance (Not all
software will offer Software Maintenance under this agreement.)
   http://ibm.co/1fBmKPz
License Agreement for Machine Code
   http://ibm.co/1mNiW1U
Other Internal Licensed Code
   http://ibm.co/1kvABXE
International Program License Agreement and International License Agreement
for Non-Warranted Programs
   http://www-03.ibm.com/software/sla/sladb.nsf/pdf/ipla/$file/ipla_en.pdf
   http://www-304.ibm.com/jct03001c/software/sla/sladb.nsf/pdf/ilan/$file/ilan_en.pdf



See the Agreements and License Information CD that was included with the
DS8000 series for the following documents:
v License Information
v Notices and Information
v Supplemental Notices and Information

Related websites

View the websites in the following table to get more information about the
DS8000 series.

Table 3. DS8000 series related websites

IBM website (ibm.com)
   Find more information about IBM products and services.
IBM Support Portal website (www.ibm.com/storage/support)
   Find support-related information such as downloads, documentation,
   troubleshooting, and service requests and PMRs.
IBM Directory of Worldwide Contacts website (www.ibm.com/planetwide)
   Find contact information for general inquiries, technical support, and
   hardware and software support by country.
IBM DS8000 series website (www.ibm.com/servers/storage/disk/ds8000)
   Find product overviews, details, resources, and reviews for the DS8000
   series.
IBM Redbooks website (www.redbooks.ibm.com/)
   Find technical information developed and published by the IBM
   International Technical Support Organization (ITSO).
IBM System Storage Interoperation Center (SSIC) website
(www.ibm.com/systems/support/storage/config/ssic)
   Find information about host system models, operating systems, adapters,
   and switches that are supported by the DS8000 series.
IBM Storage SAN (www.ibm.com/systems/storage/san)
   Find information about IBM SAN products and solutions, including SAN
   Fibre Channel switches.
IBM Data storage feature activation (DSFA) website (www.ibm.com/storage/dsfa)
   Download licensed machine code (LMC) feature keys that you ordered for
   your DS8000 storage systems.
IBM Fix Central (www-933.ibm.com/support/fixcentral)
   Download utilities such as the IBM Easy Tier Heat Map Transfer utility
   and Storage Tier Advisor tool.
IBM Java SE (JRE) (www.ibm.com/developerworks/java/jdk)
   Download IBM versions of the Java SE Runtime Environment (JRE), which is
   often required for IBM products.
IBM Security Key Lifecycle Manager online product documentation
(www.ibm.com/support/knowledgecenter/SSWPVP/)
   This online documentation provides information about IBM Security Key
   Lifecycle Manager, which you can use to manage encryption keys and
   certificates.
IBM Spectrum Control online product documentation in IBM Knowledge Center
(www.ibm.com/support/knowledgecenter)
   This online documentation provides information about IBM Spectrum
   Control, which you can use to centralize, automate, and simplify the
   management of complex and heterogeneous storage environments, including
   DS8000 storage systems and other components of your data storage
   infrastructure.
DS8700 Code Bundle Information website
(www.ibm.com/support/docview.wss?uid=ssg1S1003593)
   Find information about code bundles for DS8700. See section 3 for web
   links to SDD information. The version of the currently active installed
   code bundle displays with the DS CLI ver command when you specify the -l
   parameter.
DS8800 Code Bundle Information website
(www.ibm.com/support/docview.wss?uid=ssg1S1003740)
   Find information about code bundles for DS8800. See section 3 for web
   links to SDD information. The version of the currently active installed
   code bundle displays with the DS CLI ver command when you specify the -l
   parameter.
DS8870 Code Bundle Information website
(www.ibm.com/support/docview.wss?uid=ssg1S1004204)
   Find information about code bundles for DS8870. See section 3 for web
   links to SDD information. The version of the currently active installed
   code bundle displays with the DS CLI ver command when you specify the -l
   parameter.
DS8880 Code Bundle Information website
(www.ibm.com/support/docview.wss?uid=ssg1S1005392)
   Find information about code bundles for DS8880. The version of the
   currently active installed code bundle displays with the DS CLI ver
   command when you specify the -l parameter.
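For reference, a minimal DS CLI session for this check might look like the
following sketch; the HMC address and credentials are placeholders, and only
the ver command and its -l parameter are taken from the table above:

   dscli -hmc1 <HMC IP address> -user <user name> -passwd <password>
   dscli> ver -l

With the -l parameter, the ver command reports the versions of the installed
components, including the currently active code bundle, rather than the DS CLI
version alone.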

IBM Publications Center

The IBM Publications Center is a worldwide central repository for IBM product
publications and marketing material.

Procedure

The IBM Publications Center website (ibm.com/shop/publications/order) offers
customized search functions to help you find the publications that you need.
You can view or download publications at no charge.

Sending comments
Your feedback is important in helping to provide the most accurate and highest
quality information.

Procedure

To submit any comments about this publication or any other IBM storage product
documentation:

Send your comments by email to [email protected]. Be sure to include the
following information:
v Exact publication title and version
v Publication form number (for example, GA32-1234-00)
v Page, table, or illustration numbers that you are commenting on
v A detailed description of any information that should be changed

Summary of changes
DS8000 Version 8, Release 2, Modification 3 introduces the following new features.

Version 8.2.3

This table provides the current technical changes and enhancements to the IBM
DS8000 as of June 9, 2017. Changed and new information is indicated by a
vertical bar (|) to the left of the change.

Transparent cloud tiering
   See “Transparent cloud tiering” on page 55 for more information.

Chapter 1. Overview
IBM DS8880 is a high-performance, high-capacity storage system that supports
continuous operation, data security, and data resiliency. The storage system
consists of a base frame and, depending on the configuration, up to four
expansion frames. For high availability, the hardware components are redundant.

DS8880 adds a base frame and expansion frame to the 283x machine type family,
and the 533x all-flash machine type family.
v The base frame contains the processor nodes, I/O enclosures, Ethernet switches,
and the Hardware Management Console (HMC), in addition to power and
storage enclosures. The HMC is a small form factor computer and uses a
keyboard and monitor that are stored in the base frame. An optional secondary
HMC is also available in the base frame. A secondary HMC can provide
high-availability, particularly for important processes such as encryption, Copy
Services, and the HMC storage management functions.
v Depending on the system configuration, you can add up to four expansion
frames to the storage system. Only the first expansion frame contains I/O
enclosures, which provide more host adapters, device adapters, and High
Performance Flash Enclosure Gen2 flash RAID adapters.

The DS8880 features five system types: DS8884, DS8884F, DS8886, DS8886F, and
DS8888F. The DS8884 (models 984 and 84E) is an entry-level, high-performance
storage system. The DS8884F (model 984) is an entry-level, high-performance
storage system featuring all High Performance Flash Enclosures Gen2. The DS8886
is a high-density, high-performance storage system with either single-phase power
(models 985 and 85E) or three-phase power (models 986 and 86E). The DS8886F is
a high-density, high-performance storage system with either single-phase power
(models 985 and 85E) or three-phase power (models 986 and 86E) featuring all
High Performance Flash Enclosures Gen2. The DS8888F is a high-performance,
high-efficiency storage system featuring all High Performance Flash Enclosures
Gen2.

Note: Previously available DS8880 models (980, 98B, 981, 98E, 982, 98F) are still
supported, but not covered in this version documentation. For information on
models not documented here, refer to previous version documentation
(GC27-8525-06).
v The DS8884 (models 984 and 84E) storage system includes 6-core processors and
is scalable with up to 96 High Performance Flash Enclosure Gen2 flash cards, up
to 768 standard drives, up to 256 GB system memory, and up to 64 host adapter
ports. The DS8884 includes a base frame (model 984), up to two expansion
frames (model 84E), and a 40U capacity in each frame.
v The DS8884F (model 984) storage system includes 6-core processors and is
scalable with up to 48 High Performance Flash Enclosure Gen2 flash cards, up to
256 GB system memory, and up to 32 host adapter ports. The DS8884F includes
a base frame (model 984) and a 40U capacity.
v The DS8886 (models 985 and 85E) has a single-phase power storage system and
is scalable with up to 24-core processors, up to 192 High Performance Flash
Enclosure Gen2 flash cards, up to 1,536 standard drives, up to 2048 GB system
memory, and up to 128 host adapter ports. The DS8886 includes a base frame
(model 985), up to four expansion frames (model 85E), and an expandable
40-46U capacity in each frame.
v The DS8886F (models 985 and 85E) has a single-phase power storage system and
is scalable with up to 24-core processors, up to 192 High Performance Flash
Enclosure Gen2 flash cards, up to 2048 GB system memory, and up to 128 host
adapter ports. The DS8886F includes a base frame (model 985) and one
expansion frame (model 85E).
v The DS8886 (models 986 and 86E) has a three-phase power storage system and
is scalable with up to 24-core processors, up to 192 High Performance Flash
Enclosure Gen2 flash cards, up to 1,440 standard drives, up to 2048 GB system
memory, and up to 128 host adapter ports. The DS8886 includes a base frame
(model 986), up to four expansion frames (model 86E), and an expandable
40-46U capacity in each frame.
v The DS8886F (models 986 and 86E) has a three-phase power storage system and
is scalable with up to 24-core processors, up to 192 High Performance Flash
Enclosure Gen2 flash cards, up to 2048 GB system memory, and up to 128 host
adapter ports. The DS8886F includes a base frame (model 986) and one
expansion frame (model 86E).
v The DS8888F (models 988 and 88E) has a three-phase power storage system and
is scalable with up to 48-core processors, up to 384 High Performance Flash
Enclosure Gen2 flash cards, up to 2048 GB system memory, and up to 128 host
adapter ports. The DS8888F includes a base frame (model 988) and one
expansion frame (model 88E).

The DS8880 features standard 19-inch wide frames and 19-inch wide frames with
6U extensions (DS8886 and DS8888F only).

Energy-efficient DC-UPS modules provide 8 kW of single-phase power (models 984
and 985) and 8 kW of three-phase power (models 986 and 988).

DS8880 integrates High Performance Flash Enclosures Gen2 and flash cards for all
models documented here to provide a higher level of performance. Previously
available models (980, 98B, 981, 98E, 982, 98F) integrated High-Performance Flash
Enclosures Gen1. For information on models not documented here, refer to
previous version documentation (GC27-8525-06).

Licensed functions are available in four groups:


Base Function
The Base Function license is required for each DS8880 storage system. The
licensed functions include Database Protection, Encryption Authorization,
Easy Tier, I/O Priority Manager, the Operating Environment License, and
Thin Provisioning.
z-synergy Services
The z-synergy Services include z/OS® functions that are supported on the
storage system. The licensed functions include High Performance FICON®
for z Systems, HyperPAV, PAV, and z/OS Distributed Data Backup.
Copy Services
Copy Services features help you implement storage solutions to keep your
business running 24 hours a day, 7 days a week by providing data
duplication, data migration, and disaster recovery functions. The licensed
functions include Global Mirror, Metro Mirror, Metro/Global Mirror,
Point-in-Time Copy/FlashCopy®, z/OS Global Mirror, and z/OS
Metro/Global Mirror Incremental Resync (RMZ).
Copy Services Manager on Hardware Management Console
The Copy Services Manager on Hardware Management Console (CSM on
HMC) license enables IBM Copy Services Manager to run on the Hardware
Management Console, which eliminates the need to maintain a separate
server for Copy Services functions.

DS8880 also includes features such as:


v POWER8® processors
v Power-usage reporting
v National Institute of Standards and Technology (NIST) SP 800-131A enablement

Other functions that are supported in both the DS8000 Storage Management GUI
and the DS command-line interface (DS CLI) include:
v Easy Tier
v Data encryption
v Thin provisioning

You can use the DS8000 Storage Management GUI and the DS command-line
interface (DS CLI) to manage and logically configure the storage system.

Functions that are supported in only the DS command-line interface (DS CLI)
include:
v Point-in-time copy functions with IBM FlashCopy
v Remote Mirror and Copy functions, including
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– z/OS Global Mirror
– z/OS Metro/Global Mirror
– Multiple Target PPRC
v I/O Priority Manager
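As a brief illustration, the following DS CLI sketch starts a FlashCopy
relationship and a Metro Mirror pair. The storage image IDs and volume IDs are
hypothetical placeholders; see the Command-Line Interface User's Guide for the
complete syntax of these commands:

   # Create a point-in-time (FlashCopy) relationship from volume 0100 to
   # target volume 0200 on a hypothetical storage image.
   dscli> mkflash -dev IBM.2107-75FA120 0100:0200

   # Create a synchronous Metro Mirror pair between a local volume and a
   # volume on a hypothetical remote storage image.
   dscli> mkpprc -dev IBM.2107-75FA120 -remotedev IBM.2107-75FA150 -type mmir 0100:0100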

DS8880 meets the restriction of hazardous substances (RoHS) requirements by
conforming to the following EC directives:
v Directive 2011/65/EU of the European Parliament and of the Council of 8 June
2011 on the restriction of the use of certain hazardous substances in electrical
and electronic equipment. It has been demonstrated that the requirements
specified in Article 4 are met.
v EN 50581:2012 technical documentation for the assessment of electrical and
electronic products regarding the restriction of hazardous substances.

The IBM Security Key Lifecycle Manager stores data keys that are used to secure
the key hierarchy that is associated with the data encryption functions of various
devices, including the DS8000 series. It can be used to provide, protect, and
maintain encryption keys that are used to encrypt information that is written to
and decrypt information that is read from encryption-enabled disks. IBM Security
Key Lifecycle Manager operates on various operating systems.

Machine types overview


There are several machine type options available for the DS8000 series. Order a
hardware machine type for the storage system and a corresponding function
authorization machine type for the licensed functions that are planned for use.

The following tables list the available hardware machine types and their
corresponding function authorization machine types.

Table 4. Available hardware and function-authorization machine types that
support a mix of storage enclosure types

Hardware machine type:
   2831 (1-year warranty period)
   2832 (2-year warranty period)
   2833 (3-year warranty period)
   2834 (4-year warranty period)
Available hardware models that support High Performance Flash Enclosures Gen2:
   984 and 84E, 985 and 85E, 986 and 86E
Corresponding function authorization machine type:
   2836 (1-year warranty period)
   2837 (2-year warranty period)
   2838 (3-year warranty period)
   2839 (4-year warranty period)
Available function authorization models:
   LF8

Table 5. Available hardware and function-authorization machine types that
support all-flash system types

Hardware machine type:
   5331 (1-year warranty period)
   5332 (2-year warranty period)
   5333 (3-year warranty period)
   5334 (4-year warranty period)
Available hardware models that support High Performance Flash Enclosures Gen2:
   984, 985 and 85E, 986 and 86E, 988 and 88E
Corresponding function authorization machine type:
   9046 (1-year warranty period)
   9047 (2-year warranty period)
   9048 (3-year warranty period)
   9049 (4-year warranty period)
Available function authorization models:
   LF8

Note: Previously available DS8880 models (980, 98B, 981, 98E, 982, 98F) are still
supported, but not covered in this version documentation. For information on
models not documented here, refer to previous version documentation
(GC27-8525-06).

The machine types for the DS8000 series specify the service warranty period.
The warranty is used for service entitlement checking when notifications for
service are called home to IBM. All DS8000 series models report 2107 as the
machine type to attached host systems.

Hardware
The architecture of the IBM DS8000 series is based on three major elements that
provide function specialization and three tiers of processing power.

Figure 1 on page 5 illustrates the following elements:


v Host adapters manage external I/O interfaces that use Fibre Channel protocols
for host-system attachment and for replicating data between storage systems.
v Flash RAID adapters and device adapters manage the internal storage devices.
They also manage the SAS paths to drives, RAID protection, and drive sparing.
v A pair of high-performance, redundant, active-active Power® servers is
functionally positioned between the adapters and is a key feature of the
architecture.
The internal Power servers support the bulk of the processing to be done in the
storage system. Each Power server has multiple processor cores. The cores are
managed as a symmetric multiprocessing (SMP) pool of shared processing
power to process the work that is done on the Power server. Each Power server
runs an AIX® kernel that manages the processors, manages processor memory as
a data cache, and more. For more information, see IBM DS8000 Architecture and
Implementation on the IBM Redbooks website (www.redbooks.ibm.com/).

Figure 1. DS8000 series architecture. The figure shows host adapters (adapter
processors and protocol management) at the top, a pair of Power servers with
shared processors and cache in the middle, and flash RAID adapters and device
adapters (adapter processors, RAID and sparing management) at the bottom.

The DS8000 series architecture has the following major benefits.


v Server foundation
– Promotes high availability and high performance by using field-proven Power
servers
– Reduces custom components and design complexity
– Positions the storage system to reap the benefits of server technology
advances
v Operating environment
– Promotes high availability and provides a high-quality base for the storage
system software through a field-proven AIX operating-system kernel
– Provides an operating environment that is optimized for Power servers,
including performance and reliability, availability, and serviceability
– Provides shared processor (SMP) efficiency
– Reduces custom code and design complexity
– Uses Power firmware and software support for networking and service
functions

System types
For version 8.2.3, DS8880 supports five system types: DS8884 (models 984 and
84E), DS8884F (model 984), DS8886 (single-phase power models 985 and 85E, or
three-phase power models 986 and 86E), DS8886F (single-phase power models 985
and 85E, or three-phase power models 986 and 86E), and DS8888F (models 988 and
88E).

For more specifications, see the IBM DS8000 series specifications web site
(www.ibm.com/systems/storage/disk/ds8000/specifications.html).

DS8884 (models 984 and 84E)


The DS8884 is an entry-level, high-performance storage system that includes
standard disk enclosures and High Performance Flash Enclosures Gen2.

DS8884 storage systems feature 6-core processors and are scalable, supporting
up to 96 High Performance Flash Enclosure Gen2 flash cards and up to 768
standard drives. They are optimized and configured for cost by minimizing the
number of device adapters and maximizing the number of storage enclosures that
are attached to each storage system. The frame is 19 inches wide and 40U high.

Model 984 supports the following storage enclosures:


v Up to 4 standard drive enclosure pairs and up to 1 High Performance Flash
Enclosure Gen2 pair in a base frame (model 984).
v Up to 5 standard drive enclosure pairs and up to 1 High Performance Flash
Enclosure Gen2 pair in a first expansion frame (model 84E).
v Up to 7 standard drive enclosure pairs in a second expansion frame (model 84E).

The DS8884 uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel
Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps
adapters only) protocol. The High Performance FICON (HPF) feature is also
supported.

The DS8884 supports single-phase power.

For more specifications, see the IBM DS8000 series specifications web site
(www.ibm.com/systems/storage/disk/ds8000/specifications.html).

The following tables list the hardware components and maximum capacities that
are supported for the DS8884, depending on the amount of memory that is
available.
Table 6. Components for the DS8884 (models 984 and 84E)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | Device adapter pairs | Standard drive enclosure pairs (notes 1, 2) | High Performance Flash Enclosure Gen2 pairs (notes 1, 2, 3) | Expansion frames
6-core | 64 GB | 32 GB | 1 | 2 - 8 | 0 - 1 | 0 - 4 | 0 - 1 | 0
6-core | 128 GB | 64 GB | 2 | 2 - 16 | 0 - 4 | 0 - 16 | 0 - 2 | 0 - 2
6-core | 256 GB | 128 GB | 2 | 2 - 16 | 0 - 4 | 0 - 16 | 0 - 2 | 0 - 2

Notes:
1. Standard drive and High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with either one standard drive enclosure pair (feature code 1241) or one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.


Table 7. Maximum capacity for the DS8884 (models 984 and 84E)

Processors | System memory | Maximum 2.5-in. standard disk drives | Maximum storage capacity (2.5-in. standard) | Maximum 3.5-in. standard disk drives | Maximum storage capacity (3.5-in. standard) | Maximum 2.5-in. High Performance Flash Enclosure Gen2 flash cards | Maximum storage capacity (flash cards) | Maximum total drives (note 1)
6-core | 64 GB | 192 | 346 TB | 96 | 576 TB | 48 | 154 TB | 240
6-core | 128 GB | 768 | 1.38 PB | 384 | 2.3 PB | 96 | 307 TB | 864
6-core | 256 GB | 768 | 1.38 PB | 384 | 2.3 PB | 96 | 307 TB | 864

Note:
1. Combined total of 2.5-in. standard disk drives and 2.5-in. High Performance Flash Enclosure Gen2 flash cards.
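As a rough cross-check of the 64 GB row, the capacity maxima are consistent
with the drive counts multiplied by the largest-capacity drive of each type.
The per-device sizes used below (1.8 TB 2.5-in. drives, 6 TB 3.5-in. drives,
and 3.2 TB flash cards) are inferred from the table rather than stated in it:

   192 drives x 1.8 TB = 345.6 TB (listed as 346 TB)
    96 drives x 6 TB   = 576 TB
    48 cards  x 3.2 TB = 153.6 TB (listed as 154 TB)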

Base frame (model 984) overview:

The DS8884 (model 984) includes a base frame.

The base frame includes the following components:


v Standard storage enclosures
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards

Expansion frame (model 84E) overview:

The DS8884 (model 84E) supports up to two expansion frames that can be added
to a base frame. A minimum of 128 GB of system memory is required if expansion
frames are added.

The first expansion frame supports up to 240 standard 2.5-inch disk drives. The
second expansion frame supports up to 336 2.5-inch standard disk drives. When all
three frames are installed, the DS8884 (models 984 and 84E) can support a total of
768 2.5-inch standard disk drives in a compact footprint, creating a high-density
storage system and preserving valuable floor space in data center environments.

Only the first expansion frame includes I/O enclosures. You can add up to one
High Performance Flash Enclosure Gen2 pair to the first expansion frame. The
second expansion frame does not include I/O enclosures or High Performance
Flash Enclosures Gen2.

The main power area is at the rear of the expansion frame. The power system in
each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs)
with internal batteries.

DS8884 (models 984 and 84E) expansion frame location options:

In addition to the standard expansion frame location, the DS8884 offers a remote
expansion frame option.

With the standard DS8884 expansion frame location, the first expansion frame is
located next to the base frame, and the second expansion frame is located next to
the first expansion frame.

Figure 2. DS8884 standard expansion frame locations

With the DS8884 remote expansion frame option, the first expansion frame is
located next to the base frame, and the second expansion frame can be located up
to 20 meters away from the first expansion frame. This option requires the
extended drive cable group C (feature code 1266).

Figure 3. DS8884 remote expansion frame option

DS8884F (model 984)


The DS8884F is an entry-level, high-performance storage system that includes only
High Performance Flash Enclosures Gen2.

DS8884F storage systems feature 6-core processors and are scalable, supporting
up to 48 High Performance Flash Enclosure Gen2 flash cards. The frame is 19
inches wide and 40U high.

DS8884F supports one High Performance Flash Enclosure Gen2 pair in a base
frame (model 984).

The DS8884F uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre
Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8
Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also
supported.

The DS8884F supports single-phase power.

For more specifications, see the IBM DS8000 series specifications web site
(www.ibm.com/systems/storage/disk/ds8000/specifications.html).



The following tables list the hardware components and maximum capacities that
are supported for the DS8884F, depending on the amount of memory that is
available.
Table 8. Components for the DS8884F (model 984)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | High Performance Flash Enclosure Gen2 pairs (notes 1, 2, 3) | Expansion frames
6-core | 64 GB | 32 GB | 1 | 2 - 8 | 1 | N/A
6-core | 128 GB | 64 GB | 1 | 2 - 8 | 1 | N/A
6-core | 256 GB | 128 GB | 1 | 2 - 8 | 1 | N/A

Notes:
1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.

Table 9. Maximum capacity for the DS8884F (model 984)

Processors | System memory | Maximum 2.5-in. High Performance Flash Enclosure Gen2 flash cards | Maximum storage capacity (flash cards) | Maximum total drives
6-core | 64 GB | 48 | 154 TB | 48
6-core | 128 GB | 48 | 154 TB | 48
6-core | 256 GB | 48 | 154 TB | 48

DS8884F base frame (model 984) overview:

The DS8884F (model 984) includes a base frame.

The base frame includes the following components:


v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards

DS8886 (models 985 and 85E or 986 and 86E)


The DS8886 is a high-density, high-performance storage system that includes
standard disk enclosures and High Performance Flash Enclosures Gen2.

The DS8886 models 985 and 85E support single-phase power. The DS8886 models
986 and 86E support three-phase power.

DS8886 (models 985 and 85E):

The DS8886 (models 985 and 85E) is a high-density, high-performance storage
system that includes standard disk enclosures and High Performance Flash
Enclosures Gen2, and supports single-phase power.

DS8886 (models 985 and 85E) storage systems are scalable with up to 24-core
processors, up to 192 High Performance Flash Enclosure Gen2 flash cards, and
up to 1,536 standard drives. They are optimized and configured for performance
and throughput by maximizing the number of device adapters and paths to the
storage enclosures. The frame is 19 inches wide and expandable from 40U - 46U.
They support the following storage enclosures:
v Up to 3 standard drive enclosure pairs and up to 2 High Performance Flash
Enclosure Gen2 pairs in a base frame (model 985).
v Up to 5 standard drive enclosure pairs and up to 2 High Performance Flash
Enclosure Gen2 pairs in a first expansion frame (model 85E).
v Up to 9 standard drive enclosure pairs in a second expansion frame.
v Up to 9 standard drive enclosure pairs in a third expansion frame.
v Up to 6 standard drive enclosure pairs in a fourth expansion frame.

The DS8886 uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel
Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps
adapters only) protocol. The High Performance FICON (HPF) feature is also
supported.

For more specifications, see the IBM DS8000 series specifications web site
(www.ibm.com/systems/storage/disk/ds8000/specifications.html).

The following tables list the hardware components and maximum capacities that
are supported for the DS8886 (models 985 and 85E), depending on the amount of
memory that is available.
Table 10. Components for the DS8886 (models 985 and 85E)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | Device adapter pairs | Standard drive enclosure pairs (notes 1, 2) | High Performance Flash Enclosure Gen2 pairs (notes 1, 2, 3) | Expansion frames
8-core | 128 GB | 64 GB | 2 | 2 - 16 | 0 - 3 | 0 - 3 | 0 - 2 | 0
8-core | 256 GB | 128 GB | 2 | 2 - 16 | 0 - 3 | 0 - 3 | 0 - 2 | 0
16-core | 256 GB | 128 GB | 4 | 2 - 32 | 0 - 8 | 0 - 32 | 0 - 4 | 0 - 4
16-core | 512 GB | 256 GB | 4 | 2 - 32 | 0 - 8 | 0 - 32 | 0 - 4 | 0 - 4
24-core | 1024 GB | 512 GB | 4 | 2 - 32 | 0 - 8 | 0 - 32 | 0 - 4 | 0 - 4
24-core | 2048 GB | 1024 GB | 4 | 2 - 32 | 0 - 8 | 0 - 32 | 0 - 4 | 0 - 4

Notes:
1. Standard drive and High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with either one standard drive enclosure pair (feature code 1241) or one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.

Table 11. Maximum capacity for the DS8886 (models 985 and 85E)

Processors | System memory | Maximum 2.5-in. standard disk drives | Maximum storage capacity (2.5-in. standard) | Maximum 3.5-in. standard disk drives | Maximum storage capacity (3.5-in. standard) | Maximum 2.5-in. High Performance Flash Enclosure Gen2 flash cards | Maximum storage capacity (flash cards) | Maximum total drives (note 1)
8-core | 128 GB or 256 GB | 144 | 259.2 TB | 72 | 432 TB | 96 | 307.2 TB | 240
16-core | 256 GB or 512 GB | 1536 | 2.76 PB | 768 | 7.68 PB | 192 | 614.4 TB | 1728
24-core | 1024 GB or 2048 GB | 1536 | 2.76 PB | 768 | 7.68 PB | 192 | 614.4 TB | 1728

Note:
1. Combined total of 2.5-in. disk drives and 2.5-in. High Performance Flash Enclosure Gen2 flash cards.


Base frame (model 985) overview:

The DS8886 (model 985) includes a base frame.

The base frame includes the following components:


v Standard storage enclosures
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards

Expansion frame (model 85E) overview:

The DS8886 supports up to four expansion frames (model 85E) that can be added
to a base frame (model 985). A minimum of 256 GB of system memory and a
16-core processor are required if expansion frames are added.

The first expansion frame supports up to 240 2.5-inch standard disk drives. The
second and third expansion frames support up to 432 2.5-inch standard disk
drives. A fourth expansion frame supports an extra 288 2.5-inch standard disk
drives. When all four frames are added, the DS8886 can support a total of 1,536
2.5-inch disk drives in a compact footprint, creating a high-density storage system
and preserving valuable floor space in data center environments.

Only the first expansion frame includes I/O enclosures. You can add up to two
High Performance Flash Enclosure Gen2 pairs to the first expansion frame. The
second, third, and fourth expansion frames do not include I/O enclosures or High
Performance Flash Enclosures Gen2.

The main power area is at the rear of the expansion frame. The power system in
each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs)
with internal batteries.

DS8886 (models 986 and 86E):

The DS8886 (models 986 and 86E) is a high-density, high-performance storage
system that includes standard disk enclosures and High Performance Flash
Enclosures Gen2, and supports three-phase power.

DS8886 (models 986 and 86E) storage systems are scalable with up to 24-core
processors, up to 192 High Performance Flash Enclosure Gen2 flash cards, and up
to 1,440 standard drives. They are optimized and configured for performance and
throughput by maximizing the number of device adapters and paths to the
storage enclosures. The frame is 19 inches wide and expandable from 40U - 46U.
They support the following storage enclosures:
v Up to 2 standard drive enclosure pairs and up to 2 High Performance Flash
Enclosure Gen2 pairs in a base frame (model 986).
v Up to 4 standard drive enclosure pairs and up to 2 High Performance Flash
Enclosure Gen2 pairs in a first expansion frame (model 86E).
v Up to 9 standard drive enclosure pairs in a second expansion frame.
v Up to 9 standard drive enclosure pairs in a third expansion frame.

v Up to 9 standard drive enclosure pairs in a fourth expansion frame.

The DS8886 uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre Channel
Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8 Gbps
adapters only) protocol. The High Performance FICON (HPF) feature is also
supported.

For more specifications, see the IBM DS8000 series specifications web site
(www.ibm.com/systems/storage/disk/ds8000/specifications.html).

The following tables list the hardware components and maximum capacities that
are supported for the DS8886 (models 986 and 86E), depending on the amount of
memory that is available.
Table 12. Components for the DS8886 (models 986 and 86E)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | Device adapter pairs | Standard drive enclosure pairs (notes 1, 2) | High Performance Flash Enclosure Gen2 pairs (notes 1, 2, 3) | Expansion frames
8-core | 128 GB | 64 GB | 2 | 2 - 16 | 0 - 2 | 0 - 2 | 0 - 2 | 0
8-core | 256 GB | 128 GB | 2 | 2 - 16 | 0 - 2 | 0 - 2 | 0 - 2 | 0
16-core | 256 GB | 128 GB | 4 | 2 - 32 | 0 - 8 | 0 - 30 | 0 - 4 | 0 - 4
16-core | 512 GB | 256 GB | 4 | 2 - 32 | 0 - 8 | 0 - 30 | 0 - 4 | 0 - 4
24-core | 1024 GB | 512 GB | 4 | 2 - 32 | 0 - 8 | 0 - 30 | 0 - 4 | 0 - 4
24-core | 2048 GB | 1024 GB | 4 | 2 - 32 | 0 - 8 | 0 - 30 | 0 - 4 | 0 - 4

Notes:
1. Standard drive and High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with either one standard drive enclosure pair (feature code 1241) or one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.

Table 13. Maximum capacity for the DS8886 (models 986 and 86E)

Processors | System memory | Maximum 2.5-in. standard disk drives | Maximum storage capacity (2.5-in. standard) | Maximum 3.5-in. standard disk drives | Maximum storage capacity (3.5-in. standard) | Maximum 2.5-in. High Performance Flash Enclosure Gen2 flash cards | Maximum storage capacity (flash cards) | Maximum total drives (note 1)
8-core | 128 GB or 256 GB | 96 | 172.8 TB | 48 | 288 TB | 96 | 307.2 TB | 192
16-core | 256 GB or 512 GB | 1440 | 2.59 PB | 720 | 4.32 PB | 192 | 614.4 TB | 1632
24-core | 1024 GB or 2048 GB | 1440 | 2.59 PB | 720 | 4.32 PB | 192 | 614.4 TB | 1632

Note:
1. Combined total of 2.5-in. disk drives and 2.5-in. High Performance Flash Enclosure Gen2 flash cards.

Base frame (model 986) overview:

The DS8886 (model 986) includes a base frame.

The base frame includes the following components:


v Standard storage enclosures
v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches



v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards

Expansion frame (model 86E) overview:

The DS8886 supports up to four expansion frames (model 86E) that can be added
to a base frame (model 986). A minimum of 256 GB of system memory and a
16-core processor are required if expansion frames are added.

The first expansion frame supports up to 192 2.5-inch standard disk drives. The
second, third, and fourth expansion frames support up to 384 2.5-inch standard
disk drives. When all four frames are used, the DS8886 can support a total of 1,440
2.5-inch standard disk drives in a compact footprint, creating a high-density
storage system and preserving valuable floor space in data center environments.

Only the first expansion frame includes I/O enclosures. You can add up to two
High Performance Flash Enclosure Gen2 pairs to the first expansion frame. The
second, third, and fourth expansion frames do not include I/O enclosures or High
Performance Flash Enclosures Gen2.

The main power area is at the rear of the expansion frame. The power system in
each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs)
with internal batteries.

DS8886 expansion frame location options:

In addition to the standard expansion frame location, the DS8886 offers four
remote expansion frame options that allow expansion frames to be located up to 20
meters apart.

With the standard DS8886 expansion frame location, the first expansion frame is
located next to the base frame, the second expansion frame is located next to the
first expansion frame, and each consecutive expansion frame is located next to the
previous one.

Figure 4. DS8886 standard expansion frame locations

The DS8886 offers a remote expansion frame option with one remote expansion
frame. This option requires the extended drive cable group E (feature code 1254).

Chapter 1. Overview 13
Figure 5. DS8886 with one remote expansion frame

The DS8886 offers a remote expansion frame option with two remote expansion
frames. This option requires the extended drive cable group D (feature code 1253).

Figure 6. DS8886 with two remote expansion frames

The DS8886 offers a remote expansion frame option with three remote expansion
frames. This option requires the extended drive cable group C (feature code 1252).

Figure 7. DS8886 with three remote expansion frames

The DS8886 offers a remote expansion frame option with three separate remote
expansion frames. This option requires the extended drive cable groups C, D, and
E (feature codes 1252, 1253, and 1254).



Figure 8. DS8886 with three separate remote expansion frames

DS8886F (models 985 and 85E or 986 and 86E)


The DS8886F is a high-density, high-performance storage system that includes only
High Performance Flash Enclosures Gen2.

The DS8886F models 985 and 85E support single-phase power. The DS8886F
models 986 and 86E support three-phase power.

DS8886F (models 985 and 85E):

The DS8886F (models 985 and 85E) is a high-density, high-performance storage
system that includes High Performance Flash Enclosures Gen2, and supports
single-phase power.

DS8886F (models 985 and 85E) storage systems are scalable with up to 24-core
processors, and up to 192 High Performance Flash Enclosure Gen2 flash cards.
They are optimized and configured for performance and throughput by
maximizing the number of paths to the storage enclosures. The frame is 19 inches
wide and 40U high. They support the following storage enclosures:
v Up to 2 High Performance Flash Enclosure Gen2 pairs in a base frame (model
985).
v Up to 2 High Performance Flash Enclosure Gen2 pairs in an expansion frame
(model 85E).

The DS8886F uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre
Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8
Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also
supported.

For more specifications, see the IBM DS8000 series specifications web site
(www.ibm.com/systems/storage/disk/ds8000/specifications.html).

The following tables list the hardware components and maximum capacities that
are supported for the DS8886F (models 985 and 85E), depending on the amount of
memory that is available.
Table 14. Components for the DS8886F (models 985 and 85E)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | High Performance Flash Enclosure Gen2 pairs (see notes 1, 2, 3) | Expansion frames
8-core | 128 GB | 64 GB | 2 | 2 - 16 | 1 - 2 | 0
8-core | 256 GB | 128 GB | 2 | 2 - 16 | 1 - 2 | 0
16-core | 256 GB | 128 GB | 4 | 2 - 32 | 1 - 4 | 0 - 1
16-core | 512 GB | 256 GB | 4 | 2 - 32 | 1 - 4 | 0 - 1
24-core | 1024 GB | 512 GB | 4 | 2 - 32 | 1 - 4 | 0 - 1
24-core | 2048 GB | 1024 GB | 4 | 2 - 32 | 1 - 4 | 0 - 1

1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with at least one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.

Table 15. Maximum capacity for the DS8886F (models 985 and 85E)

Processors | System memory | Maximum 2.5-in. High Performance Flash Enclosure Gen2 flash cards | Maximum storage capacity for 2.5-in. flash cards | Maximum total drives
8-core | 128 GB or 256 GB | 96 | 307.2 TB | 96
16-core | 256 GB or 512 GB | 192 | 614.4 TB | 192
24-core | 1024 GB or 2048 GB | 192 | 614.4 TB | 192

DS8886F base frame (model 985) overview:

The DS8886F (model 985) includes a base frame.

The base frame includes the following components:


v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)

v Rack power control (RPC) cards

DS8886F expansion frame (model 85E) overview:

The DS8886F supports one expansion frame (model 85E) that can be added to a
base frame (model 985). A minimum of 256 GB system memory and a 16-core
processor is required to add the expansion frame.

The expansion frame includes I/O enclosures. You can add up to two High
Performance Flash Enclosure Gen2 pairs to the expansion frame.

The main power area is at the rear of the expansion frame. The power system is
a pair of direct-current uninterruptible power supplies (DC-UPSs) with internal
batteries.

DS8886F (models 986 and 86E):

The DS8886F (models 986 and 86E) is a high-density, high-performance storage
system that includes High Performance Flash Enclosures Gen2, and supports
three-phase power.

DS8886F (models 986 and 86E) storage systems are scalable with up to 24-core
processors, and up to 192 High Performance Flash Enclosure Gen2 flash cards.
They are optimized and configured for performance and throughput by
maximizing the number of paths to the storage enclosures. The frame is 19 inches
wide and 40U high. They support the following storage enclosures:
v Up to 2 High Performance Flash Enclosure Gen2 pairs in a base frame (model
986).
v Up to 2 High Performance Flash Enclosure Gen2 pairs in an expansion frame
(model 86E).

The DS8886F uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre
Channel Protocol (FCP), FICON, or Fibre Channel Arbitrated Loop (FC-AL) (for 8
Gbps adapters only) protocol. The High Performance FICON (HPF) feature is also
supported.

For more specifications, see the IBM DS8000 series specifications web site
(www.ibm.com/systems/storage/disk/ds8000/specifications.html).

The following tables list the hardware components and maximum capacities that
are supported for the DS8886F (models 986 and 86E), depending on the amount of
memory that is available.
Table 16. Components for the DS8886F (models 986 and 86E)

Processors | System memory | Processor memory | I/O enclosure pairs | Host adapters (8 or 4 port) | High Performance Flash Enclosure Gen2 pairs (see notes 1, 2, 3) | Expansion frames
8-core | 128 GB | 64 GB | 2 | 2 - 16 | 1 - 2 | 0
8-core | 256 GB | 128 GB | 2 | 2 - 16 | 1 - 2 | 0
16-core | 256 GB | 128 GB | 4 | 2 - 32 | 1 - 4 | 0 - 1
16-core | 512 GB | 256 GB | 4 | 2 - 32 | 1 - 4 | 0 - 1
24-core | 1024 GB | 512 GB | 4 | 2 - 32 | 1 - 4 | 0 - 1
24-core | 2048 GB | 1024 GB | 4 | 2 - 32 | 1 - 4 | 0 - 1


1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8880 must be populated with at least one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.

Table 17. Maximum capacity for the DS8886F (models 986 and 86E)

Processors | System memory | Maximum 2.5-in. High Performance Flash Enclosure Gen2 flash cards | Maximum storage capacity for 2.5-in. flash cards | Maximum total drives
8-core | 128 GB or 256 GB | 96 | 307.2 TB | 96
16-core | 256 GB or 512 GB | 192 | 614.4 TB | 192
24-core | 1024 GB or 2048 GB | 192 | 614.4 TB | 192

DS8886F base frame (model 986) overview:

The DS8886F (model 986) includes a base frame.

The base frame includes the following components:


v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards

DS8886F expansion frame (model 86E) overview:

The DS8886F supports one expansion frame (model 86E) that can be added to a
base frame (model 986). A minimum of 256 GB system memory and a 16-core
processor is required to add the expansion frame.

The expansion frame includes I/O enclosures. You can add up to two High
Performance Flash Enclosure Gen2 pairs to the expansion frame.

The main power area is at the rear of the expansion frame. The power system is a
pair of direct-current uninterruptible power supplies (DC-UPSs) with internal
batteries.

DS8888F (models 988 and 88E)


The DS8888F (models 988 and 88E) is a high-performance, high-efficiency storage
system that includes only High Performance Flash Enclosures Gen2.

DS8888F storage systems (models 988 and 88E) are scalable with up to 48-core
processors, up to 8 High Performance Flash Enclosure Gen2 pairs, and up to 384
flash cards. They are optimized and configured for performance and throughput.
The frame is 19 inches wide with a 40U capacity. They support the following
storage enclosures:
v Up to 4 High Performance Flash Enclosure Gen2 pairs in the base frame (model
988).
v Up to 4 High Performance Flash Enclosure Gen2 pairs in the expansion frame
(model 88E).

The DS8888F uses 8 or 16 Gbps Fibre Channel host adapters that run Fibre
Channel Protocol (FCP), FICON, or (for 8 Gbps adapters only) Fibre Channel
Arbitrated Loop (FC-AL) protocol. The High Performance FICON (HPF) feature is
also supported.

The DS8888F supports three-phase power only.

For more specifications, see the IBM DS8000 series specifications web site
(www.ibm.com/systems/storage/disk/ds8000/specifications.html).

The following tables list the hardware components and maximum capacities that
are supported for the DS8888F (models 988 and 88E), depending on the amount of
memory that is available.
Table 18. Components for the DS8888F (models 988 and 88E)

Processors | System memory | Processor memory | I/O enclosures | Host adapters (8 or 4 port) | High Performance Flash Enclosure Gen2 pairs (see notes 1, 2, 3) | Expansion frames
24-core | 1024 GB | 512 GB | 4 | 2 - 16 | 1 - 4 | 0
48-core | 2048 GB | 1024 GB | 8 | 2 - 32 | 1 - 8 | 0 - 1

1. High Performance Flash Enclosures Gen2 are installed in pairs.
2. This configuration of the DS8888F must be populated with at least one High Performance Flash Enclosure Gen2 pair (feature code 1600).
3. Each High Performance Flash Enclosure Gen2 pair (feature code 1600) includes a pair of flash RAID adapters.

Table 19. Maximum capacity for the DS8888F (models 988 and 88E)

Processors | System memory | Maximum 2.5-in. High Performance Flash Enclosure Gen2 flash cards | Maximum storage capacity for 2.5-in. flash cards | Maximum total drives
24-core | 1024 GB | 192 | 614.4 TB | 192
48-core | 2048 GB | 384 | 1113.6 TB | 384

DS8888F base frame (model 988) overview:

The DS8888F includes a base frame (model 988).

The base frame includes the following components:


v High Performance Flash Enclosures Gen2
v Hardware Management Console (HMC), including a keyboard and monitor
v Second HMC (optional)
v Ethernet switches
v Processor nodes (available with POWER8 processors)
v I/O enclosures
v Direct-current uninterruptible power supplies (DC-UPS)
v Rack power control (RPC) cards

DS8888F expansion frame (model 88E) overview:

The DS8888F supports an expansion frame (model 88E) that can be added to a
base frame. A minimum of 2048 GB system memory and a 48-core processor is
required to add the expansion frame.

The expansion frame includes I/O enclosures. You can add up to four High
Performance Flash Enclosure Gen2 pairs to the expansion frame.

The main power area is at the rear of the expansion frame. The power system in
each frame is a pair of direct-current uninterruptible power supplies (DC-UPSs)
with internal batteries.

Storage enclosures
DS8880 integrates one of two types of storage enclosures: High Performance Flash
Enclosures Gen2 and standard drive enclosures.

High Performance Flash Enclosures Gen2 pair


The High Performance Flash Enclosure Gen2 is a 2U storage enclosure that is
installed in pairs.

The High Performance Flash Enclosure Gen2 pair (feature code 1600) provides two
2U storage enclosures with associated RAID controllers and cabling. This
combination of components forms a high-performance, fully-redundant flash
storage array. The array components can be exchanged without interruption of
service for concurrent maintenance and firmware updates.

Each High Performance Flash Enclosure Gen2 pair (feature code 1600) contains the
following hardware components:
v Two 2U 24-slot SAS flash card enclosures. Each of the two enclosures contains
the following components:
– Two power supplies with integrated cooling fans
– Two SAS Expander Modules with two SAS ports each
– One middle plane for plugging components that provides concurrent
maintenance of flash cards, Expander Modules, and power supplies
v Two High Performance Flash Enclosure Gen2 flash RAID adapters configured
for redundant access to the 2U flash enclosures. Each RAID adapter supports
concurrent maintenance and includes the following components:
– High Performance ASIC RAID engine
– Four SAS ports and cables connected to the four SAS Expander Modules
providing fully-redundant access from each RAID adapter to both 2U
enclosures
– Integrated cooling
– x8 PCIe Gen2 cable port for direct connection to the I/O enclosure

Standard drive enclosures


The standard drive enclosure is a 2U storage enclosure that is installed in pairs.

Each standard drive enclosure contains the following hardware components:


v Up to 12 large-form factor (LFF), 3.5-inch SAS drives
v Up to 24 small form factor (SFF), 2.5-inch SAS drives

Note: Drives can be disk drives or flash drives (also known as solid-state drives
or SSDs). You cannot intermix drives of different types in the same enclosure.
v Two power supplies with integrated cooling fans
v Two Fibre Channel interconnect cards that connect four Fibre Channel 8 Gbps
interfaces to a pair of device adapters or another standard drive enclosure.
v One back plane for plugging components

The 2.5-inch disk drives and flash drives are available in sets of 16 drives. The
3.5-inch SAS disk drives are available in half-drive sets of eight drives.

Management console
The management console is also referred to as the Hardware Management Console
(or HMC). It supports storage system hardware and firmware installation and
maintenance activities. The HMC includes a keyboard and monitor that are stored
on the left side of the base frame.

The HMC connects to the customer network and provides access to IBM DS8000
series functions that can be used to manage the storage system. Management
functions include logical configuration, problem notification, call home for service,
remote service, and Copy Services management. You can perform management
functions from the DS8000 Storage Management GUI, DS command-line interface
(DS CLI), or other storage management software that supports the DS8000 series.

Each base frame includes one HMC and space for a second HMC, which is
available as a separately orderable feature to provide redundancy.

Ethernet switches
The Ethernet switches provide internal communication between the management
consoles and the processor complexes. Two redundant Ethernet switches are
provided.

Processor nodes
The processor nodes drive all functions in the storage system. Each node consists
of a Power server that contains POWER8 processors and memory.

I/O enclosures
I/O enclosures provide connectivity between the adapters and the processor
complex.

The I/O enclosure uses PCIe interfaces to interconnect I/O adapters in the I/O
enclosure to both processor nodes. A PCIe device is an I/O adapter or a processor
node.

To improve I/O operations per second (IOPS) and sequential read/write
throughput, each I/O enclosure is connected to each processor node with a
point-to-point connection. I/O enclosures no longer share common loops.

I/O enclosures contain the following adapters:


Flash interface connectors
Interface connector that provides PCIe cable connection from the I/O
enclosure to the High Performance Flash Enclosure Gen2.

Device adapters
PCIe-attached adapter with four 8 Gbps Fibre Channel arbitrated loop
(FC-AL) ports. These adapters connect the processor nodes to standard
drive enclosures and provide RAID support.
Host adapters
Each I/O enclosure can support up to 16 host ports. For example, if one
8-port adapter is installed, then only two more 4-port adapters or one more
8-port adapter can be added. If only 8-port adapters are used, then a
maximum of two host adapters can be installed in each I/O enclosure. If only
4-port adapters are used, then a maximum of four host adapters can be
installed in each I/O enclosure.
For PCIe-attached adapters with four or eight 8 Gbps Fibre Channel ports,
each port can be independently configured to use SCSI/FCP, SCSI/FC-AL,
or FICON/zHPF protocols. For PCIe-attached adapters with four 16 Gbps
Fibre Channel ports, each port can be independently configured to use
SCSI/FCP or FICON/zHPF protocols. Both longwave and shortwave
adapter versions that support different maximum cable lengths are
available. The host-adapter ports can be directly connected to attached
hosts systems or storage systems, or connected to a storage area network.
SCSI/FCP ports are used for connections between storage systems.
SCSI/FCP ports that are attached to a SAN can be used for both host and
storage system connections.
The High Performance FICON Extension (zHPF) protocol can be used by
FICON host channels that have zHPF support. The use of zHPF protocols
provides a significant reduction in channel usage. This reduction improves
I/O throughput on a single channel and reduces the number of FICON channels
that are required to support the workload.

Power
The power system in each frame is a pair of direct-current uninterruptible power
supplies (DC-UPSs) with internal batteries. The DC-UPSs distribute rectified AC
power and provide power switching for redundancy. A single DC-UPS has
sufficient capacity to power and provide battery backup to the entire frame if one
DC-UPS is out of service. DS8880 uses three-phase and single-phase power.

There are two AC-power cords, each feeding one DC-UPS. If AC power is not
present at the input line, the output is switched to rectified AC power from the
partner DC-UPS. If neither AC-power input is active, the DC-UPS switches to 180
V DC battery power. Storage systems that have the extended power line
disturbance (ePLD) option are protected from a power-line disturbance for up to 40
seconds. Storage systems without the ePLD option are protected for 4 seconds.

An integrated pair of rack-power control (RPC) cards manages the efficiency of
power distribution within the storage system. The RPC cards are attached to each
processor node. The RPC card is also attached to the primary power system in
each frame.

Functional overview
The following list provides an overview of some of the features that are associated
with DS8880.

Note: Some storage system functions are not available or are not supported in all
environments. See the IBM System Storage Interoperation Center (SSIC) website
(www.ibm.com/systems/support/storage/config/ssic) for the most current
information on supported hosts, operating systems, adapters, and switches.
Nondisruptive and disruptive activities
DS8880 supports hardware redundancy. It is designed to support
nondisruptive changes: hardware upgrades, repair, and licensed function
upgrades. In addition, logical configuration changes can be made
nondisruptively. For example:
v The flexibility and modularity of the storage system mean that expansion
frames can be added and physical storage capacity can be increased within a
frame without disrupting your applications.
v An increase in license scope is nondisruptive and takes effect
immediately. A decrease in license scope is also nondisruptive but does
not take effect until the next IML.
v Easy Tier helps keep performance optimized by periodically
redistributing data to help eliminate drive hot spots that can degrade
performance. This function helps balance I/O activity across the drives
in an existing drive tier. It can also automatically redistribute some data
to new empty drives added to a tier to help improve performance by
taking advantage of the new resources. Easy Tier does this I/O activity
rebalancing automatically without disrupting access to your data.
The following examples include activities that are disruptive:
v The installation of an earthquake resistance kit on a raised or nonraised
floor.
v The removal of an expansion frame from the base frame.
Energy reporting
You can use DS8880 to display the following energy measurements
through the DS CLI:
v Average inlet temperature in Celsius
v Total data transfer rate in MB/s
v Timestamp of the last update for values
The derived values are averaged over a 5-minute period. For more
information about energy-related commands, see the commands reference.
You can also query power usage and data usage with the showsu
command. For more information, see the showsu description in the
Command-Line Interface User's Guide.
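
For example, a minimal DS CLI session to check these values might look like
the following sketch. The HMC address and user ID are illustrative, and the
exact properties that are displayed depend on your DS CLI level:

  dscli -hmc1 hmc.example.com -user admin
  dscli> showsu          # display storage-unit properties, including power and data usage
  dscli> help showsu     # list the options that your DS CLI level supports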
National Institute of Standards and Technology (NIST) SP 800-131A security
enablement
NIST SP 800-131A requires the use of cryptographic algorithms that have
security strengths of 112 bits to provide data security and data integrity for
secure data that is created in the cryptoperiod starting in 2014. The DS8880
is enabled for NIST SP 800-131A. Conformance with NIST SP 800-131A
depends on the use of appropriate prerequisite management software
versions and appropriate configuration of the DS8880 and other
network-related entities.
Storage pool striping (rotate capacity)
Storage pool striping is supported on the DS8000 series, providing
improved performance. The storage pool striping function stripes new
volumes across all arrays in a pool. The striped volume layout reduces
workload skew in the system without requiring manual tuning by a
storage administrator. This approach can increase performance with
minimal operator effort. With storage pool striping support, the system
automatically performs close to highest efficiency, which requires little or
no administration. The effectiveness of performance management tools is
also enhanced because imbalances tend to occur as isolated problems.
When performance administration is required, it is applied more precisely.
You can configure and manage storage pool striping by using the DS8000
Storage Management GUI, DS CLI, and DS Open API. The rotate volumes
allocation method is an alternative allocation method that tends to prefer
volumes that are allocated to a single managed array, and is not
recommended. The rotate extents option (storage pool striping, also referred
to as rotate capacity) is designed to provide the best performance by
striping volumes across arrays in the pool. Existing volumes can be
reconfigured nondisruptively by using manual volume migration and
volume rebalance.
The storage pool striping function is provided with the DS8000 series at no
additional charge.
Performance statistics
You can use usage statistics to monitor your I/O activity. For example, you
can monitor how busy the I/O ports are and use that data to help manage
your SAN. For more information, see documentation about performance
monitoring in the DS8000 Storage Management GUI.
Sign-on support that uses Lightweight Directory Access Protocol (LDAP)
The DS8000 system provides support for both unified sign-on functions
(available through the DS8000 Storage Management GUI), and the ability
to specify an existing Lightweight Directory Access Protocol (LDAP) server.
The LDAP server can have existing users and user groups that can be used
for authentication on the DS8000 system.
Setting up unified sign-on support for the DS8000 system is achieved by
using IBM Spectrum Control. For more information, see the IBM Spectrum
Control online product documentation in IBM Knowledge Center
(www.ibm.com/support/knowledgecenter) .

Note: Other supported user directory servers include IBM Directory Server
and Microsoft Active Directory.
Easy Tier
Easy Tier is designed to determine the appropriate tier of storage based on
data access requirements and then automatically and nondisruptively move
data, at the subvolume or sub-LUN level, to the appropriate tier on the
DS8000 system. Easy Tier is an optional feature that offers enhanced
capabilities through features such as auto-rebalancing, hot spot
management, rank depopulation, and manual volume migration.
Easy Tier enables the DS8880 system to automatically balance I/O access to
drives to avoid hot spots on arrays. Easy Tier can place data in the storage
tier that best suits the access frequency of the data. Highly accessed data
can be moved nondisruptively to a higher tier, and likewise cooler data is
moved to a lower tier (for example, to Nearline drives).
Easy Tier also can benefit homogeneous drive pools because it can move
data away from over-utilized arrays to under-utilized arrays to eliminate
hot spots and peaks in drive response times.
Z Synergy
The DS8880 storage system can work in cooperation with IBM z Systems
hosts to provide the following performance enhancement functions.

v Parallel Access Volumes and HyperPAV (also referred to as aliases)
v I/O Priority Manager with z/OS Workload Manager
v Extended Address Volumes
v High Performance FICON for IBM z Systems
v Quick initialization for IBM z Systems
Copy Services
The DS8880 storage system supports a wide variety of Copy Service
functions, including Remote Mirror, Remote Copy, and Point-in-Time
functions. The following includes key Copy Service functions:
v FlashCopy
v Remote Pair FlashCopy (Preserve Mirror)
v Remote Mirror and Copy:
– Metro Mirror
– Global Copy
– Global Mirror
– Metro/Global Mirror
– Multi-Target PPRC
– z/OS Global Mirror
– z/OS Metro/Global Mirror
Multitenancy support (resource groups)
Resource groups provide additional policy-based limitations. Resource
groups, together with the inherent volume addressing limitations, support
secure partitioning of Copy Services resources between user-defined
partitions. The process of specifying the appropriate limitations is
performed by an administrator using resource groups functions. DS CLI
support is available for resource groups functions.
Multitenancy can be supported in certain environments without the use of
resource groups, if the following constraints are met:
v Either Copy Services functions are disabled on all DS8000 systems that
share the same SAN (local and remote sites) or the landlord configures
the operating system environment on all hosts (or host LPARs) attached
to a SAN, which has one or more DS8000 systems, so that no tenant can
issue Copy Services commands.
v The z/OS Distributed Data Backup feature is disabled on all DS8000
systems in the environment (local and remote sites).
v Thin provisioned volumes (ESE or TSE) are not used on any DS8000
systems in the environment (local and remote sites).
v On z Systems hosts, there is only one tenant running in an LPAR, and
volume access is controlled so that a CKD base volume or alias volume
is accessible only by a single tenant's LPAR or LPARs.
I/O Priority Manager
The I/O Priority Manager function can help you effectively manage quality
of service levels for each application running on your system. This function
aligns distinct service levels to separate workloads in the system to help
maintain the efficient performance of each DS8000 volume. The I/O
Priority Manager detects when a higher-priority application is hindered by
a lower-priority application that is competing for the same system
resources. This detection might occur when multiple applications request
data from the same drives. When I/O Priority Manager encounters this
situation, it delays lower-priority I/O data to assist the more critical I/O
data in meeting its performance targets.
Use this function to consolidate more workloads on your system and to
ensure that your system resources are aligned to match the priority of your
applications.
The default setting for this feature is disabled.

Note: If the I/O Priority Manager LIC key is activated, you can enable
I/O Priority Manager on the Advanced tab of the System settings page in
the DS8000 Storage Management GUI.
Restriction of hazardous substances (RoHS)
The DS8880 system meets RoHS requirements. It conforms to the following
EC directives:
v Directive 2011/65/EU of the European Parliament and of the Council of
8 June 2011 on the restriction of the use of certain hazardous substances
in electrical and electronic equipment. It has been demonstrated that the
requirements specified in Article 4 have been met.
v EN 50581:2012 technical documentation for the assessment of electrical
and electronic products with respect to the restriction of hazardous
substances.

Logical configuration
You can use either the DS8000 Storage Management GUI or the DS CLI to
configure storage on the DS8000. Although the end result of storage configuration
is similar, each interface has specific terminology, concepts and procedures.

Note: LSS is synonymous with logical control unit (LCU) and subsystem
identification (SSID).

Logical configuration with DS8000 Storage Management GUI


Before you configure your storage system, it is important to understand the storage
concepts and sequence of system configuration.

Figure 9 on page 27 illustrates the concepts of configuration.

Figure 9. Logical configuration sequence

The following concepts are used in storage configuration.


Arrays
An array, also referred to as a managed array, is a group of storage devices
that provides capacity for a pool. An array generally consists of 7 or 8
drives that are managed as a Redundant Array of Independent Disks
(RAID).
Pools A storage pool is a collection of storage that identifies a set of storage
resources. These resources provide the capacity and management
requirements for arrays and volumes that have the same storage type,
either fixed block (FB) or count key data (CKD).
Volumes
A volume is a fixed amount of storage on a storage device.
LSS A logical subsystem (LSS) enables one or more host I/O interfaces to
access a set of devices.
Hosts A host is the computer system that interacts with the DS8000 storage
system. Hosts defined on the storage system are configured with a
user-designated host type that enables DS8000 to recognize and interact
with the host. Only hosts that are mapped to volumes can access those
volumes.

Logical configuration of the DS8000 storage system begins with managed arrays.
When you create storage pools, you assign the arrays to pools and then create
volumes in the pools. FB volumes are connected through host ports to an open
systems host. CKD volumes require that logical subsystems (LSSs) be created as
well so that they can be accessed by an IBM z Systems host.

Pools must be created in pairs to balance the storage workload. Each pool in the
pool pair is controlled by a processor node (either Node 0 or Node 1). Balancing
the workload helps to prevent one node from doing most of the work and results
in more efficient I/O processing, which can improve overall system performance.

Both pools in the pair must be formatted for the same storage type, either FB or
CKD storage. You can create multiple pool pairs to isolate workloads.

When you create a pair of pools, you can choose to automatically assign all
available arrays to the pools, or assign them manually afterward. If the arrays are
assigned automatically, the system balances them across both pools so that the
workload is distributed evenly across both nodes. Automatic assignment also
ensures that spares and device adapter (DA) pairs are distributed equally between
the pools.

If you are connecting to a z Systems host, you must create a logical subsystem
(LSS) before you can create CKD volumes.

You can create a set of volumes that share characteristics, such as capacity and
storage type, in a pool pair. The system automatically balances the volumes
between both pools. If the pools are managed by Easy Tier, the capacity in the
volumes is automatically distributed among the arrays. If the pools are not
managed by Easy Tier, you can choose to use the rotate capacity allocation method,
which stripes capacity across the arrays.

If the volumes are connecting to a z Systems host, the next steps of the
configuration process are completed on the host.

If the volumes are connecting to an open systems host, map the volumes to the
host, add host ports to the host, and then map the ports to the I/O ports on the
storage system.

FB volumes can only accept I/O from the host ports of hosts that are mapped to
the volumes. Host ports are zoned to communicate only with certain I/O ports on
the storage system. Zoning is configured either within the storage system by using
I/O port masking, or on the switch. Zoning ensures that the workload is spread
properly over I/O ports and that certain workloads are isolated from one another,
so that they do not interfere with each other.

The workload enters the storage system through I/O ports, which are on the host
adapters. The workload is then fed into the processor nodes, where it can be
cached for faster read/write access. If the workload is not cached, it is stored on
the arrays in the storage enclosures.

Logical configuration with DS CLI


Before you configure your storage system with the DS CLI, it is important to
understand IBM terminology for storage concepts and the storage hierarchy.

In the storage hierarchy, you begin with a physical disk. Logical groupings of eight
disks form an array site. Logical groupings of one array site form an array. After
you define your array storage type as CKD or fixed block, you can create a rank. A
rank is divided into a number of fixed-size extents. If you work with an
open-systems host, an extent is 1 GB. If you work in an IBM z Systems
environment, an extent is the size of an IBM 3390 Mod 1 disk drive.

After you create ranks, your physical storage can be considered virtualized.
Virtualization dissociates your physical storage configuration from your logical
configuration, so that volume sizes are no longer constrained by the physical size
of your arrays.

The available space on each rank is divided into extents. The extents are the
building blocks of the logical volumes. An extent is striped across all disks of an
array.

Extents of the same storage type are grouped to form an extent pool. Multiple
extent pools can create storage classes that provide greater flexibility in storage
allocation through a combination of RAID types, DDM size, DDM speed, and
DDM technology. This configuration allows a differentiation of logical volumes by
assigning them to the appropriate extent pool for the needed characteristics.
Different extent sizes for the same device type (for example, count-key-data or
fixed block) can be supported on the same storage unit. The different extent types
must be in different extent pools.

A logical volume is composed of one or more extents. A volume group specifies a
set of logical volumes. Identify different volume groups for different uses or
functions (for example, SCSI target, remote mirror and copy secondary volumes,
FlashCopy targets, and Copy Services). Access to the set of logical volumes that are
identified by the volume group can be controlled. Volume groups map hosts to
volumes. Figure 10 on page 30 shows a graphic representation of the logical
configuration sequence.
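
The following DS CLI sketch walks this hierarchy from array site to host
connection. It is illustrative only: the array site, array, rank, pool, volume,
volume group, and host identifiers, the capacity, and the WWPN are hypothetical
values, and the available options vary by DS CLI level:

  dscli> mkarray -raidtype 6 -arsite S1              # create an array from array site S1
  dscli> mkrank -array A0 -stgtype fb                # format the array as a fixed block rank
  dscli> mkextpool -rankgrp 0 -stgtype fb fb_pool    # create an extent pool on rank group 0
  dscli> chrank -extpool P0 R0                       # assign rank R0 to extent pool P0
  dscli> mkfbvol -extpool P0 -cap 100 -name vol_#h 1000-1003   # create four 100 GB volumes
  dscli> mkvolgrp -type scsimask -volume 1000-1003 open_vg     # group the volumes
  dscli> mkhostconnect -wwname 10000000C9A1B2C3 -profile "IBM pSeries - AIX" -volgrp V0 aix_host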

When volumes are created, you must initialize logical tracks from the host before
the host is allowed read and write access to the logical tracks on the volumes. The
Quick Initialization feature for open-system FB ESE volumes allows quicker
access to logical volumes. The volumes include host volumes and source volumes
that can be used in Copy Services relationships, such as FlashCopy or Remote Mirror
and Copy relationships. This process dynamically initializes logical volumes when
they are created or expanded, allowing them to be configured and placed online
more quickly.

You can specify LUN ID numbers through the graphical user interface (GUI) for
volumes in a map-type volume group. You can create a new volume group, add
volumes to an existing volume group, or add a volume group to a new or existing
host. Previously, gaps or holes in LUN ID numbers might result in a "map error"
status. The Status field is eliminated from the volume groups main page in the
GUI and the volume groups accessed table on the Manage Host Connections
page. You can also assign host connection nicknames and host port nicknames.
Host connection nicknames can be up to 28 characters, which is expanded from the
previous maximum of 12. Host port nicknames can be up to 32 characters, which is
expanded from the previous maximum of 16.

Figure 10. Logical configuration sequence

RAID implementation
RAID implementation improves data storage reliability and performance.

Redundant array of independent disks (RAID) is a method of configuring multiple
drives in a storage subsystem for high availability and high performance. The
collection of two or more drives presents the image of a single drive to the system.
If a single device failure occurs, data can be read or regenerated from the other
drives in the array.

RAID implementation provides fault-tolerant data storage by storing the data in
different places on multiple drives. By placing data on multiple drives, I/O
operations can overlap in a balanced way to improve the basic reliability and
performance of the attached storage devices.

Physical capacity for the storage system can be configured as RAID 5, RAID 6, or
RAID 10. RAID 5 can offer excellent performance for some applications, while
RAID 10 can offer better performance for selected applications, in particular, high
random, write content applications in the open systems environment. RAID 6
increases data protection by adding an extra layer of parity over the RAID 5
implementation.

RAID 6 is the recommended and default RAID type for all drives.
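
The RAID type is selected when an array is created. For example, through the
DS CLI mkarray command (the array site IDs here are illustrative):

  dscli> mkarray -raidtype 6 -arsite S1    # RAID 6, the recommended default
  dscli> mkarray -raidtype 10 -arsite S2   # RAID 10, for example for workloads with high random write content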

RAID 5 overview
RAID 5 is a method of spreading volume data across multiple drives. The storage
system supports RAID 5 arrays.

RAID 5 increases performance by supporting concurrent accesses to the multiple
drives within each logical volume. Data protection is provided by parity, which is
stored throughout the drives in the array. If a drive fails, the data on that drive can
be restored using all the other drives in the array along with the parity bits that
were created when the data was stored.

RAID 5 is not supported for drives larger than 1 TB.

RAID 6 overview
RAID 6 is a method of increasing the data protection of arrays with volume data
spread across multiple disk drives. The DS8000 series supports RAID 6 arrays.

RAID 6 increases data protection by adding an extra layer of parity over the RAID
5 implementation. By adding this protection, RAID 6 can restore data from an
array with up to two failed drives. The calculation and storage of extra parity
slightly reduces the capacity and performance compared to a RAID 5 array. RAID
6 is suitable for storage using archive class disk drives.

The default RAID type for all drives is RAID 6. For drives over 1 TB, RAID 6 or
RAID 10 selection is enforced.

RAID 10 overview
RAID 10 provides high availability by combining features of RAID 0 and RAID 1.
The DS8000 series supports RAID 10 arrays.

RAID 0 increases performance by striping volume data across multiple disk drives.
RAID 1 provides disk mirroring, which duplicates data between two disk drives.
By combining the features of RAID 0 and RAID 1, RAID 10 provides a second
optimization for fault tolerance.

RAID 10 implementation provides data mirroring from one disk drive to another
disk drive. RAID 10 stripes data across half of the disk drives in the RAID 10
configuration. The other half of the array mirrors the first set of disk drives. Access
to data is preserved if one disk in each mirrored pair remains available. In some
cases, RAID 10 offers faster data reads and writes than RAID 5 because it is not
required to manage parity. However, with half of the disk drives in the group used
for data and the other half used to mirror that data, RAID 10 arrays have less
capacity than RAID 5 arrays.

Logical subsystems
To facilitate configuration of a storage system, volumes are partitioned into groups
of volumes. Each group is referred to as a logical subsystem (LSS).

As part of the storage configuration process, you can configure the maximum
number of LSSs that you plan to use. The DS8000 can contain up to 255 LSSs and
each LSS can be connected to 16 other LSSs using a logical path. An LSS is a group
of up to 256 volumes that have the same storage type, either count key data (CKD)
for z Systems hosts or fixed block (FB) for open systems hosts.

An LSS is uniquely identified within the storage system by an identifier that
consists of two hex characters (0-9 or uppercase A-F) with which the volumes are
associated. A fully qualified LSS is designated by using the storage system identifier
and the LSS identifier, such as IBM.2107-921-12FA123/1E. The LSS identifiers are
important for Copy Services operations. For example, for FlashCopy operations,
you specify the LSS identifier when choosing source and target volumes because
the volumes can span LSSs in a storage system.

The storage system has a 64 K (65,536) volume address space that is partitioned
into 255 LSSs, where each LSS contains 256 logical volume numbers. The 255 LSSs
are assigned to one of 16 address groups, where each address group contains 16
LSSs, or 4 K volume addresses.
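
For example, in the four-character volume ID 1E07 (an illustrative value), the first
two hex characters (1E) identify the LSS, the last two characters (07) identify the
volume within that LSS, and the first character (1) identifies the address group
that contains LSS 1E.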

Storage system functions, including some that are associated with FB volumes,
might have dependencies on LSS partitions. For example:
v The LSS partitions and their associated volume numbers must identify volumes
that are specified for storage system Copy Services operations.
v To establish Remote Mirror and Copy pairs, a logical path must be established
between the associated LSS pair.
v FlashCopy pairs must reside within the same storage system.

If you increase storage system capacity, you can increase the number of LSSs that
you have defined. This modification to increase the maximum is a nonconcurrent
action. If you might need capacity increases in the future, leave the number of
LSSs set to the maximum of 255.

Note: If you reduce the CKD LSS limit to zero for z Systems hosts, the storage
system does not process Remote Mirror and Copy functions. The FB LSS limit
must be no lower than eight to support Remote Mirror and Copy functions for
open-systems hosts.

Allocation methods
Allocation methods (also referred to as extent allocation methods) determine the
means by which volume capacity is allocated within a pool. Allocation methods
include rotate capacity, rotate volumes, and managed.

All extents of the ranks that are assigned to an extent pool are independently
available for allocation to logical volumes. The extents for a LUN or volume are
logically ordered, but they do not have to come from one rank and the extents do
not have to be contiguous on a rank. This construction method of using fixed
extents to form a logical volume in the storage system allows flexibility in the
management of the logical volumes. You can delete volumes, resize volumes, and
reuse the extents of those volumes to create other volumes of different sizes. One
logical volume can be deleted without affecting the other logical volumes that are
defined on the same extent pool.

Because the extents are cleaned after you delete a volume, it can take some time
until these extents are available for reallocation. The reformatting of the extents is a
background process.

There are three allocation methods that are used by the storage system: rotate
capacity (also referred to as storage pool striping), rotate volumes, and managed.

Rotate capacity allocation method

The default allocation method is rotate capacity, which is also referred to as storage
pool striping. The rotate capacity allocation method is designed to provide the best
performance by striping volume extents across arrays in a pool. The storage system
keeps a sequence of arrays. The first array in the list is randomly picked at each
power-on of the storage subsystem. The storage system tracks the array in which
the last allocation started. The allocation of a first extent for the next volume starts
from the next array in that sequence. The next extent for that volume is taken from
the next rank in sequence, and so on. The system rotates the extents across the
arrays.

If you migrate a volume with a different allocation method to a pool that has the
rotate capacity allocation method, then the volume is reallocated. If you add arrays
to a pool, the rotate capacity allocation method reallocates the volumes by
spreading them across both existing and new arrays.

You can configure and manage this allocation method by using the DS8000 Storage
Management GUI, DS CLI, and DS Open API.

Rotate volumes allocation method

Volume extents can be allocated sequentially. In this case, all extents are taken from
the same array until there are enough extents for the requested volume size or the
array is full, in which case the allocation continues with the next array in the pool.

If more than one volume is created in one operation, the allocation for each
volume starts in another array. You might want to consider this allocation method
when you prefer to manage performance manually. The workload of one volume is
allocated to one array. This method makes the identification of performance
bottlenecks easier; however, by putting all the volume data onto just one array, you
might introduce a bottleneck, depending on your actual workload.

Managed allocation method

When a volume is managed by Easy Tier, the allocation method of the volume is
referred to as managed. Easy Tier allocates the capacity in ways that might differ
from both the rotate capacity and rotate volume allocation methods.

Management interfaces
You can use various IBM storage management interfaces to manage your DS8000
storage system.

These interfaces include the DS8000 Storage Management GUI, DS Command-Line
Interface (DS CLI), DS Open Application Programming Interface, DS8000
RESTful API, IBM Storage Mobile Dashboard, IBM Spectrum Control, and IBM
Copy Services Manager.

DS8000 Storage Management GUI


Use the DS8000 Storage Management GUI to configure and manage storage,
monitor performance, and manage Copy Services functions.

DS8000 Storage Management GUI is a web-based GUI that is installed on the
Hardware Management Console (HMC). You can access the DS8000 Storage
Management GUI from any network-attached system by using a supported web
browser. For a list of supported browsers, see “DS8000 Storage Management GUI
supported web browsers” on page 37.

You can access the DS8000 Storage Management GUI from a browser by using the
following web address, where HMC_IP is the IP address or host name of the HMC.
https://HMC_IP

If the DS8000 Storage Management GUI does not display as anticipated, clear the
cache for your browser, and try to log in again.

Notes:
v If the storage system is configured for NIST SP 800-131A security conformance, a
version of Java that is NIST SP 800-131A compliant must be installed on all
systems that run the DS8000 Storage Management GUI. For more information
about security requirements, see information about configuring your
environment for NIST SP 800-131A compliance in the IBM DS8000 series online
product documentation ( http://www.ibm.com/support/knowledgecenter/
ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/f2c_securitybp.html).
v User names and passwords are encrypted for HTTPS protocol. You cannot access
the DS8000 Storage Management GUI over the non-secure HTTP protocol (port
8451).

DS command-line interface
The IBM DS command-line interface (DS CLI) can be used to create, delete, modify,
and view Copy Services functions and the logical configuration of a storage
system. These tasks can be performed either interactively, in batch processes
(operating system shell scripts), or in DS CLI script files. A DS CLI script file is a
text file that contains one or more DS CLI commands and can be issued as a single
command. DS CLI can be used to manage logical configuration, Copy Services
configuration, and other functions for a storage system, including managing
security settings, querying point-in-time performance information or status of
physical resources, and exporting audit logs.
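
For example, the same command can be issued interactively, as a single
invocation, or from a script file. The HMC address, credentials, and script file
name here are illustrative:

  # Interactive mode: start a DS CLI session
  dscli -hmc1 hmc.example.com -user admin -passwd mypassword

  # Single-shot mode: run one command and exit
  dscli -hmc1 hmc.example.com -user admin -passwd mypassword lssi

  # Script mode: run the DS CLI commands that are listed in a text file
  dscli -hmc1 hmc.example.com -user admin -passwd mypassword -script volcreate.cli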

The DS CLI provides a full-function set of commands to manage logical
configurations and Copy Services configurations. The DS CLI is available in the
DS8000 Storage Management GUI. The DS CLI can also be installed on and is
supported in many different environments, including the following platforms:
v AIX 6.1, 7.1, 7.2
v Linux, Red Hat Enterprise Linux [RHEL] 4, 5, 6, and 7
v Linux, SUSE Linux, Enterprise Server (SLES) 10, 11, and 12

v VMware ESX 5, 5.1, 5.5, 6 Console
v IBM i 7.1, 7.2
v Oracle Solaris 7, 8, 9, 10, and 11
v Microsoft Windows Server 2008, 2012 and Windows 7, 8, 8.1, 10

Note: If the storage system is configured for NIST SP 800-131A security
conformance, a version of Java that is NIST SP 800-131A compliant must be
installed on all systems that run DS CLI. For more information about security
installed on all systems that run DS CLI. For more information about security
requirements, see documentation about configuring your environment for NIST SP
800-131A compliance in IBM Knowledge Center (www.ibm.com/support/
knowledgecenter/HW213_v7.4.0/).

DS Open Application Programming Interface


The DS Open Application Programming Interface (API) is a nonproprietary storage
management client application that supports routine LUN management activities.
Activities that are supported include: LUN creation, mapping and masking, and
the creation or deletion of RAID 5, RAID 6, and RAID 10 volume spaces.

The DS Open API supports these activities through the use of the Storage
Management Initiative Specification (SMI-S), as defined by the Storage Networking
Industry Association (SNIA).

The DS Open API helps integrate configuration management support into storage
resource management (SRM) applications, which help you to use existing SRM
applications and infrastructures. The DS Open API can also be used to automate
configuration management through customer-written applications. Either way, the
DS Open API presents another option for managing storage units by
complementing the use of the IBM Storage Management GUI web-based interface
and the DS command-line interface.

Note: The DS Open API supports the storage system and is an embedded
component.

You can implement the DS Open API without using a separate middleware
application. For example, you can implement it with the IBM Common
Information Model (CIM) agent, which provides a CIM-compliant interface. The
DS Open API uses the CIM technology to manage proprietary devices as open
system devices through storage management applications. The DS Open API is
used by storage management applications to communicate with a storage unit.

RESTful API
The RESTful API is an application on the DS8000 HMC for initiating simple
storage operations through the Web.

The RESTful (Representational State Transfer) API is a platform-independent
means by which to initiate create, read, update, and delete operations in the DS8000
and supporting storage devices. These operations are initiated with the HTTP
commands: POST, GET, PUT, and DELETE.

The RESTful API is intended for use in the development, testing, and debugging of
DS8000 client management infrastructures. You can use the RESTful API with a
CURL command or through standard Web browsers. For instance, you can use
DS8000 with the RESTClient add-on.
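
For example, a session might first obtain an authentication token and then issue a
query with it. This sketch is illustrative only: the port, paths, and JSON layout
shown here are assumptions, so confirm them against the DS8000 RESTful API
reference for your code level:

  # Request a token (endpoint and payload are assumptions; see the API reference)
  curl -k -X POST https://HMC_IP:8452/api/v1/tokens \
    -H "Content-Type: application/json" \
    -d '{"request":{"params":{"username":"admin","password":"mypassword"}}}'

  # Use the returned token on subsequent requests
  curl -k -X GET https://HMC_IP:8452/api/v1/systems \
    -H "X-Auth-Token: <token-from-previous-response>"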

IBM Storage Mobile Dashboard
IBM Storage Mobile Dashboard is a free application that provides basic monitoring
capabilities for storage systems. You can securely check the health and performance
status of your DS8000 storage system by viewing events and performance metrics.

To install IBM Storage Mobile Dashboard on an iOS device, open the App Store
app and search for “IBM Storage Mobile Dashboard.”

IBM Spectrum Control


IBM Spectrum Control is an integrated software solution that can help you
improve and centralize the management of your storage environment through the
integration of products. With IBM Spectrum Control, it is possible to manage
multiple DS8000 systems from a single point of control.

Note: IBM Spectrum Control is not required for the operation of a DS8000 storage
system. However, it is recommended. IBM Spectrum Control can be ordered and
installed as a software product on various servers and operating systems. When
you install IBM Spectrum Control, ensure that the selected version supports the
current system functions. Optionally, you can order a server on which IBM
Spectrum Control is preinstalled.

IBM Spectrum Control simplifies storage management by providing the following
benefits:
v Centralizing the management of heterogeneous storage network resources with
IBM storage management software
v Providing greater synergy between storage management software and
IBM storage devices
v Reducing the number of servers that are required to manage your software
infrastructure
v Migrating from basic device management to storage management applications
that provide higher-level functions

For more information, see IBM Spectrum Control online product documentation in
IBM Knowledge Center (www.ibm.com/support/knowledgecenter).

IBM Copy Services Manager


IBM Copy Services Manager controls Copy Services in storage environments. Copy
Services are features that are used by storage systems, such as DS8000, to
configure, manage, and monitor data-copy functions.

IBM Copy Services Manager provides both a graphical interface and command line
that you can use for configuring and managing Copy Services functions across
storage units. Copy Services include the point-in-time function – IBM FlashCopy,
and the remote mirror and copy functions – Metro Mirror, Global Mirror, and
Metro Global Mirror. Copy Services Manager can automate the administration and
configuration of these services; and monitor and manage copy sessions.

You can use Copy Services Manager to complete the following data replication
tasks and help reduce the downtime of critical applications:
v Plan for replication when you are provisioning storage
v Keep data on multiple related volumes consistent across storage systems for a
planned or unplanned outage
v Monitor and track replication operations

v Automate the mapping of source volumes to target volumes

Starting with DS8000 Version 8.1, Copy Services Manager also comes preinstalled
on the Hardware Management Console (HMC). Therefore, you can enable the
Copy Services Manager software that is already on the hardware system. Doing so
results in less setup time; and eliminates the need to maintain a separate server for
Copy Services functions.

You can also use Copy Services Manager to connect to an LDAP repository for
remote authentication. For more information, see the DS8000 online product
documentation at http://www.ibm.com/support/knowledgecenter/ST5GLJ/
ds8000_kcwelcome.html and search for topics that are related to remote
authentication.

For more information, see the Copy Services Manager online product
documentation at http://www.ibm.com/support/knowledgecenter/SSESK4/
csm_kcwelcome.html. The "What's new" topic provides details on the features
added for each version of Copy Services Manager that can be used by DS8000,
including HyperSwap for multi-target sessions, and incremental FlashCopy
support.

DS8000 Storage Management GUI supported web browsers


To access the DS8000 Storage Management GUI, you must ensure that your web
browser is supported and has the appropriate settings enabled.

The DS8000 Storage Management GUI supports the following web browsers:
Table 20. Supported web browsers

DS8000 version | Supported browsers
8.0 | Mozilla Firefox 38; Mozilla Firefox Extended Support Release (ESR) 38; Microsoft Internet Explorer 11; Google Chrome 43

IBM supports higher versions of the browsers as long as the vendors do not
remove or disable functionality that the product relies upon. For browser levels
higher than the versions that are certified with the product, customer support
accepts usage-related and defect-related service requests. As with operating system
and virtualization environments, if the support center cannot re-create the issue in
our lab, we might ask the client to re-create the problem on a certified browser
version to determine whether a product defect exists. Defects are not accepted for
cosmetic differences between browsers or browser versions that do not affect the
functional behavior of the product. If a problem is identified in the product, defects
are accepted. If a problem is identified with the browser, IBM might investigate
potential solutions or workarounds that the client can implement until a permanent
solution becomes available.

Enabling TLS 1.2 support


If the security requirements for your storage system require conformance with
NIST SP 800-131A, enable transport layer security (TLS) 1.2 on web browsers that
use SSL/TLS to access the DS8000 Storage Management GUI. See your web
browser documentation for instructions on enabling TLS 1.2. For Internet Explorer,
complete the following steps to enable TLS 1.2.
1. On the Tools menu, click Internet Options.

2. On the Advanced tab, under Settings, select Use TLS 1.2.

Note: Firefox, Release 24 and later, supports TLS 1.2. However, you must configure
Firefox to enable TLS 1.2 support.

For more information about security requirements, see .

Selecting browser security settings

You must select the appropriate web browser security settings to access the DS8000
Storage Management GUI. In Internet Explorer, use the following steps.
1. On the Tools menu, click Internet Options.
2. On the Security tab, select Internet and click Custom level.
3. Scroll to Miscellaneous, and select Allow META REFRESH.
4. Scroll to Scripting, and select Active scripting.

Configuring Internet Explorer to access the DS8000 Storage Management GUI

If DS8000 Storage Management GUI is accessed through IBM Spectrum Control
with Internet Explorer, complete the following steps to properly configure the web
browser.
1. Disable the Pop-up Blocker.

Note: If a message indicates that content that is not signed by a valid
security certificate is blocked, click the Information Bar at the top and select
Show blocked content.
2. Add the IP address of the DS8000 Hardware Management Console (HMC) to
the Internet Explorer list of trusted sites.

For more information, see your browser documentation.



Chapter 2. Hardware features
Use this information to assist you with planning, ordering, and managing your
DS8000 series.

The following table lists feature codes that are used to order hardware features for
DS8000 series.
Table 21. Feature codes for hardware features
Feature
code Feature Description
0101 Single-phase input power indicator,
200 - 220 V, 30 A
0102 Single-phase input power indicator,
220 - 240 V, 30 A
0170 Top expansion For models 985 and 85E, 986 and
86E, 988 and 88E, increases frame
from 40U to 46U
0200 Shipping weight reduction Maximum shipping weight of any
storage system base model or
expansion model does not exceed 909
kg (2000 lb) each. Packaging adds 120
kg (265 lb).
0400 BSMI certification documents Required when the storage system
model is shipped to Taiwan.
1050 Battery service modules Single-phase DC-UPS
1052 Battery service modules Three-phase DC-UPS
1055 Extended power line disturbance An optional feature that is used to
protect the storage system from a
power-line disturbance for up to 40
seconds.
1062 Single-phase power cord, 200 - 240 V, HBL360C6W, Pin and Sleeve
60 A, 3-pin connector Connector, IEC 60309, 2P3W
HBL360R6W, AC Receptacle, IEC 60309, 2P3W
1063 Single-phase power cord, 200 - 240 V, Inline Connector: not applicable
63 A, no connector
Receptacle: not applicable
1086 Three-phase high voltage (five-wire HBL530C6V02, Pin and Sleeve
3P+N+G), 380-415V (nominal), 30 A, Connector, IEC 60309, 4P5W
IEC 60309 5-pin customer connector
HBL530R6V02, AC Receptacle, IEC 60309, 4P5W
1087 Three-phase low voltage (four-wire HBL430C9W, Pin and Sleeve
3P+G), 200-240V, 30 A, IEC 60309 Connector, IEC 60309, 3P4W
4-pin customer connector
HBL430R9W, AC Receptacle, IEC 60309, 3P4W

1088 Three-phase high voltage (five-wire Inline Connector: not applicable
3P+N+G), 380-415V, 40 A, no customer
connector provided Receptacle: not applicable
1089 Three-phase low voltage (four-wire HBL460C9W, Pin and Sleeve
3P+G), 200-240V, 60 A, IEC 60309 Connector, IEC 60309, 3P4W
4-pin customer connector
HBL460R9W, AC Receptacle, IEC 60309, 3P4W
1101 5 ft. ladder for top exit cable For models 984 and 84E, 985 and
management 85E, 986 and 86E , 988 and 88E
1102 3 ft. platform ladder For models 984 and 84E, 985 and
85E, 986 and 86E , 988 and 88E
1103 Rolling step stool For models 984, 985, 986, and 988
1140 Primary management console A required feature that is installed in
the model 98x frame
1150 Secondary management console An optional feature that is installed
in the 98x frame
1241 Drive enclosure pair total Admin feature totaling all disk
enclosure pairs installed in the frame
1242 Standard drive enclosure pair For 2.5-inch disk drives
1244 Standard drive enclosure pair For 3.5-inch disk drives
1245 Standard drive enclosure pair For 400 GB flash drives
1246 Drive cable group A Connects the disk drives to the
device adapters within the same base
model 985 or 986
1247 Drive cable group B Connects the disk drives to the
device adapters in the first expansion
model 85E or 86E
1248 Drive cable group C Connects the disk drives from a
second expansion model 85E or 86E
to the base model 985 or 986 and first
expansion model 85E or 86E
1249 Drive cable group D Connects the disk drives from a third
expansion model 85E or 86E to a
second expansion model 85E or 86E
1251 Drive cable group E Connects the disk drives from a
fourth expansion model 85E or 86E
to a third expansion model 85E or
86E
1252 Extended drive cable group C For model 85E or 86E, 20 meter cable
to connect disk drives from a third
expansion model 85E or 86E to a
second expansion model 85E or 86E
1253 Extended drive cable group D For model 85E or 86E, 20 meter cable
to connect disk drives from a fourth
expansion model 85E or 86E to a
third expansion model 85E or 86E

1254 Extended drive cable group E For model 85E or 86E, 20 meter cable
to connect disk drives from a fifth
expansion model 85E or 86E to a
fourth expansion model 85E or 86E
1256 Standard drive enclosure pair For 800 GB flash drives
1257 Standard drive enclosure pair For 1.6 TB flash drives
1261 Drive cable group A Connects the disk drives to the
device adapters within the same base
model 984
1262 Drive cable group B Connects the disk drives to the
device adapters in the first expansion
model 84E
1263 Drive cable group C Connects the disk drives from a
second expansion model 84E to the
base model 984 and first expansion
model 84E
1266 Extended drive cable group C For model 84E, 20 meter cable to
connect disk drives from a second
expansion model 84E to the base
model 984 and first expansion model
84E
1303 I/O enclosure pair
1320 PCIe cable group 1 Connects device and host adapters in
an I/O enclosure pair to the
processor.
1321 PCIe cable group 2 Connects device and host adapters in
I/O enclosure pairs to the processor.
1400 Top-exit bracket for Fibre Channel
cable
1410 Fibre Channel cable 40 m (131 ft), 50 micron OM3 or
higher, multimode
1411 Fibre Channel cable 31 m (102 ft), 50 micron OM3 or
higher, multimode
1412 Fibre Channel cable 2 m (6.5 ft), 50 micron OM3 or
higher, multimode
1420 Fibre Channel cable 31 m (102 ft), 9 micron OS1 or higher,
single mode
1421 Fibre Channel cable 31 m (102 ft), 9 micron OS1 or higher,
single mode
1422 Fibre Channel cable 2 m (6.5 ft), 9 micron OS1 or higher,
single mode
1600 High Performance Flash Enclosure For flash cards
Gen2 pair
1610 400 GB 2.5-inch High Performance Flash card set (16 cards)
Flash Enclosure Gen2 flash cards set
1611 800 GB 2.5-inch High Performance Flash card set (16 cards)
Flash Enclosure Gen2 flash cards set

1612 1.6 TB 2.5-inch High Performance Flash card set (16 cards)
Flash Enclosure Gen2 flash cards set
1613 3.2 TB 2.5-inch High Performance Flash card set (16 cards)
Flash Enclosure Gen2 flash cards set
1699 High Performance Flash Enclosure Includes 16 fillers
Gen2 filler set
1761 External SKLM isolated-key appliance Model AP1 single server
configuration
1762 Secondary external SKLM isolated-key Model AP1 dual server configuration
appliance
1880 DS8000 Licensed Machine Code R8.0 Microcode bundle 88.x.xx.x for base
models 980 and 981
1881 DS8000 Licensed Machine Code R8.1 Microcode bundle 88.x.xx.x for base
models 980, 981, and 982
1882 DS8000 Licensed Machine Code R8.2 Microcode bundle 88.x.xx.x for base
models 984, 985, 986, and 988
1906 Earthquake resistance kit
1980 DS8000 Licensed Machine Code R8.0 Microcode bundle 88.x.xx.x for
expansion models 98B and 98E
1981 DS8000 Licensed Machine Code R8.1 Microcode bundle 88.x.xx.x for
expansion models 98B, 98E, and 98F
1982 DS8000 Licensed Machine Code R8.2 Microcode bundle 88.x.xx.x for
expansion models 84E, 85E, 86E, and
88E
2997 Disk enclosure filler set For 3.5-in. DDMs; includes eight
fillers
2999 Disk enclosure filler set For 2.5-in. DDMs; includes 16 fillers
3053 Device adapter pair 4-port, 8 Gb
3153 Fibre Channel host-adapter 4-port, 8 Gbps shortwave FCP and
FICON host adapter PCIe
3157 Fibre Channel host-adapter 8-port, 8 Gbps shortwave FCP and
FICON host adapter PCIe
3253 Fibre Channel host-adapter 4-port, 8 Gbps longwave FCP and
FICON host adapter PCIe
3257 Fibre Channel host-adapter 8-port, 8 Gbps longwave FCP and
FICON host adapter PCIe
3353 Fibre Channel host-adapter 4-port, 16 Gbps shortwave FCP and
FICON host adapter PCIe
3453 Fibre Channel host-adapter 4-port, 16 Gbps longwave FCP and
FICON host adapter PCIe
4223 64 GB system memory (6-core)
4224 128 GB system memory (6-core)
4225 256 GB system memory (6-core)
4324 128 GB system memory (8-core)
4325 256 GB system memory (8-core and 16-core)

4326 512 GB system memory (16-core)
4327 1 TB system memory (24-core)
4328 2 TB system memory (24-core)
4421 6-core POWER8 processors Requires feature code 4223, 4224, or
4225
4422 8-core POWER8 processors Requires feature code 4324 or 4325
4423 16-core POWER8 processors Requires feature code 4325 or 4326
4424 24-core POWER8 processors Requires feature code 4327 or 4328
4487 1 TB system memory (24-core)
4488 2 TB system memory (48-core)
4868 24-core POWER8 processors Requires feature code 4487
4888 48-core POWER8 processors Requires feature code 4488
5308 300 GB 15 K FDE disk-drive set SAS
5618 600 GB 15 K FDE disk-drive set SAS
5708 600 GB 10K FDE disk-drive set SAS
5768 1.2 TB 10K FDE disk-drive set SAS
5778 1.8 TB 10K FDE disk-drive set SAS
5868 4 TB 7.2 K FDE disk-drive set SAS
5878 6 TB 7.2 K FDE disk-drive set SAS
6158 400 GB FDE flash-drive set SAS
6258 800 GB FDE flash-drive set SAS
6358 1.6 TB SSD FDE drive set SAS

Storage complexes
A storage complex is a set of storage units that are managed by management
console units.

You can associate one or two management console units with a storage complex.
Each storage complex must use at least one of the management console units in
one of the storage units. You can add a second management console for
redundancy.

Management console
The management console supports storage system hardware and firmware
installation and maintenance activities.

The management console is a dedicated processor unit that is located inside your
storage system, and can automatically monitor the state of your system, and notify
you and IBM when service is required.

To provide continuous availability of your access to the management-console
functions, use an additional management console, especially for storage
environments that use encryption. Both management consoles share a keyboard
and display that are stored on the left side of the base frame.

Hardware specifics
The storage system models offer a high degree of availability and performance
through the use of redundant components that can be replaced while the system is
operating. You can use a storage system model with a mix of different operating
systems and clustered and nonclustered variants of the same operating systems.

Contributors to the high degree of availability and reliability include the structure
of the storage unit, the host systems that are supported, and the memory and
speed of the processors.

Storage system structure


The design of the storage system, which contains the base model and the
expansion models, contributes to the high degree of availability. The primary
components that support high availability within the storage unit are the storage
server, the processor complex, and the rack power control card.
Storage system
The storage unit contains a storage server and one or more pairs of storage
enclosures that are packaged in one or more frames with associated power
supplies, batteries, and cooling.
Storage server
The storage server consists of two processor complexes, two or more I/O
enclosures, and a pair of rack power control cards.
Processor complex
The processor complex controls and manages the storage server functions
in the storage system. The two processor complexes form a redundant pair
such that if either processor complex fails, the remaining processor
complex controls and manages all storage server functions.
Rack power control card
A redundant pair of rack power control (RPC) cards coordinate the power
management within the storage unit. The RPC cards are attached to the
service processors in each processor complex, the primary power supplies
in each frame, and indirectly to the fan/sense cards and storage enclosures
in each frame.

Disk drives, flash drives, and flash cards


DS8880 provides you with a choice of drives.

Flash cards are supported for High Performance Flash Enclosures Gen2, and flash
drives are supported for standard drive enclosures. The following drives are
available:
v 2.5-inch High Performance Flash Enclosure Gen2 flash cards with FDE
– 400 GB
– 800 GB
– 1.6 TB
v 2.5-inch flash drives with FDE
– 400 GB
– 800 GB
– 1.6 TB

v 2.5-inch disk drives with FDE
– 300 GB, 15 K RPM
– 600 GB, 15 K RPM
– 600 GB, 10 K RPM
– 1.2 TB, 10 K RPM
– 1.8 TB, 10 K RPM
v 3.5-inch disk drives with FDE
– 4 TB, 7.2 K RPM
– 6 TB, 7.2 K RPM

Drive maintenance policy


The DS8000 internal maintenance functions use an Enhanced Sparing process that
delays a service call for drive replacement if there are sufficient spare drives. All
drive repairs are managed according to Enhanced Sparing rules.

A minimum of two spare drives are allocated in a device adapter loop. Internal
maintenance functions continuously monitor and report (by using the call home
feature) to IBM when the number of drives in a spare pool reaches a preset
threshold. This design ensures continuous availability of devices while protecting
data and minimizing any service disruptions.

Replacing a drive is not recommended unless an error is generated that indicates
service is needed.

Host attachment overview


The DS8000 series provides various host attachments so that you can consolidate
storage capacity and workloads for open-systems hosts and z Systems.

The DS8000 series provides extensive connectivity using Fibre Channel adapters
across a broad range of server environments.

Host adapter intermix support


Both 4-port and 8-port host adapters (HAs) are available in DS8000. DS8000 can
have a maximum of four host adapters per I/O enclosure including 4-port 16 Gbps
adapters, 4- or 8-port 8 Gbps adapters, or a combination of each.

A maximum of 16 ports per I/O enclosure is supported, which provides for a
maximum of 128 ports in a system. Eight-port 8 Gbps adapters are allowed only in
slots C1 and C4. If an 8-port adapter is present in slot C1, no adapter can be
installed in slot C2. If an 8-port adapter is present in slot C4, no adapter can be
installed in slot C5.

The following table shows the host adapter plug order. The host adapter
installation order for the second frame (I/O enclosures 5 through 8) is the same as
for the first four I/O enclosures.
Table 22. Plug order for 4- and 8-port HA slots for two and four I/O enclosures

                                             Slot number
I/O enclosures                  C1    C2    C3    C4    C5    C6
For two I/O enclosures
Top I/O enclosure 1
Bottom I/O enclosure 3           3     7           1     5
Top I/O enclosure 2
Bottom I/O enclosure 4           2     8           4     6
For four I/O enclosures in a DS8884, DS8886 configuration
Top I/O enclosure 1              7    15           3    11
Bottom I/O enclosure 3           5    13           1     9
Top I/O enclosure 2              4    12           8    16
Bottom I/O enclosure 4           2    10           6    14

The following HA-type plug order is used during manufacturing when different
types of HA cards are installed.

1. 8-port 8 Gbps longwave host adapters
2. 8-port 8 Gbps shortwave host adapters
3. 4-port 16 Gbps longwave host adapters
4. 4-port 16 Gbps shortwave host adapters
5. 4-port 8 Gbps longwave host adapters
6. 4-port 8 Gbps shortwave host adapters

Open-systems host attachment with Fibre Channel adapters


You can attach a DS8000 series to an open-systems host with Fibre Channel
adapters.

Fibre Channel is a full-duplex, serial communications technology to interconnect
I/O devices and host systems that are separated by tens of kilometers.

The DS8000 series supports SAN speeds of up to 16 Gbps with the current 16 Gbps
host adapters, or up to 8 Gbps with the 8 Gbps host adapters. The DS8000 series
detects and operates at the greatest available link speed that is shared by both
sides of the system. The 16 Gbps host adapters have 4 ports, and the 8 Gbps host
adapters have either 4 or 8 ports.

Fibre Channel technology transfers data between the sources and the users of the
information. Fibre Channel connections are established between Fibre Channel
ports that reside in I/O devices, host systems, and the network that interconnects
them. The network consists of elements like switches, bridges, and repeaters that
are used to interconnect the Fibre Channel ports.

FICON attached z Systems hosts overview


The DS8000 series can be attached to FICON attached z Systems host operating
systems under specified adapter configurations.

Each storage system Fibre Channel adapter has either four ports or eight ports,
depending on the adapter type. Each port has a unique worldwide port name
(WWPN). You can configure the port to operate with the FICON upper-layer
protocol.
With Fibre Channel adapters that are configured for FICON, the storage system
provides the following configurations:
v Either fabric or point-to-point topologies
v A maximum of 32 ports on DS8884F storage systems, 64 ports on DS8884
storage systems, and a maximum of 128 ports on DS8886, DS8886F, and
DS8888F storage systems
v A maximum of 509 logins per Fibre Channel port
v A maximum of 8,192 logins per storage system
v A maximum of 1,280 logical paths on each Fibre Channel port
v Access to all 255 control-unit images (65,280 CKD devices) over each
FICON port
v A maximum of 512 logical paths per control unit image

Note: IBM z13™ servers support 32,000 devices per FICON host channel,
while IBM zEnterprise® EC12 and IBM zEnterprise BC12 servers support 24,000
devices per FICON host channel. Earlier z Systems servers support 16,384 devices
per FICON host channel. To fully access 65,280 devices, it is necessary to connect
multiple FICON host channels to the storage system. You can access the devices
through a Fibre Channel switch or FICON director to a single storage system
FICON port.
The storage system supports the following operating systems for z Systems
hosts:
v Linux
v Transaction Processing Facility (TPF)
v Virtual Storage Extended/Enterprise Systems Architecture
v z/OS
v z/VM®
v z/VSE®

For the most current information on supported hosts, operating systems, adapters,
and switches, go to the IBM System Storage Interoperation Center (SSIC) website
(www.ibm.com/systems/support/storage/config/ssic).

Subsystem device driver for open-systems


The IBM Multipath Subsystem Device Driver (SDD) supports open-systems hosts.

All storage system models include the IBM System Storage Multipath Subsystem
Device Driver (SDD). The SDD provides load balancing and enhanced data
availability capability in configurations with more than one I/O path between the
host server and the storage system. Load balancing can reduce or eliminate I/O
bottlenecks that occur when many I/O operations are directed to common devices
by using the same I/O path. The SDD can eliminate the single point of failure by
automatically rerouting I/O operations when a path failure occurs.

I/O load balancing
You can maximize the performance of an application by spreading the I/O load
across processor nodes, arrays, and device adapters in the storage system.

During an attempt to balance the load within the storage system, placement of
application data is the determining factor. The following resources are the most
important to balance, roughly in order of importance:
v Activity to the RAID drive groups. Use as many RAID drive groups as possible
for the critical applications. Most performance bottlenecks occur because a few
drives are overloaded. Spreading an application across multiple RAID drive
groups ensures that as many drives as possible are available. This is extremely
important for open-system environments where cache-hit ratios are usually low.
v Activity to the nodes. When selecting RAID drive groups for a critical
application, spread them across separate nodes. Because each node has separate
memory buses and cache memory, this maximizes the use of those resources.
v Activity to the device adapters. When selecting RAID drive groups within a
cluster for a critical application, spread them across separate device adapters.
v Activity to the Fibre Channel ports. Use the IBM Multipath Subsystem Device
Driver (SDD) or similar software for other platforms to balance I/O activity
across Fibre Channel ports.

Note: For information about SDD, see IBM Multipath Subsystem Device Driver
User's Guide (http://www-01.ibm.com/support/
docview.wss?uid=ssg1S7000303). This document also describes the product
engineering tool, the ESSUTIL tool, which is supported in the pcmpath
commands and the datapath commands.
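
For example, on a host where SDD is installed, you can verify that I/O activity
is spread across the available paths with the datapath command. This is an
illustrative sketch; the exact output format depends on the SDD release and the
host platform:

   datapath query device

The output lists each storage system volume with its paths and the select count
for each path, so you can confirm that no single Fibre Channel port is carrying
a disproportionate share of the I/O load.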

Storage consolidation
When you use a storage system, you can consolidate data and workloads from
different types of independent hosts into a single shared resource.

You can mix production and test servers in an open systems environment or mix
open systems and z Systems hosts. In this type of environment, servers rarely, if
ever, contend for the same resource.

Although sharing resources in the storage system has advantages for storage
administration and resource sharing, there are more implications for workload
planning. The benefit of sharing is that a larger resource pool (for example, drives
or cache) is available for critical applications. However, you must ensure that
uncontrolled or unpredictable applications do not interfere with critical work. This
requires the same workload planning that you use when you mix various types of
work on a server.

If your workload is critical, consider isolating it from other workloads. To isolate
the workloads, place the data as follows:
v On separate RAID drive groups. Data for open systems or z Systems hosts is
automatically placed on separate arrays, which reduces the contention for drive
use.
v On separate device adapters.

v In separate processor nodes, which isolate use of memory buses,
microprocessors, and cache resources. Before you decide, verify that the isolation
of your data to a single node provides adequate data access performance for
your application.

Count key data


In count-key-data (CKD) disk data architecture, the data field stores the user data.

Because data records can be variable in length, in CKD they all have an associated
count field that indicates the user data record size. The key field enables a
hardware search on a key. The commands used in the CKD architecture for
managing the data and the storage devices are called channel command words.

Fixed block
In fixed block (FB) architecture, the data (the logical volumes) is mapped over
fixed-size blocks or sectors.

With an FB architecture, the location of any block can be calculated to retrieve that
block. This architecture uses tracks and cylinders. A physical disk contains multiple
blocks per track, and a cylinder is the group of tracks that exists under the disk
heads at one point in time without performing a seek operation.

T10 DIF support


American National Standards Institute (ANSI) T10 Data Integrity Field (DIF)
standard is supported on IBM z Systems for SCSI end-to-end data protection on
fixed block (FB) LUN volumes. This support applies to the IBM DS8880 unit (98x
models). IBM z Systems support applies to FCP channels only.

IBM z Systems provides added end-to-end data protection between the operating
system and the DS8880 unit. This support adds protection information consisting
of CRC (Cyclic Redundancy Checking), LBA (Logical Block Address), and host
application tags to each sector of FB data on a logical volume.

Data protection using the T10 Data Integrity Field (DIF) on FB volumes includes
the following features:
v Ability to convert logical volume formats between standard and protected
formats supported through PPRC between standard and protected volumes
v Support for earlier versions of T10-protected volumes on the DS8880 with
non-T10 DIF-capable hosts
v Allows end-to-end checking at the application level of data stored on FB disks
v Additional metadata stored by the storage facility image (SFI) allows host
adapter-level end-to-end checking data to be stored on FB disks independently
of whether the host uses the DIF format.

Notes:
v This feature requires changes in the I/O stack to take advantage of all the
capabilities the protection offers.
v T10 DIF volumes can be used by any type of Open host with the exception of
iSeries, but active protection is supported only for Linux on IBM z Systems or
AIX on IBM Power Systems™. The protection can only be active if the host
server has T10 DIF enabled.

v T10 DIF volumes can accept SCSI I/O of either T10 DIF or standard type, but if
the FB volume type is standard, then only standard SCSI I/O is accepted.

Logical volumes
A logical volume is the storage medium that is associated with a logical disk. It
typically resides on two or more hard disk drives.

For the storage unit, the logical volumes are defined at logical configuration time.
For count-key-data (CKD) servers, the logical volume size is defined by the device
emulation mode and model. For fixed block (FB) hosts, you can define each FB
volume (LUN) with a minimum size of a single block (512 bytes) to a maximum
size of 2³² blocks or 16 TB.

A logical device that has nonremovable media has one and only one associated
logical volume. A logical volume is composed of one or more extents. Each extent
is associated with a contiguous range of addressable data units on the logical
volume.

Allocation, deletion, and modification of volumes


Extent allocation methods (namely, rotate volumes and pool striping) determine
the means by which actions are completed on storage system volumes.

All extents of the ranks assigned to an extent pool are independently available for
allocation to logical volumes. The extents for a LUN or volume are logically
ordered, but they do not have to come from one rank and the extents do not have
to be contiguous on a rank. This construction method of using fixed extents to
form a logical volume in the storage system allows flexibility in the management
of the logical volumes. You can delete volumes, resize volumes, and reuse the
extents of those volumes to create other volumes of different sizes. One logical
volume can be deleted without affecting the other logical volumes defined on the
same extent pool.

Because the extents are cleaned after you delete a volume, it can take some time
until these extents are available for reallocation. The reformatting of the extents is a
background process.

There are two extent allocation methods used by the storage system: rotate
volumes and storage pool striping (rotate extents).

Storage pool striping: extent rotation

The default storage allocation method is storage pool striping. The extents of a
volume can be striped across several ranks. The storage system keeps a sequence
of ranks. The first rank in the list is randomly picked at each power on of the
storage subsystem. The storage system tracks the rank in which the last allocation
started. The allocation of a first extent for the next volume starts from the next
rank in that sequence. The next extent for that volume is taken from the next rank
in sequence, and so on. The system rotates the extents across the ranks.

If you migrate an existing non-striped volume to the same extent pool with a
rotate extents allocation method, then the volume is "reorganized." If you add more
ranks to an existing extent pool, then reorganizing the existing striped volumes
spreads them across both existing and new ranks.

You can configure and manage storage pool striping by using the DS Storage
Manager, DS CLI, and DS Open API. The default extent allocation method (EAM)
option that is applied to a logical volume is rotate extents. The rotate extents
option is designed to provide the best performance by striping volume extents
across ranks in an extent pool.

Managed EAM: Once a volume is managed by Easy Tier, the EAM of the volume
is changed to managed EAM, which can result in placement of the extents
differing from the rotate volume and rotate extent rules. The EAM only changes
when a volume is manually migrated to a non-managed pool.

Rotate volumes allocation method

Extents can be allocated sequentially. In this case, all extents are taken from the
same rank until there are enough extents for the requested volume size or the rank
is full, in which case the allocation continues with the next rank in the extent pool.

If more than one volume is created in one operation, the allocation for each
volume starts in another rank. When several volumes are allocated, the system
rotates through the ranks. You might want to consider this allocation method
when you prefer to manage performance manually. The workload of one volume
goes to one rank. This method makes the identification of performance bottlenecks
easier; however, by putting all the volume's data onto just one rank, you might
introduce a bottleneck, depending on your actual workload.
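
For example, you can specify the extent allocation method when you create a
volume with the DS CLI. The following sketch assumes that extent pool P1 and
volume IDs 1000 and 1001 are available in your configuration; the -eam
parameter selects either storage pool striping (rotateexts) or the rotate
volumes method (rotatevols):

   dscli> mkfbvol -extpool P1 -cap 100 -eam rotateexts -name stripe_vol 1000
   dscli> mkfbvol -extpool P1 -cap 100 -eam rotatevols -name serial_vol 1001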

LUN calculation
The storage system uses a volume capacity algorithm (calculation) to provide a
logical unit number (LUN).

In the storage system, physical storage capacities are expressed in powers of 10.
Logical or effective storage capacities (logical volumes, ranks, extent pools) and
processor memory capacities are expressed in powers of 2. Both of these
conventions are used for logical volume effective storage capacities.

On open volumes with 512 byte blocks (including T10-protected volumes), you can
specify an exact block count to create a LUN. You can specify a standard LUN size
(which is expressed as an exact number of binary GiBs (2³⁰ bytes)) or you can specify an
ESS volume size (which is expressed in decimal GBs (10⁹ bytes) accurate to 0.1 GB). The
unit of storage allocation for fixed block open systems volumes is one extent. The
extent size for open volumes is either exactly 1 GiB or 16 MiB. Any logical
volume that is not an exact multiple of 1 GiB does not use all the capacity in the
last extent that is allocated to the logical volume. Supported block counts are from
1 to 4 194 304 blocks (2 binary TiB) in increments of one block. Supported sizes are
from 1 to 16 TiB in increments of 1 GiB. The supported ESS LUN sizes are limited
to the exact sizes that are specified from 0.1 to 982.2 GB (decimal) in increments of
0.1 GB and are rounded up to the next larger 32 K byte boundary. The ESS LUN
sizes do not result in standard LUN sizes. Therefore, they can waste capacity.
However, the unused capacity is less than one full extent. ESS LUN sizes are
typically used when volumes must be copied between the storage system and ESS.
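
As a worked illustration of these rounding rules (not output from the system),
an ESS volume size of 0.5 GB decimal is 500 000 000 bytes. Rounding up to the
next 32 KB (32 768-byte) boundary gives 15 259 x 32 768 = 500 006 912 bytes.
With 1 GiB (1 073 741 824-byte) extents, the volume occupies one extent, and the
remaining capacity in that extent (approximately 573 MB) is allocated but
unused.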

On open volumes with 520 byte blocks, you can select one of the supported LUN
sizes that are used on IBM i processors to create a LUN. The operating system uses
8 of the bytes in each block. This leaves 512 bytes per block for your data. Variable
volume sizes are also supported.

Table 23 shows the disk capacity for the protected and unprotected models.
Logically unprotecting a storage LUN allows the IBM i host to start system level
mirror protection on the LUN. The IBM i system level mirror protection allows
normal system operations to continue running in the event of a failure in an HBA,
fabric, connection, or LUN on one of the LUNs in the mirror pair.

Note: On IBM i, logical volume sizes in the range 17.5 GB to 141.1 GB are
supported as load source units. Logical volumes smaller than 17.5 GB or larger
than 141.1 GB cannot be used as load source units.
Table 23. Capacity and models of disk volumes for IBM i hosts running IBM i operating
system
Size Protected model Unprotected model
8.5 GB A01 A81
17.5 GB A02 A82
35.1 GB A05 A85
70.5 GB A04 A84
141.1 GB A06 A86
282.2 GB A07 A87
1 GB to 2000 GB 099 050

On CKD volumes, you can specify an exact cylinder count or a standard volume
size to create a LUN. The standard volume size is expressed as an exact number of
Mod 1 equivalents (which is 1113 cylinders). The unit of storage allocation for CKD
volumes is one CKD extent. The extent size for a CKD volume is either exactly a
Mod-1 equivalent (which is 1113 cylinders), or it is 21 cylinders when using the
small-extents option. Any logical volume that is not an exact multiple of 1113
cylinders (1 extent) does not use all the capacity in the last extent that is allocated
to the logical volume. For CKD volumes that are created with 3380 track formats,
the number of cylinders (or extents) is limited to either 2226 (1 extent) or 3339 (2
extents). For CKD volumes that are created with 3390 track formats, you can
specify the number of cylinders in the range of 1 - 65520 (x'0001' - x'FFF0') in
increments of one cylinder, for a standard (non-EAV) 3390. The allocation of an
EAV volume is expressed in increments of 3390 mod1 capacities (1113 cylinders)
and can be expressed as integral multiples of 1113 between 65,667 - 1,182,006
cylinders or as the number of 3390 mod1 increments in the range of 59 - 1062.

Extended address volumes for CKD


Count key data (CKD) volumes now support the additional capacity of 1 TB. The 1
TB capacity is an increase in volume size from the previous 223 GB.

This increased volume capacity is referred to as extended address volumes (EAV)
and is supported by the 3390 Model A. Use a maximum size volume of up to
1,182,006 cylinders for IBM z/OS. This support is available for z/OS version 1.12,
and later.

You can create a 1 TB IBM z Systems CKD volume on the DS8880.

A z Systems CKD volume is composed of one or more extents from a CKD extent
pool. CKD extents are 1113 cylinders in size. When you define a z Systems CKD
volume, you must specify the number of cylinders that you want for the volume.

The storage system and z/OS have limits for the CKD EAV sizes. You can define
CKD volumes with up to 1,182,006 cylinders, about 1 TB on the DS8880.

If the number of cylinders that you specify is not an exact multiple of 1113
cylinders, then some space in the last allocated extent is wasted. For example, if
you define 1114 or 3340 cylinders, 1112 cylinders are wasted. For maximum storage
efficiency, consider allocating volumes that are exact multiples of 1113 cylinders. In
fact, multiples of 3339 cylinders should be considered for future compatibility. If
you want to use the maximum number of cylinders for a volume (that is 1,182,006
cylinders), you are not wasting cylinders, because it is an exact multiple of 1113
(1,182,006 divided by 1113 is exactly 1062). This size is also an even multiple (354)
of 3339, a model 3 size.
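
As an illustrative DS CLI sketch (extent pool P2 and volume ID 0000 are
placeholders), the capacity of a CKD volume is specified in cylinders, so
choosing an exact multiple of 1113 avoids wasted space in the last extent:

   dscli> mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -name eav_vol 0000

In this example, 262,668 cylinders is exactly 236 multiples of 1113, so no
capacity is wasted in the final extent.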

Quick initialization
Quick initialization improves device initialization speed and allows a Copy
Services relationship to be established after a device is created.

DS8000 systems support quick volume initialization for z Systems environments.
This support helps users who frequently delete volumes by reconfiguring capacity
without waiting for initialization. Quick initialization initializes the data logical
tracks or blocks within a specified extent range on a logical volume with the
appropriate initialization pattern for the host.

Normal read and write access to the logical volume is allowed during the
initialization process. Therefore, the extent metadata must be allocated and
initialized before the quick initialization function is started. Depending on the
operation, the quick initialization can be started for the entire logical volume or for
an extent range on the logical volume.

Chapter 3. Data management features
The storage system is designed with many management features that allow you to
securely process and access your data according to your business needs, 24 hours
a day and 7 days a week.

This section contains information about the data management features in your
storage system. Use the information in this section to assist you with planning,
ordering licenses, and managing your storage system data management features.

| Transparent cloud tiering


| Transparent cloud tiering provides a native cloud storage tier.

| Transparent cloud tiering is a software-defined capability that enables the use of
| cloud object storage (public, private, or on-premises) as a secure, reliable,
| transparent storage tier that is natively integrated with DS8880. Transparent cloud
| tiering enables a DS8880 to migrate and recall data in cloud storage. Existing z/OS
| software manages transparent cloud tiering and attaches metadata to the cloud
| objects.

| Transparent cloud tiering addresses the problem of enterprise data that grows at
| an alarming rate. Inactive data constitutes a large proportion of data in the
| enterprise. By moving inactive data to cloud object storage, transparent cloud
| tiering frees up high-performance storage capacity to be used for more active data.
| Migrating less active data to a cloud storage tier can result in cost savings.

Dynamic volume expansion


Dynamic volume expansion is the capability of the DS8000 series to increase
volume capacity up to a maximum size while volumes are online to a host and not
in a Copy Services relationship.

Dynamic volume expansion increases the capacity of open systems and z Systems
volumes, while the volume remains connected to a host system. This capability
simplifies data growth by providing volume expansion without taking volumes
offline.

Some operating systems do not support a change in volume size. Therefore, a host
action is required to detect the change after the volume capacity is increased.

The following volume sizes are the maximum that are supported for each storage
type.
v Open systems FB volumes: 16 TB.
v z Systems CKD volume types 3390 model 9 and custom: 65520 cylinders
v z Systems CKD volume type 3390 model 3: 3339 cylinders
v z Systems CKD volume type 3390 model A: 1,182,006 cylinders

Note: Volumes cannot be in Copy Services relationships (point-in-time copy,
FlashCopy SE, Metro Mirror, Global Mirror, Metro/Global Mirror, and z/OS Global
Mirror) during expansion.

Count key data and fixed block volume deletion prevention
By default, DS8000 attempts to prevent volumes that are online and in use from
being deleted. The DS CLI and DS Storage Manager provide an option to force
the deletion of count key data (CKD) and fixed block (FB) volumes that are in use.

For CKD volumes, in use means that the volumes are participating in a Copy
Services relationship or are in a pathgroup. For FB volumes, in use means that the
volumes are participating in a Copy Services relationship or there is I/O access to
the volume in the last five minutes.

If you specify the -safe option when you delete an FB volume, the system
determines whether the volumes are assigned to non-default volume groups. If the
volumes are assigned to a non-default (user-defined) volume group, the volumes
are not deleted.

If you specify the -force option when you delete a volume, the storage system
deletes volumes regardless of whether the volumes are in use.
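
For example, the following DS CLI sketch (the volume ID 1000 is a placeholder)
shows both options:

   dscli> rmfbvol -safe 1000
   dscli> rmfbvol -force 1000

With -safe, the volume is deleted only if it is not assigned to a user-defined
volume group; with -force, the volume is deleted even if it is in use.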

Thin provisioning
Thin provisioning defines logical volume sizes that are larger than the physical
capacity installed on the system. The volume allocates capacity on an as-needed
basis as a result of host-write actions.

The thin provisioning feature enables the creation of extent space efficient logical
volumes. Extent space efficient volumes are supported for FB and CKD volumes
and are supported for all Copy Services functionality, including FlashCopy targets
where they provide a space efficient FlashCopy capability.
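
For example, an extent space efficient FB volume can be created with the DS CLI
by specifying the storage allocation method. This sketch assumes that extent
pool P1 and volume ID 1100 exist in your configuration:

   dscli> mkfbvol -extpool P1 -cap 500 -sam ese -name thin_vol 1100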

Extent Space Efficient (ESE) capacity controls for thin provisioning
Use of thin provisioning can affect the amount of storage capacity that you choose
to order. ESE capacity controls allow you to allocate storage appropriately.

With a mixture of thin-provisioned (ESE) and fully provisioned (non-ESE)
volumes in an extent pool, a method is needed to dedicate some of the extent-pool
storage capacity for ESE user data usage, as well as to limit the ESE user data
usage within the extent pool. Also needed is the ability to detect when the
available storage space within the extent pool for ESE volumes is running out.

ESE capacity controls provide extent pool attributes to limit the maximum extent
pool storage available for ESE user data usage, and to guarantee a proportion of
the extent pool storage to be available for ESE user data usage.

An SNMP trap that is associated with the ESE capacity controls notifies you when
the ESE extent usage in the pool exceeds an ESE extent threshold set by you. You
are also notified when the extent pool is out of storage available for ESE user data
usage.

ESE capacity controls include the following attributes:

ESE Extent Threshold
The percentage that is compared to the actual percentage of storage
capacity available for ESE customer extent allocation when determining the
extent pool ESE extent status.
ESE Extent Status
One of the three following values:
v 0: the percent of the available ESE capacity is greater than the ESE extent
threshold
v 1: the percent of the available ESE capacity is greater than zero but less
than or equal to the ESE extent threshold
v 10: the percent of the available ESE capacity is zero
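
For example (an illustrative reading of these attributes, not system output), if
the ESE extent threshold is set to 15% and 10% of the pool capacity remains
available for ESE extent allocation, the ESE extent status is 1 and an SNMP trap
notifies you that the threshold is exceeded. When no capacity remains available
for ESE allocation, the status is 10 and you are notified that the pool is out of
storage for ESE user data.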

Note: When the size of the extent pool remains fixed or is only increased, the
allocatable physical capacity remains greater than or equal to the allocated physical
capacity. However, a reduction in the size of the extent pool can cause the
allocatable physical capacity to become less than the allocated physical capacity in
some cases.

For example, if the user requests that one of the ranks in an extent pool be
depopulated, the data on that rank is moved to the remaining ranks in the pool,
causing the rank to become unassigned and removed from the pool. The user is
advised to inspect the limits and threshold on the extent pool following any
changes to the size of the extent pool to ensure that the specified values are still
consistent with the user's intentions.

IBM Easy Tier


Easy Tier is a DS8000 series optional feature that is provided at no cost. Its
capabilities include manual volume capacity rebalance, auto performance
rebalancing in both homogeneous and hybrid pools, hot spot management, rank
depopulation, manual volume migration, and thin provisioning support (ESE
volumes only). Easy Tier determines the appropriate tier of storage that is based on
data access requirements and then automatically and nondisruptively moves data,
at the subvolume or sub-LUN level, to the appropriate tier in the storage system.

Use Easy Tier to dynamically move your data to the appropriate drive tier in your
storage system with its automatic performance monitoring algorithms. You can use
this feature to increase the efficiency of your flash drives and flash cards and the
efficiency of all the tiers in your storage system.

You can use the features of Easy Tier between three tiers of storage within a
DS8880.

Easy Tier features help you to effectively manage your system health, storage
performance, and storage capacity automatically. Easy Tier uses system
configuration and workload analysis with warm demotion to achieve effective
overall system health. Simultaneously, data promotion and auto-rebalancing
address performance while cold demotion works to address capacity.

In automatic mode, Easy Tier data in memory persists in local storage or storage in
the peer server, ensuring the Easy Tier configurations are available at failover, cold
start, or Easy Tier restart.

With Easy Tier Application, you can also assign logical volumes to a specific tier.
This can be useful when certain data is accessed infrequently, but needs to always
be highly available.

Easy Tier Application is enhanced by two related functions:


v Easy Tier Application for IBM z Systems provides comprehensive
data-placement management policy support from application to storage.
v Easy Tier application controls over workload learning and data migration
provides a granular pool-level and volume-level Easy Tier control as well as
volume-level tier restriction where a volume can be excluded from the Nearline
tier.

The Easy Tier Heat Map Transfer utility replicates Easy Tier primary storage
workload learning results to secondary storage sites, synchronizing performance
characteristics across all storage systems. In the event of data recovery, storage
system performance is not sacrificed.

With the DS8000, you can also use Easy Tier in automatic mode to help with the
management of your ESE thin provisioning on fixed block (FB) or count key data
(CKD) volumes.

An additional feature provides the capability for you to use Easy Tier in manual
mode for thin provisioning. Rank depopulation is supported on ranks with ESE
volumes allocated (extent space-efficient) or auxiliary volumes.

Use the capabilities of Easy Tier to support:


Three tiers
Using three tiers and efficient algorithms improves system performance
and cost effectiveness.
Five types of drives are managed in up to three different tiers by Easy Tier
within a managed pool. The drives within a tier must be homogeneous.
v Tier 1: flash cards and flash drives
v Tier 2: SAS (10-K or 15-K RPM) disk drives
v Tier 3: Nearline (7.2-K RPM) disk drives
If both 10-K and 15-K RPM disk drives are in the same extent pool, the
disk drives are managed as a single tier. The flash cards and flash drives
are managed as a single tier. In both of these cases, the rank saturation for
different rank types (for example, 10-K RAID-5 and 15-K RAID-5) can be
different. The workload rebalancing within a single tier takes the rank
saturation into consideration when attempting to achieve an equal level of
saturation across the ranks within a tier.
Cold demotion
Cold data (or extents) stored on a higher-performance tier is demoted to a
more appropriate tier. Easy Tier is available with two-tier disk-drive pools
and three-tier pools. Sequential bandwidth is moved to the lower tier to
increase the efficient use of your tiers.
Warm demotion
Active data that has larger bandwidth is demoted from either tier one
(flash cards and flash drives) or tier two (Enterprise) to SAS Enterprise or
Nearline SAS. Warm demotion is triggered whenever the higher tier is over
its bandwidth capacity. Selected warm extents are demoted to allow the
higher tier to operate at its optimal load. Warm demotes do not follow a
predetermined schedule.

Manual volume or pool rebalance
Volume rebalancing relocates the smallest number of extents of a volume
and restripes those extents on all available ranks of the extent pool.
Auto-rebalancing
Automatically balances the workload of the same storage tier within both
the homogeneous and the hybrid pool that is based on usage to improve
system performance and resource use. Use the auto-rebalancing functions
of Easy Tier to manage a combination of homogeneous and hybrid pools,
including relocating hot spots on ranks. With homogeneous pools, systems
with only one tier can use Easy Tier technology to optimize their RAID
array usage.
Rank depopulation
Allows ranks that have extents (data) allocated to them to be unassigned
from an extent pool by using extent migration to move extents from the
specified ranks to other ranks within the pool.
Thin provisioning
Support for the use of thin provisioning is available on ESE and standard
volumes. The use of TSE volumes (FB and CKD) is not supported.

Easy Tier provides a performance monitoring capability, regardless of whether the
Easy Tier feature is activated. Easy Tier uses the monitoring process to determine
what data to move and when to move it when you use automatic mode. You can
enable monitoring independently (with or without the Easy Tier feature activated)
for information about the behavior and benefits that can be expected if automatic
mode were enabled.

Data from the monitoring process is included in a summary report that you can
download to your Windows system. Use the IBM DS8000 Storage Tier Advisor
Tool application to view the data when you point your browser to that file.

Prerequisites

The following conditions must be met to enable Easy Tier:


v The Easy Tier feature is enabled (required for both manual and automatic mode,
except when monitoring is set to All Volumes).
v For automatic mode to be active, the following conditions must be met:
– Easy Tier automatic mode monitoring is set to either All or Auto mode.
– For Easy Tier to manage pools, the Auto Mode Volumes must be set to either
Tiered Pools or All Pools.

The drive combinations that you can use with your three-tier configuration, and
with the migration of your ESE volumes, are Flash, Enterprise, and Nearline.
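
As an illustrative DS CLI sketch (the storage image ID shown is a placeholder),
the monitoring and automatic mode controls can be set at the storage image
level:

   dscli> chsi -etmonitor all -etautomode tiered IBM.2107-75ABCD1

This example enables Easy Tier monitoring for all volumes and limits automatic
management to tiered (hybrid) pools.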

Easy Tier: automatic mode


Use of the automatic mode of Easy Tier requires the Easy Tier feature.

In Easy Tier, both IOPS and bandwidth algorithms determine when to migrate
your data. This process can help you improve performance.

Use automatic mode to have Easy Tier relocate extents to the most appropriate
storage tier in a hybrid pool, which is based on usage. Because workloads typically
concentrate I/O operations on only a subset of the extents within a volume or
LUN, automatic mode identifies the subset of the frequently accessed extents and
relocates them to the higher-performance storage tier.

Subvolume or sub-LUN data movement is an important option to consider in
volume movement because not all data at the volume or LUN level becomes hot
data. For any workload, there is a distribution of data that is considered either hot
or cold, which can result in significant overhead that is associated with moving
entire volumes between tiers. For example, if a volume is 1 TB, you do not want to
move the entire 1 TB volume when the generated heat map indicates that only 10
GB is considered hot. This capability uses your higher performance tiers to reduce
the number of drives that you need to optimize performance.

Using automatic mode, you can use high performance storage tiers with a much
smaller cost. This means that you invest a small portion of storage in the
high-performance storage tier. You can use automatic mode for relocation and
tuning without the need for your intervention, generating cost-savings while
optimizing storage performance.

You also have the option of assigning specific logical volumes to a storage tier.
This is useful to ensure that critical data is always highly available, regardless of
how often the data is accessed.

Three-tier automatic mode is supported by the following Easy Tier functions:


v Support for ESE volumes with the thin provisioning
v Support for a matrix of device (DDM) and adapter types
v Monitoring of both bandwidth and IOPS limitations
v Data demotion between tiers
v Automatic mode hot spot rebalancing, which applies to the following auto
performance rebalance situations:
– Redistribution within a tier after a new rank is added into a managed pool
– Redistribution within a tier after a rank is removed from a managed pool
– Redistribution when the workload is imbalanced on the ranks within a tier of
a managed pool
v Logical volume assignment to specific storage tiers by using Easy Tier
Application
v Heat map transfer to secondary storage by using the Heat Map Transfer Utility

To help manage and improve performance, Easy Tier is designed to identify hot
data at the subvolume or sub-LUN (extent) level, which is based on ongoing
performance monitoring, and then automatically relocate that data to an
appropriate storage device in an extent pool that is managed by Easy Tier. Easy
Tier uses an algorithm to assign heat values to each extent in a storage device.
These heat values determine on what tier the data would best reside, and
migration takes place automatically. Data movement is dynamic and transparent to
the host server and to applications by using the data.

Easy Tier provides capabilities to support the automatic functions of
auto-rebalance, warm demotion, and cold demotion. This includes support for
pools with three tiers: Flash, Enterprise disk drives, and Nearline disk drives.

With Easy Tier you can use automatic mode to help you manage the thin
provisioning of your ESE volumes.

Auto-rebalance

Rebalance is a function of Easy Tier automatic mode to balance the extents in the
same tier that is based on usage. Auto-rebalance supports single managed pools
and hybrid pools. You can use the Storage Facility Image (SFI) control to enable or
disable the auto-rebalance function on all pools of an SFI. When you enable
auto-rebalance, every standard and ESE volume is placed under Easy Tier
management for auto-rebalancing procedures. Using auto-rebalance gives you the
advantage of these automatic functions:
v Easy Tier operates within a tier, inside a managed storage pool.
v Easy Tier automatically detects performance skew and rebalances extents within
the same tier.
v Easy Tier automatically rebalances extents when capacity is added to the extent
pool.

In any tier, placing highly active (hot) data on the same physical rank can cause
the hot rank or the associated device adapter (DA) to become a performance
bottleneck. Likewise, over time skews can appear within a single tier that cannot
be addressed by migrating data to a faster tier alone, and require some degree of
workload rebalancing within the same tier. Auto-rebalance addresses these issues
within a tier in both hybrid and homogeneous pools. It also helps the system
respond in a more timely and appropriate manner to overloading, skews, and any
under-utilization that can occur from the addition or deletion of hardware,
migration of extents between tiers, changes in the underlying volume
configurations, and variations in the workload. Auto-rebalance adjusts the system
to continuously provide optimal performance by balancing the load on the ranks
and on DA pairs.

Easy Tier provides support for auto-rebalancing within homogeneous pools. If you
set the Easy Tier Automatic Mode Migration control to Manage All Extent Pools,
pools with a single tier can rebalance the intra-tier ranks. If Easy Tier is turned off,
then no volumes are managed. If Easy Tier is on, it manages all the volumes that it
supports (Standard or ESE). TSE volumes are not supported by auto-rebalancing.

Notes:
v Standard and ESE volumes are supported.
v Merging pools are restricted to allow repository auxiliary volumes only in a
single pool.
v If Easy Tier’s Automatic Mode Migration control is set to Manage All Extent
Pools, then single-tier pools are also managed to rebalance intra-tier ranks.

Warm demotion
Warm demotion operation demotes warm (or mostly sequential-accessed) extents
in flash cards or flash drives to HDD, or from Enterprise SAS DDMs to NearLine
SAS DDMs to protect the drive performance on the system. The ranks being
demoted to are selected randomly. This function is triggered when bandwidth
thresholds are exceeded. This means that extents are warm-demoted from one rank
to another rank among tiers when extents have high bandwidth but low IOPS.

It is helpful to understand that warm demotion is different from auto-rebalancing.
While both warm demotion and auto-rebalancing can be event-based, rebalancing
movement takes place within the same tier while warm demotion takes place
among more than one tier. Auto-rebalance can initiate when the rank configuration
changes. It also periodically checks for workload that is not balanced across ranks.

Chapter 3. Data management features 61


Warm demotion initiates when an overloaded rank is detected.

Cold demotion

Cold demotion recognizes and demotes cold or semi-cold extents to an appropriate
lower-cost tier. Cold extents are demoted in a storage pool to a lower tier if that
storage pool is not idle.

Cold demotion occurs when Easy Tier detects any of the following scenarios:
v Extents in a storage pool become inactive over time, while other data remains
active. This is the most typical use for cold demotion, where inactive data is
demoted to the SATA tier. This action frees up extents on the enterprise tier
before the extents on the SATA tier become hot, helping the system be more
responsive to new, hot data.
v All the extents in a storage pool become inactive simultaneously due to either a
planned or unplanned outage. Disabling cold demotion assists you in scheduling
extended outages or experiencing outages without affecting the extent
placement.
v All extents in a storage pool are active. In addition to cold demotion by using
the capacity in the lowest tier, an extent is selected which has close to zero
activity, but with high sequential bandwidth and low random IOPS for the
demotion. Bandwidth available on the lowest tier is also used.

All extents in a storage pool can become inactive due to a planned non-use event,
such as an application that reaches its end of life. In this situation, cold demotion
is disabled and you can select one of the following options:
v Allocate new volumes in the storage pool and plan on those volumes that
become active. Over time, Easy Tier replaces the inactive extents on the
enterprise tier with active extents on the SATA tier.
v Depopulate all of the enterprise HDD ranks. When all enterprise HDD ranks are
depopulated, all extents in the pool are on the SATA HDD ranks. Store the
extents on the SATA HDD ranks until they need to be deleted or archived to
tape. After the enterprise HDD ranks are depopulated, move them to a storage
pool.
v Leave the extents in their current locations and reactivate them later.

Figure 11 on page 63 illustrates all of the migration types that are supported by the
Easy Tier enhancements in a three-tier configuration. The auto-performance
rebalance might also include more swap operations.



[Figure: three tiers are shown as rows of ranks - a highest performance tier of
SSD ranks, a higher performance tier of enterprise (ENT) HDD ranks, and a lower
performance tier of nearline (NL) HDD ranks. Auto-rebalance arrows appear within
each tier; promote, swap, and warm demote arrows connect adjacent tiers; and cold
demote and expanded cold demote arrows lead to the lower performance tier.]

Figure 11. Three-tier migration types and their processes

Easy Tier Application

You can assign logical volumes to specific storage tiers (for non-TSE volumes). This
enables applications or storage administrators to proactively influence data
placement in the tiers. Applications, such as databases, can optimize access to
critical data by assigning the associated logical volumes to a higher performance
tier. Storage administrators, as well, can choose to assign a boot volume (for
example) to a higher performance tier.

Assigning a logical volume applies to all extents that are allocated to the logical
volume. Any extents added to a logical volume by dynamic extent relocation or
volume expansion are also assigned to the specified tier. All assignments have an
infinite lease. Assigning a volume across multiple tiers is not supported.
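
A hedged DS CLI sketch of such an assignment follows. The action and parameter
names (tierassign, tierunassign, -tier) and the tier value are assumptions for
illustration only; consult the DS CLI reference for the exact syntax on your
microcode level. The volume IDs are placeholders:

managefbvol -action tierassign -tier ent 0100-010F
managefbvol -action tierunassign 0100-010F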

The completion of a logical volume assignment is a best-effort service that is based
on the following Easy Tier priorities:
1. System Health
Easy Tier monitors devices to ensure that they are not overloaded for the
current configuration and workload. Warm Demote operations and extra checks
receive the highest priority processing in this regard.
2. Performance
Logical volume assignment requests are performed on the appropriate device
types based on configuration, device capabilities, and workload characteristics.



3. Capacity
System capacity requirements are monitored.

Additionally, assignment requests can originate at multiple sources and can be
delivered to Easy Tier through channels that do not guarantee ordering of
messages. For this reason, the order of servicing volume assignment requests
cannot be guaranteed.

Because system health is the highest priority, a logical volume assignment can be
overridden by a migration operation (such as, Warm Demote), or by DS8000
microcode. As a result, although Easy Tier Application is designed to achieve
eventual consistency of operations, there is no system state guarantee for an
assignment, even for completed requests. The status of a logical volume
assignment request can be:
v Failure
The request command is invalid and cannot be completed. A failure response is
returned to the calling function.
v Transient State
The request cannot currently be completed, but is awaiting processing. A request
that completed can revert to a pending state if any of its actions are undone by a
higher priority request (such as a Warm Demote operation).
Additionally, the threshold (maximum capacity) for assigning logical volumes to
a specified tier can be reached. The threshold is 80% of the total capacity
available on that tier. In this case, all assignment requests for that tier remain
pending until the assignments fall below the threshold.
v Assignment Failure
In some situations, a volume assignment request is acknowledged by Easy Tier
Application, but subsequent system state changes require that the Easy Tier
Application return the request as a volume assignment failure. Possible scenarios
are:
– A tier definition change due to rank addition, deletion, depopulation, or
merging the extent pool.
– Easy Tier automatic mode is disabled for the volume.
The assignment failure remains, until you unassign the volume. However, even
while in assignment failure status, the volume is still managed by EasyTier auto
functions based on its heat map.
If a logical volume is deleted, Easy Tier Application unassigns the volume, but
does not identify the status as an assignment failure.

Note: For Version 7 Release 4, assignment failure works differently with the
introduction of Easy Tier Application for IBM z Systems. A new status indicator,
"assign pending hardware condition," is used to describe the following
conditions. If a condition is later resolved, the assignment continues to be
processed.
Easy Tier automatic mode becomes disabled:
The assignment remains in a pending state, and you receive a status of
"assign pending hardware condition" instead of an "assign fail." If you
later activate Easy Tier, the committed assignment automatically
proceeds.
Target tier becomes unavailable:
You receive a status of "assign pending hardware condition," and the
assignment remains in a pending state. If you later add ranks to the
target tier, the committed assignment automatically proceeds.
Tier definition changes:
The physical tier is remembered, and a tier definition change does not
impact the assignment.

Before Version 7 Release 4, for all the assignment failures described above, even
if the condition is later resolved, the affected volumes stay in the
"assign failure" state. You must send an unassign request to fix it. In Version 7
Release 4, you can still expect assignment failures caused by various conditions
(the target tier does not exist; Easy Tier management functions are turned off;
the 80% capacity limitation is exceeded; and so on) that cause the assign
command to be rejected. However, after the conditions are fixed and an assign
command is accepted, any changes that affect assignment activities produce only
an "assign pending hardware condition," rather than an assignment-request
failure.

Logical volume assignment state and request information are regularly saved to
local storage or storage on the peer server. If interruptions or error conditions
occur on the storage system, this data is automatically restored from the persistent
storage.

Easy Tier Application for IBM z Systems

Easy Tier Application for IBM z Systems provides comprehensive data-placement
management policy support between an application and storage. With this feature,
you need to program the policy only once, and it is then enforced automatically.
With hints about the data usage and performance expectations, storage is
automatically optimized towards higher performance and efficiency. At the same
time, the hint semantics relieve the application from the burden of storage resource
management.

Easy Tier Application Control at pool and volume levels

Easy Tier Application Control at the pool and volume levels provides a more
granular and flexible control of workload learning and data movement, as well as
providing volume-level tier restriction where a volume can be excluded from the
Nearline tier.

Before this feature, Easy Tier provided control at the system level. To prevent or
control the placement of data, you had to disable and enable Easy Tier for the
entire DS8000. Flexibility was limited. For example, if there was a performance
problem within a single pool or volume, Easy Tier for the entire DS8000 needed
to be stopped until the problem was corrected. This stoppage resulted in a loss of
performance benefits in other pools or volumes.

Note: System-level control always has higher priority than the pool-level and
volume-level control settings. If any of the system-level control settings (Easy Tier
monitor; Easy Tier management) are changed, the pool and volume level control
settings are reset. Changes to the system-level control settings are detected by Easy
Tier every five minutes.

Several scenarios of how you can use Easy Tier customer control at the pool level
and volume level are described in Table 24 on page 66.



Table 24. Scenarios for Easy Tier Customer control at pool and volume levels

Suspend/resume Easy Tier learning
At the pool level
v A bank has a monthly and quarterly temporary batch workload, during which the
workload differs from normal workloads. During the temporary batch workload,
Easy Tier moves data to get good performance. However, the data configuration
might not be optimal for normal workloads, so when the normal workload starts
again, the performance is not as good as before. In this case, you can suspend
pool learning with a duration setting when you start the temporary batch
workload. After the duration expires, the pool learning resumes automatically,
which makes the control easier. Alternately, you can resume the pool learning
manually.
v You could similarly use Easy Tier control at the pool level for other tasks that
have workloads that differ from normal workloads. Examples of such one-off tasks
are restoring a database from a backup, database loading, and database
reorganization.
At the volume level
v One application is running the Monday-through-Friday workload, and another
application is running the Saturday-through-Sunday workload. During the first
workload, the application gets good performance because Easy Tier recognizes
that it is hot and promotes it to SSD. But during the weekend, the first workload
is no longer hot, and Easy Tier might swap another application into SSD. On the
next Monday morning, the application that depends on the Monday-through-Friday
workload might encounter a performance impact because Easy Tier needs time to
readjust the data placement for it. In this case, you can suspend the volume
learning (keep the heat) of that application at the end of the
Monday-through-Friday period with an additional 48 hours of lease time. On the
next Monday morning, the learning resumes automatically and the performance
should be stable.
v During application maintenance, such as a code upgrade or backup, the
performance statistics do not reflect the real workload. To avoid polluting the
learning data in this case, you can suspend the learning when doing the upgrade
and resume it after the upgrade is done.

Reset Easy Tier learning
At the pool level
v You want to redefine the use of all the volumes within a storage pool. The
original learning data of the volumes in the pool is no longer relevant, but you
can reset the Easy Tier learning in the pool, so that Easy Tier learning reacts
to the new workload quickly.
Note: In many environments (especially open systems), a pool-level reset of
learning is less typical as there is likely to be a mix of applications and
workloads. However, this is effectively a volume-level reset of learning for all
volumes in the pool.
v Another scenario is when you transport a workload. You select a target pool,
and the target pool's learning data is no longer relevant. But you can reset the
pool learning to react to the new workload quickly.
At the volume level
v When an application with a large amount of hot data is no longer used, the heat
of the volumes associated with the application might take time to cool and stop
other applications from leveraging the flash drive quickly. In this case, you can
reset the learning history of the specific volumes, so other data can take
advantage of the flash drive quickly.
v During a database reorganization, the hot-data indexes are moved to another
location by the database, so the learning history of the original data location
is no longer relevant. In this case, you can reset the learning.
v When you deploy a new application, you might define the file system, migrate
the application data, and complete some testing before putting the new
application online. The learning data during the deployment might create
data-storage "noise" for the normal production workload. To alleviate the noise,
you can reset the learning before the application goes online so that Easy Tier
reacts quickly to the presence of the application.

Suspend/resume extent relocation
At the pool level
v In one scenario, there might be some response time-sensitive period during
which you want to prevent any data-movement impact to the performance. In this
case, you can suspend the pool migration with a duration setting. After the
duration is expired, the pool migration resumes automatically, which makes the
control easier. You can also resume it manually.
v In another scenario, there is a performance issue in a pool, and you want to
analyze the problem. You can prevent an impact to storage during your analysis by
suspending the pool's migration to stabilize the performance of the storage.
At the volume level
Not applicable.

Query pool-level and volume-level Easy Tier control state
You can query the Easy Tier control state of a pool or volume.

Exclude from Nearline tier control
At the pool level
Not applicable.
At the volume level
v If there is an application for which you do not want the data of the volume to
be moved to the Nearline tier, you can exclude the volume from the Nearline tier.
v During the deployment of an application, before the workload starts, the
volumes that are allocated for the application might be idle. You can exclude the
idle volumes from being demoted to the Nearline tier to avoid performance issues
when the application starts.
v To more efficiently prevent a volume from ever being demoted to the Nearline
drives, you can exclude the volume from the Nearline tier so that it is only
assigned to the non-Nearline tiers.
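
As a rough illustration of these controls in DS CLI terms, a pool-level learning
suspension and a volume-level learning reset might look like the following. The
action names shown here (etmonpause, etmonreset) and their parameters are
assumptions for illustration only, not verified syntax; consult the DS CLI
reference for your release:

manageextpool -action etmonpause P1
managefbvol -action etmonreset 0100-010F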

Easy Tier: manual mode


Easy Tier in manual mode provides the capability to migrate volumes and merge
pools, under the same DS8880 system, concurrently with I/O operations.

In Easy Tier manual mode, you can dynamically relocate a logical volume between
pools or within a pool to change the extent allocation method of the volume or to
redistribute the volume across new ranks. This capability is referred to as dynamic
volume relocation. You can also merge two existing pools into one without affecting
the data on the logical volumes that are associated with the pools.

Enhanced functions of Easy Tier manual mode offer more capabilities. You can use
manual mode to relocate your extents, or to relocate an entire volume from one
pool to another pool. Later, you might also need to change your storage media or
configurations. Upgrading to a new disk drive technology, rearranging the storage
space, or changing storage distribution within a specific workload are typical
operations that you can complete with volume relocations. Use manual mode to
achieve these operations with minimal performance impact and to increase the
options you have in managing your storage.

Functions and features of Easy Tier: manual mode

This section describes the functions and features of Easy Tier in manual mode.
Volume migration
Volume migration for restriping can be achieved by:
v Restriping - Relocating a subset of extents within the volume for volume
migrations within the same pool.
v Rebalancing - Redistributing the volume across available ranks. This
feature focuses on providing pure striping, without requiring
preallocation of all the extents. This means that you can use rebalancing
when only a few extents are available.
You can select which logical volumes to migrate, based on performance
considerations or storage management concerns. For example, you can:
v Migrate volumes from one pool to another. You might want to migrate
volumes to a different pool that has more suitable performance
characteristics, such as different disk drives or RAID ranks. For example,
a volume that was configured to stripe data across a single RAID can be
changed to stripe data across multiple arrays for better performance.
Also, as different RAID configurations become available, you might
want to move a logical volume to a different pool with different
characteristics, which changes the characteristics of your storage. You
might also want to redistribute the available disk capacity between
pools.

Notes:
– When you initiate a volume migration, ensure that all ranks are in the
configuration state of Normal in the target pool.
– Volume migration is supported for standard and ESE volumes. There
is no direct support to migrate auxiliary volumes. However, you can
migrate extents of auxiliary volumes as part of ESE migration or rank
depopulation.
– Ensure that you understand your data usage characteristics before
you initiate a volume migration.
– The overhead that is associated with volume migration is comparable
to a FlashCopy operation that runs as a background copy.
v Change the extent allocation method that is assigned to a volume. You
can relocate a volume within the same pool but with a different extent
allocation method. For example, you might want to change the extent
allocation method to help spread I/O activity more evenly across ranks.
If you configured logical volumes in a pool with fewer ranks than now
exist in the pool, you can use Easy Tier to manually redistribute the
volumes across new ranks.

Note: If you specify a different extent allocation method for a volume,
the new extent allocation method takes effect immediately.
Manual volume rebalance by using volume migration
Volume and pool rebalancing are designed to redistribute the extents of
volumes within a non-managed pool. This makes skew less likely to
occur on the ranks.

Notes:
v Manual rebalancing is not allowed in hybrid or managed pools.
v Manual rebalancing is allowed in homogeneous pools.
v You cannot mix fixed block (FB) and count key data (CKD) drives.
Volume rebalance can be achieved by initiating a manual volume
migration. Use volume migration to achieve manual rebalance when a rank
is added to a pool, or when a large volume with rotate volumes EAM is
deleted. Manual rebalance is often referred to as capacity rebalance because
it balances the distribution of extents without factoring in extent usage.
When a volume migration is targeted to the same pool and the target EAM
is rotate extent, the volume migration acts internally as a volume
rebalance.
Use volume rebalance to relocate the smallest number of extents of a
volume and restripe the extents of that volume on all available ranks of the
pool where it is located. The behavior of volume migration, which differs
from volume rebalance, continues to operate as it did in the previous
version of Easy Tier.

Notes: Use the latest enhancements to Easy Tier to:
v Migrate ESE logical volumes
v Rebalance pools by submitting a volume migration for every standard
and ESE volume in a pool
v Merge pools with virtual rank auxiliary volumes in both the source and
destination pool
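
A minimal DS CLI sketch of such a manual rebalance follows. It assumes
the managefbvol migration action, with placeholder pool and volume IDs;
treat the exact parameter names as illustrative rather than
authoritative:

managefbvol -action migstart -extpool P1 -eam rotateexts 0100-010F

Targeting the volume's own pool with the rotate extents EAM causes the
migration to act internally as a volume rebalance, as described above.
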
Pools You can merge homogeneous and hybrid pools. Merged pools can have 1, 2,
or 3 tiers and are managed appropriately by Easy Tier in automatic mode.
Rank depopulation
Easy Tier provides an enhanced method of rank depopulation, which can
be used to replace old drive technology, reconfigure pools and tear down
hybrid pools. This method increases efficiency and performance when you
replace or relocate whole ranks. Use the latest enhancements to Easy Tier
to effect rank depopulation on any ranks in the various volume types (ESE
logical, virtual rank auxiliary, TSE repository auxiliary, SE repository
auxiliary, and non SE repository auxiliary).
Use rank depopulation to concurrently stop using one or more ranks in
a pool. You can use rank depopulation to do any of the following
functions:
v Swap out old drive technology
v Reconfigure pools
v Tear down hybrid pools
v Change RAID types

Note: Rank depopulation is supported on ranks that have extent space
efficient (ESE) extents.

Volume data monitoring


The IBM Storage Tier Advisor tool collects and reports volume data. It provides
performance monitoring data even if the Easy Tier feature is not activated.

You can monitor the use of storage at the volume extent level by using the
monitoring function. Monitoring statistics are gathered and analyzed every 24
hours. In an Easy Tier managed pool, the analysis is used to form an extent
relocation plan for the pool, which provides a recommendation for relocating
extents on a volume to the most appropriate storage device. The results of this
data are summarized in a report that you can download. For more information,
see “Storage Tier Advisor tool” on page 73.



Table 25 describes the monitor settings and mirrors the monitor settings in the
DS CLI.

Table 25. Monitoring settings for the Easy Tier feature

Monitor setting      Easy Tier feature not installed   Easy Tier feature installed
All Volumes          All volumes are monitored.        All volumes are monitored.
Auto Mode Volumes    No volumes are monitored.         Volumes in pools that are
                                                       managed by Easy Tier are
                                                       monitored.
No Volumes           No volumes are monitored.         No volumes are monitored.

You can determine whether volumes are monitored and also disable the
monitoring process temporarily, by using either the DS CLI or the DS8000 Storage
Management GUI.

Easy Tier Heat Map Transfer Utility


A heat map is a workload activity metric that is calculated for each extent in a
logical volume. The workload activity is expressed as a temperature gradient from
hot (high activity) to cold (low activity). Use of the heat map transfer utility
requires the Easy Tier monitoring function to be enabled at each of the primary
and secondary storage systems that are involved in the heat map transfer.

The heat map transfer utility periodically transfers Easy Tier heat map information
from primary to secondary storage systems. The secondary storage system
generates migration plans based on the heat map data and (the secondary storage
system's) current physical configuration. In this way, the performance
characteristics of the secondary storage are consistently updated to reflect that of
primary storage. Multiple secondary storage systems are supported. Alternatively,
you can have multiple primary storage systems that are associated with a single
secondary storage system. It is recommended that the secondary storage system
has the same physical configuration as the primary storage system. Secondary
storage systems are then workload optimized based on primary storage system
usage, with no performance penalties if data recovery is necessary.

Note: Currently, the heat map transfer utility does not support replicating tier
assignment instructions of the Easy Tier Application from the primary to secondary
storage systems. To reflect the same tier assignment on the secondary storage
systems, issue the same tier assignment commands on the secondary storage
systems.

Data that occurs in the I/O cache layer (including the storage and server-side
cache) is not monitored by Easy Tier and not reflected in an Easy Tier heat map.

If a workload failover occurs, the secondary storage system:
v Uses the heat map data that is transferred from the primary storage system.
v Maintains performance levels equivalent to the primary storage system while the
primary storage system is unavailable.

Note: Without the same physical configuration, a secondary storage site is able to
replicate the heat map data, but is unlikely to be able to replicate the performance
characteristics of the primary storage system.

The heat map transfer utility runs either on a separate Windows or Linux host, or
on IBM Copy Services Manager. From the host, the heat map transfer utility
accesses the primary and secondary storage sites by using an out-of-band IP
connection. Transfer of heat map data occurs through the heat map transfer utility
host.

[Figure: the Easy Tier Heat Map Transfer Utility (HMTU) runs on a host that
connects to both the primary and the remote (secondary) storage systems over IP
networks. The utility monitors and retrieves current heat map data from the
primary storage system, saves it, and applies it to the secondary storage
system, while Copy Services replication carries I/O between the primary and
remote storage systems.]

Figure 12. Flow of heat map data

The heat map transfer utility imports the heat map data from the primary storage
system, and analyzes this data to:
v Identify those volumes that have a peer-to-peer remote copy (PPRC)
relationship.
v Determine the type of PPRC relationship that exists. The relationship can be
Metro Mirror, Global Copy, Global Mirror, or Metro Global Mirror.

In a Metro Global Mirror environment, DS8000 storage systems can be added
under the heat map transfer utility management. Under this management, the heat
map transfer utility treats the systems as Metro Mirror plus Global Mirror (Global
Copy and FlashCopy) relationships. The utility detects the Metro Mirror and
Global Mirror relationships automatically and performs the heat map data transfer
for the relationships on the systems separately.

There are restrictions in a heat map transfer in Metro Global Mirror environment.
For example, assume volumes A, B, C, and D, where:
v Volume A is the Metro Mirror primary (or source) volume
v Volume B is the Metro Mirror secondary (or target) volume and Global Mirror
primary volume at the same time.
v Volume C is the Global Mirror secondary volume and FlashCopy source volume
at the same time. The FlashCopy target volume is referred to as the D volume.
– Heat map data is transferred only from volumes A and B and volumes B and
C. No heat map data is transferred to the volume D copy or any additional
test copies that you create.
– Heat map data that is transferred to volume C might lag for a maximum of
36 hours from volume A. After the transfer to volumes A and B is complete, it
might take a maximum of 24 hours (the default Easy Tier heat map data
generation interval) for volume B to generate heat map data. There is a
12-hour interval (the default heat map transfer interval) for the volumes B
and C data transfer.

The heat map information for the selected volumes is then periodically copied
from the primary storage system to the heat map transfer utility host (default copy
period is 12 hours). The heat-map-transfer utility determines the target secondary
storage system that is based on PPRC volume mapping. The utility transfers the
heat-map data to the associated secondary storage systems. The heat-map data is
then imported to the secondary storage system, and Easy Tier migration plans are
generated based on the imported and existing heat map. Finally, the result of the
heat map transfer is recorded (in memory and to a file).

To enable heat map transfer, the heat-map transfer-control switch on the
secondary storage system must be enabled (-ethmtmode enabled), which is the
default mode. Use the DS CLI command chsi to enable or disable heat map
transfer:
chsi -ethmtmode enable | disable

The scope of heat map transfer is determined by the Easy Tier automatic mode
setting:
v To automatically transfer the heat map data and manage data placement for
logical volumes in multi-tiered pools, use the Easy Tier control default settings
(-etmonitor automode, -etautomode tiered).
v To automatically transfer the heat map data and manage data placement for
logical volumes in all pools, use the Easy Tier control settings (-etmonitor all,
-etautomode all).

Note: For PPRC relationships by using Global Mirror, Easy Tier manages data
placement of the Global Copy target and FlashCopy source only, and does not
manage data placement for a FlashCopy target that is involved in the Global
Mirror relationship.

If you want to run an Easy Tier evaluation on both the primary and secondary
storage systems, set the Easy Tier control on both storage systems to "monitor
only" (-etmonitor all). The heat map transfer utility then automatically transfers
the heat map data and uses this data to generate an Easy Tier report, without
changing the data layout on either of the storage systems.
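
For example, a monitor-only evaluation setup might be configured as follows on
each storage system. The storage image ID is a placeholder, and pairing
-etmonitor all with -etautomode none to disable automatic data movement is an
assumption for illustration:

chsi -etmonitor all -etautomode none IBM.2107-75XXXXX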

Migration process management


You can initiate volume migrations and pause, resume, or cancel a migration
process that is in progress.

Volumes that are eligible for migration are dependent on the state and access of
the volumes. Table 26 shows the states that are required to allow migration with
Easy Tier.
Table 26. Volume states required for migration with Easy Tier

Volume state                             Is migration allowed with Easy Tier?
Access state   Online                    Yes
               Fenced                    No
Data state     Normal                    Yes
               Pinned                    No
               Read only                 Yes
               Inaccessible              No
               Indeterminate data loss   No
               Extent fault              No

Initiating volume migration

With Easy Tier, you can migrate volumes from one extent pool to another. The time
to complete the migration process might vary, depending on what I/O operations
are occurring on your storage unit.

If an error is detected during the migration process, the storage facility image (SFI)
attempts the extent migration again after a short time. If an extent cannot be
successfully migrated, the migration is stopped, and the configuration state of the
logical volume is set to migration error.

Pausing and resuming migration

You can pause volumes that are being migrated. You can also resume the
migration process on the volumes that were paused.

Canceling migration

You can cancel the migration of logical volumes that are being migrated. The
volume migration process pre-allocates all extents for the logical volume when you
initiate a volume migration. All pre-allocated extents on the logical volume that are
not migrated are released when you cancel a volume migration. The state of the
logical volumes changes to migration-canceled and the target extent pool that you
specify on a subsequent volume migration is limited to either the source extent
pool or target extent pool of the original volume migration.

Note: If you initiate a volume migration but the migration was queued and not in
progress, then the cancel process returns the volume to normal state and not
migration-canceled.
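
As a hedged sketch, these operations correspond to DS CLI volume-management
actions similar to the following. The volume and pool IDs are placeholders, and
the action names assume the managefbvol (or manageckdvol, for CKD volumes)
migration actions; verify the exact syntax in the DS CLI reference:

managefbvol -action migstart -extpool P2 0100
managefbvol -action migpause 0100
managefbvol -action migresume 0100
managefbvol -action migcancel 0100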

Storage Tier Advisor tool


IBM DS8000 Storage Tier Advisor Tool adds performance reporting capability to
your storage system.

The Storage Tier Advisor tool is a Windows application that provides a graphical
representation of performance data that is collected by Easy Tier over a 24-hour
operational cycle. You can use the application to view the data when you point
your browser to the file. The Storage Tier Advisor tool supports the enhancements
that are provided with Easy Tier, including support for flash cards, flash drives
(SSDs), Enterprise, and Nearline disk drives for DS8880 and the auto performance
rebalance feature. You can download the Storage Tier Advisor Tool
(ftp.software.ibm.com/storage/ds8000/updates/
DS8K_Customer_Download_Files/Storage_Tier_Advisor_Tool/).



To extract the performance summary data that is generated by the Storage Tier
Advisor tool, you can use the DS CLI. When you extract summary data, two files
are provided, one for each server in the storage facility image (SFI server). The
download operation initiates a long running task to collect performance data from
both selected storage facility images. This information can be provided to IBM if
performance analysis or problem determination is required.
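
A minimal sketch of that extraction with the DS CLI follows. The offloadfile
command with the -etdata parameter is believed to be the relevant call, and the
target directory is a placeholder; verify the syntax for your DS CLI release:

offloadfile -etdata /tmp/etdata

Two files are written to the target directory, one for each SFI server.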

You can view information to analyze workload statistics and evaluate which logical
volumes might be candidates for Easy Tier management. If the Easy Tier feature is
not installed and enabled, you can use the performance statistics that are gathered
by the monitoring process to help you determine whether to use Easy Tier to
enable potential performance improvements in your storage environment.

Easy Tier reporting improvements


The reporting mechanism of Easy Tier and the Storage Tier Advisor Tool that uses
Easy Tier includes updates to a workload categorization, workload skew curve,
and the data-movement daily report.

The output of the Storage Tier Advisor Tool (STAT) is based on data collected by
the Easy Tier monitoring function. Active data moves to a flash drive (SSD)
storage tier while inactive data is demoted to a nearline storage tier. Active
large data is sequential I/O, which might not be suitable for a flash drive tier,
while low-active data might not be active enough to be placed on a flash tier.
The reporting improvements help you analyze this type of data activity and
evaluate workload statistics across the storage tiers.

The STAT utility analyzes data that Easy Tier gathers and creates a set of
comma-separated value (.csv) files for the workload categorization, workload skew
curve, and data-movement daily report. You can download the .csv files and use
them to generate a graphical display of the data. This information provides
insights into your storage workload.

For information on the workload categorization, workload skew curve, and the
daily data movement report, see the Easy Tier section under Product Overview in
the IBM DS8000 series online product documentation ( http://www.ibm.com/
support/knowledgecenter/ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/
f2c_securitybp.html).

Easy Tier considerations and limitations


When you plan for volume migration, it is important to consider how Easy Tier
functions with storage configurations, and recognize its limitations.

Migration considerations

The following information might be helpful in using Easy Tier with your DS8000
storage system:
v You cannot initiate a volume migration on a volume that is being migrated. The
first migration must complete before you can start another.
v You cannot initiate, pause, resume, or cancel migration on selected volumes that
are aliases or virtual volumes.
v You cannot migrate volumes from one extent pool to another or change the
extent allocation method unless the Easy Tier feature is installed on the storage
system.
v Volume migration is supported for standard, auxiliary, and ESE volumes.



v If you specify a different extent allocation method for a volume, the new extent
allocation method takes effect immediately.
v A volume that is being migrated cannot be expanded and a volume that is being
expanded cannot be migrated.
v When a volume is migrated out of an extent pool that is managed with Easy
Tier, or when Easy Tier is no longer installed, the DS8880 disables Easy Tier and
no longer automatically relocates high activity I/O data on that volume between
storage devices.

Limitations

The following limitations apply to the use of Easy Tier:


v You cannot merge two extent pools if you selected an extent pool that contains
volumes that are being migrated.
v It might be helpful to know that some basic characteristics of Easy Tier might
limit the applicability for your generalized workloads. The granularity of the
extent that can be relocated within the hierarchy is large (1 GB). Additionally,
the time period over which the monitoring is analyzed is continuous, and long
(24 hours). Therefore, some workloads might have hot spots, but when
considered over the range of the relocation size, they do not appear, on average,
to be hot. Also, some workloads might have hot spots for short periods of time,
but when considered over the duration of the analysis window, the hot spots do
not appear, on average, to be hot.

VMware vStorage API for Array Integration support


The DS8880 provides support for the VMware vStorage API for Array Integration
(VAAI).

The VAAI API offloads storage processing functions from the server to the DS8880,
reducing the workload on the host server hardware for improved performance on
both the network and host servers.

The DS8880 supports the following operations:


Atomic test and set or VMware hardware-assisted locking
The hardware-assisted locking feature uses the VMware Compare and
Write command for reading and writing the volume's metadata within a
single operation. With the Compare and Write command, the DS8880
provides a faster mechanism that is displayed to the volume as an atomic
action that does not require locking the entire volume.
The Compare and Write command is supported on all open systems fixed
block volumes, including Metro Mirror and Global Mirror primary
volumes and FlashCopy source and target volumes.
XCOPY or Full Copy
The XCOPY (or extended copy) command offloads copy operations to the
storage system, so that data is copied between volumes without passing
through the host.
Full Copy copies data from one storage array to another without writing to
the VMware ESX Server (VMware vStorage API).
The following restrictions apply to XCOPY:
v XCOPY is not supported on Extent Space Efficient (ESE) volumes
v XCOPY is not supported on volumes greater than 2 TB
v The target of an XCOPY cannot be a Metro Mirror or Global Mirror
primary volume
v The Copy Services license is required
Block Zero (Write Same)
The SCSI Write Same command is supported on all volumes. This
command efficiently writes each block, faster than standard SCSI write
commands, and is optimized for network bandwidth usage.
IBM vCenter plug-in for ESX 4.x
The IBM vCenter plug-in for ESX 4.x provides support for the VAAI
interfaces on ESX 4.x.
For information on how to attach a VMware ESX Server host to a DS8880
with Fibre Channel adapters, see IBM DS8000 series online product
documentation ( http://www.ibm.com/support/knowledgecenter/
ST5GLJ_8.1.0/com.ibm.storage.ssic.help.doc/f2c_securitybp.html) and select
Attaching and configuring hosts > VMware ESX Server host attachment.
VMware vCenter Site Recovery Manager 5.0
VMware vCenter Site Recovery Manager (SRM) provides methods to
simplify and automate disaster recovery processes. IBM Site Replication
Adapter (SRA) communicates between SRM and the storage replication
interface. SRA support for SRM 5.0 includes the new features for planned
migration, reprotection, and failback. The supported Copy Services are
Metro Mirror, Global Mirror, Metro-Global Mirror, and FlashCopy.

The IBM Storage Management Console plug-in enables VMware administrators to
manage their systems from within the VMware management environment. This
plug-in provides an integrated view of IBM storage and the VMware virtualized
datastores that VMware administrators require. For information, see the
IBM Storage Management Console for VMware vCenter (http://www.ibm.com/
support/knowledgecenter/en/STAV45/hsg/hsg_vcplugin_kcwelcome_sonas.html)
online documentation.

Performance for IBM z Systems


The DS8000 series supports the following IBM performance enhancements for IBM
z Systems environments.
v Parallel Access Volumes (PAVs)
v Multiple allegiance
v z/OS Distributed Data Backup
v z/HPF extended distance capability

Parallel Access Volumes

A PAV capability represents a significant performance improvement by the storage
unit over traditional I/O processing. With PAVs, your system can access a single
volume from a single host with multiple concurrent requests.

You must configure both your storage unit and operating system to use PAVs. You
can use the logical configuration definition to define PAV-bases, PAV-aliases, and
their relationship in the storage unit hardware. This unit address relationship
creates a single logical volume, allowing concurrent I/O operations.

Static PAV associates the PAV-base address and its PAV aliases in a predefined and
fixed method. That is, the PAV-aliases of a PAV-base address remain unchanged.
Dynamic PAV, on the other hand, dynamically associates the PAV-base address and
its PAV aliases. The device number types (PAV-alias or PAV-base) must match the
unit address types as defined in the storage unit hardware.

You can further enhance PAV by adding the IBM HyperPAV feature. IBM
HyperPAV associates the volumes with either an alias address or a specified base
logical volume number. When a host system requests IBM HyperPAV processing
and the processing is enabled, aliases on the logical subsystem are placed in an
IBM HyperPAV alias access state on all logical paths with a specific path group ID.
IBM HyperPAV is only supported on FICON channel paths.

PAV can improve the performance of large volumes. You get better performance
with one base and two aliases on a 3390 Model 9 than from three 3390 Model 3
volumes with no PAV support. With one base, it also reduces storage management
costs that are associated with maintaining large numbers of volumes. The alias
provides an alternate path to the base device. For example, a 3380 or a 3390 with
one alias has only one device to write to, but can use two paths.

The storage unit supports concurrent or parallel data transfer operations to or from
the same volume from the same system or system image for z Systems or S/390®
hosts. PAV software support enables multiple users and jobs to simultaneously
access a logical volume. Read and write operations can be accessed simultaneously
to different domains. (The domain of an I/O operation is the specified extents to
which the I/O operation applies.)

Multiple allegiance

With multiple allegiance, the storage unit can run concurrent, multiple requests
from multiple hosts.

Traditionally, IBM storage subsystems allow only one channel program to be active
to a disk volume at a time. This means that, after the subsystem accepts an I/O
request for a particular unit address, this unit address appears "busy" to
subsequent I/O requests. This single allegiance capability ensures that additional
requesting channel programs cannot alter data that is already being accessed.

By contrast, the storage unit is capable of multiple allegiance (or the concurrent
execution of multiple requests from multiple hosts). That is, the storage unit can
queue and concurrently run multiple requests for the same unit address, if no
extent conflict occurs. A conflict refers to either the inclusion of a Reserve request
by a channel program or a Write request to an extent that is in use.

z/OS Distributed Data Backup

z/OS Distributed Data Backup (zDDB) allows hosts, which are attached through a
FICON interface, to access data on fixed block (FB) volumes through a device
address on FICON interfaces.

If the zDDB LIC feature key is installed and enabled and a volume group type
specifies FICON interfaces, this volume group has implicit access to all FB
logical volumes that are configured in addition to all CKD volumes specified in the
volume group. In addition, this optional feature enables data backup of open
systems from distributed server platforms through a z Systems host. The feature
helps you manage multiple data protection environments and consolidate those
into one environment that is managed by IBM z Systems. For more information,
see “z/OS Distributed Data Backup” on page 128.



z/HPF extended distance

z/HPF extended distance reduces the impact that is associated with supported
commands on current adapter hardware, improving FICON throughput on the
DS8000 I/O ports. The DS8000 also supports the new zHPF I/O commands for
multitrack I/O operations.

Copy Services
Copy Services functions can help you implement storage solutions to keep your
business running 24 hours a day, 7 days a week. Copy Services include a set of
disaster recovery, data migration, and data duplication functions.

The storage system supports Copy Service functions that contribute to the
protection of your data. These functions are also supported on the IBM
TotalStorage Enterprise Storage Server®.

Notes:
v If you are creating paths between an older release of the DS8000 (Release 5.1 or
earlier), which supports only 4-port host adapters, and a newer release of the
DS8000 (Release 6.0 or later), which supports 8-port host adapters, the paths
connect only to the lower four ports on the newer storage system.
v The maximum number of FlashCopy relationships that are allowed on a volume
is 65534. If that number is exceeded, the FlashCopy operation fails.
v The size limit for volumes or extents in a Copy Service relationship is 2 TB.
v Thin provisioning functions in open-system environments are supported for the
following Copy Services functions:
– FlashCopy relationships
– Global Mirror relationships if the Global Copy A and B volumes are Extent
Space Efficient (ESE) volumes. The FlashCopy target volume (Volume C) in
the Global Mirror relationship can be an ESE volume or standard volume.
v PPRC supports any intermix of T10-protected or standard volumes. FlashCopy
does not support intermix.
v PPRC supports copying from standard volumes to ESE volumes, or ESE
volumes to standard volumes, to allow migration with PPRC failover when both
source and target volumes are on a DS8000 version 8.2 or higher.

The following Copy Services functions are available as optional features:
v Point-in-time copy, which includes IBM FlashCopy.
The FlashCopy function allows you to make point-in-time, full volume copies of
data so that the copies are immediately available for read or write access. In z
Systems environments, you can also use the FlashCopy function to perform data
set level copies of your data.
v Remote mirror and copy, which includes the following functions:
– Metro Mirror
Metro Mirror provides real-time mirroring of logical volumes between two
storage system that can be located up to 300 km from each other. It is a
synchronous copy solution where write operations are completed on both
copies (local and remote site) before they are considered to be done.
– Global Copy
Global Copy is a nonsynchronous long-distance copy function where
incremental updates are sent from the local to the remote site on a periodic
basis.
– Global Mirror
Global Mirror is a long-distance remote copy function across two sites by
using asynchronous technology. Global Mirror processing is designed to
provide support for unlimited distance between the local and remote sites,
with the distance typically limited only by the capabilities of the network and
the channel extension technology.
– Metro/Global Mirror (a combination of Metro Mirror and Global Mirror)
Metro/Global Mirror is a three-site remote copy solution. It uses synchronous
replication to mirror data between a local site and an intermediate site, and
asynchronous replication to mirror data from an intermediate site to a remote
site.
– Multiple Target PPRC
Multiple Target PPRC builds and extends the capabilities of Metro Mirror and
Global Mirror. It allows data to be mirrored from a single primary site to two
secondary sites simultaneously. You can define any of the sites as the primary
site and then run Metro Mirror replication from the primary site to either of
the other sites individually or both sites simultaneously.
v Remote mirror and copy for z Systems environments, which includes z/OS
Global Mirror.

Note: When FlashCopy is used on FB (open) volumes, the source and the target
volumes must have the same protection type of either T10 DIF or standard.

The point-in-time and remote mirror and copy features are supported across
various IBM server environments such as IBM i, System p, and z Systems, as well
as servers from Oracle and Hewlett-Packard.

You can manage these functions through a command-line interface that is called
the DS CLI. You can use the DS8000 Storage Management GUI to set up and
manage the following types of data-copy functions from any point where network
access is available:

Point-in-time copy (FlashCopy)

You can use the FlashCopy function to make point-in-time, full volume copies of
data, with the copies immediately available for read or write access. In z Systems
environments, you can also use the FlashCopy function to perform data set level
copies of your data. You can use the copy with standard backup tools that are
available in your environment to create backup copies on tape.

FlashCopy is an optional function.

The FlashCopy function creates a copy of a source volume on the target volume.
This copy is called a point-in-time copy. When you initiate a FlashCopy operation,
a FlashCopy relationship is created between a source volume and target volume. A
FlashCopy relationship is a mapping of the FlashCopy source volume and a
FlashCopy target volume. This mapping allows a point-in-time copy of that source
volume to be copied to the associated target volume. The FlashCopy relationship
exists between the volume pair in either case:
v From the time that you initiate a FlashCopy operation until the storage system
copies all data from the source volume to the target volume.



v Until you explicitly delete the FlashCopy relationship if it was created as a
persistent FlashCopy relationship.
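
For illustration, a persistent FlashCopy relationship like the one described
above might be established, listed, and removed with the DS CLI as follows. The
source:target volume pair is a placeholder, and the -persist option is assumed
here to select the persistent behavior:

mkflash -persist 0100:0200
lsflash 0100:0200
rmflash 0100:0200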

One of the main benefits of the FlashCopy function is that the point-in-time copy is
immediately available for creating a backup of production data. The target volume
is available for read and write processing so it can be used for testing or backup
purposes. Data is physically copied from the source volume to the target volume
by using a background process. (A FlashCopy operation without a background
copy is also possible, which allows only data modified on the source to be copied
to the target volume.) The amount of time that it takes to complete the background
copy depends on the following criteria:
v The amount of data to be copied
v The number of background copy processes that are occurring
v The other activities that are occurring on the storage systems

The FlashCopy function supports the following copy options:
Consistency groups
Creates a consistent point-in-time copy of multiple volumes, with
negligible host impact. You can enable FlashCopy consistency groups from
the DS CLI.
Change recording
Activates the change recording function on the volume pair that is
participating in a FlashCopy relationship. This function enables a
subsequent refresh to the target volume.
Establish FlashCopy on existing Metro Mirror source
Establish a FlashCopy relationship, where the target volume is also the
source of an existing remote mirror and copy source volume. This allows
you to create full or incremental point-in-time copies at a local site and
then use remote mirroring commands to copy the data to the remote site.
Fast reverse
Reverses the FlashCopy relationship without waiting for the background
copy of the previous FlashCopy to finish. This option applies to the
Global Mirror mode.
Inhibit writes to target
Ensures that write operations are inhibited on the target volume until a
refresh FlashCopy operation is complete.
Multiple Incremental FlashCopy
Allows a source volume to establish incremental flash copies to a
maximum of 12 targets.
Multiple Relationship FlashCopy
Allows a source volume to have multiple (up to 12) target volumes at the
same time.
Persistent FlashCopy
Allows the FlashCopy relationship to remain even after the FlashCopy
operation completes. You must explicitly delete the relationship.
Refresh target volume
Refresh a FlashCopy relationship, without recopying all tracks from the
source volume to the target volume.
Resynchronizing FlashCopy volume pairs
Update an initial point-in-time copy of a source volume without having to
recopy your entire volume.



Reverse restore
Reverses the FlashCopy relationship and copies data from the target
volume to the source volume.
Reset SCSI reservation on target volume
If there is a SCSI reservation on the target volume, the reservation is
released when the FlashCopy relationship is established. If this option is
not specified and a SCSI reservation exists on the target volume, the
FlashCopy operation fails.
Remote Pair FlashCopy
Figure 13 on page 82 illustrates how Remote Pair FlashCopy works. If
Remote Pair FlashCopy is used to copy data from Local A to Local B, an
equivalent operation is also performed from Remote A to Remote B.
FlashCopy can be performed as described for a Full Volume FlashCopy,
Incremental FlashCopy, and Dataset Level FlashCopy.
The Remote Pair FlashCopy function prevents the Metro Mirror
relationship from changing states and the resulting momentary period
where Remote A is out of synchronization with Remote B. This feature
provides a solution for data replication, data migration, remote copy, and
disaster recovery tasks.
Without Remote Pair FlashCopy, when you established a FlashCopy
relationship from Local A to Local B, by using a Metro Mirror primary
volume as the target of that FlashCopy relationship, the corresponding
Metro Mirror volume pair went from “full duplex” state to “duplex
pending” state if the FlashCopy data was being transferred to Local B.
The time that it took to complete the copy of the FlashCopy data until all
Metro Mirror volumes were synchronous again depended on the amount
of data transferred. During this time, Local B would be inconsistent if a
disaster occurred.

Note: Previously, if you created a FlashCopy relationship with the
Preserve Mirror, Required option, by using a Metro Mirror primary
volume as the target of that FlashCopy relationship, and if the status of the
Metro Mirror volume pair was not in a “full duplex” state, the FlashCopy
relationship failed. That restriction is now removed. The Remote Pair
FlashCopy relationship completes successfully with the “Preserve Mirror,
Required” option, even if the status of the Metro Mirror volume pair is
either in a suspended or duplex pending state.



[Figure: a local storage server and a remote storage server. Local A is
mirrored to Remote A, and Local B is mirrored to Remote B, by Metro Mirror with
both pairs in full duplex. When a FlashCopy relationship is established from
Local A to Local B, an equivalent FlashCopy relationship is established from
Remote A to Remote B.]

Figure 13. Remote Pair FlashCopy

Note: The DS8880 supports Incremental FlashCopy and Metro Global Mirror
Incremental Resync on the same volume.

Remote mirror and copy

The remote mirror and copy feature is a flexible data mirroring technology that
allows replication between a source volume and a target volume on one or two
disk storage systems. You can also issue remote mirror and copy operations to a
group of source volumes on one logical subsystem (LSS) and a group of target
volumes on another LSS. (An LSS is a logical grouping of up to 256 logical
volumes for which the volumes must have the same disk format, either count key
data or fixed block.)

Remote mirror and copy is an optional feature that provides data backup and
disaster recovery.

Note: You must use Fibre Channel host adapters with remote mirror and copy
functions. For a current list of environments, configurations, networks, and
products that support remote mirror and copy functions, see the Interoperability
Matrix at the IBM System Storage Interoperation Center (SSIC)
website (www.ibm.com/systems/support/storage/config/ssic).

The remote mirror and copy feature provides synchronous (Metro Mirror) and
asynchronous (Global Copy) data mirroring. The main difference is that the Global
Copy feature can operate at long distances, even continental distances, with
minimal impact on applications. Distance is limited only by the network and
channel extenders technology capabilities. The maximum supported distance for
Metro Mirror is 300 km.

With Metro Mirror, application write performance depends on the available
bandwidth. Global Copy enables better use of available bandwidth capacity to
allow you to include more of your data to be protected.

The enhancement to Global Copy is Global Mirror, which uses Global Copy and
the benefits of FlashCopy to form consistency groups. (A consistency group is a set
of volumes that contain consistent and current data to provide a true data backup
at a remote site.) Global Mirror uses a master storage system (along with optional
subordinate storage systems) to internally, without external automation software,
manage data consistency across volumes by using consistency groups.

Consistency groups can also be created by using the freeze and run functions of
Metro Mirror. The freeze and run functions, when used with external automation
software, provide data consistency for multiple Metro Mirror volume pairs.

The following sections describe the remote mirror and copy functions.
Synchronous mirroring (Metro Mirror)
Provides real-time mirroring of logical volumes (a source and a target)
between two storage systems that can be located up to 300 km from each
other. With Metro Mirror copying, the source and target volumes can be on
the same storage system or on separate storage systems. You can locate the
storage system at another site, some distance away.
Metro Mirror is a synchronous copy feature where write operations are
completed on both copies (local and remote site) before they are considered
to be complete. Synchronous mirroring means that a storage server
constantly updates a secondary copy of a volume to match changes that
are made to a source volume.
The advantage of synchronous mirroring is that there is minimal host
impact for performing the copy. The disadvantage is that since the copy
operation is synchronous, there can be an impact to application
performance because the application I/O operation is not acknowledged as
complete until the write to the target volume is also complete. The longer
the distance between primary and secondary storage systems, the greater
this impact to application I/O, and therefore, application performance.
Asynchronous mirroring (Global Copy)
Copies data nonsynchronously and over longer distances than is possible
with the Metro Mirror feature. When operating in Global Copy mode, the
source volume sends a periodic, incremental copy of updated tracks to the
target volume instead of a constant stream of updates. This function causes
less impact to application writes for source volumes and less demand for
bandwidth resources. It allows for a more flexible use of the available
bandwidth.
The updates are tracked and periodically copied to the target volumes. As
a consequence, there is no guarantee that data is transferred in the same
sequence that was applied to the source volume.
To get a consistent copy of your data at your remote site, periodically
switch from Global Copy to Metro Mirror mode, then either stop the
application I/O or freeze data to the source volumes by using a manual
process with freeze and run commands. The freeze and run functions can
be used with external automation software such as Geographically
Dispersed Parallel Sysplex™ (GDPS®), which is available for z Systems
environments, to ensure data consistency to multiple Metro Mirror volume
pairs in a specified logical subsystem.
Common options for Metro Mirror/Global Mirror and Global Copy
include the following modes:

Suspend and resume
If you schedule a planned outage to perform maintenance at your
remote site, you can suspend Metro Mirror/Global Mirror or
Global Copy processing on specific volume pairs during the
duration of the outage. During this time, data is no longer copied
to the target volumes. Because the primary storage system tracks
all changed data on the source volume, you can resume operations
later to synchronize the data between the volumes.
Copy out-of-synchronous data
You can specify that only data updated on the source volume
while the volume pair was suspended is copied to its associated
target volume.
Copy an entire volume or not copy the volume
You can copy an entire source volume to its associated target
volume to guarantee that the source and target volume contain the
same data. When you establish volume pairs and choose not to
copy a volume, a relationship is established between the volumes
but no data is sent from the source volume to the target volume. In
this case, it is assumed that the volumes contain the same data and
are consistent, so copying the entire volume is not necessary or
required. Only new updates are copied from the source to target
volumes.
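As a hedged illustration of these modes, the following DS CLI sketch establishes a Metro Mirror volume pair and then suspends and resumes it. The mkpprc, pausepprc, and resumepprc commands are part of the DS CLI, but the remote storage image ID (IBM.2107-75XY123) and the volume pair 0100:0100 are hypothetical placeholders; see the IBM DS8000 Command-Line Interface User's Guide for the full parameter set.

   dscli> mkpprc -remotedev IBM.2107-75XY123 -type mmir 0100:0100
   (Establish a synchronous Metro Mirror pair; -type gcp would establish a
   Global Copy pair instead.)

   dscli> pausepprc -remotedev IBM.2107-75XY123 0100:0100
   (Suspend the pair for a planned outage; changed data is tracked on the source.)

   dscli> resumepprc -remotedev IBM.2107-75XY123 0100:0100
   (Resume the pair; only the out-of-sync data is copied to the target.)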
Global Mirror
Provides a long-distance remote copy across two sites by using
asynchronous technology. Global Mirror processing is most often associated
with disaster recovery or disaster recovery testing. However, it can also be
used for everyday processing and data migration.
Global Mirror integrates both the Global Copy and FlashCopy functions.
The Global Mirror function mirrors data between volume pairs of two
storage systems over greater distances without affecting overall
performance. It also provides application-consistent data at a recovery (or
remote) site in a disaster at the local site. By creating a set of remote
volumes every few seconds, the data at the remote site is maintained to be
a point-in-time consistent copy of the data at the local site.
Global Mirror operations periodically start point-in-time FlashCopy
operations at the recovery site, at regular intervals, without disrupting the
I/O to the source volume, thus giving a continuous, near up-to-date data
backup. By grouping many volumes into a session that is managed by the
master storage system, you can copy multiple volumes to the recovery site
simultaneously maintaining point-in-time consistency across those
volumes. (A session contains a group of source volumes that are mirrored
asynchronously to provide a consistent copy of data at the remote site.
Sessions are associated with Global Mirror relationships and are defined
with an identifier [session ID] that is unique across the enterprise. The ID
identifies the group of volumes in a session that are related and that can
participate in the Global Mirror consistency group.)
Global Mirror supports up to 32 Global Mirror sessions per storage facility
image. Previously, only one session was supported per storage facility
image.
You can use multiple Global Mirror sessions to fail over only data assigned
to one host or application instead of forcing you to fail over all data if one

host or application fails. This process provides increased flexibility to
control the scope of a failover operation and to assign different options and
attributes to each session.
The DS CLI and DS Storage Manager display information about the
sessions, including the copy state of the sessions.
Practice copying and consistency groups
To get a consistent copy of your data, you can pause Global Mirror on a
consistency group boundary. Use the pause command with the secondary
storage option. (For more information, see the DS CLI Commands
reference.) After verifying that Global Mirror is paused on a consistency
boundary (state is Paused with Consistency), the secondary storage system
and the FlashCopy target storage system or device are consistent. You can
then issue either a FlashCopy or Global Copy command to make a practice
copy on another storage system or device. You can immediately resume
Global Mirror, without the need to wait for the practice copy operation to
finish. Global Mirror then starts forming consistency groups again. The
entire pause and resume operation generally takes just a few seconds.
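The following DS CLI sketch outlines this practice-copy sequence. The pausegmir and resumegmir commands exist in the DS CLI; the -withsecondary option is assumed here to correspond to the secondary storage option described above, and the session and LSS values are hypothetical placeholders, so confirm the parameters in the DS CLI Commands reference.

   dscli> pausegmir -lss 10 -session 01 -withsecondary
   (Pause Global Mirror on a consistency group boundary; wait until the state is
   Paused with Consistency before taking the practice copy.)

   dscli> resumegmir -lss 10 -session 01
   (Resume forming consistency groups; the practice copy can complete in the
   background.)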
Metro/Global Mirror
Provides a three-site, long-distance disaster recovery replication that
combines Metro Mirror with Global Mirror replication for both z Systems
and open systems data. Metro/Global Mirror uses synchronous replication
to mirror data between a local site and an intermediate site, and
asynchronous replication to mirror data from an intermediate site to a
remote site.
In a three-site Metro/Global Mirror, if an outage occurs, a backup site is
maintained regardless of which one of the sites is lost. Suppose that an
outage occurs at the local site, Global Mirror continues to mirror updates
between the intermediate and remote sites, maintaining the recovery
capability at the remote site. If an outage occurs at the intermediate site,
data at the local storage system is not affected. If an outage occurs at the
remote site, data at the local and intermediate sites is not affected.
Applications continue to run normally in either case.
With the incremental resynchronization function enabled on a
Metro/Global Mirror configuration, if the intermediate site is lost, the local
and remote sites can be connected, and only a subset of changed data is
copied between the volumes at the two sites. This process reduces the
amount of data needing to be copied from the local site to the remote site
and the time it takes to do the copy.
Multiple Target PPRC
Provides an enhancement to disaster recovery solutions by allowing data
to be mirrored from a single primary site to two secondary sites
simultaneously. The function builds on and extends Metro Mirror and
Global Mirror capabilities. Various interfaces and operating systems
support the function. Disaster recovery scenarios depend on support from
controlling software such as Geographically Dispersed Parallel Sysplex
(GDPS) and IBM Copy Services Manager.
z/OS Global Mirror
If workload peaks, which might temporarily overload the bandwidth of the
Global Mirror configuration, the enhanced z/OS Global Mirror function
initiates a Global Mirror suspension that preserves primary site application
performance. If you are installing new high-performance z/OS Global
Mirror primary storage subsystems, this function provides improved

capacity and application performance during heavy write activity. This
enhancement can also allow Global Mirror to be configured to tolerate
longer periods of communication loss with the primary storage
subsystems. This enables the Global Mirror to stay active despite transient
channel path recovery events. In addition, this enhancement can provide
fail-safe protection against application system impact that is related to
unexpected data mover system events.
The z/OS Global Mirror function is an optional function.
z/OS Metro/Global Mirror Incremental Resync
z/OS Metro/Global Mirror Incremental Resync is an enhancement for
z/OS Metro/Global Mirror. z/OS Metro/Global Mirror Incremental Resync
can eliminate the need for a full copy after a HyperSwap® situation in
3-site z/OS Metro/Global Mirror configurations. The storage system
supports z/OS Metro/Global Mirror that is a 3-site mirroring solution that
uses IBM System Storage Metro Mirror and z/OS Global Mirror (XRC).
The z/OS Metro/Global Mirror Incremental Resync capability is intended
to enhance this solution by enabling resynchronization of data between
sites by using only the changed data from the Metro Mirror target to the
z/OS Global Mirror target after a HyperSwap operation.
If an unplanned failover occurs, you can use the z/OS Soft Fence function
to prevent any system from accessing data from an old primary PPRC site.
For more information, see the GDPS/PPRC Installation and Customization
Guide, or the GDPS/PPRC HyperSwap Manager Installation and Customization
Guide.
z/OS Global Mirror Multiple Reader (enhanced readers)
z/OS Global Mirror Multiple Reader provides multiple Storage Device
Manager readers that allow improved throughput for remote mirroring
configurations in z Systems environments. z/OS Global Mirror Multiple
Reader helps maintain constant data consistency between mirrored sites
and promotes efficient recovery. This function is supported on the storage
system running in a z Systems environment with version 1.7 or later at no
additional charge.

Interoperability with existing and previous generations of the DS8000 series

All of the remote mirroring solutions that are documented in the sections above
use Fibre Channel as the communications link between the primary and secondary
storage systems. The Fibre Channel ports that are used for remote mirror and copy
can be configured as either a dedicated remote mirror link or as a shared port
between remote mirroring and Fibre Channel Protocol (FCP) data traffic.

The remote mirror and copy solutions are optional capabilities and are compatible
with previous generations of DS8000 series. They are available as follows:
v Metro Mirror indicator feature numbers 75xx and 0744 and corresponding
DS8000 series function authorization (2396-LFA MM feature numbers 75xx)
v Global Mirror indicator feature numbers 75xx and 0746 and corresponding
DS8000 series function authorization (2396-LFA GM feature numbers 75xx).

The DS8000 series systems can also participate in Global Copy solutions with the
IBM TotalStorage ESS Model 750, and IBM TotalStorage ESS Model 800 systems for
data migration. For more information on data migration and migration services,
contact your technical support representative.

Global Copy is a non-synchronous long-distance copy option for data migration
and backup.

Disaster recovery through Copy Services


Through Copy Services functions, you can prepare for a disaster by backing up,
copying, and mirroring your data at local (production) and remote sites.

Having a disaster recovery plan can ensure that critical data is recoverable at the
time of a disaster. Because most disasters are unplanned, your disaster recovery
plan must provide a way to recover your applications quickly, and more
importantly, to access your data. Consistent data to the same point-in-time across
all storage units is vital before you can recover your data at a backup (normally
your remote) site.

Most users use a combination of remote mirror and copy and point-in-time copy
(FlashCopy) features to form a comprehensive enterprise solution for disaster
recovery. In the event of a planned outage or an unplanned disaster, you can use
failover and failback modes as part of your recovery solution. Failover and failback
modes can reduce the synchronization time of remote mirror and copy volumes
after you switch between local (or production) and intermediate (or remote) sites
during an outage. Although failover transmits no data, it changes the status of a
device, and the status of the secondary volume changes to a suspended primary
volume. The device that initiates the failback command determines the direction of
the transmitted data.

Recovery procedures that include failover and failback modes use remote mirror
and copy functions, such as Metro Mirror, Global Copy, Global Mirror,
Metro/Global Mirror, Multiple Target PPRC, and FlashCopy.

Note: See the IBM DS8000 Command-Line Interface User's Guide for specific disaster
recovery tasks.
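As an illustrative sketch only, failover and failback correspond to the DS CLI failoverpprc and failbackpprc commands. The storage image ID and volume pair below are hypothetical, and the pair ordering depends on the recovery scenario; see the IBM DS8000 Command-Line Interface User's Guide for complete recovery procedures.

   dscli> failoverpprc -remotedev IBM.2107-75XY123 -type mmir 0100:0100
   (Issued at the recovery site: the secondary volume becomes a suspended primary;
   no data is transmitted.)

   dscli> failbackpprc -remotedev IBM.2107-75XY123 -type mmir 0100:0100
   (After the outage: copies only the changed tracks to resynchronize the pair.)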

Data consistency can be achieved through the following methods:


Manually using external software (without Global Mirror)
You can use Metro Mirror, Global Copy, and FlashCopy functions to create
a consistent and restartable copy at your recovery site. These functions
require a manual and periodic suspend operation at the local site. For
instance, you can enter the freeze and run commands with external
automated software. Then, you can initiate a FlashCopy function to make a
consistent copy of the target volume for backup or recovery purposes.
Automation software is not provided with the storage system; it must be
supplied by the user.

Note: The freeze operation occurs at the same point-in-time across all
links and all storage systems.
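The freeze and run functions map to the DS CLI freezepprc and unfreezepprc commands, as sketched below. The storage image ID and the LSS pair 10:20 are hypothetical placeholders; in practice, automation software issues these commands across all affected LSS pairs so that the freeze occurs at the same point in time.

   dscli> freezepprc -remotedev IBM.2107-75XY123 10:20
   (Suspend the remote mirror paths and pairs for the LSS pair and briefly queue
   write activity, creating the consistency boundary.)

   dscli> unfreezepprc -remotedev IBM.2107-75XY123 10:20
   (The run function: release the queued write activity. A FlashCopy of the
   consistent targets can then be taken at the remote site.)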
Automatically (with Global Mirror and FlashCopy)
You can automatically create a consistent and restartable copy at your
intermediate or remote site with minimal or no interruption of
applications. This automated process is available for two-site Global Mirror
or three-site Metro / Global Mirror configurations. Global Mirror
operations automate the process of continually forming consistency groups.
It combines Global Copy and FlashCopy operations to provide consistent
data at the remote site. A master storage unit (along with subordinate
storage units) internally manages data consistency through consistency

groups within a Global Mirror configuration. Consistency groups can be
created many times per hour to increase the currency of data that is
captured in the consistency groups at the remote site.

Note: A consistency group is a collection of session-grouped volumes across
multiple storage systems. Consistency groups are managed together
in a session during the creation of consistent copies of data. The formation
of these consistency groups is coordinated by the master storage unit,
which sends commands over remote mirror and copy links to its
subordinate storage units.
If a disaster occurs at a local site with a two or three-site configuration,
you can continue production on the remote (or intermediate) site. The
consistent point-in-time data from the remote site consistency group
enables recovery at the local site when it becomes operational.

Resource groups for Copy Services scope limiting


Resource groups are used to define a collection of resources and associate a set of
policies relative to how the resources are configured and managed. You can define
a network user account so that it has authority to manage a specific set of
resource groups.

Copy Services scope limiting overview

Copy Services scope limiting is the ability to specify policy-based limitations on
Copy Services requests. With the combination of policy-based limitations and other
inherent volume-addressing limitations, you can control which volumes can be in a
Copy Services relationship, which network users or host LPARs can issue Copy
Services requests on which resources, and other aspects of Copy Services operations.

Use these capabilities to separate and protect volumes in a Copy Services
relationship from each other. This can assist you with multitenancy support by
assigning specific resources to specific tenants, limiting Copy Services relationships
so that they exist only between resources within each tenant's scope of resources,
and limiting a tenant's Copy Services operators to an "operator only" role.

When managing a single-tenant installation, the partitioning capability of resource
groups can be used to isolate various subsets of an environment as if they were
separate tenants. For example, to separate mainframes from distributed system
servers, Windows from UNIX, or accounting departments from telemarketing.

Using resource groups to limit Copy Services operations

Figure 14 on page 89 illustrates one possible implementation of an example
environment that uses resource groups to limit Copy Services operations. Two
tenants (Client A and Client B) are illustrated that are concurrently operating on
shared hosts and storage systems.

Each tenant has its own assigned LPARs on these hosts and its own assigned
volumes on the storage systems. For example, a user cannot copy a Client A
volume to a Client B volume.

Resource groups are configured to ensure that one tenant cannot cause any Copy
Services relationships to be initiated between its volumes and the volumes of
another tenant. These controls must be set by an administrator as part of the

configuration of the user accounts or access-settings for the storage system.

Figure 14. Implementation of multiple-client volume administration

Resource group functions provide additional policy-based limitations to users or
the DS8000 storage systems, which in conjunction with the inherent volume
addressing limitations support secure partitioning of Copy Services resources
between user-defined partitions. The process of specifying the appropriate
limitations is completed by an administrator using resource groups functions.

Note: User and administrator roles for resource groups are the same user and
administrator roles used for accessing your DS8000 storage system. For example,
those roles include storage administrator, Copy Services operator, and physical
operator.

The process of planning and designing the use of resource groups for Copy
Services scope limiting can be complex. For more information on the rules and
policies that must be considered in implementing resource groups, see topics about
resource groups. For specific DS CLI commands used to implement resource
groups, see the IBM DS8000 Command-Line Interface User's Guide.
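As a hedged sketch only, the DS CLI provides resource-group commands such as mkresgrp, and volumes can be associated with a resource group. The label, resource group ID, and the -resgrp parameter on chfbvol shown below are assumptions for illustration; verify the exact commands and policies in the resource groups topics and the IBM DS8000 Command-Line Interface User's Guide.

   dscli> mkresgrp -label clientA RG1
   (Create a resource group for tenant Client A; label and ID are hypothetical.)

   dscli> chfbvol -resgrp RG1 0100-010F
   (Assumed parameter: assign a range of Client A volumes to the resource group so
   that Copy Services relationships are limited to resources within that group.)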

Comparison of Copy Services features


The Copy Services features can aid in planning for disaster recovery.

Table 27 provides a brief summary of the characteristics of the Copy Services
features that are available for the storage system.
Table 27. Comparison of features

Multiple Target PPRC
   Description: Synchronous and asynchronous replication.
   Advantages: Mirrors data from a single primary site to two secondary sites
   simultaneously.
   Considerations: Disaster recovery scenarios depend on support from controlling
   software such as Geographically Dispersed Parallel Sysplex (GDPS) and IBM Copy
   Services Manager.

Metro/Global Mirror
   Description: Three-site, long-distance disaster recovery replication.
   Advantages: A backup site is maintained regardless of which one of the sites is
   lost.
   Considerations: Recovery point objective (RPO) might grow if bandwidth capability
   is exceeded.

Metro Mirror
   Description: Synchronous data copy at a distance.
   Advantages: No data loss, and rapid recovery time for distances up to 300 km.
   Considerations: Slight performance impact.

Global Copy
   Description: Continuous copy without data consistency.
   Advantages: Nearly unlimited distance, suitable for data migration; limited only
   by network and channel extender capabilities.
   Considerations: Copy is normally fuzzy but can be made consistent through
   synchronization.

Global Mirror
   Description: Asynchronous copy.
   Advantages: Nearly unlimited distance, scalable, and low RPO. The RPO is the time
   needed to recover from a disaster; that is, the total system downtime.
   Considerations: RPO might grow when link bandwidth capability is exceeded.

z/OS Global Mirror
   Description: Asynchronous copy controlled by z Systems host software.
   Advantages: Nearly unlimited distance, highly scalable, and very low RPO.
   Considerations: Additional host server hardware and software is required. The RPO
   might grow if bandwidth capability is exceeded, or host performance might be
   impacted.

I/O Priority Manager


The performance group attribute associates the logical volume with a performance
group object. Each performance group has an associated performance policy which
determines how the I/O Priority Manager processes I/O operations for the logical
volume.

Note: The default setting for this feature is “disabled” and must be enabled for
use through either the DS8000 Storage Management GUI or the DS CLI.

The I/O Priority Manager maintains statistics for the set of logical volumes in each
performance group that can be queried. If management is performed for the
performance policy, the I/O Priority Manager controls the I/O operations of all
managed performance groups to achieve the goals of the associated performance
policies. The performance group defaults to 0 if not specified. Table 28 lists
performance groups that are predefined and have the associated performance
policies:
Table 28. Performance groups and policies

Performance group (Note 1)   Performance policy   Performance policy description
0                            0                    No management
1-5                          1                    Fixed block high priority
6-10                         2                    Fixed block medium priority
11-15                        3                    Fixed block low priority
16-18                        0                    No management
19                           19                   CKD high priority 1
20                           20                   CKD high priority 2
21                           21                   CKD high priority 3
22                           22                   CKD medium priority 1
23                           23                   CKD medium priority 2
24                           24                   CKD medium priority 3
25                           25                   CKD medium priority 4
26                           26                   CKD low priority 1
27                           27                   CKD low priority 2
28                           28                   CKD low priority 3
29                           29                   CKD low priority 4
30                           30                   CKD low priority 5
31                           31                   CKD low priority 6

Note 1: Performance group settings can be managed using DS CLI.
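For illustration, the following hedged DS CLI sketch enables I/O Priority Manager and assigns a volume to a performance group. The chsi and chfbvol commands exist in the DS CLI, but the -iopmmode value and the -perfgrp parameter format shown here are assumptions; confirm them in the IBM DS8000 Command-Line Interface User's Guide.

   dscli> chsi -iopmmode manage IBM.2107-75XY123
   (Assumed parameter: switch I/O Priority Manager from the default disabled state
   to managed mode for the storage image.)

   dscli> chfbvol -perfgrp pg11 0100
   (Assumed parameter format: assign volume 0100 to performance group 11, a fixed
   block low priority group in Table 28.)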

Securing data
You can secure data with the encryption features that are supported by the DS8000
storage system.

Encryption technology has a number of considerations that are critical to
understand to maintain the security and accessibility of encrypted data. For
example, encryption must be enabled by feature code and configured to protect
data in your environment. Encryption also requires access to at least two external
key servers.

It is important to understand how to manage IBM encrypted storage and comply
with IBM encryption requirements. Failure to follow these requirements might
cause a permanent encryption deadlock, which might result in the permanent loss
of all key-server-managed encrypted data at all of your installations.

The DS8000 system automatically tests access to the encryption keys every 8 hours
and access to the key servers every 5 minutes. You can verify access to key servers
manually, initiate key retrieval, and monitor the status of attempts to access the
key server.

Chapter 4. Planning the physical configuration
Physical configuration planning is your responsibility. Your technical support
representative can help you to plan for the physical configuration and to select
features.

This section includes the following information:


v Explanations for available features that can be added to the physical
configuration of your system model
v Feature codes to use when you order each feature
v Configuration rules and guidelines

Configuration controls
Indicator features control the physical configuration of the storage system.

These indicator features are for administrative use only. The indicator features
ensure that each storage system (the base frame plus any expansion frames) has a
valid configuration. There is no charge for these features.

Your storage system can include the following indicators:


Expansion-frame position indicators
Expansion-frame position indicators flag models that are attached to
expansion frames. They also flag the position of each expansion frame
within the storage system. For example, a position 1 indicator flags the
expansion frame as the first expansion frame within the storage system.
Administrative indicators
If applicable, models also include the following indicators:
v IBM / Openwave alliance
v IBM / EPIC attachment
v IBM systems, including System p and IBM z Systems
v Lenovo System x and BladeCenter
v IBM storage systems, including IBM System Storage ProtecTIER®, IBM
Storwize® V7000, and IBM System Storage N series
v IBM SAN Volume Controller
v Linux
v VMware VAAI indicator
v Storage Appliance

Determining physical configuration features


You must consider several guidelines for determining and then ordering the
features that you require to customize your storage system. Determine the feature
codes for the optional features you select and use those feature codes to complete
your configuration.

Procedure
1. Calculate your overall storage needs, including the licensed functions.
The Copy Services and z-Synergy Services licensed functions are based on
usage requirements.

2. Determine the base and expansion models of which your storage system is to
be comprised.
3. Determine the management console configuration that supports the storage
system by using the following steps:
a. Order one management console for each storage system. The management
console feature code must be ordered for the base model within the storage
system.
b. Decide whether a secondary management console is to be installed for the
storage system. Adding a secondary management console ensures that you
maintain a highly available environment.
4. For each base and expansion model, determine the storage features that you
need.
a. Select the drive set feature codes and determine the amount of each feature
code that you must order for each model.
b. Select the storage enclosure feature codes and determine the amount that
you must order to enclose the drive sets that you are ordering.
c. Select the disk cable feature codes and determine the amount that you need
of each.
5. Determine the I/O adapter features that you need for your storage system.
a. Select the device adapter, flash RAID adapter, and host adapter feature codes to order, and
choose a model to contain the adapters. All base models can contain
adapters, but only the first attached expansion model can contain adapters.
b. For each model chosen to contain adapters, determine the number of each
I/O enclosure feature codes that you must order.
c. Select the cables that you require to support the adapters.
6. Based on the disk storage and adapters that the base model and expansion
models support, determine the appropriate processor memory feature code that
is needed by each base model.
7. Decide which power features that you must order to support each model.
8. Review the other features and determine which feature codes to order.

Management console features


Management consoles are required features for your storage system configuration.

Customize your management consoles by specifying the following features:
v A primary management console
v A secondary management console

Primary and secondary management consoles


The management console is the focal point for configuration, Copy Services
functions, remote support, and maintenance of your storage system.

The management console (also known as the Hardware Management Console or
HMC) is a dedicated appliance that is physically located inside your storage
system. It can proactively monitor the state of your storage system and notify
you and IBM when service is required. It also can be connected to your network
for centralized management of your storage system by using the IBM DS

command-line interface (DS CLI) or storage management software through the
IBM DS Open API. (The DS8000 Storage Management GUI cannot be started from
the HMC.)

You can also use the DS CLI to control the remote access of your technical support
representative to the HMC.

A secondary management console is available as an optional feature. The
secondary HMC is a redundant management console for environments with
high-availability requirements. If you use Copy Services, a redundant management
console configuration is especially important.

The management console is included with every base frame along with a monitor
and keyboard. An optional secondary management console is also available in the
base frame.

Note: To preserve console function, the management consoles are not available as
a general-purpose computing resource.

Feature codes for management consoles


Use these feature codes to order up to two management consoles (MCs) for each
storage system.
Table 29. Feature codes for the management console

Feature code   Description                      Models
1140           Primary management console       A required feature that is installed in the frame
1150           Secondary management console     An optional feature that can be installed in the frame

Configuration rules for management consoles


The management console is a dedicated appliance in your storage system that can
proactively monitor the state of your storage system. You must order an internal
management console each time that you order a base frame.

You can also order a second management console for your storage system.

Storage features
You must select the storage features that you want on your storage system.

The storage features are separated into the following categories:


v Drive-set features and storage-enclosure features
v Enclosure filler features
v Device adapter features

Storage enclosures and drives


DS8880 supports various storage enclosures and drive options.

Standard drive enclosures and drives


Standard drive enclosures and drives are required components of your storage
system configuration.

Each standard drive enclosure feature contains two enclosures.

Each drive set feature contains 16 disk drives or flash drives (SSDs) and is installed
with eight drives in each standard drive-enclosure pair.

The 3.5-inch storage enclosure slots are numbered left to right, and then top to
bottom. The top row of drives is D01 - D04. The second row of drives is D05 -
D08. The third row of drives is D09 - D12.

The 2.5-inch storage enclosure slots are numbered from left to right as slots D01 -
D24. For full SFF (2.5-inch) drive sets, the first installation group populates D01 -
D08 for both standard drive enclosures in the pair. The second installation group
populates D09 - D16. The third installation group populates D17 - D24.

Note: Storage enclosures are installed in the frame from the bottom up.

Table 30 provides information on the placement of drive sets in the storage
enclosure.
Table 30. Placement of full drive sets in the storage enclosure

Standard drive-enclosure type      Set 1       Set 2       Set 3
3.5-inch disk drives               D01 - D04   D05 - D08   D09 - D12
2.5-inch disk and flash drives     D01 - D08   D09 - D16   D17 - D24

Feature codes for drive sets


Use these feature codes to order sets of encryption disk drives, flash drives, and
flash cards for DS8880.

All drives that are installed in a standard drive enclosure pair or High
Performance Flash Enclosure Gen2 pair must be of the same drive type, capacity,
and speed.

The flash cards can be installed only in High Performance Flash Enclosures Gen2.
See Table 33 on page 97 for the feature codes. Each High Performance Flash
Enclosure Gen2 pair can contain 16, 32, or 48 flash cards. All flash cards in a High
Performance Flash Enclosure Gen2 must be the same type and same capacity.

Table 31, Table 32 on page 97, Table 33 on page 97 list the feature codes for
encryption drive sets based on drive size and speed.
Table 31. Feature codes for disk-drive sets

Feature code   Disk size   Drive type               Drives per set   Drive speed in RPM (K=1000)   Encryption drive   RAID support
5308           300 GB      2.5-in. disk drives      16               15 K                          Yes                5, 6, 10
5618           600 GB      2.5-in. disk drives      16               15 K                          Yes                5, 6, 10
5708           600 GB      2.5-in. disk drives      16               10 K                          Yes                5, 6, 10
5768           1.2 TB      2.5-in. disk drives      16               10 K                          Yes                6, 10
5778           1.8 TB      2.5-in. disk drives      16               10 K                          Yes                6, 10
5868           4 TB        3.5-in. NL disk drives   8                7.2 K                         Yes                6, 10
5878           6 TB        3.5-in. NL disk drives   8                7.2 K                         Yes                6, 10

Note: Drives are full disk encryption (FDE) self-encrypting drive (SED) capable.

Table 32. Feature codes for flash-drive (SSD) sets for standard enclosures

Feature code   Disk size   Drive type             Drives per set   Drive speed in RPM (K=1000)   Encryption drive   RAID support
6158           400 GB      2.5-in. flash drives   16               N/A                           Yes                5, 6, 10
6258           800 GB      2.5-in. flash drives   16               N/A                           Yes                5, 6, 10
6358           1.6 TB      2.5-in. flash drives   16               N/A                           Yes                6, 10

Table 33. Feature codes for flash-card sets for High Performance Flash Enclosures Gen2

Feature code   Disk size                           Drive type                              Drives per set   Drive speed in RPM (K=1000)   Encryption drive   RAID support
1600           400 GB, 800 GB, 1.6 TB, or 3.2 TB   High Performance Flash Enclosure Gen2   N/A              N/A                           Yes                N/A
                                                   pair for 2.5-in. flash cards
1610           400 GB                              2.5-in. flash cards                     16               N/A                           Yes                5, 6, 10
1611           800 GB                              2.5-in. flash cards                     16               N/A                           Yes                5, 6, 10
1612           1.6 TB                              2.5-in. flash cards                     16               N/A                           Yes                6, 10
1613           3.2 TB                              2.5-in. flash cards                     16               N/A                           Yes                6, 10

Note:
1. Optional with feature code 1516. If feature code 1518 is not ordered, a storage filler set (feature code 1599) is required.
2. Optional with feature code 1596. If feature code 1598 is not ordered, a storage filler set (feature code 1599) is required.

Feature codes for storage enclosures
Use these feature codes to order standard drive enclosures for your storage system.
Table 34. Feature codes for storage enclosures

Feature code   Description                                              Models
1241           Standard drive-enclosure pair                            984, 985, 986, 84E, 85E, 86E
               Note: This feature contains two filler sets in each enclosure.
1242           Standard drive-enclosure pair for 2.5-inch disk drives   984, 985, 986, 84E, 85E, 86E
1244           Standard drive-enclosure pair for 3.5-inch disk drives   984, 985, 986, 84E, 85E, 86E
1245           Standard drive-enclosure pair for 400 GB flash drives    984, 985, 986, 84E, 85E, 86E
1256           Standard drive-enclosure pair for 800 GB flash drives    984, 985, 986, 84E, 85E, 86E
1257           Standard drive-enclosure pair for 1.6 TB flash drives    984, 985, 986, 84E, 85E, 86E

Storage-enclosure fillers
Storage-enclosure fillers fill empty drive slots in the storage enclosures. The fillers
ensure sufficient airflow across populated storage.

For standard drive enclosures, one filler feature provides a set of 8 or 16 fillers.
Two filler features are required if only one drive set feature is in the standard
drive-enclosure pair. One filler feature is required if two drive-set features are in
the standard drive-enclosure pair.

For High Performance Flash Enclosures Gen2, one filler feature provides a set of 16
fillers.

Feature codes for storage enclosure fillers


Use these feature codes to order filler sets for standard drive enclosures and High
Performance Flash Enclosures Gen2.
Table 35. Feature codes for storage-enclosure fillers

Feature code   Description
2997           Filler set for 3.5-in. standard disk-drive enclosures; includes eight fillers
2999           Filler set for 2.5-in. standard disk-drive enclosures; includes 16 fillers
1699           Filler set for 2.5-in. High Performance Flash Enclosures Gen2; includes 16 fillers

Device adapters and flash RAID adapters


Device adapters and flash RAID adapters provide the connection between storage
devices and the internal processors and memory. Device adapters and flash RAID
adapters perform the RAID management and control functions of the drives that
are attached to them.

Each pair of device adapters or flash RAID adapters supports two independent
paths to all of the drives that are served by the pair. Two paths connect to two
different network fabrics to provide fault tolerance and to ensure availability. By
using physical links, two read operations and two write operations can be
performed simultaneously around the fabric.

Device adapters are ordered in pairs. For storage systems that use standard drive
enclosures, the device adapters are installed in the I/O enclosure pairs, with one
device adapter in each I/O enclosure of the pair. The device adapter pair connects
to the standard drive enclosures by using 8 Gbps FC-AL. An I/O enclosure pair
can support up to two device adapter pairs.

Feature codes for device adapters


Use these feature codes to order device adapters for your storage system. Each
feature includes two adapters.
Table 36. Feature codes for device adapters

Feature code   Description                          Models
3053           4-port, 8 Gbps device adapter pair   984, 985, 986, 84E, 85E, 86E

Drive cables
You must order at least one drive cable set to connect the disk drives to the device
adapters.

The disk drive cable feature provides you with a complete set of Fibre Channel
cables to connect all the disk drives that are supported by the model to their
appropriate device adapters.

Disk drive cable groups have the following configuration guidelines:


v The minimum number of disk-drive cable group features for each model is one.
v The disk-drive cable groups must be ordered as follows:
– If the disk drives connect to device adapters within the same base frame,
order disk drive cable group A.
– If the disk drives connect to device adapters within the first expansion frame,
order disk drive cable group B.
– If the disk drives are in a second expansion frame, order disk drive cable
group C.
– If the disk drives are in a third expansion frame (85E and 86E only), order
disk drive cable group D.
– If the disk drives are in a fourth expansion frame (85E and 86E only), order
disk drive cable group E.

Feature codes for drive cables


Use these feature codes to order the cable groups for your storage system.

If the disk drives are in a remote expansion frame (up to 20 meters from the base
frame), order extended disk drive cable group C, D, or E as described here.

Table 37. Feature codes for drive cables

Feature code   Description                    Connection type
1246           Drive cable group A            (DS8886) Connects the drives to the device adapters
                                              within the same base model 985 or 986.
1247           Drive cable group B            (DS8886) Connects the drives to the device adapters
                                              in the first expansion model 85E or 86E.
1248           Drive cable group C            (DS8886) Connects the drives from a second expansion
                                              model 85E or 86E to the base model 985 or 986 and
                                              first expansion model 85E or 86E.
1249           Drive cable group D            (DS8886) Connects the drives from a third expansion
                                              model 85E or 86E to a second expansion model 85E or 86E.
1251           Drive cable group E            (DS8886) Connects the drives from a fourth expansion
                                              model 85E or 86E to a third expansion model 85E or 86E.
1252           Extended drive cable group C   (DS8886) Extended 20-meter cable that connects the
                                              drives from a second expansion model 85E or 86E to the
                                              base model 985 or 986 and first expansion model 85E or 86E.
1253           Extended drive cable group D   (DS8886) Extended 20-meter cable that connects the
                                              drives from a third expansion model 85E or 86E to a
                                              second expansion model 85E or 86E.
1254           Extended drive cable group E   (DS8886) Extended 20-meter cable that connects the
                                              drives from a fourth expansion model 85E or 86E to a
                                              third expansion model 85E or 86E.
1261           Drive cable group A            (DS8884) Connects the disk drives to the device
                                              adapters within the same base model 984.
1262           Drive cable group B            (DS8884) Connects the disk drives to the device
                                              adapters in the first expansion model 84E.
1263           Drive cable group C            (DS8884) Connects the disk drives from a second
                                              expansion model 84E to the base model 984 and first
                                              expansion model 84E.
1266           Extended drive cable group C   (DS8884) Extended 20-meter cable that connects the
                                              drives from a second expansion model 84E to the base
                                              model 984 and first expansion model 84E.

Configuration rules for storage features


Use the following general configuration rules and ordering information to help you
order storage features.

High Performance Flash Enclosures Gen2

Follow these configuration rules when you order storage features for storage
systems with High Performance Flash Enclosures Gen2.

High Performance Flash Enclosures Gen2
DS8884 configurations can have one High Performance Flash Enclosure
Gen2 pair in the base and first expansion frames. DS8884F configurations
can have one High Performance Flash Enclosure Gen2 pair in the base
frame. DS8886 configurations can have two High Performance Flash
Enclosure Gen2 pairs in the base and first expansion frames. DS8886F
configurations can have two High Performance Flash Enclosure Gen2 pairs
in the base and first expansion frames. DS8888F configurations can have
four High Performance Flash Enclosure Gen2 pairs in the base and first
expansion frames.
Flash-card sets
Each High Performance Flash Enclosure Gen2 pair requires a minimum of
one flash-card set (16 flash cards).
Storage enclosure fillers
For High Performance Flash Enclosures Gen2, one filler feature provides a
set of 16 fillers. If only one flash-card set is ordered, then two storage
enclosure fillers are needed to fill the remaining 32 slots in the High
Performance Flash Enclosures Gen2 pair.
Flash RAID adapters
The flash RAID adapters are included in the high-performance flash
enclosure feature and are not ordered separately.
Drive cables
One drive cable set is required to connect the High Performance Flash
Enclosure Gen2 to dedicated PCIe ports in the I/O enclosure. This cable set
is included in the High Performance Flash Enclosure Gen2 feature and is
not ordered separately.

Standard drive enclosures

Follow these configuration rules when you order storage features for storage
systems with standard drive enclosures.
Standard drive enclosures
Storage enclosures are installed from the bottom to the top of each base or
expansion frame. Depending on the number of drive sets that you order
for the expansion frame, you might be required to fully populate the
standard drive enclosure before you can order the next required drive sets.
Drive sets
Each standard high-density drive enclosure requires a minimum of eight
flash drives or disk drives. The drive features that you order for the
standard drive enclosure must be of the same type, capacity, and speed.
Each base frame requires a minimum of one drive set.
Storage enclosure fillers
One filler feature provides a set of 8 or 16 fillers. Two filler features are
required if only one drive-set feature is ordered for a standard
drive-enclosure pair. One filler feature is required if two drive-set features
are ordered for the standard drive-enclosure pair.
Device adapters
Device adapters are ordered in pairs. A minimum of one pair is required
for each base frame.

For DS8884, configurations with 64 GB of memory can have two
device-adapter pairs, and configurations with 128 GB or 256 GB of memory
can have four device-adapter pairs.
For DS8886, configurations with 8-core processors can have four
device-adapter pairs, and configurations with 16-core and 24-core
processors can have eight device-adapter pairs.
Drive cables
At least one drive cable set is required to connect the disk drives to the
device adapters.
The disk-drive cable groups must be ordered as follows:
v If the disk drives connect to device adapters within the same base frame,
order disk drive cable group A.
v If the disk drives connect to device adapters within the first expansion
frame, order disk drive cable group B.
v If the disk drives are in a second expansion frame, order disk drive cable
group C.
v If the disk drives are in a third expansion frame (85E, 86E), order disk
drive cable group D.
v If the disk drives are in a fourth expansion frame (85E, 86E), order disk
drive cable group E.
v If the disk drives are in a remote expansion frame (up to 20 meters from
the base frame), order extended disk drive cable group C, D, or E.

Physical and effective capacity


Use the following information to calculate the physical and effective capacity of a
storage system.

To calculate the total physical capacity of a storage system, multiply each drive-set
feature by its total physical capacity and sum the values. For the standard drive
enclosures, a full drive-set feature consists of 16 identical disk drives with the same
drive type, capacity, and speed. For High Performance Flash Enclosures Gen2,
there are two drive sets, one with 16 identical flash cards, and the other with 14
identical flash cards.
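For example, using the drive-set physical capacities that are listed in the tables later in this chapter, a configuration with two 600 GB disk-drive sets and one 800 GB flash-drive set has the following total physical capacity:

   (2 x 9,600 GB) + (1 x 12,800 GB) = 32,000 GB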

The logical configuration of your storage affects the effective capacity of the drive
set.

Specifically, effective capacities vary depending on the following configurations:


RAID type and spares
Drives in the DS8000 must be configured as RAID 5, RAID 6, or RAID 10
arrays before they can be used, and then spare drives are assigned. RAID
10 can offer better performance for selected applications, in particular, high
random, write content applications in the open systems environment.
RAID 6 increases data protection by adding an extra layer of parity over
the RAID 5 implementation.
Data format
Arrays are logically configured and formatted as fixed block (FB) or count
key data (CKD) ranks. Data that is accessed by open systems hosts or
Linux on IBM z Systems™ that support Fibre Channel protocol must be
logically configured as FB. Data that is accessed by IBM z Systems hosts

with z/OS or z/VM must be configured as CKD. Each RAID rank is
divided into equal-sized segments that are known as extents.
The storage administrator has the choice to create extent pools of different
extent sizes. The supported extent sizes for FB volumes are 1 GB or 16 MB.
For CKD volumes, the supported extent sizes are one 3390 Mod 1 (1,113
cylinders) or 21 cylinders. An extent pool cannot have a mix of different extent sizes.

On prior models of DS8000 series, a fixed area on each rank was assigned to be
used for volume metadata, which reduced the amount of space available for use
by volumes. In the DS8880 family, there is no fixed area for volume metadata, and
this capacity is added to the space available for use. The metadata is allocated in
the storage pool when volumes are created and is referred to as the pool overhead.

The amount of space that can be allocated by volumes is variable and depends on
both the number of volumes and the logical capacity of these volumes. If thin
provisioning is used, then the metadata is allocated for the entire volume when the
volume is created, and not when extents are used, so over-provisioned
environments have more metadata.

Metadata is allocated in units that are called metadata extents, which are 16 MB for
FB data and 21 cylinders for CKD data. There are 64 metadata extents in each user
extent for FB and 53 for CKD. The metadata space usage is as follows:
v Each volume takes one metadata extent.
v Ten extents (or part thereof) for the volume take one metadata extent.
For example, both a 3390-3 and a 3390-9 volume each take two metadata extents
and a 128 GB FB volume takes 14 metadata extents.

Note: In a multiple tier pool volume, metadata is allocated on the upper tiers to
provide maximum performance. Unless Nearline drives are the only tier in the
pool, the metadata is never allocated on this tier. If Flash/SSD is in the pool, then
metadata is allocated first on this tier and then on Enterprise drives (if available).
A pool with 10% Flash/SSD or greater would have all of the volume metadata on
this tier.

A simple way of estimating the maximum space that might be used by volume
metadata is to use the following calculations:
FB Pool Overhead = (#volumes*2 + total volume extents / 10)/64 - rounded up
to the nearest integer
CKD Pool Overhead = (#volumes*2 + total volume extents / 10)/53 - rounded up
to the nearest integer

These calculations overestimate the space that is used by metadata by a small
amount, but the precise details of each volume do not need to be known.

Examples:
v For an FB storage pool with 6,190 extents in which you expect to use thin
provisioning and allocate up to 12,380 extents (2:1 overprovisioning) on 100
volumes, you would have a pool overhead of 23 extents -> (100*2+12380/10)/
64=22.46.
v For a CKD storage pool with 6,190 extents in which you expect to allocate all the
space on 700 volumes, then you would have a pool overhead of 39 extents ->
(700*2+6190/10)/53=38.09.

RAID capacities for DS8880
Use the following information to calculate the physical and effective capacity for
DS8880.

RAID capacities for High Performance Flash Enclosure Gen2 flash cards

The default RAID type for all drives is RAID 6. For drives over 1 TB, RAID 6 or
RAID 10 selection is enforced.
Effective capacity of one rank, in number of extents:

High Performance     Physical capacity of
Flash Enclosure      High Performance Flash                RAID-10 arrays     RAID-5 arrays      RAID-6 arrays
Gen2 disk size       Enclosure Gen2 drive set  Rank type   3+3      4+4       6+P      7+P       5+P+Q    6+P+Q
400 GB               6.4 TB                    FB Lg Ext   1049     1410      2132     2493      1771     2132
                                               FB Sm Ext   67170    90285     136507   159607    113393   136495
                                               CKD Lg Ext  1177     1582      2392     2797      1987     2392
                                               CKD Sm Ext  62388    83858     126797   148256    105328   126787
800 GB               12.8 TB                   FB Lg Ext   2133     2855      4300     5023      3578     4300
                                               FB Sm Ext   136542   182781    275254   321475    229015   275239
                                               CKD Lg Ext  2392     3203      4823     5633      4013     4823
                                               CKD Sm Ext  126821   169768    255651   298601    212705   255655
1.6 TB               25.6 TB                   FB Lg Ext   4301     5746      n/a      n/a       7191     8636
                                               FB Sm Ext   275284   367771    n/a      n/a       460243   552727
                                               CKD Lg Ext  4824     6445      n/a      n/a       8065     9686
                                               CKD Sm Ext  255684   341586    n/a      n/a       427475   513372
3.2 TB               51.2 TB                   FB Lg Ext   8637     11527     n/a      n/a       14417    17307
                                               FB Sm Ext   552771   737753    n/a      n/a       922733   1107703
                                               CKD Lg Ext  9687     12928     n/a      n/a       16170    19412
                                               CKD Sm Ext  513414   685225    n/a      n/a       857029   1028843

RAID 5 array capacities for standard drive enclosures

The following table lists the RAID 5 array capacities for fully populated standard
storage enclosures.
Table 38. RAID capacities for RAID 5 arrays

                            Total physical      Rank type     Effective capacity in GB (number of extents)
Drive size and type         capacity (GB)       (FB or CKD)   with RAID 5 array (see Notes)
                            per drive set                     6+P                    7+P
300 GB disk drives          4,800               FB            1,664.30 (1,550)       1,947.77 (1,814)
                                                CKD           1,644.16 (1,738)       1,924.18 (2,034)
400 GB 2.5-in. flash        6,400               FB            2,289.22 (2,132)       2,676.84 (2,493)
drives (SSD)                                    CKD           2,261.90 (2,391)       2,645.03 (2,796)
600 GB disk drives          9,600               FB            3,383.73 (3,156)       3,959.96 (3,688)
                                                CKD           3,348.86 (3,540)       3,912.68 (4,136)
800 GB 2.5-in. flash        12,800              FB            4,617.09 (4,300)       5,392.33 (5,022)
drives (SSD)                                    CKD           4,561.64 (4,822)       5,328.85 (5,633)
1.2 TB disk drives          19,200              FB            6,816.11 (6,348)       7,958.57 (7,412)
                                                CKD           6,735.56 (7,120)       7,864.14 (8,313)
1.6 TB flash drives (SSD)   25,600              FB            9,271.76 (8,635)       10,823.32 (10,080)
                                                CKD           9,162.06 (9,685)       10,695.64 (11,306)
1.8 TB disk drives          28,800              FB            10,243.50 (9,540)      11,957.19 (11,136)
                                                CKD           10,122.26 (10,700)     11,815.61 (12,490)
Notes:
1. Disk-drive and flash-drive sets contain 16 disk drives. Half-drive sets contain 8 disk
drives.
2. Physical capacities are in decimal gigabytes (GB) and terabytes (TB). One decimal GB is
1,000,000,000 bytes. One decimal TB is 1,000,000,000,000 bytes.
3. Rank capacities of DS8880 are different from rank capacities of earlier DS8000 storage
systems. This change in the number of extents must be planned for when you move or
migrate data to the DS8880.
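The GB values in these tables can be cross-checked from the extent counts, assuming that one FB large extent is 1 GiB (1.073741824 decimal GB) and one CKD large extent is one 3390 Mod 1 (1,113 cylinders at 849,960 bytes per cylinder, or approximately 0.946 decimal GB). For example, for the 300 GB 6+P rank:

FB:  1,550 extents x 1.073741824 GB = 1,664.30 GB
CKD: 1,738 extents x 0.94600548 GB  = 1,644.16 GB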

RAID 6 array capacities for standard drive enclosures

The following table lists the RAID 6 array capacities for fully populated storage
enclosures.
Table 39. RAID capacities for RAID 6 arrays

                            Total physical      Rank type     Effective capacity in GB (number of extents)
Drive size and type         capacity (GB)       (FB or CKD)   with RAID 6 array (see Notes)
                            per drive set                     5+P+Q                  6+P+Q
300 GB disk drives          4,800               FB            1,347.55 (1,255)       1,628.87 (1,517)
                                                CKD           1,331.98 (1,408)       1,610.10 (1,702)
400 GB 2.5-in. flash        6,400               FB            1,856.50 (1,729)       2,241.97 (2,088)
drives (SSD)                                    CKD           1,834.30 (1,939)       2,214.60 (2,341)
600 GB disk drives          9,600               FB            2,750.93 (2,562)       3,318.94 (3,091)
                                                CKD           2,718.82 (2,874)       3,279.80 (3,467)
800 GB 2.5-in. flash        12,800              FB            3,750.58 (3,493)       4,521.53 (4,211)
drives (SSD)                                    CKD           3,706.45 (3,918)       4,467.98 (4,723)
1.2 TB disk drives          19,200              FB            5,541.58 (5,161)       6,676.53 (6,218)
                                                CKD           5,475.48 (5,788)       6,597.44 (6,974)
1.6 TB 2.5-in. flash        25,600              FB            7,539.82 (7,022)       9,081.71 (8,458)
drives (SSD)                                    CKD           7,450.74 (7,876)       8,974.75 (9,487)
1.8 TB disk drives          28,800              FB            8,331.16 (7,759)       10,034.12 (9,345)
                                                CKD           8,232.14 (8,702)       9,915.08 (10,481)
4 TB disk drives            64,000              FB            18,533.86 (17,261)     22,304.84 (20,773)
(Nearline)                                      CKD           18,313.72 (19,359)     22,040.98 (23,299)
6 TB disk drives            96,000              FB            27,896.89 (25,981)     33,573.76 (31,268)
(Nearline)                                      CKD           27,566.60 (29,140)     33,176.41 (35,070)
Notes:
1. Disk-drive and flash-drive sets contain 16 disk drives. Half-drive sets contain 8 disk
drives.
2. Physical capacities are in decimal gigabytes (GB) and terabytes (TB). One decimal GB is
1,000,000,000 bytes. One decimal TB is 1,000,000,000,000 bytes.
3. Rank capacities of DS8880 are different from rank capacities of earlier DS8000 storage
systems. This change in the number of extents must be planned for when you move or
migrate data to the DS8880.

RAID 10 array capacities for standard drive enclosures

The following table lists the RAID 10 array capacities for fully populated storage
enclosures.
Table 40. RAID capacities for RAID 10 arrays

                                                                 Effective capacity in GB
                                                                 (number of extents) (notes 2, 3)
Drive size and type          Total physical capacity   FB or     Rank with RAID 10 array
                             (GB) per drive set        CKD       3 + 3                  4 + 4
                             (notes 1, 2)
300 GB disk drives           4,800                     FB        813.90 (758)           1,097.36 (1,022)
                                                       CKD       804.10 (850)           1,084.12 (1,146)
400 GB 2.5-in. flash         6,400                     FB        1,126.36 (1,049)       1,513.98 (1,410)
drives (SSD)                                           CKD       1,112.50 (1,176)       1,495.63 (1,581)
600 GB disk drives           9,600                     FB        1,676.11 (1,561)       2,247.34 (2,093)
                                                       CKD       1,656.46 (1,751)       2,220.27 (2,347)
800 GB 2.5-in. flash         12,800                    FB        2,290.29 (2,133)       3,065.53 (2,855)
drives (SSD)                                           CKD       2,262.85 (2,392)       3,029.11 (3,202)
1.2 TB disk drives           19,200                    FB        3,389.80 (3,157)       4,532.26 (4,221)
                                                       CKD       3,349.81 (3,541)       4,478.39 (4,734)
1.6 TB flash drives (SSD)    25,600                    FB        4,617.09 (4,300)       6,168.65 (5,745)
                                                       CKD       4,562.58 (4,823)       6,096.06 (6,444)
1.8 TB disk drives           28,800                    FB        5,103.49 (4,753)       6,817.19 (6,349)
                                                       CKD       5,043.16 (5,331)       6,736.51 (7,121)
4 TB disk drives             64,000                    FB        11,385.96 (10,604)     15,180.56 (14,138)
(Nearline)                                             CKD       11,250.84 (11,893)     15,000.81 (15,857)
6 TB disk drives             96,000                    FB        17,136.92 (15,960)     22,849.23 (21,280)
(Nearline)                                             CKD       16,933.50 (17,900)     22,578.31 (23,867)

Notes:
1. Disk-drive and flash-drive sets contain 16 disk drives. Half-drive sets contain 8 disk
drives.
2. Physical capacities are in decimal gigabytes (GB) and terabytes (TB). One decimal GB is
1,000,000,000 bytes. One decimal TB is 1,000,000,000,000 bytes.
3. Rank capacities of DS8880 are different from rank capacities of earlier DS8000 storage
systems. This change in the number of extents must be planned for when you move or
migrate data to the DS8880.

I/O adapter features
You must select the I/O adapter features that you want for your storage system.

The I/O adapter features are separated into the following categories:
v I/O enclosures
v Device adapters
v Host adapters
v Host adapter Fibre Channel cables

I/O enclosures
I/O enclosures are required for your storage system configuration.

The I/O enclosures hold the I/O adapters and provide connectivity between the
I/O adapters and the storage processors. I/O enclosures are ordered and installed
in pairs.

The I/O adapters in the I/O enclosures can be either device or host adapters. Each
I/O enclosure pair can support up to four device adapters (two pairs), and up to
eight host adapters (not to exceed 32 host adapter ports).
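
As a quick illustration of these limits, the following hedged sketch (the function name and
structure are hypothetical, not part of any IBM tool) checks a proposed I/O enclosure pair
configuration against the rules above:

   # Illustrative sketch: validating an I/O enclosure pair configuration
   # against the limits stated above (names are hypothetical).
   def io_enclosure_pair_ok(device_adapters, host_adapters, ports_per_adapter):
       total_ports = host_adapters * ports_per_adapter
       return (device_adapters <= 4 and host_adapters <= 8
               and total_ports <= 32)

   print(io_enclosure_pair_ok(4, 8, 4))   # True: 32 host adapter ports
   print(io_enclosure_pair_ok(2, 8, 8))   # False: 64 ports exceeds the limit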

Feature codes for I/O enclosures


Use this feature code to order I/O enclosures for your storage system.

The I/O enclosure feature includes two I/O enclosures. This feature supports up to
two device adapter pairs, up to four host adapters with eight ports, and up to
eight host adapters with four ports. This feature includes four dedicated PCIe ports
for attachment of High Performance Flash Enclosures Gen2.
Table 41. Feature codes for I/O enclosures
Feature code Description
1303 I/O enclosure pair for PCIe group 3

Feature codes for I/O cables


Use these feature codes to order the I/O cables for your storage system.
Table 42. Feature codes for PCIe cables

Feature code   Cable group          Description                                       Models
1320           PCIe cable group 1   Connects device and host adapters in an I/O       984, 985, 986
                                    enclosure pair to the processor.
1321           PCIe cable group 2   Connects device and host adapters in I/O          984, 985, 986
                                    enclosure pairs to the processor.

Fibre Channel (SCSI-FCP and FICON) host adapters and cables
You can order Fibre Channel host adapters for your storage-system configuration.

The Fibre Channel host adapters enable the storage system to attach to Fibre
Channel (SCSI-FCP) and FICON servers, and SAN fabric components. They are
also used for remote mirror and copy control paths between DS8000 series storage
systems, or between a DS8000 series storage system and a 2105 (model 800 or 750)
storage system. Fibre Channel host adapters are installed in an I/O enclosure.

Adapters are either 4-port or 8-port 8 Gbps, or 4-port 16 Gbps.

Supported protocols include the following types:


v SCSI-FCP upper layer protocol (ULP) on point-to-point, fabric, and arbitrated
loop (private loop) topologies.

Note: The 16 Gbps adapter does not support arbitrated loop topology at any
speed.
v FICON ULP on point-to-point and fabric topologies.

Notes:
1. SCSI-FCP and FICON are supported simultaneously on the same adapter, but
not on the same port.
2. For highest availability, ensure that you add adapters in pairs.

A Fibre Channel cable is required to attach each Fibre Channel adapter port to a
server or fabric component port. The Fibre Channel cables are either 50-micron
multimode (OM3 or higher fiber grade) or 9-micron single-mode cables.

Feature codes for Fibre Channel host adapters


Use these feature codes to order Fibre Channel host adapters for your storage
system.

A maximum of 8 Fibre Channel host adapters can be ordered with DS8884F. A
maximum of 16 Fibre Channel host adapters can be ordered with DS8884 6-core
and DS8886 and DS8886F 8-core processors. A maximum of 32 Fibre Channel host
adapters can be ordered with DS8886 and DS8886F 16-core and 24-core processors.
A maximum of 16 Fibre Channel host adapters can be ordered with DS8888F with
24-core processors, and a maximum of 32 Fibre Channel host adapters can be
ordered with DS8888F with 48-core processors.
Table 43. Feature codes for Fibre Channel host adapters

Feature code    Description                                               Receptacle type
3153            4-port, 8 Gbps shortwave FCP and FICON host adapter,      LC
                PCIe
3157            8-port, 8 Gbps shortwave FCP and FICON host adapter,      LC
                PCIe
3253            4-port, 8 Gbps longwave FCP and FICON host adapter,       LC
                PCIe
3257            8-port, 8 Gbps longwave FCP and FICON host adapter,       LC
                PCIe
3353 (note 1)   4-port, 16 Gbps shortwave FCP and FICON host adapter,     LC
                PCIe
3453 (note 1)   4-port, 16 Gbps longwave FCP and FICON host adapter,      LC
                PCIe

Note:
1. If you are replacing an existing host adapter with a higher-speed host adapter, do not
make any changes to the host configuration after the exchange process starts until the
replacement is complete. Port topology is restored during the exchange process, and
the host configuration appears when the host adapter is installed successfully.

Feature codes for Fibre Channel cables

Use these feature codes to order Fibre Channel cables to connect Fibre Channel
host adapters to your storage system. Note the distance capabilities for each cable
type.

Table 44. Feature codes for Fibre Channel cables

Feature code   Cable type                                Cable length    Compatible Fibre Channel host
                                                                         adapter features
1410           50 micron OM3 or higher Fibre Channel     40 m (131 ft)   Shortwave Fibre Channel or FICON
               cable, multimode                                          host adapters (feature codes 3153,
1411           50 micron OM3 or higher Fibre Channel     31 m (102 ft)   3157, and 3353)
               cable, multimode
1412           50 micron OM3 or higher Fibre Channel     2 m (6.5 ft)
               conversion cable, multimode
1420           9 micron OS1 or higher Fibre Channel      31 m (102 ft)   Longwave Fibre Channel or FICON
               cable, single mode                                        host adapters (feature codes 3253,
1421           9 micron OS1 or higher Fibre Channel      31 m (102 ft)   3257, and 3453)
               cable, single mode
1422           9 micron OS1 or higher Fibre Channel      2 m (6.5 ft)
               conversion cable, single mode

Table 45. Multimode cabling limits

                     Distance limits relative to Gbps
Fibre cable type     2 Gbps    4 Gbps    8 Gbps             16 Gbps
OM1 (62.5 micron)    150 m     70 m      Not recommended    Not recommended
OM2 (50 micron)      300 m     150 m     50 m               35 m, but not recommended
OM3 (50 micron)      500 m     380 m     150 m              100 m
OM4 (50 micron)      500 m     400 m     190 m              125 m
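
If you script cable planning, Table 45 can be encoded as a simple lookup. The following
sketch is illustrative only; the table remains the authoritative source:

   # Illustrative sketch: multimode distance limits from Table 45,
   # in meters; None marks "not recommended".
   LIMITS_M = {
       "OM1": {2: 150, 4: 70,  8: None, 16: None},
       "OM2": {2: 300, 4: 150, 8: 50,   16: 35},   # 35 m is not recommended
       "OM3": {2: 500, 4: 380, 8: 150,  16: 100},
       "OM4": {2: 500, 4: 400, 8: 190,  16: 125},
   }

   def max_distance_m(cable_type, gbps):
       return LIMITS_M[cable_type][gbps]

   print(max_distance_m("OM3", 16))  # 100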

Configuration rules for I/O adapter features


To order I/O adapter features, you must follow specific configuration rules.

The following configuration rules affect I/O adapter features:


v Configuration rules for I/O enclosures and adapters
v Configuration rules for host adapters and host adapter cables

Configuration rules for I/O enclosures, device adapters, and cards
Use these configuration rules and ordering information to help you order I/O
enclosures, device adapters, and cards.

Use the following tables to determine the number of I/O enclosures, device
adapters, and High Performance Flash Enclosure Gen2 features that you need in
various storage configurations. To use a table, find the rows that contain the type
of storage system that you are configuring. Then, find the row that represents the
number of storage enclosures that are installed in that storage system. Use the
remaining columns to find the number of I/O enclosures, device adapters, and
High Performance Flash Enclosure Gen2 cards that you need in the storage system.
Table 46. Required I/O enclosures and device adapters for DS8884 (models 984 and 84E)

Processor   Storage frame       Standard drive      Required device    High Performance       Required I/O
type                            enclosure pair      adapter pair       Flash Enclosure Gen2   enclosure
                                features (1241)     features (3053)    pair features (1600)   features (1303)
                                (note 1)            (note 2)           (note 3)               (note 4)
6-core      Base frame          0 - 4               1                  0 - 1                  1
6-core      First expansion     1 - 4               1                  0                      0
            frame               5                   1                  0 - 1                  1
6-core      Second expansion    1 - 3               0                  n/a                    n/a
            frame               4 - 7               1                  n/a                    n/a

Notes:
1. Each storage enclosure feature represents one storage enclosure pair.
2. Each device adapter feature code represents one device adapter pair. The maximum
quantity is two device adapter features for each I/O enclosure feature in the storage
system.
3. Each High Performance Flash Enclosure Gen2 card feature code represents one
flash-interface-card pair. The maximum quantity is two High Performance Flash
Enclosure Gen2 features for each I/O enclosure feature in the storage system.
4. Each I/O enclosure feature represents one I/O enclosure pair. An I/O enclosure pair
can support up to two device adapter pairs and one High Performance Flash Enclosure
Gen2 card pair.
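
To illustrate how to read Table 46, the following sketch encodes its DS8884 rows as a
lookup that returns the required device adapter pair features (3053). The function is
hypothetical; consult the table itself when you order:

   # Illustrative sketch: DS8884 device adapter pair features per Table 46.
   def ds8884_da_pair_features(frame, enclosure_pairs):
       if frame == "base":                      # 0 - 4 enclosure pairs
           return 1
       if frame == "first expansion":           # 1 - 5 enclosure pairs
           return 1
       if frame == "second expansion":
           return 0 if enclosure_pairs <= 3 else 1
       raise ValueError("unknown frame type")

   print(ds8884_da_pair_features("second expansion", 5))  # 1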

Table 47. Required I/O enclosures for DS8884F (model 984)

Processor type   Storage frame   High Performance Flash        Required I/O enclosure
                                 Enclosure Gen2 pair           features (1301) (note 2)
                                 features (1600) (note 1)
6-core           Base frame      1                             1

Notes:
1. Each High Performance Flash Enclosure Gen2 card feature code represents one
flash-interface-card pair. The maximum quantity is two High Performance Flash
Enclosure Gen2 features for each I/O enclosure feature in the storage system.
2. Each I/O enclosure feature represents one I/O enclosure pair. An I/O enclosure pair
can support one High Performance Flash Enclosure Gen2 card pair.

Table 48. Required I/O enclosures and device adapters for DS8886 (models 985 and 85E)

Processor     Storage frame       Standard drive      Required device    High Performance       Required I/O
type                              enclosure pair      adapter pair       Flash Enclosure Gen2   enclosure
                                  features (1241)     features (3053)    pair features (1600)   features (1303)
                                  (note 1)            (note 2)           (note 3)               (note 4)
8-core,       Base frame          1                   1                  0 - 1                  1
16-core, or                       2                   2                  0 - 1                  1
24-core                           3                   3                  0 - 2                  2
16-core or    First expansion     1                   1                  0 - 1                  1
24-core       frame               2                   2                  0 - 1                  1
                                  3                   3                  0 - 2                  2
                                  4 - 5               4                  0 - 2                  2
16-core or    Second or third     1                   1                  n/a                    n/a
24-core       expansion frame     2                   2                  n/a                    n/a
                                  3                   3                  n/a                    n/a
                                  4 - 9               4                  n/a                    n/a
16-core or    Fourth expansion    1                   1                  n/a                    n/a
24-core       frame               2                   2                  n/a                    n/a
                                  3                   3                  n/a                    n/a
                                  4 - 6               4                  n/a                    n/a

Notes:
1. Each storage enclosure feature represents one storage enclosure pair.
2. Each device adapter feature code represents one device adapter pair. The maximum
quantity is two device adapter features for each I/O enclosure feature in the storage
system.
3. Each High Performance Flash Enclosure Gen2 card feature code represents one
flash-interface-card pair. The maximum quantity is two High Performance Flash
Enclosure Gen2 features for each I/O enclosure feature in the storage system.
4. Each I/O enclosure feature represents one I/O enclosure pair. An I/O enclosure pair
can support up to two device adapter pairs and one High Performance Flash Enclosure
Gen2 card pair.

Table 49. Required I/O enclosures and device adapters for DS8886 (models 986 and 86E)

Processor     Storage frame        Standard drive      Required device    High Performance       Required I/O
type                               enclosure pair      adapter pair       Flash Enclosure Gen2   enclosure
                                   features (1241)     features (3053)    pair features (1600)   features (1303)
                                   (note 1)            (note 2)           (note 3)               (note 4)
8-core,       Base frame           1                   1                  0 - 1                  1
16-core, or                        2                   2                  0 - 1                  1
24-core
16-core or    First expansion      1                   1                  0 - 1                  1
24-core       frame                2                   2                  0 - 1                  1
                                   3                   3                  0 - 2                  2
                                   4                   4                  0 - 2                  2
16-core or    Second, third, or    1                   1                  n/a                    n/a
24-core       fourth expansion     2                   2                  n/a                    n/a
              frame                3                   3                  n/a                    n/a
                                   4 - 9               4                  n/a                    n/a

Notes:
1. Each storage enclosure feature represents one storage enclosure pair.
2. Each device adapter feature code represents one device adapter pair. The maximum
quantity is two device adapter features for each I/O enclosure feature in the storage
system.
3. Each High Performance Flash Enclosure Gen2 card feature code represents one
flash-interface-card pair. The maximum quantity is two High Performance Flash
Enclosure Gen2 features for each I/O enclosure feature in the storage system.
4. Each I/O enclosure feature represents one I/O enclosure pair. An I/O enclosure pair
can support up to two device adapter pairs and one High Performance Flash Enclosure
Gen2 card pair.

Table 50. Required I/O enclosures for DS8886F (models 985 and 85E, or 986 and 86E)

Processor type          Storage frame      High Performance Flash        Required I/O enclosure
                                           Enclosure Gen2 pair           features (1301) (note 2)
                                           features (1600) (note 1)
8-core, 16-core, or     Base frame         1                             1
24-core                                    2                             2
16-core or 24-core      Expansion frame    1                             1
                                           2                             2

Notes:
1. Each High Performance Flash Enclosure Gen2 card feature code represents one
flash-interface-card pair. The maximum quantity is two High Performance Flash
Enclosure Gen2 features for each I/O enclosure feature in the storage system.
2. Each I/O enclosure feature represents one I/O enclosure pair. An I/O enclosure pair
can support one High Performance Flash Enclosure Gen2 card pair.

Table 51. Required I/O enclosures for DS8888F (models 988 and 88E)

Processor type        Storage frame      High Performance Flash        Required I/O enclosure
                                         Enclosure Gen2 pair           features (1301) (note 2)
                                         features (1600) (note 1)
24-core or 48-core    Base frame         1                             1
                                         2                             2
                                         3                             3
                                         4                             4
48-core               Expansion frame    1                             1
                                         2                             2
                                         3                             3
                                         4                             4

Notes:
1. Each High Performance Flash Enclosure Gen2 card feature code represents one
flash-interface-card pair. The maximum quantity is two High Performance Flash
Enclosure Gen2 features for each I/O enclosure feature in the storage system.
2. Each I/O enclosure feature represents one I/O enclosure pair. An I/O enclosure pair
can support one High Performance Flash Enclosure Gen2 card pair.

Configuration rules for host adapters


Use the following configuration rules and ordering information to help you order
host adapters.

When you configure your storage system, consider the following issues when you
order the host adapters:
v What are the minimum and maximum numbers of host adapters that I can
install?
v How can I balance the host adapters across the storage system to help ensure
optimum performance?
v What host adapter configurations help ensure high availability of my data?
v How many and what type of cables do I need to order to support the host
adapters?

In addition, consider the following host adapter guidelines.


v You can include a combination of Fibre Channel host adapters in one I/O
enclosure.
v Feature conversions are available to exchange installed adapters when new
adapters of a different type are purchased.

Maximum and minimum configurations

The following table lists the minimum and maximum host adapter features for the
base frame (model 984, 985, 986, 988).
Table 52. Minimum and maximum host adapter features for the base frame

Storage system type     Storage system                     Minimum number of host    Maximum number of host
                        configuration                      adapter features for      adapter features for the
                                                           the base frame            storage system (note 1)
6-core                  Base frame + 2 expansion frames    2                         16
8-core                  Base frame                         2                         16
16-core or 24-core      Base frame + 1 - 4 expansion       2                         32
(model 985, 986)        frames
24-core (model 988)     Base frame                         2                         16
48-core (model 988)     Base frame + 1 expansion frame     2                         32

Note:
1. For all configurations, the maximum number of host adapters for any one frame
cannot exceed 8 with 8-port 8-Gbps host adapters or 16 with 4-port 8-Gbps or 16-Gbps host
adapters. You can add host adapters only to the base frame (model 984, 985, 986, 988) and
the first expansion frame (model 84E, 85E, 86E, 88E).
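
One way to see the intent of this note is that both limits work out to the same port count
per frame. A small illustrative sketch:

   # Illustrative sketch: 8 eight-port adapters and 16 four-port adapters
   # both top out at 64 host adapter ports per frame.
   for ports_per_adapter in (4, 8):
       max_adapters = 8 if ports_per_adapter == 8 else 16
       print(ports_per_adapter, max_adapters, ports_per_adapter * max_adapters)
   # 4 16 64
   # 8 8 64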

Configuring for highest availability

After you meet the initial minimum order requirement, you can order one host
adapter at a time. However, it is recommended that you add host adapters (of the
same type) in pairs.

For optimum performance, it is important that you aggregate the bandwidth across
all the hardware by installing host adapters evenly across all available I/O
enclosures.

Notes:
v Although one multiport adapter can provide redundant pathing, keep in mind
that any host requires access to data through a minimum of two separate host
adapters. To maintain access to data, in the event of a host adapter failure or an
I/O enclosure failure, the host adapters must be in different I/O enclosures.
v If a technical support representative moves existing host adapters from one slot
to another, you can configure the host ports on your storage system by using the
DS Storage Manager or the DS CLI.

Ordering host adapter cables

For each host adapter, you must provide the appropriate fiber-optic cables.
Typically, to connect Fibre Channel host adapters to a server or fabric port, provide
the following cables:
v For shortwave Fibre Channel host adapters, provide a 50-micron multimode
OM3 or higher fiber-optic cable that ends in an LC connector.
v For longwave Fibre Channel host adapters, provide a 9-micron single mode OS1
or higher fiber-optic cable that ends in an LC connector.

These fiber-optic cables are available for order from IBM.

IBM Global Services Networking Services can assist with any unique cabling and
installation requirements.

Processor complex features


These features specify the number and type of core processors in the processor
complex. All base frames (model 98x) contain two processor enclosures (POWER8
servers) that contain the processors and memory that drives all functions in the
storage system.

Feature codes for processor licenses
Use these processor-license feature codes to plan for and order processor licenses
for your storage system. You can order only one processor license per system.
Table 53. Feature codes for processor licenses

Feature code   Description                          Corequisite feature code for memory
4421           6-core POWER8 processor feature      4223, 4224, or 4225
4422           8-core POWER8 processor feature      4324 or 4325
4423           16-core POWER8 processor feature     4325 or 4326
4424           24-core POWER8 processor feature     4327 or 4328
4868           24-core POWER8 processor feature     4487
4888           48-core POWER8 processor feature     4488

Processor memory features


These features specify the amount of memory that you need depending on the
processors in the storage system.

Feature codes for system memory


Use these feature codes to order system memory for your storage system.

Note: Memory is not the same as cache. The amount of cache is less than the
amount of available memory. See the DS8000 Storage Management GUI.
Table 54. Feature codes for system memory

Feature code        Description             Model
4223 (note 1)       64 GB system memory     984 (6-core)
4224 (note 1)       128 GB system memory    984 (6-core)
4225 (note 1)       256 GB system memory    984 (6-core)
4324 (note 2)       128 GB system memory    985 or 986 (8-core)
4325 (notes 2, 3)   256 GB system memory    985 or 986 (8-core and 16-core)
4326 (note 3)       512 GB system memory    985 or 986 (16-core)
4327 (note 4)       1 TB system memory      985 or 986 (24-core)
4328 (note 4)       2 TB system memory      985, 986, 988 (24-core)
4487 (note 5)       1 TB system memory      988 (24-core)
4488 (note 6)       2 TB system memory      988 (48-core)

Notes:
1. Feature codes 4223, 4224, and 4225 require 6-core processor license feature code 4421.
2. Feature codes 4324 and 4325 require 8-core processor license feature code 4422.
3. Feature codes 4325 and 4326 require 16-core processor license feature code 4423.
4. Feature codes 4327 and 4328 require 24-core processor license feature code 4424.
5. Feature code 4487 requires 24-core processor license feature code 4868.
6. Feature code 4488 requires 48-core processor license feature code 4888.

Power features
You must specify the power features to include on your storage system.

The power features are separated into the following categories:


v Power cords
v Input voltage
v DC-UPS (direct current uninterruptible power supply)

For the DS8880 base frame and expansion frame, the DC-UPS is included in your
order.

Power cords
A pair of power cords (also known as power cables) is required for each base or
expansion frame.

The DS8000 series has redundant primary power supplies. For redundancy, ensure
that each power cord to the frame is supplied from an independent power source.

Feature codes for power cords


Use these feature codes to order power cords for DS8880 base or expansion racks.
Each feature code includes two power cords. Ensure that you meet the
requirements for each power cord and connector type that you order.

Important: IBM Safety requires a minimum of one IBM safety-approved ladder
(feature code 1101) to be available at each installation site when the top exit
bracket (feature code 1400) is specified for overhead cabling and when the
maximum height of the overhead power source is 10 ft from the ground level. This
ladder is a requirement for storage-system installation and service.
Table 55. Feature codes for power cords

Feature code   Power cord type                                            Wire gauge
1062           Single-phase power cord, 200-240 V, 60 A, 3-pin            10 mm² (7 AWG)
               connector
               HBL360C6W, Pin and Sleeve Connector, IEC 60309, 2P3W
               HBL360R6W, AC Receptacle, IEC 60309, 2P3W
1063           Single-phase power cord, 200-240 V, 63 A, no connector     10 mm² (7 AWG)
1086           Three-phase WYE (3ØY) voltage (five-wire 3+N+PE),          6 mm² (10 AWG)
               380-415 V~ (nominal line-to-line (LL)), 30 A, IEC 60309
               5-pin customer connector
               HBL530C6V02, Pin and Sleeve Connector, IEC 60309, 4P5W
               HBL530R6V02, AC Receptacle, IEC 60309, 4P5W
1087           Three-phase Delta (3Ø∆) voltage (four-wire 3+PE            6 mm² (10 AWG)
               (Protective Earth Ground)), 200-240 V, 30 A, IEC 60309
               4-pin customer connector
               HBL430C9W, Pin and Sleeve Connector, IEC 60309, 3P4W
               HBL430P9V04, Pin and Sleeve Connector, IEC 60309, 3P4W
               HBL430R9W, AC Receptacle, IEC 60309, 3P4W
1088           Three-phase WYE (3ØY) voltage (five-wire 3+N+PE),          6 mm² (10 AWG)
               380-415 V~ (nominal line-to-line (LL)), 40 A, no
               customer connector provided
               Inline connector: not applicable
               Receptacle: not applicable
1089           Three-phase Delta (3Ø∆) voltage (four-wire 3+PE            10 mm² (8 AWG)
               (Protective Earth Ground)), 200-240 V, 60 A, IEC 60309
               4-pin customer connector
               HBL460C9W, Pin and Sleeve Connector, IEC 60309, 3P4W
               HBL460R9W, AC Receptacle, IEC 60309, 3P4W

Input voltage
The DC-UPS distributes full wave, rectified power that ranges from 200 V AC to
240 V AC.

Direct-current uninterruptible-power supply


Each frame includes two direct-current uninterruptible-power supplies (DC-UPSs).
Each DC-UPS includes a battery-service-module set.

The DC-UPSs with integrated battery-service-module sets provide the ability to
tolerate a power line disturbance. Loss of power to the frame of up to 4 seconds is
tolerated without a separate feature code. With the extended power line
disturbance (ePLD) feature, loss of power for 40 seconds is tolerated without
interruption of service.

The DC-UPS monitors its own alternating current (ac) input. Each DC-UPS rectifies
and distributes the input ac. If a single DC-UPS in a frame loses ac input power,
that DC-UPS receives and distributes rectified ac from the partner DC-UPS in that
frame. If both DC-UPSs in that frame lose ac-input power, the DC-UPSs go "on
battery." If ac input is not restored within 4 seconds (or 40 seconds with the ePLD
feature), the storage system commences shutdown.

Activation and recovery for system failure: If both power cords lose ac input, the
DC-UPS senses that both partner power and local power are running on batteries.
Both stay on battery power and provide status to the rack power control (RPC),
which initiates a recovery process.
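
The following sketch is an illustrative model of this power-loss behavior (hypothetical
code, not RPC firmware logic), assuming the 4-second and 40-second hold-up times
described above:

   # Illustrative model of DC-UPS behavior on loss of ac input.
   def dc_ups_state(local_ac_ok, partner_ac_ok, seconds_on_battery, epld=False):
       if local_ac_ok:
           return "distribute rectified local input"
       if partner_ac_ok:
           return "distribute rectified input from partner DC-UPS"
       hold_up = 40 if epld else 4
       return "on battery" if seconds_on_battery <= hold_up else "shutdown"

   print(dc_ups_state(False, True, 0))           # partner carries the frame
   print(dc_ups_state(False, False, 10))         # shutdown without ePLD
   print(dc_ups_state(False, False, 10, True))   # still on battery with ePLD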

Feature codes for battery service modules


Use these feature codes to order battery service modules for your base and
expansion racks.
Table 56. Feature codes for the battery service module

Feature code   Description                                Models
1050           Single-phase DC-UPS battery service        984 and 84E, 985 and 85E
               module
1052           Three-phase DC-UPS battery service         986 and 86E, 988 and 88E
               module

Configuration rules for power features


Ensure that you are familiar with the configuration rules and feature codes before
you order power features.

When you order power cord features, the following rules apply.
v You must order a minimum of one power cord feature for each frame. Each
feature code represents a pair of power cords (two cords).
v You must select the power cord that is appropriate to the input voltage and
geographic region where the storage system is located.

If the optional extended power line disturbance (ePLD) option is needed, you must
order feature code 1055 for each base frame and expansion frame. The ePLD
option protects the storage system from a power-line disturbance for 40 seconds.

Other configuration features


Features are available for shipping and setting up the storage system.

You can select shipping and setup options for the storage system. The following
list identifies optional feature codes that you can specify to customize or to receive
your storage system.
v Extended power line disturbance (ePLD) option
v BSMI certificate (Taiwan)
v Shipping weight reduction option

Extended power line disturbance


The extended power line disturbance (ePLD) option (feature code 1055) gives the
storage system the ability to tolerate power line disturbance for 40 seconds, rather
than 4 seconds without the ePLD feature. This feature is optional for your storage
system configuration.

Feature code for extended power-line disturbance


Use this feature code to order the extended power-line disturbance (ePLD) feature
for your storage system.

Table 57. Feature code for the ePLD
Feature code Description Models
1055 Extended power line disturbance All

BSMI certificate (Taiwan)


The BSMI certificate for Taiwan option provides the required Bureau of Standards,
Metrology, and Inspection (BSMI) ISO 9001 certification documents for storage
system shipments to Taiwan.

If the storage system that you order is shipped to Taiwan, you must order this
option for each frame that is shipped.

Feature code for BSMI certification documents (Taiwan)


Use this feature code to order the Bureau of Standards, Metrology, and
Inspection (BSMI) certification documents that are required when the storage
system is shipped to Taiwan.
Table 58. Feature code for the BSMI certification documents (Taiwan)
Feature code Description
0400 BSMI certification documents

Shipping weight reduction


Order the shipping weight reduction option to receive delivery of a storage system
in multiple shipments.

If your site has delivery weight constraints, IBM offers a shipping weight reduction
option that ensures the maximum shipping weight of the initial frame shipment
does not exceed 909 kg (2000 lb). The frame weight is reduced by removing
selected components, which are shipped separately.

The IBM technical service representative installs the components that were shipped
separately during the storage system installation. This feature increases storage
system installation time, so order it only if it is required.

Feature code for shipping weight reduction


Use this feature code to order the shipping-weight reduction option for your
storage system.

This feature ensures that the maximum shipping weight of the base rack or
expansion rack does not exceed 909 kg (2000 lb) each. Packaging adds 120 kg (265
lb).
Table 59. Feature code for shipping weight reduction
Feature code Description Models
0200 Shipping weight reduction All

Chapter 5. Planning use of licensed functions
Licensed functions are the operating system and functions of the storage system.
Required features and optional features are included.

IBM authorization for licensed functions is purchased as 283x or 904x machine
function authorizations. However, the licensed functions are listed as storage
models. For example, the Base Function license is listed as a 283x or 904x model
LF8. The 283x or 904x machine function authorization features are for billing
purposes only.

The following licensed functions are available:


Base Function
The Base Function license is required for each DS8880 storage system.
z-synergy Services
The z-synergy Services include z/OS licensed features that are supported
on the storage system.
Copy Services
Copy Services features help you implement storage solutions to keep your
business running 24 hours a day, 7 days a week by providing data
duplication, data migration, and disaster recovery functions.
Copy Services Manager on Hardware Management Console
The Copy Services Manager on Hardware Management Console (CSM on
HMC) license enables IBM Copy Services Manager to run on the Hardware
Management Console, which eliminates the need to maintain a separate
server for Copy Services functions.

Licensed function indicators


Each licensed function indicator feature that you order on a base frame enables
that function at the system level.

After you receive and apply the feature activation codes for the licensed function
indicators, the licensed functions are enabled for you to use. The licensed function
indicators are also used for maintenance billing purposes.

Note: Retrieving feature activation codes is part of managing and activating your
licenses. Before you can logically configure your storage system, you must first
manage and activate your licenses.

Each licensed function indicator requires a corequisite 283x or 904x function
authorization. Function authorization establishes the extent of IBM authorization
for the licensed function before the feature activation code is provided by IBM.
Each function authorization applies only to the specific storage system (by serial
number) for which it was acquired. The function authorization cannot be
transferred to another storage system (with a different serial number).

License scope
Licensed functions are activated and enforced within a defined license scope.

License scope refers to the following types of storage and types of servers with
which the function can be used:
Fixed block (FB)
The function can be used only with data from Fibre Channel attached
servers. The Base Function, Copy Services, and Copy Services Manager on
the Hardware Management Console licensed functions are available within
this scope.
Count key data (CKD)
The function can be used only with data from FICON attached servers.
The Copy Services, Copy Services Manager on the Hardware Management
Console, and z-synergy Services licensed functions are available within this
scope.
Both FB and CKD (ALL)
The function can be used with data from all attached servers. The Base
Function, Copy Services, and Copy Services Manager on the Hardware
Management Console licensed functions are available within this scope.

Some licensed functions have multiple license scope options, while other functions
have only a single license scope.

You do not specify the license scope when you order function authorization feature
numbers. Feature numbers establish only the extent of the IBM authorization (in
terms of physical capacity), regardless of the storage type. However, if a licensed
function has multiple license scope options, you must select a license scope when
you initially retrieve the feature activation codes for your storage system. This
activity is performed by using the IBM Data storage feature activation (DSFA)
website (www.ibm.com/storage/dsfa).

Note: Retrieving feature activation codes is part of managing and activating your
licenses. Before you can logically configure your storage system, you must first
manage and activate your licenses.

When you use the DSFA website to change the license scope after a licensed
function is activated, a new feature activation code is generated. When you install
the new feature activation code into the storage system, the function is activated
and enforced by using the newly selected license scope. The increase in the license
scope (changing FB or CKD to ALL) is a nondisruptive activity. A reduction of the
license scope (changing ALL to FB or CKD) is a disruptive activity, which takes
effect at the next restart.

Ordering licensed functions


After you decide which licensed functions to use with your storage system, you
are ready to order the functions.

About this task

Licensed functions are purchased as function authorization features.

To order licensed functions, use the following general steps:

Procedure
1. Required. Order the Base Function license to support the total physical capacity
of your storage system.

2. Optional. Order the z-synergy Services license to support the physical capacity
of all ranks that are formatted as CKD.
3. Optional. Order the Copy Services license to support the total usable capacity
of all volumes that are involved in one or more copy services functions.

Note: The Copy Services license is based on the usable capacity of volumes,
not on physical capacity. If overprovisioning is used on the DS8880 with a
significant amount of Copy Services functionality, the Copy Services license
needs only to be equal to the total rank usable capacity, even if the logical
capacity of the volumes in Copy Services is greater. For example, if the total
rank usable capacity of a DS8880 is 100 TB but there are 200 TB of
thin-provisioned volumes in Metro Mirror, then only a 100 TB Copy Services
license is needed (see the sketch after this list).
4. Optional. Order the Copy Services Manager on the Hardware Management
Console license to support the total usable capacity of all volumes that are
involved in one or more copy services functions.
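
The following sketch illustrates the overprovisioning note in step 3: with thin
provisioning, the Copy Services license needs to cover only the smaller of the total rank
usable capacity and the usable capacity of the volumes in Copy Services relationships. The
variable names and values are illustrative:

   # Illustrative sketch of the Copy Services sizing note in step 3.
   rank_usable_tb = 100
   cs_volume_tb = 200        # thin-provisioned volumes in Metro Mirror
   required_license_tb = min(rank_usable_tb, cs_volume_tb)
   print(required_license_tb)  # 100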

Rules for ordering licensed functions


A Base Function license is required for every base frame. All other licensed
functions are optional and must have a capacity that is equal to or less than the
Base Function license.

For all licensed functions, you can combine feature codes to order the exact
capacity that you need. For example, if you require 160 TB of Base Function license
capacity, order 10 of feature code 8151 (10 TB each up to 100 TB capacity) and 4 of
feature code 8152 (15 TB each, for an extra 60 TB).
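
The following sketch generalizes this example, combining feature codes across the capacity
tiers that are listed in Table 61 later in this chapter. The function is hypothetical and
assumes each tier is filled before the next one starts, which matches the 160 TB example
above:

   import math

   # Illustrative sketch: Base Function feature codes for a target capacity.
   # Tiers: (feature code, TB per feature, tier ceiling in TB), per Table 61.
   TIERS = [
       (8151, 10, 100), (8152, 15, 250), (8153, 25, 500), (8154, 75, 1250),
       (8155, 175, 3000), (8156, 300, 6000), (8160, 500, 10000),
   ]

   def features_for(capacity_tb):
       counts, floor = {}, 0
       for code, size_tb, ceiling_tb in TIERS:
           if capacity_tb <= floor:
               break
           span = min(capacity_tb, ceiling_tb) - floor
           counts[code] = math.ceil(span / size_tb)
           floor = ceiling_tb
       return counts

   print(features_for(160))  # {8151: 10, 8152: 4} -> 100 TB + 60 TB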

When you calculate usable capacity for the Copy Services license, multiply the
number of extents allocated to each volume that is involved in a Copy Services
relationship by the size of each extent.

When you calculate physical capacity, consider the capacity across the entire
storage system, including the base frame and any expansion frames. To calculate
the physical capacity, use the following table to determine the total size of each
regular drive feature in your storage system, and then add all the values. A short
sketch after the table illustrates the arithmetic.
Table 60. Total physical capacity for drive-set features
Drive sizes Total physical capacity Drives per feature
300 GB disk drives 4.8 TB 16
400 GB flash drives 6.4 TB 16
400 GB flash cards 5.6 TB or 6.4 TB 14 or 16
600 GB disk drives 9.6 TB 16
800 GB flash drives 12.8 TB 16
1.2 TB disk drives 19.2 TB 16
1.6 TB flash drives 25.6 TB 16
1.8 TB disk drives 28.8 TB 16
3.2 TB flash drives 51.2 TB 16
4 TB disk drives 32 TB 8
6 TB disk drives 48 TB 8
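
The arithmetic is a simple sum of feature capacities. The following sketch uses a
hypothetical mix of drive features with the per-feature capacities from the table:

   # Illustrative sketch: totaling physical capacity across drive-set
   # features from Table 60 (decimal TB; the feature mix is hypothetical).
   FEATURE_TB = {"300 GB disk": 4.8, "800 GB flash": 12.8, "4 TB disk": 32.0}
   installed = {"300 GB disk": 2, "800 GB flash": 3, "4 TB disk": 1}

   total_tb = sum(FEATURE_TB[f] * qty for f, qty in installed.items())
   print(total_tb)  # 80.0 -> the Base Function license must cover at least this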

Rules for removing a licensed function

The initial enablement of any optional DS8000 licensed function is a concurrent
activity (assuming that the appropriate level of microcode is installed on the
machine for the specific function). The removal of a DS8000 licensed function is a
nondisruptive activity but takes effect at the next machine IML.

If you have a licensed function and no longer want to use it, you can deactivate
the license in one of the following ways:
v Order an inactive or disabled license and replace the active license activation key
with the new inactive license activation key at the IBM Data storage feature
activation (DSFA) website (www.ibm.com/storage/dsfa).
v Go to the DSFA website and change the assigned value from the current number
of terabytes (TB) to 0 TB. This value, in effect, makes the feature inactive. If this
change is made, you can go back to DSFA and reactivate the feature, up to the
previously purchased level, without having to repurchase the feature.

Regardless of which method is used, the deactivation of a licensed function is a
nondisruptive activity, but takes effect at the next machine IML.

Note: Although you do not need to specify how the licenses are to be applied
when you order them, you must allocate the licenses to the storage image when
you obtain your license keys on the IBM Data storage feature activation (DSFA)
website (www.ibm.com/storage/dsfa).

Base Function license


The Base Function license provides essential functions for your storage system. A
Base Function license is required for each DS8880 storage system.

The Base Function license is available for the following license scopes: FB and ALL
(both FB and CKD).

The Base Function license includes the following features:


v Database Protection
v Encryption Authorization
v Easy Tier
v I/O Priority Manager
v Operating Environment License (OEL)
v Thin Provisioning

The Base Function license feature codes are ordered in increments up to a specific
capacity. For example, if you require 160 TB of capacity, order 10 of feature code
8151 (10 TB each up to 100 TB capacity) and 4 of feature code 8152 (15 TB each, for
an extra 60 TB).

The Base Function license includes the following feature codes.


Table 61. Base Function license feature codes

Feature code   Feature code for licensed function indicator
8151           BF - 10 TB (up to 100 TB capacity)
8152           BF - 15 TB (from 100.1 TB to 250 TB capacity)
8153           BF - 25 TB (from 250.1 TB to 500 TB capacity)
8154           BF - 75 TB (from 500.1 TB to 1250 TB capacity)
8155           BF - 175 TB (from 1250.1 TB to 3000 TB capacity)
8156           BF - 300 TB (from 3000.1 TB to 6000 TB capacity)
8160           BF - 500 TB (from 6000.1 TB to 10,000 TB capacity)

Base Function license rules

The Base Function license authorizes you to use the model configuration at a
specific capacity level. The Base Function license must cover the full physical
capacity of your storage system, which includes the physical capacity of any
expansion frames within the storage system. The license capacity must cover both
open systems data (fixed block data) and z Systems data (count key data). All
other licensed functions must have a capacity that is equal to or less than the Base
Function license.

Note: Your storage system cannot be logically configured until you activate the
Base Function license. On activation, drives can be logically configured up to the
extent of the Base Function license authorization level.

As you add more drives to your storage system, you must increase the Base
Function license authorization level for the storage system by purchasing more
license features. Otherwise, you cannot logically configure the additional drives for
use.

Database Protection
The IBM Database Protection feature provides the highest level of protection for
Oracle databases by detecting corrupted Oracle data and preventing it from being
processed to storage.

The IBM Database Protection feature complies with the Oracle Hardware Assisted
Resilient Data (HARD) initiative, which provides an end-to-end data protection
between an Oracle database and permanent storage devices.

Data must pass through many software and hardware layers on its way to storage.
It is possible for the data to become corrupted, on a rare occasion, caused by a
malfunction in an intermediate layer. With the IBM Database Protection feature, an
IBM DS8000 model can validate whether Oracle data blocks are consistent using
the same logic that Oracle uses. This validation is done before the write request is
processed. You can designate how the transaction is managed: either rejected and
reported, or processed and reported.

Encryption Authorization
The Encryption Authorization feature provides data encryption by using IBM Full
Disk Encryption (FDE) and key managers, such as IBM Security Key Lifecycle
Manager.

The Encryption Authorization feature secures data at rest and offers a simple,
cost-effective solution for securely erasing any disk drive that is being retired or
repurposed (cryptographic erasure). The IBM DS8000 series uses disks that have
FDE encryption hardware and can perform symmetric encryption and decryption
of data at full disk speed with no impact on performance.

IBM Easy Tier


Support for IBM Easy Tier is available with the IBM Easy Tier feature.

The Easy Tier feature enables the following modes:


v Easy Tier: automatic mode
v Easy Tier: manual mode

The feature enables the following functions for the storage type:
v Easy Tier application
v Easy Tier heat map transfer
v The capability to migrate logical volumes
v The reconfigure extent pool function of the extent pool
v The dynamic extent relocation with an Easy Tier managed extent pool

I/O Priority Manager


The I/O Priority Manager function can help you effectively manage quality of
service levels for each application running on your system. This function aligns
distinct service levels to separate workloads in the system to help maintain the
efficient performance of each DS8000 volume.

The I/O Priority Manager detects when a higher-priority application is hindered
by a lower-priority application that is competing for the same system resources.
This detection might occur when multiple applications request data from the same
drives. When I/O Priority Manager encounters this situation, it delays
lower-priority I/O data to assist the more critical I/O data in meeting its
performance targets.

Operating environment license


The operating environment model and features establish the extent of IBM
authorization for the use of the IBM DS operating environment.

To determine the operating environment license support function, see "Machine
types overview" on page 3.

Thin provisioning
Thin provisioning defines logical volume sizes that are larger than the physical
capacity installed on the system. The volume allocates capacity on an as-needed
basis as a result of host-write actions.

The thin provisioning feature enables the creation of extent space efficient logical
volumes. Extent space efficient volumes are supported for FB and CKD volumes
and are supported for all Copy Services functionality, including FlashCopy targets
where they provide a space efficient FlashCopy capability.

z-synergy Services license
The z-synergy Services license includes z/OS® features that are supported on the
storage system.

The z-synergy Services license is available for the following license scope: CKD.

The z-synergy Services license includes the following features:


v High Performance FICON for z Systems
v HyperPAV
v Parallel Access Volumes (PAV)
v z/OS Distributed Data Backup

The z-synergy Services license also includes the ability to attach FICON channels.

The z-synergy Services license feature codes are ordered in increments up to a
specific capacity. For example, if you require 160 TB of capacity, order 10 of feature
code 8351 (10 TB each up to 100 TB capacity), and 4 of feature code 8352 (15 TB
each, for an extra 60 TB).

The z-synergy Services license includes the feature codes listed in the following
table.
Table 62. z-synergy Services license feature codes
Feature Code Feature code for licensed function indicator
8350 zsS - inactive
8351 zsS - 10 TB (up to 100 TB capacity)
8352 zsS - 15 TB (from 100.1 TB to 250 TB capacity)
8353 zsS - 25 TB (from 250.1 TB to 500 TB capacity)
8354 zsS - 75 TB (from 500.1 to 1250 TB capacity)
8355 zsS - 175 TB (from 1250.1 TB to 3000 TB capacity)
8356 zsS - 300 TB (from 3000.1 TB to 6000 TB capacity)
8360 zsS - 500 TB (from 6000.1 TB to 10,000 TB capacity)

z-synergy Services license rules

A z-synergy Services license is required for only the total physical capacity that is
logically configured as count key data (CKD) ranks for use with z Systems host
systems.

Note: If z/OS Distributed Data Backup is being used on a system with no CKD
ranks, a 10 TB z-synergy Services license must be ordered to enable the FICON
attachment functionality.

High Performance FICON for z Systems


High Performance FICON for z Systems (zHPF) is an enhancement to the IBM
FICON architecture to offload I/O management processing from the z Systems
channel subsystem to the DS8880 Host Adapter and controller.

zHPF is an optional feature of the z Systems server and of the DS8880. Recent
enhancements to zHPF include Extended Distance Facility, zHPF List Pre-fetch
support for IBM DB2® and utility operations, and zHPF support for sequential
access methods. All DB2 I/O is now zHPF-capable.

IBM HyperPAV
IBM HyperPAV associates the volumes with either an alias address or a specified
base logical volume number. When a host system requests IBM HyperPAV
processing and the processing is enabled, aliases on the logical subsystem are
placed in an IBM HyperPAV alias access state on all logical paths with a given
path group ID.

Parallel Access Volumes


The parallel access volumes (PAV) features establish the extent of IBM
authorization for the use of the parallel access volumes function.

Parallel Access Volumes (PAVs), also referred to as aliases, provide your system
with access to volumes in parallel when you use a z Systems host.

A PAV capability represents a significant performance improvement by the storage
unit over traditional I/O processing. With PAVs, your system can access a single
volume from a single host with multiple concurrent requests.

z/OS Distributed Data Backup


z/OS Distributed Data Backup (zDDB) is a licensed feature on the base frame that
allows hosts, which are attached through a FICON interface, to access data on
fixed block (FB) volumes through a device address on FICON interfaces.

If zDDB is installed and enabled and a volume group type specifies FICON
interfaces, this volume group has implicit access to all FB logical volumes that are
configured, in addition to all CKD volumes specified in the volume group. Then,
with appropriate software, a z/OS host can complete backup and restore functions
for FB logical volumes that are configured on a storage system image for open
systems hosts.

Copy Services license


Copy Services features help you implement storage solutions to keep your business
running 24 hours a day, 7 days a week by providing data duplication, data
migration, and disaster recovery functions. The Copy Services license is based on
usable capacity of the volumes involved in Copy Services functionality.

The Copy Services license is available for the following license scopes: FB and ALL
(both FB and CKD).

The Copy Services license includes the following features:


v Global Mirror
v Metro Mirror
v Metro/Global Mirror
v Point-in-Time Copy/FlashCopy
v z/OS Global Mirror
v z/OS Metro/Global Mirror Incremental Resync (RMZ)

The Copy Services license feature codes are ordered in increments up to a specific
capacity. For example, if you require 160 TB of capacity, order 10 of feature code
8251 (10 TB each up to 100 TB capacity), and 4 of feature code 8252 (15 TB each,
for an extra 60 TB).

The Copy Services license includes the following feature codes.


Table 63. Copy Services license feature codes
Feature Code Feature code for licensed function indicator
8250 CS - inactive
8251 CS - 10 TB (up to 100 TB capacity)
8252 CS - 15 TB (from 100.1 TB to 250 TB capacity)
8253 CS - 25 TB (from 250.1 TB to 500 TB capacity)
8254 CS - 75 TB (from 500.1 to 1250 TB capacity)
8255 CS - 175 TB (from 1250.1 TB to 3000 TB capacity)
8256 CS - 300 TB (from 3000.1 TB to 6000 TB capacity)
8260 CS - 500 TB (from 6000.1 TB to 10,000 TB capacity)

Copy Services license rules

The following ordering rules apply when you order the Copy Services license:
v The Copy Services license should be ordered based on the total usable capacity
of all volumes involved in one or more Copy Services relationships.
v The licensed authorization must be equal to or less than the total usable capacity
allocated to the volumes that participate in Copy Services operations.
v You must purchase features for both the source (primary) and target (secondary)
storage system.

Remote mirror and copy functions


The Copy Services license establishes the extent of IBM authorization for the use of
the remote mirror and copy functions on your storage system.

The following functions are included:


v Metro Mirror
v Global Mirror
v Global Copy
v Metro/Global Mirror
v Multiple Target PPRC

FlashCopy function (point-in-time copy)


FlashCopy creates a copy of a source volume on the target volume. This copy is
called a point-in-time copy.

When you initiate a FlashCopy operation, a FlashCopy relationship is created
between a source volume and target volume. A FlashCopy relationship is a
"mapping" of the FlashCopy source volume and a FlashCopy target volume. This
mapping allows a point-in-time copy of that source volume to be copied to the
associated target volume. The FlashCopy relationship exists between this volume
pair from the time that you initiate a FlashCopy operation until the storage unit
copies all data from the source volume to the target volume or you delete the
FlashCopy relationship, if it is a persistent FlashCopy.

z/OS Global Mirror
z/OS Global Mirror (previously known as Extended Remote Copy or XRC)
provides a long-distance remote copy solution across two sites for open systems
and z Systems data with asynchronous technology.

z/OS Metro/Global Mirror Incremental Resync


z/OS Metro/Global Mirror Incremental Resync (RMZ) is an enhancement for z/OS
Global Mirror. z/OS Metro/Global Mirror Incremental Resync can eliminate the
need for a full copy after a HyperSwap situation in 3-site z/OS Global Mirror
configurations.

The storage system supports z/OS Global Mirror that is a 3-site mirroring solution
that uses IBM System Storage Metro Mirror and z/OS Global Mirror (XRC). The
z/OS Metro/Global Mirror Incremental Resync capability is intended to enhance
this solution by enabling resynchronization of data between sites by using only the
changed data from the Metro Mirror target to the z/OS Global Mirror target after a
HyperSwap operation.

Copy Services Manager on the Hardware Management Console license


IBM Copy Services Manager facilitates the use and management of Copy Services
functions such as the remote mirror and copy functions (Metro Mirror and Global
Mirror) and the point-in-time function (FlashCopy). IBM Copy Services Manager is
available on the Hardware Management Console (HMC), which eliminates the
need to maintain a separate server for Copy Services functions.

The Copy Services Manager on Hardware Management Console (CSM on HMC)
license is available for the following license scopes: FB and ALL (both FB and
CKD).

The CSM on HMC license feature codes are ordered in increments up to a specific
capacity. For example, if you require 160 TB of capacity, order 10 of feature code
8451 (10 TB each up to 100 TB capacity), and 4 of feature code 8452 (15 TB each,
for an extra 60 TB).

The CSM on HMC license includes the following feature codes.


Table 64. Copy Services Manager on Hardware Management Console license feature codes
Feature Code Feature code for licensed function indicator
8450 CSM on HMC - inactive
8451 CSM on HMC - 10 TB (up to 100 TB capacity)
8452 CSM on HMC - 15 TB (from 100.1 TB to 250 TB capacity)
8453 CSM on HMC - 25 TB (from 250.1 TB to 500 TB capacity)
8454 CSM on HMC - 75 TB (from 500.1 to 1250 TB capacity)
8455 CSM on HMC - 175 TB (from 1250.1 TB to 3000 TB capacity)
8456 CSM on HMC - 300 TB (from 3000.1 TB to 6000 TB capacity)
8460 CSM on HMC - 500 TB (from 6000.1 TB to 10,000 TB capacity)

Chapter 6. Meeting delivery and installation requirements
You must ensure that you properly plan for the delivery and installation of your
storage system.

This information provides the following planning information for the delivery and
installation of your storage system:
v Planning for delivery of your storage system
v Planning the physical installation site
v Planning for power requirements
v Planning for network and communication requirements

For more information about the equipment and documents that IBM includes with
storage system shipments, see Appendix C, "IBM equipment and documents for
DS8000," on page 197.

Delivery requirements
Before you receive your storage system shipment, ensure that the final installation
site meets all delivery requirements.

Attention: Customers must prepare their environments to accept the storage
system based on this planning information, with assistance from an IBM Advanced
Technical Services (ATS) representative or a technical service representative. The
final installation site within the computer room must be prepared before the
equipment is delivered. If the site cannot be prepared before the delivery time,
customers must make arrangements to have the professional movers return to
finish the transportation later. Only professional movers can transport the
equipment. The technical service representative can minimally reposition the frame
at the installation site, as needed to complete required service actions. Customers
are also responsible for using professional movers in the case of equipment
relocation or disposal.

Acclimation
Server and storage equipment must be acclimated to the surrounding environment
to prevent condensation.

When server and storage equipment is shipped in a climate where the outside
temperature is below the dew point of an indoor location, water condensation
might form on the cooler surfaces inside the equipment when brought into a
warmer indoor environment. If condensation occurs, sufficient time must be
allowed for the equipment to reach equilibrium with the warmer indoor
temperature before you power on the storage system for installation. Leave the
storage system in its shipping bag for at least 24 to 48 hours to let it
acclimate to the indoor environment.

Shipment weights and dimensions


You must ensure that your loading dock and receiving area can support the weight
and dimensions of the packaged storage system shipments.

You receive at least two, and up to three, shipping containers for each frame that
you order. You always receive the following items:



v A container with the storage system frame. In the People's Republic of China
(including Hong Kong S.A.R. of China), India, and Brazil, this container has a
plywood front door on the package, with a corrugated paperboard outer wrap.
In all other countries, this container is a pallet that is covered by a corrugated
fiberboard (cardboard) cover.
v A container with the remaining components, such as power cords, CDs, and
other ordered features or peripheral devices for your storage system.

Table 65 shows the final packaged dimensions and maximum packaged weight of
the storage system frame shipments.

To calculate the weight of your total shipment, add the weight of each frame
container and the weight of one ship group container for each frame.
Table 65. Packaged dimensions and weight for storage system frames (all countries)

Container                              Packaged dimensions          Maximum packaged weight
DS8880 base frame (model 984)          Height 2.08 m (81.9 in.)     1065 kg (2348 lb)
                                       Width 0.95 m (37.4 in.)
                                       Depth 1.50 m (59.1 in.)
DS8880 base frame (model 985)          Height 2.08 m (81.9 in.)     1250.5 kg (2757 lb)
                                       Width 0.95 m (37.4 in.)
                                       Depth 1.50 m (59.1 in.)
DS8880 base frame (model 986)          Height 2.08 m (81.9 in.)     1187.5 kg (2618 lb)
                                       Width 0.95 m (37.4 in.)
                                       Depth 1.50 m (59.1 in.)
DS8880 base frame (model 988)          Height 2.08 m (81.9 in.)     1167 kg (2577 lb)
                                       Width 0.95 m (37.4 in.)
                                       Depth 1.50 m (59.1 in.)
DS8880 expansion frame (model 84E)     Height 2.08 m (81.9 in.)     1128 kg (2487 lb)
                                       Width 0.95 m (37.4 in.)
                                       Depth 1.50 m (59.1 in.)
DS8880 expansion frame (model 85E)     Height 2.08 m (81.9 in.)     1205 kg (2657 lb)
                                       Width 0.95 m (37.4 in.)
                                       Depth 1.50 m (59.1 in.)
DS8880 expansion frame (model 86E)     Height 2.08 m (81.9 in.)     1043 kg (2297 lb)
                                       Width 0.95 m (37.4 in.)
                                       Depth 1.50 m (59.1 in.)
DS8880 expansion frame (model 88E)     Height 2.08 m (81.9 in.)     996.5 kg (2197 lb)
                                       Width 0.95 m (37.4 in.)
                                       Depth 1.50 m (59.1 in.)
Note:
1. The top expansion (feature code 170) is shipped separately.
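
As a simple illustration of this calculation, the following Python sketch totals
a shipment from the frame container weights in Table 65. The ship-group
container weight is a placeholder assumption; replace it with the value from
your order documentation.

# Illustrative sketch: total shipment weight = frame containers
# + one ship-group container per frame (frame weights from Table 65, kg).
FRAME_WEIGHT_KG = {
    "984": 1065, "985": 1250.5, "986": 1187.5, "988": 1167,
    "84E": 1128, "85E": 1205, "86E": 1043, "88E": 996.5,
}

def shipment_weight(frames, ship_group_kg):
    """frames: list of model strings; ship_group_kg: assumed weight of
    one ship-group container (check your order documentation)."""
    return sum(FRAME_WEIGHT_KG[m] for m in frames) + ship_group_kg * len(frames)

# Example: one model 985 base frame plus one 85E expansion frame,
# assuming 50 kg per ship-group container (placeholder value).
print(shipment_weight(["985", "85E"], ship_group_kg=50))  # 2555.5 kg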

Receiving delivery
The shipping carrier is responsible for delivering and unloading the storage system
as close to its final destination as possible. You must ensure that your loading
ramp and your receiving area can accommodate your storage system shipment.

Before you begin

Read the following caution before you position the rack. If you are relocating
the frame, ease it out of its current position and pull out the outriggers for
the major part of the relocation. Roll the rack on its casters until it is close
to its intended location, and keep the supplemental outriggers extended
throughout the move. When the rack is near its final location, retract the
outriggers into their recessed position, flush with the outsides of the rack.
The outriggers are intended only to help move the rack; they are not intended
to support the rack in its final location. To prevent unintended movement and
to ensure the stability of the rack, lower the leveling jacks.
CAUTION: The rack cabinet is supplied with native built-in extendable outriggers with small
floating supplemental castors as motion anti-tip features. They must all be extended into a latched
position before and during cabinet movement or relocation. These native built-in outriggers must
not be removed completely, but rather recessed in when finished to ensure they are readily
available for future use. (C050)

About this task

Use the following steps to ensure that your receiving area and loading ramp can
safely accommodate the delivery of your storage system:

Procedure
1. Find out the packaged weight and dimensions of the shipping containers in
your shipment.
2. Ensure that your loading dock, receiving area, and elevators can safely support
the packaged weight and dimensions of the shipping containers.



Note: You can order a weight-reduced shipment when a configured storage
system exceeds the weight capability of the receiving area at your site.
3. To compensate for the weight of the storage system shipment, ensure that the
loading ramp at your site does not exceed an angle of 10°. (See Figure 15.)

Figure 15. Maximum tilt for a packed frame is 10°
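
The 10° limit can be checked from the ramp's rise and run. The following
sketch is illustrative; the function name and example values are assumptions.

import math

def ramp_angle_degrees(rise_m, run_m):
    """Incline angle of a loading ramp, from its vertical rise and
    horizontal run (both in meters)."""
    return math.degrees(math.atan2(rise_m, run_m))

# Example: a ramp that rises 0.5 m over a 3.5 m horizontal run.
angle = ramp_angle_degrees(0.5, 3.5)
print(f"{angle:.1f} degrees; within limit: {angle <= 10}")
# 8.1 degrees; within limit: True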

Installation site requirements


You must ensure that the location where you plan to install your storage system
meets all requirements.

Planning for floor and space requirements


Ensure that the location where you plan to install your storage system meets space
and floor requirements. Decide whether your storage system is to be installed on a
raised or nonraised floor.

About this task

When you are planning the location of your storage system, you must answer the
following questions that relate to floor types, floor loads, and space:
v What type of floor does the installation site have? The storage system can be
installed on a raised or nonraised floor.
v If the installation site has a raised floor, does the floor require preparation (such
as cutting out tiles) to accommodate cable entry into the system?
v Does the floor of the installation site meet floor-load requirements?
v Can the installation site accommodate the amount of space that is required by
the storage system, and does the space meet the following criteria?
– Weight distribution area that is needed to meet floor load requirements
– Service clearance requirements
v Does the installation site require overhead cable management for host fiber and
power cables?

Procedure

Use the following steps to ensure that your planned installation site meets space
and floor load requirements:



1. Identify the base frame and expansion frames that are included in your storage
system.
2. Decide whether to install the storage system on a raised or nonraised floor.
a. If the location has a raised floor, plan where the floor tiles must be cut to
accommodate the cables.
b. If the location has a nonraised floor, resolve any safety problems, and any
special equipment considerations, caused by the location of cable exits and
routing.
3. Determine whether the floor of the installation site meets the floor load
requirements for your storage system.
4. Calculate the amount of space to be used by your storage system.
a. Identify the total amount of space that is needed for your storage system by
using the dimensions of the frames and the weight distribution areas that
are calculated in step 3.
b. Ensure that the area around each frame and each storage system meets the
service clearance requirements.

Note: Any expansion frames in the storage system must be attached to the
base frame on the right side as you face the front of the storage system.

Installing on raised or nonraised floors


You can install your storage system on a raised or nonraised floor. Raised floors
can provide better cooling than nonraised floors.

Raised floor considerations

Installing your storage system on a raised floor provides the following benefits:
v Improves operational efficiency and allows greater flexibility in the arrangement
of equipment.
v Increases air circulation for better cooling.
v Protects the interconnecting cables and power receptacles.
v Prevents tripping hazards because cables can be routed underneath the raised
floor.

When you install a raised floor, consider the following factors:


v The raised floor must be constructed of fire-resistant or noncombustible material.
v The raised-floor height must be at least 30.5 cm (12 in.). Clearance must be
adequate to accommodate interconnecting cables, Fibre Channel cable raceways,
power distribution, and any piping that is present under the floor. Floors with
greater raised-floor heights allow for better equipment cooling.
v Fully configured, two-frame storage systems can weigh in excess of 2844 kg
(6270 lbs). You must ensure that the raised floor on which the storage system is
to be installed is able to support this weight. Contact the floor-tile manufacturer
and a structural engineer to verify that the raised floor is safe to support the
concentrated loads equal to one third of the total weight of one frame. Under
certain circumstances such as relocation, it is possible that the concentrated loads
can be as high as one half of the total weight of one frame per caster. When you
install two adjacent frames, it is possible that two casters induce a total load as
high as one third of the total weight of two adjacent frames.
v Depending on the type of floor tile, more supports (pedestals) might be
necessary to maintain the structural integrity of an uncut panel or to restore the
integrity of a floor tile that is cut for cable entry or air supply. Contact the
floor-tile manufacturer and a structural engineer to ensure that the floor tiles and
pedestals can sustain the concentrated loads.
v Pedestals must be firmly attached to the structural (concrete) floor by using an
adhesive.
v Seal raised-floor cable openings to prevent chilled air that is not used to directly
cool the equipment from escaping.
v Use noncombustible protective molding to eliminate sharp edges on all floor
cutouts, to prevent damage to cables and hoses, and to prevent casters from
rolling into the floor cutout.
v Avoid the exposure of metal or highly conductive material to the walking
surface when a metallic raised floor structure is used. Such exposure is
considered an electrical safety hazard.
v Concrete subfloors require treatment to prevent the release of dust.
v The use of a protective covering (such as plywood, tempered masonite, or
plyron) is required to prevent damage to floor tiles, carpeting, and tiles while
equipment is being moved to or is relocated within the installation site. When
the equipment is moved, the dynamic load on the casters is greater than when
the equipment is stationary.

Nonraised floor considerations

For environments with nonraised floors, an optional overhead cabling feature is
available.

Follow the special considerations and installation guidelines as described in the
topics about overhead cable management.

When you install a storage system on a non-raised floor, consider the following
factors:
v The use of a protective covering (such as plywood, tempered masonite, or
plyron) is required to prevent damage to floor and carpeting while equipment is
being moved to or is relocated within the installation site.
v Concrete floors require treatment to prevent the release of dust.

Overhead cable management (top-exit bracket)


Overhead cable management (top-exit bracket) is an optional feature that includes
a top-exit bracket for managing your Fibre cables. This feature is an alternative to
the standard, floor-cable exit.

Using overhead cabling provides many of the cooling and safety benefits that are
provided by raised flooring in a nonraised floor environment. Unlike raised-floor
cabling, the installation planning, cable length, and the storage-system location in
relation to the cable entry point are critical to the successful installation of the
top-exit bracket.

Figure 16 on page 137 illustrates the location of the cabling for the top-exit bracket
for fiber cable feature. When you order the overhead-cable management feature,
the feature includes clamping hardware and internal cable routing brackets for
rack 1 or rack 2. The following notes provide more information about the
color-coded cable routing and components in Figure 16 on page 137.
▌1▐ Customer Fibre Channel host cables. The Fibre Channel host cables, which
are shown in red, are routed from the top of the rack down to I/O enclosure
host adapters.



▌2▐ Network Ethernet cable, power sequence cables, and customer analog
phone line (if used). The network Ethernet cable, in blue, is routed from the top
of the rack to the rear rack connector. The rack connector has an internal cable
to the management console. The power sequence cables and private network
Ethernet cables (one gray and one black) for a partner storage system are also
located here.
▌3▐ Mainline power cords. Two top-exit mainline power cords for each rack,
which are shown in green, are routed here.

Notes:
v A technical service representative tests the power sources. The customer is
required to provide power outlets (for connecting power cords) within the
specified distance.
v Fibre Channel host cables are internally routed and connected by either the
customer or by a technical service representative.
v All remaining cables are internally routed and connected by a technical service
representative.

Figure 16. Top exit feature installed (cable routing and top exit locations)



Feature codes for overhead cable management (top-exit bracket):

Use this feature code to order cable management for overhead cabling (top exit
bracket) for your storage system.

Note: In addition to the top-exit bracket and top-exit power cords, one IBM
approved ladder (feature code 1101) must also be purchased for a site where the
top-exit bracket for fiber cable feature is used. The IBM approved ladder is used to
ensure safe access when your storage system is serviced with a top-exit bracket
feature installed.
Table 66. Feature codes for the overhead cable (top-exit bracket)
Feature Code Description
1400 Top-exit bracket for fiber cable

Overhead cabling installation and safety requirements:

Ensure that installation and safety requirements are met before your storage
system is installed.

If the cables are too long, there is not enough room inside of the rack to handle the
extra length and excess cable might interfere with the service process, preventing
concurrent repair. Consider the following specifications and limitations before you
order this feature:
v In contrast to the raised-floor power cords, which have a length from the tailgate
to the connector of about 4.9 m (16 ft), the length of the top-exit power cords is
only 1.8 m (6 ft) from the top of the storage system.
v IBM Corporate Safety restricts the servicing of your overhead equipment to a
maximum of 3 m (10 ft) from the floor. Therefore, your power source must not
exceed 3 m (10 ft) from the floor and must be within 1.5 m (5 ft) of the top of
the power cord exit gate. Servicing any overhead equipment higher than 3 m (10
ft) requires a special bid contract. Contact your technical service representatives
for more information on special bids.
v To meet safety regulations in servicing your overhead equipment, you must
purchase a minimum of one feature code 1101 for your top exit bracket feature
per site. This feature code provides a safety-approved 5-foot ladder. This ladder
provides technical service representatives the ability to perform power safety
checks and other service activities on the top of your storage system. Without
this approved ladder, technical service representatives are not able to install or
service a storage system with the top-cable exit features.
v To assist you with the top-exit host cable routing, feature code 1400 provides a
cable channel bracket that mounts directly below the topside of the tailgate and
its opening. Cables can be easily slid into the slots on its channels. The cable
bracket directs the cables behind the rack ID card and towards the rear, where
the cables drop vertically into a second channel, which mounts on the left-side
wall (when viewing the storage system from the rear). There are openings in the
vertical channel for cables to exit toward the I/O enclosures.

Accommodating cables
You must ensure that the location and dimensions of the cable cutouts for each
frame in the storage system can be accommodated by the installation location. An
overhead-cable management option (top-exit bracket) is available for DS8880 for
environments that have special planning and safety requirements.



Use the following steps to ensure that you prepare for cabling for each storage
system:
1. Based on your planned storage system layout, ensure that you can
accommodate the locations of the cables that exit each frame. See the following
figure for the cable cutouts for the DS8880.

Figure 17. Cable cutouts for DS8880

2. If you install the storage system on a raised floor, use the following
measurements when you cut the floor tile for cabling:
v ▌1▐ Width: 41.91 cm (16.5 in.)
v ▌2▐ End of frame to edge of cable cutout: 10.0 cm (3.9 in)
v ▌3▐ Depth: 8.89 cm (3.5 in.)

Note: If both frames 1 and 2 use an overhead-cable management (top-exit
bracket) feature for the power cords and communication cables, the PCIe and
SPCN cables can be routed under the frame, on top of the raised floor. This is
the same routing that is used for nonraised floor installations. There is room
under the frame to coil extra cable length and prevent the need for custom
floor tile cutouts. Also, frames 3 and 4 do not need floor tile cutouts when the
top-exit bracket feature is installed, as only routing for the power cords is
needed.



Nonraised floors with overhead cable management

For the base frame, an option is available for overhead cabling by using the top
exit bracket feature, which provides many benefits for nonraised floor installations.
Unlike raised-floor cabling, the installation planning, cable length, and the
storage-system location in relation to the cable entry point are critical to the
successful installation of a top-exit bracket feature. Measurements for this feature
are given in the following figure. You can find critical safety, service, and
installation considerations for this feature in the topic that discusses overhead-cable
management.

The following figure illustrates the location of these components:


v ▌1▐ and ▌2▐ Length of tailgate opening
v ▌3▐ Width of tailgate opening

Figure 18. DS8880 top exit bracket feature

Physical footprint
The physical footprint dimensions, caster locations, and cable openings of the
storage system help you plan your installation site.

The following figure shows the overall physical footprint of a storage system. The
following dimensions are labeled on Figure 19 on page 141:
▌1▐ Front service clearance
▌2▐ Depth of frame without covers
▌3▐ Width of frame without covers
▌4▐ Minimum dimension between casters and outside edges of frames
▌5▐ Leveling pads
▌6▐ Back service clearance
▌7▐ Leveling pads



Figure 19. Physical footprint. Dimensions are in centimeters (inches). Key values
from the figure: front service clearance 121.92 cm (48 in.); rear service
clearance 76.2 cm (30 in.); frame depth without covers 120.02 cm (47.25 in.);
frame width without covers 59.94 cm (23.6 in.).

Meeting floor load requirements


It is important for your location to meet floor load requirements.

About this task

Use the following steps to ensure that your location meets the floor load
requirements and to determine the weight distribution area that is required for the
floor load.



Procedure
1. Find out the floor load rating of the location where you plan to install the
storage system.

Important: If you do not know or are not certain about the floor load rating of
the installation site, be sure to check with the building engineer or another
appropriate person.
2. Determine whether the floor load rating of the location meets the following
requirements:
v The minimum floor load rating that is used by IBM is 342 kg per m² (70 lb.
per ft²).
v Use the table to determine the required side, front, and rear weight
distribution area for the specified floor load rating. The side dimensions for
the weight distribution area have a maximum of 76.2 cm (30 in). If the side
dimensions required to meet a specified floor load rating are greater than the
maximum allowed, then the floor load rating is not listed in the table.
v The maximum per caster weight that is transferred to a raised floor tile is 450
kg (1000 lb.).
3. Using the following table, complete these steps for each storage system.
a. Find the rows that are associated with the storage system.
b. Locate the configuration row that corresponds with the floor load rating of
the site.
c. Identify the weight distribution area that is needed for that storage system
and floor load rating.

Note: If you are unsure about the correct placement and weight distribution
areas for your storage system, consult a structural engineer.
Table 67. Floor load ratings and required weight-distribution areas

Floor load ratings are in kg per m² (lb per ft²); weight distribution areas are
in cm (in.). For every supported rating, the required front and rear weight
distribution areas are 76.2 (30); the required side areas are listed for each
rating.

Model 984 (see note 1); total weight 976 kg (2151 lb) (see note 2)
   610 (125): sides 7.62 (3); 488 (100): sides 18 (7); 439 (90): sides 25 (10);
   342 (70): sides 51 (20)
Model 984 and one 84E expansion model; total weight 2014 kg (4441 lb)
   610 (125): sides 15.3 (6); 488 (100): sides 38 (15); 439 (90): sides 56 (22);
   342 (70): not supported
Model 984 and two 84E expansion models; total weight 2863 kg (6311 lb)
   610 (125): sides 15.25 (6); 488 (100): sides 51 (20); 439 (90): sides 74 (29);
   342 (70): not supported
Model 985; total weight 1144 kg (2520 lb)
   610 (125): sides 13 (5); 488 (100): sides 28 (11); 439 (90): sides 38 (15);
   342 (70): sides 66 (26)
Model 985 and first 85E expansion model; total weight 2259 kg (4980 lb)
   610 (125): sides 23 (9); 488 (100): sides 53.4 (21); 439 (90): sides 69 (27);
   342 (70): not supported
Model 985 with first and second 85E expansion models (with top expansion);
total weight 3190 kg (7032 lb)
   610 (125): sides 28 (11); 488 (100): sides 69 (27); 439 (90) and 342 (70):
   not supported
Model 985 with first, second, and third 85E expansion models (with top
expansion); total weight 4121 kg (9084 lb)
   610 (125): sides 33 (13); 488 (100), 439 (90), and 342 (70): not supported
Model 985 with first, second, third, and fourth 85E expansion models (with top
expansion); total weight 4933 kg (10874 lb)
   610 (125): sides 33 (13); 488 (100), 439 (90), and 342 (70): not supported
Model 986; total weight 1099 kg (2421 lb)
   610 (125): sides 13 (5); 488 (100): sides 25 (10); 439 (90): sides 33 (13);
   342 (70): sides 61 (24)
Model 986 and first 86E expansion model; total weight 2050.7 kg (4521 lb)
   610 (125): sides 18 (7); 488 (100): sides 43.2 (17); 439 (90): sides 58.5 (23);
   342 (70): not supported
Model 986 with first and second 86E expansion models (with top expansion);
total weight 2985 kg (6581 lb)
   610 (125): sides 21 (8); 488 (100): sides 58.5 (23); 439 (90) and 342 (70):
   not supported
Model 986 with first, second, and third 86E expansion models (with top
expansion); total weight 3919.5 kg (8641 lb)
   610 (125): sides 25 (10); 488 (100): sides 74 (29); 439 (90) and 342 (70):
   not supported
Model 986 with first, second, third, and fourth 86E expansion models (with top
expansion); total weight 4854 kg (10701 lb)
   610 (125): sides 30.5 (12); 488 (100), 439 (90), and 342 (70): not supported
Model 988; total weight 1080 kg (2380 lb)
   610 (125): sides 10 (4); 488 (100): sides 21 (8); 439 (90): sides 33 (13);
   342 (70): sides 61 (24)
Model 988 with expansion model 88E; total weight 1988 kg (4380 lb)
   610 (125): sides 15.3 (6); 488 (100): sides 38 (15); 439 (90): sides 53.4 (21);
   342 (70): not supported

Notes:
1. A storage system contains a base frame (model 988, 986, 985, 984) and any
expansion frames (model 88E, 86E, 85E, 84E) that are associated with it.
2. The base frame and expansion frame are bolted together. The weights represent
a combined configuration, fully populated with all enclosures and adapters.
3. Weight distribution areas cannot overlap.
4. Weight distribution areas are calculated for maximum weight of the frames.
Keep any future upgrades in mind, and plan for the highest possible weight
distribution.
5. The base and expansion frames in each storage system are bolted to each
other. Move one side cover and mounting brackets from the base frame to the side
of the expansion frame. Side clearance for frames that are bolted together
applies to both sides of the assembled frames.
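
One way to use Table 67 during planning is to encode it as a lookup. The
following Python sketch (structure and names are illustrative, and only a few
configurations are shown) returns the required side weight-distribution
dimension for a configuration and floor load rating.

# Illustrative lookup of Table 67: required side weight-distribution
# dimension in cm, keyed by configuration and floor load rating in
# kg per m². Front and rear areas are 76.2 cm whenever the rating is
# supported.
SIDE_CM = {
    "Model 984": {610: 7.62, 488: 18, 439: 25, 342: 51},
    "Model 984 + one 84E": {610: 15.3, 488: 38, 439: 56},
    "Model 988": {610: 10, 488: 21, 439: 33, 342: 61},
    # ... the remaining configurations follow the same pattern.
}

def required_side_cm(config, floor_load_rating):
    """Return the required side dimension in cm, or None if the
    floor load rating does not support the configuration."""
    return SIDE_CM.get(config, {}).get(floor_load_rating)

print(required_side_cm("Model 984", 342))            # 51
print(required_side_cm("Model 984 + one 84E", 342))  # None (not supported)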

Calculating space requirements


When you are planning the installation site, you must first calculate the total
amount of space that is required for the storage system. Consider future expansion,
and plan accordingly.



Procedure

Complete the following steps to calculate the amount of space that is required for
your storage system.
1. Determine the dimensions of each frame configuration in your storage system.
2. Calculate the total area that is needed for each frame configuration by
adding the weight distribution area to the frame dimensions, by using the table
in “Meeting floor load requirements” on page 141.
3. Determine the total space that is needed for the storage system by planning the
placement of each frame configuration in the storage system and how much
area each configuration requires based on step 2.
4. Verify that the planned space and layout meet the service clearance
requirements for each frame and storage system.
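
As an illustration of steps 2 and 3, the following sketch computes the total
area for one frame from its footprint (Table 68) and its weight-distribution
dimensions (Table 67). It is a simplification: weight distribution areas of
multiple frames cannot overlap, so verify multi-frame layouts with a structural
engineer.

def frame_area_m2(width_cm, depth_cm, side_cm, front_cm, rear_cm):
    """Footprint plus weight-distribution area for one frame, in m².
    The side dimension is added on both sides of the frame."""
    total_width = width_cm + 2 * side_cm
    total_depth = depth_cm + front_cm + rear_cm
    return (total_width * total_depth) / 10_000  # cm² to m²

# Example: model 984 (64 cm x 144 cm, Table 68) on a 342 kg/m² floor,
# which requires 51 cm sides and 76.2 cm front and rear (Table 67).
print(round(frame_area_m2(64, 144, 51, 76.2, 76.2), 2))  # 4.92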

Dimensions and weight of individual models


When you are planning the floor and space requirements for your storage system,
consider the dimensions and weights of the frames that compose your storage
system.

The following table provides dimensions and weights.


Table 68. DS8880 dimensions and weights

DS8880 Model 984: Height 193 cm (76 in.); Width 64 cm (25 in.); Depth 144 cm
(56.5 in.); maximum weight 976 kg (2151 lb)
DS8880 Model 985: Height 193 cm (76 in.) without top expansion, 220 cm
(86.5 in.) with top expansion; Width 64 cm (25 in.); Depth 144 cm (56.5 in.);
maximum weight 1144 kg (2520 lb)
DS8880 Model 986: Height 193 cm (76 in.) without top expansion, 220 cm
(86.5 in.) with top expansion; Width 64 cm (25 in.); Depth 144 cm (56.5 in.);
maximum weight 1099 kg (2421 lb)
DS8880 Model 988: Height 193 cm (76 in.) without top expansion, 220 cm
(86.5 in.) with top expansion; Width 64 cm (25 in.); Depth 144 cm (56.5 in.);
maximum weight 1080 kg (2380 lb)
DS8880 Model 84E (first expansion frame): Height 193 cm (76 in.); Width 64 cm
(25 in.); Depth 144 cm (56.5 in.); maximum weight 1040 kg (2290 lb)
DS8880 Model 84E (second expansion frame): Height 193 cm (76 in.); Width 64 cm
(25 in.); Depth 144 cm (56.5 in.); maximum weight 849 kg (1870 lb)
DS8880 Model 85E (first expansion frame): Height 193 cm (76 in.) without top
expansion, 220 cm (86.5 in.) with top expansion; Width 64 cm (25 in.); Depth
144 cm (56.5 in.); maximum weight 1116 kg (2460 lb)
DS8880 Model 85E (second and third expansion frames): Height 193 cm (76 in.)
without top expansion, 220 cm (86.5 in.) with top expansion; Width 64 cm
(25 in.); Depth 144 cm (56.5 in.); maximum weight 931 kg (2052 lb)
DS8880 Model 85E (fourth expansion frame): Height 193 cm (76 in.) without top
expansion, 220 cm (86.5 in.) with top expansion; Width 64 cm (25 in.); Depth
144 cm (56.5 in.); maximum weight 812 kg (1790 lb)
DS8880 Model 86E (first expansion frame): Height 193 cm (76 in.) without top
expansion, 220 cm (86.5 in.) with top expansion; Width 64 cm (25 in.); Depth
144 cm (56.5 in.); maximum weight 953 kg (2100 lb)
DS8880 Model 86E (second and third expansion frames): Height 193 cm (76 in.)
without top expansion, 220 cm (86.5 in.) with top expansion; Width 64 cm
(25 in.); Depth 144 cm (56.5 in.); maximum weight 935 kg (2060 lb)
DS8880 Model 86E (fourth expansion frame): Height 193 cm (76 in.) without top
expansion, 220 cm (86.5 in.) with top expansion; Width 64 cm (25 in.); Depth
144 cm (56.5 in.); maximum weight 935 kg (2060 lb)
DS8880 Model 88E: Height 193 cm (76 in.) without top expansion, 220 cm
(86.5 in.) with top expansion; Width 64 cm (25 in.); Depth 144 cm (56.5 in.);
maximum weight 908 kg (2000 lb)

Note:
1. These dimensions include casters and covers. The casters are recessed and do
not require extra clearance.

Service clearance requirements


The service clearance area is the area around the storage system that technical
service representatives need to service the system.

CAUTION:
Servicing of this product or unit is to be performed by trained personnel only.
(C032)

For DS8000 series, technical service representatives must open the front and rear
covers to service the storage system. Figure 20 on page 148 illustrates related
service clearance requirements:
v For the rear of the storage system ▌1▐, allow a minimum of 76.2 cm (30.0 in.) for
the service clearance.
v The length of the tailgate opening ▌2▐ on the storage system.
v For a single storage system installation (from the front of the storage system),
allow a minimum of 60.96 cm (24.0 in.) on the right side ▌3▐ in the aisle to the
front of the machine.
v For the front of the storage system ▌4▐, allow a minimum of 121.9 cm (48.0 in.)
for the service clearance.
v The leveling pads ▌5▐ and ▌6▐ on the storage system.
v The opening of the tailgate ▌7▐ on the storage system.

Unlike weight distribution areas that are required to handle floor loading, keep in
mind that service clearances of adjacent unrelated storage systems can overlap.

Note: The terms service clearance and weight distribution area are often confused with
each other. The service clearance is the area that is required to open the service
covers and to pull out components for servicing. The weight distribution area is
the area that is required to distribute the weight of the storage system.



Figure 20. Service clearance requirements. The figure also identifies the area
where the Hardware Management Console and processor nodes are located when they
are in the service position, and the service clearance area where the service
representative stands to access the keyboard and monitor to service the storage
system.

Earthquake resistance kit installation preparation


Before an IBM service representative can install the earthquake resistance kit on
any frames in your storage system, you must purchase fastening hardware and
prepare the location where the kit is to be installed.

The required tasks that you must perform before the earthquake resistance kit
installation depend on whether your storage system sits on a raised or a
nonraised floor. For either type of installation, work with a consultant or structural
engineer to ensure that your site preparations meet the requirements.

The following list provides an overview of the preparations necessary for each
type of floor:
Raised floor
v Cut the necessary holes and cable cutouts in the raised floor.
v Purchase and install eyebolt fasteners in the concrete floor.
Nonraised floor
Purchase and install fasteners in the concrete floor.



Further instructions for the preparation of your site for the earthquake resistance
kit are provided in “Preparing a raised floor for the earthquake resistance kit
installation” and “Preparing a nonraised floor for the earthquake resistance kit” on
page 153.

Preparing a raised floor for the earthquake resistance kit installation:

You must prepare a raised floor before an earthquake resistance kit can be installed
on any frame in your storage system.

About this task

To ensure that you meet all site requirements, obtain the service of a qualified
consultant or structural engineer to help you prepare the floor.

Figure 21 and Figure 22 on page 150 illustrate the earthquake resistance kit after it
is installed by an IBM service representative on a raised floor.

Figure 21. Turnbuckle assembly



Figure 22. Earthquake resistance kit installed on a raised floor

▌1▐ Frame
▌2▐ Support nut
▌3▐ Leveler feet
▌4▐ Load plate
▌5▐ Rubber bushing
▌6▐ Shaft

Complete the following steps to prepare your raised floor by using Figure 21 on
page 149 and Figure 22 as references.

Procedure
1. Cut the following openings in the raised floor for each frame that uses an
earthquake resistance kit:
v Four holes for the rubber bushings of the kit to fit through the floor.



v One cable cutout for power and other cables that connect to the frame.
Use Figure 23 as a guide for the location and dimensions of these openings.
The pattern repeats for each frame in the configuration. Dimensions are in
millimeters (inches).

Figure 23. Locations for the cable cutouts, rubber bushing holes on raised
floors, and eyebolt on concrete floors

2. Obtain four fasteners (per frame) that are heavy-duty concrete or slab floor
eyebolts. The four fasteners per frame are different from the eight fasteners per
frame that are needed for non-raised floor installation. These eyebolts are used
to secure the earthquake resistance kit. Work with your consultant or structural
engineer to determine the correct eyebolts to use, but each eyebolt must meet
the following specifications.
v Each eyebolt must withstand a 3600-pound pull force.
v The dimensions of the eyebolt must allow the turnbuckle lower jaw of the kit
to fit over the eyebolt and allow the spacer of earthquake resistance kit to fit
inside the eye. See Figure 24 on page 152 and Figure 25 on page 153.




▌1▐ Yoke
▌2▐ Nut, left-hand thread
▌3▐ Turnbuckle ½-13 right hand and left hand
▌4▐ Nut
▌5▐ Washer
▌6▐ Bushing
▌7▐ Load plates
▌8▐ Bushing, Plastic
▌9▐ Rods, ½-13 right hand, 304 mm, or 533 mm
▌10▐ Spacer
▌11▐ Washer flat, thick
▌12▐ Washer flat, custom

Figure 24. Raised floor tie-down installation



Figure 25. Eyebolt required dimensions (side view of eyebolt). The lower jaw
opening is 2.8 cm (1.1 in.); the spacer is 1.8 cm (0.7 in.).

3. Install the eyebolt fasteners in the concrete floor by using the following
guidelines:
v See Figure 23 on page 151 and Figure 25 to determine the placement of the
eyebolts. The eyebolts must be installed so that they are directly below the
holes that you cut in the raised floor for the rubber bushings.
v Ensure that the installed eyebolts do not exceed a height of 10.1 cm (4 in.)
from the floor to the center of the eye. This maximum height helps to reduce
any bending of the eyebolt shaft.
v Ensure that the installation allows the eyebolts to meet the required pull
force after they are installed (3600-pound pull force for raised floor eyebolts).
v If you use a threaded eyebolt that secures into a threaded insert in the floor,
consider using a jam nut and washer on the shaft of the eyebolt. Talk to your
consultant or structural engineer to determine whether a jam nut is
necessary.

Preparing a nonraised floor for the earthquake resistance kit:

You must prepare a nonraised floor before an earthquake resistance kit can be
installed on any frame in your storage system.



About this task

To ensure that you meet all site requirements, obtain the service of a qualified
consultant or structural engineer to help you prepare the floor.

Figure 26 provides an illustration of the earthquake resistance kit (▌1▐) after the
IBM service representative installs it on the nonraised floor.

Before the IBM service representative installs the kit, you must prepare the area
that is shown as ▌2▐ in Figure 26. This figure shows two of the most common
fasteners that you can use.

Figure 26. Earthquake resistance kit installed on a nonraised floor. The figure
shows the two most common fastener types: a bolt and washer that is screwed
into a threaded insert in the floor, and a nut and washer that is installed on
a stud.

Use the following steps to prepare your nonraised floor:

Procedure
1. Obtain eight fastener sets for each frame that uses the earthquake resistance kit.
These fastener sets are used to secure the earthquake resistance kit load plate.
The type of fastener set that you use can be determined by your consultant or
structural engineer. However, each bolt or stud must meet the following
specifications:
v Each fastener set must withstand a 2400-pound pull force.
v The fasteners must have a dimension that fits into the load plate holes,
which are each 21 mm (0.826 in.) in diameter.
v The fasteners must be long enough to extend through and securely fasten a
load plate that is 3.0 cm (1.2 in.) thick. The fasteners must also be short
enough so that the height of the installed fastener does not exceed 6.5 cm
(2.5 in.). This maximum height ensures that the fastener can fit under the
frame.
The following examples provide descriptions of nonraised floor fastener sets.
Figure 26 illustrates the fastener sets.



v Threaded hole insert that is secured into the concrete floor and a bolt (with a
washer) that screws into the insert
v Threaded stud that is secured into the concrete floor with a nut (with a
washer) that screws over it
2. Work with your consultant or structural engineer and use the following
guidelines to install the fasteners in the concrete floor:
v Use Figure 27 on page 156 to determine the placement of the load plate
fasteners ▌1▐ (four per load plate). The pattern repeats for each frame.
Dimensions are in millimeters (mm).
v Ensure that the installed fasteners do not exceed a height of 65 mm (2.5 in.)
from the floor. This maximum height ensures that the fastener can fit under
the frame.
v Ensure that the installation allows the fasteners to meet the required pull
force after they are installed (2400-pound pull force).
v If you use a threaded bolt that secures into a threaded insert in the floor and
the bolt extends longer than 30 mm (1.2 in.), which is the thickness of the
load plate, consider using a jam nut and a washer on the shaft of the bolt so
that the load plate can be secured snugly to the floor. Talk to your consultant
or structural engineer to determine whether a jam nut is necessary.



Figure 27. Location for fastener installation (nonraised floor)

3. When the IBM service representative arrives to install the earthquake resistance
kit, provide the other fastener parts (▌2▐ in Figure 26 on page 154) so that the
representative can use these parts to secure the load plates onto the floor.

Planning for power requirements


You must select a storage system location that meets specific power requirements.

When you consider the storage system location, consider the following issues:
v Power control selections
v Power outlet requirements
v Input voltage requirements
v Power connector requirements
v Remote force power off switch requirements
v Power consumption and environment

IBM cannot install the storage system if your site does not meet these power
requirements.



Attention: Implementation of surge protection for electronic devices as described
in the EN 62305 standard or IEEE Emerald Book is recommended. If a lightning
surge or other facility transient voltages occur, a surge-protection device limits the
surge voltage that is applied at the storage system power input. A surge-protection
device is required for facilities in Korea or customers that conform to the European
EMC Directive or CISPR 24.

Overview of storage system power controls


The storage system contains power controls on the frames. The power controls can
be configured by a technical service representative. The power controls can also be
accessed through the management console.

The storage system has the following manual power controls in the form of
physical switches that are on the racks:
Local/remote switch
(Available on base frames) The local/remote switch setting determines
your use of local or remote power controls. When you set the switch to
local, the local power on/local force power off switch controls power for
the storage system. You can access this switch by opening the rear cover of
the storage system. When the local/remote switch is set to remote, the
power for the storage system is controlled by remote power control
settings that are entered in the DS8000 Storage Management GUI or DS
Service GUI.

Planning requirements: None.


Local power on/local force power off switch
(Available on base frames) The local power on/local force power off switch
initiates a storage system power-on sequence or a storage system force
power off sequence. This switch is applicable only when the local/remote
switch is set to local. You can access this switch by opening the rear cover
of the storage system.

Planning requirements: None.


DC-UPS circuit breaker switches
Each DC-UPS provides a circuit breaker switch that can be used to
disconnect the input line cord and the output power of that DC-UPS. If
both DC-UPS circuit breakers are switched off, all power is removed
from the rack immediately.

Attention: These switches are to be used only in specific service actions
or in an emergency. Switching off both DC-UPS circuit breakers in a
rack can result in loss of data.

The following power controls can be configured by a technical service
representative. You can also use the following power controls through the DS8000
Storage Management GUI (running on the management console):
Local power control mode
(Visible in the DS8000 Storage Management GUI) You cannot change this
setting in the DS8000 Storage Management GUI. This mode is enabled
when the local/remote switch that is on the storage system is in the local
position. When this setting is used, the local power on/local force
power-off switch that is on the storage system controls the power.

Planning requirements: None.



Power outlet requirements
Plan for the required power outlets for the installation of your storage system.

The following power outlets are required:
v Two independent power outlets for the two power cords that are needed by
each base and expansion frame.

Important: To eliminate a single point of failure, independent power feeds to
each DS8000 power supply are required. At least one of the feeds should have
power conditioning to ensure an acceptable level of power quality as specified
in standards such as ANSI C84.1 and EN 50160. Further, each power source
must have its own wall circuit breaker.

Attention: Ensure that there is a local site disconnect and/or isolation means
(such as a service branch circuit breaker, power feed switch, and/or wall socket
outlet to an industrial style plug) for each AC main power cord to a system. The
internal rack (system side) appliance cord coupler is NOT intended to be
connected or disconnected live. Follow local electrical authority site regulations for
proper isolation practices such as Lock Out Tag Out (LOTO) and/or approved
procedures.

DANGER
Appliance coupler connector is not intended for AC (Alternating Current -
Voltage) electrically live plugging nor for interrupting current. Remove and/or
isolate input power from the wall end (service branch circuit) to the power
cord before attaching or disconnecting the machine end (appliance coupler).


Input voltage requirements


When you plan for the power requirements of the storage system, consider the
input voltage requirements.

The following table provides the input voltages and frequencies that the storage
system supports.



Table 69. Single-phase input voltages and frequencies
Characteristic                           Voltage (single-phase)
Nominal input voltages                   200 - 240 RMS V AC
Minimum tolerated input voltage          180 RMS V AC
Maximum tolerated input voltage          256 RMS V AC
System maximum current rating            DS8886: 40 Amps; DS8884: 24 Amps
Wall breaker rating (1 ph) (note 1)      DS8886, US, Canada, and regions that
                                         require a 20% derating factor:
                                         50 - 63 Amps
                                         DS8886, regions with no derating
                                         requirement: 40 - 63 Amps
                                         DS8884, all regions: 30 - 63 Amps
Steady-state input frequencies           50 ± 3 or 60 ± 3.0 Hz
PLD input frequencies (<10 seconds)      50 ± 3 or 60 ± 3.0 Hz
Note:
1. Regions such as the United States and Canada require 20% derating for power
distribution circuits. For these regions, a minimum 50 A circuit breaker is
required for a 40 Amp rated system such as the DS8886. Verification of
applicable local standards is required for system installation. Some
configuration restrictions to limit power consumption can allow reduced breaker
ratings on a regional basis.
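
The 20% derating rule in the note translates to dividing the system maximum
current by 0.8 and rounding up to an available breaker size. A minimal sketch,
assuming only the values from Table 69:

import math

def min_breaker_amps(system_max_amps, derating=0.20):
    """Minimum wall breaker rating when a derating factor applies to
    power distribution circuits (for example, 20% in the US and Canada)."""
    return math.ceil(system_max_amps / (1 - derating))

print(min_breaker_amps(40))  # 50, matching the 50 A minimum for the DS8886
print(min_breaker_amps(24))  # 30, matching the 30 A minimum for the DS8884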

Table 70. Three-phase input voltages and frequencies
Characteristic                        Three-phase delta (3Ø∆)          Three-phase wye (3ØY: LL [Line-to-Line])
Nominal input voltages                200, 208, 220, or 240 RMS V AC   380, 400, or 415 RMS V AC
Minimum tolerated input voltage       180 RMS V AC                     315 RMS V AC
Maximum tolerated input voltage       256 RMS V AC                     456 RMS V AC
System maximum current rating         30 Amps for Japan;               17 Amps
                                      24 Amps for other regions
Wall breaker rating (3 ph) (note 1)   30 - 60 Amps                     20 - 32 Amps
Steady-state input frequencies        50 ± 3 or 60 ± 3.0 Hz            50 ± 3 or 60 ± 3.0 Hz
PLD input frequencies (<10 seconds)   50 ± 3 or 60 ± 3.0 Hz            50 ± 3 or 60 ± 3.0 Hz
Note:
1. Regions such as the United States and Canada require 20% derating for power
distribution circuits. For these regions, a minimum 50 A circuit breaker is
required for a 40 Amp rated system such as the DS8886. Verification of
applicable local standards is required for system installation. Some
configuration restrictions to limit power consumption can allow reduced breaker
ratings on a regional basis.

Power connector requirements


Ensure that the site where you plan to install the storage system meets the power
connector requirements.



Table 71 shows the power cords and the inline connectors and receptacle types that
they support. Find the power cord row that is appropriate for your site. Ensure
that your site meets the specified power connector requirements.

Attention:
v For reliable operation do not use Ground Fault Circuit Interrupter (GFCI), Earth
Leakage Circuit Breaker (ELCB), or Residual Current Circuit Breaker (RCCB)
type circuit breakers with the DS8880. The DS8880 is certified for safe operation
and compliant with IEC, EN, UL, and CSA 60950-1 standards. If local electrical
practice requires leakage-detection circuit breakers, they must be rated at a
minimum of 300 mA to reduce the risk of outage due to spurious actuation.
v Do not exceed the wire rating of the facility and ensure that separate protected
branch circuits are used for each cord in planning for redundancy.
Table 71. DS8880 power cords

Feature code 1062 (notes 1, 2, 3, 4, 5)
   Description: Single-phase power cord, 200V - 240V, 60A, IEC 60309 3-pin
   customer connector
   Inline connector: HBL360C6W, Pin and Sleeve Connector, IEC 60309, 2P3W
   Receptacle: HBL360R6W, AC Receptacle, IEC 60309, 2P3W
Feature code 1063 (notes 2, 3, 4)
   Description: Single-phase power cord, 200V - 240V, 63A, no connector
   Inline connector: Not applicable
   Receptacle: Not applicable
Feature code 1086 (notes 1, 4, 5)
   Description: Three-phase WYE (3ØY) voltage (five-wire 3+N+PE), 380-415V~
   (nominal line-to-line (LL)), 30 A, IEC 60309 5-pin customer connector
   Inline connector: HBL530C6V02, Pin and Sleeve Connector, IEC 60309, 4P5W
   Receptacle: HBL530R6V02, AC Receptacle, IEC 60309, 4P5W
Feature code 1087 (notes 1, 4, 5)
   Description: Three-phase Delta (3Ø∆) voltage (four-wire 3+PE (Protective
   Earth Ground)), 200-240V, 30 A, IEC 60309 4-pin customer connector
   Inline connector: HBL430C9W, Pin and Sleeve Connector, IEC 309, 3P4W
   Receptacle: HBL430R9W, AC Receptacle, IEC 60309, 3P4W
Feature code 1088 (note 4)
   Description: Three-phase WYE (3ØY) voltage (five-wire 3+N+PE), 380-415V~
   (nominal line-to-line (LL)), 40 A, no customer connector provided
   Inline connector: Not applicable
   Receptacle: Not applicable
Feature code 1089 (notes 1, 4, 5)
   Description: Three-phase Delta (3Ø∆) voltage (four-wire 3+PE (Protective
   Earth Ground)), 200-240V, 60 A, IEC 60309 4-pin customer connector
   Inline connector: HBL460C9W, Pin and Sleeve Connector, IEC 309, 3P4W
   Receptacle: HBL460R9W, AC Receptacle, IEC 60309, 3P4W

Notes:
1. The customer connector must be IEC 60309-compliant. Power cords that are
used with DS8100, DS8300, DS8700, DS8800, and DS8870 cannot be used again with
the DS8880.
2. Power cords without inline connectors are rated at 600 V AC. Other power
cords are rated as described in the table. The single-phase power cord has a
2W+G configuration.
3. The conductor size for single-phase power cords is 10 mm² (8 awg).
4. Power cords that exit the bottom are 4.2 m (14 ft) from the lowest point
where they exit the frame to the mating face of the plug or bare leads. Power
cords that exit the top are 1.8 m (6 ft) from the highest point from the frame
to the mating face of the plug or bare leads.
5. The IEC 60309 receptacle must be installed in a metal-backed box with the
green wire ground-connected to the grounding lug within the box. Ensure
continuity between the box and the metallic shielding of the liquid-tight
conduit.
6. The descriptions indicate the line cord rating and are not an indication of
system power rating. The cord rating defines the maximum breaker rating for the
line cord (60A with connector, 63A without connector). Downstream components
are designed for fault currents consistent with facility circuit breaker
ratings up to 63A.

The following shows the plug (connector on the line cord).

Figure 28. Single-phase, 200 - 240 V, 60A, IEC60309 cable connector pins

Figure 29. Three-phase low voltage (200 - 240 V) 60A, IEC60309 cable connector pins



Figure 30. Three-phase high voltage (380 - 415 V) 30A, IEC60309 cable connector pins

The following shows the receptacle or inline connector (the power source, a mirror
image of the plug).

Figure 31. Single-phase, 200 - 240 V, 60A, IEC60309 customer outlet pins

Figure 32. Three-phase low voltage (200 - 240 V) phase-to-phase, 60A, IEC60309
customer outlet pins



Figure 33. Three-phase high voltage (380 - 415 V) phase-to-phase, 30A, IEC60309
customer outlet pins

Power consumption and environmental information


When you are planning to meet the power requirements for the DS8000 series,
consider the power consumption and other environmental points of the storage
system.

Note: The power consumption for the operating disk enclosure is typically 275
watts - 300 watts. The power consumption for the operating High Performance
Flash Enclosure Gen2 is 500 watts.

The power consumption and environmental information for the IBM DS8000 is
provided in Table 72.
Table 72. Power consumption and environmental information for models 984, 985,
986, 84E, 85E, 86E, 988, 88E

Peak electric power (notes 1, 3), in kilovolt amperes (kVA)
   Base frame: Model 984: 4.9; Model 985: 6.7 (single-phase); Model 986: 6.2
   (three-phase); Model 988: 8.1 (three-phase)
   Expansion frame: Model 84E: 4.3; Model 85E: 6.7 (single-phase); Model 86E:
   6.4 (three-phase); Model 88E: 4.4 (three-phase)
Thermal load, in British thermal units (BTU) per hour
   Base frame: Model 984: 16595; Model 985: 22886 (single-phase); Model 986:
   21020 (three-phase); Model 988: 27715 (three-phase)
   Expansion frame: Model 84E: 14795; Model 85E: 22971 (single-phase); Model
   86E: 21743 (three-phase); Model 88E: 15062 (three-phase)
Capacity of exhaust, in cubic meters per minute (cubic feet per minute or CFM)
   Base frame: 44.2 (1500); Expansion frame: 51.8 (1800)
Ground leakage current, in milliamperes (mA)
   Base frame: 43; Expansion frame: 43
Startup current, in amperes (A or amp)
   Base frame: ≤ 100; Expansion frame: ≤ 100
Startup current duration, in microseconds (µs or µsec)
   Base frame: < 200; Expansion frame: < 200
Idle and operating sound power level, LwAd (note 2), in A-weighted bels (B)
   Base frame: 7.9; Expansion frame: 7.9

Notes:
1. The values represent data obtained from typical systems, which are
configured as follows:
v Standard base frames that contain 15 disk drive sets (16 drives per disk
drive set, 15 disk drive sets x 16 = 240 disk drives) and Fibre Channel
adapters.
v All-flash configurations that contain 8 sets of fully configured High
Performance Flash Enclosures Gen2 and 16 Fibre Channel adapters.
v Expansion models that contain 21 drive sets per storage enclosure (21 drive
sets x 16 = 336 drives) and Fibre Channel adapters.
2. LwAd is the statistical upper-limit A-weighted sound power level, expressed
in bels, declared in conformance with ISO 9296. Bels relate to decibels (dB) as
follows: 1 B = 10 dB. The ratings are rounded to the nearest 0.1 B.
Measurements are in conformance with ISO 7779.
3. All frames and configurations that are used in single-phase mode must not
exceed 8 kVA.
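
The thermal load figures in Table 72 track the standard conversion of 1 watt to
approximately 3.412 BTU per hour. The following sketch cross-checks a peak
power figure; the assumption that kVA approximately equals kW (power factor
near 1) is an illustration, not a statement about the product.

def thermal_load_btu_per_hr(power_kva, power_factor=1.0):
    """Approximate thermal load from electrical input power, using
    1 watt ≈ 3.412 BTU per hour and kW = kVA x power factor."""
    watts = power_kva * power_factor * 1000
    return watts * 3.412

# Example: model 985 base frame at 6.7 kVA peak (Table 72) gives about
# 22860 BTU per hour, close to the 22886 BTU per hour in the table.
print(round(thermal_load_btu_per_hr(6.7)))  # 22860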

Acoustic declaration for the DS8000 series

Table 73 describes the acoustic declaration information for the DS8000 series.
Table 73. Acoustic declaration for fully configured DS8000 series

                             Declared A-weighted sound        Declared A-weighted sound
                             power level, LWAd (B)            pressure level, LpAm (dB)
                             (notes 1, 4)                     (notes 2, 3, 4)
Model                        Operating      Idling            Operating      Idling
Model 984, 985, 986, 988     8.4            8.4               65             65
Model 84E, 85E, 86E, 88E     8.4            8.4               65             65

Notes:
1. LWAd is the statistical upper-limit A-weighted sound power level (rounded to
the nearest 0.1 B).
2. LpAm is the mean A-weighted emission sound pressure level that is measured
at the 1-meter bystander positions (rounded to the nearest dB).
3. 10 dB (decibel) = 1 B (bel).
4. All measurements made in conformance with ISO 7779 and declared in
conformance with ISO 9296.

Planning for environmental requirements


You must install your storage system in a location that meets the operating
environment requirements for correct maintenance.

Take the following steps to ensure that you meet these requirements:
1. Note where air intake locations are on the models that compose your storage
system.
2. Verify that you can meet the environmental operating requirements at the air
intake locations.
3. Consider optimizing the air circulation and cooling for the storage system by
using a raised floor, adjusting the floor layout, and adding perforated tiles
around the air intake areas.



Fans and air intake areas
The storage system provides air circulation through various fans throughout the
frame. You must maintain the correct operating environment requirements for your
models at each air intake location.

Table 74 summarizes fan, intake, and exhaust locations.


Table 74. Machine fan location
DS8880 Fan Location Machine Location Intake Location Exhaust Location
Entire machine Entire Front covers Rear covers
Power complex Bottom Front covers Rear covers

Operating environment requirements


You must meet specific operating environment requirements at all the air intake
locations of your storage system.

The operating points vary depending on the state of the system. The system can be
in the following states:
v Powered on
v Powered off
v In storage

Powered on:

Plan for the operating ranges and recommended operating points of the storage
system.

Table 75 provides the operating ranges for your storage system when its power is
on.
Table 75. Operating extremes with the power on
Measurement Value
Altitude 0 - 2133 m (0 - 7000 ft)
Dry bulb temperature 16 - 32°C (60 - 90°F)
Relative humidity 20 - 80%
Wet bulb temperature (maximum) 23°C (73°F)

Table 76 provides the optimum operating points for your storage system with its
power on.
Table 76. Optimum operating points with the power on
Measurement Value
Temperature 22°C (72°F)
Relative humidity 45%

Table 77 on page 166 provides the optimum operating ranges for a storage system
with the power on.



Table 77. Optimum operating ranges with power on
Measurement Value
Temperature 16 - 32°C (60 - 90°F)
Relative humidity 40 - 50%
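
When you survey the air-intake locations, a reading can be checked against these limits. The following Python sketch is a minimal illustration; the limit values are taken from Table 75, and the example readings are hypothetical.

  # Minimal check of an air-intake reading against the power-on
  # operating extremes in Table 75 (values copied from that table).
  DRY_BULB_RANGE_C = (16.0, 32.0)
  RELATIVE_HUMIDITY_RANGE_PCT = (20.0, 80.0)
  WET_BULB_MAX_C = 23.0

  def within_power_on_limits(dry_bulb_c, rh_pct, wet_bulb_c):
      return (DRY_BULB_RANGE_C[0] <= dry_bulb_c <= DRY_BULB_RANGE_C[1]
              and RELATIVE_HUMIDITY_RANGE_PCT[0] <= rh_pct <= RELATIVE_HUMIDITY_RANGE_PCT[1]
              and wet_bulb_c <= WET_BULB_MAX_C)

  # The Table 76 optimum point passes the check:
  print(within_power_on_limits(22.0, 45.0, 18.0))  # True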

Powered off:

Plan for the required temperature and humidity ranges when the storage system is
off.

Table 78 provides the temperatures and humidity requirements for your storage
system when the power is off.
Table 78. Temperatures and humidity with the power off
Measurement Value
Temperature 10 - 43°C (50 - 110°F)
Relative humidity 8 - 80%
Wet bulb temperature (maximum) 27°C (80°F)

In storage:

Plan for the required temperature and humidity ranges when the storage system is
in storage.

Table 79 provides the temperatures and humidity requirements for storing your
storage system.
Table 79. Temperatures and humidity while in storage
Measurement Value
Temperature 1 - 60°C (34 - 140°F)
Relative humidity 5 - 80%
Wet bulb temperature (maximum) 29°C (84°F)

Corrosive gasses and particulates


Plan for air quality that meets standards for corrosive gases and particulates.

The DS8000 series is designed to operate reliably in a general business-class
environment. A general business-class environment is one that has automated
24x7x365 temperature and humidity controls and also operates with G1
specifications for corrosive gases and P1 specifications for particulates.

Operating vibration and shock requirements


The vibration levels that are designed for DS8880 comply with class V1
requirements included in the product classes for vibration.

DS8880 is designed to operate under the vibration V1 levels that are described in
Table 80 on page 167. Additional information includes random vibration PSD
profile breakpoints and operational shock levels. See Table 81 on page 167 and
Table 82 on page 167.



Table 80. Vibration levels for DS8880

Class    grms (note 1)    g Peak Sine (note 2)
V1L      0.10             0.06 @ 50 and 60 Hz

Notes:
1. All values in this table are in g²/Hz.
2. g is the peak g level of an approximate half-sine pulse.

Table 81. Random vibration PSD profile breakpoints for DS8880

Class    5 Hz       17 Hz      45 Hz      48 Hz      62 Hz      65 Hz      150 Hz     200 Hz     500 Hz
V1L      2.0x10⁻⁷   2.2x10⁻⁵   2.2x10⁻⁵   2.2x10⁻⁵   2.2x10⁻⁵   2.2x10⁻⁵   2.2x10⁻⁵   2.2x10⁻⁵   2.2x10⁻⁵

Note: All values in this table are in g²/Hz.
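
The grms value in Table 80 can be cross-checked against the PSD breakpoints in Table 81, because grms is the square root of the area under the PSD curve. The following Python sketch integrates the profile with the trapezoidal rule; straight-line interpolation between breakpoints is an assumption, since vibration standards sometimes interpolate on log-log axes.

  # Cross-check: integrate the Table 81 PSD profile to recover the
  # Table 80 grms value (square root of the area under the curve).
  import math

  freqs_hz = [5, 17, 45, 48, 62, 65, 150, 200, 500]
  psd_g2_per_hz = [2.0e-7] + [2.2e-5] * 8

  area = sum((psd_g2_per_hz[i] + psd_g2_per_hz[i + 1]) / 2.0
             * (freqs_hz[i + 1] - freqs_hz[i])
             for i in range(len(freqs_hz) - 1))
  print(round(math.sqrt(area), 2))  # ~0.1 g rms, matching Table 80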

Table 82. Operational shock levels for DS8880

Class    Axis        g (note 1)    pw (note 2)
S1       Vertical    3.5           3.0

Notes:
1. g is the peak g level of an approximate half-sine pulse.
2. "pw" is the pulse width in milliseconds.

Contamination information
You must consider the air quality and contamination levels at your installation site.

Airborne particulates (including metal flakes or particles) and reactive gases that
act alone or in combination with other environmental factors, such as humidity or
temperature, might pose a risk to the storage system hardware. Risks that are
posed by the presence of excessive particulate levels or concentrations of harmful
gases include damage that might cause the system to malfunction or cease
functioning altogether. This specification presents limits for particulates and gases
that are intended to avoid such damage. The limits must not be viewed or used as
definitive limits because numerous other factors, such as temperature or moisture
content of the air, can influence the impact of particulates or environmental
corrosives and gaseous contaminant transfer.

Attention: In the absence of specific limits that are presented in this document,
you must implement practices that maintain particulate or gas levels that are
consistent with the protection of human health and safety. If IBM determines that
the levels of particulates or gases in your environment damaged the storage
system, the warranty is void. Implementation of correctional measures is a
customer responsibility.

The following criteria must be met:


Gaseous contamination
Severity level G1 according to ANSI/ISA 71.04-1985 (see note 1), which
states that the reactivity rate of copper coupons must be less than 300
Angstroms per month (Å/month, ≈ 0.0039 µg/cm²-hour weight gain; see note
2). In addition, the reactivity rate of silver coupons must be less than
300 Å/month (≈ 0.0035 µg/cm²-hour weight gain; see note 3). The reactive
monitoring of gaseous corrosivity is conducted approximately 2 inches (5
cm) in front of the rack on the air inlet side at one-quarter and
three-quarter frame height off the floor, or where the air velocity is
much higher.



Particulate contamination
Data centers must meet the cleanliness level of ISO 14644-1 class 8. For
data centers without airside economizers, the ISO 14644-1 class 8
cleanliness can be met by selecting one of the following filtration methods:
v The room air can be continuously filtered with MERV 8 filters.
v Air entering a data center can be filtered with MERV 11, or preferably
MERV 13 filters.
For data centers with airside economizers, the choice of filters to achieve
ISO class 8 cleanliness depends on the specific conditions present at that
data center. The deliquescent relative humidity of the particulate
contamination must be more than 60% RH (see note 4). Data centers must be
free of zinc whiskers (see note 5).
1. ANSI/ISA-71.04.1985. Environmental conditions for process measurement and control
systems: Airborne contaminants. Instrument Society of America, Research Triangle
Park, NC, 1985.
2. The derivation of the equivalence between the rate of copper corrosion product
thickness growth in Å/month and the rate of weight gain assumes that Cu₂S
and Cu₂O grow in equal proportions.
3. The derivation of the equivalence between the rate of silver corrosion product
thickness growth in Å/month and the rate of weight gain assumes that Ag₂S is
the only corrosion product.
4. The deliquescent relative humidity of particulate contamination is the relative
humidity at which the dust absorbs enough water to become wet and promote
corrosion, ion migration, or both.
5. Surface debris is randomly collected from 10 areas of the data center on a 1.5
cm diameter disk of sticky, electrically conductive tape on a metal stub. If
examination of the sticky tape in a scanning electron microscope reveals no zinc
whiskers, the data center is considered free of zinc whiskers.

Cooling the storage complex


You can take steps to optimize the air circulation and cooling for your storage
systems.

About this task

To optimize the cooling around your storage systems, prepare the location of your
storage systems as recommended in the following steps.

Note: The installation of a storage system is done by technical service
representatives. However, the following steps describe the process that is
needed to optimize the air circulation and cooling for your storage systems.
1. Prepare for the installation of the storage system on a raised floor. Although the
storage system can be installed on a nonraised floor, installing the storage
system on a raised floor provides increased air circulation for better cooling.
2. Install perforated tiles in the front of each base frame and expansion frame as
follows:
a. For stand-alone base frames, install two fully perforated tiles in front of
each base frame, as shown in Figure 34 on page 169 in the single-machine
examples (▌1▐ and ▌2▐ in the figure).
b. For a row of storage systems, install a row of perforated tiles in front of the
storage systems as shown in Figure 34 on page 169 (▌3▐ in the figure).



c. For groupings of storage systems, where a hot aisle/cold aisle layout is
used, use a cold aisle row of perforated tiles in front of all storage systems.
An example of a possible minimal arrangement is shown in Figure 34 (▌4▐
in the figure).

Note: Keep in mind that the example represented in Figure 34 meets service
clearance requirements. However, floor loading requirements might require
a wider hot aisle to provide up to 30 inches of non-overlapping clearance
(for a total of 60 inches between the rear sides of the frames on either
side of the aisle).

[Figure: perforated-tile layouts for two single-machine examples, one row-of-machines example (120 in. - 123 in. aisle), and a hot aisle/cold aisle installation that spans 8 tiles (16 ft). Legend: F = front; R = rear; Base = base model; Exp = expansion model; tiles are 2 ft, perforated.]
Figure 34. DS8880 layouts and tile setup for cooling



Planning for safety
You must consider various safety issues when you plan your storage system
location.

The following list identifies some of the safety issues you must consider:
v Fire suppression
v Earthquake safety

Providing a fire-suppression system


Set up an environment that supports the temperature, cooling, and operating
requirements of your storage system. You are responsible for providing a fire
suppression system for your storage system.

About this task

IBM designs and manufactures equipment to internal and external standards that
require certain environments for reliable operation. Because IBM does not test any
equipment for compatibility with fire-suppression systems, IBM does not make
compatibility claims of any kind. IBM does not provide recommendations on
fire-suppression systems.

Procedure
1. Consult your insurance underwriter, local fire marshal, or local building
inspector about selecting a fire-suppression system that provides the correct
level of coverage and protection.
2. Set up an environment that supports the temperature and cooling requirements
for your storage system as described in the environmental temperature
requirements-planning area.

Earthquake preparedness alternatives


If you are installing your storage system in an area that is prone to earthquakes,
plan for special installation methods to minimize earthquake damage to your
system.

An unsecured base or expansion frame can topple or be displaced during an
earthquake. This displacement puts both the storage system and your
personnel in danger. To help prevent damage, restrain your storage system
by using one of the following two methods:
Restraint method
Allows some system movement and provides for both personnel safety
and protection of your storage system. The DS8000 earthquake resistance
kit provides this type of earthquake protection.
Hard mounting
Physically attaches your system to the floor. This method increases the
safety of personnel during an earthquake. However, the method damages
the storage system because the system absorbs most of the shock. IBM
does not support hard mounting.

Planning for network and communications requirements


You must locate your storage systems in a location that meets the network and
communications requirements.

Keep in mind the following network and communications issues when you plan
the location and interoperability of your storage systems:



v Management console network requirements
v Remote power control requirements
v Host attachment requirements
v SAN considerations

Management console network requirements


You must plan for the network requirements of the management console.

Each management console requires a dedicated connection to the network.

Note: If you plan on accessing the DS CLI or DS8000 Storage Management GUI
and have a firewall between the management console and your network, open the
following TCP/IP ports before installation: 1750, 1751, 1755 for CLI, and 8452 for
the DS8000 Storage Management GUI.
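
If you want to confirm the firewall configuration before installation, a simple connectivity probe can check each required port from a workstation on your network. The following Python sketch is illustrative; the management console address shown is a placeholder, not a value from this guide.

  # Probe the TCP ports required by the DS CLI (1750, 1751, 1755)
  # and the DS8000 Storage Management GUI (8452).
  import socket

  HMC_ADDRESS = "198.51.100.10"  # placeholder; use your management console address

  for port in (1750, 1751, 1755, 8452):
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.settimeout(3)
          status = "open" if s.connect_ex((HMC_ADDRESS, port)) == 0 else "blocked or closed"
          print("port {}: {}".format(port, status))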

Remote support connection requirements


IBM highly recommends remote-support connections so that the storage system can
use the call home feature to report failures and transmit pertinent debugging
data to IBM. When call home is used, your technical support representative can
quickly isolate and resolve potential issues with the storage system.

The strategic and preferred remote support connectivity method is internet SSL
(Secure Socket Layer) for management-console-to-IBM communication, and Assist
On-site (AOS) for IBM remote access to the management console and the storage
system. AOS provides a network-type connection that is secured by SSL and
state-of-the-art encryption technology. AOS is installed on a PC that is provided
and maintained by the customer. Alternatively, if the customer's security guidelines
allow, AOS can be turned on and configured on the management console. Contact
your technical support representative for more details.

In addition to AOS, IBM also offers the remote support center (RSC), a simple
SSH proxy-based remote service solution that can be used if AOS does not meet
your security guidelines. Contact your technical service representative for
more information on RSC.

Remote support must be ready at the time of the initial installation of the DS8000.
For internet-based remote support, you must open your firewalls.

Remote power control requirements


Use the remote power control settings to control the power of your storage
complex. Settings can be controlled through the DS8000 Storage Management GUI
running on the management console.

There are several settings for remote power control.

Host attachment communication requirements


Use host attachment communication requirements information to connect the host
attachments in your network.
v You must use worldwide port names to uniquely identify Fibre Channel
adapters that are installed in your host system.
v For open-system hosts with Fibre Channel adapters, keep in mind that Fibre
Channel architecture provides various communication protocols. Each
interconnected storage system within the architecture is referred to as a node,
and each host is also a node. Each node corresponds to one or more ports. (In
the case of Fibre Channel I/O adapters, these ports are Fibre Channel ports.)
Each port attaches to a serial-transmission medium that provides duplex
communication with the node at the other end of the medium. You can
configure your network structure based on one of three basic
interconnection topologies (network structures):
– Point-to-point and switched fabric
– Fibre Channel arbitrated loop
– Fibre Channel (FICON). This connection does not apply to open systems.
See the IBM DS8000 Host Systems Attachment Guide for more information about
these supported topologies.
v The maximum distance between a shortwave host adapter Fibre Channel port
and the following network components is 380 meters (1,246 ft 9 in.) at 4 Gbps,
150 meters (492 ft 2 in.) at 8 Gbps, and 100 meters (328 ft 1 in.) at 16 Gbps. For a
longwave host adapter Fibre Channel port, the maximum distance is 10 km (6.2
miles).
– Fabric switches
– Fabric hubs
– Link extenders
– Storage system Fibre Channel port
The maximum distance might be greater than 10 km (6.2 miles) when a link
extender provides target initiator functions or controller emulation functions.

Note: Do not use link extenders with emulation functions on links over which
Remote Mirror and Copy operations are performed. Link extenders with
emulation functions introduce more path delay.
v Because the Fibre Channel architecture allows any channel initiator to access any
Fibre Channel device, without access restrictions, a security exposure can occur.
Have your technical service representative set the Fibre Channel access modes to
the correct setting. See the IBM DS8000 Host Systems Attachment Guide for more
information about Fibre Channel access modes.
v Storage systems can connect to IBM SAN Volume Controller host systems. See
the IBM DS8000 Host Systems Attachment Guide for more information.

Attention: Signal-loss tolerance for Fibre Channel links that run at 8 or 16 Gbps
is reduced by 1.4 dB as compared to links that run at 4 Gbps. Take the necessary
steps to ensure that links are within the signal-loss parameters that are listed
when you plan a move from 4 Gbps FC to 8 or 16 Gbps FC adapters with existing
infrastructure. Using more than two patch panels or other link connectors between
two 8 or 16 Gbps ports at the maximum distance of 10 km might result in greater
link loss than is acceptable. The signal-loss tolerances, with measurements that
include all connectors and patch panel connections, are:
v Loss per 10 km for 16 Gbps speed is -6.4 dB.
v Loss per 10 km for 8 Gbps speed is -6.4 dB.
v Loss per 10 km for 4 Gbps speed is -7.8 dB.

OM3 Fibre Channel cables are required to support 8 Gbps host adapters.
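
As a planning aid, an estimated link-loss budget can be compared against the tolerances above. The following Python sketch is illustrative only: the fiber-attenuation and per-connector loss figures in it are assumptions rather than values from this guide, so substitute the measured losses of your own cabling and patch panels.

  # Compare an estimated link loss against the stated tolerances.
  TOLERANCE_DB = {4: 7.8, 8: 6.4, 16: 6.4}  # allowed loss per 10 km, by Gbps
  FIBER_ATTENUATION_DB_PER_KM = 0.4          # assumed longwave fiber attenuation
  CONNECTOR_LOSS_DB = 0.5                    # assumed loss per connector or patch panel

  def link_within_budget(speed_gbps, distance_km, connector_count):
      estimated = (FIBER_ATTENUATION_DB_PER_KM * distance_km
                   + CONNECTOR_LOSS_DB * connector_count)
      return estimated <= TOLERANCE_DB[speed_gbps]

  print(link_within_budget(16, 10.0, 2))  # True: 5.0 dB is within 6.4 dB
  print(link_within_budget(16, 10.0, 5))  # False: 6.5 dB exceeds the tolerance

With these assumed values, the second example shows why more than two patch panels on a maximum-length 8 or 16 Gbps link can push the loss past the tolerance.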



Chapter 7. Planning your storage complex setup
During installation, IBM customizes the setup of your storage complex that is
based on information that you provide in the customization worksheets.

Each time that you install a new storage system or management console, you must
complete the customization worksheets before installation by technical service
representatives.

Use the customization worksheets at "Customization worksheets" on page 177 to
specify the initial setup for the following items:
v Company information
v Management console network settings
v Remote support (includes call home and remote service settings)
v Notifications (includes SNMP trap and email notification settings)
v Power control
v Control Switch settings

Important: Technical service representatives cannot install a storage system or
management console until you provide them with the completed customization
worksheets.

Company information
Specify on the company information worksheet any information that technical
service personnel (or your service provider) can use to contact you as quickly as
possible or to access your storage complex.

This information includes the following items:


v General company information, such as company name and telephone number
v Administrator contact information
v Storage complex location

You must complete this worksheet for all installations that include a management
console.

Management console network settings


Use the management console network setting worksheet to specify the IP address
and LAN settings for your management console.

The management console network settings include the following items:


v Management console network identification
v Ethernet settings, if you want the management console to connect to your LAN
v DNS settings, if you plan to use a domain name server to resolve network
names
v Routings, if you want to specify a default gateway for routing

Note: IBM attaches your LAN after the storage complex is installed and in
operation.



You must complete the worksheet for all installations that include a management
console. Before you complete the worksheet, review the exceptions that are listed
in the notes at the bottom of the worksheet.

Remote support settings


The remote support worksheets specify whether you want outbound (call home) or
inbound (remote services) remote support, or both.

Ensure that you enable both outbound and inbound support to help you maintain
the highest availability of your data.

When you enable outbound (call home) remote support, your management console
sends an electronic call home record to IBM support when there is a problem
within the storage complex. If inbound remote service is also enabled, a technical
service representative can securely sign on to the management console from a
remote location in response to the service call.

The DS8000 uses secure Internet SSL connectivity for the outbound (call home)
remote support connection.

Assist On-Site (AOS) is available as a secure inbound remote service option. AOS
provides a mechanism to establish a secure network connection to IBM over the
internet by using SSL encryption. AOS can be installed on a customer gateway
server in a DMZ or, if your security guidelines allow, it can run directly on the
DS8000.

In addition to AOS, you can also use the remote support center (RSC), a simple
SSH proxy-based remote service solution that can be used if AOS does not meet
your security guidelines. Contact your technical service representative for
more information on RSC.

The management console can also be configured to offload error logs to IBM over
the Internet by using FTP. This option is normally only used when the customer
does not want to send error logs over an encrypted connection.

For any of the remote support connectivity methods, you can use the data storage
command-line interface (DS CLI) and its audit log feature to review who
completed any remote service on your storage system, and at what time the work
was completed. Contact your technical service representative for more information
on which service actions are completed remotely. You can also use DS CLI to
control network and remote service access to each management console and the
storage system.

The IBM AOS Redbook at http://www.redbooks.ibm.com/abstracts/redp4889.html?Open
provides additional information on AOS as a secure remote service solution.

You must complete the worksheet for all installations that include a management
console.



Notification settings
Use the notification worksheets to specify the types of notifications that you want
to receive and that you want others to receive.

Note: The technical service representative sets up the notification process.

Notifications contain information about your storage complex, such as
serviceable events.

You can receive notifications through the following methods:


v Simple Network Management Protocol (SNMP) traps
v Email

You can choose one or both notification methods.

When you choose to have your storage complex generate SNMP traps, you can
monitor the storage complex over your network. You can control whether
management information base (MIB) information is accessible and what type of
SNMP traps to send. You can also specify the maximum number of traps that are
sent for each event and where to send the traps.

Notes:
1. If you have open-systems hosts and remote mirror and copy functions, you
must enable SNMP notifications for status reporting.
2. If you plan to use advanced functions SNMP messaging, you must set those
functions by using DS CLI.
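
To confirm that traps from the storage complex reach your monitoring station, a minimal listener can be bound to the standard SNMP trap port. The following Python sketch only demonstrates that trap packets arrive; a production setup would use a full SNMP manager to decode and act on the trap contents.

  # Minimal check that SNMP trap packets arrive on UDP port 162.
  # Binding to port 162 usually requires administrator privileges.
  import socket

  listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  listener.bind(("0.0.0.0", 162))
  print("waiting for a trap...")
  data, sender = listener.recvfrom(4096)
  print("received {} bytes from {}".format(len(data), sender[0]))
  listener.close()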

When you choose to enable email notifications, email messages are sent to all the
email addresses that you specify on the worksheet when the storage complex
encounters a serviceable event or must alert you to other information.

You must complete the worksheet for each management console to be installed.

Power control settings


Use the power control worksheet to specify and schedule whether power turns on
and off automatically.

If you want to use a scheduled power mode, you must enter the schedule on the
power control worksheet. You must complete the power control worksheet for all
installations.

Control switch settings


Use the control switch settings worksheet to specify certain settings that affect
host connectivity. You are asked to enter these choices on the control switch
settings worksheet so that your technical service representative can set them
during the installation of your storage system.

The following control switches are set by using the choices that you specify on
the control switch settings worksheet.



IBM i LUN serial suffix number

Use this control switch setting only when you attach more than one DS8000
storage system to an AS/400 or IBM i host and the last three digits of the
worldwide node name (WWNN) are the same on any of the storage systems.

Control switch settings - attachment to IBM z Systems


The following control switch settings are specific to IBM z Systems.
16 Gb FC Forward Error Correction
Forward Error Correction (FEC) intercepts and corrects certain errors to
improve the adoption of 16 Gbps FCP and FICON host adapters.
Control-unit initiated reconfiguration (CUIR) support
Control-unit initiated reconfiguration (CUIR) allows automation of channel
path quiesce and resume actions during certain service actions. This setting
eliminates the requirement for manual actions from the host.
Control unit threshold
This control switch provides the threshold level, presenting a SIM to the
operator console for controller related errors. SIMs are always sent to the
attached IBM z Systems hosts for logging to the Error Recording Data Set
(ERDS). SIMs can be selectively reported to the IBM z Systems host
operator console, as determined by SIM type and SIM severity.
Device threshold
This control switch provides the threshold level, presenting a SIM to the
operator console for device related errors. Device threshold levels are the
same type and severity as control unit threshold settings.
Lights-on fast load for the 8 Gb/s host adapters
8 Gb Fibre Channel lights-on fast load provides seamless microcode update
to storage arrays, and eliminates potential loss of access.
"Fast load" is a general term for quickly loading a new code version to a
host adapter card on the DS8000. "Lights-on fast load" is an enhanced
feature of the fast load process. Before the introduction of this enhanced
feature, the laser light that sends the Fibre Channel signals would be
turned off briefly during a fast load. This interruption to the signals caused
the host at the other end to go through time-consuming recovery actions.
The lights-on fast load feature keeps the laser light turned on throughout
the fast load process. There are still intervals when the signals are paused,
but lights-on fast load keeps the light path active so that there is less
disruption to the host when a new code version is loaded to the adapter
card.
Lights-on fast load for the 16 Gb/s host adapters
16 Gb Fibre Channel lights-on fast load provides seamless microcode
update to storage arrays, and eliminates potential loss of access.
Media threshold
This control switch provides the threshold level, presenting a SIM to the
operator console for media related errors. Media threshold levels are the
same type and severity as control unit threshold settings.
Present SIM data to all hosts
Service Information Messages (SIMs) are offloaded to the first I/O request
directed to each logical subsystem in the storage facility if the request is
device or control unit related, or offloaded to the individual logical volume
when the request is media related. This control switch determines whether
SIMs are sent to all, or to only the first, attached IBM z Systems LPAR
making an I/O request to the logical system or logical volume.
IBM z Systems high-performance FICON enhanced buffer management
IBM z Systems high-performance FICON enhanced buffer management
provides improved performance for multi-site configurations when writing
data remotely (remote site recovery).

Customization worksheets
You must complete the customization worksheets before any installation of a new
storage system or management console. After you fill out the worksheets, give
them to the technical service representatives who complete the installation.

Use the worksheets to specify the initial settings for your storage system. You can
customize settings for company information, the management console network,
remote support settings, notifications, power control, and control switch settings.
The customization worksheets are frequently updated. To ensure that you use the
latest version, download a spreadsheet file that contains all of the worksheets from
IBM Techdocs.

Chapter 8. Planning data migration
The planning and methods of data migration for the DS8000 series vary by
environment.

When you plan for data migration, consider the following factors:

Note: The following lists do not cover every possibility. They provide a high-level
view of some of the tools and factors that you can consider when you move data.
Data
v How much data is to be migrated?
Operating system
v Is it a z Systems or UNIX system? Consider using IBM Remote Mirror
and Copy functions such as Metro Mirror, Global Mirror, or some
variation of a logical volume manager.
v Is it z/OS? Consider using DFDSS, though there are many choices.
v Is it VM? Consider using DASD Dump Restore or PTAPE.
v Is it VSE? Consider using the VSE fastcopy or ditto commands.
Your system administrator selects the data migration method that is the
best compromise between efficiency and impact on the users of the storage
system.
Storage system
v Are the storage systems involved the same type with the same level of
licensed machine code?
v Are the storage systems different? If the storage systems are different,
ensure that the new configuration is large enough to accommodate the
existing data. You also want to ensure that the virtual disks are similar
in configuration to the disk drives that they are replacing.
Time and complexity
v What duration of service outage can be tolerated? Typically data
migration requires that updates or changes cease while the movement
occurs. Also, depending on the amount of data that you are moving and
your migrating method, data might be unavailable for an extended time,
even several hours.
v Does the complexity and time that is involved require the services of
IBM through International Global Services? Contact your technical
support representative for more information.

When you replace existing storage, partition the storage so that the virtual disks
are similar in configuration to the disk drives that they are replacing. New
configurations must be large enough to accommodate the existing data.

You might want to take advantage of this opportunity to do some remapping. The
allocation and distribution of data are not required to be a straight one-to-one
relationship, although that is possible. For instance, you can take advantage of
using a maximum of 255 logical subsystems whereas the prior limitation was 32
logical subsystems.



Consider creating any new fixed block (FB) volumes with T10 DIF protection. This
protection can be used on volumes to which data is migrated, even if the current
host server volumes are not T10-protected. T10 DIF-protected volumes can be used
even if the host server does not currently support T10 DIF.

Selecting a data migration method


The data migration method that you select must provide the best compromise
between efficiency and impact on the system users. The selected method provides
a simple but robust method that minimizes user impact.

Most methods of data migration affect the everyday operation of a computer


system. When data is moved, the data must be in a certain state, and typically
requires that updates or changes cease while the movement occurs. Depending on
the amount of data that you are moving and your migration method, data might
be unavailable for an extended period, perhaps several hours. The following
factors might contribute to the migration time:
v Creating new logical volumes or file systems
v Modifying configuration files
v Receiving integrity checks

Consider the following items to determine the best method for your data
migration:
v Management software provides simple robust methods that you can use during
production without disturbing users.
v The AIX logical volume manager (LVM) provides methods that you can use at
any time without disrupting user access to the data. You might notice a small
performance degradation, but this is preferable to shutting down databases or
requiring users to log off the system.

Notes:
– AIX and HP-UX 11.xx ship with logical volume management (LVM) software
as part of the base operating system. LVM provides complete control over all
disks and file systems that exist on an AIX system. HP-UX has similar volume
management software.
– Sun Microsystems has a basic volume management product that is called
Solstice, which is available for the Solaris systems.
– Linux systems also use the LVM.
v Methods that use backup and restore procedures have the most impact on the
system usage. These procedures require that databases and file systems are in
quiescent states to ensure a valid snapshot of the data.
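
A rough outage window can be estimated from the amount of data to be moved and the sustained throughput of the chosen method. The following Python sketch is illustrative arithmetic only; real transfer rates vary with I/O size, contention, and the migration method, so treat the result as a starting point for planning.

  # Estimate migration time from data volume and sustained throughput.
  def migration_hours(data_tb, throughput_mb_per_s):
      total_mb = data_tb * 1024 * 1024  # terabytes to megabytes (binary units)
      return total_mb / throughput_mb_per_s / 3600.0

  # Example: 50 TB at a sustained 400 MB/s takes roughly a day and a half.
  print(round(migration_hours(50, 400), 1))  # ~36.4 hours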

Table 83 compares data migration options and lists advantages and disadvantages
of each.

Table 83. Comparison of data migration options

OS / LVM Mirroring
   Example: Logical Volume Managers (LVM), Veritas Volume Manager (VxVM), Windows Disk Administrator
   Advantages: Little or no application service disruption
   Disadvantages: Potential application delays

UNIX or Windows Commands
   Example: cpio, cplv, dd, tar, backup restore; copy, scopy, xcopy, drag and drop
   Advantages: Common, easy to use, tested
   Disadvantages: Length of service interruption varies; scripting is prone to errors and needs more testing

Remote Copy
   Example: Synchronous Mirror (Metro Mirror); Asynchronous Mirroring (Global Mirror and Global Copy)
   Advantages: Operating system independent
   Disadvantages: Like storage device types needed

Third-party software packages
   Example: Data Migration (XoSoft); Backup/Restore (Tivoli®, Legato, Veritas)
   Advantages: Some have little application service interruption; standard utilities
   Disadvantages: Cost of software; some have high application service interruption

Third-party migration appliances
   Example: IBM SAN Volume Controller, DataCore SANsymphony
   Advantages: Multiple heterogeneous storage vendors supported; migration cycles are offloaded to the appliance
   Disadvantages: Cost of migration appliance and service; application disruption to install and remove the appliance

Chapter 9. Planning for security
The DS8000 series provides functions to manage data secrecy and networking
security, including data encryption, user account management, and functions that
enable the storage system to conform with NIST SP 800-131A requirements.

Planning for data encryption


The DS8000 series supports data encryption by using IBM Security Key Lifecycle
Manager key servers.

To enable disk encryption, the storage system must be configured to communicate


with two or more IBM Security Key Lifecycle Manager key servers. The physical
connection between the Hardware Management Console (HMC) and the key server
is through an Internet Protocol network.

Planning for encryption is a customer responsibility. There are three major


planning components to the implementation of an encryption environment. Review
all planning requirements and include them in the installation considerations.

Planning for encryption-key servers


DS8000 storage systems require at least two encryption-key servers and associated
software for each site that has one or more encryption-enabled storage systems.

One encryption-key server must be isolated. An isolated encryption-key server is
a set of dedicated server resources that run only the encryption-key lifecycle
manager application and its associated software stack. This server is attached
directly to dedicated non-encrypting storage resources containing only key
server code and data objects.

The remaining key servers can be of any supported key-server configuration. Any
site that operates independently of other sites must have key servers for the
encryption-enabled storage systems at that site.

For DS8000 encryption environments, a second Hardware Management Console
(HMC) should be configured for high availability.

Important: You are responsible for replicating key labels and their associated key
material across all key servers that are attached to the encryption-enabled DS8000
system before you configure those key labels on the DS8000 system.

You can configure each encryption-enabled storage system with two independent
key labels. This capability allows the use of two independent key-servers when one
or both key-servers are using secure-key mode keystores. The isolated key-server
can be used with a second key-server that is operating with a secure-key mode
keystore.

For dual-platform key server support, the installation of IBM Security Key
Lifecycle Manager interim fix 2 (V1.0.0.2 or later) is recommended to show both
key labels in the DS8000 Storage Management GUI. If you intend to replicate keys
between separate IBM z Systems sysplexes by using ICSF with the
JCECCARACFKS keystore in secure-key mode and with the secure-key
configuration flag set in IBM Security Key Lifecycle Manager, then IBM Security
Key Lifecycle Manager 3 (V1.0.0.3 or later) is required.

Planning for key lifecycle managers


DS8000 storage systems support IBM Security Key Lifecycle Manager.

If NIST 800-131A security conformance is required on your DS8000 storage system,
select the version of IBM Security Key Lifecycle Manager that is appropriate for
your encryption key server host and connection network protocol requirements.
v If your encryption key server runs on an open system host and you do not plan
to use the Transport Layer Security (TLS) 1.2 protocol with this key server, use
IBM Security Key Lifecycle Manager V2.0.1 or later.
v If your encryption key server runs on an open system host and you plan to use
the TLS 1.2 protocol with this key server, use IBM Security Key Lifecycle
Manager V2.5 or later.
v If your encryption key server runs on an IBM z Systems host LPAR with z/OS,
use IBM Security Key Lifecycle Manager for z/OS V1.1.0.3 or later.
v If your encryption key server is Gemalto Safenet KeySecure, select version 8.0.0
or later.

If NIST 800-131A security conformance is not required on your storage system,
select the appropriate encryption key manager for your encryption key server host.
v If your encryption key server runs on an open system host, install IBM Security
Key Lifecycle Manager V2.0.1 or later.
v If your encryption key server runs on an IBM z Systems host LPAR with z/OS,
install IBM Security Key Lifecycle Manager for z/OS v1.0.1 or later.

IBM Storage Appliance 2421 Model AP1 can be ordered either as a single isolated
key server (feature code 1761) or as two isolated key servers (feature codes 1761
and 1762, ordered together). This order must include an indicator for IBM Security
Key Lifecycle Manager (feature code 0204), which indicates that a DVD with IBM
Security Key Lifecycle Manager software is provided with Storage Appliance AP1.
For more information, search for "IBM Storage Appliance 2421 Model AP1" at the
IBM Publications Center website (www.ibm.com/shop/publications/order).

If you want to acquire a different isolated key server, refer to the IBM Security
Key Lifecycle Manager Installation and Configuration Guide (SC27-5335) or the IBM
Security Key Lifecycle Manager online product documentation
(www.ibm.com/support/knowledgecenter/SSWPVP/) for hardware and operating system
requirements.

Note: You must acquire an IBM Security Key Lifecycle Manager license for use of
the IBM Security Key Lifecycle Manager software that is ordered separately from
the stand-alone server hardware. The IBM Security Key Lifecycle Manager license
includes both an installation license for the IBM Security Key Lifecycle Manager
management software and a license for encrypting drives.

IBM Security Key Lifecycle Manager for z/OS generates encryption keys and
manages their transfer to and from devices in an IBM z Systems environment.

Planning for full-disk encryption activation


Full-disk-encryption drives are standard on the DS8000 series. These drives encrypt
and decrypt at interface speeds, with no impact on performance.



Full disk encryption offerings must be activated before use, as part of the system
installation and configuration. This installation and activation review is performed
by the IBM Systems Lab Services team. To submit a request or inquiry, see the
Storage Services website (www-03.ibm.com/systems/services/labservices/
platforms/labservices_storage.html), and click Contact now.

You are responsible for downloading or obtaining from IBM, and installing
designated machine code (such as microcode, basic input/output system code
[BIOS], utility programs, device drivers, and diagnostics that are delivered with an
IBM system) and other software updates in a timely manner from the ibm.com
website (www.ibm.com) or from other electronic media, and following the
instructions that IBM provides. You can request IBM to install machine code
changes; however, you might be charged for that service.

Planning for user accounts and passwords


Planning for administrative user and service accounts and passwords ensures that
you use the best security practices.

Managing secure user accounts


Follow these recommended practices for managing secure user accounts.

Procedure

Complete the following steps to achieve the level of secure access for users that is
required for your storage system.
1. Assign two or more storage administrators and two or more security
administrators to manage your storage system. To preserve the dual control
that is recommended for recovery key management, do not assign both storage
administrator and security administrator roles to the same user. Change the
password for both the default storage administrator and default security
administrator user accounts, or delete the default user account after user
accounts for other administrators are created.
2. Create one user account for each user who is authorized to access your storage
system. Do not share a single user account between multiple users.
3. Assign appropriate user roles and scopes to user accounts in accordance with
the storage management responsibilities of the user.
4. Review configurable user ID policies, and set the policies in accordance with
your security objectives. The default settings are consistent with IBM
recommended user ID and password policies and practices.
5. For applications that require network access to the storage system, assign a
unique user ID (an ID that is not assigned to any other user). You can assign
different user IDs for different software applications or different servers so that
actions can be distinguished by user ID in the audit logs.

Managing secure service accounts


Follow these recommended practices to manage access to your service account in
the DS Service GUI and remote access by IBM Hardware Support.

Procedure

Complete the following steps to achieve the level of secure access that is required
for service accounts on your storage system.



1. Assign one or more service administrators to manage service on your storage
system.
2. Access the DS Service GUI from a web browser on a system that has network
access to the Hardware Management Console (HMC) at https://HMC_IP/
service, where HMC_IP is the IP address or host name of the HMC. You can
also access the DS Service GUI from the link on the login page of the DS8000
Storage Management GUI.
3. Log in to the DS Service GUI by using the service administrator account and
change the password for that account.
The service administrator account is pre-configured with user ID (customer)
and password (cust0mer).
4. Determine how you want IBM Hardware Support to access your storage
system and set remote service access controls accordingly.
Before installation of the storage system, your IBM service representative
consults with you about the types of remote service access available. IBM
recommends Assist On-site (AOS) as a secure remote service method. AOS
provides a mechanism to establish a secure network connection to IBM over the
internet with SSL encryption. It can be configured so that the service
administrator must approve remote service access and can monitor remote
service activity.

Planning for NIST SP 800-131A security conformance


The National Institute of Standards and Technology (NIST) SP 800-131A is a United
States standard that provides guidance for protecting data by using cryptographic
algorithms that have key strengths of 112 bits.

NIST SP 800-131A defines which cryptographic algorithms are valid and which
cryptographic algorithm parameter values are required to achieve a specific
security strength in a specific time period. Starting in 2014, a minimum security
strength of 112 bits is required when new data is processed or created. Existing
data processed with a security strength of 80 bits should remain secure until
around 2031, subject to additional NIST standards with guidelines for managing
secure data.

In general, storage systems allow the use of 112-bit security strengths if the other
unit that is attached to the network connection supports 112-bit security strength. If
security levels are set to conform with NIST SP 800-131A guidelines, the DS8880
storage system requires 112-bit security strength on all SSL/TLS connections, other
than remote support network connections.

On network connections that use SSL/TLS protocols, 112-bit security has the
following requirements:
v The client and server must negotiate the use of TLS 1.2.
v The client and server must negotiate an approved cipher suite that uses
cryptographic algorithms with at least 112-bit security strength.
v The client or server must limit hash and signature algorithms to provide at least
112-bit security strength; for example, the client must prevent the use of SHA-1
hashes.
v Certificates that are used by the client or server must have public keys and
digital signatures with at least 112-bit security strength, such as RSA-2048 keys
with SHA-256 digital signatures.

v Deterministic random bit generators (DRBGs) must use approved algorithms
with a least 112-bit security strength and must be provided with entropy sources
that have at least 112 bits of entropy.
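
As an illustration of the first two requirements, a client can refuse protocols older than TLS 1.2 before it connects. The following Python sketch shows one way to build such a context with the standard ssl module; enforcing specific cipher suites and certificate strengths beyond this depends on your security policy.

  # Build a client-side TLS context that refuses TLS 1.1 and earlier.
  import ssl

  context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
  context.minimum_version = ssl.TLSVersion.TLSv1_2  # require TLS 1.2 or later
  context.check_hostname = True                     # verify the server name
  context.verify_mode = ssl.CERT_REQUIRED           # require a valid certificate
  context.load_default_certs()
  print(context.minimum_version)  # TLSVersion.TLSv1_2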

To enable NIST SP 800-131A security conformance in your environment, update the
following entities. It might not be feasible to update all of these entities at
the same time because of various dependencies. Therefore, you can upgrade them
for NIST SP 800-131A security conformance independently of each other.
v Encryption key servers
v Remote authentication servers
v DS Network Interface clients
v DS Network Interface server
v DS8000 Storage Management GUI and DS Service GUI servers
v SMI-S agents

Attention: Before you disable earlier SSL/TLS protocols on the storage systems,
you must ensure that all external system networks connected to the DS8880 storage
systems are enabled for TLS 1.2 and are NIST SP 800-131A compliant. Otherwise,
network connection to these systems will be prohibited.

For information about configuring your environment for NIST SP 800-131A
conformance, see security best practices in the IBM DS8000 series online product
documentation (http://www.ibm.com/support/knowledgecenter/ST5GLJ_8.1.0/
com.ibm.storage.ssic.help.doc/f2c_securitybp.html).

Chapter 10. License activation and management
The management and activation of licensed functions are responsibilities that are
associated with the role of your storage administrator.

Management refers to the use of the IBM Data storage feature activation (DSFA)
website (www.ibm.com/storage/dsfa) to select a license scope and to assign a
license value. You can complete these activities and then activate the function.

Activation refers to the retrieval and installation of the feature activation code into
the storage system. The feature activation code is obtained by using the IBM Data
storage feature activation (DSFA) website (www.ibm.com/storage/dsfa) and is
based on the license scope and license value.

You complete these activities at the following times:


v After the technical service representative installs your storage system, and before
you configure it.
v When you increase the extent of the function authorization for a licensed
function (that is, you add more capacity to your license).

To complete these activities, access the IBM Data storage feature activation (DSFA)
website (www.ibm.com/storage/dsfa).

When you access the DSFA website, you must enter information about your
storage system so the web application can access the correct function authorization
records. You can find this information either by clicking Need Help in the Activate
Licensed Functions page or by selecting Properties on the System page of the
DS8000 Storage Management GUI.

Planning your licensed functions


As you plan for your licensed functions, it is important to consider increases in
your workload requirements. To provide more flexibility with licensing, use-based
licensing is supported on the DS8880.

The Base Function license is based on the entire capacity of the DS8880 system.

The z-synergy Services, Copy Services, and Copy Services Manager on the
Hardware Management Console (HMC) licenses are priced based on the use of
license capacity that you purchase.

With the use-based license capability comes the requirement to plan how much
storage capacity you require for future growth. As you plan for your licensed
functions, it is important to consider increases in your workload requirements. For
example, consider the following guidelines, which include but are not limited to:
v Plan for storage space allocation. Determine your typical storage requirements
and consider how much more storage you would need if you have rapid or
unpredictable growth.
v Estimate the amount of capacity you need for current and future Copy Services
functions. For example, consider the number of target volumes you need for
FlashCopy relationships at remote sites. As the number of FlashCopy target
volumes increase, more available bandwidth of the disk system might be used
by the copy process. In addition, Copy Services solutions that require multiple
copies of data can also require extensive capacity storage.

Recognizing that both your storage and data requirements will increase over time
and that capacity and performance planning is your responsibility, ensure that you
purchase and manage your licensed functions for maximum usage. It can be more
cost effective to purchase more storage capacity to ensure that the maximum usage
of your licensed functions does not exceed the allowed capacity of the storage that
was purchased. If the capacity is exceeded, IBM is notified that the usage exceeds
the allowed capacity on any given licensed function. You are notified by IBM and
required to extend enablement of your licensed function and install a new licensed
feature key.

Activation of licensed functions


After the technical service representatives complete your DS8000 series installation,
your first step is to activate your licensed functions.

To activate your licensed functions, complete the following actions.


v Obtain your feature activation codes.
v Apply the feature activation codes to your storage system. You can apply the
feature activation codes by importing a downloadable file from the IBM Data
storage feature activation (DSFA) website (www.ibm.com/storage/dsfa) .

The initial enablement of any optional DS8000 licensed function is a concurrent
activity (assuming that the appropriate level of microcode is installed on the
storage system for the function).

Note: Removal of a DS8000 licensed function to deactivate the function is
non-disruptive, but takes effect at the next IML.

Activating licensed functions


You must obtain feature activation codes for the licensed features for each storage
system by connecting to the IBM Disk Storage Feature Activation (DSFA) website.

Before you begin

Before you can connect to the site, ensure that you have the following items:
v Removable media for downloading your feature activation codes into a file.
Use the removable media if you cannot access the Storage Manager from the
system that you are using to access the DSFA website. Instead of using
removable media, you can also write down the activation codes and then
manually enter them into the system that runs the Storage Manager.
v The system serial number, model, and signature.

Notes:
1. Enabling an optional licensed function is a concurrent activity (assuming the
appropriate level of microcode is installed for the function). The following
activating activities are non-disruptive, but take effect at the next IML:
v Removal of a licensed function for its deactivation.
v A lateral change or reduction in the license scope. A lateral change is defined
as changing the license scope from fixed block (FB) to count key data (CKD)
or from CKD to FB. A reduction is defined as changing the license scope
from all physical capacity (ALL) to only FB or only CKD capacity.

2. Before you begin this task, you must resolve any current system problems.
Contact IBM Hardware Support for assistance.
3. Before you configure the system, disable any firewall or provide paths
through it, because a firewall might interfere with system communication.

About this task

Complete the following steps to activate feature activation codes.

Procedure
You can activate the licensed functions from one of two locations in the DS8000
Storage Management GUI: the System Setup wizard during initial configuration; or
the Licensed Functions tab of the System settings page.
1. Click Activate Licensed Functions or Activate.
2. Enter the license keys.
v If you received your license keys from a technical service representative,
enter them in the Activate Licensed Functions window.
v If you need to obtain your license keys from the IBM Data storage feature
activation (DSFA) website, complete the following steps.
a. Go to IBM Data storage feature activation (DSFA) website
(www.ibm.com/storage/dsfa).
b. Click DS8000 series.
c. Enter the machine type, serial number, and machine signature of your
DS8000 system. You can find this information either by clicking Need
Help in the Activate Licensed Functions window or by selecting the
Properties action on the System page.
d. Download the license keys XML file.
e. In the Activate Licensed Functions window, click the Browse icon to
select the license keys XML file that you downloaded and click Open.
3. Click Activate.
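
If you manage the storage system with the DS command-line interface (DS CLI)
rather than the GUI, you can also apply and verify license keys from the
command line. The following commands are a minimal sketch; the key file name
(keys.xml) and the storage image ID (IBM.2107-75FA120) are placeholders for
your own values.

   dscli> applykey -file keys.xml IBM.2107-75FA120
   dscli> lskey IBM.2107-75FA120

The lskey command lists the activation keys that are installed on the storage
image so that you can confirm the new authorization levels. Depending on your
DS CLI level, the output of the showsi command also includes the machine
signature that the DSFA website requires.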

Scenarios for managing licensing


These topics provide scenarios for managing your licenses after you initially
activate them.

The following scenarios are provided:
v Adding storage to your machine
v Managing a licensed feature

Note: More scenarios are available in IBM Knowledge Center
(www.ibm.com/support/knowledgecenter).

Adding storage to your machine


You can add storage (in terabytes) to an existing licensed function, such as Copy
Services.

About this task

For example, assume that you initially purchased 250 TB of Copy Services capacity.
After several months, you need an extra 100 TB for your point-in-time copy
operations. To increase storage, you must purchase and activate a larger license.

This activity is nondisruptive and does not require that you restart your storage
system.

Procedure
1. For example, you order four Copy Services features (feature code 8253, 25 TB
each, for a total of 100 TB) against the serial number of the 283x or 904x
machine type license that is currently on your storage system. This additional
license capacity increases your Copy Services authorization level.
2. After you order the features, you receive confirmation from IBM that these new
features were processed.
3. Connect to the IBM Data storage feature activation (DSFA) website
(www.ibm.com/storage/dsfa) to retrieve the feature activation code for the
licensed feature. This new feature activation code represents the total capacity
that is now licensed (350 TB). It licenses the original 250 TB plus the
additional 100 TB that you just ordered.
4. Alternatively, you can obtain the feature activation code for the licensed
feature from your sales representative.
5. After you obtain the feature activation code for the licensed feature, enter it
into the DS8000 Storage Management GUI, replacing the existing feature
activation code with the new one.
6. After the feature activation code is installed successfully, you have 350 TB
of Copy Services capacity.
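
If you use the DS CLI, you can confirm the new authorization level after the
feature activation code is installed. This is a sketch only; the storage image ID
is a placeholder for your own value.

   dscli> lskey IBM.2107-75FA120

The output lists each installed licensed function with its authorized capacity
and scope, so the Copy Services entry should now reflect the 350 TB
authorization level.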

Managing a licensed feature


Use the IBM Data storage feature activation (DSFA) website to change an optional
function from active to inactive. To make a licensed function inactive, you change
its assigned value, such as the current number of terabytes, to 0.

About this task

Normally, if you replace an active optional function with an inactive one, you
must repurchase the function to use it again. However, if you complete the
following steps, you can reactivate the function later without repurchasing it.

Procedure
1. From the IBM Data storage feature activation (DSFA) website
(www.ibm.com/storage/dsfa), change the assigned value from the current
number of terabytes (TB) to 0 TB.
2. When you want to use the function again, return to the DSFA website and
reactivate the function, up to the previously purchased level, without having
to repurchase it.

Appendix A. Accessibility features for IBM DS8000
Accessibility features help users who have a disability, such as restricted mobility
or limited vision, to use information technology products successfully.

Accessibility features

The following are the major accessibility features that are associated with the IBM
DS8000 series online product documentation.
v You can use screen-reader software and a digital speech synthesizer to hear what
is displayed on the screen. HTML documents have been tested using JAWS
version 15.0.
v This product uses standard Windows navigation keys.
v This product provides interfaces that are commonly used by screen readers.
v Keys are discernible by touch but do not activate just by touching them.
v This product uses industry-standard devices, ports, and connectors.
v This product supports the attachment of alternative input and output devices.

The DS8000 online product documentation and its related publications are
accessibility-enabled. The accessibility features of the online documentation are
described in the IBM Knowledge Center website (www.ibm.com/support/
knowledgecenter).

Keyboard navigation

You can use keys or key combinations to perform operations and initiate menu
actions that can also be done through mouse actions. You can navigate the DS8000
online documentation from the keyboard by using the shortcut keys for your
browser or screen-reader software. See your browser or screen-reader software
Help for a list of shortcut keys that it supports.

IBM and accessibility

See the IBM Human Ability and Accessibility Center (www.ibm.com/able/) for
more information about the commitment that IBM has to accessibility.

Appendix B. Warranty information
The statement of limited warranty specifies the warranty period, type of warranty
service, and service level.

See IBM Warranty Information for information about machine types 283x and
533x.

Appendix C. IBM DS8000 equipment and documents
Use the documents provided with your DS8000 series to identify and check your
main components.

The equipment that you receive can be grouped as follows:
v Components that must stay with the shipment because they are needed for
installation
v Components that are for customer use
v Components that must stay with the storage system after installation because
they are needed by your technical service representatives

Note: These lists are not comprehensive. They describe only the main shipped
components.

Installation components
Your shipment includes all the equipment that is needed for the installation of
your storage systems. Equipment includes storage systems, power cords, adapters,
cables, installation instructions, and other essential material.

The following installation components are included with your shipment:
Storage system
Your shipment includes one or more of the following frames that you
ordered:
v Base frame
v Expansion frames
When the frames arrive, they contain any ordered I/O enclosures, device
adapters, storage enclosures, drives, and the appropriate cables to support
those components. IBM installs these components at the factory.
Hardware Management Console
A primary management console is included with each base frame that you
order. The management console is physically located (installed) inside the
base frame. If needed, you can order a secondary management console for
the base frame.
Power cords
Your shipment includes the country or region-specific power cord that you
ordered.
Various media
IBM ships the following media (typically CDs), which are used during the
installation of your storage systems:
v Installation media, which includes installation scripts for the I/O
attachment for AIX and HP-UX, DS CLI (command-line interface)
software, and IBM Multipath Subsystem Device Driver installation
instructions and software
v Licensed machine code (LMC) media for the MC
v Operating system media
v LMC media for the 283x or 533x machine type

v Quick Code Reference document that details program code, utilities, and
documentation included in the ship group
Hardcopy installation instructions
Your shipment includes hardcopy installation instructions for the technical
service representatives who install your storage system.
Engineering changes (if applicable)
IBM occasionally releases engineering changes (ECs) to correct problems or
provide more support. If released, these ECs are included in your shipment
for the technical service representative to install.

Customer components
IBM provides media and documents that are intended for you to keep.
v License and warranty documents
v READ ME FIRST for IBM products
v Quick Code Reference, which includes a listing of customer publications and
media that is provided with the storage system
v Customer publications CDs: One CD contains PDFs of customer publications
and the other CD contains PDFs of license and warranty documents.

Service components
IBM provides service-related media and documents with your storage system.

Keep the following components with your storage system so that technical service
representatives can use them when they service your storage system.

Service media

Your delivery includes the following media for technical service representatives to
use:
v Operating system media
v Management console media:
– Management console critical backup SDHC memory card
– Dump, trace, statesave SDHC memory card, which technical support
representatives use for extracting statesave information during service
v A program temporary fix (PTF) CD for the operating system
v Service documents CD, which includes the following documentation: DS8000
service documentation and the DS8000 parts catalog.

Notices
This information was developed for products and services offered in the US. This
material might be available from IBM in other languages. However, you may be
required to own a copy of the product or product version in that language in order
to access it.

IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive, MD-NC119
Armonk, NY 10504-1785
US

For license inquiries regarding double-byte character set (DBCS) information,
contact the IBM Intellectual Property Department in your country or send
inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
19-21, Nihonbashi-Hakozakicho, Chuo-ku
Tokyo 103-8510, Japan

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of
express or implied warranties in certain transactions; therefore, this statement may
not apply to you.

This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.

Any references in this information to non-IBM websites are provided for
convenience only and do not in any manner serve as an endorsement of those
websites. The materials at those websites are not part of the materials for this IBM
product and use of those websites is at your own risk.

IBM may use or distribute any of the information you provide in any way it
believes appropriate without incurring any obligation to you.

The performance data discussed herein is presented as derived under specific
operating conditions. Actual results may vary.

Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.

Statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.

This information is for planning purposes only. The information herein is subject to
change before the products described become available.

This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to actual people or business enterprises is entirely
coincidental.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Copyright and trademark
information website (www.ibm.com/legal/copytrade.shtml).

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered
trademarks or trademarks of Adobe Systems Incorporated in the United States,
and/or other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other
countries, or both.

Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in
the United States, other countries, or both.

Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group in the United States and other
countries.

Homologation statement
This product may not be certified in your country for connection by any means
whatsoever to interfaces of public telecommunications networks. Further
certification may be required by law prior to making any such connection. Contact
an IBM representative or reseller for any questions.

Safety and environmental notices


Review the safety notices, environmental notices, and electronic emission notices
for this product before you install and use the product.

Safety notices and labels


Review the safety notices and safety information labels before using this product.

IBM Systems safety notices and information


This publication contains the safety notices for the IBM Systems products in
English and other languages. It also contains the safety information labels found
on the hardware in English and other languages. Anyone who plans, installs,
operates, or services the system must be familiar with and understand the safety
notices. Read the related safety notices before beginning work.

IBM Systems Safety Notices (www.ibm.com/shop/publications/order/), G229-9054

The publication is organized into three sections:
Safety notices
Lists the danger and caution notices without labels, organized
alphabetically by language.
The following notices and statements are used in IBM documents. They are
listed in order of decreasing severity of potential hazards.
Danger notice definition
A special note that calls attention to a situation that is potentially
lethal or extremely hazardous to people.
Caution notice definition
A special note that calls attention to a situation that is potentially
hazardous to people because of some existing condition, or to a
potentially dangerous situation that might develop because of
some unsafe practice.
Labels
Lists the danger and caution notices that are accompanied with a label,
organized by label reference number.
Text-based labels
Lists the safety information labels that might be attached to the hardware
to warn of potential hazards, organized by label reference number.

Note: This product has been designed, tested, and manufactured to comply with
IEC 60950-1, and where required, to relevant national standards that are based on
IEC 60950-1.

Finding translated notices

Each safety notice contains an identification number. You can use this identification
number to check the safety notice in each language. The notices that apply to the
product are listed in the “Danger notices for IBM DS8000 systems” and “Caution
notices for IBM DS8000 systems” topics.

To find the translated text for a caution or danger notice:


1. In the product documentation, look for the identification number at the end of
each caution notice or each danger notice. In the following examples, the
numbers (D002) and (C001) are the identification numbers.
DANGER
A danger notice indicates the presence of a hazard that has the potential
of causing death or serious personal injury. (D002)

CAUTION:
A caution notice indicates the presence of a hazard that has the potential of
causing moderate or minor personal injury. (C001)
2. Open the IBM Systems Safety Notices (G229-9054) publication.
3. Under the language, find the matching identification number. Review the topics
concerning the safety notices to ensure that you are in compliance.

To view a PDF file, you need Adobe Reader. You can download it at no charge
from the Adobe website (get.adobe.com/reader/).

Caution notices for IBM DS8000 systems

Ensure that you understand the caution notices for IBM DS8000 systems.

Caution notices

Use the reference numbers in parentheses at the end of each notice, such as (C001),
to find the matching translated notice in IBM Systems Safety Notices.

CAUTION: Energy hazard present. Shorting might result in system outage and possible physical
injury. Remove all metallic jewelry before servicing. (C001)

CAUTION: Only trained service personnel may replace this battery. The battery contains lithium.
To avoid possible explosion, do not burn or charge the battery.
Do not: Throw or immerse into water, heat to more than 100°C (212°F), repair or disassemble. (C002)

CAUTION: Lead-acid batteries can present a risk of electrical burn from high, short circuit
current. Avoid battery contact with metal materials; remove watches, rings, or other metal objects,
and use tools with insulated handles. To avoid possible explosion, do not burn. (C004)

CAUTION: The battery is a lithium ion battery. To avoid possible explosion, do not
burn. (C007)

CAUTION: This part or unit is heavy but has a weight smaller than 18 kg (39.7 lb). Use care when
lifting, removing, or installing this part or unit. (C008)

CAUTION: The doors and covers to the product are to be closed at all times except for service by
trained service personnel. All covers must be replaced and doors locked at the conclusion of the
service operation. (C013)

CAUTION: The system contains circuit cards, assemblies, or both that contain lead solder. To avoid
the release of lead (Pb) into the environment, do not burn. Discard the circuit card as instructed by
local regulations. (C014)

CAUTION: This product is equipped with a 3-wire (two conductors and ground) power cable and
plug. Use this power cable with a properly grounded electrical outlet to avoid electrical shock.
(C018)

CAUTION: This product might be equipped with a hard-wired power cable. Ensure that a
licensed electrician performs the installation per the national electrical code. (C022)

CAUTION: Ensure the building power circuit breakers are turned off BEFORE you connect the
power cord or cords to the building power. (C023)

CAUTION: To avoid personal injury, disconnect the hot-swap, air-moving device cables before
removing the fan from the device. (C024)

CAUTION: This assembly contains mechanical moving parts. Use care when servicing this
assembly. (C025)

CAUTION: This product might contain one or more of the following devices:
CD-ROM drive, DVD-ROM drive, DVD-RAM drive or laser module, which are Class 1 laser
products. Note the following information:

• Do not remove the covers. Removing the covers of the laser product could result in exposure to
hazardous laser radiation. There are no serviceable parts inside the device.

• Use of the controls or adjustments or performance of the procedures other than those specified
herein might result in hazardous radiation exposure. (C026)

CAUTION: Servicing of this product or unit is to be performed by trained service personnel only.
(C032)

CAUTION: The weight of this part or unit is between 16 and 30 kg (35 and 66 lb). It takes two
persons to safely lift this part or unit. (C040)

CAUTION: Refer to instruction manual. (C041)

CAUTION: Activate locks or brakes, or apply chocks as directed. Parts may shift or fall
and cause personal injury or mechanical damage if these safeguards are not used. (C042)

CAUTION: Following the service procedure assures power is removed from 200-240VDC power
distribution connectors before they are unplugged. However, unplugging 200-240VDC power
distribution connectors while powered on should not be done, because it can cause connector
damage and result in burn and/or shock injury from electrical arcing. (C043)

CAUTION: If your system has a module containing a lithium battery, replace it only with the same
module type made by the same manufacturer. The battery contains lithium and can explode if not
properly used, handled, or disposed of.

Do not:
• Throw or immerse into water

• Heat to more than 100°C (212°F)

• Repair or disassemble

Dispose of the battery as required by local ordinances or regulations. (C045)

CAUTION: The rack cabinet is supplied with native built-in extendable outriggers with small
floating supplemental castors as motion anti-tip features. They must all be extended into a latched
position before and during cabinet movement or relocation. These native built-in outriggers must
not be removed completely, but rather recessed in when finished to ensure they are readily
available for future use. (C050)

Use the following general safety information for all rack-mounted devices:

DANGER: Observe the following precautions when working on or around your IT rack system:

• Heavy equipment—personal injury or equipment damage might result if mishandled.

• Always lower the leveling pads on the rack cabinet.

• Always install stabilizer brackets on the rack cabinet.

• To avoid hazardous conditions due to uneven mechanical loading, always install the heaviest
devices in the bottom of the rack cabinet. Always install servers and optional devices starting
from the bottom of the rack cabinet.

• Rack-mounted devices are not to be used as shelves or work spaces. Do not place objects on top
of rack-mounted devices.

• Each rack cabinet might have more than one power cord. Be sure to disconnect all power cords in
the rack cabinet when directed to disconnect power during servicing.

• Connect all devices installed in a rack cabinet to power devices installed in the same rack cabinet.
Do not plug a power cord from a device installed in one rack cabinet into a power device
installed in a different rack cabinet.

• An electrical outlet that is not correctly wired could place hazardous voltage on the metal parts of
the system or the devices that attach to the system. It is the responsibility of the customer to
ensure that the outlet is correctly wired and grounded to prevent an electrical shock.
(R001 part 1 of 2)

CAUTION:

• Do not install a unit in a rack where the internal rack ambient temperatures will exceed the
manufacturer’s recommended ambient temperature for all your rack-mounted devices.

• Do not install a unit in a rack where the air flow is compromised. Ensure that air flow is not
blocked or reduced on any side, front or back of a unit used for air flow through the unit.

• Consideration should be given to the connection of the equipment to the supply circuit so that
overloading of the circuits does not compromise the supply wiring or overcurrent protection. To
provide the correct power connection to a rack, refer to the rating labels located on the equipment
in the rack to determine the total power requirement of the supply circuit.

• (For sliding drawers): Do not pull out or install any drawer or feature if the rack stabilizer
brackets are not attached to the rack. Do not pull out more than one drawer at a time. The rack
might become unstable if you pull out more than one drawer at a time.

• (For fixed drawers): This drawer is a fixed drawer and must not be moved for servicing unless
specified by the manufacturer. Attempting to move the drawer partially or completely out of the
rack might cause the rack to become unstable or cause the drawer to fall out of the rack.
(R001 part 2 of 2)

CAUTION: Removing components from the upper positions in the rack cabinet improves
rack stability during a relocation. Follow these general guidelines whenever you relocate a
populated rack cabinet within a room or building.

• Reduce the weight of the rack cabinet by removing equipment starting at the top of the
rack cabinet. When possible, restore the rack cabinet to the configuration of the rack
cabinet as you received it. If this configuration is not known, you must observe the
following precautions.

- Remove all devices in the 32U position and above.

- Ensure that the heaviest devices are installed in the bottom of the rack
cabinet.

- Ensure that there are no empty U-levels between devices installed in the
rack cabinet below the 32U level.

• If the rack cabinet you are relocating is part of a suite of rack cabinets, detach the rack
cabinet from the suite.

• Inspect the route that you plan to take to eliminate potential hazards.

• Verify that the route that you choose can support the weight of the loaded rack cabinet.
Refer to the documentation that comes with your rack cabinet for the weight of a loaded
rack cabinet.

• Verify that all door openings are at least 760 x 2030 mm (30 x 80 in.).

• Ensure that all devices, shelves, drawers, doors, and cables are secure.

• Ensure that the four leveling pads are raised to their highest position.

• Ensure that there is no stabilizer bracket installed on the rack cabinet during movement.

• Do not use a ramp inclined at more than 10 degrees.

• When the rack cabinet is in the new location, complete the following steps:

- Lower the four leveling pads.

- Install stabilizer brackets on the rack cabinet.

- If you removed any devices from the rack cabinet, repopulate the rack cabinet from the
lowest position to the highest position.

• If a long-distance relocation is required, restore the rack cabinet to the configuration of
the rack cabinet as you received it. Pack the rack cabinet in the original packaging
material, or equivalent. Also lower the leveling pads to raise the casters off the pallet and
bolt the rack cabinet to the pallet. (R002)

DANGER: Racks with a total weight of > 227 kg (500 lb.), Use Only Professional Movers!
(R003)

DANGER: Do not transport the rack via fork truck unless it is properly packaged, secured
on top of the supplied pallet. (R004)

CAUTION:

• Rack is not intended to serve as an enclosure and does not provide any degrees of protection
required of enclosures.

• It is intended that equipment installed within this rack will have its own enclosure. (R005).

CAUTION: Use safe practices when lifting. (R007)

CAUTION: Do not place any object on top of a rack-mounted device unless that rack-mounted
device is intended for use as a shelf. (R008)

DANGER:

Main Protective Earth (Ground):


This symbol is marked on the frame of the rack.
The PROTECTIVE EARTHING CONDUCTORS should be terminated at that point. A recognized
or certified closed loop connector (ring terminal) should be used and secured to the frame with a
lock washer using a bolt or stud. The connector should be properly sized to be suitable for the bolt
or stud, the locking washer, the rating for the conducting wire used, and the considered rating of
the breaker. The intent is to ensure the frame is electrically bonded to the PROTECTIVE
EARTHING CONDUCTORS. The hole that the bolt or stud goes into where the terminal connector
and the lock washer contact should be free of any non-conductive material to allow for metal to
metal contact. All PROTECTIVE BONDING CONDUCTORS should terminate at this main
protective earthing terminal or at points marked with this symbol. (R010)

Danger notices for IBM DS8000 systems

Ensure that you understand the danger notices for IBM DS8000 systems.

Danger notices

Use the reference numbers in parentheses at the end of each notice, such as (D001),
to find the matching translated notice in IBM Systems Safety Notices.

DANGER: To prevent a possible shock from touching two surfaces with different protective
ground (earth), use one hand, when possible, to connect or disconnect signal cables. (D001)

DANGER: Overloading a branch circuit is potentially a fire hazard and a shock hazard under
certain conditions. To avoid these hazards, ensure that your system electrical requirements do not
exceed branch circuit protection requirements. Refer to the information that is provided with your
device or the power rating label for electrical specifications. (D002)

DANGER: An electrical outlet that is not correctly wired could place hazardous voltage on the
metal parts of the system or the devices that attach to the system. It is the responsibility of the
customer to ensure that the outlet is correctly wired and grounded to prevent an electrical shock.
(D004)

DANGER: When working on or around the system, observe the following precautions:

Electrical voltage and current from power, telephone, and communication cables are hazardous. To
avoid a shock hazard:

• If IBM supplied a power cord(s), connect power to this unit only with the IBM provided power
cord. Do not use the IBM provided power cord for any other product.

• Do not open or service any power supply assembly.

• Do not connect or disconnect any cables or perform installation, maintenance, or reconfiguration
of this product during an electrical storm.

• The product might be equipped with multiple power cords. To remove all hazardous voltages,
disconnect all power cords.

• Connect all power cords to a properly wired and grounded electrical outlet. Ensure that the outlet
supplies proper voltage and phase rotation according to the system rating plate.

• Connect any equipment that will be attached to this product to properly wired outlets.

• When possible, use one hand only to connect or disconnect signal cables.

• Never turn on any equipment when there is evidence of fire, water, or structural damage.

• Do not attempt to switch on power to the machine until all possible unsafe conditions are corrected.

• Assume that an electrical safety hazard is present. Perform all continuity, grounding, and power
checks specified during the subsystem installation procedures to ensure that the machine meets
safety requirements.

• Do not continue with the inspection if any unsafe conditions are present.

• Disconnect the attached power cords, telecommunications systems, networks, and modems before
you open the device covers, unless instructed otherwise in the installation and configuration
procedures.

• Connect and disconnect cables as described in the following procedures when installing, moving,
or opening covers on this product or attached devices.

To disconnect:

1. Turn off everything (unless instructed otherwise).

2. Remove the power cords from the outlets.

3. Remove the signal cables from the connectors.

4. Remove all cables from the devices.

To connect:

1. Turn off everything (unless instructed otherwise).

2. Attach all cables to the devices.

3. Attach the signal cables to the connectors.

4. Attach the power cords to the outlets.

5. Turn on the devices.

• Sharp edges, corners and joints may be present in and around the system. Use care when handling
equipment to avoid cuts, scrapes and pinching. (D005)

DANGER: Heavy equipment — personal injury or equipment damage might result if mishandled.
(D006)

DANGER: Uninterruptible power supply (UPS) units contain specific hazardous materials.
Observe the following precautions if your product contains a UPS:

• The UPS contains lethal voltages. All repairs and service must be performed only by an authorized
service support representative. There are no user serviceable parts inside the UPS.

• The UPS contains its own energy source (batteries). The output receptacles might carry live voltage
even when the UPS is not connected to an AC supply.

• Do not remove or unplug the input cord when the UPS is turned on. This removes the safety
ground from the UPS and the equipment connected to the UPS.

• The UPS is heavy because of the electronics and batteries that are required. To avoid injury,
observe the following precautions:

- Do not attempt to lift the UPS by yourself. Ask another service representative for assistance.

- Remove the battery, electronics assembly, or both from the UPS before removing the UPS from
the shipping carton or installing or removing the UPS in the rack. (D007)

DANGER: Professional movers are to be used for all relocation activities. Serious injury or death
may occur if systems are handled and moved incorrectly. (D008)

Labels for IBM DS8000 systems


Labels

Use the reference numbers in parentheses at the end of each notice, such as (L001),
to find the matching translated notice in IBM Systems Safety Notices.

DANGER: Hazardous voltage, current, or energy levels are present inside any component that has
this label attached. Do not open any cover or barrier that contains this label. (L001)

DANGER: Multiple power cords. The product might be equipped with multiple power cords. To
remove all hazardous voltages, disconnect all power cords. (L003)

CAUTION: System or part is heavy. The label is accompanied by a specific weight range. (L009)

CAUTION: Pinch hazard. (L012)


ATTENTION: Refer to the Installation Planning Guide for additional explanation on configurations
supported for given voltage groups and Power Cord information. (L025)

ATTENTION: For use at altitude 2000 m or lower. (L026)

Text based labels for IBM DS8000 systems

The text-based safety information labels that might be attached to the hardware to
warn of potential hazards are listed, by label reference number, in IBM Systems
Safety Notices.
IBM®

Printed in USA

GC27-8525-11
