

THEORY OF OPERATION SECTION

[THEORY00-00-00]

Contents
1. Storage System Overview ..........................................................................................THEORY01-01-10
1.1 External View of Hardware...................................................................................THEORY01-01-10
1.2 Hardware Component ..........................................................................................THEORY01-02-10
1.3 Hardware Architecture .........................................................................................THEORY01-03-10
1.4 Network Topology.................................................................................................THEORY01-04-10
1.4.1 Management Software .................................................................................THEORY01-04-20
1.4.2 Maintenance Software..................................................................................THEORY01-04-30
1.5 Storage System Function Overview .....................................................................THEORY01-05-10
1.5.1 Basic Functions ............................................................................................THEORY01-05-10
1.5.2 Redundant Design........................................................................................THEORY01-05-50
1.5.3 Impact of Each Failure Part on Storage System ........................................THEORY01-05-100

2. Hardware Specifications .............................................................................................THEORY02-01-10


2.1 Storage System Specifications ............................................................................THEORY02-01-10
2.2 Power Specifications ............................................................................................THEORY02-02-10
2.2.1 Storage System Current ...............................................................................THEORY02-02-10
2.2.2 Input Voltage and Frequency .......................................................................THEORY02-02-30
2.3 Environmental Specifications ...............................................................................THEORY02-03-10
2.4 FC Interface Specifications ..................................................................................THEORY02-04-10
2.4.1 FC Interface Specification Values.................................................................THEORY02-04-10
2.4.2 FC Port WWN ...............................................................................................THEORY02-04-20
2.5 Mainframe fibre channel ......................................................................................THEORY02-05-10
2.6 65280 logical addresses ......................................................................................THEORY02-06-10
2.7 GUM and Peripheral Connections .......................................................................THEORY02-07-10
2.7.1 GUM block diagram and error detection ......................................................THEORY02-07-10

3. Software Specifications ..............................................................................................THEORY03-01-10


3.1 Micro-program and Program Product ..................................................................THEORY03-01-10
3.2 Logical Components Defined by Software ...........................................................THEORY03-02-10
3.3 Program Product (PP) List ...................................................................................THEORY03-03-10
3.4 GUM and Related Software .................................................................................THEORY03-04-10

4. Maintenance Work ......................................................................................................THEORY04-01-10


4.1 Overview of Maintenance Work ...........................................................................THEORY04-01-10
4.2 Maintenance Management Tools for Maintenance Person and Their Usage ......THEORY04-02-10
4.3 Troubleshooting Workflow ....................................................................................THEORY04-03-10
4.4 Important Precautions during Maintenance Work ................................................THEORY04-04-10
4.5 User account security policies .............................................................................THEORY04-05-10
4.5.1 Password requirements ................................................................................THEORY04-05-20
4.5.2 Login requirements .......................................................................................THEORY04-05-30
4.5.3 Password expiration .....................................................................................THEORY04-05-30
4.5.4 Account lockout ............................................................................................THEORY04-05-40
4.5.5 Additional notices .........................................................................................THEORY04-05-40
[THEORY00-00-10]

5. Drive Formatting .........................................................................................................THEORY05-01-10


5.1 Logical Volume Formatting ..................................................................................THEORY05-01-10
5.1.1 Overviews .....................................................................................................THEORY05-01-10
5.1.2 Estimation of Logical Volume Formatting Time ............................................THEORY05-01-20
5.1.3 Estimation of Logical Volume (Pool-VOL) Formatting Time .........................THEORY05-01-80
5.1.4 Estimation of Logical Volume (DP-VOL) Formatting Time ...........................THEORY05-01-90
5.1.5 Estimation of Logical Volume (DP-VOL with Capacity Saving Enabled)
Formatting Time ........................................................................................THEORY05-01-100
5.2 Quick Format .......................................................................................................THEORY05-02-10
5.2.1 Overviews .....................................................................................................THEORY05-02-10
5.2.2 Volume Data Assurance during Quick Formatting........................................THEORY05-02-30
5.2.3 Control information format time of M/F VOL .................................................THEORY05-02-40
5.2.4 Quick Formatting Time .................................................................................THEORY05-02-50
5.2.5 Performance during Quick Format ...............................................................THEORY05-02-70
5.2.6 Combination with Other Maintenance ..........................................................THEORY05-02-80
5.2.7 SIM Output When Quick Format Completed ................................................THEORY05-02-90
5.2.8 Coexistence of Drives ................................................................................THEORY05-02-100
5.3 Notes on Maintenance during LDEV Format/Drive Copy Operations ..................THEORY05-03-10
5.4 Verify (Parity Consistency Check) ........................................................................THEORY05-04-10
5.5 PDEV Erase .........................................................................................................THEORY05-05-10
5.5.1 Overview ......................................................................................................THEORY05-05-10
5.5.2 Rough Estimate of Erase Time .....................................................................THEORY05-05-20
5.5.3 Influence in Combination with Other Maintenance Operation ......................THEORY05-05-30
5.5.4 Notes of Various Failures .............................................................................THEORY05-05-60
5.6 Media Sanitization ................................................................................................THEORY05-06-10
5.6.1 Overview ......................................................................................................THEORY05-06-10
5.6.2 Estimated Erase Time ..................................................................................THEORY05-06-20
5.6.3 Checking Result of Erase .............................................................................THEORY05-06-30
5.6.3.1 SIMs Indicating End of Media Sanitization ...........................................THEORY05-06-30
5.6.3.2 Checking Details of End with Warning ..................................................THEORY05-06-40
5.6.4 Influence between Media Sanitization and Maintenance Work ....................THEORY05-06-60
5.6.5 Notes when Errors Occur .............................................................................THEORY05-06-90

6. Data In Place ..............................................................................................................THEORY06-01-10


6.1 Overview ..............................................................................................................THEORY06-01-10
6.2 DIP Procedures from VSP 5100/5100H, 5500/5500H to VSP 5600/5600H ........THEORY06-02-10
6.3 Estimating the Work Time ....................................................................................THEORY06-03-10
6.4 Effects on Performance........................................................................................THEORY06-04-10

7. Appendix A : Maintenance Associated with MF Products ...........................................THEORY07-01-10


7.1 Channel Commands ............................................................................................THEORY07-01-10
7.2 Comparison of Pair Status on Storage Navigator, Command Control
Interface (CCI) ....................................................................................................THEORY07-02-10
7.3 Locations where Configuration Information is Stored and Timing of
Information Update .............................................................................................THEORY07-03-10

[THEORY00-00-20]

7.4 TPF ......................................................................................................................THEORY07-04-10


7.4.1 An outline of TPF ..........................................................................................THEORY07-04-10
7.4.2 TPF Support Requirement............................................................................THEORY07-04-40
7.4.3 TPF trouble shooting method .......................................................................THEORY07-04-50
7.4.4 The differences of DASD-TPF (MPLF) vs DASD-MVS ................................THEORY07-04-60
7.4.5 Notices for TrueCopy for Mainframe-option setting ......................................THEORY07-04-90
7.4.6 TPF Guideline.............................................................................................THEORY07-04-100
7.4.6.1 SOM required as TPF .........................................................................THEORY07-04-100
7.4.6.2 How to set up TPF configuration ........................................................ THEORY07-04-110
7.4.6.3 Combination of copy P.P. ....................................................................THEORY07-04-130
7.4.6.4 Mixed both zTPF and zOS/zVM .........................................................THEORY07-04-140
7.4.6.5 Combination of HDP ...........................................................................THEORY07-04-140
7.4.6.6 TPF Copy Manager ............................................................................THEORY07-04-140
7.4.6.7 BCM ....................................................................................................THEORY07-04-140
7.4.6.8 Dual Write ...........................................................................................THEORY07-04-150
7.4.6.9 MPL.....................................................................................................THEORY07-04-160
7.5 CHB/DKB - SASCTL#/PSW#, Port# Matrixes .....................................................THEORY07-05-10
7.6 CTL/MPU - MPU#, MP# Matrixes ........................................................................THEORY07-06-10

[THEORY00-00-30]

1. Storage System Overview


Section 1 provides an overview of the storage system.

1.1 External View of Hardware


The DKC910I storage system is mounted in 19-inch racks and composed of the Controller Chassis (CBXs),
HSN Boxes (HSNBXs), and Drive Boxes. The Controller Chassis contains the Controller Boards that control
drives. The HSN Box (HSNBX) contains Interconnect Switches (ISWs) that connect multiple Controller
Boards, and the dedicated PC for maintenance (SVP). The Drive Box contains drives.
The Controller Chassis is 4U high, the HSN Box is 1U high, and the Drive Box is 2U high.
There are the following types of Drive Boxes: DBS2 and DBN in which 2.5-inch SFF drives are installed,
DBL in which 3.5-inch LFF drives are installed, and DBF3 in which FMDs (Flash Module Drives) are
installed.
A set of Drive Boxes is referred to as Disk Unit (DKU). The DKU composed of four DBS2s is referred to
as SBX, the DKU composed of eight DBLs is referred to as UBX, the DKU composed of four DBF3s is
referred to as FBX, and the DKU composed of four DBNs is referred to as NBX.
The storage system operates by connecting multiple Controller Chassis to HSN Boxes. The following
configurations are available:
• Two HSN Boxes and two CBXs
• Two HSN Boxes and four CBXs
• Two HSN Boxes and six CBXs
The maximum number of SBXs that can be installed per two CBXs is eight, while the maximum number
of UBXs/FBXs that can be installed per two CBXs is four. Up to eight DKUs (the total number of SBXs,
UBXs, and FBXs) can be installed per two CBXs.
NBX cannot be used with SBX, UBX, or FBX. Only one NBX can be installed per two CBXs.
An external view of a storage system configuration example is shown below.

Figure 1-1 Storage System Configuration Example (Two CBXs)
(The example configuration comprises Drive Boxes (2U), two HSN Boxes (1U), and two Controller Chassis (4U).)

[THEORY01-01-10]

1.2 Hardware Component


1. Controller Chassis (CBX)
The Controller Chassis (CBX) contains the Controller Boards (CTLs), Disk Boards (DKBs), Channel
Boards (CHBs), Interconnect Channel Boards (HIEs), LAN Boards, Power Supplies (DKCPSs), Cache
Flash Memories (CFMs), and Backup Modules (BKMFs) in which batteries and fans are installed.
DKBs are required for the connection between the Controller Chassis (CBX) and the Disk Unit (DKU).
Eight or more DKBs must be installed per VSP 5500, 5600/VSP 5500H, 5600H storage system. For VSP 5100, 5200/VSP 5100H, 5200H, four DKBs are installed per storage system, and this number cannot be increased.
However, DKBs are not required for the drive-less configuration that does not contain DKU.
Up to eight CHBs can be installed per CBX for VSP 5500, 5600/VSP 5500H, 5600H, and up to four for
VSP 5100, 5200/VSP 5100H, 5200H. Two or more CHBs must be installed per storage system.

Figure 1-2 Controller Chassis
(Front and rear views; labeled components: Controller Board, CFM, BKMF, DKCPS, CHB, LAN Board, HIE, DKB.)

[THEORY01-02-10]

(1) Controller Boards (CTL)


The Cache Memories (DIMMs), Cache Flash Memories (CFMs), and Backup Modules (BKMFs)
are installed in the Controller Board.

Figure 1-3 Controller Board
(Labeled components: Cache Memory (DIMM), Battery, BKMF, CFM.)

Table 1-1 Controller Boards Specifications

Item                                              Specifications
Necessary number of CTLs per Controller Chassis   VSP 5500, 5600/VSP 5500H, 5600H: 2
                                                  VSP 5100, 5200/VSP 5100H, 5200H: 1
Number of DIMM slots                              8
Cache Memory Capacity                             128 GiB to 512 GiB

[THEORY01-02-20]

(2) Cache Memory (DIMM)


The DIMMs shown in the following table can be used.

Table 1-2 Cache Memory Specifications


Capacity Component Model Number
32 GiB 32 GiB DIMM × 1 DW-F850-CM32G
64 GiB 64 GiB DIMM × 1 DW-F850-CM64GL

Figure 1-4 Top of Controller Board
(Top view; DIMM locations DIMM00 to DIMM03 belong to CMG0, and DIMM10 to DIMM13 belong to CMG1.)

DIMM Location
• The DIMMs with location numbers DIMM0x belong to CMG0 (Cache Memory Group 0), and the DIMMs with DIMM1x belong to CMG1 (Cache Memory Group 1).
• Be sure to install DIMMs in CMG0.
• Install DIMMs of the same capacity in sets of four.
• CMG1 is the slot group for adding DIMMs.
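As an illustration only (not taken from this manual), the following Python sketch checks a per-CTL DIMM installation plan against the rules above: DIMMs are installed in sets of four of the same capacity, CMG0 must always be populated, and CMG1 is used for additions. The slot names come from Figure 1-4; the helper function itself is hypothetical.

    # Illustrative sketch: validate a DIMM installation plan for one CTL
    # against the rules stated above (hypothetical helper, not a Hitachi tool).
    CMG0 = ["DIMM00", "DIMM01", "DIMM02", "DIMM03"]
    CMG1 = ["DIMM10", "DIMM11", "DIMM12", "DIMM13"]
    VALID_CAPACITIES_GIB = {32, 64}   # DIMM types listed in Table 1-2

    def check_dimm_plan(plan):
        """plan: dict mapping slot name -> DIMM capacity in GiB (omit empty slots)."""
        for group_name, slots in (("CMG0", CMG0), ("CMG1", CMG1)):
            installed = [plan[s] for s in slots if s in plan]
            if group_name == "CMG0" and not installed:
                return "NG: CMG0 must always be populated"
            if installed:
                if len(installed) != 4:
                    return f"NG: {group_name} must be filled as a set of four DIMMs"
                if len(set(installed)) != 1 or installed[0] not in VALID_CAPACITIES_GIB:
                    return f"NG: {group_name} must use four DIMMs of the same supported capacity"
        return "OK"

    # Example: CMG0 fully populated with 32 GiB DIMMs, CMG1 left empty.
    print(check_dimm_plan({slot: 32 for slot in CMG0}))   # -> OK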

[THEORY01-02-30]

(3) Cache Flash Memory (CFM)


The Cache Flash Memory (CFM) is the memory used to back up cache memory data when a power failure occurs. Up to two CFMs can be installed per CTL.

In case of VSP 5100/5100H/5500/5500H


There are four types of CFMs: BM35, BM45, BM3E and BM4E. The CFM that can be installed
varies depending on the DIMM type as follows:
• 32 GiB DIMM capacity: BM35 or BM3E
• 64 GiB DIMM capacity: BM45 or BM4E
When DIMMs (32 GiB/64 GiB) are installed in CMG1, additional CFMs must be installed in
CFM-x11/x21.

In case of VSP 5200/5200H/5600/5600H


There are two types of CFMs: BM95, and BM9E. The CFM that can be installed varies depending
on the DIMM type as follows:
• 32/64 GiB DIMM capacity: BM95 or BM9E
When DIMMs (32 GiB) are installed in CMG1, additional CFMs need not be installed in
CFM-x11/x21.
When DIMMs (64 GiB) are installed in CMG1, additional CFMs must be installed in CFM-x11/
x21.

NOTE: • It is necessary to match the type (model name) of CFM-x10/x20 and CFM-x11/x21 (addition side).
        When adding Cache Memories, check the model name of CFM-x10/x20 and add the same model.
      • When replacing Cache Flash Memories, it is necessary to match the type (model name) defined in the configuration information.
        Example: When the configuration information defines BM35, replacing it with BM45, BM3E, or BM4E is impossible.
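The selection rules above can be summarized as a small lookup. The sketch below is illustrative only and restates the combinations listed in this subsection; the function name is hypothetical, and it is not a maintenance tool.

    # Illustrative lookup of allowed CFM types per model family and DIMM capacity,
    # restated from the rules above (not vendor code).
    ALLOWED_CFM = {
        # (model family, DIMM capacity in GiB) -> allowed CFM types
        ("VSP 5100/5100H/5500/5500H", 32): ("BM35", "BM3E"),
        ("VSP 5100/5100H/5500/5500H", 64): ("BM45", "BM4E"),
        ("VSP 5200/5200H/5600/5600H", 32): ("BM95", "BM9E"),
        ("VSP 5200/5200H/5600/5600H", 64): ("BM95", "BM9E"),
    }

    def cfm_addition_required(model_family, cmg1_dimm_gib):
        """Return True when DIMMs installed in CMG1 require additional CFMs in CFM-x11/x21."""
        if cmg1_dimm_gib is None:        # nothing installed in CMG1
            return False
        if model_family == "VSP 5200/5200H/5600/5600H" and cmg1_dimm_gib == 32:
            return False                 # 32 GiB DIMMs in CMG1 need no extra CFM on these models
        return True

    print(ALLOWED_CFM[("VSP 5100/5100H/5500/5500H", 64)])          # -> ('BM45', 'BM4E')
    print(cfm_addition_required("VSP 5200/5200H/5600/5600H", 32))  # -> False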

[THEORY01-02-40]

(4) Battery
The battery for data saving is installed in each Controller Chassis.
• When a power failure continues for more than 20 milliseconds, the Storage System uses power from the batteries to back up the Cache Memory data and the Storage System configuration data onto the Cache Flash Memory.
• Environmentally friendly nickel-hydride batteries are used for the Storage System.
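As a rough illustration of the behavior described above (not vendor firmware), the sketch below models the decision: an interruption of 20 milliseconds or less is ridden through, while a longer power failure triggers the battery-powered backup of cache memory data and configuration data to the Cache Flash Memory.

    # Illustrative model of the power-failure handling described above.
    POWER_FAILURE_THRESHOLD_MS = 20   # interruptions longer than this trigger battery backup

    def on_power_interruption(duration_ms):
        if duration_ms <= POWER_FAILURE_THRESHOLD_MS:
            return "ride through: continue normal operation"
        # Batteries power the controller while the Cache Memory data and the
        # Storage System configuration data are copied to the Cache Flash Memory.
        return "battery backup: copy cache data and configuration data to CFM"

    print(on_power_interruption(5))     # short dip -> ride through
    print(on_power_interruption(500))   # sustained outage -> back up to CFM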

[THEORY01-02-50]

(5) Disk Board (DKB)


The Disk Board (DKB) controls data transfer between the drives and the Cache Memory. DKBs that support encryption and DKBs that do not support encryption are available; the two types cannot be mixed in a storage system.

Table 1-3 Disk Board Specifications

Model Number                        DKC-F910I-BS12G   DKC-F910I-BS12GE
Interface                           SAS               SAS
Number of PCB                       1                 1
Necessary number of PCBs per
Controller Chassis (CBX)
  VSP 5500/5600, VSP 5500H/5600H    4                 4
  VSP 5100/5200, VSP 5100H/5200H    2                 2
Data Encryption                     Not Supported     Supported
Performance of SAS Port             12 Gbps           12 Gbps

Model Number                        DKC-F910I-BN8G    DKC-F910I-BN8GE
Interface                           NVMe (PCIe)       NVMe (PCIe)
Number of PCB                       1                 1
Necessary number of PCBs per
Controller Chassis (CBX)
  VSP 5500/5600, VSP 5500H/5600H    4                 4
  VSP 5100/5200, VSP 5100H/5200H    2                 2
Data Encryption                     Not Supported     Supported
Performance of NVMe Port            8 Gbps            8 Gbps

Table 1-4 Number of Installed DKBs and SAS Ports / NVMe Ports by CBX Configuration

VSP 5100, 5200/VSP 5100H, 5200H, 2 CBX (2CBX-2CTL configuration):
  Number of DKBs/DKBNs: 2 per CTL (4 per system)
  Number of SAS Ports: 8 ports/system
  Number of NVMe Ports: 8 ports/system
VSP 5500, 5600/VSP 5500H, 5600H, 2 CBX:
  Number of DKBs/DKBNs: 2 per CTL (8 per system)
  Number of SAS Ports: 16 ports/system
  Number of NVMe Ports: 16 ports/system
VSP 5500, 5600/VSP 5500H, 5600H, 4 CBX:
  Number of DKBs/DKBNs: 2 per CTL (8 or 16 per system)
  Number of SAS Ports: 16 or 32 ports/system
  Number of NVMe Ports: 16 or 32 ports/system
VSP 5500, 5600/VSP 5500H, 5600H, 6 CBX:
  Number of DKBs/DKBNs: 2 per CTL (8, 16, or 24 per system)
  Number of SAS Ports: 16, 32, or 48 ports/system
  Number of NVMe Ports: 16, 32, or 48 ports/system

The drive-less configuration that does not require DKBs is also supported.
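The counts in Table 1-4 follow a simple pattern: two DKBs per CTL and, as the table values imply, two back-end (SAS or NVMe) ports per DKB. The short sketch below reproduces the maximum values from that pattern; it is an illustration only, and the two-ports-per-DKB figure is inferred from the table rather than stated explicitly.

    # Illustrative reproduction of the Table 1-4 counts.
    # Assumed from the table: 2 DKBs per CTL and 2 back-end ports per DKB.
    DKB_PER_CTL = 2
    PORTS_PER_DKB = 2

    def backend_counts(num_ctls):
        dkbs = num_ctls * DKB_PER_CTL
        ports = dkbs * PORTS_PER_DKB
        return dkbs, ports

    # VSP 5100/5200 (2CBX-2CTL): 2 CTLs; VSP 5500/5600 with 2/4/6 CBX: up to 4/8/12 CTLs.
    for ctls in (2, 4, 8, 12):
        print(ctls, "CTLs ->", backend_counts(ctls))
    # -> (4, 8), (8, 16), (16, 32), (24, 48), matching the maximum values in Table 1-4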

[THEORY01-02-60]

(6) Channel Board (CHB)


The Channel Board (CHB) controls data transfer between the host and the Cache Memory. The following CHBs are supported. CHBs of the same type must be installed in sets of two, one CHB for each CBX of a CBX pair.

Table 1-5 CHB Types

Type                                        Parts Name                                        Option Name
32G 4Port FC                                CHB (32G Ready 4Port FC)                          DKC-F910I-4HF32R
10G 2Port iSCSI (Optic)                     CHB (10G 2Port iSCSI (Optic))                     DKC-F910I-2HS10S
16G 4Port Mainframe Fibre                   CHB (16G 4Port Mainframe Fibre SW)                DKC-F910I-4MS16
16G 4Port Mainframe Fibre                   CHB (16G 4Port Mainframe Fibre LW)                DKC-F910I-4ML16
32G 4Port Mainframe Fibre                   CHB (32G 4Port Mainframe Fibre)                   DKC-F910I-4MF32
32G 4Port Mainframe Fibre with Encryption   CHB (32G 4Port Mainframe Fibre with Encryption)   DKC-F910I-4MF32E

The number of installable CHBs is shown below.

Table 1-6 The Number of Installable CHBs

Minimum installable number (all configurations): 2 (1 per CTL)
Maximum installable number (4 per CTL):
  VSP 5100, 5200/VSP 5100H, 5200H, 2 CBX (2CBX-2CTL configuration): 8 per system
  VSP 5500, 5600/VSP 5500H, 5600H, 2 CBX: 16 per system
  VSP 5500, 5600/VSP 5500H, 5600H, 4 CBX: 32 per system
  VSP 5500, 5600/VSP 5500H, 5600H, 6 CBX: 48 per system

[THEORY01-02-70]

The CHB for Fibre Channel connection can support Shortwave or Longwave on a per-port basis, depending on the transceiver installed in each port. Note that each CHB port is fitted with a Shortwave transceiver as standard. To change a port to Longwave support, an SFP for Longwave must be added.

Table 1-7 Maximum cable length (Fibre Channel, Shortwave)


Data Transfer Rate   OM2 (50/125 µm          OM3 (50/125 µm laser           OM4 (50/125 µm laser
                     multi-mode fibre)       optimized multi-mode fibre)    optimized multi-mode fibre)
400 MB/s 150 m 380 m 400 m
800 MB/s 50 m 150 m 190 m
1600 MB/s 35 m 100 m 125 m
3200 MB/s 20 m 70 m 100 m

Table 1-8 Maximum cable length (iSCSI, Shortwave)


Data Transfer Rate   OM2 (50/125 µm          OM3 (50/125 µm laser           OM4 (50/125 µm laser
                     multi-mode fibre)       optimized multi-mode fibre)    optimized multi-mode fibre)
1000 MB/s 82 m 300 m 550 m

Table 1-9 Maximum cable length (FICON, Shortwave)


Data Transfer Rate   OM2 (50/125 µm          OM3 (50/125 µm laser           OM4 (50/125 µm laser
                     multi-mode fibre)       optimized multi-mode fibre)    optimized multi-mode fibre)
400 MB/s 150 m 380 m 400 m
800 MB/s 50 m 150 m 190 m
1600 MB/s 35 m 100 m 125 m
3200 MB/s 20 m 70 m 100 m
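For quick reference, the Fibre Channel shortwave limits in Table 1-7 can be read as a lookup keyed by data transfer rate and fibre grade. The dictionary below simply restates that table; it is not part of any product tool.

    # Maximum cable length in metres for Fibre Channel shortwave (restated from Table 1-7).
    MAX_FC_SW_CABLE_M = {
        # data transfer rate (MB/s): {fibre grade: maximum length in m}
        400:  {"OM2": 150, "OM3": 380, "OM4": 400},
        800:  {"OM2": 50,  "OM3": 150, "OM4": 190},
        1600: {"OM2": 35,  "OM3": 100, "OM4": 125},
        3200: {"OM2": 20,  "OM3": 70,  "OM4": 100},
    }

    def max_fc_cable_length_m(rate_mb_s, fibre_grade):
        return MAX_FC_SW_CABLE_M[rate_mb_s][fibre_grade]

    print(max_fc_cable_length_m(3200, "OM4"))   # -> 100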

(7) Interconnect Channel Board (HIE)


The Interconnect Channel Board (HIE) is a channel board that connects the CBX and the HSN Box. As with the CHBs and DKBs, the HIEs are installed in the slots on the rear side of the CBX.

[THEORY01-02-80]

(8) BKMF (Backup FAN module, FAN module, Accelerator FAN module)


The BKMF is a module equipped with a FAN that is installed in a slot on the front side of
the DKC. There are three types: Backup FAN module (BKMF), FAN module (FANM), and
Accelerator FAN module (ACLF).
The ACLF has the accelerator LSI that compresses data in DP-VOLs with Deduplication and
Compression enabled. The compression method of the ACLF achieves faster data compression
at a higher compression rate compared with data compression by micro-programs. ACLFs can be
installed in VSP 5200, 5200H, 5600, and 5600H only.

Table 1-10 BKMF Types


Type Module Name Option Name
BKMF Backup FAN module BKMF (included in Controller Chassis)
FANM FAN module DKC-F910I-FANM
ACLF Accelerator FAN module DKC-F910I-ACLF

Table 1-11 BKMF Installation Location


Slot BKMF-xy0 BKMF-xy1 BKMF-xy2 BKMF-xy3
BKMF type to be installed FANM/ACLF BKMF FANM/ACLF BKMF
x: DKC No. (0, 1, 2, 3, 4, 5)
y: CTL No. (1, 2)

(Figure: Front view of the DKC showing BKMF slots BKMF-x10 to BKMF-x13 on Controller Board x1 (CTLx1) and BKMF-x20 to BKMF-x23 on Controller Board x2 (CTLx2). FANM modules are installed in slots BKMF-xy0 and BKMF-xy2, and BKMF modules in slots BKMF-xy1 and BKMF-xy3.)

*1: DKC-x: DKC No. (0, 1, 2, ..., 5)
*2: CTLx: CTL No. (1, 2)

NOTE: The above illustrations are for VSP 5500, VSP 5500H, VSP 5600, and VSP 5600H.
For VSP 5100, VSP 5100H, VSP 5200, and VSP 5200H, only CTL01 and CTL12 are
installed.

[THEORY01-02-81]

(Figure: Front view of the DKC showing ACLFs installed in slots BKMF-x10/x12 on Controller Board x1 (CTLx1) and BKMF-x20/x22 on Controller Board x2 (CTLx2). The battery is not installed in the FANM or ACLF; the battery is installed in the BKMF. An ACLF can be identified by the label on the module.)

*1: DKC-x: DKC No. (0, 1, 2, ..., 5)
*2: CTLx: CTL No. (1, 2)
NOTE: The above illustrations are for VSP 5600 and VSP 5600H. For VSP 5200 and VSP 5200H, only CTL01 and CTL12 are installed.

[THEORY01-02-82]

2. Drive Box
(1) Drive Box (DBS2)
The Drive Box (DBS2) is a chassis to install the 2.5-inch Disk Drives and the 2.5-inch Flash
Drives, and consists of two ENCs and two Power Supplies with a built-in cooling fan.
There are two types of DBS2. One contains 80 PLUS Gold level certified power supplies and the
other contains 80 PLUS Platinum level certified power supplies. There is no difference in usage
and specifications (dimensions and weight) between them. Only the energy efficiency of their
power supplies differs.

Figure 1-5 Drive Box (DBS2)


(Figure: The front view shows the SFF HDDs; the rear view shows the ENCs and the Power Supplies with built-in cooling fans.)

24 SFF HDDs can be installed. The ENCs and Power Supplies take a duplex configuration.

(2) Drive Box (DBL)


The Drive Box (DBL) is a chassis to install the 3.5-inch Disk Drives and consists of two ENCs and
two Power Supplies with a built-in cooling fan.
There are two types of DBL. One contains 80 PLUS Gold level certified power supplies and the
other contains 80 PLUS Platinum level certified power supplies. There is no difference in usage
and specifications (dimensions and weight) between them. Only the energy efficiency of their
power supplies differs.

Figure 1-6 Drive Box (DBL)


(Figure: The front view shows the LFF HDDs; the rear view shows the ENCs and the Power Supplies with built-in cooling fans.)

12 LFF HDDs can be installed. The ENCs and Power Supplies take a duplex configuration.

[THEORY01-02-90]

(3) Drive Box (DBF3)


The Drive Box (DBF3) is a chassis to install the FMDs (Flash Module Drives) and consists of two
ENCs and two Power Supplies with a built-in cooling fan.

Figure 1-7 Drive Box (DBF3)

(Figure: The front view shows the FMDs; the rear view shows the ENCs and the Power Supplies with built-in cooling fans.)

12 FMDs can be installed. The ENCs and Power Supplies take a duplex configuration.

(4) Drive Box (DBN)


The Drive Box (DBN) is a chassis to install the 2.5-inch NVMe-interface Flash Drives and SCMs,
and consists of two ENCs and two Power Supplies with a built-in cooling fan.
There are two types of DBN. One contains 80 PLUS Gold level certified power supplies and the
other contains 80 PLUS Platinum level certified power supplies. There is no difference in usage
and specifications (dimensions and weight) between them. Only the energy efficiency of their
power supplies differs.

Figure 1-8 Drive Box (DBN)

(Figure: The front view shows the SFF drives; the rear view shows the ENCs and the Power Supplies with built-in cooling fans.)

24 SFF drives can be installed. The ENCs and Power Supplies take a duplex configuration.

[THEORY01-02-91]

3. HSN Box (HSNBX)


The HSN Box (HSNBX) is a chassis composed of an SVP, an SSVP, an Operation Panel (HSNPANEL),
two Interconnect Switches (ISWs), and two Power Supplies (ISWPSs), and connects and controls each
Controller Chassis (CBX).

Figure 1-9 HSN Box (HSNBX)

(General view of HSNBX-n, showing ISWn1 and ISWn2, ISWPSn1 and ISWPSn2, HSNPANELn, the SVP, and SSVPn.)

[THEORY01-02-100]

(1) Service Processor (SVP)


The Service Processor (SVP) is mainly used for setting and modification of the storage system
configuration, acquisition of the device availability statistical information, and maintenance.
The redundant SVP configuration can be built by installing two SVPs. The primary SVP is the Master SVP, and the secondary SVP is the Standby SVP. When the primary SVP fails, the secondary SVP is automatically switched into operation (switching takes approximately 3 minutes) and operates as the Master SVP. In the event of an SVP failure, the redundant SVP configuration prevents an outage of the failure monitoring function and other functions of the storage system.
Because the dedicated SVP for DKC910I has neither a display nor a keyboard, a Maintenance PC that meets the required specifications must be prepared and connected to the SVP in order to perform installation or maintenance of the storage system. A power supply for the Maintenance PC must also be prepared near the SVP.
NOTE: The SVP cannot directly access volumes in the storage system to read and write stored
user data and cannot monitor user data written to volumes in the storage system by the
host or input and output between the storage system and the host. However, the SVP
can order controllers to erase user data by formatting volumes through SVP programs
and Storage Navigator.

NOTE: The host name of the SVP is automatically set based on the IP address and other
information. Do not change to another host name.

Table 1-12 SVP Specifications


Item Specifications
OS Windows® 10 IoT Enterprise LTSC 2019 64bit
Windows® 10 IoT Enterprise LTSC 2021 64bit
LAN 2 GbE LAN
USB USB 3.0 4 ports
Browser Internet Explorer 11 (Windows 10) (*1) (*2) (*3)
Microsoft Edge (*3)

*1: For the SVP micro-program version 90-04-01/00 or later, reports created by Storage
Navigator might not be able to be displayed on the Maintenance PC or client PC depending
on the Web browser version. Use the latest version of the Web browser. For the client PC, ask
the customer to do so. (Use the Maintenance PC or client PC on which the OS that supports
the latest version of the Web browser is installed.)
*2: Installed when the OS is Windows® 10 IoT Enterprise LTSC 2019 64bit.
*3: Installed when the OS is Windows® 10 IoT Enterprise LTSC 2021 64bit.

[THEORY01-02-110]

Table 1-13 Specification of Maintenance PC

Use an OS in a support period. After support has expired, normal operation is not guaranteed.

Item                              Necessary Specification                        Recommended Specification
OS                                Windows 10 / Windows 11
Disk Drive                        Available hard disk space: 500 MB or more
Display resolution                Windows 10: 1024 × 768 (XGA) or higher         1280 × 1024 (SXGA) or higher
                                  Windows 11: 1280 × 1024 (SXGA) or higher
DVD Drive                         Required
LAN                               Ethernet 1000Base-T / 10Base-T / 100Base-T
USB                               Required
Tool to view Maintenance Manual   Acrobat Reader, Web browser

4. Drives
The drives supported by DKC910I are shown below.

Table 1-14 Supported Drives


Group                I/F    Size (inch)   Maximum Transfer   Revolution Speed (min-1)   Capacity
                                          Rate (Gbps)        or Memory Type
Disk Drive (HDD) SAS 2.5 (SFF) 12 10,000 2.4 TB
SAS 3.5 (LFF) 12 7,200 10 TB, 14 TB, 18 TB
Flash Drive SAS 2.5 (SFF) 12 MLC/TLC 960 GB, 1.9 TB, 3.8 TB,
(SAS SSD) 7.6 TB, 15 TB, 30 TB
Flash Module Drive SAS 12 MLC/TLC 7 TB, 14 TB
(FMD)
Flash Drive NVMe 2.5 (SFF) 8 TLC 1.9 TB, 3.8 TB, 7.6 TB,
(NVMe SSD) 15 TB, 30 TB
SCM NVMe 2.5 (SFF) 8 3D Xpoint/SLC 375 GB, 750 GB, 800 GB,
(NVMe SCM) (*1) 1.5 TB
*1: The drive type of SCM is displayed as SSD on the maintenance/management software such as
Web Console and Maintenance Utility.
On the maintenance/management software, SCM is treated as Flash Drive. In this manual, SCM is
referred to as Flash Drive or SSD, unless otherwise stated, such as a description of the function
or a procedure for the SIM-RC.

[THEORY01-02-120]

Table 1-15 LFF Disk Drive Specifications


Item DKC-F810I-10RH9M DKC-F810I-14RH9M DKC-F810I-18RH9M
Disk Drive Seagate DKS2K-H10RSS/ DKS2K-H14RSS/ DKS2O-H18RSS
Model Name DKS2N-H10RSS/ DKS2N-H14RSS/
DKS2O-H10RSS DKS2O-H14RSS
HGST DKR2H-H10RSS
User Capacity 9790.36 GB 13706.50 GB 17621.72 GB
Revolution speed (min-1)   7,200   7,200   7,200

Table 1-16 SFF Disk Drive Specifications


Item DKC-F810I-2R4JGM
Disk Drive Seagate DKS5K-J2R4SS
Model Name HGST
User Capacity 2305.58 GB
Revolution speed (min-1) 10,000

[THEORY01-02-130]

Table 1-17 SFF Flash Drive Specifications


Item DKC-F810I-960MGM DKC-F810I-1T9MGM DKC-F810I-3R8MGM
Flash Drive KIOXIA SLB5F-M960SD/ SLB5I-M1T9SS/ SLB5F-M3R8SS/
Model Name SLB5G-M960SS/ SLB5J-M1T9SS/ SLB5G-M3R8SS/
SLB5J-M960SD/ SLB5K-M1T9SS SLB5J-M3R8SS/
SLB5K-M960SD SLB5K-M3R8SS
HGST SLR5E-M3R8SS/
SLR5F-M3R8SS
Samsung SLM5B-M1T9SS/ SLM5A-M3R8SS/
SLM5C-M1T9SS SLM5B-M3R8SS/
SLM5C-M3R8SS
User Capacity 945.23 GB 1890.46 GB 3780.92 GB
Form Factor 2.5 inch 2.5 inch 2.5 inch

Item DKC-F810I-7R6MGM DKC-F810I-15RMGM DKC-F810I-30RMGM


Flash Drive KIOXIA SLB5G-M7R6SS/ SLB5H-M15RSS/ SLB5J-M30RSS/
Model Name SLB5J-M7R6SS/ SLB5J-M15RSS/ SLB5K-M30RSS
SLB5K-M7R6SS SLB5K-M15RSS
HGST SLR5E-M7R6SS/ SLR5G-M15RSS
SLR5F-M7R6SS
Samsung SLM5A-M7R6SS/ SLM5B-M15RSS/ SLM5A-M30RSS/
SLM5B-M7R6SS/ SLM5C-M15RSS SLM5B-M30RSS/
SLM5C-M7R6SS SLM5C-M30RSS
User Capacity 7561.85 GB 15048.49 GB 30095.90 GB
Form Factor 2.5 inch 2.5 inch 2.5 inch

Table 1-18 Flash Module Drive Specifications


Item DKC-F810I-7R0FP DKC-F810I-14RFP
Flash Module Drive Model NFHAF-Q6R4SS/ NFHAF-Q13RSS/
Name NFHAH-Q6R4SS/ NFHAH-Q13RSS/
NFHAJ-Q6R4SS/ NFHAJ-Q13RSS/
NFHAK-Q6R4SS/ NFHAK-Q13RSS/
NFHAL-Q6R4SS/ NFHAM-Q13RSS/
NFHAM-Q6R4SS/ NFHAN-Q13RSS
NFHAN-Q6R4SS
User Capacity 7036.87 GB 14073.74 GB
Form Factor

[THEORY01-02-140]

Table 1-19 NVMe SFF Flash Drive Specifications


Item DKC-F910I-1R9RVM DKC-F910I-3R8RVM DKC-F910I-7R6RVM
Flash Drive HGST SNR5A-R1R9NC SNR5A-R3R8NC SNR5A-R7R6NC
Model Name KIOXIA SNB5A-R1R9NC SNB5A-R3R8NC SNB5A-R7R6NC
SNB5B-R1R9NC SNB5B-R3R8NC SNB5B-R7R6NC
SNB5C-R1R9NC SNB5C-R3R8NC SNB5C-R7R6NC
Intel
Samsung SNM5A-R1R9NC SNM5A-R3R8NC SNM5A-R7R6NC
SNM5B-R1R9NC SNM5B-R3R8NC SNM5B-R7R6NC
SNM5C-R1R9NC SNM5C-R3R8NC SNM5C-R7R6NC
Seagate SNS5B-R1R9NC SNS5B-R3R8NC SNS5B-R7R6NC
User Capacity 1890.46 GB 3780.92 GB 7561.85 GB
Form Factor 2.5 inch 2.5 inch 2.5 inch

Item DKC-F910I-15RRVM DKC-F910I-30RRVM DKC-F2000-1R9RWM


Flash Drive HGST
Model Name KIOXIA SNB5A-R15RNC SNB5B-R30RNC SNB5C-R1R9NC
SNB5B-R15RNC SNB5C-R30RNC
SNB5C-R15RNC
Intel
Samsung SNM5A-R15RNC SNM5B-R30RNC SNM5C-R1R9NC
SNM5B-R15RNC SNM5C-R30RNC
SNM5C-R15RNC
Seagate SNS5B-R15RNC
User Capacity 15048.49 GB 30095.90 GB 1890.46 GB
Form Factor 2.5 inch 2.5 inch 2.5 inch

Item DKC-F2000-3R8RWM DKC-F2000-7R6RWM DKC-F2000-15RRWM


Flash Drive KIOXIA SNB5C-R3R8NC SNB5C-R7R6NC SNB5C-R15RNC
Model Name Samsung SNM5C-R3R8NC SNM5C-R7R6NC SNM5C-R15RNC
User Capacity 3780.92 GB 7561.85 GB 15048.49 GB
Form Factor 2.5 inch 2.5 inch 2.5 inch

Item DKC-F2000-30RRWM
Flash Drive KIOXIA SNB5C-R30RNC
Model Name Samsung SNM5C-R30RNC
User Capacity 30095.90 GB
Form Factor 2.5 inch

[THEORY01-02-150]

Table 1-20 NVMe SCM Specifications


Item DKC-F910I-375YVM DKC-F910I-750YVM DKC-F910I-800YVM
SCM HGST
Model Name KIOXIA SPB5A-Y375ND SPB5A-Y800NC
Intel SPN5A-Y375NC SPN5A-Y750NC
User Capacity 375.08 GB 750.15 GB 800.16 GB
Form Factor 2.5 inch 2.5 inch 2.5 inch

Item DKC-F910I-1R5YVM
SCM HGST
Model Name KIOXIA
Intel SPN5A-Y1R5NC
User Capacity 1500.30 GB
Form Factor 2.5 inch

[THEORY01-02-160]

1.3 Hardware Architecture


• Controller Chassis (CBX) connection patterns
The basic configuration of the storage system is two Controller Chassis (CBXs), which is referred to as
a CBX pair, connected to two HSN Boxes (HSNBXs). In addition, the configuration consisting of two
CBX pairs (four CBXs) connected to two HSNBXs and the configuration consisting of three CBX pairs
(six CBXs) connected to two HSNBXs are available. VSP 5500, 5600/VSP 5500H, 5600H supports up to
3 CBX pairs configuration, and each CBX contains two Controller Boards. VSP 5100, 5200/VSP 5100H,
5200H supports only 1 CBX pair configuration, and each CBX contains one Controller Board.
• Interconnect
The CBXs and the HSNBXs are connected through the Interconnect Channel Boards (HIEs) installed in the
Controller Boards (CTLs) and the Interconnect Switches (ISWs) installed in the HSNBXs by using cables.
All CTLs are connected to each other through ISWs.
Up to 12 CTLs can be connected to an ISW. Each fixed port on the ISW is used to connect to each port
location on HIEs. The paths between the CTLs and the ISW are referred to as X-paths, and the connection
cables are referred to as X-path cables.
• Backend connection
The Drive Box connections are configured per CBX pair. A set of Drive Boxes of a DKU cannot be
separately connected to different CBX pairs. The DKU that is directly connected to a CBX pair must be the
SBX composed of four DBS2s, the FBX composed of four DBF3s, or the NBX composed of four DBNs.
Each CTL installed in a CBX pair connects to four DBS2s in the SBX, four DBF3s in the FBX or four
DBNs in the NBX.
DKU other than NBX can be connected to another DKU. A set of Drive Boxes of a DKU cannot be
separately connected to different DKUs connected to different CBX pairs. Therefore, one CTL can access
all drives connected to the same CBX pair. The group of Drive Boxes that a CBX pair can access is referred
to as Shared DB Group. The logical configuration of the storage system is shown below.

Figure 1-10 Hardware Logical Configuration Diagram


(The diagram shows the front-end paths from the hosts to the CHBs, the X-paths connecting all CTLs to one another through the ISWs, and the back-end paths from the DKBs through the ENCs to the drives in the DKUs (SBX/UBX/FBX). The group of Drive Boxes that a CBX pair can access forms a Shared DB Group.)

[THEORY01-03-10]

[Notes on Drive Box connection]


There are four types of Drive Boxes: DBS2, DBL, DBF3, and DBN. All Drive Boxes that compose a DKU are installed at a time (four DBS2s/DBF3s/DBNs or eight DBLs).
Only SBX (DBS2 × 4), FBX (DBF3 × 4), or NBX (DBN × 4) can be directly connected to DKCs.
UBX (DBL × 8) must be connected to SBX, FBX, or UBX. All DKBs in DKCs are connected to SBX, FBX, or NBX.
UBX can be used by connecting it to the SBX/FBX connected to DKCs to secure the path availability.
Each CTL in DKCs can access all drives.
• DBS2 (for 2.5-inch drives)
Up to 24 2.5-inch drives can be installed. One DB number is assigned to a set of 12 drives. Two
consecutive DB numbers are assigned to a DBS2.
• DBL (for 3.5-inch drives)
Up to 12 3.5-inch drives can be installed. One DB number is assigned to a DBL.
• DBF3 (for FMD)
Up to 12 FMDs (Flash Module Drives) can be installed. One DB number is assigned to a set of 6
drives. Two consecutive DB numbers are assigned to a DBF3.
• DBN (for 2.5-inch drives) (NVMe interface)
Up to 24 2.5-inch NVMe-interface drives can be installed. One DB number is assigned to a set of 12
drives. Two consecutive DB numbers are assigned to a DBN.
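As an illustration of the DB-number rules above (not a product algorithm), the short sketch below records how many DB numbers each Drive Box type consumes and how many drive slots each DB number covers.

    # Illustrative restatement of the DB-number assignment rules above.
    # Per Drive Box type: (DB numbers assigned, drive slots covered by each DB number)
    DB_NUMBERING = {
        "DBS2": (2, 12),   # 24 x 2.5-inch drives, one DB number per 12 drives
        "DBL":  (1, 12),   # 12 x 3.5-inch drives, one DB number per box
        "DBF3": (2, 6),    # 12 FMDs, one DB number per 6 drives
        "DBN":  (2, 12),   # 24 x 2.5-inch NVMe drives, one DB number per 12 drives
    }

    def db_numbers_consumed(drive_boxes):
        """drive_boxes: list of Drive Box type names; returns the total DB numbers used."""
        return sum(DB_NUMBERING[box][0] for box in drive_boxes)

    # Example: an SBX (four DBS2s) consumes 8 consecutive DB numbers.
    print(db_numbers_consumed(["DBS2"] * 4))   # -> 8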

[THEORY01-03-20]

1.4 Network Topology


The DKC910I storage system is designed based on the star network topology where the SVP and each
Controller Chassis (DKC) are connected through LAN so that maintenance operation can be performed from
the SVP. The center of the star network is the SSVP installed in the HSN Box, which has the hub function
and SVP monitoring function.
A management PC of a customer is connected to the PUBLIC LAN port on the SVP. The SVP provides the
GUI function for Web browsers, and a customer can set the storage system configuration information and
check the storage system status by accessing the SVP from the management PC through a Web browser.
The Maintenance PC is connected to the LAN port on the HSN Box. A maintenance person remotely logs
in to the SVP from the Maintenance PC to perform maintenance work using maintenance management tools
(Web Console, SVP window, and Maintenance Utility).

GUM:
GUM is a communication port that can be physically accessed from a management LAN port. When the
storage system is not turned on but electricity is supplied to Controller Chassis, the GUM can be accessed
through GUI or CLI. When the storage system is turned on, the GUM operates by sharing information with
micro-programs.

The network diagram is shown below.

Figure 1-11 Network Diagram (VSP 5500, 5600/VSP 5500H, 5600H)

(The diagram shows the management PC and the Maintenance PC (remote login) connecting through the SVPs and SSVPs (HUBs) in HSNBX0 and HSNBX1, and the maintenance LAN connecting the SSVPs to the GUM in each CTL. Maintenance/management tools: Storage Navigator, Web Console, Maintenance Utility, and the SVP window. The SVP and SSVP in HSNBX1 are optional components.)

[THEORY01-04-10]

For VSP 5100, 5200/VSP 5100H, 5200H consisting of only CTL-1 and CTL-3, unlike the above figure,
GUM in CTL-3 needs to be connected to SSVP in HSNBX0. Furthermore, the maintenance LAN ports (not
illustrated in the above figure) on CTL-0 and CTL-3 need to be directly connected with each other by using
a LAN cable. When the optional SSVP is installed in HSNBX1, GUM in CTL-3 needs to be connected to the
optional SSVP in HSNBX1.

[THEORY01-04-11]

1.4.1 Management Software


Hitachi Device Manager - Storage Navigator (hereinafter referred to as Storage Navigator) and Web Console, which contains menus dedicated to maintenance personnel in addition to the Storage Navigator functions, are the GUIs for managing and operating the storage system. A system administrator accesses Storage Navigator through a Web browser to operate the storage system using the GUI. A maintenance person accesses Web Console to operate the storage system.

The following is a summary of the management software.

• Storage Navigator (Web Console)


Storage management software used for storage system hardware management (setting configuration
information, defining logical devices, and displaying the statuses) and performance management (tuning).
A system administrator accesses Storage Navigator from a PC connected to LAN through a Web browser to
perform management operations for the storage system. A maintenance person remotely logs in to the SVP
and performs Web Console operations equivalent to Storage Navigator operations.

[THEORY01-04-20]

1.4.2 Maintenance Software


Web Console, SVP window, and Maintenance Utility are used for storage system maintenance, micro-
program exchange, and so on.

The following are summaries of each software.

• Web Console
Storage management software used for storage system hardware management (setting configuration
information, defining logical devices, and displaying the statuses) and performance management (tuning).
Web Console can be used also for maintenance work. A remote login to the SVP is required for accessing
Web Console.

• SVP window
Used for status check, collection of dumps and logs, network settings, micro-program exchange, and so on.
The SVP window is started from Web Console. Maintenance Utility is started from SVP window.

• Maintenance Utility
Web application for storage system failure monitoring, replacement work, and so on. Maintenance Utility
is embedded in the GUM (Gateway for Unified Management) controller installed in the Controller Chassis.
Installation is not necessary. Maintenance Utility is started from the SVP window.

[THEORY01-04-30]

1.5 Storage System Function Overview


1.5.1 Basic Functions
The storage system redundant configuration shown below allows the storage system to continue I/O even
when a failure occurs.
NOTE: SAS-interface DKUs are illustrated in the following diagrams. However, the same
operations are applied also when the NVMe-interface DKU is installed.
1. Cache redundancy and destage
Cache memory in CTL has a space to temporarily store the data sent to and from the front end.
(1) When a write request is received from a server, the data is temporarily stored in cache memory.
(2) The data is duplicated in cache memory in another CTL.
(3) When the data duplication is completed, the write completion is reported to the server.
(4) After the write completion report, the data in cache memory is stored in drives.
Thus, the storage system ensures quick response to servers and enhanced fault tolerance.

Figure 1-12 Write Data Flow

(The diagram illustrates steps (1) to (4): write data received through a CHB is stored in the cache of the receiving CTL, duplicated in the cache of another CTL through the ISW, the write completion is reported to the server, and the data is then destaged through a DKB to the drives in the Shared DB Group. The legend distinguishes the front-end path, X-path, and back-end path.)

NOTE: CTLs installed in VSP 5100, 5200/VSP 5100H, 5200H are only CTL-01 and CTL-12.
Therefore, unlike the above figure, the cache redundancy for VSP 5100, 5200/VSP
5100H, 5200H is configured in CTL-01 and CTL-12.
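A minimal sketch of the write sequence (1) to (4) is shown below. The class and method names are hypothetical and the sketch ignores locking, parity, and error handling; the point is the ordering: the completion report to the server follows the duplication into a second CTL's cache, and destaging to the drives happens afterwards.

    # Minimal sketch of the write flow (1)-(4) described above; names are hypothetical.
    class WritePath:
        def __init__(self, local_cache, partner_cache, backend):
            self.local_cache = local_cache      # cache in the CTL that received the I/O
            self.partner_cache = partner_cache  # cache in another CTL (redundant copy)
            self.backend = backend              # DKB / drive access

        def handle_write(self, lba, data, reply_to_host):
            self.local_cache[lba] = data        # (1) store the data in cache memory
            self.partner_cache[lba] = data      # (2) duplicate it in another CTL's cache
            reply_to_host("write complete")     # (3) report completion to the server
            self.destage(lba)                   # (4) store the cached data in the drives

        def destage(self, lba):
            self.backend.write(lba, self.local_cache[lba])

    class FakeBackend:
        def write(self, lba, data):
            print("destaged LBA", hex(lba))

    wp = WritePath({}, {}, FakeBackend())
    wp.handle_write(0x100, b"payload", print)   # prints "write complete", then the destage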
[THEORY01-05-10]

2. Two separate cache areas


A CBX pair has two separate cache memory areas so that write data can be duplicated.
If a failure occurs in cache memory or a CTL, data is immediately copied to cache memory on another
CTL for duplication.

3. Alternate path (front end)


Alternate paths to servers can be set.
For VSP 5500, 5600/VSP 5500H, 5600H, an alternate path can be set for the other CTL of a CTL pair in
a CBX pair. Paired CTLs are CTLs installed in the same locations in CBXs in a pair. In the figure below,
CTL pairs are the pair of CTL-01 and CTL-11 and the pair of CTL-02 and CTL-12. For VSP 5100, 5200/
VSP 5100H, 5200H, CTL-01 and CTL-12 are paired because installed CTLs are only CTL-01 and CTL-
12.
Setting an alternate path allows the storage system to continue I/O from and to a server even when a front
end path failure occurs.

Figure 1-13 Front End Alternate Path

(The diagram shows that when a front-end path to one CTL fails, the alternate path through the paired CTL is used, so that the write data still reaches the cache and the drives.)
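As an illustration of the pairing described above, the sketch below returns the partner CTL for the examples given (CTL-01/CTL-11, CTL-02/CTL-12, and CTL-01/CTL-12 for VSP 5100, 5200/VSP 5100H, 5200H). Extending it to 4-CBX and 6-CBX systems assumes that CBX pairs group consecutive DKC numbers (0-1, 2-3, 4-5), which is an assumption for illustration rather than a statement from this manual.

    # Illustrative pairing rule for front-end alternate paths (hypothetical helper).
    def paired_ctl(ctl_name, model_family="VSP 5500/5600"):
        if model_family == "VSP 5100/5200":
            return {"CTL-01": "CTL-12", "CTL-12": "CTL-01"}[ctl_name]
        dkc, position = int(ctl_name[4]), ctl_name[5]   # "CTL-01" -> DKC 0, CTL position 1
        partner_dkc = dkc ^ 1    # the other CBX of the CBX pair (assumed pairs: 0-1, 2-3, 4-5)
        return "CTL-" + str(partner_dkc) + position

    print(paired_ctl("CTL-01"))                    # -> CTL-11
    print(paired_ctl("CTL-02"))                    # -> CTL-12
    print(paired_ctl("CTL-01", "VSP 5100/5200"))   # -> CTL-12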

[THEORY01-05-20]

4. Alternate path (back end)


In the back end connection, alternate paths are set so that each CTL in a CBX pair can access the same
drives.
Even if a failure occurs in a backend path when cache data is stored in drives, an alternate path substitutes
the backend path to continue access to drives.

Figure 1-14 Back End Alternate Path

(The diagram shows that when a back-end path fails while cache data is being stored in the drives, the alternate back-end path through the other DKB of the CBX pair is used to continue access to the drives.)

[THEORY01-05-30]

The ENC for DBS2 (SBX), DBF3 (FBX), and DBN (NBX) has two EXP routes, and the EXP routes in the ENC are connected to each other. Two DKBs in a CBX pair are connected to one ENC by four
logical paths. Even if one DKB is blocked, I/O can be continued by the other DKB because one ENC is
connected to two DKBs.
The following is the logical path diagram when SBX or FBX is connected to SBX or FBX.
For NBX, a CBX pair can be connected to one Disk Unit (DKU) only, but cannot be connected to other
DKUs. Four logical paths are created by connecting the EXP routes as shown in the figure for SBX and
FBX.

Figure 1-15 Logical Paths When SBX or FBX Is Connected to SBX or FBX

(The diagram shows the four logical paths from the two DKBs of a CBX pair through the ENCs and EXP routes of the DBS2 or DBF3 Drive Boxes in the SBX or FBX.)

[THEORY01-05-31]

UBX needs to be connected to SBX or FBX. Each ENC of DBL (UBX) is equipped with one EXP route,
so two DBLs need to be connected to one DBS2 or DBF3.
The following is the logical path diagram when UBX is connected to SBX or FBX.

Figure 1-16 Logical Paths When UBX is connected to SBX or FBX


(The diagram shows the logical paths from the DKBs through the ENCs and EXP routes of the DBS2 or DBF3 in the SBX or FBX down to the ENCs of the two DBLs in the UBX.)
[THEORY01-05-32]

5. I/O among multiple nodes


All CTLs are connected to each other through the ISWs in the HSN Boxes.
Even when the drives to be accessed are not in the Shared DB Group connected to the DKB in the CTL
that receives I/O requests from a server, the drives in the other Shared DB Groups can be accessed
through ISW.
When a failure occurs in the CTL that receives I/O requests, the storage system can continue I/O to
or from a server by using another CTL. When a failure occurs in the CTL that performs drive I/O, the
storage system can continue drive I/O by using another CTL that shares the same Shared DB Group.

Figure 1-17 Drive Access through ISW

(The diagram shows write data received by a CTL in one CBX pair being transferred through the ISW to a CTL in another CBX pair, which performs the drive I/O to its own Shared DB Group.)

[THEORY01-05-40]

1.5.2 Redundant Design


• Power supply redundancy
Controller Box (CBX), Drive Box, and HSN Box are equipped with two power supplies to have power
system redundancy. Even if one power supply fails, the other power supply supplies power to all
components in the chassis to continue to operate. Two power supplies must be connected to different
electricity supply equipment.

• Drive redundancy
RAID configurations composed of multiple drives prevent data from being lost in case of a drive failure.
RAID configurations can be kept even when a drive failure occurs, by installing spare drives in which data
is restored.

• SVP redundancy
Installing the SVP in each of two HSN Boxes provides redundant access to the maintenance and
management tools. One SVP operates as Master SVP, and the other SVP operates as Standby SVP. When
the Master SVP fails, the Standby SVP automatically substitutes the Master SVP.
A redundant SVP to be installed in HSNBX-1 is not standard equipment but an optional component.

• X-path redundancy
Connecting two HSNBXs (four ISWs) and CBXs (HIEs) with X-path cables in mesh topology makes the
communication among Controller Boards have redundancy.
Even if a failure occurs in an X-path cable or HSNBX (ISW), the storage system can continue to operate.

• Data protection in case of power failure


Cache Flash Memories and Batteries are installed in each Controller Box (CBX). When power is not
supplied due to a failure in a power supply or power outage, the Cache Flash Memories back up the cache
memory data and the Batteries supply power to enable the backup processing. If a power outage lasts 20
milliseconds or more, the Batteries supply power to the Controller Board and the cache memory data and
storage system configuration information are copied to Cache Flash Memories.

Figure 1-18 Data Backup Process


(Timeline: while the Storage System is operating, a power failure occurs; after 20 ms the power failure is detected and the system enters Data Backup Mode (*1); the Cache Memory data and the Storage System configuration data are backed up onto the Cache Flash Memory, which takes a maximum of 13 minutes.)

*1: The data backup processing is continued even when the power outage is restored while the data is being backed up.

See below for more specifications related to power outage.


[THEORY01-05-50]

1. Battery lifetime
The battery lifetime is affected by the battery temperature. The battery temperature changes depending
on the intake temperature and installation altitude of the storage system, the configuration and operation
of the Controller Chassis, charge-discharge count, and individual differences of batteries. Therefore, the
battery lifetime varies in the range between three and five years.
The battery lifetime (estimated value) in the standard environment is shown below.

Storage System Intake Temperature   Lifetime (Estimated Value)
Up to 24 degrees Celsius            5 years
Up to 30 degrees Celsius            5 years
Up to 34 degrees Celsius            4 years
Up to 40 degrees Celsius            3 years
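For convenience, the table can be read as a step function of the intake temperature. The sketch below restates it; it is illustrative only and ignores the other factors (altitude, chassis configuration, charge-discharge count, individual differences) mentioned above.

    # Illustrative lookup of the estimated battery lifetime by storage system intake
    # temperature, restated from the table above.
    def estimated_battery_lifetime_years(intake_temp_c):
        if intake_temp_c <= 30:
            return 5
        if intake_temp_c <= 34:
            return 4
        if intake_temp_c <= 40:
            return 3
        raise ValueError("intake temperature is above the range covered by the table")

    print(estimated_battery_lifetime_years(28))   # -> 5
    print(estimated_battery_lifetime_years(38))   # -> 3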

[THEORY01-05-60]

2. Relation between Battery Charge Level and System Startup Action

No. 1 (Power status: PS ON), <Case 1>: There is at least one battery charged less than 30% in each of the CTL group A and the CTL group B (*3).
    System startup action: The system does not start up until all batteries in the CTLs in group A or all batteries in the CTLs in group B are charged 30% or more. (It takes a maximum of 90 minutes (*2).) (*1)
No. 2, <Case 2>: There is at least one battery charged less than 50% in each of the CTL group A and the CTL group B (*3) (in cases other than <Case 1>).
    System startup action: The SIM that shows the lack of battery charge is reported and the system starts up. I/O is processed by write-through until all batteries in the CTLs in group A or all batteries in the CTLs in group B are charged 50% or more. (It takes a maximum of 60 minutes (*2).)
No. 3, <Case 3>: Other than <Case 1> or <Case 2> (all batteries in the CTLs in group A or all batteries in the CTLs in group B (*3) are charged 50% or more).
    System startup action: The system starts up normally. If the condition changed from <Case 2> to <Case 3> during startup, the SIM that shows the completion of battery charge is reported.

*1: Action when System Option Mode 837 is off (default setting).
*2: Battery charge time: 150 minutes to charge from 0% to 50%; 270 minutes to charge from 0% to 100%.
*3: Group A: CTL-0x, CTL-2x, and CTL-4x
    Group B: CTL-1x, CTL-3x, and CTL-5x
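The three cases can be expressed as a small decision function over the battery charge levels of CTL group A (CTL-0x/2x/4x) and group B (CTL-1x/3x/5x). The sketch below is only an illustration of the table above with System Option Mode 837 off; it is not firmware logic.

    # Illustrative decision logic for the startup cases in the table above
    # (System Option Mode 837 off). group_a / group_b: lists of battery charge percentages.
    def startup_action(group_a, group_b):
        def group_charged(group, threshold):
            return all(charge >= threshold for charge in group)

        if not group_charged(group_a, 30) and not group_charged(group_b, 30):
            # <Case 1>: at least one battery below 30% in each group
            return "do not start until all batteries in group A or in group B reach 30%"
        if not group_charged(group_a, 50) and not group_charged(group_b, 50):
            # <Case 2>: at least one battery below 50% in each group
            return "start up, report SIM, process I/O by write-through until one group reaches 50%"
        # <Case 3>: all batteries in group A or all batteries in group B are at 50% or more
        return "normal startup"

    print(startup_action([25, 80], [28, 90]))   # -> Case 1: wait for charge
    print(startup_action([45, 80], [40, 90]))   # -> Case 2: write-through startup
    print(startup_action([55, 80], [40, 90]))   # -> Case 3: normal startup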

[THEORY01-05-70]

3. Relation between Power Status and SM/CM Data Backup Methods

No. 1 (PS OFF, planned power off):
    SM/CM data backup method: SM data (including CM directory information) is stored in CFM before PS OFF is completed. If PIN data exists, all the CM data including PIN data is also stored.
    Data restore method during restart: SM data is restored from CFM. If CM data was stored, CM data is also restored from CFM.
No. 2 (Power outage: instant power outage):
    SM/CM data backup method: If power is recovered in a moment, SM/CM data remains in memory and is not stored in CFM.
    Data restore method during restart: The SM/CM data in memory is used.
No. 3 (Power outage while the system is in operation):
    SM/CM data backup method: All the SM/CM data is stored in CFM. However, if a power outage occurs after the system starts up in the condition of <Case 2> and before the battery charge level of <Case 3> is restored, only SM data is stored (see 2. Relation between Battery Charge Level and System Startup Action).
    Data restore method during restart: All the SM/CM data is restored from CFM. If CM data was not stored, only the CM data is volatilized and the system starts up.
No. 4 (Power outage while the system is starting up):
    SM/CM data backup method: Data storing in CFM is not done. (The latest backup data that was successfully stored remains.)
    Data restore method during restart: The data that was stored in the latest power off operation or power outage is restored from CFM.

[THEORY01-05-80]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY01-05-90

4. Action When CFM Error Occurs

No. 1 DKC Status: In operation
  Description of Error: CFM error or data comparing error was detected at the time of CFM health check (*1).
  Action When Error Occurs: The CFM Failure SIM (RC = 30750x, Environmental error: CFM Failure) is output.

No. 2 DKC Status: Planned power off or power outage
  Description of Error: CFM error was detected, and moreover, retry failed four times during data storing. The data storing error is managed on a per module group (MG) basis and is classified into a data storing error only in the MG concerned or a data storing error in all the MGs, depending on the location of the failed memory.
  Action When Error Occurs:
  • DKC power off process is executed.
  • Blockage occurs in the Controller Board or in a CMG in the Controller Board depending on the location of the failed memory.

No. 3 DKC Status: When powered on -1 (in the case that data storing was successfully done in No. 2)
  Description of Error: CFM error or protection code (*2) error occurred during data restoring.
  Action When Error Occurs:
  • Blockage occurs in the Controller Board or in a CMG in the Controller Board depending on the location of the failed memory.
  • If the failed memory is in CMG0, the Controller Board concerned becomes blocked. If the failed memory is in CMG1, the CACHE concerned is volatilized and the system starts up. If the Controller Board forming the cache redundant configuration with the Controller Board that contains the failed CFM is in the normal status, the data is not lost.

No. 4 DKC Status: When powered on -2 (in the case that data storing failed in No. 2)
  Description of Error: —
  Action When Error Occurs:
  • Blockage occurs in the Controller Board or in a CMG in the Controller Board depending on the location of the data storing error. (Same as described in No. 2.)
*1: CFM health check: Function that executes the test of read and write of a certain amount of data at
specified intervals to CFM while the DKC is in operation.
*2: Protection code: The protection code (CRC) is generated and saved onto CFM at the time of data
storing in CFM and is checked at the time of data restoring.
NOTE: CFM handles only the data in the Controller Board in which it is installed.
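
As a rough illustration of the protection code described in note *2, the sketch below appends a CRC to the data at store time and verifies it at restore time. It is a conceptual Python sketch only: the actual code width, polynomial, and data layout used by the CFM are not documented here, and the helper names are hypothetical.

import zlib

def store_with_protection_code(payload: bytes) -> bytes:
    # Append a CRC32 protection code before saving the data (illustrative only).
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def restore_with_check(stored: bytes) -> bytes:
    # Verify the protection code at restore time; a mismatch is treated as a restore error.
    payload, crc = stored[:-4], int.from_bytes(stored[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise ValueError("protection code mismatch: data restore error")
    return payload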

5. Notes during Planned Power Off (PS OFF)


Removing the Controller Board while the system is powered off and the breakers on the PDU are on may result in <Case 1> of 2. Relation between Battery Charge Level and System Startup Action because of the lack of battery charge.
Therefore, when the Controller Board and the battery need to be removed, replace them while the system is powered on, or remove them after the breakers on the PDU are turned off.

[THEORY01-05-90]
Hitachi Proprietary DKC910I
Rev.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY01-05-100

1.5.3 Impact of Each Failure Part on Storage System


For the impact of each failure part on the storage system, see Impact of failures in Chapter 2 of Hitachi
Virtual Storage Platform 5000 Series Systems configuration guideline.

[THEORY01-05-100]
Hitachi Proprietary DKC910I
Rev.9 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-01-10

2. Hardware Specifications
2.1 Storage System Specifications
Table 2-1 and Table 2-2 show the storage system specifications.

Table 2-1 Storage System Specifications (VSP 5100/VSP 5100H/VSP 5500/VSP 5500H)
Item Specifications
VSP 5500/ VSP 5500/ VSP 5500/ VSP 5100/
VSP 5500H VSP 5500H VSP 5500H VSP 5100H
(6CBX) (4CBX) (2CBX) (2CBX, 2CTL)
System Number of 2.5 Minimum 0
Drives Maximum 2,304 1,536 768 768
Number of 3.5 Minimum 0
Drives (*1) Maximum 1,152 768 384 384
Number of Flash Minimum 0
Module Drives Maximum 576 384 192 192
Number of Minimum 0
NVMe SSDs Maximum 288 192 96 96
Number of Minimum 0
NVMe SCMs Maximum 33 (*10) 33 (*10) 33 (*10) 33 (*10)
RAID Level RAID6/RAID5/RAID1 (*9)
RAID Group RAID6 6D+2P, 14D+2P
Configuration RAID5 3D+1P, 7D+1P
RAID1 2D+2D
Maximum Number of Spare 192 (*2) 128 (*2) 64 (*2) 64 (*2)
Disk Drives
Maximum Number of Volumes 65,280
Maximum Storage System Capacity (Physical Capacity)
30 TB 2.5 SAS SSD used 61.5 PiB 41.0 PiB 20.5 PiB 20.5 PiB
30 TB 2.5 NVMe SSD used 7.6 PiB 5.1 PiB 2.5 PiB 2.5 PiB
Maximum External 255 PiB
Configuration
Maximum DBS2/DBL 96 64 32 32
Number of DBs DBF3 48 32 16 16
DBN 12 8 4 4
Memory Cache Memory Capacity 1,536 GiB to 6,144 GiB 1,024 GiB to 4,096 GiB 512 GiB to 2,048 GiB 256 GiB to 1,024 GiB
Cache Flash Memory Type BM35/BM45/BM3E/BM4E
(To be continued)

[THEORY02-01-10]
Hitachi Proprietary DKC910I
Rev.2 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-01-20

(Continued from preceding page)


Item Specifications
VSP 5500/ VSP 5500/ VSP 5500/ VSP 5100/
VSP 5500H VSP 5500H VSP 5500H VSP 5100H
(6CBX) (4CBX) (2CBX) (2CBX, 2CTL)
Storage I/F DKC-DB Interface SAS/Dual Port
NVMe/Dual Port
Transfer Rate SAS 12 Gbps
Interface
NVMe 8 Gbps
(PCIe)
Interface
Number of DKB 24/16/8/0 16/8/0 8/0 4/0
Device I/F Support Channel Open System Fibre Channel Short Wave/
Type Fibre Channel Long Wave (*3)/
iSCSI (Optic)
Mainframe Fibre Channel Short Wave/
Fibre Channel Long Wave
Transfer Rate Fibre Open System : 4/8/16/32 Gbps
Channel Mainframe : 4/8/16 Gbps
iSCSI 10 Gbps
(Optic)
Maximum Number of CHB 48 32 16 8
Acoustic Operating CBX LpAm 60 dB, LwA 6.6 Bel
Level HSNBX LpAm 60 dB, LwA 6.6 Bel
LpAm (*4) DBS2 LpAm 60 dB, LwA 6.4 Bel
(*5) (*6) DBL LpAm 60 dB, LwA 6.4 Bel
(*7) (*8) DBF3 LpAm 60 dB, LwA 6.0 Bel
DBN LpAm 60 dB, LwA 6.4 Bel
Standby CBX LpAm 55 dB
HSNBX LpAm 55 dB
DBS2 LpAm 55 dB
DBL LpAm 55 dB
DBF3 LpAm 55 dB
DBN LpAm 55 dB
(To be continued)

[THEORY02-01-20]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-01-30

(Continued from preceding page)


Item Specifications
VSP 5500/ VSP 5500/ VSP 5500/ VSP 5100/
VSP 5500H VSP 5500H VSP 5500H VSP 5100H
(6CBX) (4CBX) (2CBX) (2CBX, 2CTL)
Non- Control PCB, SVP, ISW Supported
disruptive Cache Memory Supported
Maintenance Cache Flash Memory Supported
Power Supply, Fan Supported
Micro-program Supported
Disk Drive, Flash Drive, Supported
Flash Module Drive, SCM
*1: 3.5 drives are supported only by VSP 5500H and VSP 5100H.
*2: Available as spare or data Disks.
*3: By the replacing SFP transceiver of the fibre port on the Channel Board to SFP for Longwave, the
port can be used for the Longwave.
*4: Acoustic level value of each single chassis.
*5: [LpAm] is the mean A-weighted emission sound pressure level that is measured at the 1-meter
bystander positions under the following conditions in accordance with ISO7779 and the value is
declared based on ISO9296.
In a normal installation area (data center/general office), the storage system is surrounded by
different elements from the following measuring conditions according to ISO, such as noise
sources other than the storage system (other devices) , the walls and ceilings that reflect the sound.
Therefore, the values described in the table do not guarantee the acoustic level in the actual
installation area.
• Measurement environment: In a semi-anechoic room whose ambient temperature is 23 degrees C
± 2 degrees C
• Device installation position: The Controller Chassis is at the bottom of the rack and the Drive
Box is at a height of 1.5 meters in the rack
• Measurement position: 1 meter away from the front, rear, left, or right side of the storage system
and 1.5 meters high (at four points)
• Measurement value: Energy average value of the four points mentioned above (front, rear, left,
and right)
*6: [LpAm] varies between 45 dB and 63 dB according to the ambient temperature, drive
configuration, and operating status. The maximum could be 67 dB during maintenance procedure
for failed ENC or Power Supply.
*7: The maximum sound power level could be 8.0 Bel according to the ambient temperature, HDD type, and operating status.

[THEORY02-01-30]
Hitachi Proprietary DKC910I
Rev.9 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-01-40

*8: It is recommended to install the storage system in a computer room in a data center and the like.
It is possible to install the storage system in a general office, however, take measures against noise
as required.
When you replace the old Hitachi storage system with the new one in a general office, especially
note the following to take measures against noise.
The cooling fans in the storage system are downsized to achieve the high density of the storage system. As a result, the fans rotate faster than before to maintain the cooling performance. Therefore, high-frequency content accounts for a large proportion of the noise.
*9: RAID1 supported by these storage systems is commonly referred to as RAID1+0. RAID1+0
mirrors blocks across two drives and then creates a striped set across multiple drive pairs.
In this manual, the above RAID level is referred to as RAID1.
*10: The maximum number of SCMs that can be controlled per storage system is shown.

[THEORY02-01-40]
Hitachi Proprietary DKC910I
Rev.9 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-01-50

Table 2-2 Storage System Specifications (VSP 5200/VSP 5200H/VSP 5600/VSP 5600H)
Item Specifications
VSP 5600/ VSP 5600/ VSP 5600/ VSP 5200/
VSP 5600H VSP 5600H VSP 5600H VSP 5200H
(6CBX) (4CBX) (2CBX) (2CBX, 2CTL)
System Number of 2.5 Minimum 0
Drives Maximum 2,304 1,536 768 768
Number of 3.5 Minimum 0
Drives (*1) Maximum 1,152 768 384 384
Number of Minimum 0
NVMe SSDs Maximum 288 192 96 96
Number of Minimum 0
NVMe SCMs Maximum 33 (*10) 33 (*10) 33 (*10) 33 (*10)
RAID Level RAID6/RAID5/RAID1 (*9)
RAID Group RAID6 6D+2P, 14D+2P
Configuration RAID5 3D+1P, 7D+1P
RAID1 2D+2D
Maximum Number of Spare 192 (*2) 128 (*2) 64 (*2) 64 (*2)
Disk Drives
Maximum Number of Volumes 65,280
Maximum Storage System Capacity (Physical Capacity)
30 TB 2.5 SAS SSD used 61.5 PiB 41.0 PiB 20.5 PiB 20.5 PiB
30 TB 2.5 NVMe SSD used 7.6 PiB 5.1 PiB 2.5 PiB 2.5 PiB
Maximum External 255 PiB
Configuration
Maximum DBS2/DBL 96 64 32 32
Number of DBs DBN 12 8 4 4
Memory Cache Memory Capacity 1,536 GiB to 6,144 GiB 1,024 GiB to 4,096 GiB 512 GiB to 2,048 GiB 256 GiB to 1,024 GiB
Cache Flash Memory Type BM95/BM9E
(To be continued)

[THEORY02-01-50]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-01-60

(Continued from preceding page)


Item Specifications
VSP 5600/ VSP 5600/ VSP 5600/ VSP 5200/
VSP 5600H VSP 5600H VSP 5600H VSP 5200H
(6CBX) (4CBX) (2CBX) (2CBX, 2CTL)
Storage I/F DKC-DB Interface SAS/Dual Port
NVMe/Dual Port
Transfer Rate SAS 12 Gbps
Interface
NVMe 8 Gbps
(PCIe)
Interface
Number of DKB 24/16/8/0 16/8/0 8/0 4/0
Device I/F Support Channel Open System Fibre Channel Short Wave/
Type Fibre Channel Long Wave (*3)/
iSCSI (Optic)
Mainframe Fibre Channel Short Wave/
Fibre Channel Long Wave
Transfer Rate Fibre Open System : 4/8/16/32 Gbps
Channel Mainframe : 4/8/16/32 Gbps
iSCSI 10 Gbps
(Optic)
Maximum Number of CHB 48 32 16 8
Acoustic Operating CBX LpAm 65 dB, LwA 7.7 Bel
Level HSNBX LpAm 60 dB, LwA 6.6 Bel
LpAm (*4) DBS2 LpAm 60 dB, LwA 6.4 Bel
(*5) (*6) DBL LpAm 60 dB, LwA 6.4 Bel
(*7) (*8) DBN LpAm 60 dB, LwA 6.4 Bel
Standby CBX LpAm 55 dB
HSNBX LpAm 55 dB
DBS2 LpAm 55 dB
DBL LpAm 55 dB
DBN LpAm 55 dB
(To be continued)

[THEORY02-01-60]
Hitachi Proprietary DKC910I
Rev.9 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-01-70

(Continued from preceding page)


Item Specifications
VSP 5600/ VSP 5600/ VSP 5600/ VSP 5200/
VSP 5600H VSP 5600H VSP 5600H VSP 5200H
(6CBX) (4CBX) (2CBX) (2CBX, 2CTL)
Non- Control PCB, SVP, ISW Supported
disruptive Cache Memory Supported
Maintenance Cache Flash Memory Supported
Power Supply, Fan Supported
Micro-program Supported
Disk Drive, Flash Drive, Supported
SCM
*1: 3.5 drives are supported only by VSP 5600H and VSP 5200H.
*2: Available as spare or data Disks.
*3: By the replacing SFP transceiver of the fibre port on the Channel Board to SFP for Longwave, the
port can be used for the Longwave.
*4: Acoustic level value of each single chassis.
*5: [LpAm] is the mean A-weighted emission sound pressure level that is measured at the 1-meter
bystander positions under the following conditions in accordance with ISO7779 and the value is
declared based on ISO9296.
In a normal installation area (data center/general office), the storage system is surrounded by
different elements from the following measuring conditions according to ISO, such as noise
sources other than the storage system (other devices) , the walls and ceilings that reflect the sound.
Therefore, the values described in the table do not guarantee the acoustic level in the actual
installation area.
• Measurement environment: In a semi-anechoic room whose ambient temperature is 23 degrees C
± 2 degrees C
• Device installation position: The Controller Chassis is at the bottom of the rack and the Drive
Box is at a height of 1.5 meters in the rack
• Measurement position: 1 meter away from the front, rear, left, or right side of the storage system
and 1.5 meters high (at four points)
• Measurement value: Energy average value of the four points mentioned above (front, rear, left,
and right)
*6: [LpAm] varies between 45 dB and 63 dB according to the ambient temperature, drive
configuration, and operating status. The maximum could be 67 dB during maintenance procedure
for failed ENC or Power Supply.
*7: The maximum sound power level could be 8.0 Bel according to the ambient temperature, HDD type, and operating status.

[THEORY02-01-70]
Hitachi Proprietary DKC910I
Rev.9 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-01-80

*8: It is recommended to install the storage system in a computer room in a data center and the like.
It is possible to install the storage system in a general office, however, take measures against noise
as required.
When you replace the old Hitachi storage system with the new one in a general office, especially
note the following to take measures against noise.
The cooling fans in the storage system are downsized to achieve the high density of the storage system. As a result, the fans rotate faster than before to maintain the cooling performance. Therefore, high-frequency content accounts for a large proportion of the noise.
*9: RAID1 supported by these storage systems is commonly referred to as RAID1+0. RAID1+0
mirrors blocks across two drives and then creates a striped set across multiple drive pairs.
In this manual, the above RAID level is referred to as RAID1.
*10: The maximum number of SCMs that can be controlled per storage system is shown.

[THEORY02-01-80]
Hitachi Proprietary DKC910I
Rev.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-02-10

2.2 Power Specifications


2.2.1 Storage System Current
The DKC910I input current is shown for each Power Supply.

Table 2-3 Input Power Specifications


Item            Input Power       Input Current (Rating) (*1)     Leakage    Inrush Current                     Power Cord
                                  When one PS    When two PSs     Current    1st (0-p)  2nd (0-p)   1st Time    Plug Type
                                  is operating   are operating                          (-25%)
DKCPS (CBX)     Single phase,     7.2 A          3.6 A            1.75 mA    30 A       30 A        25 ms
DBPS (DBS2)     AC200V to         3.2 A          1.6 A            1.75 mA    30 A       30 A        25 ms
DBPS (DBL)      AC240V            2.0 A          1.0 A            1.75 mA    35 A       30 A        25 ms
DBPS (DBF3)                       3.1 A          1.55 A           1.75 mA    20 A       15 A        80 ms
DBPS (DBN)                        4.0 A          2.0 A            1.75 mA    24 A       18 A        25 ms
ISWPS (HSNBX)                     1.2 A          0.6 A            1.75 mA    30 A       30 A        25 ms
*1: When two power supplies are operating, each power supply provides about half of the required
power for the storage system. When only one of the two power supplies is operating, the power
supply provides all required power for the storage system. Therefore, use the power supplies that
meet the rated input current for when one power supply is operating.

[THEORY02-02-10]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-02-20

Figure 2-1 Power Supply Locations


1. Controller Chassis
The Controller Chassis (CTL1 and CTL2) has two power supplies, DKCPS-1 and DKCPS-2. Each DKCPS is connected through its C14 inlet and a DKCPS power cord to a PDU, one on the AC0 (*1) side and the other on the AC1 (*1) side.

2. Drive Box
The Drive Box (with two ENCs) has two power supplies, DBPS-1 and DBPS-2. Each DBPS is connected through its C14 inlet and a DBPS power cord to a PDU, one on the AC0 (*1) side and the other on the AC1 (*1) side.

3. HSN Box (HSNBX)
The HSN Box (ISW1 and ISW2) has two power supplies, ISWPS PS1 and ISWPS PS2. Each ISWPS is connected through its C14 inlet and a power cord to a PDU, one on the AC0 (*1) side and the other on the AC1 (*1) side.
*1: It is necessary to separate AC0 and AC1 for AC redundancy.

[THEORY02-02-20]
Hitachi Proprietary DKC910I
Rev.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-02-30

2.2.2 Input Voltage and Frequency


The following shows the electric power system specifications for feeding to the Storage System.

1. Input Voltage and Frequency


The following shows the input voltage and frequency to be supported.

• CBX/HSNBX/DBS2/DBL/DBF3/DBN
Input Voltage Voltage Tolerance Frequency Wire Connection
200V to 240V +10% or -11% 50Hz ±2Hz 1 Phase 2 Wire + Ground
60Hz ±2Hz

This unit does not apply to IT Power System.


2. Circuit Breakers and PDU
• Use PDU with the standard plug.
• If the PDU is provided with a connecting type B plug, use a PDU with a circuit breaker of 20 (16) A or less, or install a circuit breaker of 20 (16) A in the power supply.

[THEORY02-02-30]
Hitachi Proprietary DKC910I
Rev.9.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-03-10

2.3 Environmental Specifications


The environmental specifications are shown in the following table.

1. Environmental Conditions

Table 2-4 Usage Environment Conditions


Item Condition
Operating (*1) (*8)
Model Name CBX DBS2/DBL/DBF3/DBN HSNBX
Temperature range (ºC) 10 to 35 10 to 35 10 to 35
Relative humidity (%) (*4) 8 to 80 8 to 80 8 to 80
Maximum wet-bulb 29 29 29
temperature (ºC)
Temperature gradient 10 10 10
(ºC/hour)
Dust (mg/m3) 0.15 or less 0.15 or less 0.15 or less
Gaseous contaminants (*6) G1 classification levels
Altitude (m) (*7) ~ 3,050 (*7) ~3,050 (*7) ~ 3,050
(Ambient temperature) (10 ºC ~ 28 ºC) (10 ºC ~28 ºC) (10 ºC ~ 28 ºC)
~ 950 ~ 950 ~ 950
(10 ºC ~ 35 ºC) (10 ºC ~ 35 ºC) (10 ºC ~ 35 ºC)
Noise Level 90 dB or less (*5)
(Recommended)

Item Condition
Non-Operating (*2)
Model Name CBX DBS2/DBL/DBF3/DBN HSNBX
Temperature range (ºC) (*9) -10 to 50 -10 to 50 -10 to 50
Relative humidity (%) (*4) 8 to 90 8 to 90 8 to 90
Maximum wet-bulb 29 29 29
temperature (ºC)
Temperature gradient 10 10 10
(ºC/hour)
Dust (mg/m3) — — —
Gaseous contaminants (*6) G1 classification levels
Altitude (m) -60 to 12,000 -60 to 12,000 -60 to 12,000

[THEORY02-03-10]
Hitachi Proprietary DKC910I
Rev.9.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-03-20

Item Condition
Transportation, Storage (*3)
Model Name CBX DBS2/DBL/DBF3/DBN HSNBX
Temperature range (ºC) (*10) -30 to 60 -30 to 60 -30 to 60
Relative humidity (%) (*4) 5 to 95 5 to 95 5 to 95
Maximum wet-bulb 29 29 29
temperature (ºC)
Temperature gradient 10 10 10
(ºC/hour)
Dust (mg/m3) — — —
Gaseous contaminants (*6) —
Altitude (m) -60 to 12,000 -60 to 12,000 -60 to 12,000
*1: Storage system which is ready for being powered on
*2: Including packed and unpacked storage systems
*3: Storage system packed for shipping
*4: No dew condensation is allowed.
*5: Fire suppression systems and acoustic noise:
Some data center inert gas fire suppression systems when activated release gas from pressurized
cylinders that moves through the pipes at very high velocity. The gas exits through multiple
nozzles in the data center. The release through the nozzles could generate high-level acoustic
noise. Similarly, pneumatic sirens could also generate high-level acoustic noise. These acoustic
noises may cause vibrations to the hard disk drives in the storage systems, resulting in I/O
errors, performance degradation, and, to some extent, damage to the hard disk drives. Hard
disk drive (HDD) noise level tolerance may vary among different models, designs, capacities,
and manufacturers. The acoustic noise level of 90dB or less in the operating environment table
represents the current operating environment guidelines in which Hitachi storage systems are
designed and manufactured for reliable operation when placed 2 meters from the source of the
noise.
Hitachi does not test storage systems and hard disk drives for compatibility with fire suppression
systems and pneumatic sirens. Hitachi also does not provide recommendations or claim
compatibility with any fire suppression systems and pneumatic sirens. Customer is responsible to
follow their local or national regulations.
To prevent unnecessary I/O error or damages to the hard disk drives in the storage systems, Hitachi
recommends the following options:
(1) Install noise-reducing baffles to mitigate the noise to the hard disk drives in the storage
systems.
(2) Consult the fire suppression system manufacturers on noise reduction nozzles to reduce the
acoustic noise to protect the hard disk drives in the storage systems.
(3) Locate the storage system as far as possible from noise sources such as emergency sirens.
(4) If it can be safely done without risk of personal injury, shut down the storage systems to avoid
data loss and damages to the hard disk drives in the storage systems.
DAMAGE TO HARD DISK DRIVES FROM FIRE SUPPRESSION SYSTEMS OR
PNEUMATIC SIRENS WILL VOID THE HARD DISK DRIVE WARRANTY.

[THEORY02-03-20]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-03-30

*6: See ANSI/ISA-71.04-2013 Environmental Conditions for Process Measurement and Control
Systems: Airborne Contaminants.
*7: Meets the highest allowable temperature conditions and complies with ASHRAE (American
Society of Heating, Refrigerating and Air-Conditioning Engineers) 2011 Thermal Guidelines Class
A2. The maximum value of the ambient temperature and the altitude is from 35 degrees C at an
altitude of 950 meters (3000 feet) to 28 degrees C at an altitude of 3050 meters (10000 feet).
The allowable ambient temperature is decreased by 1 degree C for every 300-meter increase in
altitude above 950 meters.
*8: The system monitors the intake temperature and the internal temperature of the Controller and the
Power Supply. It executes the following operations in accordance with the temperatures.

(1) Controller Chassis (DKC)


• If the use environment temperature rises to 43 degrees C or higher, or drops to 5 degrees C or lower,
the external temperature warning (SIM-RC = af11xx) is notified.
• If the use environment temperature rises to 50 degrees C or higher, the external temperature alarm
(SIM-RC = af12xx) is notified.
• If the temperature of the CPU exceeds its operation guarantee value, the MP temperature
abnormality warning (SIM-RC = af10xx) is notified.

<Automatic stop caused by abnormal temperature>


Controller Boards are divided into two groups (Group A and Group B) as shown below. When
one or more Controller Boards in each group detect the external temperature alarm or the MP
temperature abnormality warning, the power-off processing (planned stop) is automatically
executed.
Group A: Controller Boards in DKC-0, DKC-2, and DKC-4
Group B: Controller Boards in DKC-1, DKC-3, and DKC-5

(2) DBS2
• If the internal temperature of the Power Supply rises to 65 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 75 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.

(3) DBL
• If the internal temperature of the Power Supply rises to 55 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 64.5 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.

[THEORY02-03-30]
Hitachi Proprietary DKC910I
Rev.9.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-03-40

(4) DBF3
• If the internal temperature of the Power Supply rises to 68 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 78 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.

(5) DBN
• If the internal temperature of the Power Supply rises to 65 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 75 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.

(6) HSNBX
• If the use environment temperature rises to 50 degrees C or higher, the HSNBX ambient
temperature warning (SIM-RC = afb0xx) is notified.

*9: If the storage system is stored at temperatures lower than 40 degrees C after SSDs are installed
in the storage system, power on the storage system within three months. If the storage system is
stored at 40 degrees C or higher after SSDs are installed in the storage system, power on the storage
system within two weeks.
*10: Regarding transportation and storage for relocation, if the storage system in which SSDs are
installed is stored at temperatures lower than 40 degrees C, do not leave the storage system
powered off for three months or more. If the storage system in which SSDs are installed is stored
at 40 degrees C or higher, do not leave the storage system powered off for two weeks or more.
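
The altitude derating rule in note *7 and the Controller Chassis thresholds in note *8 (1) can be expressed as the following illustrative Python helpers. The function names are hypothetical; the numeric values are taken directly from the notes above, and this is a sketch for clarity, not part of the product software.

def max_ambient_temp_c(altitude_m: float) -> float:
    # Note *7: 35 deg C up to 950 m, then -1 deg C per 300 m, down to 28 deg C at 3,050 m.
    if altitude_m > 3050:
        raise ValueError("operating altitude limit is 3,050 m")
    if altitude_m <= 950:
        return 35.0
    return 35.0 - (altitude_m - 950.0) / 300.0

def dkc_temperature_sims(intake_temp_c: float, cpu_over_limit: bool) -> list:
    # Note *8 (1): SIM reference codes notified for the Controller Chassis (DKC).
    sims = []
    if intake_temp_c >= 43 or intake_temp_c <= 5:
        sims.append("af11xx (external temperature warning)")
    if intake_temp_c >= 50:
        sims.append("af12xx (external temperature alarm)")
    if cpu_over_limit:
        sims.append("af10xx (MP temperature abnormality warning)")
    return sims

print(max_ambient_temp_c(2000))          # 31.5 deg C allowed at 2,000 m
print(dkc_temperature_sims(51, False))   # both the warning and the alarm thresholds are exceeded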

[THEORY02-03-40]
Hitachi Proprietary DKC910I
Rev.5 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-03-50

2. Mechanical Environmental Conditions


It is recommended to install the storage system in a computer room (*3) in a data center and the like,
where the effects of train vibration and continuous vibration of air conditioner outdoor units are almost
eliminated. The equipment for earthquake resistance or seismic isolation might be required at a customer
site so that the mechanical environmental conditions are met.

Table 2-5 Mechanical Environmental Conditions


Item In operating In non-operating
Guaranteed value to vibration (*1) 0.25 Grms, 5-500 Hz 0.6 Grms, 3-500 Hz
(*2)
Guaranteed value to impact (*2) 5 G, 11 ms, half sine, three-axis
direction,
10 G, 6 ms, half sine, three-axis direction
and
10 G, 11 ms, half sine, falling direction
*1: Vibration that is constantly applied to the storage system due to construction works and so on
*2: Guaranteed value for each chassis of the storage system. If the vibration or impact exceeding the
specified value is imposed, the acceleration value to which the storage system is subjected to needs
to be reduced to the specified value or lower by the equipment for earthquake resistance or seismic
isolation so that the storage system can operate continuously. For general 19-inch racks, the lateral
vibration amplitude tends to be larger at the upper installation location. Therefore, it is recommended
to install the chassis in order from the bottom of the rack without leaving a vacant space. If the
rack frame and storage system are moved while the storage system is operating, the operation is
not guaranteed.
*3: The definition of computer room is as follows:
• A room where servers in which highly valuable information assets are stored operate
• A separate room. Not an area of a general office room.
• Security devices such as security cameras and burglar alarms are equipped according to
importance of information.
• A few designated doors with locks are used.
• To achieve stable operation 24 hours a day and 365 days a year, room temperature is optimized.
• To achieve stable operation 24 hours a day and 365 days a year, an emergency power system is
installed in case of a power outage.

[THEORY02-03-50]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-04-10

2.4 FC Interface Specifications


2.4.1 FC Interface Specification Values
The following table shows signal power level (dBm) of SFPs used for the storage system in a normal state.

Table 2-6 Signal Power Level of SFPs


(Unit: dBm)
SFP type Model number When used at 8 Gbps When used at 16 Gbps When used at 32 Gbps
Long Wave 16G DKC-F810I-1PL16 -8.4 or more -5.0 or more
Short Wave 16G DKC-F810I-1PS16 -8.2 or more -7.8 or more
Long Wave 32G (*1) DKC-F810I-1PL32 -8.4 or more -5.0 or more -5.0 or more
Short Wave 32G DKC-F810I-1PS32 -8.2 or more -7.8 or more -6.2 or more
*1: Long Wave 32G is supported by VSP 5200, VSP 5200H, VSP 5600, and VSP 5600H only.

[THEORY02-04-10]
Hitachi Proprietary DKC910I
Rev.1.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-04-20

2.4.2 FC Port WWN


The Port-WWN that is calculated automatically for each FC port from the Serial Number is described in the
table below. Use this value of the table to find the target FC port when analyzing FC-SW log.

Port-WWN
Byte 7 6 5 4 3 2 1 0
Value 50 06 0E 80 Y8 NN NN PP

Vendor ID 50060E80 Vendor unique value; this will always be here.


Serial Number Y8 If array SN is in the range of 0-65535, then Y = 0.
If array SN is in the range of 65536-99999, then Y = 1.
NNNN This is the HEX equivalent (simple conversion) of the array 5-digit serial number.
e.g. 10152 = x27A8; Y value will equal 0.
e.g. 74320 = x2250; Y value will equal 1.
WWN Port# PP This is the value as defined by the port the WWN represents.
The correspondence between port locations and WWN port# is shown in Table 2-7.
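
The calculation above can be illustrated with a short Python sketch. The helper name is hypothetical; the byte layout and the Y/NNNN rules follow the description above, and the example values reproduce the serial numbers 10152 and 74320 quoted in the text (WWN port# values come from Table 2-7).

def fc_port_wwn(serial_number: int, wwn_port: int) -> str:
    # Byte 7..0 : 50 06 0E 80 | Y8 | NN NN | PP
    if not (0 <= serial_number <= 99999):
        raise ValueError("expects the 5-digit array serial number")
    y = 0 if serial_number <= 65535 else 1
    nnnn = serial_number & 0xFFFF          # HEX equivalent (low 16 bits) of the serial number
    return f"50060E80{y:X}8{nnnn:04X}{wwn_port:02X}"

print(fc_port_wwn(10152, 0x00))   # 50060E800827A800 (WWN port# 00 = DKC-0, CHB 1A, Port 1A)
print(fc_port_wwn(74320, 0x41))   # 50060E8018225041 (WWN port# 41 = DKC-0, CHB 2A, Port 5B)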

[THEORY02-04-20]
Hitachi Proprietary DKC910I
Rev.1.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-04-30

Table 2-7 WWN Port # Corresponding to Port Locations


DKC CHB Location Port Location WWN Port# CHB Location Port Location WWN Port#
(HEX) (HEX)
DKC-0 1A 1A 00 2A 1B 01
3A 20 3B 21
5A 40 5B 41
7A 60 7B 61
1B 1C 02 2B 1D 03
3C 22 3D 23
5C 42 5D 43
7C 62 7D 63
1E 1E 04 2E 1F 05
3E 24 3F 25
5E 44 5F 45
7E 64 7F 65
1F 1G 06 2F 1H 07
3G 26 3H 27
5G 46 5H 47
7G 66 7H 67
DKC-1 1A 2A 10 2A 2B 11
4A 30 4B 31
6A 50 6B 51
8A 70 8B 71
1B 2C 12 2B 2D 13
4C 32 4D 33
6C 52 6D 53
8C 72 8D 73
1E 2E 14 2E 2F 15
4E 34 4F 35
6E 54 6F 55
8E 74 8F 75
1F 2G 16 2F 2H 17
4G 36 4H 37
6G 56 6H 57
8G 76 8H 77
(To be continued)

[THEORY02-04-30]
Hitachi Proprietary DKC910I
Rev.1.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-04-40

(Continued from preceding page)


DKC CHB Location Port Location WWN Port# CHB Location Port Location WWN Port#
(HEX) (HEX)
DKC-2 1A 1J 08 2A 1K 09
3J 28 3K 29
5J 48 5K 49
7J 68 7K 69
1B 1L 0A 2B 1M 0B
3L 2A 3M 2B
5L 4A 5M 4B
7L 6A 7M 6B
1E 1N 0C 2E 1P 0D
3N 2C 3P 2D
5N 4C 5P 4D
7N 6C 7P 6D
1F 1Q 0E 2F 1R 0F
3Q 2E 3R 2F
5Q 4E 5R 4F
7Q 6E 7R 6F
DKC-3 1A 2J 18 2A 2K 19
4J 38 4K 39
6J 58 6K 59
8J 78 8K 79
1B 2L 1A 2B 2M 1B
4L 3A 4M 3B
6L 5A 6M 5B
8L 7A 8M 7B
1E 2N 1C 2E 2P 1D
4N 3C 4P 3D
6N 5C 6P 5D
8N 7C 8P 7D
1F 2Q 1E 2F 2R 1F
4Q 3E 4R 3F
6Q 5E 6R 5F
8Q 7E 8R 7F
(To be continued)

[THEORY02-04-40]
Hitachi Proprietary DKC910I
Rev.1.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-04-50

(Continued from preceding page)


DKC CHB Location Port Location WWN Port# CHB Location Port Location WWN Port#
(HEX) (HEX)
DKC-4 1A 9A 80 2A 9B 81
BA A0 BB A1
DA C0 DB C1
FA E0 FB E1
1B 9C 82 2B 9D 83
BC A2 BD A3
DC C2 DD C3
FC E2 FD E3
1E 9E 84 2E 9F 85
BE A4 BF A5
DE C4 DF C5
FE E4 FF E5
1F 9G 86 2F 9H 87
BG A6 BH A7
DG C6 DH C7
FG E6 FH E7
DKC-5 1A AA 90 2A AB 91
CA B0 CB B1
EA D0 EB D1
GA F0 GB F1
1B AC 92 2B AD 93
CC B2 CD B3
EC D2 ED D3
GC F2 GD F3
1E AE 94 2E AF 95
CE B4 CF B5
EE D4 EF D5
GE F4 GF F5
1F AG 96 2F AH 97
CG B6 CH B7
EG D6 EH D7
GG F6 GH F7

[THEORY02-04-50]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-05-10

2.5 Mainframe fibre channel


Introduction
FICON is a mainframe channel architecture based on the FC-SB-2/FC-SB-3/FC-SB-4/FC-SB-5/FC-SB-6 protocols, in which the mainframe protocol is mapped onto the Fibre Channel physical layer protocol (FC-PH).

The FICON specifications are as follows.


• Full duplex data transfer
• Multiple concurrent I/O operations on channel
• High bandwidth data transfer (400MB/s, 800MB/s, 1,600MB/s, 3,200MB/s)
• Interlock reduction between disk controller and channel
• Pipelined CCW execution
• High Performance FICON (HPF) function
• Forward Error Correction(FEC) function

[THEORY02-05-10]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-05-20

FICON specification
Table 2-8 FICON support DKC specification
Items Contents
Support DKC emulation type I-2107
Range of CU address 0 to FE (*1)
Number of logical volumes 1 to 65280 (*2)
Number of connectable channel port 8 to 192 (*3)
8 to 128 (*4)
8 to 64 (*5)
Support fibre channel Bandwidth 4Mx16: 4Gbps/8Gbps/16Gbps
4Mx32: 8Gbps/16Gbps/32Gbps (*6)
(*7) (*8) (*9)
Cable and connector LC-Duplex
Mode Single Mode Fibre/Multi Mode Fibre

*1: When the number of CUs per FICON channel (CHPID) exceeds the limitation, there is a
possibility that HOST OS IPL fails.
*2: Number of logical volumes connectable to the one FICON channel (CHPID) is 16384.
*3: In the 3 CBX Pairs configuration
*4: In the 2 CBX Pairs configuration
*5: In the 1 CBX Pair configuration
*6: Supported for IBM Z host servers z14 and later.
*7: z/TPF is not supported.
*8: Hitachi hosts are not supported.
*9: The mixed configurations that share volumes with IBM/Hitachi hosts are not supported.

[THEORY02-05-20]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-05-30

The operation procedure


(1) Notice about Speed Auto Negotiation
A stable physical environment (fully mated connectors, no cable flexing, no transient noise sources,
etc.) is expected for Speed Auto Negotiation. Otherwise, Speed Auto Negotiation may settle on an
optimum speed that is not the fastest speed.
To change into the fastest speed, check that the physical environment is stable, and then either of
the following operation is required.
Confirm the Link speed from the Mainframe Path Information window of DKC after executing
each operation. (*3)
• DKC PS OFF/ON
• Dummy replace the package including the FICON ports
• Remove and insert the FICON cable which is connected to the FICON port in DKC (*1)
• Block and Unblock the associated outbound switch/director port (*1) (*2)

*1: Execute this after deleting the logical paths from the host with the CHPID OFFLINE
operation. If this operation is not executed, Incident log may be reported.
*2: Alternate method using switch/director configuration.
<Operating procedure from switch control window (Example: in the Brocade Network
Advisor (BNA))>
(a) Block the associated outbound switch/director port to the CHB interface that is
currently Negotiated to not the fastest speed (Example: 8Gbps).
(b) Change the port speed setting from Negotiate mode to Fastest speed fix mode
(Example: 32Gbps mode for 4Mx32 and 16Gbps mode for 4Mx16) , then from Fastest
speed fix mode (Example: 32Gbps mode for 4Mx32 and 16Gbps mode for 4Mx16)
back to Negotiate mode in the switch/director port configuration window.
(c) Unblock the switch/director port.
(d) Confirm that an Online and Fastest speed (Example: 32Gbps for 4Mx32 and
16Gbps for 4Mx16) link is established without errors on the switch/director port
status window.
*3: From the menu of the Maintenance Utility (Sub Panel) window, select [Display]-[Mainframe
Path...].

[THEORY02-05-30]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-05-40

(2) Configuring Alternate paths


In mainframe systems, the concept of alternate paths is available for the purpose of avoiding
system down.
We recommend you to configure alternate paths based on the following priorities.

[For mainframe fibre 4 port adapter (4Mx16)]

Table 2-9 Internal structures and characteristics of Mainframe packages


Item Mainframe fibre 4 port adapter (4Mx16)
Internal structure (diagram): Ports 1A/3A/5A/7A and 1B/3B/5B/7B are provided by two CHBs. Each CHB contains two HTPs (each with multiple cores) and an HIE, and connects through the ISW to the MPUs.

Characteristics • Can access all processors through the HIE from one port
• One HTP controls two ports
• Control structure of two HTPs per CHB
HTP : Processor used for controlling FIBARC/FICON protocols
HIE : Interconnect channel board
ISW : Interconnect Switch
MPU : Processor unit

The mainframe fibre 4 port adapter can access all processors from the four ports. Regardless of
the locations of the used ports and the number of ports, it is always possible to perform processing
by using all processors. HTP is shared by two ports, so if you use ports of different HTPs (for
example, Port 1A and 3A), the throughput performance of one path is better than using ports of the
same HTP (for example, Port 1A and 5A).

In addition to the package structure described above, power redundancy is provided using the
cluster configurations.

Considering the structures and performance, we recommend you to set paths based on the
following priorities when configuring alternate paths.

Priority 1: Set paths to modules/clusters


Priority 2: Set paths to packages
Priority 3: Set paths to HTPs

[THEORY02-05-40]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-05-41

[For mainframe fibre 4 port adapter (4Mx32)]

Table 2-10 Internal structures and characteristics of Mainframe packages


Item Mainframe fibre 4 port adapter (4Mx32)
Internal structure (diagram): Ports 1A/3A/5A/7A and 1B/3B/5B/7B are provided by two CHBs. Each CHB contains four MFMH cores (one per port) and an HIE, shares management information blocks between pairs of ports, and connects through the ISW to the MPUs.

Characteristics • Can access all processors through the HIE from one port


• Each processor in the MFMH controls a different port.
MFMH : Processor used for controlling FICON protocols
HIE : Interconnect channel board
ISW : Interconnect Switch
MPU : Processor unit

The mainframe fibre 4 port adapter can access all processors from the four ports. Regardless of the
locations of the used ports and the number of ports, it is always possible to perform processing by
using all processors.
However, because some of the internal management information resources are shared between two ports,
a combination of two adjacent ports (for example, Port 1A and 3A) improves throughput performance
per path without resource contention, compared with a combination of ports that share the same
resources (for example, Port 1A and 5A).

In addition to the package structure described above, power redundancy is provided using the
cluster configurations.

Considering the structures and performance, we recommend you to set paths based on the
following priorities when configuring alternate paths.

Priority 1: Set paths to modules/clusters


Priority 2: Set paths to packages
Priority 3: Use from the top two ports of each CHB

[THEORY02-05-41]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-05-50

(3) Notes on High Performance FICON (HPF)


To use the High Performance FICON (HPF), the Mainframe Fibre Channel Adapter and the P.P.
license of High Performance Connectivity for FICON (R) are required.
Do not install the High Performance FICON (HPF) in the DKC in which no Mainframe Fibre
Channel Adapter is installed.
Do not remove all of the Mainframe Fibre Channel Adapters when High Performance FICON (HPF)
is installed.

(4) Notes on connection with an HBA that supports FICON Express16S series or FICON Express32S
Do not install or uninstall the P.P. license of High Performance Connectivity for FICON (R)
when paths are online.
For the online path connected with an HBA that supports IBM FICON Express16S series or
FICON Express32S, the logical path is temporarily released by installing or uninstalling the
P.P. license of Compatible High Performance Connectivity for FICON (R) . The logical path is
restored by the recovery of the host. However, there is a possibility that all logical paths of the path
group are released depending on the timing and a system down might be caused.

Occurrence condition:
• Mainframe path connected with an HBA that supports FICON Express16S series or FICON
Express32S.
• The path is online with the host.
• The P.P. license of Compatible High Performance Connectivity for FICON (R) is installed or
uninstalled.

[THEORY02-05-50]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-05-60

(5) Notes on use of FEC (Forward Error Correction) function


When using the FEC function, note the following:
• The FEC feature is enabled on both the 16G mainframe channel board (4Mx16) and the 32G
mainframe channel board (4Mx32), with connections between mainframe hosts and switches that
support the FEC feature.
• At a linkup in FEC enabled status, HTPLOG = 1D84 is output for 4Mx16. At a linkup in FEC
disabled status, SSB = 0D7D is output for 4Mx32.
• For the FEC-enabled connection with a switch that supports a 16Gbps or 32Gbps connection, it is
necessary to enable FEC and TTS for each port of the switch. Ask the vendor of the switch about
how to enable them.
• Only the FEC-enabled connection is available for a 16Gbps or 32Gbps connection of a switch
port with FEC/TTS enabled. If you are connecting a 4Mx32 FICON port and a 16Gbps switch
that does not support FEC, set the FICON port of 4Mx32 to AUTO.
• For FICON port of 4Mx16 or FICON port set to AUTO of 4Mx32, the FEC-enabled connection
is prioritized by the auto-negotiation. However, the FEC-disabled connection might be prioritized
depending on the state of the link with the connection destination.
• In the case of the point to point connection, you can check the connection status by executing the
command shown below from the console of the host (z/OS).
D M=DEV(xxxx,(yy)),LINKINFO=REFRESH
xxxx : Device Number: Device that is online as the specified CHPID
yy : CHPID : CHPID whose status you want to check
Executing the command displays the current link information of Channel/Control.
For details of the command display, refer to manuals of IBM z/OS.

The combination of the connection destination, FEC/TTS setting, and linkup speed is shown
below.

Table 2-11 Connection destination and link speed


• For 16Gbps
No. Connection destination Link setting FEC/TTS setting Linkup speed
1 HBA that supports FICON Auto enable 4/8/16/16 FEC
Express16S series
2 Switch Auto disable 4/8/16
3 enable 4/8/16 FEC
4 16G Fix disable 16
5 enable 16 FEC

• For 32Gbps
No. Connection destination Link setting FEC/TTS setting Linkup speed
1 HBA that supports FICON Auto enable 8/16/16 FEC/32 FEC
Express32S
2 32G Switch Auto disable 8/16
3 enable 8/16 FEC/32 FEC
4 32G Fix disable -
5 enable 32 FEC
[THEORY02-05-60]
Hitachi Proprietary DKC910I
Rev.6 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-06-10

2.6 65280 logical addresses


The mainframe host connection interface specifications are outlined in Table 2-12 and Table 2-13.

Table 2-12 List of Allowable Maximum Values of Mainframe Host Connection Interface Items
on the DKC Side
Item Fibre channel
Maximum number of CUs 255
Maximum number of SSIDs 1020
Maximum number of LDEVs 65280

Table 2-13 Allowable Range of Mainframe Host Connection Interface Items on DKC Side
Item Fibre channel
CU address 0 to FE (*1)
SSID 0004 to FFFD (*2)
Number of logical volumes 1 to 65280 (*3)
*1: Number of CUs connectable to the one FICON channel (CHPID) is 64 or 255.
In the case of 2107 emulation, the CU addresses in the interface with a host are 00 to FE for the
FICON channel.
*2: In the case of 2107 emulation, the SSID in the interface with a host is 0x0004 to 0xFEFF.
*3: Number of logical volumes connectable to one FICON channel (CHPID) is 32768.
NOTE: If you use a PPRC command and specify 0xFFXX as the SSID of the MCU and RCU, the command may
be rejected. Specify an SSID in the range of 0x0004 to 0xFEFF for the MCU and RCU.
XP8 cannot assign SSIDs 0x0001 to 0x0003 because XP8 uses them internally. When setting SSIDs
for a mainframe, follow the SSID range required by the mainframe.
If a specified value of SSID is out of the allowable range, the mainframe host might not be able
to use the volume. Specify an SSID value within the allowable range for the volume that the
mainframe host accesses.

Detailed numbers of logical paths of the mainframe fibre and serial channels are shown in Table 2-14.

Table 2-14 List of Numbers of Connectable Logical Paths


Item Fibre channel
Number of channel ports 16 to 192
Max. number of logical paths per CU 2048
Max. number of logical paths per port 65536 (*2) / 261120 (*1) (*2)
Max. number of logical paths per channel adapter 65536 / 261120 (*1)
Max. number of logical paths per system 131072 / 522240 (*1)
*1: In case of 2107 emulation.
*2: The maximum number of paths for connection to a host per fibre channel port is 1024
(1024 host paths × 255 CUs = 261120 logical paths for 2107 emulation).

[THEORY02-06-10]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-06-20

TrueCopy for Mainframe operations from a Web Console and the corresponding TSO commands are
shown in Table 2-15. Before using TSO commands or DSF commands for PPRC, the serial interface ports
to which the RCU(s) will be connected must be set to the Bidirectional mode.
Table 2-17 shows the value of the SAID (system adapter ID) parameters required for CESTPATH
command. For full description on TSO commands or DSF commands for PPRC, refer to the appropriate
manuals published by IBM corporation.

Table 2-15 TrueCopy for Mainframe operations and corresponding TSO commands for
PPRC
Function TC-MF operations TSO commands
Registering an RCU and establishing remote Add RCU CESTPATH (NOTE)
copy connections
Adding or removing remote copy connection(s) Edit Path CESTPATH
Deleting an RCU registration Delete RCU CDELPATH
Establishing an TC-MF volume pair Add Pair CESTPAIR MODE (COPY)
Suspending an TC-MF volume pair Suspend Pair CSUSPEND
Disestablishing an TC-MF volume pair Delete Pair CDELPAIR
Recovering an TC-MF volume pair from Resume Pair CESTPAIR MODE (RESYNC)
suspended condition
Controlling TC-MF volume groups CGROUP

NOTE: Required Parameters


(How to set up the LINK PARAMETER for the CESTPATH command)
LINK PARAMETER: aaaabbcc
aaaa : SAID (refer to Table 2-17)
bb : destination address
cc : CUI# of RCU
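
As a minimal sketch of how the 8-hex-digit LINK PARAMETER value is assembled, the Python fragment below concatenates the SAID, destination address, and RCU CU number. The helper name is hypothetical and the destination address and CU number in the example are placeholder values; only the SAID comes from Table 2-17.

def cestpath_link_parameter(said: int, destination_address: int, cu_number: int) -> str:
    # Builds 'aaaabbcc' (aaaa = SAID, bb = destination address, cc = CU# of the RCU).
    return f"{said:04X}{destination_address:02X}{cu_number:02X}"

# Hypothetical example: SAID 0x0000 (DKC-0, CHB 1A, Port 1A in Table 2-17),
# destination address 0x2A, RCU CU number 0x00.
print(cestpath_link_parameter(0x0000, 0x2A, 0x00))   # 00002A00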

[THEORY02-06-20]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-06-30

SAID setting values for the DKC emulation type 2107 are shown in Table 2-16.

Table 2-16 SAID Setting Values for DKC Emulation Type 2107
DKC CHB Port SAID CHB Port SAID
Location Location (HEX) Location Location (HEX)
DKC-0 1A 1A 0000 2A 1B 0010
3A 0001 3B 0011
5A 0002 5B 0012
7A 0003 7B 0013
1B 1C 0004 2B 1D 0014
3C 0005 3D 0015
5C 0006 5D 0016
7C 0007 7D 0017
1E 1E 0008 2E 1F 0018
3E 0009 3F 0019
5E 000A 5F 001A
7E 000B 7F 001B
1F 1G 000C 2F 1H 001C
3G 000D 3H 001D
5G 000E 5H 001E
7G 000F 7H 001F
DKC-1 1A 2A 0020 2A 2B 0030
4A 0021 4B 0031
6A 0022 6B 0032
8A 0023 8B 0033
1B 2C 0024 2B 2D 0034
4C 0025 4D 0035
6C 0026 6D 0036
8C 0027 8D 0037
1E 2E 0028 2E 2F 0038
4E 0029 4F 0039
6E 002A 6F 003A
8E 002B 8F 003B
1F 2G 002C 2F 2H 003C
4G 002D 4H 003D
6G 002E 6H 003E
8G 002F 8H 003F
(To be continued)

[THEORY02-06-30]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-06-40

(Continued from preceding page)


DKC CHB Port SAID CHB Port SAID
Location Location (HEX) Location Location (HEX)
DKC-2 1A 1J 0040 2A 1K 0050
3J 0041 3K 0051
5J 0042 5K 0052
7J 0043 7K 0053
1B 1L 0044 2B 1M 0054
3L 0045 3M 0055
5L 0046 5M 0056
7L 0047 7M 0057
1E 1N 0048 2E 1P 0058
3N 0049 3P 0059
5N 004A 5P 005A
7N 004B 7P 005B
1F 1Q 004C 2F 1R 005C
3Q 004D 3R 005D
5Q 004E 5R 005E
7Q 004F 7R 005F
DKC-3 1A 2J 0060 2A 2K 0070
4J 0061 4K 0071
6J 0062 6K 0072
8J 0063 8K 0073
1B 2L 0064 2B 2M 0074
4L 0065 4M 0075
6L 0066 6M 0076
8L 0067 8M 0077
1E 2N 0068 2E 2P 0078
4N 0069 4P 0079
6N 006A 6P 007A
8N 006B 8P 007B
1F 2Q 006C 2F 2R 007C
4Q 006D 4R 007D
6Q 006E 6R 007E
8Q 006F 8R 007F
(To be continued)

[THEORY02-06-40]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-06-50

(Continued from preceding page)


DKC CHB Port SAID CHB Port SAID
Location Location (HEX) Location Location (HEX)
DKC-4 1A 9A 0080 2A 9B 0090
BA 0081 BB 0091
DA 0082 DB 0092
FA 0083 FB 0093
1B 9C 0084 2B 9D 0094
BC 0085 BD 0095
DC 0086 DD 0096
FC 0087 FD 0097
1E 9E 0088 2E 9F 0098
BE 0089 BF 0099
DE 008A DF 009A
FE 008B FF 009B
1F 9G 008C 2F 9H 009C
BG 008D BH 009D
DG 008E DH 009E
FG 008F FH 009F
DKC-5 1A AA 00A0 2A AB 00B0
CA 00A1 CB 00B1
EA 00A2 EB 00B2
GA 00A3 GB 00B3
1B AC 00A4 2B AD 00B4
CC 00A5 CD 00B5
EC 00A6 ED 00B6
GC 00A7 GD 00B7
1E AE 00A8 2E AF 00B8
CE 00A9 CF 00B9
EE 00AA EF 00BA
GE 00AB GF 00BB
1F AG 00AC 2F AH 00BC
CG 00AD CH 00BD
EG 00AE EH 00BE
GG 00AF GH 00BF

[THEORY02-06-50]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-06-60

Table 2-17 SAID Setting Values Used for CESTPATH Command Parameter
DKC CHB Port SAID CHB Port SAID CHB Port SAID CHB Port SAID
Location Location (HEX) Location Location (HEX) Location Location (HEX) Location Location (HEX)
DKC-0 1A 1A 0000 1B 1C 0002 1E 1E 0004 1F 1G 0006
3A 0020 3C 0022 3E 0024 3G 0026
5A 0040 5C 0042 5E 0044 5G 0046
7A 0060 7C 0062 7E 0064 7G 0066
2A 1B 0001 2B 1D 0003 2E 1F 0005 2F 1H 0007
3B 0021 3D 0023 3F 0025 3H 0027
5B 0041 5D 0043 5F 0045 5H 0047
7B 0061 7D 0063 7F 0065 7H 0067
DKC-1 1A 2A 0010 1B 2C 0012 1E 2E 0014 1F 2G 0016
4A 0030 4C 0032 4E 0034 4G 0036
6A 0050 6C 0052 6E 0054 6G 0056
8A 0070 8C 0072 8E 0074 8G 0076
2A 2B 0011 2B 2D 0013 2E 2F 0015 2F 2H 0017
4B 0031 4D 0033 4F 0035 4H 0037
6B 0051 6D 0053 6F 0055 6H 0057
8B 0071 8D 0073 8F 0075 8H 0077
DKC-2 1A 1J 0008 1B 1L 000A 1E 1N 000C 1F 1Q 000E
3J 0028 3L 002A 3N 002C 3Q 002E
5J 0048 5L 004A 5N 004C 5Q 004E
7J 0068 7L 006A 7N 006C 7Q 006E
2A 1K 0009 2B 1M 000B 2E 1P 000D 2F 1R 000F
3K 0029 3M 002B 3P 002D 3R 002F
5K 0049 5M 004B 5P 004D 5R 004F
7K 0069 7M 006B 7P 006D 7R 006F
DKC-3 1A 2J 0018 1B 2L 001A 1E 2N 001C 1F 2Q 001E
4J 0038 4L 003A 4N 003C 4Q 003E
6J 0058 6L 005A 6N 005C 6Q 005E
8J 0078 8L 007A 8N 007C 8Q 007E
2A 2K 0019 2B 2M 001B 2E 2P 001D 2F 2R 001F
4K 0039 4M 003B 4P 003D 4R 003F
6K 0059 6M 005B 6P 005D 6R 005F
8K 0079 8M 007B 8P 007D 8R 007F
(To be continued)

[THEORY02-06-60]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-06-70

(Continued from preceding page)


DKC CHB Port SAID CHB Port SAID CHB Port SAID CHB Port SAID
Location Location (HEX) Location Location (HEX) Location Location (HEX) Location Location (HEX)
DKC-4 1A 9A 0080 1B 9C 0082 1E 9E 0084 1F 9G 0086
BA 00A0 BC 00A2 BE 00A4 BG 00A6
DA 00C0 DC 00C2 DE 00C4 DG 00C6
FA 00E0 FC 00E2 FE 00E4 FG 00E6
2A 9B 0081 2B 9D 0083 2E 9F 0085 2F 9H 0087
BB 00A1 BD 00A3 BF 00A5 BH 00A7
DB 00C1 DD 00C3 DF 00C5 DH 00C7
FB 00E1 FD 00E3 FF 00E5 FH 00E7
DKC-5 1A AA 0090 1B AC 0092 1E AE 0094 1F AG 0096
CA 00B0 CC 00B2 CE 00B4 CG 00B6
EA 00D0 EC 00D2 EE 00D4 EG 00D6
GA 00F0 GC 00F2 GE 00F4 GG 00F6
2A AB 0091 2B AD 0093 2E AF 0095 2F AH 0097
CB 00B1 CD 00B3 CF 00B5 CH 00B7
EB 00D1 ED 00D3 EF 00D5 EH 00D7
GB 00F1 GD 00F3 GF 00F5 GH 00F7

[THEORY02-06-70]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-07-10

2.7 GUM and Peripheral Connections


2.7.1 GUM block diagram and error detection
The GUM periodically checks the communication (LAN/PCI) between GUM and MPs and between GUM
and devices other than MPs and reports detected errors.
The block diagram, error types, and items to be reported are shown below.

Figure 2-2 GUM Block Diagram

The block diagram shows the GUM connected to the MPs through PCI and to the LANHUB, which provides the management port and the maintenance port and is also connected to the LANHUB of the other CTL. The monitoring performed is as follows:
• Monitoring whether the communication between the GUM and the MPs is alive (PCI)
• Monitoring whether the communication between the GUM and the MPs is alive (LAN ping and asynchronous HTTP API)
• Monitoring whether the communication between the GUM and the GUM of the other CTL is alive (LAN ping and asynchronous HTTP API)

[THEORY02-07-10]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY02-07-20

Table 2-18 Error List


Error Detection Detection trigger Phenomenon SSB SIM Remarks
process output issue
GUM H/B error GUM error MP health check (PCI) No change in MP H/B 33d4
monitoring for 15 minutes
MP health check (LAN) No response for ping 33c3
(in the same CTL)
MP health check (LAN) No response for ping 33c4
(in the other CTL)
MP health check (LAN) No response for ping 7d01xx
from any MP (in the
same CTL)
MP health check (LAN) No response for ping 7d02xx
from any MP (in the
other CTL)
MP health check MP blockage 33c3 7d06xx
(asynchronous HTTP (WCHK1) 33c4
API) 33d4
LANHUB error GUM periodic GUM health check Ping issue failure 33d9 7d02xx * Repeating the SSB output 15
diagnosis (ping) times makes the SIM issued.
LANHUB diagnosis eth0 link up 33c1
LANHUB diagnosis eth0 link down 33c2
LANHUB diagnosis eth1 link down 33c5 7d01xx * Three successive outputs of the
same SSB make the SIM issued.
LANHUB diagnosis eth1 invalid link speed 33d5
LANHUB diagnosis p0-p2 link down 33c6 7d01xx * Three successive outputs of the
same SSB make the SIM issued.
LANHUB diagnosis p3 link down 33d6 7d02xx * Three successive outputs of the
same SSB make the SIM issued.
LANHUB diagnosis p0-3 abnormal link 33c9 7d01xx * Three successive outputs of the
speed same SSB make the SIM issued.
LANHUB diagnosis p4 link up 33c8 Detection of the Maintenance PC
connection
LANHUB diagnosis p4 link down 33c7 Detection of the Maintenance PC
disconnection
LANHUB diagnosis Abnormal end of 33ca
diagnosis
MP state GUM periodic MP state (alive or not) The MP state (alive or f601 aff1xx
(alive or not) diagnosis report to GUM not) report does not f604
monitoring error reach GUM.

Terms
MP
Microprocessor. The processor that executes storage processing.
GUM
Gateway for Unified Management. The system that controls the storage configuration and so on. The
GUM provides Maintenance Utility as GUI.
H/B
Heart Beat. Used to check the state (alive or not) and normality by communicating and responding
periodically.
PCI
Peripheral Component Interconnect. The circuit that performs data transmission (sending and receiving)
among processors and peripheral devices.
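
The health-check-and-report pattern summarized in Table 2-18 (periodic probes, SSB counting, SIM issue after a threshold) can be sketched conceptually as follows. This is an illustrative Python fragment only: the ping invocation assumes a Linux iputils ping, the thresholds are the ones quoted in the Remarks column, and none of the names correspond to the actual GUM software.

import subprocess
from collections import defaultdict

SIM_THRESHOLD = {"7d01xx": 3, "7d02xx": 15}   # thresholds quoted in the Remarks column
consecutive_failures = defaultdict(int)

def ping_alive(address: str) -> bool:
    # One LAN health-check probe: a single ping with a 1-second timeout.
    result = subprocess.run(["ping", "-c", "1", "-W", "1", address],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def health_check(address: str, sim_code: str) -> None:
    # Count consecutive failures (each failure would output an SSB) and issue the SIM
    # only when the threshold for that SIM reference code is reached.
    if ping_alive(address):
        consecutive_failures[sim_code] = 0
        return
    consecutive_failures[sim_code] += 1
    if consecutive_failures[sim_code] == SIM_THRESHOLD[sim_code]:
        print(f"issue SIM {sim_code}")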

[THEORY02-07-20]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-01-10

3. Software Specifications
3.1 Micro-program and Program Product
Software can be categorized into two basic types: micro-program and program product (PP). The micro-
program is the essential software for controlling the DKC910I storage system while the PPs provide a variety
of storage system functions to customers. Customers can select a suitable PP software package, in which
same kinds of PPs are packed together, according to their needs.

A host can read and write the data on the storage system by installing PPs in addition to the micro-program.
PPs also allow customers to handle volumes of external storage systems virtually on the DKC910I storage
system, copy volumes, and make use of various features. For the list of PPs, see THEORY03-03-10.

Upgrades of the micro-program versions are performed by a maintenance person. Micro-program upgrades
are automatically applied to all DKCs. Therefore, a maintenance person does not need to upgrade the micro-
program for each DKC.
PPs chosen by a customer are installed in the storage system before shipment. If the customer wants to add
PPs, the customer or system administrator needs to install the license keys for the PPs. The license keys are
applied to the whole storage system.

[THEORY03-01-10]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-02-10

3.2 Logical Components Defined by Software


Logical components defined by software are shown below. The following is the basic terminology.

1. Logical components related to volumes


• Logical volume or LDEV (Logical Device)
Data is distributed and stored in multiple drives in a RAID configuration to provide high redundancy.
The data storage area across the multiple drives is referred to as a logical volume or logical device
(LDEV). In this manual, volume is sometimes used instead.

• Internal volume and external volume


An internal volume is a logical volume located in the local storage system.
An external volume is a logical volume which is mapped from a volume located in another storage
system (external storage system) to the local storage system.

(Figure: internal volumes are located in the local storage system; a volume in the external storage system is mapped to the local storage system as an external volume.)

[THEORY03-02-10]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-02-20

• DP-VOL, virtual volume, pool, and pool VOL


A DP-VOL (Dynamic Provisioning Volume) is a virtually created volume that can have a larger capacity
than the actual physical capacity. In this manual, a DP-VOL is sometimes referred to as a virtual volume.
The volume capacity is virtualized by allocating actual area from a pool according to write requests to
a DP-VOL by a host.

A pool VOL is a logical volume comprised of multiple drives and is a component of a pool.
A pool is a virtual area composed of one or more pool VOLs. A pool capacity can be expanded by
adding pool VOLs to the pool. Creating DP-VOLs from a pool allows you to allocate volumes to a host
without considering physical drives.

(Figure: a host accesses DP-VOLs; each DP-VOL is allocated actual area from a pool, the pool is composed of pool VOLs, and the pool VOLs are composed of physical drives.)
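The allocation behavior described above can be illustrated with a minimal Python sketch. It is conceptual only: the page size, class names, and allocation policy are illustrative assumptions and do not represent the actual DKC910I implementation.

# Conceptual sketch of Dynamic Provisioning: actual area (pages) is taken from
# the pool only when a host writes to an unallocated area of a DP-VOL.
PAGE_MIB = 42  # illustrative page size (assumption for this sketch)

class Pool:
    def __init__(self, pool_vol_capacities_mib):
        # The pool capacity is the sum of its pool-VOL capacities.
        self.free_pages = sum(c // PAGE_MIB for c in pool_vol_capacities_mib)

    def allocate_page(self):
        if self.free_pages == 0:
            raise RuntimeError("pool is full")
        self.free_pages -= 1

class DPVol:
    def __init__(self, virtual_capacity_mib, pool):
        self.virtual_pages = virtual_capacity_mib // PAGE_MIB
        self.pool = pool
        self.allocated = set()  # page indexes that already have actual area

    def write(self, page_index):
        if page_index >= self.virtual_pages:
            raise IndexError("beyond the virtual capacity")
        if page_index not in self.allocated:   # first write to this page
            self.pool.allocate_page()
            self.allocated.add(page_index)

pool = Pool([1024 * 1024, 1024 * 1024])   # two pool VOLs of 1 TiB each
vol = DPVol(10 * 1024 * 1024, pool)       # DP-VOL with 10 TiB virtual capacity
vol.write(0)                              # the first write consumes one pool page
print(len(vol.allocated), pool.free_pages)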

[THEORY03-02-20]
Hitachi Proprietary DKC910I
Rev.8 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-02-30

2. Logical components related to host connection


• LU path and alternate path
An HBA for a host and a CHB for the storage system are connected through Fibre Channel. A data
I/O path between a host and a port on a CHB for the storage system is referred to as an LU path or LUN
path. Another LU path defined in case of a failure in the data I/O path is referred to as an alternate path.
A port on another CHB must be assigned to an alternate path.

(Figure: a host is connected to ports on two different CHBs in the storage system; one connection is the LU path and the other is the alternate path.)

• Host group
A group of hosts that are connected to the same port of the storage system and operate on the same
platform is referred to as host group. To connect a host to the storage system, register the host to a
host group, associate the host group to a port, and then allocate logical volumes to the combination of
the host group and the port.

• Path and host in NVMe over Fabrics


When NVMe over Fabrics is used for the Fibre Channel connection between HBA of the host and
CHB of the storage system, the path for data input and output from the host is configured by defining a
logical volume as a namespace of an NVM subsystem that has a CHB port set as an NVM subsystem
port.

• NVM subsystem
The NVM subsystem is the control system of the flash memory storage using the NVMe protocol that
has one or more namespaces and one or more communication ports. The NVM subsystem is defined as
a logical resource under which storage system logical volumes, channel ports, and NVMe connection
hosts that use the logical volumes are grouped.

• Namespace
The namespace is a flash memory space formatted in logical blocks. By defining logical volumes in the
storage system as namespaces, the host can use the logical volumes as the ones supporting NVMe.

[THEORY03-02-30]
Hitachi Proprietary DKC910I
Rev.8 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-02-31

• NVM subsystem port


The NVM subsystem port is the communication port used for host access to the NVM subsystem and
namespace. A storage system CHB port connected to a host adapter via the network fabric is set as an
NVM subsystem port. That enables the host to input and output data from logical volumes through the
NVMe protocol.

3. Logical components related to cache


• Shared memory
The shared memory is the memory that logically exists in the cache memory. The abbreviation for the
shared memory, SM, is also used in this manual. The common information of the storage system, the
cache management information (directory), and so on are stored in the shared memory. The capacity of
a pool and virtual volume that can be created, and so on vary depending on the capacity of the shared
memory. To add the shared memory capacity, set the shared memory function.

• CLPR
The cache memory can be logically divided. Each cache partition after the division is referred to as
CLPR. Allocating CLPRs to DP-VOLs and parity groups prevents a host from occupying a large part
of the cache memory.

[THEORY03-02-31]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-02-40

4. Logical components related to replication


• Pair, primary volume, and secondary volume
A combination of an original volume and the copy of the volume is called pair. An original volume
is referred to as primary volume or P-VOL, and the copy of the volume is referred to as secondary
volume or S-VOL.
Copying a volume is creating a pair. When you create a pair, data in a primary volume is copied to a
secondary volume (initial copy). After the initial copy, data written to the primary volume is copied to
the secondary volume (update copy), and the secondary volume data always coincides with the primary
volume data in the pair. You can split and resynchronize a created pair. If you split a pair, the update of
the primary volume is not applied to the secondary volume, but the pair is kept. If you resynchronize a
split pair, the secondary volume data coincides with the primary volume data.
Location of a secondary volume, copy method, and copy timing differ depending on program products
(PP). Sometimes, a PP name is added to pair (for example, TrueCopy pair (TC pair) and Universal
Replicator pair (UR pair)).

• Port, remote path, alternate path


When a primary volume and a secondary volume are located in different storage systems, the storage
systems are connected through ports.
A data I/O path between the ports is referred to as a remote path. Another remote path defined in case of
a failure in the data I/O path is referred to as an alternate path. A port on another CHB must be assigned
to an alternate path.

(Figure: the primary volume in one storage system and the secondary volume in another storage system form a pair; the remote path connects a CHB port of each storage system in the copy direction, and the alternate path is defined through ports on other CHBs.)

[THEORY03-02-40]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-03-10

3.3 Program Product (PP) List


The following tables list the PPs usable on the DKC910I storage system. For details of PPs, refer to the
corresponding User Guides.
A: Available, N/A: Not available
PP    Abbreviation    Supported Platform (Open System / Mainframe)    User Guide
Dynamic Provisioning DP, HDP A A Provisioning Guide for Open Systems
Dynamic Provisioning for Mainframe Provisioning Guide for Mainframe
dedupe and compression - A N/A Systems
Dynamic Tiering DT, HDT A A
Dynamic Tiering for Mainframe
active flash - A A
active flash for mainframe
Resource Partition Manager - A A
Data Retention Utility DRU A N/A
Volume Retention Manager VRM N/A A
Virtual LUN VLL A A
Virtual LVI
LUN Manager LUNM A N/A
Thin Image TI A N/A Thin Image User Guide
Thin Image Advanced TIA A N/A Thin Image Advanced User Guide
Adaptive Data Reduction ADR A N/A Provisioning Guide for Open Systems
Thin Image Advanced User Guide
Universal Volume Manager UVM A A Universal Volume Manager User
Guide
Virtual Partition Manager VPM A A Performance Guide
Performance Monitor - A A (Performance Monitor, Server Priority
Server Priority Manager SPM A N/A Manager)
Compatible PAV PAV N/A A Hitachi Compatible PAV User Guide
Volume Migration VM A A Hitachi Volume Migration User Guide
SNMP Agent - A A SNMP Agent User Guide
Audit Log - A A Audit Log User Guide
Encryption License Key - A A Encryption License Key User Guide
Volume Shredder - A A Hitachi Volume Shredder User Guide
ShadowImage SI A A ShadowImage User Guide
ShadowImage for Mainframe SI-MF ShadowImage for Mainframe User
Guide
(To be continued)

[THEORY03-03-10]
Hitachi Proprietary DKC910I
Rev.1.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-03-20

(Continued from preceding page)


A: Available, N/A: Not available
PP    Abbreviation    Supported Platform (Open System / Mainframe)    User Guide
Compatible FlashCopy® V2 FlashCopy N/A A Hitachi Compatible FlashCopy® User
Compatible Software for IBM® Guide
FlashCopy® SE
TrueCopy TC A A Hitachi TrueCopy® User Guide
TrueCopy for Mainframe TC-MF Hitachi TrueCopy® for Mainframe
User Guide
Universal Replicator UR A A Hitachi Universal Replicator User
Universal Replicator for Mainframe UR-MF Guide
Hitachi Universal Replicator for
Mainframe User Guide
global-active device GAD A N/A Global-Active Device User Guide
Compatible XRC XRC N/A A Compatible XRC User Guide

[THEORY03-03-20]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY03-04-10

3.4 GUM and Related Software


When the GUM detects an error, SSB/SIM is reported.
The following is a software block diagram related to notification.

(Summary of the software block diagram)
1. The MP registers SIMs in the SIM transfer queue.
2. SIM transfer processing in each GUM (up to 12 GUMs) periodically monitors the queue, extracts the SIMs to be transferred, and transfers them to the appropriate notification processes.
3. The SNMP, E-mail, and Syslog notification processes on the SVP perform alert notification, E-mail transmission, and Syslog recording.
Each CTL performs notification processes independently.


In the single SVP configuration, CTL-x2 in CBX-x executes notification via CTL-x1.
If notification fails, it will be retried until it succeeds.
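The flow above can be expressed as a minimal Python sketch. The queue handling and handler names are illustrative assumptions; the sketch only mirrors the order of steps 1 to 3 in the summary.

# Illustrative sketch of the SIM notification flow: the MP registers SIMs in a
# transfer queue, and a periodic GUM cycle hands them to the notification processes.
from collections import deque

sim_transfer_queue = deque()        # (1) MPs register SIMs here

def register_sim(reference_code):
    """Called when the MP generates a SIM."""
    sim_transfer_queue.append(reference_code)

def gum_transfer_cycle(handlers):
    """(2) One periodic GUM cycle: extract queued SIMs and pass them to
    the notification processes (3)."""
    while sim_transfer_queue:
        sim = sim_transfer_queue.popleft()
        for notify in handlers.values():
            notify(sim)             # SNMP / E-mail / Syslog notification

log = []
handlers = {
    "snmp":   lambda sim: log.append("SNMP trap for SIM " + sim),
    "email":  lambda sim: log.append("E-mail for SIM " + sim),
    "syslog": lambda sim: log.append("Syslog record for SIM " + sim),
}

register_sim("610000")
gum_transfer_cycle(handlers)
print(log)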

[THEORY03-04-10]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY04-01-10

4. Maintenance Work
4.1 Overview of Maintenance Work
Maintenance work of the storage system includes addition and removal of optional components, preventive
replacement of installed components, change of setting information, and micro-program update, as well as
troubleshooting. Maintenance work can be performed while the storage system is operating.

Troubleshooting must be started as soon as a failure notification from a customer or a report from the remote
monitoring system is received. A maintenance person isolates a failed part by analyzing the notification or
report, and then performs recovery actions according to the troubleshooting workflow. Recovery actions for
some types of failures might need to be performed by a customer.
Maintenance work other than troubleshooting is performed upon a request from the Technical Support
Division.

If a failure occurs, the system must be restored quickly. Because the storage system has a redundant
configuration, it can continue operating after a failure occurs, but the redundancy becomes incomplete. If
another failure occurs in a normally operating part before the storage system is restored, the system might
go down.

[THEORY04-01-10]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY04-02-10

4.2 Maintenance Management Tools for Maintenance Person and Their Usage
The following maintenance management tools installed in the SVP are used for maintenance work of the
storage system. The tools are operated by remotely connecting to the SVP.

: Usable : Not usable


Tool Name Usage Maintenance Customer
Person
SVP Window The SVP window is a GUI used for operations for the whole -
storage system.
Main uses:
Status check and various settings of the storage system
Failure check according to SIM-RC and SSB log
Dump collection
Status check of volumes
Version check of micro-program
Micro-program update
Maintenance Utility Maintenance Utility is a GUI used for operations for storage -
system components.
Main uses:
Status check of the storage system and each component
Replacement of a failed component (*1)
Addition and removal of optional components (*1)
Web Console / Storage Navigator is a GUI for customers which is used for setting
Storage Navigator and viewing the storage system configuration.
Web Console is a GUI for maintenance persons which contains
menus dedicated for them in addition to Storage Navigator
functions. Maintenance persons use Web Console when creating
parity groups, allocating spare drives, and so on, according to
instructions in the maintenance manual.
Command Control CCI is a CLI used for setting and viewing the storage system
Interface (CCI) configuration. CCI is mainly used by customers, but sometimes
used by maintenance persons for performing procedures described
in the maintenance manual.
Main uses:
Initialization, restoration, deletion, and setting change of pools
and volumes
*1: Maintenance Utility is used for the addition, removal, and replacement of components other than
the SVP or SSVP. The SVP window is used for the addition, removal, and replacement of the SVP
and SSVP.

[THEORY04-02-10]
Hitachi Proprietary DKC910I
Rev.1.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY04-03-10

4.3 Troubleshooting Workflow


When a failure notification from a customer or a report from the remote monitoring system is received,
troubleshooting must be started. Troubleshooting workflows differ depending on failure types.
The following is a typical workflow. For details, see 1. Overview of TRBL and 2. Isolation of Failed Part
in TROUBLESHOOTING SECTION.

(Summary of the workflow figure)
• Trigger: a failure notification from a customer or a report from the remote monitoring system.
• Check the storage system status and whether a SIM-RC (*1) is reported.
• If a recovery procedure in TROUBLESHOOTING SECTION needs to be performed for the SIM-RC, perform the recovery procedure in TROUBLESHOOTING SECTION; otherwise, check ACCs (*2) to identify the components to be replaced, and then replace the components.
• Confirm that all failures are resolved.

*1: SIM-RC is a reference code that represents an error name, and is viewed in the SVP window.
*2: ACC is a code that indicates a location of a failed part, and is displayed with SIM-RC.

[THEORY04-03-10]
Hitachi Proprietary DKC910I
Rev.6 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY04-04-10

4.4 Important Precautions during Maintenance Work


Important precautions on the storage system maintenance work are shown below.

• Perform maintenance work when a customer is not changing a storage system configuration.

• Do not start Maintenance Utility by entering an IP address of a CTL in your browser's address bar.

• Maintenance work can be generally performed without stopping I/O. However, in some cases, depending
on the failure, maintenance procedure, and configuration, I/O needs to be stopped. If the maintenance
manual instructs you to stop I/O, ask your customer to stop I/O.

• A customer might change the password for the maintenance account of the storage system after the storage
system is installed. Ask the customer about the password for the maintenance account of the storage
system.

• Before collecting dumps, check that the loads on CTLs are not high. Dump collections, which are
concurrently executed by all CTLs, impose a heavy load on the storage system.

• Color code labels for distinguishing connection destinations are attached to the connection cable between
CBX and HSN Box. Colors presented in the maintenance manual might be different from actual colors.
When performing maintenance work, be sure to check location numbers printed on the labels in addition to
colors.

• Setting change operations of the Windows OS on the SVP are prohibited unless specifically allowed.

Precautions are described also in the beginning or middle of the maintenance procedures in the maintenance
manual. When performing the maintenance procedures, be sure to read the precautions.

[THEORY04-04-10]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY04-05-10

4.5 User account security policies


The HDvM-SN user accounts are protected from unauthorized use by user-defined password and login
requirements. The HDvM-SN Security Administrator can enable, disable, and specify these user account
security settings by using HDvM-SN.

The user account security settings are included in the HDvM-SN backup configuration file. If the HDvM-
SN configuration is restored from a backup configuration file that contains old information, a user account
will be locked out if its password has expired according to that old information. In this case, the Security
Administrator must release the account lockout.

User account security events are recorded in the audit log for the storage system, except for the following
three events that are not recorded in the audit log:
• Account lockout when the password has expired.
• Account lockout when the user exceeds the maximum number of login attempts.
• Account unlock when the lockout mode is lock.

NOTICE: For CLI and API users:


• The password must be changed at the first login and before expiration.
• Password expiration warnings are not issued by the CLI or API.

[THEORY04-05-10]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY04-05-20

4.5.1 Password requirements


1. Character requirements:
• Minimum number of numeric characters (0-256, default = 0)
• Minimum number of uppercase letters (0-256, default = 0)
• Minimum number of lowercase letters (0-256, default = 0)
• Minimum number of symbol characters (0-256, default = 0)
• Minimum total number of characters (6-256, default = 8)
• Allowed symbol characters: ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~
2. Number of previous passwords that cannot be used (1-10, default = 1)
3. Limit available keywords (yes or no)
4. Require initial password reset (password change on first login) (enabled or disabled)
To enable this function, you must select Yes on the User Account Policies window and also on either the
Create User window or the Edit User window. If No is selected on the User Account Policies window, or
if No is selected on both the Create User and Edit User windows, this function is disabled.
When this function is enabled, HDvM-SN users will not be able to perform any other operations until
they reset their initial password.
If the initial password reset function is changed from disabled to enabled, existing users are not asked to
change their current password at the next login.
5. Password validity period (disabled (default) or number of days, range = 1-365)
If there is only one local user account with the Security Administrator role for the storage system, the
account is not locked out when the password has expired.
The password validity period is not checked until the first time the user changes the password after the
Security Administrator configures the user account security policy. To enforce the password validity
period, the Security Administrator or user must change the password. The Security Administrator can
check the password expiration dates in the HDvM-SN user list.
6. Password change prohibition period (disabled (default) or number of days, range = 1-10)
The password change prohibition period is not checked until the user first changes the password
after the Security Administrator configures the user account security policy. To enforce the password
change prohibition period, the Security Administrator or user must change the password. The Security
Administrator can check the password expiration dates in the HDvM-SN user list.

NOTICE: A password cannot be changed when any of the following conditions applies:
• The password change prohibition period has not yet elapsed.
• The requirements for the new password (for example, number of uppercase letters)
are not met.
• The new password is the same as a previous password within the defined range.
• The new password contains a user name.
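The character requirements and the change-prohibition conditions in the NOTICE can be expressed as a simple validation sketch in Python. The parameter names and defaults follow the list above; the function itself is an illustrative assumption, not the HDvM-SN implementation.

# Illustrative check of a new password against the user account policy above.
ALLOWED_SYMBOLS = set("!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~")

def check_password(new_pw, user_name, previous_pws,
                   min_numeric=0, min_upper=0, min_lower=0,
                   min_symbol=0, min_total=8, history_depth=1):
    """Return a list of policy violations (an empty list means acceptable)."""
    errors = []
    if sum(c.isdigit() for c in new_pw) < min_numeric:
        errors.append("too few numeric characters")
    if sum(c.isupper() for c in new_pw) < min_upper:
        errors.append("too few uppercase letters")
    if sum(c.islower() for c in new_pw) < min_lower:
        errors.append("too few lowercase letters")
    if sum(c in ALLOWED_SYMBOLS for c in new_pw) < min_symbol:
        errors.append("too few symbol characters")
    if len(new_pw) < min_total:
        errors.append("too few characters in total")
    if any((not c.isalnum()) and (c not in ALLOWED_SYMBOLS) for c in new_pw):
        errors.append("contains a symbol that is not allowed")
    if new_pw in previous_pws[-history_depth:]:
        errors.append("same as a previous password within the defined range")
    if user_name and user_name in new_pw:
        errors.append("contains the user name")
    return errors

print(check_password("Maint2024!", "admin", ["OldPassw0rd"]))   # []
print(check_password("admin123", "admin", [], min_upper=1))     # two violations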

[THEORY04-05-20]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY04-05-30

4.5.2 Login requirements


1. Maximum number of login attempts
2. Lockout mode (lock or disable)
The account is either locked or disabled when the user exceeds the maximum number of login attempts.
3. Duration of the account lockout period (seconds, 60-345,600, default = 60)

NOTICE: When the maximum number of login attempts is exceeded and the lockout mode is
lock, the user must wait until the account lockout period has elapsed before logging
in again. When the maximum number of login attempts is exceeded and the lockout
mode is disable, the Security Administrator must re-enable the user account and
reset the password.

4.5.3 Password expiration


Users are notified about password expiration by email and also when logging in to HDvM-SN.

1. Email notifications
• Warning 30 days before password expiration
• Warning 14 days before password expiration and daily thereafter
• Notification at password expiration and none thereafter
2. Login notifications
• Warning at each GUI login starting 14 days before password expiration
• Login failure at each login after password expiration

To prevent a password from expiring, the user must change the existing password before the end of the day
(23:59 or earlier) on which the password expires.

After a password has expired, the Security Administrator must re-enable the account and reset the password.

NOTICE: • If a password expires while the user is logged in, the next navigation within HDvM-
SN will fail and the user account is disabled. The user must contact the Security
Administrator to regain access to HDvM-SN.
• The mail server settings for email notification of password expiration are not backed
up in the HDvM-SN configuration file. If the backup HDvM-SN configuration file is
applied, you must re-enter the email notification settings for password expiration.
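The notification timing described above can be summarized in a small decision sketch (illustrative only; the day thresholds are those listed in 4.5.3).

# Illustrative mapping from the number of days until password expiration to the
# e-mail and login behavior described above (negative values mean already expired).
def email_notice(days_left):
    if days_left == 30:
        return "warning: password expires in 30 days"
    if 0 < days_left <= 14:
        return "warning: password expires in {} day(s)".format(days_left)
    if days_left == 0:
        return "notification: password expires today"
    return None   # no e-mail outside these points

def login_result(days_left):
    if days_left < 0:
        return "login fails: password has expired"
    if days_left <= 14:
        return "login succeeds with an expiration warning"
    return "login succeeds"

for d in (30, 14, 1, 0, -1):
    print(d, email_notice(d), login_result(d))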

[THEORY04-05-30]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY04-05-40

4.5.4 Account lockout


The following table lists and describes the user account lockout specifications.

NOTICE: The account lockout specifications apply to all user accounts, including the
maintenance personnel user account. If the maintenance personnel user account
becomes locked or disabled, the Security Administrator must re-enable the account
and reset the password.

Lockout type Lockout triggers Unlock details


Account locked      Trigger: The maximum number of failed login attempts has been exceeded.
                    Unlock: The user can log in after the lockout period has elapsed.
Account disabled    Triggers: The maximum number of failed login attempts has been exceeded, or the password has expired.
                    Unlock: The Security Administrator must reset the password.

4.5.5 Additional notices


1. If a version that supports this function is downgraded to an unsupported version, the settings will be
deleted. Configure the settings again when the version is upgraded to a supported version.
2. When the user account policies are configured, each parameter of the existing user accounts contains the
following values:
• Password complexity: A current password that does not meet the new password policy is left
unchanged and is still accepted for user authentication. When the password is changed after the
microcode upgrade, the complexity is checked according to the policy.
• Password history: Current password becomes the first password recorded in the password history.

[THEORY04-05-40]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-10

5. Drive Formatting
5.1 Logical Volume Formatting
5.1.1 Overviews
The DKC can format two or more ECCs at the same time because HDDs and Flash Module Drives are
provided with a self Logical Volume formatting function. However, when the encryption function is used,
the high-speed format cannot be used.

Table 5-1 Flow of Format


Item No. Item Contents
1 Web Console operation Specify a parity group and execute the LDEV format.
2 Display of execution status The progress (%) is displayed in the Task window or in the summary
of the Parity Group window and LDEV window.
3 Execution result • Normal: Completed normally
• Failed: Terminated abnormally
4 Recovery action when a failure Same as the conventional one. However, a retry is to be executed in
occurs units of ECC. (Because the Logical Volume formatting is terminated
abnormally in units of ECC when a failure occurs in the HDD.)
5 Operation of the Web Console When the Logical Volume format for more than one ECC is instructed,
which is a high-speed Logical the high-speed processing is carried out (*1).
Volume formatting object
6 PS/OFF or powering off The Logical Volume formatting is suspended.
No automatic restart is executed.
7 Maintenance PC powering off After the SVP is rebooted, the indication before the PC powering off is
during execution of an Logical displayed in succession.
Volume formatting
8 Execution of a high-speed Logical An ECC of HDDs whose data is saved to a spare fails the high-speed Logical
Volume format in the status that Volume formatting and changes to the low-speed format. (Because
the spare is saved the low-speed formatting is executed after the high-speed format is
completed, the format time becomes long.)
After the high-speed Logical Volume formatting is completed, execute
the copy back of the HDD whose data is saved to the spare, identified
from the SIM log, and restore it.
*1: Normal Format is used for ECCs of SSDs/SCMs and ECCs for which logical volumes (pool-VOLs)
are defined.

[THEORY05-01-10]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-11

Logical volume formatting time varies depending on the provisioning type and attribute of the logical
volume. To estimate the time required for formatting a logical volume, check the provisioning type and
attribute of the logical volume in the Logical Devices window, and then see the appropriate reference shown
below.

Table 5-2 Logical Volume Formatting Time References Corresponding to Formatting Target
Provisioning type Attribute Logical volume formatting time reference
Basic 5.1.2 Estimation of Logical Volume Formatting Time
Pool-VOL 5.1.3 Estimation of Logical Volume (Pool-VOL) Formatting Time
DP 5.1.4 Estimation of Logical Volume (DP-VOL) Formatting Time
DP (with Capacity Saving 5.1.5 Estimation of Logical Volume (DP-VOL with Capacity Saving
enabled) Enabled) Formatting Time

[THEORY05-01-11]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-20

5.1.2 Estimation of Logical Volume Formatting Time


The standard formatting time of the high-speed LDEV format and the low-speed LDEV format for each
Drive type is described below.
Note that the Storage System configuration at the time of this measurement is as shown below.

<Storage System Conditions at the Time of Format Measurement>


• DKB/DKBN configuration (8 pieces per system)
• Without I/O
• Perform the formatting for the single ECC
• Define the number of LDEVs (a maximum number of 100 GB LDEVs is defined for the single ECC)
• Measurement emulation (OPEN-V and 3390-M)

1. HDD
The formatting times for HDD do not vary depending on the number of logical volumes, but instead vary
depending on the capacity of HDD and the rotation speed of HDD.
(1) High speed LDEV formatting
The following table shows the standard formatting times.
The formatting times are an estimation only. Results from real world use might vary depending on
RAID groups and the drive type.

Table 5-3 High-speed format time estimation


(Unit : min)
Standard Formatting Time (*3) Monitoring Time (*1)
Drive type
OPEN-V 3390-M OPEN-V 3390-M
18RH9M (7.2 krpm) 1980 2040 2980 3070
14RH9M (7.2 krpm) 1265 1295 1910 1955
10RH9M (7.2 krpm) 950 980 1435 1480
2R4JGM (10 krpm) 285 290 440 445

[THEORY05-01-20]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-30

(2) Low speed LDEV formatting


The following table shows the standard formatting times.
Estimated low-speed LDEV formatting times per 1TB/1PG without I/O are shown (including the
encryption). (*2) (*4).

Table 5-4 10 krpm


(Unit : min)
Standard Formatting Time (*3)
RAID Level
OPEN-V 3390-M
RAID1 2D+2D 120 130
RAID5 3D+1P 80 85
7D+1P 30 35
RAID6 6D+2P 35 45
14D+2P 20 20

Table 5-5 7.2 krpm


(Unit : min)
Standard Formatting Time (*3)
RAID Level
OPEN-V 3390-M
RAID1 2D+2D 175 200
RAID5 3D+1P 115 120
7D+1P 55 50
RAID6 6D+2P 65 60
14D+2P 30 25

[THEORY05-01-30]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-40

2. SAS SSD/NVMe SSD/SCM


SAS SSD/NVMe SSD/SCM does not have the self LDEV formatting function.
Only the low-speed LDEV formatting can be performed.
Estimated formatting times per 1TB/1PG without I/O are shown (including the encryption). (*2) (*4).

Table 5-6 SAS SSD format time estimation


(Unit : min)
Standard Formatting Time (*3)
RAID Level
OPEN-V 3390-M
RAID1 2D+2D 20 15
RAID5 3D+1P 15 10
7D+1P 10 5
RAID6 6D+2P 10 5
14D+2P 5 5

Table 5-7 NVMe SSD format time estimation


(Unit : min)
Standard Formatting Time (*3)
RAID Level 1.9 TB to 15 TB 30 TB
OPEN-V 3390-M OPEN-V 3390-M
RAID1 2D+2D 10 15 4 4
RAID5 3D+1P 10 10 3 3
7D+1P 5 5 2 3
RAID6 6D+2P 10 5 2 3
14D+2P 5 5 2 3

Table 5-8 SCM format time estimation


(Unit : min)
Standard Formatting Time (*3)
RAID Level
OPEN-V 3390-M
RAID1 2D+2D 5 N/A
RAID5 3D+1P 4 N/A
7D+1P 2 N/A
RAID6 6D+2P 2 N/A
14D+2P 2 N/A

The formatting time is the same even with 16 drives because the transfer of the format data does not
reach the bandwidth limit of the path.
Depending on the internal condition of the SAS SSD/NVMe SSD/SCM, the formatting time can be
approximately four times shorter than these values.

[THEORY05-01-40]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-50

3. FMD
The formatting times for FMD do not vary depending on the number of logical volumes, but instead vary
depending on the capacity of FMD.
(1) High speed LDEV formatting
The following table shows the standard formatting times.
The formatting times are an estimation only. Results from real world use might vary depending on
RAID groups and the drive type.

Table 5-9 FMD High-speed format time estimation


(Unit : min)
Standard Formatting Time (*3) Monitoring Time (*1)
Drive type
OPEN-V 3390-M OPEN-V 3390-M
7R0FP (FMD) 1 12 6 20
14RFP (FMD) 1 24 6 40

[THEORY05-01-50]
Hitachi Proprietary DKC910I
Rev.1.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-60

(2) Low speed LDEV formatting


The following table shows the standard formatting times.
Estimated low-speed LDEV formatting times per 1 TB/1 PG without I/O are shown (including the
encryption). (*2) (*4).

Table 5-10 FMD Low-speed format time estimation


(Unit : min)
Standard Formatting Time (*3)
RAID Level
OPEN-V 3390-M
RAID1 2D+2D 5 20
RAID5 3D+1P 5 15
7D+1P 5 15
RAID6 6D+2P 5 15
14D+2P 5 15

*1: After the standard formatting time has elapsed, the Web Console display shows 99% until the
monitoring time is reached. Because the drive itself performs the format and the progress rate against
the total capacity cannot be obtained, the ratio of the time elapsed since the start of formatting to the
required formatting time is displayed.
*2: If there are I/O operations, the formatting time can be more than six times as long as the listed value,
depending on the I/O load.

[THEORY05-01-60]
Hitachi Proprietary DKC910I
Rev.6 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-70

*3: The formatting time varies from the standard time depending on the generation of the drive.
NOTE: The formatting time when mixing the Drive types and the configurations described
in (1) High speed LDEV formatting and (2) Low speed LDEV formatting divides
into the following cases.

(a) When only the high speed formatting available Drives (1. HDD, 3. FMD) are
mixed
The formatting time is the same as the formatting time of Drive types and
configurations with the maximum standard time.

(b) When only the low speed formatting available Drives (2. SAS SSD/NVMe SSD/
SCM) are mixed
The formatting time is the same as the formatting time of Drive types and
configurations with the maximum standard time.

(c) When the high speed formatting available Drives (1. HDD, 3. FMD) and the low
speed formatting available Drives (2. SAS SSD/NVMe SSD/SCM) are mixed

(1) The maximum standard time in the high speed formatting available Drive
configuration is the maximum high speed formatting time.

(2) The maximum standard time in the low speed formatting available Drive
configuration is the maximum low speed formatting time.

The formatting time is the sum of the above formatting time (1) and (2).

When the high speed formatting available Drives and the low speed formatting
available Drives are mixed in one formatting process, the low speed formatting
starts after the high speed formatting is completed. Even after the high speed
formatting is completed, the logical volumes with the completed high speed
formatting cannot be used until the low speed formatting is completed.

In all of cases (a), (b) and (c), it takes longer before the logical volumes become usable than
when drives supporting high-speed formatting and drives supporting only low-speed formatting
are not mixed.
Therefore, when formatting multiple drive types and configurations, we recommend dividing the
formatting work and starting it separately, beginning with the drive type and configuration that
has the shorter standard time.
*4: The time required to format drives in a DB at a rear stage of a cascade connection might increase by
up to approximately 20%.
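The rule in cases (a) to (c) amounts to taking the maximum standard time within each group and, when both groups are present, adding the two maxima. The following is an illustrative sketch; the times passed in are taken from the tables above (the low-speed values are per 1 TB / 1 PG).

# Illustrative estimate of the formatting time when drive types/configurations
# with different standard times are formatted in one operation.
def mixed_format_time(high_speed_times_min, low_speed_times_min):
    """high_speed_times_min: standard times of high-speed-capable drives (HDD, FMD)
    low_speed_times_min : standard times of low-speed-only drives (SAS SSD/NVMe SSD/SCM)
    Returns the overall standard time in minutes per cases (a), (b), and (c) above."""
    high = max(high_speed_times_min, default=0)  # part (1) of case (c), or case (a)
    low = max(low_speed_times_min, default=0)    # part (2) of case (c), or case (b)
    return high + low

# Example: an 18 TB HDD configuration (1980 min, OPEN-V) formatted together with
# a SAS SSD RAID1 (2D+2D) configuration (20 min per 1 TB / 1 PG for OPEN-V).
print(mixed_format_time([1980], [20]))   # 2000 minutes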

[THEORY05-01-70]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-80

5.1.3 Estimation of Logical Volume (Pool-VOL) Formatting Time


Only the low-speed formatting is supported for pool-VOLs. Even when the high-speed formatting is
instructed, the low-speed formatting is executed.

Rough formatting time per 3 TB/1 PG without host I/O is as follows:


Logical volume (Pool-VOL) formatting time = 2 minutes (*1) (*2)

*1: There is no difference in formatting time between drive types or between RAID levels.
*2: When the allocated page capacity is large, actual formatting time might be shorter than the
estimated time.

[THEORY05-01-80]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-90

5.1.4 Estimation of Logical Volume (DP-VOL) Formatting Time


Only the low-speed formatting is supported for DP-VOLs.

Rough formatting time per LDEV without I/O is as follows:


Logical volume (DP-VOL) formatting time = 25 seconds × ⌈LDEV capacity / 50 TiB⌉ (*1)
⌈Formula⌉: Indicates rounding up the calculated value of the formula.

*1: There is no difference in formatting time between drive types or between RAID levels.

[THEORY05-01-90]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-100

5.1.5 Estimation of Logical Volume (DP-VOL with Capacity Saving Enabled) Formatting
Time
Rough formatting time per LDEV without I/O is as follows:
Logical volume (DP-VOL with Capacity Saving enabled) formatting performance = 0.95 GB per second (*1)
Logical volume (DP-VOL with Capacity Saving enabled) formatting time
= ⌈Capacity of one LDEV / Logical volume (DP-VOL with Capacity Saving enabled) formatting performance⌉
⌈Formula⌉: Indicates rounding up the calculated value of the formula.

*1: The logical volume formatting performance varies depending on the storage system configuration,
data layout, and data contents.
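The estimates in 5.1.3 through 5.1.5 can be combined into a small calculation sketch. The units and rounding follow the formulas above; the helper functions are illustrative, and the Pool-VOL estimate assumes linear scaling with capacity (the manual states the value per 3 TB / 1 PG).

import math

# Illustrative calculators for the rough formatting times in 5.1.3 to 5.1.5.
def pool_vol_format_min(parity_group_capacity_tb):
    """5.1.3: about 2 minutes per 3 TB / 1 PG without host I/O."""
    return 2.0 * (parity_group_capacity_tb / 3.0)

def dp_vol_format_sec(ldev_capacity_tib):
    """5.1.4: 25 seconds x ceil(LDEV capacity / 50 TiB)."""
    return 25 * math.ceil(ldev_capacity_tib / 50.0)

def dp_vol_capacity_saving_format_sec(ldev_capacity_gb):
    """5.1.5: LDEV capacity / 0.95 GB per second, rounded up."""
    return math.ceil(ldev_capacity_gb / 0.95)

print(pool_vol_format_min(12))                  # 12 TB pool-VOL parity group -> 8.0 min
print(dp_vol_format_sec(120))                   # 120 TiB DP-VOL -> 75 seconds
print(dp_vol_capacity_saving_format_sec(1024))  # about 1 TB DP-VOL -> 1078 seconds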

[THEORY05-01-100]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-10

5.2 Quick Format


5.2.1 Overviews
Quick Format provides the function to format in the background that allows the volumes to be usable without
waiting for the completion of the formatting when starting the formatting function.
The support specifications are shown below.

Table 5-11 Quick Format Specifications


Item Item Contents
No.
1 Support Drive HDD type All drive types are supported.
2 Support emulation type Quick Format can be performed on volumes of all emulation types.
3 Number of parity groups • Quick Format can be performed on multiple parity groups simultaneously. The
on which Quick Format number of those parity groups depends on the total of parity group entries.
can be performed The number of entries is an indicator for controlling the number of parity groups
on which Quick Format can be performed. The number of parity group entries
depends on the drive capacity configuring each parity group.
The entry count per parity group is shown in the table below.
Emulation type Capacity of a component Entry count per parity group
drive of a parity group
OPEN-V 48 TB or less 1 entry
More than 48 TB 2 entries
3380-xx, 36 TB or less 1 entry
6586-xx More than 36 TB 2 entries
3390-xx, 45 TB or less 1 entry
6588-xx More than 45 TB 2 entries
• When the number of entries is 72 or less, the number of volumes on which Quick
Format can be performed is not limited.
• In the case of four concatenations, the number of parity groups is four. In the case
of two concatenations, the number of parity groups is two.
4 Combination with various It is operable in combination with all P.P.
P.P.
5 Formatting types When performing a format from Web Console or CLI, you can select either Quick
Format or the normal format.
6 Additional execution of Additional Quick Format can be executed during Quick Format execution. In this
Quick Format during its case, the total number of entries during Quick Format and those to be added is
execution limited to 72.
7 Preparing Quick Format • When executing Quick Format, management information is created first. During
this period, I/O access cannot be executed, as with the normal format.
• Creating management information takes up to about one minute for one parity
group, and up to about 36 minutes in case of 36 parity groups for the preparation.
For M/F volumes, the above mentioned time and the time in Table 5-12 need to
be summed up.
(To be continued)

[THEORY05-02-10]
Hitachi Proprietary DKC910I
Rev.12 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-20

(Continued from the preceding page)


Item Item Contents
No.
8 Blocking and restoring the • When the volume during Quick Format execution is blocked for maintenance,
volume the status of the volume (during Quick Format execution) is stored in the Storage
System. When the volume is restored afterwards, the volume status becomes
Normal (Quick Format).
Therefore, parity groups in which all volumes during Quick Format are blocked are
included in the number of entries during Quick Format.
The number of entries for additional Quick Format can be calculated with the
following calculating formula: 72 - X - Y
(Legend)
X: The number of entries for parity groups during Quick Format.
Y: The number of entries for parity groups in which all volumes during Quick
Format are blocked.
9 Operation at the time of After P/S ON, Quick Format restarts.
PS OFF/ON
10 Restrictions • Quick Format cannot be executed to an external volume or virtual volume.
• Volume Migration and Quick Restore of ShadowImage cannot be executed to a
volume during Quick Format.
• When the parity group setting is the accelerated compression, Quick Format
cannot be performed. (If performed, it terminates abnormally)
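The entry counting in items 3, 6, and 8 can be sketched as follows (illustrative helper; the thresholds are those in the table of item 3, and the limit of 72 entries comes from item 6).

# Illustrative sketch of the Quick Format entry counting (items 3, 6, and 8).
ENTRY_THRESHOLD_TB = {"OPEN-V": 48, "3380/6586": 36, "3390/6588": 45}
MAX_ENTRIES = 72

def entries_per_parity_group(emulation, drive_capacity_tb):
    """1 entry up to the capacity threshold for the emulation type, 2 entries above it."""
    return 1 if drive_capacity_tb <= ENTRY_THRESHOLD_TB[emulation] else 2

def additional_entries_available(running_entries, blocked_entries):
    """Item 8: 72 - X - Y, where X is the number of entries in Quick Format and
    Y is the number of entries whose volumes in Quick Format are all blocked."""
    return MAX_ENTRIES - running_entries - blocked_entries

print(entries_per_parity_group("OPEN-V", 30))   # 1 entry
print(entries_per_parity_group("OPEN-V", 60))   # 2 entries
print(additional_entries_available(10, 2))      # 60 entries still available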

[THEORY05-02-20]
Hitachi Proprietary DKC910I
Rev.12 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-30

5.2.2 Volume Data Assurance during Quick Formatting


The Quick Format management table is kept in the shared memory (SM). This model prevents the
management table from being lost by backing up the SM to a CFM in the Controller Board, and thereby
assures the data quality during Quick Format.

[THEORY05-02-30]
Hitachi Proprietary DKC910I
Rev.6 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-40

5.2.3 Control information format time of M/F VOL


For an M/F VOL, the control information at the end of each volume is initialized before the volume
status becomes usable. Therefore, as with the conventional format, you must wait until the creation
of the control information is completed. The time required varies depending on the emulation type
and the number of volumes, as shown in the following table.

Table 5-12 Control Information Format Time of M/F VOL (Per 1K Volume)
Emulation type Format time (minute)
3390-A 133
3390-M 34
3390-MA/MB/MC 28
6588-M 34
6588-MA/MB/MC 28
3390-L 18
3390-LA/LB/LC 14
6588-L 18
6588-LA/LB/LC 14
3390-9 9
3390-9A/9B/9C 5
6588-9 9
6588-9A/9B/9C 5
Others 3

The values above are the times required to format 1,000 (1K) volumes; the time is proportional to the
number of volumes.
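Because the times in Table 5-12 are given per 1,000 volumes and scale linearly, a quick estimate looks like this (illustrative; only part of the table is reproduced).

# Illustrative estimate of the control information format time for M/F volumes.
FORMAT_MIN_PER_1K_VOL = {"3390-A": 133, "3390-M": 34, "3390-L": 18, "3390-9": 9}  # subset of Table 5-12

def mf_control_info_format_min(emulation, num_volumes):
    return FORMAT_MIN_PER_1K_VOL[emulation] * num_volumes / 1000.0

print(mf_control_info_format_min("3390-A", 256))   # about 34 minutes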

[THEORY05-02-40]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-50

5.2.4 Quick Formatting Time


Quick Format is executed in the background while I/O from and to the host is performed.
Therefore, the Quick Format time may vary significantly depending on the number of I/Os from and to the
host or other conditions.
You can also calculate a rough estimation of the Quick Format time using the following formula.

Rough estimation of Quick Format time


• When executing Quick Format in the entire area of a parity group
Format time = Format standard time (see Table 5-13, Table 5-14)
× Format multiplying factor (see Table 5-15) × ⌈The number of parity groups / 8⌉
• When executing Quick Format on some LDEVs in a parity group
Format time = Format standard time (see Table 5-13, Table 5-14)
× Format multiplying factor (see Table 5-15) × ⌈The number of parity groups / 8⌉
× (Capacity of LDEVs on which Quick Format is executed / Capacity of a parity group)
NOTE: ⌈The number of parity groups / 8⌉ indicates that the first decimal place of
the calculated value is rounded up.
Table 5-13 and Table 5-14 show the Quick Format time when no I/O is performed and Quick Format is
executed in the entire area of a parity group.

Table 5-13 Quick Format Time (OPEN-V)


Drive type Formatting time
H10R0 (7.2 krpm) 130 h
H14R0 (7.2 krpm) 184 h
H18R0 (7.2 krpm) 235 h
J2R4 (10 krpm) 31 h
M960 (SAS SSD) 4h
M1T9 (SAS SSD) 9h
M3R8 (SAS SSD) 17 h
M7R6 (SAS SSD) 34 h
M15R (SAS SSD) 67 h
M30R (SAS SSD) 134 h
R1R9 (NVMe SSD) 9h
R3R8 (NVMe SSD) 17 h
R7R6 (NVMe SSD) 34 h
R15R (NVMe SSD) 67 h
R30R (NVMe SSD) 134 h
Y375 (NVMe SCM) 2h
Y750 (NVMe SCM) 4h
Y800 (NVMe SCM) 4h
Y1R5 (NVMe SCM) 7h
Q6R4 (FMD) 32 h
Q13R (FMD) 64 h

[THEORY05-02-50]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-60

Table 5-14 Quick Format Time (Other than OPEN-V)


Drive type Formatting time
H10R0 (7.2krpm) 149 h
H14R0 (7.2krpm) 209 h
H18R0 (7.2 krpm) 269 h
J2R4 (10krpm) 35 h
M960 (SAS SSD) 5h
M1T9 (SAS SSD) 10 h
M3R8 (SAS SSD) 20 h
M7R6 (SAS SSD) 40 h
M15R (SAS SSD) 79 h
M30R (SAS SSD) 158 h
R1R9 (NVMe SSD) 10 h
R3R8 (NVMe SSD) 20 h
R7R6 (NVMe SSD) 40 h
R15R (NVMe SSD) 79 h
R30R (NVMe SSD) 158 h
Y375 (NVMe SCM) 2h
Y750 (NVMe SCM) 4h
Y800 (NVMe SCM) 4h
Y1R5 (NVMe SCM) 8h
Q6R4 (FMD) 37 h
Q13R (FMD) 74 h

Table 5-15 Format Multiplying Factor


RAID level I/O Multiplying factor
RAID1 No 0.5
Yes 2.5
RAID5, RAID6 No 1.0
Yes 5.0

• When Quick Format is executed on parity groups with different drive capacities at the same time, calculate
the time based on the parity group with the largest capacity.
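The formula above can be written as a small calculation sketch (illustrative; the inputs are the values from Table 5-13 / 5-14 and Table 5-15).

import math

# Illustrative calculator for the rough Quick Format time estimation in 5.2.4.
def quick_format_time_h(standard_time_h, multiplying_factor,
                        num_parity_groups, ldev_capacity_ratio=1.0):
    """standard_time_h     : value from Table 5-13 or Table 5-14 for the drive type
    multiplying_factor  : value from Table 5-15 (RAID level, with/without I/O)
    num_parity_groups   : number of parity groups on which Quick Format is executed
    ldev_capacity_ratio : capacity of target LDEVs / capacity of the parity group
                          (1.0 when the entire parity group is formatted)"""
    return (standard_time_h * multiplying_factor
            * math.ceil(num_parity_groups / 8) * ldev_capacity_ratio)

# Example: M1T9 SAS SSD (9 h for OPEN-V), RAID6 with host I/O (factor 5.0),
# 10 parity groups formatted at the same time -> 9 x 5.0 x 2 = 90 h.
print(quick_format_time_h(9, 5.0, 10))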

[THEORY05-02-60]
Hitachi Proprietary DKC910I
Rev.0 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-70

5.2.5 Performance during Quick Format


Quick Format executes the formatting in the background while host I/O is being processed.
Therefore, it may affect host performance.
The following table shows the approximate degree of the performance impact.
(This is only a rough guideline, and it may change depending on the conditions.)

Table 5-16 Performance during Quick Format


I/O types    Performance (ratio when the normal condition is 100%)
Random read 80%
Random write to the unformatted area 20%
Random write to the formatted area 60%
Sequential read 90%
Sequential write 90%

[THEORY05-02-70]
Hitachi Proprietary DKC910I
Rev.12 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-80

5.2.6 Combination with Other Maintenance

Table 5-17 Combination with Other Maintenance


Item No. Maintenance Operation Operation during Quick Format
1 Drive copy / correction copy The processing is possible in the same way as for normal volumes, but the
unformatted area is skipped.
2 LDEV Format The LDEV Format is executable for the volumes on which Quick Format is
(high-speed / low-speed) not executed.
3 Volume maintenance block It is possible to block the volumes instructed by Web Console or CLI
for the volumes during Quick Format.
4 Volume forcible restore If forcible restore is executed after the maintenance block, it returns to
Quick Formatting.
5 Verify consistency check Possible. However, the Verify consistency check for the unformatted
area is skipped.
6 Replacement with a maintenance Possible as usual
part, addition, removal, and
micro-program exchange (*1)
*1: For details of replacement with a maintenance part, addition, removal, and micro-program
exchange, see Maintenance operation in the table in THEORY05-03-20.

[THEORY05-02-80]
Hitachi Proprietary DKC910I
Rev.0 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-90

5.2.7 SIM Output When Quick Format Completed


When Quick Format is completed, SIM = 0x410100 is output.
However, the SIM is not output when Quick Format was performed from RAID Manager.

[THEORY05-02-90]
Hitachi Proprietary DKC910I
Rev.12 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-100

5.2.8 Coexistence of Drives


Table 5-18 shows permitted coexistence of RAID levels and drive types respectively.

Table 5-18 Specifications for Coexistence of Elements


Item Specifications Remarks
Coexistence of RAID levels RAID1 (2D+2D), RAID5 (3D+1P, 7D+1P), and RAID6 (6D+2P, 14D+2P) can
exist in the system.
Drive type Different drive types cannot be mixed in the same parity group. However,
the drive type can be different between parity groups in the same system.
Spare drive When the following conditions 1 and 2 are met, the drives can be used as
spare drives.
1. Capacity of the spare drives is the same as or larger than the drives in
operation.
2. The type of the drives in operation and the type of the spare drives fulfill
the following conditions.

Type of Drive in Operation Type of Usable Spare Drive


HDD (7.2 krpm) HDD (7.2 krpm)
HDD (10 krpm) HDD (10 krpm)
SAS SSD SAS SSD
NVMe SSD NVMe SSD
NVMe SCM NVMe SCM
FMD (QxRy) FMD (QxRy)

NOTE: x and y are arbitrary numbers. Some drive names do not contain the
number y (e.g. Q13R).
The numbers (x, y) of Type of Drive in Operation need not be the
same as those of Type of Usable Spare Drive.
For example, when the drives in operation are Q6R4, the drives of
Q6R4, Q13R, etc. can be used as spare drives.

[THEORY05-02-100]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-10

5.3 Notes on Maintenance during LDEV Format/Drive Copy Operations


This section describes whether maintenance operations can be performed while Dynamic Sparing, Correction
Copy, Copy Back, Correction Access, or LDEV Format is running, or when data copying to a spare Disk is
complete.
If Correction Copy runs due to a Drive failure, or Dynamic Sparing runs due to preventive maintenance on
large-capacity Disk Drives, Flash Drives, or SCMs, copying the data may take a long time. In the case of
low-speed LDEV Format performed due to volume addition, the format may take time depending on the I/O
frequency because host I/Os are prioritized. In such cases, the basic maintenance policy is to perform
operations such as replacement, addition, and removal after Dynamic Sparing, LDEV Format, and so on are
completed, but the following maintenance operations are available.

[THEORY05-03-10]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-20

Storage System status


Maintenance operation Dynamic Correction Copy Back Correction Copied to LDEV
Sparing Copy Access spare Disk Format (*20)
Replacement CTL/CACHE Possible Possible Possible Possible Possible Impossible
(*16) (*16) (*16) (*1) (*14) (*13)
(*21)
CTL upgrade/ Possible Possible Possible Possible Possible Impossible
downgrade (*16) (*16) (*16) (*1) (*14) (*13)
(*21)
LANB Possible Possible Possible Possible Possible Impossible
(*16) (*16) (*16) (*1) (*14) (*13)
CHB Possible Possible Possible Possible Possible Impossible
(*13)
Power supply Possible Possible Possible Possible Possible Possible
SVP Possible Possible Possible Possible Possible Possible
SSVP Possible Possible Possible Possible Possible Possible
ENC/SAS Possible Possible Possible Possible Possible Impossible
cable/NSW/ (*1) (*14) (*13)
NVMe cable
DKB Possible Possible Possible Possible Possible Impossible
(*1) (*14) (*13)
(*22)
PDEV Possible Possible Possible Possible Possible Possible
(*6) (*6) (*6) (*1) (*14) (*4)
CFM Possible Possible Possible Possible Possible Possible
BKMF Possible Possible Possible Possible Possible Possible
(*1)
FAN (ISW) Possible Possible Possible Possible Possible Impossible
(*1) (*14) (*13)
Battery Possible Possible Possible Possible Possible Possible
(*1)
SFP Possible Possible Possible Possible Possible Possible
replacement
HSNPANEL Possible Possible Possible Possible Possible Possible
ISWPS Possible Possible Possible Possible Possible Possible
LAN cable Possible Possible Possible Possible Possible Possible
HIE Possible Possible Possible Possible Possible Impossible
(*1) (*14) (*13)
ISW Possible Possible Possible Possible Possible Impossible
(*1) (*14) (*13)
X-path Possible Possible Possible Possible Possible Impossible
(*1) (*14) (*13)
HSNBX Possible Possible Possible Possible Possible Impossible
chassis (*1) (*14) (*13)
ACLF Possible Possible Possible Possible Possible Possible
(To be continued)

[THEORY05-03-20]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-30

(Continued from preceding page)


Storage System status
Maintenance operation Dynamic Correction Copy Back Correction Copied to LDEV
Sparing Copy Access spare Disk Format (*20)
Addition/ CACHE/SM Impossible Impossible Impossible Possible Possible Impossible
Removal (*12) (*12) (*12) (*1) (*14) (*13)
CHB Impossible Impossible Impossible Possible Possible Impossible
(*12) (*12) (*12) (*1) (*14) (*13)
Power supply Possible Possible Possible Possible Possible Possible
SVP Possible Possible Possible Possible Possible Possible
SSVP Possible Possible Possible Possible Possible Possible
DKB Impossible Impossible Impossible Possible Possible Impossible
(*12) (*12) (*12) (*1) (*14) (*13)
Drive Impossible Impossible Impossible Possible Possible Impossible
(*12) (*12) (*12) (*1) (*14) (*2) (*13)
CFM Impossible Impossible Impossible Possible Possible Impossible
(*12) (*12) (*12) (*1) (*14) (*13)
SFP type Possible Possible Possible Possible Possible Possible
change
Parity Group Impossible Impossible Impossible Possible Possible Impossible
(*12) (*12) (*12) (*1) (*14) (*2) (*13)
Spare Impossible Impossible Impossible Possible Possible Impossible
(*12) (*12) (*12) (*1) (*14) (*2) (*13)
Drive Box Impossible Impossible Impossible Possible Possible Impossible
(*12) (*12) (*12) (*1) (*14) (*2) (*13)
ACLF Possible Possible Possible Possible Possible Possible
Addition Controller Impossible Impossible Impossible Impossible Impossible Impossible
Chassis (*18) (*18) (*18) (*17) (*19) (*13)
Removal Possible Possible Possible Possible Possible Impossible
(*1) (*18) (*1) (*18) (*1) (*18) (*1) (*17) (*1) (*19) (*13)
Addition Controller Impossible Impossible Impossible Impossible Impossible Impossible
Board (*18) (*18) (*18) (*17) (*19) (*13)
Removal Possible Possible Possible Possible Possible Impossible
(*1) (*18) (*1) (*18) (*1) (*18) (*1) (*17) (*1) (*19) (*13)
(To be continued)

[THEORY05-03-30]
Hitachi Proprietary DKC910I
Rev.14 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-31

(Continued from preceding page)


Storage System status
Maintenance operation Dynamic Correction Copy Back Correction Copied to LDEV
Sparing Copy Access spare Disk Format (*20)
Micro- Online Possible Possible Possible Possible Possible Impossible
program (HDD micro- (*1) (*14) (*1) (*12)
exchange program
exchange is not
included.)
Online Impossible Impossible Impossible Impossible Impossible Impossible
(HDD micro- (*12)
program
exchange is
included.)
Offline Impossible Impossible Impossible Impossible Possible Impossible
(*12) (*12) (*12) (*14) (*12)
SVP only Possible Possible Possible Possible Possible Possible
LDEV Blockade Possible Possible Possible Possible Possible Possible
maintenance (*5) (*7) (*5) (*7) (*5) (*7) (*5) (*7) (*15)
(*15) (*15) (*15)
Restore Possible Possible Possible Possible Possible Possible
(*5) (*8) (*5) (*8) (*5) (*8) (*5) (*8) (*15)
(*15) (*15) (*15)
Format Possible Possible Possible Possible Possible Impossible
(*5) (*5) (*5) (*5) (*10) (*9)
(*15)
Verify Impossible Impossible Impossible Possible Possible Impossible
(*7) (*7) (*7) (*11) (*15) (*7)

[THEORY05-03-31]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-40

*1: The operation is prevented with a message. However, the operation is made possible by checking
the checkbox Forcibly run without safety checks and retrying the operation.
*2: It is impossible to remove a RAID group in which data is migrated to a spare Disk and the spare
Disk.
*3: (Blank)
*4: It is impossible when high-speed LDEV Format is running. When low-speed LDEV Format is
running, it is possible to replace drive in a RAID group in which LDEV Format is not running.
*5: It is possible to perform LDEV maintenance for LDEV defined in a RAID group in which
Dynamic Sparing, Correction Copy, Copy Back or Correction Access is not running.
*6: • The operation is prevented with the message [03005-002095] when the RAID group to which
the drive to be maintained belongs does not coincide with the RAID group in which Dynamic
Sparing/Correction Copy/Copy Back is running.
• The operation is prevented with the message [30762-208159] when the RAID group to which
the drive to be maintained belongs coincides with the RAID group in which Dynamic Sparing/
Correction Copy/Copy Back is running. When the RAID level is RAID6, the operation might be
prevented with the message [03005-002095] depending on the state of drives other than the drive
to be maintained.
If the operation is prevented with the message [03005-002095], the operation is made possible by
checking the checkbox Forcibly run without safety checks and retrying the operation. However,
a different message might be displayed depending on the timing when the conditions that cause a
prevention occur.
*7: It is prevented with message [03005-002095]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*8: It is prevented with message [03005-202002]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*9: It is prevented with message [03005-202001]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*10: It is prevented with message [03005-202005]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*11: It is prevented with message [03005-002011]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*12: It is prevented with message [30762-208159]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*13: It is prevented with message [30762-208158]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*14: It is prevented with message [30762-208180]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.

[THEORY05-03-40]
Hitachi Proprietary DKC910I
Rev.9 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-50

*15: The operation is prevented with a message. However, the operation is made possible by clicking
Forcible Actions without safety checks and retrying the operation.
*16: For micro-program versions 90-01-41-xx/xx and 90-01-61-xx/xx, the operation is prevented
with the message [30762-208159]. Even if Forcibly run without safety checks is selected, the
operation is prevented.
*17: The operation is prevented with message [30762-208899] However, a different message might be
displayed depending on the occurrence timing of the state regarded as a prevention condition.
*18: The operation is prevented with message [30762-208159]. However, a different message might be
displayed depending on the occurrence timing of the state regarded as a prevention condition.
*19: The operation is prevented with message [30762-208980]. However, a different message might be
displayed depending on the occurrence timing of the state regarded as a prevention condition.
*20: Whether maintenance operations can be performed during shredding by Volume Shredder is the
same as during low-speed LDEV Format.
*21: The operation is not prevented when a drive which is connected to a DKB installed in the
maintenance target CTL is blocked.
*22: The operation is not prevented when a drive which is connected to the maintenance target DKB is
blocked.

[THEORY05-03-50]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-04-10

5.4 Verify (Parity Consistency Check)


The Verify processing verifies the consistency between data and parity. At execution of Verify, if a parity
that is not consistent with the data is found when Auto Correct is enabled, the consistency is recovered by
forcibly matching the parity to the current data that is assumed to be correct.
The time required for the Verify processing varies depending on the storage system configuration and
settings, the amount of allocated pages, the I/O load, and so on. So, when estimating the time of the Verify
processing, collect dumps and contact the Technical Support Division.

[THEORY05-04-10]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-10

5.5 PDEV Erase


5.5.1 Overview
When the specified system option (*1) is set, the DKC deletes the data of a drive automatically in the case
shown in Table 5-20.
When the SOM for Media Sanitization is set to on, Media Sanitization is prioritized.
*1: Please contact the T.S.D.

Table 5-19 Overview


No. Item Content
1 SVP Operation Select the system option from “Install”.
2 Status The DKC only reports a SIM when the function starts. The progress status is not displayed.
3 Result The DKC reports a SIM for normal or abnormal end.
4 Recovery procedure at failure A drive whose erase terminated abnormally cannot be erased again. Replace it with a new service part.
5 PS off or B/K off The Erase processing fails. It does not restart after PS on.
6 How to stop the “PDEV Erase” Execute Replace from the Maintenance Utility and replace the drive whose erase you want to stop with a new service part.
7 Data Erase Pattern The data erase pattern is zero data.

Table 5-20 PDEV Erase execution case


No. Execution case
1 A drive is blocked upon completion of Drive Copy.

[THEORY05-05-10]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-20

5.5.2 Rough Estimate of Erase Time


The erase time is determined by the capacity and the rotational speed of the drive.
The estimated times are shown below. (These times are guidelines; the erase might take as long as the TOV.)

Table 5-21 PDEV Erase completion expectation time


Type of drive 375 GB 750 GB 800 GB 960 GB 1.5 TB 1.9 TB 2.4 TB
SAS (7.2 krpm) - - - - - - -
SAS (10 krpm) - - - - - - 210 min
Flash Drive - - - 1 to 20 min - 1 to 40 min -
Flash Module Drive - - - - - - -
SCM 10 min 15 min 15 min - 25 min - -

Type of drive 3.8 TB 7.0 TB 7.6 TB 10 TB 14 TB 15 TB 18 TB
SAS (7.2 krpm) - - - 990 min 1260 min - 1980 min
SAS (10 krpm) - - - - - - -
Flash Drive 1 to 85 min - 1 to 140 min - - 1 to 240 min -
Flash Module Drive - 1 min - - 1 min - -
SCM - - - - - - -

Type of drive 30 TB
SAS (7.2 krpm) -
SAS (10 krpm) -
Flash Drive 1 to 540 min
Flash Module Drive -
SCM -

[THEORY05-05-20]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-21

Table 5-22 PDEV Erase TOV


Type of drive 375 GB 750 GB 800 GB 960 GB 1.5 TB 1.9 TB 2.4 TB
SAS (7.2 krpm) - - - - - - -
SAS (10 krpm) - - - - - - 410 min
Flash Drive - - - 70 min - 110 min -
Flash Module Drive - - - - - - -
SCM 50 min 60 min 60 min - 80 min - -

Type of drive 3.8 TB 7.0 TB 7.6 TB 10 TB 14 TB 15 TB 18 TB
SAS (7.2 krpm) - - - 1920 min 2320 min - 3990 min
SAS (10 krpm) - - - - - - -
Flash Drive 190 min - 310 min - - 510 min -
Flash Module Drive - 9 min - - 9 min - -
SCM - - - - - - -

Type of drive 30 TB
SAS (7.2 krpm) -
SAS (10 krpm) -
Flash Drive 1010 min
Flash Module Drive -
SCM -

[THEORY05-05-21]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-30

5.5.3 Influence in Combination with Other Maintenance Operation


The influence on maintenance operations performed while PDEV Erase is running is as follows.

Table 5-23 Drive Replace


No. | Object part | Influence | Countermeasure
1 | Replace from Maintenance Utility for the drive on which PDEV Erase is running | PDEV Erase terminates abnormally. | —
2 | Replace from Maintenance Utility for a drive on which PDEV Erase is not running | Nothing | —
3 | User Replace | Do not execute the user replacement during PDEV Erase. | Execute it after PDEV Erase is completed.

Table 5-24 DKB Replace


No. | Object part | Influence | Countermeasure
1 | DKB connected with a drive on which PDEV Erase is running | [SVP4198W] may be displayed. The DKB replacement might fail with [ONL2412E] when the password is entered. (*2) | <SIM4d8xxx/4d9xxx/4daxxx/4dbxxx about this drive is not reported> Replace the drive (on which Erase was performed) with a new service part. (*1)
2 | DKB other than the above | Nothing | Nothing

Table 5-25 I/F Board Replace/I/F Board Removal


No. | Object part | Influence | Countermeasure
1 | I/F Board in the path of a drive on which PDEV Erase is running | [SVP4198W] may be displayed. The I/F Board replacement might fail with [ONL2412E] when the password is entered. (*2) | <SIM4d8xxx/4d9xxx/4daxxx/4dbxxx about this drive is not reported> Replace the drive (on which Erase was performed) with a new service part. (*1)
2 | I/F Board other than the above | Nothing | Nothing

[THEORY05-05-30]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-40

Table 5-26 ENC Replace


No. | Object part | Influence | Countermeasure
1 | ENC connected with a DKB connected with an HDD on which PDEV Erase is running | [SVP4198W] may be displayed. The ENC replacement might fail with [ONL2788E] or [ONL3395E] when the password is entered. (*2) | <SIM4d8xxx/4d9xxx/4daxxx/4dbxxx about this drive is not reported> Replace the drive (on which Erase was performed) with a new service part. (*1)
2 | ENC other than the above | Nothing | Nothing

Table 5-27 Drive Addition/Removal


No. | Object part | Influence | Countermeasure
1 | ANY | Addition/Removal might fail with [SVP739W]. | Wait for the Erase completion or replace the drive (on which Erase was performed) with a new service part. (*1)

Table 5-28 Exchanging micro-program


No. | Object part | Influence | Countermeasure
1 | DKC MAIN | [SVP0732W] may be displayed. Micro-program exchange might fail with [SMT2433E] when the password is entered. (*2) | Wait for the Erase completion or replace the drive (on which Erase was performed) with a new service part. (*1)
2 | Drive | [SVP0732W] may be displayed. Micro-program exchange might fail with [SMT2433E] when the password is entered. (*2) | Wait for the Erase completion or replace the drive (on which Erase was performed) with a new service part. (*1)

Table 5-29 LDEV Format


No. | Object part | Influence | Countermeasure
1 | ANY | PATH-Inline might fail. The cable connection might not be checked when the password is entered. | Wait for the Erase completion or replace the drive (on which Erase was performed) with a new service part. (*1)

[THEORY05-05-40]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-50

Table 5-30 PATH-Inline


No. | Object part | Influence | Countermeasure
1 | DKB connected with a drive on which PDEV Erase is running | PATH-Inline might detect a failure. | Wait for the Erase completion or replace the drive (on which Erase was performed) with a new service part. (*1)

Table 5-31 PS/OFF


No. | Object part | Influence | Countermeasure
1 | ANY | PDEV Erase terminates abnormally. | <SIM4d8xxx/4d9xxx/4daxxx/4dbxxx about this drive is not reported> Wait for the Erase completion or replace the drive (on which Erase was performed) with a new service part. (*1)

*1: If a drive whose PDEV Erase was stopped is installed into the DKC again, it might fail with a spin-up failure.
*2: If the operation fails with the message concerned, maintenance might not be possible until PDEV Erase
is completed or terminates abnormally.

[THEORY05-05-50]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-60

5.5.4 Notes of Various Failures


Notes on failures that occur during PDEV Erase are as follows.

No. | Failure | Object part | Notice | Countermeasure
1 | B/K OFF/Black Out | Drive Box (DB) | PDEV Erase might fail due to the failure. | Replace the Erase target drive with a new service part after PS on.
2 | B/K OFF/Black Out | DKC | Because the Erase monitor JOB disappears, the normal/abnormal end SIM of Erase cannot be reported. | Replace the Erase target drive with a new service part after PS on.
3 | MP failure | I/F Board | [E/C 9470 is reported at the MP failure] The Erase monitor JOB is reported with E/C 9470 when it is aborted due to the MP failure, and processing completes. In this case, the normal/abnormal end SIM of Erase cannot be reported. | Replace the Erase target drive with a new service part after the recovery of the MP failure.
4 | MP failure | I/F Board | [E/C 9470 is not reported at the MP failure] Communication with the Controller that is executing Erase becomes impossible due to the MP failure. In this case, the monitor JOB reaches the TOV with E/C 9450, and an abnormal SIM is reported. | After the recovery of the MP, wait for the TOV of PDEV Erase, judge whether the Erase succeeded or failed, and then replace the drive with a new service part.

[THEORY05-05-60]
Hitachi Proprietary DKC910I
Rev.11.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-10

5.6 Media Sanitization


5.6.1 Overview
Media Sanitization erases data in a drive by overwriting it. Data in the drive that caused Dynamic Sparing
(hereinafter referred to as DS) to be started is overwritten by the defined erase pattern data when DS ends.
Then, the data in the drive is compared with the erase pattern data and the data erase is completed.

Table 5-32 Overview


No. Item Description
1 Erase specifications See Table 5-33.
2 Execution method See Table 5-34.
3 Execution process See Table 5-35.
4 Check of result SIM indicating end of Media Sanitization is reported (normal end, abnormal end,
or end with warning). For details, see 5.6.3 Checking Result of Erase.
5 Recovery from failure Replacement with a new drive
6 Stopping method Replacement of the drive for which the data erase needs to be stopped with a new
drive by using Maintenance Utility

Table 5-33 Erase Specifications


No. Item Description
1 Number of erases One erase for an entire drive (all LBAs) (for flash drives, excluding over
provisioning space)
2 Erase pattern 0 data
3 Check of erase Drive data after write of the erase pattern data is read to compare it with the erase
pattern data.
4 Erase time See 5.6.2 Estimated Erase Time.
5 LED action on drive In process of erase: The green LED is blinking.
After completion of erase: The red LED is lit. (The red LED might not light up
depending on the drive failure type.)

Table 5-34 Execution Method


No. Description
1 Setting the SOM to on is necessary. Contact the Technical Support Division.
When the SOM for PDEV erase is set to on, the SOM for Media Sanitization is prioritized.

Table 5-35 Execution Process


No. Description
1 After completion of DS, Media Sanitization is automatically started.
When erase is started, SIM is reported.

[THEORY05-06-10]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-20

5.6.2 Estimated Erase Time


Estimated time required for erase is shown below.
Erase time might significantly exceed the estimated time due to the load on the storage system and a drive
error occurring during erase.

Table 5-36 Estimated Erase Time


Type of drive 375 GB 750 GB 800 GB 960 GB 1.5 TB 1.9 TB 2.4 TB
SAS (7.2 krpm) - - - - - - -
SAS (10 krpm) - - - - - - 47h 30m
Flash Drive - - - 2h - 4h -
Flash Module Drive - - - - - - -
SCM 1h 1h 30m 1h 30m - 3h - -

Type of drive 3.8 TB 7.0 TB 7.6 TB 10 TB 14 TB 15 TB 18 TB


SAS (7.2 krpm) - - - 333h 465h 30m - 539h 30m
SAS (10 krpm) - - - - - - -
Flash Drive 9h - 17h 30m - - 30h 30m -
Flash Module Drive - 108h 30m - - 218h - -
SCM - - - - - - -

Type of drive 30 TB
SAS (7.2 krpm) -
SAS (10 krpm) -
Flash Drive 61h
Flash Module Drive -
SCM -

[THEORY05-06-20]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-30

5.6.3 Checking Result of Erase


5.6.3.1 SIMs Indicating End of Media Sanitization
Check the result of erase by referring to the following SIM list.

Table 5-37 SIMs Indicating End of Erase


No. SIM (*1) Type of end Result
1 4e4xxx Normal end Data erase by writing the erase pattern data to an entire drive (all
4e5xxx LBAs) ends normally (for flash drives, excluding over provisioning
space).
2 4e6xxx Abnormal end Data erase ends abnormally because either of the following erase
4e7xxx errors occurs:
• Writing the erase pattern data fails.
• In process of data comparison after the erase pattern data is written,
an inconsistency with the erase pattern data is detected.

Tell the customer that user data might remain in the drive.
When the customer has the DRO agreements, give the faulty drive to him or her and recommend
physically destroying it or disposing of it in a similar way.
When the customer does not have the DRO agreements, bring the faulty drive back with you after
making the customer understand that user data might remain in the drive.
(If the customer does not allow you to bring out the drive, explain to him or her that he or she needs
to use services for erasing data or make the DRO agreements.)
3 4e8xxx End with warning Data erase ends with warning because reading some areas of the
4e9xxx drive is unsuccessful while writing the erase pattern data is successful
(for flash drives, excluding over provisioning space).
Tell the customer that writing the erase pattern data to an entire drive
is completed but data in some areas cannot be read. Then, ask the
customer whether he or she wants you to bring out the drive.
For how to check the number of the areas (LBAs) where data cannot
be read, see 5.6.3.2 Checking Details of End with Warning.
*1: The SIM indicating drive port blockade (see (SIMRC02-70)) might also be reported when the SIM
indicating end of Media Sanitization is reported. In such a case, prioritize the SIM indicating end
of Media Sanitization.
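As a quick reference, the following sketch maps a reported SIM reference code to the type of end listed in Table 5-37. The code prefixes are taken from the table; the helper function itself is only an illustration and is not a tool provided with the SVP.

    SIM_END_TYPES = {
        ("4e4", "4e5"): "Normal end",
        ("4e6", "4e7"): "Abnormal end",
        ("4e8", "4e9"): "End with warning",
    }

    def classify_media_sanitization_sim(sim_code):
        # Return the type of end for a Media Sanitization end SIM such as '4e6123'.
        prefix = sim_code.lower()[:3]
        for prefixes, end_type in SIM_END_TYPES.items():
            if prefix in prefixes:
                return end_type
        return "Not a Media Sanitization end SIM"

    print(classify_media_sanitization_sim("4e8021"))   # End with warning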

[THEORY05-06-30]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-40

5.6.3.2 Checking Details of End with Warning


Factors of end with warning are shown below.

Table 5-38 Factors of End with Warning


No. Factor
1 In the erase process, the write by using the erase pattern data succeeds but the read fails.

Check SIMs indicating end with warning and related SSBs to know factors of end with warning as follows:

[1] In the Content – SIM window of the SIM indicating end with warning, click [Refer].

[THEORY05-06-40]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-50

[2] SSBs related to SIMs indicating end with warning


Details of each field of SSB are shown below.
Check the details of each field and consult a security administrator to determine whether it is possible to
bring out the target drive for data erase.

Table 5-39 Internal Information of SSB Related to SIM indicating End with Warning
Field Details
(a) Total number of LBAs on the target drive for data erase
(Field size: 6 bytes)
(a) = (b) + (c)
(b) The number of LBAs for which data erase is complete on the target drive for data erase
(Field size: 6 bytes)
(c) The number of LBAs for which the write by using the erase pattern data is successful and the read is
unsuccessful on the target drive for data erase
(Field size: 6 bytes)
(d) DB# and RDEV# of the target drive for data erase
(Lower 1 byte: DB#, upper 1 byte: RDEV#)
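A short worked check of the relationship (a) = (b) + (c) from Table 5-39 is shown below. The field values used here are made-up examples, not data from a real SSB; the helper is only an illustration of the arithmetic.

    def summarize_ssb(total_lbas, erased_lbas, unreadable_lbas):
        # Field (a) must equal (b) + (c); anything else indicates a mis-read SSB.
        assert total_lbas == erased_lbas + unreadable_lbas, "(a) != (b) + (c)"
        ratio = 100.0 * unreadable_lbas / total_lbas
        return f"{unreadable_lbas} of {total_lbas} LBAs unreadable ({ratio:.4f} %)"

    # Hypothetical drive with 1,000,000 LBAs and 8 unreadable LBAs
    print(summarize_ssb(total_lbas=1_000_000, erased_lbas=999_992, unreadable_lbas=8))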


[THEORY05-06-50]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-60

5.6.4 Influence between Media Sanitization and Maintenance Work


The following table shows whether each maintenance work is possible or not when Media Sanitization is in
process.

No. | Maintenance type | Part to be maintained | Maintenance work is possible or not possible | Operation of Media Sanitization when maintenance is performed
1 Replacement CTL/CACHE Possible Media Sanitization might end
abnormally due to disconnection of
the path to the target drive for Media
Sanitization.
2 CTL upgrade/ Not possible —
downgrade
3 LANB Possible Operates continuously.
4 CHB Possible Operates continuously.
5 Power supply Possible Operates continuously.
6 SVP Possible Operates continuously.
7 SSVP Possible Operates continuously.
8 ENC/SAS cable/ Possible Media Sanitization might end
NSW/NVMe abnormally due to disconnection of
cable the path to the target drive for Media
Sanitization.
9 DKB/DKBN Possible Media Sanitization might end
abnormally due to disconnection of
the path to the target drive for Media
Sanitization.
10 Drive Possible Media Sanitization ends abnormally
if you replace a drive in process of
it.
11 CFM Possible Operates continuously.
12 BKMF Possible Operates continuously.
13 FAN (ISW) Possible Operates continuously.
14 Battery Possible Operates continuously.
15 SFP Possible Operates continuously.
16 HSNPANEL Possible Operates continuously.
17 ISWPS Possible Operates continuously.
18 LAN cable Possible Operates continuously.
19 HIE Possible Operates continuously.
20 ISW Possible Operates continuously.
21 X-path Possible Operates continuously.
(To be continued)

[THEORY05-06-60]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-70

(Continued from preceding page)


No. | Maintenance type | Part to be maintained | Maintenance work is possible or not possible | Operation of Media Sanitization when maintenance is performed
22 Replacement HSNBX Possible Operates continuously.
chassis
23 ACLF Possible Operates continuously.
24 Addition/ CACHE/SM Possible Operates continuously.
25 Removal CHB Possible Operates continuously.
26 Power supply Possible Operates continuously.
27 SVP Possible Operates continuously.
28 SSVP Possible Operates continuously.
29 DKB/DKBN Possible Operates continuously.
30 Drive Addition: Possible Addition: Operates continuously.
Removal: Possible (*2) Removal: Media Sanitization ends
abnormally if you remove a drive in
process of it.
31 CFM Possible Operates continuously.
32 SFP type change Possible Operates continuously.
33 Parity Group Possible (*2) Operates continuously.
34 Spare drive Possible (*2) Operates continuously.
35 Drive Box Possible (*2) Operates continuously.
36 ACLF Possible Operates continuously.
37 Addition Controller Not possible (*3) —
38 Removal Chassis Possible (*1), (*3) Operates continuously.
39 Addition Controller Board Not possible (*3) —
40 Removal Possible (*1), (*3) Operates continuously.
41 Micro-program Online (not Possible (*1) When either of the following is
exchange including the met, micro-program exchange with
HDD micro- a micro-program version that does
program) not support Media sanitization is not
possible.
• The SOM for Media Sanitization is
set to on.
• Media Sanitization is in process.
42 Online Not possible —
(including the
HDD micro-
program)
43 Offline Possible For the DKCMAIN micro-program,
Media Sanitization ends abnormally.
44 SVP micro- Possible Operates continuously.
program only
(To be continued)

[THEORY05-06-70]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-80

(Continued from preceding page)


No. | Maintenance type | Part to be maintained | Maintenance work is possible or not possible | Operation of Media Sanitization when maintenance is performed
45 LDEV Blockade Possible Operates continuously.
46 maintenance Restore Possible Operates continuously.
47 Format Possible Operates continuously.
48 Verify Possible Operates continuously.
*1: The operation is suppressed with a message displayed. You can retry the operation after selecting
the checkbox for Forcibly run without safety checks.
*2: You cannot remove a RAID group whose data is stored in a spare drive, or that spare drive.
Perform either (1) or (2).
(1) If you want to prioritize the maintenance work, restore the blocked drive on which Media
Sanitization is being executed and have the copy back performed. However, if you restore the
blocked drive, Media Sanitization ends abnormally and cannot be executed again.
(2) If you want to prioritize Media Sanitization, confirm that Media Sanitization ends, and then
replace the blocked drive on which Media Sanitization is completed with a new drive and
have the copy back performed.
*3: The operation is suppressed with the message [30762-208980] displayed. A different message
might be displayed depending on the timing and conditions.

[THEORY05-06-80]
Hitachi Proprietary DKC910I
Rev.11.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-90

5.6.5 Notes when Errors Occur


The following table shows influence on Media Sanitization when each failure occurs.

No. Item 1 Item 2 Influence on Media Sanitization


1 DKC Power outage Media Sanitization ends abnormally.
2 DKB/DKBN Failure Media Sanitization might end abnormally due to disconnection of the
path to the target drive for Media Sanitization.
3 Drive Box Power outage Media Sanitization ends abnormally.
4 ENC/NSW Failure Media Sanitization might end abnormally due to disconnection of the
path to the target drive for Media Sanitization.

[THEORY05-06-90]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-01-10

6. Data In Place
6.1 Overview
Data In Place (hereinafter referred to as DIP) is a function to upgrade the storage system to the next
generation model without data migration. Drives used before DIP can continue to be used after DIP.

[THEORY06-01-10]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-02-10

6.2 DIP Procedures from VSP 5100/5100H, 5500/5500H to VSP 5600/5600H


There are differences between the DIP procedure from VSP 5100/5100H to VSP 5600/5600H and that from
VSP 5500/5500H to VSP 5600/5600H.
Each DIP procedure is shown below.

Upgrading from VSP 5100/5100H
1. Check the DKCMAIN micro-program version and the GUM micro-program version.
2. If the DKCMAIN micro-program version and the GUM micro-program version are not 90-08-21-00/00
   or later, exchange the DKCMAIN micro-program and the GUM micro-program for versions
   90-08-21-00/00 or later. (*1)
3. Replace all DKCPSs with DKCPSLs by using Maintenance Utility. (*2)
4. Install Controller Boards for VSP 5600/5600H by using Maintenance Utility. (*3)
5. Perform model upgrade of all Controller Boards for VSP 5100/5100H by using Maintenance Utility. (*4)

*1: For details, see MICRO-FC SECTION (MICRO00-00).


*2: For details, see [PSU REPLACEMENT PROCESSING – RPSU] (REP(RPSU)00-00).
*3: For details, see Adding Controller Boards (INST(AD)13-01-10).
*4: For details, see [Controller Board REPLACEMENT PROCESSING (Model Upgrade) - RCTLU]
(REP(RCTLU)00-00).

[THEORY06-02-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-02-20

Upgrading from VSP 5500/5500H
1. Check the DKCMAIN micro-program version and the GUM micro-program version.
2. If the DKCMAIN micro-program version and the GUM micro-program version are not 90-08-01-00/00
   or later, exchange the DKCMAIN micro-program and the GUM micro-program for versions
   90-08-01-00/00 or later. (*5)
3. Replace all DKCPSs with DKCPSLs by using Maintenance Utility. (*6)
4. Perform model upgrade of all Controller Boards for VSP 5500/5500H by using Maintenance Utility. (*7)

*5: For details, see MICRO-FC SECTION (MICRO00-00).


*6: For details, see [PSU REPLACEMENT PROCESSING – RPSU] (REP(RPSU)00-00).
*7: For details, see [Controller Board REPLACEMENT PROCESSING (Model Upgrade) - RCTLU]
(REP(RCTLU)00-00).
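Both procedures start by checking whether the installed micro-program versions meet the required level (90-08-21-00/00 or 90-08-01-00/00 or later). The sketch below shows one way such a version comparison could be done; the parsing format is an assumption based on the version strings shown above, and this helper is not a tool provided with the storage system.

    def parse_version(version):
        # Split a version string such as '90-08-21-00/00' into a comparable tuple.
        main, suffix = version.split("/")
        return tuple(int(part) for part in main.split("-")) + (int(suffix),)

    def meets_minimum(installed, required):
        return parse_version(installed) >= parse_version(required)

    # Upgrading from VSP 5100/5100H requires 90-08-21-00/00 or later.
    print(meets_minimum("90-08-21-00/00", "90-08-21-00/00"))   # True
    print(meets_minimum("90-08-01-00/00", "90-08-21-00/00"))   # False -> exchange the micro-program first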

[THEORY06-02-20]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-03-10

6.3 Estimating the Work Time


Estimate the work time of DIP based on the following:
(1) Time required for installing Controller Boards
See Adding Controller Boards (INST(AD)13-01-10).

(2) Time required for replacing a DKCPS


# Process Estimated work time Remarks
1 DKCPS replacement 5 min.
2 Micro-program processing time 5 min.

(3) Time required for upgrading a Controller Board


# Process Estimated work time Remarks
1 Component installation 20 min. Installation of DIMMs, CFMs, BKMFs/
ACLFs, and batteries in a Controller
Board
2 Micro-program processing time 35 min.
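As a rough worked example, the per-component times in tables (2) and (3) can be added up for a given configuration. The configuration counts below are hypothetical, and the Controller Board installation time of item (1) must be added separately from INST(AD)13-01-10.

    DKCPS_REPLACEMENT_MIN = 5 + 5   # DKCPS replacement + micro-program processing time, per unit
    CTL_UPGRADE_MIN = 20 + 35       # component installation + micro-program processing time, per Controller Board

    def estimate_dip_minutes(num_dkcps, num_ctls):
        # Excludes the Controller Board installation time of item (1).
        return num_dkcps * DKCPS_REPLACEMENT_MIN + num_ctls * CTL_UPGRADE_MIN

    # Hypothetical example: 8 DKCPSs and 4 Controller Boards to upgrade
    print(estimate_dip_minutes(8, 4), "minutes, excluding Controller Board installation")   # 300 minutes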

[THEORY06-03-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-04-10

6.4 Effects on Performance


Storage system performance is reduced by the share of one Controller Board during DIP because one
Controller Board needs to be blocked at a time.
The following table shows the approximate rate of performance reduction for each storage system configuration.
# Storage system configuration Rate of performance reduction
1 2 nodes - 4 CTLs 25.00 %
2 4 nodes - 8 CTLs 12.50 %
3 6 nodes - 12 CTLs 8.30 %
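The rates in the table correspond to one blocked Controller Board out of the total number of CTLs. A one-line check under that assumption (the table rounds the 12-CTL case to 8.30 %):

    def reduction_rate(total_ctls):
        # One Controller Board out of the total is blocked during DIP.
        return round(100.0 / total_ctls, 2)

    for ctls in (4, 8, 12):
        print(f"{ctls} CTLs: {reduction_rate(ctls)} %")   # 25.0 %, 12.5 %, 8.33 %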

[THEORY06-04-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-01-10

7. Appendix A : Maintenance Associated with MF Products


7.1 Channel Commands
Command Overview
1. READ commands
The read commands transfer the readout data from devices to channels.

2. WRITE commands
The write commands write the transfer data from channels to devices.

3. SEARCH commands
The search commands follow a control command and logically search for the target data.

4. CONTROL commands
The control commands include the SEEK command that positions cylinder and head positions, the
SET SECTOR command that executes latency time processing, the LOCATE RECORD command
that specifies the operation of the ECKD command, the SET FILE MASK command that defines the
permissible ranges for the WRITE and SEEK operations, and the DEFINE EXTENT command that
defines the permissible ranges for the WRITE and SEEK operations and that defines the cache access
mode.

5. SENSE commands
The sense commands transfer sense bytes and device specifications.

6. PATH CONTROL commands


The path control commands enable and disable the exclusive control of devices.

7. TEST I/O command


The TEST I/O command transfers the specified device and its path state to a given channel in the form of
DSBs.

8. SUBSYSTEM commands
The subsystem commands include the commands and paths that specify the information for cache control
to DKCs, and the commands that transfer the channel information and cache related information to
channels.

[THEORY07-01-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-01-20

Table 7-1 Command Summary (1/3)


Command Name Command Code
Single Track Multitrack
READ READ INITIAL PROGRAM LOAD (RD IPL) 02 –
commands READ HOME ADDRESS (RD HA) 1A 9A
READ RECORD ZERO (RD R0) 16 96
READ COUNT,KEY,DATA (RD CKD) 1E 9E
READ KEY,DATA (RD KD) 0E 8E
READ DATA (RD D) 06 86
READ COUNT (RD C) 12 92
READ MULTIPLE COUNT,KEY AND DATA (RD MCKD) 5E –
READ TRACK (RD TRK) DE –
READ SPECIAL HOME ADDRESS (RD SP HA) 0A –
WRITE WRITE HOME ADDRESS (WR HA) 19 –
commands WRITE RECORD ZERO (WR R0) 15 –
WRITE COUNT,KEY,DATA (WR CKD) 1D –
WRITE COUNT,KEY,DATA NEXT TRACK (WR CKD NT) 9D –
ERASE (ERS) 11 –
WRITE KEY AND DATA (WR KD) 0D –
WRITE UPDATE KEY AND DATA (WR UP KD) 8D –
WRITE DATA (WR D) 05 –
WRITE UPDATE DATA (WR UP D) 85 –
WRITE SPECIAL HOME ADDRESS (WR SP HA) 09 –
SEARCH SEARCH HOME ADDRESS (SCH HA EQ) 39 B9
commands SEARCH ID EQUAL (SCH ID EQ) 31 B1
SEARCH ID HIGH (SCH ID HI) 51 D1
SEARCH ID HIGH OR EQUAL (SCH ID HE) 71 F1
SEARCH KEY EQUAL (SCH KEY EQ) 29 A9
SEARCH KEY HIGH (SCH KEY HI) 49 C9
SEARCH KEY HIGH OR EQUAL (SCH KEY HE) 69 E9

[THEORY07-01-20]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-01-30

Table 7-2 Command Summary (2/3)


Command Name Command Code
Single Track Multitrack
CONTROL DEFINE EXTENT (DEF EXT) 63 –
commands LOCATE RECORD (LOCATE) 47 –
LOCATE RECORD EXTENDED (LOCATE EXT) 4B –
SEEK (SK) 07 –
SEEK CYLINDER (SK CYL) 0B –
SEEK HEAD (SK HD) 1B –
RECALIBRATE (RECAL) 13 –
SET SECTOR (SET SECT) 23 –
SET FILE MASK (SET FM) 1F –
READ SECTOR (RD SECT) 22 –
SPACE COUNT (SPC) 0F –
NO OPERATION (NOP) 03 –
RESTORE (REST) 17 –
DIAGNOSTIC CONTROL (DIAG CTL) F3 –
SENSE SENSE (SNS) 04 –
commands READ AND RESET BUFFERED LOG (RRBL) A4 –
SENSE IDENTIFICATION (SNS ID) E4 –
READ DEVICE CHARACTERISTICS (RD CHR) 64 –
DIAGNOSTIC SENSE/READ (DIAG SNS/RD) C4 –
PATH DEVICE RESERVE (RSV) B4 –
CONTROL DEVICE RELEASE (RLS) 94 –
commands UNCONDITIONAL RESERVE (UNCON RSV) 14 –
SET PATH GROUP ID (SET PI) AF –
SENSE SET PATH GROUP ID (SNS PI) 34 –
SUSPEND MULTIPATH RECONNECTION (SUSP MPR) 5B –
RESET ALLEGIANCE (RST ALG) 44 –
READ CONFIGURATION DATA (RD CONF DATA) FA –
TST I/O TEST I/O (TIO) 00 –
TIC TRANSFER IN CHANNEL (TIC) X8 –

[THEORY07-01-30]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-01-40

Table 7-3 Command Summary (3/3)


Command Name Command Code
Single Track Multitrack
STORAGE SET STORAGE SYSTEM MODE (SET SUB MD) 87 –
SYSTEM PERFORM STORAGE SYSTEM FUNCTION (PERF SUB FUNC) 27 –
commands READ STORAGE SYSTEM DATA (RD SUB DATA) 3E –
SENSE STORAGE SYSTEM STATUS (SNS SUB STS) 54 –
READ MESSAGE ID (RD MSG IDL) 4E –

NOTE: • Command Reject, format 0, and message 1 are issued for the commands that are not
listed in this table.
• TEST I/O is a CPU instruction and cannot be specified directly. However, it appears
as a command to the interface.
• TIC is a type of command but runs only on a channel. It will never be visible to the
interface.

[THEORY07-01-40]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-02-10

7.2 Comparison of Pair Status on Storage Navigator, Command Control Interface (CCI)
The following are the TrueCopy, Universal Replicator, and global-active device pair statuses displayed on
Storage Navigator and CCI.
Some pair status displays are different among program products. For details, see the user guide of each
program product.

Table 7-4 Comparison of Pair Statuses of TrueCopy, Universal Replicator, and Global-Active
Device on Storage Navigator and CCI
Status on Storage Navigator Status on CCI
SMPL SMPL
COPY (*1) COPY
PAIR PAIR
PSUS PSUS
PSUE PSUE
SSUS SSUS
SSWS (*2) SSWS
*1: INIT/COPY might be displayed.
*2: PSUS or PSUE might be displayed.

[THEORY07-02-10]
Hitachi Proprietary DKC910I
Rev.14 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-03-10

7.3 Locations where Configuration Information is Stored and Timing of Information Update

The locations where the configuration information is stored and the timing of information update are shown below.

(Diagram: locations where the configuration information is stored. Within each CTL, the MPs, the shared memory, and the CFM hold the information; outside the CTL, the SVP and backup media hold it. The numbered transfers between these locations are listed in the following table.)

No. Locations Update/Save/Load
1 MP → Shared memory • The configuration information is updated due to the configuration change by
operators for VOL creation, LUNM setting, and the like.
• The configuration information is updated due to the change of resource allocation.
2 Shared memory → MP If the storage system starts up when the shared memory is not volatile, the
configuration information in the shared memory is loaded into MPs.
3 Shared memory → CFM • When the storage system is powered off, the configuration information is saved
into the CFM.
• When the configuration information is updated, it is saved into the CFM by online
configuration backup.
4 CFM → Shared memory If the storage system starts up when the shared memory is volatile, the configuration
information saved into the CFM in No. 3 is loaded into the shared memory.
5 Shared memory → SVP When the configuration information is updated, the configuration information in
the shared memory is saved into the SVP.
6 SVP → Backup media The configuration information in the shared memory is saved into the backup
media, according to the operation settings for configuration information backup
(Create Configuration Backup) in the SVP.
7 Backup media → (SVP) → Shared memory The configuration information saved in the backup media is loaded into the
shared memory, according to the operation settings for Restoring Configuration
Information in the SVP.
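As a compact restatement of the table, the sketch below lists which transfers happen on each event. The event names are assumptions chosen for this illustration; none of these identifiers exist in the product.

    CONFIG_FLOWS = {
        "configuration changed":       ["MP -> Shared memory",
                                        "Shared memory -> CFM (online backup)",
                                        "Shared memory -> SVP"],
        "power off":                   ["Shared memory -> CFM"],
        "startup (SM non-volatile)":   ["Shared memory -> MP"],
        "startup (SM volatile)":       ["CFM -> Shared memory"],
        "Create Configuration Backup": ["SVP -> Backup media"],
        "Restore Configuration":       ["Backup media -> (SVP) -> Shared memory"],
    }

    for event, flows in CONFIG_FLOWS.items():
        print(f"{event}: {'; '.join(flows)}")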

[THEORY07-03-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-10

7.4 TPF
7.4.1 An outline of TPF

TPF is one of operating systems (OS) mainly used for airline on-line reservation systems (CRS/Computer
Reservation System).
To correspond to TPF, DKC must support logical exclusive lock facility and extended cache facility.
The former is a function which is called MPLF (Multi-Path Lock Facility) and the latter is a function which
is called RC (Record Cache).
A DKC which supports TPF has the MPLF and RC functions defined in RPQ#8B0178 in the IBM public
manual: IBM3990 Transaction Processing Facility support RPQs (GA32-0134-03).

A DKC which corresponds to TPF implements a special version of microprogram which supports the MPLF
and RC functions of TPF feature (RPQ#8B0178), described in the following IBM public manuals:
(1) IBM3990 Transaction Processing Facility support RPQs (GA32-0134-03)
(2) IBM3990 Storage Control Reference for Model 6 (GA32-0274-03)

1. Outline of MPLF

A host system can control concurrent use of resources by using logical locks of DKC. Logical locks are
defined in shared resources of each logical CU in DKC. These shared resources are managed by MPL
(Multi-Path Lock). Each MPL can have up to 16 types of lock statuses.

Figure 7-1 shows the overview of the I/O sequence with MPLF. A TPF host uses a unique MPLF user
identifier. Up to 32 MPLF users can be connected to one logical CU.

In Figure 7-1, MPLF users are USER A and USER B. Each user specifies MPLP (Multi-Path Lock
Partition) to use MPLF. MPLP is a logical partition that divides a group of MPLs which are divided
for each logical CU. Up to eight MPLPs can be set. Two MPLPs are usually used: one MPLP is for
transactions and the other for maintenance jobs. MPLP is specified by an MPLP identifier.

[THEORY07-04-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-20

(1) Before starting an I/O sequence, each host performs the CONNECT processing to get permission
to use MPLP. Only a user who is given permission by the processing can use the logical lock
facility of MPLF.
(2) Each user performs the SET LOCK STATE processing by specifying a multi-path lock name
(equivalent to a dataset name for which concurrent use is controlled) to get a logical lock.
(3) The user who gets a logical lock by the SET LOCK STATE processing performs the R/W
processing for the specified multi-path lock name.
(4) The user who finishes the R/W processing performs the UNLOCK processing by specifying the
multi-path lock name to release the logical lock. This processing enables DASD to be shared while
maintaining the data consistency.
(5) The user who does not have to use each MPLP performs the DISCONNECT processing to give up
permission to use each MPLP.

Figure 7-1 Overview of MPLF

(Diagram: USER A and USER B on the host each issue CONNECT, SET LOCK STATE, R/W, UNLOCK, DISCONNECT, and PURGE LOCK requests to the DKC. The DKC holds the connect state of MPLP 0 and the lock states of MPL 0 and MPL 1.)
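The sequence outlined in Figure 7-1 can be modeled with the minimal sketch below, which treats one MPL as an owner plus a FIFO waiter queue. The DSB values x4C (lock granted) and x0C (not granted, waiting) follow the description in 7.4.4; the class and its methods are otherwise illustrative assumptions, not the DKC's actual interfaces.

    from collections import deque

    class MultiPathLock:
        # Toy model of one MPL: one owner, FIFO waiters, Attention on release.
        def __init__(self):
            self.owner = None
            self.waiters = deque()

        def set_lock_state(self, user):
            if self.owner is None:
                self.owner = user
                return "DSB x4C (lock granted)"
            self.waiters.append(user)
            return "DSB x0C (not granted, waiting)"

        def unlock(self, user):
            assert user == self.owner, "only the owner can unlock"
            self.owner = self.waiters.popleft() if self.waiters else None
            if self.owner is not None:
                return f"Attention to {self.owner} (now owner)"
            return "lock released"

    mpl = MultiPathLock()
    print(mpl.set_lock_state("USER A"))   # DSB x4C (lock granted)
    print(mpl.set_lock_state("USER B"))   # DSB x0C (not granted, waiting)
    print(mpl.unlock("USER A"))           # Attention to USER B (now owner)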

[THEORY07-04-20]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-30

2. Outline of RC

RC has the following two features:


(1) Record Mode Chain
(2) Record Caching

The following explains these features.


(1) Record Mode Chain
Record Mode Chain consists of the following 4 command chains:
(a) Mainline Processing (Read)
(b) Mainline Processing (Write)
(c) Capture
(d) Restore
To run Record Mode Chain, Record Mode needs to be allowed for each device by the command. If
Record Mode is not allowed for the target device, the chain is processed as an I/O in the standard
mode (non-TPF mode).

(2) Record Caching


When the host specifies Set Cache Allocation Parameters by a command, record caching is enabled
in DKC.

[THEORY07-04-30]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-40

7.4.2 TPF Support Requirement

1. OS
TPF Ver.4.1./zTPF VER.1.1

2. Hardware
The following table shows the storage system hardware specifications for TPF support.

Table 7-5 TPF Support Hardware Specification


Item Description
Number of MODs Max. 16,384/box
Number of LCUs/Box Max. 64
Number of SSIDs/LCU 1
Cache/SM capacity Refer to (INST(GE)03-04-10)
RAID level 1, 5 or 6
Emulation type
(1) DKC 2107
(2) Device 3390-3/9/L/M
Number of host paths Max. 32
Number or Multi-Path Locks 16,384/LCU (assigned to only 16LCUs)
4,096/LCU (assigned to 64LCUs)
Channel Board 4Mx16 (Not supported on 4Mx32.)

[THEORY07-04-40]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-50

7.4.3 TPF trouble shooting method


Troubleshooting in a TPF environment is basically the same as in an MVS (standard operating system) environment.

An example procedure is shown below:


(1) Collect system error information by Syslog, EREP, and so on.
(2) Collect DKC error information by SVP dump operation.
(3) Send the above data to T.S.D.

[THEORY07-04-50]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-60

7.4.4 The differences of DASD-TPF (MPLF) vs DASD-MVS


1. Controlling concurrent use of data by MPLF

MVS environments
(1) Logical volume (device) is the unit of controlling concurrent use of data among multiple CPUs.
(2) Device is owned by one CPU during CPU processing (accessing), and Device-busy status is
reported to another CPUs accesses.
(3) Device-end status is used to notify the waiting CPUs when the device becomes free.

TPF environments
(1) A logical lock is the unit of exclusive control of data among multiple CPUs, in place of a logical
volume (device) in MVS.
(2) Each logical lock can be accessed in parallel.
(3) When a CPU occupies a certain logical lock, DSB x4C/x0C is returned in response to a request for
the logical lock by another CPU. DSB x4C indicates that the logical lock succeeded, and DSB x0C
indicates that the logical lock failed (changing to the waiting for logical lock state).
(4) When the logical lock is released, it is given to the CPU that is changing to the waiting for logical
lock state. Attention is reported to other waiting CPUs.

Figure 7-2 Differences between TPF DASD and MVS DASD

(Diagram summary)
MVS (exclusive control per logical volume):
(1) Reserve/Read&Write access by CPU-A (successful).
(2) An access attempt by CPU-B is rejected by Device-busy (failed).
(3) CPU-A terminates its processing and releases the volume.
(4) Free (Device-end) is sent, and CPU-B can use the volume.

TPF with MPLF (exclusive control per logical lock):
(1) Set Lock/Read&Write processing (*1) by CPU-A (successful).
(2) A lock request by CPU-B is rejected as Not-granted (failed).
(3) CPU-A terminates the processing with Unlock.
(4) Free (Attention) is sent (*2), and CPU-B can use the dataset.

*1: Typical CCW chain: Set Lock State (x27/order(x30)); Read Storage system Data (x3E);
    TIC (to be continued if granted); (ordinary CCW chain)
*2: The path/address used for this report is usually different from the path/address used in the steps above.

[THEORY07-04-60]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-70

2. Path-group

(1) The TPF system does not use a path-group composed of multiple paths.

3. Connect Path/Device

(1) The TPF system issues the Connect order to define:


• User registration to each logical CU and MPLP
• MPL resource allocation to each logical CU and MPLP
• Setting of paths and devices that report attentions
(2) This order is code (x33) of Perform Storage system Function (x27) command.
(3) Only the CPU (channel) has the capability to change this path and device definition.

4. Channel Re-drive function

Function unique to the TPF channels, which makes a sub channel try the reconnection on the same path
for a certain period of time when an I/O request is rejected because the CU is busy.

5. Fixed length record

The TPF system uses fixed length records for faster update writes of records and for more efficient cache
handling in reads of single records. In the online environment, the system usually operates at a hit rate of
almost 100% for writes and a hit rate of 80% or higher for reads.

[THEORY07-04-70]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-80

6. Prime/Dupe MODs pairs

(1) To improve the data integrity of DASD, the TPF system often duplicates data on two different
DASD storage systems.
(2) The following figure shows one example of these pairs.
Prime MOD (module)s and Dupe MODs are always located on each side of storage system (spread
to all storage systems).

Figure 7-3 Prime/Dupe MODs pairs

(Diagram: pair-1 has its Prime MOD on DASD-0 (HDS) and its Dupe MOD on DASD-1 (not HDS); pair-2 has its Prime MOD on DASD-1 and its Dupe MOD on DASD-0, and so on for the remaining pairs.)

7. Data Copy procedures

The Copy procedures are taken for the following purposes:


(1) To make a pair (To copy data from Prime MOD to Dupe).
(2) To recover the failed data.

There are two ways to make a pair.


(1) AFC (All File Copy)
(2) AMOD (Alter Module)

[THEORY07-04-80]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-90

7.4.5 Notices for TrueCopy for Mainframe-option setting


<SVP operation>
1. RCU Option
We strongly recommend that you select No in the PPRC support by host column of the RCU Option
window.
We strongly recommend that you select Not Report in the Service SIM of Remote Copy column of
the RCU Option window.

2. Add Pair
We strongly recommend that you select Copy to R-VOL in the CFW Data column of the Add Pair
window.

3. Suspend Pair
We strongly recommend that you select Disable in the SSB (F/M = FB) column of the Suspend Pair
window.

<Host (TPF-OS) consideration>


When MVS is used, TC-MF requires customers to extend the I/O patrol time to prevent MIH reporting.
This also applies to TPF. You need to discuss with your customer to find an opportunity to extend the Stalled
Module Queue timer. If you cannot extend the timer, you need to avoid the reporting by using split
volumes of SI-MF, for example.

[THEORY07-04-90]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-100

7.4.6 TPF Guideline


7.4.6.1 SOM required as TPF
Please refer to NewModeList.xlsx. Detailed explanations are provided in the file.

SOM Title
15 Improve the host response time to be within about 6 sec
142 Prevent storage systems from going down by blocking a failure drive at an early point when time-
out occurs frequently for commands issued to the drive
309 Block the failed HDD at an early stage
310 Set the monitoring timer for MP hang-up to 6 seconds so that returning a response to the host within
8 seconds is guaranteed
359 Block the failed HDD at an early stage
809 Improve host response time to be within 3 sec
862 Change drive response watching time on backend
911 Allocate time slots for synchronous processing and asynchronous processing to each MP of an
MPB
1207 Send LOGO to a mainframe host with zTPF OS when a controller is blocked

[THEORY07-04-100]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-110

7.4.6.2 How to set up TPF configuration


This section shows how to set TPF Enable.
(Refer to System Tuning SVP Procedure (SVP02-18-10).)

Step1: Check TPF Enable.


Step2: Select Number of MPLs 4096 or 16384.
Default is 4096.
Step3: Specify CU Number within x00-x3F, x40-x7F, x80-xBF, or xC0-xFF. Default is x00-x3F.
NOTE: • When Number of MPLs is 16384, the TPF OS needs to manage CU Numbers within
x00-x0F if x00-x3F is selected as CU Number.
• MPL is a lock table for controlling
MPLF functions. MPL is used for
exclusive control when multiple
hosts access one record. From past
results, 4096 / CU is sufficient for
MPL.

[THEORY07-04-110]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-120

This section shows how to configure I-2107-TPF.


(Refer to System Tuning SVP Procedure (SVP02-18-10).)
Step1: Select a Port
Step2: Click [Emulation]
Step3: Select I-2107-TPF

[THEORY07-04-120]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-130

7.4.6.3 Combination of copy P.P.


Basic policy: Remote copy (TC-MF or UR-MF) is recommended to be combined with local copy (SI-MF)
SVOL in order to eliminate the response effect on the production volume of TPF. In addition, split operation
is recommended for local copy (SI-MF).

(Diagram: the PVOL is paired with SVOL 1 by SI-MF, which cycles between the split status and the duplex status (Use Case 1: daily backup). SVOL 1 is paired with SVOL 2 by SI-MF in the duplex status, and SVOL 2 is paired with disaster recovery volumes by TC-MF or UR-MF (Use Case 2).)

Copy Type | Single Unit | Combination with TC-MF or UR-MF | Note
SI-MF | Support | Support | Remote copy (TC-MF or UR-MF) is recommended to be combined with the local copy (SI-MF) SVOL, especially TC-MF.
TC-MF | Support | - | 2DC only
UR-MF | Support | - | 2DC only
FCv2/FCSE | No support | - | -
XRC | No support | - | -

[THEORY07-04-130]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-140

7.4.6.4 Mixed both zTPF and zOS/zVM


Configure CU/LDEV setting from zOS to appropriate path on zTPF volume. No special setting, like SOM, is
needed.
zOS or zVM needs to connect to a FICON port on a different FICON board with CU = I-2107 and then
connect to CLPR0 CU:x00-x3F.

(Diagram: the TPF host connects through a FICON Board whose ports use the I-2107-TPF emulation to CLPR0 (CU:x00-x3F) of the DKC910I, while the zOS or zVM host connects through a different FICON Board whose ports use the I-2107 emulation to CLPR1 (CU:x40-x7F).)

7.4.6.5 Combination of HDP


• DVE (Dynamic Volume Expansion) and PP combination are restricted.
• The maximum volume in TPF is 64 K cylinders (65520 cylinders). It is common to HDP and Non-HDP.

7.4.6.6 TPF Copy Manager


TPF Copy Manager is a product of Hitachi Vantara (HV).
Please contact HV for TPF Copy Manager.

7.4.6.7 BCM
You can logically control the TPF volume from BCM on zOS. However, there is no evaluation record.

[THEORY07-04-140]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-150

7.4.6.8 Dual Write


TPF implements double writing as standard. Volumes for double writing are called Prime and Dupe. The
master volume is Prime and the secondary volume is Dupe.
If the TPF determines that the Prime volume cannot be read or written, the TPF will read or write to the
Dupe. If a total of 32 Prime volumes become unreadable and unwritable, the TPF will go down. It is
recommended that the Prime volume and Dupe volume be placed in separate storage, and that the Prime
volume and Dupe volume be evenly distributed for each device.

(Diagram: Prime volumes 1 to 4 and their Dupe volumes are distributed evenly between DKC910I #1 and DKC910I #2, with each Prime volume placed in a different storage system from its Dupe volume.)

Case A: When combined with SI-MF, operate in the split state in order to suppress the response delay to the
host. Perform primary/secondary volume resynchronization and resplit during times when the load is
low (for example, at night).
Case B: When combining with TC-MF or UR-MF, it is recommended to combine TC-MF or UR-MF with
the secondary volume of SI-MF in order to suppress the response delay to the host.

(Diagram: Case A - the Prime volume is the SI-MF PVOL and is paired with an SVOL that cycles between the split status and the duplex status. Case B - the Prime volume is the SI-MF PVOL in the split status; its SVOL (MVOL) is paired with an RVOL by TC-MF or UR-MF.)

[THEORY07-04-150]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-160

7.4.6.9 MPL
MPLs are fixed at 262144 (256K) in DKC910I.

In the 4096 MPLs/CU case, each of CU:x00 to CU:x3F has 4096 MPLs, so TPF can use the MPLF function
in the whole CU range x00 to x3F (total MPLs: 4096 x 64 = 262144 (256K)).
In the 16384 MPLs/CU case, each of CU:x00 to CU:x0F has 16384 MPLs and CU:x10 to CU:x3F have none,
so TPF can use the MPLF function only in the CU range x00 to x0F (total MPLs: 16384 x 16 = 262144 (256K)).

NOTE: MPL is a lock table for controlling MPLF functions. MPL is used for exclusive
control when multiple hosts access one record. From past results, 4096/CU is
sufficient for MPL.
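As a quick check of the totals above (4096 x 64 and 16384 x 16 both equal 262144), the following one-off sketch derives how many CUs receive MPLs for each Number of MPLs setting; the helper is illustrative only.

    TOTAL_MPLS = 262144   # MPLs are fixed at 256K in DKC910I

    def mpls_per_cu(mpls_setting):
        # Return (number of CUs that receive MPLs, MPLs per CU) for the setting.
        assert TOTAL_MPLS % mpls_setting == 0
        return TOTAL_MPLS // mpls_setting, mpls_setting

    print(mpls_per_cu(4096))    # (64, 4096)  -> CU x00-x3F
    print(mpls_per_cu(16384))   # (16, 16384) -> CU x00-x0F only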

Lock request conflicts from TPF1,2,3 to Record1. TPF1 reserves Record1, and TPF2 and 3 are registered in
the MPL table as Waiters.
Attention (DSB80) is notified to TPF2 after TPF1 unlocks Record1.
TPF2 becomes the owner of Record1 and accesses it.
In this way, MPL is used to control the exclusion of one record in the MPLF function.

(Diagram: before the unlock, the MPL table for Record1 lists TPF1 as Owner and TPF2 and TPF3 as Waiter1 and Waiter2. After TPF1 unlocks Record1 and Attention (DSB80) is sent to TPF2, the MPL table lists TPF2 as Owner and TPF3 as Waiter1.)

[THEORY07-04-160]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-05-10

7.5 CHB/DKB - SASCTL#/PSW#, Port# Matrixes


Table 7-6 Relationship between CHB/DKB and SASCTL#/PSW#, Port# (1/3)
CBX Pair Location Channel SAS CTL#/ SAS Port#/
Port# PSW# NVMe Port#
CBX Pair 0 CHB-01A 00 ~ 03 - -
CHB-01B 04 ~ 07 - -
CHB-01E 08 ~ 0b - -
CHB-01F 0c ~ 0f - -
CHB-02A 10 ~ 13 - -
CHB-02B 14 ~ 17 - -
CHB-02E 18 ~ 1b - -
CHB-02F 1c ~ 1f - -
CHB-11A 20 ~ 23 - -
CHB-11B 24 ~ 27 - -
CHB-11E 28 ~ 2b - -
CHB-11F 2c ~ 2f - -
CHB-12A 30 ~ 33 - -
CHB-12B 34 ~ 37 - -
CHB-12E 38 ~ 3b - -
CHB-12F 3c ~ 3f - -
DKB-01D - 01 02/03/12/13
DKB-01H - 00 00/01/10/11
DKB-02D - 03 06/07/16/17
DKB-02H - 02 04/05/14/15
DKB-11D - 05 0a/0b/1a/1b
DKB-11H - 04 08/09/18/19
DKB-12D - 07 0e/0f/1e/1f
DKB-12H - 06 0c/0d/1c/1d

[THEORY07-05-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-05-20

Table 7-7 Relationship between CHB/DKB and SASCTL#/PSW#, Port# (2/3)


CBX Pair Location Channel SAS CTL#/ SAS Port#/
Port# PSW# NVMe Port#
CBX Pair 1 CHB-21A 40 ~ 43 - -
CHB-21B 44 ~ 47 - -
CHB-21E 48 ~ 4b - -
CHB-21F 4c ~ 4f - -
CHB-22A 50 ~ 53 - -
CHB-22B 54 ~ 57 - -
CHB-22E 58 ~ 5b - -
CHB-22F 5c ~ 5f - -
CHB-31A 60 ~ 63 - -
CHB-31B 64 ~ 67 - -
CHB-31E 68 ~ 6b - -
CHB-31F 6c ~ 6f - -
CHB-32A 70 ~ 73 - -
CHB-32B 74 ~ 77 - -
CHB-32E 78 ~ 7b - -
CHB-32F 7c ~ 7f - -
DKB-21D - 09 22/23/32/33
DKB-21H - 08 20/21/30/31
DKB-22D - 0b 26/27/36/37
DKB-22H - 0a 24/25/34/35
DKB-31D - 0d 2a/2b/3a/3b
DKB-31H - 0c 28/29/38/39
DKB-32D - 0f 2e/2f/3e/3f
DKB-32H - 0e 2c/2d/3c/3d

[THEORY07-05-20]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-05-30

Table 7-8 Relationship between CHB/DKB and SASCTL#/PSW#, Port# (3/3)


CBX Pair Location Channel SAS CTL#/ SAS Port#/
Port# PSW# NVMe Port#
CBX Pair 2 CHB-41A 80 ~ 83 - -
CHB-41B 84 ~ 87 - -
CHB-41E 88 ~ 8b - -
CHB-41F 8c ~ 8f - -
CHB-42A 90 ~ 93 - -
CHB-42B 94 ~ 97 - -
CHB-42E 98 ~ 9b - -
CHB-42F 9c ~ 9f - -
CHB-51A a0 ~ a3 - -
CHB-51B a4 ~ a7 - -
CHB-51E a8 ~ ab - -
CHB-51F ac ~ af - -
CHB-52A b0 ~ b3 - -
CHB-52B b4 ~ b7 - -
CHB-52E b8 ~ bb - -
CHB-52F bc ~ bf - -
DKB-41D - 11 42/43/52/53
DKB-41H - 10 40/41/50/51
DKB-42D - 13 46/47/56/57
DKB-42H - 12 44/45/54/55
DKB-51D - 15 4a/4b/5a/5b
DKB-51H - 14 48/49/58/59
DKB-52D - 17 4e/4f/5e/5f
DKB-52H - 16 4c/4d/5c/5d

[THEORY07-05-30]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-06-10

7.6 CTL/MPU - MPU#, MP# Matrixes


Table 7-9 Relationship between MP Unit ID,MP Location and MP#
DKC# CTL Location MPU Location MPU# MP Location MP#
0 CTL01 MPU-010 0x00 MP010-00 ~ MP010-13 0x00 ~ 0x13
CTL02 MPU-020 0x01 MP020-00 ~ MP020-13 0x14 ~ 0x27
1 CTL11 MPU-110 0x02 MP110-00 ~ MP110-13 0x28 ~ 0x3B
CTL12 MPU-120 0x03 MP120-00 ~ MP120-13 0x3C ~ 0x4F
2 CTL21 MPU-210 0x04 MP210-00 ~ MP210-13 0x50 ~ 0x63
CTL22 MPU-220 0x05 MP220-00 ~ MP220-13 0x64 ~ 0x77
3 CTL31 MPU-310 0x06 MP310-00 ~ MP310-13 0x78 ~ 0x8B
CTL32 MPU-320 0x07 MP320-00 ~ MP320-13 0x8C ~ 0x9F
4 CTL41 MPU-410 0x08 MP410-00 ~ MP410-13 0xA0 ~ 0xB3
CTL42 MPU-420 0x09 MP420-00 ~ MP420-13 0xB4 ~ 0xC7
5 CTL51 MPU-510 0x0A MP510-00 ~ MP510-13 0xC8 ~ 0xDB
CTL52 MPU-520 0x0B MP520-00 ~ MP520-13 0xDC ~ 0xEF
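The MP# ranges in Table 7-9 follow a simple pattern: each MPU has 20 MPs (indexes 0x00 to 0x13) numbered consecutively across MPUs. The small sketch below shows that mapping; the formula is inferred from the table, so treat it as an assumption rather than a documented rule.

    MPS_PER_MPU = 0x14   # 20 MPs per MPU (MPxx0-00 to MPxx0-13)

    def global_mp_number(mpu_id, mp_index):
        # Map an MPU# and the MP index within the MPU to the system-wide MP#.
        assert 0 <= mp_index < MPS_PER_MPU
        return mpu_id * MPS_PER_MPU + mp_index

    # MPU-120 (MPU# 0x03), MP120-05 -> 0x3C + 5 = 0x41
    print(hex(global_mp_number(0x03, 0x05)))   # 0x41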

[THEORY07-06-10]
