THEORY OF OPERATION SECTION
[THEORY00-00-00]
Contents
1. Storage System Overview ..........................................................................................THEORY01-01-10
1.1 External View of Hardware...................................................................................THEORY01-01-10
1.2 Hardware Component ..........................................................................................THEORY01-02-10
1.3 Hardware Architecture .........................................................................................THEORY01-03-10
1.4 Network Topology.................................................................................................THEORY01-04-10
1.4.1 Management Software .................................................................................THEORY01-04-20
1.4.2 Maintenance Software..................................................................................THEORY01-04-30
1.5 Storage System Function Overview .....................................................................THEORY01-05-10
1.5.1 Basic Functions ............................................................................................THEORY01-05-10
1.5.2 Redundant Design........................................................................................THEORY01-05-50
1.5.3 Impact of Each Failure Part on Storage System ........................................THEORY01-05-100
[Figure: Controller Chassis front view (Controller Board, CFM) and rear view (BKMF, DKCPS, CHB, LAN Board, HIE, DKB)]
[THEORY01-02-10]
[Figure: Battery, BKMF, and CFM locations]
[THEORY01-02-20]
[Figure: Top view of the Controller Board showing the DIMM locations DIMM00 to DIMM03 (CMG0) and DIMM10 to DIMM13 (CMG1)]
• The DIMMs with location numbers DIMM0x belong to CMG0 (Cache Memory Group 0), and the DIMMs with DIMM1x belong to CMG1 (Cache Memory Group 1).
• Be sure to install DIMMs in CMG0.
• Install DIMMs of the same capacity in sets of four.
• CMG1 is the slot group for adding DIMMs.
[THEORY01-02-30]
NOTE: • The type (model name) of CFM-x10/x20 and CFM-x11/x21 (addition side) must match.
When adding Cache Memories, check the model name of CFM-x10/x20 and add the same model.
• When replacing Cache Flash Memories, the type (model name) defined in the configuration information must be matched.
Example: When the configuration information is defined as BM35, replacement with BM45, BM3E, or BM4E is not possible.
[THEORY01-02-40]
(4) Battery
The battery for data backup is installed in each Controller Chassis.
• When a power failure continues for more than 20 milliseconds, the Storage System uses power from the batteries to back up the Cache Memory data and the Storage System configuration data onto the Cache Flash Memory.
• An environmentally friendly nickel-metal hydride battery is used in the Storage System.
[THEORY01-02-50]
Table 1-4 Number of Installed DKBs and SAS Ports / NVMe Ports by CBX Configuration
VSP 5100, 5200 / VSP 5100H, 5200H (2 CBX, 2CBX-2CTL configuration):
Number of DKB/DKBN: 2 per CTL (4 per system); Number of SAS Ports: 8 per system; Number of NVMe Ports: 8 per system
VSP 5500, 5600 / VSP 5500H, 5600H (2 CBX):
Number of DKB/DKBN: 2 per CTL (8 per system); Number of SAS Ports: 16 per system; Number of NVMe Ports: 16 per system
VSP 5500, 5600 / VSP 5500H, 5600H (4 CBX):
Number of DKB/DKBN: 2 per CTL (8 or 16 per system); Number of SAS Ports: 16 or 32 per system; Number of NVMe Ports: 16 or 32 per system
VSP 5500, 5600 / VSP 5500H, 5600H (6 CBX):
Number of DKB/DKBN: 2 per CTL (8, 16, or 24 per system); Number of SAS Ports: 16, 32, or 48 per system; Number of NVMe Ports: 16, 32, or 48 per system
The drive-less configuration that does not require DKBs is also supported.
[THEORY01-02-60]
The CHB for Fibre Channel connection can support Shortwave or Longwave on a per-port basis, depending on the transceiver installed in each port.
Note that each CHB port is equipped with a Shortwave transceiver as standard.
To change a port to Longwave, an SFP for Longwave must be added.
[THEORY01-02-80]
*1: DKC-x
DKC No. (0, 1, 2, ........ , 5)
*2: CTLx
CTL No. (1, 2)
NOTE: The above illustrations are for VSP 5500, VSP 5500H, VSP 5600, and VSP 5600H.
For VSP 5100, VSP 5100H, VSP 5200, and VSP 5200H, only CTL01 and CTL12 are
installed.
[THEORY01-02-81]
[Figure: BKMF locations (BKMF-x10, BKMF-x12, BKMF-x20, BKMF-x22) and the ACLF label]
*1: DKC-x
DKC No. (0, 1, 2, ........ , 5)
*2: CTLx
CTL No. (1, 2)
NOTE: The above illustrations are for VSP 5600 and VSP 5600H. For VSP 5200 and VSP 5200H, only CTL01 and CTL12 are installed.
2. Drive Box
(1) Drive Box (DBS2)
The Drive Box (DBS2) is a chassis to install the 2.5-inch Disk Drives and the 2.5-inch Flash
Drives, and consists of two ENCs and two Power Supplies with a built-in cooling fan.
There are two types of DBS2. One contains 80 PLUS Gold level certified power supplies and the
other contains 80 PLUS Platinum level certified power supplies. There is no difference in usage
and specifications (dimensions and weight) between them. Only the energy efficiency of their
power supplies differs.
24 SFF HDDs can be installed. ENC and Power Supply take a duplex configuration.
12 LFF HDDs can be installed. ENC and Power Supply take a duplex configuration.
[THEORY01-02-90]
12 FMDs can be installed. ENC and Power Supply take a duplex configuration.
24 SFF Drives can be installed. ENC and Power Supply take a duplex configuration.
[THEORY01-02-91]
[Figure: General view of HSNBX-n, showing ISWn1, ISWn2, ISWPSn1, ISWPSn2, HSNPANELn, SVP, and SSVPn]
[THEORY01-02-100]
NOTE: The host name of the SVP is automatically set based on the IP address and other information. Do not change it to another host name.
*1: For the SVP micro-program version 90-04-01/00 or later, reports created by Storage
Navigator might not be able to be displayed on the Maintenance PC or client PC depending
on the Web browser version. Use the latest version of the Web browser. For the client PC, ask
the customer to do so. (Use the Maintenance PC or client PC on which the OS that supports
the latest version of the Web browser is installed.)
*2: Installed when the OS is Windows® 10 IoT Enterprise LTSC 2019 64bit.
*3: Installed when the OS is Windows® 10 IoT Enterprise LTSC 2021 64bit.
[THEORY01-02-110]
4. Drives
The drives supported by DKC910I are shown below.
[THEORY01-02-120]
Item: DKC-F2000-30RRWM
Flash Drive Model Name: KIOXIA SNB5C-R30RNC / Samsung SNM5C-R30RNC
User Capacity: 30095.90 GB
Form Factor: 2.5 inch
[THEORY01-02-150]
Item: DKC-F910I-1R5YVM
SCM Model Name: HGST / KIOXIA / Intel SPN5A-Y1R5NC
User Capacity: 1500.30 GB
Form Factor: 2.5 inch
[THEORY01-02-160]
[Figure: Hardware architecture diagram showing the ISW, front-end paths, X-paths, and back-end paths; DKUs (SBX/FBX and SBX/UBX/FBX) containing Drives accessed through ENCs; and a Shared DB Group]
[THEORY01-03-10]
GUM:
The GUM is a communication port that can be physically accessed from a management LAN port. When the storage system is not powered on but electricity is supplied to the Controller Chassis, the GUM can be accessed through the GUI or CLI. When the storage system is powered on, the GUM operates by sharing information with the micro-programs.
[Figure: Management network topology showing HSNBX0 (SVP, SSVP+HUB) and HSNBX1 (SVP, SSVP), the management LAN and maintenance LAN, and the maintenance/management tools (Storage Navigator, Web Console, Maintenance Utility, SVP window). The SVP and SSVP in the HSNBX are optional components.]
[THEORY01-04-10]
For VSP 5100, 5200/VSP 5100H, 5200H consisting of only CTL-1 and CTL-3, unlike the above figure,
GUM in CTL-3 needs to be connected to SSVP in HSNBX0. Furthermore, the maintenance LAN ports (not
illustrated in the above figure) on CTL-0 and CTL-3 need to be directly connected with each other by using
a LAN cable. When the optional SSVP is installed in HSNBX1, GUM in CTL-3 needs to be connected to the
optional SSVP in HSNBX1.
[THEORY01-04-11]
• Web Console
Storage management software used for storage system hardware management (setting configuration
information, defining logical devices, and displaying the statuses) and performance management (tuning).
Web Console can be used also for maintenance work. A remote login to the SVP is required for accessing
Web Console.
• SVP window
Used for status check, collection of dumps and logs, network settings, micro-program exchange, and so on.
The SVP window is started from Web Console. Maintenance Utility is started from SVP window.
• Maintenance Utility
Web application for storage system failure monitoring, replacement work, and so on. Maintenance Utility
is embedded in the GUM (Gateway for Unified Management) controller installed in the Controller Chassis.
Installation is not necessary. Maintenance Utility is started from the SVP window.
[THEORY01-04-30]
[Figure: Write data flow from the front end (CHB/HIE) through the ISW and the back-end paths to Drives in DBS2 or DBF3 boxes via ENC/EXP, showing front-end paths, X-paths, back-end paths, and a Shared DB Group]
NOTE: CTLs installed in VSP 5100, 5200/VSP 5100H, 5200H are only CTL-01 and CTL-12.
Therefore, unlike the above figure, the cache redundancy for VSP 5100, 5200/VSP
5100H, 5200H is configured in CTL-01 and CTL-12.
[THEORY01-05-10]
[Figure: Write data flow from the front end through the ISW to the CTLs, and through ENC/EXP to the Drives]
[THEORY01-05-20]
[Figure: Write data flow. Write data is stored in the Caches of the CTLs and written through the DKBs and ENC/EXP to the Drives]
[THEORY01-05-30]
The ENC for DBS2 (SBX), DBF3 (FBX), and DBN (NBX) has two EXP routes, and the EXP routes in the ENC are connected to each other. Two DKBs in a CBX pair are connected to one ENC by four logical paths. Even if one DKB is blocked, I/O can be continued by the other DKB because one ENC is connected to two DKBs.
The following is the logical path diagram when SBX or FBX is connected to SBX or FBX.
For NBX, a CBX pair can be connected to one Disk Unit (DKU) only and cannot be connected to other DKUs. Four logical paths are created by connecting the EXP routes as shown in the figure for SBX and FBX.
Figure 1-15 Logical Paths When SBX or FBX Is Connected to SBX or FBX
[Figure: Two DBS2 or DBF3 boxes (SBX or FBX), each with two ENC/EXP routes to its Drives, connected to the front end through the ISW; the logical paths are indicated]
[THEORY01-05-31]
UBX needs to be connected to SBX or FBX. Each ENC of DBL (UBX) is equipped with one EXP route,
so two DBLs need to be connected to one DBS2 or DBF3.
The following is the logical path diagram when UBX is connected to SBX or FBX.
[Figure: Logical paths when UBX is connected to SBX or FBX. The CHBs/HIEs, CTLs, Caches, and DKBs connect through a DBS2 or DBF3 (SBX or FBX) to two DBLs (UBX), each with one ENC/EXP route to its Drives]
[Figure: Write data flow through the Caches and DKBs to Drives in cascaded Drive Boxes, each accessed through ENC/EXP]
[THEORY01-05-40]
• Drive redundancy
RAID configurations composed of multiple drives prevent data from being lost in case of a drive failure.
RAID configurations can be kept even when a drive failure occurs, by installing spare drives in which data
is restored.
• SVP redundancy
Installing the SVP in each of two HSN Boxes provides redundant access to the maintenance and
management tools. One SVP operates as the Master SVP, and the other SVP operates as the Standby SVP. When the Master SVP fails, the Standby SVP automatically takes over from the Master SVP.
A redundant SVP to be installed in HSNBX-1 is not standard equipment but an optional component.
• X-path redundancy
Connecting two HSNBXs (four ISWs) and CBXs (HIEs) with X-path cables in mesh topology makes the
communication among Controller Boards have redundancy.
Even if a failure occurs in an X-path cable or HSNBX (ISW), the storage system can continue to operate.
*1: The data backup processing continues even if power is restored while the data is being backed up.
1. Battery lifetime
The battery lifetime is affected by the battery temperature. The battery temperature changes depending
on the intake temperature and installation altitude of the storage system, the configuration and operation
of the Controller Chassis, charge-discharge count, and individual differences of batteries. Therefore, the
battery lifetime varies in the range between three and five years.
The battery lifetime (estimated value) in the standard environment is shown below.
[THEORY01-05-60]
Power Status / SM/CM Data Backup Methods / Data Restore Methods during Restart
1. PS OFF (planned power off)
Backup: SM data (including CM directory information) is stored in CFM before PS OFF is completed. If PIN data exists, all the CM data including the PIN data is also stored.
Restore: SM data is restored from CFM. If CM data was stored, CM data is also restored from CFM.
2. When power outage occurs - Instant power outage
Backup: If power is recovered in a moment, SM/CM data remains in memory and is not stored in CFM.
Restore: SM/CM data in memory is used.
3. When power outage occurs - Power outage while the system is in operation
Backup: All the SM/CM data is stored in CFM. However, if a power outage occurs after the system starts up in the condition of <Case 2> and before the battery charge level of <Case 3> is restored, only SM data is stored (see 2. Relation between Battery Charge Level and System Startup Action).
Restore: All the SM/CM data is restored from CFM. If CM data was not stored, only the CM data is volatilized and the system starts up.
4. When power outage occurs - Power outage while the system is starting up
Backup: Data storing in CFM is not done. (The latest backup data that was successfully stored remains.)
Restore: The data that was stored in the latest power off operation or power outage is restored from CFM.
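The backup and restore behavior in the table above can be summarized as a simple decision flow. The following Python sketch is illustrative only; the case names and function are assumptions made for explanation and do not correspond to any interface of the storage system micro-program.

from enum import Enum

class PowerEvent(Enum):
    PLANNED_PS_OFF = 1         # Case 1
    INSTANT_OUTAGE = 2         # Case 2
    OUTAGE_IN_OPERATION = 3    # Case 3
    OUTAGE_DURING_STARTUP = 4  # Case 4

def backup_action(event, battery_charge_restored=True):
    # Returns a summary of what is stored in the CFM for each case in the table above.
    if event is PowerEvent.PLANNED_PS_OFF:
        return "Store SM data in CFM (and all CM data if PIN data exists) before PS OFF completes."
    if event is PowerEvent.INSTANT_OUTAGE:
        return "Store nothing; SM/CM data remains in memory."
    if event is PowerEvent.OUTAGE_IN_OPERATION:
        # Before the battery charge level of <Case 3> is restored, only SM data can be protected.
        return "Store all SM/CM data in CFM." if battery_charge_restored else "Store SM data only in CFM."
    return "Store nothing; the latest successfully stored backup remains in CFM."

for ev in PowerEvent:
    print(ev.name, "->", backup_action(ev))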
[THEORY01-05-80]
2. Hardware Specifications
2.1 Storage System Specifications
Table 2-1 and Table 2-2 show the storage system specifications.
Table 2-1 Storage System Specifications (VSP 5100/VSP 5100H/VSP 5500/VSP 5500H)
The four columns below are: VSP 5500/VSP 5500H (6CBX) | VSP 5500/VSP 5500H (4CBX) | VSP 5500/VSP 5500H (2CBX) | VSP 5100/VSP 5100H (2CBX, 2CTL).
Number of 2.5-inch Drives: Minimum 0; Maximum 2,304 | 1,536 | 768 | 768
Number of 3.5-inch Drives (*1): Minimum 0; Maximum 1,152 | 768 | 384 | 384
Number of Flash Module Drives: Minimum 0; Maximum 576 | 384 | 192 | 192
Number of NVMe SSDs: Minimum 0; Maximum 288 | 192 | 96 | 96
Number of NVMe SCMs: Minimum 0; Maximum 33 (*10) for all configurations
RAID Level: RAID6/RAID5/RAID1 (*9)
RAID Group Configuration: RAID6: 6D+2P, 14D+2P; RAID5: 3D+1P, 7D+1P; RAID1: 2D+2D
Maximum Number of Spare Disk Drives: 192 (*2) | 128 (*2) | 64 (*2) | 64 (*2)
Maximum Number of Volumes: 65,280
Maximum Storage System Capacity (Physical Capacity) with 30 TB 2.5-inch SAS SSDs: 61.5 PiB | 41.0 PiB | 20.5 PiB | 20.5 PiB
Maximum Storage System Capacity (Physical Capacity) with 30 TB 2.5-inch NVMe SSDs: 7.6 PiB | 5.1 PiB | 2.5 PiB | 2.5 PiB
Maximum External Configuration: 255 PiB
Maximum Number of DBs: DBS2/DBL: 96 | 64 | 32 | 32; DBF3: 48 | 32 | 16 | 16; DBN: 12 | 8 | 4 | 4
Cache Memory Capacity: 1,536 GiB to 6,144 GiB | 1,024 GiB to 4,096 GiB | 512 GiB to 2,048 GiB | 256 GiB to 1,024 GiB
Cache Flash Memory Type: BM35/BM45/BM3E/BM4E
(To be continued)
[THEORY02-01-10]
*8: It is recommended to install the storage system in a computer room in a data center or the like.
It is possible to install the storage system in a general office; however, take measures against noise as required.
When you replace an old Hitachi storage system with a new one in a general office, note the following in particular when taking measures against noise.
The cooling fans in the storage system are downsized to increase the density of the storage system. As a result, the fan rotation speed is higher than before to maintain the cooling performance. Therefore, high-frequency content accounts for a large portion of the noise.
*9: RAID1 supported by these storage systems is commonly referred to as RAID1+0. RAID1+0
mirrors blocks across two drives and then creates a striped set across multiple drive pairs.
In this manual, the above RAID level is referred to as RAID1.
*10: The maximum number of SCMs that can be controlled per storage system is shown.
[THEORY02-01-40]
Table 2-2 Storage System Specifications (VSP 5200/VSP 5200H/VSP 5600/VSP 5600H)
The four columns below are: VSP 5600/VSP 5600H (6CBX) | VSP 5600/VSP 5600H (4CBX) | VSP 5600/VSP 5600H (2CBX) | VSP 5200/VSP 5200H (2CBX, 2CTL).
Number of 2.5-inch Drives: Minimum 0; Maximum 2,304 | 1,536 | 768 | 768
Number of 3.5-inch Drives (*1): Minimum 0; Maximum 1,152 | 768 | 384 | 384
Number of NVMe SSDs: Minimum 0; Maximum 288 | 192 | 96 | 96
Number of NVMe SCMs: Minimum 0; Maximum 33 (*10) for all configurations
RAID Level: RAID6/RAID5/RAID1 (*9)
RAID Group Configuration: RAID6: 6D+2P, 14D+2P; RAID5: 3D+1P, 7D+1P; RAID1: 2D+2D
Maximum Number of Spare Disk Drives: 192 (*2) | 128 (*2) | 64 (*2) | 64 (*2)
Maximum Number of Volumes: 65,280
Maximum Storage System Capacity (Physical Capacity) with 30 TB 2.5-inch SAS SSDs: 61.5 PiB | 41.0 PiB | 20.5 PiB | 20.5 PiB
Maximum Storage System Capacity (Physical Capacity) with 30 TB 2.5-inch NVMe SSDs: 7.6 PiB | 5.1 PiB | 2.5 PiB | 2.5 PiB
Maximum External Configuration: 255 PiB
Maximum Number of DBs: DBS2/DBL: 96 | 64 | 32 | 32; DBN: 12 | 8 | 4 | 4
Cache Memory Capacity: 1,536 GiB to 6,144 GiB | 1,024 GiB to 4,096 GiB | 512 GiB to 2,048 GiB | 256 GiB to 1,024 GiB
Cache Flash Memory Type: BM95/BM9E
(To be continued)
[THEORY02-01-50]
*8: It is recommended to install the storage system in a computer room in a data center or the like.
It is possible to install the storage system in a general office; however, take measures against noise as required.
When you replace an old Hitachi storage system with a new one in a general office, note the following in particular when taking measures against noise.
The cooling fans in the storage system are downsized to increase the density of the storage system. As a result, the fan rotation speed is higher than before to maintain the cooling performance. Therefore, high-frequency content accounts for a large portion of the noise.
*9: RAID1 supported by these storage systems is commonly referred to as RAID1+0. RAID1+0
mirrors blocks across two drives and then creates a striped set across multiple drive pairs.
In this manual, the above RAID level is referred to as RAID1.
*10: The maximum number of SCMs that can be controlled per storage system is shown.
[THEORY02-01-80]
[Figure: Power cord connections of the Controller Chassis (DKCPS-1/DKCPS-2 for CTL1/CTL2, with C14 inlets AC0 (*1) and AC1 (*1) to the PDUs), the Drive Box (ENC power supplies to the PDUs), and the HSNBX (ISWPS PS1/PS2 for ISW1/ISW2, with C14 inlets AC0 (*1) and AC1 (*1) to the PDUs); power cord plugs are indicated]
[THEORY02-02-20]
• CBX/HSNBX/DBS2/DBL/DBF3/DBN
Input Voltage: 200V to 240V; Voltage Tolerance: +10% or -11%; Frequency: 50Hz ±2Hz or 60Hz ±2Hz; Wire Connection: 1 Phase 2 Wire + Ground
[THEORY02-02-30]
1. Environmental Conditions
Condition: Non-Operating (*2)
Model Name: CBX | DBS2/DBL/DBF3/DBN | HSNBX
Temperature range (ºC) (*9): -10 to 50 (all models)
Relative humidity (%) (*4): 8 to 90 (all models)
Maximum wet-bulb temperature (ºC): 29 (all models)
Temperature gradient (ºC/hour): 10 (all models)
Dust (mg/m3): —
Gaseous contaminants (*6): G1 classification levels
Altitude (m): -60 to 12,000 (all models)
[THEORY02-03-10]
Condition: Transportation, Storage (*3)
Model Name: CBX | DBS2/DBL/DBF3/DBN | HSNBX
Temperature range (ºC) (*10): -30 to 60 (all models)
Relative humidity (%) (*4): 5 to 95 (all models)
Maximum wet-bulb temperature (ºC): 29 (all models)
Temperature gradient (ºC/hour): 10 (all models)
Dust (mg/m3): —
Gaseous contaminants (*6): —
Altitude (m): -60 to 12,000 (all models)
*1: Storage system which is ready for being powered on
*2: Including packed and unpacked storage systems
*3: Storage system packed for shipping
*4: No dew condensation is allowed.
*5: Fire suppression systems and acoustic noise:
Some data center inert gas fire suppression systems when activated release gas from pressurized
cylinders that moves through the pipes at very high velocity. The gas exits through multiple
nozzles in the data center. The release through the nozzles could generate high-level acoustic
noise. Similarly, pneumatic sirens could also generate high-level acoustic noise. These acoustic
noises may cause vibrations in the hard disk drives in the storage systems, resulting in I/O
errors, performance degradation, and in some cases damage to the hard disk drives. Hard
disk drive (HDD) noise level tolerance may vary among different models, designs, capacities,
and manufacturers. The acoustic noise level of 90dB or less in the operating environment table
represents the current operating environment guidelines in which Hitachi storage systems are
designed and manufactured for reliable operation when placed 2 meters from the source of the
noise.
Hitachi does not test storage systems and hard disk drives for compatibility with fire suppression
systems and pneumatic sirens. Hitachi also does not provide recommendations or claim
compatibility with any fire suppression systems and pneumatic sirens. Customer is responsible to
follow their local or national regulations.
To prevent unnecessary I/O error or damages to the hard disk drives in the storage systems, Hitachi
recommends the following options:
(1) Install noise-reducing baffles to mitigate the noise to the hard disk drives in the storage
systems.
(2) Consult the fire suppression system manufacturers on noise reduction nozzles to reduce the
acoustic noise to protect the hard disk drives in the storage systems.
(3) Locate the storage system as far as possible from noise sources such as emergency sirens.
(4) If it can be safely done without risk of personal injury, shut down the storage systems to avoid
data loss and damages to the hard disk drives in the storage systems.
DAMAGE TO HARD DISK DRIVES FROM FIRE SUPPRESSION SYSTEMS OR
PNEUMATIC SIRENS WILL VOID THE HARD DISK DRIVE WARRANTY.
[THEORY02-03-20]
*6: See ANSI/ISA-71.04-2013 Environmental Conditions for Process Measurement and Control
Systems: Airborne Contaminants.
*7: Meets the highest allowable temperature conditions and complies with ASHRAE (American
Society of Heating, Refrigerating and Air-Conditioning Engineers) 2011 Thermal Guidelines Class
A2. The maximum value of the ambient temperature and the altitude is from 35 degrees C at an
altitude of 950 meters (3000 feet) to 28 degrees C at an altitude of 3050 meters (10000 feet).
The allowable ambient temperature is decreased by 1 degree C for every 300-meter increase in
altitude above 950 meters.
*8: The system monitors the intake temperature and the internal temperature of the Controller and the
Power Supply. It executes the following operations in accordance with the temperatures.
(2) DBS2
• If the internal temperature of the Power Supply rises to 65 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 75 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.
(3) DBL
• If the internal temperature of the Power Supply rises to 55 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 64.5 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.
[THEORY02-03-30]
(4) DBF3
• If the internal temperature of the Power Supply rises to 68 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 78 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.
(5) DBN
• If the internal temperature of the Power Supply rises to 65 degrees C or higher, the DB external
temperature warning (SIM-RC = af7000) is notified.
• If the internal temperature of the Power Supply rises to 75 degrees C or higher, the DB external
temperature alarm (SIM-RC = af7100) is notified.
(6) HSNBX
• If the use environment temperature rises to 50 degrees C or higher, the HSNBX ambient
temperature warning (SIM-RC = afb0xx) is notified.
*9: If the storage system is stored at temperatures lower than 40 degrees C after SSDs are installed
in the storage system, power on the storage system within three months. If the storage system is
stored at 40 degrees C or higher after SSDs are installed in the storage system, power on the storage
system within two weeks.
*10: Regarding transportation and storage for relocation, if the storage system in which SSDs are
installed is stored at temperatures lower than 40 degrees C, do not leave the storage system
powered off for three months or more. If the storage system in which SSDs are installed is stored
at 40 degrees C or higher, do not leave the storage system powered off for two weeks or more.
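The altitude derating rule described in *7 above can be expressed as a simple calculation. The following Python sketch assumes a linear derating of 1 degree C per 300 meters above 950 meters, as stated in *7; it is an illustration, not an official sizing tool.

def max_ambient_temp_c(altitude_m):
    # 35 degrees C up to 950 m, decreasing by 1 degree C per 300 m above 950 m,
    # down to 28 degrees C at 3050 m (ASHRAE Class A2 guideline described in *7).
    if altitude_m <= 950:
        return 35.0
    return max(35.0 - (altitude_m - 950) / 300.0, 28.0)

for alt in (0, 950, 1250, 2000, 3050):
    print(f"{alt} m -> {max_ambient_temp_c(alt):.1f} degrees C")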
[THEORY02-03-40]
Port-WWN
Byte 7 6 5 4 3 2 1 0
Value 50 06 0E 80 Y8 NN NN PP
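As an illustration of the byte layout above, the following Python sketch splits a port WWN into its fixed prefix (50 06 0E 80) and the remaining Y, NN, and PP fields. The meanings of Y, NN, and PP are not defined in this excerpt, so the field names in the sketch are assumptions for illustration only.

def split_port_wwn(wwn_hex):
    # Decompose an 8-byte port WWN of the form 50 06 0E 80 Y8 NN NN PP.
    b = bytes.fromhex(wwn_hex)
    if len(b) != 8 or b[:4] != bytes.fromhex("50060E80"):
        raise ValueError("not a port WWN with the 50 06 0E 80 prefix")
    return {
        "prefix": b[:4].hex().upper(),   # fixed value 50060E80
        "Y": b[4] >> 4,                  # high nibble of the Y8 byte
        "NN": b[5:7].hex().upper(),      # the two NN bytes
        "PP": f"{b[7]:02X}",             # the PP byte
    }

print(split_port_wwn("50060E8008123400"))  # hypothetical example value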
[THEORY02-04-20]
FICON specification
Table 2-8 FICON support DKC specification
Support DKC emulation type: I-2107
Range of CU address: 0 to FE (*1)
Number of logical volumes: 1 to 65280 (*2)
Number of connectable channel ports: 8 to 192 (*3), 8 to 128 (*4), 8 to 64 (*5)
Support fibre channel Bandwidth: 4Mx16: 4Gbps/8Gbps/16Gbps; 4Mx32: 8Gbps/16Gbps/32Gbps (*6) (*7) (*8) (*9)
Cable and connector: LC-Duplex
Mode: Single Mode Fibre/Multi Mode Fibre
*1: When the number of CUs per FICON channel (CHPID) exceeds the limitation, there is a
possibility that HOST OS IPL fails.
*2: Number of logical volumes connectable to the one FICON channel (CHPID) is 16384.
*3: In the 3 CBX Pairs configuration
*4: In the 2 CBX Pairs configuration
*5: In the 1 CBX Pair configuration
*6: The IBM Z host server is supported for z14 and later.
*7: z/TPF is not supported.
*8: Hitachi hosts are not supported.
*9: The mixed configurations that share volumes with IBM/Hitachi hosts are not supported.
[THEORY02-05-20]
*1: Execute this after deleting the logical paths from the host with the CHPID OFFLINE
operation. If this operation is not executed, Incident log may be reported.
*2: Alternate method using switch/director configuration.
<Operating procedure from switch control window (Example: in the Brocade Network
Advisor (BNA))>
(a) Block the associated outbound switch/director port to the CHB interface that is
currently Negotiated to not the fastest speed (Example: 8Gbps).
(b) Change the port speed setting from Negotiate mode to Fastest speed fix mode
(Example: 32Gbps mode for 4Mx32 and 16Gbps mode for 4Mx16) , then from Fastest
speed fix mode (Example: 32Gbps mode for 4Mx32 and 16Gbps mode for 4Mx16)
back to Negotiate mode in the switch/director port configuration window.
(c) Unblock the switch/director port.
(d) Confirm that an Online and Fastest speed (Example: 32Gbps for 4Mx32 and
16Gbps for 4Mx16) link is established without errors on the switch/director port
status window.
*3: From the menu of the Maintenance Utility (Sub Panel) window, select [Display]-[Mainframe
Path...].
[THEORY02-05-30]
[Figure: Mainframe fibre 4-port adapter structure: two CHBs, each with two HTPs (two cores each), connected through the HIEs to the MPUs]
The mainframe fibre 4 port adapter can access all processors from the four ports. Regardless of the locations of the used ports and the number of ports, it is always possible to perform processing by using all processors. An HTP is shared by two ports, so if you use ports of different HTPs (for example, Port 1A and 3A), the throughput performance of one path is better than when using ports of the same HTP (for example, Port 1A and 5A).
In addition to the package structure described above, power redundancy is provided by using the cluster configurations.
Considering the structures and performance, we recommend that you set paths based on the following priorities when configuring alternate paths.
[THEORY02-05-40]
[Figure: Mainframe fibre 4-port adapter structure: two CHBs, each with four MFMHs (one core each), connected through the HIEs to the MPUs]
The mainframe fibre 4 port adapter can access all processors from the four ports. Regardless of the locations of the used ports and the number of ports, it is always possible to perform processing by using all processors.
However, because some of the internal management information resources are shared by two ports, a combination of two adjacent ports (for example, Port 1A and 3A) provides better throughput performance per path than a combination of ports that share those resources (for example, Port 1A and 5A), because resource contention is avoided.
In addition to the package structure described above, power redundancy is provided by using the cluster configurations.
Considering the structures and performance, we recommend that you set paths based on the following priorities when configuring alternate paths.
[THEORY02-05-41]
(4) Notes on connection with an HBA that supports FICON Express16S series or FICON Express32S
Do not install or uninstall the P.P. license of High Performance Connectivity for FICON (R)
when paths are online.
For the online path connected with an HBA that supports IBM FICON Express16S series or
FICON Express32S, the logical path is temporarily released by installing or uninstalling the
P.P. license of Compatible High Performance Connectivity for FICON (R) . The logical path is
restored by the recovery of the host. However, there is a possibility that all logical paths of the path
group are released depending on the timing and a system down might be caused.
Occurrence condition:
• Mainframe path connected with an HBA that supports FICON Express16S series or FICON
Express32S.
• The path is online with the host.
• The P.P. license of Compatible High Performance Connectivity for FICON (R) is installed or
uninstalled.
[THEORY02-05-50]
The combination of the connection destination, FEC/TTS setting, and linkup speed is shown below.
• For 32Gbps
No. 1: Connection destination: HBA that supports FICON Express32S; Link setting: Auto; FEC/TTS setting: enable; Linkup speed: 8/16/16 FEC/32 FEC
No. 2: Connection destination: 32G Switch; Link setting: Auto; FEC/TTS setting: disable; Linkup speed: 8/16
No. 3: Connection destination: 32G Switch; Link setting: Auto; FEC/TTS setting: enable; Linkup speed: 8/16 FEC/32 FEC
No. 4: Connection destination: 32G Switch; Link setting: 32G Fix; FEC/TTS setting: disable; Linkup speed: -
No. 5: Connection destination: 32G Switch; Link setting: 32G Fix; FEC/TTS setting: enable; Linkup speed: 32 FEC
[THEORY02-05-60]
Table 2-12 List of Allowable Maximum Values of Mainframe Host Connection Interface Items
on the DKC Side
Item Fibre channel
Maximum number of CUs 255
Maximum number of SSIDs 1020
Maximum number of LDEVs 65280
Table 2-13 Allowable Range of Mainframe Host Connection Interface Items on DKC Side
Item Fibre channel
CU address 0 to FE (*1)
SSID 0004 to FFFD (*2)
Number of logical volumes 1 to 65280 (*3)
*1: Number of CUs connectable to the one FICON channel (CHPID) is 64 or 255.
In the case of 2107 emulation, the CU addresses in the interface with a host are 00 to FE for the
FICON channel.
*2: In the case of 2107 emulation, the SSID in the interface with a host is 0x0004 to 0xFEFF.
*3: Number of logical volumes connectable to one FICON channel (CHPID) is 32768.
NOTE: If you use a PPRC command and specify 0xFFXX as the SSID of the MCU and RCU, the command may be rejected. Specify 0x0004 to 0xFEFF as the SSID of the MCU and RCU.
XP8 cannot assign SSIDs 0x0001 to 0x0003 because XP8 uses them internally. When setting SSIDs for a mainframe, follow the SSID range handled by the mainframe.
If a specified value of SSID is out of the allowable range, the mainframe host might not be able
to use the volume. Specify an SSID value within the allowable range for the volume that the
mainframe host accesses.
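The SSID constraints in the table and NOTE above can be checked with a small helper. The following Python sketch is an illustration based on the stated ranges (allowable 0x0004 to 0xFFFD, 0x0001 to 0x0003 reserved internally, and 0xFFxx values liable to be rejected by PPRC commands); it is not a product utility.

def check_ssid(ssid):
    # Returns a short judgment string for a candidate SSID value.
    if not (0x0004 <= ssid <= 0xFFFD):
        return "out of the allowable range (0x0004 to 0xFFFD)"
    if ssid >= 0xFF00:
        return "allowed, but 0xFFxx may be rejected when used with PPRC commands"
    return "OK"

for s in (0x0003, 0x0004, 0xFEFF, 0xFF10):
    print(f"0x{s:04X}: {check_ssid(s)}")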
Detailed numbers of logical paths of the mainframe fibre and serial channels are shown in Table 2-14.
[THEORY02-06-10]
TrueCopy for Mainframe operations from a Web Console and the corresponding TSO commands are
shown in Table 2-15. Before using TSO commands or DSF commands for PPRC, the serial interface ports
to which the RCU(s) will be connected must be set to the Bidirectional mode.
Table 2-17 shows the value of the SAID (system adapter ID) parameters required for CESTPATH
command. For full description on TSO commands or DSF commands for PPRC, refer to the appropriate
manuals published by IBM corporation.
Table 2-15 TrueCopy for Mainframe operations and corresponding TSO commands for
PPRC
Function TC-MF operations TSO commands
Registering an RCU and establishing remote Add RCU CESTPATH (NOTE)
copy connections
Adding or removing remote copy connection(s) Edit Path CESTPATH
Deleting an RCU registration Delete RCU CDELPATH
Establishing a TC-MF volume pair Add Pair CESTPAIR MODE (COPY)
Suspending a TC-MF volume pair Suspend Pair CSUSPEND
Disestablishing a TC-MF volume pair Delete Pair CDELPAIR
Recovering a TC-MF volume pair from the suspended condition Resume Pair CESTPAIR MODE (RESYNC)
Controlling TC-MF volume groups CGROUP
[THEORY02-06-20]
SAID setting values for the DKC emulation type 2107 are shown in Table 2-16.
Table 2-16 SAID Setting Values for DKC Emulation Type 2107
DKC CHB Port SAID CHB Port SAID
Location Location (HEX) Location Location (HEX)
DKC-0 1A 1A 0000 2A 1B 0010
3A 0001 3B 0011
5A 0002 5B 0012
7A 0003 7B 0013
1B 1C 0004 2B 1D 0014
3C 0005 3D 0015
5C 0006 5D 0016
7C 0007 7D 0017
1E 1E 0008 2E 1F 0018
3E 0009 3F 0019
5E 000A 5F 001A
7E 000B 7F 001B
1F 1G 000C 2F 1H 001C
3G 000D 3H 001D
5G 000E 5H 001E
7G 000F 7H 001F
DKC-1 1A 2A 0020 2A 2B 0030
4A 0021 4B 0031
6A 0022 6B 0032
8A 0023 8B 0033
1B 2C 0024 2B 2D 0034
4C 0025 4D 0035
6C 0026 6D 0036
8C 0027 8D 0037
1E 2E 0028 2E 2F 0038
4E 0029 4F 0039
6E 002A 6F 003A
8E 002B 8F 003B
1F 2G 002C 2F 2H 003C
4G 002D 4H 003D
6G 002E 6H 003E
8G 002F 8H 003F
(To be continued)
[THEORY02-06-30]
Table 2-17 SAID Setting Values Used for CESTPATH Command Parameter
DKC CHB Port SAID CHB Port SAID CHB Port SAID CHB Port SAID
Location Location (HEX) Location Location (HEX) Location Location (HEX) Location Location (HEX)
DKC-0 1A 1A 0000 1B 1C 0002 1E 1E 0004 1F 1G 0006
3A 0020 3C 0022 3E 0024 3G 0026
5A 0040 5C 0042 5E 0044 5G 0046
7A 0060 7C 0062 7E 0064 7G 0066
2A 1B 0001 2B 1D 0003 2E 1F 0005 2F 1H 0007
3B 0021 3D 0023 3F 0025 3H 0027
5B 0041 5D 0043 5F 0045 5H 0047
7B 0061 7D 0063 7F 0065 7H 0067
DKC-1 1A 2A 0010 1B 2C 0012 1E 2E 0014 1F 2G 0016
4A 0030 4C 0032 4E 0034 4G 0036
6A 0050 6C 0052 6E 0054 6G 0056
8A 0070 8C 0072 8E 0074 8G 0076
2A 2B 0011 2B 2D 0013 2E 2F 0015 2F 2H 0017
4B 0031 4D 0033 4F 0035 4H 0037
6B 0051 6D 0053 6F 0055 6H 0057
8B 0071 8D 0073 8F 0075 8H 0077
DKC-2 1A 1J 0008 1B 1L 000A 1E 1N 000C 1F 1Q 000E
3J 0028 3L 002A 3N 002C 3Q 002E
5J 0048 5L 004A 5N 004C 5Q 004E
7J 0068 7L 006A 7N 006C 7Q 006E
2A 1K 0009 2B 1M 000B 2E 1P 000D 2F 1R 000F
3K 0029 3M 002B 3P 002D 3R 002F
5K 0049 5M 004B 5P 004D 5R 004F
7K 0069 7M 006B 7P 006D 7R 006F
DKC-3 1A 2J 0018 1B 2L 001A 1E 2N 001C 1F 2Q 001E
4J 0038 4L 003A 4N 003C 4Q 003E
6J 0058 6L 005A 6N 005C 6Q 005E
8J 0078 8L 007A 8N 007C 8Q 007E
2A 2K 0019 2B 2M 001B 2E 2P 001D 2F 2R 001F
4K 0039 4M 003B 4P 003D 4R 003F
6K 0059 6M 005B 6P 005D 6R 005F
8K 0079 8M 007B 8P 007D 8R 007F
(To be continued)
[THEORY02-06-60]
[Figure: Management port and Maintenance port locations]
[THEORY02-07-10]
Terms
MP
Microprocessor. The processor that executes storage processing.
GUM
Gateway for Unified Management. The system that controls the storage configuration and so on. The GUM provides the Maintenance Utility as a GUI.
H/B
Heart Beat. Used to check the state (alive or not) and normality by communicating and responding
periodically.
PCI
Peripheral Component Interconnect. The circuit that performs data transmission (sending and receiving)
among processors and peripheral devices.
[THEORY02-07-20]
3. Software Specifications
3.1 Micro-program and Program Product
Software can be categorized into two basic types: micro-program and program product (PP). The micro-
program is the essential software for controlling the DKC910I storage system while the PPs provide a variety
of storage system functions to customers. Customers can select a suitable PP software package, in which
the same kinds of PPs are packaged together, according to their needs.
A host can read and write the data on the storage system by installing PPs in addition to the micro-program.
PPs also allow customers to handle volumes of external storage systems virtually on the DKC910I storage
system, copy volumes, and make use of various features. For the list of PPs, see THEORY03-03-10.
Upgrades of the micro-program versions are performed by a maintenance person. Micro-program upgrades
are automatically applied to all DKCs. Therefore, a maintenance person does not need to upgrade the micro-
program for each DKC.
PPs chosen by a customer are installed in the storage system before shipment. If the customer wants to add
PPs, the customer or system administrator needs to install the license keys for the PPs. The license keys are
applied to the whole storage system.
[THEORY03-01-10]
A pool VOL is a logical volume comprised of multiple drives and is a component of a pool.
A pool is a virtual area composed of one or more pool VOLs. A pool capacity can be expanded by
adding pool VOLs to the pool. Creating DP-VOLs from a pool allows you to allocate volumes to a host
without considering physical drives.
[Figure: Relationship between the host, DP-VOLs, the pool, pool VOLs, and physical drives]
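The relationship between pool VOLs, a pool, and DP-VOLs can be pictured as follows. The class and attribute names in this Python sketch are illustrative assumptions, not a storage system API; it only shows that the pool capacity is the sum of its pool VOLs and that a DP-VOL's virtual capacity is defined independently of the physical drives.

class Pool:
    def __init__(self):
        self.pool_vol_capacities_gb = []          # physical capacity of each pool VOL

    def add_pool_vol(self, capacity_gb):
        self.pool_vol_capacities_gb.append(capacity_gb)   # adding a pool VOL expands the pool

    @property
    def capacity_gb(self):
        return sum(self.pool_vol_capacities_gb)

class DPVol:
    def __init__(self, pool, virtual_capacity_gb):
        # Pages are allocated from the pool on demand, so the virtual capacity
        # can exceed the physical capacity currently in the pool.
        self.pool = pool
        self.virtual_capacity_gb = virtual_capacity_gb

pool = Pool()
pool.add_pool_vol(1200.0)
pool.add_pool_vol(1200.0)
vol = DPVol(pool, virtual_capacity_gb=8000.0)     # hypothetical sizes
print(pool.capacity_gb, vol.virtual_capacity_gb)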
[THEORY03-02-20]
[Figure: A host connected to ports on the CHBs of the storage system]
• Host group
A group of hosts that are connected to the same port of the storage system and operate on the same
platform is referred to as host group. To connect a host to the storage system, register the host to a
host group, associate the host group to a port, and then allocate logical volumes to the combination of
the host group and the port.
• NVM subsystem
The NVM subsystem is the control system of the flash memory storage using the NVMe protocol that
has one or more namespaces and one or more communication ports. The NVM subsystem is defined as
a logical resource under which storage system logical volumes, channel ports, and NVMe connection
hosts that use the logical volumes are grouped.
• Namespace
The namespace is a flash memory space formatted in logical blocks. By defining logical volumes in the
storage system as namespaces, the host can use the logical volumes as the ones supporting NVMe.
[THEORY03-02-30]
• CLPR
The cache memory can be logically divided. Each cache partition after the division is referred to as
CLPR. Allocating CLPRs to DP-VOLs and parity groups prevents a host from occupying a large part
of the cache memory.
[THEORY03-02-31]
[Figure: A remote copy pair: the primary volume and secondary volume are connected by a remote path between CHB ports, with the copy direction from primary to secondary]
[THEORY03-02-40]
4. Maintenance Work
4.1 Overview of Maintenance Work
Maintenance work of the storage system includes addition and removal of optional components, preventive
replacement of installed components, change of setting information, and micro-program update, as well as
troubleshooting. Maintenance work can be performed while the storage system is operating.
Troubleshooting must be started as soon as a failure notification from a customer or a report from the remote
monitoring system is received. A maintenance person isolates a failed part by analyzing the notification or
report, and then performs recovery actions according to the troubleshooting workflow. Recovery actions for
some types of failures might need to be performed by a customer.
Maintenance work other than troubleshooting is performed upon a request from the Technical Support
Division.
If a failure occurs, the system must be quickly restored from the failure. The storage system that features a
redundant configuration can operate even after a failure occurs, but the redundancy becomes incomplete. If
another failure occurs in a part operating in a normal state before the storage system is restored, a system
down might be caused.
[THEORY04-01-10]
4.2 Maintenance Management Tools for Maintenance Person and Their Usage
The following maintenance management tools installed in the SVP are used for maintenance work of the
storage system. The tools are operated by remotely connecting to the SVP.
[THEORY04-02-10]
Failure handling workflow:
• Is a SIM-RC (*1) reported?
- No: END.
- Yes: Does a recovery procedure in the TROUBLESHOOTING SECTION need to be performed for the SIM-RC?
- Yes: Perform the recovery procedure in the TROUBLESHOOTING SECTION.
- No: Check the ACCs (*2) to identify the components to be replaced, and then replace the components.
• END
*1: SIM-RC is a reference code that represents an error name, and is viewed in the SVP window.
*2: ACC is a code that indicates a location of a failed part, and is displayed with SIM-RC.
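The decision flow above can also be written as a short routine. The following Python sketch is illustrative; the inputs are simplified yes/no answers and do not correspond to an actual SVP interface.

def next_action(sim_rc_reported, recovery_procedure_required):
    # Mirrors the workflow: SIM-RC reported? -> recovery procedure needed? -> action.
    if not sim_rc_reported:
        return "END (no SIM-RC reported)"
    if recovery_procedure_required:
        return "Perform the recovery procedure in the TROUBLESHOOTING SECTION."
    return "Check the ACCs, identify the components to be replaced, and replace them."

print(next_action(True, False))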
[THEORY04-03-10]
• Perform maintenance work when a customer is not changing a storage system configuration.
• Do not start Maintenance Utility by entering an IP address of a CTL in your browser's address bar.
• Maintenance work can be generally performed without stopping I/O. However, in some cases, depending
on the failure, maintenance procedure, and configuration, I/O needs to be stopped. If the maintenance
manual instructs you to stop I/O, ask your customer to stop I/O.
• A customer might change the password for the maintenance account of the storage system after the storage
system is installed. Ask the customer about the password for the maintenance account of the storage
system.
• Before collecting dumps, check that the loads on CTLs are not high. Dump collections, which are
concurrently executed by all CTLs, impose a heavy load on the storage system.
• Color code labels for distinguishing connection destinations are attached to the connection cable between
CBX and HSN Box. Colors presented in the maintenance manual might be different from actual colors.
When performing maintenance work, be sure to check location numbers printed on the labels in addition to
colors.
• Setting change operations of the Windows OS on the SVP are prohibited unless specifically allowed.
Precautions are described also in the beginning or middle of the maintenance procedures in the maintenance
manual. When performing the maintenance procedures, be sure to read the precautions.
[THEORY04-04-10]
The user account security settings are included in the HDvM-SN backup configuration file. If the HDvM-
SN configuration is restored by using the HDvM-SN configuration file and the information in HDvM-SN
configuration file is old, user account will be locked out if the password has expired according to the old
information. In this case, the Security Administrator must release the account lockout.
User account security events are recorded in the audit log for the storage system, except for the following
three events that are not recorded in the audit log:
• Account lockout when the password has expired.
• Account lockout when the user exceeds the maximum number of login attempts.
• Account unlock when the lockout mode is lock.
[THEORY04-05-10]
NOTICE: A password cannot be changed when any of the following conditions applies:
• The password change prohibition period has not yet elapsed.
• The requirements for the new password (for example, number of uppercase letters)
are not met.
• The new password is the same as a previous password within the defined range.
• The new password contains a user name.
[THEORY04-05-20]
NOTICE: When the maximum number of login attempts is exceeded and the lockout mode is
lock, the user must wait until the account lockout period has elapsed before logging
in again. When the maximum number of login attempts is exceeded and the lockout
mode is disable, the Security Administrator must re-enable the user account and
reset the password.
1. Email notifications
• Warning 30 days before password expiration
• Warning 14 days before password expiration and daily thereafter
• Notification at password expiration and none thereafter
2. Login notifications
• Warning at each GUI login starting 14 days before password expiration
• Login failure at each login after password expiration
To prevent a password from expiring, the user must change the existing password before the end of the day
(23:59 or earlier) on which the password expires.
After a password has expired, the Security Administrator must re-enable the account and reset the password.
NOTICE: • If a password expires while the user is logged in, the next navigation within HDvM-
SN will fail and the user account is disabled. The user must contact the Security
Administrator to regain access to HDvM-SN.
• The mail server settings for email notification of password expiration are not backed
up in the HDvM-SN configuration file. If the backup HDvM-SN configuration file is
applied, you must re-enter the email notification settings for password expiration.
[THEORY04-05-30]
NOTICE: The account lockout specifications apply to all user accounts, including the
maintenance personnel user account. If the maintenance personnel user account
becomes locked or disabled, the Security Administrator must re-enable the account
and reset the password.
[THEORY04-05-40]
5. Drive Formatting
5.1 Logical Volume Formatting
5.1.1 Overviews
The DKC can format two or more ECCs at the same time by providing HDDs and Flash Module Drives with the Logical Volume formatting function. However, when the encryption function is used, the high-speed format cannot be used.
[THEORY05-01-10]
Logical volume formatting time varies depending on the provisioning type and attribute of the logical
volume. To estimate the time required for formatting a logical volume, check the provisioning type and
attribute of the logical volume in the Logical Devices window, and then see the appropriate reference shown
below.
Table 5-2 Logical Volume Formatting Time References Corresponding to Formatting Target
• Basic: 5.1.2 Estimation of Logical Volume Formatting Time
• Pool-VOL: 5.1.3 Estimation of Logical Volume (Pool-VOL) Formatting Time
• DP: 5.1.4 Estimation of Logical Volume (DP-VOL) Formatting Time
• DP (with Capacity Saving enabled): 5.1.5 Estimation of Logical Volume (DP-VOL with Capacity Saving Enabled) Formatting Time
[THEORY05-01-11]
1. HDD
The formatting times for HDD do not vary depending on the number of logical volumes, but instead vary
depending on the capacity of HDD and the rotation speed of HDD.
(1) High speed LDEV formatting
The following table shows the standard formatting times.
The formatting times are an estimation only. Results from real world use might vary depending on
RAID groups and the drive type.
[THEORY05-01-20]
The formatting time is the same even with 16 drives because the format data transfer does not reach the path limit.
Depending on the internal condition of the SAS SSD/NVMe SSD/SCM, the formatting time can be approximately 4x faster (shorter) than these values.
[THEORY05-01-40]
3. FMD
The formatting times for FMD do not vary depending on the number of logical volumes, but instead vary
depending on the capacity of FMD.
(1) High speed LDEV formatting
The following table shows the standard formatting times.
The formatting times are an estimation only. Results from real world use might vary depending on
RAID groups and the drive type.
[THEORY05-01-50]
*1: After the standard formatting time has elapsed, the display on the Web Console shows 99% until the monitoring time is reached. Because the Drive itself performs the format and the progress against the total capacity cannot be determined, the ratio of the elapsed time since the start of the format to the required formatting time is displayed.
*2: If there is an I/O operation, the minimum formatting time can be more than 6 times as long as the value shown, depending on the I/O load.
[THEORY05-01-60]
*3: The formatting time varies according to the generation of the Drive within the range of the standard time.
NOTE: The formatting time when mixing the Drive types and the configurations described
in (1) High speed LDEV formatting and (2) Low speed LDEV formatting divides
into the following cases.
(a) When only the high speed formatting available Drives (1. HDD, 3. FMD) are
mixed
The formatting time is the same as the formatting time of Drive types and
configurations with the maximum standard time.
(b) When only the low speed formatting available Drives (2. SAS SSD/NVMe SSD/
SCM) are mixed
The formatting time is the same as the formatting time of Drive types and
configurations with the maximum standard time.
(c) When the high speed formatting available Drives (1. HDD, 3. FMD) and the low
speed formatting available Drives (2. SAS SSD/NVMe SSD/SCM) are mixed
(1) The maximum standard time in the high speed formatting available Drive
configuration is the maximum high speed formatting time.
(2) The maximum standard time in the low speed formatting available Drive
configuration is the maximum low speed formatting time.
The formatting time is the sum of the above formatting time (1) and (2).
When the high speed formatting available Drives and the low speed formatting
available Drives are mixed in one formatting process, the low speed formatting
starts after the high speed formatting is completed. Even after the high speed
formatting is completed, the logical volumes with the completed high speed
formatting cannot be used until the low speed formatting is completed.
In all cases of (a), (b) and (c), the time required to start using the logical volumes
takes longer than the case that the high speed formatting available Drives and the low
speed formatting available Drives are not mixed.
Therefore, when formatting multiple Drive types and the configurations, we
recommend dividing the formatting work and starting the work individually from a
Drive type and a configuration with the shorter standard time.
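The rule in cases (a) to (c) above amounts to the following calculation. This is a minimal sketch; the time values in the example are placeholders, not values from the tables in this section.

# Sketch of the mixed-format rule in cases (a) to (c) above.
# The input values are placeholders; use the standard times from the tables.

def mixed_format_time(high_speed_times_min, low_speed_times_min):
    """Overall formatting time when drive types/configurations are mixed."""
    high = max(high_speed_times_min, default=0)  # case (a): max of the high speed times
    low = max(low_speed_times_min, default=0)    # case (b): max of the low speed times
    # Case (c): the low speed formatting starts only after the high speed
    # formatting completes, so the overall time is the sum of both maximums.
    return high + low

# Example: two HDD configurations (high speed) mixed with one SSD configuration (low speed).
print(mixed_format_time([90, 120], [300]))  # 420 minutes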
*4: The time required to format drives in a DB at the rear stage of a cascade connection might increase by up to approximately 20%.
[THEORY05-01-70]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-80
*1: There is no difference in formatting time between drive types or between RAID levels.
*2: When the allocated page capacity is large, actual formatting time might be shorter than the
estimated time.
[THEORY05-01-80]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-90
*1: There is no difference in formatting time between drive types or between RAID levels.
[THEORY05-01-90]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-01-100
5.1.5 Estimation of Logical Volume (DP-VOL with Capacity Saving Enabled) Formatting
Time
Rough formatting time per LDEV without I/O is as follows:
Logical volume (DP-VOL with Capacity Saving enabled) formatting performance = 0.95GB per second (*1)
Logical volume (DP-VOL with Capacity Saving enabled) formatting time
= (Capacity of one LDEV/Logical volume (DP-VOL with Capacity Saving enabled) formatting
performance)
(Formula) : Indicates rounding up the calculated value of the formula.
*1: The logical volume formatting performance varies depending on the storage system configuration,
data layout, and data contents.
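For example, a hypothetical 4,096 GB DP-VOL with Capacity Saving enabled gives 4,096 / 0.95 ≈ 4,312 seconds (rounded up), or roughly 72 minutes. A minimal sketch of the same calculation:

import math

# Sketch of the formatting time formula above (0.95 GB/s per LDEV, no I/O).
# The 4,096 GB capacity is a hypothetical example value.

FORMAT_PERF_GB_PER_SEC = 0.95

def dpvol_cs_format_time_sec(capacity_gb: float) -> int:
    """Formatting time of one DP-VOL with Capacity Saving enabled (rounded up)."""
    return math.ceil(capacity_gb / FORMAT_PERF_GB_PER_SEC)

print(dpvol_cs_format_time_sec(4096))  # 4312 seconds (about 72 minutes)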
[THEORY05-01-100]
Hitachi Proprietary DKC910I
Rev.0.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-10
[THEORY05-02-10]
Hitachi Proprietary DKC910I
Rev.12 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-20
[THEORY05-02-20]
Hitachi Proprietary DKC910I
Rev.12 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-30
[THEORY05-02-30]
Hitachi Proprietary DKC910I
Rev.6 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-40
Table 5-12 Control Information Format Time of M/F VOL (Per 1K Volume)
Emulation type Format time (minute)
3390-A 133
3390-M 34
3390-MA/MB/MC 28
6588-M 34
6588-MA/MB/MC 28
3390-L 18
3390-LA/LB/LC 14
6588-L 18
6588-LA/LB/LC 14
3390-9 9
3390-9A/9B/9C 5
6588-9 9
6588-9A/9B/9C 5
Others 3
The above is the time required when formatting 1K volumes; the time is proportional to the number of volumes.
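Because the time is proportional to the number of volumes, an estimate for another volume count is a simple scaling. The following is a minimal sketch, assuming that 1K means 1,000 volumes and using a few values from Table 5-12; confirm the intended unit before relying on it.

# Sketch of scaling the Table 5-12 values by volume count.
# Assumes "1K volume" means 1,000 volumes; the table values are minutes per 1K volumes.

TABLE_5_12_MIN_PER_1K = {
    "3390-A": 133,
    "3390-M": 34,
    "3390-9": 9,
    "Others": 3,
}

def mf_control_format_time_min(emulation_type: str, num_volumes: int) -> float:
    """Control information format time for the given number of M/F volumes."""
    return TABLE_5_12_MIN_PER_1K[emulation_type] * num_volumes / 1000

print(mf_control_format_time_min("3390-A", 2000))  # 266.0 minutes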
[THEORY05-02-40]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-50
[THEORY05-02-50]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-60
• When Quick Format is executed to parity groups with different Drive capacities at the same time, calculate
the time based on the parity group with the largest capacity.
[THEORY05-02-60]
Hitachi Proprietary DKC910I
Rev.0 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-70
[THEORY05-02-70]
Hitachi Proprietary DKC910I
Rev.12 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-80
[THEORY05-02-80]
Hitachi Proprietary DKC910I
Rev.0 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-90
[THEORY05-02-90]
Hitachi Proprietary DKC910I
Rev.12 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-02-100
NOTE: x and y are arbitrary numbers. Some drives do not contain the
number y (e.g. Q13R).
The numbers (x, y) of Type of Drive in Operation need not be the
same as those of Type of Usable Spare Drive.
For example, when the drives in operation are Q6R4, drives of
Q6R4, Q13R, etc. can be used as spare drives.
[THEORY05-02-100]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-10
[THEORY05-03-10]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-20
[THEORY05-03-20]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-30
[THEORY05-03-30]
Hitachi Proprietary DKC910I
Rev.14 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-31
[THEORY05-03-31]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-40
*1: The operation is prevented with a message. However, the operation is made possible by checking
the checkbox Forcibly run without safety checks and retrying the operation.
*2: It is impossible to remove a RAID group in which data is migrated to a spare Disk and the spare
Disk.
*3: (Blank)
*4: It is impossible when high-speed LDEV Format is running. When low-speed LDEV Format is
running, it is possible to replace a drive in a RAID group in which LDEV Format is not running.
*5: It is possible to perform LDEV maintenance for LDEV defined in a RAID group in which
Dynamic Sparing, Correction Copy, Copy Back or Correction Access is not running.
*6: • The operation is prevented with the message [03005-002095] when the RAID group to which
the drive to be maintained belongs does not coincide with the RAID group in which Dynamic
Sparing/Correction Copy/Copy Back is running.
• The operation is prevented with the message [30762-208159] when the RAID group to which
the drive to be maintained belongs coincides with the RAID group in which Dynamic Sparing/
Correction Copy/Copy Back is running. When the RAID level is RAID6, the operation might be
prevented with the message [03005-002095] depending on the state of drives other than the drive
to be maintained.
If the operation is prevented with the message [03005-002095], the operation is made possible by
checking the checkbox Forcibly run without safety checks and retrying the operation. However,
a different message might be displayed depending on the timing when the conditions that cause a
prevention occur.
*7: It is prevented with message [03005-002095]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*8: It is prevented with message [03005-202002]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*9: It is prevented with message [03005-202001]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*10: It is prevented with message [03005-202005]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*11: It is prevented with message [03005-002011]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*12: It is prevented with message [30762-208159]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*13: It is prevented with message [30762-208158]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
*14: It is prevented with message [30762-208180]. However, a different message might be displayed
depending on the occurrence timing of the state regarded as a prevention condition.
[THEORY05-03-40]
Hitachi Proprietary DKC910I
Rev.9 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-03-50
*15: The operation is prevented with a message. However, the operation is made possible by clicking
Forcible Actions without safety checks and retrying the operation.
*16: For micro-program versions 90-01-41-xx/xx and 90-01-61-xx/xx, the operation is prevented
with the message [30762-208159]. Even if Forcibly run without safety checks is selected, the
operation is prevented.
*17: The operation is prevented with message [30762-208899]. However, a different message might be
displayed depending on the occurrence timing of the state regarded as a prevention condition.
*18: The operation is prevented with message [30762-208159]. However, a different message might be
displayed depending on the occurrence timing of the state regarded as a prevention condition.
*19: The operation is prevented with message [30762-208980]. However, a different message might be
displayed depending on the occurrence timing of the state regarded as a prevention condition.
*20: Whether maintenance operations can be performed during shredding by Volume Shredder is the
same as during low-speed LDEV Format.
*21: The operation is not prevented when a drive which is connected to a DKB installed in the
maintenance target CTL is blocked.
*22: The operation is not prevented when a drive which is connected to the maintenance target DKB is
blocked.
[THEORY05-03-50]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-04-10
[THEORY05-04-10]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-10
[THEORY05-05-10]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-20
Type of drive 30 TB
SAS (7.2 krpm)
SAS (10 krpm)
Flash Drive 1 to 540 min
Flash Module Drive
SCM
[THEORY05-05-20]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-21
Type of drive 30 TB
SAS (7.2 krpm)
SAS (10 krpm)
Flash Drive 1010 min
Flash Module Drive
SCM
[THEORY05-05-21]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-30
[THEORY05-05-30]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-40
[THEORY05-05-40]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-50
*1: When a drive on which PDEV Erase has been stopped is installed into the DKC again, it might fail due to a Spin-up failure.
*2: When the drive fails with the relevant MSG, it might not be possible to maintain it until PDEV Erase is completed or terminates abnormally.
[THEORY05-05-50]
Hitachi Proprietary DKC910I
Rev.3 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-05-60
[THEORY05-05-60]
Hitachi Proprietary DKC910I
Rev.11.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-10
[THEORY05-06-10]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-20
Type of drive 30 TB
SAS (7.2 krpm) -
SAS (10 krpm) -
Flash Drive 61h
Flash Module Drive -
SCM -
[THEORY05-06-20]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-30
Tell the customer that user data might remain in the drive.
When the customer has the DRO agreements, give the faulty drive to
the customer and recommend destroying it physically or disposing of
it by other similar methods.
When the customer does not have the DRO agreements, bring the
faulty drive back with you after making the customer understand that
user data might remain in the drive.
(If the customer does not allow you to bring out the drive, explain
to the customer that he or she needs to use services for erasing data
or make the DRO agreements.)
3 4e8xxx, 4e9xxx End with warning Data erase ends with warning because reading some areas of the
drive is unsuccessful while writing the erase pattern data is successful
(for flash drives, excluding over-provisioning space).
Tell the customer that writing the erase pattern data to the entire drive
is completed but data in some areas cannot be read. Then, ask the
customer whether he or she wants you to bring out the drive.
For how to check the number of the areas (LBAs) where data cannot
be read, see 5.6.3.2 Checking Details of End with Warning.
*1: The SIM indicating drive port blockade (see (SIMRC02-70)) might be also reported when the SIM
indicating end of Media Sanitization is reported. In such a case, prioritize the SIM indicating end
of Media Sanitization.
[THEORY05-06-30]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-40
Check the SIMs indicating end with warning and the related SSBs to identify the factors of the end with warning as follows:
[1] In the Content – SIM window of the SIM indicating end with warning, click [Refer].
[THEORY05-06-40]
Hitachi Proprietary DKC910I
Rev.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-50
Table 5-39 Internal Information of SSB Related to SIM indicating End with Warning
Field Details
(a) Total number of LBAs on the target drive for data erase
(Field size: 6 bytes)
(a) = (b) + (c)
(b) The number of LBAs for which data erase is complete on the target drive for data erase
(Field size: 6 bytes)
(c) The number of LBAs for which the write by using the erase pattern data is successful and the read is
unsuccessful on the target drive for data erase
(Field size: 6 bytes)
(d) DB# and RDEV# of the target drive for data erase
(Lower 1 byte: DB#, upper 1 byte: RDEV#)
(Figure: positions of fields (a) to (d) in the SSB internal information display.)
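Because (a) = (b) + (c), the number of LBAs where data cannot be read corresponds to field (c) and can be cross-checked against (a) and (b). The following is a minimal sketch of that check, assuming the three fields have already been decoded from the 6-byte SSB fields into integers.

# Sketch: cross-check the SSB fields described in Table 5-39.
# total_lbas, erased_lbas, unreadable_lbas correspond to fields (a), (b), (c)
# and are assumed to be already decoded from the 6-byte SSB fields.

def unreadable_lba_count(total_lbas: int, erased_lbas: int, unreadable_lbas: int) -> int:
    """Return field (c) after verifying that (a) = (b) + (c) holds."""
    if total_lbas != erased_lbas + unreadable_lbas:
        raise ValueError("SSB fields are inconsistent: (a) != (b) + (c)")
    return unreadable_lbas

print(unreadable_lba_count(0x100000, 0x0FFF00, 0x000100))  # 256 LBAs unreadable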
[THEORY05-06-50]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-60
[THEORY05-06-60]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-70
[THEORY05-06-70]
Hitachi Proprietary DKC910I
Rev.15.4 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-80
[THEORY05-06-80]
Hitachi Proprietary DKC910I
Rev.11.1 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY05-06-90
[THEORY05-06-90]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-01-10
6. Data In Place
6.1 Overview
Data In Place (hereinafter referred to as DIP) is a function to upgrade the storage system to the next
generation model without data migration. Drives used before DIP can continue to be used after DIP.
[THEORY06-01-10]
Hitachi Proprietary DKC910I
Rev.15 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-02-10
(Flowchart) Are the DKCMAIN micro-program version and the GUM micro-program version 90-08-21-00/00 or later?
• No: Exchange the DKCMAIN micro-program and the GUM micro-program for versions 90-08-21-00/00 or later. (*1)
• Yes: END
[THEORY06-02-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-02-20
(Flowchart) Are the DKCMAIN micro-program version and the GUM micro-program version 90-08-01-00/00 or later?
• No: Exchange the DKCMAIN micro-program and the GUM micro-program for versions 90-08-01-00/00 or later. (*5)
• Yes: END
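The decision in the flows above is a comparison of the installed micro-program version against a required version (90-08-21-00/00 or 90-08-01-00/00). The following is a minimal sketch of such a check, assuming that the "xx-xx-xx-xx/xx" string compares numerically field by field; confirm this against the actual versioning rules.

# Sketch of the version check used in the DIP flows above.
# Assumes a "90-08-21-00/00"-style version compares numerically field by field.

def parse_version(version: str) -> tuple:
    """Split e.g. '90-08-21-00/00' into a tuple of integers for comparison."""
    main, suffix = version.split("/")
    return tuple(int(part) for part in main.split("-")) + (int(suffix),)

def needs_micro_exchange(current: str, required: str) -> bool:
    """True if the DKCMAIN/GUM micro-programs must be exchanged before DIP."""
    return parse_version(current) < parse_version(required)

print(needs_micro_exchange("90-08-01-00/00", "90-08-21-00/00"))  # True
print(needs_micro_exchange("90-09-01-00/00", "90-08-21-00/00"))  # False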
[THEORY06-02-20]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-03-10
[THEORY06-03-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY06-04-10
[THEORY06-04-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-01-10
2. WRITE commands
The write commands write the transfer data from channels to devices.
3. SEARCH commands
The search commands follow a control command and logically search for the target data.
4. CONTROL commands
The control commands include the SEEK command that positions cylinder and head positions, the
SET SECTOR command that executes latency time processing, the LOCATE RECORD command
that specifies the operation of the ECKD command, the SET FILE MASK command that defines the
permissible ranges for the WRITE and SEEK operations, and the DEFINE EXTENT command that
defines the permissible ranges for the WRITE and SEEK operations and that defines the cache access
mode.
5. SENSE commands
The sense commands transfer sense bytes and device specifications.
8. SUBSYSTEM commands
The subsystem commands include the commands and paths that specify the information for cache control
to DKCs, and the commands that transfer the channel information and cache related information to
channels.
[THEORY07-01-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-01-20
[THEORY07-01-20]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-01-30
[THEORY07-01-30]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-01-40
NOTE: • Command Reject, format 0, and message 1 are issued for the commands that are not
listed in this table.
• TEST I/O is a CPU instruction and cannot be specified directly. However, it appears
as a command to the interface.
• TIC is a type of command but runs only on a channel. It will never be visible to the
interface.
[THEORY07-01-40]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-02-10
7.2 Comparison of Pair Status on Storage Navigator, Command Control Interface (CCI)
The following are the TrueCopy, Universal Replicator, and global-active device pair statuses displayed on
Storage Navigator and CCI.
Some pair status displays are different among program products. For details, see the user guide of each
program product.
Table 7-4 Comparison of Pair Statuses of TrueCopy, Universal Replicator, and Global-Active
Device on Storage Navigator and CCI
Status on Storage Navigator Status on CCI
SMPL SMPL
COPY (*1) COPY
PAIR PAIR
PSUS PSUS
PSUE PSUE
SSUS SSUS
SSWS (*2) SSWS
*1: INIT/COPY might be displayed.
*2: PSUS or PSUE might be displayed.
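For scripting purposes, the correspondence in Table 7-4 can be expressed directly as a lookup. This is a minimal sketch; notes *1 and *2 above describe the cases where CCI can display a different status.

# Sketch of the Storage Navigator -> CCI pair status mapping in Table 7-4.
# *1: CCI might show INIT/COPY for COPY.  *2: CCI might show PSUS or PSUE for SSWS.

NAVIGATOR_TO_CCI = {
    "SMPL": "SMPL",
    "COPY": "COPY",   # or INIT/COPY (*1)
    "PAIR": "PAIR",
    "PSUS": "PSUS",
    "PSUE": "PSUE",
    "SSUS": "SSUS",
    "SSWS": "SSWS",   # or PSUS/PSUE (*2)
}

print(NAVIGATOR_TO_CCI["COPY"])  # COPY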
[THEORY07-02-10]
Hitachi Proprietary DKC910I
Rev.14 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-03-10
(Figure: locations where the configuration information is stored — the shared memory and MP in the CTL, the CFM, the SVP, and backup media.)
[THEORY07-03-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-10
7.4 TPF
7.4.1 An outline of TPF
TPF is one of the operating systems (OS) mainly used for airline on-line reservation systems (CRS: Computer Reservation System).
To support TPF, the DKC must support a logical exclusive lock facility and an extended cache facility.
The former is a function called MPLF (Multi-Path Lock Facility) and the latter is a function called RC (Record Cache).
A DKC that supports TPF has the MPLF and RC functions defined in RPQ#8B0178 in the IBM public manual: IBM3990 Transaction Processing Facility support RPQs (GA32-0134-03).
A DKC that supports TPF implements a special version of the microprogram that supports the MPLF and RC functions of the TPF feature (RPQ#8B0178), described in the following IBM public manuals:
(1) IBM3990 Transaction Processing Facility support RPQs (GA32-0134-03)
(2) IBM3990 Storage Control Reference for Model 6 (GA32-0274-03)
1. Outline of MPLF
A host system can control concurrent use of resources by using logical locks of DKC. Logical locks are
defined in shared resources of each logical CU in DKC. These shared resources are managed by MPL
(Multi-Path Lock). Each MPL can have up to 16 types of lock statuses.
Figure 7-1 shows the overview of the I/O sequence with MPLF. A TPF host uses a unique MPLF user
identifier. Up to 32 MPLF users can be connected to one logical CU.
In Figure 7-1, the MPLF users are USER A and USER B. Each user specifies an MPLP (Multi-Path Lock
Partition) to use MPLF. An MPLP is a logical partition of the group of MPLs that is divided
for each logical CU. Up to eight MPLPs can be set. Two MPLPs are usually used: one for
transactions and the other for maintenance jobs. An MPLP is specified by an MPLP identifier.
[THEORY07-04-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-20
(1) Before starting an I/O sequence, each host performs the CONNECT processing to get permission
to use MPLP. Only a user who is given permission by the processing can use the logical lock
facility of MPLF.
(2) Each user performs the SET LOCK STATE processing by specifying a multi-path lock name
(equivalent to a dataset name for which concurrent use is controlled) to get a logical lock.
(3) The user who gets a logical lock by the SET LOCK STATE processing performs the R/W
processing for the specified multi-path lock name.
(4) The user who finishes the R/W processing performs the UNLOCK processing by specifying the
multi-path lock name to release the logical lock. This processing enables DASD to be shared while
maintaining the data consistency.
(5) The user who no longer needs to use an MPLP performs the DISCONNECT processing to give up
permission to use that MPLP.
(Figure 7-1: overview of the I/O sequence with MPLF — USER A and USER B on the host each issue CONNECT and LOCK against MPL 1 and then perform R/W.)
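The following is an illustrative summary of the sequence (1) to (5) above. The function names are placeholder stubs for the CONNECT/SET LOCK STATE/UNLOCK/DISCONNECT processing, not actual host channel programs or DKC interfaces.

# Illustrative sketch of the MPLF I/O sequence (1) to (5) above.
# The functions are placeholder stubs, not real host or DKC interfaces.

def connect(user, mplp_id):        print(f"{user}: CONNECT to MPLP {mplp_id}")
def set_lock_state(user, name):    print(f"{user}: SET LOCK STATE on {name}")
def read_write(user, name):        print(f"{user}: R/W on {name}")
def unlock(user, name):            print(f"{user}: UNLOCK {name}")
def disconnect(user, mplp_id):     print(f"{user}: DISCONNECT from MPLP {mplp_id}")

def mplf_io_sequence(user, mplp_id, lock_name):
    connect(user, mplp_id)           # (1) get permission to use the MPLP
    set_lock_state(user, lock_name)  # (2) acquire the logical lock
    read_write(user, lock_name)      # (3) R/W for the locked resource
    unlock(user, lock_name)          # (4) release the logical lock
    disconnect(user, mplp_id)        # (5) give up permission to use the MPLP

mplf_io_sequence("USER A", mplp_id=1, lock_name="DATASET-01")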
[THEORY07-04-20]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-30
2. Outline of RC
[THEORY07-04-30]
Hitachi Proprietary DKC910I
Rev.13 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-40
1. OS
TPF Ver. 4.1 / zTPF Ver. 1.1
2. Hardware
The following table shows the storage system hardware specifications for TPF support.
[THEORY07-04-40]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-50
[THEORY07-04-50]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-60
MVS environments
(1) A logical volume (device) is the unit of controlling concurrent use of data among multiple CPUs.
(2) A device is owned by one CPU during CPU processing (accessing), and Device-busy status is
reported to accesses from other CPUs.
(3) Device-end status is used to notify the waiting CPUs when the device becomes free.
TPF environments
(1) A logical lock is the unit of exclusive control of data among multiple CPUs, in place of a logical
volume (device) in MVS.
(2) Each logical lock can be accessed in parallel.
(3) When a CPU occupies a certain logical lock, DSB x4C/x0C is returned in response to a request for
the logical lock by another CPU. DSB x4C indicates that the logical lock succeeded, and DSB x0C
indicates that the logical lock failed (changing to the waiting for logical lock state).
(4) When the logical lock is released, it is given to the CPU that is in the waiting for logical lock
state. Attention is reported to the other waiting CPUs.
(Figure: MVS exclusive control example — CPU-A reserves the logical volume and performs Read/Write access (successful); CPU-B's attempt is rejected with Device-busy (failed); when CPU-A terminates its processing and releases the volume, Device-end (logical volume free) is sent and CPU-B can use the volume.)
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-70
2. Path-group
(1) The TPF system does not use a path-group composed of multiple paths.
3. Connect Path/Device
A function unique to the TPF channels, which makes a subchannel retry the reconnection on the same path
for a certain period of time when an I/O request is rejected because the CU is busy.
The TPF system uses fixed length records for faster update writes of records and for more efficient cache
handling in reads of single records. In the online environment, the system usually operates at a hit rate of
almost 100% for writes and a hit rate of 80% or higher for reads.
[THEORY07-04-70]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-80
(1) To improve the data integrity of DASD, the TPF system often duplicates data on two different
DASD storage systems.
(2) The following figure shows one example of these pairs.
Prime MODs (modules) and Dupe MODs are always located on each storage system (spread
across all storage systems).
[THEORY07-04-80]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-90
2. Add Pair
We strongly recommend selecting Copy to R-VOL in the CFW Data column of the Add Pair
window.
3. Suspend Pair
We strongly recommend selecting Disable in the SSB (F/M = FB) column of the Suspend Pair
window.
[THEORY07-04-90]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-100
SOM Title
15 Improve the host response time to be within about 6 seconds
142 Prevent storage systems from going down by blocking a failed drive at an early point when time-
outs occur frequently for commands issued to the drive
309 Block the failed HDD at an early stage
310 Set the monitoring timer for MP hang-up to 6 seconds so that a response to the host within
8 seconds is guaranteed
359 Block the failed HDD at an early stage
809 Improve host response time to be within 3 sec
862 Change drive response watching time on backend
911 Allocate time slots for synchronous processing and asynchronous processing to each MP of an
MPB
1207 Send LOGO to a mainframe host with zTPF OS when a controller is blocked
[THEORY07-04-100]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-110
[THEORY07-04-110]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-120
[THEORY07-04-120]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-130
[THEORY07-04-130]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-140
I-2107-TPF I-2107
7.4.6.7 BCM
You can logically control the TPF volume from BCM on zOS. However, there is no evaluation record.
[THEORY07-04-140]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-150
(Figure: configuration example — a TPF host connected to DKC910I #1 and DKC910I #2, which are linked by TC-MF or UR-MF.)
Case A: When combined with SI-MF, operate in the split state in order to suppress the response delay to the
host. Perform primary/secondary volume resynchronization and resplit during times when the load is
low (for example, at night).
Case B: When combining with TC-MF or UR-MF, it is recommended to combine TC-MF or UR-MF with
the secondary volume of SI-MF in order to suppress the response delay to the host.
[THEORY07-04-150]
Hitachi Proprietary DKC910I
Rev.11 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-04-160
7.4.6.9 MPL
MPLs are fixed at 262144 (256K) in DKC910I.
NOTE: MPL is a lock table for controlling MPLF functions. MPL is used for exclusive
control when multiple hosts access one record. From past results, 4096/CU is
sufficient for MPL.
Lock requests from TPF1, TPF2, and TPF3 conflict on Record1. TPF1 reserves Record1, and TPF2 and TPF3 are registered in
the MPL table as Waiters.
Attention (DSB80) is notified to TPF2 after TPF1 unlocks Record1.
TPF2 becomes the owner of Record1 and accesses it.
In this way, MPL is used to control exclusive access to one record in the MPLF function.
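The owner/waiter handling described above can be sketched as follows. This is illustrative only; the DSB x4C/x0C responses and the DSB80 Attention follow the description in this section, not an actual implementation.

# Sketch of MPL owner/waiter handling as described above. Illustrative only.

class MPL:
    def __init__(self, name):
        self.name = name
        self.owner = None
        self.waiters = []          # hosts waiting for the lock, in arrival order

    def lock(self, host):
        if self.owner is None:
            self.owner = host
            return "DSB x4C"       # logical lock succeeded
        self.waiters.append(host)
        return "DSB x0C"           # logical lock failed, host registered as Waiter

    def unlock(self):
        self.owner = None
        if self.waiters:
            self.owner = self.waiters.pop(0)
            return f"Attention (DSB80) to {self.owner}"  # next waiter becomes owner
        return None

record1 = MPL("Record1")
print(record1.lock("TPF1"))    # DSB x4C  (TPF1 owns Record1)
print(record1.lock("TPF2"))    # DSB x0C  (TPF2 registered as Waiter)
print(record1.lock("TPF3"))    # DSB x0C  (TPF3 registered as Waiter)
print(record1.unlock())        # Attention (DSB80) to TPF2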
[THEORY07-04-160]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-05-10
[THEORY07-05-10]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-05-20
[THEORY07-05-20]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-05-30
[THEORY07-05-30]
Hitachi Proprietary DKC910I
Rev.10 Copyright © 2019, 2024, Hitachi, Ltd.
THEORY07-06-10
[THEORY07-06-10]