THI4012 Student Guide v1-0
THI4012
© Hitachi Vantara LLC 2020. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Hitachi Content Platform Anywhere, Live Insight,
VSP, ShadowImage, TrueCopy and Hi-Track are trademarks or registered trademarks of Hitachi Vantara Corporation. IBM and FlashCopy are trademarks or
registered trademarks of International Business Machines Corporation. Microsoft and SQL Server are trademarks or registered trademarks of Microsoft Corporation.
All other trademarks, service marks and company names are properties of their respective owners.
ii
Table of Contents
Introduction ..............................................................................................................xvii
Welcome ................................................................................................................................................. xvii
Please Give Us Feedback .......................................................................................................................... xviii
Course Description ................................................................................................................................... xviii
Prerequisites ............................................................................................................................................. xix
Course Objectives ..................................................................................................................................... xix
Course Topics ............................................................................................................................................ xx
Learning Paths Overview ........................................................................................................................... xxi
Stay Connected During and After Your Training ..........................................................................................xxii
5. VSP 5000 Series High Availability and Storage Navigator Differences From
G1x00 ........................................................................................................................ 5-1
Module Objectives .................................................................................................................................... 5-1
HA Differences From VSP G1000/ VSP G1500 ............................................................................................ 5-2
Difference From G1000/1500 ............................................................................................................... 5-2
Single Point Failure ............................................................................................................................. 5-2
Two Point Failure ................................................................................................................................ 5-3
VSP 5500 and VSP 5100 – Two Point Failure......................................................................................... 5-4
X-Path/HIE/ISW .................................................................................................................................. 5-5
X-Path/HIE/ISW .................................................................................................................................. 5-5
Active Learning Exercise: Jigsaw Puzzle ................................................................................................ 5-6
Storage Navigator Differences From VSP G1x00 ......................................................................................... 5-6
DKC ................................................................................................................................................... 5-6
Logical Devices – Column Settings ....................................................................................................... 5-7
Logical Devices ................................................................................................................................... 5-7
Pools – More Actions ........................................................................................................................... 5-8
Ports – Column Settings ...................................................................................................................... 5-8
Port Conditions ................................................................................................................................... 5-9
Module Summary ..................................................................................................................................... 5-9
Questions to IT PRO................................................................................................................................ 5-10
Introduction
Welcome
Introduce yourself:
• Name
• Title
• Experience
• Expectations
Pick one:
• Last movie you saw
• Favorite vacation spot
xvii
Introduction
Please Give Us Feedback
Course Description
xviii
Introduction
Prerequisites
Prerequisites
Supplemental courses
• TSI2690 – Managing Hitachi Ops Center Automator
Course Objectives
xix
Introduction
Course Topics
Course Topics
Modules:
1. Hitachi Ops Center Deployment and Installation - 1
2. Hitachi Ops Center Deployment and Installation - 2
3. VSP 5000 Series Models
Lab Activities:
1. Hitachi Ops Center Features
2. Hitachi Administrator Functions
3. Resource Monitoring from Ops Center Analyzer
4. Hitachi Configuration Manager REST API
xx
Introduction
Learning Paths Overview
xxi
Introduction
Stay Connected During and After Your Training
Support Connect
The site for Hitachi Vantara product documentation is accessed through:
https://support.hitachivantara.com/en_us/anonymous-dashboard.html
xxii
1a. Hitachi Ops Center Deployment and
Installation - 1
Module Objectives
Following are the expanded versions of the acronyms used for products:
• HDID – Hitachi Data Instance Director
• NVMe – non-volatile memory express
• HCS – Hitachi Command Suite
• HTnM – Hitachi Tuning Manager
• HRpM – Hitachi Replication Manager
• SVOS – Hitachi Storage Virtualization Operating Systems
• HDLM – Hitachi Dynamic Link Manager
• HGLM – Hitachi Global Link Manager
Page 1a-1
Hitachi Ops Center Deployment and Installation - 1
What Is Hitachi Ops Center?
Ops Center
• Foundation for a modern enterprise infrastructure management
• AI-enhanced, with simple, powerful, federated management
• Built on legendary Hitachi resilience and performance, optimized for NVMe
In the above slide, acronyms used have the following expanded names:
• VSP 5000 series – Hitachi Virtual Storage Platform 5000 series
• SVOS RF – Hitachi Storage Virtualization Operating Software RF
Hitachi Ops Center enables you to optimize your data center operations through integrated configuration, analytics, automation, and copy data management.
Page 1a-2
Hitachi Ops Center Deployment and Installation - 1
New Product Names
Previous product name → New name:
• HIAA, Hitachi Infrastructure Analytics Advisor → Analyzer (Hitachi Ops Center Analyzer)
• (new) → Analyzer viewpoint (Hitachi Ops Center Analyzer viewpoint)
• HDCA, Hitachi Data Center Analytics → Analyzer detail view (Hitachi Ops Center Analyzer detail view)
• HSA, Hitachi Storage Advisor → Administrator (Hitachi Ops Center Administrator)
• HCM, Hitachi Configuration Manager → Hitachi Ops Center API Configuration Manager
• HDID, Hitachi Data Instance Director → Data Protection (Hitachi Data Instance Director)
• (new) → Common Services (Hitachi Ops Center Common Services)
Licensing Packages
In this section you will learn about licensing packages.
Note: Hitachi Global Link Manager (HGLM) and Hitachi Dynamic Link Manager (HDLM) are part
of the Base package.
Page 1a-3
Hitachi Ops Center Deployment and Installation - 1
Optional Software Contents for Ops Center
Page 1a-4
Hitachi Ops Center Deployment and Installation - 1
Hitachi Ops Center Common Services
The Hitachi Ops Center products that support the single sign-on functionality are as follows:
Page 1a-5
Hitachi Ops Center Deployment and Installation - 1
Common Login Screen
Single Sign-On
Automator Dashboard
Analyzer Dashboard
Page 1a-6
Hitachi Ops Center Deployment and Installation - 1
Hitachi Ops Center Administrator
The Hitachi Ops Center products that support the single sign-on functionality are as follows:
Page 1a-7
Hitachi Ops Center Deployment and Installation - 1
Hitachi Ops Center Administrator Functions
Verify that there is an active zone configuration set with at least one
dummy zone available when a switch is added
Page 1a-8
Hitachi Ops Center Deployment and Installation - 1
Instructor Demonstration
Instructor Demonstration
Page 1a-9
Hitachi Ops Center Deployment and Installation - 1
Hitachi Ops Center Analyzer
• Machine Learning (ML): ML analysis for trends, anomalies, and IT management recommendations
• Automated Management Operations: integrated management workflows to automate configuration changes to correct problems
Page 1a-10
Hitachi Ops Center Deployment and Installation - 1
Problem Analysis
Problem Analysis
Page 1a-11
Hitachi Ops Center Deployment and Installation - 1
Predictive Analytics
Predictive Analytics
Central Viewpoint
Page 1a-12
Hitachi Ops Center Deployment and Installation - 1
On-Premises and SaaS Analytics
(Diagram: Ops Center Analyzer on premises and Ops Center Analyzer SaaS sharing analytics data)
Page 1a-13
Hitachi Ops Center Deployment and Installation - 1
Analyzer Dashboard
Analyzer Dashboard
Customization
Page 1a-14
Hitachi Ops Center Deployment and Installation - 1
Automated Root Cause Analysis and Resolution: How We Do It – Problem Analysis
Provide an end-to-end topology view of the current infrastructure from a common console for analysis.
(Diagram: example resources in the topology showing Res Time 5 ms / 10 ms / 20 ms and IOPS 1200 / 1000 / 800)
Page 1a-15
Hitachi Ops Center Deployment and Installation - 1
Hitachi Ops Center Analyzer: Resource Optimization Planning
Page 1a-16
Hitachi Ops Center Deployment and Installation - 1
Active Learning Exercise: Group Discussion
The idea of Hitachi Ops Center Analyzer is to guide the user through the process of problem determination by showing alerts and warnings.
• Without reasonable thresholds, the user experience will be poor
• Default thresholds should be reviewed after installation and adjusted if required
Topic: What are the most common causes of performance problems, and how can Analyzer help avoid running into these kinds of issues?
Page 1a-17
Hitachi Ops Center Deployment and Installation - 1
Hitachi Ops Center Data Instance Director
Data Instance Director provides a modern, holistic approach to data protection, recovery and
retention. It has a unique workflow-based policy engine, presented in an easy-to-use
whiteboard-style user interface that helps map the copy data management processes to
business priorities. HDID includes a wide range of fully integrated storage-based and host-
based incremental-forever data capture capabilities that can be combined into complex
workflows to automate and simplify copy data management.
Page 1a-18
Hitachi Ops Center Deployment and Installation - 1
Storage Configurations: Block Storage
(Diagram: an HDID repository node and an HDID source node running Oracle / SQL / Exchange, with CCI and a command device used for storage-based operational recovery)
Page 1a-19
Hitachi Ops Center Deployment and Installation - 1
Storage-Based Operational Recovery
Storage-based operational recovery leverages the snapshot and clone technologies available in
Hitachi storage systems.
• Thin Image snapshots (block) and file replication (file) require very little space and can
run far more frequently than traditional backup to improve recovery point objectives
• ShadowImage (block) and directory clone (file) create a full copy that can be used for
repurposing, such as for test/dev, secondary backup, and so on
• HDID integrates both snapshots and clones with supported applications, creating
nondisruptive, application-consistent point-in-time copies. HDID also includes a scripting
interface to allow the quiescing of other application environments
Page 1a-20
Hitachi Ops Center Deployment and Installation - 1
Host-Based Operational Recovery
Live backup
• Captures every change (CDP), then creates a point-in-time, application-consistent software snapshot
• Integrated with Microsoft Volume Shadow Copy Service (VSS)
Batch backup
• Incremental-forever capture with full restore to any backup set
• For IBM AIX, Linux, Oracle Solaris and Microsoft Windows file systems
(Diagram: Hitachi Data Instance Director server writing to the HDID repository)
Host-based operational recovery captures and copies data from the application or file server
being protected. Host-based capabilities include:
• Continuous data protection (CDP) automatically saves a copy of every change made to
data, essentially capturing every version of the data that the user saves. It allows the
user or administrator to restore data to any point in time. CDP runs as a service that
captures changes to data to a separate storage location. It is best suited for highly
critical applications and data sets that do not include a built-in journaling or transaction
logging feature
• Bare metal restore allows the restoration of an entire server, including the operating
system and applications, from a single backup copy
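To make the CDP behavior above concrete, here is a minimal, purely illustrative Python sketch of a change journal that can be replayed to any point in time; the class and data layout are teaching assumptions, not HDID internals.

class ChangeJournal:
    def __init__(self):
        self.entries = []                     # list of (timestamp, block_id, data)

    def record(self, timestamp, block_id, data):
        # CDP captures every change as it is made
        self.entries.append((timestamp, block_id, data))

    def restore(self, point_in_time):
        # Replay every change up to the requested point in time
        image = {}
        for ts, block_id, data in self.entries:
            if ts > point_in_time:
                break
            image[block_id] = data
        return image

journal = ChangeJournal()
journal.record(1, "block-0", b"v1")
journal.record(2, "block-0", b"v2")
print(journal.restore(1))                     # {'block-0': b'v1'}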
Page 1a-21
Hitachi Ops Center Deployment and Installation - 1
Active Learning Exercise: Group Discussion
Module Summary
Page 1a-22
1b. Hitachi Ops Center Deployment and
Installation - 2
Module Objectives
Following are the expanded versions of the acronyms used for products:
• HDID – Hitachi Data Instance Director
• NVMe – non-volatile memory express
• HCS – Hitachi Command Suite
• HTnM – Hitachi Tuning Manager
• HRpM – Hitachi Replication Manager
• SVOS – Hitachi Storage Virtualization Operating Systems
• HDLM – Hitachi Dynamic Link Manager
• HGLM – Hitachi Global Link Manager
Page 1b-1
Hitachi Ops Center Deployment and Installation - 2
Deployment Options
Deployment Options
This section explains the deployment options.
Deployment Considerations
Ops Center deployment OVAs are for first-time (new) installations only.
Page 1b-2
Hitachi Ops Center Deployment and Installation - 2
Ops Center Deployment Options
1. Ops Center Management Server installer media ISOs (Windows and Linux)
• Analyzer and Detail View (Linux only), Automator, HDID Master + Client, API Configuration Manager, Common Services (Linux only for the October release)
2. Administrator Installer Media ISO
3. Analyzer Probe Installer ISOs (Windows and Linux)
4. Configuration Manager Installer ZIP (Windows and Linux)
Installation Media
Page 1b-3
Hitachi Ops Center Deployment and Installation - 2
Ops Center Preconfigured Media
Ops Center installation media for Linux are a set of media that include installers to be used for new installs or upgrades.
Installation Media | Usage
• Administrator: install/upgrade Administrator
• Analyzer Probe: install/upgrade Analyzer probes for storage, OS and switch
• Analyzer Detail View Add-On Package: 3rd-party probes and management
• API Configuration Manager: install/upgrade API-CM
• Common Services: install/upgrade Common Services
• Management Software: install/upgrade Automator, Analyzer, HDID Master, HDID Client, API-CM, Common Services
Page 1b-4
Hitachi Ops Center Deployment and Installation - 2
Ops Center Installation Media (Windows)
Ops Center installation media for Windows are a set of media that include installers to be used for new installs or upgrades.
(Diagram: Administrator OVA deploying the Administrator product)
The diagram shows an example system configuration in which the Hitachi Ops Center product
runs on one management server.
Page 1b-5
Hitachi Ops Center Deployment and Installation - 2
Activate / Deactivate Products
Purpose: Provide flexible OVA configuration from a product-combination point of view. Customers can also reduce the OVA configuration size by deactivating some products.
Function: Customers can deactivate/activate each product in the consolidated OVA.
Proposal: Provide a command to deactivate/activate registered services.
Example: opsvmservicectl disable Automator Analyzer
1. disable/enable
/opt/OpsVM/vmtool/opsvmservicectl disable|enable product1 [product2 …]
Deactivates/activates the specified products. "deactivate" does the following:
1. Prevents the specified product from starting when the OS boots.
2. Stops the specified product's service.
"activate" performs the opposite operations of "deactivate".
Example: The following command deactivates Automator and Analyzer:
opsvmservicectl disable Automator Analyzer
2. status
/opt/OpsVM/vmtool/opsvmservicectl status
Table: Available product options
# | Product | Product Option
1 | Hitachi Ops Center Automator | Automator
2 | Hitachi Ops Center Analyzer | Analyzer
3 | Hitachi Ops Center Analyzer detail view | Analyzerdetailview
4 | Hitachi Ops Center API Configuration Manager | APIConfigurationManager
5 | Hitachi Data Instance Director | HDID
6 | Hitachi Ops Center Common Services | CommonServices
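As a convenience, these commands can be scripted. The sketch below is illustrative only: it wraps the command path documented above with Python's subprocess module and assumes it runs on the Ops Center OVA with sufficient privileges.

import subprocess

OPSVMSERVICECTL = "/opt/OpsVM/vmtool/opsvmservicectl"   # path from this module

def set_products(action, *products):
    # action is "disable" or "enable"; product options are listed in the table above
    return subprocess.run([OPSVMSERVICECTL, action, *products], check=True)

def show_status():
    return subprocess.run([OPSVMSERVICECTL, "status"], check=True)

# Example (matches the command shown above):
# set_products("disable", "Automator", "Analyzer")
# show_status()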
Page 1b-6
Hitachi Ops Center Deployment and Installation - 2
Analyzer OVA Specification
Analyzer and Analyzer detail view in separate OVA for enterprise customers
who need to scale their Analyzer systems
(Diagram: in V10.0.1, the Ops Center OVA contains Common Service, Automator, Analyzer, Analyzer detail view, HDID and API Configuration Manager, with a separate Administrator OVA. In V10.1.0 (2M), Analyzer and Analyzer detail view move to a separate Analyzer OVA alongside the Ops Center OVA and Administrator OVA.)
System requirements of the VM: CPU 16 cores, memory 48 GiB, VM disk size 900 GiB; OVA file size 13 GiB (unchanged from V10.0.1 to V10.1.0).
Page 1b-7
Hitachi Ops Center Deployment and Installation - 2
Ops Center Deployment Overview
Page 1b-8
Hitachi Ops Center Deployment and Installation - 2
Ops Center Deployment
Page 1b-9
Hitachi Ops Center Deployment and Installation - 2
Ops Center Deployment Overview
Note: Virtual Appliances (pre-configured OVAs).
The following table shows which OVA/installer to use for each scenario (# | Situation | Configuration | Description).
Page 1b-10
Hitachi Ops Center Deployment and Installation - 2
Configuration #1: New installation
Quick start for using Ops Center for the first time (new installation).
(Diagram: the Ops Center OVA contains Automator, Analyzer, Analyzer detail view, HDID and API Configuration Manager)
(2) Deploy Ops Center by "Ops Center OVA(Lin)" (continued on the next page).
Page 1b-11
Hitachi Ops Center Deployment and Installation - 2
Configuration #1: New Installation – (2) Deploy Ops Center by "Ops Center OVA(Lin)" (2/2)
Apply licenses (Automator, Analyzer).
Page 1b-12
Hitachi Ops Center Deployment and Installation - 2
System Configuration (For Servers)
The following table shows which OVA/installer to use for each scenario (# | Situation | Configuration | Description).
Scenario #2 – New installation (more than one OVA): use multiple Ops Center products and register them to one Common Service.
(1) Download the "Ops Center OVA(Lin)" and the "Analyzer viewpoint OVA(Lin)".
(2) Deploy two Ops Centers by "Ops Center OVA(Lin)" and one Viewpoint by "Analyzer viewpoint OVA(Lin)".
(Diagram: two Ops Center OVAs, each with Common Service, Automator, Analyzer, Analyzer detail view, HDID and API Configuration Manager, plus an Analyzer viewpoint OVA with Common Service)
Page 1b-13
Hitachi Ops Center Deployment and Installation - 2
Configuration #2: New Installation (more than one OVA) – (3) Register Each Product (Automator, Analyzer, HDID) to Common Service
The following table shows which OVA/installer to use for each scenario (# | Situation | Configuration | Description).
The last scenario is the upgrade case for existing customers. The procedure is to install the Ops Center products and register each product to Common Service.
Page 1b-14
Hitachi Ops Center Deployment and Installation - 2
Configuration #3: Upgrade to Ops Center
Register to CS by executing
“setupcommonservice” command
Page 1b-15
Hitachi Ops Center Deployment and Installation - 2
Administrator SSO
Administrator SSO
Procedure to connect Administrator with CS for SSO:
# | Procedure | Documentation
0 | Set up CS in advance and check the credentials for CS | Hitachi Ops Center Installation and Configuration Guide
Page 1b-16
Hitachi Ops Center Deployment and Installation - 2
Administrator SSO
Showing the UG list on CS.
We need to map UGs on CS to Administrator user roles.
Page 1b-17
Hitachi Ops Center Deployment and Installation - 2
Administrator SSO
CS Launcher screen
Log in to Administrator
Page 1b-18
Hitachi Ops Center Deployment and Installation - 2
Hitachi Ops Center Upgrade Scenarios
• CMREST > API Configuration Manager: Ops Center Integrated Server OVA; upgrade with installer
• Analyzer viewpoint: Analyzer viewpoint OVA; N/A (new product)
• Common Services: Ops Center Server OVA or Viewpoint OVA (pick one to be the master); install onto an existing Linux server or a new Linux server (Windows support in Mar/2020)
• HCS product > Ops Center (HDvM > Administrator, HTnM > Analyzer, HRpM > HDID): N/A; deploy the OVA for the respective products
Page 1b-19
Hitachi Ops Center Deployment and Installation - 2
Module Summary
Module Summary
Page 1b-20
Hitachi Ops Center Deployment and Installation - 2
Appendix
Appendix
It's time to explore a few topics in detail.
There is no tool available to back up and restore all Ops Center components at once.
Page 1b-21
Hitachi Ops Center Deployment and Installation - 2
Hitachi Ops Center Common Services
Location:
• installation-directory-of-Common-Services/utility/bin/csbackup.sh
• installation-directory-of-Common-Services/utility/bin/csrestore.sh
Backup and restore for the Administrator can be done with the "Virtual Appliance Manager".
Use "Backup to a file" to download a tar.gz archive.
Page 1b-22
Hitachi Ops Center Deployment and Installation - 2
Administrator VAM Tool: Backup
o Element inventories: Storage Systems, Servers, and Fabric Switch inventories are
preserved
o SNMP Managers: Locations for forwarding SNMP traps are preserved
o Jobs: All jobs on the system are preserved
o Alerts: Monitoring alerts are preserved
o Tier Names: Tier names for HDT pools are preserved
o Security information: Local usernames and passwords, as well as integrated
Active Directories are preserved
o Replication groups: All copy groups and replication groups and their associated
snapshot schedules are preserved
o Virtual Appliance Manager settings: Connected NTP servers, log level settings,
SSL certificate and service settings are preserved. Host settings are not
preserved
o Migration tasks: All migration tasks and their associated migration pairs are
preserved
Page 1b-23
Hitachi Ops Center Deployment and Installation - 2
Administrator VAM Tool: Restore
You can back up the following four components of the Ops Center Analyzer system so that they can be restored later, for example, if a failure occurs that causes your system to go down:
• Analyzer Server
• Analyzer detail view server
• Analyzer probe server
• RAID Agent
Page 1b-24
Hitachi Ops Center Deployment and Installation - 2
Automator – CLI commands
Ops Center Automator provides CLI commands for backup and restore
of the database and system information
Automator – backupsystem
where:
• /dir is an absolute or relative directory path that contains backup data
• /auto directs the Ops Center Automator, Common Component services and
database to start and stop automatically
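For illustration, the documented /dir and /auto options can be combined as shown in the small Python sketch below; the backup directory is an assumed example value, and the command must be run on the Ops Center Automator server.

# Compose the documented backupsystem invocation; the directory is an example, not a product default.
backup_dir = "/var/ops_center/automator_backup"
command = ["backupsystem", "/dir", backup_dir, "/auto"]
print(" ".join(command))    # backupsystem /dir /var/ops_center/automator_backup /auto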
Page 1b-25
Hitachi Ops Center Deployment and Installation - 2
Automator – restoresystem
Automator – restoresystem
where:
• /dir is an absolute or relative directory path that contains data that is backed
up by the backupsystem command
• /auto directs the Ops Center Automator, Common Component services and
database to start and stop automatically
Note: Before restoring Ops Center Automator, confirm that the following conditions are the
same for the backup source Ops Center Automator server host and the restore destination Ops
Center Automator server host:
• Types, versions, and revisions for the installed Common Component products
• Installation location for each product using Common Component, Common Component,
the Common Component product database, and Common Component database
• If the above conditions are not the same, Ops Center Automator cannot be restored
Page 1b-26
2. VSP 5000 Series Models
Module Objectives
Page 2-1
VSP 5000 Series Models
Controller Box (CBX) Components
Parts | Explanation
• Controller Board (CTL): consists of CPU, DIMM and GUM
• BKMF
• Channel Board (CHB): front-end I/O module (FC / iSCSI / FICON)
(Diagram: controller (CTL) and DKC numbering for the VSP 5100, VSP 5500 2N and VSP 5500 4N configurations, including the upgrade paths between them)
Page 2-2
VSP 5000 Series Models
Controller Box (CBX)
(Diagram: node and controller numbering for VSP 5100 (Node 0, Controller 1, with additional controllers and nodes added on upgrade) and VSP 5500 2N, 4N and 6N (up to Node 0-5 and Controller 1-12); each CBX or DKC-x contains CTLx1 and CTLx2)
Page 2-3
VSP 5000 Series Models
VSP 5000 Series Offering
• For VSP 5500 and VSP 5500H, four CTLs are installed in two DKCs (two CTLs in each
DKC)
• For VSP 5100 and VSP 5100H, two CTLs are installed in two DKCs (one CTL in each
DKC). The locations of CTLs are CTL01 in DKC-0 and CTL12 in DKC-1
Notes: Table quantities assume all CBs use the same type of port, backend and media chassis; intermix rules are provided. Each CB can be either diskless, all NVMe, or all SAS; different CB types can intermix in a system in any combination. (*) SFF / LFF / FMD chassis intermix within a controller block. Each SAS CB can have up to 8 media chassis; the first chassis per CB must be SFF or FMD. Each SAS CB can have up to 4 FMD and/or up to 4 LFF chassis; the rest must be SFF. (1) FC ports are NVMe-oF ready for software upgrade in 2HCY20.
Columns: VSP 5100 (10U, 2 controllers) | VSP 5500 1 CB (10U, 4 controllers) | VSP 5500 2 CBs (18U, 8 controllers) | VSP 5500 3 CBs (26U, 12 controllers)
• CPU cores, memory: 40c, 1 TiB (.5 TiB MF only) | 80c, 2 TiB | 160c, 4 TiB | 240c, 6 TiB
• Frontend optical I/O ports (can intermix types within a CB):
  - FC(1) 32G/16G SFP (8-port increments): 32 | 64 | 128 | 192
  - FICON 16G SFP (8-port increments): 32 | 64 | 128 | 192
  - iSCSI 10G SFP (4-port increments): 16 | 32 | 64 | 96
• Backend I/O ports (PCIe Gen3 x4 lane NVMe ports, or 12G SAS 4W lane SAS ports): 8 | 16 | 32 | 48
• Global spare drives (8 per media chassis): 64 | 64 | 128 | 192
• SFF NVMe {Post-GA ETA} (NAND flash: 1.9, 3.8, 7.6, 15.3 TB, 30.6 TB {TBD}; SCM flash: 3.75 TB {TBD}):
  - # of 8U 96-slot chassis: 1 | 1 | 2 | 3
  - # of drives with max chassis: 96 | 96 | 192 | 288
• SFF SAS (NAND flash: 960 GB, 1.9, 3.8, 7.6, 15.3, 30.6 TB; 10K HDD: 2.4 TB):
  - # of 8U 96-slot chassis (*): 8 | 8 | 16 | 24
  - # of drives with max chassis: 768 | 768 | 1536 | 2304
• LFF SAS (7.2K NL-SAS HDD: 14 TB):
  - # of 16U 96-slot chassis (*): 4 | 4 | 8 | 12
  - # of drives with max chassis: 384 | 384 | 768 | 1152
• FMD SAS (NAND flash: 7, 14 TB):
  - # of 8U 48-slot chassis (*): 4 | 4 | 8 | 12
  - # of drives with max chassis: 192 | 192 | 384 | 576
• Parity: 2D+2D, 3D+1P, 7D+1P, 6D+2P, 14D+2P
Page 2-4
VSP 5000 Series Models
CPU and GUM Specs
• NAND flash memory is a type of nonvolatile storage technology that does not require
power to retain data
• A flash solid state drive (SSD) is a non-volatile storage device that stores persistent
data in flash memory. There are two types of flash memory, NAND and
NOR. ... NAND has significantly higher storage capacity than NOR. NOR flash is faster,
but it's also more expensive.
When comparing the CPU of the VSP Gxxx and VSP 5000, take note of the Core number and
Maximum Core number.
The VSP Gxxx unit has 2 controllers, but the VSP 5000 Series unit has 4 controllers, and a fully
expanded VSP 5000 Series will have 12 controllers.
Page 2-5
VSP 5000 Series Models
VSP 5100 Block Diagram
<Point of Development> A cover needs to be developed for empty CTL slots (safety standards). Redundant paths are maintained.
(Table: supported node scale-out combinations, for example 2+0, 2+2, 4+2 and 2+4, giving totals of 2, 4 or 6 nodes)
Scale out from 2 to 4 or 6 nodes in pairs and still manage them as a single image (one array).
• DBS2 must be the first drive box in VSP 5000 configuration, more information in the
next module
Page 2-6
VSP 5000 Series Models
VSP 5000 Portfolio Positioning
VSP G/F1500 comparison (Models | Nodes/Controllers | Media Supported | Target Based)
• The table compares the VSP 5x00 with the VSP G/F1500 using a single or multiple pairs of VSDs (x2 MPBs)
Page 2-7
VSP 5000 Series Models
VSP E990
VSP E990
In this section you will learn about VSP E990 hardware specifications.
Drive Boxes
Page 2-8
VSP 5000 Series Models
Module Summary
Module Summary
Page 2-9
VSP 5000 Series Models
Module Summary
Page 2-10
3a. VSP 5000 Series Architecture and
Availability - 1
Module Objectives
Page 3a-1
VSP 5000 Series Architecture and Availability - 1
Hardware
Hardware
This section discusses the VSP 5000 series hardware.
• NVMe: SFF (288)
Page 3a-2
VSP 5000 Series Architecture and Availability - 1
Module (CBX Pair) Component Location
(Diagram: CBX pair rear view showing CHB, DKB and HIE slot locations, with 2x1U HSNBX units and 4U controller boxes)
Connections:
• 16G FC (4 ports per card) (64/128/192)
• 32G FC (4 ports per card) (64/128/192): 8/16/32 Gbps
• 16G FICON (4 ports per card) (64/128/192): 4/8/16 Gbps
• 10G iSCSI (2 optical ports per card) (32/64/96): 10 Gbps
Page 3a-3
VSP 5000 Series Architecture and Availability - 1
ISW (Interconnect Switch) / HIE
(Diagram: rear view of HIE ports and interconnect switch (ISW) connections; rack layout with HSNBX-0 and HSNBX-1 (1U each), 8U DKC pairs, 8U SAS SFF media chassis and 16U LFF SAS media chassis)
Page 3a-4
VSP 5000 Series Architecture and Availability - 1
Interconnection Architectures
Interconnection Architectures
• VSP G/VSP F1500: x-paths via cache boards with alternate routes
• VSP 5500: controller-level "by 4" (x4) independent switching
VSP 5500 'x4' improves resiliency, especially during maintenance or upgrade events.
Page 3a-5
VSP 5000 Series Architecture and Availability - 1
VSP 5000 Series Logical System Connectivity
Page 3a-6
VSP 5000 Series Architecture and Availability - 1
Front End Ports
• CFM = Cache Flash Module – Nonvolatile store for configuration information and in the
case of a power outage, cache contents
• MNT = Maintenance Ethernet port on controller. It is not used for Jupiter and [need to
confirm the following] is physically blocked. Therefore I changed it to GUM to indicate
each controller has a Gateway for Unified Management processor
• The ethernet connection shown between the management ports on controller pairs is
internal, not externally cabled. It is shown to make it clear that if one HSNBX LAN is
down, the controller connected to it can still be communicated with by routing through
the paired controller
• Note that there is no correlation between the ISW color and the controller color. There
are only so many colors to choose from that can be easily differentiated
• Customer Management LAN includes things like HiTrack Monitor Server, CM REST server,
Hitachi Storage Advisor, Hitachi Infrastructure Analytics Advisor, Hitachi Automation
Director, Hitachi Data Instance Director
(Diagram: logical system connectivity across CBX #2 to CBX #5, showing CHB front-end ports, DKBs and HIEs per controller)
Page 3a-7
VSP 5000 Series Architecture and Availability - 1
VSP 5000 Series Rear – 5500 2Node
DBS2
HSNBX
CBX (2 Nodes)
HSNBX
CBX 1 (2 Nodes)
CBX 2 (2 Nodes)
CBX 3 (2 Nodes)
Page 3a-8
VSP 5000 Series Architecture and Availability - 1
Drive Boxes and RAID Configuration
Drive Boxes
DB type | Explanation (height, BE port #, protocol, installable drive #) | G/F1500 support | VSP 5000 series support (DKU)
*1: FMD: This is the conventional FMD that does not support the Accelerated Compression or Encryption features in VSP 5000.
(Diagram: example parity group layouts across a CBX, such as 14D+2P, (7D+1P) x 2, 6D+2P, 3D+1P and 2D+2D; a PG with 8 drives takes one drive from each of 8 sequential drive boxes, and a PG with 4 drives takes one drive from each of 4)
Page 3a-9
VSP 5000 Series Architecture and Availability - 1
PG and RAID Layout – Multiple Pair Controller Block
Fixed PG assignment method: the PG assignment range is within the same CBX pair.
Spare drives use the reserved method (same policy as the R800 specification):
• A spare drive can be assigned only to Slot#11 or Slot#23 in each DB
• Each media chassis has 8 DBs, therefore a maximum of 8 spares
• Slots reserved for additional spare drives can no longer be assigned to a PG
• The remaining 4 drives in Slot#11 and/or Slot#23 can be assigned as a PG with 4 drives
(Diagram: PG and RAID layout examples, such as 6D+2P and 2D+2D, across CBX pairs)
Page 3a-10
VSP 5000 Series Architecture and Availability - 1
Spare Drive Qty
Recommended quantity
Drive type Recommendation for Spare Drive quantity
SAS (10k) 1 Spare Drive for every 32 Drives
NL-SAS (7.2k) 1 Spare Drive for every 16 Drives
SSD 1 Spare Drive for every 32 Drives
FMD 1 Spare Drive for every 24 Drives
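A quick way to apply these ratios is sketched below in Python; this is an illustrative calculation only and does not account for the per-media-chassis maximum of 8 spares or chassis intermix rules.

import math

SPARE_RATIO = {               # 1 spare drive for every N drives (table above)
    "SAS (10k)": 32,
    "NL-SAS (7.2k)": 16,
    "SSD": 32,
    "FMD": 24,
}

def recommended_spares(drive_type, drive_count):
    return math.ceil(drive_count / SPARE_RATIO[drive_type])

print(recommended_spares("NL-SAS (7.2k)", 100))   # 7
print(recommended_spares("SSD", 64))              # 2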
(Diagram: drive box numbering per SAS media chassis — chassis 0 through 7 contain DB-000 through DB-063; each DB has 2 EXPs and 2 SAS ports; an LFF chassis holds 8 DBL drive boxes)
Page 3a-11
VSP 5000 Series Architecture and Availability - 1
SAS Media Chassis Connectivity Optimization
First chassis must be SFF or FMD to provide enough connectivity so I/O to any PG can be
performed by any controller without having to use inter-controller access
(Diagram: chassis 0 (DBS2 x 4, DB-000 to DB-007, PG#0: 7D+1P) and chassis 1 (DBL x 8, DB-008 to DB-015, PG#1: 7D+1P) connected to the SAS controllers)
(The VSP 5000 series offering table from the VSP 5000 Series Models module is repeated on this slide.)
Page 3a-12
VSP 5000 Series Architecture and Availability - 1
Active Learning Exercise: Raise Your Hands If You Know It!
Page 3a-13
VSP 5000 Series Architecture and Availability - 1
System Configuration (NVMe Backend)
Page 3a-14
VSP 5000 Series Architecture and Availability - 1
Power
Power
This section explains the power resiliency.
Power Resiliency (recommended practices):
• Each CBX, HSNBX, and DB should be powered from redundant PDPs to avoid system failure from a single PDP failure.
• Every CBX pair must be supplied power from the same pair of PDPs. If not, there is a high possibility of data loss when a power failure occurs. Both CBXs in a CBX pair are recommended to be next to each other to avoid misconnection.
• Every drive box in a media chassis must be placed next to each other and powered from the same pair of PDPs. If not, many drives will be blocked when a pair of PDPs stops supplying power, and there is a high possibility of data loss.
Examples:
• No good: PDPs are not redundant for each CBX.
• No good: paired CBXs are supplied power from different pairs of PDPs.
• OK (not recommended): two CBXs are supplied power from the same pair of PDPs (note: cable length is insufficient if NVMe media chassis are used).
(Diagram: rack layouts showing HSNBX, CBX pairs and drive boxes connected to PDUs)
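The pairing rule above lends itself to a simple configuration check. The Python sketch below is illustrative only; the data model (a CBX name mapped to its pair of PDPs) is an assumption made for the example.

def cbx_pairs_share_pdps(cbx_to_pdp_pair, cbx_pairs):
    # Return the CBX pairs whose two CBXs are not fed by the same pair of PDPs
    violations = []
    for a, b in cbx_pairs:
        if cbx_to_pdp_pair[a] != cbx_to_pdp_pair[b]:
            violations.append((a, b))
    return violations

layout = {"CBX-0": frozenset({"PDP-A", "PDP-B"}),
          "CBX-1": frozenset({"PDP-C", "PDP-D"})}           # a "no good" layout
print(cbx_pairs_share_pdps(layout, [("CBX-0", "CBX-1")]))   # [('CBX-0', 'CBX-1')]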
Offload By HIE
The inter-controller communication method changes from using the Intel CPU (VSP G200/G400/G600/G800 and VSP F200/F400/F600/F800) to using the HIE on VSP 5000. Some CPU processing for inter-controller communication moves to the HIE.
# | Type of communication | VSP G/F 200/400/600/800 (no HIE) | VSP 5000 series (HIE offload)
1 | Read memory on the other controller | Request a CPU on the target controller to read the memory and write it to the destination | The HIE sends the target data to its own memory
2 | Atomic access | Request a CPU on the target controller to process the atomic operation | The HIEs arbitrate the atomic operation
3 | Inter-controller transfer of user data | Intel DMA transfers the data to the other controller; when finished, the CPU on the source controller asks the CPU on the destination controller to verify the data by T10 DIF calculation | The HIE transfers the target data to the other controller and simultaneously verifies it by T10 DIF calculation
(Chart: random write 8 KB (cache hit) performance, about +46% IOPS with HIE offload versus no HIE)
Page 3a-15
VSP 5000 Series Architecture and Availability - 1
Offload By HIE
Atomic operations in concurrent programming are operations that run to completion without interference from any other process. Atomic operations are used in many modern operating systems and parallel processing systems.
Page 3a-16
VSP 5000 Series Architecture and Availability - 1
VSP 5000 Series: Hardware Offload Design
(Diagram, G/F 200/400/600/800: 1. CPU1 on Controller 1 writes a request to read Memory2; 2. CPU2 on Controller 2 receives the request; 3. the target data is written to Memory1)
VSP 5000 Series Architecture and Availability - 1
Data Transfer to the Other Controller
(Diagram, G/F 200/400/600/800: 1. send a transfer request to Memory2; 2. transfer the data to Memory2; 3. request verification. With the VSP 5000, HIE1/HIE2 transfer the data and DIF between Memory1 and Memory2 with no load on CPU2)
Service Processor
In this section, we will discuss the service processor.
SVP Unit
(Diagram: front side of the SVP unit and the SSVP/HUB unit)
VSP G1500/VSP F1500 SVP: the switching HUB is embedded in the SVP unit together with the motherboard.
VSP 5000 SVP: the SVP unit provides the motherboard function only. To ensure higher availability, the switching HUB is installed in the SSVP unit, which has a lower failure rate. Thus, even during SVP replacement the internal LAN stays alive; for example, the user can keep accessing all nodes and all CTLs during SVP replacement.
Page 3a-18
VSP 5000 Series Architecture and Availability - 1
SVP LAN Cable Routing
(Diagram: SVP LAN cable routing — the basic SVP and optional SVP connect to the SSVP/HUB units; aggregated cables are used when the number of CBXs is 3-6 and when the optional SVP is installed (mandatory where marked); the maintenance LAN and public LAN connect through the SSVP/HUB, with MP and CTL consolidation shown)
Page 3a-19
VSP 5000 Series Architecture and Availability - 1
Proxy on SVP
Proxy on SVP
(Diagram: maintenance personnel on the external LAN log in to the VSP 5000 SVP via RDP; the SVP proxies the SVP GUI (MU launch), PF REST and JSON API to the internal LAN)
Page 3a-20
VSP 5000 Series Architecture and Availability - 1
Module Summary
Module Summary
Page 3a-21
VSP 5000 Series Architecture and Availability - 1
Module Review
Module Review
1. True or False: The VSP 5000 series is designed to support both open system
and mainframe needs.
2. True or False: The controller blocks (node pairs) can only be NVMe backend.
Page 3a-22
VSP 5000 Series Architecture and
Availability - 2
Module Objectives
Page 3b-1
VSP 5000 Series Architecture and Availability - 2
Cache and Shared Memory
The base extension includes SM Blocks #1-4, which is the minimum and default configuration.
Most systems will never need more, but if capacity exceeds 4.4 PiB, expansion is nondisruptive; all controllers ship with maximum memory, so no upgrade (with the controller offline) is needed.
(*1) Max pool capacity of MF HDP/HDT is calculated with about a 10% reduction from OPEN.
(*2) The max specification for all replication PPs (SI/TI/TC/UR/GAD) is covered by the base block of shared memory.
(Diagram: cache and shared memory layout — each CTL has CHBs, HIEs, DKBs, CPUs/MPs and cache (CM) with shared memory (SM) areas, compared across a 2-controller DKC and a 4-controller VSP 5000 configuration. SM/CM are duplicated on the surviving CTL.)
Page 3b-2
VSP 5000 Series Architecture and Availability - 2
Shared Memory (SM) Design
In the VSP 5000 series, the primary SM area and the secondary SM area are placed on different nodes (node 0, node 1) to be able to endure a node failure.
Nodes other than node 0/node 1 do not have an SM area. Therefore, SM capacity does not increase even if the number of nodes increases.
(Diagram: primary and secondary SM areas with reserved space, alongside CM areas)
Page 3b-3
VSP 5000 Series Architecture and Availability - 2
Global Cache Mirroring
There are more complexities to the cache mirroring algorithm, but two primary considerations are:
• Mirror to the opposite "side" of a CBX pair (including other CBX pairs) in case of CBX failure
• Mirror to the owning controller of the LUN if the data was not received by the owning controller
As with prior generation products, only non-destaged (dirty) data is mirrored, for efficiency, until destaged.
(Diagram: primary data on Side A of CBX pair 0 is mirrored to Side B of the same CBX pair by default, with other CBX pairs selectable as mirror options; comparison with VSP G1500 and with VSP 5000 2CBX/2CTL and 6CBX configurations)
In the VSP 5000 series, the SM table is cached to each CTL's PM area in a timely manner to avoid the overhead of inter-CTL access.
(Diagram: CTL#0 through CTL#11, each with CHBs, MPUs, PM (master or mirror), SM/CM, LM, DKBs and HIEs, connected through the HSNBX ISWs)
1. The MPU in the same CTL as the SM table (master) accesses the table directly.
2. The other MPUs can access the mirrored table in their own CTL's PM area.
3. Most SM accesses, including replication PPs, can adopt this SM caching feature.
Page 3b-4
VSP 5000 Series Architecture and Availability - 2
Shared Memory (SM) Resiliency(Compare/Contrast VSP 5000 versus G/F1x00)
In VSP G1x00/VSP F1x00, both primary SM area and secondary SM area are finally
assigned in basic cache PK in both CL (CMPK#0, #1)
In VSP 5000 series, primary SM area and secondary SM area are placed on different
nodes (node 0, node 1) to be able to endure node failure
(Diagram: VSP G1x00/F1x00 — SM primary and secondary areas reside in CMPK#0 and CMPK#1; when one CMPK is blocked, the SM data is copied to the other)
Page 3b-5
VSP 5000 Series Architecture and Availability - 2
Shared Memory Resiliency
In VSP 5000 series, the reserved area is same size of SM area, and it is not used except for the failure case
Log events used to ensure resiliency is re-established before service event continues
(Diagram: when a CTL is blocked, the SM secondary area is rebuilt by copying data into the reserved area of another CTL on node 0/node 1)
Page 3b-6
VSP 5000 Series Architecture and Availability - 2
VSP 5100 Shared Memory
But the 2-CTL configuration doesn't have a reserved area because it has only 2 CTLs.
(Diagram: VSP 5100 SM master handling and recovery within CBX1)
Page 3b-7
VSP 5000 Series Architecture and Availability - 2
Optimization of Cache Access Logic – Simplifying the DIR Table Architecture
Case #1: VSP G1x00/VSP F1x00 with 2 CMPKs. Case #2: VSP G1x00/VSP F1x00 with 4+ CMPKs. Case #3: VSP 5000 series (2, 4, 6 nodes) — improvement of the cache directory management logic, with the SM primary and a reserved area on node 0 (CTL#0/CTL#1).
Cache DIR: the I/O data searching table on the cache used to judge Cache Hit or Cache Miss.
(Diagram: in case #3, each CTL holds its own cache DIR entries rather than a single directory spread across CMPKs)
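Conceptually, the cache directory is a lookup table consulted on every I/O. The toy Python sketch below illustrates only the hit/miss decision; it is not the VSP 5000 implementation, and the staging function is a placeholder.

cache_dir = {}                     # maps a logical block address to a cache segment

def stage_from_drive(lba):
    return f"segment-for-{lba}"    # placeholder for a backend read

def read(lba):
    segment = cache_dir.get(lba)
    if segment is not None:
        return "cache hit", segment
    segment = stage_from_drive(lba)          # cache miss: stage the data
    cache_dir[lba] = segment
    return "cache miss", segment

print(read(0x100))   # ('cache miss', 'segment-for-256')
print(read(0x100))   # ('cache hit', 'segment-for-256')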
Page 3b-8
VSP 5000 Series Architecture and Availability - 2
MP Failure
MP Failure
This section explains MP failure.
VSP G1x00/VSP F1x00: the host path is not affected by MPB and cache maintenance, but:
• Hosts may still see a performance impact
• More ports are affected when a CHA or DKC fails (VSP 5000 is less affected)
VSP 5x00: maintenance on any CTL (CPU or cache) triggers host path failover, but:
• With maximum memory installed, there is no need for memory upgrades
• With the Hitachi Interconnect Edge (HIE) FRUs there is less need to replace a controller or perform a dummy replacement (versus the Gxxx NTB)
• With reliability improvements to the logic board there should be even fewer controller failures
Failure part | Number of FE ports affected by failure (R800 | R900)
• SFP: 1 port | 1 port
• CHA/CHB: ~8 ports | ~4 ports
• CTL/DIMM: 0 | ~16 ports
• Cluster: ~32 ports | -
• DKC/CBX: ~128 ports | ~32 ports
Page 3b-9
VSP 5000 Series Architecture and Availability - 2
Active Learning Exercise: Raise Your Hands If You Know It!
Can you describe the new Shared Memory (SM) process and its
benefits within the VSP 5000 series?
(Diagram: Hitachi VSP F1x00 architecture — front end, memory, high-speed switch, back end and media chassis)
Page 3b-10
VSP 5000 Series Architecture and Availability - 2
Summary of Major Differences
• Proprietary flash media (FMD): G/F1500 — yes, compression and optional encryption are on the drives; VSP 5000 — yes, but compression and optional encryption are not on the drives (3)
• SAS links: G/F1500 — SAS 6G 4WL x 32 paths; VSP 5000 — SAS 12G 4WL x 48 paths
Page 3b-11
VSP 5000 Series Architecture and Availability - 2
VSP 5000 Series vs. VSP G1500/VSP F1500 Summary (Major Changes Aligned to Value)
Refactoring the ASIC emulation logic to reduce the microcode overhead for host I/O transactions.
(Diagram: the message receiving/sending function remains, while the BE I/O task integrates the RAID configuration and single-drive access functions, with the SAS protocol handled at the drive. Shown with SAS, but with NVMe this DCT logic also applies.)
Page 3b-12
VSP 5000 Series Architecture and Availability - 2
Hardware independence (ASIC-less)
Page 3b-13
VSP 5000 Series Architecture and Availability - 2
Volume Capacity Overview
Volumes/Capacity
Max Specification
Page 3b-14
VSP 5000 Series Architecture and Availability - 2
Key Features and Discussion Points
Key Features
Page 3b-15
VSP 5000 Series Architecture and Availability - 2
Rebuild Time Improvement
Drive rebuild time for SSD or FMD is improved to one-fifth of the current time.
The “Bidirectional Port” can support the functionality of “Target”, “Initiator”, “RCU Target”
and “External” within one physical port. This means that one physical port can work both
as initiator and target
Page 3b-16
VSP 5000 Series Architecture and Availability - 2
Bidirectional Port Option - Considerations
In the case of a bidirectional port, the limit performance of 512B read IOPS with 2K I/O requests decreases by about 20%. However, when the external storage is Hitachi storage, this degradation of initiator I/O is not an issue because the limit performance of 512B read IOPS on the target port is 400K IOPS, which is less than 467,000 IOPS. Thus, there is no problem replacing an external port with a bidirectional port with a 1K initiator I/O queue depth.
Discussion Points
Encryption highlights
• SAS encryption is in the back-end director (EDKB)
• KMIP support (qualifications, and so on) will be the same as VSP G1x00/VSP F1x00 at GA
• NVMe encryption is in the back-end director (EDKBN)
Page 3b-17
VSP 5000 Series Architecture and Availability - 2
Licensing for VSP 5000
Page 3b-18
VSP 5000 Series Architecture and Availability - 2
VSP 5000 Advanced Packages for Open and MF
Both the Open and MF advanced packages include Hitachi Ops Center Analyzer predictive analytics and Nondisruptive Migration (NDM).
Page 3b-19
VSP 5000 Series Architecture and Availability - 2
Active Learning Exercise: Raise Your Hands If You Know It!
Module Summary
Page 3b-20
4. VSP 5000 Series Adaptive Data
Reduction
Module Objectives
Page 4-1
VSP 5000 Series Adaptive Data Reduction
ADR and Its Functions
ADR is enabled at the LUN level. A pool can have a mix of ADR-enabled LUNs and LUNs with no capacity savings (non-ADR enabled).
What is Compression
LZ4 is a "lossless" compression method, so called because the data can be uncompressed without losing any of the original information. LZ4 offers good capacity savings and is one of the highest-performing algorithms.
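The lossless property can be demonstrated with any lossless compressor. The VSP 5000 uses LZ4; the Python sketch below uses zlib from the standard library purely to show that the original data is recovered exactly and to compute a ratio.

import zlib

original = b"highly repetitive block of data " * 128
compressed = zlib.compress(original)

assert zlib.decompress(compressed) == original      # lossless: the data comes back exactly
print(f"ratio {len(original) / len(compressed):.1f}:1")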
Page 4-2
VSP 5000 Series Adaptive Data Reduction
What is Deduplication
What is Deduplication
Deduplication looks for matches in the input stream against data it has already seen. It is a function provided by the storage controller. It searches the input data stream, looks for a string of matches to data in its "dictionary", and when it sees a match, it replaces the actual data with a pointer to the match.
Deduplication watches the incoming data stream and catalogs all the data it has seen:
• It keeps a fingerprint of the data (a CRC-like calculation)
• It also keeps a pointer to where the first instance was written
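The fingerprint-and-pointer idea can be illustrated with a few lines of Python. This toy sketch hashes fixed 8 KB blocks with SHA-256; the real array's fingerprint method, block size and data structures differ, so treat every name here as an assumption.

import hashlib

fingerprints = {}      # fingerprint -> index of the first stored copy
data_store = []        # stand-in for the data store area

def write_block(block):
    fp = hashlib.sha256(block).hexdigest()
    if fp in fingerprints:
        return fingerprints[fp]          # duplicate: keep only a pointer
    data_store.append(block)             # first instance: store the data
    fingerprints[fp] = len(data_store) - 1
    return fingerprints[fp]

print(write_block(b"A" * 8192))   # 0  (new data written)
print(write_block(b"A" * 8192))   # 0  (deduplicated: pointer to the first copy)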
Page 4-3
VSP 5000 Series Adaptive Data Reduction
ADR Supported Platform and Requirements Overview
Supported Platforms
• Hitachi Virtual Storage Platform 5x00 Arrays
• FMD, SSD and spinning drives
• HDP / HDT(*1) Pools
• External storage virtualized into a pool (no matter the storage make-up)
• No MF support
Page 4-4
VSP 5000 Series Adaptive Data Reduction
ADR Constraints
ADR Constraints
ADR is not supported on Hitachi Thin Image SVols; for example, you can't have ADR devices in a dedicated HTI pool.
ADR Terminology
• DRD-Vol: Compression, or Deduplication + Compression
• DSD-Vol: Compression
Note: A DP-Vol with only deduplication enabled is not supported. Dedupe is performed for data on DRD-Vols with Deduplication and Compression enabled in the same pool.
ADR can be applied to any DP-Vol. The available settings are compression only, or compression and deduplication. There is no deduplication-only setting.
Page 4-5
VSP 5000 Series Adaptive Data Reduction
ADR – DRD-VOL, FPT and DSD-Vol Distribution
For each pool there will be 24 FPT volumes and 24 DSD volumes. They are automatically created in the highest CU FE. In Storage Navigator the attribute will show "Deduplication System Volume (Fingerprint)" or "Deduplication System Volume (Datastore)".
(Diagram: FPT, DSD and DRD volumes distributed across owning CTLs, nodes and modules for pools 0, 1 and 2)
Page 4-6
VSP 5000 Series Adaptive Data Reduction
Industry Data Reduction Terms
Savings rate
• A measure of data reduction effect, measured as “Percent capacity saved”. So if the data reduction
ratio was 2:1, the savings rate would be 50%. If 3:1 then savings rate is 66%
• Savings Rate = 1 – Compressed Size / Uncompressed Size
• Savings Rate = 1 – 1 / Data Reduction Rate
Total efficiency
• The overall storage efficiency combining the data reduction, thin provisioning and snapshots
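The formulas above can be checked quickly; the short Python sketch below reproduces the 2:1 and 3:1 examples.

def savings_rate(data_reduction_ratio):
    return 1 - 1 / data_reduction_ratio

for ratio in (2.0, 3.0):
    print(f"{ratio:.0f}:1 reduction -> {savings_rate(ratio):.0%} saved")
# 2:1 reduction -> 50% saved
# 3:1 reduction -> 67% saved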
ADR Notes
Pages written to the DSD-Vol and FPT occur in round robin fashion and
have no affinity to DRD-Vol and CBX pair
Page 4-7
VSP 5000 Series Adaptive Data Reduction
What is Effective Capacity
On VSP 5000 arrays, when you create the first DRD-Vol that has ADR
attributes the DSD volumes and FPT volumes will be created as follows
• For each pool, there will be 24 data store devices (DSD) and 24 fingerprint
volumes (FPT) created distributed across CBX pairs
• The capacity of each DSD volume is between 5.98TB and 42.7TB
• The capacity of each FPT volume is 1.7TB
Raw capacity
• The physical media in the array or pool, depending on the scope of data reduction
• 8 drives of 1.92TB SSDs equals 15.36TB of raw capacity
Usable capacity
• Capacity available after RAID data protection
• For example RAID 6 (6+2) of 1.92TB SSDs is 11.52TB of usable capacity
• For example RAID 5 (7+1) of 1.92TB SSDs is 13.44TB of usable capacity
Effective capacity
• The amount of data written by, or available to the host
• In the above R6 example, if compression yields 1.5:1 and dedupe yields 2.0:1, then the effective
capacity would be 34.56TB [ = 1.5 * 2 * 11.52TB]
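The arithmetic in the RAID 6 example above can be reproduced as follows; the reduction ratios are the example values from this slide, not guaranteed results.

drives, drive_tb = 8, 1.92
raw = drives * drive_tb                 # 15.36 TB raw
usable = raw * 6 / 8                    # RAID 6 (6D+2P) -> 11.52 TB usable
effective = usable * 1.5 * 2.0          # 1.5:1 compression x 2.0:1 dedupe
print(round(raw, 2), round(usable, 2), round(effective, 2))   # 15.36 11.52 34.56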
Page 4-8
VSP 5000 Series Adaptive Data Reduction
ADR Pool Requirements Overview
The ADR feature requires metadata that consumes 10% of pool capacity (*1)
• The capacity consumed by “metadata” for the capacity savings function is 3% of the
total consumed capacity of all ADR-enabled devices (DRD-Vols). This 3% overhead
breaks down as 2% metadata and 1% Deduplication Fingerprint table
• The capacity consumed by “garbage data” is 7% of the total consumed capacity of all
ADR-enabled devices (DRD-Vols)
• Pool buffer space (HDP) – Manage the used capacity of the pool so it stays below the
“Warning” threshold of 70%. This prevents IO from being degraded. If pool usage
exceeds the “Depletion” threshold of 80%, or when an operation is performed while the
pool is almost full, garbage collection is prioritised, which may impact performance
(*1) During periods of high write activity from the host, this capacity might temporarily increase
above 10% and then return to 10% when the activity decreases
• The 10% overhead covers 1% for the Fingerprint table, 2% for metadata and 7% for garbage data (see the sketch below)
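A minimal sketch of the 10% overhead rule above. The percentages (2% metadata, 1% fingerprint table, 7% garbage) come from this page; the function itself is purely illustrative:

def adr_overhead_tb(total_drd_consumed_tb):
    # Estimate ADR pool overhead from the total consumed capacity of all DRD-Vols
    return {
        "metadata": round(total_drd_consumed_tb * 0.02, 2),     # 2% metadata
        "fingerprint": round(total_drd_consumed_tb * 0.01, 2),  # 1% deduplication fingerprint table
        "garbage": round(total_drd_consumed_tb * 0.07, 2),      # 7% garbage data
        "total": round(total_drd_consumed_tb * 0.10, 2),        # ~10% overall (may rise temporarily under heavy writes)
    }

print(adr_overhead_tb(100.0))  # {'metadata': 2.0, 'fingerprint': 1.0, 'garbage': 7.0, 'total': 10.0}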
Page 4-9
VSP 5000 Series Adaptive Data Reduction
ADR Hitachi Dynamic Tiering Smart Tiers
Note: Data reduction is applied only to the data in the lowest tier.
Page 4-10
VSP 5000 Series Adaptive Data Reduction
ADR Garbage Collection
Garbage Collection
When compressed data is updated and its size changes, the original
stored data in the data storage area is no longer needed. This
unneeded data is called “garbage” data
It is possible for the write I/O rate to exceed the rate at which garbage collection
can reclaim free space. In the worst case, a pool could fill up with garbage data,
even in the absence of new writes
Page 4-11
VSP 5000 Series Adaptive Data Reduction
ADR Inline vs Post Process
Page 4-12
VSP 5000 Series Adaptive Data Reduction
ADR Monitoring
ADR Monitoring
This section explains ADR monitoring.
Monitoring
If the ratio is not what you expect, check the LDEV level reporting to see
if there are any DRD-Vols that are not attaining the expected ratio
Disabling ADR will cause data to be rehydrated to its full size. The used
capacity of the pool is increased by data decompression. Before
performing this operation, make sure that the pool has enough free
capacity for the capacity used by the DP-Vol of the target DRD-Vol, and
that array resources are available. This also consumes additional MP cycles (a quick pre-check is sketched below)
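A hedged sketch of the pre-check described above before disabling ADR. The field names, thresholds and function are illustrative only, not an actual product API:

def can_disable_adr(pool_free_tb, drd_vol_used_tb, pool_total_tb=None, warning_threshold=0.70):
    # Return True if the pool appears to have room for the rehydrated (decompressed) data
    if pool_free_tb < drd_vol_used_tb:
        return False  # not enough free capacity for the data to re-expand
    if pool_total_tb is not None:
        used_after = (pool_total_tb - pool_free_tb) + drd_vol_used_tb
        if used_after / pool_total_tb > warning_threshold:
            return False  # would push the pool past the 70% Warning threshold
    return True

print(can_disable_adr(pool_free_tb=60.0, drd_vol_used_tb=25.0, pool_total_tb=100.0))  # True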
Monitor MP utilisation and cache write pending (CWP) and look for any
elevated levels using Ops Center Analyser and performance monitor
Page 4-13
VSP 5000 Series Adaptive Data Reduction
Monitoring Pool Window
Page 4-14
VSP 5000 Series Adaptive Data Reduction
ADR Inline vs Post Process
Note: (*1) – if the data length is smaller than 64KB then dedupe is performed asynchronously with I/O.
Note: (*2) – Update write area for which the compress / dedupe is not performed asynchronously with I/O.
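A minimal sketch of the inline/asynchronous split implied by note (*1) above. The 64KB threshold is taken from that note; treating larger writes as inline is an assumption based on the slide title, and the function is illustrative only:

def dedupe_timing(write_length_bytes):
    # Per note (*1): writes smaller than 64KB are deduped asynchronously (post-process)
    return "asynchronous (post-process)" if write_length_bytes < 64 * 1024 else "inline with I/O"

print(dedupe_timing(8 * 1024))    # asynchronous (post-process)
print(dedupe_timing(256 * 1024))  # inline with I/O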
Page 4-15
VSP 5000 Series Adaptive Data Reduction
ADR Sizing
ADR Sizing
In this section you will learn about ADR sizing.
ADR Calculator
Click the ADR calculator icon in the middle of the top banner
Page 4-16
VSP 5000 Series Adaptive Data Reduction
ADR Input
ADR Input
• The Total on the bottom right is what you tell the DOC to configure
Page 4-17
VSP 5000 Series Adaptive Data Reduction
Active Learning Exercise: Jigsaw Puzzle
Module Summary
Page 4-18
5. VSP 5000 Series High Availability and
Storage Navigator Differences From
G1x00
Module Objectives
Page 5-1
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
HA Differences From VSP G1000/ VSP G1500
5 Frontend ports aren’t blocked during a cache memory maintenance. | Frontend ports are blocked during a cache memory maintenance.
Page 5-2
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
Two Point Failure
[Figure: two-point failure examples showing CHBs and CTLs across DKC0 and DKC1 during maintenance]
1. Two controllers simultaneous failure – The system is down and some data will be lost. Note: Super rare because “simultaneous” normally means “within several minutes”.
2. Short-circuit failure of backboards in two DKCs – The system is down and some data will be lost. Note: Super rare because the backboard in a DKC is a “Passive backboard”.
3. Short-circuit failure of all backboards in two HSNBXs – The system is down. Note: Super rare because the backboard in a HSNBX is a “Passive backboard”.
4. Thermal sensors incorrectly interpret that the temperature exceeds the specification on two DKCs – The system will be off. Note: Super rare because a failure of “incorrectly interpreting temperature” hardly occurs.
5. Two PSOFF signal line failures on a HSNPANEL – The system will be off. Note: Super rare because a signal line failure hardly occurs.
Page 5-3
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
VSP 5500 and VSP 5100 – Two Point Failure
6. Configuration: RAID5 7D+1P, RAID5 3D+1P, RAID1 – Two drives fail in one parity group – The data in the parity group will be lost. Note: Only when the 2nd failure occurs before copying data to a spare drive completes.
7. Configuration: RAID5 7D+1P, RAID6 14D+2P – Two enclosure boards or two PSUs fail on one drive box – Some of the parity groups will be blocked.
[Figure: #7 – Two enclosure boards failure in one drive box; #9 – Two DKBs failure (DKC0/DKC1 with CTLs and DKBs in the storage subsystem)]
Page 5-4
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
X-Path/HIE/ISW
X-Path/HIE/ISW
# Configuration Failure Phenomena Note
Configuration: All
1. Two X-paths failure on one HIE – Only the HIE will be blocked.
2. Two HIEs failure on one controller – Only the controller will be blocked.
3. Two HIEs failure on different controllers – Only one of the controllers will be blocked. Note: The controller which has the 2nd failed HIE will be blocked.
[Figure: HIEs connected through ISWs in HSNBX0 and HSNBX1 within the storage subsystem]
X-Path/HIE/ISW
# Configuration Failure Phenomena Note
Configuration: All
4. Two ISWs or two PSUs failure on one HSNBX – All of the HIEs connected to the HSNBX will be blocked. Note: A HSNBX replacement procedure can recover all failed HIEs without going offline.
5. One ISW and one X-path failure – The failed ISW, X-path and one HIE will be blocked.
[Figure: #4 – Two ISWs failure on one HSNBX; #5 – One ISW and one X-path failure (DKC0/DKC1 with CTLs and HIEs shown)]
Page 5-5
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
Active Learning Exercise: Jigsaw Puzzle
Configuration Phenomena
1. Two paths between the host and the storage | 1. The system is down, and some data will be lost.
DKC
VSP G1x00 and previous generations could exist with one DKC, but
the VSP 5000 Series has a minimum of two DKCs
VSP G1x00 has the controller temperature displayed; this is not present for
the VSP 5000 Series
G1x00 vs VSP 5000 Series
Page 5-6
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
Logical Devices – Column Settings
Note: Pinned status indicates data is still in cache and hasn’t been destaged to the devices.
Logical Devices
Page 5-7
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
Pools – More Actions
Biggest change - now there are only two port attributes visible: Target and Bidirectional
Page 5-8
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
Port Conditions
Port Conditions
Module Summary
Page 5-9
VSP 5000 Series High Availability and Storage Navigator Differences From G1x00
Questions to IT PRO
Questions to IT PRO
We are updating our 100% Data Availability Guarantee for Zeus2 and need confirmation of the following availability /
resilience. Are there any conditions where:
2. VSP 5500 does not have an outage from a single component failure, but requires an outage (for example POR) to
return to normal operation during / after the repair?
Yes, but only the failure of “Passive backboard” in a drive box.
3. VSP 5500 has an outage from a redundant component failure? (for example both HIE boards on the same
controller fail)
Yes. It depends on the configuration.
4. VSP 5500 does not have an outage from a redundant component failure, but requires an outage (i.e. POR) to
return to normal operation during / after the repair?
No.
5. VSP 5500 has an outage from a two non-redundant component failures? (for example HIE board on one
controller and CPU on another controller)
No.
6. We ask this one because there have been cases on Panama2 where I-path blocked + controller blocked has
required a POR to return to normal.
No, because there are four redundant paths between each controller, and a DKC chassis can be recovered without
going offline.
Page 5-10
6. VSP 5000 Series Security and Encryption
Enhancements
Module Objectives
Page 6-1
VSP 5000 Series Security and Encryption Enhancements
Encryption
Encryption
The data-at-rest encryption feature protects your sensitive data against breaches
associated with storage media
Encryption can be applied to some or all supported internal drives (HDD, SSD, FMD)
Each encrypted internal drive is protected with a unique data encryption key
Encryption Components
Encryption hardware
• The data at-rest encryption (DARE) functionality is implemented using
cryptographic chips included as part of the encryption hardware. For Hitachi
Virtual Storage Platform 5000 series, VSP G700/ VSP F700, and VSP G900/
VSP F900, encryption hardware encrypting back-end modules (EBEMs)
perform the encryption
When encryption is enabled for a parity group, DEKs are automatically assigned to the drives in
the parity group. Similarly, when encryption is disabled, DEKs are automatically replaced (old
DEKs are destroyed, and keys from the free keys are assigned as new DEKs). You can combine
this functionality with migrating data between parity groups to accomplish rekeying of the DEKs
Page 6-2
VSP 5000 Series Security and Encryption Enhancements
Encryption Components
Key management
• The following key types are managed:
Data encryption keys (DEKs): Each encrypted internal drive is protected with a
unique DEK used with AES-based encryption. AES-XTS uses a pair of
keys, so each DEK is a pair of 256-bit keys
Certificate encryption keys (CEKs): Each encrypted back-end module or encrypted
controller requires a key for the encryption of the certificate (registration of the
EBEM/ECTL) and a key to encrypt the DEKs stored on the EBEM/ECTL
Key encryption keys (KEKs): A single key, the KEK, is used to encrypt the CEKs
that are stored in the system
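The DEK/CEK/KEK layering above is a standard envelope-encryption hierarchy. The following Python sketch uses the third-party cryptography package’s Fernet recipe purely to illustrate the wrapping relationships conceptually; it is not how the storage system implements AES-XTS, and all names are illustrative:

from cryptography.fernet import Fernet

# KEK: a single key that encrypts (wraps) the CEKs stored in the system
kek = Fernet(Fernet.generate_key())

# CEK: per-EBEM/ECTL key; kept only in wrapped form
cek_plain = Fernet.generate_key()
cek_wrapped = kek.encrypt(cek_plain)

# DEK: per-drive key, wrapped by the CEK before being stored on the EBEM/ECTL
dek_plain = Fernet.generate_key()
dek_wrapped = Fernet(cek_plain).encrypt(dek_plain)

# Using the DEK: unwrap the CEK with the KEK, unwrap the DEK with the CEK, then encrypt data
cek = Fernet(kek.decrypt(cek_wrapped))
dek = Fernet(cek.decrypt(dek_wrapped))
ciphertext = dek.encrypt(b"data at rest")
print(dek.decrypt(ciphertext))  # b'data at rest'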
Page 6-3
VSP 5000 Series Security and Encryption Enhancements
Key Management Options
The key management can be configured in a stand-alone mode (integrated key management),
or key management can be configured to use third-party key management (external key
management). When external key management is leveraged, some or all the following
functionality can be used:
All communications with a KMS are performed using the OASIS Key Management
Interoperability Protocol (KMIP) version 1.0 over a mutually authenticated Transport Layer
Security (TLS) version 1.2 connection. The TLS authentication is performed using X.509 digital
certificates for both the storage system and two cluster members of the KMS.
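The KMIP transport described above is mutually authenticated TLS 1.2 with X.509 certificates on both sides. As a conceptual illustration only (not the storage system’s code), a mutually authenticated TLS 1.2 client using Python’s standard library looks roughly like this; the host name, file names and port are hypothetical (5696 is the commonly used KMIP port):

import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2         # require TLS 1.2 or later
context.load_verify_locations(cafile="kms_ca.pem")       # trust anchor for the KMS certificate
context.load_cert_chain(certfile="array_client.pem",     # client (storage system) certificate...
                        keyfile="array_client.key")      # ...and private key for mutual authentication

with socket.create_connection(("kms.example.com", 5696)) as sock:
    with context.wrap_socket(sock, server_hostname="kms.example.com") as tls:
        print("Negotiated:", tls.version(), tls.cipher())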
Page 6-4
VSP 5000 Series Security and Encryption Enhancements
Support Specifications for Encryption License Key
4. Support Media Type: HDD(SAS), SSD(SAS), SSD(NVMe), SCM | HDD(SAS), SSD(SAS), SSD(NVMe)
Notes: 1. Vendors do not have a plan to provide SED in the SCM area. 2. In the case of SED, some parts of the encryption spec are defined by the vendors, so there is a risk that a compatible drive cannot be developed or supplied due to vendor spec changes.
• DEK: Data Encryption Key. The key for the encryption of the stored data
• CEK: Certificate Encryption Key. The key for the encryption of the certificate and the
key for the encryption of DEK per drive to register DEK on EBEM or ECTL
• KEK: Key Encryption Key. The key for encrypting a key in a storage system with an
attribute other than KEK
Page 6-5
VSP 5000 Series Security and Encryption Enhancements
Encryption Documentation
Encryption Documentation
Sanitization Concepts
2. Cryptographic Erasure:
• Available as part of Data At-rest Encryption
• Data Encryption Keys (DEK) destroyed when encryption is disabled and/or
media is removed
Note: Complete data erasure can be guaranteed only for hard disk drives (HDDs). For flash
drives (SSDs and FMDs), complete data erasure (overwriting all cells including overprovisioned
cells) cannot be guaranteed. For information about data erasure for flash drives (for example,
cryptographic erasure, data eradication services), contact customer support.
Page 6-6
VSP 5000 Series Security and Encryption Enhancements
Shredder Operations
Shredder Operations
1. Verify that the current shredding status for the volume is Normal.
5. Shred volumes.
Enhanced Sanitization
Page 6-7
VSP 5000 Series Security and Encryption Enhancements
Sanitation Documentation
Sanitation Documentation
Audit Logging
Audit logging of encryption events. The audit log feature provides logging of events that occur
in the storage system, including events related to encryption and data encryption keys. When
the KMIP key manager is configured, the interactions between the storage system and the KMIP
key manager are also recorded in the audit log. You can use the audit log to check and
troubleshoot key generation and backup. If you enable and schedule regular encryption key
backups, the regular backup tasks are recorded in the audit log with the regular backup user
name, even if the regular backup user was not logged in when the backup was performed.
Page 6-8
VSP 5000 Series Security and Encryption Enhancements
Additional Security Changes
Page 6-9
VSP 5000 Series Security and Encryption Enhancements
Module Summary
Module Summary
Questions
Page 6-10
7. VSP 5000 Series and Mainframes
Module Objectives
Page 7-1
VSP 5000 Series and Mainframes
Hitachi Vantara Solutions for Mainframe >40 Years Experience and 14 Generations of Solutions
1978 1985 1991 1993 1995 1998 2000 2002 2004 2007 2010 2014 2016 2019
7350 7380 7390 7690 7700 7700E 9900 9980V USP USP V VSP VSP G1000 VSP G/F1500 VSP 5000
Page 7-2
VSP 5000 Series and Mainframes
VSP 5000 Changes
Support for z15 and the new FICON board (16SA), but no compression
currently on these boards
Page 7-3
VSP 5000 Series and Mainframes
Mainframe and VSP 5000 Series
Module Summary
Page 7-4
8. VSP 5000 Series HDP-HDT
Module Objectives
Page 8-1
VSP 5000 Series HDP-HDT
Pools
Pools
In this section you will learn about pools.
Pool Definitions
Pool Configuration
[Figure: pool configuration spanning CTL0–CTL7 and their DKBs, with a single pool across the CBX pairs]
Page 8-2
VSP 5000 Series HDP-HDT
Hitachi Dynamic Tiering Overview
A pool crossing CBX pairs is the best practice, but it may not be achievable for all configurations.
Review the configuration and customer requirements; not crossing CBX pairs may
very well limit performance and processing power for ADR
Not crossing CBX pairs will result in FE cross and backend cross, since ownership is
distributed round robin across the MPs depending on the pool configuration
In most cases pools are recommended to span all CBX pairs to optimise flexibility and
load balancing
You need to understand Flash page placement so that RGs across the CBX pairs are equivalent
when a pool extends across CBX pairs
One exception is mixing SSD and NVMe drives in the same pool. They are the same tier
but have performance differences
HDT Tiers
HDT tiering same as VSP G1x00/ VSP Gx00 – NVMe SSD same as SAS
SSD/FMD Priority Media
HDT treats NVMe and SAS SSD as the same tier, so should they be mixed in
the same pool?
Page 8-3
VSP 5000 Series HDP-HDT
Smart Tier
Smart Tier
Media with short response times are positioned in higher tiers, and media with
long response times are positioned in lower tiers
With HDT, order of tiers is defined based on media type and rotational
speed (rpm)
A lower tier cannot be added to a pool for which ADR is enabled
Note: In an HDT pool with ADR enabled you cannot add a lower tier,
only a higher tier.
Page 8-4
VSP 5000 Series HDP-HDT
LU Ownership
LU Ownership
This section explains about LU ownership.
[Figure: DP pools, each consisting of one CBX pair]
When a pool only spans a node pair then DP/DRD volumes ownership
is only assigned to MPs within that node pair. You can assign or move
them to MPs outside the node pair. In general that would not be a good
idea because it increases backend cross
When a pool spans more than one node pair, DP/DRD volumes
ownership is spread across all node pairs even if there are no pool PGs
behind them. For example, if pool spans 2 node pairs but the system
has 3 node pairs then all three node pairs get volume assigned by
default. The same logic applies to DDS and FPT volumes
Page 8-5
VSP 5000 Series HDP-HDT
LU Ownership Assignment Range in Multi CBX
[Figure: DP-Vols and DRD-Vols with pool LDEVs. These DKUs will get DP-Vols/DRD-Vols, DSD and FPT; a manual operation is required to change ownership]
When the pool only spans a single node pair then DP/DRD volumes ownership is only assigned
to MPs within that node pair. You can manually assign or move them to MPs outside the node
pair. In general that would not be a good idea because it increases backend cross. When a pool
spans more than one node pair, DP/DRD volumes ownership is spread across all node pairs
even if there are no pool PGs behind them. The same logic applies to DDS and FPT volumes.
Page 8-6
VSP 5000 Series HDP-HDT
LU Assignments
[Figure: pools with LDEVs, DP-Vols and DRD-Vols. These DKUs will get DP-Vols/DRD-Vols, DSD and FPT; a manual operation is required to change ownership]
LU Assignments
DP-VOL, DRD-VOL
• Case #1 (pool consists of one CBX pair) – MPUs within the same CBX pair that
provides the pool are assigned in round-robin order
• Case #2 (pool consists of multiple CBX pairs) – MPUs in the whole system are
assigned in round-robin order (see the sketch below)
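A minimal sketch of the two round-robin cases above; the MPU identifiers and the itertools-based rotation are illustrative only:

from itertools import cycle

def assign_owners(volumes, pool_cbx_pairs, system_cbx_pairs, mpus_per_cbx_pair):
    # Case #1: pool in one CBX pair  -> round-robin over that pair's MPUs
    # Case #2: pool spans CBX pairs  -> round-robin over every MPU in the system
    scope = pool_cbx_pairs if len(pool_cbx_pairs) == 1 else system_cbx_pairs
    candidate_mpus = [mpu for cbx in scope for mpu in mpus_per_cbx_pair[cbx]]
    rr = cycle(candidate_mpus)
    return {vol: next(rr) for vol in volumes}

mpus = {"CBX01": ["MPU0", "MPU1"], "CBX23": ["MPU2", "MPU3"], "CBX45": ["MPU4", "MPU5"]}
vols = [f"DP-Vol{i}" for i in range(6)]
# Pool on one CBX pair: owners stay within CBX01
print(assign_owners(vols, ["CBX01"], list(mpus), mpus))
# Pool spanning two CBX pairs: owners rotate across the whole system (all three pairs)
print(assign_owners(vols, ["CBX01", "CBX23"], list(mpus), mpus))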
Page 8-7
VSP 5000 Series HDP-HDT
Active Learning Exercise: Group Discussion
FE Straight: FE I/O port and LUN owning CTL are the same
FE Cross: FE I/O port and LUN owning CTL are different
BE Straight: owning CTL BE I/O goes to a PG in the same CBX pair
BE Cross: owning CTL BE I/O goes to a PG in a different CBX pair
[Figure: front-end and back-end I/O paths across CBX 0–3, illustrating FE Cross with BE Straight, FE Cross with BE Cross, and BE Straight]
With an ASIC-less design, some consideration of cross I/O is appropriate since it has some performance impact
• Various optimizations reduce the occurrence of cross I/O, and HIE DMA offload reduces the overhead of cross I/O when
required
• Testing shows that in practice, under most conditions, the optimizations and offload are effective with little overhead
For mainframe, FE straight is similar to FE cross because even for straight, the data has to be sent to HIE for CKD/FBA
conversion offload optimization. If the I/O is FE cross it gets the conversion while passing through HIE
Page 8-8
VSP 5000 Series HDP-HDT
Back End Optimization and DP Page Placement
Flash – new DP page allocations for a DP-Vol are assigned to PGs in the same CBX pair as
the owning controller to avoid BE cross (a BE straight configuration is the best practice when
the HDP pool is configured across modules)
Page 8-9
VSP 5000 Series HDP-HDT
Back End Cross Optimisation (Flash Data Placement)
For Flash pool, pages are distributed round robin across all PGs in the pool behind the CBX pair that owns the LUN
• This is to optimize for BE straight, which has more benefit than spreading the workload across all PGs in the pool
• If there is not enough capacity on PGs behind the owning controller, some pages will have to be stored on other PGs in
the pool (see the sketch below)
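A hedged sketch of the flash page-placement preference described above (prefer PGs behind the owning CBX pair, spill to other PGs only when the local ones lack free capacity); the data structures and selection rule are invented for illustration:

def place_page(owning_cbx, parity_groups):
    # parity_groups: list of dicts like {"name": "PG-A", "cbx": "CBX01", "free_pages": 10}
    local = [pg for pg in parity_groups if pg["cbx"] == owning_cbx and pg["free_pages"] > 0]
    remote = [pg for pg in parity_groups if pg["cbx"] != owning_cbx and pg["free_pages"] > 0]
    for candidates in (local, remote):      # prefer BE straight, then spill (BE cross)
        if candidates:
            target = max(candidates, key=lambda pg: pg["free_pages"])
            target["free_pages"] -= 1
            return target["name"]
    raise RuntimeError("pool full")

pgs = [{"name": "PG-A", "cbx": "CBX01", "free_pages": 1},
       {"name": "PG-B", "cbx": "CBX23", "free_pages": 5}]
print(place_page("CBX01", pgs))  # PG-A (BE straight)
print(place_page("CBX01", pgs))  # PG-B (local PGs full, spill -> BE cross)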
[Figure: HDD vs Flash page placement across Node0–Node5 controllers and DKBs, showing straight and BE cross allocations]
• For pages in DRD VOLs (ADR enabled DP-Vols) the media dependent (HDD vs Flash)
back-end cross optimisation is used because these volumes only contain data owned by
the same LUN
• For pages in DSD VOLs the media dependent (HDD vs Flash) back end cross
optimization is not used. DSD Vols are spread across all PGs in a pool. This is because
within each DSD Volume page there are 8K blocks referenced by multiple DRD Vols and
those DRD Vols could be owned by any controller, so it is not practical to optimize the
page placement to be in the same module as the owning LUN
Page 8-10
VSP 5000 Series HDP-HDT
Back End Cross Optimisation With HDT vs HDP
Typically HDP does not have a mix of HDD and flash, but it is possible
For HDT it is common to have a mix of flash and HDD (different tiers); the resulting page allocation methods are summarized in the table and sketch below
Pool kind | Flash (SAS, NVMe) | HDD (SAS, NL-SAS) | Method of page allocation
HDP | ✔ | – | As BE straight as possible
HDP | – | ✔ | Distributed across all PGs
HDP | ✔ | ✔ | Distributed across all PGs
HDT | ✔ | – | As BE straight as possible
HDT | – | ✔ | Distributed across all PGs
HDT | ✔ | ✔ | As BE straight as possible
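The table above can be read as a small decision rule. This sketch simply encodes that table and is not product code; it also assumes the flash-only/HDD-only interpretation of the single-checkmark rows:

def page_allocation_method(pool_kind, has_flash, has_hdd):
    # Return the allocation method from the HDP/HDT table above
    if has_flash and not has_hdd:
        return "As BE straight as possible"      # flash-only pools (HDP or HDT)
    if has_flash and has_hdd and pool_kind == "HDT":
        return "As BE straight as possible"      # HDT keeps flash in its own tier
    return "Distributed across all PGs"          # HDD-only pools, or mixed HDP pools

print(page_allocation_method("HDP", has_flash=True, has_hdd=True))   # Distributed across all PGs
print(page_allocation_method("HDT", has_flash=True, has_hdd=True))   # As BE straight as possible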
Page 8-11
VSP 5000 Series HDP-HDT
Back End Cross Optimisation – No Pool Span
[Figure: pool that does not span CBX pairs – owning controllers with their CHBs and DKBs, showing BE straight back-end access]
• If you manually assign DRD vol ownership to another CBX pair, then ownership of all DSD and FPT Vols for that
pool will automatically be distributed equally to all controllers in every CBX pair that owns any DRD Vols from the
pool
• Conversely, moving all DRD-Vol ownership off of a CBX pair will automatically redistribute DSD and FPT ownership
off that CBX pair
• Whether load balancing or BE straight is more beneficial depends on the situation and priority
Page 8-12
VSP 5000 Series HDP-HDT
Back End Cross Optimisation – Pool Span
• If the span is across all CBX pairs then load balancing is most convenient and there may be less BE cross
• If there are 3 CBX pairs and pool PGs span 2 CBX pairs, all 3 CBX pairs will have DP / DRD Vols ownership
assigned and therefore DSD and FPT Vols will also be distributed to all CBX pairs
• You can manually assign / move LUNs if you want to prioritise BE straight over ownership workload distribution.
In that case, DSD and FPT Vols will also redistribute to only the CBX pairs with DRD Vols from the pool
[Figure: pool spanning CBX pairs – ownership distributed across the Node0–Node5 controllers and their DKBs]
Page Placement
Smart allocation with flash. If the local pool (CBX) volumes fill up, page
allocations will move down to the next CBX pair
• Volume ownership will not change but new pages will be in PGs on the other node /
CBX pair. This leads to backend cross, which will impact flash performance. In theory, if
you knew this was happening, you might manually move the ownership and perform a
pool rebalance so that the data in the other node pair is moved and becomes backend
straight
In a flash environment with 4 or 6 nodes, assume that the device owner is node
0/1 and admin moves the owner to node 2/3. At that time, smart allocation will
use pool volumes for those nodes as long as the pool is extended across all
nodes
• If you do a smart rebalance, the previously written data will be moved over to straight
PGs (space permitting)
Page 8-13
VSP 5000 Series HDP-HDT
Pool Rebalance Overview
Pool Rebalance
Pages in DSD and FPT will be rebalanced across all PGs for both Flash
and HDD
For Flash, pages in DP / DRD Vols will not be rebalanced to PGs behind
the other node pair but will be rebalanced across PGs within the node
pair
For HDD, pages in DP / DRD Vols will be rebalanced across all PGs in
the pool
Page 8-14
VSP 5000 Series HDP-HDT
Module Summary
Module Summary
Page 8-15
VSP 5000 Series HDP-HDT
Module Summary
Page 8-16
9. VSP 5000 Series and Replication
Module Objectives
Page 9-1
VSP 5000 Series and Replication
Hitachi Thin Image Enhancements
Garbage can be reused only when snapshot data is stored in the same
snapshot tree and it cannot be used for other purposes
Defragmentation consolidates the areas being used to return the pages that store only
garbage data to unallocated pages. It achieves the following effects:
• The free capacity of a pool is increased
• The released pages can be used for other purposes
Page 9-2
VSP 5000 Series and Replication
When to Perform Defrag
Defrag Operations
Page 9-3
VSP 5000 Series and Replication
Remote Replication
Remote Replication
In this section you will learn about HUR replication enhancements.
Replication Roadmap
HUR support for VMware VVol for Hitachi Virtual Storage Platform 5000
and Panama II – SVOS 9.3 (GA – May)(*2)
Support matrix
• VSP 5x00, G1x00, F1500, Fxx0, Gxx0 TC HUR GAD Replication Intermix
Matrix
Note:
(*1) – HDT and ADR are not supported
(*2) – VP and HUR GA in May. SVOS 9.3 was released recently
Page 9-4
VSP 5000 Series and Replication
Active Learning Exercise: Follow the Manual
Page 9-5
VSP 5000 Series and Replication
GAD Enhancements
GAD Enhancements
GAD migrations
• Only supports previous subsystems that support GAD (G1x00 and Panama
II)
• Quorum less (Migration only)
• Multiple quorum support options – Traditional, iSCSI and AWS cloud
• Customer deployable and fully automated by using HSA/HDID/HAD
• GAD + UR – maintain RPO/RTO if source array has HUR (MC 90-03-01-
00/00)
Page 9-6
VSP 5000 Series and Replication
Module Review
Module Review
Page 9-7
VSP 5000 Series and Replication
Module Review
Page 9-8
10. Hitachi Ops Center Replication
Module Objectives
Page 10-1
Hitachi Ops Center Replication
Ops Center Replication Overview
• TI – Thin Image
• SI – ShadowImage
• TC – TrueCopy
• UR – Universal Replicator
Page 10-2
Hitachi Ops Center Replication
Hitachi Ops Center Administrator Replication
Administrator Replication
Page 10-3
Hitachi Ops Center Replication
Hitachi Ops Center Administrator Local Replication
Click Submit
The Job:
• Creates SVol
• Maps SVol to dummy HG (for
example “HID-DP-00”)
• Creates and suspends SI-Pair
• Creates a local replication
group in Administrator
Page 10-4
Hitachi Ops Center Replication
Administrator Local Replication
SN compared to Administrator
Page 10-5
Hitachi Ops Center Replication
Administrator Local Replication
Job:
• Creates and attaches volume
• Creates replication group
Page 10-6
Hitachi Ops Center Replication
Hitachi Ops Center Administrator Remote Replication
Page 10-7
Hitachi Ops Center Replication
Administrator High Availability Setup
Page 10-8
Hitachi Ops Center Replication
Active Learning Exercise: Writing One-Minute-Paper
Topic: What are the uses and features of local and remote replication?
Page 10-9
Hitachi Ops Center Replication
Administrator High Availability Setup
Page 10-10
Hitachi Ops Center Replication
Administrator Remote Replication
The wizard starts with the basic volume settings, like for local volumes
Page 10-11
Hitachi Ops Center Replication
Administrator Remote Replication
ALUA (asymmetric logical unit access) is an industry standard protocol for identifying optimized
paths between a storage system and a host. ALUA enables the initiator to query the target
about path attributes, such as primary path and secondary path.
Page 10-12
Hitachi Ops Center Replication
Administrator Remote Replication
Configure volume paths, host group and non-preferred path settings (ALUA)
for the secondary storage system
Page 10-13
Hitachi Ops Center Replication
Module Summary
Module Summary
Page 10-14
11. Hitachi Ops Center Automator
Module Objectives
Page 11-1
Hitachi Ops Center Automator
Introducing Automator
Introducing Automator
Intelligent Automation
Best practices-based automation workflows
IT Service Catalog
Application-based services with abstracted
infrastructure requests
Infrastructure Services
Flexibility to create and customize services
Automator provides solutions and benefits for data center management by customizing service
catalog and integrating other management tools.
Page 11-2
Hitachi Ops Center Automator
Automator Features
Automator Features
This section covers Automator features.
[Figure: the IT admin/operator daily operation cycle – automate delivery of any resource that has a plug-in (such as Script); create templates with the REST API and configure role-based services; 3 Integrate – integrate with existing assets, IT service management tools and 3rd party tools via API or CLI using prebuilt plug-ins for greater savings; 4 Optimize – reduce manual processes and free staff to focus on strategy. Covers Hitachi storage with Data Protection and Quality of Service (QoS) policies in an environment for digital business]
Page 11-3
Hitachi Ops Center Automator
Service Catalog
Service Catalog
Easily deploy services with customized settings for each user or user group
• Set and hide common items to eliminate human error and improve efficiency
• Preconfigure selectable values
Page 11-4
Hitachi Ops Center Automator
From HDvM to Configuration Manager REST API Automator Transition
Data Mobility
New*
As Hitachi Command Suite (HCS) and Device Manager (HDvM) become legacy, Automator is
migrating service templates from HDvM to the Configuration Manager REST API. These include
templates for smart provisioning, VMware, Oracle, 2DC and 3DC replication, and other
allocation workflows. These changes removed the dependency of Automator on Device Manager
(HDvM) and HCS.
Automator Configuration
Page 11-5
Hitachi Ops Center Automator
Active Learning Exercise: Group Discussion
Page 11-6
Hitachi Ops Center Automator
Automator Architecture
Automator Architecture
This section provides an overview of the Automator architecture.
[Figure: the Automator abstraction and automation engine in front of midrange Tier 2 class (SAS), midrange Tier 3 class (SATA), and Tier 3 network attached storage]
Page 11-7
Hitachi Ops Center Automator
Key Terms
Key Terms
Service template
• A deployment blueprint for the application-based storage capacity
provisioning process to encapsulate configuration settings, instructions and
tasks
Service
• An instance of a service template configured to work with your needs
Task
• An instance of a service that can be scheduled to run immediately or based
on a schedule and is created when a service is submitted
• Service template
• Service
• Task
Page 11-8
Hitachi Ops Center Automator
Service and Service Template
[Figure: a service template instantiated as service instances, each generating tasks (for example, Service Instance 2 generates Tasks 2-1, 2-2 and 2-3)]
Key Terms
Infrastructure group
• Organize storage resources and associate them with services and grant
access to users
Service group
• A collection of services associated with a user group
User group
• A set of users with a defined level of access to the services in the service
group to which they are associated
• Infrastructure group
o Organize storage resources and enable you to associate them with services and
grant access to users. Resource groups that contain pools for storage are
assigned to infrastructure groups
Page 11-9
Hitachi Ops Center Automator
Grouping Infrastructure and Access Control
Service group
• User group
o A user group is a set of users with a defined level of access. User groups are
associated with service groups to enable users to access the services in the
service group
Page 11-10
Hitachi Ops Center Automator
Automator Use Cases
Smart Provisioning
Issue: Before allocation, a storage admin must select an appropriate volume from many
pools in the storage system while considering performance
[Figure: the user requests more capacity, and the storage admin must first select a volume for allocation from pools (BR=15%, BR=30%)]
Page 11-11
Hitachi Ops Center Automator
ServiceNow Integration
ServiceNow Integration
Issue: The storage admin is responsible for checking tickets issued by the helpdesk, and for solving
the problems on tickets with many other IT staff members
[Figure: IT staff members implement solutions while the storage admin checks many tickets (alerts, capacity)]
Solution: Instead of IT staff members, ServiceNow executes solution programs that are on Automator, with few settings
[Figure: the data center automates settings as a service catalog with Automator – fewer settings for the helpdesk and users, and more speed (tickets, alerts, capacity)]
Solution: Automate and integrate with 3rd party cloud management platforms using the
Automator REST API
Page 11-12
Hitachi Ops Center Automator
Cloud Environment Management
The VM admin tries to explain the trouble while the storage admin tries to understand it
Solution: Analyzer dramatically improves communication between both admins. Also, the storage admin
can detect and solve the problem quickly using Automator integrated with Analyzer
[Figure: in the data center, the VM admin and the storage admin both use Analyzer (HIAA) to monitor the storage system; both notice the capacity shortage (“I saw that too” / “I’m already working on it”), and Automator executes an “allocate like volume” action]
Online Migration
Issue: Users need to configure a lot of settings for data migration. It is complicated and has a
risk of operational mistakes
Solution: Automator provides two templates for data migration which include the SAN zoning
settings
Page 11-13
Hitachi Ops Center Automator
Active Learning Exercise: Brainstorming
GUI Overview
This section shows how to navigate the Automator GUI.
After logging in to Ops Center, select Automator on the Launcher tab
Page 11-14
Hitachi Ops Center Automator
Login Directly to Automator
http://automation-software-server-address:port-number/Automation/login.htm
Where:
• port-number is the port number of the Ops Center Automator server. The default port
number is 22016.
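For example, with a hypothetical server name and the default port noted above, the direct login URL would look like:
http://ops-automator.example.com:22016/Automation/login.htm
(your server name, protocol and port may differ in your environment)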
Page 11-15
Hitachi Ops Center Automator
GUI Components
GUI Components
[Figure: Automator GUI layout showing the search box, global tabs area, tools, application pane, navigation pane and global monitoring bar]
Page 11-16
Hitachi Ops Center Automator
Instructor Demonstration
Global tabs
The Dashboard and Tasks tabs are always visible, regardless of which window is active. Access
to Services, Service Templates, and Administration tabs depends on the user role assigned. The
tabs provide access to services, tasks, and administrative functions.
• Navigation pane – This pane varies with the active tab. From the navigation pane, you
can access resources and frequently used tasks
• Application pane – This pane varies with the active tab. The application pane shows
summary information, resource objects, and details about the current task
• Global monitoring bar – This bar is always visible, regardless of which window is active.
It provides links to information about submitted tasks
• Search – This box is available on the Service, Tasks, and Service Templates tab and
provides keyword and criteria-based search functions
Instructor Demonstration
Page 11-17
Hitachi Ops Center Automator
Services Management – Request (Run) a Service
Request a Service
Submitting a service
• Runs the service by creating the tasks required to perform the service
• Select the service in Release status and click Create Request
Submit a Service
In the Submit Service Request window, in the Settings pane, configure the volume, host,
and task settings as required by the service.
• You can click Submit to submit the service immediately
• You can click Submit and View Task to submit the service and go to the Tasks tab
Page 11-18
Hitachi Ops Center Automator
Review a Service
Review a Service
Select the Tasks tab to review the status of the tasks related to the
service
You can verify the tasks that are associated with the submitted service, listed on the Tasks tab.
Manage Tasks
Tasks
• Perform the function of the
service
• Generated automatically when a
service is submitted
• Monitored from the Dashboard
tab, Global Monitoring Area, or
Tasks tab
Tasks are generated automatically when a service is submitted. The tasks in Automator
correspond with the tasks that perform functions in Hitachi Command Suite without having to
manually enter the task each time. You can monitor the progress of a task as it executes its
function through completion.
The Tasks tab includes Tasks, History and Debug tabs:
• Tasks: Display the tasks associated with released services on the Tasks tab
• History: Include tasks that have been archived from the Tasks tab
• Debug: Display tasks that are generated from a service in debug, test, or maintenance
status. Available to users with modify (or higher) role
Page 11-19
Hitachi Ops Center Automator
Services Management – Create a Service
Service Creation
Create a service
1. Log in with a Service Admin, System Admin, or Developer role.
2. On the Services tab, in the Services pane, click Create to open the Select Service
Template window.
Page 11-20
Hitachi Ops Center Automator
Create a Service
Create a Service
3. In Service template view, select a template to open the service template preview.
Page 11-21
Hitachi Ops Center Automator
Create a Service
In the Settings pane of the Create Service window, enter the following information, which is
summarized in the General Settings area of the Navigation pane:
• Name of the service.
• Description of the service.
• State: Select Test for new services to allow only users in the Admin, Develop, or Modify
role to submit the service.
• Tags: Specify one or more tags for the service (to a maximum of 256 characters). The
tags you select for the service also apply to the tasks generated by the service.
• Service Group: Select the service group of users who can access the service.
• Service Template: The template on which service is based. Click the template name to
open the Template Preview, which includes detailed information about the template.
In the Template Preview, you can click View Flow to open the flow window for the
template.
• Expand Advanced Options and select the options you want:
o Scheduling Options:
Immediate: Run the service when it is submitted
Scheduled: Run the service once
Recurrence: Run the service multiple times
Page 11-22
Hitachi Ops Center Automator
Create a Service
o Display Flow Detail for Submit User: Select to show the details of the service to
the service user
• In the Navigation pane, click each settings group and configure the required and
optional parameters. You can also navigate through the settings groups using the links
at the bottom of the Settings pane. You can choose to retain default settings from the
service or template you started with. For Volume Settings, you can choose whether to
allow users to change certain settings or to hide them altogether.
1. Click Preview to open a view of the service as it would appear to users. Then click
Save and Close to save the service.
Page 11-23
Hitachi Ops Center Automator
Instructor Demonstration
Instructor Demonstration
Service Builder
Page 11-24
Hitachi Ops Center Automator
Automator Video – Create Service Template
Service templates are based on plug-ins that serve as the building blocks for
running scripts
• Modify service templates to fit into each
customer's data center operations or
environment
• Create new plug-ins (which can be used as
steps in service templates) using their own
existing automation scenarios (implemented
in scripts)
• The Service Templates and plug-ins can be linked together as a sequence of steps
that dictate the flow of operations
Automator provides some canned services as built-in services, which are based on our best
practices, and provides general orchestration plug-ins such as: call a REST API, invoke a CLI, transfer
a file to a remote server, send an email notification, and so on.
Users can also create or customize their own service templates to fit their environment,
operation policy and workflow by utilizing existing homegrown scripts.
Page 11-25
Hitachi Ops Center Automator
Module Summary
Module Summary
Appendix
Let’s learn more now.
Smart Provisioning
Automator selects provisioning resources based on built-in best practices and
user specified policies for optimized resource usage across the data center
[Figure: smart provisioning combines user policies and requirements with built-in best practices, covering performance, availability and optimization across the SAN, main site and DR site]
Page 11-26
Hitachi Ops Center Automator
Smart Provisioning Overview
Current utilization
Current performance
Page 11-27
Hitachi Ops Center Automator
Smart Provisioning (Allocate Volumes)
Page 11-28
12. Migration Capabilities
Module Objectives
Page 12-1
Migration Capabilities
Migration Capabilities NDM – UVM / GAD
Migration Capabilities
Page 12-2
Migration Capabilities
UVM NDM Migrations
TC/HUR – supported arrays VSP G1x00, VSP G350/ VSP 370/ VSP
700/ VSP 900, VSP/HUS VM
Page 12-3
Migration Capabilities
GAD NDM Migrations
GAD NDM – VSP G1x00, VSP G370/ VSP G700/VSP G900/ VSP F370/
VSP F700/VSP F900/
• Scripts are developed (Similar to UVM NDM process)
• ADR is supported
• Traditional quorum, internal loopback and quorum less
• No capability to maintain RPO/RTO for TrueCopy; only HUR is supported (*1)
(*1) – The GAD scripts don’t have the functionality to configure HUR replication on migration
target arrays
Page 12-4
Migration Capabilities
GAD NDM Migrations
Page 12-5
Migration Capabilities
Module Summary
Module Summary
Page 12-6
13. VSP 5000 Series SOM Changes
Module Objectives
Page 13-1
VSP 5000 Series SOM Changes
SOM Changes
SOM Changes
https://teamsites.sharepoint.hds.com/sites/ECCP/Public/SitePages/Home.aspx
Page 13-2
VSP 5000 Series SOM Changes
New SOMs
New SOMs
Page 13-3
VSP 5000 Series SOM Changes
New SOM 1168
Page 13-4
VSP 5000 Series SOM Changes
SOM 868 Meaning Changed SOMs
• OFF:
If RAID type of a local (internal) VOL is RAID1, the RMF report displays “RAID-5”
If the VOL is an external VOL, the RMF report displays “RAID-5” as well
Page 13-5
VSP 5000 Series SOM Changes
SOM 1115 Meaning Changed
When this mode is set to ON, data is initialized without using metadata at LDEV format for a virtual
volume with capacity saving enabled
We will change the default setting from OFF at RAID800 to ON at RAID900 because formatting
without using the metadata is faster than formatting that uses the metadata on R900
R800: Only the “Comp only” VOL formats the data without using the metadata
R900: Both the “Comp only” and “Comp/Dedup” VOLs format the data without the metadata
When this mode is set to ON, data is initialized without using metadata at LDEV format for a virtual volume with Capacity Saving
enabled.
Note:
1. The mode is applied to recover a blocked pool volume in a pool to which a virtual
volume whose capacity saving setting is Compression belongs. For the setting timing,
refer to the procedure for blocked pool volume recovery in the Maintenance Manual.
2. The processing time increases as the pool capacity increases. (*1)
3. Do not change the mode setting during LDEV format for a virtual volume whose
capacity saving setting is Compression. If the setting is changed, the processing cannot
be performed correctly and may end abnormally depending on the timing.
4. The mode is effective only for LDEV format for a virtual volume whose capacity
saving setting is Compression, so there is no side effect on user data, but
the processing may take more time than when the mode is set to OFF, depending
on the pool capacity. Therefore, do not use the mode for cases other than pool
volume blockage recovery.
• *1: Estimate of processing time (computed in the sketch below)
• Processing time (minutes) = (pool capacity (TB) / 40) + 5
• If the result of dividing the pool capacity by 40 has decimal places, round it up to an
integer.
• The processing finishes early if the capacity of allocated pages is small.
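The formula and rounding rule come from (*1) above; the function name and this minimal sketch are illustrative only:

import math

def som1115_format_time_minutes(pool_capacity_tb):
    # Processing time (minutes) = ceil(pool capacity (TB) / 40) + 5
    return math.ceil(pool_capacity_tb / 40) + 5

print(som1115_format_time_minutes(100))  # ceil(2.5) + 5 = 8 minutes
print(som1115_format_time_minutes(40))   # 1 + 5 = 6 minutes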
Page 13-6
VSP 5000 Series SOM Changes
SOMs Removed
SOMs Removed
Removed SOMs
No SOM Function
1 SOM218 <The target function is not available.>
DB Validation Enabler SIM option for limited purpose
2 SOM219 <The target function is not available.>
DB Validation Enabler SIM option for limited purpose
3 SOM292 <The target phenomenon doesn’t occur.>
Issuing OLS when Switching Port:
In case the mainframe host (FICON) is connected with the CNT-made FC switch (FC9000
etc.), and is using along with the TrueCopy S/390 with Open Fibre connection, the
occurrence of Link Incident Report for the mainframe host from the FC switch will be
deterred when switching the CHT port attribute (including automatic switching when
executing CESTPATH and CDELPATH in case of Mode114=ON).
Mode292=ON: When switching the port attribute, issue the OLS (100ms) first, and then
reset the Chip.
Mode292=OFF (default): When switching the port attribute, reset the Chip without issuing
the OLS.
Page 13-7
VSP 5000 Series SOM Changes
Removed SOMs
No SOM Function
4 SOM448 <This function is available by another UI.>
Mode 448 = ON:
After a physical path failure (such as path disconnection) is detected, a mirror is split (suspended) one
minute after the detection. On MCU side, the mirror is suspended one minute after read journal commands
from RCU stop. On RCU side, the mirror is suspended one minute after read journal commands fail.
Mode 448 = OFF (default):
After a physical path failure (such as path disconnection) is detected, a mirror is split (suspended) if the path
is not restored within the path monitoring time set by the mirror option.
5 SOM449 <This function is available by another UI.>
This mode is used to enable and disable detection of communication failures between MCU and RCU.
The default setting is ON.
Mode 449 = ON
When a physical path failure is detected, the pair is not suspended. On MCU side, checking read journal
command disruption from RCU is disabled, and monitoring read journal command failures is disabled on
RCU side.
Mode 449 = OFF
When a physical path failure is detected, the pair is suspended after the path monitoring time set by the
mirror option has passed or after a minute. Detecting communication failures between MCU and RCU is
enabled. When the mode is set to OFF, the SOM448 setting is enabled.
Page 13-8
VSP 5000 Series SOM Changes
Removed SOMs
No SOM Function
10 SOM696 <The target function is not available.>
This mode is available to enable or disable the QoS function.
Mode 696 = ON:
QoS is enabled. (In accordance with the Share value set to SM, I/Os are scheduled. The Share value setting
from RMLIB is accepted)
Mode 696 = OFF (default):
QoS is disabled. (The Share value set to SM is cleared. I/O scheduling is stopped. The Share value setting
from host is rejected)
11 SOM791 <This function is available by another UI.>
This mode enables multiple JOBs of ShadowImage Resync--Normal Copy to be executed.
Mode 791 = ON:
Up to 24 JOBs of Resync--Normal Copy are executed at a time.
Depending on ShadowImage option setting, the maximum number of JOBs for a pair varies. For details, see
the "SOM791" sheet.
Mode 791 = OFF (default):
<R600, R700, HM700, R800 earlier than 80-05-41, HM800 earlier than 83-04-41>
A resync copy job is performed for one pair (default).
<R800 80-05-41 and later, HM800 83-04-41 and later>
A resync copy job is performed for one pair (default), but if local replica option #26 is set to ON, resync copy
jobs can be performed with the same multiplicity as those when SOM791 is set to ON. (see the "SOM791"
sheet)
Page 13-9
VSP 5000 Series SOM Changes
Removed SOMs
No SOM Function
12 SOM857 <The target phenomenon doesn’t occur.>
This option enables or disables to limit the cache allocation capacity per MPB (RAID700/RAID800) or MPU
(HM700/HM800) to within the prescribed capacity (*) except for Cache Residency.
*: 128GB (RAID700/HM700/HM800 H model), 256GB (RAID800), 64GB (HM800 M model), 16GB (HM800 S
model).
Mode 857 = ON:
The cache allocation capacity is limited to within the prescribed capacity.
Mode 857 = OFF (default):
The cache allocation capacity is not limited to within the prescribed capacity.
13 SOM897 <It is difficult to apply this function to RAID900.>
By the combination of SOM897 and 898 setting, the expansion width of Tier Range upper I/O value (IOPH) can
be changed as follows.
Mode 897 = ON:
SOM898 is OFF: 110%+0IO
SOM898 is ON: 110%+2IO
Mode 897 = OFF (Default)
SOM898 is OFF: 110%+5IO (Default)
SOM898 is ON: 110%+1IO
By setting the SOMs to ON to lower the upper limit for each tier, the gray zone between other tiers becomes
narrow and the frequency of page allocation increases.
14 SOM898 <It is difficult to apply this function to RAID900.>
By the combination of SOM898 and 897 setting, the expansion width of Tier Range upper I/O value (IOPH) can
be changed as follows.
Mode 898 = ON:
SOM897 is OFF: 110%+1IO
SOM897 is ON: 110%+2IO
Mode 898 = OFF (Default):
SOM897 is OFF: 110%+5IO (Default)
SOM897 is ON: 110%+0IO
By setting the SOMs to ON to lower the upper limit for each tier, the gray zone between other tiers becomes
narrow and the frequency of page allocation increases.
15 SOM1015 <This function is available by another UI.>
When a delta resync is performed in TC-UR delta configuration, this mode is used to change the pair status to
PAIR directly and to complete the delta resync.
Mode 1015 = ON:
The pair status changes to COPY and then PAIR when a delta resync is performed in TC-UR delta configuration.
Mode 1015 = OFF (default):
The pair status changes directly to PAIR (not via COPY) in TC-UR delta configuration.
Page 13-10
VSP 5000 Series SOM Changes
Removed SOMs
No SOM Function
16 SOM1046 <The target function is not available.>
To enable connection of Brocade 8G FCSW in mode=3.
Mode 1046 = ON:
Connection of Brocade 8G FCSW with firmware Ver6.X.X series is enabled.
Mode 1046 = OFF (default):
Connection of Brocade 8G FCSW with firmware Ver7.X.X series is enabled.
17 SOM1047 <The target phenomenon doesn’t occur.>
This mode can switch to support or not to support zHPF enhanced functions.
Mode 1047 = ON:
The storage system returns “not supported” for each enhanced function for Read Feature Codes from
channels, but can accept zHPF enhanced I/Os even the mode is ON.
Mode 1047 = OFF
The storage system returns “Support” for each enhanced function.
18 SOM1050 <This function is available by another UI.>
This mode enables creation of pairs using user capacity in excess of 1.8 PB per system by managing
differential BMP in hierarchical memory for pair volumes whose capacity is 4 TB (OPEN) or 262,668 Cyl
(Mainframe) or less.
Mode 1050 = ON:
For pair volumes of 4 TB (OPEN)/262,668 Cyl (Mainframe) or less, differential BMP is managed in
hierarchical memory that performs caching to CM/PM using HDD as a master and enables creation of pairs
using user capacity in excess of 1.8 PB per system.
Mode 1050= OFF (default):
For pair volumes of 4TB (OPEN)/262,668 Cyl (Mainframe) or less, differential BMP is managed in SM as
usual so that the user capacity to create pairs is limited to 1.8 PB per system. Also, differential BMP
management can be switched from the hierarchical memory to SM by performing a resync operation for pairs
whose volume capacity is 4 TB (OPEN)/ 262,668 Cyl (Mainframe) or less.
Page 13-11
VSP 5000 Series SOM Changes
Removed SOMs
No SOM Function
19 SOM1058 <This function is available by another UI.>
This mode can change differential BMP management from SM to hierarchical memory so that the number of pairs to be created on
a system and user capacity used for pairs increase.
- For Mainframe systems, all pairs can be managed in hierarchical memory so that pairs can be created by all LDEVs.
- For OPEN systems, pairs that can only be managed in SM use SM so that the number of pairs that can be created using non-
DP VOLs increases.
Mode 1058 = ON:
<SOM1050 is set to ON>
- By resynchronizing Mainframe VOLs of 262,668 Cyl or less, the differential BMP management is switched from SM to hierarchical
memory. (Hierarchical memory management remains as is.)
- By resynchronizing Open VOLs (DP-Vols only) of 4 TB or less, the differential BMP management is switched from SM to
hierarchical memory. (Hierarchical memory management remains as is.)
<SOM1050 is set to OFF>
- By resynchronizing Mainframe VOLs of 262,668 Cyl or less, the differential BMP management is switched from hierarchical
memory to SM. (SM management remains as is.)
- By resynchronizing Open VOLs (DP-Vols only) of 4 TB or less, the differential BMP management is switched from hierarchical
memory to SM. (SM management remains as is.)
Mode 1058 = OFF (default):
<SOM1050 is set to ON>
- The differential BMP management does not change by resynchronizing pairs.
<SOM1050 is set to OFF>
- By resynchronizing Mainframe VOLs of 262,668 Cyl or less, the differential BMP management is switched from hierarchical
memory to SM. (SM management remains as is.)
- By resynchronizing Open VOLs (DP-Vols only) of 4 TB or less, the differential BMP management is switched from hierarchical
memory to SM. (SM management remains as is.)
20 SOM1081 <This function is provided by default.>
The value of Initiation Delay Time on PRLO (Process Logout) frame is changed.
Mode 1081 = ON:
"Initiation Delay Time" on PRLO frame sent from RAID800 is 1 sec.
Mode 1081 = OFF (default):
"Initiation Delay Time" on PRLO frame sent from RAID800 is 4 sec.
21 SOM1093 <The target function is not available.>
This mode is used to disable background unmap during microcode downgrade from a version that supports
pool reduction rate correction to a version that does not support the function.
Mode 1093 = ON:
Background unmap cannot work.
Mode 1093 = OFF (default):
Background unmap can work.
Page 13-12
VSP 5000 Series SOM Changes
Removed SOMs
No SOM Function
22 SOM1108 <The target function is not available.>
This mode is used to extend the processing time for updating metadata managed by the data deduplication and
compression (capacity saving) function.
Mode 1108 = ON:
The internal processing time used to update metadata managed by the data deduplication and compression
function is extended.
Mode 1108 = OFF (default):
The internal processing time used to update metadata managed by the data deduplication and compression
function does not change.
23 SOM1119 <The target phenomenon doesn’t occur.>
The mode is used to disuse the control information added with 80-05-05-00/00 (R800)/ 83-04-03-x0/00 (HM800)
when capacity saving is enabled, so that downgrading the microcode as follows is enabled.
Mode 1119 = ON:
The control information is not used when capacity saving is enabled.
Mode 1119 = OFF (default):
The control information is used when capacity saving is enabled.
24 SOM1120 <The target phenomenon doesn’t occur.>
This system option mode disables TI pair creation with DP pool specified and releases cache management devices
to enable the microcode downgrade with the following versions.
Mode 1120 = ON:
TI pair creation with DP pool specified is disabled. Also, if any cache management devices are reserved while there
is no TI pool on the storage system, all of them are released.
Mode 1120 = OFF (default):
No action
25 SOM1122 <The target function is not available.>
This mode can change the operating speed of BGU.
Mode 1122 = ON:
The BGU speed becomes up to 10GB/s.
Mode 1122 = OFF (default):
The BGU speed is up to 42MB/s.
Page 13-13
VSP 5000 Series SOM Changes
Advanced System Settings
No SOM function
Page 13-14
VSP 5000 Series SOM Changes
SOMs Converted to Advanced System Settings
SOM Changes
Page 13-15
VSP 5000 Series SOM Changes
Active Learning Exercise: One Minute Paper
Module Review
Page 13-16
14. Best Practices and Information Sources
Module Objectives
Page 14-1
Best Practices and Information Sources
Best Practices ADR
ADR Notes
The use cases for the capacity savings option (dedupe and compression) are
office data, virtual desktop infrastructure (VDI) and backup. Deduplication
is effective due to the many identical file copies, OS area
cloning and backups
Page 14-2
Best Practices and Information Sources
Best Practices Pool Recommendations
Pool Recommendations
HDP maximum pool capacity is 16.6PB for Open and 15PB for MF
Pools should expand across all CBX pairs in a multi-CBX-pair configuration. There are
exceptions, but crossing CBX pairs provides the best performance and reduces BE cross
[Diagram: DP-VOLs distributed across controllers CTL0-CTL7 and their DKBs in all CBX pairs (CBX0-CBX5)]
Page 14-3
Best Practices and Information Sources
Best Practices Parity Group and Spare Drive Recommendations
[Diagram: example parity group layouts (PG #0: 14D+2P, PG #1: (7D+1P) x 2, PG #2: 6D+2P, PG #3: 7D+1P, PG #4: 2D+2D, PG #5: 3D+1P) protected from single logical tray failure]
Fixed PG assignment method (same policy as R800's spec); see the sketch after this list:
• PG with 16 drives:
- Taking 2 drives from 8 sequential DB#s (Ex: DB#0~#7)
- Starting at an even slot# (Ex: Slot#0 and 1, #2 and 3, ...)
• PG with 8 drives:
- Taking 1 drive from 8 sequential DB#s (Ex: DB#0~#7, DB#8~#15)
• PG with 4 drives:
- Taking 1 drive from 4 sequential even-numbered DB#s or odd-numbered DB#s (Ex: DB#0/2/4/6,
DB#1/3/5/7)
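The assignment rules above can be visualized with a short Python sketch. The function, its parameters, and the slot-numbering assumptions are illustrative only; they are not part of any Hitachi tool.

# Illustrative enumeration of drive locations under the fixed PG assignment
# policy summarized above (DB = drive box; numbering assumptions are for the
# example only).

def pg_drive_locations(pg_size: int, first_db: int = 0, first_slot: int = 0):
    """Return (DB#, Slot#) pairs for one parity group under the fixed policy."""
    if pg_size == 16:
        # 2 drives from each of 8 sequential DBs, starting at an even slot#
        return [(first_db + db, first_slot + s) for db in range(8) for s in (0, 1)]
    if pg_size == 8:
        # 1 drive from each of 8 sequential DBs
        return [(first_db + db, first_slot) for db in range(8)]
    if pg_size == 4:
        # 1 drive from each of 4 sequential even-numbered (or odd-numbered) DBs
        return [(first_db + 2 * db, first_slot) for db in range(4)]
    raise ValueError("this sketch only covers PGs of 4, 8 or 16 drives")

print(pg_drive_locations(16))              # DB#0-7, slots 0 and 1 in each DB
print(pg_drive_locations(4, first_db=1))   # odd-numbered DBs 1/3/5/7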
Recommended spare drive quantity by drive type:
• SAS (10k): 1 spare drive for every 32 drives
• NL-SAS (7.2k): 1 spare drive for every 16 drives
• SSD: 1 spare drive for every 32 drives
• FMD: 1 spare drive for every 24 drives
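A small helper shows how the ratios in the table translate into a spare count for a given installed drive quantity; the function name and the rounding-up behavior are illustrative assumptions, not a sizing tool.

import math

# Spare-drive ratios taken from the recommendation table above
# (1 spare drive for every N drives of the given type).
SPARE_RATIO = {
    "SAS (10k)": 32,
    "NL-SAS (7.2k)": 16,
    "SSD": 32,
    "FMD": 24,
}

def recommended_spares(drive_type: str, drive_count: int) -> int:
    """Round up to the next whole spare drive for the installed drive count."""
    return math.ceil(drive_count / SPARE_RATIO[drive_type])

print(recommended_spares("NL-SAS (7.2k)", 96))   # 6 spares
print(recommended_spares("SSD", 100))            # 4 spares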
Page 14-4
Best Practices and Information Sources
Best Practices Replication
Local Replication
When both the PVol and SVol of a ShadowImage pair reside in the same pool and ADR is set for
both volumes, only one copy of the data is physically stored because deduplication is
performed between the PVol and SVol. To protect the data, it is recommended to use separate
pools for the PVol and SVol.
SI/HTI PVol and SVol should be assigned to the same microprocessor unit (MPU) prior to
paircreate. If they are not on the same MPU, the array will shift the SVol to the same MPU as
the PVol.
ADR devices cannot be placed in a dedicated Hitachi Thin Image (HTI) pool.
Remote Replication
Page 14-5
Best Practices and Information Sources
Best Practices Encryption Recommendations
Encryption Recommended
Encryption hardware:
• Enabling and disabling DARE is controlled at the parity group level (that is, all
drives in a parity group are either encrypting or non-encrypting)
• While it is possible to have both encrypting and non-encrypting parity groups
configured on an EBEM, it is recommended to encrypt all parity groups on an
EBEM
• It is important to note that different spare drives are used for encrypting and
non-encrypting parity groups
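As a minimal illustration of the first point, the check below flags a parity group that mixes encrypting and non-encrypting drives. The data model is hypothetical; it only encodes the stated constraint that DARE is enabled or disabled for a whole parity group.

# Illustrative check of the parity-group-level DARE constraint described above:
# a parity group must be either all encrypting or all non-encrypting.

def parity_group_is_consistent(drives: list[bool]) -> bool:
    """drives: True = encrypting drive, False = non-encrypting drive."""
    return all(drives) or not any(drives)

print(parity_group_is_consistent([True, True, True]))    # True  (all encrypting)
print(parity_group_is_consistent([False, False]))        # True  (all non-encrypting)
print(parity_group_is_consistent([True, False, True]))   # False (mixed: not allowed)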
Page 14-6
Best Practices and Information Sources
Active Learning Exercise: Group Discussion
Information Sources
This section highlights the information sources.
https://support.hitachivantara.com
• Documentation
• Hitachi Data Instance Director (HDID) support matrix
https://community.hitachivantara.com
• Products > Storage Management Community
• Developers > OpsCenter Automator community
• Developers > HDID community
Internal: https://hitachivantara.sharepoint.com/sites/OpsCenterInfo
Page 14-7
Best Practices and Information Sources
Hitachi Ops Center Information Sources
Page 14-8
Best Practices and Information Sources
Hitachi Ops Center
http://cumulus-systems.com/hdcalicense
• User Name : hdcalic
• Password : hdcalic123
• A confirmation email is sent after submitting the form. The license keys are
sent separately; that may take 1-2 days
Page 14-9
Best Practices and Information Sources
Module Summary
Module Summary
Page 14-10
Best Practices and Information Sources
Your Next Steps
Validate your knowledge and skills with certification.
Follow us on social media.
Get practical advice and insight with Hitachi Vantara white papers.
Join the conversation with your peers in the Hitachi Vantara Community.
• Certification: https://www.hitachivantara.com/en-us/services/training-
certification.html#certification
• Learning Paths:
o Employees: https://connect.hitachivantara.com/en_us/user/employee-
center/my-learning-and-development/global-learning-catalogs.html
o Partners: https://partner.hitachivantara.com/
o Customers: https://www.hitachivantara.com/en-us/pdf/training/global-learning-
catalog-customer.pdf
• Hitachi University / Hitachi Vantara Learning Center
o Employees: Hitachi University -
https://hitachi.csod.com/client/hitachi/default.aspx
o Partners / Customers: Hitachi Vantara Learning Center -
https://hitachi.csod.com/client/hitachi/default.aspx
• Hitachi White Papers:
https://www.hitachivantara.com/search?filter=0&q=white%20papers&site=hitachi_insig
ht&client=hitachi_insight&proxystylesheet=hitachi_insight&getfields=content-type
• Hitachi Support Connect: https://support.hitachivantara.com
• Hitachi Vantara Community: https://community.hitachivantara.com/s/
• Hitachi Vantara Twitter: http://www.twitter.com/HitachiVantara
Page 14-11
Best Practices and Information Sources
We Value Your Feedback
Page 14-12
Communicating in a Virtual Classroom:
Tools and Features
Virtual Classroom Basics
This section covers the basic functions available when communicating in a virtual classroom.
Chat
Q&A
Feedback Options
• Raise Hand
• Yes/No
• Emoticons
Markup Tools
• Drawing Tools
• Text Tool
© Hitachi Vantara Corporation 2020. All Rights Reserved.
Page V-1
Communicating in a Virtual Classroom: Tools and Features
Reminders: Intercall Call-Back Teleconference
Page V-2
Communicating in a Virtual Classroom: Tools and Features
Feedback Features — Try Them
Page V-3
Communicating in a Virtual Classroom: Tools and Features
Intercall (WebEx) Technical Support
Call 800.374.1852
Page V-4
Evaluating This Course
Please use the online evaluation system to help improve our
courses.
https://hitachiuniversity/Web/Main
Page E-1
Evaluating This Course
3. On the Transcript page, click the down arrow in the Active menu.
4. In the Active menu, select Completed. Your completed courses will display.
6. Click the down arrow in the View Certificate drop down menu.
Page E-2